Dan Brahmy is the CEO of Cyabra, a Tel Aviv-based startup that has developed a SaaS platform designed to measure the authenticity and impact of online conversations, detect disinformation, and flag deepfake content.

In this outstanding interview, Dan Brahmy discusses fake news, deepfakes, the threats that they pose to organizations, disinformation and what the future holds. These forward-looking perspectives can help you engage with new ideas and take your security strategy to the next level.

What kinds of threats do deepfakes pose to organizations?

Deepfakes and Generative Adversarial Networks (GANs) are among the newest and most accessible threats to businesses and public organizations, because they allow users to generate single frames and videos of people without any authorization, with potentially immense repercussions. Such threats can be incredibly dangerous to the brand image and reputation of executives within organizations, as they can lead to fraud or impersonation.

What technologies exist to detect deepfakes in real-time?

Most of the existing solutions are open-source or academia-based, because the technology is advancing rapidly on both the creation and detection sides of the equation. This means that an end-to-end solution has yet to be fully engineered; a worldwide effort is needed, one that involves technology startups, regulators, and eventually the platforms themselves (i.e., social media platforms).

Why are new technologies needed?

Deepfakes, GANs, and the broader realm of truth-tech issues call for a filtering mechanism that enables large organizations to distinguish between the real, the bad, and the fake, so they can pursue counter-measures and focus their resources on the right conversations, visual content, and authors involved in sensitive topics.

Without such technological advancements, these organizations operate behind a curtain of uncertainty and doubt in their online interactions, and nefarious individuals hide behind that same curtain.

Tell us about the Cyabra story

Cyabra was founded by veterans of the Israeli SOCOM (81) with the sole purpose of acting as a filtering mechanism, allowing large organizations to receive the most genuine layer of insights and, eventually, to diminish the damage created by disinformation and deepfakes.

Tell us about the technology behind the product

Cyabra applies deep learning algorithms to measure the authenticity of the authors involved, based on behavioral patterns. These scores, in turn, allow end users to measure the impact and “realness” of specific topics online, whether they include images or videos (deepfakes or GAN-generated content).
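
To make the idea of behavior-based authenticity scoring concrete, here is a minimal, purely illustrative sketch in Python. The feature names, thresholds, and weights are assumptions chosen for clarity only; they are not Cyabra's model, which applies trained deep learning over far richer signals.

```python
# Toy authenticity score built from hypothetical behavioral features.
# All features and weights are illustrative assumptions, not Cyabra's method.
from dataclasses import dataclass

@dataclass
class AuthorProfile:
    account_age_days: int           # very new accounts are slightly more suspicious
    posts_per_day: float            # extreme posting cadence suggests automation
    follower_following_ratio: float # follows many, followed by few -> suspicious
    duplicate_content_ratio: float  # share of posts that are near-duplicates

def authenticity_score(p: AuthorProfile) -> float:
    """Return a 0..1 score where higher means more likely a genuine author."""
    score = 1.0
    if p.account_age_days < 30:
        score -= 0.25
    if p.posts_per_day > 50:               # inhuman posting velocity
        score -= 0.35
    if p.follower_following_ratio < 0.05:
        score -= 0.15
    score -= 0.4 * p.duplicate_content_ratio  # copy-paste amplification
    return max(0.0, min(1.0, score))

if __name__ == "__main__":
    suspect = AuthorProfile(12, 120.0, 0.01, 0.8)
    genuine = AuthorProfile(2400, 2.5, 1.3, 0.05)
    print(f"suspect: {authenticity_score(suspect):.2f}")  # low score
    print(f"genuine: {authenticity_score(genuine):.2f}")  # high score
```

In practice, such per-author scores would be aggregated across everyone participating in a conversation, which is what lets an analyst judge how “real” the conversation as a whole is.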

How does the misinformation detection really work?

A crucial part of measuring mis/disinformation revolves around measuring proliferation (a.k.a. the snowball effect), which delivers insights into the velocity and authenticity of a specific narrative and its expected impact on the growth of online audiences. Relying solely on fact-checking makes no sense, because both real and fake pieces of information still need to spread through specific patterns in order to cross the chasm.
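
As a rough illustration of what "measuring proliferation" can look like, the sketch below estimates a narrative's snowball velocity from post timestamps. The hourly windowing and the growth-rate definition are assumptions made for this example; they are not Cyabra's actual proliferation metric.

```python
# Illustrative sketch: estimating the "snowball" velocity of a narrative
# from the timestamps of posts that mention it.
from datetime import datetime, timedelta
from typing import List

def hourly_counts(timestamps: List[datetime]) -> List[int]:
    """Bucket post timestamps into hourly counts, from first to last post."""
    if not timestamps:
        return []
    start, end = min(timestamps), max(timestamps)
    hours = int((end - start).total_seconds() // 3600) + 1
    counts = [0] * hours
    for t in timestamps:
        counts[int((t - start).total_seconds() // 3600)] += 1
    return counts

def growth_rate(counts: List[int]) -> float:
    """Average hour-over-hour growth ratio; > 1 means the narrative is snowballing."""
    ratios = [b / a for a, b in zip(counts, counts[1:]) if a > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

if __name__ == "__main__":
    base = datetime(2021, 6, 1, 12, 0)
    # Fabricated example: mentions roughly double every hour.
    posts = [base + timedelta(hours=h, minutes=m)
             for h, n in enumerate([3, 6, 12, 25, 48])
             for m in range(n)]
    counts = hourly_counts(posts)
    print(counts)                                      # [3, 6, 12, 25, 48]
    print(f"growth rate: {growth_rate(counts):.2f}")   # ~2.0 -> snowballing
```

Combining a velocity signal like this with the authenticity of the accounts doing the spreading is what distinguishes organic virality from coordinated amplification.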

Is CEO fraud becoming more of a concern?

While CEO fraud and many other impersonation-based fraudulent activities are becoming increasingly common, I can’t say that they’re yet developed enough to be at the epicenter of a CISO/CIO’s interest.

Your perspectives on the future of deepfake technologies and solutions?

Anyone can become anyone, and make others appear to do or say anything. No identity will be unbreachable, visually or audio-wise. This makes consumers’ reality more and more blurred over time, and our job as a technology startup is deeply challenging, but also highly rewarding!

To receive more timely insights, analysis and resources, sign up for the CyberTalk.org newsletter.