If you haven’t been living under a rock—especially one that is soundproof and equipped with a closed-circuit ventilation system—you’ve likely heard about Generative AI. You may have seen the impressive and sometimes unsettling videos it can produce, and if you’re following the trend closely, you’re aware of the many new tools it has enabled. Most of these tools aim to make life easier, more efficient, enjoyable, or fun. But you have probably also heard how Generative AI powers online Deception (showing the false and hiding the real), Misinformation (false or inaccurate information) and Disinformation (deliberately false content). Let’s dive in.
Deception
What happens when Generative AI, with its wide range of applications and user-friendly operation, falls into the hands of malicious actors? At the intersection of digital media, AI, and Deception lies a significant risk, which the World Economic Forum has labeled the “biggest short-term threat to the global economy.” This perfect storm poses a threat to businesses, governments, and societies alike.
There is ample evidence of the chaos that a well-orchestrated deception campaign involving fake news, misinformation, and disinformation can cause. For example, a deepfake photo of an explosion at the Pentagon circulated on social media, briefly wiping roughly $0.5 trillion off the stock market. In another case, false narratives surrounding a murder in the UK were spread by foreign accounts, fueling anti-immigrant riots. And foreign disinformation campaigns about public health led many citizens in Africa to avoid vaccines.
While deception campaigns are not new—they have existed since the early days of conflict—what is different today is the technology, which enables hyper-realistic Deception and accelerates its creation, scale, speed, and reach.
The latest Generative AI tools that create the engaging movie clips we love to share are also making it easier for bad actors to operate. Falling computational costs and widely available trained Large Language Models give malicious individuals the content-creation tools they need at ever-lower prices. Moreover, we are prime targets for deception attacks because so many of our interactions occur online—whether through WhatsApp, Telegram groups, or social media. We often encounter content selected not by our social needs or interests but by algorithmic targeting.
Genuine social networks are becoming rare; instead, we find ourselves on algorithmic networks that dictate what we see, read, and hear. This creates an ideal environment for malicious players to thrive.
Disinformation Spreads Like Wildfire
In the past, when a bad cowboy entered town, he was easy to spot. He wore a black hat, committed terrible deeds, and spat tobacco on the bar floor. Eventually, the good sheriff would appear, wearing a white hat and, well, also spitting tobacco, but let’s set that behavior aside for now. The key point is that at high noon, the two would meet on Main Street, and following a tense stare-down, the sheriff would take out the bad guy, ensuring the town lived happily ever after.
But can we do something similar today? Can we simply identify and eliminate the bad actors in digital media who create harmful deception campaigns? Partially yes, but mostly no.
To understand why, consider how these deception campaigns, aimed at our economy, government, or society, evolve. Initially, a rogue player begins spreading misinformation or disinformation to sow doubt in people’s minds (as with the false Pentagon explosion photo), to influence voter decisions, or to instigate riots. The quality of the disinformation matters, mainly how convincingly it resembles the truth. But what truly matters is quantity: the message must be repeated to have an impact. This repetition is often achieved through automated bots or human “bots”—volunteers or paid participants in established networks who echo specific messages again and again.
Next, more established players in the media, such as journalists or influencers, pick up on the misinformation and share it with their extensive networks of followers. This leads to widespread public engagement, further propagating the deceptive message.
Identifying the instigators and tracking individual messages is therefore essential. In practice, however, the best we can do is report these accounts to platforms like X, Facebook, or TikTok and hope they shut them down. Unfortunately, when one account is disabled, it reappears elsewhere under a different name. A high noon showdown is possible, but its effectiveness is limited.
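What does tracking that repetition look like in practice? Below is a minimal sketch in Python, using only the standard library, of one of the simplest signals analysts look for: distinct accounts pushing near-identical text. The account names, posts, and the 0.9 similarity threshold are all invented for illustration; real systems draw on far more signals, such as posting times, network structure, and account age.

```python
# A minimal sketch of surfacing coordinated amplification: group
# near-identical posts from different accounts. Accounts, posts, and
# the 0.9 threshold are illustrative assumptions, not real data.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("account_a", "Breaking: explosion reported near the Pentagon!"),
    ("account_b", "BREAKING: Explosion reported near the Pentagon"),
    ("account_c", "Lovely weather in the park today."),
    ("account_d", "Breaking! explosion reported near the pentagon!!"),
]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical after lowercasing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of distinct accounts pushing near-identical text.
suspicious = [
    (u1, u2, round(similarity(t1, t2), 2))
    for (u1, t1), (u2, t2) in combinations(posts, 2)
    if u1 != u2 and similarity(t1, t2) > 0.9
]

for u1, u2, score in suspicious:
    print(f"{u1} and {u2} posted near-identical text (similarity {score})")
```

Even this toy version hints at why scale favors the attacker: the same logic that catches four accounts must run across millions of posts, in near real time, on every platform at once.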
What complicates control over these situations is the long tail of deception campaigns. Once they enter public discourse—once journalists, influencers, and news sites begin echoing the message—it becomes entrenched in the online environment. No one has the funding or capability to root it out. There is no way to put the genie back in the bottle. A lie you encounter today may still be read by your children thirty years from now, and if you struggle to distinguish truth from fiction, they will undoubtedly find it even more challenging.
Join The Fight Against Deception, Misinformation And Disinformation
The same technology that attacks us and manipulates our beliefs in democracy, government, and the brands we love is also part of the solution. Generative AI offers ample opportunities to create specialized tools that support policymakers, journalists, marketing teams, security teams, and individuals in identifying and responding to Deception. This emerging field is closely related to cybersecurity but requires a new suite of tools, experts, and expertise. The battle is underway, but the bad actors no longer play on an empty field.
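To make that concrete, here is a minimal sketch of what AI-assisted triage might look like: a zero-shot classifier (via the open-source Hugging Face transformers library) scores how a post reads, so human reviewers can prioritize what to verify first. The model choice and candidate labels are assumptions for illustration; this is a sketch of triage, not a truth oracle.

```python
# A minimal triage sketch, not a truth detector: score how a post
# reads so reviewers can prioritize. Model and labels are assumed
# choices for illustration.   (pip install transformers torch)
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

post = "Shocking photo shows a massive explosion at the Pentagon today."
labels = ["unverified breaking claim", "personal opinion", "routine news"]

result = classifier(post, candidate_labels=labels)

# Surface each label with its score for a human reviewer to act on.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Note the design choice: the tool ranks content for human attention rather than deciding what is true, which matters given the question of authority raised at the end of this piece.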
We are quickly entering a world of synthetic content creation in which content is personalized at the individual level. Think of how marketing campaigns (and deception campaigns are similar in many ways) once labored to identify groups of people who fit a defined profile so they could craft compelling messages. Generative AI can already craft messages specifically for you. A recent MIT study showed that AI could study our behaviors, the same behaviors we all openly share in our emails and social posts, and mimic our decision-making with 85% accuracy. The upside is that such a capability could help you plan your next trip, making the same choices of sites and hotels you would have made yourself. Yet a malicious player could use it to identify the political issues you feel most emotional about and craft targeted messages that lure you into specific political choices, nudging the undecided from one political camp to another.
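On the benign side, mimicking decision-making can be as small as fitting a model to a user’s past choices. Everything below (the features, the data, and the use of a scikit-learn decision tree) is invented for illustration; systems like the one in the MIT study learn from far richer behavioral traces.

```python
# A toy sketch of decision mimicry: fit a tiny model on a user's
# past hotel picks and predict what they would choose next. All
# features and data here are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features per option: [price_per_night, distance_to_center_km, stars]
past_options = [
    [90, 0.5, 3], [200, 0.3, 5], [120, 2.0, 4],
    [80, 5.0, 2], [150, 1.0, 4], [60, 8.0, 2],
]
# 1 = the user booked that option, 0 = they passed on it.
past_choices = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(past_options, past_choices)

# Would this user book a $110, 1.5 km, 4-star hotel?
print(model.predict([[110, 1.5, 4]]))  # [1] -> likely yes
```

The same mechanics, swapped from hotel features to emotional hot-button issues, are what make the malicious use described above possible.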
The battlefield belongs to whoever first figures out how to use the technology to advance their agenda. There will be good players and, as always, bad players. Both will use the same technology—but for opposing reasons.
For the benefit of our society, we must build the platforms that accelerate the development of solutions capable of identifying bad actors in real time. Social networks must respond faster in removing them. And we must find ways to meet the challenge of the long tail of misinformation and disinformation: once a lie is out there, it grows and spreads, and it must be managed to reduce its impact.
It is a sticky situation. What I believe to be falsehoods may be viewed as truths by others. As Hannah Arendt noted, we live in a world of “truths,” not “Truth.” Different people hold many different beliefs, and one of the key differences between democracy and tyranny is the freedom to think independently. Who has the authority to decide which comments or shared articles should be removed? Some might argue that even false information can be spread for a good cause. Thus, while we have tools at our disposal, the challenge of mitigating the actions of those who seek to deceive is crucial yet difficult. A colleague of mine refers to this as a “complex and complicated” problem. This is why we need the best minds to help us navigate this issue, so we can more easily distinguish between genuine content and deceptive manipulation, and gain control over deception, misinformation and disinformation.