AI-Driven Disinformation: A New Front in Election Interference

In the US, we are in the midst of a presidential election cycle. Public anxiety about disinformation campaigns and the role of foreign actors, particularly Russia, seems to increase during these cycles. During the 2016 presidential election, news and social media bots were plentiful, but they achieved little penetration. By 2020, we saw a surge in deepfakes and other more sophisticated forms of disinformation. Now, with artificial intelligence (AI) tools, especially generative models like ChatGPT, getting better and better, X (formerly Twitter) is full of what are effectively disinformation bots posting and reposting one another’s content.

AI in the Hands of Adversaries

Since the last US presidential election, AI has made dramatic leaps forward in conversational chat, image and video generation, and more. A real cyberwar is occurring around us right now, and adding to the problem, people who are easily manipulated are being turned into human disinformation bots, reposting claims that anyone who actually knows the subject would recognize as silly and ridiculous. Let’s look at some of the areas where you have likely already encountered this.

1. Deepfake Videos and Audio Manipulation

Deepfakes can falsely show a candidate doing or saying things they would never do. Such a clip could go viral before it is debunked, and public trust could be harmed in the meantime. Historical adversaries of US voting systems have traditionally used information warfare for strategic disruption and stand to take full advantage of these tools; just look at the 2020 election and the proven attempts by nation states to intervene. In the 2024 cycle, we have already seen crude deepfakes (actually just simple AI-generated images) of stars allegedly supporting candidates they would never support in any way.

2. AI-Generated Propaganda

In the 1930s and 1940s, Nazi Germany’s Ministry of Public Enlightenment and Propaganda was headed by Joseph Goebbels. His statement, “If you repeat a lie often enough, people will believe it, and you will even come to believe it yourself,” has sadly proven true time and time again. And when propaganda is targeted at those who are subject to confirmation bias, they become true believers, which is a terrifying thing.

Large language models, the driving force behind GPT-type chatbots, could be employed to swarm social media and comment sections with subtle propaganda masquerading as human opinion. Automated systems that can produce and spread seemingly independent discourse at a scale no human can compete with could tip the balance of opinion among some voters, cement widespread acceptance of false narratives in favor of foreign special interests, or both.

3. Automated Misinformation Campaigns

Given such automation possibilities, AI-driven disinformation operations can be scaled up with little need for large teams of human operatives, who would otherwise be charged with maintaining a never-ending tsunami of accounts and posts. Bots can autonomously manage dozens (or more) of social media accounts, deploying AI-generated articles, memes, and even direct messages at the appropriate targets. Meanwhile, automation also helps in “catching a wave” on a trending topic, letting operators pivot quickly to new tactics when news breaks about the upcoming election. What is “trending” on social media sites is often purely bot-generated: a bot wrangler in a nation state gets direction on what specific propaganda to push in a given campaign, and AI ensures the content is not just copied and pasted verbatim across potentially millions of bots on a social network. X (formerly Twitter) has become the perfect storm for this because, in the last few years, the platform has moved away from objective monitoring and fallen in line with the whims of its new owner.
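That shift away from verbatim reposting matters because naive copy/paste detection no longer works on its own, although simple similarity checks can still catch lightly paraphrased swarms. Below is a minimal, hypothetical Python sketch of the kind of near-duplicate check defenders use to surface coordinated posting; the sample posts, account names, and similarity threshold are illustrative assumptions, not a production detector.

```python
# Flag near-duplicate posts across accounts as one signal of coordination.
# All data and thresholds here are illustrative assumptions.
from itertools import combinations


def shingles(text: str, n: int = 3) -> set[str]:
    """Break a post into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: share of shingles the two posts have in common."""
    return len(a & b) / len(a | b) if a | b else 0.0


# Hypothetical posts: bots paraphrase lightly, so exact-match checks miss
# them, but shingle overlap does not.
posts = {
    "acct_1": "The election results in River County were totally rigged last night",
    "acct_2": "Election results in River County were totally rigged last night folks",
    "acct_3": "I had a great time hiking with my dog this weekend",
}

sets = {acct: shingles(text) for acct, text in posts.items()}
for (a1, s1), (a2, s2) in combinations(sets.items(), 2):
    score = jaccard(s1, s2)
    if score > 0.5:  # assumed threshold for "suspiciously similar"
        print(f"{a1} and {a2} look coordinated (similarity={score:.2f})")
```

Running this flags the first two accounts (similarity 0.80) while leaving the unrelated one alone; real platforms layer many such signals rather than relying on any single check.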

4. Microtargeting Using AI

AI is also getting better and better at microtargeting, which, from a cybersecurity perspective, we would call “spear phishing”: going after an individual or a very small group of specific individuals, using public data to microtarget audiences based on their beliefs, behaviors, and voting habits. Russian influence operations have shown they are already quite skilled at using social media algorithms to find niche audiences and promote divisive content to them. With the help of AI, such operations could become even more precise. Public data obtained via data brokers is one avenue bad actors use to gather information for microtargeting. Data brokers are worthy of their own article in the future.

Russia’s Evolving Tactics

Russia has a long history of interfering in U.S. elections, going back at least to 2016. However, AI offers them a new and more potent toolkit for executing their goals. Instead of relying on large-scale efforts that can be more easily detected, they can leverage AI to create smaller, more agile disinformation cells. For example, AI could be used to mimic local grassroots movements, making it harder for detection systems to distinguish between organic activity and foreign interference.

Russia’s tactics might shift from overt political interference to subtler ones, such as plants in Facebook comment sections, to deliberately sow seeds of distrust in democratic institutions. Such seeds could look like AI-synthesized content undermining the legitimacy of the electoral process, or even outright conspiracy theories. The adversary would not have to push particular candidates or election outcomes to undermine faith in democracy itself.

Defending Against AI-Driven Disinformation

Combatting AI-driven disinformation will require a multi-pronged approach involving both technological and societal efforts:

1. Enhanced Detection Tools

AI is not only a threat but also a key part of the solution. Advances in machine learning can help detect deepfakes and other AI-generated content before they spread too far. Social media platforms and fact-checkers are already deploying AI to scan for unusual activity, but these systems need to be constantly updated to keep pace with new threats.
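As a concrete illustration, here is a minimal, hypothetical Python sketch of one such “unusual activity” signal: flagging accounts whose posting cadence is machine-regular. The account names, timestamps, and threshold are assumptions for illustration only; real platform detectors combine many signals like this one.

```python
# One "unusual activity" signal: accounts posting on a machine-regular
# clock. Timestamps and the 0.1 threshold are illustrative assumptions.
from statistics import mean, pstdev


def cadence_variation(timestamps: list[float]) -> float:
    """Coefficient of variation of the gaps between posts.

    Humans post in bursts (high variation); simple schedulers post on a
    near-fixed interval (variation close to zero).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return pstdev(gaps) / avg if avg else 0.0


# Hypothetical accounts: epoch seconds of each account's recent posts.
accounts = {
    "scheduler_bot": [0, 600, 1200, 1801, 2400, 3000],  # every ~10 minutes
    "normal_human": [0, 90, 4000, 4100, 20000, 50000],  # bursty, irregular
}

for name, times in accounts.items():
    score = cadence_variation(times)
    flag = "SUSPICIOUS" if score < 0.1 else "ok"
    print(f"{name}: cadence variation {score:.3f} -> {flag}")
```

A detector this simple is trivially evaded by adding jitter, which is exactly why these systems must be constantly updated as tactics evolve.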

2. Collaboration Between Governments and Tech Companies

The U.S. government must work closely with major technology companies to ensure that malicious AI activities are detected and disrupted. This partnership needs to be proactive, with clear communication channels and shared intelligence on emerging threats. The role of the Cybersecurity and Infrastructure Security Agency (CISA) will be pivotal in coordinating these efforts.

3. Public Awareness and Media Literacy

Ultimately, the public is the final line of defense. Increasing media literacy and making citizens aware of the existence of AI-driven disinformation are crucial steps. If people are better equipped to spot manipulative content, the effectiveness of these tactics will be diminished. Educational campaigns can also help reduce the knee-jerk reactions to shocking AI-generated “news,” giving fact-checkers time to debunk false information.

As AI continues to evolve, so too will the methods used to undermine democracy. Countries like Russia, with a vested interest in destabilizing Western democracies, are likely to seize on AI’s potential for sowing chaos and confusion in the U.S. election process. To protect the integrity of elections, robust detection tools, international cooperation, and a well-informed public are essential. The future of democracy depends on our ability to adapt to these new technological threats.

The 2024 election is a crucial test, not just of the candidates, but of society’s resilience against the weaponization of AI in the information age. Good luck to us all.
