Finley Thomas, Mercedes Scheible, Jigyasa Maloo, Camilla Raffaelli, Joe Everest, Agathe Labadi, James Raggio
Samantha Mikulskis, Editor; Jennifer Loy, Chief Editor
October 26, 2024
AI-driven Disinformation[1]
Introduction
Artificial Intelligence (AI) is changing the way election information spreads, enabling domestic and foreign threat actors and extremists to rapidly create and disseminate fake and misleading content.[2] This technology can be used to confuse voters, misrepresent political candidates, and support cyberattacks by enabling phishing schemes and realistic deepfakes[3] that undermine confidence in election security.[4] With elections approaching worldwide, AI-driven misinformation is becoming increasingly sophisticated, diminishing public trust and even fueling violence.[5] Addressing the potential impact of AI on election security is necessary to safeguard the integrity of democracy.[6] Certain types of AI, such as Narrow AI, will likely increase the volume and spread of misinformation and deepfakes, likely flooding social media platforms and traditional media channels and straining countermeasures. Although Narrow AI is designed to perform only one or a few well-defined tasks, the easy access to and proliferation of these tools is alarming law enforcement and increasing misinformation risks, especially around high-profile events such as elections.
Summary
15 U.S.C. § 9401(3) defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”[7] In the Biden Administration’s October 2023 Executive Order, the government stressed AI’s “extraordinary potential,”[8] while warning of the risks associated with irresponsible use, including “fraud, discrimination, bias, and disinformation”[9] and threats to national security. The US population has expressed growing concern over the misuse and impact of AI. In a recent Pew Research Center report, a majority of Americans from both parties, Republicans (56%) and Democrats (58%), indicated they are extremely or very concerned about the impact of AI on the 2024 election.[10] Apprehension is also greater among older age groups: adults aged 30 and above, and especially those 65 and above, are roughly 50% more likely to voice concern than adults aged 18 to 29.[11]
Different types of AI-powered tools have surfaced in political campaigns, raising concerns about their impact on elections. Generative AI tools, such as ChatGPT, have already been used in information operations aimed at disrupting the democratic process, generating clickbait titles and disinformation designed to polarize voters.[12] Narrow AI, despite its limited, task-specific capabilities, can still be misused to support disinformation efforts. Threat actors can exploit AI-powered translation tools to render disinformation and propaganda into different languages, amplifying their potential reach.[13] These tools can then spread content to voters and internet users through targeted posts on social media platforms, such as X, using machine learning-powered profiling to amplify reach.[14] Political microtargeting has been an effective way to shape public opinion during elections by crafting highly personalized content, campaign messages, and political ads tailored to specific individuals and groups.[15] Researchers argue that microtargeting reinforces existing political beliefs and aggravates polarization.[16] Machine learning and large language models (LLMs) have enhanced this strategy, enabling automation and scalability in real time.[17] By analyzing personal data that reveals unique vulnerabilities and values, machine learning can predict voter behavior,[18] while LLMs facilitate personalized, microtargeted outreach campaigns.
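To make the microtargeting mechanics concrete, the minimal Python sketch below chains a toy behavior-prediction model to templated messaging. Every feature, label, and template in it is a hypothetical assumption for illustration; a real operation would replace the static templates with LLM-generated text, and nothing here reflects any specific actor's actual tooling.

```python
# Illustrative sketch only: a toy pipeline showing how profiling and
# templated messaging can be chained. All voter features, labels, and
# message templates below are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical voter features: [age, daily hours on social media, issue-interest score]
X = [[22, 4.0, 0.9], [45, 1.5, 0.3], [67, 0.5, 0.7], [34, 3.0, 0.6]]
y = [1, 0, 1, 1]  # 1 = predicted to engage with political content (toy labels)

model = LogisticRegression().fit(X, y)

# Static templates stand in for the per-recipient text an LLM would generate.
templates = {
    1: "As someone who follows {issue} closely, you deserve the full story...",
    0: "Too busy for politics? The one thing to know about {issue} today...",
}

for features, issue in [([29, 5.0, 0.8], "election security"),
                        ([52, 0.8, 0.2], "the economy")]:
    tier = int(model.predict([features])[0])
    print(templates[tier].format(issue=issue))
```

The point of the sketch is the pipeline shape: a predictive model segments recipients, and a generation step personalizes the message for each segment, which is what makes the strategy scalable in real time.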
Domestic and foreign threat actors have both harnessed AI technologies to spread disinformation, produce polarized narratives, and disrupt the election process. Extremist groups have actively exploited generative AI to create and disseminate extremist content, generating tailored images, videos, deepfakes, and memes designed to resonate with their target audiences.[19] Social media, driven by AI algorithms, has become fertile ground for amplifying extremist views by flooding platforms with identical or similar messages designed to boost engagement.[20] Likewise, foreign state actors, such as Russia and Iran, engage in malicious AI use, deploying automation and generative tools to interfere with elections abroad. In August 2024, the tech company OpenAI (creator of ChatGPT) published a report explaining how it had deleted accounts connected to an Iranian information operation dubbed “Storm-2035.”[21] Rwanda also found itself the victim of AI-generated partisan content posted by anonymous or fake accounts on social media platforms, such as Twitter and Facebook, ahead of its general elections.[22] Both operations had relatively low impact, with most posts receiving only single-digit engagement before OpenAI banned the accounts connected to them.[23]
To reduce the impact of Russian AI-generated interference before the 2024 European Parliament elections and to preserve democratic integrity, the EU set up several pre-bunking and debunking initiatives to curb the spread of misinformation and disinformation.[24] The European Digital Media Observatory produced a regular bulletin covering disinformation and election interference, documenting counter-efforts and assessing responses across Europe, while the East Stratcom Task Force's EUvsDisinfo project, launched in 2015, detects and debunks Russian disinformation.[25]
Analysis
AI-generated disinformation will almost certainly become a leading threat to the integrity of electoral processes, as it can produce and disseminate false information at scale, making it difficult for voters to distinguish fact from fiction. The increasing accessibility of generative AI tools to the general public will likely lower the barriers to creating disinformation, as individuals with little to no technical knowledge can now produce deepfakes or manipulated images. This democratization of AI-generated content will likely overwhelm fact-checking methods, with the volume of disinformation surpassing traditional detection techniques. With AI evolving rapidly, election security frameworks will likely need to adapt to mitigate these emerging risks.
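To illustrate why volume outpaces traditional detection, the following Python sketch shows a naive near-duplicate filter of the kind long used as a first pass against message flooding; the sample posts and similarity threshold are hypothetical.

```python
# Illustrative sketch only: a naive near-duplicate check of the kind used
# as a first-pass flood filter. Sample posts and threshold are hypothetical.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles in a post."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

posts = [
    "Breaking: officials confirm ballots were destroyed overnight in three counties",
    "BREAKING: officials confirm ballots were destroyed overnight in three counties!!",
    "Election officials say every ballot is accounted for and secure",
]

THRESHOLD = 0.5  # arbitrary cutoff for this illustration
sets = [shingles(p) for p in posts]
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        sim = jaccard(sets[i], sets[j])
        if sim >= THRESHOLD:
            print(f"posts {i} and {j} look coordinated (similarity {sim:.2f})")
```

Verbatim or lightly edited floods score high and get caught, but generative AI can produce unlimited paraphrases whose overlap falls below any fixed threshold, which is one reason such first-pass detection pipelines are likely to be overwhelmed.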
AI-driven disinformation campaigns will very likely lead to an increase in political violence, especially in politically tense environments. Extremist groups, such as The Proud Boys, will very likely use AI to generate content that provokes violence and deepens societal divisions. This will likely increase both online and offline violence, as disinformation escalates tensions and exacerbates preexisting grievances during election periods. In politically sensitive climates, even a small amount of polarizing content will likely have a destabilizing effect. These tensions will very likely cause social unrest, leading to coordinated acts of violence. The ability of AI to generate extremist narratives will almost certainly escalate these trends.
Foreign state and non-state actors will very likely use AI for election interference, leveraging advanced technology to create disinformation campaigns that influence voter behavior and increase political polarization. Targeted content will likely be used to subtly push polarized agendas, exploiting existing political and social divides and causing a loss of trust in democratic institutions. The flexibility and scale of AI tools will likely allow these actors to target disinformation campaigns at specific voter groups, increasing the likelihood of influencing the electoral process. In the long term, the ongoing use and spread of AI-generated disinformation will likely erode trust in democratic systems.
AI, especially LLMs, will likely spread misinformation due to programming flaws and limited oversight. Citizens asking LLMs for election news, information on candidates, or other pertinent information are likely to receive incorrect or misleading answers. LLMs tuned to predict voter preferences will likely keep their users in an echo chamber, serving the information the model predicts they want to hear rather than fact. Misinformation spread sporadically by AI, with no definable source, is very likely to confuse and overwhelm voters.
Disinformation memes will very likely spread faster and more effectively than in previous election cycles due to generative AI. Disinformation actors can produce vast quantities of memes, very likely helping them go viral and likely confusing or misleading voters. Coordinated efforts by extremist groups will very likely increase the chances that this AI-generated content trends or goes viral, very likely shifting national discussions and attention to extreme or defamatory content. In addition, extremist groups will likely take advantage of the shift in candidates' and political parties' campaign methods and communication toward memes and social media platforms, likely leading to further hate speech, marginalization of minorities, and political clashes. Candidates will very likely exploit social media trends to gain voters and focus their messaging on the most reposted topics and memes. This will almost certainly generate further confusion and mislead voters.
Recommendations
The Counterterrorism Group (CTG) recommends social media users approach content critically, questioning and verifying claims and images before sharing, especially emotionally charged posts or those lacking reputable sources.
Governments and civil society should launch pre-bunking campaigns to educate the public on how to identify AI-generated images and videos, along with other disinformation tactics to make these campaigns less effective.
Journalists and government agencies should take a larger role in debunking false narratives by correcting disinformation and providing context to what actually occurred.
Tech companies should monitor and remove accounts that misuse AI for disinformation, prioritizing transparency and accountability.
Governments and technology companies should work together to monitor fringe platforms, such as 4chan and Gab, tracking the extremist content being shared there.
Social media platforms should flag AI-generated content, adding disclaimers to AI-created videos and images to help users identify misleading information (see the sketch after this list).
Educational campaigns should be launched to improve digital literacy, helping the general public recognize AI-generated disinformation and understand how it spreads.
Governments, civil society, and NGOs should develop further educational and informative campaigns on transparency and fact-checking tools and methods, providing access to transparent, unbiased resources to reduce the reach of disinformation and the confusion it causes.
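As a minimal sketch of the flagging recommendation above, the hypothetical Python below attaches a disclaimer to uploads whose metadata marks them as AI-generated. The metadata schema and the generator_class field are invented for illustration; production systems would instead rely on provenance standards such as C2PA content credentials and on detection classifiers.

```python
# Illustrative sketch only: attaching a disclaimer to uploads whose
# provenance metadata marks them as AI-generated. The schema and the
# "generator_class" field are invented for this example.

AI_DISCLAIMER = "Label: this media was created or edited with AI tools."

def label_post(post: dict) -> dict:
    """Add a visible disclaimer when metadata flags the upload as AI-made."""
    if post.get("metadata", {}).get("generator_class") == "ai":
        post["disclaimer"] = AI_DISCLAIMER
    return post

uploads = [
    {"id": 1, "metadata": {"generator_class": "ai", "tool": "image-model-x"}},
    {"id": 2, "metadata": {"generator_class": "camera"}},
]

for post in map(label_post, uploads):
    print(post["id"], "->", post.get("disclaimer", "no label"))
```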
[1] AI-driven Disinformation, generated by a third-party database (Created by AI)
[2] Election disinformation takes a big leap with AI being used to deceive worldwide, AP News, March 2024, https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
[3] AI-generated media that are created to appear real
[4] Risk in Focus: Generative A.I. and the 2024 Election Cycle, CISA, January 2024
[5] AI could 'supercharge' election disinformation, US tells the BBC, BBC, February 2024, https://www.bbc.com/news/world-68295845
[6] Risk in Focus: Generative A.I. and the 2024 Election Cycle, CISA, January 2024
[7] 15 USC 9401: Definitions, United States Code, https://uscode.house.gov/view.xhtml?req=(title:15%20section:9401%20edition:prelim)
[8] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, White House, October 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[9] Ibid
[10] Americans in both parties are concerned over the impact of AI on the 2024 presidential campaign, Pew Research Center, September 2024, https://www.pewresearch.org/short-reads/2024/09/19/concern-over-the-impact-of-ai-on-2024-presidential-campaign/
[11] Ibid
[12] THREAT ASSESSMENT: Russia, Iran, and China seek to influence US elections, undermining democracy and sowing voter discord. Disinformation will almost certainly be their primary means of influence, by William Adams, Sabrina Bernardo, Clémence Van Damme, Yassin Belhaj, Samuel Pearson
[13] AI Threats in Elections: What Nonprofits Must Know, Alliance for Justice, July 2024, https://afj.org/article/ai-threats-in-elections-what-nonprofits-must-know/
[14] What AI is doing to Campaigns, POLITICO, August 2024, https://www.politico.com/news/2024/08/15/what-ai-is-doing-to-campaigns-00174285
[15] “Surveillance, Disinformation, and Legislative Measures in the 21st Century: AI, Social Media, and the Future of Democracies,” Social Sciences, 2024, https://doi.org/10.3390/socsci13100510
[16] Campaign microtargeting and AI can jeopardize democracy, LSE Blog, May 2024, https://blogs.lse.ac.uk/politicsandpolicy/campaign-microtargeting-and-ai-can-jeopardize-democracy/
[17] Effectiveness of large language models in political microtargeting assessed in new study, University of Oxford, June 2024, https://www.ox.ac.uk/news/2024-06-26-effectiveness-large-language-models-political-microtargeting-assessed-new-study
[18] “Predicting Propensity to Vote with Machine Learning,” SSRN, 2023, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4531018
[19] The Digital Weaponry of Radicalisation: AI and the Recruitment Nexus, Global Network on Extremism & Technology, July 2024, https://gnet-research.org/2024/07/04/the-digital-weaponry-of-radicalisation-ai-and-the-recruitment-nexus/
[20] How extremists may potentially use AI to further reach, recruitment, Institute for Strategic Dialogue, January 2024, https://www.isdglobal.org/isd-in-the-news/how-extremists-may-potentially-use-ai-to-further-reach-recruitment/
[21] THREAT ASSESSMENT: Russia, Iran, and China seek to influence US elections, undermining democracy and sowing voter discord. Disinformation will almost certainly be their primary means of influence, by William Adams, Sabrina Bernardo, Clémence Van Damme, Yassin Belhaj, Samuel Pearson
[22] Influence and cyber operations: an update, OpenAI, October 2024, https://cdn.openai.com/threat-intelligence-reports/influence-and-cyber-operations-an-update_October-2024.pdf
[23] Ibid
[24] Prebunking AI-generated disinformation ahead of EU elections, European Digital Media Observatory, March 2024, https://edmo.eu/publications/prebunking-ai-generated-disinformation-ahead-of-eu-elections/
[25] Disinfo Bulletin, European Digital Media Observatory, June 2024, https://ec.europa.eu/newsroom/edmo/newsletter-archives/view/service/3754