Updated November 5, 2024
As we predicted in early October, state-sponsored information operations, including misinformation and disinformation campaigns, have reportedly intensified around the 2024 US presidential election, creating associated risks to the electoral process. On Nov. 4, the Office of the Director of National Intelligence (ODNI), the FBI, and the Cybersecurity and Infrastructure Security Agency (CISA) confirmed that external state actors, notably Russia, were attempting to undermine confidence in the electoral process through false reports of electoral fraud and to exacerbate divisions among the voting public, particularly in swing states such as Arizona, Georgia, Michigan, Nevada, North Carolina, Pennsylvania, and Wisconsin. The agencies warn that hostile influence operations could encourage violence and are likely to continue at a high tempo through election day and for some weeks after the vote. In addition, US agencies have accused Iran of carrying out malicious cyber activities and disseminating fake media content, as it did during previous elections. Experts also contend that China has mainly targeted down-ballot races and members of Congress who are critical of Beijing, using sophisticated AI-generated content and social media manipulation.
Key Takeaways
- State actors, particularly China, Iran, and Russia, will likely escalate ongoing cyber efforts to create greater divisions amid an increasingly polarized domestic political environment.
- State-sponsored cyber tactics are likely to become increasingly sophisticated, with more phishing attacks and deepfakes designed to blur the line between credible information and malicious content.
- While voting systems and election processes are expected to remain secure, the sheer volume of mis- and disinformation in the public sphere could erode voter confidence in the result and in the broader voting ecosystem.
Misinformation refers to false information shared with or without malicious intent; disinformation is the subset that is deliberately misleading and aimed at manipulating public perception. Social media platforms have become indispensable tools for spreading both, a problem often exacerbated by artificial intelligence (AI) through synthetic media. US government officials had previously issued warnings about foreign influence operations, particularly from China, Iran, and Russia, all three of which have deteriorating relations with Washington.
Recent incidents suggest that disinformation campaigns sometimes serve as a smokescreen for significant cyberattacks, given the repeated targeting of political figures and organizations. Misinformation efforts can also erode trust and aggravate existing operational vulnerabilities. The rapidly evolving networked environment requires individuals and businesses to enhance cyber situational awareness and readiness, and to implement comprehensive risk mitigation strategies, including proactive social media monitoring and information-sharing partnerships, to counter these threats effectively.
The Proliferation of AI in Information Operations
The increasing integration of artificial intelligence (AI) into misinformation and disinformation operations raises concerns that China, Iran, and Russia may extend their reach and exert greater influence on US politics and elections. However, a report by Microsoft found that AI has not had the disruptive effect originally feared. This is largely because AI’s advantages lie in scaling and efficiency, enabling more content creation with fewer resources, without necessarily improving the quality of that content. The report indicated that some threat actors have “pivoted back to techniques that have proven effective in the past — simple digital manipulations, mischaracterization of content, and use of trusted labels or logos atop false information.”
Despite AI’s limited immediate impact, its future potential remains significant. As AI-enabled technologies rapidly improve, they may enable more sophisticated information operations capable of manipulating public perception at greater scale. In particular, deepfake technology allows adversaries to create convincing fake audio and video, further blurring the line between reality and deception. This capability poses a unique challenge to democratic processes because it can undermine trust in authentic sources of information and inflame divisive political narratives.
Conclusion
US government officials have expressed confidence in the integrity of the 2024 elections; however, threat actors are leveraging the proliferation of AI-enabled technologies as a force multiplier for their strategic objectives. While AI will also enhance detection technologies, a continuous stream of small-scale incidents, rather than a single calamitous event, is likely to collectively undermine confidence in the electoral process.
In the contemporary digital threat environment, individuals and businesses must enhance their overall cyber situational awareness to recognize and respond to emerging threats. Building strong information-sharing partnerships with government and industry stakeholders can create a united front against misinformation campaigns. Stakeholders can use active social media monitoring to detect false narratives early and deploy countermeasures more swiftly. Similarly, sound digital literacy skills can empower individuals to critically examine information and reduce the impact of deceptive content. Businesses should invest in robust cybersecurity measures to mitigate operational vulnerabilities.
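To make the social media monitoring recommendation concrete, the sketch below shows one minimal way an organization might flag posts matching known false-narrative phrases and track their hourly volume to spot amplification spikes. This is an illustrative assumption, not a Crisis24 tool or methodology; the watchlist phrases and sample posts are entirely hypothetical, and a real monitoring pipeline would add platform APIs, deduplication, and analyst review.

```python
# Minimal, hypothetical sketch of narrative monitoring (not a production tool).
# Watchlist phrases and sample posts are invented for illustration only.
from collections import Counter

# Hypothetical phrases tied to a known false narrative about election fraud.
WATCHLIST = {"ballots destroyed", "machines flipped votes"}

def flag_posts(posts, watchlist=WATCHLIST):
    """Return the posts whose text contains any watchlist phrase (case-insensitive)."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(phrase in text for phrase in watchlist):
            flagged.append(post)
    return flagged

def narrative_volume(flagged):
    """Count flagged posts per hour bucket to surface sudden amplification spikes."""
    return Counter(post["hour"] for post in flagged)

# Hypothetical sample stream of posts.
sample_posts = [
    {"hour": "2024-11-05T09", "text": "Reports claim machines flipped votes in County X"},
    {"hour": "2024-11-05T09", "text": "Lines are long but moving at my polling place"},
    {"hour": "2024-11-05T10", "text": "BREAKING: ballots destroyed overnight, share this!"},
]

flagged = flag_posts(sample_posts)
print(len(flagged))                      # 2 posts match the watchlist
print(dict(narrative_volume(flagged)))   # counts per hour bucket
```

In practice the hourly counts would feed an alerting threshold, so analysts are notified only when a narrative's volume jumps abnormally rather than on every individual match.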
Author(s)
Dr. Saba Sattar
Intelligence Analyst III
Dr. Saba Sattar is a scholar-practitioner with expertise in the Asia-Pacific region and cyber intelligence. She serves as a senior subject matter expert at Crisis24. Dr. Sattar has also joined the...
Jonathan Vincent
Watch Operations Manager
Jonathan is a South Africa-based Watch Operations Manager with a secondary focus on cybersecurity. He joined Crisis24 in 2009. He studied Political Science, followed by a post-graduate degree in...