Introduction
In a recent move to combat the misuse of artificial intelligence in political spheres, OpenAI announced that it has shut down a network of ChatGPT accounts linked to an Iranian influence operation. The operation, reportedly aimed at influencing the U.S. presidential election, used the chatbot to generate content that included social media posts and long-form articles. Despite its sophistication, the operation appears to have failed to reach a significant audience. Even so, the incident highlights growing concerns about the use of generative AI in political influence campaigns.
The Rise of AI in Political Manipulation
This is not an isolated incident. OpenAI has previously banned accounts linked to state-affiliated actors who used ChatGPT for malicious purposes. In May, the company disrupted five separate campaigns designed to manipulate public opinion using its AI tools. These efforts mirror tactics seen in earlier election cycles, when state actors used social media platforms such as Facebook and Twitter to spread misinformation and sow discord.
The emergence of AI as a tool for political manipulation marks a new chapter in the battle against disinformation. Generative AI tools like ChatGPT can produce convincing text at scale, making it easier and cheaper for malicious actors to flood social channels with misleading content. This development has pushed companies like OpenAI into a "whack-a-mole" approach, continually banning accounts associated with these influence efforts as they arise.
The Storm-2035 Operation
OpenAI's investigation into this latest cluster of accounts was bolstered by a report from Microsoft Threat Intelligence. Published last week, the report identified the group behind the operation as "Storm-2035," an Iranian network active since 2020. According to Microsoft, Storm-2035 operates multiple sites that imitate legitimate news outlets and engage U.S. voter groups across the political spectrum. The group pushes polarizing messaging on contentious topics, including the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
The strategy employed by Storm-2035 does not appear to focus on promoting a particular policy or candidate. Instead, the goal is to sow discord and deepen divisions within the U.S. electorate. This approach is consistent with other known influence operations, where the aim is to create confusion and conflict rather than support a specific agenda.
OpenAI's Findings
During its investigation, OpenAI identified five website fronts used by Storm-2035, which masqueraded as both progressive and conservative news outlets. These sites featured convincing domain names, such as "evenpolitics.com," designed to attract readers from different political backgrounds. ChatGPT was used to draft several long-form articles for these sites. One example is an article falsely claiming that "X censors Trump's tweets," a misleading narrative given that Elon Musk, who owns X (formerly Twitter), has encouraged former President Donald Trump to engage more on the platform.
On social media, OpenAI found that Storm-2035 controlled a dozen accounts on X and one account on Instagram. These accounts were used to post political comments rewritten by ChatGPT. One such post falsely alleged that Vice President Kamala Harris attributed "increased immigration costs" to climate change, accompanied by the hashtag "#DumpKamala." Despite the operation’s efforts, most of these posts received little to no engagement, with few likes, shares, or comments.
The Implications and Future Outlook
While this particular influence operation appears to have had limited impact, it underscores the potential dangers of using AI in political manipulation. The ease with which AI tools like ChatGPT can be used to create and disseminate misinformation poses a significant challenge for both technology companies and governments. As the 2024 U.S. presidential election approaches, it is likely that similar operations will emerge, each trying to exploit the polarized political climate.
OpenAI's actions demonstrate a proactive stance in addressing these threats, but the whack-a-mole approach has its limitations. As AI technology continues to evolve, so too will the tactics of those seeking to misuse it. This ongoing cat-and-mouse game highlights the need for robust AI governance and international cooperation to mitigate the risks associated with AI-powered disinformation campaigns.
Conclusion
The takedown of Storm-2035 shows that generative AI has lowered the cost of producing political disinformation, even if this particular campaign found little audience. Continued vigilance from AI companies, social platforms, and policymakers will be essential as the election approaches and similar operations inevitably follow.