OpenAI Thwarts Iranian Influence Operation by Blocking Use of ChatGPT for U.S. Election Propaganda

OpenAI recently uncovered and disrupted a covert Iranian influence operation, tracked as Storm-2035, that used ChatGPT to generate content on a range of topics, including the U.S. presidential election. The operation targeted audiences on both ends of the political spectrum, publishing articles on fake news websites posing as progressive and conservative outlets.

Despite these efforts, the AI-generated content failed to gain significant engagement on social media. The operation nonetheless covered a wide range of subjects, including the conflict in Gaza, Israel's presence at the Olympics, and U.S. politics. The articles also featured commentary on fashion and beauty, possibly to make the content appear more authentic.

Microsoft has also flagged Storm-2035 as a threat cluster that actively engages U.S. voter groups with polarizing messaging on issues such as LGBTQ rights, the Israel-Hamas conflict, and the U.S. presidential candidates. Among the phony news sites created by the group were EvenPolitics, Nio Thinker, and Savannah Time.

Separately, Google’s Threat Analysis Group detected spear-phishing activity by APT42, a threat actor linked to Iran’s Islamic Revolutionary Guard Corps (IRGC), targeting high-profile individuals in the U.S. and Israel. APT42 used sophisticated social engineering techniques and malicious redirects to harvest login credentials from users of popular email services.

Together, OpenAI’s disruption of Storm-2035 and Google’s detection of APT42 underscore the persistent threat of state-sponsored operations targeting elections and high-profile individuals, and the continued need for cybersecurity vigilance.
