OpenAI Takes Action Against Malicious Use of ChatGPT
OpenAI recently announced the suspension of several ChatGPT accounts linked to malicious activity by Russian-speaking threat actors and two Chinese hacking groups. These accounts were reportedly used for a range of cybercriminal endeavors, including malware development, social media automation, and research into U.S. satellite communications technologies.
A Closer Look at the Threat Actors
According to OpenAI’s threat intelligence report, the Russian-speaking group leveraged the AI model for several malicious purposes, such as creating and refining Windows malware, troubleshooting code across various programming languages, and establishing their command-and-control (C2) systems. OpenAI noted that the threat actors showcased a strong understanding of Windows internals, coupled with effective operational security measures.
The ScopeCreep Malware Campaign
Dubbed "ScopeCreep," this malware campaign utilized Go programming language to orchestrate its activities. OpenAI emphasized that there was no sign of widespread impact from these operations. The attackers employed temporary email accounts to sign up for ChatGPT, allowing them to engage the AI for short-term, specific improvements to their malware before abandoning the accounts. This tactic demonstrates their focus on minimizing traces, an essential practice in operational security.
The malware was distributed disguised as a legitimate tool: a video game crosshair overlay named Crosshair X. Users who unwittingly downloaded the trojanized version ended up with a malware loader capable of fetching additional harmful payloads from an external server.
The Malware’s Technical Aspects
Once deployed, the malware initiated a multi-stage process aimed at escalating privileges and maintaining a stealthy presence on compromised systems. One of its methods involved relaunching itself with elevated privileges via the Windows ShellExecuteW API and then running PowerShell commands to add itself to Windows Defender's exclusion list, helping it evade detection. It also employed timing delays and obfuscation techniques, such as Base64-encoded strings, to make analysis harder.
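OpenAI has not published the ScopeCreep source, so the snippet below is only a minimal Go sketch of two of the generic evasion techniques named above, a hard-coded timing delay and Base64 string obfuscation. The encoded value and everything else here are hypothetical, not details recovered from the campaign.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"time"
)

// decode reverses the simple Base64 string obfuscation described above:
// sensitive strings are stored encoded and only decoded at runtime.
func decode(s string) string {
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return ""
	}
	return string(b)
}

func main() {
	// Hypothetical encoded value ("https://example.com/payload"); real samples
	// would hide C2 addresses or commands this way to frustrate static analysis.
	encoded := "aHR0cHM6Ly9leGFtcGxlLmNvbS9wYXlsb2Fk"

	// Timing delay of the kind used to outlast automated sandbox analysis
	// (kept short here so the example finishes quickly).
	time.Sleep(2 * time.Second)

	fmt.Println(decode(encoded))
}
```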
The primary aim of this malicious software was to harvest sensitive information, including browser-stored credentials, tokens, and cookies, all while keeping the threat actors informed through alerts sent to a dedicated Telegram channel.
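The report describes the Telegram notifications only at a high level. The following is a generic sketch of how a Go program might post an alert to a Telegram chat through the public Bot API, with a placeholder token, chat ID, and message; it is not code attributed to the actors.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// sendTelegramAlert posts a message to a Telegram chat via the Bot API's
// sendMessage method, the general mechanism described for operator alerts.
func sendTelegramAlert(botToken, chatID, text string) error {
	endpoint := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", botToken)
	resp, err := http.PostForm(endpoint, url.Values{
		"chat_id": {chatID},
		"text":    {text},
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("telegram API returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder values for illustration only.
	_ = sendTelegramAlert("123456:EXAMPLE-TOKEN", "-1001234567890", "new host checked in")
}
```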
Technical Intelligence Gathering
OpenAI found that the Russian threat actors sought AI assistance with narrowly scoped tasks such as debugging Go code for HTTPS requests and integrating Telegram API functionality. Their inquiries also covered modifying Windows Defender settings through PowerShell, pointing to both solid technical knowledge and a clear intent to develop sophisticated malware. A sketch of how mundane such requests can look follows below.
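To illustrate, here is a minimal Go HTTPS client of the kind the report says the actors asked the model to troubleshoot; the URL and overall structure are assumptions made for the sake of a runnable example, not details from the report.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetch issues a basic HTTPS GET with a timeout -- routine request code that
// is indistinguishable on its own from legitimate development work.
func fetch(rawURL string) (string, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(rawURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	body, err := fetch("https://example.com")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(len(body), "bytes received")
}
```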
Suspicious Activities by Chinese Hacking Groups
In addition to the Russian-speaking accounts, OpenAI disabled accounts linked to two Chinese hacking groups, tracked as APT5 and APT15, among others. One group focused primarily on open-source research and technical queries, while the other used the model for support tasks such as Linux system administration and software development.
Automation and Script Generation
The Chinese groups utilized ChatGPT to troubleshoot system configurations and develop software packages for offline use. They also crafted scripts for brute-forcing FTP servers and explored methods to automate penetration testing using large language models.
An alarming revelation was the use of ChatGPT by these actors to create automated scripts that could manipulate social media platforms, including Facebook and Instagram, effectively generating engagement through programmed likes and posts.
Other Malicious Activities Leveraging AI
Numerous other malicious activities involving OpenAI’s technology have come to light, showcasing a diverse range of nefarious applications:
- North Korean Employment Scams: A network designed to promote fraudulent job applications in IT and software development.
- Geopolitical Messaging from China: An initiative that employed ChatGPT to create and distribute social media posts across various platforms highlighting China’s geopolitical interests.
- Filipino Political Discourse: An operation that generated a high volume of comments related to current events in the Philippines.
- Iranian Influence Campaigns: Utilizing AI to produce content that seemingly advocated for a variety of social and political issues while disguised as regular users on social media.
Fraudulent Recruitment Schemes
OpenAI’s findings also describe recruitment-based scams that exploited ChatGPT to craft enticing job advertisements promising high salaries for trivial tasks. These operations charged new recruits joining fees and kept existing “employees” engaged by paying out small amounts, hallmarks of classic task scams.
By shedding light on these activities, OpenAI aims to reinforce the importance of responsible AI use while addressing the evolving landscape of cyber threats. The implications of these findings underscore a growing intersection between AI technology and cybercrime, highlighting the need for ongoing vigilance in safeguarding against such abuses.