Major Threats from Malicious Code and Vulnerabilities

Ensuring AI Security: The Growing Threat of Malicious Code in Open Source Repositories

Attackers Target Open Source AI Repositories with New Malicious Techniques

In an alarming trend, attackers are increasingly targeting open-source artificial intelligence (AI) repositories such as Hugging Face, exploiting security loopholes to host malicious projects. A recent analysis by ReversingLabs found that Hugging Face's automated security checks failed to flag harmful code hidden in two hosted AI models, exposing a critical gap in the platform's defenses.

The attack abused Pickle, Python's widely used serialization format, employing a novel evasion method dubbed "NullifAI" to slip past detection. Although these uploads appeared to be proofs of concept, the platform tagged them with a "No issue" label, raising concerns about the reliability of existing safety checks. Tomislav Pericin, chief software architect at ReversingLabs, warns that companies should not rely solely on a repository's security checks when integrating open-source models into internal AI projects.
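Pickle's danger stems from its design: unpickling can invoke arbitrary callables through an object's `__reduce__` hook, so merely loading a model file can run attacker-chosen code. The sketch below is a minimal, harmless illustration of that mechanism, not a reconstruction of the NullifAI payload, whose details are not given here:

```python
import pickle

class Payload:
    """Any class can instruct pickle to call an arbitrary function on load."""
    def __reduce__(self):
        # Harmless stand-in: real attacks substitute os.system, exec, etc.
        return (eval, ("40 + 2",))

data = pickle.dumps(Payload())

# Simply *loading* the bytes executes the embedded call; the object is
# replaced by the call's return value.
result = pickle.loads(data)
```

This is why scanning Pickle files is hard: the malicious behavior lives in opcodes that only take effect at deserialization time, and obfuscation techniques can hide which callable is referenced.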

“Malicious actors are abusing public repositories to host harmful versions of their work,” Pericin cautions, underlining that the threat landscape is continuously evolving. As organizations accelerate their adoption of AI technologies—61% are utilizing open-source models according to a Morning Consult survey—they must establish thorough security protocols to inspect their AI supply chains for vulnerabilities.

The issue is further complicated by risks inherent to the data formats themselves. Because Pickle deserialization can execute arbitrary code, security experts urge a transition to the Safetensors format, which stores only raw tensor data and has been scrutinized for safety. As these attacks grow more sophisticated, organizations must also navigate the complex licensing terms attached to open-source models, which can lead to compliance lapses.
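When a workflow cannot avoid Pickle entirely, one standard mitigation is to restrict which globals the unpickler may resolve, following the pattern in Python's own `pickle` documentation. The allow-list contents and helper names below are illustrative assumptions, not part of any cited tool:

```python
import io
import pickle

# Hypothetical allow-list: only these (module, name) pairs may be resolved.
ALLOWED = {("builtins", "list"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on the explicit allow-list."""
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data structures round-trip normally:
ok = safe_loads(pickle.dumps({"weights": [1, 2, 3]}))

# A pickle that references a dangerous callable is rejected at load time:
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

try:
    safe_loads(pickle.dumps(Evil()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

A restricted unpickler narrows the attack surface but is not a substitute for format-level fixes such as Safetensors, which sidesteps code execution entirely by storing only tensor bytes and metadata.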

In a rapidly changing digital environment, vigilance and adaptive security strategies are essential to safeguard organizations against these emerging threats in the realm of artificial intelligence.
