Major Threats from Malicious Code and Vulnerabilities


Ensuring AI Security: The Growing Threat of Malicious Code in Open Source Repositories

Attackers Target Open Source AI Repositories with New Malicious Techniques

In an alarming trend, attackers are increasingly targeting open-source artificial intelligence (AI) repositories such as Hugging Face, exploiting gaps in automated scanning to host malicious projects. A recent analysis by ReversingLabs revealed that Hugging Face's automated security checks failed to flag harmful code hidden in two hosted AI models, exposing a critical weakness in the platform.

The attacks used Pickle, Python's default serialization format and a common way to distribute model weights, together with a novel evasion method dubbed "NullifAI" to slip past detection. Although the uploads appeared to be proofs-of-concept, the platform tagged them with a "No issue" label, raising concerns about the reliability of existing safety measures. Tomislav Pericin, chief software architect at ReversingLabs, warns that companies should not rely solely on repository security checks when integrating open-source models into internal AI projects.
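The underlying risk with Pickle is generic and well documented: deserializing a pickle stream can execute arbitrary code. The sketch below is a standard illustration of that mechanism, not the NullifAI technique itself, using Python's `__reduce__` hook with a harmless `eval` call standing in for an attacker's payload.

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct this object.
    # It returns (callable, args), and pickle.loads will CALL it.
    # Here the callable is eval on harmless arithmetic; a real
    # attacker could point it at os.system or a downloader instead.
    def __reduce__(self):
        return (eval, ("2 + 2",))

payload = pickle.dumps(Malicious())

# Merely deserializing the bytes runs the embedded call:
result = pickle.loads(payload)
print(result)  # → 4
```

A model file distributed in this format can therefore carry a payload that fires the moment a framework loads the weights, which is why static scanners alone are a fragile defense.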

“Malicious actors are abusing public repositories to host harmful versions of their work,” Pericin cautions, underlining that the threat landscape is continuously evolving. As organizations accelerate their adoption of AI technologies—61% are utilizing open-source models according to a Morning Consult survey—they must establish thorough security protocols to inspect their AI supply chains for vulnerabilities.
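One practical form such inspection can take is static analysis of pickle streams before anything is deserialized. The sketch below is a hypothetical, simplified check (in the spirit of scanners like Hugging Face's Picklescan, not their actual implementation): it walks the opcode stream with the standard-library `pickletools` module and flags opcodes that can import or invoke code.

```python
import pickle
import pickletools

# Opcodes that can resolve a global name or call a callable during
# unpickling -- the primitives every pickle-based payload relies on.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def flag_pickle(data: bytes) -> list:
    """Scan raw pickle bytes WITHOUT deserializing them; return a
    list of suspicious opcodes found (empty means nothing flagged)."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            hits.append(opcode.name if arg is None else f"{opcode.name} {arg!r}")
    return hits

# Plain data pickles use only container/literal opcodes:
benign = pickle.dumps({"weights": [0.1, 0.2]})
print(flag_pickle(benign))  # → []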

The issue is further complicated by risks inherent to the data formats themselves. Security experts emphasize the need to transition from Pickle, which can execute arbitrary code on load, to the more secure Safetensors format, which stores only raw tensor data and has undergone independent security review. As these attacks grow more sophisticated, organizations must also navigate the complex licensing terms attached to open-source models, which can lead to compliance lapses.
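Safetensors avoids the problem structurally: per its published format specification, a file is just an 8-byte little-endian header length, a JSON header describing each tensor, and raw tensor bytes, with nothing executable anywhere. The stdlib-only sketch below writes and parses that layout by hand (an illustration of the format's shape, not the `safetensors` library's API) to show that loading reduces to parsing pure data.

```python
import json
import struct

def write_safetensors(path, tensors):
    """tensors: name -> (dtype string, shape list, raw bytes)."""
    header, offset = {}, 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        offset += len(raw)
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)) + blob)   # length + JSON header
        for _, (_, _, raw) in tensors.items():
            f.write(raw)                               # raw tensor bytes

def read_header(path):
    """Parse only the metadata -- plain JSON, never executed."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))
```

For example, writing a single 2x2 float32 tensor named `"w"` and re-reading the header yields `{"w": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16]}}`; a malformed or malicious file can at worst fail to parse, not run code.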

In a rapidly changing digital environment, vigilance and adaptive security strategies are essential to safeguard organizations against these emerging threats in the realm of artificial intelligence.
