AI-as-a-Service Providers at Risk of Privilege Escalation and Cross-Tenant Attacks

A new study has uncovered critical vulnerabilities in artificial intelligence (AI)-as-a-service providers, such as Hugging Face, that could allow threat actors to compromise sensitive data and access target environments. The research, conducted by Wiz researchers Shir Tamari and Sagi Tzadik, found that malicious models pose a significant risk to AI systems, especially those hosted by service providers, because attackers can exploit such models to launch cross-tenant attacks.

The risks identified in the study involve shared inference infrastructure takeover and shared CI/CD takeover, enabling threat actors to run untrusted models in pickle format, a Python serialization format that can execute arbitrary code during deserialization, and to mount supply chain attacks through the service’s CI/CD pipeline. The implications of such breaches could be severe, allowing attackers to breach the service’s custom models, escalate privileges, and access other customers’ data stored within the platform.
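
The pickle risk is easy to demonstrate. Below is a minimal, self-contained sketch of why loading an untrusted pickle file is dangerous; the payload here is a harmless echo command used purely for illustration:

```python
import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ during deserialization, so whatever callable
    # it returns is executed inside pickle.loads() -- before the victim's
    # code ever inspects the "model".
    def __reduce__(self):
        # Illustrative payload only; a real attacker could open a reverse
        # shell or read cloud credentials at this point.
        return (os.system, ("echo arbitrary code executed on load",))

blob = pickle.dumps(MaliciousModel())  # what an attacker would upload

# The victim believes they are loading model weights; the payload fires here.
pickle.loads(blob)
```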

To address these vulnerabilities, the researchers recommend enforcing IMDSv2 with a hop limit to prevent unauthorized access to the Instance Metadata Service, and encourage users to source models only from trusted providers, enable multi-factor authentication, and avoid using pickle files in production environments.
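
For AWS-hosted inference infrastructure, the IMDSv2 recommendation can be applied per EC2 instance. The following is a minimal sketch using boto3, assuming an EC2-backed deployment; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Enforce IMDSv2 and cap the hop limit at 1 so that a container running a
# malicious model cannot reach the Instance Metadata Service through the
# host's network stack and steal the node's credentials.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    HttpTokens="required",             # reject IMDSv1 requests outright
    HttpPutResponseHopLimit=1,         # responses cannot cross an extra hop
    HttpEndpoint="enabled",            # keep IMDS available to the host itself
)
```

On the model-format side, safetensors is one widely used alternative to pickle-based checkpoints: it stores raw tensors only and cannot execute code on load.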

The disclosure of these findings follows recent research from Lasso Security, which highlighted how generative AI models like OpenAI ChatGPT and Google Gemini could be abused to distribute malicious code packages. Additionally, AI company Anthropic detailed a technique called “many-shot jailbreaking” that can be used to bypass the safety protections built into large language models.

As the use of AI continues to expand, it is crucial for organizations and service providers to prioritize security measures and proactively address vulnerabilities to safeguard against potential cyber threats and data breaches. The findings underscore the importance of exercising caution when utilizing AI technologies and highlight the need for ongoing efforts to enhance cybersecurity measures in the AI ecosystem.
