AI-as-a-Service Providers at Risk of Privilege Escalation and Cross-Tenant Attacks

New research has uncovered critical vulnerabilities in artificial intelligence (AI)-as-a-service providers, such as Hugging Face, that could allow threat actors to compromise sensitive data and access target environments. The study, conducted by Wiz researchers Shir Tamari and Sagi Tzadik, found that malicious models pose a significant risk to AI systems, particularly those hosted by service providers, because attackers can exploit these models to launch cross-tenant attacks.

The risks identified in the study involve shared inference infrastructure takeover and shared CI/CD pipeline takeover: threat actors can upload untrusted models in pickle format that execute attacker-controlled code when loaded, or mount supply chain attacks through the service's CI/CD pipeline. The implications of such breaches could be severe, allowing attackers to tamper with the service's custom models, escalate privileges, and access other customers' data stored within the platform.
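To see why pickle-format models are inherently risky, consider that Python's `pickle` protocol lets any object define a `__reduce__` method specifying a callable to invoke during deserialization. The minimal sketch below (the class name is illustrative, not from the Wiz research) uses a benign `eval` call where a real attacker would plant `os.system` or a reverse shell:

```python
import pickle

class MaliciousModel:
    """Stands in for a booby-trapped model file uploaded to a model hub."""
    def __reduce__(self):
        # pickle invokes this callable with these arguments at load time;
        # an attacker would substitute os.system or similar here.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())

# Simply *loading* the file runs the embedded code -- no method call needed.
result = pickle.loads(payload)
print(result)  # 42
```

This is why the deserialized object need not even resemble a model: code execution happens before the caller ever inspects the result, which is exactly what makes shared inference infrastructure a cross-tenant risk.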

To address these vulnerabilities, the researchers recommend enforcing IMDSv2 with a low hop limit to prevent containers from reaching the Instance Metadata Service, and they encourage users to source models only from trusted providers, enable multi-factor authentication, and avoid using pickle files in production environments.
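On AWS, the IMDSv2 hardening can be applied per instance via the EC2 API. A minimal sketch, assuming boto3 credentials are configured (the helper function name and instance ID are hypothetical):

```python
# Hypothetical helper that builds the arguments for boto3's
# ec2.modify_instance_metadata_options call: require IMDSv2 session
# tokens and cap the token's hop limit at 1, so PUT responses cannot
# cross a network hop into a container or pod.
def imds_hardening_params(instance_id: str) -> dict:
    return {
        "InstanceId": instance_id,
        "HttpEndpoint": "enabled",
        "HttpTokens": "required",       # IMDSv2 only; IMDSv1 requests are refused
        "HttpPutResponseHopLimit": 1,   # token stays on the instance itself
    }

# Usage (not executed here; requires AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.modify_instance_metadata_options(**imds_hardening_params("i-0123456789abcdef0"))
```

With a hop limit of 1, a compromised workload running behind bridged container networking cannot obtain a metadata token, which blocks the credential-theft step of the escalation chain described above.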

The disclosure of these findings follows recent research from Lasso Security, which highlighted the potential for generative AI models such as OpenAI's ChatGPT and Google's Gemini to recommend non-existent, attacker-registerable code packages. Separately, AI company Anthropic detailed a technique called "many-shot jailbreaking" that can be used to bypass the safety protections of large language models.

As the use of AI continues to expand, it is crucial for organizations and service providers to prioritize security measures and proactively address vulnerabilities to safeguard against potential cyber threats and data breaches. The findings underscore the importance of exercising caution when utilizing AI technologies and highlight the need for ongoing efforts to enhance cybersecurity measures in the AI ecosystem.
