AI Coding Flaw at Jerry’s Store Exposes 345,000 Stolen Credit Card Records
New Delhi | The increasing reliance on Artificial Intelligence (AI) coding tools has emerged as a significant cybersecurity risk. An international cyber investigation has uncovered that a fraudulent online platform, known as “Jerry’s Store,” inadvertently exposed sensitive information linked to approximately 345,000 stolen credit cards on the public internet. Initial assessments indicate that a security vulnerability in AI-generated code was responsible for the breach, revealing the inner workings of the cybercriminal network.
The Nature of the Breach
Cybersecurity researchers from Cybernews reported that Jerry’s Store functioned as an underground marketplace for trading and testing stolen payment cards. Investigators discovered an unsecured server associated with the platform that disclosed highly sensitive data, including cardholders’ names, card numbers, CVVs, expiration dates, and billing addresses. Alarmingly, this information was accessible without any password protection or authentication measures. The breach was identified during an investigation in April 2026, raising concern among cybersecurity experts worldwide.
Technical Oversight in AI Implementation
The investigation revealed that the operators of Jerry’s Store utilized “Cursor,” an AI coding assistant, to construct their server infrastructure and internal monitoring systems. The criminals reportedly instructed the AI tool to generate a statistics dashboard for managing card inventories and transactions. However, the AI-generated setup failed to implement necessary access controls and authentication security, resulting in an exposed database that could be accessed by anyone who located the server.
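The class of flaw described above can be sketched in a few lines. The following is an illustrative reconstruction, not the actual Jerry’s Store code: a hypothetical stats endpoint of the kind the report describes, with the missing control (a simple bearer-token check) shown in place. The token name and data fields are placeholders.

```python
# Illustrative reconstruction only -- not the actual Jerry's Store code.
# A minimal stats dashboard of the kind described in the report, showing
# the class of flaw investigators found (no authentication) and the
# bearer-token check that would have closed it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "example-secret-token"  # hypothetical; load from a secret store in practice

STATS = {"cards_total": 345_000, "cards_valid": 145_000}  # figures from the report

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The reported flaw: the AI-generated dashboard served this data
        # to anyone who located the server. Rejecting requests that lack
        # a valid credential is the missing access control.
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        body = json.dumps(STATS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request console logging
        pass

def make_server(port: int = 0) -> HTTPServer:
    """Bind the dashboard on localhost; port 0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), StatsHandler)
```

Calling `make_server(8000).serve_forever()` would serve the dashboard; without the `Authorization` check, any visitor who found the address could read the full inventory, which is precisely the exposure the researchers described.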
The leaked database contained nearly 200,000 cards marked as “invalid,” while around 145,000 cards remained active and usable. Cybersecurity analysts estimate that valid stolen credit cards sell on dark web marketplaces for between $7 and $18 each, which suggests the exposed database could be worth millions of dollars in illicit underground markets. Experts emphasize that such stolen financial data is immensely valuable to cybercriminals, as it can facilitate fraudulent online purchases, identity theft, and unauthorized financial transactions.
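The “millions of dollars” estimate follows from simple arithmetic on the figures reported above, sketched here as a back-of-envelope check:

```python
# Back-of-envelope check of the valuation estimate, using the figures
# in the report: ~145,000 valid cards at $7 to $18 each.
valid_cards = 145_000
price_low, price_high = 7, 18

low = valid_cards * price_low    # 1,015,000
high = valid_cards * price_high  # 2,610,000
print(f"Estimated value: ${low:,} to ${high:,}")
```

That is, roughly $1.0 million to $2.6 million at the quoted per-card prices, consistent with the experts’ characterization.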
Methods of Validating Stolen Cards
Further investigation revealed the methods employed by cybercriminals to verify whether stolen cards were still active before selling them. The operators of Jerry’s Store allegedly used legitimate e-commerce platforms, including Amazon, Grubhub, Temu, Lyft, and Sam’s Club, to conduct small test transactions. If a transaction succeeded, the card would be marked as “valid” and subsequently sold at higher prices on dark web networks. Security experts noted that these low-value transactions often blend into the billions of regular digital payments processed daily, making them challenging for banks and payment companies to detect promptly.
Expert Insights on AI in Cybercrime
Prof. Triveni Singh, a prominent cybercrime expert and former IPS officer, expressed concerns about the role of AI-powered automation in facilitating cybercrime. He stated that the creation of cyber fraud infrastructure previously required skilled hackers and advanced technical knowledge. However, AI tools have significantly lowered the barriers to entry for cybercriminals. Individuals with limited technical expertise can now construct sophisticated fraud platforms with AI assistance. The most significant risk arises when AI-generated code is deployed directly onto live servers without adequate security audits.
The Broader Implications for Cybersecurity
Technology experts warn that while AI coding assistants can expedite software development, a lack of human oversight and security testing can lead to severe vulnerabilities. Recent studies indicate that, without thorough review by experienced developers, AI-generated code often contains security flaws, incorrect permission settings, and data-exposure risks.
Cybersecurity specialists recommend that consumers regularly monitor their bank accounts and credit card statements, enable SMS and email transaction alerts, and block cards immediately if suspicious activity is detected. Additionally, companies and software developers are urged to implement stringent security audits, penetration testing, and manual verification before deploying AI-generated systems online. These measures are deemed essential to prevent future large-scale data leaks and financial cybercrime incidents driven by insecure AI-generated infrastructure.
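One item on such a pre-deployment checklist can be sketched as a simple probe: request each internal endpoint without any credentials and flag those that answer successfully. This is an illustrative sketch, not a specific auditing tool; the function name and URL handling are assumptions.

```python
# Hypothetical pre-deployment check: probe internal endpoints with no
# credentials and flag any that serve data anyway. An endpoint that
# returns HTTP 200 to an anonymous request is exposed -- the same class
# of flaw that leaked the Jerry's Store database.
import urllib.error
import urllib.request

def unauthenticated_exposure(urls, timeout=5):
    """Return the subset of URLs that respond successfully without credentials."""
    exposed = []
    for url in urls:
        try:
            resp = urllib.request.urlopen(url, timeout=timeout)
            if resp.getcode() == 200:
                exposed.append(url)
        except (urllib.error.URLError, OSError):
            # Refused connections, timeouts, and 401/403 responses all
            # count as "not exposed" for this check.
            pass
    return exposed
```

Run against a staging deployment, an empty result means every probed endpoint rejected anonymous access; any URL returned is a finding that should block release until access controls are added.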
For further details on this incident, refer to the original reporting source: the420.in.