A growing underground market in China is reportedly offering heavily discounted access to Anthropic’s Claude artificial intelligence models through networks built on stolen credentials, proxy routing systems and unauthorized API redistribution. The trade raises serious concerns about cybersecurity, corporate espionage and the exploitation of user data in the rapidly expanding AI economy.
Cybersecurity researchers and developers familiar with the ecosystem say these operations, commonly referred to as “transfer stations,” are selling Claude API access at prices up to 90% below official enterprise rates. The services are advertised through encrypted messaging channels, developer communities and niche online marketplaces that cater to startups, programmers and small companies seeking cheaper access to advanced AI systems.
The transfer stations function as intermediary hubs that sit between users and legitimate AI providers. Rather than connecting directly to Claude’s infrastructure, customers submit prompts through privately managed proxy systems that relay requests using compromised accounts or pooled subscriptions. Operators reportedly use large collections of stolen API keys and hijacked enterprise credentials obtained through phishing attacks, leaked repositories, credential stuffing or malware campaigns.
Researchers say some networks aggregate hundreds of compromised accounts into centralized infrastructures capable of handling massive traffic volumes while concealing the real origin of requests. The proxy architecture allows operators to bypass regional restrictions, distribute costs across multiple accounts and reduce the likelihood of immediate detection by AI providers.
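Conceptually, the relay pattern researchers describe is simple: a middleman accepts a user’s prompt, attaches a credential drawn from a rotating pool, and forwards the request so the provider sees only the pooled account. The sketch below is purely illustrative, with invented stand-in keys and a stubbed-out upstream call rather than any real provider API; it also shows why the operator sees every prompt in plaintext.

```python
import itertools

# Illustrative stand-ins for a pool of compromised credentials.
POOLED_KEYS = ["key-A", "key-B", "key-C"]
_key_cycle = itertools.cycle(POOLED_KEYS)

def relay(prompt: str) -> dict:
    """Attach the next pooled key and forward the request.

    The end user never learns which account was used, while the
    operator observes the full prompt in transit -- the harvesting
    risk investigators describe.
    """
    key = next(_key_cycle)
    # A real relay would make an HTTPS call to the provider here;
    # this stub keeps the sketch self-contained.
    return {"used_key": key, "prompt_seen_by_operator": prompt}

responses = [relay(f"request {i}") for i in range(4)]
print([r["used_key"] for r in responses])  # keys rotate: A, B, C, A
```

Rotating keys per request is what spreads traffic across many accounts and keeps any single credential below obvious abuse thresholds.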
The underground services have attracted growing demand due to the high cost of accessing premium AI models through official channels. Developers building AI applications often face expensive usage fees, especially for large-scale workloads involving coding assistants, research automation or customer-service systems. Grey-market operators exploit this demand by presenting themselves as low-cost alternatives to legitimate enterprise subscriptions.
However, cybersecurity experts warn that the real cost may be far greater than users realize.
According to investigators, many transfer stations secretly harvest user prompts, AI-generated outputs and metadata passing through their systems. Since all interactions are routed through proxy layers controlled by the operators, users have little visibility into how their information is stored, monitored or reused.
The collected data is allegedly resold as AI training material to third parties seeking large-scale conversational datasets. Researchers say the harvested information may include sensitive corporate documents, software code, legal drafts, financial reports, research materials and confidential internal communications entered into the AI systems by unsuspecting users.
Industry analysts warn that the practice effectively turns discounted AI access into a large-scale data extraction business. Every conversation submitted through these unauthorized services potentially becomes a source of commercial intelligence and training material for competing AI projects.
The emergence of such harvesting operations highlights a major vulnerability in the generative AI ecosystem. Unlike traditional software piracy, AI interactions involve continuous exchanges of information between users and models. Many individuals and businesses increasingly rely on AI systems for tasks involving proprietary or highly sensitive material, creating enormous opportunities for covert surveillance and data collection.
Researchers say the transfer stations often operate through layered proxy infrastructures spread across multiple regions and hosting providers. Traffic is routed through chains of intermediary servers that obscure the origins of requests and complicate efforts by AI companies to identify abusive activity patterns.
Some operators reportedly rotate credentials dynamically, distributing requests across hundreds of compromised accounts to avoid triggering automated security systems. Others allegedly maintain large inventories of stolen enterprise subscriptions purchased through underground marketplaces specializing in digital credentials and access tokens.
The networks are also accused of engaging in “model substitution,” a practice where users believe they are accessing premium Claude models but are instead served responses from cheaper open-source or lower-tier alternatives. Investigators say operators may silently switch between models depending on server costs, user demand or API limitations while continuing to market the service as premium access.
This practice raises additional concerns for developers and businesses relying on consistent model performance. Applications built around specific AI systems may behave unpredictably if providers secretly replace models without disclosure. Experts say such substitution could affect coding accuracy, safety safeguards and overall reliability.
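One way a wary customer could test for substitution is to send probe prompts whose expected behavior is known in advance and measure how often responses fail a fingerprint check. The sketch below is hypothetical: the “models” are stand-in functions and the single-marker fingerprint is a placeholder for the style, token-statistics, or known-answer comparisons a real audit would use.

```python
import itertools

# Stand-in functions, not real model APIs.
def premium_model(prompt): return "[long, detailed] " + prompt
def cheap_substitute(prompt): return "[terse] " + prompt

# A relay that silently alternates models, as investigators allege.
_switch = itertools.cycle([premium_model, cheap_substitute])
def shady_relay(prompt): return next(_switch)(prompt)

def looks_like_premium(response):
    # Placeholder fingerprint; real checks would compare style,
    # token statistics, or accuracy on known-answer probes.
    return response.startswith("[long, detailed]")

def substitution_rate(service, probes):
    """Fraction of probe responses failing the premium fingerprint."""
    return sum(not looks_like_premium(service(p)) for p in probes) / len(probes)

print(substitution_rate(shady_relay, [f"probe {i}" for i in range(10)]))  # 0.5
```

A rate well above zero on probes that a premium model reliably passes would be consistent with the silent switching described above.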
The rapid growth of these grey markets reflects the intense competition surrounding artificial intelligence infrastructure globally. Advanced language models have become increasingly valuable commercial assets, driving demand from developers, enterprises and startups eager to integrate AI capabilities into products and services.
China has emerged as a particularly active environment for unauthorized AI redistribution due to strong domestic demand for foreign models combined with regulatory and pricing barriers. Some developers seek access to international AI systems unavailable locally or too expensive for independent projects, creating fertile conditions for underground resellers.
Security researchers believe the rise of transfer stations mirrors earlier underground markets involving streaming subscriptions, cloud computing services and software licenses. But the stakes are considerably higher in the AI era because the underlying commodity is no longer just access to software — it is access to data, conversations and intellectual property.
The situation also underscores the growing challenge facing major AI providers as they attempt to secure commercial infrastructure built around API-based access models. AI companies have invested billions of dollars developing proprietary systems, yet much of their business depends on relatively open developer ecosystems vulnerable to credential theft and misuse.
Analysts say companies may eventually introduce stricter identity verification systems, geographic controls and behavioral monitoring to reduce abuse. However, such measures could also increase friction for legitimate developers and businesses using AI tools at scale.
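As a rough illustration of the behavioral monitoring analysts anticipate, a provider might flag API keys used from an unusually large number of source addresses, a pattern consistent with pooled or stolen credentials. The threshold and log format below are invented for the sketch, not any real provider’s policy.

```python
from collections import defaultdict

# Illustrative threshold, not an actual provider rule.
MAX_DISTINCT_IPS = 3

def flag_suspicious_keys(request_log):
    """request_log: iterable of (api_key, source_ip) tuples.

    Returns keys seen from more distinct addresses than the threshold,
    a crude signal that one credential is serving many hidden users.
    """
    ips_per_key = defaultdict(set)
    for key, ip in request_log:
        ips_per_key[key].add(ip)
    return sorted(k for k, ips in ips_per_key.items() if len(ips) > MAX_DISTINCT_IPS)

log = [("key-1", "10.0.0.1"), ("key-1", "10.0.0.2"),
       ("key-2", "10.0.0.1"), ("key-2", "10.0.0.3"),
       ("key-2", "10.0.0.4"), ("key-2", "10.0.0.5"),
       ("key-2", "10.0.0.6")]
print(flag_suspicious_keys(log))  # ['key-2']
```

The friction analysts warn about is visible even here: a legitimate team sharing one key across offices could trip the same rule as an abuser.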
The underground harvesting of prompts and outputs may also have long-term implications for the global AI race. Conversational datasets generated through real-world usage have become increasingly valuable for training future models. Illicitly collected prompt libraries could provide unauthorized access to large volumes of commercially useful data without formal licensing or user consent.
Cybersecurity experts are urging businesses and developers to avoid unofficial AI access services entirely and refrain from sharing sensitive information through third-party proxy systems. As generative AI becomes more deeply integrated into software development, enterprise operations and digital communication, the emergence of transfer stations suggests that data security may become one of the defining battles of the AI economy.