Imagine spending billions of dollars to train a genius, carefully curating every book they read and every test they take. Now, imagine your neighbor simply follows that genius around, writing down every answer they give to difficult questions, and then uses those answers to teach their own student for a fraction of the cost. That is essentially what Anthropic claims is happening right now on an industrial scale.
In a move that escalates the already tense tech standoff between the US and China, Anthropic has formally accused major Chinese AI laboratories—specifically DeepSeek, Moonshot, and MiniMax—of running a massive operation to “distill” the capabilities of its Claude models. This isn’t just a few rogue queries; we are talking about a coordinated effort involving tens of thousands of fake accounts designed to siphon off the reasoning capabilities of one of the world’s most advanced AIs.
How did the “distillation” attack actually work?
According to Anthropic, this wasn’t a smash-and-grab job; it was a sustained extraction. The accused labs allegedly created over 24,000 fraudulent accounts to generate a staggering 16 million queries. The goal? To use Claude as a “teacher” for their own smaller, cheaper models.
To pull this off without getting banned immediately, the attackers reportedly used what are known as “hydra cluster” proxy services. These services mask the origin of the internet traffic, allowing the labs to bypass regional restrictions and terms of service that normally block users in China from accessing Claude. It’s a sophisticated way of making 16 million requests look like they are coming from thousands of distinct, legitimate users rather than a few centralized server farms.
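Mechanically, the trick is mundane: route each request through a different exit address so the provider sees a crowd of unrelated users instead of one client hammering the API. Below is a toy Python sketch of that round-robin routing, using a hypothetical proxy pool and a placeholder endpoint; neither reflects the actual services or APIs involved, it only illustrates why such traffic is hard to attribute.

```python
# Illustrative only: a hypothetical proxy pool and placeholder API endpoint.
import itertools
import requests

# Each entry is a different exit address (hypothetical hostnames).
PROXY_POOL = [
    "http://proxy-a.example.net:8080",
    "http://proxy-b.example.net:8080",
    "http://proxy-c.example.net:8080",
]
proxies = itertools.cycle(PROXY_POOL)

def send_query(prompt: str) -> requests.Response:
    # Round-robin through the pool: to the receiving API, consecutive
    # requests appear to originate from unrelated addresses.
    exit_node = next(proxies)
    return requests.post(
        "https://api.example.com/v1/messages",  # placeholder endpoint
        json={"prompt": prompt},
        proxies={"http": exit_node, "https": exit_node},
    )
```

Scale that pattern up across tens of thousands of accounts and the traffic blends into the normal background noise of a popular API.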
The scale here is critical. By feeding these millions of questions and answers into their own models, these labs can effectively clone the behavior and reasoning patterns of Claude without needing the massive computational resources or proprietary datasets Anthropic used to build it in the first place.
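For readers unfamiliar with distillation, the pipeline is conceptually simple: collect the teacher's answers, then fine-tune a smaller model to reproduce them. The minimal sketch below uses a small open model (gpt2) as the student and a stand-in query_teacher() function in place of any real API; it is not anyone's actual pipeline, just the general technique under those assumptions.

```python
# A minimal sketch of API-based distillation. query_teacher() is a stand-in,
# not a real API call, and gpt2 is just a convenient small student model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def query_teacher(prompt: str) -> str:
    # In a real pipeline this would call the teacher model's API;
    # a canned string keeps the sketch self-contained and runnable.
    return "Placeholder for the teacher model's answer."

# Step 1: harvest (prompt, teacher answer) pairs. At the scale described
# in the article, this step is the "16 million queries".
prompts = [
    "Explain how binary search works.",
    "Write a Python function that reverses a linked list.",
]
pairs = [(p, query_teacher(p)) for p in prompts]

# Step 2: fine-tune a small student model to imitate the teacher's answers.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

student.train()
for prompt, answer in pairs:
    # Standard supervised fine-tuning: the student learns to produce the
    # teacher's answer when given the same prompt.
    text = f"Q: {prompt}\nA: {answer}"
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The student never sees the teacher's weights or training data; it only ever sees prompts and answers, which is exactly why API access alone is enough to make cloning possible.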
Who was stealing what data?
Not all the accused labs were after the same thing. The breakdown of the attack reveals exactly where each competitor felt they were lagging behind.
MiniMax appears to be the most aggressive offender. They generated 13 million queries—the vast majority of the traffic—specifically targeting agentic coding and tool use. Perhaps most shockingly, Anthropic noted that MiniMax pivoted to a new Claude model within just 24 hours of its release, suggesting a highly agile, automated extraction pipeline.
Moonshot AI generated about 3.4 million queries. Their focus was different; they were reportedly mining data on agentic reasoning, coding, and computer vision.
DeepSeek, a name already familiar to US lawmakers, used a smaller volume of queries (around 150,000). Interestingly, they focused on extracting “reasoning chains” and, ironically, censorship-compliant responses.
Anthropic stated clearly that these illicitly distilled models “lack necessary safeguards,” meaning the safety training Anthropic painstakingly applies to Claude is being stripped away or bypassed in the clone models.
How does this impact the US chip war?
This news lands at a pivotal moment for US trade policy. In January 2026, the Trump administration made the controversial decision to allow exports of Nvidia’s powerful H200 chips to China, albeit with a steep 25% tariff. The logic was to keep American companies like Nvidia profitable while Washington took a cut of every sale.
However, these revelations undermine the idea that hardware controls alone can stop China’s AI progress. If Chinese labs can simply “distill” the intelligence of US models, they may not need as much raw compute power to achieve state-of-the-art results. This has emboldened critics of the administration’s policy.
Senators Pete Ricketts and Chris Coons have introduced the bipartisan “SAFE CHIPS Act,” which seeks to codify stricter export bans for the next 30 months. Senator Ricketts didn’t mince words, stating, “Denying Beijing access to these AI chips is essential to our national security.” The argument is gaining traction: if the software barrier is porous, the hardware barrier needs to be a fortress.
The Real Story
The significance of this breach isn’t just about intellectual property theft; it’s a signal that the “moat” around frontier AI models is evaporating. If a competitor can clone a model’s capabilities simply by querying its API, the business model of selling access to “super-intelligence” becomes incredibly fragile. This forces US companies into a defensive crouch, likely leading to draconian “Know Your Customer” (KYC) protocols that will make it much harder for legitimate developers and open-source researchers to access top-tier models. The era of open API access is likely ending, replaced by a walled garden where only vetted entities get the keys to the kingdom.
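To make the KYC point concrete, here is a purely speculative sketch of what such gating might look like at the API layer; the field names and policy below are assumptions for illustration, not any provider's actual scheme.

```python
# Speculative sketch of KYC gating on an AI API; field names and policy
# are assumptions, not any vendor's real access-control scheme.
from dataclasses import dataclass

@dataclass
class Account:
    org_name: str
    kyc_verified: bool   # identity documents reviewed and approved
    jurisdiction: str    # e.g. "US", "EU"
    usage_tier: str      # e.g. "research", "commercial"

BLOCKED_JURISDICTIONS = {"CN"}  # illustrative only

def authorize(account: Account) -> bool:
    """Serve a completion request only for vetted accounts in permitted regions."""
    if not account.kyc_verified:
        return False
    if account.jurisdiction in BLOCKED_JURISDICTIONS:
        return False
    return True
```

Whatever the exact mechanism, the direction of travel is the same: more friction for everyone, in exchange for fewer ways to quietly farm a frontier model at scale.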