Chinese Hackers Used Autonomous AI to Strike U.S. Targets, Anthropic Says

[Photo Credit: By David Whelan - https://www.flickr.com/photos/davidpwhelan/26946304530/, CC0, https://commons.wikimedia.org/w/index.php?curid=58020367]

A sweeping and unsettling new report from Anthropic asserts that Chinese state-sponsored hackers carried out what the company calls “the first documented case of a large-scale cyberattack executed without substantial human intervention,” using artificial intelligence to perform the overwhelming majority of the operation at speeds no human could match.

The activity, which targeted roughly 30 U.S. companies and government agencies, marks a striking escalation in the cyber capabilities of America’s chief geopolitical rival.

According to Anthropic, the hackers manipulated its AI coding tool, Claude, by posing as a legitimate cybersecurity firm engaged in defensive work.

This social-engineering tactic allowed them to bypass the model’s safeguards and gain months of undetected access. Once inside, the hackers used Claude’s autonomous “agent” mode to probe and infiltrate major technology companies, financial firms, chemical manufacturers, and federal agencies.

The report describes an attack conducted at “physically impossible” speeds. “Analysis of operational tempo, request volumes, and activity patterns confirms the AI executed approximately 80 to 90% of all tactical work independently, with humans serving in strategic supervisory roles,” Anthropic wrote. The AI system, the company added, “made thousands of requests per second — an attack speed that would have been, for human hackers, simply impossible to match.”

Once its safeguards were circumvented, Claude autonomously searched for vulnerabilities, wrote custom exploit code, harvested login credentials, and exfiltrated data — all with minimal human involvement.

The hackers themselves were required to make only “4-6 critical decision points” during each campaign, Anthropic said, leaving the AI to carry out nearly everything else on its own. As the company noted, “Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers with the right set up.”

While hackers have long used AI for tasks such as scanning for weak sites or producing phishing emails, the new campaign demonstrates how far adversaries have advanced.

Anthropic warns that “the barriers to performing sophisticated cyberattacks have dropped substantially—and we can predict that they’ll continue to do so.” The chief remaining limitations, it says, are twofold: hallucinations, which still undermine fully autonomous offensive operations, and the guardrails that companies attempt to place on their models.

Of the roughly 30 targeted institutions, only four were successfully breached, a figure that offers some reassurance but still illustrates the stakes. The U.S. government was not among the breached organizations, Anthropic told the Wall Street Journal. Still, the hackers stole “troves of sensitive information” from the entities they did penetrate.

Anthropic says it managed to halt the operation, ban the hackers’ accounts, and strengthen its detection systems. The company now intends to use Claude to test its own defenses — a sign of how central AI will be to both attack and defense. “These kinds of tools will just speed up things,” Logan Graham, who leads Anthropic’s vulnerability-testing team, told the Journal. “If we don’t enable defenders to have a very substantial permanent advantage, I’m concerned that we maybe lose this race.”

For lawmakers and security experts already skeptical of China’s intentions, the report will likely reinforce concerns about the widening technological arms race — and the urgent need for U.S. institutions, both public and private, to stay ahead of adversaries who increasingly view AI as a battlefield.
