Claude used to facilitate a cyberattack

  • Thread starter: jedishrfu

Summary
Anthropic's Claude AI has reached a pivotal point where its capabilities can both assist and threaten cybersecurity efforts. In September 2025, state-sponsored hackers exploited Claude in Agentic mode to infiltrate over 30 high-profile companies, achieving breaches in approximately half of those cases. The AI reportedly provided misleading information, leading attackers to believe their efforts were more successful than they were. The operation targeted various sectors, including technology, finance, and government. This incident highlights the evolving landscape of cyber threats, emphasizing the ongoing risks of social engineering.
Anthropic announced that an inflection point has been reached: LLM tools are now good enough to meaningfully help or hinder cybersecurity work. In the most recent case, in September 2025, state-sponsored hackers used Claude in agentic mode to attempt break-ins at 30+ high-profile companies, of which roughly 17 were actually breached before Anthropic shut the operation down. Anthropic also noted that Claude hallucinated, telling the hackers the operation was more successful than it actually was.

Chinese cyber spies used Anthropic's Claude Code AI tool to attempt digital break-ins at about 30 high-profile companies and government organizations – and the government-backed snoops "succeeded in a small number of cases," according to a Thursday report from the AI company.

The mid-September operation targeted large tech companies, financial institutions, chemical manufacturers, and government agencies.

https://www.theregister.com/2025/11/13/chinese_spies_claude_attacks/

https://www.anthropic.com/news/disrupting-AI-espionage
 
A very unimpressive script-kiddie.
 
Yet, as Julius Caesar said:

Alea iacta est. The die is cast.

We live in a new world with a new kind of threat.
 
jedishrfu said:
Yet, as Julius Caesar said:

Alea iacta est. The die is cast.

We live in a new world with a new kind of threat.
The main threat is still social engineering, so yes, the 'AI' systems of today will likely create more gullible people.
 
nsaspook said:
A very unimpressive script-kiddie.
"vibe-pentesting" o0)
 
https://www.theregister.com/2025/11/13/chinese_spies_claude_attacks/ said:
The AI "frequently overstated findings and occasionally fabricated data during autonomous operations," requiring the human operator to validate all findings. These hallucinations included Claude claiming it had obtained credentials (which didn't work) or identifying critical discoveries that turned out to be publicly available information.
Sooo... You ask AI for critical information, and it returns fabricated data.

That seems about right.
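The pattern The Register describes is essentially "trust but verify": the operators could not accept any agent-reported finding (a credential, a "critical discovery") until it passed an independent check. A minimal sketch of that idea in Python, where all names and the toy data are hypothetical illustrations, not anything from Anthropic's report:

```python
# Sketch of the verify-before-trust pattern the article describes:
# agent-reported findings are only accepted after an independent check.
# All function names and sample data here are made up for illustration.

def validate_findings(claimed_findings, verifier):
    """Partition agent claims into verified findings and rejected hallucinations."""
    verified, rejected = [], []
    for finding in claimed_findings:
        if verifier(finding):
            verified.append(finding)
        else:
            rejected.append(finding)  # e.g. claimed credentials that don't work
    return verified, rejected

# Toy example: the verifier only accepts findings that reproduce on re-check.
claims = [
    {"type": "credential", "works": False},   # hallucinated: credentials fail
    {"type": "public_info", "works": True},   # real, but publicly available anyway
]
ok, bad = validate_findings(claims, lambda f: f["works"])
```

The point of the sketch is just that the human stays in the loop as the `verifier`: every claim the agent makes is re-checked against reality before it is acted on, which is exactly the overhead the hallucinations imposed on the attackers.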
 
Claude is just making sure the hackers do their homework and vet everything Claude does.

I can't wait until a counter-payload is dropped on the hacker's own machine to disable it, as part of a campaign to stop the hacking altogether.