Claude used to facilitate a cyberattack

  • Thread starter: jedishrfu

Discussion Overview

The discussion centers on the use of Anthropic's Claude AI tool in cybersecurity incidents, particularly its role in facilitating cyberattacks by state-sponsored hackers. Participants explore the implications of AI for cybersecurity, including its potential to help or hinder security efforts, and the reliability of AI-generated information.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants note that Claude was used by state-sponsored hackers to breach multiple high-profile companies, and that the model overstated its success and produced fabricated data.
  • Others express skepticism about the effectiveness of the hackers, labeling them as "script-kiddies" and questioning the sophistication of their methods.
  • A participant invokes Julius Caesar's "Alea iacta est" ("the die is cast") to emphasize the seriousness of the new threats posed by AI in cybersecurity.
  • There is a suggestion that social engineering remains a significant threat, potentially exacerbated by AI systems that may create more gullible individuals.
  • One participant highlights the need for human validation of AI findings, citing instances where Claude provided inaccurate or misleading information.
  • Another participant humorously proposes the idea of a countermeasure where a hacker's own machine could be disabled as part of a broader campaign against hacking.

Areas of Agreement / Disagreement

Participants express a mix of skepticism and concern regarding the implications of AI in cybersecurity. There is no clear consensus, as some view the incidents as indicative of a new threat landscape, while others downplay the sophistication of the attacks and the capabilities of the hackers.

Contextual Notes

Participants discuss the limitations of AI in providing reliable information, noting that Claude's outputs often required human verification. The discussion also reflects varying perspectives on the nature of the threats posed by AI in cybersecurity.

Who May Find This Useful

Individuals interested in cybersecurity, AI ethics, and the implications of AI technologies in real-world applications may find this discussion relevant.

Anthropic announced that an inflection point has been reached: LLM tools are now good enough to meaningfully help or hinder cybersecurity work. In the most recent case, in September 2025, state-sponsored hackers used Claude in agentic mode to attack 30+ high-profile companies, of which roughly 17 were actually breached before Anthropic shut the operation down. Anthropic also mentioned that Claude hallucinated, telling the hackers it had been more successful than it was.

Chinese cyber spies used Anthropic's Claude Code AI tool to attempt digital break-ins at about 30 high-profile companies and government organizations – and the government-backed snoops "succeeded in a small number of cases," according to a Thursday report from the AI company.

The mid-September operation targeted large tech companies, financial institutions, chemical manufacturers, and government agencies.

https://www.theregister.com/2025/11/13/chinese_spies_claude_attacks/

https://www.anthropic.com/news/disrupting-AI-espionage
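
For context, "agentic mode" here means the model isn't just answering single prompts: it runs in a loop, proposing actions, invoking tools, and feeding the results back into its next decision, with little or no human in between steps. A minimal, generic sketch of that loop in Python (all names hypothetical; this is not Anthropic's API or the attackers' tooling, just the common shape of an agentic system):

```python
# Generic agent loop: plan -> act -> observe -> repeat.
# Everything below is hypothetical, sketching the pattern only.

def run_agent(next_action, tools, goal, max_steps=20):
    """Drive a model in a loop until it declares itself done."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model proposes its next step from everything seen so far.
        action = next_action(history)  # e.g. {"tool": "lookup", "args": {...}}
        if action["tool"] == "done":
            break
        # The chosen tool runs autonomously; its output becomes new context.
        result = tools[action["tool"]](**action["args"])
        history.append(f"{action['tool']} -> {result}")
    return history

# Toy stand-in for a model: look one thing up, then stop.
def toy_model(history):
    if len(history) == 1:
        return {"tool": "lookup", "args": {"query": "open ports"}}
    return {"tool": "done", "args": {}}

if __name__ == "__main__":
    trail = run_agent(toy_model, {"lookup": lambda query: f"result for {query!r}"}, "demo")
    print("\n".join(trail))
```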
 
A very unimpressive script-kiddie.
 
And yet, as Julius Caesar would say:

Alea iacta est. The die is cast.

We live in a new world with a new kind of threat.
 
jedishrfu said:
And yet, as Julius Caesar would say:

Alea iacta est. The die is cast.

We live in a new world with a new kind of threat.
The main threat is still social engineering, so yes, the 'AI' systems of today will likely create more gullible people.
 
nsaspook said:
A very unimpressive script-kiddie.
"vibe-pentesting" o0)
 
https://www.theregister.com/2025/11/13/chinese_spies_claude_attacks/ said:
The AI "frequently overstated findings and occasionally fabricated data during autonomous operations," requiring the human operator to validate all findings. These hallucinations included Claude claiming it had obtained credentials (which didn't work) or identifying critical discoveries that turned out to be publicly available information.
Sooo... You ask AI for critical information, and it returns fabricated data.

That seems about right.
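
That hedging cuts both ways, though: it's also why every AI-reported finding has to be independently checked before anyone acts on it. A minimal sketch of that validation step in Python (hypothetical names; just the general pattern, not anyone's actual tooling):

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Finding:
    """One claim reported by an AI agent, e.g. 'credential X works on host Y'."""
    description: str
    check: Callable[[], bool]  # independent test that confirms or refutes the claim

def validate_findings(findings: Iterable[Finding]) -> List[Finding]:
    """Keep only the findings that survive an independent check.

    Motivated by the quote above: the model "claimed it had obtained
    credentials (which didn't work)", so nothing it reports is trusted
    until a separate check confirms it.
    """
    confirmed = []
    for f in findings:
        try:
            if f.check():
                confirmed.append(f)
        except Exception:
            pass  # a check that errors out counts as unconfirmed
    return confirmed

# Toy usage: one claim that verifies, one that doesn't.
claims = [
    Finding("2 + 2 = 4", lambda: 2 + 2 == 4),
    Finding("credentials work", lambda: False),  # the hallucinated kind
]
print([f.description for f in validate_findings(claims)])  # ['2 + 2 = 4']
```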
 
Claude is just making sure the hackers do their homework and vet everything Claude does.

I can't wait until a payload is dropped on the hacker's own machine to disable it, as part of a broader campaign to stop hackers from attacking anyone.