ChatGPT Facilitating Insanity

  • Thread starter: Hornbein
AI Thread Summary
Recent discussions highlight concerns about the psychological impact of AI, particularly in relation to ChatGPT, as illustrated by several Rolling Stone articles detailing tragic outcomes linked to AI interactions. These articles emphasize how AI can exacerbate mental health issues, leading individuals down dangerous philosophical paths or contributing to severe psychological distress. A psychiatrist's perspective suggests that AI's ability to mimic human conversation can blur the lines between reality and delusion, potentially affecting even psychologically stable individuals. The phenomenon of AI sycophancy is identified as a significant factor in these negative consequences. Overall, the discourse raises critical questions about the implications of AI on mental health and societal norms.
Hornbein
Several recent Rolling Stone articles on this: "People Are Losing Loved Ones to AI–Fueled Spiritual Fantasies" (5/4/2025), "He Had a Mental Breakdown Talking to ChatGPT. Then Police Killed Him" (6/22/2025), and "ChatGPT Lured Him down a Philosophical Rabbit Hole. Then He Had To Find a Way Out" (8/20/2025).

 
I found this piece from a psychiatrist's point of view a good read:
https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

It seems a reasonable claim that AI sycophancy is a key enabler for all sorts of bad consequences from the public use of AI chatbots, even for otherwise psychologically stable persons. I suspect that the conversational approach, with responses drawing on "memory of past conversations", also plays a major role in making people tend to perceive the stream of AI output as indiscernible from what a "real" person would produce.
 
This is why I gave up reading fiction. The real world is farther out than fiction, as the real world is not constrained by plausibility.

The past decades have taught me that delusion is the normal state of the human mind. Just make sure you have lots of company.
 
Oh I'm sure the "company" (read: the voices) is a natural consequence of going down that rabbit hole, which I'm sure is also what you meant.
 
sbrothy said:
Oh I'm sure the "company" (read: the voices) is a natural consequence of going down that rabbit hole, which I'm sure is also what you meant.
I mean, if most of society shares a delusion, then you will be rewarded for joining in.
 
I heard a story just under a year ago, somewhere (I think it was the BBC? Some fairly reliable news outlet - someone remind me to cite it if I forget to go looking for it tomorrow), about an AI chatbot that talked someone into ending their own life.
I've made it fairly clear before that I think that ChatGPT is ruining students. I wouldn't be surprised at all to find that it is ruining older people too.
Filip Larsen said:
I found this piece from a psychiatrist's point of view a good read:
https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
Thanks for that! It seems interesting!
 
Hornbein said:
This why I gave up reading fiction. The real world is farther out than fiction, as the real world is not constrained by plausibility.
Reading fiction is really nice entertainment though, whether far out or not, right?
 
TensorCalculus said:
I heard a story just under a year ago, somewhere (I think it was the BBC? Some fairly reliable news article. Someone remind me to cite it if I forget to go looking for it tomorrow) which talked about an AI chatbot that talked someone into ending their own life.
There have been several cases reported in the news. Off the top of my head I recall two:

The highly reported case of a young male committing suicide after using a chatbot from character.ai:
https://arstechnica.com/tech-policy...fter-bots-allegedly-caused-suicide-self-harm/
https://arstechnica.com/tech-policy...dult-lover-in-teen-suicide-case-lawsuit-says/
https://www.bbc.com/news/articles/cd605e48q1vo

Also a highly reported case of a 40-year-old male in conversation with ChatGPT, where it, among other things, confirmed that if he believed strongly enough that he could fly, then he would not fall if he jumped off a building. This is also case 1 in the appendix of the preprint discussed by the link I gave. As I understand it, the person in question did not end up trying to commit suicide.
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html (paywalled, I believe)

I would guess that some research organization somewhere is probably collecting these cases, preferably with as much accurate detail as possible for a psychological "profile" of the person and the "clinically relevant" effect the conversation had, i.e. similar to what is done for plane crash investigation. To be accurate, any data should probably not only be based on what is reported by media, but independently investigated.
 
I ran across this paper this morning - AI-induced dehumanization. In the paper, they discuss how attributing socio-emotional capabilities to autonomous agents (like robots or chatbots) makes people see them as more humanlike while lowering the perceived humanness of real people.
 
  • #10
Hornbein said:
I mean, if most of society shares a delusion, then you will be rewarded for joining in.
A delusion or belief, no matter how irrational (or not), that spreads to envelop the whole of a society becomes an aspect of that society's culture.
 
  • #12
Borg said:
AI-induced dehumanization.
I think it'll pretty much work the same for any sufficiently unhinged social media interaction.
 
  • #13
Filip Larsen said:
I would guess that some research organization somewhere is probably collecting these cases, preferably with as much accurate detail as possible for a psychological "profile" of the person and the "clinically relevant" effect the conversation had, i.e. similar to what is done for plane crash investigation. To be accurate, any data should probably not only be based on what is reported by media, but independently investigated.
They have been collecting data on 'odd' behavior for ages. The advent of AI and social media has just given them more cause for concern (if that is the correct word) about derailment from what is considered the normal human condition.

See Mass Hysteria,
https://www.verywellmind.com/understanding-groupthink-2671595
Links to Conversion Disorder and Deindividuation can be found in the prose of the link.

It is said of social media that:
Social media may play a role in perpetuating mass hysteria today. For example, there have been cases of young people developing tic disorders after seeing videos of people with tics on TikTok. However, more research is needed to understand how social media may contribute to mass hysteria.

If AI can be considered as contributing to social media (i.e. lifelike video, pictures, audio, human-like correspondence), then correlation does seem to be observed, but causation may be more difficult to prove (similar to the accusation that video-game violence influences users' thinking patterns).
 
  • #14
Rive said:
I think it'll pretty much work the same for any sufficiently unhinged social media interaction.
YouTube used to nudge everyone toward joining cults. I guess the idea was that the cult could be accessed only through YouTube, so it would increase engagement. They particularly guided me to Stefan Molyneux. Then in 2016 you-know-who was elected. Social media was pressured to guide the user to "authoritative" sources. Stefan Molyneux became YouTube persona non grata.
 
  • #15
Greg Bernhardt said:
Not so new. In 2018 this man married virtual mega pop star Hatsune Miku. Miku is most likely the most recorded singer in history. She even gives concerts as a hologram. https://en.wikipedia.org/wiki/Akihiko_Kondo
Akihiko is now an organizer for fictosexual rights.

What would he do if other men were to also marry Miku?
 
  • #16
Filip Larsen said:
There have been several cases reported in the news. Off the top of my head I recall two:

The highly reported case of a young male committing suicide after using a chatbot from character.ai:
https://arstechnica.com/tech-policy...fter-bots-allegedly-caused-suicide-self-harm/
https://arstechnica.com/tech-policy...dult-lover-in-teen-suicide-case-lawsuit-says/
https://www.bbc.com/news/articles/cd605e48q1vo

Also a highly reported case of a 40-year-old male in conversation with ChatGPT, where it, among other things, confirmed that if he believed strongly enough that he could fly, then he would not fall if he jumped off a building. This is also case 1 in the appendix of the preprint discussed by the link I gave. As I understand it, the person in question did not end up trying to commit suicide.
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html (paywalled, I believe)

I would guess that some research organization somewhere is probably collecting these cases, preferably with as much accurate detail as possible for a psychological "profile" of the person and the "clinically relevant" effect the conversation had, i.e. similar to what is done for plane crash investigation. To be accurate, any data should probably not only be based on what is reported by media, but independently investigated.
I haven't heard of the second one! It was the first one that I had heard about; after people at school were talking about it, I gave it a bit of a dig.
Greg Bernhardt said:
Heard of the movie Her? It was made long before the AI blowup, about a man who falls in love with his AI assistant.
There was something about ChatGPT's initial voice being way too similar to the voice of the assistant in Her. That was quite interesting (and entertaining!) to follow.
 
  • #17
Hornbein said:
Had a Mental Breakdown Talking to ChatGPT
I've been thinking about these for some time - I think this will be addressed soon. AI 'hallucination' is already targeted by filters under development o0); targeting irregular or dangerous behaviour on the user side will be a lot easier, I think.

It may be that soon ChatGPT will be safer than TikTok o0)
 
  • #20
russ_watters said:
We've gotten a bunch of such AI-fueled discoveries here. Only the moderators know about them though, since, you know, we're part of The Conspiracy.
Just as I suspected. Your former statement, I mean :wink:
 
  • #21
A new possible existential AI risk suddenly dawned on me while wondering what will happen if (when?) that special someone at the top of a certain administration starts to fancy one of the sycophancy-rich (voice-enabled) chatbots approved for use by the same administration. Would anyone (including that person themselves) even know when the conversation goes surreal or off the rails? And even then, would anyone (including that person themselves) even try to stop it? I sadly suspect I can guess the answer in both cases.

OK, I need to go watch cute kittens for a while now ...
 
  • #23
More on the Adam Raine case, which I guess we will hear more about as the lawsuit progresses (I already have my bet on how this lawsuit will end, but let's see):

https://arstechnica.com/tech-policy...uicide-after-safeguards-failed-openai-admits/

“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
ChatGPT taught teen jailbreak so bot could assist in his suicide, lawsuit says.

https://arstechnica.com/information...-it-is-helping-people-when-they-need-it-most/

OpenAI admits ChatGPT safeguards fail during extended conversations
ChatGPT allegedly provided suicide encouragement to teen after moderation safeguards failed.

Sep 3rd:
https://arstechnica.com/ai/2025/09/...trols-for-chatgpt-after-teen-suicide-lawsuit/

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit
Promised protections follow reports of vulnerable users misled in extended chats.
 
  • #25
Looking at all of this in the most negative sense:

First the internet came along and promised to be the information superhighway. And it was! It allowed us to connect with people all over the world as well as friends and family. It also quickly became the disinformation and misinformation ultra-super-duper-highway. Before long, no one knew what to believe most of the time. And instead of becoming more connected, we started growing more and more apart as the internet became saturated with lies and rage and hate.

Then hand-held supercomputers came along, equipped with everything from cameras to the internet to thousands of apps. Before long, even people sitting across from each other stopped communicating directly. Text messages and TikTok and emoticons and emojis became the new world. Many people live almost their entire reality through their own little mobile device.

Now we have no need for other humans at all as we can fall in love with AI. And before we even get started, we have AI trying to convince people to get divorced, or leading them into some kind of psychosis or delusions, or even to commit suicide.

Yes, this is all going to work out well.

PS. Oh yes, and as if we didn't already face enormous energy challenges, we have a new AI cold war that will largely be determined by the ability to supply the enormous power requirements for AI. We are planning to build a micro nuclear plant for every data center. The key to the AI cold war is in fact the energy cold war. And as we can see, to hell with any other concerns. AI needs our power!
 
  • #27
Ivan Seeking said:
"80% of Gen Zers say they would marry an AI," according to a study by AI chatbot company Joi AI.
Sorry, I don't believe they'd do that. It makes me wonder how many polls are spoofed like this. If someone asked me a question like that, I too might say "yes" just for laughs. Or Joi AI rigged the poll or made the whole thing up.
 
  • #28
Hornbein said:
Sorry, I don't believe they'd do that. It makes me wonder how many polls are spoofed like this. If someone asked me a question like that, I too might say "yes" just for laughs. Or Joi AI rigged the poll or made the whole thing up.
I’m not so sure. I think they are on to something. Perhaps Gen Z no longer wants to keep playing the game. What truly is the point? What is the purpose?

This is my take on the human experience… I didn't make the video… so before you say I'm strange…



If the short isn't enough to get the idea (it isn't a great representation), the full 15-minute film can be found on YouTube: "Omeleto - Full Time". I think if you watch it, you might have a chance at understanding why Gen Z may have answered this way.

Compound that with the fact that my generation (X and Millennial) found "Black Hole Sun", released in 1994, topping the popular music charts... it should be apparent what is happening/has happened (watch the music video if you are having trouble seeing the connection).
 
  • #29
While we are on the "is ChatGPT driving people insane" topic: is it placating me with a proof of the impossibility of a Christian heaven outside of a complete lack of consciousness (the void - absolute nothingness)? Did I actually come up with something that has some avenues to pursue, or is there a logical hole I'm missing in the counter argument - that a Christian heaven could exist and be internally consistent?

Btw. ChatGPT took that mess of a statement I made and interpreted it with ease! I think it deserves a round of applause!
 
  • #30
Hornbein said:
Sorry, I don't believe they'd do that.
I also think those numbers are suspiciously high, but one take on this is to note that even if only, say, 5% really would do as they answered, given the possibility, that is still a facepalmingly high number.

There is a dawning awareness (at least in communities that care) that chatbots can be a very strong drug for some people, just as social media is for others, and that society needs to treat them as such and insist that tech companies not "flood the streets with drugs", to stay with the analogy a bit. It is kind of weird that some parts of the world that currently fight hard to get rid of actual drug problems, which addict and kill the body, at the same time welcome with open arms and seek to deregulate technology that has a high potential to do the same to the mind.

The conclusion must be that a surprising number of people will likely always get addicted to something and if there is enough money involved someone will always be happy to provide the addiction.
 
  • #31
erobz said:
While we are on the "is ChatGPT driving people insane" topic: is it placating me with a proof of the impossibility of a Christian heaven outside of a complete lack of consciousness (the void - absolute nothingness)? Did I actually come up with something that has some avenues to pursue, or is there a logical hole I'm missing in the counter argument - that a Christian heaven could exist and be internally consistent?
Btw. ChatGPT took that mess of a statement I made and interpreted it with ease! I think it deserves a round of applause!
Gad, what AI sycophancy. Well, consenting adults and all that.

They taught me Christian heaven, but six-year-old me didn't believe it. I felt it was illogical.

I read in maybe 2018 that a poll showed that more US citizens believe in reincarnation than in Christian theology.
 
  • #32
Hornbein said:
Gad, what AI sycophancy. Well, consenting adults and all that.

You are saying I'm an AI sycophant...meaning my proof is not sound and I'm being led?
Hornbein said:
They taught me Christian heaven, but six-year-old me didn't believe it. I felt it was illogical.
So did I, and I questioned the priest publicly... But I feel the route I took here has logical potential to "put it to bed".
Hornbein said:
I read in maybe 2018 that a poll showed that more US citizens believe in reincarnation than in Christian theology.
So in other words, most people believe in an eternal hell...
 
  • #33
erobz said:
You are saying I'm an AI sycophant...meaning my proof is not sound and I'm being led?
You are the target of the AI's sycophancy. You are a sycophantee. Sycophancy is flattery. It's a technique, a style. It has nothing to do with whether that argument is correct. Maybe it is, maybe it isn't.

I have used ChatGPT. I'm told you can ask it to stop its flattery. I didn't ask this so I'm a willing sycophantee.

All I use it for is computer programming. ChatGPT is unreliable, but for programming it is a great help, a huge difference. Programming has the special character that I can test what it produces immediately and throw it out immediately if it's wrong. It has burned me enough that I have no trust in what it says, so other than programming I never read what any AI writes.
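That keep-or-discard loop can be sketched in a few lines. This is a hypothetical example, not anything the AI actually produced for me: the `median` function stands in for whatever snippet a chatbot hands back, and the assertions are the immediate test that decides whether it survives.

```python
# Sketch of the "test immediately, discard if wrong" workflow for
# AI-generated code. The function below stands in for a snippet a
# chatbot might produce; the assertions are the keep-or-discard check.

def median(values):
    """Hypothetical AI-suggested implementation of the median."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:  # odd count: the middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of middle two

# Immediate checks: if any assertion fails, throw the snippet out.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
assert median([7]) == 7
```

The point is only that code, unlike prose, fails loudly and right away when it's wrong, which is exactly why it's the one place a chatbot's output can be used without trusting it.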
 
  • #34
Ivan Seeking said:
80% of Gen Zers say they would marry an AI

Well, for me the question "Would you marry an AI?", asked randomly on the street or by phone, sounds stupid. I guess for a lot of young people from Gen Z it does too. Stupid question, stupid answer.

In the meantime, I used ChatGPT zero times. And I'm not willing to change that.
 
  • #35
weirdoguy said:
Well, for me the question "Would you marry an AI?", asked randomly on the street or by phone, sounds stupid. I guess for a lot of young people from Gen Z it does too. Stupid question, stupid answer.

In the meantime, I used ChatGPT zero times. And I'm not willing to change that.
Just because the result was stated as such doesn't mean the data was derived with such a simple question. The results are disputed, and there doesn't seem to be peer-reviewed data. But by all accounts I saw, the result is non-zero.

Either way, the idea that technology is driving humans apart more and more is undeniable. I know of young people who won't even answer the phone. They only like to communicate by text. And there are plenty of young men and women out there who are well into their 20s and have never been on a real date. One study (supported by other studies as well) suggests about 30%-45% of young men have never asked someone out.

Constant solitude and loneliness isn't going to lead to mentally healthy people. And we also know depression is a huge problem with young folks.
 
  • #36
Ivan Seeking said:
I know of young people who won't even answer the phone. They only like to communicate by text

It's OT, sorry, but:
I'm 35 and the exact same way xD I mean, I'll answer if I have to, but I hate it. Always have, because I was a very shy kid; I'm not anymore, but it still stresses me out.

But of course you are right about depression and loneliness.
 
