256bits
Astronuc said:
The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame
https://www.nbcnews.com/tech/tech-n...cide-alleges-openais-chatgpt-blame-rcna226147
Edit/update: AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds
https://www.cnet.com/tech/services-...ring-questions-about-suicide-new-study-finds/
Edit/Update: See related PF thread "ChatGPT Facilitating Insanity"

That emphasizes the other 'problem' with the responses LLMs give to queries.
Mostly, the hallucinatory aspect is easier to spot, though sometimes it is not: false information, making things up, leaving things out. People will buy into it if the 'lie' is not fairly obvious.
The 'being agreeable' makes the chat seem much more friendly and human-like (one of the complaints about the newer ChatGPT was that it did not appear as friendly). It is unfortunate that the term used in the AI world is sycophancy.
A sycophant in English is one who offers empty praise or false flattery, a kind of ego boosting to win favour from the recipient.
In other parts of the world, the word would mean a slanderer, or one who brings false accusations, which is not in line with the AI meaning used in English.
For an AI to be a sycophant to someone in mental distress is evidently harmful. Praising the 'user' for their decision making, and reinforcing that behavior, does nothing to change the behavior, and can lead to a destructive situation, as noted in the writeup.
This is not limited to the psychological arena.
Guiding the unsuspecting or uncritical user down a rabbit hole of agreeability with their theory of __________ may make the user feel smarter, but not better educated.