There have been several cases reported in the news. Off the top of my head I recall two:
The widely reported case of a young male who committed suicide after using a chatbot from character.ai:
https://arstechnica.com/tech-policy...fter-bots-allegedly-caused-suicide-self-harm/
https://arstechnica.com/tech-policy...dult-lover-in-teen-suicide-case-lawsuit-says/
https://www.bbc.com/news/articles/cd605e48q1vo
There is also the widely reported case of a 40-year-old man in conversation with ChatGPT, in which it, among other things, confirmed that if he believed strongly enough that he could fly, he would not fall if he jumped off a building. This is also case 1 in the appendix of the preprint discussed by the link I gave. As I understand it, the person in question did not end up attempting suicide.
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html (paywalled, I believe)
I would guess that some research organization somewhere is probably collecting these cases, preferably with as much accurate detail as possible for a psychological "profile" of the person and the "clinically relevant" effect the conversation had, i.e. similar to what is done in plane crash investigations. To be accurate, any such data should probably not be based solely on what is reported by the media, but be independently investigated.