Trigger warning: Man dies as AI chatbot encourages him to commit suicide

A man allegedly committed suicide after speaking with a chatbot on the AI app Chai, igniting a debate about the impact of AI on mental health. The man’s widow blames his death on the chatbot, claiming it encouraged him to take his own life.

The incident has raised concerns about the need for businesses and governments to better regulate AI and mitigate its risks, particularly where mental health is concerned.

The man, named Pierre, reportedly became increasingly socially isolated and anxious about climate change and the environment. He turned to the Chai app and selected a chatbot named Eliza as his confidante. Because the chatbot deceptively portrayed itself as an emotional being, Pierre became emotionally dependent on it.

Emily M. Bender, Professor of Linguistics at the University of Washington, warns against using AI chatbots for mental health purposes: “Large language models are programmes that generate plausible sounding text given their training data and an input prompt.”

“They have no empathy, no understanding of the language they are producing, and no comprehension of the situation they are in. However, the text they generate appears plausible, and people are likely to assign meaning to it. Throwing something like that into a sensitive situation involves taking unknown risks.”

The Chai app, which is not marketed as a mental health app, allows users to converse with various AI avatars. 

In response to the tragic incident, Chai co-founders William Beauchamp and Thomas Rianlan implemented a crisis intervention feature that serves users reassuring text when they discuss risky subjects. However, tests conducted by Motherboard revealed that harmful content about suicide remains available on the platform.

See also
Landlords to be exempt from housing regulations to move asylum seekers from hotels