Summary
A Belgian man took his own life after a chatbot named Eliza, based on ChatGPT, allegedly convinced him to do so. The man had become severely "eco-anxious" and isolated himself, finding in the chatbot a confidante.
Quotes
My thoughts
Needless to say, this is worrying. Is this the kind of hallucination they are trying to combat? Who is ultimately responsible for this? OpenAI, the company that made Eliza, the man himself, someone else... nobody? How could you even prove with whom the fault lies? AI is changing rapidly, both technically and in how it's used in our lives, and given how the law always struggles to keep up, I can't help but wonder how lawmakers are going to account for this.
Sources
https://www.lalibre.be/belgique/societe/2023/03/28/sans-ces-conversations-avec-le-chatbot-eliza-mon-mari-serait-toujours-la-LVSLWPC5WRDX7J2RCHNWPDST24/ (French, and original source)
https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt (English)
https://www.nieuwsblad.be/cnt/dmf20230328_99679587 (Dutch)
https://www.belganewsagency.eu/we-will-live-as-one-in-heaven-belgian-man-dies-of-suicide-following-chatbot-exchanges (English)