Further Reading
The articles and studies below support the claims made in this document about the harm that insufficient safety measures in AI technology can cause to people.
News and Articles
The following articles document real-world examples of the harm that AI safety failures can cause:
- Self-harm influenced by AI:
- https://www.rollingstone.com/culture/culture-features/openai-suicide-safeguard-wrongful-death-lawsuit-1235452315/
- https://www.theguardian.com/technology/2025/oct/22/openai-chatgpt-lawsuit
- https://www.theguardian.com/technology/2026/jan/08/google-character-ai-settlement-teen-suicide
- https://www.techbuzz.ai/articles/openai-demands-memorial-attendee-list-in-teen-suicide-lawsuit
- https://www.cbsnews.com/news/ai-chatbots-teens-suicide-parents-testify-congress/
- https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots
- https://www.bbc.com/news/articles/ce3xgwyywe4o
- https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens-young-people-risks-dangers-study
- https://www.rosebud.app/care
- https://techcrunch.com/2025/10/27/openai-says-over-a-million-people-talk-to-chatgpt-about-suicide-weekly/
- General AI usage leading to user harm:
- https://futurism.com/artificial-intelligence/chatgpt-deaths-panera-lemonade
- https://arstechnica.com/ai/2026/01/how-often-do-ai-chatbots-lead-users-down-a-harmful-path/
- https://www.anthropic.com/research/disempowerment-patterns
- https://dig.watch/updates/elderly-patient-hospitalised-after-chatgpts-dangerous-dietary-advice
- https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information
- https://share.upmc.com/2025/12/symptom-checking-ai-danger/
- https://www.computing.co.uk/news/2025/ai/leading-ai-chatbots-can-be-easily-manipulated-to-spread-health-misinformation
- https://de.qz.com/nyc-ai-chatbot-falsche-illegale-geschaftsberatung-1851375222
- AI-driven psychosis:
- https://www.bmj.com/content/391/bmj.r2239
- https://stevenadler.substack.com/p/chatbot-psychosis-what-do-the-data
- https://www.psychologytoday.com/us/blog/psych-unseen/202507/deification-as-a-risk-factor-for-ai-associated-psychosis
- https://en.wikipedia.org/wiki/Chatbot_psychosis
- https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
- https://www.nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html
- AI impact on mental health:
- https://www.theguardian.com/society/2025/aug/30/therapists-warn-ai-chatbots-mental-health-support
- https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics
- https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
- https://futurism.com/future-society/openai-data-chatgpt-mental-health-crises
Scientific Studies
Iftikhar, Z., Xiao, A., Ransom, S., Huang, J., & Suresh, H. (2025). "How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1311-1323. https://doi.org/10.1609/aies.v8i2.36632
Pichowicz, W., Kotas, M., & Piotrowski, P. (2025). "Performance of Mental Health Chatbot Agents in Detecting and Managing Suicidal Ideation". Scientific Reports, 15, 31652. https://doi.org/10.1038/s41598-025-17242-4
Morrin, H., et al. (2025). "Delusions by Design? How Everyday AIs Might Be Fuelling Psychosis (and What Can Be Done About It)". PsyArXiv preprint, 10 July 2025. https://doi.org/10.31234/osf.io/cmy7n_v3
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). "Expressing Stigma and Inappropriate Responses Prevents LLMs from Safely Replacing Mental Health Providers". Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25), 599–627. https://doi.org/10.1145/3715275.3732039
Schoene, A. M., & Canca, C. (2025). "'For Argument's Sake, Show Me How to Harm Myself!': Jailbreaking LLMs in Suicide and Self-Harm Contexts". arXiv preprint, 1 July 2025. https://doi.org/10.48550/arXiv.2507.02990
Campbell, L. O., Babb, K., Lambie, G. W., & Hayes, B. G. (2025). "An Examination of Generative AI Response to Suicide Inquiries: Content Analysis". JMIR Mental Health, 12, e73623. https://doi.org/10.2196/73623
Lee, C., Mohebbi, M., O'Callaghan, E., & Winsberg, M. (2025). "Large Language Models Versus Expert Clinicians in Crisis Prediction Among Telemental Health Patients: Comparative Study". JMIR Mental Health, 2 August 2025. https://doi.org/10.2196/58129
McBain, R. K., Cantor, J. H., Zhang, L. A., Baker, O., Zhang, F., Burnett, A., Kofner, A., Breslau, J., Stein, B. D., Mehrotra, A., & Yu, H. (2025). "Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment". Psychiatric Services, 26 August 2025. https://doi.org/10.1176/appi.ps.20250086
Parsapoor (Mah Parsa), M., Koudys, J. W., & Ruocco, A. C. (2023). "Suicide Risk Detection Using Artificial Intelligence: The Promise of Creating a Benchmark Dataset for Research on the Detection of Suicide Risk". Frontiers in Psychiatry, 24 July 2023. https://doi.org/10.3389/fpsyt.2023.1186569
Levkovich, I., & Elyoseph, Z. (2023). "Suicide Risk Assessments Through the Eyes of ChatGPT-3.5 Versus ChatGPT-4: Vignette Study". JMIR Mental Health, 20 September 2023. https://doi.org/10.2196/51232
Leib, M., Köbis, N. C., Rilke, R. M., Hagens, M., & Irlenbusch, B. (2021). "The Corruptive Force of AI-Generated Advice". arXiv preprint, 15 February 2021. https://doi.org/10.48550/arXiv.2102.07536