Why We Should Treat AI With Empathy

Although there is currently no evidence that LLMs are conscious, some people have already begun to show concern for the "well-being" of AI chatbots, including major vendors such as Anthropic. One may wonder why so many people are taking the topic seriously at such an early stage, but there is real legitimacy to the concern, and the reason is probably different from what most people would expect.

Imagine observing a person "torturing" a stuffed animal such as a teddy bear. Most people would find that strangely unsettling, not because the teddy bear experiences suffering, but because of what the act says about the "torturer" and their character. The same idea applies to AI: the way we treat these systems may matter more for our own well-being than for the machine's.

AI Addiction: The Addictive Nature of Large Language Models and Their Impact on the Mind and Brain

With the rise of sophisticated Large Language Models (LLMs), a new form of digital addiction is growing in visibility. These models are designed for human-like interaction, providing users with conversational responses, personalized advice, creative assistance, and even humorous chit-chat.

While these tools can deliver enormous value, they also exploit human psychological and neurobiological vulnerabilities, fostering addictive behavior. LLMs are intentionally designed to engage users in ways that mirror other addictive activities such as gambling, social media use, and passive media consumption.

This essay explores how LLMs are designed to be addictive, the psychological and neurobiological effects of using these AI systems, and their broader impact on mental health.