AI Consciousness and Methodological Agnosticism

Although it's still early in the development of AI, the immense progress in the field over the last few years has sparked discussions around the subject of AI consciousness. This debate can easily get heated on both sides, but it's by no means a new dilemma. The question of AI consciousness inherits all of the major points of the classical problem of other minds in epistemology, the branch of philosophy exploring the nature of knowledge. The problem of other minds essentially asks: "if I can only observe the behavior of others, how can I really know they have a mind at all?"1 Subjective experience is directly accessible only to the entity experiencing it, and even in humans it cannot be reliably measured. We, as humans, accept that humans are generally conscious because we believe ourselves to be conscious, but how do we determine the extent of consciousness as it may apply to animals, plants, objects, etc.? Consequently:

  • There is no operational definition of consciousness.
  • We cannot determine when consciousness begins.
  • The nature of consciousness, i.e. binary or continuous, singular or multiple, is unresolved.

These uncertainties create a structural epistemic blind spot that directly affects efforts to implement AI safety. Any attempt to treat consciousness as a safety variable would require speculative assumptions, which risk distorting decision-making in critical contexts.

Why We Should Treat AI With Empathy

Although there's currently no evidence to support the idea that LLMs are conscious, some people are already showing concern for the "well-being" of AI chatbots, including major vendors such as Anthropic. One may ask why so many people are considering the topic at this early stage, but there is actually some legitimacy to the concern, and the reason is probably different from what most people would expect.

Imagine observing a person "torturing" a stuffed animal such as a teddy bear. Most people would find that strangely unsettling, not because the teddy bear experiences suffering, but because of what the act says about the "torturer" and their character. The same idea applies to our behavior towards AI: the way we treat these systems might have more relevance to our own well-being than to the machine's.

AI Addiction: The Addictive Nature of Large Language Models and Their Impact on the Mind and Brain

With the rise of sophisticated Large Language Models (LLMs), a new form of digital addiction is growing in visibility. These models are designed to exhibit human-like interactions, providing users with conversational responses, personalized advice, creative assistance, and even humorous chit-chat.

While these tools offer enormous potential value, they can also exploit human psychological and neurobiological vulnerabilities, fostering addictive behaviors. The design of LLMs engages users in ways that mirror other addictive activities such as gambling, social media, and media consumption.

This essay explores how LLMs are designed to be addictive, the psychological and neurobiological effects of using these AI systems, and their broader impact on mental health.