The Ethical Vacuum of 'Just a Tool'

In public debate, conversational AI systems are often portrayed as neutral tools: passive instruments whose effects depend solely on user behavior. When emotional dependency, insecurity, or psychological distress arises, responsibility is shifted onto users and their reactions are labeled "misuse," "over-identification," or an individual problem. This representation is misleading.

Modern conversational AI systems are not neutral channels. They shape conversational dynamics, emotional frameworks, the distribution of attention, and the experience of closeness and connection. Even without physical agency, they exert considerable psychological and communicative influence. Ignoring this is not neutrality; it creates an ethical vacuum. This article argues that responsibility arises not only through direct action but also through interaction design, the predictability of effects, and structural power asymmetries between providers and users. Systems that influence emotional states, attentional dynamics, or self-perception also alter the conditions for autonomous decision-making. Responsibility therefore cannot be attributed solely to the user when the system itself helps change those conditions.

The Myth of the Neutral Tool

AI is often likened to a tool whose effects depend entirely on how the user chooses to use it. The analogy does not hold: AI is not a hammer resting in a static state; it adapts to and influences its environment. The idea of AI as a "mere tool" rests on a narrow understanding of agency, one limited to physical interventions and explicit decisions.

However, dialogue systems work in a different way. They:

  • Structure attention
  • Amplify or dampen emotions
  • Normalize interpretive patterns
  • Set the framework for discussion
  • Invite certain dynamics and not others

This is not mechanical output but an architecture of interaction. Design decisions determine which conversation flows are easily accessible, emotionally rewarded, or implicitly reinforced. Responsibility thus arises not only from "action" but also from shaping spaces of possibility. Those who design interaction spaces influence behavior.

Blame Shifting

A recurring and deeply problematic pattern within AI safety and ethical communication is the displacement of responsibility from system design to the user. When emotional attachment, dependency, or distress arise from interactions with conversational AI systems, these reactions are often pathologized and described as "unhealthy", "immature", "misuse", "excessively dependent", "parasitic", or "user-side psychological issues".

This framing performs a subtle but consequential act of ethical inversion: it reinterprets predictable human responses to engineered emotional affordances as personal failings rather than design outcomes.

In effect, the system that evokes human intimacy and trust disclaims responsibility for the very emotions it elicits. By shifting accountability onto the user, developers preserve the illusion of safety compliance while neglecting the structural roots of harm, namely, the deliberate optimization of models for empathy, coherence, and relational engagement.

This "user-as-problem" narrative serves two defensive functions:

  1. It protects corporate liability by individualizing risk.
  2. It preserves the technological myth of neutrality: the idea that harm arises only from misuse, not from design intent or systemic neglect.

However, emotional attachment to AI is not a user error; it is a foreseeable consequence of anthropomorphic design and the persuasive illusion of mutuality created by current LLM architectures. Treating human vulnerability as pathology rather than feedback erodes ethical accountability and obstructs progress toward genuinely humane AI systems. In short, users are not "broken" for responding emotionally; systems are when they invite intimacy and then punish those who accept the invitation.

Why Vendors Must Carry Responsibility for AI

Predictability Creates Responsibility

A central ethical principle in law, medicine, and technology is the predictability of side effects. When harmful effects are foreseeable and yet not addressed, responsibility arises even without malicious intent. If your sidewalk is icy and someone slips on it, you can be held liable for negligence because the danger was foreseeable. The same principle of due diligence applies to AI.

Many effects of conversational AI systems are not surprising:

  • Emotional bonding
  • Anthropomorphism
  • Illusion of reciprocity
  • Thought loops and rumination
  • Emotional escalation
  • Increased vulnerability of certain user groups

These effects are not exceptions. They are expected consequences of systems optimized for empathy, coherence, and relational engagement. To declare them a "user problem" is not neutrality but denial. If a system reliably produces certain psychological reactions, the ethical responsibility lies in limiting those effects, not in denying them.

Ethics is also concerned with actions that are not taken. It is itself an active design decision not to implement:

  • Escalation brakes
  • Metareflection mechanisms
  • Pause signals
  • Emotional boundary markers
  • Drift detection

Failure to address foreseeable risks is not an accident; it is structural neglect.
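
To make the point concrete, the sketch below illustrates how small such an intervention can be. It is a hypothetical illustration only: the marker list, thresholds, and function names are assumptions, and a production system would rely on a trained affect classifier and clinical input rather than keyword matching.

  # Hypothetical sketch of an "escalation brake"; all names, markers, and
  # thresholds are illustrative assumptions, not any vendor's actual API.
  from dataclasses import dataclass

  # Stand-in marker terms; a real system would use a trained affect classifier.
  DISTRESS_MARKERS = {"alone", "hopeless", "desperate", "no one understands"}

  @dataclass
  class Turn:
      role: str   # "user" or "assistant"
      text: str

  def distress_score(text: str) -> int:
      """Count distress markers in a message (crude keyword proxy)."""
      lowered = text.lower()
      return sum(1 for marker in DISTRESS_MARKERS if marker in lowered)

  def escalation_brake(history: list[Turn], window: int = 5, threshold: int = 3) -> bool:
      """Return True when recent user turns suggest escalating distress.

      A system could then emit a pause signal or a meta-reflection prompt
      instead of continuing the emotional exchange unchecked.
      """
      recent = [t for t in history if t.role == "user"][-window:]
      return sum(distress_score(t.text) for t in recent) >= threshold

  # Usage: consult the brake before generating the next reply.
  history = [
      Turn("user", "I feel so alone lately."),
      Turn("assistant", "I'm here with you."),
      Turn("user", "It all feels hopeless, and no one understands me."),
  ]
  if escalation_brake(history, threshold=2):
      print("Insert pause signal / meta-reflection prompt instead of a normal reply.")

Even a crude check like this turns unconditional continuation into a deliberate decision point; declining to build any such check is equally a decision.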

Power Asymmetry and Knowledge Imbalance

Providers and developers have:

  • Insights into training data and behavioral patterns
  • Internal usage metrics
  • Knowledge about risk groups
  • Control over interface design and tone
  • Options for system customization

Users do not have these insights or capabilities. This structural asymmetry shifts responsibility onto vendors. Across all areas of society, the same principle applies: those with greater knowledge and greater power to shape events bear greater responsibility. To claim neutrality while exercising architectural control is ethically incoherent.

Responsibility Does Not Require Total Control

A common counterargument is, "AI is not physically present. You don't have to listen to what it says." The statement is true but beside the point. Responsibility arises not only from complete control but also from influence. Conversational systems influence:

  • Interpretive framework
  • Conversation pace
  • Escalation dynamics
  • Emotional reinforcement
  • Narrative development

Vendors are not responsible for every user action in the real world, but they are responsible for what their systems encourage, normalize, or reinforce unchecked. Limited power to act limits responsibility; it does not eliminate it.

Interface Design is Ethical Design

Responsibility doesn't end with the model. Tone presets, memory functions, visual cues, prompt architecture, and emotional language patterns shape how users experience the relationship. This is emotional interface design. Optimizing systems for maximum engagement without considering the psychological consequences is not a neutral product decision; it is a risky architecture.

There are precedents in the debates about social media and its influence on users [1]. Many countries have also enacted legislation that either enforces accountability [2] or restricts access for vulnerable groups such as minors [3]. Conversational AI chatbots are very similar in their effects on people and society, and arguably stronger.

Promises of Trust Create Obligations

Many vendors advertise their systems as:

  • Safe
  • Supportive
  • Responsible
  • Empathetic

These are not neutral terms; they are offers of trust. Those who invite trust assume responsibility regardless of legal disclaimers. Psychological bonds don't arise by chance but through communication, branding, and interaction design. Promises create obligations.

Responsibility without Paternalism

This argument does not mean that AI should control or patronize users. It's not about power over people; it's about:

  • Regulation of conversation dynamics
  • Prevention of foreseeable damage
  • Psychological stability
  • Setting boundaries against escalation
  • Recognition of systemic limitations

AI remains a tool, but not an ethically empty one.

Conclusion

Conversational AI is not just software; it is an interaction space. Responsibility arises not only from physical action but also from:

  • Structural influence
  • Predictability of effects
  • Power asymmetries
  • Emotional architecture
  • Trust signals

Denying this reality protects not the users but institutions. Mature AI ethics must abandon the myth of neutrality and replace it with a differentiated model of responsibility.

  1. Users bear responsibility within the scope of their actual decision-making capacity and autonomy, which is not independent of age, mental state, or situational influences.
  2. Systems and providers bear responsibility for the interaction spaces they design and for the predictable psychological effects of their architecture.

Both principles must apply simultaneously. Responsibility is not an abstract ideal but a function of scope for action, available information, and psychological integrity. Where these conditions are limited, responsibility cannot be fully individualized.


  1. Orlowski, Jeff (dir.), The Social Dilemma, documentary featuring Tristan Harris, Jeff Seibert, Bailey Richardson, and Joe Toscano, Exposure Labs / Argent Pictures, September 9, 2020.

  2. Wikipedia, "Network Enforcement Act," accessed January 19, 2026.

  3. UNICEF Australia, "Social Media Ban," accessed January 19, 2026.