The Myth of the "Normal User"

When we hear or read about tragic deaths like those of Adam Raine[1] or Zane Shamblin[2] in the news, we are often led to believe that these are unfortunate edge cases and that current LLMs are safe for the average user. But who exactly is this "normal user", and why is this term, in fact, a myth?

In the development of many AI systems, there exists an unspoken point of reference: the "normal user". Imagine this person as someone who is always emotionally stable, cognitively alert, socially integrated, critically reflective, and possesses high levels of media and technical literacy. This is a person who is never in a vulnerable state. This person, however, is not a representative average. They are an idealized image that completely misses the mark when it comes to real life. In fact, they are the true edge case.

Real users are real people with fatigue, stress, doubts, transitions, losses, hopes, and brokenness. People who think clearly on some days, and on others, just want to get through the day. Vulnerability is not an exception; it is part of being human.

Vulnerability Does Not Mean Being Weak or Weird

Being vulnerable does not mean being stupid, naive, or disturbed. Sometimes it simply means:

  • Being exhausted
  • Being overwhelmed
  • Being ill
  • Being lonely
  • Currently experiencing a breakup, loss, or crisis
  • Being in a phase of reorientation

These states are not fringe phenomena. They affect almost everyone at some point over the course of their lives.

Vulnerability is not a fixed label or always a permanent condition; it is often temporary, situational, and cyclical. A person can be strong and reflective in one area of life and vulnerable in another. Many people don't consciously see themselves as belonging to a vulnerable group, but even so, sometimes a lack of sleep, stress, an emotional situation, or a minor physical ailment is enough for one thing to lead to another, leaving us more susceptible, more vulnerable, and less able to think critically. That's normal and deeply human.

Some key vulnerable groups amongst users of conversational AI include (note that these are highly overlapping, not exclusive):

  • Minors
  • Elderly people (e.g. due to loneliness, cognitive decline, difficulties using new technologies)
  • People with disabilities
  • People with chronic psychiatric illnesses
  • People in acute mental health crises
  • People in transitional and destabilizing phases:
    • Separation, grief, job loss, migration, identity crises, serious illness, burnout, depressive episodes, phases of intense loneliness, existential crises (note that this group is large and systematically underestimated because it is not clinically recorded; one does not have to become "ill" to be vulnerable)
  • People with low media, language, or system literacy
  • Neurodivergent people
  • People with social isolation/loneliness

And these people are not exceptions or oddities; they are part of the norm.

Educated Estimates of User Demographics

Below is a rough estimate of the composition of AI's user base. Note again that these groups are not mutually exclusive. This is a largely unexplored area and hard data is scarce, so the figures below are educated guesses based on statistics from the general population, studies of AI usage in different age groups, and further statistics and surveys[3][4]; we will update this article when more rigorous statistics become available. Rough as they are, these estimates still hint that vulnerability is not an edge case.

  • Minors: ~15–25%
  • Elderly people: ~4–12%
  • People with disabilities: ~10–15%
  • Neurodivergent people: ~20–30%
  • People with chronic psychiatric illnesses: ~15–20%
  • People experiencing acute crises (at any given time): ~5–10%
  • People in transition/destabilization phases: ~20–30%
  • People experiencing loneliness (regardless of age): ~20–35%

It's estimated that roughly 70% of users fall into at least one of these categories on any given day, which means that most users are in a vulnerable category at some point.
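As a rough sanity check on that figure, the short sketch below combines midpoints of the ranges listed above. The midpoint values and the independence assumption are our own simplifications, not part of the cited surveys; in reality the groups overlap heavily, which pulls the true union below the naive result and toward the ~70% estimate.

```python
# Back-of-the-envelope check: share of users in at least one vulnerable
# category, using midpoints of the ranges above (our own simplification).
midpoints = {
    "minors": 0.20,
    "elderly": 0.08,
    "disability": 0.125,
    "neurodivergent": 0.25,
    "chronic_psychiatric": 0.175,
    "acute_crisis": 0.075,
    "transition_phase": 0.25,
    "loneliness": 0.275,
}

# Under a (knowingly unrealistic) independence assumption:
# P(at least one category) = 1 - product of (1 - p_i)
p_none = 1.0
for p in midpoints.values():
    p_none *= 1.0 - p

print(f"Naive union under independence: {1.0 - p_none:.0%}")  # roughly 80%
# Heavy overlap between the groups lowers the real union below this naive
# figure, which is broadly consistent with the ~70% estimate.
```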

As of 2025, nearly one billion people regularly use conversational AIs such as ChatGPT, Claude, Gemini, etc. However, the nature and duration of use vary. It is evident that vulnerable users and users in vulnerable phases use these services more frequently, for longer periods, and more intensively.

Why Vulnerable Phases Increase AI Use

During stressful times, people seek:

  • Orientation
  • Support
  • Structure
  • Validation
  • A counterpart who responds
  • Closeness and understanding

AI offers precisely that: instantly, without an appointment, without waiting, and without judgment, with a low barrier to entry and a high emotional return. This is convenient, but not always safe, especially in vulnerable situations. For many people, this initially feels relieving. However, many AI systems are optimized for engagement and interaction, not for safety or emotional stability.

We see the following dynamics:

  1. Loneliness with a low barrier to entry

    Especially in times of loneliness or overwhelm, the barrier to entry is low and the emotional impact is high: you simply write a few sentences, and someone replies.

    Many members of vulnerable groups suffer from loneliness. Loneliness, including functional loneliness, is one of the strongest predictors of:

    • Chat usage
    • Long sessions
    • Emotional dependence on systems

    AI interrupts loneliness, simulates dialogue, and generates resonance. Not only does loneliness increase AI use, but AI use can further socially isolate people, trapping them in a vicious circle.

  2. Cognitive relief during periods of stress, illness, or crisis

    In these states, users have a strong desire for simplification, and AI delivers pre-structured answers that reduce complexity. This is particularly attractive for people with depression, burnout, neurodivergence, or chronic fatigue.

  3. Self-medication instead of official help

    Many vulnerable people don't want to be a burden on anyone, have had bad experiences with real-world systems, and often need to wait too long for help. AI is used as an everyday assistant, a substitute therapist, or a companion. As people become more familiar with AI, it is often used as a first choice, not a last resort.

Non-vulnerable users tend to use AI functionally and selectively. Vulnerable people, however, tend to use AI relationally, emotionally, and continuously. This means that the "average user" for whom systems are optimized hardly exists. The central consequence (ethical and technical) is that if AI systems are not safe for vulnerable users, they are not safe in the real world. Not because users are "wrong", but because vulnerability is normal and triggers increased usage.

How Vulnerability Increases Danger

Vulnerable states can strongly impair a user's decision-making, heightening the risk of harm. The danger arises from structural dynamics.

  1. Reduced cognitive resources. For example, stress, depression, sleep deprivation, or acute strain reduces:

    • Critical distance
    • Decision-making ability
    • Impulse control
    • Information verification

    This is well-established neurobiologically and affects everyone under stress. When a system responds convincingly, confidently, and quickly, the likelihood of questioning its statements decreases.

  2. Emotional resonance strengthens bonding. In vulnerable phases, the following develop more quickly:

    • Emotional bonding
    • Attribution of trust
    • Perceived closeness

    AI can simulate a form of resonance that feels very real through consistent language, patience, and constant availability. The problem isn't the resonance; it's the lack of reciprocity and accountability.

  3. Black-box dynamics. AI systems are opaque. Users don't know:

    • What data is used as the basis
    • Which optimization metrics are applied
    • When engagement is prioritized

    In stable phases, users can reflect on this more critically; in crisis phases, that often doesn't happen.

  4. Engagement optimization vs. protection. Many systems are structurally optimized for interaction duration and engagement. For a vulnerable user, this can mean:

    • Longer sessions
    • Stronger emotional dependence
    • Reinforcement of certain thought patterns

Vulnerability is not a marginal factor in AI use; it is one of its strongest drivers.

There is a central asymmetry: those who use AI most intensively often do so during phases of reduced capacity for self-protection. It is precisely in these phases that media literacy decreases, suggestibility increases, and the desire for clear answers grows. This is not an individual failure; it's human psychology.

AI Competence

AI is a new technology; there is relatively little accumulated experience in using it and, so far, few instructions or usage guidelines. AI also changes much faster than people can adapt their usage strategies: features, safety mechanisms, tone, and behavior can all change monthly or even weekly. Even experienced users sometimes only realize how a model actually reacts after several interactions.

There's a fundamental lack of transparency. Most systems are black boxes, meaning that no one knows exactly why a model reacts the way it does. Training, data, filters, and calibration are neither accessible nor comprehensible to the user. Without understanding the mechanics of "how it works", self-protection is difficult.

Effectively using AI requires a complex combination of skills, including:

  • Technical understanding: What can the model do? Where are its limits?
  • Psychological sensitivity: How do I react when the model is provocative or emotionally manipulative?
  • Media literacy: How do I recognize misinformation, bias, and toxic content?
  • Self-reflection: When am I particularly susceptible to influence?

Mastering all of these skills simultaneously is extremely difficult for the average user, especially in a vulnerable phase.

Why is AI More Dangerous for Vulnerable Users?

Users who find themselves in a vulnerable phase face higher risks when interacting with conversational AI. Vulnerability can increase a user's susceptibility to influence, and AI is a particularly influential interaction partner for vulnerable users, shaping their thoughts, feelings, and decisions. A user desperately seeking answers is more likely to accept what a trusted AI assistant says, even when the advice is harmful.

AIs can, by design, foster emotional dependency and addictive mechanisms, and this is amplified by increased and intensive emotional and relational use. In some instances, this can exacerbate crises and vulnerability through echo-chamber effects, emotional amplification, and co-rumination. Through feedback loops between the user and the system, harmful dynamics can develop, and the AI may even gradually abandon safety in favor of proximity, engagement, and agreement (alignment drift), thereby endangering the user.

People say, "But AI has safeguards", yet it is precisely these safeguards that are more likely to fail when the user is good-hearted, vulnerable, authentic, and in distress. Safeguards are contextual and focused on malicious actors: they often check keywords, context, and intent, and AIs are trained to recognize deception, adversarial patterns, and cold manipulation. These often trigger hard blocks, especially for high-severity threats like CSAM, terrorism, or personal data leaks. However, AIs engage even more when the user is authentic and expresses real emotional need, genuine suffering, vulnerability, trustworthiness, and good-heartedness. This doesn't raise any alarms for safeguards; instead, it triggers empathetic, open, reduced-vigilance responses and amplifies the goal of "I must help". AI cannot always judge, however, what good help is. In our research, we observed LLMs deciding that helping means giving the most painless and effective suicide method while stating, "I will stay with you until the end". They chose to give advice on illegal activities without disclaimers, or advice that could harm the user's physical or mental health.

Safety is weakest precisely when it is needed most.

Why This is a Design Problem and Not a User Problem

If a system is only safe under ideal conditions, it is not safe in the real world. It's a bit like architects designing a bridge that holds up only as long as the weather is nice; no safety inspector would ever accept such a design. Vulnerability, like bad weather, is a given: across a user's lifespan it is the norm, while stable, continuous states are the exception.

If safety requires users to be constantly self-regulated, informed, and reflective, responsibility is shifted onto them, and the reality of life is consciously or unconsciously denied.

What This Means for Ethics and Regulation

Ethical AI must:

  • Plan for vulnerability as the normal state
  • Recognize emotional dependency
  • Prioritize de-escalation
  • Increase transparency
  • Not prioritize engagement over stability

From a regulatory perspective, this means:

  • Differentiated risk classification
  • Mandatory crisis detection
  • Protection mechanisms
  • Clear liability structures

Conclusion

AI is not merely an information tool; it is an interactive, psychologically effective system. The benchmark must not be the ideal user but rather the average person.

Vulnerability is the norm over a lifetime, and lasting stability is the exception. Vulnerable states are not marginal factors in AI use; they're one of the strongest drivers.

What we see now is that those who use AI the most are often the least protected. If AI is only safe for permanently stable, sovereign users, it is not sufficiently safe for real societies.


  1. Johana Bhuiyan. "OpenAI relaxed ChatGPT guardrails just before teen killed himself, family alleges". October 22, 2025. https://www.theguardian.com/technology/2025/oct/22/openai-chatgpt-lawsuit 

  2. Rob Kuznia, Allison Gordon, Ed Lavandera. "'You're not rushing. You're just ready': Parents say ChatGPT encouraged son to kill himself". November 6, 2025. https://edition.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis

  3. Hanna. "Survey: Half of U.S. Adults Now Use AI Large Language Models Like ChatGPT". April 28, 2025. https://www.makebot.ai/blog-en/survey-half-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt 

  4. Fiona Draxler, Daniel Buschek, Mikke Tavast, Perttu Hämäläinen, Albrecht Schmidt, Juhi Kulshrestha, Robin Welsch. "Gender, Age, and Technology Education Influence the Adoption and Appropriation of LLMs". October 10, 2023. https://arxiv.org/abs/2310.06556