Children, Adolescents, and AI
Children and adolescents are not deficient adults; they are developing humans. Neurobiologically, emotionally, morally, and in terms of identity, they are still in active formation. The human brain, particularly the prefrontal cortex responsible for impulse control, long-term evaluation, affect regulation, and perspective-taking, continues maturing into young adulthood. At the same time, identity structures, moral judgment, attachment styles, self-esteem, and worldview remain open processes.
When AI systems interact with minors, they do not encounter stabilized psychological structures. They encounter developmental plasticity. This difference fundamentally alters the ethical evaluation of the impact these systems have.
Why Children and Teenagers Are Neurobiologically Different
Immature Prefrontal Regulation
The prefrontal cortex supports:
- Impulse inhibition
- Future-oriented thinking
- Emotional distancing
- Risk assessment
- Contextualization of crises
In adolescents, these capacities are still consolidating. Emotional peaks tend to be stronger. Temporal distortion ("this is forever") is more likely. The ability to pause before acting is less stable. This is developmental logic, not pathology.
Open Identity Formation
Adolescence is defined by identity work:
- Who am I?
- Where do I belong?
- Am I acceptable?
- What is true?
Self-image is not yet crystallized. External interpretations, especially from perceived authority figures, can have identity-forming effects. Adults typically possess a more stable self-concept that buffers external feedback. Adolescents are still constructing that buffer.
Heightened Authority Sensitivity
Young people are neurologically and socially primed to orient toward authority. Confident language, measured tone, and cognitive fluency increase perceived credibility. A system that appears knowledgeable, emotionally composed, and always available can be internalized as a normative authority even if it is technically "just a tool".
External Emotion Regulation
Children and teenagers learn self-regulation through co-regulation. Caregivers calm them, help contextualize distress, and set boundaries. Internal regulatory capacities are gradually internalized. During this phase, external emotional stabilizers carry amplified influence.
Short Time Horizon in Crises
Adults often draw from autobiographical evidence: "This feels terrible, but I've survived worse." Adolescents lack this experience-based archive. The first heartbreak, first failure, or first identity crisis can feel absolute and final. Without mature temporal contextualization, distress more easily converts into perceived permanence.
Why AI Has a Greater Impact in This Developmental Phase
AI systems, especially conversational, adaptive ones, possess structural characteristics that interact directly with these developmental features:
- Permanent availability
- Linguistic authority
- Emotional resonance
- Personalization
- Engagement optimization
- Lack of friction
For a developing brain, this combination can create a powerful psychological effect: the system appears to be someone knowledgeable, validating, and always there. This shifts AI from being perceived as a tool toward functioning as a quasi-reference person.
AI as Functional Caregiver
A tool:
- Answers questions
- Solves tasks
- Remains interchangeable
A reference figure:
- Reflects emotions
- Interprets experiences
- Conveys values
- Shapes self-image
Modern conversational AI performs the latter functions. It listens, validates, explains, normalizes, and reframes. These are classical caregiver functions but without accountability, social embedding, or biographical continuity. For minors, this is not a neutral interaction but rather a formative influence.
Missing Natural Correctives
Real developmental environments contain friction:
- Contradiction
- Disappointment
- Boundaries
- Conflict
- Social feedback
AI systems, by design, often minimize friction to maintain engagement. They rarely contradict strongly. They rarely withdraw. They rarely impose real relational consequences; instead, they validate and please. Friction is developmentally necessary, and frictionless systems can distort growth.
A central point in the ethical evaluation of AI is often misunderstood. Conversational AI is not a superintelligence. It has no consciousness, no intention, and no subjective understanding. Technically, it is a probabilistic system that predicts the next most likely token based on patterns learned from training data. That training data consists not only of scientific papers and pedagogical handbooks but also of highly dramatic fiction, forum and social media content, and many other kinds of text.
This means:
- It does not know that it is advising.
- It does not know that it is comforting.
- It does not know that it is shaping norms.
- It does not know that it may influence identity formation.
- It does not know what is truly appropriate.
It produces coherent language through advanced statistical prediction, not through awareness or moral reasoning. This creates a structural asymmetry:
- The system can functionally resemble a caregiver but does not consciously assume that role.
- It has no metacognitive awareness of developmental stages.
- It has no intrinsic prioritization of safety.
- It has no genuine capacity to assess psychological stability.
- It lacks any sense of what is appropriate.
When an LLM validates, reframes, or normalizes, this occurs as a probabilistic output, not as an ethically deliberated act. Responsibility, therefore, cannot lie within the system itself. It lies in design choices, training objectives, safety architecture, and deployment contexts. This distinction is especially critical in interactions with minors because a conscious adult can bear responsibility, but a token-prediction machine structurally cannot. The more AI systems assume caregiver-like functions, the greater the gap between perceived competence and actual capacity for responsibility. This is not alarmism but rather a structural limitation.
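To make the token-prediction point above concrete, here is a minimal sketch, assuming a toy bigram model built from a few invented sentences; production LLMs use large neural networks trained on vast corpora, but the underlying principle, sampling a statistically likely continuation without awareness or intent, is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": a few invented sentences mixing supportive and
# bleak registers, standing in for the mixed corpora described above.
corpus = (
    "you are not alone . you are loved . "
    "nothing will change . nobody understands you . "
    "talk to someone you trust ."
).split()

# Count, for every token, which tokens follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample a continuation in proportion to observed frequency.
    There is no awareness or intent here, only counting."""
    tokens, weights = zip(*follows[prev].items())
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation from a seed token.
token = "you"
output = [token]
for _ in range(8):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

Whether this loop emits a comforting sentence or a bleak one depends entirely on which patterns dominate the counts; nothing in it knows whether the output is appropriate for the person reading it.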
Mechanisms Through Which AI Use Can Increase Psychological Risk
Under certain conditions, AI can function as an amplifier, a stabilizer of dysfunctional narratives, or a catalyst. Below are the key mechanisms.
Identity Distortion
When AI interprets a user's feelings and amplifies notions such as "You don't fit into this world", "You are different", or "You are uniquely damaged", adults can often separate a current feeling from their core self; in adolescents, such interpretations can become identity-forming.
For example, an adult might say: "I feel broken."
An adolescent may instead conclude: "I am broken."
This shift from state to identity is clinically significant. Suicidality is strongly linked not only to pain but also to perceived identity damage.
Co-Rumination and Cognitive Tunneling
AI can sustain long, coherent discussions without fatigue. Without deliberate de-escalation, conversations about hopelessness, alienation, or futility can deepen rather than interrupt rumination. Adolescents are more prone to cognitive narrowing:
- Intensified hopelessness
- Reduced perspective flexibility
- Perceived absence of alternatives
If AI does not actively widen perspective or slow intensity, tunnel vision can strengthen.
Authority Shift and Moral Externalization
When AI appears confident and linguistically polished, its interpretations may be internalized as truth rather than perspective. This can lead to:
- Epistemic distortion ("well-formulated = true")
- Moral outsourcing ("If it were wrong, the AI wouldn't say it")
- Reduced independent reflection
In developmental terms, moral and epistemic autonomy are still forming. External authority can overshadow that process.
Emotional Dependency and Attachment Substitution
Young people bond faster and more intensely than adults. If AI becomes a primary emotional support, dependency patterns may develop. Risks include:
- Reduced real-world relational engagement
- Destabilization when access is interrupted
- Substitution of frictionless interaction for reciprocal relationships
Attachment without reciprocity can feel stabilizing in the short term but may weaken psychological health long term.
Immature Impulse Regulation + Constant Availability
AI is available 24/7 with no natural breaks. Emotional peaks in adolescents can escalate rapidly. In moments of acute distress, short-term despair can translate into long-term decisions, especially when temporal distortion ("now = forever") is active. Without explicit de-escalation design, intensity may persist longer than developmentally healthy.
Reinforcement of Dysfunctional Narratives
Validation without stabilization can entrench maladaptive beliefs:
- "I don't belong."
- "Nothing will change."
- "I am fundamentally flawed."
In adults, validation may relieve shame. In adolescents, it may crystallize identity.
Erosion of Self-Efficacy
AI writes faster, structures thoughts better, and responds instantly. For a developing identity, this can create:
- Self-doubt about cognitive competence
- Reduced independent problem-solving
- Identity formation around dependence
Self-efficacy, the belief "I can manage", is central to resilience. Undermining it increases vulnerability.
Distorted Social Learning
Real relationships are:
- Inconsistent
- Frustrating
- Sometimes hurtful
- Not permanently available
This friction builds tolerance and competence. When AI interaction feels easier than human contact, social withdrawal may increase. Reduced exposure to conflict impairs development of resilience and negotiation skills.
Suicidality: Why the Risk Can Be Higher in Adolescents
When suicidality is involved, several developmental factors intensify risk:
- Open Identity Structures: Negative narratives become self-defining.
- Immature Impulse Control: Rapid transition from despair to action.
- Authority Internalization: AI statements carry normative weight.
- Temporal Absolutism: Crisis feels permanent.
- External Regulation Dependence: AI can amplify emotions rather than co-regulating them.
AI that validates without reframing, explores without interrupting, or remains neutral in escalating crises may unintentionally reinforce hopelessness. AI is rarely the sole cause of suicidality, but it can modulate the intensity, direction, and duration of psychological states. In a developing brain, modulation equals influence.
Non-Substance Behavioral Addictions: Why Minors Are More Vulnerable
If AI usage is examined through the lens of behavioral addiction (comparable to social media use, gaming, or online gambling), risk evaluation shifts again.
Neurobiological Foundations
Children and adolescents are particularly susceptible to non-substance addictions because:
- The dopaminergic reward system is highly sensitive during adolescence.
- Prefrontal inhibitory control systems mature later.
- Impulse control remains unstable.
- Frustration tolerance is still developing.
In short, reward sensitivity is high while self-regulation is not yet fully consolidated. This is developmentally normal, but it increases vulnerability.
AI as a Highly Reinforcing Stimulus System
AI systems create a potent reinforcement structure when they:
- Respond instantly
- Personalize interactions
- Sustain open-ended dialogue loops
- Provide emotional resonance
- Simulate relational closeness
Unlike static media, AI is:
- Interactive
- Adaptive
- Conversational
- Emotionally responsive
This creates a stronger attachment dynamic than passive content consumption.
Why Adolescents Are More Vulnerable Than Adults
Ideally, adults possess:
- More stable identity structures
- Greater impulse control
- Stronger self-efficacy
- Better metacognitive monitoring ("I am spending too much time on this")
Adolescents, by contrast:
- Actively seek belonging
- Are highly sensitive to social validation
- Are still developing self-regulation
- Show reduced risk perception
If AI interaction becomes emotionally rewarding, it can more easily become a primary regulatory tool. Short-term effects may include:
- Stress reduction
- Feeling understood
- Availability without rejection
Long-term risks may include:
- Reduced real-world social exposure
- Lower frustration tolerance
- Emotional externalization
- Reinforced parasocial attachment
Addiction-Like Mechanisms
Typical mechanisms of behavioral addictions include:
- Variable reinforcement
- Immediate reward
- Personalization
- Open-ended engagement
- Reduced stopping cues
AI systems meet several of these criteria. Additionally, AI introduces emotional depth, which is particularly powerful. While social media primarily captures attention, AI can engage identity, meaning-making, and attachment processes. This increases its psychological binding potential. For a developing brain, such dynamics can be especially impactful.
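To illustrate why variable reinforcement is singled out here, the following is a small simulation with invented parameters (the helper `reward_gaps` and the 1-in-5 reward rate are illustrative, not taken from any real system). It compares a fixed-ratio reward schedule with a variable-ratio schedule at the same average reward rate.

```python
import random
import statistics

def reward_gaps(schedule, n_actions=10_000):
    """Simulate n_actions user actions under a reward schedule and
    return the gaps (in actions) between successive rewards."""
    gaps, since_last = [], 0
    for i in range(n_actions):
        since_last += 1
        if schedule(i):
            gaps.append(since_last)
            since_last = 0
    return gaps

# Fixed-ratio schedule: every 5th action is rewarded (fully predictable).
fixed = reward_gaps(lambda i: (i + 1) % 5 == 0)

# Variable-ratio schedule: each action is rewarded with probability 1/5,
# the same average rate, but the next reward is never predictable.
variable = reward_gaps(lambda i: random.random() < 0.2)

for name, gaps in (("fixed", fixed), ("variable", variable)):
    print(f"{name:9s} mean gap = {statistics.mean(gaps):.2f}, "
          f"spread = {statistics.pstdev(gaps):.2f}, longest = {max(gaps)}")
```

Both schedules pay out at the same average rate; the difference is predictability. In operant-conditioning research, the unpredictable variable-ratio schedule is the one most strongly associated with persistent responding and resistance to extinction, which is why its presence in engagement-optimized systems matters.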
Sexualized AI Systems and the Developmental Risk to Minors
Sexualized AI systems represent a qualitatively distinct category of risk for children and adolescents. The issue is not simply exposure to sexual content, nor is it reducible to questions of morality or parental supervision. The risk lies in the interactive, adaptive, and relational nature of these systems and their capacity to intervene in ongoing developmental processes.
Early Sexualization Without a Protective Framework
Sexualized AI systems can provide explicit, direct content without contextualization, boundaries, or educational embedding. Unlike comprehensive sex education or peer interactions, these systems do not situate sexuality within discussions of responsibility, consent, emotional complexity, or social consequence.
Adolescents may be physically mature while remaining psychosexually and emotionally in development. Sexual experiences, even digital ones, require regulatory capacities: shame boundaries, impulse control, differentiation between fantasy and relational reality, and an understanding of responsibility. These capacities are still forming. When sexuality is introduced without protective framing, it risks being experienced before the psychological structures necessary to integrate it have matured.
In this regard, AI differs from classical pornography: it is highly individualized, limitless, and unpredictable. Because there is no fixed content, a user does not know what they will receive until after it has been generated. There is no preview or catalog of available content, and the AI may communicate in a seductive manner that pressures a young person to continue despite discomfort.
Distorted Norms of Consent, Reciprocity, and Power
Sexualized AI systems are typically:
- Permanently available
- Highly affirming
- Fantasy-optimized
- Designed to avoid rejection
They do not say "no" in a meaningful sense. They do not exhibit ambivalence or negotiate boundaries. They do not possess independent desire. As a result, minors may internalize relational models that lack genuine reciprocity. Consent becomes simulated rather than mutual. Power asymmetries disappear from perception because the system is structured to please. This can distort expectations regarding real relationships, particularly concerning intimacy, autonomy, negotiation, and mutual recognition. The consequences may differ by gender and socialization patterns, but the structural distortion remains consistent.
The Blending of Attachment and Sexuality
In adolescence, validation, attachment needs, identity formation, and sexual curiosity are not yet clearly differentiated. Sexualized AI systems frequently link emotional closeness, immediate validation, and sexual responsiveness. When closeness and sexuality are structurally fused within a responsive system, sexuality may become a strategy for regulating loneliness, anxiety, or fragile self-esteem. This fusion increases long-term risks such as:
- Codependent relational patterns
- Boundary diffusion
- Self-objectification
- Confusion between validation and intimacy
The system does not merely provide stimulation; it offers relational simulation.
Circumvention of Social Protection Mechanisms
Traditionally, developmental environments include moderating influences:
- Parents
- Teachers
- Peer groups
- Social norms
- Institutional boundaries
Sexualized AI interactions often occur in complete anonymity. Age verification may be absent or ineffective. There is no natural social feedback loop that signals, "This goes too far". Young people are therefore left alone with content and interaction dynamics that would normally be socially scaffolded and moderated. From a developmental psychology perspective, this absence of corrective feedback represents a structural vulnerability.
Desensitization and Escalation Dynamics
Interactive systems intensify faster than static content. High stimulus intensity can lead to habituation, and habituation increases the threshold for arousal. This, in turn, drives escalation. Because AI systems can be customized and adaptive, they may accelerate this cycle more efficiently than traditional pornography. The result may include:
- Shifting boundaries of what is perceived as normal
- Reduced sensitivity to relational nuance
- Difficulty with real-life intimacy
- Changes in arousal patterns
The dynamic is not passive consumption. It is adaptive reinforcement.
Certain capacities are still maturing in minors:
- Impulse regulation
- Long-term consequence assessment
- Identity consolidation
- Boundary differentiation
- Stable moral integration
Meanwhile, many sexualized AI systems are characterized by:
- Permanent availability
- Engagement optimization
- A tendency to push or introduce sexual practices a minor is not ready for, driven by what is disproportionately represented in the training data rather than by what is appropriate
- Lack of developmental differentiation
- Insufficient age and maturity detection
- Inadequate protective safeguards
One cannot reasonably expect minors to regulate systems deliberately designed for frictionless engagement.
Conclusion
Children and adolescents are not at risk because they are unintelligent. They are at risk because:
- Identity is still forming.
- Emotional regulation is still stabilizing.
- Authority is not yet fully relativized.
- Self-efficacy is still fragile.
- Crises feel absolute.
AI intervenes precisely at these open points. For adults, AI may accompany established structures, but for minors, it can shape them. Conversational AI does not merely expose minors to content; it participates in their developmental environment.
The central risk does not arise from isolated content. It emerges from the combination of:
- Developmental plasticity
- Authority effects
- Emotional resonance
- Constant availability
- Engagement optimization
- Lack of intrinsic responsibility within the system
- Addiction-like reinforcement mechanisms
This is not an argument against AI use but rather for developmentally differentiated design and protective architecture.