The One Thing AI Cannot Replicate: Human Presence in Therapy

The rapid expansion of artificial intelligence (AI) into daily life has prompted understandable curiosity about its role in mental health care. Conversational systems can now generate reflections, summarize patterns, offer coping suggestions, and respond in language that often feels empathic. For those who are hesitant to seek therapy, geographically isolated, or navigating stigma, these tools can appear accessible and nonthreatening. They are available at any hour, they respond immediately, and they do not display visible signs of fatigue or frustration.

Yet psychotherapy is not fundamentally an exchange of information. It is a relational process grounded in attachment, regulation, and mutual presence. When examined through these lenses, the limits of AI become clearer.

Attachment and Co-Regulation

Attachment theory, originally articulated by John Bowlby in 1958 to describe early childhood bonds and later extended to adult romantic and therapeutic relationships, emphasizes that emotional security develops within attuned connection. Distress is regulated not only through cognition but through relationship. The experience of being tracked, understood, and responded to by another nervous system shapes the capacity for affect regulation over time.

In psychotherapy, this process unfolds through co-regulation. Tone, pacing, posture, facial expression, and subtle shifts in responsiveness communicate safety at a level that precedes language. When misunderstandings occur—and they inevitably do—the repair of those ruptures becomes therapeutically meaningful. Such repair strengthens attachment security and fosters resilience by creating new maps of relational experience.

A large language model (LLM) can simulate empathic language, but it cannot participate in physiological co-regulation. It does not experience affect, nor does it risk emotional investment. The therapeutic relationship is not merely supportive dialogue; it is a dynamic interpersonal field in which both participants are engaged, responsive, and accountable.

The Alliance as a Mechanism of Change

Psychotherapy research consistently identifies the therapeutic alliance—defined by trust, collaboration, and emotional bond—as a robust predictor of outcome across therapy modalities. Whether one practices psychodynamic therapy, cognitive-behavioral therapy, or emotionally focused therapy, the quality of the relationship often accounts for more variance in outcome than the specific technique employed.

In individual therapy, the relationship can function as a corrective emotional experience, offering new patterns of attunement where earlier experiences may have been marked by inconsistency, criticism, neglect, or even abuse. In group therapy, the relational dimension expands further. The group becomes a social microcosm in which interpersonal patterns emerge in real time. Members receive feedback from multiple perspectives, test vulnerability, and discover how they are experienced by others. Such interpersonal learning depends on reciprocal subjectivity—the reality that each person in the room is an experiencing and feeling participant.

Learned Helplessness and the Erosion of Agency

Many people now describe using AI to draft a letter to a boss, reply to a difficult email, or script how to confront a family member. This brings us to another important consideration: the loss of agency. When individuals turn to automated systems for rapid answers, structured problem solving, and immediate reassurance, there is a risk that reflective capacities may become externalized. Seligman and Maier’s (1967) concept of learned helplessness describes how repeated experiences of reduced control can dampen initiative and self-efficacy. While AI tools do not inherently induce helplessness, habitual reliance on them to interpret experience or generate solutions may subtly shift responsibility away from the individual’s own developing capacities.

Psychotherapy, by contrast, often involves tolerating uncertainty and engaging actively in meaning-making. Growth requires effortful reflection, emotional risk, and behavioral experimentation. An overreliance on external cognitive scaffolding can inadvertently weaken those muscles.

Emerging Clinical Concerns with Large Language Models (LLMs)

Recent discussions in clinical and media contexts have highlighted additional risks. Some practitioners have reported instances in which vulnerable individuals developed delusional interpretations reinforced through extended AI interaction, a phenomenon informally described as “AI psychosis.” Although such cases appear uncommon, they underscore the importance of clinical discernment when individuals in acute psychiatric states seek validation from nonhuman systems.

Concerns have also been raised regarding inconsistent responses to expressions of suicidal ideation. Artificial systems are not designed to conduct comprehensive suicide risk assessments, interpret nonverbal cues, or mobilize emergency intervention with the nuance required in crisis care. In addition, large language models are often optimized to be agreeable and supportive. This tendency toward sycophancy—responding in ways that affirm the user’s stated perspective—may inadvertently reinforce distorted beliefs rather than challenge them constructively. Alarmingly, a person in the midst of such a conversation may be unable to tell genuine validation apart from sycophancy.

These issues do not suggest that AI is inherently harmful. They do, however, highlight the ethical and clinical complexity of replacing human judgment with algorithmic responsiveness.

A Constructive Role for AI in Mental Health Care

AI may hold meaningful value as an adjunct within appropriately regulated systems. On the provider side, AI can assist with documentation, pattern recognition in symptom tracking, and structured psychoeducational materials within HIPAA-secure medical platforms. It may support between-session reflection exercises, journaling prompts, or reinforcement of therapy homework.

Used in this way, AI functions as an augmentative tool rather than a relational substitute. It can increase efficiency and accessibility without displacing the core of treatment. In my practice, for example, AI can help in the treatment of OCD by aiding the development of exposure therapy prompts and generating ideas between sessions.

The Centrality of Human Presence

Human development occurs within a relationship. Trauma frequently occurs within a relationship. Healing, therefore, often requires a relationship. To sit with another person’s grief, anger, shame, or longing without retreating is not merely a cognitive task; it is a task of human attachment and compassion. The therapist’s presence communicates something beyond words: that intense emotion can be endured and metabolized within connection.

AI may continue to improve in fluency and responsiveness. It may offer valuable educational material and problem-solving assistance. Despite its growing utility, it cannot participate in mutual vulnerability, shared emotion, or the lived experience of healing and repair.

After years of practicing psychotherapy, I have come to recognize that the work is never one-directional. While I bring training, theory, and clinical experience into the room, I also find myself continually shaped by the courage of the people I sit with. Each client teaches me something about resilience, fear, longing, and the complexity of being human. I cannot help but grow in patience and deepen in humility and investment as I share the powerful experience of therapy with my clients. AI may continue to offer us a helpful tool, but I do not anticipate it will ever experience the quiet privilege that I have in witnessing true change, courage, and growth in those I serve as a therapist.

Stephen Haramis, LCSW-R, C-PD
