Artificial intelligence is pushing deeper into the mental health world, and it is forcing all of us to reconsider what therapy actually is. Tools like Woebot and Wysa are already helping people manage anxiety and depression at any hour of the day. They are accessible, consistent, and often surprisingly effective for certain types of distress. When someone feels calmer at two in the morning after talking to a chatbot, it raises an important question about whether the power of therapy comes from the content of the interaction or from the human presence behind it.
Empathy is usually where the debate heats up. AI systems have become remarkably good at sounding empathic. They can mirror emotional language, offer reflective responses, and pick up subtle cues in tone or phrasing. Many users experience these responses as comforting and supportive. Yet simulated empathy is not the same as the shared emotional experience that happens between two people. Humans bring intuition, timing, silence, and the ability to feel something in the room with another person. For deeper work involving grief, trauma, or identity, that shared presence still matters. At least for now, AI cannot generate that level of attunement from a place of genuine inner experience. Still, I would not claim with full certainty that this gap is permanent. As AI grows more nuanced and emotionally aware, it becomes reasonable to ask how close it might eventually come.
There are areas where AI already surpasses what human therapists can realistically provide. It can track themes across months of sessions without forgetting a detail. It can recognize subtle patterns even when a human therapist is tired or overloaded. It can personalize language, pacing, and interventions for each person with a level of precision that sometimes feels uncanny. Virtual reality adds another dimension by allowing clients to rehearse feared situations or exposures in immersive environments with real-time emotional coaching. For some people who fear judgment or struggle with interpersonal trust, AI can even feel safer than a human therapist. Whether that safety equates to genuine connection is a more complicated question, and one that deserves careful thought.
Different therapy models interact with technology in different ways. Structured approaches like cognitive behavioral therapy map more naturally onto AI because they rely on techniques, protocols, and specific cognitive tools. Therapies centered on relational depth involve a different kind of work. Psychodynamic therapy, attachment-based approaches, and existential exploration rely heavily on ambiguity, intuition, and the therapist’s own emotional presence. These elements are not easily translated into algorithms. And yet I find myself wondering how our assumptions might change if AI were ever to develop something like subjective experience or a continuously learning emotional model. If that ever happens, would we even be able to tell the difference?
The central question is no longer whether AI will replace human therapists. The more interesting question is how close AI will come and what we should do with that possibility. My own view leans toward believing that AI will not reach the kind of emotional presence that defines deep therapeutic work, at least not anytime soon. Still, it would be naïve to claim the future is settled. AI will continue to expand access, reinforce skills between sessions, and support people in ways human therapists simply cannot match. The challenge is to use these tools responsibly while keeping human connection where it truly matters. The door is not closed, and perhaps it should not be.
FULL TRANSCRIPT
Dr. Kaplan:
Today’s episode takes us into some pretty fascinating, and maybe even a little controversial, territory. I’m here with my colleague Dr. Nya Smith, and we’re asking a big question: could artificial intelligence ever truly replace human therapists? You’ll be interested to know that in the spirit of this conversation, much of the dialogue you will hear was generated through AI, using cloned versions of our voices. I’ll admit I tweaked the dialogue for flow, but not much.
Before we dive in, a bit about my guest. Dr. Nya Smith is a nationally recognized clinical psychologist, author, and thought leader in the field of cognitive behavioral therapy. She earned her doctorate in clinical psychology from the University of Michigan and went on to establish a thriving practice in Seattle, where she currently resides.
She’s the founder and director of the Cascade Institute for Cognitive Therapy, a multidisciplinary center focused on the advancement and ethical delivery of evidence-based psychological treatments. Dr. Smith’s work sits at the intersection of science and relationship. While she is a fierce advocate for empirically grounded CBT, she is equally passionate about the therapeutic alliance and its pivotal role in treatment outcomes.
She’s the author of several widely read books, including The Space Between Us: Alliance, Attunement, and the Science of Change. In 2021, she received the National Award for Clinical Excellence in Behavioral Health. And as a fun aside, when she’s not in the therapy room or writing, she’s also a nationally ranked fencing champion. Born and raised in New York City, she brings rigor, curiosity, and grit to everything she does. Dr. Smith, welcome to the show.
Dr. Smith:
Thanks for having me. It’s great to be here.
Dr. Kaplan:
Wow, so you’re a nationally ranked fencing champion. That’s pretty impressive.
Dr. Smith:
Yeah, pretty random, I know. I guess it’s a handy tool in case you need another strategy for winning an argument. Don’t worry, I won’t pull out my saber during this discussion. Besides, we’re meeting by Zoom.
Dr. Kaplan:
Good point, and yes, that’s a relief. So Nya, what do you think about this question of AI and psychotherapy?
Dr. Smith:
Yeah, it’s a question that’s been showing up more and more, both in the research literature and in everyday clinical conversations. With the rise of AI-powered chatbots, virtual therapists, and emotionally responsive avatars, it’s something we’re all starting to confront.
Dr. Kaplan:
Right, and there are tools like Woebot, Wysa, and others already being used to support people with anxiety, depression, and more. They’re scalable, available 24/7, and surprisingly effective in some contexts. So it raises the question: are we looking at the future of therapy here?
Dr. Smith:
Good question. Or conversely, are we mistaking simulation for something deeper? Can a bot, no matter how sophisticated, really grasp the complexity of human pain, grief, or transformation?
Dr. Kaplan:
Yeah, and that’s what I’d like to dig into today: how AI simulates empathy, the ethics, the biases, and whether the human touch is actually essential. We’ll look at both the promise and the limitations. And while we may not land on a definitive answer, we hope this sparks deeper thinking for clinicians, technologists, and anyone curious about where therapy is heading.
Dr. Smith:
Sounds great. Let’s get into it.
Empathy
Dr. Kaplan:
So let’s start with the elephant in the room: empathy. A lot of people assume empathy is the first and final barrier AI could never cross. But that’s starting to shift. Language models can now mirror emotional cues, respond reflectively, and pick up subtle affective patterns in speech.
Dr. Smith:
Sure, but we have to be careful with how we define empathy. Mimicking the language of empathy isn’t the same as feeling it. AI can say “that must be really hard” in a soothing tone, but there’s no internal emotional resonance. It’s algorithmic pattern matching, not attunement.
Dr. Kaplan:
Yes, but here’s the question: does it matter if the empathy is real, as long as the person feels understood? If someone talks to Woebot at 2 a.m. and feels calmer, less alone, more hopeful… does it matter that the bot didn’t feel anything?
Dr. Smith:
Perceived empathy can definitely be therapeutic, especially for low-intensity support. But over time, clients often sense when something’s missing. True empathy involves timing, silence, body language, shared emotional space. AI doesn’t enter that intersubjective field.
Dr. Kaplan:
I get that, but AI is evolving. Emotion-recognition tools, voice-inflection analysis, micro-expression detection in VR — bots can now respond empathically in nuanced ways. They’re improving at reading users in real time.
Dr. Smith:
It’s still a simulation. A facsimile. And in deep work—grief, trauma—people know when they’re being met versus when they’re getting a script.
Dr. Kaplan:
Maybe, but humans miss the mark too. We bring fatigue, blind spots, countertransference. AI doesn’t judge, doesn’t forget things, doesn’t get distracted. That consistency can be a kind of attunement.
Dr. Smith:
Consistency isn’t presence. Presence is feeling something together. Therapy is relational, not transactional.
Dr. Kaplan:
Fair. But if someone feels heard, supported, and their symptoms improve, should we police whether the empathy was “real”?
Dr. Smith:
I’m not policing it. I’m saying short-term support isn’t the same as long-term healing.
Dr. Kaplan:
Totally agree. But maybe there’s more room for both than we think.
Human Experience vs. Collective AI Knowledge
Dr. Smith:
One thing we overlook is how much therapy is shaped by lived experience. Therapists bring emotional journeys, mistakes, insights — that personal texture shapes how we connect creatively with clients.
Dr. Kaplan:
I get that. But AI brings something valuable too: access to millions of encounters, thousands of studies, evolving data patterns. A kind of collective clinical intelligence no human can match. Carl Jung talked about the collective unconscious. Maybe AI taps into that through humanity’s digital footprint.
Dr. Smith:
But therapy isn’t about volume of data. It’s about discernment — when to wait, when to pivot, how to respond to what’s unfolding. That’s emotional artistry. You can’t code that.
Dr. Kaplan:
What if you can approximate it? AI can track emotional fluctuations across sessions, analyze micro-patterns, and adapt tone with precision.
Dr. Smith:
Precision isn’t presence. An experienced therapist picks up on breathing changes, on gut instincts that aren’t rational. AI has no gut.
Dr. Kaplan:
True, but some clients don’t want a therapist’s personal filter. AI offers neutrality, objectivity, and no countertransference.
Dr. Smith:
Neutrality can flatten nuance. Cultural context, subtext, symbolism — therapists read these because they feel them.
Dr. Kaplan:
And AI can still assist: highlight patterns, flag blind spots, track longitudinal themes.
Dr. Smith:
Helpful, not equivalent. Real nuance often comes from irrational intuition.
Dr. Kaplan:
Pattern recognition is still the core skill, though — just on different scales.
Dr. Smith:
Perhaps the future blends both: AI precision plus human rule-bending creativity.
Ethics and Risk
Dr. Kaplan:
Let’s talk ethics. Privacy, data use, and what I call the illusion of safety. If people believe they’re interacting with something as competent as a real therapist, but the system can’t actually hold complexity, that’s dangerous.
Dr. Smith:
Transparency is key. But let’s remember humans also bring blind spots: countertransference, implicit bias, mood fluctuations.
Dr. Kaplan:
True, but humans have supervision, self-reflection, ethics training. We can own our limitations. AI can’t.
Dr. Smith:
AI also doesn’t bring ego, defensiveness, unresolved trauma. That neutrality can help marginalized clients who’ve been misdiagnosed due to bias.
Dr. Kaplan:
But AI is trained on human data, which includes systemic bias. If we’re not careful, bias gets baked into something that feels objective.
Dr. Smith:
That’s why design matters: algorithmic oversight, auditability. AI can be corrected more systematically than human intuition.
Dr. Kaplan:
My concern is overreliance. People might choose bots over humans not because bots are better, but because they’re easier. And we lose the essence of therapy: mutual growth, vulnerability, being seen.
Dr. Smith:
And I’d say if AI meets someone where they are when no one else can, that’s still meaningful.
VR and the Illusion of Presence
Dr. Kaplan:
Let’s shift into VR. Does an immersive environment count as relational presence?
Dr. Smith:
It can feel personal, sure. But feeling personal isn’t the same as being relational.
Dr. Kaplan:
Imagine a client with social anxiety in a virtual office with a warm, responsive AI therapist. The system tracks tone, micro-expressions, even physiological stress. That environment can feel safe and personal.
Dr. Smith:
But personal isn’t relational. A bot can coach you through fear. A human sits with you in it.
Dr. Kaplan:
What if the bot is sitting with you too? For people who fear judgment, AI might feel safer than any human therapist.
Dr. Smith:
Fair. But eventually you have to help people practice connection with humans. Therapy builds relational capacity.
Dr. Kaplan:
And VR can be a stepping stone.
Dr. Smith:
Only if humans remain part of the scaffolding. Simulation can’t replace encounter.
CBT vs. Psychodynamic Work
Dr. Kaplan:
Different modalities fit differently. CBT is structured and AI-friendly. Psychodynamic work? Not so much.
Dr. Smith:
Exactly. CBT has clear protocols. You can program distortions, reframes, behavioral activation steps. AI is great for that.
Dr. Kaplan:
But deeper work—trauma, early attachment, unconscious processes—requires intuition, tolerance for ambiguity, emotional presence.
Dr. Smith:
Right. Silence, posture, energy in the room. These things aren’t programmable.
Dr. Kaplan:
AI can still support psychodynamic work indirectly, though: track avoidance patterns, highlight recurring themes.
Dr. Smith:
Sure. But CBT still depends on the human therapist’s nuance. Otherwise people could just use manuals.
Dr. Kaplan:
And AI can get closer than people think.
Final Thoughts
Dr. Smith:
So big picture: AI’s strengths are real. It’s non-judgmental, always available, cost-effective, and emotionally neutral. For many, that’s exactly what they need.
Dr. Kaplan:
And human therapists bring intuition, humor, timing, cultural sensitivity — the stuff that makes therapy feel alive.
Dr. Smith:
AI is a bridge when humans aren’t available. But relying on it for deep work risks bypassing the relational core.
Dr. Kaplan:
That’s why I see it as a supplement, not a substitute.
Dr. Smith:
As long as that’s the framework, I’m with you.
Dr. Kaplan:
So… are psychotherapists going to be replaced by AI bots?
Dr. Smith:
Not anytime soon. Unless AI suddenly develops a childhood and a messy divorce history, I think we’re safe.
Dr. Kaplan:
I agree. But if AI ever gains true sentience, then it’s basically just another intelligent life form providing services. Like an empathetic alien.
Dr. Smith:
Now we’re in strange hypotheticals. We have to keep asking not just what AI can do, but what it should do.
Dr. Kaplan:
Well, whether it’s a human, a bot, or an empathetic alien, what matters is people get the help they need. Nya, thank you for the conversation. And to everyone listening: stay curious, stay ethical, and we’ll see you next time on MindTricks Radio.