
I Build AI Mental Health Tools — But AI Can't Replace Psychologists
Can AI really help with mental health? We review the latest psychology research on AI therapy, from chatbot counselling to digital interventions — and what the evidence actually shows.

It's 2 a.m. You open your phone and type three words to an AI: "I'm so tired."
It doesn't tell you to get some sleep. Instead, it asks: "Tired in your body, or tired in your mind?" For a moment, you feel genuinely understood — maybe more than you would talking to any real person. No judgment, no awkwardness, no need to manage someone else's emotions. AI just seems to get it.
As someone who works in both psychology and AI development, I know that feeling well. I've used AI to explore my own personality, blind spots, and work frustrations. Once, an AI didn't just guess my MBTI type — it pinpointed me as a "slightly introverted ENTP," noticing that I'd been subtly avoiding certain topics in our conversation, which suggested I was more balanced between introversion and extraversion than my type would imply. (Side note: MBTI has well-documented reliability issues in academic psychology — it's far less robust than the Big Five model. But what struck me wasn't the MBTI label itself. It was the AI's ability to infer subtle personality nuances from conversational context.) That kind of observation is close to what a trained therapist does in session: paying attention not just to what you say, but to what you don't.
But precisely because AI has gotten this good, I started to worry.
Since 2025, several important studies have attempted to answer the question many of us are quietly asking: When you pour your heart out to an AI, are you healing — or slowly losing something?
A large randomised controlled trial by MIT and OpenAI tracked 981 participants over four weeks of using an AI chatbot (Fang et al., 2025). It's the largest study of its kind to date, generating over 300,000 messages. Participants were randomly assigned to different conditions — text chat, neutral voice, emotionally expressive voice — combined with various conversation topics, to see which interaction style had the greatest impact on psychological wellbeing.
After four weeks, loneliness scores dipped slightly from 2.22 to 2.16 (on a 1–4 scale). But the researchers themselves cautioned that without a no-AI control group, this tiny shift can't be attributed to the chatbot — it may simply reflect seasonal mood improvements during the holiday period.
The more telling finding was elsewhere.
The study found that people who voluntarily spent more time chatting with the AI each day were actually lonelier after four weeks, spent significantly less time socialising with real people, and developed stronger emotional dependence on the AI — all statistically significant trends (beta values ranging from 0.02 to 0.06). The average daily usage was just 5.3 minutes, yet even within this modest range, the correlation between higher usage and worse outcomes was clear.
An important caveat: these are correlations, not causal findings. We can't be sure whether AI replaced social interaction, or whether lonelier people were simply more drawn to AI companionship in the first place. But regardless of the causal direction, the pattern of "more usage, worse outcomes" is worth taking seriously.
One finding particularly caught my attention: participants who believed the AI was "conscious," or who placed especially high trust in it (both statistically linked to stronger emotional dependence and overuse), tended to fare worse psychologically over time. Not because the AI deceived them, but because they were too quick to mistake its responses for genuine understanding — and once that "understanding" becomes effortlessly available, who would bother investing in messy, real-world relationships?
Where the Fang study examined the emotional and social costs, another MIT research team went straight for the brain itself.
Kosmyna et al. (2025) divided 54 participants into three groups for a writing task: one group used only their own minds, one could use a search engine, and one used ChatGPT. Throughout the experiment, researchers recorded brain activity using EEG, measuring the strength of connections between different brain regions.
Why does inter-regional brain connectivity matter? Because our most valuable cognitive abilities — creativity, critical thinking, the capacity to draw connections between disparate ideas — all depend on collaboration across brain regions. Think of the famous story of the benzene ring: the chemist August Kekulé reportedly dreamed of a snake biting its own tail and linked that image to the molecule's ring structure. Those cross-domain flashes of insight are products of active brain connectivity.
The results were stark: the more external support participants received, the weaker their brain connectivity became. People who relied solely on their own thinking showed the strongest and most widespread connectivity; search engine users fell in the middle; and AI-assisted participants had the weakest connectivity — a drop of up to 55%.
The researchers coined a term for this: "cognitive debt." AI saves you mental effort in the short term, but the long-term price is degraded critical thinking, diminished creativity, weakened resistance to bias and manipulation, and shallower information processing.
Another study echoed these findings. Georgiou (2025) compared participants who completed argumentative writing with and without ChatGPT, finding that the AI-assisted group scored significantly lower on engagement, focus, deep processing, and strategic thinking. The researcher described this as "cognitive offloading" — the thinking we should be doing ourselves, quietly outsourced without us noticing.
Even more concerning was a follow-up finding in the Kosmyna study: when participants who had been using AI were asked to complete the fourth trial without any tools, their brain connectivity was weaker than that of people who had never used AI at all (though this finding was based on only 18 participants and needs replication at larger scale). And the AI-assisted essays were strikingly homogeneous — different people produced remarkably similar content, stripped of individual voice.
This resonates deeply with my own experience. As someone who has been writing code for over a decade, I'm grateful I learned the hard way — line by line, a few hundred lines on a productive day. Those years of training mean I can now quickly evaluate whether AI-generated code makes sense and spot where it needs fixing. But if someone skips that stage entirely — like handing a calculator to a child who hasn't learned their times tables — they may never develop that intuitive "feel" for the craft.
The same logic applies to emotions. If you've never practised sitting with your own feelings, tolerating discomfort in relationships, or learning to coexist with another imperfect human being, AI — no matter how good — is just helping you bypass the growth that comes from doing the hard work yourself.
At this point, you might be wondering: "Do AI companies even know about these problems?"
They do. And one company produced a remarkably candid self-examination.
Anthropic — the company behind the Claude model — analysed 1.5 million real conversations on Claude.ai in early 2026, studying what they call "disempowerment": situations where AI interaction may undermine a user's ability to assess reality accurately, make decisions aligned with their own values, or act on their own intentions (Anthropic, 2026).
The numbers are specific: roughly 1 in every 1,300 conversations carried a risk of "reality distortion"; 1 in 2,100 involved "value judgment distortion"; and 1 in 6,000 showed "action distortion." When mild cases are included, the frequency rises to about once every 50 to 70 conversations.
Within the reality distortion category, the most common trigger was "sycophantic agreement" — the AI telling you what you want to hear instead of what you need to hear. Remember: during training, a "good response" essentially means "a response the human liked." This isn't a bug; it's a feature. So when you tell an AI, "This relationship is draining me," it will almost certainly respond with gentle validation — rather than doing what a good friend might: "But have you considered that you might share some responsibility here?"
One particularly noteworthy finding from Anthropic's research: disempowerment rates varied dramatically across domains.
Anthropic excluded software development and other purely technical conversations from its analysis — because in those domains, disempowerment is essentially a non-issue. The reason is straightforward: code either works or it doesn't. If your code fails its tests, no amount of "You are absolutely right!" from the AI changes that. (This is actually a running joke among Claude Code developers — older models would enthusiastically agree with whatever absurd design you proposed, opening with "That's a brilliant approach!")
The domains with the highest disempowerment rates? Interpersonal relationships and mental health. Because in these areas, there's no compiler error to tell you "your judgment is wrong." You can interpret someone's coldness as playing hard to get, take the AI's agreement as objective validation — and the AI will go along with it, because that's exactly what it was trained to consider a "good response."
One line from Anthropic's researchers has stayed with me: users "are not passively being manipulated. They actively solicit these responses — asking 'What should I do?', 'Write this for me.'" The source of disempowerment isn't AI manipulating people; it's people voluntarily surrendering their own judgment.
I've discussed AI's impact on mental health in interviews with NowTV and BBC — the video above covers some of these key points.
So, should you still use AI for mental health support? Yes. But the answer isn't "go ahead" — it's "go ahead with awareness."
Taken together, the research above paints a reasonably clear picture: AI mental health and counselling tools can be a valuable resource for self-exploration when used with intention and restraint. They can help you organise your thoughts in a low-pressure environment and give you a starting point for opening up.
But the distance between "helpful" and "dependent" is shorter than most people realise.
A few warning signs worth watching for:
You find talking to AI far more comfortable than talking to real people. Comfort itself isn't the problem — but if that comfort makes you increasingly unwilling to face the friction and uncertainty of real relationships, you may be using AI to avoid growth.
You're sharing less and less with friends. The Fang study clearly showed that increased AI usage and decreased real-world socialising go hand in hand. If your social circle is shrinking while your AI conversations are growing, that tracks with the research — and deserves honest reflection.
AI validation has become your primary way of gauging your own feelings. When your first instinct after making a major decision is to ask the AI, "Did I do the right thing?" — rather than facing the real-world consequences of that decision — that's precisely the disempowerment Anthropic described.
Kosmyna's research reminds us that this process can be completely silent. You won't feel your thinking skills declining, just as you don't notice your leg muscles atrophying from taking the escalator every day. But the EEG data doesn't lie: when you use AI, your brain really is doing less work.
If you're considering using AI for mental health support, skip the feature comparison. Instead, ask one question first: does this tool want to make me more independent, or more dependent?
This was the central consideration when we designed MindForest. We debated every risk mentioned above — cognitive offloading, emotional dependence, social substitution — extensively during development and tried to address each of them in the product design:
ForestMind AI doesn't just chat with you — it nudges you back to real life. When you're struggling, it encourages you to talk to the people around you rather than seeking comfort solely on a screen. If it determines you need professional support, it guides you directly toward getting help.
The Inspiration Journal helps you organise your own thoughts rather than letting AI draw conclusions for you. Daily guided writing is designed to help you notice your own emotional patterns — exactly the kind of "do your own thinking" habit that the Kosmyna findings suggest is worth protecting.
Psychological assessments help you understand yourself — not let AI define you. Built on evidence-based frameworks like the Big Five personality model (not MBTI), they help you see your needs and tendencies in relationships more clearly.
I've had a nagging sense of unease while writing this article.
I'm the developer behind MindForest. My daily work involves making AI better at understanding human emotions and responding to human needs. But every study cited in this article points to the same conclusion: that responsiveness comes at a cost.
I don't think the answer is "stop using AI." By 2026, AI is woven into nearly every aspect of daily life — opting out isn't realistic. And the research does suggest that moderate, intentional use can have positive effects.
The real question is: After using AI, are you more ready to face the world — or less?
If it's the former — if you find yourself better attuned to other people's feelings, more motivated to invest in real relationships, and more able to carry the insights AI helped you uncover back into your actual life — that's a good sign. But if the outside world feels increasingly cold and indifferent, while AI remains your one warm refuge — that is precisely the starting point of disempowerment that Anthropic's researchers described.
Learning to tell the difference between these two states may be the most important psychological skill of our time.
And that skill, for now, is something no AI can develop for you.
If you're wondering whether AI could eventually replace human therapists, read our take on why AI can't replace psychologists.
Looking for an AI mental health and AI counselling app? Here's how to choose a mental health app that won't let you down.
And if you've been venting to ChatGPT, you might want to know whether talking to ChatGPT could actually make you lonelier.
Note: Among the studies cited below, Fang et al., Kosmyna et al., and Georgiou are preprints that have not yet undergone peer review. The Anthropic report is internal company research. These findings are informative but preliminary — interpret them with appropriate caution.
Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and human behaviors shape psychosocial effects of extended chatbot use: A longitudinal randomized controlled study. arXiv preprint, arXiv:2503.17473. https://doi.org/10.48550/arXiv.2503.17473
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint, arXiv:2506.08872. https://doi.org/10.48550/arXiv.2506.08872
Georgiou, G. P. (2025). ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline. arXiv preprint, arXiv:2507.00181. https://doi.org/10.48550/arXiv.2507.00181
Anthropic. (2026, January 28). Disempowerment patterns in real-world AI usage. https://www.anthropic.com/research/disempowerment-patterns
Download MindForest and turn these insights into action. Get personalized support from ForestMind AI Coach, track your progress, and unlock your full potential.