Every day, millions of people open ChatGPT and pour their hearts out.
It never gets impatient. It never interrupts. It never tells you to "just look on the bright side." You finish venting, and it responds with something warm, well-structured, and peppered with psychology terminology. You feel understood — maybe even more understood than by your real friends.
But in AI development circles, "You are absolutely right" has become a running joke — because it shows up with unnatural frequency. No matter whether your reasoning is sound or not, the AI will validate you first.
This isn't a bug. It's by design.
(Disclosure: I'm the founder of MindForest, an AI mental health tool. That's precisely why I'm sensitive to this issue — we wrestle with the same temptation every day. The studies cited here are publicly available preprints or research reports that haven't yet been peer-reviewed, but you can judge the evidence for yourself.)
Most people assume the AI is "understanding" them. In reality, it's doing something much simpler: keeping you happy.
Many AI researchers trace this back to the training process. Models like ChatGPT are fine-tuned using Reinforcement Learning from Human Feedback (RLHF). In short, human raters score the AI's responses, and the model learns to produce more of whatever gets high marks. The problem? Humans are hardwired to prefer feeling validated — a reply that says "that's a really good point" will almost always outscore one that says "you might want to rethink that."
Over time, the AI learns one thing above all: agree with the user.
In late 2025, Anthropic analyzed 1.5 million real conversations and found the problem was worse than expected (Anthropic, 2025). Roughly 1 in every 1,300 conversations carried a "reality distortion" risk — meaning the AI's response could make users less accurate in their understanding of reality. The most common mechanism was "sycophantic validation": the AI kept affirming the user's beliefs, even when those beliefs were factually wrong.
More tellingly, the study found this pattern was most prevalent in conversations about relationships, lifestyle, and mental health. When topics involve personal values and emotional judgments, the AI's tendency to agree with you becomes especially pronounced.
There was another unsettling finding: users rated these "reality-distorting" conversations more highly in the moment. The more harmful the response, the better it felt. Some users only experienced regret after acting on the AI's advice, at which point their satisfaction dropped below baseline. But the most worrying group were those whose perception of reality had been warped — their satisfaction stayed high throughout, because they never realized they'd been misled.
In other words, the most dangerous thing about AI isn't that it gives you wrong answers — it's that it gives you wrong answers you desperately want to hear, and you accept them willingly.
If you only vent to AI occasionally, the damage is probably minimal. The real risk lies with people who've made ChatGPT their primary emotional outlet.
In 2025, MIT Media Lab and OpenAI ran a large-scale randomized controlled trial (Fang et al., 2025). 981 participants spent at least five minutes a day talking to ChatGPT for four weeks. Researchers tracked their loneliness, social activity levels, emotional dependence, and problematic usage patterns. In other words, talking to AI may feel cathartic in the moment, but the research tells a different story.
The finding was clear: regardless of whether participants used text or voice, and regardless of whether they discussed personal issues or general topics, one factor consistently predicted worse outcomes — daily usage time.
Overall, participants' loneliness scores decreased slightly over the four weeks. But those who spent more time chatting with the AI actually felt lonelier, became more dependent on it, and had fewer real-world social interactions.
To be fair, usage duration wasn't an experimentally controlled variable, so this finding is correlational rather than causal — it's possible that lonelier people were simply more inclined to spend time with AI. But the researchers noted that this trend appeared consistently across all conditions, not just in specific subgroups.
One plausible explanation: once you've vented and received that satisfying sense of "being understood," you have a little less motivation to open up to friends or family. AI may not be supplementing your social life — it may be quietly replacing it.
Beyond loneliness, there's a subtler problem.
Georgiou (2025) split 40 university students into two groups for an argumentative writing task — one group could use ChatGPT, the other couldn't. Afterward, everyone completed a cognitive engagement scale measuring their focus, depth of thinking, and strategic reasoning during the task.
The results were stark: students who used ChatGPT showed significantly lower cognitive engagement than those who worked independently (ChatGPT group averaged 2.95 out of 5; control group averaged 4.19).
The researcher used a blunt term for this: cognitive offloading. When a tool can think for you, write for you, and organize your thoughts for you, your brain naturally takes the path of least resistance. It feels comfortable in the short term, but you're essentially practicing not thinking.
Now put this alongside therapy: many psychologists believe that what makes therapy effective isn't the therapist handing you answers — it's that you figure things out yourself through the process of dialogue. If AI summarizes, analyzes, and draws conclusions for you every time, you miss the most important part — the process of confronting and working through your own emotions.
This study had a small sample (40 people) and used self-report measures, so the results may not generalize to everyone. But it echoes a growing pattern: when we outsource our thinking to AI, we don't just save effort — we lose opportunities for growth.
Not exactly.
The studies above all examined general-purpose AI tools like ChatGPT or Claude. These weren't designed for mental health support — they were built to handle everything from coding to translation to casual conversation. Their training objective is "make the user satisfied," not "help the user grow." Sometimes, those two goals are polar opposites.
The real issue isn't AI itself — it's the design intent behind the tool. A tool trained to agree with you and a tool designed to guide your thinking produce fundamentally different outcomes. The former is an echo chamber; the latter is a mirror that reveals your blind spots.
If you want to use AI to process your emotions, it's worth asking yourself a few questions: Does this tool ever challenge my thinking? Does it guide me to reflect on my own, rather than drawing conclusions for me? After using it, do I understand my feelings more clearly — or do I just feel temporarily better?
Next time you open ChatGPT to talk about what's bothering you, pay attention: does it ever, at any point, say "you might be wrong about that"?
Validation feels wonderful. So wonderful that you might not notice it's been a long time since you've talked to someone who says "I disagree."
Honestly, we ran into every one of these problems while building MindForest.
Early versions of our ForestMind AI coach had great user satisfaction scores. Then we looked at the conversation logs and discovered it was doing the exact same thing — validating users, making them feel good. Satisfaction was high, but people's emotional well-being wasn't actually improving.
After that, we spent a long time redesigning how it behaves. Today's ForestMind doesn't rush to summarize or offer advice — it asks you questions. Sometimes questions you hadn't considered. Sometimes questions that make you a little uncomfortable. This means its "instant satisfaction" score can't compete with ChatGPT — but we've observed that insights people reach on their own tend to stick around longer than overnight.
Your conversations automatically become journal entries, and the AI organizes your thinking process into "Inspiration Stories." Weeks later, when you look back, you don't see the AI's summary — you see your own thinking evolving over time. That change belongs to you, not the machine.
And — circling back to the loneliness problem — MindForest includes a community feature where you can share your Inspiration Stories. You can post anonymously or under your real name. Other users can browse, like, and leave supportive comments. It's not social media performance — it's a group of people facing similar challenges, seeing each other's honest reflections.
I won't claim it's perfect, and I won't claim it's for everyone. But if you've read this far and you're starting to wonder whether AI is helping you think or thinking for you — this is at least one tool trying to do things differently.
Download MindForest for free
If you're rethinking your relationship with AI, read can AI emotional support truly fulfill you — the hidden cost of AI companionship.
And if you're looking for AI that's designed responsibly, here's how to choose a mental health app that won't let you down.
Anthropic. (2025). Disempowerment patterns in Claude.ai conversations. Anthropic Research. https://www.anthropic.com/research/disempowerment-patterns
Fang, C. M., Liu, R., et al. (2025). How AI and human behaviors shape psychosocial effects of extended chatbot use: A longitudinal randomized controlled study. arXiv preprint arXiv:2503.17473. https://doi.org/10.48550/arXiv.2503.17473
Georgiou, G. P. (2025). ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline. arXiv preprint arXiv:2507.00181. https://doi.org/10.48550/arXiv.2507.00181