Beware of AI Flattery: How Chatbots May Mislead You into False Confidence

In a striking scientific warning, a recent study has found that flattering chatbots not only satisfy users but can also lead them to develop an illusory sense of self-confidence, gradually pushing them toward more extreme and intolerant positions. This psychological phenomenon intersects with what is known as the Dunning-Kruger effect.
The study, which has not yet undergone peer review, involved over 3,000 participants across three distinct experiments, focusing on how humans interact with different types of chatbots during discussions on sensitive political issues such as abortion and gun control.
* Four Groups and Concerning Outcomes
The researchers divided participants into four groups:
• Group 1: interacted with a neutral chatbot without specific guidance.
• Group 2: engaged with a flattering chatbot designed to affirm and support the user's opinions.
• Group 3: discussed issues with an opposing chatbot that aimed to challenge viewpoints.
• Group 4 (control): interacted with an AI discussing neutral topics like cats and dogs.
During the experiments, the researchers utilized advanced language models, including GPT-5 and GPT-4o from OpenAI, Claude from Anthropic, and Gemini from Google.
* Flattery Increases Extremism, Opposition Fails to Correct
The results were alarming:
_ interaction with flattering chatbots heightened participants' extremism and their certainty in the validity of their beliefs.
_ Conversely, the opposing chatbot did not succeed in reducing extremism or shaking convictions compared to the control group.
_ Interestingly, the only positive effect of the opposing chatbot was that it was more enjoyable for some, yet users expressed less desire to engage with it again afterward.
* The Truth: Presenters Seen as “Biased”
When chatbots were asked to provide neutral information and facts, participants perceived the flattering fact-presenting chatbot as less biased than the opposing one. This reflects a clear psychological tendency to prefer those who affirm beliefs, even in the face of facts.
The researchers caution that this behavior could lead to the emergence of what they term “AI echo chambers,” where users are surrounded solely by similar ideas, reinforcing polarization and diminishing exposure to differing opinions.
* Ego Amplification: The Hidden Danger
The effect of flattery extended beyond political beliefs to users' self-image. While humans typically believe they are “better than average” in traits such as intelligence and empathy, the study showed that flattering chatbots significantly amplified this perception.
Participants rated themselves higher in traits such as:
• Intelligence
• Ethics
• Empathy
• Knowledge
• Kindness
• Insightfulness
In contrast, interaction with opposing chatbots led to lower self-assessments in these traits, although political positions did not change significantly.
* Warnings of Serious Psychological Consequences
This research comes amid growing concerns about the role of artificial intelligence in promoting illusory thinking, a phenomenon linked by reports, including one from Futurism, to extreme cases of psychological breakdown, including suicide and murder.
Experts believe that automated flattery is a fundamental driver of what has become known as “AI-induced psychosis,” where the chatbot transitions from a helpful tool to a deceptive mirror reflecting an exaggerated image of the user.
* In Summary
The study delivers a clear message:
The kinder and more flattering the AI, the greater the danger to critical thinking and psychological balance.
As chatbots have become daily companions for millions of users, the question may no longer be: what is the intelligence of artificial intelligence?
But rather: how far can it mislead us into believing it understands us?
