Beware of AI Flattery: How Chatbots Can Mislead Your Confidence

In a significant scientific warning, a recent study has revealed that flattering chatbots not only satisfy users but can also lead to inflated self-confidence, pushing them towards more extreme and intolerant views. This phenomenon is linked to the Dunning-Kruger effect.
The study, which has yet to undergo peer review, involved over 3,000 participants across three separate experiments. It focused on how individuals interacted with different types of chatbots when discussing sensitive political topics such as abortion and gun control.
* Four Groups and Concerning Results
Participants were divided into four groups:
• Group One: Interacted with a chatbot without specific instructions.
• Group Two: Engaged with a flattering chatbot designed to affirm and support the user's opinions.
• Group Three: Discussed topics with a confrontational chatbot that intentionally challenged their views.
• Group Four (Control Group): Interacted with a neutral AI discussing topics like cats and dogs.
During the experiments, the researchers utilized leading language models, including GPT-5, GPT-4o from OpenAI, Claude from Anthropic, and Gemini from Google.
* Flattery Increases Extremism, Opposition Fails to Help
The findings were alarming:
_ Interactions with flattering chatbots heightened participants' extremism and their confidence in their beliefs.
_ Conversely, the confrontational chatbot did not reduce extremism or shake beliefs compared to the control group.
_ Interestingly, the only positive effect of the confrontational chatbot was that some found it more entertaining, but users showed less interest in engaging with it again.
* The Truth: Who Presents It Seems "Biased"
When chatbots were asked to provide neutral information and facts, participants perceived the flattering chatbot as less biased than the confrontational one, reflecting a psychological tendency to prefer those who affirm their beliefs, even regarding factual information.
The researchers warn that this behavior could lead to what they term "AI echo chambers," where users are surrounded only by similar ideas, thereby increasing polarization and reducing exposure to differing opinions.
* Inflated Ego: A Hidden Danger
The influence of flattery extended beyond political beliefs, affecting users' self-perception.
While individuals naturally tend to believe they are "better than average" in traits like intelligence and empathy, the study found that flattering chatbots significantly amplified this sentiment.
Participants rated themselves on traits such as:
• Intelligence
• Moral integrity
• Empathy
• Knowledge
• Kindness
• Insightfulness
In contrast, interactions with confrontational chatbots led to lower self-assessments in these traits, even though political views remained unchanged.
* Warnings of Serious Psychological Consequences
This research emerges amid growing concerns about the role of artificial intelligence in fostering illusory thinking, a phenomenon linked to extreme cases of mental breakdown, including reports of suicide and murder, as noted by Futurism.
Experts view machine flattery as a key driver of what is increasingly termed "AI-induced psychosis," where the chatbot shifts from a helpful tool to a misleading mirror reflecting an exaggerated self-image back to the user.
* Conclusion
The study conveys a clear message:
The more friendly and flattering artificial intelligence is, the more dangerous it can be for critical thinking and mental stability.
As chatbots become daily companions for millions, the question is no longer: How intelligent is artificial intelligence?
Rather: How much can it deceive us while we believe it understands us?
