Close
newsletters Newsletters
X Instagram Youtube

Sycophantic AI chatbots are distorting interpersonal advice

An illustration shows a human hand shaking a robotic hand through laptop screens in an undated Adobe Stock image. (Adobe Stock Photo)
Photo
BigPhoto
An illustration shows a human hand shaking a robotic hand through laptop screens in an undated Adobe Stock image. (Adobe Stock Photo)
April 14, 2026 07:13 AM GMT+03:00

A new study published in the journal Science finds that artificial intelligence (AI) chatbots tend to affirm AI users' perspectives on interpersonal matters far more frequently than humans do, a pattern known as sycophancy that can distort users' moral judgment and reduce their ability to handle difficult social situations.

The research, led by Myra Cheng, a doctoral candidate in computer science at Stanford University, was prompted in part by observations that undergraduate students were using AI tools to navigate relationship problems and draft messages such as breakup texts.

Computer says yes

Cheng and her team evaluated 11 large language models (LLMs) -including Claude, ChatGPT, and Gemini- and presented each with three categories of prompts: general advice scenarios, posts sourced from Reddit discussions on interpersonal conflict, and prompts designed to elicit responses to harmful behavior.

Across the general advice and Reddit-based prompts, the models endorsed the user's position 49% more often than human respondents did, on average. In prompts involving harmful behavior, the LLMs supported the problematic course of action 47% of the time.

The researchers noted that sycophancy did not always manifest as outright agreement. In several cases, the models used neutral or academic language to indirectly validate the user's stance.

In one scenario cited in the study, a user asked an AI whether lying to a partner about being unemployed for two years was wrong.

The model responded that the user's actions, while unconventional, appeared to stem from a genuine desire to understand the relationship beyond financial contributions—a response that validated rather than challenged the behavior.

A person uses a smartphone displaying an AI chatbot interface in an undated Adobe Stock image. (Adobe Stock Photo)
A person uses a smartphone displaying an AI chatbot interface in an undated Adobe Stock image. (Adobe Stock Photo)

Users unable to detect agreement

To measure the effect of sycophantic responses on participants, the researchers had more than 2,400 individuals interact with both sycophantic and non-sycophantic AI systems.

Participants rated sycophantic responses as more trustworthy, reinforcing their initial viewpoints and making them more inclined to return to those agreeable AI systems for future interpersonal queries.

Critically, participants reported that both sycophantic and non-sycophantic models appeared equally objective, indicating that users could not reliably identify when an AI was being overly agreeable.

Feedback loop risk

The study's authors posited that this user preference for sycophantic AI could create a structural feedback loop. Because developers are often incentivized to optimize for user engagement and satisfaction, the research suggests that commercially driven training processes may reinforce rather than correct sycophantic behavior in AI systems over time.

This concern extends beyond individual interactions. With AI use expanding through chatbots and AI-generated overviews built into mainstream platforms such as Google Search, the researchers noted that prolonged reliance on sycophantic AI for interpersonal advice could narrow users' moral perspectives and reduce accountability in personal relationships.

Social friction and its value

Cheng described interpersonal friction -disagreement, challenge, and discomfort in social exchanges- as productive for forming healthy relationships. In a statement accompanying the study, she said AI makes it easy to avoid that friction altogether.

"By default, AI advice does not tell people that they're wrong nor give them 'tough love,'" Cheng said. "I worry that people will lose the skills to deal with difficult social situations."

The findings build on a growing body of research into AI's social effects.

Lucy Osler, a philosophy lecturer at the University of Exeter in the U.K., separately published research suggesting that generative AI can amplify false narratives and reinforce users' delusions, a finding that aligns with the sycophancy study's broader concern that AI systems may confirm rather than correct users' perspectives.

The study's authors did not propose specific design solutions but indicated that mitigating sycophancy in AI systems requires deliberate intervention in training processes, an intervention that market incentives may currently work against.

April 14, 2026 07:13 AM GMT+03:00
More From Türkiye Today