
Study finds leading AI assistants misrepresent news nearly half the time

AI (Artificial Intelligence) smartphone app ChatGPT surrounded by other AI Apps in Vaasa, on Jun. 6, 2023. (AFP Photo)
By Newsroom
October 22, 2025 01:46 PM GMT+03:00

A large-scale international study led by 22 public service media organizations, including Germany’s DW and the BBC, has revealed that four of the most widely used AI assistants—ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI—misrepresent news content in nearly half of their responses.

The research assessed 3,000 AI-generated answers across 18 countries, evaluating accuracy, sourcing, context, and the ability to distinguish fact from opinion. It found that 45% of the responses contained at least one major flaw, with 31% showing serious sourcing issues and 20% including factual errors.

Deutsche Welle reported that 53% of the responses it reviewed had significant problems, including errors such as naming Olaf Scholz as Germany’s chancellor even though Friedrich Merz had already taken office.

According to the Reuters Institute’s Digital News Report 2025, 7% of online users already rely on AI chatbots for news—a figure that climbs to 15% among those under 25.

A close-up view of a smartphone screen featuring Microsoft Copilot, an AI companion. (Adobe Stock Photo)

Systemic problem

Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), which coordinated the study, said the findings reveal a systemic problem: “These failings are not isolated incidents. They are cross-border and multilingual, and they risk eroding public trust in journalism.”

The BBC’s earlier study, published in February 2025, reported similar results, and the new research found only minor improvements since then. Gemini performed the worst, with 72% of its responses containing major sourcing issues.

The EBU and its partners are urging governments and regulators to strengthen oversight of AI systems and to enforce laws protecting information integrity. They have also launched a global campaign—Facts In: Facts Out—calling on AI developers to take greater responsibility for how their systems handle and distribute news.

“The message is clear,” the EBU said in a statement. “If facts go in, facts must come out. AI tools must not compromise the integrity of the news they use.”
