
OpenAI’s latest model GPT-5 struggles with hallucination problems

GPT-5 introduction displayed on OpenAI’s website on smartphone screen. (Adobe Stock Photo/ Tada Images - stock.adobe.com)
September 13, 2025 04:37 AM GMT+03:00

According to recent reports, OpenAI’s GPT-5, released just over a month ago, continues to struggle with “hallucinations,” a phenomenon in which the model generates false information. Despite OpenAI’s claims that GPT-5 produces significantly fewer hallucinations than previous versions, user experiences and expert analyses suggest otherwise.

User experiences, expert analysis

A report by Futurism notes that GPT-5 has repeatedly produced factually incorrect information. In one instance, the model stated that Poland’s gross domestic product (GDP) was over $2 trillion, whereas the International Monetary Fund (IMF) lists the actual figure at approximately $979 billion. The report suggests that such errors may stem from GPT-5 conflating recent statements about the size of Poland’s economy with its official GDP figure.

Reddit users on r/ChatGPTPro have echoed these concerns, sharing examples of GPT-5 making major factual errors while displaying high confidence in its answers. According to OpenAI’s own blog post on the subject, hallucinations persist partly because current evaluation methods reward models for providing confident answers rather than admitting uncertainty. This, OpenAI explains, encourages the model to “guess” when it lacks precise knowledge.
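To illustrate the incentive OpenAI’s blog post describes, the following is a minimal, hypothetical sketch in Python (not OpenAI’s actual evaluation code): under a simple grading scheme that awards 1 point for a correct answer and 0 for either a wrong answer or an abstention, guessing always yields a higher expected score than admitting uncertainty.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score under simple 0/1 grading (hypothetical)."""
    if abstain:
        return 0.0           # admitting uncertainty earns nothing
    return p_correct * 1.0   # a guess earns 1 point with probability p_correct

for p in (0.1, 0.3, 0.5):
    print(f"p(correct)={p:.1f}  guess={expected_score(p, False):.2f}  "
          f"abstain={expected_score(p, True):.2f}")

Even a 10% chance of being right beats abstaining under such grading, so a model tuned to maximize these scores learns to answer confidently rather than say “I don’t know.”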

Experts from the Discovery Institute’s Walter Bradley Center for Artificial Intelligence have also criticized GPT-5’s performance. According to the institute, the model’s errors illustrate that it is “far from being as reliable as a well-trained PhD-level researcher,” and its researchers caution against using it in critical decision-making contexts.

OpenAI acknowledges the issue and suggests that future improvements may involve training models to better recognize uncertainty rather than simply providing plausible-sounding answers. However, the reports emphasize that, for now, GPT-5 remains prone to confidently delivering incorrect information despite its advanced capabilities.
