
Millions of Americans turn to AI chatbots for medical advice, study warns

A person holds a smartphone displaying a doctor and digital health data interface. (Adobe Stock Photo)
April 19, 2026 12:57 PM GMT+03:00

Millions of Americans are turning to AI chatbots for medical advice, but two recent studies show that these tools often give inaccurate information.

A study published this week in JAMA Network Open tested 21 advanced large language models (LLMs) using realistic patient scenarios.

Researchers found that these systems failed in over 80 percent of cases when presented with symptoms that could indicate several different conditions.

Even in simpler cases that included physical exam findings and lab results, the models still failed 40 percent of the time.

The researchers pointed out that, unlike human doctors, these systems often settle on a single answer too quickly. The study described this as LLMs collapsing "prematurely onto single answers," which led to "weak performance" in all the models tested.

A smartphone displays a doctor and digital health data interface with a stethoscope in the foreground. (Adobe Stock Photo)

Researchers warn against unsupervised clinical use

Marc Succi, associate chair of innovation and commercialization at Massachusetts General Hospital and the study’s lead author, said in a statement that current chatbots are not ready for medical use without oversight.

"Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment," Succi said, as cited by Futurism.

He added that "differential diagnoses are central to clinical reasoning and underlie the 'art of medicine' that AI cannot currently replicate."

The researchers warned that if a chatbot jumps to conclusions without all the clinical details, it could give misleading or unsafe advice.

For example, someone asking about a rash or sudden cough might get information that leads them away from the real cause.

Researchers and medical professionals use artificial intelligence to analyze patient data and improve diagnostics in healthcare settings. (Adobe Stock Photo)

One in four US adults consults chatbots for health advice

A separate survey from the West Health-Gallup Center on Healthcare in America found that about one in four American adults, around 66 million people, have used ChatGPT or similar tools for medical advice, according to Futurism.

People said they used these tools both before and after seeing a medical professional. Of those who used AI, 14 percent—over 9 million people—said they skipped a doctor’s visit they would have otherwise made.

Cost was the main reason: 27 percent said they did not want to pay for a visit, and 14 percent said they could not afford it. Others mentioned not having enough time or access.

"Artificial intelligence is already reshaping how Americans seek health information, make decisions, and engage with providers, and health systems must keep pace," Tim Lash, president of the West Health Policy Center, said in a statement reported by Futurism.

Skepticism persists alongside growing reliance

Despite this trend, many users still doubt the information chatbots provide. About a third of people who used AI for health concerns said they did not trust it, and one in ten said the AI had given them advice that could be unsafe.

Still, almost half said that using a chatbot made them feel more confident when talking to their doctors. Some 22 percent said it helped them spot issues earlier, and 19 percent said it helped them avoid unnecessary tests or procedures.

Experts have previously pointed out mistakes in AI-generated medical information. For example, Google's AI Overviews have given dangerously inaccurate answers, and some transcription tools used in clinics have even made up medication names.

Taken together, the two studies point to a widening gap between how quickly Americans are adopting AI for health questions and how reliably the technology actually performs.

Industry experts and researchers are calling for stronger regulation of AI tools in health care.
