
AI in healthcare demands caution from patients and doctors

Doctors are using AI tools to support patient care and manage treatments. (Adobe Stock Photo)
February 02, 2026 04:25 AM GMT+03:00

The use of artificial intelligence (AI) in healthcare is growing rapidly, raising both opportunities and significant risks, particularly when AI-generated information is treated as definitive medical advice.

Experts warn that while AI can provide preliminary assessments and simplify medical information, over-reliance on these systems may delay professional care and lead to serious consequences.

The warnings come from Prof. Dr. Recep Ozturk, Vice-Rector of Istanbul Medipol University, who spoke to Anadolu Agency (AA) about the expanding role of AI in healthcare and its implications for patients and physicians alike.

AI as a preliminary assessment tool

Prof. Dr. Ozturk noted that AI is no longer a futuristic concept. Globally, around 230 million people consult digital AI systems each week for guidance on healthy living and well-being.

Reports indicate that over 40 million daily health-related queries are directed to ChatGPT alone, reflecting a profound shift in how people seek and interpret health information.

He explained that AI can act as a preliminary assessment tool by helping patients understand complex lab results, imaging reports, and symptoms, while reducing anxiety caused by uncertainty.

However, he emphasized that these systems cannot replace physicians, provide definitive diagnoses, or perform critical tasks such as physical examination and clinical evaluation.

Risks of over-reliance on AI

Prof. Dr. Ozturk warned that the greatest danger lies in treating AI-generated information as absolutely accurate, which may delay professional medical consultation.

While AI can assist in radiology, dermatology, and pathology by detecting details that human eyes might miss, it cannot fully understand clinical context or patient history.

He highlighted the risk of “hallucinations,” misleading outputs that appear convincing but are false. Studies show an 8–20% risk of hallucinations in clinical decision support systems, and some radiology tools misclassify benign nodules as malignant in up to 12% of cases.

Supporting physicians, not replacing them

AI integration into hospital systems can reduce administrative burdens and enhance data analysis. Prof. Dr. Ozturk emphasized that AI tools are designed to assist physicians, not replace them.

Even platforms like ChatGPT Health, which do not provide formal diagnoses, can function as medical tools for tasks such as glucose monitoring or genetic data analysis, highlighting the need for strict validation and regulatory oversight.

He also stressed that physicians retain full responsibility for clinical decisions and that AI outputs should never be accepted uncritically.

Future medical professionals must be trained to critically evaluate algorithmic recommendations alongside traditional clinical judgment.
