
Study Warns Using AI for Medical Advice Is ‘Dangerous’ as Users Get Inaccurate Health Guidance

Post by: Anis Farhan

Using artificial intelligence (AI) systems for medical advice — including diagnosing symptoms or suggesting treatment — may pose significant risks to patients’ health and safety, according to a major study released this week. The research, conducted by a team from the University of Oxford and published through partner institutions, found that AI chatbots often provide inaccurate, inconsistent or contradictory guidance that could mislead users seeking help with medical questions.

The findings come amid a rapid boom in the use of AI-powered applications and chatbots by millions of people worldwide who turn to these systems for quick health answers. While the underlying technology continues to improve, experts warn that current AI models — including widely used large language models — are not yet reliable substitutes for professional medical advice and may even misinform users in ways that jeopardise their health.

Growing Reliance on AI for Health Questions

Artificial intelligence tools such as large language models and specialised chatbots are increasingly accessible through smartphones, websites and dedicated apps. Many are marketed as convenient sources for health guidance, symptom interpretation and general advice. Some companies position these technologies as helping users “understand” possible conditions before seeing a clinician.

However, the new study indicates that widespread belief in AI’s medical capabilities may be misplaced and potentially dangerous, particularly when users interpret AI responses as definitive health guidance. Researchers emphasised that while these systems often perform well in controlled tests where conditions are fully specified, real-world interactions with people reveal major limitations.

Study Reveals Weaknesses in Real-World Use

The large-scale research effort involved nearly 1,300 participants in the UK who were asked to use AI tools to make decisions about a range of medical scenarios. These ranged from benign conditions like a common cold to potentially life-threatening situations such as head injuries requiring urgent care.

Key findings from the study showed that:

  • Users who employed AI tools to interpret symptoms did not make better decisions than those relying on traditional methods, such as internet search engines or official health websites.

  • Participants correctly identified relevant medical conditions only about 34.5% of the time.

  • Appropriate courses of action — such as seeking urgent care or consulting a general practitioner — were chosen in only about 44.2% of cases.

  • Overall, there was no clear advantage in outcomes for those using AI compared with other information sources.

The researchers cautioned that this performance gap reflects challenges both in how people interact with AI and how AI interprets incomplete or vague information from users. Many participants did not know how to accurately describe symptoms to get useful guidance from the systems, while the models themselves sometimes returned conflicting or misleading advice.

Communication Breakdowns Between Users and AI

One of the most concerning aspects uncovered in the research was the communication disconnect between AI systems and human users. Participants often struggled to provide the precise details necessary for AI to make accurate assessments. At the same time, the responses they received blended useful insights with inaccurate or unhelpful suggestions, making it difficult for users to determine a safe course of action.

For example, when two participants described what were essentially the same set of symptoms — such as severe headache and light sensitivity — the AI systems sometimes produced dramatically different recommendations depending on the phrasing of the question. One person might be told to seek immediate medical attention, while another was advised to rest at home.

Lead researchers argue that such inconsistency highlights how poorly current AI chatbots handle nuanced or context-dependent medical queries — a critical shortcoming when user health could be at stake.

AI Tools’ Promotional Hype vs. Reality

Health-related AI applications are often promoted with glowing language that can create unrealistic expectations. Some medical chatbots present themselves as offering comprehensive or expert-level guidance, but in practice, they lack the clinical judgment and contextual awareness of trained practitioners.

Experts caution that many AI systems, especially those geared toward consumer use, have not undergone the rigorous evaluation and clinical testing required for medical devices or professional diagnosis tools. This regulatory gap means that users may trust outputs that are not backed by robust evidence or safety verification.

Health regulators in various countries are increasingly scrutinising how AI tools are marketed and used in healthcare settings, but current rules lag behind the rapid spread of these technologies.

Why AI Struggles with Medical Advice

Even the most advanced large language models, which can demonstrate high accuracy in controlled scenarios, face major challenges when applied to real-world health advice. Several underlying issues contribute to this:

  • Incomplete and varied user information: People often provide partial or unclear descriptions that make accurate interpretation difficult for AI.

  • Context sensitivity: Medical assessment often requires understanding broader context — something AI may struggle to infer from brief text prompts.

  • Bias and training limitations: Many AI models are trained on datasets that reflect historical clinical language or internet content that may not fully represent real patient scenarios.

  • Conflicting advice patterns: AI responses can blend correct and incorrect elements, making it hard for users to distinguish safe guidance.

These factors contribute to AI’s inherent limitations when providing health information without professional supervision.

Industry Responses and Developer Challenges

Technology companies behind major AI models acknowledge both the potential and pitfalls of their systems. While many emphasise that their tools are not intended to replace healthcare professionals, critics argue that such disclaimers are not always prominent or clear enough for users to interpret responses responsibly.

Some developers are exploring specialised healthcare AI systems with dedicated training and safety layers. However, experts say that robust safeguards, regulatory oversight, and alignment with medical standards are essential before such systems can be trusted for general medical advice.

There are also calls for clear labels and warnings that emphasise the limitations of AI when used for medical self-diagnosis, including alerts to consult licensed practitioners for definitive guidance.

Potential Consequences of Misleading AI Advice

The risks associated with inaccurate or inconsistent AI medical guidance are not merely theoretical. In real-world cases documented by journalists and health professionals, patients seeking medical answers from AI have received troubling responses that contributed to anxiety, misinformation, and unnecessary delays in care.

For instance, some AI applications incorrectly flagged non-serious symptoms as severe conditions, while others failed to recognise when urgent medical intervention was necessary. In one notable example, a chatbot misled a young patient about cancer progression, causing significant distress before clinical evaluation clarified the actual situation.

Such incidents underscore the potential for AI to do harm when used outside its intended scope or without appropriate expert oversight.

Expert Views on AI and Patient Safety

Healthcare professionals and AI researchers alike warn that while artificial intelligence holds promise for supporting clinical workflows, administrative tasks, and data analysis, its use for standalone medical advice remains highly problematic.

Dr. Adam Mahdi, a co-author of the Oxford study, emphasised that the disconnect between AI’s technical capability and real-world performance should be a “wake-up call” for developers, regulators and users alike.

Other experts suggest that future progress in this area will depend on developing AI systems that can reliably interpret human cues, contextual nuance and complex medical information — requirements that go far beyond current capabilities.

Until then, clinicians and patient advocates urge caution and stress that AI should not be relied upon as a replacement for professional medical advice or judgement.

What Users Need to Know

The new research highlights several practical takeaways for individuals considering using AI for health questions:

  • AI should not replace medical professionals: When in doubt about symptoms or medical conditions, users should seek qualified healthcare advice rather than depending solely on machine responses.

  • Verify information from trusted sources: Users are encouraged to cross-reference any AI-provided medical information with reputable health websites or direct consultation with practitioners.

  • Understand AI’s limitations: Knowledge of how AI models work and their shortcomings can help users interpret responses more critically.

Disclaimer:
This article synthesises findings from recent research and reporting on the risks associated with using AI for medical advice. It is intended for informational purposes and does not constitute medical guidance. Readers should consult healthcare professionals for personal medical concerns.

Feb. 10, 2026 1:26 p.m.