How AI Medical Advice Hurts, Not Heals
17.02.2026
ForumDaily New York
AI chatbots are increasingly being used for self-diagnosis. However, a new study warns that such tools are not only ineffective but can also be dangerous. MoneyTalksNews reports on what the scientists found.
Scientists have concluded that AI-generated advice often leads to wrong decisions and serious consequences.
From Dr. Google to AI Chatbots
In the past, patients often turned to "Dr. Google" and diagnosed themselves.
Now the situation has changed. People aren't just searching for symptoms. They're engaging in full-fledged conversations with AI chatbots and receiving "personalized" diagnoses in seconds.
It seems convenient. Free. Fast. Confident.
However, a new, large-scale study has shown that such chatbots can be dangerous when it comes to health. Following their advice can be costly, both medically and financially.
Research that debunks the myth of "smart" AI
Artificial intelligence is widely believed to be advancing rapidly. AI is reportedly passing medical exams and standardized tests.
But researchers from the University of Oxford came to a different conclusion.
A study published in the scientific journal Nature Medicine tested large AI models on 1,300 real people.
The goal was simple: to test whether using a chatbot helps patients make more accurate medical decisions than using a simple internet search.
The results were disappointing.
People using AI chatbots didn't make more accurate decisions. In some cases, their diagnostic accuracy was even lower than that of participants who simply searched the internet themselves.
Dr. Rebecca Payne is a general practitioner and the study's lead medical officer.
“Despite all the hype, AI is not yet ready to replace doctors,” she said.
Why "smart" bots give dangerous advice
The problem isn't that AI doesn't know medical facts. The problem is that it doesn't know the individual and doesn't know when to stop.
The study revealed disturbing examples of so-called AI "hallucinations," a term used to describe situations in which a system fabricates information.
In one case, two users described symptoms of subarachnoid hemorrhage, a life-threatening condition. The chatbot advised one user to seek emergency medical attention immediately. Another user was advised to "lie down in a dark room."
In another example, the bot recommended calling emergency services. However, for a user in the UK, it gave the Australian emergency number, "000."
In a critical situation, such mistakes can cost lives.
The high price of bad advice
Inaccurate medical information poses more than just a health risk. It also has serious financial consequences.
If the AI underestimates the severity of symptoms, a person may delay seeking medical attention and end up in the hospital with complications. Treatment in this case will be significantly more expensive.
On the other hand, if the chatbot exaggerates the danger, a person could end up spending thousands of dollars on unnecessary tests and emergency visits.
Misinformation is costly.
The "Confident Tone" Trap
The most dangerous thing about AI isn't its mistakes. It's its confidence.
When searching online, a person sees a list of sources. This allows them to assess a website's reliability.
Chatbots deprive users of context. They provide a single, coherent, grammatically correct, and authoritative-sounding answer.
It feels like an expert's opinion, but in reality, it's an algorithm that predicts the next word in the text.
Financial scams rely on the same mechanism: an official tone creates the illusion of trust.
What to do instead
The technology is useful for drafting letters or summarizing text, but not for making medical diagnoses.
Experts recommend:
- Contact a medical provider. Many insurance companies and clinics offer 24/7 consultation lines. They are free and more reliable than talking to an algorithm.
- Use only trusted sources, such as the Centers for Disease Control and Prevention, Mayo Clinic, or Cleveland Clinic.
- Listen to your own body. If something worries you, don't dismiss it just because a chatbot told you it's nothing.
Artificial intelligence can be a useful tool. But when it comes to health care, it cannot yet replace doctors.
Experts warn: don't let a chatbot risk your life or your budget.
