Doctor? Optional. How AI Is Disrupting Healthcare Before Regulations Catch Up
Denys Voroshylov
Introduction: Three Diagnoses and One Algorithm
I live in Poland and, as a taxpayer, have access to the national health insurance system (NFZ). In emergency situations, it works well — if something is seriously wrong, you’ll be saved, treated, hospitalized. But for “smaller” issues — recurring cystitis, ear inflammation, or a dependency on nasal sprays — things get tricky.
A free ENT appointment might be available in a month. A private one? Tomorrow, but it’ll cost you €50. And here’s the paradox: the outcome might be no better. The same doctor can be either an attentive expert or an indifferent technician, regardless of price. I opt for private care because I can at least speak English or Ukrainian. But even then, results aren’t guaranteed.
That’s why, in three specific instances, I turned not to a doctor — but to ChatGPT. And I got real results.
Case 1. Rebound congestion from overusing a nasal spray: ChatGPT identified the likely cause (rhinitis medicamentosa, the classic rebound effect of decongestant sprays), suggested an over-the-counter alternative, and outlined a withdrawal plan. Three days later, I could breathe freely.
Case 2. Suspected varicocele and cystitis: After an unhelpful urology visit (no tests, just Biseptol), I turned to ChatGPT. It recommended which tests to run and suggested a plan of herbal supplements and hygiene routines. Ten days later, the varicocele symptoms were gone and the cystitis symptoms had improved by 70%.
This isn’t a confession. It’s a symptom — a systemic one.
Why Are People Turning to AI for Health?
The answer is simple: we’re tired of waiting. Tired of battling for basic tests. Tired of doctors rushing through diagnoses in under a minute. Tired of paying — and still feeling underserved.
This isn’t just a Polish issue. A friend in Germany got an allergist appointment… for next year. In the UK? Even longer. In Spain, I used Google Translate to self-diagnose.
AI isn’t a rebellion. It’s an alternative. Not perfect, but fast, accessible, and increasingly intelligent. And as the report The Rise of AI-Driven Self-Medication (2023–2025) shows, I’m far from alone.
“Users are actively turning to ChatGPT, Claude, Gemini and other LLMs as a first line of consultation, especially in chronic, poorly diagnosed, or emotionally charged cases.” (source)
What AI-Driven Self-Medication Looks Like
The study categorizes how laypeople are using AI for self-medication — and behind each category is a very human story.
🧠 Self-Diagnosis
LLMs help users explore what might be causing their symptoms. One Reddit user, struggling with chronic back pain for over a decade, used ChatGPT to analyze the possible root causes, received a custom exercise plan — and within weeks, pain dropped by 60–70% (source).
💬 Mental Health Self-Screening
Reddit and similar platforms are full of stories of people using AI to explore ADHD, anxiety, and depression. Sometimes it’s a step toward real help. Sometimes, a dead end. But one thing’s clear: people aren’t just asking questions; they’re searching for frameworks to understand themselves better (source).
🧬 Personalized Health Plans
Perhaps the most remarkable case: a person with Type 1 diabetes uploaded 110 biomarkers and 90 days of glucose readings from a Dexcom sensor into ChatGPT. The AI returned a detailed plan for meals, supplements, workouts, and insulin adjustments. Their A1C dropped from 5.8% to 5.4% in six months (source).
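For context on those A1C numbers: A1C tracks average blood glucose, and the two are linked by the well-established ADAG regression (eAG in mg/dL = 28.7 × A1C − 46.7). Here’s a minimal Python sketch of that conversion; the sample readings are invented for illustration, not data from the case above.

```python
# Illustrative only: estimate A1C from mean glucose using the ADAG
# regression (eAG mg/dL = 28.7 * A1C - 46.7). The sample readings are
# invented; they are not data from the case described in the article.

def estimated_a1c(readings_mg_dl: list[float]) -> float:
    """Convert mean glucose (mg/dL) to an estimated A1C percentage."""
    mean_glucose = sum(readings_mg_dl) / len(readings_mg_dl)
    return (mean_glucose + 46.7) / 28.7

# A CGM like a Dexcom sensor logs a reading roughly every 5 minutes;
# this tiny sample stands in for 90 days of data.
sample = [112.0, 124.0, 98.0, 121.0, 99.0, 94.0]
print(f"Estimated A1C: {estimated_a1c(sample):.1f}%")  # ~5.4% at a ~108 mg/dL mean
```

Run backwards, the same formula says a drop from 5.8% to 5.4% corresponds to lowering average glucose from roughly 120 to 108 mg/dL, which gives a sense of the scale of change reported here.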
🚪 Skipping the Doctor
In many cases, it’s not a choice — it’s a reaction to disappointment. Delays, costs, dismissiveness. And here’s AI: always available, always responsive.
🤖 Emotional Support
AI is stepping into the role of unlicensed therapist. In Japan, adolescent cancer patients used GPT-powered chatbots for emotional support. 80% disclosed things to the chatbot that they hadn’t told their families or doctors (source).
When AI Gets It Dangerously Wrong
AI isn’t magic. It makes mistakes. And sometimes, they’re serious.
- The “Tessa” chatbot from the National Eating Disorders Association advised calorie counting and skinfold measurements, the opposite of recovery best practices. After public outrage, it was taken offline (source).
- Microsoft’s Copilot AI gave potentially harmful drug advice in 22% of queries about the 50 most prescribed medications. 39% of its answers contradicted scientific consensus (source).
- In urology, ChatGPT gave incorrect or misleading advice in 40% of cases, often citing made-up sources or omitting critical context (source).
The problem isn’t just that AI can be wrong. It’s that it can sound absolutely right while being completely wrong.
AI as Healthcare’s First Responder
And yet — AI is becoming our first line of defense. It’s always there. It doesn’t get tired. It doesn’t judge. It doesn’t say “that’s not my department.”
“Patients often perceive AI as less judgmental, more explanatory, and more convenient when interacting with medical information.” (source)
Doctors don’t always have time to explain. But AI does. And increasingly, it explains better.
Where Demand Goes, Innovation Follows
The market is massive — and already breaking through old boundaries.
AI services are quietly embedding into:
- Insurance app features.
- Telemedicine interfaces.
- At-home biosensors (from wearables to mail-in blood tests).
- Grey-zone Telegram bots combining GPT with diagnostics and treatment protocols.
But this race has a weak link: regulation.
- Insurance doesn’t cover AI diagnostics.
- Legally, no one’s liable when AI gives bad advice.
- The FDA and EU regulators focus only on certified medical software, not on the ChatGPT session running on your phone (source).
AI in medicine is like Uber in 2012 — already operating, but still technically “unrecognized.” Users don’t wait. And neither do businesses.
Don’t Wait — Heal
AI isn’t perfect. But it offers what many systems can’t: attention, clarity, speed, personalization. We don’t need to wait for the system to fix itself. We’re already healing.
Our job is to use these tools wisely, critically, and creatively. Don’t follow blindly — but don’t be afraid to explore.
And for businesses? The market is forming where traditional coverage is silent.
When fully autonomous GPT-9 diagnostic kiosks roll out in a decade, some of us will say: we’ve been there. We healed. We just didn’t wait.
“AI isn’t your doctor. But it’s sitting right next to you as you decide whether to treat or delay. And often, it says something worth hearing. The key is to listen with wisdom — not instead of it.”