I've sat across from countless patients who nod politely while I explain their diagnosis, only to realize five minutes later they have no idea what I just said. They're scared. They're overwhelmed. And I'm trying to condense years of medical training into a 15-minute conversation using words they've never heard before.
This is the reality of patient education in modern medicine. We have limited time, complex concepts, and patients who desperately need to understand what's happening to their bodies so they can make informed decisions about their care.
I use AI for patient education every single day in clinic. Not as a replacement for good communication skills, but as a tool that makes me better at my job. Here's what that actually looks like.
The Problem: Explaining the Unexplainable
You know the moment. You've just explained radiation therapy, or immunotherapy, or some nuanced side effect profile. You used your best analogies. You drew diagrams. And then the patient asks a question that makes it clear they understood approximately none of what you just said.
The traditional approach is to try again with different words, maybe draw another picture, and hope something clicks. But we're not all natural educators, and even the best analogies don't work for everyone. A 70-year-old farmer and a 35-year-old software engineer need completely different explanations of the same concept.
Add in the time pressure of clinic, and you've got a recipe for patients leaving confused and anxious. They go home, Google everything, find terrifying misinformation, and come back even more scared.
We can do better than this.
How I Actually Use Large Language Models in Real Time
When I'm in clinic and a patient isn't getting it, I pull out my phone. I open ChatGPT or Claude and type something like:
"Explain radiation therapy to a 70-year-old farmer who's never heard of it before. Focus on what it feels like and what to expect day-to-day. Skip the physics."
Thirty seconds later, I have a completely different framing of the same information. Sometimes it's better than what I said. Sometimes it gives me an analogy I hadn't considered. Either way, it breaks me out of the echo chamber of medical-speak.
The key is being specific about your audience. Generic prompts give generic answers. But if you tell the LLM who your patient is and what they need to understand, you get something useful.
Here are prompts I actually use:
For reframing complex side effects: "Explain radiation dermatitis to someone who's worried about 'getting burned.' Make it clear what they'll actually experience and when to worry versus when it's normal."
For demystifying scary procedures: "I need to explain CT simulation for radiation therapy to a claustrophobic patient. Focus on what the experience feels like, how long it takes, and what they can do to make it easier."
For financial conversations: "Explain why we're doing 30 radiation treatments instead of 5 to a patient whose insurance only partially covers it. They need to understand the medical rationale without feeling like we're upselling them."
For treatment decisions: "Compare surgery versus radiation for early-stage prostate cancer for an active 68-year-old who plays golf and travels frequently. Focus on lifestyle impact and recovery, not survival statistics."
The pattern: be specific about the patient, the concept, and what they actually need to understand. The LLM isn't replacing your medical judgment. It's helping you communicate that judgment effectively.
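If you want a reusable version of that pattern rather than retyping it on your phone each time, it fits in a few lines of code. This is a minimal sketch, assuming the OpenAI Python SDK and the gpt-4o model name; the reframe helper and its parameters are my own illustration, not a standard API, and any chat-capable LLM works the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reframe(concept: str, audience: str, goal: str) -> str:
    """Build a prompt from the pattern: the concept, who the patient is,
    and what they actually need to understand."""
    prompt = (
        f"Explain {concept} to {audience}. "
        f"They need to understand: {goal}. "
        "Use plain language, no jargon, and keep it under 200 words."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in whatever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The radiation therapy example from above, expressed through the pattern
print(reframe(
    concept="radiation therapy",
    audience="a 70-year-old farmer who has never heard of it before",
    goal="what it feels like and what to expect day-to-day; skip the physics",
))
```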
Building Better Patient Education Materials
I maintain a slide deck I go through with every new radiation oncology patient. It covers what radiation is, how treatment works, what side effects to expect, and what the process looks like from consultation to follow-up.
Before AI, this was PowerPoint clip art and stock photos that looked like they were from 2003. Patients glazed over.
Now, I use AI image generation to create custom illustrations that actually explain what I'm trying to say. DALL-E, Midjourney, and similar tools can generate cartoons and diagrams that are both medically intuitive and visually engaging.
For example, I needed an illustration showing how radiation targets cancer cells while sparing normal tissue. Stock images either showed terrifying medical equipment or abstract physics diagrams. Neither helped patients understand.
So I prompted DALL-E:
"Create a simple cartoon illustration showing radiation beams (as gentle light) converging on cancer cells (shown as dark spots) while healthy cells nearby remain unaffected. Medical illustration style, calming colors, no scary imagery."
The result was something I could actually show patients without increasing their anxiety. It simplified a complex concept into something visual and immediately understandable.
I've used similar approaches for:
- Showing the timeline of treatment (calendar-style visuals with what to expect each week)
- Illustrating side effects in a way that's informative but not frightening
- Creating diagrams of treatment positions and setups so patients know what to expect
- Visualizing how planning works (CT images, dose distributions, without overwhelming technical detail)
The trick with AI-generated images is specificity and iteration. Your first prompt rarely gives you exactly what you need. But unlike hiring a medical illustrator, you can iterate in real time until it works.
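If you end up generating a lot of these, the iteration loop itself can be scripted. A minimal sketch, assuming the OpenAI Python SDK and the dall-e-3 model name; the base prompt is the example from above, and the list of refinements is just a stand-in for whatever tweaks you would actually make.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

base_prompt = (
    "Simple cartoon illustration showing radiation beams (as gentle light) "
    "converging on cancer cells (shown as dark spots) while healthy cells "
    "nearby remain unaffected. Medical illustration style, calming colors, "
    "no scary imagery."
)

# Iterate by appending one refinement at a time and comparing the results
refinements = ["", " Softer pastel palette.", " Leave space for a caption at the bottom."]

for tweak in refinements:
    result = client.images.generate(
        model="dall-e-3",  # assumption: any image model with a prompt interface works
        prompt=base_prompt + tweak,
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)  # review each version and keep the one that lands
```

Whichever interface you use, the point is the same: keep the base prompt, change one thing at a time, and compare versions until one actually works for patients.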
The Actual Workflow in Clinic
Here's what this looks like in practice:
Before clinic: I review my patient list and update my standard slide deck if needed. If I know I'm seeing a complex case, I'll pre-generate explanations and images for the likely discussion topics.
During the visit: I start with my standard explanation. If the patient isn't following, I pull out my phone and reframe on the fly with an LLM. This takes 30 seconds and gives me a completely different angle to try.
For visual learners: I'll generate quick images right there if needed. "Show me a simple diagram of..." works surprisingly well when patients need to see something to understand it.
Post-visit: If I found a particularly good explanation or image, I add it to my standard deck for future patients. My materials improve continuously based on what actually works with real patients.
The goal isn't to replace human connection. It's to make that connection more effective. I'm still the one talking to the patient, reading their body language, answering their questions. AI just helps me say things in ways that actually land.
What This Isn't: Safety Considerations
Always Review Before Sharing
AI-generated patient materials must be reviewed by a clinician before reaching any patient. Check for accuracy, appropriate reading level, and cultural sensitivity. The AI draft is a starting point -- your clinical judgment is the quality gate.
What I'm not doing:
I'm not using AI to make medical decisions. I'm using it to communicate decisions I've already made based on evidence and clinical judgment.
I'm not copy-pasting AI responses to patients. I'm using them as starting points that I adapt based on the actual conversation and the specific patient in front of me.
I'm always fact-checking medical content. LLMs can hallucinate. If an explanation includes specific medical facts or numbers, I verify them before using them. The explanation structure might come from AI, but the medical accuracy comes from me.
AI-generated images supplement; they don't replace. Patients still get standard educational materials, written information, and access to nurses and other resources. The AI-generated stuff just makes the initial explanation more effective.
Privacy matters. I never put patient-specific information into public LLMs. The prompts are about generic medical concepts, not "explain this to John Smith who has X diagnosis." See our safety guide for more on protecting patient data.
This is tool-augmented communication, not automated patient education.
What Actually Changes for Patients
I've been doing this for over a year now. What I've noticed:
Patients ask better questions. When they actually understand the basics, they can ask about what really matters to them instead of trying to decode medical jargon.
Fewer anxious portal messages. When patients leave with genuine understanding instead of nodding politely, they don't spiral at home trying to make sense of what I said.
Better adherence. Patients who understand why they're doing 30 treatments instead of 5 are more likely to show up for all 30.
More informed decision-making. When patients truly grasp the trade-offs between treatment options, they make choices that align with their values instead of just going with whatever I recommend.
Less repetition. I used to explain the same concepts three or four times per visit. Now it's usually once or twice, which means more time for questions that actually matter.
The feedback I get most often: "That actually made sense." That's what we're going for.
This Works Beyond Radiation Oncology
I practice radiation oncology, so my examples come from that world. But this approach works for any specialty where you're explaining complex concepts to patients who don't have medical backgrounds.
Surgery? Use AI to help explain procedures, recovery timelines, and why you're recommending one approach over another.
Cardiology? Heart failure and arrhythmias are abstract and terrifying. AI can help you find analogies and visuals that make them concrete.
Endocrinology? Diabetes management involves dozens of small decisions every day. AI-generated explanations can help patients understand the why behind each one.
The pattern is the same: use your medical expertise to make the decision, then use AI to help communicate that decision in ways that actually work for the human being in front of you.
Getting Started
If you want to try this, start small:
- Pick one concept you explain frequently that patients consistently struggle with.
- Craft a specific prompt that includes who your patient is, what they need to understand, and what you want them to be able to do with that information.
- Try it with the next three patients who need that explanation. See if it helps.
- Iterate. The first prompt rarely gives you exactly what you need. Refine it based on what actually works.
- Build a library. When you find explanations and images that work, save them (one minimal way to do this is sketched after the list). Your patient education materials should improve continuously.
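For that last step, here is one minimal way to handle the "save them" part; the file name and save_prompt helper are hypothetical, and a notes app or shared document works just as well.

```python
import json
from pathlib import Path

LIBRARY = Path("patient_education_prompts.json")  # hypothetical file name

def save_prompt(topic: str, prompt: str) -> None:
    """Append a prompt that worked to a simple topic-keyed JSON library."""
    library = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    library.setdefault(topic, []).append(prompt)
    LIBRARY.write_text(json.dumps(library, indent=2))

save_prompt(
    "radiation dermatitis",
    "Explain radiation dermatitis to someone who's worried about 'getting burned.' "
    "Make it clear what they'll actually experience and when to worry versus when it's normal.",
)
```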
Don't try to revolutionize your entire practice overnight. Just make one explanation better. Then make another one better. Over time, you'll have a toolkit of AI-assisted explanations that make you more effective at the actual job of medicine: helping patients understand and make decisions about their health.
This isn't about replacing physicians with AI. It's about physicians using AI to be better at the parts of the job that matter most. Patient education is one of those parts.
Your medical training gives you the knowledge. AI can help you communicate that knowledge in ways that actually work. Use both.
