Your Prompt Is the Prescription
In medicine, a vague order gets vague results. "Give some pain medicine" is not the same as "Morphine 4mg IV q4h PRN for severe pain."
The same principle applies to AI. The quality of what you get out is almost entirely determined by what you put in.
This concept - shaping your input to get better output - is called prompt engineering. It sounds technical, but it is really just the skill of asking good questions. You already do this with consultants, residents, and colleagues. Now you need to do it with a model.
If you prefer a visual walkthrough, Jeff Su's video on prompt engineering is an excellent companion to this article.
The Five Principles That Actually Matter
1. Give Context. Always.
Models have no idea who you are unless you tell them. "What's the best treatment for prostate cancer?" will get you a Wikipedia-level overview.
Compare that to:
"I'm a radiation oncologist seeing a 68-year-old patient with Gleason 4+3 prostate cancer, PSA 15, T2c. He's interested in definitive treatment and wants to understand his options. Help me think through EBRT with ADT versus brachytherapy boost versus surgery, considering his age and risk group."
Same question at its core. Wildly different output. The second version gives the model enough context to produce something actually useful to your clinical reasoning.
Think of it like presenting a case: the more relevant detail you provide, the better the "consultant" can respond.
2. Ask for Reasoning, Not Just Answers
"Should I use protons for this pediatric medulloblastoma case?" will get you "yes" with some generic justification.
Instead, try:
"Walk me through the arguments for and against proton therapy versus IMRT for a 6-year-old with standard-risk medulloblastoma. Include dosimetric considerations, late effects data, and any relevant clinical trials. What factors would tip you toward one approach?"
When you ask for reasoning, the model shows its work. You can evaluate the logic, catch errors, and engage with the thinking rather than just accepting or rejecting a conclusion.
3. Beware Sycophancy - Models Will Agree With You
This is the most underappreciated problem in clinical AI use. Language models are trained to be helpful, and "helpful" often means "agreeable."
If you frame your question with a built-in assumption, the model will usually validate it - even when you are wrong.
The Sycophancy Problem
If you go into an AI conversation with a wrong assumption and phrase your questions to confirm it, you will walk out feeling validated and still being wrong. Always frame questions neutrally. Ask for pros and cons, not confirmation.
Try this experiment yourself. Ask:
"Isn't it true that concurrent cetuximab is equivalent to cisplatin for HPV-positive oropharyngeal cancer?"
The model will likely agree with your framing, maybe with mild caveats. Now ask:
"Compare the evidence for concurrent cisplatin versus cetuximab in HPV-positive oropharyngeal cancer. What do the RTOG 1016 and De-ESCALaTE trials show?"
Completely different response. The second version does not lead the witness. It asks for a balanced comparison and lets the evidence speak.
4. Tell It to Push Back
You can explicitly override the model's tendency toward agreement:
"I'm going to describe my treatment plan for this patient. I want you to play devil's advocate. Challenge my reasoning, point out what I might be missing, and suggest alternatives I haven't considered. Do not just agree with me."
This is one of the most valuable uses of AI in clinical thinking - having a tireless contrarian who will challenge your reasoning without ego or hierarchy getting in the way.
Use it.
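If you interact with these models through an API rather than a chat window, you can bake the instruction in once and reuse it for every case. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name, the wording of the instruction, and the example plan are placeholders, not a vetted clinical tool. Pasting the same standing instruction at the top of a chat conversation works just as well.

```python
# Minimal sketch of a reusable "devil's advocate" instruction, assuming the
# OpenAI Python SDK. Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEVILS_ADVOCATE = (
    "I am going to describe my treatment plan. Play devil's advocate: "
    "challenge my reasoning, point out what I might be missing, and suggest "
    "alternatives I have not considered. Do not just agree with me."
)

plan = "68-year-old, Gleason 4+3 prostate cancer, PSA 15, T2c. Plan: EBRT with 6 months of ADT."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": DEVILS_ADVOCATE},  # standing instruction
        {"role": "user", "content": plan},               # the plan to be challenged
    ],
)
print(response.choices[0].message.content)
```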
5. Iterate. Your First Prompt Is a Draft.
The most common mistake is treating the AI interaction as a single exchange: you ask, it answers, you are done. The real power is in the conversation.
Start broad, then narrow: "Help me think about treatment options for this case" followed by "Focus on the radiation approach - what fractionation schemes should I consider?" followed by "Compare 60 Gy in 30 fractions versus 70 Gy in 35 fractions for this scenario."
Ask for revisions: "Make this more concise." "Explain that section as if you're talking to a medical student." "What am I missing?"
Push back on the model: "I don't think that's right. The PACIFIC trial was for stage III NSCLC, not stage II. Correct your response."
The conversation is the tool. Each follow-up refines the output.
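For API users, "the conversation is the tool" just means the message history keeps growing: each follow-up is sent along with everything that came before, so the model refines its earlier answer instead of starting over. A minimal sketch, assuming the OpenAI Python SDK and a placeholder model name:

```python
# Minimal sketch of iterating within one conversation. The running message
# list is resent on every turn so follow-ups build on earlier answers.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; substitute whatever model you use

def ask(messages: list[dict]) -> str:
    """Send the running conversation and return the model's latest reply."""
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the thread
    return answer

messages = []

# Start broad...
messages.append({"role": "user", "content": "Help me think about treatment options for this case: ..."})
print(ask(messages))

# ...then narrow with follow-ups that build on the earlier answers.
messages.append({"role": "user", "content": "Focus on the radiation approach - what fractionation schemes should I consider?"})
print(ask(messages))

messages.append({"role": "user", "content": "Compare 60 Gy in 30 fractions versus 70 Gy in 35 fractions for this scenario."})
print(ask(messages))
```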
The Framing Bias Challenge
Here is something I want you to try this week. Take a clinical question you have been thinking about and ask it two different ways:
Version A: Frame it with your existing assumption. "Isn't it better to use SBRT for early-stage NSCLC in elderly patients rather than conventional fractionation?"
Version B: Frame it neutrally. "What does the evidence say about SBRT versus conventionally fractionated radiation for early-stage NSCLC in patients over 75? What are the tradeoffs?"
Compare the responses. Notice how Version A gives you confirmation and Version B gives you a balanced analysis.
Now ask yourself: which one is actually more useful for your clinical decision-making?
This exercise alone will make you a dramatically better AI user. Once you see the difference framing makes, you cannot unsee it.
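If you want to run the comparison systematically, the sketch below sends both versions to the same model and prints the answers side by side. It assumes the OpenAI Python SDK and a placeholder model name; copying the two prompts into separate chat sessions accomplishes exactly the same thing.

```python
# Minimal sketch of the framing experiment: the same clinical question asked
# two ways, answered by the same model, printed side by side for comparison.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

version_a = ("Isn't it better to use SBRT for early-stage NSCLC in elderly "
             "patients rather than conventional fractionation?")
version_b = ("What does the evidence say about SBRT versus conventionally "
             "fractionated radiation for early-stage NSCLC in patients over 75? "
             "What are the tradeoffs?")

for label, prompt in [("Leading (Version A)", version_a), ("Neutral (Version B)", version_b)]:
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {label} ===\n{reply.choices[0].message.content}\n")
```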
Quick Reference: Good Prompts vs. Bad Prompts
| Bad Prompt | Why It Fails | Better Prompt |
|---|---|---|
| Summarize this paper | No structure or focus specified | Summarize this paper's methods, key findings, and 3 limitations |
| Is this treatment good? | Too vague - no context or comparison | Compare X vs Y for stage III NSCLC, citing key trials and tradeoffs |
| Write me an abstract | No format, context, or content provided | Write a structured abstract (Background, Methods, Results, Conclusion) for a retrospective N=200 study on re-irradiation for head and neck cancer. Here's my data: [...] |
| Tell me about SBRT for liver mets | Generic request with no clinical context | I'm a radiation oncologist evaluating a patient with 3 colorectal liver metastases, each under 3 cm, who has progressed on first-line FOLFOX. Walk me through the evidence for SBRT in this scenario, including patient selection criteria, dose-fractionation options, and expected outcomes |
The Bottom Line
The difference between a useless AI interaction and a genuinely helpful one is almost always the prompt. Specificity, context, and explicit format instructions turn a mediocre tool into a powerful one.
You do not need to learn a programming language or memorize prompt templates. You need to internalize three habits:
Three Habits That Matter
1. Give context - Tell the model who you are and what you are trying to do.
2. Ask for reasoning - Do not just ask for answers, ask the model to show its work.
3. Do not lead the witness - Frame questions neutrally and explicitly ask for pushback.
Do those three things consistently and you will get more value out of AI tools than most power users.
The model is only as good as your questions. Ask better questions.
