Guide · Beginner

Okay, I'm Curious. How Do I Actually Start?

Ramez Kouzy, MD · 7 min read · Step 3 of 10 in Guided Path

What you'll learn

  • Setting up ChatGPT and Claude accounts
  • Summarizing papers with AI
  • Using AI for clinical reasoning support
  • Drafting professional communication with AI
  • Practical tips for better AI interactions

Beyond the Party Tricks

I am going to assume you have already tried the basics. You have asked ChatGPT to write an email. Maybe you asked it a medical question and got a textbook answer. Perhaps you had it summarize something.

Those are the party tricks, and they are fine - but they are not why this technology is worth your time.

The real value shows up when you use AI as a thinking partner for the messy, unstructured parts of your work. The parts where you have scattered thoughts, half-formed research questions, or a clinical problem you want to reason through out loud.


Step 1: Pick Your Tools

You need at least one, ideally two. They are all free to start.

Claude (claude.ai)
What it is: Anthropic's model. Tends to give nuanced, careful responses and handles long documents well. The free tier gives you Claude Sonnet.
Best for: Clinical thinking and writing. My daily driver.

ChatGPT (chat.openai.com)
What it is: The most widely known. GPT-4o on the free tier. Excellent voice mode and native web browsing.
Best for: Voice brainstorming and quick factual lookups.

Gemini (gemini.google.com)
What it is: Google's model with built-in Google Search grounding. Deeply integrated with Google Workspace.
Best for: Live web results and Google Docs/Gmail integration.

My honest take: try all three. They are free. You will develop preferences quickly - I use different ones for different tasks, and you probably will too.

For now, open at least one and keep it in a tab.

If you want a visual walkthrough of the ChatGPT basics, the beginner video embedded in this post covers the interface well.


Step 2: Try These Three Things Right Now

Do not just read these. Actually do them. Each one demonstrates a fundamentally different way AI can fit into your workflow.

Exercise 1: Upload a Paper and Have a Conversation With It

Go to PubMed. Find a paper you have been meaning to read - something dense, maybe a phase III trial or a meta-analysis in your disease site. Download the PDF.

Now upload it directly into Claude or ChatGPT (both support PDF uploads on the free tier). Then type:

"I just uploaded a paper. Give me a structured summary: study design, patient population, key findings, and the most important limitations. Then tell me what this means for clinical practice in radiation oncology."

Look at what comes back. It is not a generic abstract summary - it is a structured analysis of the full paper, tailored to your question.

Follow up with anything: "What fractionation did they use?" "How does this compare to the RTOG 0617 results?" "Would you change management based on this?"

This is not summarization. This is having a research conversation with a paper.
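You do not need any code for this - the browser upload is the whole exercise. But if you ever want to run the same structured-summary prompt over a folder of papers, the scripted version looks roughly like the sketch below. This is a minimal sketch assuming Anthropic's Python SDK; the file name and model ID are placeholders, not something from this guide.

```python
# Rough sketch: Exercise 1 via Anthropic's Python SDK instead of the web UI.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in your environment.
# The file path and model ID below are placeholders - check the current model list.
import base64
import anthropic

client = anthropic.Anthropic()

with open("phase3_trial.pdf", "rb") as f:  # placeholder: the paper you downloaded from PubMed
    pdf_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever Sonnet model is current
    max_tokens=1500,
    messages=[{
        "role": "user",
        "content": [
            # Attach the full PDF as a document block, then ask the same question as above.
            {"type": "document",
             "source": {"type": "base64", "media_type": "application/pdf", "data": pdf_b64}},
            {"type": "text", "text": (
                "Give me a structured summary: study design, patient population, "
                "key findings, and the most important limitations. Then tell me what "
                "this means for clinical practice in radiation oncology."
            )},
        ],
    }],
)

print(response.content[0].text)
```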

Exercise 2: Ramble Into Your Phone and Let AI Organize Your Thoughts

Open ChatGPT on your phone. Press the voice button (the headphone icon). Now just... talk.

Pick a topic you have been thinking about - a research question, a clinical problem, a talk you need to prepare. Do not organize your thoughts first. Ramble. Say everything that comes to mind, in whatever order it comes.

"So I've been thinking about this idea for a retrospective study looking at... actually wait, the real question is whether... and I saw a paper last week that suggested... but the problem is the data might not capture..."

When you are done, ask: "Organize my thoughts into a structured outline. Identify the core research question, the key assumptions I'm making, and any blind spots or gaps in my thinking. Push back where you think I'm being vague."

What you get back is your own messy thinking, structured. The model did not generate ideas - it organized yours and challenged you on the weak points.

This is what a great research mentor does in a meeting, except it is available at 11 PM on a Tuesday.
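The voice workflow lives in the phone app, but if you prefer recording voice memos and processing them later, roughly the same loop can be scripted: transcribe, then organize. A minimal sketch assuming the OpenAI Python SDK, with the audio file name and model IDs as placeholders:

```python
# Rough sketch: export a voice memo, transcribe it, then ask for structure and pushback.
# Assumes `pip install openai` and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the rambling voice memo (placeholder file name).
with open("ramble.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask the model to organize the transcript and challenge the weak points.
chat = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{
        "role": "user",
        "content": (
            "Here is a transcript of me thinking out loud:\n\n"
            f"{transcript.text}\n\n"
            "Organize my thoughts into a structured outline. Identify the core research "
            "question, the key assumptions I'm making, and any blind spots or gaps in my "
            "thinking. Push back where you think I'm being vague."
        ),
    }],
)

print(chat.choices[0].message.content)
```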

Exercise 3: Stress-Test a Research Idea

Take a document you have been working on - a draft specific aims page, a scrambled set of notes for a grant, a rough outline with half-formed hypotheses. It can be messy. It does not matter.

Paste or upload it and type:

"This is a rough draft of my thinking about a research question. I want you to do three things: (1) Identify the core hypothesis and whether it is clearly stated, (2) List the assumptions I am making - especially the ones I might not realize I am making, (3) Point out the biggest blind spots in my reasoning. Be critical. I want this to get better, not to feel good about it."

This is where AI moves from "useful tool" to "essential workflow."

Getting rigorous pushback on your ideas normally requires booking time with a mentor, presenting at a lab meeting, or waiting for Reviewer 2 to tear you apart. Now you can get a first pass of intellectual pressure-testing on demand.
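As with Exercise 1, this works entirely in the chat window. For the curious, the scripted equivalent is just "read the draft, append the critique prompt" - a rough sketch assuming the Anthropic SDK again, with the draft file name and model ID as placeholders:

```python
# Rough sketch: stress-test a plain-text draft with the same critique prompt as above.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in your environment.
import anthropic

client = anthropic.Anthropic()

with open("specific_aims_draft.md", "r", encoding="utf-8") as f:  # placeholder draft file
    draft = f.read()

critique = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=1500,
    messages=[{
        "role": "user",
        "content": (
            "This is a rough draft of my thinking about a research question:\n\n"
            f"{draft}\n\n"
            "I want you to do three things: (1) Identify the core hypothesis and whether "
            "it is clearly stated, (2) List the assumptions I am making - especially the "
            "ones I might not realize I am making, (3) Point out the biggest blind spots "
            "in my reasoning. Be critical. I want this to get better, not to feel good "
            "about it."
        ),
    }],
)

print(critique.content[0].text)
```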


What You Just Learned

If you actually did those exercises, you experienced something different from the email-drafting party tricks:

Three Ways AI Extends Your Work

  • Deep document interaction - AI can read your papers with you, not just summarize them.
  • Thought organization - your messy voice notes become structured outlines with critical analysis.
  • Intellectual stress-testing - your ideas get challenged before they reach a reviewer.

None of these replace your expertise. They extend it.


A Few Practical Tips

Be specific about who you are. "I'm a radiation oncologist evaluating..." or "I'm writing a grant for..." gives the model context to give you relevant, useful responses instead of generic ones.

Tell it to push back. By default, AI tends to agree with you (we cover this in detail in the next article). Explicitly asking for criticism, assumptions, and blind spots overrides that tendency.

Use voice mode for brainstorming. Typing forces you to organize thoughts before sharing them. Voice lets you think out loud. For brainstorming, voice is better.

Try Gemini for factual lookups. If you need current, grounded information - "what were the key abstracts on AI at ASTRO 2025?" - Gemini's Google Search integration gives you search-grounded answers where the other models are drawing from training data.

Iterate. Your first prompt rarely produces the perfect output. Follow up: "Go deeper on point 3," "What am I missing?", "Now rewrite this as if I'm presenting to a tumor board." The conversation is the tool, not the first response.
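If you eventually move beyond the chat window - a custom GPT, a Claude Project, or a small script - you can bake the "who you are" and "push back" tips into a standing system prompt so you never retype them. A minimal sketch assuming the OpenAI Python SDK; the specialty wording and model ID are placeholders to adapt:

```python
# Rough sketch: a reusable system prompt that carries your context and a standing
# request for criticism with every question. Assumes `pip install openai` and
# OPENAI_API_KEY set in your environment; the model ID is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are assisting a radiation oncologist with research and clinical reasoning. "
    "Be specific, explain your reasoning, and do not simply agree: point out weak "
    "assumptions, blind spots, and vague claims, and push back when my thinking is sloppy."
)

def ask(question: str) -> str:
    """Send one question with the standing context and pushback instruction attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Stress-test this hypothesis: ..."))
```

The point is not the code - it is that your context and a standing request for criticism can travel with every question you ask, instead of being retyped each time.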
