
The Researcher's Guide to LLMs

Ramez Kouzy

I wrote this guide because I kept seeing the same pattern: researchers either dismissing AI as hype or using it badly. Neither approach serves science well.

Large language models have fundamentally changed how research gets done. Not in the breathless, revolutionary sense that tech companies want you to believe, but in quiet, practical ways that compound over time. Literature reviews that took weeks now take days. Grant sections that required painful iteration now flow more naturally. Data analysis that demanded hours of Stack Overflow searches now happens in conversation.

But most researchers I talk to either don't know where to start, or they've tried ChatGPT once, gotten mediocre results, and given up. This guide exists to fix that.

Who This Is For

This guide is for researchers across all of medicine, science, and academia. I'm a radiation oncologist, but nothing here is specific to oncology or even medicine. Whether you're studying protein folding, climate models, social networks, or cancer biology, the workflows are the same.

You don't need to be "technical." You don't need to understand transformers or attention mechanisms. You just need to be willing to experiment and iterate.

What You'll Learn

This guide covers AI across the entire research lifecycle:

  1. Literature Review — Finding papers, synthesizing evidence, identifying gaps
  2. Writing — Drafting manuscripts, editing prose, meeting journal standards
  3. Data Analysis — Writing analysis code, interpreting results, creating visualizations
  4. Brainstorming — Generating hypotheses, designing studies, finding novel angles
  5. Grant Writing — Crafting specific aims, arguing significance, responding to reviews
  6. Tools Comparison — Which AI tool for which task, when to upgrade, what's worth paying for

Each section includes practical workflows, actual prompts you can use, and honest assessments of what works and what doesn't.

How to Use This Guide

You can read sequentially or jump to what you need right now. If you're preparing a grant, start with AI for Grant Writing. If you're drowning in papers, go to AI for Literature Review.

I recommend everyone read the Tools Comparison section. Choosing the right tool for the task matters more than most people realize. Claude and ChatGPT are not interchangeable. Neither are Elicit and Perplexity.

Quick Reference: Which Tool for Which Task

| Task | Best Tools | Why |
|------|-----------|-----|
| Literature search | Elicit, Semantic Scholar, Consensus | Purpose-built for research papers, cite actual sources |
| Reading papers | Claude (Projects), NotebookLM | Long context windows, can ingest full PDFs |
| Writing manuscripts | Claude, ChatGPT-4 | Strong at academic prose, follows style guides |
| Editing/polishing | Claude (Sonnet) | Best at preserving your voice while improving clarity |
| Data analysis | ChatGPT (Code Interpreter), Claude | Can execute code, iterate on errors, explain results |
| Statistical questions | Claude, Perplexity (Pro) | Good at explaining concepts, citing methodology papers |
| Brainstorming ideas | Claude (Opus), ChatGPT-4 | Strong reasoning, creative hypothesis generation |
| Grant writing | Claude (Opus) | Handles long documents, maintains consistency across sections |
| Quick questions | Perplexity, ChatGPT | Fast, cites sources, good for spot checks |
| Visual analysis | ChatGPT-4 (Vision), Claude | Can analyze figures, extract data from graphs |

What AI Is Good At (and What It Isn't)

AI is excellent for:

  • Synthesizing information from multiple sources
  • Generating first drafts that you edit heavily
  • Explaining complex concepts in different ways
  • Finding patterns in literature you might have missed
  • Writing boilerplate (methods sections, standard protocols)
  • Reformatting and restructuring text
  • Brainstorming alternatives and edge cases
  • Translating jargon between fields

AI is terrible at:

  • Knowing what it doesn't know (it confidently hallucinates)
  • Citing sources accurately (always verify citations)
  • Understanding nuance in cutting-edge research
  • Original insight (it recombines, doesn't create)
  • Judging quality (it can't tell good studies from bad)
  • Statistical rigor (it makes plausible-sounding errors)

The key is treating AI as a very knowledgeable but unreliable research assistant. You're the principal investigator. You make the decisions. You verify everything.

The Cardinal Rules

Before you dive into specific workflows, commit these to memory:

1. Never trust a citation without verifying it.
LLMs hallucinate papers that sound real but don't exist. Every single citation needs manual verification.
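Manual verification is the only real safeguard, but you can at least screen AI-supplied DOIs for malformed strings before you chase them down. Here is a minimal, illustrative sketch in Python; the regex follows the pattern Crossref recommends for matching modern DOIs, and the function name `looks_like_doi` is my own:

```python
import re

# Rough syntax check using the pattern Crossref recommends for
# modern DOIs. A string that passes can still be a hallucinated
# DOI, so this only catches malformed citations, not invented ones.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string is at least shaped like a DOI."""
    return bool(DOI_PATTERN.match(doi.strip()))
```

Even a syntactically valid DOI then needs the manual step: resolve it at doi.org, confirm the paper exists, and confirm it actually says what the AI claims it says.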

2. Never submit AI-generated text without substantial editing.
AI writes like an AI. It's verbose, hedges constantly, and lacks your voice. Edit ruthlessly.

3. Disclose AI use when required.
Most journals now have policies. Follow them. This isn't optional.

4. AI makes statistical errors that sound correct.
Don't trust AI for statistical advice without understanding the concepts yourself or consulting a statistician.

5. Your research integrity is your responsibility.
AI is a tool. Using it well requires judgment. Using it badly damages science.

What This Guide Isn't

This isn't a tutorial on prompt engineering. I include prompts that work, but I'm not teaching you to craft the perfect system message or use obscure tokens.

This isn't a comparison of model architectures. I don't care if you understand what RLHF is. I care if you can write a better grant.

This isn't cheerleading for AI. I'm not here to convince you that LLMs will revolutionize research. They're tools. Some things they do well. Some things they don't.

This guide is practical. I want you to finish reading a section and immediately apply it to your work.

Start Where You Are

You don't need to master everything at once. Pick one workflow that would help you right now and try it. If you're writing a paper, read the Writing guide and try one prompt today.

Most researchers I know started small — using AI to polish an abstract or search for a specific set of papers — and gradually expanded as they saw what worked.

That's the right approach. Start small. Iterate. Build intuition.

A Note on Cost

Most of what I recommend can be done with free tiers. ChatGPT Plus ($20/month) and Claude Pro ($20/month) are worth it if you use AI daily, but you can get substantial value without paying.

I note costs throughout the guide. For most academic work, you'll spend $0-40/month. That's a rounding error compared to what your university pays for journal subscriptions.

Let's Get Started

The rest of this guide is organized by research task. Each section is self-contained, includes practical examples, and ends with key takeaways.

If you read nothing else, read the Tools Comparison. Choosing the right tool is half the battle.

If you want immediate value, pick the section that matches what you're working on this week and try one workflow.

Research is hard enough without ignoring tools that make it easier. Let's put these models to work.


Key Takeaways

  • LLMs are practical tools for research, not magic solutions or useless hype
  • Every field can benefit — workflows apply across medicine, science, and academia
  • Start with one task — pick literature review, writing, or data analysis and master it
  • The right tool matters — Claude, ChatGPT, Elicit, and Perplexity excel at different things
  • Verify everything — AI hallucinates citations, makes statistical errors, and confidently states falsehoods
  • Disclose when required — journal policies on AI use are evolving; follow them
  • Cost is minimal — most work can be done free or for $20-40/month
  • You remain responsible — AI assists, you decide; your research integrity isn't delegable

Continue to: AI for Literature Review and Synthesis →
