The Research Bottleneck Is Not Intelligence
| Research Phase | How AI Helps | What It Cannot Do |
|---|---|---|
| Literature review | Summarize papers, find gaps | Replace systematic search methodology |
| Writing | Draft sections, improve clarity | Replace your voice and original thinking |
| Data analysis | Write code, explain stats, iterate | Guarantee statistical correctness |
| Peer review response | Draft rebuttals, organize revisions | Know what the reviewer meant |
You're not slow at research because you lack ideas or expertise. You're slow because research involves enormous amounts of tedious work: searching for papers, reading abstracts, extracting data, formatting references, drafting boilerplate sections, cleaning datasets, running basic analyses, revising prose that's already good but not quite right.
AI doesn't replace the intellectual core of your research—the question, the design, the interpretation. But it can dramatically compress the time you spend on the tedious infrastructure around that core. For a clinician-scientist balancing a full clinical load with research, that time savings is the difference between publishing and not publishing.
Literature Search: Finding What Matters
The traditional approach: type keywords into PubMed, scan hundreds of titles, open dozens of tabs, read abstracts, download promising papers, repeat until you feel you've covered the landscape. It works, but it takes hours.
Elicit changed my literature workflow. It's an AI-powered research assistant built specifically for academic work. You type a research question in natural language—not keywords, but an actual question—and it returns relevant papers with AI-generated summaries of each paper's key findings, methods, and results.
A practical example: Say you're starting a project on reirradiation for recurrent head and neck cancer and want to understand the landscape of published fractionation schemes.
- Go to elicit.com
- Type: "What fractionation schemes have been used for re-irradiation of recurrent head and neck squamous cell carcinoma?"
- Elicit returns a list of relevant papers, each with a summary of the dose-fractionation used, patient population, and outcomes
- Filter by study type, extract specific data columns, export everything to a spreadsheet
What used to be a full afternoon of PubMed searching becomes a 30-minute focused exploration. You still read the key papers in full—AI doesn't replace that—but it dramatically compresses the triage phase.
Other tools worth knowing:
- Semantic Scholar (semanticscholar.org) — AI-powered academic search with often better relevance ranking than Google Scholar. Shows citation context (how a paper is cited by others).
- Consensus (consensus.app) — Searches published papers and synthesizes evidence for or against a claim. Good for quick "what does the evidence say about X?" questions.
- Connected Papers (connectedpapers.com) — Visual tool that maps related papers as a graph. Great for discovering work you might have missed.
Writing Assistance: Steering, Not Autopilot
AI is exceptionally useful for research writing, but the framing matters. You're not asking the model to write your paper. You're using it to get past the blank page, improve your prose, and accelerate revision.
The Critical Rule
Every word in your manuscript should be reviewed, verified, and approved by you. AI-generated text is a starting point. Your expertise, your data, and your name are on the paper.
First drafts: Give the model your outline, key points, and the tone you want, then ask it to draft a section. The output won't be publishable as-is, but it gives you a solid scaffold to edit into your own voice and fill with your real content. This is particularly valuable for sections that follow predictable structures: introductions, methods, parts of the discussion.
"Here are the key points for my Introduction section on reirradiation for recurrent head and neck cancer: [your bullet points]. Draft an introduction that covers these points in a logical flow, written for a radiation oncology journal audience."
Revision and editing: Paste a draft section and ask for specific feedback.
"Review this Results section for clarity and conciseness. Point out any sentences that are unnecessarily complex and suggest simpler alternatives. Flag any logical gaps in the presentation."
Responding to reviewers: This is one of the most time-efficient uses of AI in research. Paste the reviewer comment and your draft response, and ask the model to help you make the response more precise, diplomatic, and complete.
Data Analysis: Code You Don't Have to Write
If you've ever stared at a dataset in Excel and wished you could just describe what analysis you wanted in plain English, this section is for you.
ChatGPT with Code Interpreter lets you upload a dataset (CSV, Excel, etc.) and describe what you want to do. The model writes Python code, executes it, and shows you the results—tables, plots, statistical tests, all without you writing a single line of code.
Example workflow:
- Upload your dataset of treatment outcomes
- "Show me the distribution of overall survival by treatment group. Generate Kaplan-Meier curves. Run a log-rank test comparing the groups."
- The model writes the code, runs it, and shows you the results, including plots you can refine into publication-quality figures
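It's worth being able to sanity-check what the model produces. The product-limit calculation behind a Kaplan-Meier curve is simple enough to verify by hand; here is a minimal sketch in plain Python, with invented follow-up data (this is an illustration of the estimator, not the code a model would necessarily write):

```python
# Minimal Kaplan-Meier product-limit estimator, no survival library.
# Follow-up times and event flags below are invented example data.
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []  # (event time, survival probability) pairs
    for i in order:
        if events[i] == 1:
            surv *= (at_risk - 1) / at_risk  # product-limit step
            curve.append((times[i], round(surv, 3)))
        at_risk -= 1  # events and censored subjects both leave the risk set
    return curve

times = [5, 8, 12, 16, 20, 24]
events = [1, 0, 1, 1, 0, 1]
print(kaplan_meier(times, events))
# [(5, 0.833), (12, 0.625), (16, 0.417), (24, 0.0)]
```

If the model's curve disagrees with a hand check like this on a small subset of your data, that's your cue to dig into the generated code before trusting any figure built from it.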
Claude with Artifacts offers similar capability—you can ask it to generate analyses and visualizations that you can preview and iterate on interactively.
This isn't a replacement for a biostatistician on complex analyses. But for exploratory data analysis, quick descriptive statistics, data cleaning, and generating preliminary figures, it's remarkably efficient. It's also an excellent way to learn basic data science: you can see the code the model writes and start understanding the logic behind the analyses.
Systematic Reviews: Where AI Shines Most
Systematic reviews involve some of the most tedious work in research: screening thousands of titles and abstracts, extracting data from dozens of papers into standardized tables, assessing quality across studies. AI tools are making this dramatically more efficient.
Several workflows are emerging:
- Title/abstract screening: Use AI to do a first pass, flagging papers as likely relevant or likely irrelevant. You still review, but the AI does the heavy lifting of the initial sort.
- Data extraction: Upload included papers and ask the AI to extract specific data points (sample size, intervention, outcomes, follow-up period) into a structured format.
- Quality assessment: Some tools can help assess risk of bias using standard frameworks, though this still requires human judgment for the final call.
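The first-pass screening workflow above can be sketched as a simple loop. In this sketch, `classify_relevance` is a hypothetical stand-in for a call to a language model; it is stubbed with a trivial keyword rule so the flow is runnable, and the PMIDs and abstracts are invented:

```python
# Sketch of an AI-assisted first-pass title/abstract screen.
# classify_relevance is a placeholder: a real workflow would send the
# abstract plus the review question to a language model and parse its
# verdict. Here a naive keyword rule stands in so the loop runs.
def classify_relevance(abstract, question_keywords):
    text = abstract.lower()
    return any(kw in text for kw in question_keywords)

# Invented example records
abstracts = {
    "PMID-001": "Re-irradiation outcomes in recurrent head and neck cancer...",
    "PMID-002": "A mouse model of cardiac fibrosis...",
}
keywords = ["re-irradiation", "head and neck"]

flagged = {pmid: ("likely relevant" if classify_relevance(text, keywords)
                  else "likely irrelevant")
           for pmid, text in abstracts.items()}
print(flagged)
```

The point of the structure is that every flag is a triage label, not a decision: a human still reviews both piles before anything enters or leaves the review.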
Tools like Elicit, Covidence, and ASReview are building this capability directly into systematic review platforms. If you're planning a systematic review, exploring these tools before starting will save you dozens of hours.
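For the data-extraction step, the structured output is ultimately just a standardized table. A minimal sketch of assembling model-extracted fields into CSV with Python's standard library; the study names and numbers below are invented placeholders, and in a real workflow each row would come from prompting a model with an included paper's full text:

```python
import csv
import io

# Fixed extraction schema keeps every paper's data in the same shape.
FIELDS = ["study", "n", "intervention", "median_followup_months"]

# Invented placeholder rows standing in for model-extracted values.
extracted = [
    {"study": "Study A", "n": 48,
     "intervention": "SBRT re-irradiation", "median_followup_months": 18},
    {"study": "Study B", "n": 112,
     "intervention": "IMRT re-irradiation", "median_followup_months": 24},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(extracted)
print(buf.getvalue())
```

Enforcing a fixed schema like this also makes verification easier: you can spot-check any cell against the source paper, which you should still do for every value that ends up in the manuscript.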
The Workflow: AI as Research Infrastructure
The Human Is in the Driver's Seat
- Finding papers: AI searches and summarizes (you decide what's relevant)
- Reading papers: AI extracts key data (you interpret and synthesize)
- Writing: AI drafts and edits (you verify and finalize)
- Analysis: AI runs code (you interpret results and check assumptions)
- Revision: AI helps with reviewer responses (you make the scientific decisions)
The best way to think about AI in research is as infrastructure that handles the mechanical work so you can focus on the intellectual work. At every step, the human is in the driver's seat. AI accelerates the process. It doesn't replace the researcher.
The Bottom Line
If you're a clinician-scientist with limited research time, AI tools aren't optional anymore—they're a competitive advantage. The researchers who adopt these tools will move faster, search more broadly, and produce more polished work than those who don't.
Start with one tool. Elicit for your next literature search is the easiest entry point. Then expand: try ChatGPT Code Interpreter for your next dataset, Claude for your next manuscript draft. Build the workflow that works for you.
The tedious parts of research are solvable problems. Let AI solve them so you can focus on the science.
