Most researchers who use AI for writing use it badly. They treat it like a ghostwriter instead of a writing partner. The result: prose that's technically correct but hollow. No voice, no insight, no point of view.
AI writes like AI. It hedges constantly ("may", "might", "could potentially"), uses passive voice, and produces verbose nothingness. If you submit AI-generated text without heavy editing, reviewers will know. And they won't be impressed.
For a complete overview of using AI across the research lifecycle, see the LLM Research Guide.
But if you use AI correctly — as a collaborator that handles grunt work while you provide substance — you'll write faster and better.
The Three Modes of AI-Assisted Writing
AI helps differently depending on where you are in the writing process:
1. Drafting — Getting words on the page, organizing thoughts, overcoming blank-page paralysis
2. Editing — Improving clarity, fixing structure, cutting verbosity
3. Polishing — Final cleanup, formatting, consistency checks
Each mode needs different prompts and different expectations.
Mode 1: Drafting (AI Writes, You Edit Heavily)
When to Use AI for Drafting
Good use cases:
- Methods sections (standard protocols)
- Background/introduction (synthesizing known information)
- Abstract drafts (organizing key points)
- Outline expansion (turning bullets into prose)
Bad use cases:
- Results section (you need to report your actual data)
- Discussion (requires your interpretation and insight)
- Novel arguments (AI recombines, doesn't create)
Methods Section: The Best Starting Point
Methods sections are perfect for AI because they're descriptive, not interpretive.
Prompt template:
Write a methods section for a study with the following design:
Study Design: [retrospective cohort / RCT / cross-sectional / etc.]
Setting: [academic medical center / community hospital / etc.]
Population: [inclusion/exclusion criteria]
Intervention: [if applicable]
Outcome Measures: [primary and secondary]
Statistical Analysis: [methods used]
Write in past tense, third person, following JAMA style. Be concise.
Example:
Write a methods section for a study with the following design:
Study Design: Retrospective cohort study
Setting: Academic radiation oncology department
Population: Adult patients with stage III NSCLC treated with definitive
chemoradiation between 2015 and 2020. Excluded patients with prior thoracic RT.
Intervention: None (observational)
Outcome Measures: Primary = overall survival. Secondary = progression-free
survival, grade 3+ toxicity.
Statistical Analysis: Kaplan-Meier curves, log-rank test, Cox proportional
hazards for multivariable analysis.
Write in past tense, third person, following JAMA style. Be concise.
The AI draft will need editing (it always does), but it gets you 80% of the way there in 30 seconds.
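If you run this step often, it's also easy to script. Here's a minimal sketch using the Anthropic Python SDK; the model name and the filled-in study fields are placeholders rather than recommendations, and the output needs the same heavy editing as a chat-window draft.

```python
# A minimal sketch of scripting the methods-section prompt with the Anthropic
# Python SDK. Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY
# is set; the model name below is a placeholder, not a recommendation.
from anthropic import Anthropic

PROMPT_TEMPLATE = """Write a methods section for a study with the following design:
Study Design: {design}
Setting: {setting}
Population: {population}
Intervention: {intervention}
Outcome Measures: {outcomes}
Statistical Analysis: {analysis}
Write in past tense, third person, following JAMA style. Be concise."""


def draft_methods(**fields: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whatever model you have
        max_tokens=1000,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(**fields)}],
    )
    return response.content[0].text  # still needs heavy human editing


draft = draft_methods(
    design="Retrospective cohort study",
    setting="Academic radiation oncology department",
    population="Adults with stage III NSCLC treated with definitive chemoradiation, "
               "2015-2020; prior thoracic RT excluded",
    intervention="None (observational)",
    outcomes="Primary: overall survival. Secondary: PFS, grade 3+ toxicity",
    analysis="Kaplan-Meier, log-rank test, multivariable Cox regression",
)
print(draft)
```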
What to fix in AI-generated methods:
- Remove hedging ("data were analyzed" not "data may have been analyzed")
- Add specifics (software versions, exact tests used, alpha levels)
- Cut verbosity (AI loves unnecessary words)
- Verify accuracy (don't let AI describe methods you didn't use)
Introduction/Background: Providing Structure
Don't ask AI to write your introduction from scratch. You'll get a generic literature review.
Bad prompt:
"Write an introduction about machine learning in radiology"
Good prompt:
I'm writing an introduction for a paper on machine learning for lung nodule
detection. I want to make three points:
1. Lung cancer screening has high false positive rates (cite: NLST trial
showed 96% of positive screens were false positives)
2. Radiologist burnout from screening volume is increasing (cite: recent
studies show radiologists read 100+ CTs daily)
3. AI could assist by pre-screening, reducing radiologist workload while
maintaining sensitivity
Write 250 words connecting these points into a coherent narrative. Start
with the clinical problem, then introduce AI as a potential solution.
Avoid hedging language.
You provide the structure and evidence. AI provides the connective tissue. You edit for voice and accuracy.
Abstracts: Organizing Key Points
AI is decent at structuring abstracts if you give it the content.
Prompt template:
Write a 250-word structured abstract for a research paper with these sections:
Background: [1-2 sentences on clinical problem]
Objectives: [specific research question]
Methods: [study design, population, sample size, analysis]
Results: [primary outcome, effect size, p-value]
Conclusions: [clinical implications]
Follow [target journal] style. Be direct and quantitative.
Critical: Fill in the brackets with YOUR actual content. Don't let AI invent results.
The Outline-to-Prose Workflow
This is my most-used AI writing workflow:
1. Write an outline in bullet points (your ideas, your structure)
2. Ask AI to expand bullets into prose
3. Edit heavily for voice and accuracy
Example:
Your outline:
Discussion Section:
- Main finding: intervention reduced readmissions by 30%
- Compares favorably to Smith et al (20% reduction) and Jones et al (25%)
- Possible mechanisms:
  - More structured discharge process
  - Better patient education
  - Earlier follow-up
- Limitations:
  - Single center
  - Selected population
  - Short follow-up (6 months)
- Clinical implications: scalable, low-cost intervention
Prompt:
Expand this outline into a 400-word discussion section. Maintain the
structure and points. Write in first person plural ("we found").
Be direct, avoid hedging. Follow academic style but prioritize clarity.
[paste outline]
The result will need editing, but you'll have a draft in seconds instead of staring at a blank page for an hour.
Mode 2: Editing (AI Improves Your Writing)
Claude excels here. It's better at editing than drafting.
Improving Clarity
Prompt:
I'm revising a manuscript. This paragraph is unclear. Improve clarity
without changing meaning. Cut unnecessary words. Maintain technical accuracy.
[paste paragraph]
Claude will:
- Simplify complex sentences
- Remove hedging ("may", "possibly", "might")
- Cut redundancy
- Improve flow
You must verify it didn't change your meaning.
Reducing Word Count
Prompt:
This section is 800 words but needs to be under 500 for journal limits.
Cut to 450 words while preserving all key points and data.
[paste section]
AI is excellent at this. It identifies redundancy and verbose phrasing you've become blind to.
Fixing Passive Voice
Academic writing overuses passive voice. AI can help fix it.
Prompt:
Convert this paragraph to active voice where appropriate. Keep passive
voice only where it's genuinely better (e.g., when the actor is unknown
or unimportant).
[paste paragraph]
Before: "The data were analyzed using Cox proportional hazards models. Statistical significance was defined as p < 0.05. The analysis was performed using R version 4.2."
After (AI suggestion): "We analyzed data using Cox proportional hazards models. We defined statistical significance as p < 0.05 and performed the analysis using R version 4.2."
Better, but you still need judgment on when passive voice is appropriate.
Improving Flow Between Sections
Prompt:
These two paragraphs don't connect well. Write a transition sentence
that bridges them.
Paragraph 1: [paste]
Paragraph 2: [paste]
AI is surprisingly good at this.
Mode 3: Polishing (Final Cleanup)
Consistency Checks
Prompt:
Check this manuscript for inconsistencies:
- Verb tense (should be past tense in methods/results)
- Abbreviation definitions (defined on first use?)
- Number formatting (decimals, percentages, p-values)
- Citation format
[paste full draft]
This catches errors you miss after reading your paper for the 50th time.
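The simplest of these checks can also be scripted, which gives you a deterministic pass alongside the AI one. A rough heuristic sketch follows; the filename and regexes are illustrative, not exhaustive.

```python
# Rough heuristic checks for two of the consistency items above. The regexes are
# illustrative: they over-flag (Roman numerals, table-defined acronyms) and miss
# edge cases, so treat hits as prompts for a manual look, not definitive errors.
import re


def undefined_abbreviations(text: str) -> list[str]:
    """Flag all-caps tokens that never appear as a parenthetical definition '(ABBR)'."""
    abbreviations = set(re.findall(r"\b[A-Z]{2,6}\b", text))
    return sorted(a for a in abbreviations if f"({a})" not in text)


def p_value_styles(text: str) -> set[str]:
    """Collect the different ways p-values are written (e.g. 'p=0.05' vs 'p < .05')."""
    return set(re.findall(r"[Pp]\s*[<=>]\s*0?\.\d+", text))


manuscript = open("draft.txt").read()  # hypothetical filename
print("Possibly undefined abbreviations:", undefined_abbreviations(manuscript))
print("p-value styles in use:", p_value_styles(manuscript))
```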
Style Guide Compliance
Prompt:
Review this abstract for compliance with JAMA style guidelines:
- Word count under 350
- Structured format (Background, Objectives, Methods, Results, Conclusions)
- No citations
- Quantitative results with effect sizes and confidence intervals
- No subheadings
[paste abstract]
Most LLMs know major journal style guides. They're not perfect, but they catch obvious violations.
Reference Formatting
Don't use AI for this. Use reference managers (Zotero, Mendeley, EndNote). AI hallucinates citations and gets formatting wrong.
Tool Selection: Claude vs ChatGPT vs Gemini
Claude
Best for:
- Editing (preserves your voice better than alternatives)
- Long documents (200+ pages in Projects)
- Technical accuracy (fewer hallucinations)
- Following complex instructions
Use Claude when: You have a draft and want to improve it
ChatGPT (GPT-4)
Best for:
- Quick drafts
- Brainstorming alternative phrasings
- Formatting tasks
- Speed (faster responses than Claude)
Use ChatGPT when: You need speed over nuance
Gemini
Best for:
- Integration with Google Docs
- Long context (up to 1 million tokens)
- Free tier (generous limits)
Use Gemini when: You're working in Google Docs or need very long context
Honest assessment: Claude > ChatGPT > Gemini for most academic writing. See the tools comparison for more details.
Journal Policies on AI Use
Most major journals now have policies. You need to know them.
Data Privacy Warning: Manuscripts Under Review
Before we discuss disclosure policies, there's a critical privacy issue: never paste manuscripts under peer review into public LLMs. This applies whether you're the author revising based on reviewer comments or a reviewer critiquing someone else's work.
Why this matters:
- Manuscripts under review contain unpublished data that could be scooped
- When you paste text into ChatGPT or Claude, that content is sent to the company's servers
- While most providers claim they don't train on your inputs, their privacy policies often allow human review for safety and quality purposes
- You're potentially exposing someone else's intellectual property to a third party
For sensitive manuscript work, use local LLMs:
- Ollama or LM Studio run entirely on your computer (nothing leaves your machine)
- They're sufficient for drafting rebuttals, improving clarity, and editing
- Not as capable as GPT-4/Claude, but private
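As a concrete example, here's a minimal sketch of a local clarity-editing pass with the `ollama` Python package; the model name is an example, and any model you've pulled locally works the same way.

```python
# A minimal sketch of a clarity-editing pass that never leaves your machine,
# using the `ollama` Python package. Assumes Ollama is installed and a model has
# been pulled locally (e.g. `ollama pull llama3.1`); the model name is an example.
import ollama


def edit_locally(paragraph: str) -> str:
    prompt = (
        "Improve clarity without changing meaning. Cut unnecessary words. "
        "Maintain technical accuracy.\n\n" + paragraph
    )
    response = ollama.chat(
        model="llama3.1",  # example model; substitute whatever you have pulled
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]


print(edit_locally("The data were analyzed using Cox proportional hazards models."))
```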
If you must use public LLMs for manuscript revision:
- Work with small snippets (a paragraph at a time) rather than full sections
- Remove all identifying information, citations, and specific details
- Never paste the entire manuscript
This isn't paranoia. It's the same confidentiality standard you apply to discussing unpublished work at conferences.
ICMJE Guidelines (Applies to 1000+ Journals)
- Authorship: AI tools cannot be authors (no accountability)
- Disclosure: Authors must disclose AI use in methods or acknowledgments
- Responsibility: Authors are responsible for accuracy of AI-assisted content
Recommended disclosure language:
"AI tools (Claude/ChatGPT) were used to assist with drafting and editing
portions of this manuscript. All AI-generated content was reviewed and
revised by the authors, who take full responsibility for the final content."
Nature Journals
- AI use must be disclosed in methods section
- AI may not be used for data analysis or image generation without disclosure
- AI-generated text must be substantially revised
NEJM
- AI use must be disclosed
- AI cannot be listed as author
- Manuscripts predominantly written by AI will be rejected
Science
- Requires disclosure in methods
- AI-generated images must be labeled
- Requires transparency about AI assistance
Bottom line: Disclose AI use. It's not shameful. Hiding it is.
What NOT to Do
Don't Submit Unedited AI Text
AI-generated prose has tells:
- Overuse of "delve", "underscore", "landscape" (ChatGPT favorites)
- Constant hedging ("may", "might", "possibly")
- Verbose, meandering sentences
- Lack of specific detail
- No point of view
Reviewers spot this instantly.
Don't Let AI Write Your Results
AI doesn't have your data. If you paste results and ask it to "write the results section," you risk:
- Misreporting numbers
- Overstating significance
- Missing important findings
- Statistical errors
Results require your direct writing and interpretation.
Don't Trust AI with Citations
AI hallucinates papers. Always verify citations manually.
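One safeguard you can script is checking that every DOI in your reference list resolves to a real record. Here's a minimal sketch using the public Crossref REST API; it catches fabricated DOIs, not a real paper cited for the wrong claim, so manual verification still stands.

```python
# A minimal sketch that checks whether each DOI in a reference list resolves to a
# real record via the public Crossref API. It catches fabricated DOIs, but not a
# real paper cited for the wrong claim -- you still read every reference yourself.
import requests


def lookup_doi(doi: str) -> str | None:
    """Return the registered title for a DOI, or None if Crossref has no record."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    return resp.json()["message"]["title"][0]


dois = ["10.1000/example-doi"]  # placeholder; replace with DOIs from your reference list
for doi in dois:
    title = lookup_doi(doi)
    print(f"{doi}: {title or 'NOT FOUND -- possible hallucinated citation'}")
```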
Don't Use AI for Ethics Statements
AI doesn't know your IRB approval number or consent process. Write these sections yourself.
The Right Workflow
Here's how I actually use AI for manuscripts:
1. Outline first (my ideas, no AI)
- Structure the argument
- Identify key points
- List evidence for each claim
2. Write critical sections myself
- Results (reporting my data)
- Key discussion points (my interpretation)
- Novel arguments
3. Use AI for expansion
- Convert outline bullets to prose
- Draft methods from protocol notes
- Synthesize background literature
4. Edit with AI
- Improve clarity
- Cut word count
- Fix passive voice
5. Final polish
- Consistency checks
- Style guide compliance
- Manual citation verification
6. Disclose AI use
- In methods or acknowledgments
- Per journal policy
Practical Examples
Example 1: Converting Bullets to Discussion
My outline:
Our findings show intervention reduced ICU delirium by 40%. This is better
than pharmacologic interventions (typically 15-20% reduction). Likely
mechanisms: improved sleep, reduced sedation, early mobilization.
Limitations: single center, nurse-intensive. But: scalable with training.
Prompt:
Expand this into a 200-word discussion paragraph. Write in first person
plural. Be direct, avoid hedging. Connect the points logically.
[paste outline]
AI output (edited): "We found that our multicomponent intervention reduced ICU delirium incidence by 40%, a substantially larger effect than the 15-20% reduction typically achieved with pharmacologic interventions. We attribute this to three mechanisms: improved sleep architecture through noise reduction and circadian lighting, reduced sedative exposure via protocol adherence, and earlier mobilization reducing the cascade of ICU-acquired weakness. While our single-center design and nurse-intensive protocol are limitations, we believe the intervention is scalable. Our post-implementation training program required only 4 hours per nurse and showed sustained adherence over 18 months."
What I edited: Added specific details (4 hours training, 18 months), removed some hedging, verified the claims matched our data.
Example 2: Improving Clarity
My original: "The utilization of advanced imaging modalities, including but not limited to magnetic resonance imaging and computed tomography, has been increasingly adopted in clinical practice settings, potentially contributing to improved diagnostic accuracy, though the cost-effectiveness of this approach remains a subject of ongoing investigation."
Prompt to Claude:
Improve clarity and conciseness. Cut unnecessary words. Be direct.
[paste sentence]
Claude output: "Advanced imaging (MRI and CT) is increasingly used in clinical practice and may improve diagnostic accuracy, though cost-effectiveness remains uncertain."
Better: the word count dropped by more than half, same meaning, clearer.
Voice Preservation
The biggest risk with AI writing: losing your voice.
To preserve voice:
- Write key sections yourself (especially arguments)
- Edit AI output heavily
- Read drafts aloud (AI prose sounds wooden)
- Compare to your non-AI writing
- Don't let AI make judgment calls
Your voice comes from:
- Specific examples (AI writes generically)
- Point of view (AI hedges everything)
- Word choice (AI defaults to academic clichés)
- Rhythm and pacing (AI writes monotonously)
Use AI for structure and expansion. Provide the substance and style.
Cost and Access
Most writing can be done with free tiers:
- ChatGPT (free): 40 messages per 3 hours
- Claude (free): Lower limits, often sufficient
- Gemini (free): Generous limits
If you write regularly:
- Claude Pro ($20/mo): Best for serious writing
- ChatGPT Plus ($20/mo): Faster, more capacity
- Paying for both: overkill for most researchers
Time Savings
Realistic expectations:
- Methods section: ~75% faster (15 min instead of 60 min)
- Introduction: 50% faster (you still do intellectual work)
- Editing for clarity: 70% faster
- Abstract: 60% faster
But: Results and key discussion points take the same time. AI doesn't speed up the hard thinking.
Key Takeaways
- AI is a writing partner, not a ghostwriter — you provide substance, it provides prose
- Best for drafting methods, expanding outlines, editing clarity — not for results or novel arguments
- Claude > ChatGPT > Gemini for academic writing — Claude preserves voice better
- Always disclose AI use per journal policy — most require acknowledgment
- Edit AI output heavily — unedited AI prose is obvious to reviewers
- Never trust AI citations — verify every reference manually
- Use for expansion and editing, not for thinking — AI can't do your intellectual work
- Preserve your voice — read drafts aloud, compare to your non-AI writing
- The workflow: outline → AI draft → heavy editing → disclosure
