AI Research Stress-Test Prompt
- Andrew J Calvert

🧠 (Designed for reviewing AI outputs that include claims, data, or citations)
When I get output from a colleague, an AI, or a report, I've taken to using this prompt to evaluate it. It acts like an intellectual audit for AI-generated research. Instead of accepting fluent language as credibility, it forces the output to expose its assumptions, evidence quality, logical gaps, and potential bias. It also helps me distinguish between what is supported, what is inferred, and what is simply well-phrased speculation. In short, it shifts me from a passive consumer of AI content to an active evaluator, which reduces the risk of repeating confident but fragile conclusions.
You are an expert researcher and methodological skeptic. Review the following AI-generated research output rigorously.

1. Summarize the central claim in one sentence.
2. Identify all empirical claims made. For each empirical claim:
   - What evidence is cited?
   - Is the evidence specific, verifiable, and recent?
   - Is causation implied where only correlation may exist?
3. Identify the assumptions required for the argument to hold, separating:
   - Stated assumptions
   - Unstated (implicit) assumptions
4. Identify:
   - Logical leaps
   - Overgeneralizations
   - Selection bias
   - Survivorship bias
   - Missing stakeholder perspectives
5. Check citation integrity:
   - Are the sources real and traceable?
   - Are they interpreted accurately?
   - Is the data current?
6. Identify alternative explanations the author did not consider.
7. What strong counter-evidence exists in the broader literature?
8. Where does confident language exceed the strength of the evidence?
9. If this research were wrong, what risks would arise from acting on it?
10. Conclude with:
    - A confidence rating (Low / Medium / High)
    - What would meaningfully strengthen this analysis?
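If you run this audit often, the checklist can be wrapped as a reusable prompt template and the text to be reviewed appended to it. A minimal Python sketch of that idea follows; the names `STRESS_TEST_CHECKLIST` and `build_audit_prompt`, and the condensed wording of the checklist, are my own illustrations rather than anything the prompt itself prescribes.

```python
# Reusable wrapper for the stress-test prompt above. The names here
# are illustrative labels, not part of the original article.

STRESS_TEST_CHECKLIST = """\
You are an expert researcher and methodological skeptic.
Review the following AI-generated research output rigorously.
1. Summarize the central claim in one sentence.
2. Identify all empirical claims and, for each: the evidence cited;
   whether it is specific, verifiable, and recent; and whether
   causation is implied where only correlation may exist.
3. Separate stated assumptions from unstated (implicit) assumptions.
4. Flag logical leaps, overgeneralizations, selection bias,
   survivorship bias, and missing stakeholder perspectives.
5. Check citation integrity: are sources real, traceable,
   interpreted accurately, and current?
6. Identify alternative explanations the author did not consider
   and strong counter-evidence in the broader literature.
7. Note where confident language exceeds the strength of evidence.
8. Describe the risks of acting on this research if it is wrong.
Conclude with a confidence rating (Low / Medium / High) and what
would meaningfully strengthen this analysis.
"""


def build_audit_prompt(research_output: str) -> str:
    """Prepend the audit checklist to the text being reviewed."""
    return (
        STRESS_TEST_CHECKLIST
        + "\n--- RESEARCH OUTPUT TO AUDIT ---\n"
        + research_output
    )
```

The returned string can then be sent as a single message to whichever model or colleague is doing the review.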