Accuracy and Verification: Testing Claims During and After the Interview

Chapter 8

Estimated reading time: 9 minutes


Verification as a Two-Stage Process

Accuracy is not a single step you do at the end; it is a workflow you run in two stages: (1) real-time checks during the interview to test claims as they appear, and (2) post-interview corroboration to confirm, contextualize, and quantify what you heard. Treat every meaningful assertion as a claim that needs a verification plan.

What Counts as a “Claim” Worth Tracking

Track anything that could mislead an audience if wrong, including:

  • Numbers (budgets, counts, dates, percentages, rankings).
  • Causation (“X caused Y,” “because of our policy…”).
  • Attribution (“the regulator approved it,” “experts agree”).
  • Superlatives (“first,” “largest,” “only,” “unprecedented”).
  • Characterizations that imply fact (“no one was harmed,” “the system is secure”).
  • Process claims (“we followed protocol,” “we notified customers”).

Not every colorful quote needs verification, but any statement that functions as evidence in your story does.

Stage 1: Real-Time Checks During the Interview

Real-time verification is about pinning down specifics, surfacing the source of knowledge, and collecting artifacts (documents, names, dates) that make later checking possible. You are not accusing; you are clarifying.

Step-by-Step: A Real-Time Verification Routine

  1. Isolate the claim. Repeat it back in plain language to ensure you heard it correctly.
    “Just to confirm, you’re saying the outage affected 12,000 customers?”
  2. Define terms and boundaries. Ask what exactly is included/excluded and the timeframe.
    “12,000 across which counties, and over what dates?”
  3. Ask “How do you know?” Identify whether it’s firsthand observation, internal data, or hearsay.
    “Is that from your internal logs, a regulator report, or your estimate?”
  4. Request the underlying source. Ask for documents, datasets, emails, meeting minutes, photos, or logs.
    “Can you share the incident report or the dashboard screenshot that shows that figure?”
  5. Get names and identifiers. People, offices, case numbers, contract IDs, docket numbers, report titles, URLs.
    “Who authored the report, and what’s the report title and date?”
  6. Test for alternative explanations. Ask what else could account for the outcome and what was ruled out.
    “What other causes did you investigate, and how did you eliminate them?”
  7. Quantify uncertainty. If they’re estimating, capture the range and method.
    “Is 12,000 a precise count or a range? What’s the margin of error?”
  8. Lock in a follow-up path. Agree on what they will send and by when; confirm the best contact channel.
    “Can you email the report today, and if it’s confidential, can you provide a redacted version?”

Real-Time Question Templates That Trigger Verifiable Detail

  • Pin down a number. "What's the exact figure, and what system/report does it come from?" "As of what date/time was that measured?"
  • Clarify a timeline. "What happened first, second, and third, and what are the timestamps?" "Who was notified at each step?"
  • Verify a policy/process claim. "What is the written policy called, and where can I read it?" "Who signs off when that protocol is followed?"
  • Check attribution. "Which agency/office approved it, and what's the reference number?" "Can you point me to the public record?"
  • Test a superlative. "Compared to what baseline, and what makes it the largest or first?" "Who else tracks this, and what do their numbers show?"
  • Surface incentives/bias. "What's your role in this decision, and what would success look like for you?" "Who disagrees internally, and why?"

On-the-Spot “Red Flags” That Require Immediate Clarification

  • Vague quantifiers: “many,” “most,” “a lot,” “rarely,” “soon.” Ask for counts, percentages, dates.
  • Passive voice without actors: “mistakes were made,” “it was decided.” Ask: who decided, who acted.
  • Unanchored comparisons: “better,” “safer,” “more efficient.” Ask: compared to what, measured how.
  • Appeals to unnamed authority: “experts say,” “people are saying.” Ask: which experts, which publication, which study.
  • Overconfident certainty: “definitely,” “no doubt,” “guaranteed.” Ask what evidence would falsify the claim.

Claim-Tracking Table (Fillable)

Use a running table while you interview and as you report. The goal is to prevent “verification debt” from piling up.


Columns: Claim (write it as a testable statement) · Evidence offered by source (quote, document, data, observation) · Verification method (what you will do) · Confidence level (Low/Med/High, plus why).

Example 1
  Claim: "The program reduced wait times by 30% in 2025."
  Evidence offered: "Our dashboard shows it." (no document provided yet)
  Verification method: Request dashboard export; compare to prior-year baseline; confirm with an independent dataset or public records; ask an analyst to review the methodology.
  Confidence: Med (plausible but needs data and a baseline definition)

Example 2
  Claim: "No customers' data was accessed."
  Evidence offered: Internal incident summary (promised)
  Verification method: Obtain the incident report; confirm with the regulator filing; consult a security expert on indicators of access; check breach notification requirements.
  Confidence: Low (high-stakes claim; requires independent confirmation)

Example 3
  Claim: "The permit was approved last Friday."
  Evidence offered: Mentions an email from an agency staffer
  Verification method: Check the permit database; request the approval letter; confirm the date with an agency spokesperson.
  Confidence: High (public record likely available)

Tip: Keep the “claim” column free of hedging. Write the strongest version of what the source implies, so you know what you must prove or qualify.
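If you track claims digitally, the table above maps naturally onto a small record type. Here is a minimal Python sketch; the field names and the `verification_debt` helper are illustrative choices, not prescribed by this chapter:

```python
from dataclasses import dataclass

# Confidence rubric from the chapter: High / Med / Low.
CONFIDENCE_LEVELS = {"High", "Med", "Low"}

@dataclass
class Claim:
    """One row of the claim-tracking table."""
    statement: str            # testable, unhedged version of the claim
    evidence_offered: str     # quote, document, data, or observation
    verification_method: str  # what you will do to check it
    confidence: str = "Low"   # default to Low until evidence arrives
    rationale: str = ""       # why this confidence level

    def __post_init__(self):
        if self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError(f"confidence must be one of {CONFIDENCE_LEVELS}")

def verification_debt(claims):
    """Claims still below High confidence: the 'verification debt' to clear."""
    return [c for c in claims if c.confidence != "High"]
```

Defaulting every new claim to Low confidence mirrors the chapter's advice: a claim starts as an assertion and only earns a higher rating as primary records and independent confirmation come in.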

Stage 2: Post-Interview Corroboration

Post-interview verification is where you corroborate independently, validate documents, and stress-test interpretations. Your standard should be: could a skeptical reader trace this statement to reliable evidence?

Step-by-Step: A Post-Interview Verification Workflow

  1. Extract and prioritize claims. From notes/transcript, list claims that are central, surprising, or high-impact (legal, safety, money, reputations).
  2. Classify each claim by type. Number, timeline, causation, attribution, policy/process, characterization. This helps you choose the right method.
  3. Gather primary records. Prefer original documents over summaries: filings, contracts, audits, meeting minutes, emails, logs, datasets, court records, inspection reports.
  4. Validate authenticity and context. Check headers, dates, authors, version history, metadata when available; confirm whether it’s complete or excerpted; note what’s missing.
  5. Seek independent confirmation. Corroborate with at least one source not controlled by the interviewee: public databases, third-party reports, other witnesses, affected parties, watchdog groups, regulators.
  6. Consult an expert for technical claims. Use experts to evaluate methodology, plausibility, and what the evidence does/doesn’t show. Ask them to explain assumptions and limitations.
  7. Re-contact the interviewee with specific gaps. Present the precise point that needs support and ask for the exact record.
    “Your claim depends on a 2024 baseline. Which month is the baseline, and can you share the raw monthly counts?”
  8. Document your verification trail. Keep a folder or log: what you checked, links/filenames, dates accessed, who confirmed, and what remains uncertain.

Verification Methods by Claim Type

  • Numbers/statistics
    Corroboration tools: original dataset; methodology notes; independent dataset; trend comparison; denominator check.
    Common pitfalls: cherry-picked time windows; unclear baseline; mixing counts and rates; rounding presented as precision.
  • Timeline/events
    Corroboration tools: logs, emails, calendars, dispatch records, filings, contemporaneous messages, photos with metadata.
    Common pitfalls: memory drift; time zone confusion; post hoc reconstruction.
  • Causation
    Corroboration tools: multiple sources; expert review; alternative explanations; before/after comparisons with controls when possible.
    Common pitfalls: confusing correlation with causation; a single anecdote treated as proof.
  • Attribution/authority
    Corroboration tools: official statements; public records; direct confirmation from the named entity.
    Common pitfalls: "off the record" used as a shield; unnamed "officials" without traceable authority.
  • Policy/process compliance
    Corroboration tools: written policy; training records; audit results; enforcement history; internal memos.
    Common pitfalls: the policy exists but is not followed; outdated versions; undisclosed exceptions.
  • Safety/security claims
    Corroboration tools: incident reports; regulator notifications; independent forensic/technical assessment; affected-party accounts.
    Common pitfalls: overreliance on internal assurances; incomplete incident scope.

Identifying and Reporting Uncertainty

Verification is not always binary. Sometimes the honest outcome is: partly confirmed, unconfirmed, or cannot be independently verified. Your job is to recognize uncertainty early and express it accurately.

Where Uncertainty Commonly Enters

  • Estimates presented as facts: a source gives a neat number without showing how it was calculated.
  • Incomplete records: documents cover only part of the period, region, or population.
  • Conflicting accounts: two credible sources disagree on a key detail.
  • Technical ambiguity: evidence exists but requires interpretation (e.g., what counts as “accessed” data).

Language That Avoids Overstating

Use wording that matches what you can support. Examples:

  • When you have partial confirmation: “Records reviewed by [newsroom] show…” / “According to two internal emails dated…”
  • When a claim is unverified: “The company says…” / “The claim could not be independently verified.”
  • When evidence suggests but doesn’t prove: “The documents indicate…” / “The data is consistent with…”
  • When numbers are estimates: “An estimated…” / “Between X and Y, according to…”
  • When sources disagree: “Accounts differ on…” / “Officials offered conflicting timelines.”

Discipline: avoid upgrading a source’s certainty in your paraphrase. If they say “we believe,” don’t write “it is.” If they say “around,” don’t write an exact figure.

Confidence Levels You Can Use in Your Claim Table

  • High: confirmed by primary records and/or independent sources; definitions and timeframe are clear.
  • Medium: supported by some documentation or credible testimony but missing key context (baseline, scope, method) or independent confirmation.
  • Low: relies mainly on assertion; evidence is unavailable, unverifiable, or contradicted; high stakes if wrong.

Handling Corrections When New Facts Emerge

Verification continues after publication or broadcast. New records, new witnesses, or clarified data can change what you know. Build a correction workflow that is fast, specific, and transparent.

Step-by-Step: Correction and Update Workflow

  1. Identify the exact statement that may be wrong. Quote the sentence/graphic/chyron and note where it appeared.
  2. Re-verify from the ground up. Return to primary records and independent confirmation; don’t rely on the same chain of information that produced the error.
  3. Assess impact. Does it change a number, a timeline, the meaning, or the fairness to a person/organization?
  4. Correct precisely. Replace the incorrect detail with the verified one; avoid vague fixes like “clarified” when it was wrong.
  5. Explain what changed and why. “This story originally said X. Records show Y. The story has been updated.”
  6. Notify affected parties when appropriate. Especially if the error could harm reputation or safety.
  7. Update your claim-tracking table. Mark the claim as corrected, note the new evidence, and record how the error occurred (misread document, ambiguous term, outdated data).

Common Correction Triggers to Watch For

  • Revised datasets (agencies update counts; companies restate figures).
  • New filings or reports that supersede earlier statements.
  • Terminology disputes (e.g., “breach,” “injury,” “approval” defined differently by different parties).
  • Scope changes (a problem initially described as local turns out to be broader).

Practical Drill: Run a Claim Through Both Stages

Scenario claim: “Our new screening reduced false positives by 40%.”

During the Interview (Real-Time Checks)

  • “40% compared to which period or version of the screening?”
  • “What’s the definition of ‘false positive’ in your system?”
  • “What are the raw counts—how many false positives before and after?”
  • “Who measured this, and is there a report or dataset you can share?”

After the Interview (Corroboration)

  • Obtain the report/dataset; verify the baseline and timeframe.
  • Check whether the denominator changed (e.g., fewer total screenings).
  • Ask an independent expert whether the measurement approach is valid and what confounders might exist.
  • Seek an external check: regulator inspection, third-party audit, or affected-party accounts.

Enter the claim into the table, record what was offered, choose methods, and assign a confidence level that reflects what you can prove, not what you were told.

Now answer the exercise about the content:

During an interview, what approach best supports real-time verification of a source’s important claim?

Real-time verification focuses on clarifying—not accusing—by isolating the claim, pinning down boundaries, asking “How do you know?”, and collecting sources and identifiers so the claim can be checked later.

Next chapter

Capturing Quotes Ethically: Notes, Recording, and Attribution
