What “auditing” means in an analyst’s spreadsheet workflow
Spreadsheet auditing is the disciplined process of verifying that a workbook’s numbers, logic, and outputs are correct, complete, and consistent with the intended purpose. In practice, auditing is less about “finding a single wrong cell” and more about establishing a repeatable review workflow that (1) surfaces risk quickly, (2) documents what was checked, and (3) makes it easy for another reviewer to reproduce your findings.
A useful way to frame auditing is to separate three questions:
- Does the workbook calculate correctly? (formula logic, references, edge cases, rounding, sign conventions)
- Does it represent the business problem correctly? (definitions, units, time alignment, aggregation, inclusion/exclusion rules)
- Is it reviewable and maintainable? (traceability, consistent structure, change control, clear review evidence)
This chapter focuses on the workflow and review mechanics: how to plan an audit, how to execute checks efficiently, how to record evidence, and how to manage changes without creating new errors.
Roles and review levels: self-check, peer review, and independent audit
Not every workbook needs the same rigor. Define the review level up front so the effort matches the risk.
- Self-check (author review): The model builder runs a structured checklist before sharing. Goal: remove obvious errors and reduce reviewer time.
- Peer review: Another analyst reviews logic and outputs. Goal: catch mistakes the author is blind to and validate reasonableness.
- Independent audit: A reviewer not involved in building tests the workbook as if it were a black box. Goal: provide higher assurance for high-stakes decisions.
Even in a small team, you can mimic independence by having the reviewer rebuild a small portion of the logic from scratch, or validate outputs using an alternative method (e.g., pivoting raw data, recalculating a key metric with a different approach).
Planning the audit: scope, materiality, and risk hotspots
Before clicking through formulas, define what “done” means. A short audit plan can be a simple table in the workbook (or a separate review log) that lists what will be checked and at what depth.
1) Define scope and boundaries
- Which sheets are in scope (inputs, calculations, outputs, data import, helper sheets)?
- Which outputs are decision-critical (KPIs, charts, summary tables, executive numbers)?
- What is explicitly out of scope (e.g., upstream data accuracy, external system extracts)?
2) Set a materiality threshold
Materiality is the smallest error that would change a decision. For example, in a budget model, $1,000 might be immaterial; in a pricing model, $0.01 could matter. Materiality guides where you spend time: you still fix small issues, but you prioritize checks that can move key outputs beyond the threshold.
3) Identify risk hotspots
Some areas are consistently more likely to contain errors:
- Complex formulas with multiple nested functions
- Manual overrides and hard-coded numbers inside formulas
- Copy/paste blocks and repeated sections (risk of misaligned references)
- Time series roll-forwards (month-to-month links, cumulative totals)
- Lookups/joins across tables (missing keys, approximate matches, duplicates)
- External links and imported data ranges
- Hidden rows/columns and hidden sheets
Use these hotspots to decide where to do deep tracing versus high-level reasonableness checks.
Audit workflow overview: a repeatable sequence
A practical review sequence that works well for most analytical workbooks is:
- Stabilize: ensure calculation settings and file state are controlled.
- Map: understand workbook flow and dependencies.
- Scan: run quick checks to surface obvious issues.
- Test: validate critical calculations with targeted test cases.
- Trace: follow key outputs back to sources and verify references.
- Reconcile: compare to independent totals or external benchmarks.
- Document: record what was checked, issues found, and fixes applied.
The rest of the chapter expands each step with concrete Excel techniques.
Stabilize: control calculation mode, external links, and file state
Check calculation mode and iterative settings
Workbooks can behave differently depending on calculation mode. If a reviewer opens a file in Manual calculation, outputs may appear “wrong” until recalculated. As part of stabilization:
- Confirm calculation mode (Automatic vs Manual) and note it in the review log.
- If the workbook uses iterative calculation (for circular references), confirm the iteration settings and ensure they are intentional.
Identify external links and data connections
External links can break silently or update unexpectedly. Reviewers should:
- List external links and confirm whether they should be updated, frozen, or removed.
- Confirm data connections refresh behavior (manual refresh vs refresh on open).
If the workbook is meant to be a static deliverable, consider converting external dependencies into values (with clear labeling) for the reviewed version, while keeping a working version for ongoing updates.
Freeze the version being reviewed
Reviewing a moving target wastes time. Establish a “review copy” and a “working copy.” The review copy should not change except through tracked fixes. If your team uses SharePoint/OneDrive version history, note the version/time stamp in the review log.
Map: understand structure and dependencies quickly
Create a dependency map for key outputs
You do not need to understand every cell to audit effectively. Start from the most important outputs and map their upstream dependencies.
- Pick 5–10 critical output cells (e.g., total revenue, EBITDA, cash balance, headcount, conversion rate).
- For each, identify which sheet(s) feed it and whether it depends on lookups, time roll-forwards, or aggregations.
Excel tools that help mapping:
- Trace Precedents / Trace Dependents to visualize direct links.
- Go To Special to find formulas, constants, blanks, and errors in a region.
- Name Manager to review named ranges used in key formulas.
Spot “hidden complexity”
During mapping, explicitly check for:
- Hidden sheets and very hidden sheets (VBA-controlled)
- Hidden rows/columns in calculation areas
- Grouped outlines that collapse sections
- Pivot tables feeding outputs (refresh state matters)
Hidden elements are not inherently bad, but they increase review risk because they can conceal overrides or broken links.
Scan: fast checks that catch many issues early
Scan for formula consistency within blocks
A common error is a single cell in a copied range that references the wrong row/column. A quick way to detect this is to compare formulas across a block.
- Select a calculation block and use a formula view (Ctrl+`) to visually scan for outliers.
- Use conditional formatting to highlight cells where the formula differs from the pattern (teams often build a “formula consistency” check column).
Practical tip: in time series, the formula in each month should usually be the same pattern shifted one column. A single different formula is a red flag worth tracing.
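One lightweight way to make this scan concrete is a helper row of FORMULATEXT calls (available in Excel 2013 and later); the cell addresses here are illustrative, not taken from any particular model:

    =FORMULATEXT(C12)

This returns C12's formula as text (e.g., "=C11*(1+C$5)"). Copy it across the block in a helper row and scan for a cell whose structure (functions, operators, absolute anchors) breaks the pattern. Note that FORMULATEXT returns A1-style text, so correctly shifted relative references still produce different strings; you are looking for structural outliers, not exact string matches.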
Scan for hard-coded constants inside formulas
Hard-coded numbers can be legitimate (e.g., 12 months), but they often represent hidden assumptions. Use Find (Ctrl+F) with "Look in" set to Formulas to search for common patterns. Because * and ? act as wildcards in the Find dialog, escape a literal asterisk as ~*:
- Search for "~*0." or "~*1." to find multipliers
- Search for "/100" or "~*100" to find percent conversions
- Search for "+" followed by a digit (e.g., "+1", "+5") to find embedded add-ons; Find has no pattern matching, so test each digit that plausibly appears
When you find a constant, ask: is it a true constant, or should it be a referenced input? If it must remain embedded, ensure it is documented in the review log as an intentional design choice.
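If you prefer a formula-driven flag over repeated Find passes, FORMULATEXT combined with SEARCH can mark suspect cells; the pattern and address below are illustrative. The tilde escapes the asterisk so SEARCH treats it as a literal character rather than a wildcard:

    =ISNUMBER(SEARCH("~*100", FORMULATEXT(C12)))

This returns TRUE when C12's formula contains a literal "*100". Copy the flag across the calculation block and filter or conditionally format on TRUE.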
Scan for error values and suspicious blanks
Use Go To Special to select:
- Formulas with errors (#N/A, #VALUE!, #DIV/0!, #REF!)
- Constants in calculation areas (may indicate overrides)
- Blanks where data is expected (may indicate broken links)
Not all errors are wrong (e.g., #N/A used intentionally), but every error should be explainable and controlled.
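For a persistent check cell rather than a one-off Go To Special pass, an aggregate error count works well; the range is illustrative:

    =SUMPRODUCT(--ISERROR(Calc!B2:N200))

A nonzero result means at least one error value somewhere in the block. If #N/A is used intentionally, substitute ISERR, which counts every error type except #N/A.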
Test: targeted test cases to validate logic
Testing is where you move from “looks plausible” to “proven under conditions.” The idea is to create small, controlled test inputs that produce predictable outputs.
Designing effective test cases
- Zero test: Set key drivers to zero and confirm outputs go to zero where expected (and do not produce divide-by-zero issues unless handled).
- One-at-a-time test: Change one driver while holding others constant; confirm the output changes in the expected direction and magnitude.
- Boundary test: Use extreme but possible values (very high volume, very low price) to see if formulas break or overflow.
- Sign test: Flip the sign of a driver (e.g., refunds, returns) to confirm sign conventions are consistent.
- Timing test: Shift a start date or lag assumption and confirm the effect moves to the correct periods.
Step-by-step: build a small test harness inside the workbook
This approach keeps testing repeatable and visible without rewriting the model.
- Step 1: Create a “Review Tests” sheet with a table: Test ID, Change applied, Expected result, Actual result cell link, Pass/Fail, Notes.
- Step 2: For each test, link the “Actual result” to the relevant output cell (do not copy values; link to the live cell).
- Step 3: Apply the test change (temporarily) and record whether the output matches expectation.
- Step 4: Revert the change and confirm the workbook returns to baseline outputs.
To avoid losing the baseline, record baseline values for key outputs in the test sheet and compare after each test. If the workbook is large, limit baseline to the most decision-critical outputs.
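A simple way to automate the Pass/Fail column is a tolerance comparison between the linked actual and the recorded expectation; the column references are illustrative and assume C2 holds the numeric expected value while D2 links to the live output cell:

    =IF(ABS(D2-C2)<=0.005, "PASS", "FAIL")

The small tolerance absorbs rounding noise; tighten or loosen it to match your materiality threshold.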
Example: testing a revenue calculation
Suppose a key output is Total Revenue. You can test it with predictable inputs:
- Set Units = 100, Price = 10, Discounts = 0. Expected revenue = 1,000.
- Set Discounts = 10%. Expected revenue = 900.
- Set Units = 0. Expected revenue = 0.
If any test fails, you now have a narrow area to trace: the revenue formula chain and any lookups feeding price or discount.
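The expected values above assume a revenue formula of the general form below (the names are illustrative, not necessarily how the model is built):

    =Units * Price * (1 - Discount)

so Units=100, Price=10, Discount=10% gives 100 × 10 × 0.9 = 900. A model that passes the first and third tests but fails the second points straight at discount handling (for example, a percent stored as 10 rather than 0.10).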
Trace: follow critical outputs back to sources
Trace precedents for key outputs
For each critical output cell, trace precedents and confirm:
- References point to the intended ranges (correct rows/columns, correct time period)
- Lookups use the correct key and return the correct field
- Aggregations include all required components and exclude non-relevant ones
When tracing, watch for “reference drift” caused by inserting rows/columns or copying formulas across sections. A formula can still calculate without error while pointing to the wrong place.
Step-by-step: audit a lookup chain
- Step 1: Identify the lookup formula feeding a key output (e.g., XLOOKUP/INDEX-MATCH/VLOOKUP).
- Step 2: Verify the lookup key cell: is it the correct ID, and is its format consistent (text vs number)?
- Step 3: Verify the lookup array/table: does it include the full dataset, or is it a partial range?
- Step 4: Verify match mode: exact match vs approximate. Approximate matches are high-risk unless explicitly intended.
- Step 5: Validate with a manual spot check: pick one ID and confirm the returned value matches the source table.
Common failure modes include duplicate keys (lookup returns the first match), truncated ranges (new rows not included), and approximate matches on unsorted data.
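As a reference point, a defensively written exact-match lookup looks something like this (the table and column names are illustrative):

    =XLOOKUP($A2, Prices[ProductID], Prices[UnitPrice], "MISSING", 0)

The 0 makes exact matching explicit (it is also XLOOKUP's default), and the "MISSING" fallback makes failed lookups visible instead of letting #N/A flow silently into downstream math. The equivalent INDEX/MATCH form is =INDEX(Prices[UnitPrice], MATCH($A2, Prices[ProductID], 0)).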
Trace dependents from inputs that “should matter”
Another powerful technique is to start from a key driver and trace dependents to ensure it actually influences outputs. If a driver is disconnected, changes won’t flow through, and the workbook may appear stable while being wrong.
Reconcile: independent checks and cross-footing
Reconciliation is about confirming that totals tie out across different views of the same data.
Cross-footing and internal consistency checks
- Row/column totals: If a table has both row totals and column totals, confirm they agree.
- Subtotals vs grand totals: Confirm that category subtotals sum to the grand total.
- Opening + change = closing: For roll-forward schedules (inventory, cash, headcount), confirm the identity holds for every period.
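These identities are cheap to encode as always-on check cells; the references are illustrative:

    =IF(ABS(SUM(C5:C16)-C17)<=0.005, "OK", "CHECK")
    =IF(ABS(C5+C6-C7-C8)<=0.005, "OK", "CHECK")

The first ties a column of line items to its grand total; the second tests opening + inflows - outflows = closing for one period (copy it across all periods). The small tolerance absorbs floating-point noise that would make a hard equality test raise false alarms.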
Independent recomputation (spot rebuild)
Pick one or two high-impact outputs and recompute them independently using a simplified method. Examples:
- Recalculate a KPI using a pivot table from the underlying data extract.
- Recompute a monthly total by summing the underlying line items in a separate check area.
- Rebuild a single period’s calculation using a calculator-style breakdown to confirm the formula chain.
The goal is not to duplicate the entire workbook, but to create an independent “sanity anchor” that can catch systemic issues (like a missing category or a sign inversion).
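A SUMIFS over the raw extract is often the quickest sanity anchor; the table and field names are illustrative:

    =SUMIFS(Data[Amount], Data[Month], "2024-01", Data[Category], "Subscriptions")

Compare the result directly against the model's figure for the same period and category. A difference usually signals a systemic issue (missing category, sign inversion, truncated range) rather than a one-cell typo.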
Reviewing changes: fix workflow, retesting, and avoiding regressions
Separate “issue identification” from “issue fixing”
During review, it is tempting to fix issues immediately. That can be efficient, but it can also blur accountability and make it hard to know what changed. A controlled workflow is:
- Log the issue first (location, description, impact, suspected cause).
- Decide who will fix it (author vs reviewer).
- Apply the fix in a tracked way (notes, version history, or a change log table).
- Retest the affected outputs and any related areas.
Step-by-step: implement a change log inside the workbook
- Step 1: Add a “Change Log” sheet with columns: Date, Version, Changed by, Sheet/Range, Description, Reason, Reviewer sign-off.
- Step 2: For each fix, record the exact range and a short description (e.g., “Corrected lookup range to include new rows”).
- Step 3: After fixes, rerun the test harness and record pass/fail results.
This creates audit evidence and reduces the risk of “silent fixes” that introduce new errors.
Regression testing: what to retest after a fix
When you fix a formula, you should retest:
- The specific output that was wrong
- Any outputs that depend on the changed cells (trace dependents)
- Baseline totals and reconciliation identities (cross-footing, roll-forward checks)
Even small changes can ripple through a workbook, especially if the fixed cell is upstream of many calculations.
Peer review mechanics: how to review efficiently and communicate findings
Use a structured review checklist
A checklist reduces variability and ensures critical steps are not skipped. A practical checklist for peer review might include:
- Calculation mode and external links checked
- Key outputs identified and mapped
- Error scans completed (errors, constants, blanks)
- Formula consistency checked in major blocks
- Lookup logic spot-checked
- Roll-forward identities reconciled
- Targeted test cases executed and recorded
- Change log updated and retesting completed
Keep the checklist short enough to be used; add depth through “risk hotspots” rather than making every review exhaustive.
Write issues in a way that is actionable
Good review notes are specific and reproducible. For each issue, capture:
- Where: sheet name and cell/range
- What: observed behavior (e.g., total excludes category X)
- Expected: what should happen and why
- Impact: which outputs are affected and approximate magnitude
- Suggested fix: if known; otherwise, the open questions that need answers
This reduces back-and-forth and helps the author prioritize fixes by impact.
Common audit findings and how to detect them
Misaligned time periods
Symptoms include sudden jumps at month boundaries, totals that don’t match annual sums, or a driver affecting the wrong period. Detection methods:
- Compare month-over-month patterns for discontinuities
- Trace references for one period and confirm they point to the same period upstream
- Run a timing test by shifting a start date and confirming the shift propagates correctly
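Header-alignment check cells catch many timing problems early; the sheet names are illustrative:

    =IF(Calc!C$4=Inputs!C$4, "OK", "PERIOD MISMATCH")

Placed in a check row and copied across, this confirms that each column on the calculation sheet carries the same period label as the corresponding column upstream, which is exactly what breaks when someone inserts a month on one sheet but not the other.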
Partial ranges and “new rows not included”
Symptoms include totals that stop changing after new data is appended. Detection methods:
- Inspect SUM ranges and lookup arrays for fixed endpoints
- Spot-check the last row of data and confirm it is included in totals
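A quick detector compares the formula's fixed range against the whole column; the references are illustrative:

    =SUM(Data!C:C)-SUM(Data!C2:C500)

A nonzero result means numeric data exists outside the summed range (SUM ignores text headers). Converting the source to an Excel Table and summing a structured reference such as =SUM(Sales[Amount]) avoids fixed endpoints entirely, because the reference grows with the data.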
Duplicate keys in lookup tables
Symptoms include inconsistent results for the same ID or unexpected values returned. Detection methods:
- Check uniqueness of key columns (e.g., count occurrences)
- Spot-check a few returned values against the source table
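A COUNTIF flag next to the key column surfaces duplicates immediately; the ranges are illustrative:

    =COUNTIF($A$2:$A$5000, A2)

Any value greater than 1 marks a duplicated key; filter or conditionally format on >1. For a single summary cell, =SUMPRODUCT(--(COUNTIF(A2:A5000, A2:A5000)>1)) counts how many rows share their key with at least one other row.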
Hidden overrides
Symptoms include a single cell breaking a pattern or totals not reconciling. Detection methods:
- Go To Special for constants in formula regions
- Scan for cells with different formatting or comments indicating manual edits
- Review hidden rows/columns in calculation blocks
Using Excel’s built-in auditing tools effectively
Formula auditing tools: when to use which
- Trace Precedents/Dependents: Best for understanding direct links around a key cell.
- Evaluate Formula: Best for stepping through a complex formula to see intermediate results and identify where logic diverges.
- Show Formulas: Best for scanning a region for pattern breaks and reference drift.
- Error Checking: Helpful for catching obvious issues, but do not rely on it as a full audit.
Step-by-step: diagnosing a complex formula with Evaluate Formula
- Step 1: Select the suspicious cell and open Evaluate Formula.
- Step 2: Step through and note where the intermediate value becomes unexpected.
- Step 3: If a referenced cell is wrong, jump to it and trace further upstream.
- Step 4: After a fix, re-evaluate the formula and rerun relevant test cases.
This method is especially effective for formulas that combine lookups, conditional logic, and arithmetic in one cell.
Deliverables from a good review: evidence, not just “looks good”
A review should leave behind artifacts that make the workbook safer to use and easier to maintain:
- A review log or checklist showing what was tested
- A test harness with recorded pass/fail outcomes for key cases
- A change log documenting fixes and retesting
- Clear identification of any known limitations or open questions (with owners)
These deliverables turn auditing from an informal activity into a workflow that scales across analysts and across time, especially when the workbook will be reused or updated under deadline pressure.
Example “Review Tests” table structure (on a dedicated sheet):

    Test_ID | Change_Applied                  | Expected_Result | Actual_Cell_Link | Pass_Fail | Notes
    T01     | Units=100, Price=10, Discount=0 | Revenue=1000    | =Outputs!F12     | (manual)  | Verify revenue logic
    T02     | Discount=10%                    | Revenue=900     | =Outputs!F12     | (manual)  | Check percent handling
    T03     | Units=0                         | Revenue=0       | =Outputs!F12     | (manual)  | Check zero behavior