What Traceability Means (and Why It Matters)
Traceability is the practical linking of requirements to the tests that verify them. In day-to-day testing, it answers questions like: “Which tests prove this acceptance criterion is met?”, “What coverage do we have for this story?”, and “If this criterion changes, which tests must be updated?”
Traceability is not paperwork for its own sake. Kept lightweight, it helps you:
- Ensure coverage: every acceptance criterion has at least one scenario and one test case.
- Manage change: when a story or criterion changes, you can quickly find impacted scenarios and tests.
- Avoid duplicates: identify multiple tests that check the same thing and decide whether they are both needed.
- Communicate status: show what is designed, implemented, executed, and passing/failing.
What You Link: A Simple Chain
A lightweight traceability chain typically connects four levels:
- User Story (what the user needs)
- Acceptance Criteria (what “done” means)
- Test Scenarios (what to validate at a high level)
- Test Cases (the concrete checks you run)
In practice, you can trace in either direction:
- Forward: Story → Criteria → Scenarios → Test Cases (coverage and planning).
- Backward: Failed Test Case → Scenario → Criterion → Story (impact and debugging focus).
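If the links live in even a simple data structure, both directions come from the same data. A minimal sketch in Python, using illustrative IDs (no specific tool is assumed):

```python
from collections import defaultdict

# Forward links (illustrative IDs): acceptance criterion -> covering test cases.
ac_to_tcs = {
    "US-101-AC1": ["TC-450", "TC-451"],
    "US-101-AC2": ["TC-452"],
}

# Backward index derived from the same data: test case -> criteria it verifies.
tc_to_acs = defaultdict(list)
for ac, tcs in ac_to_tcs.items():
    for tc in tcs:
        tc_to_acs[tc].append(ac)

print(ac_to_tcs["US-101-AC1"])  # forward: tests proving this criterion
print(tc_to_acs["TC-452"])      # backward: criteria a failing test touches
```

Deriving the backward index from the forward links keeps the two views from drifting apart.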
Identifiers: The Backbone of Traceability
Traceability works only if each item has a stable identifier. Keep identifiers short, readable, and consistent. Common conventions:
- US-### for user stories (e.g., US-101)
- AC-### for acceptance criteria, ideally tied to the story (e.g., US-101-AC1)
- SC-### for scenarios (e.g., SC-210)
- TC-### for test cases (e.g., TC-450)
Rule of thumb: identifiers should not change when wording changes. If the meaning changes significantly, create a new ID and retire the old one.
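Stable conventions are also easy to validate mechanically, which catches malformed IDs before they break links. A small sketch assuming exactly the patterns above; adjust the regular expressions to your own scheme:

```python
import re

# Patterns matching the conventions above; adapt them to your own scheme.
ID_PATTERNS = {
    "story": re.compile(r"US-\d+"),            # e.g., US-101
    "criterion": re.compile(r"US-\d+-AC\d+"),  # e.g., US-101-AC1
    "scenario": re.compile(r"SC-\d+"),         # e.g., SC-210
    "test_case": re.compile(r"TC-\d+"),        # e.g., TC-450
}

def is_valid(kind: str, identifier: str) -> bool:
    """True when the identifier matches the convention for its kind."""
    return ID_PATTERNS[kind].fullmatch(identifier) is not None

assert is_valid("criterion", "US-101-AC1")
assert not is_valid("test_case", "TC-45a")
```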
Building a Lightweight Traceability Matrix (Step-by-Step)
Step 1: Choose the smallest useful table
Start with a table that maps acceptance criteria to scenarios and test cases. You can include the story ID as a column to group criteria.
Recommended minimum columns:
- User Story ID
- Acceptance Criterion ID
- Acceptance Criterion (short)
- Scenario ID(s)
- Test Case ID(s)
- Coverage status (e.g., Not designed / Designed / Implemented / Executed)
- Result (Pass/Fail/Blocked/Not run) if you want execution visibility
- Notes (assumptions, environment constraints, links)
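If you keep the matrix in a script rather than a spreadsheet, the minimum columns map naturally onto a small record type. A sketch with illustrative field names and defaults:

```python
from dataclasses import dataclass, field

@dataclass
class MatrixRow:
    """One traceability row: a single acceptance criterion and its links."""
    story_id: str                                        # e.g., "US-101"
    ac_id: str                                           # e.g., "US-101-AC1"
    ac_short: str                                        # shortened criterion text
    scenario_ids: list[str] = field(default_factory=list)
    test_case_ids: list[str] = field(default_factory=list)
    coverage: str = "Not designed"                       # see Step 4 vocabulary
    result: str = "Not run"                              # Pass/Fail/Blocked/Not run
    notes: str = ""                                      # assumptions, links, constraints
```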
Step 2: Populate rows from acceptance criteria (not from tests)
Create one row per acceptance criterion. This prevents “test-led” traceability where you only track what you happened to test.
Step 3: Link scenarios and tests using IDs
For each criterion, list the scenario IDs that cover it, then list the test case IDs that implement those scenarios. If one test covers multiple criteria, list it in each relevant row (and later decide if that is acceptable or if you need more targeted tests).
Step 4: Mark coverage status with a simple vocabulary
Use a small set of statuses so the table stays readable:
- Not designed: no scenario/test exists yet.
- Designed: scenario exists; test case(s) drafted.
- Implemented: test cases are ready in your test management tool or automation suite.
- Executed: run at least once in the current cycle.
If you track execution, keep “coverage status” separate from “result” so you can distinguish “not executed” from “executed and failing.”
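One way to keep the two vocabularies separate is to model each as its own closed set. A sketch using Python enums, with values mirroring the lists above:

```python
from enum import Enum

class Coverage(Enum):
    NOT_DESIGNED = "Not designed"  # no scenario/test exists yet
    DESIGNED = "Designed"          # scenario exists; test case(s) drafted
    IMPLEMENTED = "Implemented"    # test cases ready to run
    EXECUTED = "Executed"          # run at least once in the current cycle

class Result(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    BLOCKED = "Blocked"
    NOT_RUN = "Not run"

# Separate fields keep "not executed" distinct from "executed and failing":
row_status = (Coverage.IMPLEMENTED, Result.NOT_RUN)
```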
Example: A compact traceability matrix
| US ID | AC ID | AC (short) | SC IDs | TC IDs | Coverage | Result | Notes |
|-------|---------------|--------------------------------------|--------------|---------------------|--------------|----------|-------------------------------|
| US-101| US-101-AC1 | User can reset password via email | SC-210 | TC-450, TC-451 | Implemented | Not run | Email provider sandbox needed |
| US-101| US-101-AC2 | Reset link expires after 30 minutes | SC-211 | TC-452 | Designed | N/A | Time control approach TBD |
| US-101| US-101-AC3 | Invalid email shows generic message | SC-212 | TC-453 | Executed | Pass | |
| US-102| US-102-AC1 | User can view order history | SC-220, SC-221 | TC-460, TC-461, TC-462 | Executed | Fail | Bug: pagination incorrect |

Notice how the table makes gaps obvious (e.g., “Designed” but not “Implemented”, or “Not designed”). It also shows where one criterion is covered by multiple tests (potential duplicates) and where one story has multiple criteria with uneven progress.
Coverage Rules You Can Apply Immediately
To keep traceability actionable, define a few simple rules and use the matrix to enforce them:
- Rule 1: Every acceptance criterion must map to at least one scenario. If SC IDs are empty, you have a design gap.
- Rule 2: Every scenario must map to at least one test case. If TC IDs are empty, you have an implementation gap.
- Rule 3: Every test case must map back to at least one acceptance criterion. If a test cannot be traced back, it may be unnecessary or based on an assumption that should be documented as a new criterion.
- Rule 4: Avoid “one mega test covers everything.” If the same TC appears in many rows, consider splitting it so failures point to a specific criterion.
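These rules are mechanical, so they can be checked automatically whenever the matrix changes. A minimal sketch assuming a simple dict-of-rows shape; the field names and sample inventory are illustrative:

```python
from collections import Counter

# One entry per criterion row; linked scenario and test-case IDs (illustrative).
rows = {
    "US-101-AC1": {"sc": ["SC-210"], "tc": ["TC-450", "TC-451"]},
    "US-101-AC2": {"sc": ["SC-211"], "tc": []},  # Rule 2: implementation gap
    "US-101-AC3": {"sc": [], "tc": []},          # Rule 1: design gap
}
all_test_cases = {"TC-450", "TC-451", "TC-452"}  # full test inventory (assumed)

def check_rules(rows, all_test_cases):
    """Flag design gaps, implementation gaps, orphan tests, and overloaded tests."""
    for ac, links in rows.items():
        if not links["sc"]:
            print(f"Rule 1 gap: {ac} has no scenario")
        elif not links["tc"]:
            print(f"Rule 2 gap: {ac} has no test case")
    linked = {tc for links in rows.values() for tc in links["tc"]}
    for orphan in sorted(all_test_cases - linked):
        print(f"Rule 3: {orphan} traces back to no criterion")
    shared = Counter(tc for links in rows.values() for tc in links["tc"])
    for tc, n in shared.items():
        if n > 1:
            print(f"Rule 4: {tc} appears in {n} rows; consider splitting it")

check_rules(rows, all_test_cases)
```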
Handling Change: Add, Retire, Revise (Without Chaos)
Change is normal: criteria evolve, stories split, and edge conditions are clarified. Traceability helps you update tests systematically instead of relying on memory.
When a new acceptance criterion is added
- Add a new row with a new AC ID (e.g., US-101-AC4).
- Set Coverage status = Not designed.
- Create or link scenario(s) and test case(s) and update the row as they are produced.
- Check for overlap: does an existing scenario already cover it? If yes, decide whether to link it (and maybe add a new test case) or to create a new scenario for clarity.
When an acceptance criterion is removed
- Mark the AC as retired (keep the row but label it “Retired” in Notes or Coverage).
- Identify orphan tests: any TC that only exists for that retired criterion should be retired too.
- Do not delete immediately if you need auditability; instead, mark tests as “Retired” and exclude them from active runs.
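Excluding retired tests from active runs can then be a filter rather than a deletion, which preserves auditability. A tiny sketch; the "retired" flag is an assumed convention, not a feature of any particular tool:

```python
# Illustrative test records; "retired" keeps audit history without active runs.
test_cases = [
    {"id": "TC-452", "retired": False},
    {"id": "TC-453", "retired": True},  # its criterion was removed
]

active_run = [tc["id"] for tc in test_cases if not tc["retired"]]
print(active_run)  # ['TC-452']: retired tests are excluded, not deleted
```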
When an acceptance criterion changes wording vs. meaning
First decide whether it is a clarification or a real behavior change:
- Clarification (same meaning): keep the same AC ID; update the text; review linked scenarios/tests for wording updates only.
- Behavior change (new meaning): create a new AC ID; mark the old one as retired or superseded; review impacted scenarios/tests and either revise or replace them.
When a criterion changes: a practical impact workflow
Use the matrix as your impact analysis tool:
- Step 1: Locate the AC row and list all linked SC and TC IDs.
- Step 2: Classify each linked test as “Revise”, “Retire”, or “Still valid”.
- Step 3: Update coverage status to reflect work needed (e.g., back to Designed if tests must be rewritten).
- Step 4: Add a note referencing the change request, ticket, or decision (so future readers know why it changed).
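These four steps translate directly into a lookup plus a status reset. A sketch reusing an illustrative row shape; the triage values are the ones named in Step 2, and the ticket reference is a placeholder:

```python
# Illustrative row shape, extended with coverage and notes fields.
rows = {
    "US-101-AC2": {"sc": ["SC-211"], "tc": ["TC-452"],
                   "coverage": "Implemented", "notes": ""},
}

def impact_of_change(rows, ac_id):
    """Step 1: list every scenario and test linked to the changed criterion."""
    return rows[ac_id]["sc"], rows[ac_id]["tc"]

def record_triage(rows, ac_id, triage, ticket):
    """Steps 2-4: store Revise/Retire/Still valid calls, reset status, note why."""
    row = rows[ac_id]
    row["triage"] = triage
    if "Revise" in triage.values():
        row["coverage"] = "Designed"  # rewritten tests must be re-implemented
    row["notes"] = f"Changed per {ticket}"

print(impact_of_change(rows, "US-101-AC2"))
record_triage(rows, "US-101-AC2", {"TC-452": "Revise"}, "CHG-042")
```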
Example: Revising tests after a criterion change
Suppose US-101-AC2 changes from “Reset link expires after 30 minutes” to “Reset link expires after 15 minutes and can be used only once.”
- Revise existing scenario/test for expiration time (SC-211, TC-452).
- Add a new scenario/test for “single-use” behavior (new SC-213, new TC-454).
- Update the matrix so US-101-AC2 links to SC-211 and SC-213, and to TC-452 and TC-454.
Spotting Gaps and Duplicates Using the Matrix
How to spot gaps
- Empty SC IDs: criterion not covered by any scenario.
- Empty TC IDs: scenario exists but no executable test.
- Status lag: many rows stuck at Designed (planning risk).
- Execution gaps: Implemented but not Executed (delivery risk).
How to spot duplicates (and decide what to do)
Duplicates are not always bad. Sometimes you need separate tests for different platforms, roles, or data setups. The matrix helps you see duplication and then justify it.
- Same TC listed across many AC rows: test may be too broad; consider splitting.
- Many TCs for one AC with similar notes: likely redundant; keep the most valuable ones and retire the rest.
- Two scenarios with near-identical intent: merge scenarios and keep distinct tests only if they validate different risks.
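Both duplicate signals can be computed straight from the matrix data. A sketch with illustrative links; the thresholds are arbitrary starting points, not fixed rules:

```python
from collections import Counter

# Illustrative AC -> TC links copied out of the matrix.
links = {
    "US-101-AC1": ["TC-450", "TC-451", "TC-455"],
    "US-101-AC2": ["TC-450"],
    "US-102-AC1": ["TC-450"],
}

# Signal 1: the same TC across 3+ AC rows may be too broad to localize failures.
shared = Counter(tc for tcs in links.values() for tc in tcs)
print([tc for tc, n in shared.items() if n >= 3])          # e.g., ['TC-450']

# Signal 2: many TCs stacked on one AC may be redundant; review and retire.
print([ac for ac, tcs in links.items() if len(tcs) >= 3])  # e.g., ['US-101-AC1']
```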
Structured Activity: Build and Use a Traceability Table
Goal
Create a traceability matrix for the user stories, acceptance criteria, scenarios, and test cases you produced in the earlier exercises, then use it to identify missing coverage and unnecessary duplication.
Inputs
- Your earlier User Story IDs and Acceptance Criteria
- Your earlier Scenario IDs
- Your earlier Test Case IDs
Part A: Build the table
Follow these steps:
- 1) Create a table with columns: US ID, AC ID, AC (short), SC IDs, TC IDs, Coverage, Result, Notes.
- 2) Add one row per acceptance criterion (copy AC text in a shortened form).
- 3) Fill SC IDs by linking each criterion to the scenario(s) that validate it.
- 4) Fill TC IDs by listing the test case(s) that implement those scenarios.
- 5) Set Coverage for each row (Not designed/Designed/Implemented/Executed).
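If you prefer to start from a generated skeleton instead of typing the table by hand, a tiny sketch that prints the markdown template using the Step 1 column names:

```python
# Column names from Step 1; prints a blank markdown traceability template.
columns = ["US ID", "AC ID", "AC (short)", "SC IDs", "TC IDs",
           "Coverage", "Result", "Notes"]
print("| " + " | ".join(columns) + " |")
print("|" + "|".join("-" * (len(c) + 2) for c in columns) + "|")
print("|" + "|".join(" " * (len(c) + 2) for c in columns) + "|")
```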
Part B: Use the table to find issues
Answer these questions using only your table:
- Coverage gaps: Which AC rows have no SC IDs or no TC IDs?
- Overloaded tests: Which TC IDs appear in 3+ AC rows? Should they be split?
- Potential duplicates: Which AC rows have many TC IDs that seem to check the same thing? Which ones can be retired?
- Status risk: Which stories have the most “Not designed” or “Designed” rows?
Part C: Apply a change and update traceability
Pick one acceptance criterion from your earlier work and simulate a change (e.g., add a constraint, remove a condition, or tighten a rule). Then:
- 1) Update the AC text (and decide whether the AC ID stays or a new one is needed).
- 2) Mark impacted tests as Revise/Retire/Still valid in Notes.
- 3) Add new SC/TC IDs if needed, and update Coverage statuses accordingly.
Template you can copy
| US ID | AC ID | AC (short) | SC IDs | TC IDs | Coverage | Result | Notes |
|------|-------|------------|--------|--------|----------|--------|-------|
| | | | | | | | |