Why Quality Checks Matter Before You Code
Pseudocode can look “right” while still being wrong. Quality checks help you catch logic errors early, before they become debugging sessions in a real language. This chapter focuses on four practical checks you can do on paper (or in a document): manual tracing, test case design (normal/boundary/invalid), completeness checks (branches/loops), and readability review (names/steps). You will use a worksheet-style loop: choose inputs → predict outputs → trace → compare → revise until everything matches.
1) Manual Tracing (Dry-Run) with a Variable Table
Manual tracing means you execute the pseudocode step by step as if you were the computer. The key tool is a trace table: each row is a step, and each column is a variable (plus notes about decisions and outputs). Tracing is especially useful for catching off-by-one errors, wrong updates inside loops, and missing initialization.
Example Pseudocode to Trace
We will trace a small algorithm that calculates the total cost after applying a discount rule. (The exact rule is less important than the tracing method.)
INPUT price, isMember // price is a number, isMember is true/false
IF price < 0 THEN
OUTPUT "invalid"
STOP
END IF
discountRate <- 0
IF isMember = true THEN
discountRate <- 0.10
END IF
IF price >= 100 THEN
discountRate <- discountRate + 0.05
END IF
finalPrice <- price * (1 - discountRate)
OUTPUT finalPrice

How to Build a Trace Table
- Choose a specific input (e.g., price=120, isMember=true).
- List the variables you expect to change: price, isMember, discountRate, finalPrice.
- Number the key steps (each assignment, each decision, each output).
- Record values after each step. For decisions, record the condition result (T/F) in a notes column.
Trace Table (Input: price=120, isMember=true)
| Step | Action | price | isMember | discountRate | finalPrice | Notes |
|---|---|---|---|---|---|---|
| 1 | Read INPUT | 120 | true | — | — | |
| 2 | IF price < 0 | 120 | true | — | — | 120 < 0 is False |
| 3 | discountRate <- 0 | 120 | true | 0 | — | |
| 4 | IF isMember = true | 120 | true | 0 | — | true = true is True |
| 5 | discountRate <- 0.10 | 120 | true | 0.10 | — | |
| 6 | IF price >= 100 | 120 | true | 0.10 | — | 120 >= 100 is True |
| 7 | discountRate <- discountRate + 0.05 | 120 | true | 0.15 | — | |
| 8 | finalPrice <- price * (1 - discountRate) | 120 | true | 0.15 | 102 | 120 * 0.85 = 102 |
| 9 | OUTPUT finalPrice | 120 | true | 0.15 | 102 | Output: 102 |
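Before moving on, it can help to see the same algorithm in runnable form. Here is a minimal Python sketch of the pseudocode above (the function name compute_final_price is our own choice, not part of the pseudocode); running it confirms the traced result mechanically:

```python
def compute_final_price(price, is_member):
    """Python rendering of the discount pseudocode above."""
    if price < 0:
        return "invalid"       # the OUTPUT "invalid" / STOP branch
    discount_rate = 0          # the initialization the trace depends on
    if is_member:
        discount_rate = 0.10   # member discount (10%)
    if price >= 100:
        discount_rate += 0.05  # large-purchase discount stacks on top
    return price * (1 - discount_rate)

print(compute_final_price(120, True))  # 102.0, matching step 9 of the trace
```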
What Tracing Reveals
- If discountRate was not initialized, the table would show "unknown" values early, signaling a bug.
- If the second discount rule overwrote the first instead of adding (discountRate <- 0.05), the trace would show 0.05 instead of 0.15 (quantified in the sketch after this list).
- If the boundary check was wrong (price > 100 instead of price >= 100), a trace with price=100 would catch it.
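The second failure mode is easy to quantify. In this hypothetical buggy variant, the second rule overwrites the first, and the trace input (120, true) exposes it immediately:

```python
def buggy_final_price(price, is_member):
    """Like compute_final_price, but the second rule OVERWRITES the first."""
    if price < 0:
        return "invalid"
    discount_rate = 0
    if is_member:
        discount_rate = 0.10
    if price >= 100:
        discount_rate = 0.05   # bug: should be discount_rate += 0.05
    return price * (1 - discount_rate)

print(buggy_final_price(120, True))  # 114.0 instead of the predicted 102
```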
2) Designing Small Test Cases (Normal, Boundary, Invalid)
Tracing one input is not enough. You want a small set of test cases that “cover” the important behaviors. A good beginner rule: for each decision and loop, include at least one test that makes it go each possible way.
Test Case Categories
- Normal inputs: typical values that should work.
- Boundary inputs: values at or near decision edges (e.g., exactly 0, exactly 100, one less, one more).
- Invalid inputs: values outside allowed range or wrong type/format (e.g., negative price, missing input).
Mini Test Plan for the Discount Example
| ID | Input (price, isMember) | Category | Predicted Output | Reason (what it covers) |
|---|---|---|---|---|
| T1 | (50, false) | Normal | 50 | No discounts apply |
| T2 | (50, true) | Normal | 45 | Member discount only (10%) |
| T3 | (100, false) | Boundary | 95 | Exactly at 100 triggers +5% |
| T4 | (100, true) | Boundary | 85 | Both discounts at boundary (15%) |
| T5 | (99.99, true) | Boundary | 89.991 | Just below 100 should not add +5% |
| T6 | (-1, true) | Invalid | "invalid" | Negative price path |
Keep test cases small and focused. Each test should have a clear purpose: “This one checks the boundary,” “This one checks the invalid branch,” etc.
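Once the pseudocode has been translated (for example, the compute_final_price sketch from the tracing section), the whole plan runs as a short loop. math.isclose is used because T5's prediction is exact on paper but subject to floating-point rounding in code:

```python
import math

test_plan = [
    # (ID, price, is_member, predicted output)
    ("T1", 50,    False, 50),
    ("T2", 50,    True,  45),
    ("T3", 100,   False, 95),
    ("T4", 100,   True,  85),
    ("T5", 99.99, True,  89.991),
    ("T6", -1,    True,  "invalid"),
]

for test_id, price, is_member, predicted in test_plan:
    actual = compute_final_price(price, is_member)
    if isinstance(predicted, str):
        ok = actual == predicted              # invalid path returns a string
    else:
        ok = math.isclose(actual, predicted)  # tolerate float rounding (T5)
    print(test_id, "pass" if ok else f"FAIL (got {actual})")
```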
Tip: Predict Before You Trace
Write the predicted output first. If you trace and get a different result, you have found either (a) a bug in the pseudocode or (b) a misunderstanding of the requirement. Either way, you learned something important before coding.
3) Completeness Checks (Branches, Outputs, Loop Termination)
Completeness means the pseudocode handles all situations it claims to handle, and it always reaches a valid end state. This is a different mindset than tracing: you are scanning for missing paths, missing outputs, and non-terminating loops.
Branch Completeness: “Does Every Path Produce a Result?”
Common problems:
- Missing ELSE: a variable is assigned only in one branch, then used later (illustrated in the sketch after this list).
- Early STOP without output: the algorithm ends but the user gets nothing.
- Some branches output, others don’t: inconsistent behavior.
Quick check method:
- For each IF / ELSE IF / ELSE chain, list the possible outcomes (True/False for each condition).
- For each outcome, confirm: are all required variables assigned? Is there an output/return if needed?
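The first problem, a missing ELSE, is worth one concrete illustration. In this hypothetical Python fragment, grade is assigned on only one path, so the later use fails for any score below 60:

```python
def letter_result(score):
    if score >= 60:
        grade = "pass"
    # missing ELSE: grade is never assigned when score < 60
    return grade

letter_result(45)  # raises UnboundLocalError: 'grade' referenced before assignment
```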
Loop Termination: “What Makes the Loop Stop?”
For each loop, identify:
- Loop condition: what must become false (or true for REPEAT-UNTIL) to stop?
- Progress step: which variable changes each iteration to move toward stopping?
- Termination guarantee: is it possible that progress never happens?
Example of a termination bug (progress missing):
count <- 0
WHILE count < 5 DO
OUTPUT count
// missing: count <- count + 1
END WHILE

A completeness check catches this without running anything: the loop condition depends on count, but count never changes, so the loop never ends.
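The fix is a single progress step. In runnable Python form:

```python
count = 0
while count < 5:
    print(count)
    count += 1  # the progress step: count now moves toward the stopping condition
# prints 0 1 2 3 4; count reaches 5, the condition becomes false, the loop ends
```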
Input Handling Completeness: “What If the Input Is Invalid?”
If your pseudocode states or assumes constraints (e.g., “price must be non-negative”), ensure there is a clear behavior for violations: output an error message, return a special value, or stop with a reason. Then include at least one invalid test case to verify that path.
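One way to make that behavior explicit is to validate before computing. A minimal sketch, assuming string input and a non-negative price constraint (the parse_price helper and its messages are our own choices):

```python
def parse_price(raw):
    """Return (price, None) on success or (None, reason) on any violation."""
    try:
        price = float(raw)  # wrong format is caught here
    except ValueError:
        return None, "invalid: not a number"
    if price < 0:
        return None, "invalid: price must be non-negative"
    return price, None

print(parse_price("50"))   # (50.0, None)
print(parse_price("-1"))   # (None, 'invalid: price must be non-negative')
print(parse_price("abc"))  # (None, 'invalid: not a number')
```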
4) Readability Review (Consistent Names, No Hidden Steps)
Readability is a quality check because unclear pseudocode leads to incorrect code. The goal is that another person (or future you) can implement it without guessing.
Readability Checklist
- Consistent naming: don't switch between discount, disc, and rate for the same idea.
- No hidden steps: avoid "magic" phrases like "process the data" without specifying how.
- Explicit units and meaning: if discountRate is 0.15, clarify that it is a fraction (15%), not 15.
- One action per line: this makes tracing and debugging easier.
- Clear outputs: specify exactly what is output (value, format, rounding rules if relevant).
- Consistent decision wording: use the same comparison style and avoid ambiguous conditions.
Example: Removing Hidden Steps
Vague:
total <- compute total with discounts

Clearer:
total <- subtotal
IF hasCoupon THEN
total <- total - couponAmount
END IF
total <- total * (1 - discountRate)

Worksheet: Quality-Check Loop (Predict → Trace → Revise)
Use this worksheet each time you want to verify pseudocode before coding. The goal is to iterate until your predicted outputs match the traced outputs and all completeness/readability checks pass.
A) Write the Pseudocode Block You Are Checking
Paste or rewrite only the relevant block (small enough to trace). Number lines if helpful.
B) Select Test Inputs (Small Set)
- Pick 1–2 normal cases.
- Pick 2–3 boundary cases (at edges and just around them).
- Pick 1–2 invalid cases (violating constraints).
Template table:
| Test ID | Inputs | Category | Predicted Output | Notes (what it covers) |
|---|---|---|---|---|
| | | Normal / Boundary / Invalid | | |
| | | Normal / Boundary / Invalid | | |
| | | Normal / Boundary / Invalid | | |
C) Trace Each Test Case with a Variable Table
For each test case, create a trace table with:
- Step number
- Executed line/action
- All variables that change
- Decision results (True/False)
- Outputs produced
Template trace table:
| Step | Action | Var1 | Var2 | Var3 | Notes / Output |
|---|---|---|---|---|---|
| 1 | INPUT ... | | | | |
| 2 | IF ... | | | | |
| 3 | ... | | | | |
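On paper the table is filled in by hand, but the same idea can be automated: instrument the code so each step prints its own row. A sketch for the discount example (the trace helper is our own scaffolding, not a standard feature):

```python
def traced_final_price(price, is_member):
    step = 0
    def trace(action, **state):
        nonlocal step
        step += 1
        print(step, action, state)  # one trace-table row per call

    trace("INPUT", price=price, isMember=is_member)
    trace("IF price < 0", result=(price < 0))
    if price < 0:
        trace("OUTPUT", output="invalid")
        return "invalid"
    discount_rate = 0
    trace("discountRate <- 0", discountRate=discount_rate)
    trace("IF isMember", result=is_member)
    if is_member:
        discount_rate = 0.10
        trace("discountRate <- 0.10", discountRate=discount_rate)
    trace("IF price >= 100", result=(price >= 100))
    if price >= 100:
        discount_rate += 0.05
        trace("discountRate <- discountRate + 0.05", discountRate=discount_rate)
    final_price = price * (1 - discount_rate)
    trace("finalPrice <- price * (1 - discountRate)", finalPrice=final_price)
    trace("OUTPUT finalPrice", output=final_price)
    return final_price

traced_final_price(120, True)  # prints nine rows mirroring the table above
```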
D) Compare: Predicted Output vs Traced Output
- If they match, mark the test as pass.
- If they do not match, write down which step caused the difference (often an assignment or boundary condition).
Mismatch log template:
| Test ID | Predicted | Traced | First Divergence Step | Suspected Cause | Fix |
|---|---|---|---|---|---|
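When the tests run in code, the mismatch log can be produced the same way. A sketch reusing test_plan and compute_final_price from earlier (the first-divergence step and suspected cause still come from a manual trace):

```python
import math

mismatch_log = []
for test_id, price, is_member, predicted in test_plan:
    actual = compute_final_price(price, is_member)
    if isinstance(predicted, str):
        ok = actual == predicted
    else:
        ok = math.isclose(actual, predicted)
    if not ok:
        mismatch_log.append((test_id, predicted, actual))

print(mismatch_log)  # empty list means every predicted output matched the trace
```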
E) Revise the Pseudocode and Re-run the Worksheet
When you revise, be specific. Typical revisions include:
- Initialize a variable before use.
- Adjust a boundary condition (> vs >=, < vs <=); the sketch after this list shows the effect at the edge.
- Add a missing ELSE path or a missing output/return.
- Add a progress update inside a loop to guarantee termination.
- Rename variables for consistency and clarity.
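The boundary revision in particular is easy to verify once fixed. A hypothetical buggy variant of the discount example shows exactly what changes at the edge:

```python
def strict_boundary_price(price, is_member):
    """Hypothetical buggy variant: > where the requirement says >=."""
    discount_rate = 0.10 if is_member else 0
    if price > 100:  # bug: the boundary case price = 100 falls through
        discount_rate += 0.05
    return price * (1 - discount_rate)

# Boundary tests T3/T4 expose the bug: the requirement predicts 95 and 85,
# but the strict comparison skips the +5% exactly at the edge.
print(strict_boundary_price(100, False))  # 100.0 instead of 95
print(strict_boundary_price(100, True))   # 90.0 instead of 85
```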
F) Final Quick Scan (Completeness + Readability)
- Branches: every path assigns required variables and produces required output/return.
- Loops: termination condition is reachable; progress variable changes correctly.
- Inputs: invalid inputs have defined behavior.
- Readability: consistent names, explicit steps, no “and then it works” lines.