Pseudocode Mastery for Beginners

Pseudocode Quality Checks: Tracing, Test Cases, and Edge Conditions

Chapter 10

Estimated reading time: 7 minutes

Why Quality Checks Matter Before You Code

Pseudocode can look “right” while still being wrong. Quality checks help you catch logic errors early, before they become debugging sessions in a real language. This chapter focuses on four practical checks you can do on paper (or in a document): manual tracing, test case design (normal/boundary/invalid), completeness checks (branches/loops), and readability review (names/steps). You will use a worksheet-style loop: choose inputs → predict outputs → trace → compare → revise until everything matches.

1) Manual Tracing (Dry-Run) with a Variable Table

Manual tracing means you execute the pseudocode step by step as if you were the computer. The key tool is a trace table: each row is a step, and each column is a variable (plus notes about decisions and outputs). Tracing is especially useful for catching off-by-one errors, wrong updates inside loops, and missing initialization.

Example Pseudocode to Trace

We will trace a small algorithm that calculates the total cost after applying a discount rule. (The exact rule is less important than the tracing method.)

INPUT price, isMember   // price is a number, isMember is true/false
IF price < 0 THEN
    OUTPUT "invalid"
    STOP
END IF

discountRate <- 0
IF isMember = true THEN
    discountRate <- 0.10
END IF
IF price >= 100 THEN
    discountRate <- discountRate + 0.05
END IF

finalPrice <- price * (1 - discountRate)
OUTPUT finalPrice

How to Build a Trace Table

  • Choose a specific input (e.g., price=120, isMember=true).
  • List variables you expect to change: price, isMember, discountRate, finalPrice.
  • Number key steps (each assignment, each decision, each output).
  • Record values after each step. For decisions, record the condition result (T/F) in a notes column.

Trace Table (Input: price=120, isMember=true)

Step | Action                                    | price | isMember | discountRate | finalPrice | Notes
1    | Read INPUT                                | 120   | true     |              |            |
2    | IF price < 0                              | 120   | true     |              |            | 120 < 0 is False
3    | discountRate <- 0                         | 120   | true     | 0            |            |
4    | IF isMember = true                        | 120   | true     | 0            |            | true = true is True
5    | discountRate <- 0.10                      | 120   | true     | 0.10         |            |
6    | IF price >= 100                           | 120   | true     | 0.10         |            | 120 >= 100 is True
7    | discountRate <- discountRate + 0.05       | 120   | true     | 0.15         |            |
8    | finalPrice <- price * (1 - discountRate)  | 120   | true     | 0.15         | 102        | 120 * 0.85 = 102
9    | OUTPUT finalPrice                         | 120   | true     | 0.15         | 102        | Output: 102
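
If you later want to confirm a trace like this in a real language, the pseudocode above translates almost line for line. The following is a minimal Python sketch of our own (the function name discounted_price is not part of the course material); running it should reproduce step 9:

def discounted_price(price, is_member):
    # Python rendering of the discount pseudocode above.
    if price < 0:
        return "invalid"              # mirrors OUTPUT "invalid" / STOP
    discount_rate = 0.0               # mirrors discountRate <- 0
    if is_member:
        discount_rate = 0.10          # member discount
    if price >= 100:
        discount_rate += 0.05         # large-purchase discount stacks on top
    return price * (1 - discount_rate)

# Rounded to 2 decimals because floating-point arithmetic may show 101.99999999999999.
print(round(discounted_price(120, True), 2))   # 102.0, matching step 9 of the trace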

What Tracing Reveals

  • If discountRate was not initialized, the table would show “unknown” values early, signaling a bug.
  • If the second discount rule overwrote the first instead of adding (discountRate <- 0.05), the trace would show 0.05 instead of 0.15.
  • If the boundary check was wrong (price > 100 instead of >=), a trace with price=100 would catch it, as the sketch below demonstrates.
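
To make the last point concrete, here is a small Python comparison of the two boundary conditions at price=100 (our own illustration; the helper names are invented):

def extra_rate_strict(price):
    # Buggy version: > misses the boundary value itself.
    return 0.05 if price > 100 else 0.0

def extra_rate_inclusive(price):
    # Intended version: >= includes exactly 100.
    return 0.05 if price >= 100 else 0.0

print(extra_rate_strict(100))     # 0.0  -> boundary bug: no extra discount
print(extra_rate_inclusive(100))  # 0.05 -> extra discount applied as intended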

2) Designing Small Test Cases (Normal, Boundary, Invalid)

Tracing one input is not enough. You want a small set of test cases that “cover” the important behaviors. A good beginner rule: for each decision and loop, include at least one test that makes it go each possible way.

Test Case Categories

  • Normal inputs: typical values that should work.
  • Boundary inputs: values at or near decision edges (e.g., exactly 0, exactly 100, one less, one more).
  • Invalid inputs: values outside allowed range or wrong type/format (e.g., negative price, missing input).

Mini Test Plan for the Discount Example

ID | Input (price, isMember) | Category | Predicted Output | Reason (what it covers)
T1 | (50, false)             | Normal   | 50               | No discounts apply
T2 | (50, true)              | Normal   | 45               | Member discount only (10%)
T3 | (100, false)            | Boundary | 95               | Exactly at 100 triggers +5%
T4 | (100, true)             | Boundary | 85               | Both discounts at boundary (15%)
T5 | (99.99, true)           | Boundary | 89.991           | Just below 100 should not add +5%
T6 | (-1, true)              | Invalid  | "invalid"        | Negative price path

Keep test cases small and focused. Each test should have a clear purpose: “This one checks the boundary,” “This one checks the invalid branch,” etc.
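
Once the pseudocode has been translated (as in the earlier Python sketch), the same table can drive a small automated check. The loop below is our own addition, not part of the course; it hard-codes the T1-T6 rows and uses math.isclose to tolerate floating-point rounding:

import math

def discounted_price(price, is_member):
    # Same logic as the earlier sketch of the discount pseudocode.
    if price < 0:
        return "invalid"
    rate = 0.10 if is_member else 0.0
    if price >= 100:
        rate += 0.05
    return price * (1 - rate)

# (inputs, predicted output) pairs copied from the mini test plan above.
cases = {
    "T1": ((50, False), 50),
    "T2": ((50, True), 45),
    "T3": ((100, False), 95),
    "T4": ((100, True), 85),
    "T5": ((99.99, True), 89.991),
    "T6": ((-1, True), "invalid"),
}

for test_id, (args, predicted) in cases.items():
    actual = discounted_price(*args)
    if isinstance(predicted, str):
        ok = actual == predicted
    else:
        ok = math.isclose(actual, predicted)   # tolerate float rounding
    status = "pass" if ok else f"FAIL (got {actual})"
    print(f"{test_id}: {status}")

Each test still has the single, stated purpose from the Reason column; the loop only automates the comparison step.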

Tip: Predict Before You Trace

Write the predicted output first. If you trace and get a different result, you have found either (a) a bug in the pseudocode or (b) a misunderstanding of the requirement. Either way, you learned something important before coding.

3) Completeness Checks (Branches, Outputs, Loop Termination)

Completeness means the pseudocode handles all situations it claims to handle, and it always reaches a valid end state. This is a different mindset than tracing: you are scanning for missing paths, missing outputs, and non-terminating loops.

Branch Completeness: “Does Every Path Produce a Result?”

Common problems:

  • Missing ELSE: a variable is assigned only in one branch, then used later.
  • Early STOP without output: the algorithm ends but the user gets nothing.
  • Some branches output, others don’t: inconsistent behavior.
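
Python makes the first problem visible at run time: a variable assigned in only one branch simply does not exist on the path that skipped it. The snippet below is our own illustration; the shipping scenario is invented for the example:

def shipping_label(weight):
    if weight > 20:
        category = "heavy"
    # Missing ELSE: for weight <= 20, category is never assigned.
    return f"category: {category}"

print(shipping_label(25))      # works: category: heavy
try:
    print(shipping_label(5))   # the other path never assigned category
except UnboundLocalError as err:
    print("bug found:", err)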

Quick check method:

  • For each IF/ELSE IF/ELSE chain, list the possible outcomes (True/False for each condition).
  • For each outcome, confirm: are all required variables assigned? is there an output/return if needed?

Loop Termination: “What Makes the Loop Stop?”

For each loop, identify:

  • Loop condition: what must become false (or true for REPEAT-UNTIL) to stop?
  • Progress step: which variable changes each iteration to move toward stopping?
  • Termination guarantee: is it possible that progress never happens?

Example of a termination bug (progress missing):

count <- 0
WHILE count < 5 DO
    OUTPUT count
    // missing: count <- count + 1
END WHILE

A completeness check catches this without running anything: the condition depends on count, but count never changes, so the loop never ends.
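
When a loop like this eventually becomes real code, a simple safety net during testing is an iteration cap that fails loudly instead of hanging. A minimal Python sketch (the cap value and variable names are our own choices):

count = 0
iterations = 0
MAX_ITERATIONS = 1_000      # arbitrary safety cap while testing

while count < 5:
    print(count)
    count += 1              # the progress step the buggy version was missing

    iterations += 1
    if iterations > MAX_ITERATIONS:
        raise RuntimeError("loop made no progress toward termination")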

Input Handling Completeness: “What If the Input Is Invalid?”

If your pseudocode states or assumes constraints (e.g., “price must be non-negative”), ensure there is a clear behavior for violations: output an error message, return a special value, or stop with a reason. Then include at least one invalid test case to verify that path.
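
In real code, "stop with a reason" usually becomes an exception, while the earlier sketch showed the "return a special value" option. A short Python illustration of our own (the function name is invented):

def discounted_price_strict(price, is_member):
    if price < 0:
        raise ValueError("price must be non-negative")   # stop with a reason
    rate = (0.10 if is_member else 0.0) + (0.05 if price >= 100 else 0.0)
    return price * (1 - rate)

try:
    discounted_price_strict(-1, True)
except ValueError as err:
    print("rejected:", err)   # rejected: price must be non-negative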

4) Readability Review (Consistent Names, No Hidden Steps)

Readability is a quality check because unclear pseudocode leads to incorrect code. The goal is that another person (or future you) can implement it without guessing.

Readability Checklist

  • Consistent naming: don’t switch between discount, disc, and rate for the same idea.
  • No hidden steps: avoid “magic” phrases like “process the data” without specifying how.
  • Explicit units and meaning: if discountRate is 0.15, clarify it is a fraction (15%), not 15.
  • One action per line: makes tracing and debugging easier.
  • Clear outputs: specify exactly what is output (value, format, rounding rules if relevant).
  • Consistent decision wording: use the same comparison style and avoid ambiguous conditions.

Example: Removing Hidden Steps

Vague:

total <- compute total with discounts

Clearer:

total <- subtotal
IF hasCoupon THEN
    total <- total - couponAmount
END IF
total <- total * (1 - discountRate)
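
The clearer version also maps directly onto real code. Below is a Python sketch of our own; the parameter names and the two-decimal rounding rule are additions that illustrate the "clear outputs" item from the checklist:

def order_total(subtotal, has_coupon, coupon_amount, discount_rate):
    # Mirrors the clearer pseudocode above, one action per line.
    total = subtotal
    if has_coupon:
        total = total - coupon_amount
    total = total * (1 - discount_rate)
    return round(total, 2)          # explicit rounding rule for the output

print(order_total(80.00, True, 5.00, 0.10))   # 67.5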

Worksheet: Quality-Check Loop (Predict → Trace → Revise)

Use this worksheet each time you want to verify pseudocode before coding. The goal is to iterate until your predicted outputs match the traced outputs and all completeness/readability checks pass.

A) Write the Pseudocode Block You Are Checking

Paste or rewrite only the relevant block (small enough to trace). Number lines if helpful.

B) Select Test Inputs (Small Set)

  • Pick 1–2 normal cases.
  • Pick 2–3 boundary cases (at edges and just around them).
  • Pick 1–2 invalid cases (violating constraints).

Template table:

Test ID | Inputs | Category                    | Predicted Output | Notes (what it covers)
        |        | Normal / Boundary / Invalid |                  |
        |        | Normal / Boundary / Invalid |                  |
        |        | Normal / Boundary / Invalid |                  |

C) Trace Each Test Case with a Variable Table

For each test case, create a trace table with:

  • Step number
  • Executed line/action
  • All variables that change
  • Decision results (True/False)
  • Outputs produced

Template trace table:

Step | Action    | Var1 | Var2 | Var3 | Notes / Output
1    | INPUT ... |      |      |      |
2    | IF ...    |      |      |      |
3    | ...       |      |      |      |

D) Compare: Predicted Output vs Traced Output

  • If they match, mark the test as pass.
  • If they do not match, write down which step caused the difference (often an assignment or boundary condition).

Mismatch log template:

Test ID | Predicted | Traced | First Divergence Step | Suspected Cause | Fix

E) Revise the Pseudocode and Re-run the Worksheet

When you revise, be specific. Typical revisions include:

  • Initialize a variable before use.
  • Adjust a boundary condition (> vs >=, < vs <=).
  • Add a missing ELSE path or missing output/return.
  • Add a progress update inside a loop to guarantee termination.
  • Rename variables for consistency and clarity.

F) Final Quick Scan (Completeness + Readability)

  • Branches: every path assigns required variables and produces required output/return.
  • Loops: termination condition is reachable; progress variable changes correctly.
  • Inputs: invalid inputs have defined behavior.
  • Readability: consistent names, explicit steps, no “and then it works” lines.

Now answer the exercise about the content:

When reviewing a WHILE loop for completeness, which combination best ensures the loop will eventually stop?

A completeness check for loops focuses on termination: what condition stops the loop, what variable changes each iteration (progress), and whether that progress guarantees the stopping condition can be reached.
