
Software Testing Foundations: From Requirements to Defects



Core Test Design Techniques for Beginners

Chapter 6

Estimated reading time: 12 minutes


What “Test Design” Means in Practice

Test design is the activity of turning information you have (requirements, user stories, interface descriptions, data rules, workflows, and known constraints) into concrete test cases that can be executed. A “test case” is more than a click path: it includes a purpose, inputs, steps, and expected results. Good test design aims to cover important behavior with a manageable number of tests, using techniques that help you choose representative inputs and scenarios instead of guessing.

Beginners often start by writing tests that mirror the “happy path” only. Core test design techniques help you systematically explore variations: different data values, different states of the system, different user actions, and different combinations of conditions. The goal is not to test everything, but to test intelligently—using structured methods that reduce blind spots.

Technique 1: Equivalence Partitioning (EP)

Concept: Many inputs can be grouped into “partitions” (equivalence classes) where the system should behave the same for any value in that group. Instead of testing every possible value, you test one representative value from each partition.

Equivalence partitions are typically:

  • Valid partitions: inputs that should be accepted.
  • Invalid partitions: inputs that should be rejected or handled with an error message.

Practical example: A signup form has an “Age” field with the rule: “Age must be an integer between 18 and 65 inclusive.”


  • Valid partition: integers 18–65
  • Invalid partition: integers < 18
  • Invalid partition: integers > 65
  • Invalid partition: non-integers (e.g., 18.5)
  • Invalid partition: non-numeric (e.g., “abc”)
  • Invalid partition: empty / null

Instead of testing 18, 19, 20, …, 65, you might pick one representative from each partition: 30 (valid), 17 (invalid low), 66 (invalid high), 18.5 (invalid: non-integer), “abc” (invalid: non-numeric), empty (invalid: missing).
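To make the partitions concrete, here is a minimal sketch in Python, assuming a hypothetical validate_age function that implements the rule; each parameterized case is one representative per partition:

import pytest

def validate_age(value) -> bool:
    """Hypothetical implementation of the rule: integer, 18-65 inclusive."""
    return isinstance(value, int) and 18 <= value <= 65

@pytest.mark.parametrize("value,expected", [
    (30, True),      # valid partition: integers 18-65
    (17, False),     # invalid partition: integers < 18
    (66, False),     # invalid partition: integers > 65
    (18.5, False),   # invalid partition: non-integer
    ("abc", False),  # invalid partition: non-numeric
    (None, False),   # invalid partition: empty / null
])
def test_age_partitions(value, expected):
    assert validate_age(value) == expected

One test per partition keeps the suite small while still covering every distinct behavior group.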

Step-by-step: How to Apply Equivalence Partitioning

  • Step 1: Identify the input(s) you need to vary (a field, parameter, file, message attribute).
  • Step 2: Find rules (range, format, allowed set, length, required/optional).
  • Step 3: Create partitions for valid and invalid groups.
  • Step 4: Choose representatives (at least one per partition).
  • Step 5: Define expected results for each representative (accept, reject, error message, defaulting behavior).

Tip: EP works best when you can clearly define “same behavior.” If different values in a partition trigger different business logic, split the partition further.

Technique 2: Boundary Value Analysis (BVA)

Concept: Defects often occur at the edges of allowed ranges: minimums, maximums, and values just outside them. Boundary Value Analysis focuses tests around these boundaries.

BVA is often used together with EP. EP tells you which groups exist; BVA tells you where the risk of mistakes is highest within those groups.

Practical example: Same “Age 18–65 inclusive” rule.

  • Lower boundary: 18
  • Just below lower boundary: 17
  • Just above lower boundary: 19
  • Upper boundary: 65
  • Just below upper boundary: 64
  • Just above upper boundary: 66

This set is small but powerful. It catches common errors like using > instead of ≥, or off-by-one mistakes.
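Expressed as a test sketch, reusing the hypothetical validate_age stub from the equivalence partitioning example above:

import pytest  # validate_age is the stub defined in the EP sketch earlier

@pytest.mark.parametrize("value,expected", [
    (17, False),  # just below lower boundary
    (18, True),   # lower boundary (inclusive)
    (19, True),   # just above lower boundary
    (64, True),   # just below upper boundary
    (65, True),   # upper boundary (inclusive)
    (66, False),  # just above upper boundary
])
def test_age_boundaries(value, expected):
    assert validate_age(value) == expected

If the implementation accidentally used > 18 instead of >= 18, the (18, True) case fails immediately.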

Step-by-step: How to Apply Boundary Value Analysis

  • Step 1: Identify boundaries (min/max values, length limits, date cutoffs, count limits).
  • Step 2: Create boundary-focused values: at boundary, just inside, just outside.
  • Step 3: Combine with EP: ensure you still cover other invalid types (e.g., non-numeric) if relevant.
  • Step 4: Specify expected results precisely (e.g., “65 accepted,” “66 rejected with message X”).

Common beginner mistake: Only testing the boundary itself (18 and 65) and forgetting “just outside” (17 and 66), which is where many defects show up.

Technique 3: Decision Tables (Rules and Combinations)

Concept: When behavior depends on multiple conditions, decision tables help you enumerate combinations and expected outcomes so you don’t miss important cases. This is especially useful when requirements say things like “If A and B then X, but if A and not B then Y.”

Practical example: An online store applies shipping fees based on two conditions:

  • Condition 1: Order total ≥ $50?
  • Condition 2: Customer is a premium member?

Rules:

  • If total ≥ $50 OR premium member, shipping is free.
  • Otherwise, shipping costs $5.

A decision table can represent all combinations:

Conditions              Rule 1          Rule 2            Rule 3          Rule 4
Total ≥ $50?            Y               Y                 N               N
Premium member?         Y               N                 Y               N
Expected shipping fee   $0              $0                $0              $5
Example data            $60, premium    $55, non-premium  $40, premium    $40, non-premium

Even with only two conditions, the table prevents you from forgetting the “premium but low total” case.
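As a minimal sketch, the rule and one test per column of the table could look like this in Python (the function name and example data are illustrative):

def shipping_fee(order_total: float, is_premium: bool) -> float:
    """Free shipping when total >= $50 OR the customer is premium."""
    return 0.0 if order_total >= 50 or is_premium else 5.0

# One concrete test per rule in the decision table above.
assert shipping_fee(60, True) == 0.0    # Rule 1: total >= $50, premium
assert shipping_fee(55, False) == 0.0   # Rule 2: total >= $50, non-premium
assert shipping_fee(40, True) == 0.0    # Rule 3: total < $50, premium
assert shipping_fee(40, False) == 5.0   # Rule 4: total < $50, non-premium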

Step-by-step: How to Build a Decision Table

  • Step 1: List conditions that influence the outcome.
  • Step 2: List possible values for each condition (often Yes/No, but can be more).
  • Step 3: Enumerate combinations (all, or a reduced set if some are impossible).
  • Step 4: Define expected actions/outcomes for each combination.
  • Step 5: Convert each rule into a test case with concrete data.

Tip: If you have many conditions, combinations can explode. Start by identifying impossible combinations (e.g., mutually exclusive states) and merge rules that lead to the same outcome.

Technique 4: State Transition Testing (Behavior Over Time)

Concept: Some systems behave differently depending on their current state, and actions can move the system from one state to another. State transition testing focuses on verifying allowed transitions and preventing invalid ones.

Practical example: A support ticket can be in states: New, In Progress, Waiting for Customer, Resolved, Closed. Rules might include:

  • New → In Progress (allowed)
  • In Progress → Waiting for Customer (allowed)
  • Waiting for Customer → In Progress (allowed)
  • Resolved → Closed (allowed)
  • Closed → In Progress (not allowed)
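A minimal sketch of these rules as a lookup table you can assert against. The rule list above is explicitly partial, so the transition In Progress → Resolved is an assumption added here so tickets can reach Resolved:

ALLOWED_TRANSITIONS = {
    "New": {"In Progress"},
    "In Progress": {"Waiting for Customer", "Resolved"},  # Resolved: assumed
    "Waiting for Customer": {"In Progress"},
    "Resolved": {"Closed"},
    "Closed": set(),  # terminal state: no transitions out
}

def can_transition(current: str, target: str) -> bool:
    return target in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("New", "In Progress")                   # allowed
assert can_transition("Waiting for Customer", "In Progress")  # allowed (loop back)
assert not can_transition("Closed", "In Progress")            # disallowed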

Step-by-step: How to Design State Transition Tests

  • Step 1: Identify states (from requirements, UI labels, workflow diagrams).
  • Step 2: Identify events/actions that cause transitions (buttons, API calls, scheduled jobs).
  • Step 3: Draw a simple state diagram (even a rough sketch is enough).
  • Step 4: Create tests for allowed transitions (positive) and disallowed transitions (negative).
  • Step 5: Include state-dependent behavior checks (e.g., which fields are editable in each state).

Example test ideas:

  • Create a ticket (New), click “Start Work,” verify state becomes In Progress and “Close Ticket” is not available.
  • Move to Resolved, then attempt to edit description; verify it is read-only (if that is the rule).
  • Close the ticket, then attempt to reopen via UI; verify the system blocks it or requires a specific permission.

Common beginner mistake: Testing only the “normal” path (New → In Progress → Resolved → Closed) and missing invalid transitions or alternate paths (Waiting for Customer loops).

Technique 5: Use Case / Scenario-Based Testing (End-to-End Stories)

Concept: Scenario-based testing designs tests around realistic user goals and workflows. It helps you validate that features work together, not just in isolation. This technique is especially useful for beginners because it provides a narrative structure: “As a user, I want to achieve X.”

Scenario tests are not random exploratory sessions; they are designed with clear start conditions, steps, and expected outcomes across multiple screens or components.

Practical example: “User resets password and logs in with the new password.”

  • Precondition: user account exists, email is accessible.
  • Steps: request reset → receive link → set new password → log in.
  • Expected: reset link works once, password policy enforced, old password no longer works, new password works.
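The same scenario can be sketched as a scripted end-to-end test. The FakeAuthApp below is a tiny in-memory stand-in invented for illustration; a real test would drive the actual UI or API:

import secrets

class FakeAuthApp:
    """Illustrative in-memory model of accounts and reset links."""
    def __init__(self):
        self.passwords = {}     # email -> current password
        self.reset_tokens = {}  # single-use token -> email

    def register(self, email, password):
        self.passwords[email] = password

    def login(self, email, password):
        return self.passwords.get(email) == password

    def request_reset(self, email):
        token = secrets.token_hex(8)
        self.reset_tokens[token] = email
        return token

    def set_new_password(self, token, new_password):
        email = self.reset_tokens.pop(token, None)  # pop makes the link single-use
        if email is None:
            return False
        self.passwords[email] = new_password
        return True

app = FakeAuthApp()
app.register("demo@example.com", "OldPass#1")          # precondition: account exists
token = app.request_reset("demo@example.com")          # request reset, receive link
assert app.set_new_password(token, "NewPass#2")        # set new password
assert not app.login("demo@example.com", "OldPass#1")  # old password no longer works
assert app.login("demo@example.com", "NewPass#2")      # new password works
assert not app.set_new_password(token, "Another#3")    # reset link works only once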

Step-by-step: How to Write a Strong Scenario Test

  • Step 1: Define the goal (what the user is trying to accomplish).
  • Step 2: Define preconditions (account state, permissions, data setup).
  • Step 3: Write main flow steps with expected results at key points.
  • Step 4: Add 1–3 variations (e.g., expired reset link, wrong email, password policy violation).
  • Step 5: Identify checkpoints (UI messages, emails sent, audit logs, database changes if observable).

Tip: Combine scenario tests with EP/BVA for the critical inputs inside the scenario (e.g., password length boundaries, invalid characters).

Technique 6: Pairwise / All-Pairs Testing (Efficient Combination Coverage)

Concept: When you have multiple parameters with multiple values (browser types, user roles, payment methods, locales), testing every combination can be too large. Pairwise testing aims to cover every pair of parameter values at least once, which often finds many interaction defects with far fewer tests than full combinatorial coverage.

Practical example: You need to test checkout with:

  • Browser: Chrome, Firefox, Safari
  • Payment: Card, PayPal, Bank Transfer
  • Shipping: Standard, Express

All combinations: 3 × 3 × 2 = 18 tests. Pairwise can reduce this to a smaller set while still ensuring, for example, that “Safari + PayPal” and “Firefox + Express” each appear in at least one test.

Step-by-step: How to Apply Pairwise Testing (Beginner-Friendly)

  • Step 1: List parameters that can vary independently.
  • Step 2: List values for each parameter.
  • Step 3: Decide constraints (invalid combinations, e.g., “Bank Transfer not available for Express”).
  • Step 4: Generate a pairwise set using a simple tool or a manual approach for small sets.
  • Step 5: Turn each row into a test case with concrete data and expected results.

Manual mini-approach: For small combinations, you can build a table and try to ensure each pair appears at least once, but tools are more reliable. The key learning is the mindset: prioritize interaction coverage without exploding the test count.
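Here is a minimal sketch of that coverage check in Python: a hand-picked set of 9 rows, plus a loop that proves every pair of values appears at least once (it assumes all combinations are valid, i.e., no constraints):

from itertools import combinations, product

PARAMS = {
    "browser":  ["Chrome", "Firefox", "Safari"],
    "payment":  ["Card", "PayPal", "Bank Transfer"],
    "shipping": ["Standard", "Express"],
}

# Candidate pairwise set: 9 rows instead of the full 3 x 3 x 2 = 18.
TESTS = [
    ("Chrome",  "Card",          "Standard"),
    ("Chrome",  "PayPal",        "Express"),
    ("Chrome",  "Bank Transfer", "Express"),
    ("Firefox", "Card",          "Express"),
    ("Firefox", "PayPal",        "Standard"),
    ("Firefox", "Bank Transfer", "Standard"),
    ("Safari",  "Card",          "Express"),
    ("Safari",  "PayPal",        "Standard"),
    ("Safari",  "Bank Transfer", "Standard"),
]

names = list(PARAMS)
for i, j in combinations(range(len(names)), 2):
    for v1, v2 in product(PARAMS[names[i]], PARAMS[names[j]]):
        assert any(row[i] == v1 and row[j] == v2 for row in TESTS), \
            f"Missing pair: {names[i]}={v1}, {names[j]}={v2}"
print(f"All pairs covered by {len(TESTS)} tests (full grid: 18).")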

Technique 7: Error Guessing (But Make It Structured)

Concept: Error guessing uses tester experience and common failure patterns to design tests that are likely to find defects. For beginners, the risk is turning this into random testing. The way to make it useful is to use a checklist of common error patterns and apply it consistently.

Common error patterns to guess:

  • Empty input, whitespace-only input, leading/trailing spaces
  • Very long strings (length limits), special characters, emoji, non-ASCII characters
  • Copy/paste behavior, auto-fill, browser back/refresh
  • Double-clicking submit, repeated API calls, retry behavior
  • Time-related issues: expired links, timezone offsets, daylight saving changes
  • Concurrency: two users editing the same record
  • Permissions: user without rights tries to access admin function
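One way to make the checklist repeatable is to keep the hostile inputs in a list and run every feature's text fields through it. The save_description function below is an invented stand-in for whatever you are actually testing:

import pytest

ERROR_GUESS_STRINGS = [
    "",                 # empty input
    "   ",              # whitespace-only input
    "  padded  ",       # leading/trailing spaces
    "A" * 10_000,       # very long string (length limits)
    "O'Brien & <Sons>", # special characters
    "😀🚀",             # emoji
    "Ünïcødé",          # non-ASCII characters
]

def save_description(value: str) -> str:
    """Illustrative stand-in: trims input and enforces a 200-char limit."""
    trimmed = value.strip()
    if len(trimmed) > 200:
        raise ValueError("too long")
    return trimmed

@pytest.mark.parametrize("value", ERROR_GUESS_STRINGS)
def test_description_handles_hostile_input(value):
    try:
        assert len(save_description(value)) <= 200  # accepted: limit respected
    except ValueError:
        pass  # a clean, explicit rejection is also an acceptable outcome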

Step-by-step: Turning Error Guessing into Repeatable Tests

  • Step 1: Pick a feature (e.g., “Create invoice”).
  • Step 2: Apply a checklist of error patterns relevant to that feature.
  • Step 3: Write test cases for the top 5–10 guesses with clear expected results.
  • Step 4: Track what you find and update your checklist based on real defects.

Example: For a “Submit payment” button, add a test: click “Pay” twice quickly; expected result: only one payment is created, UI shows a single confirmation, and the second click is ignored or blocked.

Technique 8: Exploratory Testing with Charters (Focused Learning)

Concept: Exploratory testing is simultaneous learning, test design, and execution. It is valuable for beginners when it is guided by a clear mission (a charter), time-boxed, and produces notes that can be turned into repeatable test cases.

Charter example: “Explore the profile update page focusing on validation, save behavior, and error handling.”

Step-by-step: Running a Simple Exploratory Session

  • Step 1: Define a charter (what to explore and what to focus on).
  • Step 2: Time-box (e.g., 30–60 minutes).
  • Step 3: Prepare data (test accounts, sample inputs).
  • Step 4: Explore using techniques: EP/BVA for fields, state transitions for workflow, error guessing for edge behaviors.
  • Step 5: Capture evidence: notes, screenshots, steps to reproduce, observed vs expected.
  • Step 6: Convert discoveries into new scripted tests (especially for defects and regressions).

Beginner tip: If you feel lost during exploration, pick one field or one rule and apply EP+BVA first. Structure creates momentum.

How to Choose the Right Technique for a Given Feature

Most real features benefit from combining techniques. A practical way to decide is to look at what drives behavior:

  • Single input with clear rules → Equivalence Partitioning + Boundary Value Analysis
  • Multiple conditions affecting outcomes → Decision Tables
  • Workflow with statuses → State Transition Testing
  • User goal across multiple steps → Scenario-Based Testing
  • Many configuration combinations → Pairwise Testing
  • Known common failure patterns → Structured Error Guessing
  • Unclear behavior or new feature → Exploratory Testing with Charters

Worked Example: Designing Tests for a “Discount Code” Feature

To see how techniques combine, imagine a checkout page with a “Discount code” input. Rules:

  • Code is optional.
  • Valid codes are 6–10 characters, uppercase letters and digits only.
  • Some codes are “percentage” discounts (e.g., 10% off), some are “fixed amount” (e.g., $5 off).
  • Code may be expired.
  • Code may be limited to first-time customers.
  • Only one code can be applied at a time.

Step 1: Equivalence Partitions for the Code Field

  • Valid format code (uppercase letters/digits, length 6–10)
  • Invalid: too short (< 6)
  • Invalid: too long (> 10)
  • Invalid: contains lowercase
  • Invalid: contains special characters
  • Empty: valid because the code is optional; expected behavior is “no discount applied,” not an error

Step 2: Boundary Values for Length

  • Length 5 (invalid)
  • Length 6 (valid)
  • Length 10 (valid)
  • Length 11 (invalid)
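A minimal sketch of the format rule as a regular expression, with the partition and boundary representatives as assertions (the function name and sample codes are illustrative):

import re

CODE_PATTERN = re.compile(r"[A-Z0-9]{6,10}")

def is_valid_code_format(code: str) -> bool:
    return bool(CODE_PATTERN.fullmatch(code))

assert is_valid_code_format("SAVE10")           # valid, length 6 (lower boundary)
assert is_valid_code_format("SUMMERSALE")       # valid, length 10 (upper boundary)
assert not is_valid_code_format("SAVE5")        # length 5, just below
assert not is_valid_code_format("SUMMERSALES")  # length 11, just above
assert not is_valid_code_format("save10")       # contains lowercase
assert not is_valid_code_format("SAVE-10")      # contains special character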

Step 3: Decision Table for Eligibility Rules

Now add conditions that affect whether a valid-format code is accepted:

  • Condition A: Code exists in system?
  • Condition B: Code expired?
  • Condition C: First-time customer required?
  • Condition D: Customer is first-time?
Rule  A: exists  B: expired  C: requires first-time  D: is first-time  Expected result
1     Y          N           N                       -                 Apply discount
2     Y          N           Y                       Y                 Apply discount
3     Y          N           Y                       N                 Reject with eligibility message
4     Y          Y           -                       -                 Reject with expired message
5     N          -           -                       -                 Reject with invalid code message

Notice how the table forces you to define distinct expected messages/outcomes, which improves defect reporting and reduces ambiguity.
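As a minimal sketch, the five rules translate directly into code, with one assertion per rule (names and message strings are illustrative):

def check_code(exists: bool, expired: bool,
               requires_first_time: bool, is_first_time: bool) -> str:
    if not exists:
        return "invalid code"     # Rule 5
    if expired:
        return "expired"          # Rule 4
    if requires_first_time and not is_first_time:
        return "not eligible"     # Rule 3
    return "apply discount"       # Rules 1 and 2

assert check_code(True,  False, False, False) == "apply discount"  # Rule 1
assert check_code(True,  False, True,  True)  == "apply discount"  # Rule 2
assert check_code(True,  False, True,  False) == "not eligible"    # Rule 3
assert check_code(True,  True,  False, False) == "expired"         # Rule 4
assert check_code(False, False, False, False) == "invalid code"    # Rule 5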

Step 4: State/Sequence Checks (Only One Code at a Time)

This rule is about behavior over time:

  • Apply code CODE10 → discount applied
  • Apply code SAVE5 → expected: CODE10 removed and SAVE5 applied (or system blocks second code with a message, depending on the rule)
  • Remove code → totals return to original
  • Refresh page → code remains applied (if expected) or is cleared (if expected); verify consistent behavior
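A minimal sketch of the replace-on-apply interpretation of this rule; the alternative interpretation (blocking the second code with a message) would make apply_code raise instead. Class and method names are illustrative:

class Cart:
    def __init__(self):
        self.code = None

    def apply_code(self, code: str):
        self.code = code   # replaces any previously applied code

    def remove_code(self):
        self.code = None

cart = Cart()
cart.apply_code("CODE10")
cart.apply_code("SAVE5")
assert cart.code == "SAVE5"  # CODE10 was replaced, not stacked
cart.remove_code()
assert cart.code is None     # with no code, totals return to original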

Step 5: Error Guessing for Realistic Failures

  • Paste code with trailing space: “CODE10 ” → expected: trimmed and accepted, or rejected with clear message (define expected)
  • Rapidly click “Apply” multiple times → expected: one application, no duplicated discounts
  • Apply code, then change cart items → expected: discount recalculates correctly or is removed if no longer eligible

Writing Test Cases So They Are Executable and Useful

Test design techniques help you choose what to test; you still need to document tests so someone can run them and know what “pass” means. For beginners, a simple consistent template is enough:

  • ID/Name: short and specific
  • Purpose: what risk or rule it covers
  • Preconditions: required setup
  • Test data: exact values
  • Steps: numbered actions
  • Expected results: observable outcomes per step or at key checkpoints

Example test case (BVA + validation):

ID: DISC-LEN-LOW-01
Purpose: Verify discount code length lower boundary is enforced
Preconditions: Cart has 1 item, user is logged in
Test data: Code = 'ABCDE' (length 5)
Steps:
  1) Open checkout page
  2) Enter code 'ABCDE'
  3) Click Apply
Expected results:
  1) No discount is applied
  2) User sees validation message indicating code length requirement (6–10)
  3) Total price remains unchanged

Beginner tip: When expected results are vague (“works correctly”), defects become hard to report and fixes become hard to verify. Use concrete checks: totals, messages, state changes, emails sent, records created, permissions enforced.

Now answer the exercise about the content:

When applying Boundary Value Analysis to a numeric field with an allowed inclusive range, which set of test inputs best targets the most defect-prone areas?


Boundary Value Analysis focuses on edges where defects often occur. It uses values at the boundaries and just inside and just outside them to catch off-by-one and comparison errors.

Next chapter

Building Practical Test Cases and Lightweight Test Notes
