
Software Testing Foundations: From Requirements to Defects

Risk-Based Testing: Deciding What Matters Most

Chapter 2

Estimated reading time: 11 minutes

What Risk-Based Testing Means in Practice

Risk-Based Testing (RBT) is an approach for deciding what to test first, what to test in more depth, and what can receive lighter testing (or be deferred), using risk as the main decision input. In RBT, “risk” is the combination of (1) the likelihood that a problem exists or will be introduced and (2) the impact if that problem reaches users or downstream systems. The goal is not to eliminate all risk; it is to spend limited testing time where it reduces the most meaningful risk.

RBT is especially useful when you face common constraints: short release cycles, large feature sets, incomplete information, and limited test environments. Instead of treating all requirements and components as equally important, you explicitly rank them, then align test design, test depth, and test execution order to that ranking.

Key Terms

  • Risk item: A feature, requirement, user journey, component, integration point, data flow, or non-functional quality attribute that could fail in a way that matters.
  • Likelihood: How probable a defect is (based on complexity, change rate, past defects, unclear requirements, new technology, etc.).
  • Impact: The damage if a defect escapes (financial loss, safety, legal exposure, reputational harm, operational disruption, user churn, support cost).
  • Risk exposure: A numeric or qualitative combination of likelihood and impact (for example, Risk = Likelihood × Impact).
  • Risk appetite: How much risk the organization is willing to accept for a given release or area (often different for payments vs. UI cosmetics).

Why Risk-Based Testing Changes Your Test Plan

A traditional test plan might list test types (unit, integration, system, regression) and then try to cover everything uniformly. RBT changes the plan by making risk the organizing principle. This affects:

  • Scope: High-risk items are in-scope by default; low-risk items may be sampled or deferred.
  • Depth: High-risk items get more thorough tests (more data combinations, negative tests, boundary tests, exploratory sessions, security checks).
  • Order: High-risk tests run earlier (shift-left where possible) and more frequently (e.g., per commit, nightly, or per build).
  • Test techniques: You choose techniques that best reduce the specific risk (e.g., threat modeling for security risk, contract testing for integration risk).
  • Exit criteria: “Done” is defined by risk reduction, not by “all tests executed.” For example, you may require zero open critical defects in high-risk areas, while allowing minor UI issues in low-risk areas.

Sources of Risk: Product Risk vs. Project Risk

RBT often focuses on product risk: the risk that the product will fail in the field. But project risk also matters because it affects your ability to test effectively.

Product Risk (What could go wrong in the software)

  • Business-critical flows: checkout, payment, refunds, account creation, authentication, reporting.
  • Safety or compliance: medical, automotive, finance, privacy regulations.
  • Security: authentication, authorization, data exposure, injection, insecure direct object references.
  • Performance and reliability: peak load, latency, timeouts, retries, resilience.
  • Data integrity: calculations, currency rounding, idempotency, concurrency, migrations.
  • Integration: third-party APIs, message queues, webhooks, batch jobs.

Project Risk (What could prevent good testing)

  • Unclear requirements or late changes.
  • Limited environments or unstable test data.
  • Short timelines that force trade-offs.
  • Skill gaps (e.g., no one comfortable with security testing).
  • Tooling gaps (no automation pipeline, weak observability).

Project risks don’t replace product risks; they influence the testing strategy. For example, if environments are unstable, you may prioritize contract tests and component tests that run locally to reduce dependency on flaky staging systems.


A Step-by-Step Risk-Based Testing Workflow

Step 1: Identify Risk Items (Build a Risk Inventory)

Start by listing what you might test. A risk inventory can be organized by:

  • User journeys (e.g., “user signs up,” “user pays invoice”).
  • Features/requirements (e.g., “discount codes,” “refund policy rules”).
  • System components (e.g., “pricing service,” “auth service,” “database migration”).
  • Quality attributes (security, performance, accessibility, reliability).

Use multiple inputs to avoid blind spots: product roadmap, architecture diagrams, incident history, support tickets, analytics (drop-off points), and known “hot spots” (modules with frequent changes).

Practical tip: keep the inventory small enough to manage. If you list 300 items, you won’t rank them meaningfully. Group low-level items into higher-level risk items (e.g., “Checkout flow” rather than every single UI field).

Step 2: Define a Simple Scoring Model

You need a consistent way to compare risks. A common model uses a 1–5 scale for likelihood and impact.

  • Likelihood (1–5): 1 = very unlikely, 5 = very likely.
  • Impact (1–5): 1 = negligible, 5 = catastrophic.

Then compute Risk Score = Likelihood × Impact. This yields a 1–25 range that is easy to sort.

Keep the model simple at first. You can add dimensions later (e.g., detectability, exposure, compliance) if needed, but complexity can stall adoption.
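
To make the model concrete, here is a minimal sketch in Python that computes Risk Score = Likelihood × Impact for a few placeholder items and sorts them highest first; the item names and scores are illustrative only.

from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    likelihood: int  # 1-5
    impact: int      # 1-5

    @property
    def score(self) -> int:
        # Risk Score = Likelihood x Impact, giving values between 1 and 25
        return self.likelihood * self.impact

# Placeholder inventory; real items come from your own risk workshop
inventory = [
    RiskItem("Checkout payment authorization", likelihood=4, impact=5),
    RiskItem("Discount code stacking rules", likelihood=3, impact=4),
    RiskItem("Profile avatar upload", likelihood=2, impact=2),
]

# Sort highest risk first so the ranking is easy to discuss
for item in sorted(inventory, key=lambda r: r.score, reverse=True):
    print(f"{item.score:>3}  {item.name}")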

Step 3: Calibrate Likelihood and Impact with Concrete Criteria

Teams often argue because “high” means different things to different people. Reduce subjectivity by defining criteria.

Example criteria for Impact:

  • 5: legal/compliance breach, significant revenue loss, security incident, data loss, safety risk.
  • 4: major workflow blocked for many users, high support volume, severe reputational damage.
  • 3: workaround exists, affects a subset of users, moderate support cost.
  • 2: minor workflow issue, cosmetic but noticeable.
  • 1: trivial cosmetic issue, minimal user impact.

Example criteria for Likelihood:

  • 5: new feature with complex logic, many recent changes, unclear requirements, historically defect-prone area.
  • 4: changed integration, new dependency, concurrency or timing involved.
  • 3: moderate change, some complexity, limited testability.
  • 2: small change in stable area.
  • 1: no change, proven stable, strong automated coverage.

Step 4: Score and Rank the Risk Items

Run a short workshop with product, engineering, QA/test, and (when relevant) security or operations. Score each item quickly, capture assumptions, and rank by score.

Example risk table (simplified):

Risk Item | Likelihood | Impact | Score | Notes/Assumptions | Owner | Test Focus Level
Checkout payment authorization | 4 | 5 | 20 | New PSP integration | QA | High
Discount code stacking rules | 3 | 4 | 12 | Complex rules | QA | Medium
Profile avatar upload | 2 | 2 | 4 | Minor feature | QA | Low
Database migration for orders | 3 | 5 | 15 | Large data volume | QA | High

Don’t treat the score as “truth.” It is a decision aid. The most important output is a shared understanding of what matters most and why.

Step 5: Map Risk Levels to Test Strategy (Depth and Technique)

Create a simple policy that translates risk into testing actions. For example:

  • High risk (score 15–25): thorough functional coverage, negative tests, boundary tests, exploratory testing, integration tests, targeted performance checks, security checks where applicable, and strong regression automation. Require review of test design and evidence.
  • Medium risk (score 8–14): solid functional coverage with key negative paths, some exploratory testing, integration tests for main paths, regression automation for stable scenarios.
  • Low risk (score 1–7): smoke-level checks, sampling, rely on existing regression suite, minimal new automation unless cheap and valuable.

This mapping prevents the common failure mode where everything is labeled “high priority” and the ranking becomes meaningless.
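
As a sketch, this kind of policy can also live next to the test code. The Python function below uses the example thresholds above (15–25 High, 8–14 Medium, 1–7 Low); you would adjust them to your own risk appetite.

def focus_level(score: int) -> str:
    """Translate a 1-25 risk score into a test focus level (example thresholds)."""
    if not 1 <= score <= 25:
        raise ValueError("score must be between 1 and 25")
    if score >= 15:
        return "High"    # thorough coverage: negative, boundary, exploratory, automation
    if score >= 8:
        return "Medium"  # solid coverage of main paths plus key negative tests
    return "Low"         # smoke-level checks and sampling

print(focus_level(20))  # High
print(focus_level(12))  # Medium
print(focus_level(4))   # Low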

Step 6: Turn Risks into Concrete Test Conditions and Test Cases

For each high-risk item, define test conditions that explicitly target the risk. A test condition is a statement of what must be verified, often broader than a single test case.

Example: Risk item “Checkout payment authorization”

  • Condition: authorization succeeds and order is created exactly once.
  • Condition: authorization fails and order is not created; user sees actionable error.
  • Condition: network timeout triggers retry logic without double-charging (idempotency).
  • Condition: currency rounding and tax calculation are correct for supported locales.
  • Condition: authorization request includes required metadata; sensitive data is not logged.

Then derive test cases with specific data and steps. For high-risk items, include negative and edge cases deliberately rather than as an afterthought.
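
A common way to derive such cases is a parameterized test. The sketch below is hypothetical: authorize_payment is a stand-in fake so the example runs on its own; in a real suite it would be your payment client talking to the PSP sandbox.

import pytest

# Stand-in for the real payment client so this sketch is self-contained.
FAILING_CARDS = {
    "DECLINED_CARD": "card_declined",
    "EXPIRED_CARD": "expired_card",
    "NO_FUNDS_CARD": "insufficient_funds",
}

def authorize_payment(card_number: str, amount: float) -> dict:
    """Minimal fake; the real function would call the payment provider."""
    error = FAILING_CARDS.get(card_number)
    if error:
        return {"status": "failed", "error_code": error, "order_id": None}
    return {"status": "authorized", "order_id": "order-123"}

# Each row is a test case derived from the condition:
# "authorization fails and order is not created; user sees actionable error".
@pytest.mark.parametrize(
    "card_number, expected_error",
    [
        ("DECLINED_CARD", "card_declined"),
        ("EXPIRED_CARD", "expired_card"),
        ("NO_FUNDS_CARD", "insufficient_funds"),
    ],
)
def test_failed_authorization_creates_no_order(card_number, expected_error):
    result = authorize_payment(card_number, amount=10.00)
    assert result["status"] == "failed"
    assert result["error_code"] == expected_error
    assert result["order_id"] is None  # no order may exist after a failed authorization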

Step 7: Plan Execution Order and Cadence

Use the ranking to decide what runs when:

  • Pre-merge / CI: fast checks for high-risk logic (unit tests, component tests, contract tests).
  • Nightly: broader integration and end-to-end tests for high-risk flows.
  • Pre-release: focused exploratory sessions on top risks, plus targeted non-functional tests (performance, security smoke).

Also decide retest triggers: if a high-risk component changes, its tests run automatically; if a low-risk UI text changes, you may not rerun the full suite.
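
If your suite uses pytest, one lightweight way to encode this cadence is custom markers, so each pipeline stage selects only the tests it needs; the marker names and commands below are illustrative, not a standard.

import pytest

@pytest.mark.high_risk
def test_payment_authorization_happy_path():
    assert True  # placeholder for a fast, high-risk check that runs pre-merge

@pytest.mark.low_risk
def test_search_filters_render():
    assert True  # placeholder for a low-risk smoke check that runs less frequently

# Pre-merge / CI:  pytest -m high_risk   (fast checks for high-risk logic)
# Nightly:         pytest                (broader suite, all markers)
# Register the markers in pytest.ini (markers = high_risk: ..., low_risk: ...) to avoid warnings.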

Step 8: Track Risk Coverage and Residual Risk

Instead of reporting only “tests passed/failed,” track:

  • Risk coverage: which high-risk items have test evidence for this release.
  • Residual risk: what remains untested or partially tested, and why (time, environment, missing tools).
  • Open defects by risk: critical defects in high-risk areas block release; minor defects in low-risk areas may be accepted.

This makes trade-offs explicit. Stakeholders can make informed decisions rather than assuming “QA tested everything.”
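
A simple report that joins the risk ranking with test evidence makes this visible. In the sketch below the data is hard-coded; in practice it would come from your test management tool or CI artifacts.

# Hypothetical risk-coverage summary for a release
risks = [
    {"item": "Payment authorization", "level": "High", "evidence": "automation + exploratory notes"},
    {"item": "Order DB migration",    "level": "High", "evidence": None},
    {"item": "Search filters UI",     "level": "Low",  "evidence": "smoke suite"},
]

covered = [r for r in risks if r["evidence"]]
residual = [r for r in risks if not r["evidence"]]

print(f"Risk coverage: {len(covered)}/{len(risks)} items have test evidence")
for r in residual:
    print(f"Residual risk: {r['item']} ({r['level']}) - no evidence yet")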

Practical Example: Risk-Based Testing for an E-Commerce Release

Imagine a release includes: new payment service provider (PSP) integration, a new discount feature, UI redesign of product pages, and an internal refactor of the order database schema.

1) Identify risk items

  • Payment authorization and capture
  • Refund processing
  • Discount code application and stacking
  • Order database migration
  • Product page UI redesign
  • Search filters UI

2) Score them

Item | Likelihood | Impact | Score | Rationale
Payment authorization/capture | 4 | 5 | 20 | New PSP, revenue impact
Refund processing | 3 | 5 | 15 | Financial + support impact
Order DB migration | 3 | 5 | 15 | Data integrity risk
Discount stacking | 3 | 4 | 12 | Complex rules, promo abuse
Product page UI redesign | 2 | 3 | 6 | Conversion impact but reversible
Search filters UI | 2 | 2 | 4 | Minor inconvenience

3) Decide test focus

  • High: payment, refunds, DB migration
  • Medium: discount stacking
  • Low: product page UI, search filters UI

4) Translate into test activities

High-risk payment tests might include:

  • Contract tests against PSP sandbox for required fields and error codes.
  • Idempotency tests: retry the same request; ensure one charge (see the sketch after this list).
  • Negative tests: invalid card, insufficient funds, expired card, 3DS challenge failure.
  • Observability checks: verify logs/metrics do not include sensitive data; ensure correlation IDs exist.
  • Exploratory session focused on interruption scenarios: refresh during payment, back button, duplicate clicks.
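
A minimal sketch of the idempotency check, using an in-memory fake in place of the PSP; a real test would send the same idempotency key to the provider's sandbox and assert that only one charge exists.

import uuid

class FakePaymentService:
    """Stand-in for the PSP client; real tests would target the provider's sandbox."""

    def __init__(self):
        self.charges = {}  # idempotency_key -> charge id

    def charge(self, idempotency_key: str, amount_cents: int) -> str:
        # The same idempotency key must never produce a second charge.
        if idempotency_key not in self.charges:
            self.charges[idempotency_key] = f"charge-{uuid.uuid4()}"
        return self.charges[idempotency_key]

def test_retry_with_same_key_charges_once():
    psp = FakePaymentService()
    key = "order-42-attempt-1"
    first = psp.charge(key, amount_cents=1999)
    second = psp.charge(key, amount_cents=1999)  # simulated client retry after a timeout
    assert first == second
    assert len(psp.charges) == 1  # exactly one charge exists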

High-risk DB migration tests might include:

  • Migration rehearsal on production-like data volume.
  • Rollback plan verification (what happens if migration fails halfway).
  • Data reconciliation queries: counts, sums, referential integrity checks (sketched below).
  • Concurrency tests: orders created during migration window.
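
Reconciliation can start as simply as comparing row counts and totals between the pre- and post-migration databases. The sketch below uses in-memory SQLite as a stand-in for production-like copies; the orders schema is hypothetical.

import sqlite3

# Stand-ins for the source (pre-migration) and target (post-migration) databases.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total_cents INTEGER)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 1999), (2, 4550)])

def reconcile(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    checks = {
        "row count": "SELECT COUNT(*) FROM orders",
        "sum of totals": "SELECT COALESCE(SUM(total_cents), 0) FROM orders",
    }
    for name, query in checks.items():
        before = src.execute(query).fetchone()[0]
        after = dst.execute(query).fetchone()[0]
        assert before == after, f"{name} mismatch: {before} vs {after}"
    print("Reconciliation passed")

reconcile(source, target)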

Low-risk UI redesign tests might include:

  • Smoke test: page loads, add-to-cart button works, images render.
  • Sampling across major browsers/devices.

Notice how the plan is not “test everything equally.” It is “test what can hurt us most, in the ways it can hurt us.”

Choosing Test Techniques Based on Risk Type

Different risks need different techniques. RBT is not only about prioritizing; it is about selecting the most effective method to reduce a specific risk.

Security risk

  • Threat modeling for high-risk endpoints (auth, payments, admin).
  • Authorization tests: verify role-based access and object-level access (see the sketch after this list).
  • Input validation tests: injection attempts, malformed payloads.
  • Dependency vulnerability checks and configuration review.
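
As an example of an object-level authorization check, the sketch below uses a hypothetical get_order function with a tiny in-memory store; a real test would call your API with tokens for two different users.

import pytest

# Hypothetical in-memory store and service function with object-level access control.
ORDERS = {"order-1": {"owner": "alice", "total": 19.99}}

def get_order(order_id: str, requesting_user: str) -> dict:
    order = ORDERS[order_id]
    if order["owner"] != requesting_user:
        raise PermissionError("forbidden")  # deny access to other users' objects
    return order

def test_owner_can_read_own_order():
    assert get_order("order-1", requesting_user="alice")["total"] == 19.99

def test_user_cannot_read_another_users_order():
    with pytest.raises(PermissionError):
        get_order("order-1", requesting_user="bob")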

Integration risk

  • Contract testing (consumer/provider) to detect breaking changes early.
  • Mocking third-party failures: timeouts, 500 errors, rate limits (sketched after this list).
  • End-to-end tests for the most critical cross-service flows.
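
To exercise failure handling without relying on a flaky staging system, you can patch the outbound call and force it to fail. The sketch below is hypothetical: fetch_exchange_rate stands in for a third-party API call, and the fall-back-to-USD behaviour is an assumed design, not a recommendation.

from unittest import mock

def fetch_exchange_rate(currency: str) -> float:
    # Placeholder for the real third-party call (an HTTP request in practice).
    return 1.1

def price_in(currency: str, amount_usd: float) -> float:
    try:
        rate = fetch_exchange_rate(currency)
    except TimeoutError:
        rate = 1.0  # assumed graceful degradation: fall back to USD pricing
    return round(amount_usd * rate, 2)

def test_timeout_falls_back_to_usd():
    # Simulate the third party timing out instead of waiting for it to happen in staging.
    with mock.patch(f"{__name__}.fetch_exchange_rate", side_effect=TimeoutError):
        assert price_in("EUR", 10.00) == 10.00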

Data integrity risk

  • Boundary and equivalence tests for calculations (tax, discounts, rounding); see the sketch after this list.
  • Idempotency and deduplication tests for message processing.
  • Migration validation scripts and reconciliation checks.
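
A sketch of a boundary-style rounding check using Python's decimal module; the half-up rounding rule and the two-decimal quantization are assumptions, since the correct rule depends on your business and locale requirements.

from decimal import Decimal, ROUND_HALF_UP

def line_total(unit_price: str, quantity: int, tax_rate: str) -> Decimal:
    """Compute a line total in exact decimal arithmetic, rounding once at the end."""
    gross = Decimal(unit_price) * quantity * (1 + Decimal(tax_rate))
    return gross.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_boundary_rounding():
    assert line_total("0.05", 3, "0.10") == Decimal("0.17")    # 0.165 sits on a rounding boundary
    assert line_total("19.99", 1, "0.00") == Decimal("19.99")  # no tax, no rounding change
    assert line_total("0.01", 1, "0.00") == Decimal("0.01")    # smallest representable amount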

Performance and reliability risk

  • Load tests focused on bottleneck endpoints rather than the whole system.
  • Soak tests for memory leaks or resource exhaustion.
  • Resilience tests: retries, circuit breakers, graceful degradation.

How to Keep Risk-Based Testing Lightweight (Without Losing Rigor)

RBT can fail if it becomes a bureaucratic scoring exercise. Keep it practical:

  • Timebox risk scoring: 30–60 minutes per sprint/release planning.
  • Use “top N”: identify the top 5–10 risks and ensure they drive most testing decisions.
  • Record assumptions: a score without rationale is hard to revisit.
  • Review after incidents: if a defect escaped, update likelihood criteria or risk inventory.
  • Align with change: a stable high-impact area might become lower likelihood if it has strong automated coverage and no changes; a low-impact area might become high risk if it suddenly drives conversion or compliance.

Common Pitfalls and How to Avoid Them

Everything is “high risk”

If all items are high, the model is not helping. Force ranking by limiting how many items can be labeled “High” (for example, only the top 20% by score). Alternatively, use a 3x3 matrix (Low/Med/High) and require justification for “High.”

Risk scoring is detached from test design

RBT is not complete when you have a spreadsheet. Ensure each high-risk item has explicit test conditions, owners, and planned evidence (automation, exploratory notes, logs, metrics, test results).

Likelihood is guessed without using signals

Use real indicators: code churn, complexity, defect history, incident frequency, new dependencies, and requirement volatility. Even simple signals (new module vs. stable module) improve accuracy.

Impact is underestimated because failures seem “unlikely”

Likelihood and impact are separate. A rare security breach can still be catastrophic. Keep impact scoring anchored to business outcomes and compliance obligations.

RBT is used to justify skipping testing without transparency

Deferring low-risk testing is valid, but it must be explicit. Track residual risk and communicate what was not tested and why, so release decisions are informed.

Templates You Can Reuse

Risk Inventory Template

Risk Item | Type (Feature/Flow/Component/Quality) | Likelihood (1-5) | Impact (1-5) | Score | Key Failure Modes | Planned Tests | Evidence | Owner | Status

Risk-to-Test Mapping Policy (Example)

High:   Must have negative + boundary tests, integration coverage, exploratory session, regression automation for core paths, and release sign-off criteria.
Medium: Must have functional coverage for main paths + some negative tests, targeted regression automation where stable.
Low:    Smoke + sampling; rely on existing suite; fix only if cheap or user-visible.

Exploratory Session Charter for a High-Risk Item

Charter: Explore payment interruptions and duplicate actions
Focus: refresh/back button, double-click pay, network drop, slow responses, retry behavior
Oracles: no double charge, clear user messaging, consistent order state, logs without sensitive data
Data: valid/invalid cards, different currencies, high-value orders, discount applied

Now answer the exercise about the content:

Which statement best describes how Risk-Based Testing prioritizes and defines completion of testing work?

Answer: Risk-Based Testing uses risk (likelihood and impact) to rank items and drive scope, depth, and order. Exit criteria focus on reducing high risks (e.g., no open critical defects in high-risk areas), not on executing every planned test.

Next chapter

Turning Requirements into Testable Statements
