
Software Testing Foundations: From Requirements to Defects


The Purpose of Software Testing and Quality Thinking

Chapter 1

Estimated reading time: 12 minutes


What “Purpose” Means in Software Testing

The purpose of software testing is not a single slogan like “find bugs.” In practice, testing serves multiple purposes that support business goals, user needs, and engineering decisions. A useful way to think about purpose is to ask: what decision will this testing help us make, and what risk will it reduce?

Testing provides information. That information can confirm that a feature is ready to release, reveal that a change introduced a regression, show that performance is degrading, or demonstrate that a workflow is confusing for users. When testing is done with quality thinking, it becomes a continuous feedback mechanism that helps a team steer the product rather than merely inspect it at the end.

Core purposes of testing in real projects

  • Reduce risk: Identify failures that would harm users, revenue, safety, compliance, or reputation. Testing prioritizes what matters most, not what is easiest to check.

  • Provide fast feedback: Help developers and product owners learn quickly whether changes are safe. Fast feedback supports small changes, frequent releases, and confident refactoring.

  • Support decision-making: Offer evidence for release readiness, scope trade-offs, and whether to fix now or accept a known limitation.


  • Reveal unknowns: Explore behavior that is not fully specified, ambiguous, or newly discovered through usage patterns.

  • Build shared understanding: Testing conversations clarify expectations, edge cases, and what “good” looks like for users.

  • Protect maintainability: A well-designed test suite can act as a safety net, enabling changes without fear, and documenting intended behavior through executable checks.

Quality Thinking: A Mindset, Not a Department

Quality thinking is the habit of considering how a product can fail, who would be impacted, and how to prevent or detect those failures early. It is not limited to testers. Developers, designers, product managers, and operations all contribute to quality outcomes. Testing is one activity within a broader quality approach.

Quality thinking shifts the team from “Did we build it?” to “Did we build the right thing, and will it keep working in the real world?” This includes usability, reliability, security, performance, accessibility, data integrity, and operational behavior.

Quality is multi-dimensional

A feature can “work” and still be low quality. Consider a password reset flow:

  • Functional: The reset email is sent and the password changes.

  • Security: The token cannot be guessed, reused, or intercepted; rate limiting prevents abuse. (A single-use token check is sketched after this section.)

  • Usability: The email arrives quickly, instructions are clear, and error messages help users recover.

  • Reliability: The flow works during peak load and when email delivery is delayed.

  • Observability: Support can trace failures; logs and metrics show if emails are failing.

Quality thinking means asking about these dimensions early, then designing tests and checks that provide evidence across them.
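To make one dimension concrete: the security expectation that a reset token is single-use can be written as an executable check. This is a minimal, self-contained sketch; ResetTokenStore is illustrative, and a real implementation would persist hashed tokens with an expiry.

# Minimal, self-contained sketch of the "token cannot be reused" rule.
import secrets

class ResetTokenStore:
    def __init__(self):
        self._unused: set[str] = set()

    def issue(self) -> str:
        token = secrets.token_urlsafe(32)  # long enough to be unguessable
        self._unused.add(token)
        return token

    def redeem(self, token: str) -> bool:
        """Succeed exactly once per issued token."""
        if token in self._unused:
            self._unused.remove(token)
            return True
        return False  # unknown, expired, or already used

store = ResetTokenStore()
token = store.issue()
assert store.redeem(token) is True   # first use succeeds
assert store.redeem(token) is False  # reuse must be rejected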

Testing as Learning: Verification and Exploration

Testing includes both verification (checking that the system meets expected behavior) and exploration (learning how the system behaves, especially in uncertain areas). Quality thinking balances both.

Verification: proving known expectations

Verification focuses on known rules: calculations, validations, permissions, and workflows that have clear expected outcomes. Automated tests often excel here because they can run frequently and catch regressions.

Exploration: discovering unexpected behavior

Exploration is essential when requirements are incomplete, when user behavior is unpredictable, or when the system interacts with external services. Exploratory testing is not random clicking; it is structured learning guided by risk and hypotheses.

Example exploratory questions for a checkout flow:

  • What happens if the payment provider is slow or returns an error?

  • Can the user submit the order twice by double-clicking?

  • What if the cart changes in another tab?

  • Do we handle currency rounding consistently across UI and backend? (A quick demonstration follows this list.)
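The last question can be probed with a few lines of code. The sketch below shows why totals near half-cent boundaries are risky: Python's float rounding and explicit decimal rounding disagree, and the same mismatch can occur between a UI and a backend.

# Why totals near half-cent boundaries deserve attention.
from decimal import Decimal, ROUND_HALF_UP

print(round(10.045, 2))  # 10.04 - the float is actually stored as 10.04499...

exact = Decimal("10.045").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(exact)  # 10.05 - explicit half-up rounding, consistent everywhere

# If the UI rounds floats while the backend uses Decimal (or vice versa),
# displayed and stored totals can silently differ by a cent.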

Risk-Based Purpose: Testing What Matters Most

Because time is limited, testing must be selective. Quality thinking uses risk to decide what to test deeply and what to test lightly. Risk can be thought of as the impact if something goes wrong multiplied by the likelihood that it will go wrong.

Common risk drivers

  • User impact: Will it block core tasks, cause data loss, or expose private information?

  • Business impact: Will it affect revenue, legal exposure, or customer churn?

  • Change size and complexity: Large refactors, new integrations, and concurrency changes increase risk.

  • Novelty: New technology, new team members, or new domains increase uncertainty.

  • Past defect patterns: Areas with frequent bugs deserve extra attention.

  • Operational criticality: Features needed for incident response, billing, authentication, and audit trails.

Practical step-by-step: create a lightweight risk map for a feature

This is a simple process a team can do in 15–30 minutes before implementing or testing a feature; a short code sketch of the resulting map follows the steps.

  • Step 1: List the main user journeys. Example: “Add item to cart,” “Apply discount,” “Pay,” “Receive confirmation.”

  • Step 2: For each journey, list failure modes. Example: “Payment captured but order not created,” “Discount applied incorrectly,” “Confirmation email not sent.”

  • Step 3: Rate impact (High/Medium/Low). Data loss and double-charging are typically High.

  • Step 4: Rate likelihood (High/Medium/Low). New integration with a payment gateway might be High likelihood.

  • Step 5: Decide test focus. High impact + high likelihood gets the most testing: deeper exploratory sessions, more automated checks, more monitoring, and possibly staged rollout.

  • Step 6: Define “evidence” needed. Example: “Automated tests cover discount rules,” “Manual exploratory session covers payment failures,” “Metrics track payment error rate.”
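A minimal sketch of the resulting map as plain data, sorted by score so the riskiest items surface first. The journeys and ratings are illustrative; a shared spreadsheet serves the same purpose.

# A lightweight risk map as plain data, following the steps above.
RATING = {"Low": 1, "Medium": 2, "High": 3}

risk_map = [
    # (journey, failure mode, impact, likelihood)
    ("Pay", "Payment captured but order not created", "High", "High"),
    ("Apply discount", "Discount applied incorrectly", "Medium", "High"),
    ("Receive confirmation", "Confirmation email not sent", "Medium", "Medium"),
    ("Add item to cart", "Cart count display lags", "Low", "Medium"),
]

# Sort by impact x likelihood so the riskiest items get test focus first.
for journey, failure, impact, likelihood in sorted(
    risk_map, key=lambda r: RATING[r[2]] * RATING[r[3]], reverse=True
):
    score = RATING[impact] * RATING[likelihood]
    print(f"{score}: {journey} - {failure} ({impact} impact, {likelihood} likelihood)")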

Quality Thinking in the Development Lifecycle

Quality thinking is most effective when applied early and continuously. Testing at the end can reveal defects, but it is often too late to cheaply fix design issues, unclear behavior, or missing edge cases. A quality-minded team integrates testing activities throughout development.

Before implementation: clarify behavior and risks

Quality thinking begins with shared understanding. Teams can review a feature and ask: what could go wrong, what data is critical, and what user promises are we making?

Practical examples of early quality questions:

  • What is the expected behavior when an external service is unavailable?

  • What data must never be lost or duplicated?

  • Which actions must be auditable?

  • What are acceptable response times for key endpoints?

During implementation: build testability into the system

Quality thinking also means designing software so it can be tested and observed. This includes clear interfaces, deterministic behavior where possible, and meaningful logs. Testability reduces the cost of testing and increases confidence.

Examples of testability improvements:

  • Stable identifiers in UI: Use consistent selectors so UI tests don’t break with cosmetic changes.

  • Dependency injection: Allow swapping real services with fakes for tests (sketched after this list).

  • Idempotency: Make critical operations safe to retry (important for network failures).

  • Feature flags: Enable gradual rollout and quick disablement if issues appear.
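To illustrate the dependency-injection point, here is a minimal sketch assuming a hypothetical OrderService that sends a confirmation message. Because the notifier is injected rather than constructed internally, a test can substitute a fake and assert on what would have been sent.

# Dependency injection sketch: the notifier is passed in, so tests can
# substitute a fake instead of sending real email. Names are illustrative.
from typing import Protocol

class Notifier(Protocol):
    def send(self, to: str, message: str) -> None: ...

class OrderService:
    def __init__(self, notifier: Notifier):
        self._notifier = notifier  # injected, not constructed internally

    def confirm(self, email: str, order_id: str) -> None:
        self._notifier.send(email, f"Order {order_id} confirmed")

class FakeNotifier:
    def __init__(self):
        self.sent: list[tuple[str, str]] = []

    def send(self, to: str, message: str) -> None:
        self.sent.append((to, message))  # record instead of emailing

# In a test: inject the fake and assert on the recorded calls.
fake = FakeNotifier()
OrderService(fake).confirm("user@example.com", "A-1001")
assert fake.sent == [("user@example.com", "Order A-1001 confirmed")]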

After implementation: validate, explore, and monitor

Testing does not end at “all tests passed.” Quality thinking includes monitoring in production, analyzing incidents, and feeding learning back into design and tests. Some failures only appear under real load, real data, or unexpected user behavior.

Common Misconceptions That Weaken Testing Purpose

Misconception 1: “Testing proves the software is correct”

Testing can show the presence of defects, not their absence. Even extensive testing samples behavior; it does not exhaustively prove correctness for non-trivial systems. The practical goal is to reduce risk to an acceptable level and provide evidence for decisions.

Misconception 2: “More test cases means better quality”

Quantity is not the same as coverage of meaningful risk. Hundreds of shallow tests can miss critical scenarios like concurrency, data corruption, or permission bypass. Quality thinking prioritizes high-value scenarios and maintains tests so they remain trustworthy.

Misconception 3: “Testing is a phase after coding”

When testing is treated as a late phase, defects become expensive and schedules become unpredictable. Quality thinking integrates testing into daily work: small changes, quick checks, and continuous feedback.

Misconception 4: “Automation replaces human testing”

Automation is excellent for repeatable checks and regression prevention, but humans are better at exploration, judgment, and discovering surprising behavior. Quality thinking uses automation to free time for deeper investigation, not to eliminate thinking.

Practical Step-by-Step: Define a Test Mission for a Feature

A test mission is a short statement that guides what you will test and why. It keeps testing purposeful and aligned with risk.

  • Step 1: Identify the primary user promise. Example: “Users can transfer money between accounts instantly and safely.”

  • Step 2: Identify the worst credible failure. Example: “Money is debited but not credited,” or “Transfer goes to the wrong account.”

  • Step 3: Identify key constraints. Example: “Must be auditable,” “Must prevent unauthorized transfers,” “Must handle retries safely.”

  • Step 4: Choose test approaches. Example: automated checks for validation rules, exploratory testing for edge cases, fault injection for service timeouts, and review of logs/metrics.

  • Step 5: Define exit evidence. Example: “All critical automated checks pass,” “Exploratory notes show no critical issues,” “Monitoring dashboards exist for transfer failures.”

Example mission statement:

Mission: Evaluate money transfer reliability and safety under normal use and failure conditions, focusing on preventing loss, duplication, and unauthorized actions.

Practical Step-by-Step: Turn Risks into Concrete Test Ideas

Quality thinking becomes actionable when risks are translated into test ideas that can be executed manually or automated.

  • Step 1: Pick one high-risk area. Example: “Order total calculation with discounts and taxes.”

  • Step 2: Identify invariants (things that must always be true). Example: “Total must equal sum of line items minus discounts plus tax,” “Total must never be negative.” (These invariants become executable checks in the sketch after the charter list.)

  • Step 3: Identify boundaries and special values. Example: 0 items, maximum quantity, 100% discount, rounding at 0.005, large totals.

  • Step 4: Identify interactions. Example: discount + tax rules, multiple discounts, shipping changes, currency conversions.

  • Step 5: Identify failure conditions. Example: missing tax rate, invalid coupon, stale pricing, concurrent cart updates.

  • Step 6: Write test ideas as short charters. Example: “Explore rounding behavior across UI and API for totals near half-cent boundaries.”

Sample charter list:

  • Verify totals for mixed taxable and non-taxable items with percentage discount.

  • Explore rounding differences between frontend display and backend stored total.

  • Test concurrency: apply coupon while cart is updated in another session.

  • Test resilience: tax service timeout should not create an order with missing tax silently.
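As noted in Step 2, invariants translate directly into automated checks. The sketch below uses a toy calculate_total as a stand-in for real pricing logic; the invariant and boundary assertions are what matter.

# Turning invariants and boundaries into executable checks.
from decimal import Decimal, ROUND_HALF_UP

def calculate_total(line_items, discount_pct="0", tax_rate="0"):
    """line_items: (unit_price, quantity) pairs; money handled as Decimal."""
    subtotal = sum(Decimal(price) * qty for price, qty in line_items)
    discounted = subtotal * (1 - Decimal(discount_pct) / 100)
    return (discounted * (1 + Decimal(tax_rate))).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

# Invariant: total must never be negative, even at the 100% discount boundary.
assert calculate_total([("19.99", 2)], discount_pct="100") == Decimal("0.00")
# Boundary: an empty cart yields exactly zero.
assert calculate_total([]) == Decimal("0.00")
# Boundary: rounding at the half-cent follows one explicit, documented rule.
assert calculate_total([("10.045", 1)]) == Decimal("10.05")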

Quality Thinking for Defect Prevention (Not Just Detection)

Testing is often associated with finding defects, but quality thinking also aims to prevent them. Prevention can be achieved by improving clarity, simplifying design, adding safeguards, and making failures visible.

Examples of preventive quality practices

  • Make invalid states unrepresentable: Use types, validations, and constraints so impossible combinations cannot be created.

  • Fail fast and loudly: If a critical dependency is missing, return a clear error rather than silently producing incorrect data.

  • Use defensive checks for critical invariants: For example, assert that an order total matches recalculated totals before charging.

  • Design for safe retries: Ensure that repeating a request does not duplicate charges or create duplicate records (a sketch follows this list).

  • Build observability: Add structured logs, metrics, and alerts for critical flows so issues are detected quickly.
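A self-contained sketch of the safe-retry practice: an idempotency key makes a charge safe to repeat after a timeout. The in-memory PaymentProcessor is illustrative; a real one would persist keys durably.

# Safe-retry sketch: an idempotency key prevents duplicate charges.
class PaymentProcessor:
    def __init__(self):
        self._processed: dict[str, str] = {}  # idempotency_key -> charge_id

    def charge(self, idempotency_key: str, amount_cents: int) -> str:
        # A retry with the same key returns the original charge
        # instead of creating a duplicate.
        if idempotency_key in self._processed:
            return self._processed[idempotency_key]
        charge_id = f"ch_{len(self._processed) + 1}"
        self._processed[idempotency_key] = charge_id
        return charge_id

processor = PaymentProcessor()
first = processor.charge("order-1001-attempt", 2599)
retry = processor.charge("order-1001-attempt", 2599)  # e.g. after a timeout
assert first == retry  # no double charge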

Testing Purpose Across Different Levels of Checks

Quality thinking also involves choosing the right level of testing for the question you are trying to answer. Different checks provide different kinds of confidence and have different costs.

Unit-level checks: fast feedback on logic

Purpose: validate business rules and edge cases in isolation. These checks are fast and help developers refactor safely.

Service/API checks: validate contracts and integration logic

Purpose: ensure endpoints behave correctly, handle errors, and enforce authorization. These checks can catch issues that unit tests cannot, such as serialization, validation, and permission problems.

UI/end-to-end checks: validate user journeys

Purpose: confirm that critical workflows work from the user’s perspective. Because these checks are slower and more fragile, quality thinking focuses them on the most critical journeys and uses them as smoke/regression coverage rather than exhaustive logic verification.

Non-functional checks: validate real-world behavior

Purpose: assess performance, security, accessibility, and reliability. These checks often require specialized tools or environments, but even lightweight versions (basic load tests, simple accessibility scans, dependency failure simulations) can provide valuable evidence.
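As an example of a lightweight version, the sketch below fires concurrent requests at an endpoint and checks a latency budget. The URL, request count, and threshold are placeholders; dedicated load-testing tools provide far richer analysis.

# A lightweight load check: concurrent requests plus a p95 latency budget.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/health"  # placeholder endpoint

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    durations = sorted(pool.map(timed_request, range(100)))

p95 = durations[int(len(durations) * 0.95) - 1]
print(f"p95 latency over {len(durations)} requests: {p95:.3f}s")
assert p95 < 0.5, "p95 exceeds the 500 ms budget"  # example threshold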

Practical Example: Applying Quality Thinking to a “Profile Update” Feature

Imagine a feature that allows users to update their profile: name, email, phone number, and notification preferences. It seems simple, but quality thinking reveals multiple risks.

Identify key risks

  • Data integrity: Email must remain unique; changes must not corrupt user records.

  • Security: Users must not update another user’s profile; email change might require verification.

  • Usability: Clear validation messages; changes should persist and reflect immediately.

  • Operational: Audit trail for email changes; notifications triggered correctly.

Turn risks into test ideas

  • Verify authorization: attempt to update another user’s profile via API (see the sketch after this list).

  • Verify uniqueness: change email to one already in use.

  • Explore partial updates: update phone only; ensure other fields unchanged.

  • Explore concurrency: update profile in two tabs; last write behavior should be defined and consistent.

  • Verify audit logging: email change produces an audit event with correct metadata.

  • Explore resilience: if notification service fails, profile update should still succeed (or fail) according to defined behavior, and the failure should be visible.
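Two of these ideas, authorization and partial updates, can be expressed as checks against a toy handler. The update_profile function here is a stand-in for the real API; the assertions are what would carry over.

# Authorization and partial-update checks against a toy profile handler.
class Forbidden(Exception):
    pass

profiles = {"alice": {"phone": "111"}, "bob": {"phone": "222"}}

def update_profile(acting_user: str, target_user: str, **changes):
    if acting_user != target_user:
        raise Forbidden("users may only update their own profile")
    profiles[target_user].update(changes)

# Partial update: phone changes, nothing else does.
update_profile("alice", "alice", phone="333")
assert profiles["alice"] == {"phone": "333"}

# Authorization: acting as bob against alice's profile must be rejected.
try:
    update_profile("bob", "alice", phone="999")
    raise AssertionError("expected Forbidden")
except Forbidden:
    pass
assert profiles["alice"]["phone"] == "333"  # unchanged after the attempt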

Define evidence for release decision

  • Automated API checks cover authorization and validation rules.

  • Manual exploratory session covers concurrency and failure modes.

  • Monitoring dashboard includes rate of profile update failures and email verification completion.

Quality Thinking as Communication

One of the most practical purposes of testing is to create a shared language about quality. When testers and developers discuss risks, edge cases, and expected behavior, they reduce ambiguity and prevent defects. Quality thinking encourages asking precise questions and documenting decisions in a way that supports future changes.

Examples of clarifying questions that improve quality:

  • What should happen if the user repeats the same request due to a network retry?

  • Which errors should be shown to the user, and which should be logged only?

  • What is the acceptable delay for eventual consistency, if any?

  • What data is considered the source of truth when systems disagree?

When these questions are answered, testing becomes more targeted, automation becomes more stable, and release decisions become more defensible.

Now answer the exercise about the content:

Which statement best reflects quality thinking about the purpose of software testing?


Testing is a continuous feedback mechanism that informs decisions and reduces risk. It balances verification of known expectations with structured exploration to reveal unknowns, prioritizing meaningful scenarios over sheer test quantity.

Next chapter

Risk-Based Testing: Deciding What Matters Most
