
Software Testing Foundations: From Requirements to Defects

Chapter 10: Writing Clear Bug Reports with Reproducible Steps

Why clear bug reports matter

A bug report is a communication tool: it transfers what you observed, how to reproduce it, and why it matters from the person who found the issue to the person who will fix it (and later to the person who will verify it). A report that is unclear, missing context, or hard to reproduce creates delays, back-and-forth questions, and sometimes incorrect fixes. A clear report reduces ambiguity by describing observable facts, providing reproducible steps, and attaching evidence that helps others see the same behavior.

“Clear” does not mean “long.” It means that a reader who has never seen the problem can set up the same conditions, follow the steps, and observe the same result. “Reproducible” means the report includes enough detail about the environment, data, and timing so the issue can be triggered again reliably (or, if it is intermittent, the report explains the pattern and frequency).

What makes a bug report actionable

1) A specific, searchable title

The title is often the only part people see in lists, notifications, and dashboards. A good title helps triage quickly and prevents duplicates. Use a structure like: [Area] [Action] [Observed problem] [Condition].

  • Good: “Checkout: Applying a second coupon removes the first coupon discount”
  • Good: “Mobile login: ‘Continue’ button unresponsive after rotating screen (iOS 17)”
  • Weak: “Coupon bug”
  • Weak: “Login broken”

Include key identifiers when relevant: feature name, page/screen, API endpoint, error code, browser, device model. Avoid speculation like “database issue” unless you have evidence.

2) Clear observed vs expected results

Separate what you saw from what you expected. Keep it objective and testable.

  • Observed: “After tapping ‘Pay now’, the spinner shows for ~30 seconds and the page returns to the cart with no error message.”
  • Expected: “Order should be created and confirmation page displayed, or an error message should explain why payment failed.”

If the expected behavior is not obvious, reference a requirement, acceptance criteria, design spec, or product decision (by link or identifier) rather than re-explaining background theory.

3) Reproducible steps that control variables

Repro steps are the core of the report. They should be written so another person can follow them without guessing. Good steps control variables by specifying:

  • Starting state (logged in/out, role, feature flags, account type)
  • Data (which user, which product, which input values)
  • Navigation path (exact screen/menu)
  • Actions (click/tap/type/submit)
  • Timing (wait for X to load, do Y within 5 seconds)

When steps are vague (“Try to check out”), the reader must fill in missing details, and they may not trigger the same behavior.

4) Environment and build information

Many issues depend on where they occur. Include enough environment detail to reproduce:

  • App version/build number, commit hash if available
  • Environment (dev/staging/prod), region, tenant, feature flags
  • Device model, OS version, browser version
  • Network conditions (Wi‑Fi vs cellular, VPN, proxy), if relevant
  • User role/permissions

If you tested multiple environments, say so and list results (e.g., “Reproduces in staging, not in prod”).

5) Evidence: screenshots, video, logs, and data

Evidence accelerates diagnosis. Attach what helps a developer see what you saw and correlate it with system behavior:

  • Screenshot with visible URL/screen name and timestamp if possible
  • Short video for timing/gesture issues (e.g., scrolling, rotation, drag-and-drop)
  • Console logs, network traces (HAR), server logs excerpt, crash logs
  • Request/response payloads (redact sensitive data)
  • Test data used (IDs, sample inputs) or a safe export

When attaching logs, include the time window and correlation IDs if your system supports them. If you cannot attach logs, paste the relevant snippet in the report using a code block.
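
If you prefer to prepare the excerpt with a small script, the sketch below shows one way to pull only the lines for a single correlation ID out of a larger log file before pasting them into the report. It is illustrative only: the file name "app.log" and the ID value are placeholders, and it assumes a plain-text log where each relevant line contains the correlation ID.

# Minimal sketch (Python): extract the lines for one correlation ID from a
# plain-text log so the excerpt can be pasted into the report.
# "app.log" and the ID value are placeholders; adapt them to your logging setup.
CORRELATION_ID = "<your-correlation-id>"

with open("app.log", encoding="utf-8") as log_file:
    excerpt = [line.rstrip() for line in log_file if CORRELATION_ID in line]

# Copy this output into the report, noting the time window it covers.
print("\n".join(excerpt) if excerpt else "No lines found for this correlation ID")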

Writing reproducible steps: a practical method

Step 1: Reproduce once, then simplify

After you first observe the issue, reproduce it again to confirm it was not a one-off. Then simplify the path: remove unnecessary steps until you find the shortest sequence that still triggers the problem. This “minimal reproduction” is valuable because it narrows the suspected cause and saves time.

Example approach:

  • First run: you hit the bug after a long user journey.
  • Second run: confirm it happens again.
  • Simplify: start closer to the failing screen, use a fresh session, remove optional actions.
  • Stop when removing any step makes the bug disappear.
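
The simplification loop above can also be written down as a short sketch. This is not a prescribed tool, just an illustration of the idea: triggers_bug is a hypothetical callable that replays a list of steps (manually or through an automated check) and returns True if the bug still appears.

# Minimal sketch (Python) of greedy step minimization. `triggers_bug` is a
# hypothetical callable: it replays the given steps and returns True if the
# bug still occurs.
def minimize_steps(steps, triggers_bug):
    i = 0
    while i < len(steps):
        candidate = steps[:i] + steps[i + 1:]  # try the sequence without step i
        if triggers_bug(candidate):
            steps = candidate                  # step i was not needed; drop it
        else:
            i += 1                             # step i is required; keep it and move on
    return steps                               # shortest sequence found by this greedy pass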

Step 2: Define the starting state explicitly

Many “cannot reproduce” outcomes come from different starting states. Make the starting state explicit and easy to set up.

  • “User is logged in as a Customer with an active subscription.”
  • “Cart contains 1 physical item and 1 digital item.”
  • “Feature flag ‘NewCheckout’ is ON.”

If setup is complex, split it into Preconditions and Steps. Preconditions describe what must already be true; steps describe what to do.

Step 3: Use numbered steps with one action each

Write steps as a numbered list where each step is a single action. Avoid combining multiple actions into one line because it makes it harder to pinpoint where behavior diverges.

Better:

  1. Open the app.
  2. Log in as user A.
  3. Navigate to Cart.
  4. Tap “Apply coupon”.

Worse: “Open the app, log in, go to cart and apply coupon.”

Step 4: Include exact input values

Inputs matter. Provide the exact values you typed or selected, including whitespace, casing, and formatting. If the input is long, attach it as a file or paste it in a code block.

Coupon code: SAVE10-NEW
Shipping address: 123 Test St, Apt 4B, Springfield, 01101
Card: 4111 1111 1111 1111 (test card)

If the issue depends on a specific record, include the record ID and where it came from (created by you, seeded data, imported). If the record is sensitive, provide a safe substitute or instructions to generate an equivalent record.

Step 5: Specify timing and frequency for intermittent issues

Some bugs are flaky: they happen only sometimes, under load, or after repeated actions. For these, reproducibility means describing the pattern and providing a way to increase the chance of occurrence.

  • Frequency: “Occurs ~3/10 attempts.”
  • Timing: “If you tap ‘Submit’ twice within 1 second…”
  • Load: “More likely when network is ‘Slow 3G’ in browser dev tools.”
  • Warm state: “Only after app has been in background for 10+ minutes.”

Also include what you tried that did not reproduce it. That information helps narrow conditions.
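
If the failing scenario can be scripted, a short loop lets you report frequency as a number rather than an impression. The sketch below assumes a hypothetical reproduce_once function that performs the scenario once (for example, an automated UI or API check) and returns True when the failure occurs.

import time

# Minimal sketch (Python): estimate how often a flaky issue reproduces.
# `reproduce_once` is a hypothetical callable that runs the scenario once
# and returns True when it fails.
def estimate_frequency(reproduce_once, attempts=10, pause_seconds=1.0):
    failures = 0
    for _ in range(attempts):
        if reproduce_once():
            failures += 1
        time.sleep(pause_seconds)  # space attempts out, since timing often matters
    return f"Occurs {failures}/{attempts} attempts"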

Recommended bug report template (with guidance)

Use a consistent template so readers know where to find information. Adapt fields to your tracking tool.

Title

[Area] [Action] [Problem] [Condition]

Summary

One or two sentences describing the issue and impact in plain language. Focus on user-visible behavior.

Environment

  • App version/build:
  • Environment/region:
  • Device/OS or Browser:
  • User/role:
  • Feature flags/config:

Preconditions / Test data

  • Account state:
  • Data IDs / sample inputs:

Steps to reproduce

  1.
  2.
  3.

Observed result

What happened (include exact message text, error codes, and where it appears).

Expected result

What should happen (link to spec/requirement if needed).

Attachments / Evidence

  • Screenshot/video:
  • Logs/HAR:
  • Correlation ID / timestamp:

Notes (optional)

  • Repro frequency:
  • Workaround:
  • Additional observations:

Keep “Notes” clearly separated from facts. If you include hypotheses, label them as such.

Examples of strong bug reports

Example 1: UI behavior regression

Title: “Profile: Saving changes shows success toast but data is not persisted after refresh”

Summary: User can edit profile fields and sees “Saved” confirmation, but changes revert after reloading the page.

Environment:

  • Web app build: 2.18.0 (staging)
  • Browser: Chrome 121.0 (macOS 14.2)
  • User: Standard user (no admin permissions)

Preconditions/Test data:

  • User account: user_id=184233
  • Existing profile name: “Taylor Nguyen”

Steps to reproduce:

  1. Log in as user_id=184233.
  2. Go to Settings → Profile.
  3. In “Display name”, replace “Taylor Nguyen” with “Taylor N.”
  4. Click “Save”.
  5. Observe the toast message.
  6. Refresh the page (Cmd+R) and return to Settings → Profile.

Observed result: Step 4 shows toast “Profile saved successfully”, but after refresh the display name is “Taylor Nguyen”.

Expected result: After saving, the display name should remain “Taylor N.” after refresh.

Attachments/Evidence: Video showing save + refresh; Network tab screenshot shows POST /api/profile returns 200 with response body {"status":"ok"}.

Notes: Reproduces 5/5 times in staging. Does not reproduce in production build 2.17.3.

Example 2: API validation and exact payload

Title: “Orders API: POST /orders accepts negative quantity and creates order with -1 items”

Summary: API allows invalid negative quantity; order is created and totals become negative.

Environment:

  • Environment: QA
  • Service version: orders-service 1.9.4
  • Auth: Bearer token for role=Customer

Preconditions/Test data:

  • Product SKU: SKU-100042 exists and is in stock

Steps to reproduce:

  1. Send the following request:
POST /api/orders
Content-Type: application/json

{
  "items": [
    { "sku": "SKU-100042", "quantity": -1 }
  ]
}
  2. Observe the response status and body.
  3. Fetch the created order by ID from the response.

Observed result: API returns 201 Created and order contains quantity=-1; totals are negative.

Expected result: API should reject negative quantities with 400 Bad Request and validation error message.

Attachments/Evidence: Full request/response saved as text; correlation-id: 7f2c1a9e-1c3b-4f2a-9b2d-2a9e1f0c9d11; timestamp 2026-01-13 10:42 UTC.
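
A report like this is even easier to act on when the request can be replayed with a script attached to the ticket. The sketch below mirrors the payload quoted in the steps; the base URL and bearer token are placeholders for the QA environment, not values taken from the report.

import requests  # third-party HTTP client (pip install requests)

# Minimal sketch (Python) that replays the request from the steps above.
# BASE_URL and TOKEN are placeholders; the payload matches the report.
BASE_URL = "https://qa.example.com"      # hypothetical QA host
TOKEN = "<customer-role-bearer-token>"   # never paste real credentials into a report

response = requests.post(
    f"{BASE_URL}/api/orders",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"items": [{"sku": "SKU-100042", "quantity": -1}]},
    timeout=30,
)

# Expected per the report: 400 Bad Request. Observed: 201 Created.
print(response.status_code, response.text)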

Example 3: Intermittent mobile issue with timing

Title: “iOS: Push notification tap sometimes opens blank screen when app was backgrounded >10 min”

Summary: Tapping a push notification occasionally opens a blank screen; user must force close to recover.

Environment:

  • iPhone 14, iOS 17.2
  • App build: 5.6.0 (TestFlight)
  • Network: Wi‑Fi

Preconditions/Test data:

  • Notifications enabled for the app
  • Test account: user_id=55102

Steps to reproduce:

  1. Log in as user_id=55102.
  2. Send a test push notification “New message” to this user.
  3. Press Home to background the app.
  4. Wait 10–15 minutes.
  5. Tap the push notification from the lock screen or notification center.

Observed result: In about 2 of 10 attempts, the app opens to a blank white screen with no navigation controls; no content loads after 60 seconds.

Expected result: App should open directly to the message thread referenced by the notification.

Attachments/Evidence: Screen recording; device logs around the time show “Scene restore failed” message (attached excerpt).

Notes: Does not reproduce if step 4 wait time is under 2 minutes.

Common pitfalls and how to avoid them

Vague language and missing specifics

Words like “sometimes,” “doesn’t work,” and “broken” are not actionable unless you add specifics. Replace them with measurable statements.

  • Instead of: “Sometimes saving fails.”
  • Write: “Saving fails 4/10 times; when it fails, the Save button stays disabled and no request appears in the network tab.”

Mixing multiple issues in one report

One report should describe one primary problem. If you found multiple distinct problems, create separate reports. This helps prioritization and prevents partial fixes from being marked as complete.

If issues are related (e.g., one causes another), you can link them: “Issue B appears to be a consequence of Issue A; see BUG-1234.”

Not stating what changed

If you suspect a regression, include the last known good version and first known bad version if you have it. Even a rough range helps narrow the cause.

  • “Works in 2.17.3, fails in 2.18.0.”

Assuming the reader has your context

Bug reports are often read by people in different time zones or teams. Avoid references like “as discussed” or “same as last time.” Provide the essential context in the report itself and link to discussions if needed.

Unclear severity/impact statements

Even if your tracking tool has separate severity/priority fields, include a brief impact note in the summary when it affects user outcomes.

  • “Impact: user cannot complete checkout; no workaround found.”
  • “Impact: cosmetic; text overlaps but action remains possible.”

Keep impact factual. Avoid exaggeration; let the evidence speak.

Making bug reports faster to write without losing quality

Create reusable snippets

Maintain short snippets for common fields: environment block, device info, log collection instructions, and standard preconditions. This reduces time spent rewriting boilerplate while keeping reports consistent.
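
Parts of the environment block can even be filled in automatically on the machine where you reproduce the issue. The sketch below is one possible helper, not a required tool; the app build and environment name still come from your own release information.

import platform
from datetime import datetime, timezone

# Minimal sketch (Python): gather machine details for the environment block.
# App build and environment name are left as placeholders to fill in by hand.
env_block = "\n".join([
    "App version/build: <fill in>",
    "Environment/region: <fill in>",
    f"OS: {platform.system()} {platform.release()}",
    f"Timestamp (UTC): {datetime.now(timezone.utc).isoformat()}",
])
print(env_block)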

Capture evidence while reproducing

When you reproduce the issue, capture evidence immediately: screenshot the error, export the network trace, copy the request payload, note the timestamp. Waiting until later often means the context is lost.

Write steps as you perform them

Instead of reproducing first and writing later, write the steps during the second reproduction attempt. This naturally produces accurate, sequential instructions and reduces forgotten details.

Checklist for a high-quality bug report

  • Title identifies area + action + problem + condition.
  • Environment includes version/build and platform details.
  • Preconditions and test data are explicit and accessible.
  • Steps are numbered, minimal, and contain exact inputs.
  • Observed and expected results are clearly separated.
  • Evidence is attached (screenshots/video/logs) with timestamps or IDs.
  • Intermittent issues include frequency and patterns.
  • Only one primary issue is described; related issues are linked.

Now answer the exercise about the content:

Which bug report description is most likely to be actionable because it supports reliable reproduction?

An actionable report reduces ambiguity by controlling variables: it states environment/build, starting state and data, minimal numbered steps with exact inputs and timing, and clearly separates observed vs expected results, ideally with evidence.

Next chapter

Communicating Findings and Supporting Team Decisions
