What a Defect Lifecycle Is (and Why It Matters Day-to-Day)
A defect lifecycle is the set of states and handoffs a defect report goes through from the moment a problem is discovered until the team has verified the fix (or decided not to fix). It is both a workflow and a communication contract: it defines what “done” means for a bug, who is responsible at each step, and what information must be present before the defect can move forward.
In practice, the lifecycle prevents two common failure modes: (1) defects bouncing around because nobody knows what to do next, and (2) defects being “closed” without a shared understanding that the fix is real, tested, and safe. A well-defined lifecycle also supports planning (how many defects are in progress), quality reporting (how many are verified), and learning (why defects were introduced and how to prevent repeats).
Typical Roles Involved
- Reporter (often a tester, sometimes support or a developer): discovers and documents the defect.
- Triage owner (QA lead, product owner, engineering lead, or rotating duty): decides priority, assigns, and clarifies scope.
- Developer/Engineer: investigates root cause, implements fix, adds automated checks when appropriate.
- Reviewer (peer developer): reviews code changes and risk.
- Verifier (tester or developer depending on team): confirms the fix and checks for regressions.
- Release manager (in some teams): ensures the fix is included in the right release and properly communicated.
Common Defect States and What They Mean
Tools differ, but most defect lifecycles can be mapped to a small set of meaningful states. The key is that each state has an entry criterion (what must be true to move into it) and an exit criterion (what must be true to move out of it).
1) New (or Open)
Meaning: A defect has been reported and awaits initial review.
Entry criteria: A report exists in the tracking system.
Exit criteria: Triage has reviewed it and decided next steps (assign, request info, reject, or defer).
2) Needs Info (or Clarification Requested)
Meaning: The report is not actionable yet.
Common causes: missing steps to reproduce, unclear expected result, no environment details, no evidence, or the issue cannot be reproduced.
Exit criteria: Reporter (or someone else) provides the missing details; then it returns to triage or becomes actionable.
3) Triaged (or Accepted)
Meaning: The team agrees it is a valid defect and has decided priority and ownership.
Exit criteria: Assigned to an engineer and moved to an “In Progress” state, or moved to “Deferred/Won’t Fix” with rationale.
4) Assigned
Meaning: A specific person is responsible for investigation and fix.
Exit criteria: Work begins (In Progress) or assignment changes due to workload/area ownership.
5) In Progress (or Investigating/Fixing)
Meaning: The engineer is actively working on diagnosis and/or implementation.
Exit criteria: A fix is ready for review/build, or the engineer determines it is not a defect (duplicate, expected behavior, configuration issue).
6) Fixed (or Resolved)
Meaning: Code/config change has been made and is available in a build/environment.
Important nuance: “Fixed” is not the same as “Verified.” It means “developer believes it is fixed and has delivered a change.”
Exit criteria: A build containing the fix is available and the defect is ready for verification.
7) Ready for Test (or Ready for QA)
Meaning: The fix is deployed to a testable environment and includes enough information for verification.
Exit criteria: A tester (or verifier) begins verification.
8) Verified (or Closed)
Meaning: The fix has been confirmed in the target environment/build, and any required regression checks have been performed.
Exit criteria: Defect is closed, or reopened if the issue persists or has regressed.
9) Reopened
Meaning: Verification failed, or the issue returned after being marked fixed/verified.
Exit criteria: Re-triage and re-assignment with updated evidence.
10) Duplicate / Not a Bug / Won’t Fix / Deferred
Meaning: The team decided not to proceed with a fix in the current plan.
- Duplicate: Same root issue as another defect; link to the canonical ticket.
- Not a Bug: Expected behavior, misunderstanding, or correct per spec.
- Won’t Fix: Valid issue, but the cost/risk is not worth fixing (must include rationale).
- Deferred: Valid issue, but postponed to a later release or backlog.
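The catalog above becomes enforceable when the allowed transitions are encoded, which most trackers support through workflow configuration. A minimal Python sketch of the idea (the state names and transition map are illustrative, not taken from any specific tool):

```python
# Hypothetical defect workflow: each state maps to the states it may move to.
ALLOWED_TRANSITIONS = {
    "New": {"Needs Info", "Triaged", "Duplicate", "Not a Bug", "Won't Fix", "Deferred"},
    "Needs Info": {"New", "Triaged"},
    "Triaged": {"Assigned", "Deferred", "Won't Fix"},
    "Assigned": {"In Progress", "Assigned"},  # reassignment stays in Assigned
    "In Progress": {"Fixed", "Duplicate", "Not a Bug"},
    "Fixed": {"Ready for Test"},
    "Ready for Test": {"Verified", "Reopened"},
    "Verified": {"Reopened"},  # closed defects can still come back
    "Reopened": {"Triaged", "Assigned"},
    "Duplicate": {"Reopened"},
    "Not a Bug": {"Reopened"},
    "Won't Fix": {"Reopened"},
    "Deferred": {"Triaged"},
}

def move(state: str, new_state: str) -> str:
    """Return the new state, or raise if the transition is not allowed."""
    if new_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"Illegal transition: {state} -> {new_state}")
    return new_state

# Example: a defect that takes the happy path from report to closure.
state = "New"
for nxt in ["Triaged", "Assigned", "In Progress", "Fixed", "Ready for Test", "Verified"]:
    state = move(state, nxt)
print(state)  # Verified
```

Encoding the workflow this way makes "defects bouncing around" visible: any move the map does not allow has to be discussed rather than quietly clicked through.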
Step-by-Step: From Discovery to Verified
The lifecycle becomes practical when you treat each step as a checklist. The goal is to reduce back-and-forth and make each handoff efficient.
Step 1: Capture the Defect at the Moment of Discovery
When you first see the issue, assume you may not be able to reproduce it later. Capture evidence immediately.
- Record the context: environment (test/staging/prod), build/version, device/browser, user role, feature flags, configuration, data set.
- Capture proof: screenshot, screen recording, logs, network trace, console output, server error ID/correlation ID.
- Note timing: exact timestamp can help correlate logs.
Practical tip: If the defect is intermittent, write down how many attempts you made and how often it occurred (e.g., “3 failures in 20 attempts”).
Step 2: Reduce to Clear Reproduction Steps
Convert your exploration into a minimal, repeatable path. Aim for steps that another person can follow without interpretation.
- Start from a known state (e.g., “User is logged in as Admin; cart is empty”).
- Use numbered steps with concrete actions and inputs.
- Include any required test data (IDs, emails, sample files).
- State the actual result and the expected result in observable terms.
Example (good reproduction):
1. In Staging build 2.8.1, log in as user role BillingAdmin (user: ba_test_03).
2. Navigate to Invoices > Create Invoice.
3. Add line item: SKU=CONSULT-1H, Qty=1.
4. Click Save.
Actual: Error banner “Something went wrong” appears; invoice is not created. Network tab shows POST /invoices returns 500 with correlationId=9f2c...
Expected: Invoice is created and appears in the invoice list with status Draft.
Step 3: Create the Defect Ticket with Actionable Fields
Different teams use different templates, but the same information makes defects actionable.
- Title: concise, specific, includes symptom and area (e.g., “Create Invoice fails with 500 when adding CONSULT-1H”).
- Description: steps, actual vs expected, evidence, frequency.
- Environment/build: where it happens; note if it does not happen elsewhere.
- Severity and priority: if your process separates them, state both (and why).
- Attachments/links: logs, traces, related tickets, monitoring alerts.
- Suspected scope: what might also be affected (e.g., “all invoice creation endpoints”).
Practical tip: If you can propose a quick diagnostic (e.g., “likely null pointer when SKU missing tax category”), include it as a hypothesis, clearly labeled as such, not as a fact.
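One way to keep reports actionable is to treat the template as data and lint it before submission. A minimal sketch, with hypothetical field names mirroring the checklist above:

```python
from dataclasses import dataclass, field

@dataclass
class DefectTicket:
    # Hypothetical fields; adapt to your tracker's template.
    title: str
    steps: list[str]
    actual: str
    expected: str
    environment: str  # e.g. "Staging build 2.8.1"
    severity: str
    priority: str
    attachments: list[str] = field(default_factory=list)
    suspected_scope: str = ""

    def lint(self) -> list[str]:
        """Return problems that would make the report non-actionable."""
        problems = []
        if len(self.title) < 15:
            problems.append("Title is vague; include symptom and area.")
        if len(self.steps) < 2:
            problems.append("Provide numbered reproduction steps.")
        if not self.actual or not self.expected:
            problems.append("State both actual and expected results.")
        if not self.environment:
            problems.append("Name the environment and build.")
        return problems
```

A ticket that passes this kind of self-review tends to skip the Needs Info state entirely.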
Step 4: Triage—Decide What Happens Next
Triage is where the defect becomes a team decision rather than an individual observation. The triage outcome should be explicit.
- Validate: confirm it is reproducible or at least credible with evidence.
- Classify: component/module, defect type (logic, UI, performance, security, data), and affected platforms.
- Assess impact: who is affected, how often, and what the user loses (money, data, time, trust).
- Set priority: when it should be fixed relative to other work.
- Assign owner: a person or team.
- Decide disposition: accept, duplicate, needs info, defer, won’t fix.
Step-by-step triage checklist:
- Is the report understandable and reproducible?
- Is it already reported? (search by error message, endpoint, UI label)
- Is it a defect or expected behavior?
- What release is affected? Is it a regression?
- What is the smallest safe fix? Is a workaround available?
- Who should own it?
Step 5: Investigation—From Symptom to Root Cause
Once assigned, the engineer (often with tester support) investigates. A good lifecycle encourages investigation notes to be recorded in the ticket so verification is easier later.
- Reproduce locally or in a controlled environment using the provided steps.
- Confirm scope: does it affect all users or a subset (role, region, data shape)?
- Identify trigger conditions: specific inputs, timing, concurrency, missing data.
- Collect diagnostics: stack traces, database queries, feature flag states.
- Decide fix approach: code change, config change, data correction, dependency update.
Practical tip: If the defect is caused by bad test data or environment configuration, still document it clearly. The “fix” might be a data cleanup script or an environment change, and verification still matters.
Step 6: Implement the Fix with Verification in Mind
A fix is more than changing code. It should be packaged so someone else can verify it efficiently and safely.
- Implement the change and keep it as small as possible.
- Add or update automated checks where appropriate (unit/integration checks) to prevent recurrence; a test sketch appears at the end of this step.
- Update migration/data scripts if needed and document how to apply them.
- Note side effects and areas that might regress.
What to write in the ticket when marking “Fixed”:
- Build number/commit hash containing the fix.
- Where it is deployed (environment and URL).
- Any setup needed to verify (feature flag on, test account, sample data).
- Suggested regression areas (e.g., “also verify invoice edit and invoice PDF export”).
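For the invoice example above, the automated check mentioned in this step might look like the following pytest sketch. The functions and data here are illustrative stand-ins, not the real billing service:

```python
import pytest

# Minimal stand-ins for the application code; in the real fix these would
# live in the service behind POST /invoices (all names are hypothetical).
TAX_CATEGORIES = {"CONSULT-1H": "services"}

class InvoiceError(Exception):
    pass

def create_invoice(user: str, line_items: list[dict]) -> dict:
    for item in line_items:
        if item["sku"] not in TAX_CATEGORIES:
            # The root cause of the 500 was an unhandled missing tax
            # category; the fix raises a clear, handleable error instead.
            raise InvoiceError(f"SKU {item['sku']} has no tax category")
    return {"status": "Draft", "user": user, "items": line_items}

def test_create_invoice_with_consult_sku_succeeds():
    """Regression check for the reported 500 on SKU CONSULT-1H."""
    invoice = create_invoice("ba_test_03", [{"sku": "CONSULT-1H", "qty": 1}])
    assert invoice["status"] == "Draft"

def test_missing_tax_category_is_rejected_cleanly():
    """A crash-class bug should become a clean, expected error."""
    with pytest.raises(InvoiceError):
        create_invoice("ba_test_03", [{"sku": "UNKNOWN-SKU", "qty": 1}])
```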
Step 7: Handoff to “Ready for Test”
Many teams struggle here: the defect is marked fixed, but the verifier cannot access the fix yet. A distinct “Ready for Test” state reduces wasted time.
- Confirm the fix is deployed to the agreed environment.
- Confirm dependencies are included (backend + frontend, config changes, migrations).
- Confirm the verifier has access (accounts, permissions, feature flags).
Practical tip: If deployments are batched, include the target deployment window and the exact build identifier so the verifier knows when to retest.
Step 8: Verification—Confirm the Fix and Check for Regressions
Verification is a focused activity: confirm that the reported symptom is gone under the same conditions, and that the fix did not break nearby behavior. Verification should be evidence-based.
- Re-run the original reproduction steps in the environment containing the fix.
- Verify expected behavior with observable outcomes (UI state, API response, database record created).
- Check negative/edge conditions related to the fix (e.g., missing optional fields, boundary values).
- Run targeted regression checks in the impacted area (not a full test suite unless required).
- Capture evidence (screenshots, logs, response payloads) especially for high-impact defects.
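When the fix sits behind an API, re-running the original reproduction can be scripted so the evidence attaches itself to the ticket. A sketch using the requests library (URL, payload, and filenames are placeholders):

```python
import json
import requests

# Placeholder values; substitute the environment and payload from the ticket.
BASE_URL = "https://staging.example.com"
PAYLOAD = {"line_items": [{"sku": "CONSULT-1H", "qty": 1}]}

response = requests.post(f"{BASE_URL}/invoices", json=PAYLOAD, timeout=10)

# Save the observable outcome as evidence for the verification note.
with open("verification_evidence.json", "w") as f:
    json.dump({"status_code": response.status_code, "body": response.text},
              f, indent=2)

assert response.status_code == 201, f"Still failing: {response.status_code}"
```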
Verification note example:
Verified in Staging build 2.8.2. Reproduced original steps with user ba_test_03; Save now returns 201 and invoice INV-10492 is created in Draft status. Also verified that editing invoice line items and exporting the invoice PDF work. No 500 errors observed; no error correlationId generated.
Step 9: Close or Reopen with Specific Findings
If verification passes, move to Verified/Closed with evidence. If it fails, reopen with new information that helps the engineer act quickly.
- If reopening: include the build tested, what still fails, and how it differs from before.
- If partially fixed: clarify what is fixed and what remains (may require splitting into separate defects).
- If cannot verify: state why (environment down, fix not deployed, missing access) and move to an appropriate state rather than guessing.
Managing Special Situations in the Lifecycle
Intermittent Defects
Intermittent issues require lifecycle discipline because they are easy to close prematurely.
- Use probability language: “occurs ~10% of attempts” rather than “sometimes.”
- Define a verification threshold: e.g., “no failures in 50 attempts” or “no failures over 24 hours of monitoring.”
- Track signals: error rate metrics, logs, and correlation IDs.
Lifecycle adjustment: consider a state like “Monitoring” after “Fixed,” where the fix is deployed but the team waits for evidence that the intermittent symptom is gone.
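A threshold like “no failures in 50 attempts” is easy to automate once the reproduction itself is scripted. A minimal harness sketch, where attempt_reproduction is a placeholder for your scripted steps:

```python
def attempt_reproduction() -> bool:
    """Placeholder: run the scripted reproduction once; return True on failure."""
    raise NotImplementedError

def verify_intermittent(attempts: int = 50) -> None:
    failures = sum(1 for _ in range(attempts) if attempt_reproduction())
    rate = failures / attempts
    # Report in probability language, as recommended above.
    print(f"{failures} failures in {attempts} attempts (~{rate:.0%})")
    assert failures == 0, "Threshold not met; keep the defect in Monitoring."
```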
Duplicates and Linking
Duplicates are not “bad reports”; they are signals about impact and discoverability.
- Always link duplicates to a single canonical defect.
- Copy useful evidence from duplicates into the canonical ticket (logs, new reproduction paths).
- Use duplicates to refine priority (many duplicates may indicate high user impact).
Not a Bug vs. Needs Better Explanation
When closing as “Not a Bug,” include a short explanation and a reference to the rule/behavior that makes it expected. If the behavior is surprising, consider creating a separate ticket for usability or documentation improvement rather than dismissing the report.
Deferred and Won’t Fix with Accountability
Deferring is a decision that should remain visible.
- Deferred: include target milestone or review date, and any workaround.
- Won’t Fix: include rationale (risk, cost, low impact) and who approved the decision.
Defect Ticket Quality: What Makes a Report “Actionable”
An actionable defect report minimizes time-to-fix by reducing ambiguity. Use this checklist to self-review before submitting.
- Reproducibility: steps are complete, minimal, and deterministic when possible.
- Observability: actual result is described with concrete evidence (error codes, messages, screenshots).
- Expectation: expected result is clear and testable.
- Context: environment/build, platform, user role, data prerequisites.
- Impact: what the user cannot do; whether data loss or security exposure exists.
- Scope hints: where else it might occur; whether it is a regression.
Example: Weak vs. Strong Ticket Description
Weak: “Invoice page broken. Please fix ASAP.”
Strong: “Staging 2.8.1: Create Invoice fails with 500 when adding SKU CONSULT-1H (BillingAdmin). Steps included; correlationId attached; occurs 3/3 attempts; blocks invoice creation.”
Verification Depth: How Much Retesting Is Enough?
Verification is not just “try the same thing once.” The right depth depends on the change and the risk of side effects. Even without repeating earlier testing theory, you can apply a practical rule: verify the original path, then verify the most likely neighbors to break.
- Original path: exact reproduction steps.
- Neighbor paths: same feature with slightly different data (different SKU, different role, different currency).
- Boundary checks: empty fields, maximum lengths, invalid inputs if relevant.
- Integration touchpoints: if the fix touched an API, check the UI and any consumers that call it.
Practical example: If the fix changes invoice tax calculation, verify: creating invoice with taxable item, non-taxable item, mixed items, and exporting the invoice where totals appear.
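Neighbor paths map naturally onto parametrized tests. A pytest sketch for the tax-calculation example (the rates and calculate_total function are illustrative, not a real implementation):

```python
import pytest

# Illustrative stand-in for the fixed tax logic.
TAX_RATE = {"taxable": 0.20, "non_taxable": 0.0}

def calculate_total(items: list[tuple[str, float]]) -> float:
    """Sum prices plus tax per item category."""
    return round(sum(price * (1 + TAX_RATE[cat]) for cat, price in items), 2)

@pytest.mark.parametrize("items,expected", [
    ([("taxable", 100.0)], 120.0),                          # original path
    ([("non_taxable", 100.0)], 100.0),                      # neighbor: no tax
    ([("taxable", 100.0), ("non_taxable", 50.0)], 170.0),   # neighbor: mixed
    ([], 0.0),                                              # boundary: empty
])
def test_invoice_totals(items, expected):
    assert calculate_total(items) == expected
```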
Lifecycle Metrics That Improve Flow (Without Gaming)
Tracking a few lifecycle metrics helps teams spot bottlenecks. Metrics should guide improvement, not punish individuals.
- Time to triage: how long defects sit in New before a decision.
- Time in Needs Info: indicates unclear reporting or missing access to diagnostics.
- Time to fix: from Accepted to Fixed; useful when segmented by component.
- Time to verify: from Ready for Test to Verified; highlights environment/deployment delays.
- Reopen rate: percentage of defects reopened after being marked fixed; can indicate weak fixes or weak verification notes.
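Most trackers expose state-change timestamps, so these durations can be computed directly from ticket history. A sketch, assuming a simple list of (state, timestamp) events per defect:

```python
from datetime import datetime

def time_between(events: list[tuple[str, datetime]],
                 start_state: str, end_state: str) -> float | None:
    """Hours from first entry into start_state to first entry into end_state."""
    times = {}
    for state, ts in events:
        times.setdefault(state, ts)  # keep the first occurrence only
    if start_state in times and end_state in times:
        return (times[end_state] - times[start_state]).total_seconds() / 3600
    return None

# Example history for one defect (timestamps are made up).
history = [
    ("New", datetime(2024, 5, 1, 9, 0)),
    ("Triaged", datetime(2024, 5, 1, 15, 30)),
    ("Fixed", datetime(2024, 5, 3, 11, 0)),
    ("Ready for Test", datetime(2024, 5, 3, 16, 0)),
    ("Verified", datetime(2024, 5, 4, 10, 0)),
]
print(time_between(history, "New", "Triaged"))              # time to triage: 6.5
print(time_between(history, "Ready for Test", "Verified"))  # time to verify: 18.0
```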
Practical Workflow Example: A Complete Lifecycle Walkthrough
Use this end-to-end scenario to see how the states and handoffs connect.
- New: Tester reports “Password reset email link returns 404” with steps, timestamp, and email message ID.
- Triage: Team confirms it affects all users in staging; priority set high; assigned to auth team.
- In Progress: Engineer finds routing misconfiguration after recent deployment; notes that only the email template points to an old domain.
- Fixed: Engineer updates template and routing config; adds a small automated check that the reset URL matches the current base domain (sketched after this walkthrough).
- Ready for Test: Fix deployed to staging build 3.1.0; engineer comments: “Use user pr_test_01; request reset; link should open /reset?token=...; verify token invalidation after use.”
- Verified: Tester requests reset, confirms link opens reset page, resets password, confirms old token no longer works, and checks login with new password.
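The “small automated check” from the walkthrough could be as simple as asserting that every reset link in the outgoing email templates uses the current base domain. A sketch (the template content and BASE_DOMAIN are illustrative; a real check would load the template files from the repository):

```python
import re
from urllib.parse import urlparse

BASE_DOMAIN = "app.example.com"  # illustrative current domain

# Illustrative template string standing in for the real email template file.
TEMPLATE = 'Click <a href="https://app.example.com/reset?token={token}">here</a>.'

def test_reset_links_use_current_domain():
    for url in re.findall(r'href="([^"]+)"', TEMPLATE):
        assert urlparse(url).hostname == BASE_DOMAIN, f"Stale domain in {url}"

test_reset_links_use_current_domain()
```

A check like this turns the root cause of the defect (a template pointing at an old domain) into something the build catches before a tester ever sees it.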