Sprint Retrospective: Improving How the Team Works Each Sprint

Chapter 12

Estimated reading time: 8 minutes


What the Sprint Retrospective Is (and What It Is Not)

The Sprint Retrospective is a focused working session where the team inspects how they worked during the sprint and adapts their way of working to improve. The goal is not to judge people or re-litigate decisions; it is to identify a small number of changes that make the next sprint easier, faster, safer, or higher quality.

A useful retrospective produces: (1) shared understanding of what happened, grounded in facts; (2) one or two improvement experiments; (3) clear ownership and measurable outcomes; and (4) a plan to track the actions during the next sprint so improvements actually stick.

Creating Psychological Safety and Using Facts

Psychological safety: how to contribute as a team member

Retrospectives work when people can speak honestly without fear of blame. You can actively create safety by how you show up:

  • Assume positive intent: describe impact, not character. Say “We missed handoffs and it delayed testing,” not “You never communicate.”
  • Use “I” and “we” language: “I struggled to understand the acceptance criteria,” “We had too many parallel tasks.”
  • Invite quieter voices: “I’ve shared my view—what did others notice?”
  • Normalize learning: treat mistakes as data. “What did this teach us?”
  • Keep it about the system: process, tools, constraints, policies, communication patterns.

Use facts before interpretations

Facts reduce defensiveness and help the team solve the right problem. Bring concrete observations such as:

  • Work item aging (how long items sat “In Progress”)
  • Number of carryovers (items not finished within the sprint)
  • Defects found after “Done”
  • Cycle time by workflow step (e.g., dev vs. review vs. test)
  • Interruptions (unplanned work count, production incidents)
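
Most of these facts can be pulled straight from timestamps on the team's board. A minimal sketch of computing review wait and cycle time, assuming each item carries ISO-dated workflow transitions (all field names and dates here are illustrative):

```python
from datetime import datetime

# Hypothetical work items with workflow timestamps (names are illustrative).
items = [
    {"id": "A-1", "in_progress": "2024-03-04", "ready_for_review": "2024-03-06",
     "review_done": "2024-03-09", "done": "2024-03-10"},
    {"id": "A-2", "in_progress": "2024-03-05", "ready_for_review": "2024-03-05",
     "review_done": "2024-03-06", "done": "2024-03-07"},
]

def days_between(start, end):
    """Whole days between two ISO dates."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

def review_wait_days(item):
    """How long an item sat waiting in 'Ready for Review'."""
    return days_between(item["ready_for_review"], item["review_done"])

def cycle_time_days(item):
    """Total time from starting work to Done."""
    return days_between(item["in_progress"], item["done"])

for item in items:
    print(item["id"], "review wait:", review_wait_days(item),
          "cycle time:", cycle_time_days(item))
```

Even two or three such numbers, read aloud at the start of the retro, anchor the conversation in observations rather than impressions.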

Then separate observations from interpretations:


  • Observation (fact): 3 items waited more than 2 days for review. Interpretation (hypothesis): reviews are a bottleneck. Question to explore: Why were reviews delayed (reviewer availability, unclear checklist, too-large changes)?
  • Observation (fact): 2 defects escaped after “Done”. Interpretation (hypothesis): our testing approach is insufficient. Question to explore: Which scenarios were missed, and how can we catch them earlier?
  • Observation (fact): we had 9 context switches due to support requests. Interpretation (hypothesis): interruptions are hurting flow. Question to explore: Can we create a support rotation or buffer capacity?

Step-by-Step: Turning Discussion into Changes That Stick

1) Set a clear focus for the retro

Before diving in, align on what you want to improve. Examples: “handoffs and waiting,” “quality and rework,” “collaboration with stakeholders,” or “predictability.” A narrow focus helps avoid a scattered conversation.

2) Gather data quickly

Use a simple structure: each person writes 3–5 notes (digital or sticky) answering prompts like “What helped us?” “What slowed us down?” “What surprised us?” Group similar notes and name the themes.

3) Choose one or two improvement experiments (not a wish list)

Pick the smallest set of changes with the highest expected impact. A good rule: one primary experiment and optionally one small supporting tweak. If you select five improvements, you usually implement none.

To decide, use lightweight criteria:

  • Impact: If this works, what improves (time, quality, stress, clarity)?
  • Effort: How hard is it to try for one sprint?
  • Control: Can the team implement it without waiting on external approvals?

4) Define measurable outcomes (how you’ll know it worked)

Turn the experiment into a testable statement with a metric and a target. Keep it measurable within one sprint.

Examples of measurable outcomes:

  • Reduce waiting: “No work item waits more than 24 hours in ‘Ready for Review’.”
  • Improve quality: “Zero ‘reopened’ items after Done; or reduce escaped defects from 2 to 0.”
  • Increase focus: “Limit WIP to 2 items per developer; track daily WIP breaches.”
  • Improve clarity: “Decrease ‘clarification needed’ comments in review from 10 to 3.”

A helpful template:

Experiment: We will [change] for the next sprint. We expect [measurable outcome]. We will measure it by [metric/source].
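
The template turns an experiment into something you can check at the end of the sprint. A minimal sketch, where the experiment fields and the 24-hour target are illustrative, not taken from any real tool:

```python
# A retro experiment expressed as data plus a pass/fail check.
# All names and the 24-hour target are hypothetical examples.
EXPERIMENT = {
    "change": "daily review swarm",
    "metric": "hours each item waited in 'Ready for Review'",
    "target_hours": 24,
}

def experiment_passed(wait_hours, target_hours):
    """The experiment succeeds only if no item exceeded the target wait."""
    return all(h <= target_hours for h in wait_hours)

observed_waits = [6, 20, 23, 11]  # hypothetical mid-sprint measurements
print("experiment passed:",
      experiment_passed(observed_waits, EXPERIMENT["target_hours"]))
```

Writing the check down before the sprint starts keeps the team honest: either the data meets the target or it does not.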

5) Assign ownership and define the first next step

Every action needs an owner (one person accountable for driving it) and a concrete first step that can be started immediately.

  • Action: create a lightweight PR review checklist. Owner: Sam. First step (within 24 hours): draft the checklist in the repo and share it for feedback. Done when: the checklist is used on all PRs this sprint.
  • Action: introduce a daily 15-min “review swarm” slot. Owner: Lee. First step (within 24 hours): put a recurring calendar hold and announce the working agreement. Done when: average review wait is under 24 hours.
  • Action: support rotation to reduce interruptions. Owner: Riya. First step (within 24 hours): propose a rotation schedule and escalation rules. Done when: interruptions per person are reduced by 30%.

6) Track actions during the next sprint (so the retro matters)

Improvements stick when they are visible and revisited. Practical ways to track:

  • Add retro actions to the team’s working board as explicit items (not hidden in notes).
  • Review actions briefly during the sprint (e.g., twice a week): “Are we doing the experiment? Any blockers?”
  • Use a simple status: Not started / In progress / Done / Abandoned (with reason).
  • Collect the metric as you go, not at the end. If the metric is “review wait time,” check it mid-sprint.
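The simple status values above can be enforced with a tiny data structure. A minimal sketch; the field names and the rule that an abandoned action needs a reason are illustrative choices, not from any real tool:

```python
# An "Improvement Log" entry with the simple statuses suggested above.
VALID_STATUSES = {"Not started", "In progress", "Done", "Abandoned"}

def make_action(description, owner, metric):
    """A retro action always carries an owner and a metric."""
    return {"description": description, "owner": owner,
            "metric": metric, "status": "Not started", "reason": None}

def set_status(action, status, reason=None):
    """Move an action between statuses; abandoning requires a reason."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    if status == "Abandoned" and not reason:
        raise ValueError("an abandoned action needs a reason")
    action["status"] = status
    action["reason"] = reason
    return action

log = [make_action("PR review checklist", "Sam", "checklist used on all PRs")]
set_status(log[0], "In progress")
print(log[0]["status"])
```

The point is not the code itself but the constraint it encodes: no action exists without an owner and a metric, and nothing is silently dropped.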

In the next retrospective, start by inspecting last sprint’s experiments: Did we do them? What did the data show? Keep, tweak, or stop.

Example Retrospective Formats (and When to Use Each)

Start / Stop / Continue

Best when: the team is new to retrospectives, needs a simple structure, or wants quick actionable changes.

How to run it:

  • Start: practices to introduce (e.g., “start pairing on risky items”).
  • Stop: practices to remove (e.g., “stop starting work without clear acceptance criteria”).
  • Continue: practices that are working (e.g., “continue review swarms”).

Tip: Don’t let “Continue” become merely a compliment round; ask “What exactly should we keep doing, and why?”

Timeline (Sprint Journey)

Best when: there was confusion, conflict, or a significant event (incident, major change, missed deadline) and you need shared context before solving.

How to run it:

  • Draw a timeline from day 1 to the end of the sprint.
  • Each person adds key events (deploys, incidents, scope changes, absences, decisions).
  • Discuss patterns: Where did things start to drift? What signals did we miss?
  • Choose one or two leverage points to change next sprint.

Tip: Keep it factual first; treat causes as hypotheses to test.

4Ls (Liked, Learned, Lacked, Longed for)

Best when: you want a balanced view (positives + gaps) and deeper reflection, especially after a challenging sprint.

How to run it:

  • Liked: what helped (tools, behaviors, decisions).
  • Learned: insights gained (about product, tech, collaboration).
  • Lacked: what was missing (clarity, time, skills, access).
  • Longed for: what you wish you had (more stakeholder access, better test data, fewer interruptions).

Tip: Convert “Longed for” into experiments within your control (or into a clear request with an owner to pursue).

Common Pitfalls (and How to Avoid Them)

Pitfall: Venting without action

What it looks like: the retro becomes a complaint session; the same issues repeat every sprint.

How to avoid:

  • Timebox discussion per theme (e.g., 10 minutes) then move to “What experiment will we try?”
  • Ask: “What is one change we can try next sprint that is within our control?”
  • Require each chosen topic to produce an action with owner + metric.

Pitfall: Choosing too many improvements

What it looks like: a long list of actions, none completed.

How to avoid:

  • Limit to one primary experiment (plus one small tweak if truly necessary).
  • Prefer small, reversible changes you can test in one sprint.
  • If an improvement is important but large, slice it: define the smallest first step that produces learning.

Pitfall: Blame and defensiveness

What it looks like: people argue about who caused a problem; quieter members stop contributing.

How to avoid:

  • Use neutral language and facts; focus on workflow and constraints.
  • Replace “Who did this?” with “What in our system allowed this?”
  • When emotions rise, pause and restate the shared goal: improving how the team works.

Pitfall: Vague actions

What it looks like: “Improve communication” or “Test more” with no clear behavior change.

How to avoid:

  • Make actions behavioral and observable: “Add a 10-minute mid-sprint alignment on dependencies every Wednesday.”
  • Define “done” for the action: “Used for all items this sprint.”
  • Add a metric: “Reduce dependency-related blockers from 6 to 2.”

Pitfall: Not revisiting last sprint’s actions

What it looks like: the team forgets experiments and starts fresh each retro.

How to avoid:

  • Start each retro by reviewing the previous experiment outcomes.
  • Keep a visible “Improvement Log” with: experiment, owner, metric, result, decision (keep/tweak/stop).

Practical Examples of Improvement Experiments

Example 1: Reduce review bottlenecks

  • Experiment: Add a daily 20-minute review swarm immediately after lunch.
  • Expected outcome: Average time in “Ready for Review” drops from ~2 days to < 1 day.
  • Owner: One developer schedules and reminds; everyone participates.
  • Tracking: Record review wait time for each item; check mid-sprint.

Example 2: Improve clarity before work starts

  • Experiment: For any item started, confirm a short checklist: acceptance criteria understood, test approach noted, dependencies identified.
  • Expected outcome: Reduce “clarification needed” comments during implementation by 50%.
  • Owner: Rotating “clarity buddy” each week.
  • Tracking: Count clarification pings or rework notes per item.

Example 3: Reduce interruptions

  • Experiment: Create a support rotation with a single on-duty person per day; others protect focus time.
  • Expected outcome: Reduce context switches per person; improve completion rate of planned work.
  • Owner: One person maintains the rotation and escalation rules.
  • Tracking: Log support requests and who handled them; compare distribution and impact.
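
The tracking idea in Example 3 can be as light as counting log entries per person. A minimal sketch with made-up data (the log fields and names are hypothetical):

```python
from collections import Counter

# Hypothetical support-request log kept during the sprint.
support_log = [
    {"day": "Mon", "handled_by": "Riya"},
    {"day": "Mon", "handled_by": "Sam"},
    {"day": "Tue", "handled_by": "Riya"},
    {"day": "Wed", "handled_by": "Riya"},
]

def handled_per_person(log):
    """Count how many requests each person absorbed. With a working
    rotation, the on-duty person should dominate each day's entries."""
    return Counter(entry["handled_by"] for entry in log)

print(handled_per_person(support_log))
```

Comparing this distribution before and after the rotation shows whether focus time was actually protected.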

Now answer the exercise about the content:

During a Sprint Retrospective, which approach best helps ensure improvements actually carry into the next sprint?

Answer: A retrospective should produce a small number of experiments with clear ownership and measurable outcomes, then be tracked during the next sprint so the change sticks.

Next chapter: Common Pitfalls for New Scrum Teams: What to Watch For and How to Respond