What “Deciding What to Build Next Based on Evidence” Means
After you have gathered signals from interviews, landing pages, pricing tests, and commitment tests, the hard part is not “getting more data.” The hard part is turning mixed, imperfect evidence into a clear build decision: what to build next, what not to build, and what to test before writing more code or spending more money.
Evidence-based build decisions are about reducing risk in the next step. You are not trying to predict the future with certainty; you are trying to choose the next build slice that most efficiently increases learning and/or revenue while minimizing wasted effort.
In practice, this chapter helps you answer questions like:
- Which customer segment should we serve first, based on the strongest signals?
- Which use case should we implement first, based on urgency and frequency?
- Which features are “must-have” for adoption versus “nice-to-have”?
- Should we build at all, or run another test because the evidence is too weak?
- If we build, what is the smallest build that can validate the next risk?
Common Traps When Choosing What to Build
Trap 1: Building the “most requested feature” without checking why
Customers often request features as a proxy for outcomes. “Add integrations” may actually mean “I don’t want manual work.” “Add analytics” may mean “I need to justify this to my boss.” If you build the literal request, you may miss the underlying job and build the wrong thing.
Trap 2: Treating loud feedback as representative
A single enthusiastic person can dominate your perception. Evidence-based decisions require you to weigh feedback by relevance (target customer fit), intensity (how painful the problem is), and credibility (did they take action, commit time, or commit money?).
Trap 3: Confusing “interesting” with “valuable”
Some ideas are fun to build or intellectually appealing, but the evidence may show low urgency, low willingness to change, or low budget. Your build plan should follow value signals, not novelty.
Trap 4: Trying to build the whole product at once
Even if the idea is validated, the next step is rarely “build everything.” The next step is “build the smallest reliable path to the next proof point,” such as activation, retention, or paid conversion.
Types of Evidence You Can Use (and How to Weight Them)
Not all evidence is equal. A useful approach is to classify evidence by how close it is to real behavior and real constraints.
1) Behavioral evidence (strong)
- People completed a workflow, used a prototype, or returned to use it again.
- They shared data, granted access, or integrated it into their process.
- They introduced you to a decision-maker or teammate.
This evidence indicates the problem is real enough to act on.
2) Commitment evidence (very strong)
- They paid, signed a letter of intent, or agreed to a pilot with clear terms.
- They scheduled onboarding, allocated internal time, or accepted a procurement step.
This evidence indicates the problem is not only real, but prioritized.
3) Stated preference evidence (medium)
- “I would use this.”
- “This is a good idea.”
- Feature wish lists without tradeoffs.
Useful for direction, but risky if used alone.
4) Proxy evidence (weak to medium)
- Landing page clicks, email signups, survey responses.
- Social engagement.
Helpful for demand sensing, but can overstate intent. Use it to choose what to test next, not to justify a large build.
A simple weighting rule
When evidence conflicts, prioritize: commitment > behavior > constrained statements (with tradeoffs) > unconstrained opinions.
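A minimal sketch of applying this rule in code, assuming each piece of evidence is tagged with a class; the class names and numeric weights below are illustrative, not a standard scale:

```python
# Rank conflicting pieces of evidence by class.
# Class names and numeric weights are illustrative assumptions.
EVIDENCE_WEIGHT = {
    "commitment": 4,              # paid, signed LOI, agreed to a pilot
    "behavior": 3,                # completed a workflow, returned, integrated
    "constrained_statement": 2,   # opinion that includes a tradeoff or limit
    "opinion": 1,                 # "I would use this"
}

evidence = [
    {"note": "Said the idea is great", "kind": "opinion"},
    {"note": "Completed prototype workflow twice", "kind": "behavior"},
    {"note": "Signed a paid pilot with onboarding date", "kind": "commitment"},
    {"note": "Won't use it if setup takes over 5 minutes", "kind": "constrained_statement"},
]

# Strongest evidence first; when items conflict, trust the top of this list.
ranked = sorted(evidence, key=lambda e: EVIDENCE_WEIGHT[e["kind"]], reverse=True)
for item in ranked:
    print(f'{EVIDENCE_WEIGHT[item["kind"]]}  {item["kind"]:<22} {item["note"]}')
```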
Turn Evidence Into Decisions: A Practical Step-by-Step
Step 1: List the decision options you are choosing between
Write the options as mutually exclusive choices. If you can’t choose between them, your options are probably too vague.
- Option A: Build onboarding + core workflow for Segment 1
- Option B: Build integration X first to unlock adoption
- Option C: Build reporting/dashboard first to satisfy buyers
- Option D: Do not build yet; run a targeted test to resolve uncertainty
Include “do not build yet” as a real option. It prevents false urgency.
Step 2: Define the next risk you must reduce
Validated interest does not mean validated adoption. Decide what must be true for the business to work in the next 30–60 days. Common “next risks” include:
- Activation risk: Can new users reach the “aha moment” quickly?
- Retention risk: Will they come back and use it repeatedly?
- Workflow fit risk: Does it fit into existing tools and processes?
- Buyer approval risk: Will a decision-maker approve purchase?
- Delivery risk: Can you deliver the outcome reliably at acceptable cost?
Your next build should target the biggest remaining risk, not the most exciting feature.
Step 3: Create an evidence table for each option
Make a simple table that forces you to cite evidence, not opinions.
Option: Build integration X first (for Segment 1)
Evidence FOR:
- 6/10 target users said integration is required to try it
- 3 offered to connect us to their admin if integration exists
- In prototype test, manual upload caused drop-off
Evidence AGAINST:
- 2 users said they'd start with CSV if results are fast
- Integration requires 3 weeks engineering + ongoing maintenance
Evidence quality: Medium-High (behavior + constrained statements)
Remaining unknowns:
- Which system is most common (X vs Y)?
- Can we deliver without full integration using a workaround?

Do this for each option. If you cannot cite evidence, that option is currently a guess.
Step 4: Score options with a lightweight decision rubric
Use a rubric to avoid “feature debates.” Keep it simple and consistent. Example criteria (score 1–5):
- Evidence strength: How strong is the proof that this option matters?
- Impact on next risk: Does it directly reduce the biggest uncertainty?
- Time-to-learn: How quickly will we know if it worked?
- Effort/cost: Engineering time, complexity, support burden.
- Reusability: Will this work benefit multiple segments/use cases?
Then compute a simple weighted score. Example weights: evidence (30%), impact (30%), time-to-learn (20%), effort (10%, inverse), reusability (10%). The exact weights matter less than using the same method each time.
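A minimal sketch of that weighted score, assuming each option is scored 1-5 on the five criteria and effort is inverted so lower effort contributes more; the option names and scores are illustrative:

```python
# Weighted decision rubric: each criterion scored 1-5 per option.
# Weights follow the example in the text; effort is inverted (6 - score)
# so that a low-effort option contributes more to the total.
WEIGHTS = {
    "evidence": 0.30,
    "impact": 0.30,
    "time_to_learn": 0.20,
    "effort": 0.10,       # inverted below
    "reusability": 0.10,
}

options = {
    "A: onboarding + core workflow": {"evidence": 4, "impact": 4, "time_to_learn": 4, "effort": 3, "reusability": 3},
    "B: integration X first":        {"evidence": 4, "impact": 3, "time_to_learn": 2, "effort": 4, "reusability": 2},
    "C: reporting/dashboard first":  {"evidence": 2, "impact": 2, "time_to_learn": 3, "effort": 3, "reusability": 4},
}

def weighted_score(scores: dict) -> float:
    total = 0.0
    for criterion, weight in WEIGHTS.items():
        value = scores[criterion]
        if criterion == "effort":
            value = 6 - value  # invert: 5 (huge effort) -> 1, 1 -> 5
        total += weight * value
    return round(total, 2)

for name, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{weighted_score(scores):.2f}  {name}")
```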
Step 5: Choose the smallest build that can produce a decisive signal
Once you pick an option, define the smallest build slice that can validate the next risk. This is not the same as “MVP” in the abstract; it is the minimal implementation that creates a measurable outcome.
Examples:
- If the risk is activation: build guided onboarding, one core workflow, and a clear success output. Skip advanced settings.
- If the risk is workflow fit: build one integration or one export path that matches the most common tool, not all tools.
- If the risk is buyer approval: build reporting output, audit trail, or admin controls that a buyer needs to say yes.
Step 6: Define “build success” as a decision trigger
Before building, define what result will cause you to continue, pivot the build plan, or stop, so that you are not redefining success after the fact.
Examples of decision triggers:
- Continue: 40% of invited users complete onboarding and reach the “aha output” within 10 minutes.
- Pivot: Users start but drop at the same step; fix that step before adding features.
- Stop: Less than 10% complete the workflow even with support; revisit the use case or segment.
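A minimal sketch of encoding a trigger as an explicit decision rule, so the thresholds are fixed before the build ships; the threshold values mirror the examples above and are assumptions, not benchmarks:

```python
# Decision trigger: metric + threshold + time window, agreed before building.
# Thresholds mirror the examples above; they are illustrative, not benchmarks.
def build_decision(invited: int, completed_onboarding: int,
                   completed_workflow: int, drop_step_counts: dict) -> str:
    onboarding_rate = completed_onboarding / invited if invited else 0.0
    workflow_rate = completed_workflow / invited if invited else 0.0

    if onboarding_rate >= 0.40:
        return "CONTINUE: activation trigger met, move to the next risk"
    if workflow_rate < 0.10:
        return "STOP: revisit the use case or segment"
    # Otherwise look for the single step where most users drop off.
    worst_step, drops = max(drop_step_counts.items(), key=lambda kv: kv[1])
    return f"PIVOT: fix '{worst_step}' ({drops} users dropped there) before adding features"

print(build_decision(
    invited=50,
    completed_onboarding=12,
    completed_workflow=15,
    drop_step_counts={"connect tool": 18, "import data": 6, "first output": 4},
))
```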
How to Translate Customer Feedback Into Build Priorities
Separate “requirements to try” from “requirements to stay”
Early customers often need a minimum set to even attempt adoption (requirements to try), and a different set to keep using it (requirements to stay). Your next build should usually focus on “try” requirements until activation is proven, then shift to “stay” requirements to improve retention.
Example: A scheduling tool might need Google Calendar sync to try (workflow fit), but needs team permissions and audit logs to stay (organizational fit).
Look for constraints and tradeoffs in statements
Feedback becomes more reliable when customers mention constraints:
- Time: “If it takes more than 5 minutes per client, we won’t use it.”
- Process: “We can’t upload files; it must pull from our system.”
- Risk: “We need an approval step before anything is sent.”
- Budget: “We can do $200/month, but not $2,000.”
These constraints point to build priorities because they define adoption boundaries.
Prioritize by frequency, intensity, and strategic fit
- Frequency: How often does this pain occur?
- Intensity: How costly is it when it happens?
- Strategic fit: Does solving it align with the segment you are choosing to serve first?
A rare but intense problem may still be a great business if it is tied to budget and urgency. A frequent but mild annoyance may not justify switching.
Deciding Between Multiple Segments or Use Cases
Sometimes evidence shows multiple plausible directions. You need a method to choose without “analysis paralysis.”
Use a “first wedge” decision
Pick the segment/use case that gives you the fastest path to:
- Observable outcomes (you can measure success quickly)
- Shorter sales cycle (fewer approvals)
- Lower implementation complexity (fewer integrations, fewer edge cases)
- Clearer repeatability (similar customers with similar workflows)
This does not mean the other segments are bad; it means they are not the best first wedge.
Example: same product, different wedges
Imagine you are building a tool that turns meeting notes into follow-up tasks.
- Segment A: Freelancers. Fast adoption, low budget, simple workflows.
- Segment B: Sales teams. Higher budget, needs CRM integration, buyer approval.
- Segment C: Legal teams. High risk constraints, needs audit trail and security.
Evidence might show Sales teams pay more, but require integrations and approvals. Freelancers might adopt quickly and provide usage data. An evidence-based decision could be: build for Freelancers first to validate activation and retention, then expand to Sales teams once workflow and value are proven, using the freelancer learnings to reduce product risk.
Feature Prioritization: A Practical Framework You Can Apply Weekly
Create a “Now / Next / Later / Never (for now)” list
Instead of a long backlog, maintain four lists:
- Now: Items required to validate the next risk and hit the next decision trigger.
- Next: Items likely needed soon, but not required for the current proof point.
- Later: Items that may matter after traction, often scaling or optimization.
- Never (for now): Items that are distractions, too costly, or not supported by evidence.
This structure makes it easier to say no without losing information.
Define “minimum lovable” for the next milestone
“Minimum lovable” means the smallest experience that feels coherent and trustworthy for the target customer. It is not about polish everywhere; it is about removing the specific friction that prevents adoption.
Examples of “lovable” elements that often matter early:
- Clear output (the user can see the value quickly)
- Reliability in the core workflow (few errors, predictable results)
- Basic support path (even if manual) so users don’t get stuck
Examples of “polish” that often can wait:
- Multiple themes, advanced customization, complex settings
- Edge-case automation for rare scenarios
- Full self-serve admin panels if you can handle early changes manually
When the Evidence Is Inconclusive: What to Do Instead of Building
Sometimes your evidence does not clearly support any build direction. That is not failure; it is a signal that the next step should be a targeted learning move.
Signs you should not build yet
- High interest but low follow-through (people like the idea but won’t take the next step).
- Feedback is scattered across unrelated use cases.
- Strong pain exists, but customers disagree on what “success” looks like.
- Adoption depends on a complex integration you have not validated as necessary.
How to resolve uncertainty with a “disambiguation test”
A disambiguation test is designed to choose between two plausible directions. It is not a broad exploration.
Process:
- Pick the top two options you are torn between.
- Write the single question that would decide between them.
- Design a small test that produces behavioral or commitment evidence.
- Set a deadline (for example, one week) to avoid endless testing.
Example question: “Is integration X required for adoption, or will a manual import be acceptable if the outcome is fast?”
Test idea: Offer two pilot paths to similar customers: one with manual import supported by you, one with a lightweight connector. Measure completion and repeat usage. The goal is not perfection; it is clarity.
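A minimal sketch of summarizing such a test, assuming you record each pilot participant's path, completion, and repeat usage; the field names and values are illustrative:

```python
# Disambiguation test: compare two pilot paths on the behavior that decides
# between them. Field names and values below are illustrative assumptions.
pilots = [
    {"path": "manual_import", "completed": True,  "returned_within_7d": True},
    {"path": "manual_import", "completed": True,  "returned_within_7d": False},
    {"path": "manual_import", "completed": False, "returned_within_7d": False},
    {"path": "connector",     "completed": True,  "returned_within_7d": True},
    {"path": "connector",     "completed": True,  "returned_within_7d": True},
    {"path": "connector",     "completed": False, "returned_within_7d": False},
]

def summarize(path: str) -> tuple[float, float]:
    group = [p for p in pilots if p["path"] == path]
    completion = sum(p["completed"] for p in group) / len(group)
    repeat = sum(p["returned_within_7d"] for p in group) / len(group)
    return completion, repeat

for path in ("manual_import", "connector"):
    completion, repeat = summarize(path)
    print(f"{path:<14} completion={completion:.0%}  repeat_usage={repeat:.0%}")
```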
Turning the Decision Into an Execution Plan (Without Overbuilding)
Write a one-page build brief tied to evidence
Before you start building, write a short brief that connects evidence to scope. Keep it to one page so it stays sharp.
Build Brief (1 page)
Target customer: [specific segment]
Target use case: [specific workflow]
Next risk to reduce: [activation / retention / workflow fit / buyer approval]
Evidence summary:
- [3-5 bullets with strongest evidence]
Build scope (Now):
- [3-7 items max]
Out of scope (Not now):
- [list]
Success trigger (decision rule):
- [metric + threshold + time window]
Rollout plan:
- [who gets access, how you support them, how you collect signals]

Design the build to produce measurable events
If you cannot observe whether users reached value, you cannot learn. Ensure the build includes measurable events tied to the workflow, such as:
- Completed onboarding step
- Imported data / connected tool
- Generated first output
- Shared/exported output
- Returned within 7 days and repeated the workflow
These events are not “vanity metrics.” They are the breadcrumbs that tell you where the product is working or failing.
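A minimal sketch of recording these events so the funnel can be reconstructed later, assuming an append-only log keyed by user and event name; the storage choice (a local JSONL file) is an assumption, and any event store would work:

```python
# Minimal event log for the workflow milestones listed above.
# Appending to a local JSONL file is an assumption; any event store works.
import json
import time
from collections import Counter

EVENTS = (
    "completed_onboarding_step",
    "connected_tool",
    "generated_first_output",
    "shared_output",
    "returned_and_repeated_workflow",
)

def track(user_id: str, event: str, path: str = "events.jsonl") -> None:
    assert event in EVENTS, f"unknown event: {event}"
    record = {"user": user_id, "event": event, "ts": time.time()}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def funnel(path: str = "events.jsonl") -> Counter:
    # Count distinct users per event to see where the workflow breaks down.
    seen = {e: set() for e in EVENTS}
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            seen[rec["event"]].add(rec["user"])
    return Counter({e: len(users) for e, users in seen.items()})

track("user_42", "completed_onboarding_step")
track("user_42", "generated_first_output")
print(funnel())
```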
Plan for “manual support” as part of the build
Early on, manual support is often the fastest way to learn and succeed. Evidence-based building includes deciding what you will do manually to reduce scope while still delivering value.
Examples:
- Manually cleaning imported data instead of building a full parser
- Manually configuring a customer’s account instead of building a settings UI
- Manually generating a report template while you learn what format buyers need
The key is to track what you do manually, because repeated manual tasks become candidates for automation later, once evidence proves they matter.
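A minimal sketch of tracking manual work so repeated tasks surface as automation candidates; the task names and the repetition threshold are illustrative assumptions:

```python
# Log every manual intervention; tasks repeated often become automation candidates.
# Task names and the repetition threshold are illustrative assumptions.
from collections import Counter

manual_log = []

def log_manual_task(task: str, customer: str, minutes: int) -> None:
    manual_log.append({"task": task, "customer": customer, "minutes": minutes})

def automation_candidates(min_repeats: int = 5):
    counts = Counter(entry["task"] for entry in manual_log)
    minutes = Counter()
    for entry in manual_log:
        minutes[entry["task"]] += entry["minutes"]
    # Return (task, times performed, total minutes spent) for frequent tasks.
    return [(task, n, minutes[task]) for task, n in counts.most_common() if n >= min_repeats]

for i in range(6):
    log_manual_task("clean imported CSV", f"customer_{i}", minutes=20)
log_manual_task("configure account", "customer_1", minutes=45)

for task, repeats, total_minutes in automation_candidates():
    print(f"{task}: done {repeats} times, {total_minutes} minutes total")
```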
Practical Examples of Evidence-Based “What to Build Next” Decisions
Example 1: Choosing between two features
Scenario: You have a prototype for a personal finance tool. Users like it, but you must choose what to build first: bank syncing or budgeting categories.
- Evidence: In usage tests, most drop-off happens at manual transaction entry. Several users say, “I won’t keep up with this unless it syncs.” A few power users want categories, but they are willing to start with simple tagging.
- Decision: Build bank syncing (or the simplest reliable import path) first because it reduces workflow friction and increases the chance of repeat usage.
- Smallest build slice: Support one bank aggregator or one import method, plus a basic “first dashboard” output.
Example 2: Choosing a segment based on adoption speed
Scenario: You are building a tool that automates social media reporting. Agencies and in-house marketing teams both show interest.
- Evidence: Agencies can decide quickly and have a repeated monthly workflow; in-house teams require manager approval and custom templates.
- Decision: Build for agencies first to validate retention and willingness to pay with a repeatable workflow.
- Smallest build slice: One report template, one data source integration, and export to PDF/Slides.
Example 3: Deciding not to build yet
Scenario: You are considering building an AI assistant for customer support. Many people say it is “cool,” but few can describe a specific workflow they would trust it with.
- Evidence: Interest is high, but customers hesitate when asked what they would delegate and what risks they fear. No one commits to a pilot without strict controls.
- Decision: Do not build a full assistant. Run a targeted test focused on one narrow task with clear guardrails (for example, drafting responses that require approval).
- Next build slice: A constrained drafting tool with approval workflow, not autonomous sending.