From Idea to Tested Core Loop
A prototype is a fast, disposable build meant to answer one question: “Is this fun and controllable on a phone?” A vertical slice is a small, polished section of the game that answers a different set of questions: “Can we ship this quality level with our art style, UI, and performance targets?”
Think of the prototype as mechanics validation and the vertical slice as production validation. Both are about reducing risk before you commit to full content production.
(1) Defining a Prototype (Goal: Prove Fun and Control Feel)
What a prototype should prove
- Core loop clarity: the player understands what to do within seconds.
- Moment-to-moment feel: movement, aiming, timing, and feedback feel responsive and learnable.
- Decision density: the player makes meaningful choices at the cadence you intend (e.g., every 2–5 seconds).
- Failure and recovery: losing is readable and restarting is frictionless.
Define a single prototype question
Write one sentence that the prototype must answer. Examples:
- “Is one-thumb movement + auto-aim satisfying for 60–90 second runs?”
- “Does swipe-to-dodge create a skill ceiling without confusing new players?”
- “Can players understand merge-and-upgrade decisions in under 30 seconds?”
Prototype scope rules
- One environment, one character, one enemy type (or equivalent minimal set).
- Temporary visuals are fine, but feedback must be clear enough to judge feel.
- No meta progression required unless the core loop depends on it.
- Timebox: aim for days, not weeks.
Step-by-step: Build a prototype that answers the question
- Write the loop in 3 verbs (e.g., “move → collect → upgrade” or “aim → shoot → reposition”).
- Implement the loop end-to-end so the player can complete a run (even if it’s 30 seconds).
- Add only the feedback needed to judge feel: hit confirmation, damage taken, success/fail state, restart.
- Expose tuning knobs (speed, cooldowns, spawn rate, friction) via a simple config file or debug panel; a minimal config sketch follows this list.
- Run 10 rapid tuning passes with a consistent test script (same level, same duration) and record changes.
- Test on real devices early to validate responsiveness and comfort (emulators hide problems).
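As one way to implement those tuning knobs, the sketch below (Kotlin; the file format, key names, and default values are all illustrative assumptions) reads feel parameters from a plain key=value file so you can retune between runs without a rebuild:

```kotlin
import java.io.File
import java.util.Properties

// Hypothetical tuning knobs, read from a key=value text file.
data class Tuning(
    val moveSpeed: Float,
    val dashCooldownS: Float,
    val spawnRatePerS: Float,
    val friction: Float
)

fun loadTuning(path: String): Tuning {
    val p = Properties().apply { File(path).inputStream().use { load(it) } }
    fun f(key: String, default: Float) = p.getProperty(key)?.toFloatOrNull() ?: default
    return Tuning(
        moveSpeed = f("move_speed", 4.5f),
        dashCooldownS = f("dash_cooldown_s", 1.2f),
        spawnRatePerS = f("spawn_rate_per_s", 0.8f),
        friction = f("friction", 0.92f)
    )
}
```

The same data class can back a debug panel: bind sliders to its fields and re-apply them on change.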
Prototype deliverables
- A build that launches fast and reaches gameplay in <10 seconds.
- A single “golden path” run that demonstrates the intended fun.
- A short tuning log: what changed, why, and the observed effect.
(2) Building a Vertical Slice (Goal: Prove Art Style, Performance Targets, and UI)
What a vertical slice should prove
- Art direction viability: the chosen style reads well on phones and is feasible to produce at scale.
- UI completeness for the slice: menus, HUD, and results screens for one loop, with final interaction patterns.
- Performance budget adherence: stable frame pacing under representative load for the slice.
- Production pipeline: assets move from source to game with predictable steps and naming/versioning.
- Quality bar: the slice looks and feels like a shippable game segment, not a demo.
Pick a slice that represents the whole game
A good slice is small but representative. It should include at least one instance of each “hard thing” you expect to repeat in production.
- If your game is level-based: one full level with start → mid challenge → end.
- If your game is run-based: one run with a mid-run upgrade choice and an end screen.
- If your game is PvE waves: one wave set including a miniboss or special enemy behavior.
Step-by-step: Plan the slice as a checklist
- List the slice screens (e.g., title, home, gameplay HUD, pause, results, settings).
- List the slice content units (one environment kit, one character set, one VFX set, one audio set).
- Define performance scenarios (e.g., “worst-case combat moment,” “UI-heavy moment,” “loading transition”).
- Lock the art style rules (palette, outline/no outline, lighting approach, VFX density) as a one-page reference.
- Implement the final UI flow for the slice (including error states like “no connection” if relevant to the slice).
- Measure and tune until the slice consistently meets your targets on your chosen test devices.
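To make "measure and tune" concrete, here is a minimal frame-pacing probe (a Kotlin sketch; the 16.7 ms budget and 1.5x spike threshold are placeholder assumptions, not recommended targets). Call onFrame() once per frame during each performance scenario, then read the counters:

```kotlin
// Counts frames and flags those that blow past the frame-time budget.
class FramePacingProbe(private val budgetMs: Double = 16.7) {
    private var lastNs = 0L
    var frames: Int = 0
        private set
    var spikes: Int = 0 // frames slower than 1.5x the budget
        private set
    var worstMs: Double = 0.0
        private set

    fun onFrame(nowNs: Long = System.nanoTime()) {
        if (lastNs != 0L) {
            val deltaMs = (nowNs - lastNs) / 1_000_000.0
            frames++
            if (deltaMs > budgetMs * 1.5) spikes++
            if (deltaMs > worstMs) worstMs = deltaMs
        }
        lastNs = nowNs
    }
}
```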
Vertical slice “polish” that matters
- First 30 seconds: the player should reach interactive gameplay quickly with minimal friction.
- Readable combat/interaction: effects support clarity rather than obscuring it.
- Consistent UI language: buttons, panels, and transitions behave predictably.
- Stable frame pacing: avoid spikes during common actions (spawning, opening menus, end-of-run screens).
(3) Playtesting Methods Specific to Mobile
Design playtests around real mobile contexts
Mobile play happens in short bursts, often one-handed, with distractions and variable lighting. Your tests should intentionally simulate these realities rather than only “quiet room, two hands, perfect attention.”
Thumb reach and grip tests
Goal: confirm that critical actions are comfortable across grips (one-handed, two-handed) and across hand sizes.
- One-thumb test: ask players to play one-handed for 2 minutes. Observe missed taps, awkward stretches, and accidental touches.
- Grip switch test: mid-session, ask them to switch hands (right to left). Note if the game becomes significantly harder.
- Edge interaction test: watch for discomfort when frequent actions require reaching corners repeatedly.
Practical method: create a simple observation sheet with columns for “missed input,” “hand reposition,” “complaint,” and “workaround.”
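A filled-in sheet (every entry here is purely illustrative) might look like:

| Tester | Missed input | Hand reposition | Complaint | Workaround |
|---|---|---|---|---|
| T1 | 3 (dash button) | 2 | "pause is too far" | switched to two hands |
| T2 | 0 | 5 | none | tilted phone toward thumb |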
Session length validation
Goal: confirm that a typical session fits your intended time window and feels complete.
- Cold start to first meaningful outcome: measure time from app launch to first win/lose/reward moment.
- Natural stop point: ask players to stop whenever they feel “done.” Record when and why.
- Repeat session test: have them play 3 sessions separated by 5–10 minutes to see if the loop remains inviting.
Readability outdoors and in motion
Goal: ensure the game remains playable in bright environments and while attention is split.
- Outdoor readability test: test in daylight (or near a bright window) and note what becomes hard to read: timers, health, enemy tells, text.
- Glance test: ask players to look away briefly (simulate a notification or distraction) and return. Can they re-orient in 1–2 seconds?
- Audio-off test: many players mute. Verify that critical information is not audio-only.
How to run a mobile playtest session (step-by-step)
- Recruit 5–8 testers who match your target audience (not your dev team).
- Prepare 2 device types (one smaller, one larger) and ensure builds are identical.
- Give a single prompt: “Play as you normally would.” Avoid teaching.
- Observe silently for 2–3 minutes, then ask: “What do you think you’re trying to do?”
- Run one focused test (one-handed, outdoor, audio-off, etc.).
- Collect a quick rating (1–7) for: control comfort, clarity, desire to play again.
- End with one question: “If you had this on your phone, when would you play it?”
Common mobile-specific failure signals
- Players frequently adjust grip or shift the phone to reach controls.
- Players pause to decipher UI during action moments.
- Players stop after one run even if they didn’t fail (loop lacks pull).
- Players misinterpret feedback because they’re playing in short glances.
(4) Instrumentation Concepts (Basic Analytics Events to Validate Funnel and Retention Signals)
Why instrument during prototype/slice
Playtests tell you why something feels off; instrumentation tells you how often it happens and where players drop. Keep it lightweight: a small set of events tied to decisions you’re actively making.
Principles for early analytics
- Every event should answer a question (avoid “tracking everything”).
- Use consistent naming and include a session identifier.
- Log context only when it changes interpretation (e.g., difficulty, control scheme, device class).
- Validate data quality by comparing logs to a recorded play session.
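One lightweight way to enforce consistent naming and a session identifier on every event is to centralize both (a Kotlin sketch; the names are illustrative, not a required schema):

```kotlin
// Central registry of event names so spellings can't drift between call sites.
object Events {
    const val APP_START = "app_start"
    const val GAMEPLAY_START = "gameplay_start"
    const val RUN_END = "run_end"
}

// Properties attached to every event; callers merge in event-specific ones.
fun baseProps(sessionId: String, buildVersion: String): MutableMap<String, Any> =
    mutableMapOf("session_id" to sessionId, "build_version" to buildVersion)
```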
Minimal event set for a mobile core loop
| Question | Event(s) | Key properties |
|---|---|---|
| Do players reach gameplay quickly? | app_start, gameplay_start | time_to_gameplay_ms, device_model, build_version |
| Do they understand the objective? | tutorial_step (if any), first_action | step_id, time_from_start_ms |
| Where do they fail? | run_end | result (win/lose/quit), time_alive_s, cause |
| Are upgrades/choices engaging? | upgrade_offered, upgrade_chosen | option_ids, chosen_id, reroll_count |
| Do they come back? | session_start, session_end | session_index, time_since_last_session_s |
| Is the UI causing friction? | ui_open, ui_close, ui_error | screen_id, duration_ms, error_code |
Define a simple funnel for validation
Even without monetization tracking, you can validate whether the game’s loop “holds.” A basic early funnel might be:
app_start → gameplay_start → first_success_moment → run_end → start_next_run
Key metrics to compute from this funnel:
- Time to gameplay (median and 90th percentile).
- Completion rate of the first run (how many quit before the end).
- Replay intent proxy: percentage who start a second run within the same session.
- Early return signal: percentage who open the app again within 24 hours (for external tests).
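As a sketch of how these metrics fall out of the raw event log (Kotlin; the Event shape and field names are assumptions for illustration, not a required schema):

```kotlin
// One row per logged event; tMs is a millisecond timestamp.
data class Event(val sessionId: String, val name: String, val tMs: Long)

// Time to gameplay per session: app_start -> gameplay_start delta.
fun timeToGameplayMs(events: List<Event>): List<Long> =
    events.groupBy { it.sessionId }.values.mapNotNull { evs ->
        val start = evs.firstOrNull { it.name == "app_start" }?.tMs
        val play = evs.firstOrNull { it.name == "gameplay_start" }?.tMs
        if (start != null && play != null) play - start else null
    }

// Replay intent proxy: share of play sessions with a second gameplay_start.
fun replayIntent(events: List<Event>): Double {
    val runsPerSession = events.groupBy { it.sessionId }
        .mapValues { (_, evs) -> evs.count { it.name == "gameplay_start" } }
        .values.filter { it > 0 }
    if (runsPerSession.isEmpty()) return 0.0
    return runsPerSession.count { it >= 2 }.toDouble() / runsPerSession.size
}

// Simple index-based percentile: p = 0.5 for the median, 0.9 for the 90th.
fun percentile(values: List<Long>, p: Double): Long? {
    if (values.isEmpty()) return null
    val sorted = values.sorted()
    return sorted[((sorted.size - 1) * p).toInt()]
}
```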
Implementation sketch (engine-agnostic)
Keep instrumentation behind a small interface so you can swap providers later without rewriting gameplay code. The sketch below is written as Kotlin, with Build, Device, and Timer standing in as placeholder helpers for your own services.
```kotlin
// Minimal analytics wrapper: gameplay code depends only on this
// interface, so the concrete provider can be swapped later.
interface Analytics {
    fun track(eventName: String, props: Map<String, Any>)
}

// Example usage, given an injected `analytics: Analytics` instance;
// Build, Device, and Timer are placeholders, not a real API.
analytics.track("gameplay_start", mapOf(
    "build_version" to Build.version,
    "device_tier" to Device.tier,
    "time_to_gameplay_ms" to Timer.sinceAppStartMs()
))
analytics.track("run_end", mapOf(
    "result" to result,
    "time_alive_s" to timeAlive,
    "cause" to cause,
    "difficulty" to difficultyId
))
```
(5) Exit Criteria: What Must Be True Before Content Production Starts
Why exit criteria matter
Content production multiplies effort. If the core loop, UI flow, or performance characteristics are not validated, you risk producing large amounts of content that later must be reworked or discarded.
Prototype exit criteria (mechanics validation)
- Core loop is understood without explanation by most target testers within the first minute.
- Controls feel comfortable across common grips; no repeated “stretch” complaints for core actions.
- Players voluntarily replay at least once in the same session during tests (a strong early fun signal).
- Difficulty curve is tunable via exposed parameters (you can make it easier/harder without code surgery).
- Top 3 confusion points are identified with a concrete plan to address them (UI, feedback, rules clarity).
Vertical slice exit criteria (production validation)
- Art style is locked with a reference pack and rules that artists can follow consistently.
- UI flow for the slice is complete (including pause, results, and at least one settings interaction relevant to play).
- Performance targets are met in the slice’s worst-case moments on your chosen representative devices.
- Loading and transitions are acceptable for the slice (no long stalls during common actions).
- Pipeline is proven: you can create, import, iterate, and ship an asset with predictable steps and review points.
Playtest and analytics exit criteria (evidence-based readiness)
- Playtest results are consistent: the same issues appear across testers (not random noise), and fixes measurably improve outcomes.
- Instrumentation is trustworthy: events fire once, properties are correct, and session boundaries make sense.
- Funnel drop-offs are explainable with observed behavior (e.g., players quit at upgrade screen because choices are unclear).
- Retention signals exist in small tests: players express intent to return, and repeat sessions show stable comprehension.
Production greenlight checklist (printable)
- Prototype question answered with evidence (notes + builds).
- Vertical slice demonstrates the intended quality bar.
- Known risks are listed with owners and mitigation plans.
- Core loop tuning parameters documented.
- Analytics event list documented and versioned.
- Backlog for “must-fix before scale” issues is agreed and scheduled.