Marketing Analytics for Beginners: Measure What Matters and Make Better Decisions

Dashboards That Answer Questions: KPIs, Breakdowns, and Filters

Chapter 7

Estimated reading time: 11 minutes

Design dashboards to answer decisions, not to display everything

A useful dashboard is a decision tool: it should help someone answer a specific question quickly (e.g., “Should we increase spend on Campaign A?” “Is the drop in sign-ups a tracking issue or a real performance issue?”). A “data dump” dashboard does the opposite: it shows many charts with no clear hierarchy, forcing the viewer to hunt for meaning.

Before building, write the dashboard’s purpose as a short sentence and list the decisions it supports. Examples:

  • Budget decision: “Where should we shift spend this week?”
  • Growth decision: “Which funnel stage is constraining growth?”
  • Execution decision: “Which creative and landing page combination should we scale?”

Then design the dashboard in layers so the viewer moves from what happened (KPIs) to why (diagnostics) to what to do next (drill-down).

Layer 1: KPI layer (North Star + 3–6 supporting metrics)

The KPI layer is the “at-a-glance” section. It should fit on one screen without scrolling and answer: Are we on track? Keep it small: one North Star metric and 3–6 supporting metrics that explain it.

Step-by-step: build the KPI layer

  1. Choose the North Star metric that reflects the outcome your team optimizes weekly (e.g., trials started, qualified leads, first purchases, activated users). This should be a single number for the selected date range.
  2. Add 3–6 supporting metrics that are actionable and explain movement in the North Star. A common pattern is to include volume, efficiency, and quality signals.
  3. Include time comparisons next to each KPI (WoW/MoM/YoY) and one smoothing option (rolling average) to reduce noise.
  4. Show targets when you have them (goal line or variance to target). If you don’t have targets, show historical benchmarks (e.g., last 8-week median).
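To make the arithmetic behind these steps concrete, here is a minimal sketch in Python with pandas, assuming a hypothetical daily North Star series; the numbers, the 7-day range, and the weekly benchmark window are illustrative, not a prescribed implementation.

import pandas as pd

# Hypothetical daily North Star series (e.g., trials started per day);
# in practice this comes from your warehouse or analytics export.
daily = pd.Series(
    [400, 420, 390, 450, 430, 410, 440, 460, 455, 470, 480, 465, 490, 500],
    index=pd.date_range("2024-01-01", periods=14, freq="D"),
    name="trials",
)

# Tile value: a single number for the selected date range (last 7 days here).
current = int(daily.iloc[-7:].sum())
prior = int(daily.iloc[-14:-7].sum())

# Delta vs prior period, as both absolute and percent change.
abs_change = current - prior
pct_change = abs_change / prior * 100
print(f"Trials (last 7 days): {current:,}")
print(f"Delta vs prior 7 days: {abs_change:+,} ({pct_change:+.1f}% WoW)")

# Benchmark when no target exists: the median of recent weekly totals
# (with real data, use the last 8 complete weeks as the text suggests).
weekly = daily.resample("W").sum()
print(f"Weekly benchmark (median): {weekly.median():,.0f}")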

Example KPI tile layout (what to show in each tile)

  • Primary value (e.g., 12,480 trials)
  • Delta vs prior period (e.g., +6.2% WoW)
  • Delta vs same period last year when seasonality matters (e.g., -1.1% YoY)
  • Optional sparkline (last 30–90 days) to show trend without adding a full chart

How to choose WoW vs MoM vs YoY

  • WoW (week over week): best for operational cadence and fast feedback loops (paid spend shifts, creative tests). Use when volume is high enough that weekly noise is manageable.
  • MoM (month over month): better for B2B cycles, lower volume, or when weekly seasonality (weekday effects) makes WoW misleading.
  • YoY (year over year): best when seasonality is strong (holidays, back-to-school) or when comparing across months would mislead.

Tip: If your business has strong weekday patterns, compare last 7 days vs previous 7 days rather than “this calendar week vs last calendar week.”
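This works because both 7-day windows contain each weekday exactly once, so weekday effects cancel. A sketch of the comparison, with an invented sign-ups series (the weekday/weekend levels are made up):

import numpy as np
import pandas as pd

# Hypothetical daily sign-ups with a strong weekday pattern (weekends dip).
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=28, freq="D")
base = np.where(idx.dayofweek < 5, 500, 300)  # weekdays higher than weekends
daily = pd.Series(base + rng.integers(-30, 30, len(idx)), index=idx, name="signups")

# Last 7 days vs previous 7 days: each weekday appears once per window.
last_7 = daily.iloc[-7:]
prev_7 = daily.iloc[-14:-7]
cur, prev = int(last_7.sum()), int(prev_7.sum())
print(f"Last 7: {cur:,} vs previous 7: {prev:,} ({(cur - prev) / prev:+.1%})")

# Day-of-week aligned deltas show which days drove the change.
aligned = pd.DataFrame(
    {"last_7": last_7.to_numpy(), "prev_7": prev_7.to_numpy()},
    index=last_7.index.day_name(),
)
aligned["delta"] = aligned["last_7"] - aligned["prev_7"]
print(aligned)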

Rolling averages: when and how to use them

Rolling averages smooth short-term spikes so you can see the underlying trend.

  • 7-day rolling average: common for daily dashboards; reduces weekday effects.
  • 28-day rolling average: useful when volume is low or when you want a “monthly-like” trend without waiting for month-end.

Show both when needed: a thin line for daily values and a thicker line for the rolling average. Avoid smoothing in KPI tiles if it hides important short-term changes; instead, include smoothing in the trend chart directly under the KPI tiles.
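Both smoothers are a single rolling-mean call in pandas; the series below is synthetic, and the window sizes follow the text:

import numpy as np
import pandas as pd

# Synthetic noisy daily series standing in for a real metric.
rng = np.random.default_rng(1)
idx = pd.date_range("2024-01-01", periods=90, freq="D")
daily = pd.Series(400 + rng.normal(0, 25, len(idx)).cumsum() * 0.2
                  + rng.normal(0, 20, len(idx)), index=idx, name="trials")

trend = pd.DataFrame({
    "daily": daily,                        # thin line in the chart
    "avg_7d": daily.rolling(7).mean(),     # reduces weekday effects
    "avg_28d": daily.rolling(28).mean(),   # "monthly-like" trend
})
print(trend.tail())

# Plot pattern from the text: thin line for daily values, thicker line
# for the rolling average, e.g. with matplotlib:
#   ax = trend["daily"].plot(linewidth=0.8, color="gray")
#   trend["avg_7d"].plot(ax=ax, linewidth=2.0)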

Annotations: make context visible

Annotations turn “mystery spikes” into understandable events. Add markers for:

  • Product launches or onboarding changes
  • Budget increases/decreases
  • Major creative refreshes
  • Pricing or offer changes
  • Tracking changes (tag updates, consent banner changes)

Each annotation should include: date, event name, and a short note (e.g., “Budget +20% on Search Brand”). Keep annotations consistent across charts so viewers can connect cause and effect.
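One way to enforce that consistency is to define the event list once and draw it on every time-series chart with the same helper. A sketch with matplotlib; the dates and events are invented:

import matplotlib.pyplot as plt
import pandas as pd

# One shared annotation list for the whole dashboard: date + short note.
ANNOTATIONS = [
    ("2024-02-05", "Budget +20% on Search Brand"),
    ("2024-02-19", "Onboarding flow change"),
    ("2024-03-04", "Consent banner update"),
]

def annotate(ax):
    """Draw the same event markers on any time-series axis."""
    for date, note in ANNOTATIONS:
        x = pd.Timestamp(date)
        ax.axvline(x, color="gray", linestyle="--", linewidth=0.8)
        ax.text(x, ax.get_ylim()[1], note, rotation=90, va="top", fontsize=7)

# Usage: plot any metric, then apply the shared annotations.
idx = pd.date_range("2024-01-15", periods=60, freq="D")
series = pd.Series(range(60), index=idx)
fig, ax = plt.subplots()
ax.plot(series.index, series.values)
annotate(ax)
plt.show()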

Layer 2: Diagnostic layers (channel, campaign, funnel stage)

Once the KPI layer signals a change, the diagnostic layer answers: Where is it coming from? This layer should be organized into three common breakdowns: channel, campaign, and funnel stage. The goal is to isolate the driver quickly without forcing the viewer to click through many pages.

Diagnostic layer A: Channel view (mix and contribution)

Use a channel breakdown to identify whether the KPI change is driven by mix shifts (more volume from one channel) or performance shifts (conversion changes within a channel).

Recommended elements:

  • Channel table with key metrics and deltas (current period vs prior period)
  • Share of total for the North Star (or its closest upstream metric) to see mix changes
  • Trend by channel for the North Star or a leading indicator

Example channel table:

Channel    North Star    WoW Δ    Share    Notes
Search     4,120         +9%      33%      Brand campaign relaunched
Social     3,050         -6%      24%      Creative fatigue suspected
Email      1,980         +2%      16%      Stable

Design rule: Sort by impact, not alphabetically. For example, sort by absolute change in the North Star (or contribution to change) so the biggest driver is on top.
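A sketch of that sorting in pandas, with hypothetical channel numbers; contribution_pct expresses each channel's share of the total period-over-period movement:

import pandas as pd

# Hypothetical channel totals for the current and prior period.
channels = pd.DataFrame({
    "channel": ["Search", "Social", "Email", "Referral"],
    "current": [4120, 3050, 1980, 610],
    "prior":   [3780, 3245, 1940, 630],
})
channels["abs_change"] = channels["current"] - channels["prior"]
channels["pct_change"] = channels["abs_change"] / channels["prior"] * 100
channels["share"] = channels["current"] / channels["current"].sum() * 100

# Contribution to change: which channel explains most of the movement.
channels["contribution_pct"] = (
    channels["abs_change"] / channels["abs_change"].sum() * 100
)

# Sort by impact (absolute change), not alphabetically.
print(channels.sort_values("abs_change", key=abs, ascending=False))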

Diagnostic layer B: Campaign view (what changed inside a channel)

Campaign breakdowns help you find whether a channel-level shift is driven by a few campaigns or broadly distributed.

  • Use a ranked bar chart for “top movers” (largest positive/negative change vs prior period).
  • Include filters for channel and objective so the list stays relevant.
  • Show spend and outcome side-by-side in a table to detect “spend up, outcomes flat” patterns.

Top movers pattern: Create two ranked lists: “Biggest gains” and “Biggest drops.” This prevents a single mixed list from hiding negative movers below the fold.
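A minimal sketch of the two-list pattern, using invented campaign figures:

import pandas as pd

# Hypothetical campaign outcomes for two periods.
campaigns = pd.DataFrame({
    "campaign": ["Brand Search", "Prospecting A", "Retarget B", "Video C", "Promo D"],
    "current": [1200, 540, 880, 310, 150],
    "prior":   [1050, 700, 860, 420, 140],
})
campaigns["delta"] = campaigns["current"] - campaigns["prior"]

# Two separate ranked lists so negative movers can't hide below the fold.
gains = campaigns[campaigns["delta"] > 0].nlargest(5, "delta")
drops = campaigns[campaigns["delta"] < 0].nsmallest(5, "delta")
print("Biggest gains:\n", gains[["campaign", "delta"]])
print("Biggest drops:\n", drops[["campaign", "delta"]])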

Diagnostic layer C: Funnel stage view (where the drop occurs)

Funnel-stage diagnostics answer: Is the issue acquisition, activation, or conversion? Use a funnel chart carefully (see chart guidance below) or, often better, use a stage-by-stage trend with conversion rates between stages.

Recommended elements:

  • Stage counts (e.g., visits → sign-ups → activated → paid)
  • Stage conversion rates (e.g., sign-up rate, activation rate)
  • Time trend for each stage to see where the divergence begins

Practical diagnostic approach: If the North Star is down, check upstream stages in order. The first stage that deviates from baseline is usually where you should investigate next.
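That walk can be automated. A sketch that compares stage conversion rates to a baseline and flags the first stage drifting beyond a tolerance; the stage counts and the 10% threshold are assumptions for illustration:

import pandas as pd

# Hypothetical stage counts, ordered top of funnel first; the baseline
# could be the last 8-week median suggested earlier.
funnel = pd.DataFrame({
    "stage":    ["visits", "signups", "activated", "paid"],
    "current":  [52000, 4100, 2050, 390],
    "baseline": [51000, 4900, 2450, 465],
})

# Stage-to-stage conversion rates for both periods.
funnel["cr_current"] = funnel["current"] / funnel["current"].shift(1)
funnel["cr_baseline"] = funnel["baseline"] / funnel["baseline"].shift(1)

# Walk the funnel top-down and stop at the first deviating stage.
TOLERANCE = 0.10  # 10% relative drift; tune to your noise level
for _, row in funnel.iloc[1:].iterrows():
    drift = row["cr_current"] / row["cr_baseline"] - 1
    if abs(drift) > TOLERANCE:
        print(f"Investigate '{row['stage']}': conversion {drift:+.1%} vs baseline")
        break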

Layer 3: Drill-down views (audience segment, creative, landing page)

Drill-down views are for action: they help you decide what to change. Keep them one click away from diagnostics, and ensure they inherit the same date range and filters so comparisons remain consistent.

Drill-down A: Audience segment

Segment drill-downs answer: Who is driving the change? Common segments include device type, geography, new vs returning, and customer tier.

  • Use a heatmap table (segment × metric delta) to spot where changes concentrate.
  • Include a minimum volume threshold to avoid overreacting to tiny segments.

Guardrail: Always show segment volume next to performance. A segment with +40% improvement but tiny volume may not be worth prioritizing.
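A sketch of the guardrail in pandas; the segments, the deltas, and the 1,000-session threshold are all hypothetical:

import pandas as pd

# Hypothetical segment performance: conversion-rate change vs prior period.
segments = pd.DataFrame({
    "segment": ["mobile/US", "desktop/US", "mobile/UK", "desktop/UK", "tablet/US"],
    "sessions": [18000, 9500, 4200, 2100, 180],
    "cr_delta_pct": [-3.1, 0.4, -0.8, 1.2, 41.0],
})

# Guardrail: hide segments below a minimum volume so the tiny tablet
# segment's +41% swing doesn't dominate the view.
MIN_SESSIONS = 1000
visible = segments[segments["sessions"] >= MIN_SESSIONS]

# Sort by absolute delta, keeping volume visible alongside performance.
print(visible.sort_values("cr_delta_pct", key=abs, ascending=False))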

Drill-down B: Creative

Creative drill-downs answer: Which ads are working or fatiguing?

  • Use a scatter plot to compare efficiency vs volume (e.g., outcome rate on one axis, spend or impressions on the other). This helps identify “scale candidates” (good performance + meaningful volume).
  • Use a small multiples layout for creative thumbnails (if available) with key metrics beneath each; keep it scannable.

Practical tip: Add a filter for “launched in last X days” to separate new creatives from mature ones; performance patterns differ.
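Both the quadrant logic and the recency filter are a few lines of pandas; the creative names, metrics, and median-based quadrant cuts below are illustrative choices:

import pandas as pd

# Hypothetical creative-level data: spend as the volume axis,
# outcome rate as the efficiency axis.
creatives = pd.DataFrame({
    "creative": ["UGC v3", "Static A", "Video B", "Carousel C", "Static D"],
    "spend": [5200, 3100, 8800, 950, 2400],
    "outcome_rate": [0.042, 0.018, 0.027, 0.051, 0.022],
    "days_live": [12, 60, 45, 6, 90],
})

# Quadrant lines at the medians split the scatter into four groups.
vol_cut = creatives["spend"].median()
eff_cut = creatives["outcome_rate"].median()

# "Scale candidates": good performance AND meaningful volume.
scale = creatives[(creatives["spend"] >= vol_cut)
                  & (creatives["outcome_rate"] >= eff_cut)]
print("Scale candidates:\n", scale)

# "Launched in last X days" filter to separate new from mature creatives.
print("New creatives:\n", creatives[creatives["days_live"] <= 14])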

Drill-down C: Landing page

Landing page drill-downs answer: Where does traffic convert best, and where is it leaking?

  • Use a table with landing page, sessions, conversion rate, and change vs prior period.
  • Add a trend line for the top 5 pages by volume to see if a single page regression explains the KPI drop.
  • Include page load time or error rate if available, because technical issues often masquerade as marketing performance changes.

Filters and controls: make exploration safe and consistent

Filters are powerful, but too many create confusion. Use a small set of global filters that apply to the whole dashboard, then add local filters only where needed.

Recommended global filters

  • Date range (with presets: last 7 days, last 28 days, MTD, QTD)
  • Channel
  • Region (if relevant)
  • Device (if relevant)

Filter design rules

  • Default to “All” and show the current filter state clearly at the top.
  • Limit combinations that produce misleading comparisons (e.g., filtering to a tiny campaign while showing YoY deltas).
  • Lock definitions so the same metric means the same thing across views (avoid “conversion” changing by page).

Chart selection: pick the simplest chart that answers the question

Choose chart types based on the question being asked, not on what looks impressive.

Common questions → best chart types

  • “Is it up or down over time?” Line chart (add rolling average line if noisy).
  • “Which items are biggest?” Sorted horizontal bar chart.
  • “What changed the most vs last period?” Diverging bar chart (positive to the right, negative to the left) or ranked delta table.
  • “How is performance distributed?” Histogram or box plot (useful for creative or landing page performance spread).
  • “Is there a tradeoff between two metrics?” Scatter plot with quadrant lines (e.g., high volume/high efficiency).
  • “How does composition change?” Stacked area chart (use sparingly; limit to a few categories).

Avoid misleading visuals

  • Dual axes: They often imply correlation where none exists. If you must use them, label clearly and consider separate charts aligned by time instead.
  • Truncated y-axes: Truncating can exaggerate changes. For bar charts, start at zero. For line charts, truncation can be acceptable if clearly labeled and if the goal is to show small variation—use caution.
  • 3D charts: They distort perception and reduce readability.
  • Too many colors: Use color to encode meaning (e.g., channel) and reserve highlight color for “focus” items.

Practical check: If a chart can be misread in 3 seconds, simplify it (fewer series, clearer labels, or split into small multiples).

Time comparisons done right: align periods and show both absolute and relative change

Time comparisons should help the viewer understand magnitude and significance.

Best practices

  • Align by day-of-week when comparing short periods (last 7 vs previous 7) to avoid weekday bias.
  • Show absolute change (e.g., -320 sign-ups) alongside percent change (e.g., -4.8%). Percent alone can mislead when baselines are small (see the helper sketched after this list).
  • Use consistent comparison logic across the dashboard (don’t mix WoW for one KPI and MoM for another unless there’s a reason).
  • Call out partial periods (e.g., month-to-date) so viewers don’t compare incomplete data to complete periods without realizing it.
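A small helper keeps both forms together and guards against empty baselines; the numbers are invented to mirror the example above:

def format_change(current: float, prior: float, label: str) -> str:
    """Show absolute and percent change side by side."""
    abs_change = current - prior
    if prior == 0:
        return f"{label}: {abs_change:+,.0f} (no baseline)"
    return f"{label}: {abs_change:+,.0f} ({abs_change / prior:+.1%})"

print(format_change(6320, 6640, "Sign-ups WoW"))    # -320 (-4.8%)
print(format_change(7, 5, "Enterprise demos WoW"))  # +2 (+40.0%), tiny base!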

Dashboard wireframe outline (replicable template)

Use this wireframe as a starting point. It follows the layered approach and keeps the “decision path” clear.

HEADER (sticky)
--------------------------------------------------------------------
Title: Performance Overview (Decision: budget + optimization)
Global filters: Date range | Channel | Region | Device
Comparison toggle: WoW / MoM / YoY   Rolling avg toggle: Off / 7D / 28D
--------------------------------------------------------------------
SECTION 1: KPI LAYER (one screen, no scroll)
[KPI Tile 1: North Star]  value | Δ vs prior | Δ YoY | sparkline
[KPI Tile 2] Supporting metric 1
[KPI Tile 3] Supporting metric 2
[KPI Tile 4] Supporting metric 3
[KPI Tile 5] Supporting metric 4 (optional)
[KPI Tile 6] Supporting metric 5 (optional)
Under tiles: Trend line (North Star) with rolling average + annotations
--------------------------------------------------------------------
SECTION 2: DIAGNOSTICS (where is it coming from?)
Row A: Channel diagnostics
- Left: Channel contribution table (sorted by impact)
- Right: Trend by channel (top 3–5 only)
Row B: Campaign diagnostics
- Left: Top movers (gains) ranked bars
- Right: Top movers (drops) ranked bars
- Below: Campaign table with spend + outcome + Δ
Row C: Funnel stage diagnostics
- Left: Stage counts + stage conversion rates (table)
- Right: Stage trends (small multiples or multi-line with limited series)
--------------------------------------------------------------------
SECTION 3: DRILL-DOWNS (what to change?)
Tabs: [Audience] [Creative] [Landing Pages]
Audience tab: heatmap table + volume thresholds
Creative tab: scatter (efficiency vs volume) + creative list (top/bottom)
Landing tab: landing page table + top page trends + tech metrics (if available)
--------------------------------------------------------------------
FOOTER: Definitions & data notes
- Metric definitions (short)
- Data freshness timestamp
- Known issues / tracking notes

Step-by-step: implement the wireframe in your tool

  1. Create the header controls: date range preset selector, comparison toggle (WoW/MoM/YoY), rolling average toggle, and 2–4 global filters.
  2. Build KPI tiles: ensure each tile uses the same date range and comparison logic; add sparklines for quick trend context.
  3. Add the North Star trend chart: daily granularity, optional rolling average, and annotation markers for launches and budget shifts.
  4. Build channel diagnostics: a table sorted by impact plus a trend chart limited to top channels to avoid spaghetti lines.
  5. Build campaign diagnostics: top movers bars (gains/drops) and a detailed table for validation.
  6. Build funnel diagnostics: stage table + stage trend charts; ensure stage definitions are consistent with your tracking.
  7. Add drill-down tabs: audience heatmap, creative scatter, landing page table; each should inherit global filters.
  8. QA for misleading visuals: check axes start points, remove dual axes, limit series count, and verify that deltas match the selected comparison.
  9. Add data notes: refresh time, known tracking changes, and any exclusions that affect interpretation.

Now answer the exercise about the content:

Which dashboard design best supports making decisions quickly when a key metric changes?

A decision-focused dashboard should move from what happened (KPIs) to why (diagnostics) to what to do next (drill-down), while keeping filters and definitions consistent.

Next chapter

Interpreting Results Without Vanity Metrics: Context, Benchmarks, and Cohorts
