Attribution is a Rulebook, Not “Truth”
Attribution answers a practical question: which marketing touchpoints get credit for a conversion? The key idea is that attribution is not a factual reconstruction of causality; it is a set of rules for assigning credit. Different rulebooks produce different “winners,” which then changes budget decisions, creative priorities, and what teams optimize.
Because attribution is a rulebook, you should treat it like a decision lens: pick the model that best matches the decision you’re trying to make (e.g., “What closes deals?” vs “What creates demand?”), and be explicit about the biases it introduces.
One Customer Journey, Four Attribution Models
Use a single journey example to see how credit shifts. Assume one purchase worth $100 and the customer touched four channels in this order:
- Day 1: Paid Social (prospecting ad) — first discovery
- Day 3: Organic Search — reads a comparison article
- Day 6: Email — clicks a promo/reminder
- Day 7: Paid Search (branded) — searches brand name and buys
We will allocate the same $100 conversion value under four common models: last-click, first-click, linear, and time-decay.
Model 1: Last-Click Attribution
Rule: 100% of the credit goes to the final touchpoint before conversion.
Result for our journey:
| Channel | Credit | Value Credited |
|---|---|---|
| Paid Social | 0% | $0 |
| Organic Search | 0% | $0 |
| Email | 0% | $0 |
| Paid Search (Branded) | 100% | $100 |
What decisions it pushes: invest in channels that “close” (often branded search, retargeting, email). It tends to undervalue awareness and consideration activity.
Model 2: First-Click Attribution
Rule: 100% of the credit goes to the first touchpoint that started the journey.
Result for our journey:
| Channel | Credit | Value Credited |
|---|---|---|
| Paid Social | 100% | $100 |
| Organic Search | 0% | $0 |
| Email | 0% | $0 |
| Paid Search (Branded) | 0% | $0 |
What decisions it pushes: invest in top-of-funnel acquisition and creative that generates first interest. It tends to undervalue channels that nurture and capture existing demand.
Model 3: Linear Attribution
Rule: Split credit equally across all touchpoints in the journey.
There are 4 touches, so each gets 25%.
| Channel | Credit | Value Credited |
|---|---|---|
| Paid Social | 25% | $25 |
| Organic Search | 25% | $25 |
| Email | 25% | $25 |
| Paid Search (Branded) | 25% | $25 |
What decisions it pushes: balanced investment across the funnel and cross-channel coordination. It can over-credit “incidental” touches (e.g., a low-effort reminder email) and under-credit truly pivotal moments.
Model 4: Time-Decay Attribution
Rule: Give more credit to touches that happened closer to conversion, less to earlier touches. The exact weights vary by tool; the concept is consistent.
Example weights for this journey (illustrative): Day 1 = 10%, Day 3 = 20%, Day 6 = 30%, Day 7 = 40%.
| Channel | Timing | Credit | Value Credited |
|---|---|---|---|
| Paid Social | Day 1 | 10% | $10 |
| Organic Search | Day 3 | 20% | $20 |
| Email | Day 6 | 30% | $30 |
| Paid Search (Branded) | Day 7 | 40% | $40 |
What decisions it pushes: prioritize mid-to-late funnel activities while still acknowledging early demand creation. It’s often a pragmatic compromise when you believe recency matters but you don’t want a pure last-click view.
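The 10/20/30/40 split above is illustrative. Many tools instead derive time-decay weights from an exponential half-life; here is a minimal sketch of that formulation, assuming a 7-day half-life (the half-life value and variable names are assumptions, not any specific tool's defaults):

```python
# Exponential time-decay weights from a half-life (a common formulation).
# Assumptions: 7-day half-life; touches on Days 1, 3, 6, 7; purchase on Day 7.
HALF_LIFE_DAYS = 7
days_before_conversion = [6, 4, 1, 0]  # one entry per touch, in journey order

raw = [0.5 ** (d / HALF_LIFE_DAYS) for d in days_before_conversion]
weights = [w / sum(raw) for w in raw]  # normalize so weights sum to 1
print([round(w, 2) for w in weights])  # ≈ [0.18, 0.21, 0.29, 0.32]
```

Note these weights differ slightly from the illustrative table above; the point is the mechanism, not the exact numbers.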
How the “Best Channel” Changes Under Each Model
Using the same $100 conversion, the top credited channel becomes:
- Last-click: Paid Search (Branded) looks like the sole driver.
- First-click: Paid Social looks like the sole driver.
- Linear: all channels look equally important.
- Time-decay: Paid Search (Branded) still leads, but Email and Organic Search show meaningful contribution.
This is why attribution changes decisions: if you only look at last-click, you may cut prospecting because it “doesn’t convert,” then later wonder why branded search volume and retargeting performance weaken.
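To make the rulebooks concrete, here is a minimal Python sketch that reproduces the four allocations above. The journey, conversion value, and time-decay weights are the article's illustrative numbers; the function names and structure are just one way to write it:

```python
from collections import defaultdict

# Illustrative journey from the example: (channel, day of touch)
JOURNEY = [
    ("Paid Social", 1),
    ("Organic Search", 3),
    ("Email", 6),
    ("Paid Search (Branded)", 7),
]
CONVERSION_VALUE = 100.0

def allocate(journey, weights, value):
    """Distribute `value` across channels in proportion to per-touch weights."""
    total = sum(weights)
    credit = defaultdict(float)
    for (channel, _day), w in zip(journey, weights):
        credit[channel] += value * w / total
    return dict(credit)

def last_click(journey, value):
    return allocate(journey, [0] * (len(journey) - 1) + [1], value)

def first_click(journey, value):
    return allocate(journey, [1] + [0] * (len(journey) - 1), value)

def linear(journey, value):
    return allocate(journey, [1] * len(journey), value)

def time_decay(journey, value, weights):
    # Weights vary by tool, so they are passed in explicitly here.
    return allocate(journey, weights, value)

print(last_click(JOURNEY, CONVERSION_VALUE))    # $100 to Paid Search (Branded)
print(first_click(JOURNEY, CONVERSION_VALUE))   # $100 to Paid Social
print(linear(JOURNEY, CONVERSION_VALUE))        # $25 each
print(time_decay(JOURNEY, CONVERSION_VALUE, [0.1, 0.2, 0.3, 0.4]))  # $10/$20/$30/$40
```

Because each rule is just a different weight vector over the same touches, swapping models is a one-line change, which is exactly why the "winner" is so sensitive to the rulebook you pick.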
Use Cases: When Each Model Is Most Useful
Last-Click: Best for “What Captures Demand Right Now?”
- Use it when: you’re optimizing checkout flow, landing pages, or capture channels where the user already intends to buy.
- Good for: tactical bid/keyword adjustments, promo timing, and identifying friction at the final step.
- Watch out for: over-investing in channels that harvest existing intent (branded search, retargeting) while starving demand creation.
First-Click: Best for “What Creates New Demand?”
- Use it when: you’re evaluating prospecting campaigns, new audiences, or content meant to introduce the brand.
- Good for: creative testing for awareness, audience expansion, and early-funnel channel comparisons.
- Watch out for: assuming the first touch “caused” the sale; it may have started interest but not necessarily persuaded.
Linear: Best for “How Do Channels Work Together?”
- Use it when: you want a stable, simple cross-channel view for coordination and planning.
- Good for: understanding multi-touch journeys, avoiding extreme credit concentration, and aligning teams that influence different stages.
- Watch out for: treating all touches as equally influential; some touches are passive or redundant.
Time-Decay: Best for “What Influences the Decision as It Approaches Purchase?”
- Use it when: you believe recency matters (common in shorter purchase cycles) but you still want to recognize early touches.
- Good for: weekly optimization where you need sensitivity to recent changes without going full last-click.
- Watch out for: systematically under-crediting upper-funnel work in longer consideration cycles.
Common Attribution Biases That Mislead Decisions
Bias 1: Retargeting Inflation
What happens: Retargeting ads often appear late in the journey, close to conversion. In last-click (and even time-decay), they can receive outsized credit.
Why it’s misleading: Many retargeting impressions reach people who were already going to buy (or who were driven by earlier touches). Retargeting may still help, but attribution can exaggerate its incremental impact.
Practical checks (a minimal data sketch follows the list):
- Compare performance for new vs returning users (retargeting should skew returning).
- Look at frequency and time-to-convert: very short time-to-convert after many impressions can indicate “capture” more than “creation.”
- Run a simple holdout or exclude recent site visitors for a period and observe conversion changes (if feasible).
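As a sketch of the first check, assuming a flat export of conversions with `channel` and `user_type` columns (both the file name and column names are assumptions about your data, not a real tool's schema):

```python
import pandas as pd

# Hypothetical export: one row per conversion; column names are assumptions.
df = pd.read_csv("conversions.csv")  # columns: channel, user_type ("new"/"returning")

# Share of each channel's conversions that come from returning users.
# Retargeting should skew heavily returning; if it also dominates credit,
# that credit may reflect capture of existing intent rather than creation.
returning_share = (
    df.assign(is_returning=df["user_type"].eq("returning"))
      .groupby("channel")["is_returning"]
      .mean()
      .sort_values(ascending=False)
)
print(returning_share)
```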
Bias 2: Branded Search Capture
What happens: Users search your brand name near the end of the journey and convert. Last-click gives branded search most or all credit.
Why it’s misleading: Branded search often captures demand created by other channels (social, PR, partnerships, offline, organic content). It is valuable for capture and defense, but it can look like the primary growth driver when it’s actually the final step.
Practical checks (see the sketch after this list):
- Separate branded vs non-branded search in reporting.
- Monitor brand search volume alongside upper-funnel spend: if brand searches rise after prospecting, branded search is likely downstream.
- Use first-click or linear views periodically to ensure upstream channels aren’t being starved.
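For the second check, here is a crude sketch of a lagged comparison, assuming weekly series named `prospecting_spend` and `branded_search_volume` (hypothetical file and column names):

```python
import pandas as pd

# Hypothetical weekly metrics file; column names are assumptions.
weekly = (
    pd.read_csv("weekly_metrics.csv", parse_dates=["week"])
      .set_index("week")
      .sort_index()
)

# Does this week's branded search volume track last week's prospecting spend?
# A positive lagged correlation is a hint (not proof) that branded search is
# harvesting demand created upstream.
lagged_corr = weekly["prospecting_spend"].shift(1).corr(weekly["branded_search_volume"])
print(f"corr(spend[t-1], brand searches[t]) = {lagged_corr:.2f}")
```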
Platform-Reported vs Analytics-Reported Attribution: When to Use Which
You will often see different numbers depending on whether you look inside an ad platform (platform-reported) or your analytics tool (analytics-reported). The goal is not to force them to match; it’s to use each for the right job.
Platform-Reported Results (Inside Ad Platforms)
What they’re best for: optimizing within that platform (campaigns, ads, audiences) because the platform has the most detailed view of its own delivery, clicks, and sometimes view-through interactions.
- Use for: creative A/B tests inside the platform, audience comparisons, bid strategy tuning, diagnosing delivery issues.
- Strength: fastest feedback loop for in-platform optimization.
- Limitations: each platform tends to credit itself; cross-channel comparisons can be distorted due to different attribution windows, view-through credit, and identity matching.
Analytics-Reported Results (Your Cross-Channel Analytics)
What they’re best for: consistent cross-channel comparison using one rulebook and one set of definitions.
- Use for: weekly channel performance reviews, budget allocation across channels, unified funnel reporting.
- Strength: one attribution model applied across channels, reducing “everyone wins” reporting.
- Limitations: may undercount some platform conversions due to tracking gaps, consent limitations, and differences in how conversions are matched.
Practical Guidance: A Simple Workflow to Reconcile Without Getting Stuck
- Pick one system as the cross-channel source of truth for weekly reporting (usually analytics-reported), and keep it consistent week to week.
- Use platform-reported numbers for in-platform optimization (creative, targeting, bidding), not for deciding which channel “won” overall.
- Align windows where possible (e.g., click-through window) when doing deep dives, but don’t expect perfect parity.
- Always break out branded search and retargeting in your weekly view to reduce capture bias.
Step-by-Step: Build a Single Journey Attribution Table for Your Team
This quick exercise makes attribution tangible and helps stakeholders understand why numbers change.
- Choose one recent conversion path (from your analytics path exploration or user journey report) with 3–6 touches.
- List touches in order with timestamps and channel labels (e.g., Paid Social, Organic Search, Email, Paid Search Branded).
- Assign conversion value (e.g., revenue or a fixed conversion value).
- Create four columns for last-click, first-click, linear, time-decay.
- Allocate credit using the rules above.
- Discuss what would change if you used each model for budgeting vs for optimization.
Journey: Paid Social → Organic Search → Email → Branded Search → Purchase ($100)
Use this as a recurring training artifact when new stakeholders join or when channel owners dispute performance.
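If your team works in notebooks, the table can be generated with the allocation functions sketched earlier (same assumed names: `JOURNEY`, `CONVERSION_VALUE`, `last_click`, `first_click`, `linear`, `time_decay`):

```python
import pandas as pd

# Reuses the earlier sketch's functions; time-decay uses the illustrative
# 10/20/30/40 weights from the example journey.
models = {
    "last_click": last_click(JOURNEY, CONVERSION_VALUE),
    "first_click": first_click(JOURNEY, CONVERSION_VALUE),
    "linear": linear(JOURNEY, CONVERSION_VALUE),
    "time_decay": time_decay(JOURNEY, CONVERSION_VALUE, [0.1, 0.2, 0.3, 0.4]),
}
table = pd.DataFrame(models).round(2)  # rows: channels; columns: models
print(table)
```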
Decision Tree: Choose a Default Attribution View for Weekly Reporting
Weekly reporting needs a default view that is stable, understandable, and useful for decisions. Use this decision tree to pick one and stick with it for trend tracking; a small code sketch of the tree follows the list.
- 1) Is the main weekly decision “where to allocate budget across channels”?
- Yes → Prefer an analytics-reported model for cross-channel consistency.
- No → If the decision is “how to optimize within a channel,” use platform-reported for that channel’s internal review.
- 2) Is your purchase cycle typically short (days), and do you expect recency to matter?
- Yes → Use time-decay as the default weekly view.
- No / longer consideration → Use linear as the default weekly view to avoid over-weighting late touches.
- 3) Do branded search and retargeting dominate your last-click results?
- Yes → Avoid last-click as the default; use time-decay or linear and report branded vs non-branded plus retargeting vs prospecting as separate cuts.
- No → Time-decay can still be a strong default; last-click may be acceptable for very direct-response funnels, but validate with a secondary view.
- 4) Do stakeholders need a simple, explainable number every week?
- Yes → Choose linear (simplest multi-touch) or last-click (simplest single-touch) depending on bias tolerance; document the tradeoff.
- No → Choose time-decay and keep a saved report that also shows last-click for context.
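As a compact way to document the choice, the tree can be encoded as a small function. This is a simplified sketch of the logic above; all parameter names are made up for illustration:

```python
def default_weekly_view(budgeting_across_channels: bool,
                        short_purchase_cycle: bool,
                        last_click_dominated_by_capture: bool,
                        need_simplest_number: bool) -> str:
    """Simplified encoding of the decision tree above (hypothetical inputs)."""
    source = "analytics-reported" if budgeting_across_channels else "platform-reported"
    if last_click_dominated_by_capture:
        # Avoid last-click; prefer time-decay for short cycles, linear otherwise.
        model = "time-decay" if short_purchase_cycle else "linear"
    elif need_simplest_number:
        model = "linear"  # simplest multi-touch view; document the bias tradeoff
    else:
        model = "time-decay" if short_purchase_cycle else "linear"
    return f"{source}, {model}, with branded and retargeting breakouts"

print(default_weekly_view(True, True, True, False))
# -> analytics-reported, time-decay, with branded and retargeting breakouts
```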
Recommended Weekly Default (Common Starting Point)
For many beginner teams, a practical setup is:
- Weekly cross-channel: analytics-reported time-decay (or linear if cycles are long)
- Weekly guardrails: always show last-click as a secondary column to understand capture channels
- Required breakouts: branded vs non-branded search; retargeting vs prospecting