At some point, every growth team hits the same wall:
- Your platform dashboards say you’re crushing it.
- Your GA4 reports look… different.
- Finance is asking why spend is up while revenue is flat.
- And you’re stuck defending a murky story with “blended ROAS” and crossed fingers.
This isn’t because you’re doing anything wrong. It’s because modern measurement is messy by default. Privacy changes, modeled conversions, cross-device behavior, walled gardens, and multi-touch journeys have made “perfect attribution” a fairy tale.
So if you want to make better budget decisions—and keep your sanity—you need a different question:
Not “what got credit?” but “what caused lift?”
Incrementality testing is the most practical way to quantify the additional outcomes your marketing produced beyond what would’ve happened anyway. It’s why so many modern measurement playbooks (from platforms, agencies, and measurement vendors) keep circling back to experiments as the anchor for truth.
Below is the playbook I use to make incrementality testing approachable for teams that don’t have a research department—or six months to boil the ocean.
First: what incrementality is (and what it isn’t)
Incrementality measures causal impact using a control vs. test comparison—ideally with clean separation between the two groups.
What it’s not
- A new attribution model
- Another dashboard layer
- A replacement for UTMs, GA4, or your BI stack
What it is
- A way to validate (or challenge) the story your attribution is telling
- A forcing function for better decision-making
- The bridge between channel metrics and P&L outcomes
If you’ve read my post on full-funnel CAC or why marketers need to think like operators, this is the natural next step: moving from metrics to models—and from correlation to causality.
When incrementality testing is worth it
Incrementality testing is work. Treat it like work you do when decisions are expensive.
Good candidates
- You’re spending enough in a channel that a 10–20% reallocation would matter.
- Leadership is questioning whether a channel is “real” (paid social is a common one).
- You’re launching a new channel and don’t want to scale a mirage.
- You suspect you’re paying for conversions you’d get anyway (brand search, retargeting, and affiliates can all fall into this trap).
Not great candidates
- Very low volume (you won’t get a signal strong enough to trust).
- Constant promo chaos where you can’t isolate variables for a few weeks.
- You don’t have basic instrumentation hygiene (fix that first).
Pro tip: the fastest path to “good enough” is often one well-designed test per quarter, not a dozen tiny tests nobody trusts.
The four incrementality test types you’ll actually use
There are a lot of experiment formats. In practice, most teams rotate between these:
1) Geo holdout tests
Turn spend up/down (or on/off) in matched geographies and compare outcomes. Powerful for channels that can’t do clean user-level holdouts.
2) Audience holdout tests
Exclude a randomly selected slice of your target audience (the control) and compare to the exposed group.
3) Platform lift studies
Some platforms offer conversion lift tooling that approximates experimental design inside their ecosystem (useful, but don’t treat it as gospel).
4) Creative or offer incrementality
When the main question isn’t “is the channel incremental?” but “which message is incremental?” This is where your A/B testing muscle turns into something more strategic.
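To make the geo holdout concrete: the core move is projecting a counterfactual for the test geos from the control geos, then reading lift as the gap. Here's a minimal sketch of that arithmetic; the group assignments, daily counts, and the 1.2 pre-period ratio are all made-up illustrations, not a full matched-market methodology.

```python
# Minimal sketch of a geo holdout readout, assuming you already have
# daily conversions per geo split into matched test/control groups.
# All numbers and group assignments here are illustrative.

def geo_lift(test_daily, control_daily, pre_test_ratio):
    """Estimate incremental conversions in the test geos.

    pre_test_ratio: test/control conversion ratio from the pre-period,
    used to scale the control group into a counterfactual baseline.
    """
    test_total = sum(test_daily)
    # What the test geos would likely have done with no spend change,
    # projected from the control geos and the historical ratio.
    counterfactual = sum(control_daily) * pre_test_ratio
    incremental = test_total - counterfactual
    lift_pct = incremental / counterfactual
    return incremental, lift_pct

# Example: test geos did 1,320 conversions, control did 1,000,
# and the pre-period ratio was 1.2 (test geos run ~20% bigger).
inc, lift = geo_lift([440, 440, 440], [340, 330, 330], 1.2)
print(round(inc), f"{lift:.1%}")  # 120 incremental conversions, +10.0% lift
```

Real geo tests use more robust matching and longer pre-periods, but the decision-grade question is the same: how far did the test geos move versus their projected baseline?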
The 30–60–90 day plan
Days 1–30: get your measurement foundation tight enough to trust
Incrementality is about causality, but you still need clean inputs.
- Standardize UTMs everywhere (especially “non-platform” sources like partners, creators, PR, affiliates, email). If your UTMs are sloppy, your post-test analysis becomes a debate about data quality instead of results.
- Make sure your conversion events are consistent across GA4 / pixel / server-side / CRM where applicable.
- Confirm you can report at least these outcomes by day:
- New customers (or qualified leads)
- Revenue (or pipeline)
- Gross margin (if you want finance to take you seriously)
- Refund/cancel rate (for DTC and subscription realities)
You don’t need perfection. You need repeatability.
Days 31–60: design the test like an operator
This is where teams usually get cute. Don’t.
Start with one decision:
- “Is paid social incremental at our current spend?”
- “Is brand search protecting demand or harvesting it?”
- “Does retargeting create lift or just steal credit?”
- “Which creative angle changes conversion behavior, not just CTR?”
Then lock:
- Primary KPI (pick one): incremental revenue, incremental new customers, incremental CAC, or iROAS
- Test window: long enough to smooth day-to-day noise
- Guardrails: no overlapping promos, no website redesign mid-test, no major pricing shifts
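It also helps to agree on the KPI arithmetic before the test runs, so nobody relitigates definitions afterward. A quick sketch of the usual incremental metrics; the spend and lift figures below are hypothetical:

```python
# Illustrative arithmetic for the incremental KPIs above.
# All inputs are hypothetical; plug in your own test readout.

test_spend = 50_000            # extra spend in test vs. control
incremental_revenue = 120_000  # revenue lift attributable to the test
incremental_customers = 400    # lift in new customers

# iROAS: revenue lift per dollar of test spend.
iroas = incremental_revenue / test_spend   # 2.4

# Incremental CAC: what each *additional* customer actually cost.
# Compare this to blended CAC to see how much credit was being mis-assigned.
icac = test_spend / incremental_customers  # 125.0

print(iroas, icac)
```

Note that incremental CAC is almost always higher than blended CAC, because blended CAC counts customers you would have gotten anyway. That gap is the whole point of the test.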
Write down the hypothesis before you run it. If you can’t articulate the expected outcome in one sentence, you’re not ready to spend money testing it.
Days 61–90: run, analyze, and make a decision you’ll stand behind
When the test ends, resist the urge to “interpret” your way into a win.
What you want coming out:
- Lift estimate (what changed)
- Cost to generate that lift (what you paid for it)
- Confidence range (how sure you are)
- Decision (scale, hold, cut, or redesign)
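For the confidence range, an audience holdout gives you the simplest math: two groups, two conversion rates, and a normal approximation on their difference. This is a sketch under that assumption (randomized user-level holdout, binary conversion); the counts are illustrative:

```python
# Sketch of a confidence range for an audience holdout test, using a
# normal approximation on the difference of two conversion rates.
# Numbers are illustrative; swap in your own exposed/holdout counts.
from math import sqrt

def lift_ci(conv_t, n_t, conv_c, n_c, z=1.96):
    """95% CI for absolute lift in conversion rate (exposed minus holdout)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    diff = p_t - p_c
    return diff - z * se, diff, diff + z * se

low, mid, high = lift_ci(conv_t=1_150, n_t=100_000, conv_c=1_000, n_c=100_000)
# If the interval excludes zero, the lift is distinguishable from noise
# at roughly the 95% level; the interval width is your confidence range.
print(f"lift: {mid:.2%} (95% CI {low:.2%} to {high:.2%})")
```

The practical payoff: a wide interval that straddles zero is itself a decision input ("hold, and rerun with more volume or a longer window"), not a failure.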
This is where incrementality becomes a leadership tool. A CFO doesn’t need a perfect model—they need a decision that’s defensible.
The 7 failure modes I see constantly (so you can avoid them)
- Contamination: your control group gets exposed anyway (overlapping geos, shared audiences, shared devices).
- Seasonality blindness: you ran the test during an unusually weird period.
- Too many simultaneous changes: new creative + new landing page + new offer = you learned nothing.
- Short windows: the test ends before behavior stabilizes.
- Success metrics that don’t match the business: optimizing for leads when you needed revenue.
- Platform-only truth: you only use the platform’s view of conversions (helpful, not sufficient).
- No operational follow-through: you run a good test and then… go back to last-click.
Modern measurement frameworks explicitly call out that incrementality isn’t always the right tool—but when you do use it, it should calibrate and improve everything else you rely on.
How this ties back to CAC (and why it matters)
A lot of CAC conversations are secretly attribution arguments.
If CAC is a reflection of your whole go-to-market motion (not just media efficiency), then incrementality is how you keep CAC honest. It helps you answer:
- Are we buying new demand or renting conversions with good timing?
- Which parts of the funnel are actually creating lift?
- Where is CAC inflated by channel overlap and mis-credited touchpoints?
If you haven’t read it yet, my full-funnel CAC framework lays the groundwork for where to look; incrementality helps you prove what’s real.
A simple place to start (if you only do one thing)
Pick one channel that leadership questions the most and run a single, clean test designed to answer a budget decision.
Then document it like an operator:
- What we tested
- What changed
- What it cost
- What we’re doing next
That becomes institutional knowledge, and it compounds.
Thanks for reading!