Incrementality


TL;DR:

  • Incrementality quantifies the causal impact of marketing activities by measuring additional conversions, revenue, or engagement generated beyond baseline performance—the sales that wouldn’t have happened without your campaign.
  • Unlike attribution models that assign credit based on correlation, incrementality uses controlled experiments (test vs control groups) to prove true lift, eliminating organic conversions misattributed to paid channels and revealing actual marketing effectiveness.
  • Organizations implementing incrementality testing discover 20-40% of attributed conversions would have occurred anyway, enabling reallocation of wasted spend toward genuinely incremental channels and improving true ROAS by 30-60%.

What Is Incrementality?

Incrementality is a measurement methodology that determines the true causal effect of marketing activities. It quantifies the additional business outcomes (conversions, revenue, engagement) directly attributable to a specific campaign, channel, or tactic, beyond what would have occurred organically without that marketing intervention.

Rather than relying on correlation-based attribution models that credit the last touchpoint or distribute credit across the customer journey, incrementality employs controlled experimentation—dividing audiences into matched test groups (exposed to marketing) and control groups (not exposed)—to isolate the precise lift generated by marketing spend. The difference in conversion rates or revenue between these groups represents the incremental impact, separating true marketing-driven outcomes from conversions that would have happened regardless due to existing brand awareness, organic search, word-of-mouth, or natural purchase intent.

This scientific approach answers the fundamental question attribution cannot: “What sales did we actually create versus what we simply intercepted?” By proving causality rather than assuming correlation, incrementality reveals wasted spend on non-incremental channels, validates genuine performance drivers, and provides defensible ROI metrics to justify marketing budgets to CFOs and boards.


Why Incrementality Matters for Marketing Measurement and ROI

Incrementality fundamentally changes how marketing organizations measure effectiveness and allocate budget.

Attribution models suffer from a critical flaw: they credit marketing touchpoints for conversions that may have occurred organically. Last-click attribution gives 100% credit to the final touchpoint, even if the customer had already decided to purchase. Multi-touch attribution distributes credit across the journey but cannot distinguish influential touchpoints from coincidental ones that merely happened to be in-path.

Industry research from Measured and Fospha reveals that 20-40% of conversions attributed to paid channels would have happened without any ad exposure—customers who were already brand-aware, searching organically, or referred by existing customers. This means traditional attribution systematically overvalues performance, leading to budget misallocation toward channels claiming credit for organic demand rather than generating new demand.

Incrementality testing exposes this overcounting by measuring true lift. When Meta or Google Ads reports 150 attributed conversions but incrementality testing shows only 90 incremental conversions, you’ve identified 60 conversions (40%) that were organic but intercepted by retargeting or brand search campaigns. This 40% represents wasted spend—budget allocated to claiming existing demand rather than creating new customers.

For CMOs defending marketing budgets, incrementality provides causal proof of impact that attribution cannot. CFOs dismiss correlation-based attribution as “marketing claiming credit for everything.” Incrementality delivers experimental evidence: “We ran a holdout test. Markets with advertising generated 35% more revenue than matched control markets. Here’s the incremental revenue and iROAS calculation.” This shifts conversations from defending correlation to proving causation.

The financial impact scales significantly. Organizations reallocating even 15-20% of non-incremental spend toward genuinely incremental channels report 30-60% improvements in true ROAS, representing millions in recovered efficiency for mid-market brands spending $5-10M annually on paid media.

How Incrementality Testing Works

Incrementality testing follows a controlled experimental framework that isolates causal impact through comparison of matched groups.

Core Methodology

The fundamental approach divides your target audience into two statistically equivalent groups: the test group receives normal marketing exposure (ads, campaigns, promotions), while the control group is held out from that specific marketing activity but remains exposed to all other baseline marketing and organic channels.

After a period long enough to reach statistical significance (typically 2-8 weeks, depending on conversion velocity), you measure outcomes (conversions, revenue, engagement) across both groups. The difference in performance represents the incremental lift directly caused by the marketing intervention being tested.

Incrementality Calculation Formula

Incremental Lift % = [(Test Group Conversion Rate – Control Group Conversion Rate) / Control Group Conversion Rate] × 100

Incremental Conversions = (Test Group Conversions) – (Control Group Conversions × Scale Factor), where Scale Factor = Test Group Size ÷ Control Group Size (1.0 for equal-sized groups)

Incremental ROAS (iROAS) = Incremental Revenue / Marketing Spend

Example: Your retargeting campaign shows 200 attributed conversions. Incrementality test results: Test group (exposed to retargeting) had 200 conversions from 10,000 users (2.0% conversion rate). Control group (no retargeting) had 110 conversions from 10,000 users (1.1% conversion rate).

Incremental lift = [(2.0% – 1.1%) / 1.1%] × 100 = 81.8% lift. True incremental conversions = 200 – 110 = 90 conversions (not 200). Your retargeting campaign generated only 90 new conversions; the other 110 were organic conversions intercepted by retargeting. If you spent $9,000 on retargeting claiming 200 conversions ($45 CPA), true incremental CPA is $9,000 / 90 = $100—more than double the attributed CPA.
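The same arithmetic in a minimal Python sketch, reproducing the worked example above. All figures are taken directly from that example; the scale factor adjusts for unequal group sizes (1.0 here, since both groups contain 10,000 users).

```python
# Figures from the worked example above; swap in your own campaign data.
test_conversions, test_users = 200, 10_000
control_conversions, control_users = 110, 10_000
spend = 9_000  # retargeting spend in USD

test_cr = test_conversions / test_users            # 2.0%
control_cr = control_conversions / control_users   # 1.1%

# Incremental lift as a percentage of the control baseline
lift_pct = (test_cr - control_cr) / control_cr * 100  # 81.8%

# Scale factor corrects for unequal group sizes (1.0 here: equal groups)
scale = test_users / control_users
incremental_conversions = test_conversions - control_conversions * scale  # 90

attributed_cpa = spend / test_conversions           # $45
incremental_cpa = spend / incremental_conversions   # $100

print(f"Lift: {lift_pct:.1f}% | Incremental conversions: {incremental_conversions:.0f}")
print(f"Attributed CPA: ${attributed_cpa:.0f} | Incremental CPA: ${incremental_cpa:.0f}")
```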

Test Design Approaches

Different incrementality methodologies suit various business contexts:

User-Level Randomization: Randomly assign individual users to test or control groups using hashed user IDs or device identifiers. Control group sees PSA (public service announcement) ads or ghost ads instead of campaign creative. Best for: Platforms with user-level control (Meta Conversion Lift, Google Brand Lift).

Geo-Based Holdout Testing: Divide geographic regions (DMAs, states, zip codes) into matched test and control markets based on historical performance, demographics, and seasonality. Run campaigns in test markets while holding out spend in control markets. Best for: Broadcast channels (TV, radio, OOH), broad awareness campaigns, when user-level randomization isn’t possible.

Time-Based Holdout (On/Off Testing): Alternate campaign exposure over time periods—campaign runs Week 1, paused Week 2, runs Week 3, paused Week 4—comparing performance during on versus off periods. Best for: Channels with fast conversion cycles, when geographic or user-level testing isn’t feasible. Caution: Vulnerable to seasonality and external factors.

Ghost Bidding: Enter ad auctions normally but intentionally lose to competitors by bidding low, then track whether users in this “ghost bid” control group converted organically. Best for: Measuring incrementality of paid search brand terms without fully pausing campaigns and ceding market share to competitors.
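To make the user-level randomization approach above concrete, here is a minimal Python sketch of deterministic hash-based group assignment. The function name, the salt-by-test-name scheme, and the example IDs are illustrative, not any platform's actual API; platforms like Meta Conversion Lift and Google Brand Lift handle assignment internally.

```python
import hashlib

def assign_group(user_id: str, test_name: str, holdout_pct: float = 50.0) -> str:
    """Deterministically bucket a user into 'test' or 'control'.

    Hashing the user ID together with the test name yields a stable,
    roughly uniform split: the same user always lands in the same group
    for a given test, and different tests split independently.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 100  # uniform value in [0, 100)
    return "control" if bucket < holdout_pct else "test"

# Example with a hypothetical user ID: a 10% always-on holdout
print(assign_group("user_8842", "retargeting_q3", holdout_pct=10.0))
```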

Types of Incrementality Measurement

Incrementality testing operates at multiple levels depending on strategic objectives.

Channel-Level Incrementality: Measures whether an entire channel (Meta Ads, Google Ads, TikTok, TV) generates incremental demand or simply intercepts organic conversions. Answers: “Does this channel create new customers or just claim credit?” Typical finding: Brand search and retargeting often show 30-50% incrementality (low); prospecting and awareness channels show 70-90% incrementality (high).

Campaign-Level Incrementality: Tests specific campaigns within a channel to identify which creative, audience segments, or messaging drives genuine lift versus claiming organic demand. Answers: “Which campaigns are worth scaling?” Enables budget reallocation from non-incremental to high-incrementality campaigns within the same platform.

Tactic-Level Incrementality: Evaluates discrete tactics like promotional discounts, free shipping offers, or urgency messaging. Answers: “Did the 20% discount promotion generate new demand or simply subsidize purchases that would have occurred at full price?” Reveals margin-destroying promotions that don’t drive incremental volume.

Cross-Channel Incrementality: Measures combined impact when multiple channels interact (e.g., TV driving search volume, social awareness boosting direct traffic). Answers: “What’s the total incremental lift when channels work together?” Requires sophisticated experimental design accounting for interaction effects.

Implementing an Incrementality Testing Program

Step 1: Prioritize Test Candidates

Start with channels or campaigns most likely to overclaim: retargeting, brand search, remarketing, affiliate/last-click partners. These typically show high attributed performance but low incrementality because they intercept organic demand rather than create new demand.

Select tests with sufficient scale: a minimum of 1,000-2,000 conversions per group over the test period ensures statistical significance. Smaller volumes require longer test durations or acceptance of wider confidence intervals.

Step 2: Design Statistical Experiment

Define test parameters: test duration (2-8 weeks balancing statistical power and business risk of pausing campaigns), group size (typically 50/50 split for user-level tests; for geo tests, allocate 60-70% of markets to test, 30-40% to control to minimize business disruption), and primary metrics (conversions, revenue, incremental ROAS).

Ensure group equivalence: for geo tests, match markets by historical performance, seasonality, demographics, competitive intensity. For user-level tests, use random assignment with stratification on key variables (past purchase frequency, LTV segment).
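As a sketch of stratified random assignment for a user-level test, here is one way to split a hypothetical user table with pandas; 'ltv_segment' stands in for whatever stratification variables you use.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical user table; 'ltv_segment' stands in for your real
# stratification variables (purchase frequency, region, LTV tier)
users = pd.DataFrame({
    "user_id": np.arange(1_000),
    "ltv_segment": rng.choice(["high", "mid", "low"], size=1_000),
})

# Sample 50% of each stratum into the test group so both groups
# carry the same mix of high/mid/low LTV users
test_ids = users.groupby("ltv_segment")["user_id"].sample(frac=0.5, random_state=42)
users["group"] = np.where(users["user_id"].isin(test_ids), "test", "control")

# Sanity check: stratum mix should be near-identical across groups
print(users.groupby(["group", "ltv_segment"]).size().unstack())
```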

Step 3: Execute Holdout

Implement the holdout cleanly: control groups must be fully excluded from the tested marketing activity but remain exposed to all other channels. Avoid contamination where control users accidentally see ads through other channels or campaigns.

Monitor throughout: track daily metrics to detect anomalies, external events (competitor campaigns, PR crises, supply issues), or technical errors invalidating the test. Document all confounding factors for analysis.

Step 4: Analyze Results and Calculate Lift

Compare outcomes using statistical rigor: calculate conversion rate lift, incremental conversions, incremental revenue, and iROAS. Apply significance testing (t-tests, chi-square tests) to confirm results aren’t due to chance.

Context matters: a 40% lift sounds impressive until you realize the control group converted at 1.8% and the test group at 2.5%, a strong relative lift but a modest absolute difference. Calculate incremental CPA and incremental ROAS to translate lift into financial impact.
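As a sketch of the significance check, here is a chi-square test applied to the retargeting example from earlier on this page, using scipy:

```python
from scipy.stats import chi2_contingency

# Conversion counts from the retargeting example earlier on this page.
# Rows: test / control; columns: converted / did not convert.
table = [
    [200, 10_000 - 200],   # test group
    [110, 10_000 - 110],   # control group
]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.2g}")

# p below 0.05 means the observed lift is unlikely to be due to
# chance at the 95% confidence level
print("Significant at 95%" if p_value < 0.05 else "Not significant")
```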

Step 5: Act on Insights

Reallocate budget based on incrementality findings: reduce or pause non-incremental channels (typically retargeting, brand search) and increase investment in high-incrementality channels (prospecting, awareness, consideration-stage tactics).

Retest regularly: incrementality changes as market conditions, competition, and audience awareness evolve. Quarterly or bi-annual retesting ensures optimization decisions reflect current performance.

Common Challenges and Solutions

Challenge: Short-term revenue risk. Withholding marketing from control groups creates fear of losing sales during the test period, especially for performance-dependent businesses.

Solution: Start with smaller control groups (10-20% of audience) to minimize business impact while still achieving statistical power. Run tests during lower-stakes periods (avoid holiday seasons, product launches). Calculate maximum potential revenue at risk versus long-term value of knowing true incrementality—most CMOs accept 2-4 weeks of 20% holdout to optimize 52 weeks of annual budget allocation.

Challenge: Statistical significance requires large sample sizes and long duration. Businesses with low conversion volumes struggle to reach 95% confidence in reasonable timeframes.

Solution: Accept lower confidence thresholds (90%) for directional insights. Focus on high-volume channels first. Extend test duration even if uncomfortable (8-12 weeks for low-volume businesses). Consider Bayesian approaches that provide probabilistic confidence estimates with smaller samples than frequentist methods require.

Challenge: Geo-based tests face regional variation. Control and test markets may differ due to unmeasured factors (local economy, weather, competitive activity), biasing results.

Solution: Use synthetic control methods that weight multiple control markets to create a “synthetic twin” matching pre-test trends in the test market. Validate market matching by comparing 4-8 weeks of pre-test performance—matched markets should show <5% variance in key metrics.
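A minimal sketch of that pre-test validation step, with illustrative (made-up) weekly revenue figures for two candidate markets:

```python
import numpy as np

# Illustrative pre-test weekly revenue for a candidate test market
# and its proposed control market (8 weeks, per the guidance above)
test_market = np.array([52_100, 49_800, 51_300, 53_400, 50_200, 48_900, 52_700, 51_500])
control_market = np.array([51_400, 50_600, 50_100, 52_800, 49_700, 49_500, 51_900, 52_200])

# Mean absolute percentage difference across the pre-test window
variance_pct = np.mean(np.abs(test_market - control_market) / control_market) * 100
print(f"Pre-test variance: {variance_pct:.1f}%")

# Rule of thumb from above: matched markets should track within ~5%
if variance_pct < 5:
    print("Adequately matched; proceed with the geo test")
else:
    print("Poor match; reweight controls or choose different markets")
```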

Challenge: Spillover effects contaminate results. Marketing in test regions may influence control regions through word-of-mouth, national brand awareness, or users traveling between regions.

Solution: Choose geographically distant markets with minimal cross-region interaction. For national campaigns (TV, digital), accept that pure holdouts are impossible and complement with Marketing Mix Modeling (MMM) to estimate aggregate impact.

Challenge: Platform-run tests lack transparency. Meta Conversion Lift and Google Brand Lift provide incrementality testing but don’t share full methodology, control group construction, or raw data, limiting ability to validate results.

Solution: Use platform tests for directional insights but supplement with independent third-party testing (Measured, Haus, Cassandra, Triple Whale) that provides full transparency, custom methodologies, and cross-platform measurement. Independent tests cost more but deliver defensible insights for CFO-level reporting.

Incrementality Testing Best Practices

Test retargeting and brand search first. These channels systematically overclaim because they target high-intent users already likely to convert. Incrementality tests typically reveal 30-50% of attributed conversions were organic, representing immediate budget reallocation opportunities.

Run continuous always-on holdouts. Rather than one-off tests, implement persistent 5-10% control groups across channels to generate ongoing incrementality data. This enables real-time optimization and detects changes in incrementality as market conditions evolve.

Combine incrementality with attribution. Attribution tells you where conversions happen; incrementality tells you which marketing causes them. Use attribution for tactical optimization (creative testing, audience refinement) and incrementality for strategic budget allocation across channels. Incrementality validates attribution’s story—if attributed ROAS is 3.0x but incremental ROAS is 1.2x, your attribution model overclaims by 150%.

Document and educate stakeholders. Incrementality results often contradict internal assumptions (“retargeting works amazingly!” vs “retargeting shows 35% incrementality and should be reduced”). Prepare stakeholders with clear communication: explain methodology, show historical validation, frame results around total incremental revenue rather than relative lift percentages.

Account for long-term effects. Standard incrementality tests measure immediate impact (conversions during the test window) but miss delayed effects (awareness campaigns driving conversions 8-12 weeks later). For awareness and consideration-stage tactics, extend measurement windows or use holdback analysis—keep control groups held out for 90-180 days and measure cumulative lift over time.

Benchmark against industry standards. Typical incrementality findings by channel: Brand search (30-50% incremental), Retargeting (40-60% incremental), Prospecting paid social (70-85% incremental), Prospecting paid search (75-90% incremental), TV and awareness channels (80-95% incremental). Significant deviation warrants investigation into test design or channel strategy.

Frequently Asked Questions

What is the difference between incrementality and attribution?

Attribution assigns credit to touchpoints based on their presence in the conversion path, identifying correlation between marketing exposure and conversions. Incrementality measures causality through controlled experiments, quantifying how many conversions were directly caused by marketing versus how many would have occurred organically without marketing intervention.

Attribution answers “what touchpoints did converters interact with?” Incrementality answers “how many conversions did marketing actually create?” Attribution models (last-click, multi-touch) credit all conversions that follow a touchpoint, including organic conversions that would have happened anyway. Incrementality uses test vs control groups to isolate only the additional conversions caused by marketing, separating true lift from coincidental correlation.

How long should incrementality tests run to achieve statistical significance?

Test duration depends on conversion volume, baseline conversion rates, and expected lift magnitude. General guidelines: Businesses with 1,000+ weekly conversions per channel can achieve 95% confidence in 2-3 weeks. Businesses with 200-500 weekly conversions require 4-6 weeks. Businesses with <100 weekly conversions need 8-12 weeks or should accept 90% confidence thresholds instead of 95%.

Calculate the required duration using power analysis: a larger expected lift (40%+ versus 10-15%) reaches significance faster, and higher baseline conversion rates need less time than low-conversion scenarios. Use a sample size calculator (Optimizely, VWO), entering baseline conversion rate, minimum detectable effect (MDE), desired statistical power (typically 80%), and significance level (typically 0.05), to determine the precise duration.
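For a programmatic alternative to those calculators, here is a sketch of the same power analysis using statsmodels; the baseline rate, MDE, and weekly volume are illustrative assumptions:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative inputs: a 1.1% baseline rate and a 20% relative MDE
baseline_cr = 0.011
mde = 0.20
lifted_cr = baseline_cr * (1 + mde)

# Cohen's h effect size for comparing two proportions
effect = proportion_effectsize(lifted_cr, baseline_cr)

# Users needed per group at 80% power and alpha = 0.05
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, power=0.80, alpha=0.05, alternative="two-sided"
)
print(f"Required: {n_per_group:,.0f} users per group")

# Divide by weekly eligible users per group to estimate duration
weekly_users_per_group = 25_000
print(f"Estimated duration: {n_per_group / weekly_users_per_group:.1f} weeks")
```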

Why does retargeting typically show low incrementality despite high attributed performance?

Retargeting targets users who already engaged with your brand (website visitors, cart abandoners, past customers), representing the highest-intent audience most likely to convert organically through direct traffic, organic search, or saved links. Attribution models credit retargeting when it’s the last touchpoint before conversion, but incrementality tests reveal 50-70% of retargeted users would have converted without retargeting ads because they were already in-market and brand-aware.

Retargeting doesn’t create new demand—it reminds existing high-intent users. This makes retargeting valuable for conversion acceleration (shortening sales cycles) but less valuable than attribution suggests for incremental revenue generation. Smart allocation: maintain retargeting at modest budgets for conversion optimization but avoid scaling aggressively based on attributed performance that overstates incrementality.

Can incrementality testing work for small businesses with limited budgets and low conversion volumes?

Yes, but with modifications. Small businesses face statistical power challenges due to limited sample sizes, requiring trade-offs: Run longer test durations (8-12 weeks instead of 2-4 weeks) to accumulate sufficient conversions for significance. Accept 90% confidence intervals instead of 95%, providing directional insights sufficient for budget decisions even without perfect statistical certainty.

Start with high-volume channels where statistical power is achievable (if you run Meta Ads generating 50+ conversions weekly, test those first rather than low-volume LinkedIn campaigns with 5 conversions weekly). Use time-based on/off testing rather than simultaneous control groups—pause campaigns for alternating weeks and compare on vs off performance. While less rigorous than randomized control trials, on/off tests provide incrementality signals for small businesses unable to hold out large control groups.
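A minimal sketch of that on/off comparison, with hypothetical daily conversion counts for alternating weeks; a t-test gives a rough significance signal:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical daily conversions during alternating on/off weeks
on_days = np.array([18, 22, 19, 21, 20, 23, 17, 24, 20, 22, 19, 21, 23, 18])
off_days = np.array([14, 16, 13, 15, 17, 14, 12, 16, 15, 13, 14, 17, 15, 16])

lift_pct = (on_days.mean() - off_days.mean()) / off_days.mean() * 100
t_stat, p_value = ttest_ind(on_days, off_days)
print(f"On/off lift: {lift_pct:.0f}% (p = {p_value:.3f})")

# Caveat from the test-design section: seasonality and external events
# can bias on/off tests; treat this as a directional signal, not proof
```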

How do you measure incrementality for brand awareness campaigns with delayed conversion effects?

Brand awareness campaigns (TV, display, video, sponsorships) often drive conversions weeks or months after exposure rather than immediately, requiring extended measurement windows. Standard 2-4 week incrementality tests miss delayed lift, systematically undervaluing awareness tactics.

Solutions: Implement extended holdout analysis where control groups remain held out for 90-180 days, measuring cumulative conversions over the full customer consideration period. Use Marketing Mix Modeling (MMM) which captures aggregate long-term effects through time-series regression rather than short-term A/B tests. Measure leading indicators during the test (branded search volume lift, direct traffic increase, organic social mentions) that predict future conversions, then validate with long-term conversion tracking.

What is the difference between incremental ROAS (iROAS) and attributed ROAS?

Attributed ROAS divides total revenue from conversions that touched a channel by the spend on that channel, crediting all conversions that followed any exposure. Incremental ROAS (iROAS) divides only the incremental revenue (revenue from conversions proven to be caused by the channel via incrementality testing) by channel spend.

Example: Retargeting campaign shows $150,000 attributed revenue from 500 conversions with $30,000 spend = 5.0x attributed ROAS. Incrementality test reveals only 200 conversions (40%) were incremental; 300 would have occurred organically. Incremental revenue = 200 conversions × $300 AOV = $60,000. Incremental ROAS = $60,000 / $30,000 = 2.0x. The campaign’s true efficiency is 2.0x, not 5.0x—attributed ROAS overvalued performance by 150%.
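The same example in code, useful as a template for your own attributed-versus-incremental comparison; all figures come from the example above.

```python
# Figures from the example above; swap in your own campaign data
spend = 30_000
attributed_conversions = 500
aov = 300  # average order value in USD

attributed_revenue = attributed_conversions * aov     # $150,000
attributed_roas = attributed_revenue / spend          # 5.0x

incrementality = 0.40  # the test found 40% of conversions were incremental
incremental_revenue = attributed_conversions * incrementality * aov  # $60,000
iroas = incremental_revenue / spend                   # 2.0x

overstatement_pct = (attributed_roas / iroas - 1) * 100  # 150%
print(f"Attributed ROAS: {attributed_roas:.1f}x | iROAS: {iroas:.1f}x")
print(f"Attribution overstated efficiency by {overstatement_pct:.0f}%")
```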

Should marketing organizations replace attribution models with incrementality testing entirely?

No—incrementality and attribution serve complementary purposes. Use incrementality for strategic budget allocation across channels (answering “which channels should receive more or less budget?”), validating channel efficiency, and defending ROI to finance stakeholders. Use attribution for tactical optimization within channels (answering “which audiences, creatives, keywords perform best?”), understanding customer journeys, and daily campaign management.

Incrementality testing requires pausing campaigns or holding out audiences, making it unsuitable for continuous real-time optimization—you can’t run constant holdout experiments without business disruption. Attribution provides always-on feedback for tactical decisions. Best practice: Run quarterly or bi-annual incrementality tests per major channel to validate and calibrate strategy, then use attribution daily for execution. Incrementality proves what works; attribution guides how to execute what works.