Are you confident that every dollar in your marketing budget is driving real growth?
Clicks and conversions are relatively easy to track, but real impact is harder to see. Incrementality helps determine whether your ads create new outcomes, like installs or purchases, or whether those results would have happened anyway.
Traditional attribution often overcredits certain channels. Incrementality gives a clearer picture of what’s truly working. Apply it well, and you’ll spend smarter and grow faster, especially in high-stakes areas like user acquisition (UA), where every dollar counts.
Key Takeaways:
Why incrementality matters and how it reveals the true impact of paid marketing by separating ad-driven conversions from organic ones.
How conventional attribution models can mislead marketers and lead to inefficient spending on low-value channels.
What steps to take to measure causal lift accurately, including using holdout groups, geo-tests, and time-based pauses.
Why iROAS and CPII are essential for understanding true campaign efficiency and making smarter budget decisions.
How to measure lift when user-level tracking is limited, using aggregated methods like geo-lift and ghost-bidding.
What Is Incrementality and Why Does It Matter?
Incrementality measures true lift: the additional conversions driven solely by paid marketing efforts. It answers a critical question: What would have happened if you didn’t run a specific ad campaign?
Unlike last-touch attribution, which credits the final interaction, incrementality isolates marketing’s real impact.
For example, if your mobile game’s campaign generates 10,000 installs, incrementality helps determine how many of those installs occurred because of the campaign and not through organic discovery or other channels.
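To make the example concrete, here is a minimal sketch of that calculation. All numbers are hypothetical; in practice the organic baseline comes from a proper control group, not a guess.

```python
# Illustrative incrementality estimate for a campaign reporting 10,000 installs.
# The organic baseline is a hypothetical figure; a real test derives it from a holdout group.

attributed_installs = 10_000   # installs the ad network claims for the campaign
organic_baseline = 6_500       # installs expected without the campaign (hypothetical)

incremental_installs = attributed_installs - organic_baseline
incremental_share = incremental_installs / attributed_installs

print(f"Incremental installs: {incremental_installs}")      # 3500
print(f"Share truly ad-driven: {incremental_share:.0%}")    # 35%
```

Under these assumed numbers, only 35% of the attributed installs were actually caused by the campaign; the rest would have happened anyway.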
In today’s privacy-focused environment, where granular user-level tracking is increasingly restricted, incrementality is becoming indispensable. It shifts the focus from individual user journeys to aggregated, data-driven insights, enabling mobile marketers to understand which strategies truly drive engagement, retention, and revenue.
Challenges with Last-Touch Attribution and Multi-touch Attribution
Last-touch attribution assigns 100% of the credit for a conversion to the final touchpoint a user interacted with before taking action, such as installing a game. While easy to implement, this model paints an incomplete picture. It disregards the broader user journey and earlier interactions that may have played a crucial role in influencing the final decision.
For example, a user might see your game’s ad multiple times on social media before finally clicking a search ad to install it. In a last-touch model, only the search ad receives credit, potentially leading to the misallocation of marketing spend and an underappreciation of top-of-funnel efforts.
To address these limitations, marketers are increasingly adopting multi-touch attribution (MTA). Unlike last-touch, MTA assigns value to multiple touchpoints along the user’s path to conversion. This can be done through various models:
Linear: Distributes equal credit to all touchpoints.
Time-decay: Gives more weight to interactions closer to the conversion event.
U-shaped (position-based): Emphasizes both the first and last touchpoints.
Data-driven: Uses machine learning to allocate credit based on actual impact.
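The first three models above are simple enough to sketch directly. The function below is an illustrative implementation of these common MTA heuristics, not any vendor's exact algorithm; the 40/20/40 split for the U-shaped model is one widely used convention.

```python
def allocate_credit(touchpoints, model="linear", decay=0.5):
    """Distribute one conversion's credit across ordered touchpoints.

    touchpoints: channel names from first touch to last touch (assumed unique here).
    model: 'linear', 'time_decay', or 'u_shaped'.
    A simplified sketch of common MTA heuristics, not any vendor's exact model.
    """
    n = len(touchpoints)
    if model == "linear":
        weights = [1 / n] * n                          # equal credit to every touch
    elif model == "time_decay":
        raw = [decay ** (n - 1 - i) for i in range(n)]  # later touches weigh more
        total = sum(raw)
        weights = [w / total for w in raw]
    elif model == "u_shaped":
        if n <= 2:
            weights = [1 / n] * n
        else:
            middle = 0.2 / (n - 2)                      # 40% first, 40% last, 20% shared
            weights = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(touchpoints, weights))

credit = allocate_credit(["social", "video", "search"], model="u_shaped")
print(credit)  # {'social': 0.4, 'video': 0.2, 'search': 0.4}
```

In the social-then-search journey described below, a U-shaped model would give the early social ads meaningful credit instead of handing everything to the final search click.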
While last-touch attribution offers ease and speed, it often oversimplifies user behavior. Multi-touch attribution provides deeper insight into the entire customer journey, though it requires a greater investment in tools, data, and compliance. Choosing the right model depends on your marketing goals, available resources, and the level of visibility you need across the sales funnel.
Cannibalizing Organic Growth
When user acquisition (UA) campaigns are not accurately measured for incrementality, they risk cannibalizing organic growth. Cannibalization occurs when users who would have downloaded your game organically instead do so through paid ads.
This re-attributes credit away from organic sources and inflates paid performance metrics. For example:
Paid-to-Organic Cannibalization: Aggressive paid campaigns may reduce organic downloads as users opt for ad-driven links rather than discovering your game through app store searches.
Organic-to-Paid Cannibalization: Over-reliance on app store optimization (ASO) strategies can redirect organic users to paid channels when additional campaigns are launched.
Without measuring incrementality, you risk spending heavily on UA campaigns that merely shift users between acquisition channels without generating true growth.
Incrementality separates new installs driven by ads from those that would’ve happened anyway, ensuring you’re not wasting budget on players who’d convert without campaigns. Below are some common methodologies:
1. A/B Testing & Holdout Groups:
To measure the true impact of your ads, split your audience into two groups: the control group and the treatment group. Randomly assign users to each group, with the treatment group receiving the ads and the control group seeing none. You can calculate the incremental lift by comparing the install rates between the two groups.
A smart way to avoid ad leakage and minimize noise is by using Ghost Bidding, which records "ghost impressions" for the control group, ensuring a more accurate comparison.
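The lift calculation, plus a quick check that the difference is statistically meaningful, can be sketched with a standard two-proportion z-test. The install counts below are hypothetical; only the standard library is used.

```python
import math

def incremental_lift(treat_installs, treat_users, ctrl_installs, ctrl_users):
    """Relative lift and a two-sided two-proportion z-test from a randomized holdout."""
    p_t = treat_installs / treat_users
    p_c = ctrl_installs / ctrl_users
    lift = (p_t - p_c) / p_c                      # relative lift vs. control
    p_pool = (treat_installs + ctrl_installs) / (treat_users + ctrl_users)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / treat_users + 1 / ctrl_users))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal approximation
    return lift, z, p_value

# Hypothetical experiment: 100k users see ads (2,400 install), 100k see none (2,000 install).
lift, z, p = incremental_lift(2_400, 100_000, 2_000, 100_000)
print(f"lift={lift:.1%}  z={z:.2f}  p={p:.4g}")
```

Here the treatment group converts at 2.4% versus 2.0% for the holdout, a 20% relative lift; the z-test confirms a difference that large is very unlikely to be noise at this sample size.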
2. Geo-Testing:
Testing your campaigns in different regions or during various periods helps account for external factors, providing a clearer picture of your marketing's effectiveness. For example, run a campaign in Texas but not in Arizona. Compare regional install rates to account for external factors like seasonal trends.
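A common way to analyze a geo-test like the Texas/Arizona example is a difference-in-differences estimate: use the control region's trend to project what the test region would have done organically. The daily install figures below are hypothetical.

```python
# Difference-in-differences sketch for a geo-test (hypothetical daily averages).
# Texas runs the campaign; Arizona is the untreated control region.

texas_pre, texas_post = 1_000, 1_300      # avg daily installs before / during campaign
arizona_pre, arizona_post = 800, 880      # control region over the same periods

texas_change = texas_post - texas_pre                              # +300/day
# Scale the control region's organic growth rate to the test region's baseline.
expected_organic = texas_pre * (arizona_post / arizona_pre - 1)    # +100/day expected anyway
incremental_daily = texas_change - expected_organic                # ~200/day from the campaign

print(f"Estimated incremental installs/day in Texas: {incremental_daily:.0f}")
```

Because Arizona grew 10% over the same window (seasonality, a featuring event, etc.), only about 200 of Texas's 300 extra daily installs are credited to the campaign.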
3. Time-Based Analysis:
Pausing campaigns can restart the learning phase, particularly if the pause exceeds platform-specific thresholds:
Short pauses (≤7 days): For platforms like Facebook Ads, pauses under 7 days typically avoid a full learning phase reset. However, even brief pauses (e.g., 48 hours) may cause minor algorithm recalibration, potentially leading to temporary performance fluctuations.
Long pauses (>7 days): Campaigns paused beyond this threshold (e.g., Facebook’s 7-day limit) will restart the learning phase, requiring 50+ optimization events to stabilize.
Recommendations:
For 48-hour pauses: Monitor post-resume performance for 3–5 days to distinguish between incremental gains and algorithm relearning effects.
If installs drop steeply post-pause, incrementality is likely high. Minimal changes may indicate overspending, but confirm with post-recovery performance to rule out algorithmic disruption.
To maintain campaign performance, avoid budget changes exceeding 20% and major edits, such as altering targeting or creatives, around a pause; these actions trigger the learning phase regardless of pause duration.
4. Advanced Machine Learning Approaches:
Adjust's InSight, for example, employs advanced machine learning techniques to assess the true impact of marketing campaigns. By analyzing aggregated data, it identifies the portion of conversions directly resulting from advertising, distinguishing them from those that would have occurred naturally. These methods analyze noisy, multi-channel data patterns to isolate incremental impact at a 95% confidence level.
Key Metrics to Track
Incremental Lift (% Increase): This metric indicates the percentage increase in desired actions (like installs or purchases) attributed to your marketing activities. It's calculated by comparing the difference in outcomes between your test and control groups.
Incremental Return on Ad Spend (iROAS): iROAS measures the additional revenue generated for every dollar spent on advertising. A high iROAS signifies that your marketing strategies are effectively driving revenue growth.
Cost per Incremental Install (CPII): CPII reflects the cost of acquiring one additional install through your marketing efforts. Monitoring this metric helps assess the efficiency of your user acquisition strategies.
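These three metrics are simple ratios, sketched below. The sample figures (a $50,000 spend producing 12,000 incremental installs and $120,000 in incremental revenue) are hypothetical.

```python
def incremental_lift_pct(treated_rate, control_rate):
    """Percentage increase in conversions attributable to the campaign."""
    return (treated_rate - control_rate) / control_rate * 100

def iroas(incremental_revenue, ad_spend):
    """Incremental revenue earned per dollar of ad spend."""
    return incremental_revenue / ad_spend

def cpii(ad_spend, incremental_installs):
    """Cost of acquiring one additional install via paid marketing."""
    return ad_spend / incremental_installs

# Hypothetical campaign: $50,000 spend, 12,000 incremental installs, $120,000 incremental revenue.
print(f"Lift:  {incremental_lift_pct(0.024, 0.020):.0f}%")   # 20%
print(f"iROAS: {iroas(120_000, 50_000):.1f}x")               # 2.4x
print(f"CPII:  ${cpii(50_000, 12_000):.2f}")                 # $4.17
```

Note that CPII always exceeds blended CPI, since the denominator excludes installs that would have happened anyway.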
Pro Tip: Combine methodologies for precision. Use geo-testing to validate machine learning predictions or run holdout groups alongside time-based pauses.
By implementing these methodologies and closely monitoring these metrics, you can gain a nuanced understanding of your marketing campaigns' true impact, allowing you to make informed decisions. Equipped with these essential metrics, we can now explore how incrementality plays a strategic role in mobile gaming.
By focusing on actionable strategies, you can optimize marketing spend, prevent organic cannibalization, refine creative and channel performance, and adapt to privacy-first environments, all while ensuring your efforts yield measurable results.
1. Optimizing Marketing Spend
By isolating the impact of your campaigns, you can reallocate budgets toward channels that deliver incremental installs rather than wasting money on users who would have installed your app organically.
For example, if a $50,000 campaign yields 20,000 installs but only 12,000 are incremental, your true cost per incremental install (CPII) is $4.17 rather than the misleading $2.50 CPI calculated from total installs. If your CPI appears low but your CPII reveals inefficiencies, it’s a signal to shift resources to higher-performing platforms or creatives.
Pro Tip: Use geo-based experiments or A/B testing to segment audiences and measure incremental impact. This ensures you’re investing in channels that genuinely expand your user base.
2. Avoiding Organic Cannibalization
One of the biggest challenges in UA campaigns is ensuring that ads don’t simply redirect organic traffic. Incrementality testing helps you prevent this by measuring whether your campaigns add new users or merely shift existing ones. For instance, ads targeting users already searching for your game can cannibalize organic installs without delivering real value.
Pro Tip: Implement holdout groups with zero ad impressions to isolate true campaign impact. Avoid targeting audiences with high organic intent by refining your targeting parameters.
3. Channel and Creative Optimization
Incrementality insights empower you to refine your creative strategies and channel selection:
Creative Refreshes: Test how different ad formats, such as rewarded video ads or playable ads, affect incremental installs. Rewarded ads often drive measurable actions like purchases or extended gameplay, while playable ads increase engagement and attract high-value users.
Targeting Adjustments: Use incrementality data to refine audience segmentation. Personalized campaigns tailored to user preferences (e.g., game genres) can significantly boost incremental impact.
Channel Experimentation: Evaluate the effectiveness of various platforms and ad placements. For example, geo-based experiments or ghost bidding techniques can help determine which channels yield the highest ROI without compromising user privacy.
Pro Tip: Rotate ad creatives every two weeks and experiment with dynamic formats like interactive ads or personalized campaigns tailored to user preferences.
4. Adapting to a Privacy-First Environment
With privacy regulations like Apple’s App Tracking Transparency (ATT) framework limiting access to granular user data, incrementality testing becomes even more critical. By using aggregated data and causal inference methods, you can measure campaign impact while remaining compliant with privacy standards. Geo-based experiments or pseudonymized user-level tests allow you to assess incremental lift without compromising user trust.
Pro Tip: Adopt privacy-friendly methodologies like Geo-Lift Testing or Ghost Ads for precise measurement in open web environments. Focus on anonymized data and aggregated insights for ethical personalization.
With a comprehensive view of strategic adjustments, let's transition into practical steps for implementing incrementality in your user acquisition strategy.
Implementing Incrementality in Your UA Strategy
Supercell's strategy game, Clash of Clans, offers an excellent example of incrementality testing in action. The game's marketing team implemented incrementality tests across multiple platforms to refine their user acquisition strategy. Their approach demonstrates how a mobile game can continue driving growth by focusing on truly incremental users rather than simply chasing volume.
Through strict incrementality testing, Clash of Clans was able to:
Identify which platforms delivered the highest percentage of incremental users.
Optimize bidding strategies based on true incremental cost.
Reallocate the budget from channels with high attribution but low incrementality.
Develop platform-specific creative strategies targeting users most likely to be influenced by ads.
Below, we break down actionable steps to implement incrementality effectively, using insights from the success of Clash of Clans.
Define Clear Objectives
Before diving into incrementality testing, you must establish clear goals for your campaign. Ask yourself: What KPI do I aim to improve?
Install Volume: Measure how many truly new users your campaigns bring in who wouldn't have discovered your game organically.
Revenue Impact: Determine how your advertising affects in-app purchases, subscription signups, or ad revenue generation.
User Engagement: Assess how campaigns influence metrics like session length, retention rate, or level completion.
Return on Ad Spend (ROAS): Calculate the genuine return on each advertising dollar by measuring incremental revenue against campaign costs.
Remember that your objectives should align with your game's current lifecycle stage. For new game launches, incrementality of install volume might be paramount, while established games might focus more on the incrementality of revenue or engagement metrics.
Design Your Test
Your test structure must ensure a clean separation between control and test groups while maintaining statistical validity. For mobile games, you have several methodologies available:
User-Level Randomization: The gold standard for incrementality testing, this approach randomly assigns individual users to test or control groups. This works particularly well for in-app campaigns targeting existing users.
Geo-Based Testing: Divides geographic regions into test and control areas, which is especially useful when individual-level targeting isn't possible. This approach is privacy-friendly as it doesn't rely on individual tracking.
PSA (Public Service Announcement) Tests: Your control group receives non-branded public service ads instead of your game ads, allowing you to maintain similar ad exposure while measuring the impact of your specific creative.
Ghost Ads: This advanced technique records when a control group user would have been shown your ad but doesn't actually display it, creating a perfect comparison group.
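For user-level randomization, a common implementation pattern is to hash a salted user ID into a bucket, which keeps assignments stable without storing per-user state. This is an illustrative sketch; the salt name and 10% holdout size are assumptions you would tune per experiment.

```python
import hashlib

def assign_group(user_id: str, salt: str = "incr-test-1", holdout_pct: float = 0.10) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing the salted user ID keeps assignment stable across sessions, and a
    different salt per experiment yields an independent split of the same users.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform value in [0, 1]
    return "control" if bucket < holdout_pct else "treatment"

# The same user always lands in the same group for a given experiment.
assert assign_group("user-42") == assign_group("user-42")
```

Ad serving then simply skips (or ghost-logs) users whose ID hashes into the control bucket, so the holdout stays unexposed by construction.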
Preventing Data Contamination
To maintain test integrity, implement these safeguards:
Establish Holdout Groups: Create a truly isolated control group with zero exposure to your campaign.
Account for Spillover Effects: Users in control groups might be influenced by users in treatment groups through word-of-mouth or social sharing. Add buffer zones between test regions or exclude users with strong social connections to minimize this effect.
Consider Seasonal Effects: Mobile game installs and purchases often fluctuate due to seasonality. Ensure your test runs long enough to account for these patterns or use predictive modeling to establish expected baseline performance.
Maintain Consistent Measurement: Use the same attribution and analytics tools across both groups to ensure consistent data collection and comparison.
Even minor exposure can skew results and reduce statistical confidence.
Actionable Adjustments
The ultimate goal of incrementality testing is to refine your UA strategy based on findings. Here’s how you can make practical adjustments:
Reallocate Budgets: Shift resources toward channels or campaigns with higher incremental impact.
Optimize Creatives: Tailor ad creatives that resonate more with high-value players based on what worked during testing.
Iterate Continuously: Treat incrementality as an ongoing process rather than a one-time activity. Always test new strategies and refine them based on fresh data.
By focusing on high-value players and reallocating budgets strategically, Clash of Clans achieved a significant boost in incremental installs and player LTV within three months. Now that you have a full spectrum of insights and practical tips, let's summarize how incrementality can transform your marketing strategy.
Incrementality is critical for determining the true impact of marketing efforts, ensuring that ad spend drives genuine growth rather than cannibalizing organic installs. Traditional attribution models often overcredit certain channels, leading to misallocation of resources. Marketers can optimize their budgets, refine their creative strategies, and adapt to privacy-first environments by measuring incrementality.
Ready to optimize your campaigns with data-driven precision? Start your 14-day free trial with Segwise today and unlock the full potential of your user acquisition efforts!
FAQs
What is marketing incrementality? It measures the additional conversions or revenue directly caused by paid campaigns by comparing treated and control groups.
Why use incrementality over last‑touch attribution? It isolates causal impact rather than assigning all credit to the final interaction, reducing misallocation of ad budgets.
Which methods reliably test incrementality? A/B holdout experiments, geographic splits, and strategic campaign pauses are standard methods for quantifying lift.
What key metrics reflect incremental performance? Incremental Lift (%), iROAS, and Cost per Incremental Install (CPII) reveal the actual return on investment from advertising efforts.
How do I prevent organic‑paid cannibalization? Use holdout groups to detect paid‑to‑organic shifts and adjust targeting to attract genuinely new users.