Why More Than Data Doesn’t Sell Incrementality Tests

Aug 28, 2025

Marketing Has Plenty of Fads

Let’s be honest: marketing has plenty of fads. One year it’s “blockchain for ads,” the next it’s “AI-powered MMM SaaS that promises to solve everything except your dating life.” But nothing quite matches the religious fervour with which some people worship incrementality testing.

To some marketers, incrementality testing is the golden idol. The sacred cow. The “final boss” of measurement. They’ll tell you, wide-eyed and caffeinated: “If you don’t do incrementality testing, you don’t really know your marketing impact!”

And you know what? They’re not entirely wrong. Incrementality testing can work. But here’s the catch: it only works if you have the patience of a monk, the budget of a Fortune 500 company, the audience size of TikTok, and the campaign stability of a Swiss watch.

For everyone else? It’s more like chasing unicorns while blindfolded.

And that, dear reader, is why More Than Data doesn’t sell incrementality testing as a packaged service. Not because it’s “bad,” but because in practice, it’s like asking an SME brand to host the Olympics in their backyard.

Think of incrementality testing as A/B testing with a marketing twist. You split your audience into two groups: a control group that sees no change and a treatment group that gets exposed to your campaign. Comparing the two side by side isolates the incremental lift your campaign created, giving you a genuinely causal read on how much extra impact it delivered.

Incrementality Testing 101: What It’s Supposed to Do

In plain English, incrementality testing is asking:

👉 “What sales (or sign-ups, or conversions) happened because of my ad… that would not have happened otherwise?”

Sounds simple, right? Well, technically, the gold standard way to answer this is through a Randomized Controlled Trial (RCT). That means splitting your audience into two groups:

  • Treatment group: people who see your ad.
  • Control group: people who don’t.

Compare the difference, and voila! You’ve got the incremental lift.

In theory, it’s the marketing equivalent of running a clinical trial for a new medicine. In practice, it’s the marketing equivalent of trying to do a clinical trial in your kitchen with your friends as test subjects.
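
To see how little arithmetic is actually involved once the hard part (a clean randomized split) is done, here’s a minimal sketch in Python. Every number in it is invented purely for illustration:

```python
# Minimal sketch of the incremental-lift arithmetic behind an RCT-style test.
# All numbers below are invented for illustration.

treatment_users = 500_000        # users exposed to the ad
control_users = 500_000          # users held out
treatment_conversions = 11_200
control_conversions = 10_000

treatment_rate = treatment_conversions / treatment_users   # 2.24%
control_rate = control_conversions / control_users         # 2.00%

# Absolute lift: extra conversions per exposed user caused by the campaign
absolute_lift = treatment_rate - control_rate

# Relative lift: how much better the exposed group did vs. the baseline
relative_lift = absolute_lift / control_rate

# Incremental conversions: outcomes that would not have happened otherwise
incremental_conversions = absolute_lift * treatment_users

print(f"Absolute lift: {absolute_lift:.4%}")
print(f"Relative lift: {relative_lift:.1%}")
print(f"Incremental conversions: {incremental_conversions:,.0f}")
```

The maths is the easy bit. The expensive bit is earning the right to trust those two conversion rates in the first place, which is exactly where most real-world tests fall over.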

Let’s be honest: patience isn’t a luxury most SMEs or media agencies can afford. With tight budgets, broad audiences to reach, and pressure to keep campaigns running without sudden stops or changes, waiting months for results feels impossible. That’s why the go-to strategy is speed: launch fast, measure fast, keep what works and scrap what doesn’t. This ‘test and learn’ mantra is the most popular one you’ll hear in marketing practice, especially when every campaign dollar has to work extra hard.

Why Incrementality Testing Sounds Better Than It Is

Here’s the problem: incrementality testing has way too many constraints to be practical for most brands. Let’s run through them, marketer-style:

Patience: You Need the Zen of a Buddhist Monk

Incrementality tests don’t work overnight. You need to run campaigns long enough to collect meaningful data. That means several weeks, sometimes a couple of months. But let’s be real: most agencies and brands want results faster than you can say “next client presentation.”

Budget: Bring a Pile of Cash

Running incrementality tests properly requires… well, money. Big money. You need a large enough sample to measure real lift. For SMEs with small budgets, that’s like telling someone they need a private jet just to go grocery shopping.

Audience Size: Go Big or Go Home

If your campaign is targeting a few thousand people, forget it. Incrementality testing needs massive audience reach to detect meaningful differences. It’s like trying to measure a temperature change by pouring one drop of hot water into the ocean.
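
To put rough numbers on the budget and audience-size points, here’s a back-of-the-envelope sample-size sketch using the standard two-proportion formula (two-sided test at 95% confidence, 80% power). The 2% baseline conversion rate and 10% relative lift are assumptions for illustration only:

```python
# Back-of-the-envelope sample size for detecting a conversion-rate lift.
# Standard two-proportion formula; baseline rate and target lift are assumptions.
import math
from scipy.stats import norm

def users_per_group(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Users needed in EACH group to detect the given relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_beta = norm.ppf(power)            # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Assumed numbers: 2% baseline conversion rate, 10% relative lift (2.0% -> 2.2%)
print(users_per_group(0.02, 0.10))   # roughly 80,000 users in EACH group
```

Roughly 80,000 users per group, before you even price the media behind them, is exactly why “a few thousand people” doesn’t cut it.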

Campaign Stability: No Mid-Test Shenanigans

Incrementality tests require that you don’t change your campaign while the test is running. That means no tweaking creative, no adjusting targeting, no “just a quick budget reallocation.” But media agencies and advertisers? They love tweaking. It’s in their DNA. Stability is a unicorn.

Running a marketing test isn’t as simple as plugging in some stats and praying it works. A solid experiment needs real design, discipline, and careful execution. Yet here’s what we see all the time: brands proudly say, “We ran an incrementality test: geo lift, even a user-level lift test!” But dig into the details (how the control group was chosen, whether randomization was done properly, whether the geo markets were actually comparable, whether the DSP configured its ghost ads correctly) and you quickly realize that 99.9% of these tests weren’t run in a strict or proper way. That’s the real problem: investment decisions like “let’s double down on this channel” end up biased because the test was never set up to deliver trustworthy results.
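
One cheap sanity check that many of these so-called tests would fail is a sample ratio mismatch (SRM) check: if the split really was randomized 50/50, the observed group sizes shouldn’t drift far from 50/50. A minimal sketch, with hypothetical counts:

```python
# Sample ratio mismatch (SRM) check: did the "50/50" randomization actually hold?
# Group counts are hypothetical; a tiny p-value means the split is broken and
# any "incrementality" readout built on it should not be trusted.
from scipy.stats import chisquare

treatment_users = 503_112
control_users = 481_006
total = treatment_users + control_users

# Under a true 50/50 split, both groups should be close to total / 2
_, p_value = chisquare([treatment_users, control_users], f_exp=[total / 2, total / 2])

if p_value < 0.001:
    print(f"Sample ratio mismatch (p = {p_value:.2e}): the split looks broken.")
else:
    print(f"No sample ratio mismatch detected (p = {p_value:.3f}).")
```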

Funny But True: How Agencies Get It Wrong

Here’s a scene I’ve seen more than once:

An agency proudly announces, “We’re doing incrementality testing!”

A week later, they’ve changed the targeting, swapped the creative three times, shifted budget between channels, and oh—did I mention they paused the campaign over the weekend because the client had a sudden “gut feeling”?

Congratulations. You’ve just invalidated your incrementality test. The “proof” you’re about to show your client is about as reliable as a horoscope.

And that’s the real reason More Than Data doesn’t package incrementality testing as a service. Because nine times out of ten, agencies simply don’t follow the rules. And when they don’t, the results are misleading at best and dangerous at worst.

Here’s a classic mistake we see all the time: agencies run an incrementality test right through Christmas and New Year. Sounds festive, right? Except it’s a measurement nightmare. Not only is the campaign usually too short to smooth out seasonality, but December and January are basically the worst months to test incrementality. Consumer behavior is all over the place: people shop differently, take annual leave, and generally don’t behave the way they do the rest of the year. Unless your campaign starts well before December and runs long enough to cover January as well, avoid these months for testing altogether. To be clear, nobody is saying cancel your Christmas ads. Run your holiday campaigns; just don’t use them for incrementality testing, because the chaos of the season will skew the results and lead to misleading conclusions.

Why Other Vendors Sell Incrementality Testing Anyway

Let’s be blunt: many MMM and measurement vendors sell incrementality testing because it sounds impressive. They know marketers love buzzwords and “scientific proof.” Slap “incrementality” on a slide deck, sprinkle some AI seasoning, and suddenly you’ve got a product that looks worth $10,000 a month.

But here’s the secret: if the test isn’t designed properly, executed carefully, and stabilized long enough, the results are basically fake. Shiny, expensive fake.

At More Than Data, we’d rather be honest and say:

  • Incrementality testing can be powerful.
  • But it’s rarely practical for SMEs and most agencies.
  • And if you can’t do it properly, it’s better not to do it at all.

The Harsh Truth: Incrementality Testing Isn’t For Everyone

So let’s recap the brutal facts:

  • Patience? Most brands and agencies don’t have it.
  • Budget? Most SMEs don’t have it.
  • Audience size? Many campaigns target audiences that are simply too small to detect a lift.
  • Stability? Agencies can’t resist tinkering.

That means for most real-world marketers, incrementality testing is less “gold standard” and more “gold-plated pipe dream.”

It is also important to recognize that marketing is not the same as medicine. In medical double-blind studies, doctors do not know which patients are receiving the treatment, and patients themselves do not know which medicine is being administered—ensuring unbiased results. In advertising, however, bias is inherent: advertisers choose which audiences to target, and those audiences know exactly which brand is behind the message, complete with logos and creative assets. This unavoidable visibility means marketing experiments can never achieve the same level of neutrality as medical trials, making rigorous design and careful interpretation all the more critical.

Double-blind testing is considered the gold standard for selecting control and treatment groups, whether at the user level or across geo markets. However, GDPR and other privacy regulations have significantly restricted access to user-level data, especially with the deprecation of cookies. This leaves geo-based experimentation as the next option, but it comes with its own set of challenges. Ensuring comparable audiences across markets—matching attributes such as demographics, purchase behaviors, and intent—is extremely difficult. Equally, finding ‘clean’ markets without overlap from other media channels or advertising interruptions is almost impossible in today’s fragmented media environment.
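
To make the geo-matching problem concrete, here’s a stripped-down sketch of the pre-test step where most geo experiments quietly fall apart: finding a control market whose pre-period sales actually track the test market. The market names and weekly figures are invented, and real geo-lift tooling (synthetic controls, multiple covariates, holdout validation) goes far beyond a single correlation:

```python
# Minimal sketch of pre-period geo matching: pick the control market whose
# weekly sales most closely track the candidate test market before the test.
# Market names and sales figures are invented for illustration only.
import numpy as np

pre_period_sales = {
    "Sydney":    np.array([102, 98, 110, 105, 99, 101, 108, 112, 104, 100, 103, 107]),
    "Melbourne": np.array([95, 97, 104, 101, 96, 98, 103, 109, 100, 97, 99, 102]),
    "Brisbane":  np.array([60, 75, 58, 90, 62, 71, 55, 88, 64, 70, 59, 85]),
}

test_market = "Sydney"
candidates = [m for m in pre_period_sales if m != test_market]

def similarity(a, b):
    # Correlation of weekly sales; higher means the two markets move together
    return np.corrcoef(a, b)[0, 1]

best_control = max(
    candidates,
    key=lambda m: similarity(pre_period_sales[test_market], pre_period_sales[m]),
)
print(f"Best-matched control market for {test_market}: {best_control}")
```

Even this toy version shows the issue: if no candidate market tracks your test market closely in the pre-period, your “control” is fiction before the campaign even starts.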

What Incrementality Tests Can Teach Us

To be fair, incrementality tests aren’t useless. When they’re done right (by massive platforms with sufficient data, money, and patience), they can:

  • Prove which channels actually drive conversions.
  • Expose the flaws of last-click attribution.
  • Help reallocate budgets toward true performance.

But let’s face it: unless you’re Google, Meta, or a Fortune 500 with cash to burn, you’re probably not running a statistically bulletproof incrementality test anytime soon.

Marketers often criticize last-click attribution for oversimplifying the customer journey by handing 100% of the credit to the final touchpoint. The strongest evidence-based way to challenge that flawed model is a properly run incrementality test, which measures each channel’s real contribution to business outcomes.

Why More Than Data Says “No Thanks”

At More Than Data, we don’t hate incrementality tests. We just refuse to pretend they’re realistic for most marketers. We don’t want to sell you a dream that turns into a nightmare and a string of wrong decisions.

Instead, we focus on MMM (Marketing Mix Modelling) and practical measurement approaches that:

  • Work with the data you actually have.
  • Don’t require million-dollar budgets.
  • Don’t collapse if your campaign changes mid-flight.
  • Give you results you can trust (and explain to your boss without needing a PhD in statistics).

That’s why we leave incrementality testing where it belongs: in the land of theory, whitepapers, and very patient, very rich advertisers.

Running a proper incrementality test isn’t a job for guesswork. Designing, implementing, monitoring, and explaining one needs a PhD-level statistician, and that statistician can’t work in isolation: they need to sit right beside a marketer who knows campaigns inside out. The catch? Both have to be treated as equals. If marketers pull rank, dismiss the stats, and push their own agenda, the statistician won’t stick around. No expert wants to be sidelined, and sooner or later they’ll walk.

Final Thoughts: Don’t Worship False Marketing Gods

So here’s the takeaway:

Incrementality testing is not a god. It’s not the magic answer to all your measurement prayers. It’s a tool—one that requires a very specific set of conditions to work properly.

If you’re a marketer at an SME or agency, obsessing over incrementality testing might be a distraction. Instead, focus on tools and frameworks that are practical, usable, and reliable.

And if anyone tries to sell you an “AI-powered incrementality testing platform” for $5,000 a month, just remember: sometimes, the emperor has no clothes.

At More Than Data, we’d rather give you solutions that fit your reality, not just your buzzword bingo card.