Why More Than Data Doesn’t Sell Incrementality Tests
Marketing Has Plenty of Fads
Let’s be honest: marketing has plenty of fads. One year it’s “blockchain for ads,” the next it’s “AI-powered MMM SaaS that promises to solve everything except your dating life.” But nothing quite matches the religious fervour with which some people worship incrementality testing.
To some marketers, incrementality testing is the golden idol. The sacred cow. The “final boss” of measurement. They’ll tell you, wide-eyed and caffeinated: “If you don’t do incrementality testing, you don’t really know your marketing impact!”
And you know what? They’re not entirely wrong. Incrementality testing can work. But here’s the catch: it only works if you have the patience of a monk, the budget of a Fortune 500 company, the audience size of TikTok, and the campaign stability of a Swiss watch.
For everyone else? It’s more like chasing unicorns while blindfolded.
And that, dear reader, is why More Than Data doesn’t sell incrementality testing as a packaged service. Not because it’s “bad,” but because in practice, it’s like asking an SME brand to host the Olympics in their backyard.

Incrementality Testing 101: What It’s Supposed to Do
In plain English, incrementality testing is asking:
👉 “What sales (or sign-ups, or conversions) happened because of my ad… that would not have happened otherwise?”
Sounds simple, right? Well, technically, the gold standard way to answer this is through a Randomized Controlled Trial (RCT). That means splitting your audience into two groups:
- Treatment group: people who see your ad.
- Control group: people who don’t.
Compare the difference, and voila! You’ve got the incremental lift.
In theory, it’s the marketing equivalent of running a clinical trial for a new medicine. In practice, it’s the marketing equivalent of trying to do a clinical trial in your kitchen with your friends as test subjects.
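For the mechanically minded, here is a minimal sketch of the treatment-vs-control arithmetic an RCT-style incrementality test boils down to. All numbers are made up for illustration, and the two-proportion z-test is just one common way to check whether the measured lift is distinguishable from noise.

```python
# Minimal sketch of incrementality-test arithmetic (illustrative numbers only).
from statistics import NormalDist

# Hypothetical results from an audience split test
treatment_users, treatment_conversions = 500_000, 6_100   # saw the ad
control_users, control_conversions = 500_000, 5_500       # held out

p_t = treatment_conversions / treatment_users
p_c = control_conversions / control_users

# Incremental lift: conversions that would not have happened otherwise
absolute_lift = p_t - p_c
relative_lift = absolute_lift / p_c
incremental_conversions = absolute_lift * treatment_users

# Two-proportion z-test: is the lift more than random noise?
pooled = (treatment_conversions + control_conversions) / (treatment_users + control_users)
se = (pooled * (1 - pooled) * (1 / treatment_users + 1 / control_users)) ** 0.5
z = absolute_lift / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Relative lift: {relative_lift:.1%}")
print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"p-value: {p_value:.4f}")
```

The arithmetic itself is trivial. The hard part, as the rest of this piece argues, is getting a clean split, a big enough sample, and a campaign that nobody touches while the test runs.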

Why Incrementality Testing Sounds Better Than It Is
Here’s the problem: incrementality testing has way too many constraints to be practical for most brands. Let’s run through them, marketer-style:
Patience: You Need the Zen of a Buddhist Monk
Incrementality tests don’t work overnight. You need to run campaigns long enough to collect meaningful data. That means several weeks, sometimes a couple of months. But let’s be real: most agencies and brands want results faster than you can say “next client presentation.”
Budget: Bring a Pile of Cash
Running incrementality tests properly requires… well, money. Big money. You need a large enough sample to measure real lift. For SMEs with small budgets, that’s like telling someone they need a private jet just to go grocery shopping.
Audience Size: Go Big or Go Home
If your campaign is targeting a few thousand people, forget it. Incrementality testing needs massive audience reach to detect meaningful differences. It’s like trying to measure a temperature change by pouring one drop of hot water into the ocean.
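To put rough numbers on it, here is a sketch of the standard two-proportion sample-size formula. The baseline conversion rate and target lift are illustrative assumptions, not benchmarks, but they show why a few thousand people won’t cut it.

```python
# Rough sketch of why small audiences struggle: sample size needed per group
# to detect a given lift. Baseline rate and lift below are illustrative.
from statistics import NormalDist

def required_sample_per_group(baseline_rate, relative_lift,
                              alpha=0.05, power=0.80):
    """Approximate users needed in EACH group (treatment and control)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# e.g. a 1% baseline conversion rate and a 10% relative lift
print(f"{required_sample_per_group(0.01, 0.10):,.0f} users per group")
```

With a 1% baseline conversion rate and a 10% relative lift, that works out to roughly 160,000 users per group before the test has a realistic chance of detecting anything.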
Campaign Stability: No Mid-Test Shenanigans
Incrementality tests require that you don’t change your campaign while the test is running. That means no tweaking creative, no adjusting targeting, no “just a quick budget reallocation.” But media agencies and advertisers? They love tweaking. It’s in their DNA. Stability is a unicorn.

Funny But True: How Agencies Get It Wrong
Here’s a scene I’ve seen more than once:
An agency proudly announces, “We’re doing incrementality testing!”
A week later, they’ve changed the targeting, swapped the creative three times, shifted budget between channels, and oh—did I mention they paused the campaign over the weekend because the client had a sudden “gut feeling”?
Congratulations. You’ve just invalidated your incrementality test. The “proof” you’re about to show your client is about as reliable as a horoscope.
And that’s the real reason More Than Data doesn’t package incrementality testing as a service. Because nine times out of ten, agencies simply don’t follow the rules. And when they don’t, the results are misleading at best and dangerous at worst.

Why Other Vendors Sell Incrementality Testing Anyway
Let’s be blunt: many MMM and measurement vendors sell incrementality testing because it sounds impressive. They know marketers love buzzwords and “scientific proof.” Slap “incrementality” on a slide deck, sprinkle some AI seasoning, and suddenly you’ve got a product that looks worth $10,000 a month.
But here’s the secret: if the test isn’t designed properly, executed carefully, and stabilized long enough, the results are basically fake. Shiny, expensive fake.
At More Than Data, we’d rather be honest and say:
- Incrementality testing can be powerful.
- But it’s rarely practical for SMEs and most agencies.
- And if you can’t do it properly, it’s better not to do it at all.
The Harsh Truth: Incrementality Testing Isn’t For Everyone
So let’s recap the brutal facts:
- Patience? Most brands and agencies don’t have it.
- Budget? Most SMEs don’t have it.
- Audience size? Many campaigns target relatively small audiences.
- Stability? Agencies can’t resist tinkering.
That means for most real-world marketers, incrementality testing is less “gold standard” and more “gold-plated pipe dream.”
It is also important to recognize that marketing is not the same as medicine. In medical double-blind studies, doctors do not know which patients are receiving the treatment, and patients themselves do not know which medicine is being administered—ensuring unbiased results. In advertising, however, bias is inherent: advertisers choose which audiences to target, and those audiences know exactly which brand is behind the message, complete with logos and creative assets. This unavoidable visibility means marketing experiments can never achieve the same level of neutrality as medical trials, making rigorous design and careful interpretation all the more critical.

What Incrementality Tests Can Teach Us
To be fair, incrementality tests aren’t useless. When they’re done right (by massive platforms with sufficient data, money, and patience), they can:
- Prove which channels actually drive conversions.
- Expose the flaws of last-click attribution.
- Help reallocate budgets toward true performance.
But let’s face it: unless you’re Google, Meta, or a Fortune 500 with cash to burn, you’re probably not running a statistically bulletproof incrementality test anytime soon.

Why More Than Data Says “No Thanks”
At More Than Data, we don’t hate incrementality tests. We just refuse to pretend they’re realistic for most marketers. We don’t want to sell you a dream that turns into a nightmare and leads to the wrong decisions.
Instead, we focus on MMM (Marketing Mix Modelling) and practical measurement approaches that:
- Work with the data you actually have.
- Don’t require million-dollar budgets.
- Don’t collapse if your campaign changes mid-flight.
- Give you results you can trust (and explain to your boss without needing a PhD in statistics).
That’s why we leave incrementality testing where it belongs: in the land of theory, whitepapers, and very patient, very rich advertisers.

Final Thoughts: Don’t Worship False Marketing Gods
So here’s the takeaway:
Incrementality testing is not a god. It’s not the magic answer to all your measurement prayers. It’s a tool—one that requires a very specific set of conditions to work properly.
If you’re a marketer at an SME or agency, obsessing over incrementality testing might be a distraction. Instead, focus on tools and frameworks that are practical, usable, and reliable.
And if anyone tries to sell you an “AI-powered incrementality testing platform” for $5,000 a month, just remember: sometimes, the emperor has no clothes.
At More Than Data, we’d rather give you solutions that fit your reality, not just your buzzword bingo card.