Phillip Keefe
Product Strategy, R&D

A Case for Independent Measurement

It was recently announced that Spotify, a prominent streaming and advertising platform, acquired two podcast attribution companies to give its clients a direct way to evaluate their ad campaigns. Claritas has long advocated for understanding the impact of media investment on consumer behavior, and we’d like to take this opportunity to share the qualities we feel are important in a measurement partner.

Regardless of who you work with, we encourage you to ask questions when evaluating whether a partner’s solutions fit your business, and we’re no exception.

Is your methodology media agnostic?

As consumers, we’re influenced not only by who we are and where and how we live our lives, but also by the media we watch and listen to throughout the day. Advertisers use these channels to reach us and tell us about their products and services, and no single channel tells the entire story. Solutions should not only be compatible across marketing channels, but also be analytically neutral when determining the impact of advertising. It’s one thing to say, “This is how podcast ads performed among a subset of podcast listeners,” but it’s another to say, “This is how display ads performed among a subset of podcast listeners, and that represents the larger targeted display audience.” The first is debatable; the second rests on an assumption that is very difficult to justify.

Data representation matters because groups of consumers behave very differently. This is something measurement partners are responsible for solving. It’s also what distinguishes these partners. To illustrate the importance, take the assumptions above. As an advertiser, I’d want to know the true impact of podcast advertising. That requires knowledge of two groups: the people who heard the podcast ads, and those who didn’t. Knowledge of the latter must include all consumers, whether or not they listen to podcasts. Media is an important factor, but it’s certainly not the only one, or even the most important. Ask Procter & Gamble, “Who is most likely to buy Crest toothpaste tomorrow?” It’s doubtful they’d answer, “Podcast listeners.”

A representative comparison point is needed to understand how media changes behavior. That can’t be done by limiting the comparison to only those within a single channel. How closely do you feel that print magazine readers represent the general population? How about TikTok users? Ironically, to understand any single channel’s audience, you have to understand everyone else. Having data and technology that represent the general population is the only way to do that.
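To make that concrete, here is a minimal sketch using entirely hypothetical numbers. The same exposed audience is compared against a channel-only holdout and against a holdout weighted to represent the general population, and the measured lift differs depending on which comparison point is used.

```python
# Hypothetical illustration: the measured lift of a podcast campaign depends
# on whether the control group represents only podcast listeners or the
# general population. All counts below are invented for the example.

def conversion_rate(conversions, audience_size):
    """Share of an audience that converted."""
    return conversions / audience_size

# Exposed group: people who heard the podcast ads.
exposed_rate = conversion_rate(conversions=600, audience_size=50_000)

# Control A: unexposed podcast listeners only (channel-limited comparison).
channel_control_rate = conversion_rate(conversions=450, audience_size=50_000)

# Control B: an unexposed group weighted to represent the full population,
# including people who never listen to podcasts.
population_control_rate = conversion_rate(conversions=500, audience_size=50_000)

def pct_lift(test, control):
    return (test - control) / control * 100

print(f"Lift vs. channel-only control:   {pct_lift(exposed_rate, channel_control_rate):.1f}%")
print(f"Lift vs. representative control: {pct_lift(exposed_rate, population_control_rate):.1f}%")
# The same campaign produces two different "true" impacts, driven entirely
# by the choice of baseline.
```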

How confident are you in attributing media and isolating what my ads alone drove?

Let’s say I see an ad for exercise equipment today while watching a show on Hulu, then purchase the product a week from now. One might assume the ad prompted my purchase. But maybe I’ve always purchased that brand. Maybe my trainer recommended it. Maybe I’m not actually me, and I don’t mean philosophically. Understanding how media shapes consumer decisions means understanding two things: consumer identity and consumer influences.

Consumer data isn’t always as certain as my example above. Consumers engage with brands across many environments and devices. To complicate matters, some consumer decisions are made at the household level. Spouses consult each other on insurance options. Parents are implored to buy the sweetest of breakfast cereals. Having an identity graph with as much consumer and household representation as possible is critical to understanding how advertising influences household buying decisions. That can’t happen if your data ends with Hulu viewers, to keep with the example, since they may not represent all members of a household, let alone all households in the U.S.

The decision to choose a brand is a result of many things. These influences need to be considered to understand how media alone drove that decision. That starts with good data that represents and differentiates U.S. consumers. As an aside, before we got into the business of digital targeting and measurement, Claritas spent over 40 years developing and patenting a process for creating consumer segments that are unique based on their demographics, lifestyles, and past behaviors. That data works incredibly well for a wide range of consumer behaviors. However, as KPIs become more specialized, so does the data required to measure them. No one knows my behavior on Hulu better than Hulu, including me. If they want to know what drives future viewing habits, no one is positioned better than they are.

Ask your measurement partner how they handle identity resolution and multivariate modeling. They’re not going to have everything – we certainly don’t. Data gaps are okay, but it’s how a partner handles those gaps and reports confidence that’s important. Find out if their technology and methodological considerations are right for your business.

What do my measurement results represent?

Representation, as shown above, keeps your partner media agnostic and separates the influence of advertising from other factors that define us as consumers. To explore this concept further, consider reasons why there may be data gaps. Consumers engage with media and brands at home and at work, on their home Wi-Fi and on public cell towers, in retail stores or on digital properties. Capturing and resolving all of this is nearly impossible.

Measurement shouldn’t begin or end with data collection. Data hygiene is often more important than acquiring additional data. The most analytically advanced methods are invalidated by bad, irrelevant, or improperly prepared data. The same is true with good data and bad processes. And even after hygiene filters a data pool down to quality records, some consumer activities still can’t be resolved or even captured. Weighting and projections, based on what’s known among your sample and what’s known about the population, are how you bridge that gap. Limiting measurement to a single channel creates blind spots in consumer behaviors and leaves measurement results open to more tenuous interpretation.
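As a rough illustration of what weighting and projection mean in practice, here is a hypothetical sketch. The segment names, sample counts, and population shares are invented for the example; the mechanics of re-weighting observed rates by known population proportions before projecting a total are the point.

```python
# Hypothetical sketch of weighting and projection: behavior observed in a
# sample is re-weighted by known population proportions before reporting.
# Segment names and all figures are illustrative, not real data.

sample = {
    # segment: (people observed in the sample, conversions observed)
    "urban_streamers":   (6_000, 180),
    "suburban_families": (3_000, 60),
    "rural_households":  (1_000, 10),
}

# Known share of each segment in the total population (e.g., from census data).
population_share = {
    "urban_streamers":   0.30,
    "suburban_families": 0.45,
    "rural_households":  0.25,
}

total_population = 10_000_000  # hypothetical market size

projected_conversions = 0.0
for segment, (observed, converted) in sample.items():
    segment_rate = converted / observed                      # rate measured in the sample
    segment_people = population_share[segment] * total_population
    projected_conversions += segment_rate * segment_people   # project to the population

print(f"Projected conversions in the full market: {projected_conversions:,.0f}")
# Without the weights, over-represented segments (urban_streamers here)
# would dominate the estimate and skew the result.
```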

With that, how should you interpret a 20% lift? What change is being represented? Don’t be satisfied with “consumer activities” as an answer. Find out which consumer base is being represented, because a 20% lift in one channel may not indicate better performance than a 10% lift in another. Percent lifts are relative to a baseline, and baseline values are difficult to align across companies. Ask for absolute values. There shouldn’t be any ambiguity in a partner stating, “Your podcast campaign generated a total of 1,000 incremental conversions,” if that partner is representing the total population.
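A quick, hypothetical bit of arithmetic shows why. The figures below are invented, but they illustrate how a larger percent lift on a smaller baseline can translate into far fewer absolute conversions.

```python
# Hypothetical arithmetic: why a percent lift alone can mislead.
# Baselines and lifts are made up for illustration.

def incremental_conversions(baseline_conversions, pct_lift):
    """Absolute conversions added on top of the baseline."""
    return baseline_conversions * pct_lift

podcast = {"baseline": 2_000, "lift": 0.20}   # 20% lift on a small baseline
display = {"baseline": 12_000, "lift": 0.10}  # 10% lift on a large baseline

for channel, d in (("podcast", podcast), ("display", display)):
    added = incremental_conversions(d["baseline"], d["lift"])
    print(f"{channel}: {d['lift']:.0%} lift -> {added:,.0f} incremental conversions")

# podcast: 20% lift -> 400 incremental conversions
# display: 10% lift -> 1,200 incremental conversions
# The smaller percent lift produced three times the absolute impact.
```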

Are your results descriptive or prescriptive?

A 20% lift may sound good to most advertisers, but what if 30% were possible? What if 1,000 incremental conversions could have been generated with 15% fewer impressions? Measurement shouldn’t conclude with an overall performance metric. It should include ways to improve future investment. Sometimes that means shifting impressions to an audience that responds more favorably than others; other times it means investing in another media channel. Make sure your measurement partner is providing not only relative measures of ad performance, but also recommendations that are neutral across your media plan.
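To illustrate one form that prescriptive output could take, here is a hypothetical sketch of reallocating a fixed impression budget toward more responsive audiences. The audience names and response rates are invented, and a real recommendation would also test whether those rates hold as volume shifts.

```python
# Hypothetical sketch of a prescriptive step: shift a fixed impression budget
# toward the audiences that respond best. Audiences and rates are illustrative.

audiences = {
    # audience: measured incremental conversions per 1,000 impressions
    "fitness_enthusiasts": 1.8,
    "general_adults":      0.6,
    "bargain_hunters":     1.1,
}

total_impressions = 5_000_000
current_plan = {a: total_impressions / len(audiences) for a in audiences}  # even split

# Naive reallocation: weight impressions by each audience's response rate.
# Assumes response rates hold as volume shifts, which a real model would test.
total_rate = sum(audiences.values())
proposed_plan = {a: total_impressions * r / total_rate for a, r in audiences.items()}

def expected_conversions(plan):
    return sum(plan[a] * audiences[a] / 1_000 for a in plan)

print(f"Current plan:  {expected_conversions(current_plan):,.0f} expected conversions")
print(f"Proposed plan: {expected_conversions(proposed_plan):,.0f} expected conversions")
```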

Independence in measurement is something worth considering. Talk to your measurement partners and ask the necessary questions. After all, it’s your company’s bottom line at stake.

Are you ready to know more?