Anindya Datta, Ph.D.
Founder, CEO, and Chairman, Mobilewalla
March 4, 2015
Measuring the effectiveness of marketing spend in general, and advertising spend in particular, is an
issue of great importance to CMOs and CFOs. Much work has been done in this area, often referred
to as Marketing Performance Measurement (MPM) and Advertising Performance Measurement
(APM), respectively [1,2]. At a conceptual level, both MPM and APM consist of two related activities:
(a) defining measurement metrics that depict the performance of campaigns, and (b) linking these
campaign performance metrics to business outcomes.
In both traditional (e.g., print, TV) and digital (desktop web) marketing/advertising, such metrics are well understood. For instance, in digital media, common advertising performance metrics include measures such as Reach, Frequency, CTR (Click-Through Rate) and CVR (Conversion Rate). Upon completion of a campaign, the "spender", i.e., the brand, receives a report detailing the media properties that were bought and how each individual property performed. Combined with readily available media audience data (such as from Nielsen and Comscore), the brand is then able to assess the effectiveness of the campaign spend. In other words, for a campaign seeking to reach women 30-45, if the final results indicate that 30% of its budget was spent reaching consumers outside of this target audience, the campaign may be assumed to have been about 70% effective. Over time, and across multiple campaigns, these metrics are used to create benchmarks that are then used to compare both campaigns and associated vendors, thus ensuring accountability.
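The back-of-the-envelope calculations above can be sketched as follows. The function names and dollar figures are illustrative only; they are not part of any industry-standard API, but they show how CTR, CVR, and the targeting-effectiveness figure from the women 30-45 example are computed.

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks per impression served."""
    return clicks / impressions

def cvr(conversions: int, clicks: int) -> float:
    """Conversion rate: conversions per click."""
    return conversions / clicks

def targeting_effectiveness(spend_in_target: float, total_spend: float) -> float:
    """Share of the campaign budget spent reaching the intended audience."""
    return spend_in_target / total_spend

# Hypothetical campaign targeting women 30-45: 30% of a $100,000
# budget was spent outside the target audience.
effectiveness = targeting_effectiveness(70_000, 100_000)
print(f"{effectiveness:.0%} effective")  # prints "70% effective"
```

In practice, the brand derives `spend_in_target` by joining the network's delivery report against third-party audience data; as the rest of this piece argues, it is precisely those inputs that are missing in mobile.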
Unfortunately, the state of affairs described above doesn't play out the same way when advertising on mobile media, the fastest-growing digital media segment. It turns out that a remarkably different set of practices has evolved around mobile display advertising that greatly impedes the performance assessment of mobile campaigns. Moreover, certain data that are standard references in every other medium (including the desktop web) are hard to come by in the mobile context, making routine tasks like audience verification virtually impossible. To appreciate these differences, consider the following.
Much mobile display advertising is bought "blind," meaning the ad network does not offer details of the media purchased in the course of a campaign. Such a systemic lack of transparency, where the advertiser and its agency have no knowledge of where ad impressions were served, prevails only in mobile. When asked to explicate, ad networks typically offer the explanation that contractual obligations with the publishers in their network prevent them from disclosing purchase details. While that may well be the reason, it is also interesting to note that these publisher contracts typically require the ad network to commit to a threshold number of impressions (and, by extension, threshold dollars) across a publisher's media inventory.

For instance, say the Big Ad Network (BAN) memorializes an agreement with Renowned Publisher (RP). Assume that RP's mobile inventory consists of a set of 100 apps, of which two are popular (meaning that they are in the top 10 of their respective app store categories) and the rest are not. This distribution is typical of most successful publishers. The reason that ad networks like BAN execute such deals is usually to gain "preferential" access to supply from a super-popular app or web site (e.g., Candy Crush Saga, or Weather.com). However, once the agreement is in place, BAN has obtained not only the right to monetize inventory from RP's two world-beating apps; it is also expected to serve a certain number of ad impressions on the other 98 apps in RP's portfolio, which usually possess few attributes to attract advertisers on their own. Consequently, when BAN executes a campaign, in order to fulfill these expectations (and enjoy continued goodwill with RP), it seeks to deliver impressions across these "less attractive" media. Clearly, being blind is helpful under these circumstances: BAN does not have to report the tens of millions of impressions it delivered to the 98 obscure apps, which, if the client were made aware of them, might prompt it to ask "hard" questions.
The point here is that in mobile, blindness, and the resulting opacity, turns out to be quite useful to networks dependent upon large publishers with diverse portfolios of both popular and obscure media properties. The effect of such blindness, of course, is that brands, or their agencies, have no idea where their impressions were served, and are therefore totally in the dark about how effectively their budgets were managed.
Audience measurement is key to brand advertising in every media environment. As a result, media-specific audience measurement vendors have established themselves worldwide: Nielsen for TV, Comscore for the desktop web, and Arbitron for radio, for example. In mobile, at-scale audience measurement has been a big problem, especially for mobile apps, where 80% of mobile display ad impressions are served [3,4,5]. The lack of audience data makes it hard to target specific audience segments and, even more importantly, makes it impossible to verify the targeting effectiveness of a mobile campaign.
Clearly, without transparency and effective audience measurement, mobile advertising will have a hard time achieving the levels of accountability common in traditional media advertising. That, in turn, will make brand advertisers wary of the medium, impeding growth at the scale that has been widely predicted but not yet achieved.