
Critical Look: Personal People Meters

Joe Pilotta

The advertising industry is buzzing about Personal People Meters (PPMs). These devices, which measure media consumption, promise more objective ratings that, the argument goes, will raise the value of radio advertising. Consequently, some say PPMs may help stem the flow of ad dollars to new media.


This essay does not set out to determine whether agencies will make good on their promises regarding PPMs, or whether PPMs are better than the diary method (another way of measuring media consumption). Instead, my interest is to show the iMedia audience how a communication analysis can inform our decisions about PPMs: where they fall short and where they could be rehabilitated.


The objective of the PPM is to measure the listening audience of radio programs quantitatively and neutrally: an inaudible code embedded in the broadcast signal is detected and decoded by a receiver worn on the body. The problem: from a communication research point of view, how should we view PPMs?


Basic communication sensing


Let's start with a very basic premise of all communication: there are distinctions between seeing and watching, speaking to and speaking with, and hearing versus listening (which I'll discuss later). These are quite distinct processes with distinct effects, and they are at the heart and soul of the communication enterprise. We can easily demonstrate the distinctions by example: seeing TV vs. watching TV; speaking to one's self (monologue) vs. speaking with somebody else (dialogue); hearing a sound vs. listening to it. With these distinctions in mind, let's look at PPMs.


The PPM is a technology designed for a purpose. It is not unique, however; it belongs to a whole network of sensory technologies (for example, smoke detectors, GPS, home security systems and ankle monitors). In this case, the PPM senses the encoded signals/sounds of radio programming. All of these technologies are good to excellent at "sensing" -- picking out or detecting stimuli: motion, smoke, sound, heat, et cetera.


What are PPMs supposed to do?


What claims are being made about PPM sensing activity? At the most basic level, the purpose of the PPM is to discriminate sound (let's call it a radio program or station) from "noise." It should be noted that the PPM accurately discriminates 59 percent of the sound (see the RAJAR validation test, Feb. 14, 2005, reprinted in Industry Commercialization Viability Analysis of the Personal People Meter, Interim Report as of July 2005, RAB PPM Task Force and RAB Board of Directors). Now, how does this square with a communication analysis? The PPM is a fine example of the transmission model of communication.


Transmission model of communication


I gave an introduction to this model in a previous iMedia article; the PPM context highlights additional insights. Shannon and Weaver set the model forth in 1949 in The Mathematical Theory of Communication, defining communication as "all procedures by which one mind may affect another." The model itself was designed for purposes of electronic engineering. It is essentially a linear, left-to-right, one-way model of communication, and it led to technical improvements in message transmission. The transmission model of communication looks like the following:

information source --> transmitter (encoder) --> channel (+ noise) --> receiver (decoder) --> destination
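
To make the one-way shape of the model concrete, here is a minimal sketch in Python (the function names and the noise rate are mine, not Shannon and Weaver's). The message moves strictly left to right; noise corrupts it in the channel; nothing ever flows back.

import random

def transmitter(message: str) -> list[str]:
    """Encode the message into a signal (here, a list of characters)."""
    return list(message)

def channel(signal: list[str], noise_rate: float = 0.1) -> list[str]:
    """A noisy channel: some symbols are corrupted in transit."""
    return ["?" if random.random() < noise_rate else s for s in signal]

def receiver(received: list[str]) -> str:
    """Decode the received signal back into a message for the destination."""
    return "".join(received)

# Strictly one-way, left to right: no feedback path from destination to source.
print(receiver(channel(transmitter("hello, listener"))))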



While this model had a relatively long career into the '70s, it was finally put to rest in the '80s. Yet here it appears again, as the PPM.


Definition of hearing/listening


The previously cited report summarizes Arbitron's definition of hearing/listening: "The definition of listening both in the diary and with PPM according to Arbitron is the ability to hear a station, not the conscious choice to listen. Their stance is that the PPM goal of capturing audible listening is consistent with this and therefore the issue of unintended listening is not a major issue."


Some in the agency world would have us believe there is something magical about electronic data collection. They claim self-reporting is less accurate, and that its shortcomings will be overcome with technology. This line of thinking requires you to assume that PPMs are more accurate, and that the electronic transmission of encoded signals to a device overcomes recall limitations. That's a big leap, and it's easy to see why radio owners are reluctant to embrace the technology.


The PPM picks up unintended sound and interprets all of it as listening. But not all hearing is listening: listening refers to something one attends to, whether the radio is in the foreground or the background, and the PPM cannot discriminate the difference. To the individual in the household, the sound could be all listening, part "noise" or all "noise."


PPMs interpret all hearing as listening. However, it would be equally logical to assume that all hearing is "noise," since no human discrimination is involved in separating listening from noise. For the PPM, all coded sound is a signal arriving at a receiving device, where it is transformed into an interpretation of what the signal means, an interpretation pre-defined as listening. With an accuracy of 59 percent, PPMs "hear" with only one ear.
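
To see what this means in practice, here is a toy simulation. The 59 percent figure is from the validation test cited above; the attention rate is purely my assumption. A meter that labels every detection as listening reports attention it has no way of observing:

import random
random.seed(0)

DETECTION_RATE = 0.59   # discrimination accuracy from the RAJAR validation test
P_ATTENDING = 0.4       # invented: share of audible exposure actually attended to

exposures = 10_000
detected = 0
attended_and_detected = 0
for _ in range(exposures):
    attending = random.random() < P_ATTENDING    # ground truth the meter never sees
    if random.random() < DETECTION_RATE:         # meter detects the encoded sound
        detected += 1
        attended_and_detected += attending

# The meter reports every detection as "listening"...
print(f"reported 'listening': {detected} of {exposures} exposures")
# ...but only a fraction of detections were actually attended to.
print(f"detections actually attended to: {attended_and_detected / detected:.0%}")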


Conclusion


Even with the technical perfectibility of the PPM or the best sampling technique imaginable, it is impossible for PPMs to discriminate listening from noise, because that discrimination requires human differentiation. Therefore, at present, the PPM as an objective rating tool is an impossibility.


While media buyers may think PPMs answer their need to buy radio more efficiently, do they really answer the needs of marketers who are moving to a consumer-centric model in order to increase ROI? Will this technology really stem the flow of radio dollars to new media? Perhaps radio would be better served by a consumer-consumption component to complement the technology.


Additional resources:


Read Arbitron's comment on this article.


Joe Pilotta is vice president of BIGresearch and a professor at Ohio State University, School of Communications. He holds two Ph.D.s, from Ohio University (communication research) and from the University of Toronto, Canada (sociology). He is a member of the Word of Mouth Marketing Association (WOMMA) Standards and Metrics Committee and the ARF Long Term Advertising Effects Committee.

Click it where the sun don't shine
I had a client who loved search for exactly the reason search is lovable: CPC pricing is super-efficient and leads are all qualified, so we assume the ROI must be high. But we dug into the client's search results and found that the first-page bounce rate was more than 90 percent. (So much for pricing efficiency.) That's not search's fault; search was doing its job. The paid search was driving to the site's homepage, and a lot of different, very specific terms and short blurbs were driving consumers there. Essentially, the paid search results were making a promise the homepage couldn't keep. So don't just look at clicks -- look at what happens to those clicks once they land.
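
The back-of-the-envelope math on that client (with rounded, illustrative numbers) shows what a 90 percent bounce rate does to pricing efficiency:

# All numbers invented for illustration.
cpc = 1.50            # price paid per click
bounce_rate = 0.90    # share of paid clicks that leave from the landing page

cost_per_engaged_visit = cpc / (1 - bounce_rate)
print(f"nominal CPC: ${cpc:.2f}")                                              # $1.50
print(f"effective cost per non-bouncing visit: ${cost_per_engaged_visit:.2f}")  # $15.00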


Of course, there's another issue with clicks as a metric: click fraud. It just keeps getting worse because it's so easy for crooks to automate fake clicks. Search isn't the only channel facing this problem; CPC display is as well. Look for pricing models that are more resistant to game playing.


Success you can't see
An automaker wants to reinvigorate its brand and move metal. The agency plans a great campaign using data tools like BlueKai to target "intenders" -- that is, people planning to buy a car in the next six months. The agency knows that for a brand campaign, click-through is a poor metric, but there needs to be some measurement that shows movement down the purchase funnel. The agency agrees with the client that a visit to the automaker's site to either configure a car or use the dealer locator will suffice, so they set up view-through tags to see how many people who have seen the ad wind up on the automaker's site.


This is guaranteed to undercount the effectiveness of the campaign. The creative might have done a wonderful job of making the audience want the car, but there are a lot of ways to move down the funnel without ever visiting the automaker's owned-and-operated site. Going to Edmunds or Cars.com would furnish the buyer the same (or more) information without feeling as pushy and limited as the manufacturer's own site; dealers can be contacted and quotes obtained through those sites. But people who take these incredibly positive actions will never hit the view-through meter, and the money spent exposing them will often go into the wasted bucket by default.


Instead, consider spending some extra money to do a bit of behavioral research with a panel provider to see whether people exposed to the campaign searched for the brand, visited endemic sites, or took other actions inside and outside the manufacturer's site at higher rates than the non-exposed audience. Spending more money to find out if the money you've spent worked is not wasting money -- it is being strategic.
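
Here is a sketch of what that panel read might look like, with hypothetical field names and toy data; the idea is simply to compare downstream action rates between exposed panelists and a matched non-exposed group:

# Hypothetical field names and toy data; a real read would use a matched panel.
def action_rate(panel, action):
    """Share of panelists whose logged actions include the given action."""
    return sum(1 for p in panel if action in p["actions"]) / len(panel)

exposed = [{"actions": {"brand_search", "edmunds_visit"}},
           {"actions": {"dealer_quote"}},
           {"actions": set()}]
control = [{"actions": set()},
           {"actions": {"brand_search"}},
           {"actions": set()}]

for action in ("brand_search", "edmunds_visit", "dealer_quote"):
    e, c = action_rate(exposed, action), action_rate(control, action)
    print(f"{action}: exposed {e:.0%} vs. non-exposed {c:.0%}")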

Reach without reaching
CPM is a great way of getting pure reach at enormous scale. Advertisers accustomed to TV and print often set reach goals (usually expressed as GRPs) for a good portion of the budget. It makes sense: it's cheap, and with a frequency goal there's an expectation that you'll at least imprint the audience with a simple message or value proposition. But there's one problem with pure reach as a metric: digital ads are more ephemeral and harder to audit than those in print magazines or on reputable broadcast networks.
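
For readers newer to reach math, GRPs are just reach multiplied by average frequency. A quick worked example with invented numbers:

# Invented numbers. GRPs = reach (% of target) x average frequency.
target_audience = 2_000_000
unique_reached = 500_000
total_impressions = 2_000_000

reach_pct = 100 * unique_reached / target_audience        # 25.0
avg_frequency = total_impressions / unique_reached        # 4.0
grps = reach_pct * avg_frequency                          # 100.0
print(f"reach {reach_pct:.0f}%, frequency {avg_frequency:.1f}, GRPs {grps:.0f}")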


We've seen shenanigans every bit as repugnant as click fraud. Sites will put up impression-burner pages laden with dozens of ads -- no way for a viewer to crack that clutter. Sites will sometimes put ad tags on redirect pages: the audience never sees the ad, but the ad server doesn't know that. And you've seen sites that require you to click through page refresh after page refresh to consume an article. That's setting up a bad user experience in order to burn impressions -- and impressions on pages with low natural time spent are not going to imprint a message on anyone.


One particularly nefarious charade we uncovered recently was a site that served itself into an ad banner. What's that? By playing this clever but dishonest game, the site lived inside itself, counting ad calls as impressions even though they were invisible. Imagine putting two mirrors facing each other and watching iterations of reality spin off infinitely. It's like that. Worse, each iteration was reporting to comScore, so the site's traffic looked phenomenal. When Joestoenails.com has Top 50 traffic, you haven't spotted a trend -- you've spotted a game.


When looking for reach, it's important that you validate that your impressions are actually hitting eyeballs and staying in front of them for at least a little while.
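
Operationally, that validation can be as simple as filtering impressions on whether the ad actually rendered and how long it stayed in view. A sketch; the one-second threshold is my assumption, not a published standard:

# The in-view threshold below is an assumption, not an industry mandate.
MIN_IN_VIEW_SECONDS = 1.0

impressions = [
    {"rendered": True,  "seconds_in_view": 4.2},   # legitimate
    {"rendered": False, "seconds_in_view": 0.0},   # ad tag on a redirect page
    {"rendered": True,  "seconds_in_view": 0.3},   # refresh-churn page
]

validated = [i for i in impressions
             if i["rendered"] and i["seconds_in_view"] >= MIN_IN_VIEW_SECONDS]
print(f"{len(validated)} of {len(impressions)} impressions validated")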
 
Dilution by abstraction
You're an AMD, and you've found a great new model for delivering results to your client. However, the client has worked hard to develop a media mix model and holds you accountable to eCPM pricing standards. Here's the problem: eCPM rolls up a lot of disparate tactics, each with its own unique metrics, and tells you how much you spent on them without assessing the relative value of what you bought.


You know what some buyers do to bring down their bottom-line eCPM? They buy the things that they know will be effective, and then they buy a lot of cheap, mass-impression bulk to bring down the eCPM. In order to appear to be saving money, they waste money. Should the clients be happy about that? Well, no, but they seem to feel good about telling their bosses how low they drove the eCPM. And you can fully understand why the buyer does it -- it's the only way they can deliver what they actually believe in while doing right by the wrong measure they're held to.
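
The arithmetic of the dilution game is simple: eCPM is total cost divided by impressions in thousands, so a pile of cheap bulk drags the blended number down while adding nothing but waste. A sketch with invented numbers:

# Invented numbers. eCPM = total cost / (total impressions / 1,000).
def ecpm(cost, impressions):
    return cost / (impressions / 1000)

good_cost, good_imps = 50_000, 2_000_000     # placements the buyer believes in
bulk_cost, bulk_imps = 5_000, 20_000_000     # cheap mass impressions added to dilute

print(f"effective buy alone: ${ecpm(good_cost, good_imps):.2f} eCPM")    # $25.00
blended = ecpm(good_cost + bulk_cost, good_imps + bulk_imps)
print(f"blended with bulk:   ${blended:.2f} eCPM")                       # $2.50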


Rigidly looking at cost as the success metric ensures that you are over-spending. Annoyingly ironic.

Correlation does not imply causation
There are a few reasons to buy pre-roll. It's creatively easy, its familiarity to the broadcast TV structure appeals to less-adventurous clients, and you know you're getting complete views. Pre-roll generally gets about an 80 percent completion rate, which is great, right? Well, it would be if completion rate were actually a proxy for recall. But there is no evidence that it is.


Even more basically, recall might not really be a valuable goal in the first place. It's measurable through surveys, but does recall get you any closer to intent to purchase? Creating recall seems important because buyers of brands are highly aware of the brands they buy. But is it that recall leads to purchase, or merely that purchasers have high recall? There is, to my knowledge, no research that shows causation (rather than correlation) between recall and intent. That's the Rosser Reeves fallacy.


A timely conclusion
In case it appears that I'm suggesting the above metrics are meaningless, I'm not. I'm suggesting that they are meaningful, but only in the right context and when implemented honorably. Context shifts rather a lot from campaign to campaign, and honor is too often fleeting in this industry.


Is there a metric broad enough to be simply applied across lots of different types of campaigns? Well, I'm a big fan of engagement (though I think engagement rate is a poor metric because unless you're qualifying engagement rigorously, it's too easy to game). Engagement can often be measured in time spent. Of course, not all time is spent the same way, and some creatives, who know that success will be measured on time, play games like hiding the close button, which gets good results on the spreadsheet without helping the brand.


Assuming that your materials adhere to honorable best practices, time is a great metric. We've seen studies that indicate that increases in time spent correlate with increases in brand lift; the more time, the greater the lift (up to, I'm sure, a point of diminishing returns). With the right qualifiers and more research into how different modes of spending time should be valued, time might be the metric that can best normalize across ad types and publishers while being more immune to gamesmanship.
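
What might "the right qualifiers" look like? A sketch, with thresholds that are purely my assumptions: count time toward the metric only when basic honesty checks pass.

# Thresholds are assumptions, not standards.
MIN_SECONDS = 3      # below this, the exposure likely registered nothing
MAX_SECONDS = 120    # cap so abandoned tabs don't inflate the total

def qualified_time(sessions):
    """Sum session time, counting only honestly obtained, plausible engagement."""
    total = 0
    for s in sessions:
        if not s["close_button_visible"]:   # discard hide-the-close-button games
            continue
        if s["seconds"] < MIN_SECONDS:
            continue
        total += min(s["seconds"], MAX_SECONDS)
    return total

sessions = [
    {"seconds": 45,  "close_button_visible": True},
    {"seconds": 300, "close_button_visible": True},    # capped at 120
    {"seconds": 90,  "close_button_visible": False},   # gamed; doesn't count
]
print(f"qualified time: {qualified_time(sessions)}s")  # 45 + 120 = 165s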


Matt Rosenberg is VP of solutions at VideoEgg.


