Commentary

Can The ANA Drive Equitable Cross-Media Measurement?

  • by Tony Jarvis, Op-Ed Contributor, September 24, 2020

The third day of the ARF's virtual AudienceXScience Conference Wednesday highlighted the Association of National Advertisers' (ANA) launch of a cross-media measurement initiative that will be based on a common currency across all platforms; an assessment of attribution approaches that demonstrated extensive inconsistencies (and, consequently, different outcome measures); solid research on the value of attention as an ad metric; the value of ad position in commercial pods; and the damaging effects of ad clutter and long commercial breaks.  All this plus insights from two industry CEOs.

Reed Cundiff, CEO of Kantar North America, recommended combined research approaches to explore current realities in close to real-time ("don't be a turtle"), citing the digital transition to "a more dynamic research industry."


“It’s not an either or,” he said, “but and.”

Bill Livek, CEO of Comscore, underlined the importance of determining valid unduplicated reach across media platforms for any ad campaign, which would also enable control of impression frequency; frequency currently lacks meaningful controls, notably against heavy viewers of any one platform.  Comscore will be offering this reach/frequency metric early next year.  With consumers spending more time in front of digital screens (streaming, online shopping, etc.), he sees a resurgence of "content is king" and a new era for TV/video.
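
How that combined metric might be computed is simple to sketch; the hard part is the identity spine beneath it. A minimal illustration in Python, assuming a unified cross-platform person ID and toy data (hypothetical, not Comscore's actual method):

```python
from collections import Counter

# Hypothetical exposure log: one row per ad impression, tagged with a person ID.
# A unified cross-platform person ID is assumed here; building it is the hard part.
exposures = [
    ("p1", "linear_tv"), ("p1", "ctv"), ("p1", "linear_tv"),
    ("p2", "ctv"),
    ("p3", "mobile"), ("p3", "mobile"), ("p3", "mobile"),
]

PANEL_SIZE = 5  # persons in the toy measurement panel

freq = Counter(pid for pid, _ in exposures)            # impressions per person
reach = len(freq) / PANEL_SIZE                         # unduplicated reach
avg_freq = sum(freq.values()) / len(freq)              # average frequency among those reached
over_exposed = {p for p, n in freq.items() if n >= 3}  # candidates for a frequency cap

print(f"reach={reach:.0%}  avg_freq={avg_freq:.2f}  over-exposed={over_exposed}")
```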

Artie Bulgrin, project manager at the ANA and formerly head of research at ESPN, formally announced the ANA's initiative to develop cross-media measurement.  It will be based on a common currency, to be determined, that will enable true campaign reach and frequency to be analyzed.

There was a clue, however, about the common currency that may be used: Bulgrin’s slides indicated, “To create a marketer-centric cross-media measurement system for advertising that benefits the entire industry by providing complete measures of all ad exposures, …”

Let me repeat: ad exposures.

His next slide, regrettably, referred to that perennially nebulous industry term, "impressions," albeit ones that will supposedly be "reliable."

A non sequitur?  So, as I asked during the session, is ad/content “exposure” the pure common cross-media metric we are all looking for?  Or will it be “attention” per the Brits?  

The ANA's final approach will conform to the international principles recently established by the World Federation of Advertisers in its technical design proposal and will be compliant with Media Rating Council standards.  It will also respect consumer privacy regarding the extensive, complex database that will feed the measurement methodology.

This Gordian knot of cross-media measurement, harmonization and comparability has been wrestled with, from both the technical and business perspectives, for my entire career.  The blue-ribbon panel of discussants did not respond to the question: "Will you establish a JIC (a joint-industry-committee-owned measurement service like those established in other markets worldwide), perhaps out of the ARF's Coalition for Innovative Media Measurement (CIMM), based on the ANA's final cross-media measurement specifications, such that all the major global video measurement companies can fairly bid on the execution to those specifications?"

Sequent Partners, in collaboration with Janus Strategy & Insights, offered an eye-opening, if not surprising, presentation based on a thorough review of TV attribution models and their data inputs.  Attribution modeling is a "hairy" technique in the best of circumstances.  The outcome measures (ROI by platform) from the various models' analyses of the same campaigns were substantially different in most cases.

It was posited that this was due to the inconsistencies across the array of input data, notably TV tuning data, which was incorrectly referred to as "exposure" and/or "viewing" data.  Not having real exposure data is surely one fundamental reason for the different results across models and their lack of precision.

This important assessment was sponsored by CIMM, and the full paper is available on its website (https://cimm-us.org/).  It will certainly provide a critical framework for the evaluation and improvement of TV attribution models.

Duane Varan, CEO of MediaScience, never disappoints.  With Nicole Hartnett, senior scientist at the Ehrenberg-Bass Institute, he revealed "multiple dimensions of attention" based on very sophisticated lab testing of video ads.  Measuring attention is extremely difficult and requires 12 different technological techniques that offer a relatively high degree of accuracy.  Measuring inattention is much, much easier.  The study, which will be replicated in a simulated real viewing environment, identified significant improvement in key brand measures for high-attention ads.  So, attention matters, but it is multi-dimensional.

Nielsen data scientists Kay Ricci and Leah Christian reminded us that the first position in a commercial pod, and placement in the last pod in a program, will generate the highest commercial ratings.  At the household level (essentially set-tuning level, regrettably), they were able to compare the exact sub-minute commercial rating with the average pod-minute commercial rating, which revealed these findings.

While their work must be commended, it should be noted that, per the ARF Media Model, tuning does not necessarily produce an impression (opportunity to see), and an impression does not necessarily produce an exposure or contact.
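
The comparison itself is straightforward once tuning data exists at the second level. A toy sketch of the idea, with illustrative names and data rather than Nielsen's implementation:

```python
# Toy second-level household tuning curve across a three-minute pod: audience
# erodes as the break progresses, so earlier positions rate higher.
tuned = {sec: 900 - sec // 2 for sec in range(180)}  # households tuned at each second
UNIVERSE = 1000                                      # households in the toy panel

def rating(start: int, end: int) -> float:
    """Average household rating over seconds [start, end)."""
    return sum(tuned[s] for s in range(start, end)) / (end - start) / UNIVERSE

first_spot = rating(0, 30)    # exact sub-minute rating, first 30-second position
pod_average = rating(0, 180)  # average pod rating across all its minutes

print(f"first position {first_spot:.1%} vs pod average {pod_average:.1%}")
```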

MediaScience's Varan teamed up with Comscore Director of TV and Cross-Platform Research Jeff Boehme to share their findings regarding "commercial interruptions."  If interruptions were limited, based on their research, it would have huge benefits for broadcasters, advertisers, and consumers.

Interruptions have two primary dimensions: the number of commercials in a pod (clutter) and the total commercial running time of the pod.  As we know, commercial avoidance is a fundamental issue.  Typical ad content runs for 12 minutes per hour, and ad avoidance has been estimated at $7 billion per year.  Subscription video-on-demand has exploded, although ad-based video-on-demand is catching up.

In a more limited commercial interruption environment based on household tuning data (which would likely underestimate person-by-person avoidance), unaided brand recall can increase as much as 50% with aided recall increasing 20%.  As Varan concluded, “Ads cannot be effective if they are not processed.” 

36 comments about "Can The ANA Drive Equitable Cross-Media Measurement?".
  1. andy brown from Consultant, September 24, 2020 at 10:46 a.m.

    Thanks for the review Tony. I know that the US JIC/antitrust issue is an old one. JICs are often very effective ways of achieving high-quality media buying and selling inputs. However, the JIC system in the UK has not yielded any kind of solution or even a roadmap for true cross-media measurement. In fact the ISBA Project Origin measurement initiative (as endorsed by WFA) is arguably a result of frustration with the JICs!

    PS. I do think that the attention work that you associate with the UK (but also led by a leading Australian academic) shows some promise to complement other metrics; see the launch of aacpm in Oz.

  2. Ed Papazian from Media Dynamics Inc, September 24, 2020 at 12:37 p.m.

    Yep, you are right, Tony. As we both feared, they will almost certainly utilize device usage as opposed to anything that approximates eyes-on-screen---or ad "viewing"---as the basis of cross-platform measurement---which means that nothing of great value will develop. As that old saying goes, "haste makes waste". Sigh!

  3. John Grono from GAP Research, September 24, 2020 at 6:10 p.m.

    Thanks Tony.

    Yes served 'impressions' is flawed.   Yes 'ad exposures' is better.   Yes 'ad attention' is better yet again.   And 'ad message take-out' is even better.

    Now what is a media company selling?   They sell ad-breaks.   Yes they should be quantifying the loss of audience during the ad-breaks.   And yes they should be quantifying the ad-break by position-in-break and break-in-programme (using TV speak).   It is done sporadically but we should be looking at that as being part of the currency.

    Here in Australia we get the minute-by-minute data so we can 'gauge' the impact.   And Ed, yes I know that will still include people who take a nature break or pick up the tablet or phone - but that also happens during programmes.   By 'gauge' I mean we can get a guide - not a measurement.

    But in the big picture, it should be all about the advertising message take-out, because that is where the brand affinity and likelihood of brand purchase happens.  

    So who should be responsible for that?   The TV station?   The programme might have 10m watching in the minute before the ad-break and then it drops to 9m in the first minute.   Isn't the TV station being asked to pay to measure how bad the ad is?   OK, fair enough because maybe it is the ad-break's fault.

    But how is the TV station expected to measure or rate-compensate a stinker of an ad?   Should all ads be pre-tested and rated (not that I think that pre-testing is accurate) and THAT become part of the currency or ratings guarantee?  OK, we'll guarantee your ad at 9m in the break, and YOUR ad only did 7m.   Just how bad was your ad!?!?!   And as a matter of fact your ad affected the others in the break.   So sorry, we'll have to surcharge your ad for under-performing.

    And no I am not being serious about this as a system.   But I hope it does show scenarios where such 'granular measurement' could be detrimental.

    I think that making better ads would probably be more fruitful.    I also acknowledge: how do you measure the quality of an ad!?!?!   I've never seen it done reliably.   Oh, except for performance against the sales or brand value metrics post campaign.   There are no guarantees in life or advertising.

  4. Jim Spaeth from Sequent Partners, September 25, 2020 at 7:28 a.m.

    Right, Tony. Set-tuning and/or set-top box tuning, not viewing or exposure data. Thanks for the reminder. We've all fallen into the bad habit of conflating devices with people.


    jim

  5. Ed Papazian from Media Dynamics Inc, September 25, 2020 at 8:04 a.m.

    Guys, I think that we all agree that it is unreasonable to make a media ad seller responsible for viewer response to an ad---that's the ad's job. But John, the evidence is very clear that absentee audience rates as well as lack of attention are far higher during commercial breaks than during program content---which is a major reason why some sort of eyes-on-screen measurement for "commercial minute audiences" is needed. Also, part of the equation is the amount of ad clutter in the break. Again, it is evident that as you pile in more and more ad messages of various lengths into a break, you lose more and more of the audience for the average ad. Here is something which the ad seller can control and advertisers who ignore this problem are paying for a considerably higher number of "phantom viewers" than those who opt to place their ads in less cluttered breaks. Regarding position in break, this is usually handled by the sellers  who rotate spots throughout breaks---so this is not a major problem.

    What gets me is the fact that eyes-on-screen measurements now seem possible for in-home TV and digital usage---but I see no movement to seriously  explore these possibilities. Why?

  6. Kevin Killion from Stone House Systems, Inc., September 25, 2020 at 8:26 a.m.

    "Nielsen data scientists Kay Ricci and Leah Christian reminded us that the first position in a commercial pod ... will generate the highest commercial ratings."
    And how does one determine that with a data source (like AMRLD) with a one minute resolution?
    "they were able to compare the exact commercial sub minute commercial rating"
    OK, then, "sub minute" - that explains that! But how did they do that?

  7. John Grono from GAP Research, September 25, 2020 at 8:28 a.m.

    I agree Ed.

    But a good econometric model doesn't really care what the variable is, or what its relative value is.   For example, it may find that when you have the exposure of an ad-break audience of >5m, sales increase (after taking in all the other variables).   The fact that the ad-viewing audience may have only been 3.5m doesn't change it.

    Take for example magazine ads.   The model may find that when you run a DPS in the front quarter in magazines with a circulation of > 850k that sales increase.   Circulation has no idea how many people read the magazine, let alone see the ads.

    My point is that actually seeing the ad is important to accurately assess CPMs.   But good marketing models can still accurately assess effectiveness.

  8. John Grono from GAP Research replied, September 25, 2020 at 9:11 a.m.

    Quite simply Kevin.   While I don't know the intricacies of Nielsen's US system, I am familiar with Australia's OzTAM system.   They are likely to be similar.

    In the panel homes the 'data record' is actually the audio.   Simultaneously, the audio of all the measured (linear) sources is recorded with time stamps to create a unique reference file.   Overnight the panel data is sent to be processed.   Each device's audio is then matched to the reference file, and the viewing attributed to the matching channel or source.   (Importantly, when you have syndicated programming, the ads and station idents are pivotal in assigning the viewing to the correct channel.)

    If someone is watching recorded content, it can be matched back to the reference sources to determine that the viewing was of (say) xx minutes of a programme from Wednesday a week ago.   That then gets added into the catch-up data.

    A corollary is that if someone watches a programme without audio they get no credit as a viewer.   If they mute the TV when ads are on the ads are not credited.

    And speaking of granularity, think of it as basically at the same frame rate as the old analogue broadcast systems (30 fps in the US, 25 fps in AU).

    The data is not released at that level, though it is technically possible.   I'd have to check but I think AU is based on 'dominant channel in each second' level, but then released at 'dominant channel in each minute' level.   Or it may be "middle-second of the minute".   But that is counting angels on the head of a pin.
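
    A stripped-down sketch of that matching step, using exact hashes where real meters use noise-robust acoustic fingerprints (all names and data hypothetical):

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    # Stand-in for a robust acoustic fingerprint of an audio chunk.
    return hashlib.sha256(chunk).hexdigest()[:16]

# Reference file: fingerprints of every monitored source, with channel and timestamp.
reference = {
    fingerprint(b"seven-news-19:30:00"): ("Seven", "19:30:00"),
    fingerprint(b"nine-drama-19:30:00"): ("Nine", "19:30:00"),
}

def attribute(panel_audio: bytes):
    """Match a panel home's audio chunk to a source, or None (e.g. a muted set)."""
    return reference.get(fingerprint(panel_audio))

print(attribute(b"nine-drama-19:30:00"))  # ('Nine', '19:30:00')
print(attribute(b"<silence>"))            # None: no audio match, no viewing credit
```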

  9. Ed Papazian from Media Dynamics Inc, September 25, 2020 at 9:15 a.m.

    John, I have seen many attempts at modelling of the sort you are---I assume---referring to. Most of the time they find some sort of rather basic correlation---often relying on very "soft" data such as spending, sometimes with GRPs added and, once in a while, other factors tossed in. However, the demand these days is for more precision---"attribution"?---with attempts to zero in on specific media "vehicles", audience duplication and "ad exposure" patterns. To get meaningful results on a more "granular" basis you are going to need data that reflects the reality of ad exposure, not simply data that is easy to gather. I don't think that device usage, while readily available, comes close enough to satisfy this need. Again, I ask, why aren't "we" aggressively exploring the possibilities of "eyes-on-screen" research instead of merely giving lip service to the huge distinction between tuning and viewing? I hope I'm wrong, but the result will probably be "haste makes waste".

  10. Kevin McCollum from None, September 25, 2020 at 9:20 a.m.

    Agreed with all of the above, but the fact that we cannot currently frequency cap between a broadcast network, FB, and AMZ is a huge source of frustration for advertisers, and can lead to massive overfrequency with consumers across channels.  
    I'd love for all the problems to be solved in one fell swoop, but I applaud any step taken in the right direction.
    What's the point of measuring eyes-on-ad if you don't know if they are the same eyes that saw your ad 10 min ago on another platform/device/channel?
    Measuring eyes-on-screen will be a thorny business...  Remember the outrage over smart speakers and phones listening in on every conversation?  Yes, that has subsided, but are you ok with your TV watching you and your wife get frisky on the living room sofa, even if it is allegedly "only an algorithm or computer" that is watching?
    Yes, just like Nielsen's people-meters, it is possible to get to the end game, and there are companies who already have a TV attention measurement solution, but the early adopters/panel sizes are minuscule, and reaching critical mass will likely attract the furor of privacy advocates and bureaucrats.

  11. Ed Papazian from Media Dynamics Inc, September 25, 2020 at 9:32 a.m.

    Kevin, when I propose an eyes-on measurement, the only efficient way that this could be done is via an ongoing panel. So you would know whether the same set of eyeballs was fixated not only on all of your commercials across a reasonable period of time---say a month--- but also on ads for rival brands. It would be a huge breakthrough in terms of data quality.

  12. John Grono from GAP Research replied, September 25, 2020 at 9:44 a.m.

    Good points Ed.

    The models you are alluding to are like the models I was writing back in the mid '80s on my IBM PC AT (no expense spared!).   Thankfully computing doubles in power (and capacity and capability) every 18 months.

    I am referring more to models that are non-linear and can use non-numeric data (for a Sydney 2000 Olympics model we assigned Gold, Silver and Bronze medals to the TV ads ... it worked well).   These models can accept hundreds of potential variables.   Many of those variables are correlated to other variables.   So the model utilises Occam's Razor and favours the simpler variable, and also aims for the most parsimonious model.

    Typically you will start with hundreds of numeric and non-numeric 'candidate' variables.   The model will winnow that data pile down to, typically, single figures of the key variables.   Those variables will generally be able to describe 60%-75% of variation in sales.

    But the key thing you get out of the models is the confluences of the values of those variables.

    As an example, it may be chocolate bars.   You may be #2 in the market and the data in the model shows that if you drop your price in supermarkets to 85% of the market leader with at least 10 shelf-talkers, and concurrently use in-store radio that uses the tag line from your 125 GRP TV campaign in the last week then you outsell the #1 by 30%.

    The key is that these are BESPOKE models.   They are hard work - primarily to collect the masses of data only to find out that you exclude 90+% of it.   They also take a pretty long time to verify.   The models have to work using data-exclusion rules - for example, randomly remove 10% of the data and see if the model parameters only change slightly - and repeat that process hundreds or thousands of times until you converge on a robust set of variables and data.   And of course, you have to continually update the data set.
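
    That data-exclusion check is easy to illustrate. A minimal sketch, with a simple linear model standing in for the far richer bespoke models described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                          # three candidate variables
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=n)

betas = []
for _ in range(500):
    keep = rng.random(n) > 0.10                      # randomly exclude ~10% of the data
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    betas.append(beta)

# A small spread across resamples suggests a robust set of variables.
print("coefficient std across resamples:", np.std(betas, axis=0).round(3))
```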

    And marketers and media people may be surprised that some of the metrics that they hold dear only have minor impact.   You have to be brave and not afraid.   For example, in one model (based on daily sales of a fast food) we found that the presence of rain was the third (from memory) most important variable (which actually makes sense).   It led to a rain = radio strategy to trigger the recall from the TV and radio campaign.   The 'value' in the 'Rain' variable was either "Y" or "N" - as simple as that.

    Anyway, I have raved on long enough.

  13. John Grono from GAP Research, September 25, 2020 at 9:58 a.m.

    Apparently I haven't!   LOL.

    Ed, yes a panel is about the only way.   It would require 'gaze-glasses' and would have to encompass all media, both in-home and out-of-home.   We have successfully used them to measure OOH panels.   I am a tad wary about them in-home.   Yes, some would say yes and comply.   Others would say yes and only partially comply.   Some would say no.   The risk of bias would be high - but probably could be managed.

    Kevin, you raise the BIGGEST issue of all.   How do we de-duplicate the various media?   Do we insist on a panel that lets us track/measure their TV, radio, mobile, computer, tablet, car usage?   And also monitor their trip to and from work and through the shopping mall to get OOH.   Oh yes, and the trip to the cinema on the weekend.   And look up in the sky - there is some sky-writing.   I'd better get back to the newspaper and magazine I was reading.   And where did I put those AR glasses.

    It may - and I mean MAY - be possible to get measures for each 'vertical' (i.e. medium).   We may be able to overlap some media.   But to de-duplicate all media is a dream until we are born with chips in our brains and eyes ... IMHO.

  14. Ed Papazian from Media Dynamics Inc, September 25, 2020 at 10 a.m.

    How true, John. In many cases I have seen, the client and agency just couldn't be bothered to collect all of the most important information---especially trending over time and intangibles such as ad awareness, pricing changes, sales promotion hypes, the activities of competing brands, etc. As a result, the models were unable to account for a substantial percentage of the shifts in sales that were recorded. As for attributing results to "exposures" in specific TV shows or channels, these invariably revealed little as audience duplication info was not available. Even when simplistic daypart variables were explored as a fallback position---such as heavy use of prime vs. daytime or fringe---there really wasn't much to hang one's hat on. All of which tells me that if we keep using the same kinds of inflated and misleading media audience data that I see in all of these investigations, we are not going to learn all that much.

  15. Tony Jarvis from Olympic Media Consultancy, September 25, 2020 at 10:37 a.m.

    I am delighted and humbled that my review of the ARF AxS Conference and particularly the ANA XMM initiative has driven such a valuable discussion for Artie and the ANA Project Committee as well as, hopefully, for ISBA in the UK.
    Jim:  I think we all know the companies and platforms that are deliberately driving the conflation of devices with people and also exposure.  As the Attribution experts Sequent Partners have a mission!
    I would endorse Ed's reminder that media's responsibility and accountability only extends to maximizing eyes-on or ears-on for a defined target group.  A measure of attention and/or listening introduces the creative impact which, with many other non-media dimensions, ultimately interacts, hopefully in a synergistic manner, to drive the advertiser's desired outcomes.
    There is perhaps a good reason we use the term "vehicle" for media.  In a car race the "vehicles" need many other elements to complete and win the race, the most important being the "driver": the creative, n'est-ce pas?
    Andy:  Please note that based on an intense ARF Symposium many years ago JIC's are NOT anti-trust in the US and we essentially have one - GeoPath, formerly TAB.  "We" saved the industry ~50% of the cost for a solution that advanced OOH measurement beyond anything being proposed by the independent research companies at the time and took ownership! 
    Thanks everyone. I feel an Op Ed developing!  Stay tuned. 

  16. Ed Papazian from Media Dynamics Inc, September 25, 2020 at 11:05 a.m.

    Regarding which media are included, it's pretty obvious that virtually all of the interest coming from advertisers and agencies is about TV/video exposures, not those for other media---which, while unfair to print, radio, OOH, etc., greatly simplifies matters. I also believe---though this requires careful investigation---that keeping it focused, initially, on in-home---where 90%+ of TV/video is consumed---also makes some sort of resolution easier---providing we approach this not as a matter of using convenient but not very meaningful tuning data, but of seeing what can realistically be done to obtain eyes-on-screen information. Here, I would make it a point to take a long, hard look at what TVision has been doing for a number of years with its 7,000-person panel operation. The question being whether this could be expanded to a panel for both "linear TV" and in-home digital video which produces show-by-show ratings as well as eyes-on-screen ratings and can track individual panel members' ad exposures for all national ad campaigns over time. Ultimately, this could lead to an ongoing panel of 50,000-100,000 homes that becomes a standard national TV rating source.

    As for print and radio, their inclusion introduces a whole new set of definitional issues. For example, 70% of radio listening is done away from home---roughly half in cars and half in other locations. How do you measure ad exposure when it's not eyes-on-screen but "listening" data that you want---and for specific commercials? And magazines or newspapers? Even if you had a way to record which page was opened---that carried an ad---is that enough? Which ad was "seen"? And what about pass-along "readers" for magazines? If we demand that everything be included at the outset, we will probably wind up in endless theorizing and bickering about what constitutes a comparable metric for determining ad exposure across media---and everyone will lose interest.

  17. John Grono from GAP Research replied, September 25, 2020 at 6:18 p.m.

    Fair point Ed.   Yes TV/Internet video is a logical starting point.

    But I challenge your comment that 90+% of video is consumed in-home.   That used to be the case.   I don't have robust data as to what it is likely to be now, but consider the following:


    • Pre mobile and high-speed internet (i.e. pre-smartphones), 3%-5% of TV was consumed in pubs, clubs etc.  Given that viewing is a socially driven occurrence and not a device-based decision, that quantum has probably not changed much.

    • Most pundits are saying that around two-thirds of internet video is now consumed on mobiles.

    • Internet usage minutes are now approaching TV usage minutes.



    Our AU TV ratings system has two panels.   A metropolitan panel of 5,250 homes and a regional panel of 3,198 homes, which, with an average 2.6 people per home, means more than n=20,000 people every day.   There is also a 'sub-panel' of the National Subscription TV homes, which is 2,120 of the 8,000+ homes in the Metro+Regional panels combined.   Given our population of around 25m that's a pretty fair effort.

    But as a media researcher who has worked on the 'currency' measurement of all the major media in Australia over the past 20 years, making them as harmonious as possible - while still recognising that each has its own strengths and benefits - I am loath to propose a non-inclusive system.

    For example, 'eyes-on' does not apply to radio.   'Must have audio to be a viewer' doesn't apply to newspapers and magazines, whose readers happily view their printed content.   Not including them would be very detrimental to the advertising market per se.

    These are the reasons why I favour media-mix modelling for the result.   Small panels could produce duplication factors (which range from 0% to 100%).   There would need to be lots of research to establish the construct of the panels, because a single panel that could measure all duplications is likely to be an unrepresentative panel with unknown biases.
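
    A sketch of how such a panel-derived duplication factor might be applied, where d = 0 means no shared audience and d = 1 means the smaller medium's audience is wholly contained in the larger (an illustrative convention, not a standard industry formula):

```python
def combined_reach(r1: float, r2: float, d: float) -> float:
    """De-duplicate two media given a duplication factor d in [0, 1]."""
    overlap = d * min(r1, r2)   # shared audience, as a share of the population
    return r1 + r2 - overlap

print(combined_reach(0.40, 0.25, 0.0))  # 0.65: fully incremental audiences
print(combined_reach(0.40, 0.25, 1.0))  # 0.40: smaller medium fully duplicated
print(combined_reach(0.40, 0.25, 0.4))  # 0.55: partial duplication
```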

    There is a reason that one of my favourite quotes, attributed to Einstein, is 'Not everything that can be counted counts and not everything that counts can be counted'.

  18. Ed Papazian from Media Dynamics Inc, September 25, 2020 at 7:12 p.m.

    John, may I suggest that you access Nielsen's website and take a close look at its Total Audience Report for the first quarter of this year. I believe that you will be surprised at how little video usage Nielsen's meters are capturing for mobile relative to "linear TV". As for OOH, the numbers are so small---across all content types---as to be negligible. Part of the answer is that most videos seen on smartphones are very short, hence the reduced time. I'll stick with my 90%+ estimate for now---unless Quibi suddenly comes up with something really new and exciting for mobile phone users and turns the world around.

  19. John Grono from GAP Research replied, September 25, 2020 at 7:44 p.m.

    I'm a tad confused.

    I've been looking at 'Overall Usage Over Time' ... https://www.nielsen.com/us/en/insights/article/2020/the-nielsen-total-audience-report-hub/.    That orange section for App/Web on a smartphone looks about the same as the blue section for Live TV.

    There are many, many reports that quote the proportion of mobile internet usage that is video at around 60%.   Forecasts are saying 80% within a few years.   Given the IAB AU data I see, those proportions feel right.

    That video may not be like TV - scripted, acted etc. - but it is video and that is the new paradigm.   Content (and ads) will get shorter IMHO.   Lots and lots of short video as attention spans shrink.   On the flip side I believe/hope that long-form will come back and will be the premium conduit for advertising.

  20. Ed Papazian from Media Dynamics Inc, September 26, 2020 at 12:24 a.m.

    John, Nielsen reports it two ways. One, is time spent with smartphones regardless of what is on the screen---which gets you the far bigger number---and time spent with videos on smartphones---which is a very small number, compared to "Live plus delayed TV". Both stats are in the report.

  21. John Grono from GAP Research, September 26, 2020 at 12:35 a.m.

    Ed, could you please do me a favour and email me the report you are referring to.

    We're in the process of starting the rebuild of our home and things are pretty hectic making sure everything is OK with the plans and costings etc.   There is no 'Upfront' in a re-build.

    I suspect that the viewing they are referring to is viewing of the network broadcast content that can be added to the live + time-shift minutage.   The numbers I am referring to are smartphone usage of video - such as YouTube, TikTok, FB etc. - which can be an advertising channel as well as reflecting total usage.

  22. Ed Papazian from Media Dynamics Inc, September 26, 2020 at 7:43 a.m.

    John, I sent you a link via Linkedin. In case it doesn't get through, here are the relevant figures from the 4th Q 2019:

    Average daily time spent per adult---total pop.

    Live plus delayed TV: 4:14
    TV-connected devices: :57
    Video on computer: :09
    Video via smartphone: :16
    Video via tablet: :07

    The same report also cites daily per-capita averages for all uses as follows:

    Computer: :34
    Smartphone: 4:03
    Tablet: :51

    As you can see there is a huge disparity between use of any kind and use for videos.


    Obviously I can't vouch for the accuracy of Nielsen's findings---but they have been very consistent on this point. What's not shown is where the activity takes place; however, if one accepts the findings you can see that most TV ad impressions must be attained in-home.

  23. John Grono from GAP Research replied, September 27, 2020 at 10:23 a.m.

    Thanks for the link Ed.

    There appears to be something a tad rum in the report.   I found the numbers you provided in the report for video and total usage; they were daily video times.

    So I looked at the video proportion of total usage for PCs, phones and tablets.

    - Computer 9 mins of 34 mins = 26.5%
    - Smartphone 16 mins of 243 mins = 6.6%
    - Tablet 7 mins of 51 mins = 13.7%

    So that implies that video's share is highest on computers, then tablets then smartphones.   That doesn't accord with most commentary and other data sources.   Smartphones are a pretty ideal device for short-form video such as sharing on FB, Tik Tok etc.

    So I had a closer look.   First, the data is Adults 18+ (due to research privacy rules I suspect) so that misses out on a big chunk of smartphone video usage, and introduces a bias.

    Then I noticed that it said "Video Focussed App/Web" for both Smartphone and Tablet.   So it looks like it could be a 'Defined Group' of URLs rather than a 'total video' figure.   This may mean that many sources of video may be omitted like Facebook, Zoom, FaceTime etc. ... all rich sources of audience for advertisers.

    I realise that accurately tracking video on a smartphone is problematic, but does anyone know if the above could be the case?



  24. Ed Papazian from Media Dynamics Inc, September 27, 2020 at 10:31 a.m.

    John, I have long suspected that Nielsen is missing an unknown amount of digital usage. That said, the difference between video usage on smartphones and "computers"---which I take to mean laptops and/or desktops---is not surprising in view of the smartphone user's predilection for very short videos as opposed to the "computer" user's inclination to consume videos of many lengths. As for the teens, sure, they are big smartphone users---but does this carry over to videos? Maybe yes---maybe not so much. In any event, the issue at hand is how much usage is done away from home. I believe that 65-75% of smartphone usage happens out of home and this, if correct, would apply even more so to videos.

  25. Ed Papazian from Media Dynamics Inc, September 27, 2020 at 10:40 a.m.

    John, while I'm thinking about it, an additional point concerns commercial viewing---as opposed to program content viewing. Whatever the percentage of OOH "exposure" is, we're really talking about program content---inasmuch as device usage measures anything. I would hazard a guess that when it comes to commercials, the smartphone video teenybopper "viewer" who is trundling down the street with a gang of fellow students---all of whom are busily texting---would be very easily distracted and not pay any attention to most commercials---a situation that might not apply as much to solitary, in-home usage. The same point arises for OOH TV "viewing". I seriously doubt that OOH commercials get anything close to their in-home counterparts in terms of attentiveness---even though device usage might imply otherwise.

  26. John Grono from GAP Research, September 27, 2020 at 5:15 p.m.

    All very true Ed.   I have some access to 14+ data and find that the 14-17 cohort is prolific.   Of course, attention is another issue.   But getting total usage 'right' has to be both the starting point and the bedrock.

    One other thought/comment.   If my hunch is right, then the Nielsen data may only be 'on platform' video and not 'off platform' video.   For example, someone is swiping through their FB feed and sees some breaking news that piques their interest and links to a 1-minute CBS video of the story.   Who gets the credit?   CBS or FB?   Or do both?   If the Nielsen data is for a 'defined group' of URLs (well, more like domains) and FB is not seen as primarily a "Video Focussed App/Web" but a social media one, then we'd be leaking masses of usage.

  27. Ed Papazian from Media Dynamics Inc, September 27, 2020 at 6:18 p.m.

    Interesting point about how smartphone usage is measured, John. As I'm not a Nielsen client I can't ask them to reply to this question and expect an answer in detail. Perhaps someone from Nielsen will happen to note this discussion and will offer an explanation---or a Nielsen client may take it up with them and let us know what they have to say.

  28. Joshua Chasin from VideoAmp replied, September 30, 2020 at 2:06 p.m.

    I agree with Mr. Grono. What are media operators (I think "programmers" is the current term in vogue) contracting to sell? Frankly, I've always believed media providers were in the "leading the horse to water" business, not the "making them drink" business. As a TV network or station or cable operator, I can get your ad onto the screen and into Tony's living room. I have literally no control over what goes on IN that living room.

    Historically in advertising measurement, we've had audience measurement, and we've had creative testing. How well the ad holds attention is obviously essential to the overall mix; but that's between agency and brand, not broadcaster and brand. We've always treated the two fields (attentiveness/creative efficacy and audience counting) as separate disciplines that co-exist (I suspect Tony might want to interject about now with the old TAB's "Eyes On" work). But thus far the consensus among the parties building these systems (i.e. ANA, WFA)---which happily have robust advertiser representation---has been to leave the cognitive experience of the viewer (including attentiveness) off to the side for now. Which I think is the right decision.

    Now, having said that, I'll also leave the door ajar by noting that no one ever innovated in a revolutionary fashion by limiting their thinking to the practical and the possible....

  29. Ed Papazian from Media Dynamics Inc, September 30, 2020 at 2:55 p.m.

    Josh, I think that we all agree about the respective roles of the media time seller and the advertiser regarding getting ad messages to the consumer and having the ads actually watched with attention, as well as motivating consumers to action. The latter is clearly the advertisers' responsibility.

    However, the media seller can have an impact on whether an ad is watched in several ways. One is by offering the kinds of program content that hold viewer attention and interest---to the point where many program viewers, who might otherwise leave the room or dial switch during a commercial break, stay put, lest they miss the return of program content when the break ends. Clearly, highly involving dramas meet this requirement while many other genres do not. A second---and key---area where the TV time seller can exert influence on viewer attention to a commercial is by restricting the amount of ad and promotional clutter per break. Here, as well, there is ample evidence that commercials in short breaks---less than two minutes in duration---perform far better than those rotated through a sea of messages in breaks lasting four minutes or longer. Sadly, relying on device usage to define "ad exposure"---the ad appeared on the viewer's screen---ignores these critical distinctions.

    Worse, by not taking into account real indicators of viewer presence and eyes-on-screen attentiveness, advertisers are rewarding sellers with overly cluttered breaks and poor quality programming---which is not good for anybody---except the accountants who tally up the sellers' ad revenues and figure out how profitable they were. If there's no penalty for putting out lousy content and cluttered breaks, why should we expect time sellers to do better?

  30. Joshua Chasin from VideoAmp replied, September 30, 2020 at 3:32 p.m.

    Ed--

    Indeed I moderated the session Tony writes about, where Duane Varan and Jeff Boehme presented on the impact of clutter. So point taken. But for both tune-in and tune-away in differentially-engaging shows, and for clutter impact, I believe audience measurement can provide an appropriate window via sufficiently granular measurement (i.e. second-by-second ratings). We should be able to quantify the extent to which shows hold commercial audience throughout the pod with second-specific measurement; and too, we should be able to measure the impact of clutter on tune-out (but not attentiveness) in a similar fashion.

    Fundamentally, the problem we all have is that panels are no longer sufficient to do what we need done. So we migrate to solutions that integrate naturally occurring or other big data assets (e.g. STB data or other device data) and panels together. But that tends to place the second-by-second tuning data and the rich people data in different places to be integrated (not unlike meter/diary integration.) I believe that audience measurement going forward will be a challenge of enriching device data assets with different kinds of person-based insights; there's no single magic bullet anymore, which makes our jobs harder (but more interesting.)

    Although I am reminded that in Chinese, "May you live in interesting times" is a curse.

  31. Ed Papazian from Media Dynamics Inc, September 30, 2020 at 4:01 p.m.

    Josh, while I agree that you can capture dial switching via meterized device usage measurements, this is the smaller part of the ways people avoid ads. Far larger are remaining in the room and paying no attention, plus leaving the room entirely. Where dial switching may give you a differential of 3-6% per commercial, total lack of attention usually runs to about 25-30%---with wide variations depending on ad clutter, in particular---while leaving the room can average 25-35% of the audience, depending on how it's measured. All I am saying is "we" should be investigating what's being done---and what could be affordably done in this regard---now rather than much later, if ever---and I fear that this is simply not the case---the primary reason being to get something done as quickly as possible, rather than delaying to develop far better metrics. I understand why this is happening---this is not the first time I've seen it---but I feel compelled to point out the extreme limitations of the data that will be used, as I see little of value coming from the path that seems to be chosen. I do hope I'm wrong about this, by the way.

  32. Tony Jarvis from Olympic Media Consultancy, September 30, 2020 at 5:11 p.m.

    Whether the marvellous Commentators on this matter are right or wrong (doubtful, as they are highly experienced media and ad researchers rather than data engineers!), I trust that the ANA Committee working on developing a US version of the WFA XMM Technical Design will read every POV expressed here.  I am sure any of us would make ourselves available to provide further counsel.
    This ANA initiative will inevitably require cooperation and collaboration from a wide array of companies to "pull off" what will be a highly complex integrated solution producing a common, privacy-compliant, cross-media currency that ultimately must be designed to go beyond TV/video.  (Yes, Josh, something similar in principle to what GeoPath/TAB does for "Eyes-On" ratings for OOH, but ~100 times more complex.)
    I respectfully suggest it can ONLY fully succeed if managed, owned and run as Joint Industry Committee or JIC.  As stated earlier and as P&G and later ARF established many years ago, JIC's are not antitrust in the US or anywhere else for that matter. 
    So, Andy Brown, will we come full circle here in the US to achieve the goal?

  33. John Grono from GAP Research, September 30, 2020 at 8:01 p.m.

    Hi Josh and Ed.

    First Josh, I love the 'horse to water but can't  make it drink' analogy.   I might have to borrow it.

    Ed makes good points about the media seller being able to influence ad effectiveness and he cites some examples such as more engaging content (which I think is a primary issue).   The ad-break issue is also very true.   When I played around with a week's worth of data in our Sydney market for every ad on the three commercial free-to-air channels, there was a very clear pattern (unfortunately I lost the data and the report in the bushfire).   Basically think of the audience viewing pattern as a trough.   The narrower the trough (the ad-break length) the shallower the trough (the audience drop-off).   As a statistician it is a bit like the normal-distribution curve but inverted.   So the longer the break the bigger the audience loss - closer to an exponential loss than a linear loss.

    And Ed I agree that during a programme probably something like a quarter of the audience will leave the room at some stage.   However, my contention is that very few people leave the room every ad-break.   So if the 25% estimate is around the mark, and in a one-hour programme there may be four ad-breaks, the normal person may leave in just one.   That would peg the loss per break around 6%.   From memory the project I referred to was an average 4.5% loss and a trough of around 8%.   My recall may not be 100% accurate, and I am sure things have worsened, but it all seems to accord.

  34. John Grono from GAP Research, September 30, 2020 at 9:26 p.m.

    Ed, I forgot to add one other thing that I found handy when making strategic calls on a TV buy.

    Everyone looks at the programme Rating (Average Minute Audience) and the Cost, derives the CPM, then ranks it.

    I was adding the Programme Reach and then dividing the Average Minute Audience by the Programme Reach.   So you might have a 2m rating programme that was based on 2.5m programme reach, and you might have another similarly costed programme that also had a 2m rating but with a programme reach of 5m ... so which one do you buy?

    I found that AUD/REACH was a pretty good proxy indicator of programme attention without relying on expensive third-party data.   That is, the 80% ratio for the first programme beats the 40% ratio for the second programme.

    So I adjusted the ranking by the AUD/Reach factor and found that we had more effective and efficient buys.   You had to be wary of different programme durations and factor that in as well.
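
    A toy version of that adjustment, using the numbers above (illustrative only):

```python
programmes = [
    # (name, average minute audience, programme reach, spot cost)
    ("Programme 1", 2_000_000, 2_500_000, 50_000),
    ("Programme 2", 2_000_000, 5_000_000, 50_000),
]

for name, ama, reach, cost in programmes:
    cpm = cost / ama * 1000          # both buys look identical on raw CPM
    holding = ama / reach            # AUD/REACH: proxy for programme attention
    adjusted_cpm = cpm / holding     # effective cost per "held" thousand
    print(f"{name}: CPM ${cpm:.2f}, AUD/REACH {holding:.0%}, adjusted ${adjusted_cpm:.2f}")
```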

    I dubbed it Grono Rating Points which brought around the GRP abbreviation and the rest is history.   OK ... I made that last sentence up, GRPs well and truly pre-dated me!

    Waddyathink?

  35. Ed Papazian from Media Dynamics Inc, October 1, 2020 at 12:05 a.m.

    John, we used the difference between the "total audience" and average minute audience as an indicator of program involvement many years ago, so I agree, it has merit. However, you must account for program length and type of content. The results---hence holding power ratios---for a half-hour show will seem higher than for a one-hour show because the longer the duration, the more likely you are to get people qualifying as "total audience" viewers---unless you use a varying definition---which makes it hard to compare across show lengths. Also, a one-hour variety or reality show will usually seem to have less holding power than a drama of similar length due to the less intense nature of the content---another problem that needs solving. Most important is the question of commercial---as opposed to program content---exposure. While it may be true that higher holding power suggests more attention to commercials, this will vary all over the lot due to the ad clutter element. Take a highly involving one-hour drama on a channel with very high ad clutter in its breaks and you probably have less attention to ads than another channel gets with a program with lower holding power, but much shorter ad breaks.

    Regarding people leaving the room, I have tracked this for many decades and our "TV Dimensions" subscribers have seen many reports on the findings from many sources---camera studies, teenage and college student "spies", heat sensor studies, etc. What these tell us is that there is always a certain amount of "absentee viewing" but it is much higher when commercials are on the screen. TVision, which has an ongoing panel of 7,000 persons whose attentiveness is monitored by a form of "eye cameras", reports that 30% of the average TV show's audience prior to a break is not present during an average commercial. Other studies don't go quite as high---but they come close. So, even though I, too, was skeptical, at first, about the extent of this form of avoidance, I now accept it as a very significant factor.

  36. John Grono from GAP Research, October 1, 2020 at 12:19 a.m.

    Thanks for that Ed.   The data I have seen has "eyes-on-screen" reducing (chatting, reading, phones, leaving the room etc.) but "bums-on-seats" not reducing as much.   That is, there is still an audio presence.   I think sound and music in an ad is greatly understated.   After all, radio ads rely 100% on audio.   A phrase such as "I'm On A Horse" can be so powerful.   We have a few here in AU ... "Not Happy Jan".   No-one knows why it was so popular and memorable but it is now in the public lexicon.

    And ... I reckon this debate beats the Presidential debate hands down!

    Thanks to Tony 'light blue touch paper and run' Jarvis as your article has ignited quite a good to-and-fro.
