David Shiffman, MediaVest WW
Britta C. Ware, Meredith Corporation
David Dixon and Julia Soukhareva, Ninah Consulting
Worldwide Readership Research Symposium Valencia 2009 Session 4.6
Background
Over the past few years, there has been a downward trend in magazine advertising revenue. Some of this can be tied to increased competition across media channels and audience migration to new and different channels, but it is also a result of the growing importance of proving media accountability and a perceived lack of appropriate ROI metrics. In the new media and economic landscape, media dollars are being scrutinized more than ever, and magazines, like other media channels, are increasingly being challenged by marketers to prove their value and demonstrate their specific role in driving sales. Upon closer examination, particularly in the CPG environment, this trend can be tied at some level to the limited success in demonstrating the power of magazines in current marketing and media mix models. Naturally, this is concerning to the magazine community because of the importance of ROI, particularly in these economic times. If media mix models are the ideal (or de facto) ROI metric, magazine publishers, agencies and marketers need to be confident that these models can truly demonstrate the role that magazines play in efficiently driving volume.
Our clients are asking for help in demonstrating the value of magazines. Although traditional models may not give them the green light to increase print levels, they understand and value the role magazines play and are looking for practical solutions to build the case for magazines.
This paper will explore: 1) how leading modelers currently (and historically) evaluate magazines in the mix for their clients; 2) the various inputs used to drive the model for magazines, and their impact on the results and subsequent recommendations; 3) best practices and guidelines for maximizing the marketing mix modeling process to ensure the best chance of developing accurate, informative and prescriptive models; and 4) situations where traditional market mix modeling may not be the most appropriate solution for magazine accountability, along with alternative solutions.
Our goal is to provide a straightforward road map of the modeling process for publishers and marketers, and a practical guide to the steps involved that will allow them to help improve the process. The authors would like the industry to begin to see modeling as a valuable learning tool for improving a marketing plan, not simply a "black box" exercise that seeks only to prove a point (or worse, provide misleading recommendations) with no learning to improve future campaigns.
Hypothesis
Our paper is driven by three primary hypotheses.
- Magazines are losing business due to increased reliance on marketing mix models that are in part TV-centric (a result of both the nature of the media plans measured and the granularity and accessibility of TV data) and on the results that these models produce.
- There is little consistency across leading modeling companies in magazine inputs, and a lack of awareness of the available tools.
- Improving the quality and consistency of data inputs can result in different – and more accurate – results and recommendations.
Synopsis
This paper will provide an overview of the process currently being utilized by leading modelers, starting with media, marketing and sales inputs and ending with the presentation of model results and recommendations. We will examine the inputs: What are they, who provides them and where do they get them from? And then examine the presentation of model results and recommendations – who is involved and how are the results delivered? We will make the case for the importance of media inputs that best align with sales data and illustrate the negative impact on results when inputs are not sufficiently descriptive. We will also show that, despite our going-in hypothesis, most modelers are seeing very favorable results for magazines in mix models! Most importantly, we will provide a list of “best practices” for anyone playing a role in the modeling process to make certain that the right steps are being taken to ensure that magazines are accurately represented in the modeling process. Finally, we will remind readers to consider alternatives to traditional marketing mix models and make the case that these models are just one form of accountability, but may not always offer the best method for measuring magazine effectiveness or the role that magazines play in the media or marketing mix.
Overview of Interviews
To gain insight into the process of developing and reporting marketing mix models, the authors interviewed representatives from leading modeling companies with a focus on:
- Primary purpose of marketing & media mix models
- Magazine data inputs utilized
- Review of the process (from who provides the inputs to how the results are reported)
- Best Practices and/or improvements seen in magazine modeling
- Case histories
- Random thoughts
The following sections review the results of these interviews highlighting where there was consensus as well as differing approaches.
Purpose of Models
There was general agreement that the primary purpose of marketing mix models is to assist marketers in determining the effectiveness of each element of a marketing plan or campaign.
Why are media mix models utilized?
- Determine overall marketing/campaign effectiveness and in particular the relative effectiveness of each element of the plan.
- Justify total marketing/advertising investment (overall and by channel) to senior management (increased need to justify in current economic climate).
- Determine optimum media mix as a basis for future campaigns.
- Establish optimum spend, weight levels & scheduling tactics to guide future investments.
- Assess the contribution and ROI of individual elements (specific magazines or genres, for example) of the media channel, provided that weight levels are sufficient to allow for measurement.
As with all forms of measurement, the first stage in the process is to define and align on the learning objectives and the specific answers being sought. In addition, it is important to gain agreement on the expected application of results – how are the modeling outputs going to be used and by whom, with consideration for uses within the marketing organization and by partner companies and agencies.
What can be measured – and at what level of detail – is limited by a number of factors. No model can measure everything. While data availability or a lack of “good” data is often a barrier, there are other elements and threshold levels to consider. A few rules of thumb will help ensure that the modeling objectives are achievable.
Considerations for Modeling
- Weight & Spending Level Considerations: Marketing mix models cannot provide a read on every single marketing dollar spent by a company. A certain threshold of spend or weight must be reached. The guidelines for that threshold vary from company to company, but should be discussed up front. For example, according to Analytic Partners, "in terms of the amount of support needed to take a deeper dive into print by campaign or genre, the rule of thumb is about 1% of total marketing spend behind each campaign. This will vary on a monthly basis depending on the amount of support behind other marketing activities, but 1% of marketing spend would be the minimum." This guideline is useful in managing expectations before initiating a model. In many cases, print levels are not sufficient to allow for a deeper drill down. On the other hand, Ninah recommends setting a minimum threshold based on targeted reach in the 5% range. Managing expectations upfront on what can and cannot be measured is recommended in either case.
- Scheduling Considerations: If the goal is to measure the volume contribution or ROI of individual elements – in the case of magazines, individual titles, ad positions, genres, or specific creative executions – it is important to review scheduling by title to ensure that the contribution of individual elements can be isolated. This is similar to the broader question of whether there is enough variation in scheduling across marketing channels to measure the discrete impact of any single marketing or media channel. In many cases where magazines are flighted during the same periods, it is more difficult to isolate the impact of individual print elements (a problem referred to as collinearity). Note: this can be improved with additional observations (market level, etc.).
- Number of Variables: It is important to manage expectations on the number of variables to include in a marketing mix model. According to Ninah Consulting, the rule of thumb is that for every 100 observations on the dependent variable (weekly sales), a model should include no more than 10 independent variables (TV, primetime TV, magazines, Family Circle, newspapers, etc.). This helps to ensure statistical reliability in the model. Marketing Evolution recommends 30 observations per variable to guard against this issue, known as "overfitting" a model. A simple feasibility check along these lines is sketched below.
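To make these rules of thumb concrete, the following is a minimal sketch of a pre-modeling feasibility check. The function and variable names are our own, and the thresholds are simply the guidelines quoted above; actual cut-offs should be agreed with the modeling partner.

```python
def feasibility_check(total_marketing_spend, print_campaign_spend,
                      n_observations, n_candidate_variables,
                      min_spend_share=0.01, max_vars_per_100_obs=10):
    """Rough pre-modeling check based on the rules of thumb above.

    min_spend_share: ~1% of total marketing spend behind a campaign
        (Analytic Partners guideline) before a deeper dive is attempted.
    max_vars_per_100_obs: no more than ~10 independent variables per
        100 observations of the dependent variable (Ninah guideline).
    """
    spend_share = print_campaign_spend / total_marketing_spend
    max_variables = n_observations / 100 * max_vars_per_100_obs
    return {
        "spend_share": spend_share,
        "meets_spend_threshold": spend_share >= min_spend_share,
        "max_recommended_variables": int(max_variables),
        "within_variable_limit": n_candidate_variables <= max_variables,
    }

# Example: a $1.2M print campaign within a $100M plan, 104 weeks of
# sales observations and 12 candidate variables.
print(feasibility_check(100_000_000, 1_200_000, 104, 12))
```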
Types of Models
All market mix models are based on some type of regression or general linear model that relates variation in sales to variation in media levels. Most models are built at the market/DMA level (and some at the store-sales level), in line with the availability of sales data. While on the surface modeling may appear to be a commodity business, it is clear that there are techniques employed, starting with the collection of the data inputs, including proprietary modeling techniques ("secret sauce"), and ending with the interpretation and communication of results and recommendations, that can impact model outputs and make a difference in recommendations to marketers.
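As a simple illustration of the underlying technique, the sketch below fits an ordinary least squares regression of weekly, market-level sales on media and marketing inputs. It is a minimal example only; the file name, column names and the use of the statsmodels library are our assumptions, and commercial models layer proprietary transformations (adstock, diminishing returns, seasonality and the like) on top of this core.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly, DMA-level data set: one row per market per week.
df = pd.read_csv("weekly_dma_data.csv")

# Independent variables: media weight and marketing activity by channel.
X = sm.add_constant(df[["tv_grps", "print_grps", "promo_depth", "distribution"]])
y = df["unit_sales"]            # dependent variable: weekly sales

model = sm.OLS(y, X).fit()

# Coefficients measure the sales response per unit of each input;
# t-statistics indicate whether each coefficient differs reliably from zero.
print(model.params)
print(model.tvalues)
```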
Model Inputs
Based on the interviews conducted in preparation for this paper, it is clear that the biggest opportunity for improving the accuracy of magazine measurement (and, frankly, of other media as well) lies in the model inputs. Modelers rely on their contacts at media agencies, as well as on the clients directly, to provide data inputs. We found that in most cases the modeler is relying on these sources to provide the most accurate input, but the communication about what is "most accurate" or most up-to-date may be lacking. For example, if magazine GRPs are provided on a national level only, the modeler will either assume the same GRP level across sales markets or make some other estimate, rather than pushing for a standard (or better) input, even though better data might be available.
The focus of this paper is on maximizing the opportunity for magazines to be fairly and accurately represented. The authors' contention is that standardizing media inputs, using the most up-to-date data that comes closest to matching the sales input parameters and represents the true pattern of media weight distribution, will ensure that magazines are better represented in the models.
All modelers strive to align the independent variables with the primary dependent variable. In other words, the objective is to align the input for all marketing channels, including magazines, with the sales input: if sales data is available at the market level, marketing and media inputs should also represent market-level delivery; if sales is available by week, audience delivery should also be input by week. As a rule, media input should align with sales input. Standards for input would eliminate the guesswork.
- Align with Sales Inputs: Agencies and advertisers typically have the most robust print readership data, from Mediamark Research & Intelligence (MRI) and similar sources in other markets. In many cases, modelers rely on data supplied by the client; in other cases, print data is provided by the agency. Importantly, if the client or agency is not aware of modeling best practices and of exactly what is needed to align with the sales input, they may hold the correct data but not know to provide it to the modelers, resulting in inaccurate models.
- DMA Level Data: Based on our interviews, many modelers are not currently aware of (or taking advantage of) MRI's DMA-level magazine audience data, but agree that this level of data would fit well in the models (rather than assuming each market has the same GRPs as the national average) and would welcome it. Where DMA-level audience data is not available, some marketers and agencies have developed methods (often using circulation by DMA) for estimating local-market magazine impressions and/or GRPs, though application of these approaches does not appear to be widespread.
- Accumulation Curves: There is little consistency in the use of audience accumulation curves to estimate weekly audience levels. Again, modelers rely on the information provided by the agency or client and, if it is insufficient, they improvise based on their experience or historical information rather than push for existing syndicated data. (A sketch of how accumulation curves and DMA-level data can be used to allocate print weight follows this list.)
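To illustrate how print inputs can be brought into line with weekly, DMA-level sales data, the sketch below spreads a monthly national GRP figure across weeks using an audience accumulation curve and scales it to each DMA using a local delivery index (for example, one derived from circulation by DMA or from MRI DMA-level audience data). All numbers, names and column labels here are illustrative assumptions, not syndicated values.

```python
import pandas as pd

# Illustrative audience accumulation curve: share of a monthly issue's
# audience reached in each week after on-sale (shares sum to 1.0).
accumulation = [0.40, 0.25, 0.20, 0.15]

# Illustrative local delivery indices: DMA delivery relative to the
# national average (e.g., derived from circulation or MRI DMA data).
dma_index = {"New York": 0.85, "Chicago": 1.10, "Atlanta": 1.25}

def weekly_dma_grps(national_monthly_grps, issue_week_starts):
    """Convert a national, monthly GRP figure into weekly GRPs by DMA."""
    rows = []
    for issue_start in issue_week_starts:
        for week_offset, share in enumerate(accumulation):
            for dma, index in dma_index.items():
                rows.append({
                    "week": issue_start + pd.Timedelta(weeks=week_offset),
                    "dma": dma,
                    "print_grps": national_monthly_grps * share * index,
                })
    return pd.DataFrame(rows)

grps = weekly_dma_grps(100, pd.to_datetime(["2007-01-01", "2007-02-05"]))
print(grps.head())
```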
Our review of current practices indicates a lack of standardization in model inputs. The following section provides an overview of the differences in input and the effect they can have on the outcome.
Testing the Impact of Print Inputs on Model Outputs
To illustrate how different inputs impact the outcome, the authors built an econometric sales model for a product in the CPG category. The model was built using the standard inputs for market mix models: volume sales, distribution, promotion and media spend by channel, with all inputs based on weekly, market-level data. Once the initial model was built, different print inputs were tested and the resulting outputs were compared against the original. The variations for the print input were as follows (a sketch of how such alternative inputs can be constructed from the same weekly, market-level data appears after the list):
Original (Version 1): Weekly GRPs by market
Version 2: Monthly GRPs by market (allocated to the 1st week of the month)
Version 3: Monthly GRPs by market (distributed evenly across each week of the month)
Version 4: Weekly national GRPs
Version 5: Monthly dollars (allocated to the 1st week of the month)
Version 6: Monthly dollars (distributed evenly across each week of the month)

(Each version was charted as a weekly input pattern over the December 2006 through June 2008 modeling period.)
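Each of the alternative inputs above can be derived from the same weekly, market-level GRP file by discarding detail. The sketch below is a minimal illustration of that construction; the file name, column names and the flat cost-per-GRP used for the dollar variants are our assumptions, not data from the study.

```python
import pandas as pd

# Version 1 data: weekly GRPs by market (columns: week, dma, grps).
v1 = pd.read_csv("weekly_dma_print_grps.csv", parse_dates=["week"])
v1["month"] = v1["week"].dt.to_period("M")
cost_per_grp = 5_000            # illustrative flat cost assumption

monthly = v1.groupby(["dma", "month"], as_index=False)["grps"].sum()
weeks_per_month = v1.groupby("month")["week"].nunique()

# Version 2: monthly GRPs by market, all loaded into the first week.
v2 = monthly.copy()
v2["week"] = v2["month"].dt.to_timestamp()       # start of month = "1st week"

# Version 3: monthly GRPs by market, spread evenly across the weeks.
v3 = v1.merge(monthly, on=["dma", "month"], suffixes=("", "_month"))
v3["grps"] = v3["grps_month"] / v3["month"].map(weeks_per_month)

# Version 4: weekly national GRPs (market average applied to every DMA).
national = v1.groupby("week", as_index=False)["grps"].mean()

# Versions 5 and 6: dollars instead of GRPs, first-week and even spreads.
v5 = v2.assign(dollars=v2["grps"] * cost_per_grp).drop(columns="grps")
v6 = v3.assign(dollars=v3["grps"] * cost_per_grp).drop(columns="grps")
```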
Weekly vs. monthly GRPs
- Weekly GRPs are seen as the most appropriate input. If weekly GRPs are not available, modelers will estimate them, either by allocating monthly GRPs equally across the weeks of the month, by applying old (or estimated) accumulation curves, or (worst case) by assuming the entire weight falls in the first week of the month.
- Putting GRPs in by month (not by week), or all in the first week, limits the variability in the model and does not reflect the way sales data are typically input; even more importantly, it inaccurately allocates media weight, which should be spread over 13+ weeks.
- Flighting can impact the ability to read specific magazines (or even media); the use of weekly data points greatly increases the potential to isolate the impact of individual media and discrete media elements by providing more observations of the impact of weight on sales.
Benefits of weekly GRPs:
- Best aligns with sales inputs.
- Provides better variability in data, an important driver in a model’s ability to measure volume contributions.
- Increases likelihood of measuring individual elements within the print campaign.
- More accurate representation of media weight distribution.
Chart 1. Comparison of the estimated print parameter in the model: weekly GRPs by market (coefficient 0.033) vs. monthly GRPs by market allocated to the 1st week of the month (coefficient 0.028).
When the print parameter is measured using weekly GRPs by market, the estimated coefficient is roughly 18% higher than when print is measured using monthly GRPs allocated to the first week of a month.
National vs. Local GRPs
- Local market (DMA) GRPs are preferred, since this most closely aligns with sales data inputs. In addition, this reflects the way other media (particularly TV) are input.
- Local GRPs also reflect regional skews of media – Southern Living, Midwest Living vs. mass titles. However, local GRPs are also essential to illustrate distribution of mass titles which do not deliver impressions equally across the country – for example, 100 national average GRPs in Better Homes & Gardens translates into a range of between 68 and 135 GRPs for individual DMAs.
Benefit of using local market GRPs:
- Aligns best with sales inputs.
- Reflects true regional and local skews in media delivery.
- Allows marketers to better understand market-specific impact.
Chart 2. Comparison of the estimated print parameter in the model: weekly GRPs by market (coefficient 0.033) vs. national GRPs (coefficient 0.031).
When print is measured using weekly GRPs by market (local), the estimated coefficient is roughly 6% higher than when print is measured using national weekly GRPs.
Dollars vs. GRPs
- Most agree that dollars are only used when other, "more robust" information is not available: "We are measuring weight/impressions, not spending."
- Sometimes dollars may be better than GRPs, because the value of a GRP may not be perceived to be equal across publishers (or, in the case of unmeasured titles, GRPs may not be available at all). However, the distribution of dollars should follow that of GRPs (if GRP data is not available, this may not be possible). Depending on the objectives of the model, it may be beneficial to run both dollars and GRPs to see which correlates best with sales; a quick screen along these lines is sketched after Chart 3.
Chart 3. Comparison of the estimated print parameter in the model: weekly GRPs by market (coefficient 0.033) vs. monthly dollars allocated to the 1st week of the month (coefficient 0.027).
When print is measured using monthly dollars allocated to the first week of a month, the estimated coefficient is roughly 18% lower than when print is measured using weekly GRPs by market.
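Where the choice between dollars and GRPs is unclear, one quick screen, as suggested above, is to check which candidate input correlates more closely with sales before committing to the full model. The sketch below (hypothetical file and column names) illustrates the idea on a weekly, market-level panel; it is a screening step, not a substitute for the model itself.

```python
import pandas as pd

# Weekly, market-level panel with sales and both candidate print inputs.
panel = pd.read_csv("weekly_dma_panel.csv")

# Pearson correlation of weekly sales with GRPs and with dollars.
corr_grps = panel["unit_sales"].corr(panel["print_grps"])
corr_dollars = panel["unit_sales"].corr(panel["print_dollars"])

print(f"sales vs. print GRPs:    {corr_grps:.3f}")
print(f"sales vs. print dollars: {corr_dollars:.3f}")
```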
Total Plan GRPs vs. GRPs by publication/genre (or other element)
- Assuming weight levels are sufficient and variability in levels can be found, a model can isolate magazines at the most granular level, including individual titles, genres, campaigns or even positions.
- Isolating by title (or other element) can provide valuable information on the performance of individual titles and/or the value of one campaign message vs. another. Individual magazines also have unique accumulation curves, which allows a closer look into what is driving the volume.
Benefits of using GRPs by publication/genre (or other element):
- Allows for measurement of the unique contribution of individual titles and specific elements of the print plan.
- Delivers a clear assessment of which elements are working best vs. those that may be underperforming, offering insights into how best to maximize print performance in future campaigns.
Note that past work illustrates that, when individual titles can be isolated, results can vary greatly. For example, over the course of 18 months for a $10 million+ magazine plan, results for individual titles ran as high as six times the average.
Overall Modeling Results of Print Input Comparisons
Chart 4. Comparison of an estimated print parameter in the model: all six variations of the input, indexed to weekly GRPs by market = 100.

| Print input | Index |
| Weekly GRPs by market | 100 |
| Monthly GRPs by market (distributed evenly across each week of a month) | 94 |
| Weekly national GRPs | 94 |
| Monthly dollars (distributed evenly across each week of a month) | 85 |
| Monthly GRPs by market (1st week of a month) | 85 |
| Monthly dollars (1st week of a month) | 82 |
The strongest estimate for print ROI is achieved when print is measured using weekly GRPs by market (100 index), as this level of detail is the truest reflection of print readership over time and best aligns print data with weekly, market level sales inputs.
Monthly GRPs distributed over the weeks of a month (94) and weekly national GRPs (94) deliver slightly lower estimates: each of these inputs captures either the weekly build of print readership or the market-level variation in print audience, but not both.
The use of monthly GRPs (85) and monthly dollars (82) allocated entirely to the first week of a month delivered the weakest estimates of print ROI, as this allocation drastically distorts the true readership build pattern of print.
As with any model results, it should be noted that changing the inputs of one variable in the mix (in this case, varying the print data input) impacts more than just the one variable being tested. In other words, changing the input of one variable has a significant impact on the measured contribution of that variable, but it also alters the measured contribution of the other marketing variables in the mix. While outside the scope of this paper, our results show that as the contribution of print increases with the use of better data inputs, the contribution and ROI of other marketing channels decrease proportionately.
Model Results Application
Model results highlight the importance of proper data input in market mix models: the value attributed to print advertising varied by as much as 18% simply by changing the print inputs in the model. Below are two hypothetical examples which demonstrate in dollars what this means in the marketplace; the arithmetic behind them is sketched after the tables. The first example is at the brand level and assumes a "typical" brand print budget of $10 million. The second example assumes a total magazine spend in the U.S. of $20 billion. The examples paint a clear picture: improper print data inputs greatly undervalue the impact that this medium has on a brand's business results, and, extrapolating to the marketplace at large, less-than-ideal inputs for print minimize the perceived overall contribution, value and importance of print advertising. The implications are significant: improper input for print underestimates its impact and overstates the impact of other channels, which can lead to less than optimal allocation of advertising budgets.
ROI index by print input (weekly GRPs by market = 100):

| Print input | ROI Index |
| Weekly GRPs by market (Version 1, original) | 100 |
| Monthly GRPs by market, distributed evenly across each week (Version 3) | 93.9 |
| Weekly national GRPs (Version 4) | 93.9 |
| Monthly dollars, distributed evenly across each week (Version 6) | 84.8 |
| Monthly GRPs by market, 1st week of month (Version 2) | 84.8 |
| Monthly dollars, 1st week of month (Version 5) | 81.8 |

Brand example: print budget of $10,000,000

| Print input | Contribution to sales from print | Difference in estimated contribution |
| Weekly GRPs by market | $5,000,000 | - |
| Monthly GRPs by market, distributed evenly | $4,696,970 | $303,030 |
| Weekly national GRPs | $4,696,970 | $303,030 |
| Monthly dollars, distributed evenly | $4,242,424 | $757,576 |
| Monthly GRPs by market, 1st week | $4,242,424 | $757,576 |
| Monthly dollars, 1st week | $4,090,909 | $909,091 |

Industry example: magazine spend of $20,000,000,000

| Print input | Contribution to sales from magazines | Difference in estimated contribution |
| Weekly GRPs by market | $10,000,000,000 | - |
| Monthly GRPs by market, distributed evenly | $9,393,939,394 | $606,060,606 |
| Weekly national GRPs | $9,393,939,394 | $606,060,606 |
| Monthly dollars, distributed evenly | $8,484,848,484 | $1,515,151,516 |
| Monthly GRPs by market, 1st week | $8,484,848,484 | $1,515,151,516 |
| Monthly dollars, 1st week | $8,181,818,182 | $1,818,181,818 |
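The figures in the two examples follow directly from the model coefficients reported in the model details below: each alternative input's estimated contribution is the Version 1 contribution scaled by the ratio of its print coefficient to the Version 1 coefficient. A short sketch of that arithmetic (the labels are our shorthand for the input variants):

```python
# Version 1 (weekly GRPs by market) is the reference case.
base_coefficient = 0.033
base_contribution = 5_000_000          # brand example: $10M print budget

alternative_coefficients = {
    "Monthly GRPs by market, even spread": 0.031,
    "Weekly national GRPs": 0.031,
    "Monthly dollars, even spread": 0.028,
    "Monthly GRPs by market, 1st week": 0.028,
    "Monthly dollars, 1st week": 0.027,
}

for label, coef in alternative_coefficients.items():
    contribution = base_contribution * coef / base_coefficient
    shortfall = base_contribution - contribution
    print(f"{label}: ${contribution:,.0f} (understated by ${shortfall:,.0f})")
```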
Model Details:

| Print input | Coefficient | T-statistic |
| Weekly GRPs by market (Version 1, original) | 0.033 | 3.30 |
| Monthly GRPs by market, distributed evenly (Version 3) | 0.031 | 2.75 |
| Weekly national GRPs (Version 4) | 0.031 | 3.12 |
| Monthly dollars, distributed evenly (Version 6) | 0.028 | 3.13 |
| Monthly GRPs by market, 1st week (Version 2) | 0.028 | 2.79 |
| Monthly dollars, 1st week (Version 5) | 0.027 | 2.93 |
The coefficient is the constant multiplicative factor applied to a specific variable (here, print). In this case, the coefficient also represents the percent contribution.
The t-statistic is used in a t-test, a statistical hypothesis test for a coefficient in the model (the null hypothesis being that the coefficient is zero).
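For reference, the t-statistic reported for each print coefficient is simply the estimated coefficient divided by its standard error, tested against the null hypothesis that the true coefficient is zero:

```latex
t = \frac{\hat{\beta}_{\text{print}}}{\operatorname{SE}\!\left(\hat{\beta}_{\text{print}}\right)},
\qquad H_0 : \beta_{\text{print}} = 0
```

For example, the Version 1 coefficient of 0.033 with a t-statistic of 3.30 implies a standard error of roughly 0.01.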
Additional suggestions for improving print data input
- Total gross impressions (total GRPs) assume all GRPs are created equal and do not take into account differences in creative messaging, ad position, or the other factors that went into the magazine selection and buying process. "Adjusted" GRPs could be used to reflect these different values (or the values can be reflected in the back-end diagnostic phase).
- Magazine Ad Ratings (as determined by the upcoming launch of MRI/Starch’s AdMeasure or Affinity’s Ad Ratings tools) that provide a measure of advertising exposure, rather than simply “opportunity to see”, may also improve the input.
- Some modelers (such as Nielsen’s “Consumer Marketing Mix”) weight the impressions based on consumer purchase data (from Spectra, for example) providing a “tighter link between sales response and media activity”.
These types of additional "value" can also be incorporated in the diagnostic stage once the modeling steps are completed.
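A minimal sketch of how such adjustments could be layered onto raw GRPs is shown below. The adjustment factors (an ad-noting index, a position factor and a purchase-propensity weight) and their values are purely illustrative assumptions, not published norms or any vendor's actual methodology.

```python
def adjusted_grps(raw_grps, ad_noting_index=100, position_factor=1.0,
                  purchase_weight=1.0):
    """Weight raw GRPs to reflect likely exposure and audience quality.

    ad_noting_index: ad exposure measure indexed to 100 (e.g., from an
        ad-rating service), so 120 means 20% above-average noting.
    position_factor: premium or discount for ad position (cover vs. run-of-press).
    purchase_weight: skew of the audience toward category buyers.
    """
    return raw_grps * (ad_noting_index / 100) * position_factor * purchase_weight

# Example: 80 raw GRPs, strong noting, premium position, and an audience
# that over-indexes on category purchase.
print(adjusted_grps(80, ad_noting_index=115, position_factor=1.10,
                    purchase_weight=1.05))
```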
Collecting Data Input
A final thought on input: for modelers, data collection is one of the most painful and time-consuming parts of the modeling process, and it involves a great many people: the client (often several teams: media, marketing, product and finance), the agency or agencies, the modeling team and sometimes media companies. In many cases the client does not have data available beyond a flowchart, the agency is not paid for the time spent assembling the details needed for modeling, and time has not been built in on either side. Thus, at the end of the day, the data that is provided may be chosen not for its granularity and precision, but because it is "good enough and available in the shortest period of time."
There is general agreement that a standard approach would be extremely valuable. This is where the magazine industry (publishers, MPA, etc.) may be able to help.
Alternatives to Modeling
While market mix models offer tremendous value to marketers, they are not the only road to accountability; importantly, these traditional models also have limitations in terms of what they can measure. A wide spectrum of alternative methods for demonstrating magazine accountability is available to marketers. These include methods for measuring the relationship between marketing investments (specifically print) and brand sales or transactions, similar to market mix models, as well as methods that provide critical learning across other performance metrics throughout the consumer purchase funnel.
- ROMO, or return on marketing objectives, studies can measure the impact of individual marketing channels and the individual elements within a channel, as well as the impact of multi-media programs on purchase, purchase intent and various brand health and equity measures.
- Brand tracking studies measure movements in awareness, brand perceptions and equity, consumer intent and purchase; when conducted on an ongoing basis and supported with a strong analytical framework, tracking studies provide the necessary inputs to assess the direct impact of marketing on consumer perceptions and actions.
- Control vs. test studies (or market tests and pre-post tests) also provide marketers with a way to assess the impact of individual and multi-channel marketing investments. Though it can be difficult to fully tease out a clear cause-and-effect relationship, when structured well these studies offer a mechanism for understanding the role and impact of print.
- Marketing simulation research or other simulations can be used to measure the impact of marketing exposures and spending on consumer choice, brand perceptions or other measures. For example, Ninah’s Marketing Plan Lab employs a “virtual test” environment to estimate marketing impact.
- A host of other options are also available to measure the role and impact of magazine investments on a brand, including advertising recall/actions taken studies, website visit/call center tracking where volume can be directly tied to print advertising (i.e. through unique call-in numbers; web addresses), etc.
Prescription for Success with Market Mix Models
- Clearly define measurement objectives and ensure that there are sufficient data available and that minimum spending levels exist to enable statistically reliable outputs.
- Ensure marketing and media inputs are aligned with sales inputs. This means using audience accumulation curves and weekly, DMA-level GRPs (currently available from MRI in the U.S.).
- Evaluate inputs to be sure magazines “get credit” in the model by ensuring that the inputs used truly reflect what is happening in the marketplace (weight levels, message, etc.).
- Ensure the accuracy of data inputs and that the most up-to-date data is being used before the modeling begins; this means constant communication and checking with the agency or other source of input to be sure the data is representative and accurate.
- Manage expectations on the number of variables to consider. The rule of thumb is no more than 10 variables (TV, print, or daytime, prime, BHG, etc.) for every 100 observations (weekly sales).
- Modeling is a team game. Involve and communicate with key players at all agency partners (creative, media, research, planning and account teams) and at the client (brand managers, research) to ensure that all elements of the plan are understood and incorporated (e.g., creative tested poorly, product performance has not been good, a major competitor launched a new product, etc.).
- Don’t just let modelers report and interpret data. Create a workshop with all key players (planners, research, creatives, buyers, brand managers, etc.) to discuss learning/provide insights into modeling results and implications. Consider a phased rollout of results allow for incorporation of feedback, input from all parties.
- Avoid following the model blindly. Seek to understand the results: why did magazines do so well (or poorly), and how did the inputs affect the outcome? Include all relevant players in the discussion prior to making recommendations.
- Models should not be the only data for measuring marketing effectiveness. Consider how magazines are being used – what role are they playing? Brand building (equity, attributes, etc.) and other consumer-based metrics can be measured through other means (“purchase funnel” analytics) and can work in conjunction with mix models to create the required outputs for maximizing marketing effectiveness.
- Use models – and all measurement approaches – to improve rather than just prove (or justify). Develop a future-forward learning agenda that drives improvements in marketing performance over time.
Reviewing the Hypotheses
In reviewing our going-in hypotheses, at least two of the three were supported.
- Magazines are losing business due to the increased focus on TV-centric marketing mix modeling and the results that these models produce.
o MAYBE. Magazines may be losing business due to inaccurate or misleading input, but most modelers have abundant examples of the strength of magazines.
- There is little consistency and lack of awareness of available tools across leading modeling companies on magazine inputs.
o TRUE. Particularly with accumulation curves and DMA level GRPs.
- Improving the quality and consistency of data inputs can result in different – and more accurate – results and recommendations.
o TRUE. Up to 18% improvement.
APPENDIX
ROI by channel based on historical models from leading modeling companies.
Analytic Partners
- Relative to other marketing activity, Print is actually right at the average in terms of ROI.
- The chart below should be viewed as directional, as factors such as sample size (e.g., cinema is based on only 4 cases from the confectionery category) as well as spending and scalability will influence ROI (e.g., spend behind PR or digital has historically been relatively low).
| Marketing Channel | ROI Index |
| Cinema | 192 |
| Broadband advertising | 184 |
| Digital (non-broadband) | 147 |
| PR | 146 |
| Special packs | 132 |
| Direct mail | 107 |
| Trade | 104 |
| Print | 100 |
| TV | 95 |
| In-store | 91 |
| Sampling | 82 |
| Hispanic TV | 81 |
| FSI | 72 |
| Catalina | 69 |
| Outdoor | 64 |
| Radio | 53 |
| Sponsorships | 46 |
| Shelf-talkers | 36 |
Source: Analytic Partners
Information Resources, Inc. (IRI)
- Based on an analysis of 63 CPG models over a 5-year period, IRI found that print delivers a higher ROI than TV and FSIs, and just below that of trade.
IRI ROI Norms (average across 63 CPG brands/sub-brands analyzed from 2003-2008):

| Channel | ROI |
| Trade | $0.84 |
| Print | $0.69 |
| TV | $0.58 |
| FSI | $0.41 |

Source: IRI
MediaVest/Ninah
- Based on data from 49 market mix model cases in the US, spanning CPG, Finance, Retail, Services, Electronics, and Pharmaceuticals, the data shows that magazines have one of the top ROI scores when compared to other marketing channels.
| Marketing Channel | ROI Index* |
| Total Marketing | 100 |
| Radio | 166 |
| Magazines | 145 |
| Online | 128 |
| TV | 107 |
| In-Store | 97 |
| Trade/Field Mktg | 91 |
| Newspapers | 60 |
| Out-of-Home | 37 |

*Indexed to Total Marketing ROI
Source: MediaVest/Ninah
Acknowledgements:
Insights and best practices illustrated in this paper are based on conversational interviews with key representatives from leading modeling companies. Input was gathered through a series of informal interviews and reported in aggregation, highlighting both similarities and differences in approach.
The authors wish to give special thanks to:
Maggie Merklin, Analytic Partners
Rick Watrall, Hudson River Partners
Jeff Groen and Brian Burke, Information Resources, Inc.
Craig Winters, Johnson & Johnson
Dave Gantman, Marketing Evolution
Rex Briggs, Marketing Evolution
Marty Frankel, Mediamark Research & Intelligence
Ben Elkins and Bruce Pivarunas, Nielsen