Positive Effects of Negative Publicity: When Negative Reviews Increase Sales
There are instances where negative reviews are beneficial. Specifically, the authors found that when a product has little or no awareness in the market, negative publicity has a positive impact on sales and purchase intent; when awareness is already significant, negative publicity has a negative impact. The root cause is dissociation from the original source of information: over time, people tend to forget the metadata originally associated with a reference to a brand, while the brand itself remains familiar.
The authors present three studies. The first compares New York Times book reviews with actual sales data provided by BookScan. The authors coded the sentiment of the New York Times reviews using a Perl script tuned to minimize false positives. They gauge an author's existing awareness from their past publication record: a prolific author is assumed to have more awareness than one who has not published before.
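The authors' Perl script is not published, so the following is only a loose illustration of the general idea of conservative, rule-based sentiment coding; the cue-word lists and the threshold are invented for the sketch, not taken from the paper.

```python
import re

# Invented cue words for illustration; a real coder would use much larger lists.
POSITIVE = {"masterful", "brilliant", "compelling", "superb"}
NEGATIVE = {"tedious", "dull", "clumsy", "disappointing"}

def code_review(text, threshold=2):
    """Classify a review as positive, negative, or neutral.

    Requiring at least `threshold` net cue words before assigning a polarity
    trades recall for precision, i.e. it minimizes false positives: a single
    stray word is not enough to tag a review.
    """
    words = set(re.findall(r"[a-z']+", text.lower()))
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos - neg >= threshold:
        return "positive"
    if neg - pos >= threshold:
        return "negative"
    return "neutral"

print(code_review("A brilliant, compelling and superb debut"))  # -> positive
print(code_review("Rather dull"))  # -> neutral (one cue word is not enough)
```

The conservative threshold is the point: ambiguous reviews fall through to neutral rather than contaminating the positive or negative buckets.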
They found that for both unknown and known authors, a positive review correlated with a 32% to 52% increase in demand. For a known author, a bad review corresponded to a 15% decrease in demand. For an unknown author, a bad review corresponded to a 45% increase in demand.
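To make those effect sizes concrete, here is a back-of-the-envelope calculation in my own framing (not the authors' model), treating the reported percentages as simple demand multipliers:

```python
# Illustrative only: the study's reported point estimates recast as demand
# multipliers. The (low, high) pair for positive reviews reflects the reported
# 32%-52% range; the negative-review effects are single point estimates.
EFFECT = {
    ("known", "positive"): (1.32, 1.52),
    ("unknown", "positive"): (1.32, 1.52),
    ("known", "negative"): (0.85, 0.85),    # 15% decrease
    ("unknown", "negative"): (1.45, 1.45),  # 45% increase
}

def expected_demand(baseline_units, awareness, sentiment):
    """Return a (low, high) range of expected demand after a review."""
    low, high = EFFECT[(awareness, sentiment)]
    return baseline_units * low, baseline_units * high

# A negative review of an unknown author: demand rises rather than falls.
print(expected_demand(1000, "unknown", "negative"))
```

The asymmetry in the table is the whole finding: the sign of a negative review's effect flips with prior awareness.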
The authors support their contention that this relationship is causal by pointing out that the New York Times is read more heavily in the New York designated market areas (DMAs). If the relationship between reviews and sales were causal, one would expect the effect on book sales to be stronger where readership is higher. Indeed, they found a stronger correlation between sales and reviews among the 33 New York DMAs than in the rest of the country.
The second study examined the role of time in explaining this effect. The authors ran an experiment on 252 participants. Each participant saw two book reviews, one pre-verified as negative and one as positive. One review was attributed to a John Grisham novel, the other to an unknown author. One group was given unrelated filler tasks before being asked about purchase intent; the other group was asked immediately. The authors found that purchase likelihood after a negative review of an unknown product became more positive when questioning was delayed: the negativity of a review no longer affected purchase likelihood for the unknown author, but still did for the known author. The impact of negativity fades at different rates depending on familiarity or awareness.
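One way to picture the mechanism, purely as a conceptual sketch (the functional form, weights, and decay rate below are invented for illustration; the paper specifies no such model): purchase intent combines a persistent awareness gain with a negativity penalty that decays as the review dissociates from the brand.

```python
import math

def purchase_intent(t_days, prior_awareness, awareness_gain=0.3,
                    negativity_penalty=0.5, half_life_days=7.0):
    """Toy model: a lasting awareness boost minus a decaying negativity penalty.

    All parameter values are invented for illustration; the study does not
    estimate a functional form for the decay.
    """
    decay = math.log(2) / half_life_days  # exponential-decay rate from half-life
    lingering_negativity = negativity_penalty * math.exp(-decay * t_days)
    return prior_awareness + awareness_gain - lingering_negativity

# For an unknown product (low prior awareness), intent is depressed right
# after a negative review but recovers as the negativity dissociates.
immediate = purchase_intent(0, prior_awareness=0.1)
delayed = purchase_intent(30, prior_awareness=0.1)
print(immediate < delayed)  # True
```

Under this sketch, the awareness gain outlives the negativity, which is exactly the delayed-questioning result the second study reports.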
This is the subject of the third study: just how sensitive is the negativity-awareness relationship? The authors repeat a version of the second experiment, adding a control for prior awareness of a given book and asking for purchase intention long after the review. They found, perhaps not surprisingly, that "publicity only increased awareness when the product was not well known already."
Why This Is Important to Analysts
These findings add nuance to the sentiment discussion. Conventional wisdom holds that all negative reviews are detrimental to sales. This paper finds that this is not always so and offers a compelling explanation as to why.
It shouldn't be surprising that reviews affect sales. It is surprising that negative reviews, which generate a large number of impressions and increase awareness, may be beneficial when a firm or product is trying to break out. In the early stages of the customer lifecycle, budgets are tight, and impressions typically have to be paid for or carefully greased by easy-to-tell stories that help word-of-mouth spread. As awareness of a product increases, the marginal benefit of negative reviews decreases. A product group is unlikely to survive persistent, consistent negative reviews; however, a negative review of a new brand may not be catastrophic.
Analysts should consider this when deriving recommendations. In certain instances, the relative risk of executing strategies intended to counter negativity may exceed the benefit those strategies can be expected to deliver.
The second implication concerns dissociation and the observational lag it implies. Web analysts are accustomed to a very tight temporal link between cause and effect. Deploy an A/A/B test on a heavily trafficked website and get an answer in a day. Send out an email blast and observe a spike within 4 to 30 hours. Increase SEM spend and see an almost instant lift. The linkages are very tight.
The linkages are looser at the front end of the customer lifecycle. Months may pass between an initial impression of a brand and the start of active exploration for a solution. Consumer dissociation has been demonstrated and should be accepted. On any given day we are carpet-bombed with impressions for products in which we have no interest, need, or want. It is not until we become curious that we make more effort to remember sources.
Analysts may experience dissociation themselves. An analyst's efforts are frequently geared toward what happened during the most recent reporting period, and many analysts' thought processes are focused on predicting what comes next. Somewhat curiously, then, analysts themselves experience dissociation about consumer dissociation.
The value of an initial negative impression may simply be the privilege of entering the consideration set later on. Once memory of the negative review fades, potential sales can kick in. The critical window opens right after this initial memory fades, once market presence has been established.
The longer the period between awareness and active curiosity, the more opportunity additional factors, such as dissociation, have to intervene. These factors are messy from a web analytics perspective because they do not fit neatly into a lag-free cause-effect model.
The study does not incorporate the role of search engines on memory recall, opening up the third key implication.
Considerable effort has been made to publish reviews and customer feedback on the web, including on blogs, forums, review sites, YouTube, and so on. These sources are made relevant to marketers by the search engine, through which old negative reviews may be discovered or rediscovered. Web analysts may not even be aware of such content because it never generates direct clicks to the website. Moreover, conventional #measure Twitter wisdom holds that such referring sites are irrelevant because they constitute a marginal percentage of site traffic, in spite of an existing body of evidence confirming multichannel effects among traffic sources.
Wishful thinking will not make such loose data sources go away.
This domain of analytics, termed 'pre-click' analysis by Alex Langshur, has traditionally belonged to media buying and demand-generation exercises. There are real opportunities here to use pre-click data sources, such as reviews, to forecast future consideration sets and to optimize against them.
Nobody really wants a bad review. This study offers a compelling causal model explaining why, under very specific circumstances, bad reviews do not result in bad sales. It also offers important findings about the role of time and memory in the early portions of pre-click analysis.
I recommend this journal article to members of the Web Analytics Association.