Online Product Opinions: Incidence, Evaluation & Evolution

Moe, Wendy, and David Schweidel (2012). Online Product Opinions: Incidence, Evaluation, and Evolution. Marketing Science, Articles in Advance, pp. 1-15.

Reviewed by Christopher Berry, February 2012

Executive Summary

Moe and Schweidel are the first to model individual-level decisions on whether and what to post in an online product review. To test their model, they used a database from BazaarVoice, dating to 2007, that contained 4,974 unique individuals posting 10,460 ratings across 1,811 products.

There are two long-standing concepts in marketing science. The first is pre-purchase evaluation, which is essentially the study of market hype and its impact on setting expectations. The second is post-purchase evaluation, which is the study of experience and whether expectations were realized.

To explain: say there is a tremendous amount of hype about a product, and you buy that product based on the expectation you formed. Say the experience isn't as good as you thought it was going to be. The resulting letdown has a pretty big impact on your post-purchase evaluation, wouldn't it?

From there, a person makes a decision about whether or not they’re going to post anything online, called an incidence decision. They also decide what content to post online, called an evaluation decision.

To expand: say a product really didn't meet your expectations at all. The first decision is whether you're going to post anything. The second decision is how you're going to adjust your evaluation. If you're really disappointed with a product, you might exaggerate. You may also encounter a very positive community around the product; perhaps it's not worth engaging that community with language that is too strong, and you adjust your opinion as a result.

The authors modeled these factors, ran a simulation, and compared their predictions against actual data.
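
To make the two-stage structure concrete, here is a toy sketch in Python. It is emphatically not the authors' model; the probabilities, weights, and function names are invented for illustration of an incidence decision followed by an evaluation decision.

```python
import random

def simulated_post(true_opinion, community_mean, posts_so_far):
    """Toy illustration: return a 1-5 rating if the person posts, else None."""
    # Incidence decision: a more positive environment makes posting more likely.
    p_post = min(0.9, max(0.1, 0.2 + 0.1 * (community_mean - 3)))
    if random.random() > p_post:
        return None  # chose not to post

    # Evaluation decision: infrequent posters drift toward the crowd
    # (bandwagon); frequent posters push away from it (differentiation).
    if posts_so_far < 3:
        rating = 0.7 * true_opinion + 0.3 * community_mean
    else:
        rating = true_opinion - 0.5 * (community_mean - true_opinion)

    return round(min(5.0, max(1.0, rating)))

# A disappointed, very active poster in a glowing community.
print(simulated_post(true_opinion=2.0, community_mean=4.2, posts_so_far=10))
```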

They found that "positive environments increase posting incidence, whereas negative environments discourage posting; and…less frequent posters are more positive and exhibit bandwagon behavior, whereas more active posters are more negative and exhibit differentiation behavior."

To elaborate: there's a linkage between posting frequency and positivity. Less frequent posters are more positive and tend to follow the crowd. High-frequency posters are more negative and try to differentiate themselves from the pack.

They also found that "online opinions are dominated by activists who offer opinions that are more negative and differentiated from previously expressed opinions. Moreover, participation by these activists increases over time, whereas participation by low-involvement individuals decreases. This shift in the composition of the posting population can substantially affect the overall tone of posted opinions."

Effectively, long-tenured individuals grow more negative over time and, given enough time, can cause major changes in the overall tone of a community's opinions.

Editorial

The authors give us a new application of sentiment (valence) in a review community over time. In so doing, they spawn a set of hypotheses to be tested in many other communities. They elaborate on the long-run effect of expert differentiation on a community. This effect is particularly important to understand and has major implications for social media marketing.

The competition for expert recognition is a zero-sum game. There isn't enough recognition to go around; top-ten lists only have ten spots. A niche can't sustain an unlimited number of experts, because if everybody were an expert, nobody would be.

The race to the top ends in a race to the bottom.

There are many ways to demonstrate expertise without negativity. One way is to articulate a well-reasoned preference for an alternative because it is superior. This is comparatively harder than, say, going negative on something because it is inferior and then competing in a pile-on. Going negative requires less logic and less thought.

Older communities tend to have a few long-tenured individuals. These individuals have greater incentive to differentiate through negativity. Even when experts agree that something is bad, they still compete over who can best express just how bad it is. That negative energy discourages those with little or no tenure, people called newbies or n00bs, from ever participating in the first place. Experts start to resemble trolls, and the community ultimately dies because it can't retain enough members to sustain an audience.

Even black holes evaporate eventually.

The notion that the longest-tenured individuals pose an existential threat to the very communities they form ought to raise some hackles. And yet there are enough case studies out there to substantiate the claim. And, to really endear myself to those communities, I won't mention them by name.

There are practical implications for digital analysts and marketers.

Moderation and moderation policy are particularly important in social marketing. A disproportionate volume of negative commentary will come from a handful of high-frequency, long-tenured individuals. Some of these people will also be the primary engines of positive commentary and volume. Creating an environment that welcomes newcomers is essential if engagement rates are to be sustained, yet this must be balanced against tenured individuals who will start bumping into the policy as time goes on. These individuals may be quick to shout censorship without being aware that their own behavior has changed.

This is a case where valence / sentiment can be a very useful tool, in an application that goes beyond the 'I'm reporting on sentiment' perspective. It's the application of the numbers to decision making that matters, not a summary of the numbers without context. Some of the negativity towards a product may not be entirely a function of the actual evaluations themselves, but reflective of the age and tenure distribution of the social property itself. That's still a problem for a marketer in a particularly mature product category.
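
As a concrete illustration of putting valence data to work this way, here is a minimal sketch that splits average rating by poster-tenure cohort. It assumes a hypothetical ratings extract with post_date, first_post_date, and rating columns; the file name and cohort cut-offs are invented. If long-tenured cohorts rate systematically lower, part of a downward sentiment trend reflects who is posting rather than the product itself.

```python
import pandas as pd

# Hypothetical export of individual ratings with the poster's first-post date.
reviews = pd.read_csv("reviews.csv", parse_dates=["post_date", "first_post_date"])

# Tenure at the time of posting, in days since the poster's first post.
reviews["tenure_days"] = (reviews["post_date"] - reviews["first_post_date"]).dt.days

# Bucket posters into rough cohorts: newcomers, established, long-tenured.
reviews["cohort"] = pd.cut(
    reviews["tenure_days"],
    bins=[-1, 30, 365, float("inf")],
    labels=["newcomer", "established", "long-tenured"],
)

# Compare average rating and volume by cohort before reading a trend
# as a statement about the product alone.
print(reviews.groupby("cohort")["rating"].agg(["mean", "count"]))
```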

This is an important consideration for web analysts making the transition to social analytics in particular and digital analytics in general. The underlying phenomenon being observed changes over time, which makes causal inference from trend analysis that much more difficult and interferes with precision.

Another avenue for research is a gamification mechanism that recognizes experts so they don't have to compete in a race to the bottom. Traditional point-accumulation methods, such as link karma and points, generate a number of undesirable newcomer behaviors. Understanding this underlying driver ought to at least inform or inspire a mitigation mechanism.

Finally, there may be a bias in the way that valence is coded by many sentiment engines. It may be far easier for machine-learning sentiment engines to detect negative expertise than positive expertise. Negative arguments contain relatively simple subject-verb-noun trigrams, and valence dictionaries, against which most engines are trained, return more predictable scores on such data. Measuring preference for an alternative is comparatively much more difficult, as the person setting up the instrument has to know all the alternatives in order to label them. That knowledge is itself a component of real expertise, which is the quantity that is supposedly so rare in the first place. This is a problem worth solving for any firm that differentiates on topic mining.
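
To illustrate the point about dictionary-based scoring, here is a minimal sketch. The tiny lexicon, example sentences, and product name are hypothetical, and no particular sentiment engine is implied.

```python
# Toy valence lexicon: word -> score.
VALENCE = {"terrible": -2, "awful": -2, "broken": -1, "great": 2, "prefer": 1}

def score(text: str) -> int:
    """Sum the valence of any lexicon words found in the text."""
    return sum(VALENCE.get(word.strip(".,!").lower(), 0) for word in text.split())

# A blunt negative statement is easy to score...
print(score("This camera is terrible and the lens mount is broken."))  # -3

# ...while a reasoned preference for an alternative barely registers,
# because the expertise lives in the comparison, not in charged words.
print(score("I prefer the X100 for its faster autofocus and better low-light noise."))  # 1
```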

I'd like to see this vein of research continue. Specifically, does the comprehensiveness of product-feature coverage within a review thread discourage those with positive experiences from adding their feedback, because everything has already been said? What is the relationship between valence, the comprehensiveness of the product features reviewed, and sales volume? Finally, is the core insight generalizable across most online communities of interest?

I strongly recommend this article to members of the Web Analytics Association who want to learn more about Online Product Opinions. I predict that this will be a very influential paper on the topic.

A single copy of the full article reviewed above is available to members of the Digital Analytics Association. To request a copy, email Shannon Taylor.