Demystifying Disruption: A New Model for Understanding and Predicting Disruptive Technologies
Sood and Tellis put forward a measurable, predictive model for disruptive technologies. A disruptive technology is one that causes turbulence in a market because nobody expected it. Generally speaking, everybody expects a 10GB hard drive to follow a 9GB hard drive. A surprise would be a commercial, Wi-Fi-enabled 5GB hard drive the size of a button that you sew onto your shirt.
A cottage industry has sprung up around the terms ‘innovation’ and ‘disruptive technologies’, gathering steam in part thanks to Christensen’s popular book The Innovator’s Dilemma (1997). Sood and Tellis state that “the theory suffers from circular definitions, inadequate empirical evidence, and lack of a predictive model”.
The theory of disruptive innovations can be understood as follows. A firm seeks to maximize its offering along a particular primary dimension. For instance, makers of hard drives seek to maximize megabytes of storage per square inch. Chip manufacturers seek to maximize processor speed. Camera makers seek (sought?) to maximize megapixels. These are all primary dimensions. Disruption may occur when a new innovation is introduced that seeks to maximize a secondary dimension, typically at the expense of the primary one. For instance, a company may introduce a much lighter camera at the expense of megapixels. A firm may introduce a much cooler (less hot) chip at the expense of performance. A hard drive manufacturer may introduce a much faster hard drive at the expense of storage capacity. The introduction of this second dimension may be of interest to a niche market.
The Christensen model, and subsequent contributions to the literature, make a number of predictions: smaller, new-entrant firms tend to introduce more innovations; those innovations tend to be more disruptive; and they tend to be cheaper because they are inferior along the primary axis.
Sood and Tellis put forward a model that incorporates incumbency, attack strategy, firm size, relative price, order of entry, and percentage change in performance. They took a sample of companies, tested their model, adjusted it, and then tested the adjusted model against a set of companies deliberately held out of sample for testing. Their model is adequately predictive.
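The shape of such a model can be sketched as a simple scoring function over those six predictors. To be clear, this is a minimal illustrative sketch only: the variable encodings and coefficients below are invented for exposition and are not the estimates Sood and Tellis report.

```python
import math

def disruption_probability(incumbent, secondary_attack, firm_size,
                           relative_price, entry_order, perf_change):
    """Toy logistic-style score in the spirit of a predictive model of
    disruption. All weights are hypothetical placeholders, NOT the
    published estimates from Sood and Tellis."""
    z = (0.4 * incumbent          # 1 if introduced by an incumbent, else 0
         + 1.1 * secondary_attack # 1 if attacking via a secondary dimension
         - 0.2 * firm_size        # standardized (log) firm size
         - 0.5 * relative_price   # price relative to the dominant technology
         - 0.1 * entry_order      # 1 = first entrant, 2 = second, ...
         + 0.8 * perf_change)     # fractional change on the primary dimension
    # Logistic link maps the linear score to a probability in (0, 1).
    return 1 / (1 + math.exp(-z))
```

With an encoding like this, the interesting empirical questions become the signs and sizes of the weights, e.g., whether incumbency really lowers the probability of introducing a disruption, which is exactly the kind of assumption the authors test against held-out firms.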
Their findings challenge what had been widely assumed. First, it had been assumed that large, established firms were less likely to introduce disruptive technologies than small startups. After all, why would a large firm destroy its own market? The authors found that large firms introduced their own disruptive technologies as often as small startups did. Second, it had been assumed that new, disruptive technologies were cheaper than existing ones. This was found not to be the case in 90% of the instances studied.
Analytics practitioners should care.
What is the primary dimension along which we, as analytics practitioners, principally compete?
Putting aside the inevitable ‘dashboards per quarter!’ answer, the ideal measure would be ‘value add’, or more specifically ‘profits accrued as a result of analytics’. Another candidate would be the simpler and more cynical ‘return on insight’. Of course, that measure assumes the term ‘insight’ has been commonly defined across the industry. Worse, profit-as-a-result-of-actioned-insight is typically proprietary; one does not publicly share business intelligence data. On the positive side, the metric doesn’t actually have to make sense. The power of a single dimension lies in its inherent simplicity. Consider video game consoles.
There was very little confusion as to the necessity of the Super Nintendo when it was announced. It had 16 bits instead of 8. The N64 had 64 bits. Children would justify the purchase of a new gaming system by arguing each one was twice or four times better than the previous. Parents would eventually buy one just to shut them up. So, even nonsensical measures can work if they’re quantifiable.
So what of analytics? What is our primary dimension?
Behaving like a customer, I went to Google, typed ‘web analytics consultant’, and opened the top 10 consultancies that came up, looking only at each landing page. How would anybody sort these along a primary dimension? Only two suggest that years of experience are important. Five visually emphasize a list of certifications. Perhaps photography of people looking confused and/or bored while staring at data is the primary dimension? This lack of a primary dimension should be a source of concern.
And what of web analytics products?
Product-centric debates used to permeate our community. Comparisons of Google versus Adobe Omniture versus Coremetrics versus Webtrends would always end at the primary dimension: the number of custom variables available. Adobe Omniture had 40+ custom variables; Google, initially, had far fewer. In 2006, Google introduced a second dimension – usability. Zero out-of-pocket cost for sites under 30,000 visits wasn’t the game changer; usability was the real disruptor. We’ve since seen the introduction of animation and visualization into the mix, with products such as TeaLeaf and ClickTale – products that are superior on secondary dimensions but fall behind on the vital primary one. Are practitioners well served by this primary dimension?
Practitioners should care because it goes directly to their own value proposition. A primary dimension must exist to frame a market, and to a certain extent, develop a market. What should be the primary dimension? Sood and Tellis have provided a useful conceptual framework and a predictive model to approach the problem.
I recommend the article to members of the WAA who want to understand more.