Christensen Vs. Lepore: A Matter Of Fact

Editor’s note: Thomas Thurston is a Partner at WR Hambrecht + Co, a San Francisco-based investment bank and venture capital firm. He is also Fund Manager at Ironstone, a San Francisco-based private equity firm that uses algorithms to identify disruptive startups, CEO of Growth Science, a data science firm, and former Chief Investment Officer of Rottura Capital, a long-short equities hedge fund. Formerly, Thomas worked at Intel Capital where he used data science to guide growth investments. A Fellow at the Harvard Business School, Thomas holds a BA, MBA and Juris Doctor.

Nothing gets keyboards clicking like a good controversy. Recently Jill Lepore, a history professor at Harvard, published a fierce article in the New Yorker accusing another Harvard professor, Clayton Christensen, of being a quack.

Lepore didn’t use that word, but she may as well have. Christensen is a business school professor renowned for his “Disruption Theory” about why businesses survive or fail. Lepore essentially says Disruption Theory is no good: it’s reckless, based on bad evidence and unable to predict the future. An ability to predict the future is, after all, the true test of a model.

Christensen fired back in a Bloomberg Businessweek interview days later, followed by droves of Internet chatter from onlookers. The real question is: who’s right, Christensen or Lepore? Is this just a case of one reasonable opinion versus another?

Actually, no. The unpopular, debate-killing truth is opinion doesn’t matter. Whether or not Disruption Theory can predict the future isn’t a matter of opinion, it’s a matter of fact.

Here are the facts.

Predictive Validity 

Most people don’t know this, but it turns out Disruption Theory is the foundation of the most accurate, most thoroughly vetted quantitative prediction models of new business survival or failure in the world today. Oops.

Allow me to explain. Nearly a decade ago I was working at Intel when it dawned on me to turn the company’s new business investment history into a formatted dataset. The goal was to look for quantitative patterns to better predict which Intel innovations would succeed or fail. Generally speaking, most businesses fail (around 75 percent) before their 10th birthday, regardless of whether they’re a startup, a venture capital investment or launched by a company like Intel. I wanted to know if data-centric analyses could better pick winners.

Strong patterns began to emerge, suggesting the fate of innovations was far more predictable than anyone had thought. The clearer these patterns became, the more I noticed how similar they were to phenomena Christensen had already been writing about for years. At the time, Christensen had recently published the book Seeing What’s Next, claiming Disruption Theory could predict the kinds of outcomes my research focused on. While Christensen’s work had a litany of supporting examples, it struck me (perhaps as it struck Lepore) that the research didn’t have the kind of data I cared about – quantitative, predictive data.

Christensen had reason to believe Disruption Theory was predictive, but I wanted to know how predictive – exactly. Was it 10 percent predictive? 21 percent? 55 percent? 98 percent? As a manager in the trenches of Intel, this was the specificity I needed before deciding if Disruption Theory was useful. Those details were the gap between theory and practice.

Since only around 25 percent of new businesses survive, to be useful any model would have to be more than 25 percent accurate at picking winners on a consistent basis. It’s important to note that improvement, not perfection, is the standard by which science is judged. For example, a new cancer treatment is valuable if it saves 10 percent more lives, even if it doesn’t cure 100 percent of patients. At any point in time, solutions just have to be better than the alternatives. Since the patterns I found were more than 25 percent accurate, and those patterns seemed to dovetail with what Christensen had long written about, I decided to test Disruption Theory on its own.

Predictive testing is part of a structured discipline called the scientific method. While it can be part of a social science education, it’s most commonly associated with “hard” sciences like physics, chemistry and medicine. It’s why new drugs go through clinical trials. A model has to pass through stages, including blind tests across random control groups, to see if its predictions are not only accurate but also statistically significant. Predictive accuracy at 95 percent or greater statistical confidence means the results are very unlikely to be due to chance. Less than 95 percent confidence means the model isn’t reliable enough.

So how’d it do? Was Disruption Theory more than 25 percent accurate, with at least 95 percent statistical confidence, at picking winners? In the first round of tests, the only blind dataset I had at the time was barely big enough to meet minimum sample-size requirements (it had only 48 companies). Still, it was enough to run some preliminary trials, and it’s worth noting Christensen wasn’t involved – I’d never met the man. Instead, I did my best to reduce his theory to falsifiable yes/no logic using published research. Even so, in that first round these relatively crude rules based on Disruption Theory blindly predicted whether new businesses would survive or fail with 94 percent accuracy and over 99 percent statistical confidence. Holy crap.
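
To make the pass/fail logic concrete, here’s a minimal sketch in Python. It assumes a simple binomial framing against the ~25 percent base rate and a hypothetical tally of 45 correct calls out of 48 (roughly the 94 percent reported); neither the framing nor the counts are the actual Intel test protocol.

```python
# A minimal sketch of the accept/reject logic, assuming a simple
# binomial framing and a hypothetical tally, not the actual
# protocol used in the Intel tests.
from scipy.stats import binomtest

BASE_RATE = 0.25  # roughly 25 percent of new businesses survive

def beats_base_rate(correct, total, confidence=0.95):
    """Does the observed hit rate beat the base rate at this confidence?"""
    result = binomtest(correct, total, p=BASE_RATE, alternative="greater")
    print(f"accuracy = {correct / total:.0%}, p-value = {result.pvalue:.1e}")
    return result.pvalue < (1 - confidence)

# Hypothetical: 45 of 48 blind predictions correct (about 94 percent).
print(beats_base_rate(45, 48))  # True, by an enormous margin
```

Under that framing the p-value is vanishingly small, which squares with the better-than-99-percent confidence reported.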

If business research had “Eureka” bathtub moments, this would be one of them. This early test was described in detail by a former co-author of Christensen’s named Michael Raynor in the book The Innovator’s Manifesto. These results alone satisfy the burden of proof demanded by Lepore’s article. The debate could end right there.

But there’s more.

Research Expansion 

My research started getting attention in and out of Intel. So while at Harvard one day I barged into Christensen’s office unannounced (he asked, confused, if I was there for a job interview). I introduced myself and summarized what I’d been working on. Months later I found myself living in Boston, leading joint research between Intel and Harvard to expand and improve these predictive models for new innovations.

I was surprised to learn Christensen wasn’t the only guru whose theory hadn’t been tested. To my knowledge – brace yourself – zero business gurus in the fields of strategy or innovation had ever subjected their theories to the level of predictive testing we put Christensen’s work through (except for, partly, a little work by Eric Von Hippel at MIT in 1976 that, by oddball coincidence, arrived at discoveries reminiscent of what Christensen and I found decades later).

In business strategy and innovation departments, predictive testing simply isn’t the norm. Digest that for a moment. I applaud Lepore for calling out a popular business theory for lacking proof, but it’s no small irony that she targeted the one theory that’s been tested from hat to socks.

Following the Intel-Harvard research I’ve continued to build predictive models as a data scientist, and more recently as a venture capitalist and head of research at an investment firm. In hindsight, the early Intel sampling cited in The Innovator’s Manifesto seems quaint compared with the work that’s followed.

Persistent Results 

Nearly a decade later, highly refined versions of these Disruption-based models had produced more than 3,400 blind, real-world predictions about business survival or failure. These predictions informed more than $100 billion in organic growth, venture capital, stock trades and acquisition investments. When the models predicted survivors, they were right 66 percent of the time. When they predicted failures, they were right 88 percent of the time. Adding all survival and failure predictions together, the total gross accuracy was 84 percent.

While lower at first glance than the 94 percent accuracy of the first early test at Intel, the models now account for robust combinations of industry, geography and temporality in ways early models didn’t. In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch.
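
As a sanity check, the per-class numbers and the 84 percent gross figure hang together arithmetically, and they imply how the 3,400 calls were split. A back-of-the-envelope sketch (my arithmetic, not the firm’s methodology):

```python
# Back-of-the-envelope arithmetic (mine, not the firm's methodology):
# given per-class accuracies and the gross accuracy, solve for the
# implied share of "survive" predictions.
survive_acc = 0.66  # accuracy when the model predicted survival
fail_acc = 0.88     # accuracy when the model predicted failure
gross_acc = 0.84    # reported overall accuracy

# gross_acc = f * survive_acc + (1 - f) * fail_acc; solve for f.
f = (fail_acc - gross_acc) / (fail_acc - survive_acc)
total = 3400
print(f"implied 'survive' calls: {f:.0%} (~{f * total:.0f} of {total})")
# implied 'survive' calls: 18% (~618 of 3400)
```

In other words, roughly one call in five was a “survive” prediction, which is what you’d expect in a world where most new businesses fail.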

Science is a process, not an event, and last year the models took another leap forward. More sophisticated models yet – all based on Disruption Theory – continue to evolve, now involving more advanced algorithms and technologies. Taken together, the latest methodologies produced over 20,000 blind predictions (and counting). Not one but multiple Disruption Theory-based models, each drawing from different data and underlying algorithms, continue to deliver 66 percent sustained accuracy with 99 percent statistical confidence.

Put into perspective, the models have now made more predictions than all U.S. venture capital deals over the past five years combined, with a predictive accuracy more than 2.5X greater than the venture capital industry as a whole (66 percent versus the industry’s roughly 25 percent hit rate).

A lot of people point to examples of when Disruption Theory, or Christensen, was wrong. It was wrong about the iPhone. Tesla. Ralph Lauren. In fact, it’s been wrong over 7,500 times by my count (remember, it has a 34 percent error rate when predicting winners). Keep in mind, however, it’s 66 percent right while everything else is stuck at 25 percent. Improvement, not perfection, is the standard. Disruption isn’t the end-all-be-all of management thinking, but it’s a solid contribution to the field.

The theory’s accuracy is also disproportionately higher for big financial wins than for small ones. I bring this up because some people look at exceptions like the iPhone, Tesla and Ralph Lauren and fret that the models somehow miss blockbusters. This too is a question of fact, not opinion, and one that’s received considerable analysis. The bigger a win, the greater the odds current Disruption-based models will catch it. I just used examples like the iPhone and Tesla because they’re well known.

As if that weren’t enough, Disruption Theory has also proven highly replicable. It’s rules-based, not a fuzzy art form. More than 1,000 corporate managers and students at schools including Harvard and MIT have been tested both before and after specific training in Disruption Theory (over 8,000 observations). When asked to make blind predictions about the survival or failure of real (but disguised) businesses, test subjects with no training averaged 35 percent accuracy, whereas after being trained their average accuracy rose to 65 percent. This demonstrated that anyone following certain Disruption-based rules can achieve similar results – a hallmark of good science.
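
For a sense of how decisive a jump like that is, here’s a minimal sketch using a standard two-proportion z-test, with hypothetical counts assuming an even pre/post split of the ~8,000 observations (the actual study design may have differed):

```python
# A minimal sketch with hypothetical counts, assuming an even
# pre/post split of ~8,000 observations; the actual study design
# may have differed.
from statsmodels.stats.proportion import proportions_ztest

correct = [1400, 2600]  # hypothetical: 35% of 4,000 vs. 65% of 4,000
totals = [4000, 4000]

zstat, pvalue = proportions_ztest(correct, totals)
print(f"z = {zstat:.1f}, p = {pvalue:.1e}")  # p is far below 0.05
```

At sample sizes like these, a 30-point jump in accuracy is about as far from chance as results get.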

Final Opinion 

Lepore’s article suggests the word “disruption” is over-hyped to the point of an empty rallying cry. She’s right. My research treats disruption as an extremely narrow, specific term of art, much as Christensen also takes great pains to articulate. Most people throw disruption around loosely, misstating, misunderstanding and misapplying it at the same time. I’d say at least half of the startup pitches I hear claim to be disruptive, but few of them are.

Disruption Theory is like quantum mechanics in that, while anyone can read books about it, it takes a relatively high level of rigor and precision to accurately apply. It’s science, not art. As someone who understands disruption at a quantified level, I heard Lepore’s critique the way I’d probably sound if I read just one book on quantum physics, determined myself to be an expert (which I’m certainly not), and then called it all hogwash.

Yet the article goes further. Entrepreneurs are called “ravenous hyenas,” investors are accused of having no conscience, and innovation is blamed for the Holocaust, Hiroshima, genocide, global warming and both World Wars. That’s a stretch, to say the least. Innovation isn’t monolithic – the word is like “engineering” in that there are many flavors with different impacts on the world. Christensen writes about “sustaining” versus “disruptive” innovation, where sustaining innovation tends to deliver incremental growth, favor powerful incumbents, decrease access for those with fewer means and drive up costs.

In contrast, disruptive innovation tends to create transformational growth, opportunity for underdogs, greater access for the less fortunate and lower costs. This is why, for many, disruptive innovation is a worthy goal. By no means does it inherently negate the conscience, loyalty or character of those who pursue it.

I can’t help but notice another irony. Christensen has written two books arguing colleges and universities are beginning to face signs of disruption from online education, corporate and on-the-job training, and even YouTube (think Khan Academy). For example, the University of Phoenix is now the largest college in the U.S. by enrollment, with over three times as many students as the runner-up (Pennsylvania State).

Christensen says higher education faces a genuine threat – even at incumbent bastions like Harvard, where he and Lepore work. However, Christensen also predicts that incumbents, when faced with disruption, overwhelmingly dismiss it, downplay its encroachment and resort to justifying their industry domination as a moral imperative.

Lepore dismisses Christensen’s arguments about disruption in higher education. Yet rather than challenging the substance of Christensen’s case, she takes a superficial, snarky stab at some of his examples and quickly moves on to another topic. The irony, however, is that by offhandedly dismissing evidence that higher education may be facing serious disruption, Lepore – as part of the incumbency – is doing exactly what Disruption Theory would predict.

This isn’t the first time Christensen’s theory has been challenged, and Lepore is correct to demand more predictive proof from business theories. There’s no shortage of hucksters, and bad business advice isn’t a victimless crime, especially for anyone whose life has been damaged by a business collapse. It’s just a shame that when the article says “disruptive innovation can reliably be seen only after the fact,” it doesn’t seem aware of the relatively quiet, albeit massive, vetting that’s been done. Lepore could be right about Disruption Theory, but the odds are literally over 500,000 times greater that, as a matter of fact, she’s just plain wrong.