Based on a running total approaching ten million case studies, we know that 98% of all innovation attempts end in failure. Put into context, that represents around $120B per year of wasted resources, globally. One might imagine that organisations would conduct some kind of investigation into why so much money is thrown away, but it seems very few are brave enough to commission the work. Perhaps this is because the leaders of the world’s enterprises might discover an uncomfortable truth: they are at the heart of the problem.
This is not necessarily to blame them for that fact. In almost every case, what allows a person to rise to the top of the hierarchy within their organisation is their ability to demonstrate good Operational Excellence characteristics. Which means people who consistently hit their targets, who did what they were told, who followed the rules, and who made sure everyone working for them followed the rules too.
Organisations need such people. Operational Excellence pays next month’s salaries. But organisations also need people who are able to innovate. And in many ways, the skills required to be a good innovator are 180 degrees opposed to those required for Operational Excellence. The better a person is at the Operational Excellence job, the worse they are likely to be at innovating. Almost no-one who has reached the top of a modern enterprise, therefore, has the first clue what innovation is really all about.
That’s not to say that business leaders won’t proclaim the need for innovation. It is a very brave CEO indeed who will stand up and say, ‘we don’t want innovation’. Markets expect innovation to happen. Innovation is the future. Innovation pays everyone’s salaries (and gives shareholders their dividends) in three years, five years and ten years’ time.
But when a leader who doesn’t understand innovation stands up and asks for innovation, the way those words are heard is, ‘bring me lots of exciting new ideas, but don’t you dare change anything’. And so what most organisations end up with is a depressing internal game. Design Thinking is currently very fashionable, so someone in a position of authority declares that, in order to innovate, everyone needs to be taught Design Thinking. After a couple of years of zero impact (except that some of your best people leave to set up Design Thinking consultancies), leadership looks for another, different, strategy. And so something like Open Innovation becomes the new plaything. Two more years go by, and then that fails too. All the while, the scientists, engineers and designers are left feeling more and more like they’re caught on a merry-go-round that never stops and quite literally never goes anywhere.
So, a few years ago, we thought: if the leaders of large companies don’t want to understand the problem, we will have to understand it for them. We study innovation all the time, and have been doing so for the last 20 years. What we started to do five years ago was examine the causal links between innovations and the tools and methods organisations supposedly use to help spark them. We’ve been fortunate along the way to be able to conduct the research in conjunction with some of our clients and their frustrated Innovation Heads (note: almost everyone inside a large organisation with ‘innovation’ in their job title is a very frustrated person) to try and reveal the underpinning DNA of their innovation success stories. Attempting such a feat is difficult at the best of times; doing it in the complex/chaotic environment that invariably accompanies any kind of discontinuous jump is a particularly hazardous job. A big part of the challenge, therefore, is to somehow parallel-test the negative hypothesis. In this case, that goes something like this:
a) Identify the tools, methods and processes that we think contributed to the success of an innovation project.
b) Identify the tools, methods and processes that were used during innovation projects that subsequently proved to be unsuccessful.
When it is possible to meaningfully answer both questions, the ‘success’-contributing methods turn out to be no more and no less likely to be present in the failures than in the successes. Thus, since 98% of all innovation attempts fail, those that claim to have made use of Agile methods – to choose another currently very fashionable Weapon of Mass Distraction – will also fail 98% of the time.
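To make the arithmetic behind that claim concrete, here is a minimal sketch in Python. The counts are entirely synthetic (10,000 hypothetical projects, 40% of which are assumed to have adopted the fashionable method); the point is simply that when adoption of a method is statistically independent of the outcome, conditioning on ‘we used the method’ leaves the failure rate stuck at the 98% base rate.

```python
# Illustrative sketch only: synthetic counts, not real case-study data.
total_projects = 10_000   # hypothetical pool of innovation attempts
fail_rate = 0.98          # the 98% failure figure quoted above
adoption_rate = 0.40      # assumption: 40% of projects adopted the fashionable method

failures = total_projects * fail_rate
successes = total_projects - failures

# If adoption is independent of outcome, the method shows up equally
# often among failures and successes...
method_failures = failures * adoption_rate
method_successes = successes * adoption_rate

# ...so conditioning on "we used the method" changes nothing.
p_fail_given_method = method_failures / (method_failures + method_successes)
print(f"P(failure | used method) = {p_fail_given_method:.2%}")  # 98.00%
```

In other words, unless a method’s presence genuinely differs between the success and failure populations, claiming it ‘contributed’ is indistinguishable from noise.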
The only vaguely meaningful way to test the efficacy of one method over another would be to conduct some kind of parallel set of innovation-project experiments: one arm using Agile, for example, one using (to pick another at random) Outcome Based Innovation, and one – the ‘control group’ – using no formal support method at all. Such experiments are both expensive and fraught with the difficulties of trying to compare the un-comparable. There are one or two examples, but precious few that would pass any kind of academic scrutiny. That’s one of the big challenges of working in complex environments, of course: it’s never possible to step in the same river twice.
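A rough back-of-the-envelope calculation hints at just how expensive. The sketch below uses the standard two-proportion sample-size formula (Python standard library only) to ask how many projects each arm would need in order to reliably detect a method that doubles the success rate from 2% to 4% – both figures being assumptions chosen purely for illustration.

```python
# Rough sample-size estimate for a two-arm innovation-method experiment.
# All numbers are illustrative assumptions, not measured values.
from statistics import NormalDist

p_control = 0.02        # baseline success rate (the 2% figure)
p_method = 0.04         # assumed success rate if the method genuinely helps
alpha, power = 0.05, 0.80

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_beta = NormalDist().inv_cdf(power)            # ~0.84

# Standard formula for comparing two independent proportions.
variance = p_control * (1 - p_control) + p_method * (1 - p_method)
n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p_method - p_control) ** 2

print(f"projects needed per arm: {n_per_arm:.0f}")  # on the order of 1,100
```

Over a thousand full innovation projects per arm, run in parallel under comparable conditions, is not an experiment any sane budget-holder is going to sign off – which goes a long way to explaining why the evidence base stays so thin.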
The most usual ‘alternative’, historically, is the Jack Welch Six Sigma ‘myth-builder’ strategy. Which basically involves never doing any kind of controlled experiment, and instead telling people that ‘quantifying how much Six Sigma helped is good for your career’. Then, hey presto, and surprise surprise, any piece of moderately successful work done thereafter gets a Six Sigma label attached to it, until Jack is able to announce to the world’s media that the savings amount to ‘$9B’. Every cent of which is utterly fictitious.
Then there’s a final twist of the knife pertaining specifically to the tools and methods that might be brought to bear during the ‘fuzzy-front-end’ period of an innovation project. By the time the project gets somewhere near to delivering success, this fuzzy front end is a distant memory, and, unfortunately for anyone on the method team, participants’ memories tend to be short. After the 99% perspiration, in other words, people tend to have forgotten where the 1% inspiration came from. As far as they’re concerned, it was the perspiration that delivered the success, not the weird bit at the beginning.
Taken together, the overall picture can leave people in the innovation world rather depressed if they’re not careful. Although, if you’re that way inclined, the reaction can also be quite amusing. We tend to be, on occasion, since it turns out to be a good initial filter for sifting the pretend-innovators from the potentially real ones.
The real ones, when they’re exposed to these kinds of results, are much more inclined to ask whether there is anything at all that meaningfully distinguishes the 98% of innovation attempts that fail from the 2% that end up being successful.
Ten million case studies later, it turns out there is.
In the world of tangible tools, here are the things we can safely say show a statistically significant presence in innovation successes and a corresponding absence in innovation attempt failures (a sketch of the kind of presence/absence test this implies follows the list):
– The ability to identify and resolve trade-offs & compromises
– Having a clear meta-level compass heading relating to customer value
– Systems Thinking
– Management of the unknowns trumps management of the Gantt Chart
– Clear understanding of complexity – rapid-learning cycles, ‘first principles’, s-curves, patterns – and need for requisitely fast learning cycles
– Clear understanding of (von Clausewitz) ‘critical mass at the critical point’
– Requisite understanding of the customer/consumer say/do gap and how to deal with it
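For the statistically minded, here is a minimal sketch of what ‘significant presence in successes, absence in failures’ means in practice. The contingency table uses invented counts (a practice observed in 150 of 200 successes versus 500 of 9,800 failures) purely to show the shape of the test; it is not the actual case-study data.

```python
# Illustrative presence/absence test on invented counts, not the real data.
from scipy.stats import fisher_exact

#                   successes   failures
practice_present = [150,         500]
practice_absent  = [ 50,        9300]

odds_ratio, p_value = fisher_exact([practice_present, practice_absent],
                                   alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3g}")
```

The precise test matters less than the discipline of checking both columns: a tool that appears just as often in the failure column tells you nothing.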
Important as these tangible – i.e. very teachable – elements are, they tend to get dwarfed by the intangibles. Which look something like this:
– An abundance of influencing skills
– Strong ability to work together in cross-disciplinary teams
– Persistence/bloody-mindedness/willingness to stick-with-difficult-stuff
– Strong ability to live with continual ‘failure’
– Acknowledgement that ‘ideas’ have zero value
– Ability to design and manage a clear sense of progress across the team
This stuff is much more difficult to teach. Not impossible, but it does require a heap more time than most organisations are prepared to devote to the task. Interventions, these days, are helped by the fact that we’re able to measure a lot of these intangible success-driving elements, including the ones that spill over into the tangible arena.
Repeatable innovation success ultimately means getting all of these tangible and intangible things right, but any thousand-mile journey necessarily begins with the first few baby steps. The best first steps seem to be a) creating a sense of progress, and b) doing that by looking at some of the many trade-offs and compromises that sit at the heart of Operational Excellence and saying to ourselves, ‘we’re not going to make those trade-offs any more’.