I like Tom Peters’ work. His 1997 effort, The Circle Of Innovation, still stands up today as some kind of innovation leadership mindset benchmark. Peters could have justified retiring a decade ago. And in some respects he did. Last year’s 450-page ‘The Excellence Dividend’ in all probability represents a summing-up of all that Peters has written over the course of his career. A lot of it is written in capital letters. Which, I guess, is Tom telling us he has revealed a lot to shout about over the course of his guru-ship-by-wandering-around life. And so we learn that ‘listening’ is the most important leadership trait. And then so is self-knowledge. Reading is leadership requirement #1. And so is saying ‘thank you’. And so is preparation. ‘Putting people first’ is a leader’s main job. Everything in Tom’s world of advice, it seems, is more important than everything else.
Which, ultimately, says to me that Tom has missed something critical from his catalogue of shout-y advice. If everything is important, nothing ends up being important. And that includes having an ability to prioritise one important thing over another. If everything is important, the only sensible way to navigate the jungle involves carrying a contradiction-finding torch and a contradiction-solving machete.
The only way to make meaningful progress in a right-versus-right world is to transcend the contradiction and create higher-level both/and solutions.
If a top-five management guru doesn’t get this, after fifty years looking for excellence, it’s no wonder we have a world full of leaders who, as we’re already starting to see in the early stages of the Covid-19 chaos, find themselves struggling to deal with society’s right-versus-right problems armed only with either/or tools. Here’s hoping that chaos brings with it some rapid both/and learning. Contradiction finding is (and has actually always been) leadership skill #1. It’s just hidden itself very well.
Covid-19 offers a timely reminder that experts aren’t perhaps such a bad thing after all. In a battle between Fake News and a real virus, I know whose side I’m on. That said, it’s also fair to say that we’re already receiving some pretty horrific advice from the so-called experts.
Which might sound like we’ve just come full circle again. We want experts, we don’t want experts. The problem is what kind of expert is it that we need? Classic consultant’s answer: it depends.
It depends on what sort of situation we find ourselves in. If it’s Complicated, then classic T-shaped vertically-educated domain experts are the people we should seek advice from. If it’s Simple, anyone that has been given the relevant recipe will do. On the other hand, if the situation is Complex, we need #-shaped experts (SI ezine, Issue 183). People that are able to combine their deep domain knowledge with a requisite level of horizontal skills. In the case of pandemics, horizontal skills in complex adaptive systems and risk management.
If you’re lucky enough to live in a part of the world that doesn’t have any cases of Covid-19, from a counter-measures perspective, you’re in Simple mode. This means that a range of simple social-distancing measures will likely keep your population safe. Most nations are now past this stage. So simple instructions (which fail to take account of first principles of human behaviour) won’t work any more. The acceleration from Simple to Complex, we now know from China’s experience, happens very rapidly. The transition from Complex to Chaotic, as we’re starting to see in Italy, is almost as rapid. This is the consequence of applying linear thinking in exponential situations.
China acted ‘late’ but, through massive and highly impressive infrastructure initiatives, managed to retrieve the unfolding disaster. South Korea acted earlier and more decisively and now seems to have put in place measures consistent with the prevailing complexity of the situation. Italy, meanwhile, it seems, is still being guided by medics and politicians (and, by implication, economists).
And here’s the problem. Complex situations need #-shaped people, and in the case of Covid-19 there don’t appear to be any. Not in the West at least. If you don’t have these people, your next best bet is Type 2 people like Nassim Taleb and people who’ve been through the Real-World Risk Institute. The second worst situation is to put Type 3, Intellectual-Yet-Idiots in charge. The worst situation is to have these Type 3 people advising the Type 4 politicians and letting them make the decisions based on what looks, in the UK, like some kind of mind-today’s-pennies-pretend-the-future-is-linear economic trade-off calculation.
No, sorry, backtrack a second, because here in the UK, we’ve got Thaler-inspired ‘Nudge’ people also playing in the game. In theory, Nudge is very much about understanding complex systems. Specifically, in this case, the idea of injecting small psychological messages that you hope will create non-linear shifts across the population. Like telling people to wash their hands more often. In theory, this means we do have some Type 2 people in the game. Except they’re not really. Nudge is good. But only when you’re nudging in directions consistent with a clear understanding of fat-tail risk. And that part seems to be missing from the equation. So what we have in place in the UK right now is the actual worst case: half-educated Type 2s (which, I think, makes them Type 3s) listening to Type 3 medics advising Type 4 politicians listening to half-educated Type 2s. I’m sure everything will work out fine.
It is often said that true change only happens in the presence of chaos. So maybe there is some good news in this story. Maybe, in the aftermath of Covid-19, society starts acknowledging the Type 2 people more? And, better yet, starts using them to create more Type 1 people. Fingers crossed… remember to wash for at least twenty seconds.
Every time I look, the catalogue of different types of ‘intelligence’ seems to get longer. Whether it be musical-rhythmic, visual-spatial, verbal-linguistic, logical-mathematical, bodily-kinesthetic, interpersonal, intrapersonal, naturalistic, existential or moral (to take just the expanded list from Howard Gardner, the original source of thinking in this domain), my working hypothesis is that when a human is born, we all carry more or less the same machinery and potential to develop our overall intelligence. Some will tend to bias that potential towards one form of intelligence over others, while others will have a natural proclivity for another. Vive la difference and all that.
Unfortunately, perhaps because, in theory at least, it has been the easiest to measure, the world still seems fixated on IQ as the primary means of quantifying intelligence. One of the consequences of this is that people with a ‘low’ IQ (e.g. Leeds United fans) somehow get stigmatised (e.g. as ‘Leeds United fans’). The net result is that, because of the sort of work I do, I frequently come into contact with ‘high IQ’ individuals. I’d have to say that some of these people might well have a lot of book-smarts, but they’re also, innovation-wise, the dumbest people I’ve ever met. Their focus on logical-mathematical intelligence has come about because they’ve been fooled into thinking this is the best way to get on in (corporate) life.
From an innovation perspective – innovation being the most difficult thing any person might choose to get involved in – IQ certainly has its place. But far more important is a critical mass of many of the other intelligences. Right at the top of the list is Emotional Intelligence, EQ. EQ eats IQ for breakfast, innovation-wise. EQ >>> IQ.
This is a pity because, in the same way the innovation world attracts a lot of IQ, it rarely manages to attract sufficient EQ. The innovation world is, to a large extent, an EQ desert. Which, I imagine, might have something to do with the fact that 98% of innovation attempts end in failure? I’m not sure how I’d set about testing whether the EQ-deficit was significant. But I’m not sure that the answer would be meaningful in any event.
It wouldn’t be meaningful because what’s happening here is that, yet again, we find ourselves having a pointless either/or debate. Albeit a subtle one. There might be an overall EQ shortage amongst innovators, but that doesn’t mean we want more EQ and less IQ. Rather, it means the multiple intelligence contradiction needs to be managed. Or, better yet, transcended. We want IQ and EQ.
From the ‘management’ side of the story, one thing well worth bearing in mind is that when an innovation team is working on a complex problem, and therefore making use of some kind of divergent-convergent iterative process, different stages of that process benefit from different EQ/IQ ratios.
Having a high level of EQ, for example, is really, really important during the initial ‘fuzzy-front-end’ phases of a project, when we’re seeking to explore what, if any, market opportunity there might be. Once we’ve explored and diverged as much as possible to identify options, it then becomes time for IQ to take over. Firstly, to help us work out which of the potential opportunities we’ve identified we’re best able to pursue, and then, secondly, to generate our first swathe of solution clues and ideas.
Finally, when we’ve again ‘diverged til it hurts’, it’s time for an EQ-dominated mindset to come to the fore again so that we can make sense of the ideas that have been generated, looking at combinations and builds that offer the best chance of resonating with the customer we have in mind.
All four of these stages are necessary, but it is the first and last that best enable us to learn quickest how we’re going to deliver the ‘best’ solution to the customer. The innovative solution they couldn’t have described to us if we’d gone to ask them directly.
This is how it works now. Efficiency. Efficiency and the middle-management blob (MMB). Scared, out-of-their-depth people intent on creating knowledge asymmetries that will keep them in power. Asymmetries that allow them to demonstrate improving operating efficiency to those sitting above them, while making the lives of those below steadily worse. Not to mention the poor old customer. Who ends up spending more to get products and services that get progressively worse.
On my usual running route is a chicane in the road. Whenever it rains hard, it’s guaranteed that one of the drains on the first bend gets blocked. Having endured several months’ worth of rain in the last couple of weeks, the drain blocked early on, and has stayed blocked, so that half of the road is underwater. I run past it, to the next bend. About 100m. Around this bend is a team of workmen from the water utility company. They’ve blocked the road to repair what I soon learn is a broken pipe. It’s teabreak time, so I get to chat to three members of the team. Is there any chance you could pop around the corner when you’re done and unblock the drain?
The expression on their faces suggests I’m not the first person to ask this question. ‘We’re not allowed,’ one of them tells me.
‘It’ll only take a minute,’ I say, hopefully, ‘all it needs is one of your long rods’. I nod in the direction of the equipment on their truck. ‘I can do it if you like.’
‘Not allowed,’ the second team member says.
I can understand where he’s coming from. I’m not insured. I tell them that if I injure myself, it won’t be their fault. No dice. I get it, still. ‘How about if I speak to your boss?’
All three look horrified at this point. My immediate thought is that now they’re angry with me because I want to complain about them. I try to reassure them I’m not complaining. To try and demonstrate this, I suggest I speak to the boss to tell him what a good job they’re doing. And could they or I have permission to pop around the corner with a rod to unblock the drain.
The older one shakes his head again, ‘it’s not on the job sheet’.
‘Can’t we add it?’
A laugh this time. ‘If we don’t do what’s on the sheet we’re in trouble. If we do more than is on the sheet we’re in trouble. Boss says no. Computer says no.’
‘How about if we raise a new job-sheet with the drain unblocking on it? That way you get to complete two official jobs.’
A flicker of interest this time. But then another shake of the head, ‘we can’t add new jobs. The boss adds the jobs…’
‘That’s what I’m going to ask him to do…’
Another head shake, ‘he decides the priority. Unblocking drains not high. Lots of effort for not a lot of benefit.’
I want to say, ‘but you’re already here’, but I realise now that I’m fighting the boss’s job-sheet efficiency algorithm. I stand no chance. Time to smile at the three workmen and continue the run. Nothing to see here.
Except the MMB gone mad. Allow the team to add a new job-sheet. Do the job well within target because you’re already 100m away, happy boss. Road unblocked, happy citizens. Workmen allowed to use their initiative, happy workmen. Win-win-win. It’s not rocket science. But, tragically, it is anti-Blob. And if there’s only one rule in life right now, it is this: Blob wins; you lose.
So now we know. Smart motorways aren’t. Smart motorways are the opposite. Smart motorways kill. Thirty-eight people so far.
When the Government decided it was prudent to remove 96% of the safety margins on the UK’s motorways, they were wrong.
Or rather, we now know, what they originally approved was the removal of 88% of the safety margin. And then subsequent bean-counters decided that they’d save even more money if they removed a further two-thirds of the planned Emergency Refuge Areas.
Now, I’m not privy to the calculations that got made in either the 88% or the 96% justifications, but given what I do know about risk, if I had to make a bet I’d say that neither solution was safe.
So, what do we know about humans and risk?
Humans make mistakes. The likelihood of failing to notice a major crossroads while driving is around 0.05%. The likelihood of leaving your indicator light on is about 0.3%. The likelihood of failing to notice another driver’s indicator is about 10%. The likelihood of making a mistake when stressed is 25%. The likelihood of failing to act correctly after 1 minute in an emergency situation is 90%. (https://www.lifetime-reliability.com/cms/tutorials/reliability-engineering/human_error_rate_table_insights/)
If a person is cruising along in the fast-lane of a motorway and a tyre blows out, that gives the driver an immediate complicated problem to solve. It is complicated rather than complex because there is a clear ‘right answer’ to the problem: navigate across to the hard-shoulder as quickly and as safely as you can.
Drivers of the vehicles behind the vehicle with the blown tyre also have a complicated problem to solve. They too have clear right answers to their problem: don’t hit the other car; decelerate; if it’s safe, switch to another lane.
We’re in 10% failure likelihood territory here. As I’ve mentioned before, most traffic accidents occur not because one person makes a mistake – we all make mistakes all the time – but because two or more people make a mistake simultaneously or within a few seconds of one another. Burst tyre problems don’t result in accidents 10% of the time. They might result in accidents 0.1×0.1 of the time. Or 1%. And because everyone is slowing down, the likelihood of an accident causing injury is much lower still.
Remove 88% of the hard-shoulder, and I believe this kind of problem situation flips from complicated to complex. There is no such thing as the ‘right’ answer any more. Instead there are just heuristics: look for signs to let you know how far away the next Emergency Refuge Area is; put on your hazard warning lights; try and navigate across to the inside lanes; try to keep going until you reach the next ERA; if you can’t keep going, stop the vehicle; do not try and get out of the vehicle; phone emergency services to let them know what’s happened; cross your fingers and hope this stretch of smart motorway has radar detection switched on so the red-cross sign will tell drivers to leave the lane you’re stuck in; if the radar isn’t switched on, brace yourself for an (average) 17 minute wait for the emergency services to come and rescue you.
Moreover, it is clear that most drivers don’t know what all of the desirable heuristics are in this new world of smart motorways. So the driver with the blown tyre is highly likely to do something wrong. Now we’re in 90% error-rate territory. Which, if we take the ‘it-takes-two-to-tango’ factor into account and assume the following drivers are in 10% error mode, still means that it’s 0.9×0.1 = 9% likely there’s going to be an accident. The fact everyone has a heuristic saying ‘slow down’ is the only thing preventing a lot more than thirty-eight deaths so far. To calibrate ourselves a little bit more here, what we also all now know is that on one particular stretch of smart motorway, in the five years before the road was converted into a smart motorway there were just 72 near misses. In the five years after, there were 1,485.
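As a sanity check, the back-of-envelope arithmetic above can be run explicitly. The probabilities are the illustrative ones from the error-rate table quoted earlier, not a validated risk model:

```python
# Illustrative back-of-envelope arithmetic only; the probabilities are
# the rough human error rates quoted above, not a validated risk model.
p_tyre_driver_error = 0.10  # driver with the blown tyre, hard-shoulder case
p_following_error = 0.10    # ~10%: failing to notice another driver's signal
p_emergency_error = 0.90    # 90%: acting wrongly in an emergency situation

# With a hard shoulder: an accident needs two near-simultaneous mistakes
p_accident_hard_shoulder = p_tyre_driver_error * p_following_error
print(p_accident_hard_shoulder)  # 0.1 x 0.1 -> about 1%

# Smart motorway: the stranded driver is in emergency-error territory
p_accident_smart = p_emergency_error * p_following_error
print(p_accident_smart)          # 0.9 x 0.1 -> about 9%
```

A nine-fold jump in accident likelihood from the same initiating event, simply because the problem handed to the driver changed category.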
When it comes to safe motorways, turning complicated problems into complex ones is what I think I’d have to say is the precise opposite of smart. Spending billions of pounds to make them less smart then feels rather like adding insult to injury.
The biggest innovation opportunities exist between the echo chambers. But getting others to let you into theirs can be pretty tough. Several times I’ve tried to get papers into economics conferences and journals. I’ve only ever succeeded once. And even then, when I made it to the conference to present, it quickly became apparent that I wasn’t welcome. The main reason for my repeated rejection is that I don’t meet the mathematics quotient. There aren’t enough equations in my papers. Usually there aren’t any.
And that was kind of the point. There is no such thing as a mathematical formula to describe what is going on here:
This kind of statement seems to be another good way to inflame economists. Of course there is, I’ve had them say to me. And, speaking as someone who started his career as a mathematician, I can see where they’re coming from. It’s a pair of tangent curves laid on their side, right?
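To grant the economists their due, each curve taken alone really is easy to write down – a minimal sketch, assuming the classic logistic-function form (the essay never commits to a particular formula):

```python
# Each S-curve alone is trivially 'formula-able' - sketched here as a
# logistic function (a standard modelling choice, assumed for illustration).
import math

def s_curve(t, midpoint, rate=1.0):
    """Logistic S-curve: ~0 well before the midpoint, ~1 well after."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# An incumbent technology saturating, and a successor starting later.
# No formula here describes the gap between the two curves - which is
# precisely where the innovation happens.
incumbent = [s_curve(t, midpoint=0) for t in range(-5, 15)]
successor = [s_curve(t, midpoint=8) for t in range(-5, 15)]
print(round(incumbent[5], 2), round(successor[5], 2))  # -> 0.5 0.0
```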
But, of course, that’s not the point. The point is the space between the two S-curves not the curves themselves. The space between the curves is where innovation happens. It’s where the current rules (‘formulae’) get challenged and replaced with new ones.
Economics is about top-down modelling of measurable patterns.
The real world emerges from innovators whose job is to look at the world bottom-up, to look for gaps between the patterns, and for ways to break the existing patterns.
Innovators know something that economists clearly still don’t. There is no such thing as ‘top-down’ anything in a complex world. If you’re not looking at things bottom-up, you’re getting it wrong. The only uncertainty is how quickly you’ll go wrong. Even the best economic ‘superforecaster’ can expect to have a predictable event horizon measurable in weeks. The more complex and inter-connected the world gets, the shorter this predictable period becomes. For most people, we’re lucky if our predictions of what’s for dinner tonight come good.
Economics is about correlation.
Innovation is about finding needles in haystacks. Needles being causal relationships. Which, again, you only get to find if you’re looking for what’s happening between the patterns.
Economics is about numbers.
Innovation is about putting the numbers to one side and looking for the unquantifiable.
In a world which is all turbulence, when economists try and run things using spreadsheets and ticker-tape they take us all closer and closer to the edge of yet another financial precipice.
In a world which is all turbulence, the innovators are the ones that survive. They do it by having a clear compass, and looking for the weak signals that help propel things in the right direction. They look for vectors. Manage the vectors, innovators know, and the right numbers will arrive of their own accord.
I think the economist echo chamber is only going to listen to this idea after they collapse the global system again. And even then, I’m not certain. They certainly didn’t learn anything after 2008. Or, worse, they learned to look harder at the numbers, to make more numbers. In so doing, they didn’t just ‘kick the can down the road’, they gave the can wheels.
Economics by numbers is taking us ever faster to the brink of chaos and oblivion. After which all we’ll be left with are cockroaches and innovators. Only half of which sounds good to me.
Everyone’s looking for the ‘easy button’. It was the dominant theme of my 2019. Everywhere I went, it’s what people said they wanted. ‘Just make it easy for me.’ ‘If it’s difficult no-one will do it.’ ‘We’re all too busy, we need things to make our job easier.’
I get it. Sometimes, what people were asking for was possible to deliver. But mostly it wasn’t. As far as possible, in those places where it wasn’t, we extricated ourselves from the discussion before there was an opportunity to do any damage. It’s nice to be able to say ‘no’ to a prospective client sometimes. Especially if what they’re asking for looks set to deliver a short-term win that subsequently puts everyone on a long, slow slide down a slippery slope to mindless oblivion.
There’s only one type of situation where an ‘easy button’ has the possibility to be the right answer and that’s here:
Anywhere else on the Complexity Landscape and there is no such thing as easy anything. If there are two or more humans expecting to press the Easy Button, it won’t work. Humans are complex. Or rather, they are if we have a desire to treat them like humans. Easy Buttons are for robots. Or situations where humans need to mimic the behaviour of robots. Like pilots. George, the autopilot, is far safer than the best pilot. Or at least he is, provided everything is happening normally. When everything is normal, the job of the co-pilot is basically to make sure the pilot doesn’t touch anything. In the same way, the job of the pilot is to make sure the co-pilot doesn’t touch anything.
The moment this kind of robotic normal no longer exists, we need to throw the easy button away. When things aren’t normal in the cockpit, we have a complex situation, no matter what the checklists might suggest. The sort of complex the world saw in the 1990s in what turned out to be a spate of Korean airliner crashes. Crashes in which the co-pilots were happier to die than tell their superior that he was making a mistake.
Easy buttons are great. Provided the system we’re designing is going to spend all of its time in the Easy Button zone. Easy Buttons mean maximum efficiency and no need for anyone to think. Sometimes this is the right strategy. Most times, we need to remember that when a situation shifts away from simple, the people involved may have spent so long not thinking that they’ve forgotten how to start again. Which kind of says – to me at least – that there probably aren’t that many Easy Button situations anywhere on the planet any more. Easy, in other words, is anything but.
I’m not enjoying ROI guru Jack Phillips’ latest book, ‘The Value Of Innovation’, but I’m sticking with it. Business books are a bit like conferences for me these days. In that, for the most part, they exist to make me angry. And if I get angry enough, once the first wave of quiet seething is over, it can become a trigger for some kind of insight.
One of the reasons I’m having to stick with the book is because we’re getting lots of ‘can you help me measure this’ type innovation work at the moment. And, in classic, ‘someone somewhere already solved your problem’ fashion, Phillips is purportedly the go-to guy when it comes to ROI. He’s the one that already solved your problem. Except, apart from stating several times that everything is measurable, only very rarely does his book justify the statement with any evidence.
This problem is particularly acute when it comes to intangibles. Phillips admits they are important, but that’s as far as his story goes. In the end, he declares that the very definition of an intangible is something that is ‘unquantifiable’ and therefore something that won’t be included in any ROI calculation. Which sounds like the ultimate cop-out to me. Except, I guess that Phillips’ glib statement will have something to do with the fact that his primary audience is other finance-type people. Who, as far as my experience goes, not only don’t understand intangibles, but actively don’t want to understand intangibles.
As I’m gritting my teeth reading the book, I’m also noticing that Phillips has no comprehension of complexity and complex systems. And, perhaps less surprising, he also has no conception that the level of Innovation Capability of an enterprise also has to have an impact on ROI calculations. I’ve said a little bit about that latter topic in the last two issues of the SI ezine. It’s the complex systems topic that I wanted to explore a bit here.
The straw that broke this camel’s back was a mini-case study concerning Wal-Mart, and Phillips’ praise for the not-so-bright spark who’d calculated the ‘cost’ of every minute a delivery truck was at the unloading bay waiting to be unloaded. I have a feeling this kind of misguided thinking is why so many employees are actively disengaged from their work and trust in corporations is at a historic low. Everybody is on the clock. The drivers get stressed. The truck unloaders get stressed. Job satisfaction takes a downward turn. Employees feel out of control. They feel ‘it’s not fair’ when things go wrong that have nothing to do with them.
So then what happens? When we get treated unfairly, we compensate. We sneak an extra break when the boss isn’t looking. Or we ‘borrow’ some office stationery. Or we add a more expensive meal to the expenses claim. Which, when the illicit activity eventually gets found out, means the bean-counters get angry. And when bean-counters get angry, all they want to do is dream up ever more convoluted beans to count. Which means that, as well as costing out lost minutes at the loading bay, they bring in tools to monitor breaks, put cameras in the stationery cupboard, and put checks in place to reject expense claims with an over-allowance dinner receipt on it. Which, surprise, surprise, now closes the loop and makes employees feel even less trusted and even less in control. All the time this toilet-swirl of distrust is going on, the bean-counters are rubbing their hands with glee because their figures are getting better, because things like trust and control are ‘intangible’ and therefore aren’t on the balance sheet. The bean-counters, in most cases, only even begin to realise there’s a problem when very tangible things like increased sick-rates and staff-turnover start to show up on their dashboards. By which time, sadly, the vicious cycle has turned into a tailspin and the game is just about over.
The simple truth is this. Working out the ‘cost’ of waiting delivery trucks is a simple response to what is in reality a complex problem. Moreover, it is just one of thousands of similarly wrong simple solutions to complex problems.
If a system includes two or more humans, it is complex. And, given that almost every measurement in existence is done by one human on another, almost every ‘measurement problem’ is complex.
And if that is the case, any measurement situation demands new rules of behaviour. None of which will be found in Jack Phillips’ book. First of all, there are no absolutes any more. This means there is little point in quantifying things. Far better instead to be looking out for relative changes, for ‘vectors’ and ‘rules of thumb’. None of which are popular with today’s bean-counters, granted, but that’s a lesson they’re going to have to learn. Most likely the hard way.
Thinking about the Complexity Landscape Model, my current thinking is this. The only sensible place to make quantitative measurements, and use the information to ‘run the business’ is here:
Numbers are for robots communicating with other robots. Robots that have been taught to operate in the ‘simple’ world: ‘do this; keep doing it forever, until I tell you to stop’. Perfect for automated processes where the aim is maximum efficiency. Robots aren’t quite so good at ‘complicated’ situations yet. They will be, because complicated problems are amenable to finding a clear and definitive ‘right’ answer. Right now, however, it is generally going to be a human that makes such decisions and calculations. I used to design jet-engines. That’s a complicated problem, but it is also one in which it is possible to make a clear calculation that tells a designer that if they manage to redesign a component and reduce the weight of the engine by Xkg, the net worth to the Company will be $Y. Not quite so simple, obviously, but once you’re taught to take into account s-curve, ‘law-of-diminishing-return’ and other mechanistic effects, you’re pretty much good to go.
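That kind of complicated-but-calculable trade-off might look something like this. Every number below is invented purely for illustration – real kg-to-$ exchange rates and diminishing-return shapes are programme-specific:

```python
# A hypothetical version of the weight-saving calculation. The exchange
# rate and diminishing-returns factor are invented numbers, not anything
# from a real engine programme.
def net_worth_of_saving(weight_saving_kg, value_per_kg=1000.0,
                        diminishing_returns=0.9):
    """Worth to the company of shaving weight_saving_kg off the engine.

    value_per_kg plays the role of the kg-to-$ 'exchange rate';
    diminishing_returns crudely stands in for the s-curve effects the
    essay mentions (each extra kg saved is worth a little less).
    """
    worth = 0.0
    for kg in range(int(weight_saving_kg)):
        worth += value_per_kg * (diminishing_returns ** kg)
    return worth

print(net_worth_of_saving(5))  # first kg worth $1000, the fifth rather less
```

The point being: the calculation is mechanistic and has one right answer, which is exactly what makes the problem complicated rather than complex.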
I think where I’m going with all this is to try and reach some kind of rule-of-thumb measurement heuristics. The first of which was going to be: ‘if your problem situation is not in the Numerical Measures Zone, quantification is pointless, and will probably lead you astray’.
But then I thought about my old hero, W. Edwards Deming – a man who pretty much lived in the Numerical Measures Zone – and his careful division of the measurement world into ‘common’ and ‘special’ cause situations. I find the distinction has become blurry in a lot of post-Deming continuous improvement initiatives. Which is more than a shame because, if a person mixes the two up, the only result is that they will make things worse. If a system is ‘in control’, the job of the continuous improvement team is to focus on ‘common cause’ measurements if they wish to improve the system. If the system is ‘out of control’, the job is to focus on ‘special cause’ measurements. Somewhat conveniently, this distinction between ‘in’ and ‘out’ of control maps beautifully onto the CLM like this:
So now I can really propose a new heuristic: ‘if your problem situation is not in either of these Numerical Measures Zones, quantification is pointless, and will definitely lead you astray’.
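Deming’s common/special-cause split is exactly what a Shewhart individuals chart computes. A minimal sketch, assuming the standard moving-range sigma estimate (bias constant d2 = 1.128) and 3-sigma limits – conventions from control-chart practice, not from the essay:

```python
# A minimal XmR-chart sketch of the common-cause / special-cause
# distinction. The 1.128 constant and 3-sigma limits are the standard
# individuals-chart conventions (an assumption here, not Deming verbatim).
def special_cause_points(samples):
    """Return indices of points outside the 3-sigma natural process limits.

    Sigma is estimated from the average moving range (d2 = 1.128).
    Points inside the limits are common-cause variation: improve the
    whole system, don't chase individual points. Points outside are
    special-cause signals worth investigating one by one.
    """
    centre = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    upper, lower = centre + 3 * sigma, centre - 3 * sigma
    return [i for i, x in enumerate(samples) if not lower <= x <= upper]

# A stable process with one aberrant reading at index 6:
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 25.0, 10.0, 9.7, 10.3]
print(special_cause_points(readings))  # -> [6]
```

Chase the point at index 6 individually; leave the rest of the system alone until you’re ready to improve it as a whole.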
I think there might be more profound implications to come out of this. But first, I think I need to go test it with a few semi-friendly bean-counters…
First up, there’s risk. Likelihood multiplied by consequence. And sometimes, like for example when the future of huge swathes of humanity is at stake, the size of the consequence means that, no matter how small the likelihood, we ought not to ignore what’s going on.
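The definition fits in one line, and even invented numbers show why a tiny likelihood doesn’t make a risk ignorable:

```python
# Risk = likelihood x consequence. The numbers below are invented purely
# to illustrate the fat-tail point; they are not data from the essay.
def risk(likelihood, consequence):
    return likelihood * consequence

everyday = risk(0.1, 100)        # a common mishap with a small cost
catastrophe = risk(1e-6, 1e12)   # a one-in-a-million civilisational hit
print(everyday, catastrophe)     # the rare event dominates by five orders
```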
Second up, when the problem at hand is a wicked one, the best – some might say ‘only’ – way forward involves managing the unknowns. We identify all the stuff we don’t know about a situation (including, as best we can, the things that are today ‘unknown-unknowns’), work out what to do about it, and then go do it.
It’s been my working assumption for coming up to three decades now that there’s a global climate change problem. It’s been my working assumption for the last two that the climate scientists have been busy managing the unknowns, doing their level best to work out whether the problem is an actual crisis, and what we need to do about it. It’s been my dawning realization for the last few years that this second assumption is poorly founded.
“97% of climate scientists agree.” That’s what we’re all being told. 97% of climate scientists agree that the Earth’s climate is reaching a tipping-point, beyond which the consequences will be catastrophic. And there begins my problem. All of the TRIZ research tells us that smart people run towards contradictions, and that solving contradictions is just about the only way to make meaningful progress. Full-stop. Climate scientists aren’t exempt from this rule. If I want to build a better, more meaningful, prediction about how our climate might change in the coming years I would be well advised to seek out those people – the 3% in this case – that are drawing different conclusions to the ones my model is generating. Look for the differences, understand them, build a better model.
Except that’s not what I see happening with the 97%. Like so many things in life these days, what I see instead is a great big climate-science echo chamber. One that automatically excludes the minority that don’t agree with the consensus view. Papers written by a member of the 3% don’t get listened to, don’t get published in recognized journals and, instead – and even worse – get adopted by the other echo chamber. The ‘climate-denier’ chamber. Which then puts the two echo chambers even further apart. Looked at from outside, all I see is the world’s biggest case of confirmation bias. One that, by its very nature, prevents anyone from making any kind of meaningful assessment of what’s going to happen in the next ten, fifteen, twenty, fifty years.
Ah, but, a voice from the climate-catastrophe echo chamber shouts, look at some of the idiotic comments coming out of the other echo chamber. The idiots that say, ‘look there’s only 412 parts per million of CO2 in the atmosphere, how can such a small proportion make any difference’ (to which my answer is usually, let’s put 412ppm of potassium cyanide in your double-shot latte and see how that grabs you). Not to mention the biggest idiot of all, Donald Trump, who’s pretty much made a game now out of saying something even more ridiculous than he said the last time. I’m still in awe of the one about the ‘tremendous fumes’, ‘Gases are spewing into the atmosphere. You know we have a world, right? So the world is tiny compared to the universe. So tremendous, tremendous amount of fumes and everything.’ This on the subject of wind-turbine manufacture.
Idiots. I get it. But how about the fact that there are the same % of idiots in the other echo chamber? The virtue-signalling vegans, for example, insisting that we all need to move away from a dairy-based diet because cows are massive contributors to CO2 and methane emissions. So massive tracts of land get turned over to almond production. The almond trees need bees to pollinate them, so millions of bees get brought in to do the job (I know, I know, ‘real vegans won’t drink almond milk either because it ‘exploits bees’). But there are too many mock-vegans and so an almond monoculture gets created, and all the bees get sick and start dying out.
Or, how about the climate models that ignore any and all of the difficult issues that might complicate their calculations. Next time you meet a climate scientist, ask them how they incorporate solar activity in their models. Or the fertilization effect that increases as CO2 levels increase. Or how adaptation factors have been built into the models. These are questions I’ve been asking for a few years now, and I’ve not heard a single coherent answer. All I get instead is a curled lip and a dismissive comment along the lines that I sound like the enemy and that, if I know what’s good for me, I’ll shut up.
Yet again, the core problem concerns contradictions. It’s the same blinding flash of the obvious we had twenty years ago with our TrenDNA work on consumer and market trends: it’s not the trends themselves that help us to see the future, but rather the relationships between those trends. And particularly the relationships where one trend conflicts with another. The exact same thing applies when trying to predict climate. You can’t simply extrapolate along a CO2 prediction or a temperature prediction and hope to have any chance of achieving any kind of accuracy, because that’s not how complex systems work. You can’t just look at one attribute. There’s no such thing as a ‘root cause’ in a complex system. CO2 isn’t a root cause of global temperature rise. Neither is methane or NOx or SOx. Or industrialization. Or fossil fuels.
The only meaningful way to model a complex system is bottom-up, from first principles and taking into account ‘every’ aspect of the system. And if that means your computer’s not big enough to make such a calculation, that’s the unknown you’d better start managing… by, no surprise, solving the contradiction.
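The single-attribute trap can be sketched in a few lines. The dynamics below are entirely hypothetical (this is emphatically not a climate model): one trend, x, grows, but a second trend, y, rises in response and pushes back on x. Extrapolating x alone, ignoring the coupling, produces a forecast that is wildly wrong:

```python
def simulate(steps: int):
    """Two coupled trends: x grows, y rises in response and damps x."""
    x, y = 1.0, 0.0
    history = []
    for _ in range(steps):
        x_next = x + 0.10 * x - 0.50 * y   # x's growth, minus y's pushback
        y_next = y + 0.05 * x              # y accumulates in response to x
        x, y = x_next, y_next
        history.append((x, y))
    return history

def extrapolate_x_alone(steps: int):
    """Naive forecast: project x's initial 10% growth, ignore y entirely."""
    x = 1.0
    for _ in range(steps):
        x *= 1.10
    return x

coupled = simulate(30)[-1][0]
naive = extrapolate_x_alone(30)
# The naive forecast massively overshoots the coupled outcome.
print(coupled, naive)
```

Same starting point, same initial growth rate; after thirty steps the single-trend extrapolation and the coupled system aren’t even in the same neighbourhood. It’s the relationship between the trends, not either trend alone, that determines the outcome.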
Time and time again, the human inability to look beyond single-parameter ‘root causes’ ends up creating way more harm than good. My recent favourite involves attempts to rid the world of malaria. A noble ambition. Mosquitoes are more than just a pest – they can be downright dangerous carriers of disease. One of the most innovative ideas to control populations of the bugs has been to release genetically modified male mosquitoes that produce unviable offspring. But unfortunately, a test of this in Brazil appears to have failed, with genes from the mutant mosquitoes now mixing with the native population.
The idea sounded solid. Male Aedes aegypti mosquitoes were genetically engineered to have a dominant lethal gene. When they mated with wild female mozzies, this gene would drastically cut down the number of offspring they produced, and the few that were born would be too weak to survive long. Ultimately, this program should have cut down the population of mosquitoes in an area – up to 85 percent, in some early tests.
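The one-dimensional logic behind the programme is easy to reproduce. Here’s a toy generational model (all numbers hypothetical, and deliberately embodying the programme’s own optimistic assumption that matings with GM males yield no surviving offspring): each generation, released GM males compete with wild males for matings, so the fraction of females producing viable offspring shrinks as the release size grows:

```python
def next_generation(wild: float, gm_males_released: float,
                    growth: float = 1.0) -> float:
    """Wild population next generation, assuming the lethal gene
    works perfectly: only matings with wild males produce survivors."""
    wild_males = wild / 2
    # Fraction of females that happen to mate with a wild male.
    p_wild_mating = wild_males / (wild_males + gm_males_released)
    return wild * growth * p_wild_mating

# One generation: 1000 wild mosquitoes, 2000 GM males released.
print(next_generation(1000, 2000))  # → 200.0 (an 80% cut)

# Keep releasing every generation and the model predicts collapse.
pop = 1000.0
for _ in range(4):
    pop = next_generation(pop, gm_males_released=2000)
print(pop)  # near-zero within a few generations
```

On paper, exactly the sort of 85% suppression the early tests reported. The model’s fatal flaw is the single parameter doing all the work: the assumption that the lethal gene is perfectly lethal, with no gene flow into the wild population, which is precisely the assumption the Brazilian results went on to demolish.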
Unfortunately, that hasn’t been the case. Researchers from Yale University have now examined mosquitoes around the city of Jacobina, Brazil, where the largest test of this technique has taken place over the last few years. Not only did numbers bounce back up in the months after the test, but some of the native bugs, they found, had retained genes from the engineered mosquitoes.
“The claim was that genes from the release strain would not get into the general population because offspring would die,” says Jeffrey Powell, senior author of a study describing the discovery. “That obviously was not what happened.”
Worse still, the genetic experiment now appears to have had the opposite effect and made the mosquitoes even more resilient. The bugs in the area are now made up of three strains mixed together: the original Brazilian locals, plus strains from Cuba and Mexico – the two strains crossed to make the GM insects. This wider gene pool looks set to make the mozzies more robust as a whole.
All this was totally predictable. Or at least it was if you used something other than a one-dimensional prediction model. It’s not the trends, it’s the relationships between the trends that determine the emergent outcomes of a complex system.
So, where does this leave us? Is there a climate emergency? Do we have ‘twelve years to save the planet’?
No-one really knows. Which, on the one hand takes us back to rule one. Risk equals likelihood times consequence. We can’t afford to do nothing. But that then takes us to rule two. Manage the unknowns. Both sides of the climate ‘debate’ need to climb out of their echo chambers, start listening to the contradictions and use them to build better models. Until that happens, not only are we not solving the problem, we’re not getting any closer to understanding what the problem is.
I try not to impose too many rules on myself. Rules bad, heuristics good. One of my main heuristics is ‘when it comes to innovation, avoid crowd-sourcing’. There is almost no correlation between innovation success and use of a particular methodology. The only real – causally linked – relationship we’ve found is that innovation attempts purporting to use crowdsourcing are four times more likely to fail than attempts that don’t. Hence the heuristic. Why make one of the world’s most difficult jobs four times more difficult than it needs to be?
So much for heuristics. I’ve been hit twice by the crowd-sourcing world this week. Admittedly the first was self-inflicted: while performing our annual job of awarding prizes to the worst business books of the year, I had to decide whether Henry Chesbrough’s latest tome, ‘Open Innovation Results’, was the third, second or first worst book of the year (you’ll have to check out the January ezine at the end of this month to find out what I decided).
The second, where the infliction came from elsewhere, was a paper I’d been given to review for the International Journal Of Systematic Innovation. With Chesbrough’s ‘work’, given the amount of damage it has caused over the years, I felt it was okay to be brutal. When it comes to reviewing ‘academic literature’ I figured I needed to wear a different hat.
The whole academic world is subject to blind-review rules, so it’s not appropriate for me to say too much about the paper in question. Needless to say, I very much doubt that the author is likely to be a subscriber to this blog, or to anything that comes out of Systematic Innovation. I can say this with confidence because their paper exhibited zero interest in, or knowledge of, either.
Anyway, that’s my problem, not the author’s. The gist of their paper was a proposal to utilise social media to source inputs that will ‘in the future’ enable the construction of a ‘crowd-knowledge’ database. Already I hate it. Why are there so many academic papers these days about things people want to do rather than about what they’ve done? Answer: because the former involves a lot less blood, sweat and tears than the latter. No matter. Get past it, Darrell.
The basic premise of the paper seemed antithetical to the very foundations of ‘systematic innovation’. The author mentions TRIZ but really only to the extent that it allows them to declare it ‘too difficult’ and that therefore the crowd-knowledge database idea holds open the possibility of an easier alternative. I read on in the hope that the author would convince me this could be so, and that they might deserve a place in the International Journal of Systematic Innovation.
In practice, sadly, it quickly became clear that the proposed model in effect ignored all of the hard work done by the TRIZ community and replaced it with an idea that effectively starts a ‘systematic’ creativity tool from a new blank page. In this sense the paper describes a classic ‘can’t get there from here’ problem. The value of a functioning ‘crowd-knowledge’ database is high, but the likelihood of achieving such a thing using the suggested method is, to all intents and purposes, zero. The author, unfortunately, exhibited a high degree of naivety and confirmation bias: their intent, it was now becoming clear, was to demonstrate that crowd-sourcing is an innately good (and ‘systematic’) route to innovation.
The first major problem with Open Innovation (strangely, barely mentioned in the paper, Chesbrough will no doubt be disappointed to learn) is that it does nothing to help with the crucial issue of garbage-in-garbage-out: if the wrong questions are posed, the wrong answer will be the inevitable result.
Existing crowd-sourcing advocates will no doubt declare that, on this issue at least, the paper is more useful than it actually is. The crowd, they will declare, will tell us what the right problem is. The mistake here is to ignore the realities of the dysfunctions of the crowd-sourcing domain from an innovation perspective. Like so many things today, the crowd-sourcing world is caught in its own echo chamber, and as such finds itself in a vicious cycle from which it is less and less likely to emerge. The paper merely serves to reinforce that vicious cycle. Crowd-sourcing advocates continue to believe that failures occur because not enough members of the crowd are participating yet. All we need to create this new database, the author declares, is to get everyone contributing more ideas. This is exactly the same fallacy as can be seen with Mark Zuckerberg at Facebook: the reason for the company’s problems and its destruction of democracy, in his eyes, is that ‘not enough’ people are connected yet. Seeking more knowledge from the crowd, however, merely adds exponentially more noise, and hence makes it progressively less likely that solutions to the current problems will be found. The haystack of apparent knowledge gets bigger, but the number of needles (i.e. useful insights and solutions), TRIZ tells us, remains largely constant.
This second critical problem (contradiction!) also hazards the creation of the desired database. From a TRIZ perspective, of course, all such problems are solvable, because TRIZ shows us that all contradictions have already been solved by someone, somewhere. Because the author clearly doesn’t appreciate this fact, however, they missed a potentially big opportunity to use TRIZ to identify solutions to the real crowd-sourcing problems.
Lack of knowledge of TRIZ thus becomes the primary cause of failure of the paper.
Now I have a new problem. The author of this crowd-sourcing paper is merely guilty of not knowing about TRIZ and so ended up writing something very naïve. The real problem needs to move up the line. To the editors of the Journal.
I can see that they too have a contradiction. On the one hand, they wish to broaden the scope and audience of the Journal; on the other, the vast majority of ‘new’ authors have no grounding in all of the hard work done by past TRIZ researchers, and hence end up re-inventing the wheel or, as in the case of the offending crowd-sourcing paper, heading in a direction that makes no sense once you know what TRIZ has already done. The contradiction looks something like this:
And here’s the rub. The editors do know TRIZ. They know about contradictions. They know that the innovator’s job is to solve them. That’s the paper I want to referee. The paper describing how the contradiction got solved. That’s the paper, more importantly, I want to read. Because I might actually learn something. Crowd-sourcing might one day have a useful part to play in the (systematic) innovation story, but it will only happen if and when the TRIZ community decide to actually use TRIZ.
TRIZ people not using TRIZ. Hmm. Where have I heard that one before?
Let’s see what happens to my suggested solutions to the editors. If they come back suggesting we solicit more ideas from the crowd or, worse, from the crowdsourcing paper author, I’ll know I’ve lost. Again.