Maybe it’s because I’ve been reading too much Jordan Peterson recently. Or maybe it was re-reading AntiFragile? Whether I need to blame Peterson or Taleb, my Generation-Snowflake radar has been particularly sensitive these last few weeks.
The level of fragility in many of the students I’ve been meeting is reaching frightening proportions. Not helped, I might add, by certain members of the media seemingly bamboozled by a belief that offence taken by a particularly fragile little (Millennial) flower somehow trumps another person’s freedom of speech. Or inconvenient truth.
Anyway, I heard the expression ‘Peak Snowflake’ during my trip to the US earlier this month. The context being, ‘could this political correctness horseshit possibly get any worse?’ Is the level of Millennial fragility going to get even higher than it is right now, or do we still have a way to go?
Answer: by my reckoning, we still have one or two years of nausea ahead.
Peak Nurture was around 2015. Parents, in other words, are starting to get the message that the suffocation of their precious offspring is not a wholly good idea.
But then that isn’t the end of the story. If the kids go to college, they’re going to be exposed to a whole extra level of molly-coddling and the half-baked, delusional ideologies of the liberal-arts intelligentsia. Add a year after graduation for all the nonsense to percolate, and that gives us a Peak Snowflake date coinciding with, most likely, the Class of 2020.
Careful with that axe, Eugene, we still have a way to go.
I’ve turned my library upside down three times now and I still can’t find the book where I read about the Native American belief that we all have 87 problems in life. I can’t find any reference to the idea on the Web either. If I didn’t know better, I might be inclined to believe it’s something I dreamed. In some ways, that’s quite a nice idea. Rather than have ‘failing memory’ as one of my 87 problems.
The full ’87 problems’ idea, meanwhile, is that whenever we solve one of our 87 problems, a new one is sure to appear to restore the requisite number of problems.
Over the years, when I’ve had occasion to mention the idea to others, I’ve noticed two reactions. The first (most common) one is a look of horror, followed by a furrowed brow and then a discussion about how the idea contradicts their life strategy of trying to find ‘happiness’. Here’s the kind of person who seeks to avoid problems by progressively cocooning themselves from the real world. Or, if I accept the Native American logic, it’s the person who, given that we really do always have 87 problems to contend with, seeks to substitute big ones with progressively smaller and smaller ones, until, perhaps, finally, the eventual 87 are each no bigger than the hassle of trying to get the last drop out of the toothpaste tube.
At the other end of the reaction spectrum, then, are the shoulder-shrugging nihilists whose immediate reaction to the 87 problems idea is ‘blimey, what if I solve a problem and receive a worse one in its place? Rather than take the risk, why bother trying to solve any more problems?’
I have to admit, I occasionally have some sympathy with both extremes of the spectrum. Then again, being one of those annoying ‘third-way’ people, I feel that my best strategy is to make sure I’m always working my way through the ‘right’ 87 problems.
Which perhaps makes the issue of establishing what ‘right’ means one of my 87.
While that doesn’t feel like such a bad idea, it also feels a bit abstract.
But then again, I know it ought to have something to do with ‘meaning’. And probably also ‘mastery’ of whatever it is I decide I should knuckle-down and do.
This is the moment when I find myself connecting to the now largely discredited idea of Malcolm Gladwell that it takes 10,000 hours to master anything. I have personal experience of the fallacy of the idea. There being several things that I know I’ve devoted more than 10,000 hours to and still feel like an absolute novice.
Connecting the Native American and Gladwell dots, however, something begins to dawn on me. The reason my 10,000 hours of guitar playing hasn’t resulted in sold-out concerts at Madison Square Garden is that I rarely if ever solve any problems when I’m trying to play. If something gets difficult – there are a couple of tricky licks in Johnny B Goode, for example – it’s too easy to give up and move on to playing something else.
On the other hand, there are other areas of life where I think I’ve achieved something like mastery in a lot less than 10,000 hours. These are the parts of (work, sadly!) life where my strategy has been – as TRIZ tells me – to actively run towards the difficult stuff. These are the areas where, even though I haven’t done my requisite hours, I have solved a requisite number of difficult problems. And by ‘difficult’, sticking with TRIZ, I obviously mean contradictions. And by ‘requisite’, sticking with my elusive Native American aphorism, I probably mean 87.
Mastery, in other words, is what happens when we’ve solved 87 contradictions in a chosen domain.
Which sounds like a pretty good piece of research to do. What were the 87 contradictions Lennon and McCartney worked their way through prior to Please Please Me? And was one of them that damn lick from Johnny B Goode?
If I believe in the Lindy Effect, the most antifragile industry ought to be the oldest industry. There’s probably a lot of truth in that idea. Except there’s a problem: the oldest industry, as far as I can tell, is an industry that still consists for the most part of large numbers of individual, ahem, ‘artisans’.
Artisans are among the few of life’s skin-in-the-game heroes, if I read Nassim Taleb’s work correctly. The other heroes are entrepreneurs.
There’s a problem here, I think. For society to function, there are things that need to be done that go quite some distance beyond the capability of individual artisans and entrepreneurs. For the thirty-plus people attending the AntiFragile get-together in London on Tuesday, for example, their timely arrival was only made possible thanks to the coordination of several thousand employees of Transport For London.
Looked at in that sense, I think there’s a need to re-calibrate the ‘antifragile industry’ question. To some extent, TFL – and other large enterprises – do what they do with thousands of employees who have little skin-in-the-game above and beyond the possibility they might lose their employment if they don’t do the work that’s asked of them. But then, to quote W. Edwards Deming, ‘no-one comes to work to deliberately do a bad job’. Despite the dumb things I sometimes see bosses asking them to do. Maybe a half-decent salary and a desire to serve the customer, when scaled up to include each individual in the organisation, is sufficient to deliver requisite collective skin-in-the-game at the enterprise level? Maybe the individuals who put up with the crap dished down to them from above and still do a great job for their customers are the real heroes in life? Maybe it is this sense of collective responsibility to do the right thing that keeps society on an even keel?
Either way, I think there’s something significant missing from Taleb’s perspective on the heroes and villains of modern life. To divide the world into individual skin-in-the-game heroes and the villainous rest represents a failure to accept the possibility that we don’t live in an either/or world. It is possible – as nearly every large enterprise on the planet demonstrates – to have the best of both worlds. Not every organisation, of course. I completely agree with Taleb’s perspectives on organisations like Monsanto or, my own ‘favourite’, SAP, in both cases collectives of individuals – none of whom come to work to do a bad job – that can very easily be seen to deliver considerable collective harm.
One hopes that organisations like this will prove to be very fragile. 90% of the enterprises on the original Fortune 500 list no longer exist. They turned out to be very fragile indeed.
So what about the large enterprises that do prevail in the long term? Which of them is the most antifragile?
In my opinion, the answer to this question is the aerospace industry. Even though it has only existed for the last hundred years. From the moment the Wright Brothers flew at Kitty Hawk in 1903, getting people into the air safely has demanded large numbers of people working together. And because the industry very quickly learned that when people die in aeroplane crashes that is very bad news, ‘safety’ became the absolute priority. It therefore embarked on a very rigorous journey of building better and better safety protocols. At the same time, I might add, as also constantly innovating. The hundred-year jump from the Wright Brothers’ efforts to an Airbus A380 is quite mind-blowing if you think about it.
The aerospace industry is the most antifragile because it has to solve the safety AND innovation contradictions every day of its existence. And that is only achieved by making sure everyone in the industry learns from anything and everything that ever goes wrong. Every incident is investigated and the findings are shared across the industry to make sure the incident has as little chance of being repeated as possible.
As it happens, I started my career in the aerospace industry. I worked there for fifteen years. When I left to begin working in other domains, it took me a while to realise that not everyone saw the world in the way that had become the norm in aerospace. The cognitive dissonance was one of the things that prompted us to reverse engineer the evolution journey of the industry and to formulate the ‘Resilient Design’ evolution trend pattern:
Tracing back through the evolution of the design methods deployed in the industry, it was possible to identify a number of step-changes in capability. Design method s-curves if you like. That’s what each stage on the trend picture is intended to represent.
The latest stage on the trend – ‘antifragile design’ – is where I think the industry is pretty much at these days. In the 1990s we used to talk a lot about – and design for – ‘Murphy’. In a Design-for-Murphy world you’re forced to accept that customers will occasionally do stupid things, but that when they do such things, the aircraft should still be resilient enough to make sure that everyone gets down onto the ground again in one piece. Nowadays, thanks to scenarios like GermanWings Flight 9525, when a co-pilot decided to commit suicide with 144 passengers and five other crew members on board, the industry has evolved capabilities to ensure it’s a one-off. The outcome for the 150 unfortunate souls on Flight 9525 wasn’t good, but for the rest of us, their story means we can take to the skies safe in the knowledge that the aerospace industry was made stronger as a result.
When I look at – and work with – other industries, one of the first things I look to calibrate myself on is how far along the Resilient Design trend pattern they are (actually, we should probably rename the trend ‘AntiFragile Design’), in order to better understand how to set about innovating with them.
Transport For London, much as it succeeds in getting millions of commuters to their destinations kind of on time most days of the year, is still essentially at the second stage of the trend. As anyone who’s ever tried to get across London following the (transient) arrival of half an inch of snow will attest, there are days when the system is very fragile indeed.
Someone at the AntiFragile meeting earlier this week asked me whether it was possible to use this trend-pattern way of thinking to decide where to invest money. I’ve forgotten the answer I gave at the time, other than remembering it was horribly glib. If I could turn back to the moment of the question again, I think I’d probably answer that I don’t invest in any kinds of stocks or shares because I can’t think of any bank or broker that’s ever reached the third stage of the Resilient Design trend. Also, I don’t know whether it’s possible to invest in an ‘industry’… i.e. I’d quite happily invest in the antifragile aerospace industry, but am somewhat less clear about investing in any individual aerospace company, given the possibility that at any given moment they might have a very fragile management team in charge. I think, if I could ever motivate myself to spend time thinking about stocks and shares, I would very definitely do it by looking at the Resilient Design (AntiFragile Design) level of the enterprises I’m thinking of investing in. Which, thinking about it, is the reason why I don’t invest in anything other than our own business. And the things we occasionally spin out. We know the trend pattern, but we’re still very much in the minority. The vast majority of enterprises on the planet don’t know the pattern and therefore, in my eyes, are all very fragile. Even if they might happen to have a lot of money stashed away in the bank at the moment.
Imagine an archer facing a wall 10 metres away and about to fire lots of arrows at it. The archer is not so accurate and will shoot randomly within a plus or minus 45-degree angle as shown in the figure:
The question is, if X is the point on the wall directly perpendicular to where the archer is standing, when lots and lots of arrows have been fired, what’s the average position along the wall that they will end up?
The answer is, of course, that X marks the average.
Now let’s rotate the archer by 45 degrees. Retaining the same plus or minus 45-degree random accuracy range, what will be the average position on the wall after firing lots and lots of arrows this time?
This calculation is a little bit more difficult unless you can remember some of your school-level trigonometry class work on right-angled triangles and tangents being opposite over adjacent.
Most people’s instincts let them down when trying to answer this question.
What are your instincts telling you right now?
If you had to answer the question, what would you say?
I went to a Twitter-sparked AntiFragile meeting yesterday and Mark Baker (aka the rather famous @guruanaerobic) showed everyone a lovely sequence of YouTube videos of a father talking his sons through the problem. If you’ve ever got 20 minutes to spare, you should watch them (https://mikesmathpage.wordpress.com/2018/04/08/sharing-an-advanced-expected-value-problem-from-nassim-taleb-with-kids/).
It offers an inspiring journey involving a computer programme that allowed the kids to fire millions of random arrows at the wall and see what the average distance from X turns out to be. The main learning being that their (and I think most people’s) instincts are quite badly mis-calibrated.
The answer, in case you’re interested, is infinity.
Most people can’t imagine this could be the case. That’s because most situations we encounter in life are like the first, symmetrical version of the problem. In this version, the arrows all hit the wall and, thanks to the symmetry, are equally likely to end up on one side or the other of the X point. Which then means that the more random arrows we fire, the more likely everything balances out to make X the average. This problem is convergent.
In the second case, however, not only is there an asymmetry, but there is also the possibility that an arrow ends up being fired parallel, or very nearly parallel, to the wall, in which case it never hits the wall, or lands absurdly far down it. This second problem contains a non-linearity. Remote as the extreme possibility might be (it is, after all, right at the limit of the range of randomness of the archer), it is nevertheless a real possibility. The more arrows the archer randomly fires, the more likely this remote possibility comes true.
Anyone who can remember those trigonometry lessons will have a vague recollection that the tangent graph shoots off towards infinity as the angle approaches 90 degrees. And whenever we see this kind of runaway non-linearity – it’s everywhere in the real world (in s-curves, for example) – we know that our human instinct for linearity is no longer a good guide.
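If you’d rather see the mis-calibration for yourself before watching the videos, here’s a minimal simulation sketch of the two archers (my own, not the one from the videos; it assumes the 10-metre wall distance and the plus-or-minus 45-degree spread described above):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000       # number of arrows
D = 10.0            # archer-to-wall distance in metres (from the set-up above)

# Archer 1: aiming perpendicular to the wall, random error of +/- 45 degrees
theta1 = rng.uniform(-np.pi / 4, np.pi / 4, N)
offsets1 = D * np.tan(theta1)        # signed distance along the wall from X

# Archer 2: rotated by 45 degrees, same +/- 45 degree spread, so angles run from 0 to 90 degrees
theta2 = rng.uniform(0.0, np.pi / 2, N)
offsets2 = D * np.tan(theta2)        # near-parallel shots land absurdly far down the wall

print("Archer 1 average offset from X:", offsets1.mean())   # hovers around zero
print("Archer 2 average offset from X:", offsets2.mean())   # dominated by a few extreme shots

# The tell-tale non-linearity: archer 2's running average never settles down
running_avg = np.cumsum(offsets2) / np.arange(1, N + 1)
print("Archer 2 running average after 1e3, 1e4, 1e5, 1e6 arrows:",
      running_avg[[999, 9_999, 99_999, 999_999]])
```

Run it with more and more arrows and archer 1 settles ever closer to X, while archer 2’s ‘average’ just keeps drifting upwards.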
The father-son videos offer up an inspiring illustration of kids re-thinking their instincts. The magic in the videos comes from the way dad gets his sons to hypothesise their answers and then runs bigger and bigger simulations to test them. It’s great learning.
One of the points I tried to make during my 20 minute ‘this is what antifragile means to me’ post-lunch diatribe was that the aerospace industry is the safest industry on the planet because it has learned to be more antifragile than other industries. Accepting the non-linearity of the world, as it dawned on the kids in the video, was an early stage in this journey.
Later on, you begin to realise that running millions of trials to test your non-linear hypotheses is a very expensive business. I once destroyed a jet engine on test. The rate of spend during the failure was around £2M per second in today’s money. Spending like that makes you quickly recognise you have a contradiction – you want to continue to be the safest industry on the planet but you also need to innovate and try new things without spending all your money exploding thousands of expensive engines. The way you solve this contradiction is you work out what the worst case is, then add a big safety margin, and then design for that. You quickly learn that you don’t learn anything from doing millions and millions of ‘average’ things. Averages are pretty much meaningless in complex, non-linear worlds.
An aerospace engineer tasked with working out the second archer problem has retrained their instincts so that they don’t need millions of trials to know the answer is infinity. You only need to run one trial: the extreme one. The worst case in the second archer problem is that the arrow flies parallel to the wall and thus never hits the wall. The extreme X answer is therefore infinity. Calculating the ‘average’ is then done by summing all of the distances from X for each random arrow, then dividing by the number of arrows fired. When the extreme case happens, the number of arrows fired will have been finite, so the average is going to be infinity-divided-by-a-finite-number. Which equals infinity.
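For anyone who wants the textbook version of the same argument (assuming, as in the set-up above, a 10-metre distance to the wall and a rotated shot angle spread uniformly between 0 and 90 degrees), the ‘average’ the arrows are chasing is the expected value of 10 tan θ, and the integral blows up precisely because of the near-parallel shots:

$$E[d] \;=\; \frac{2}{\pi}\int_0^{\pi/2} 10\tan\theta \, d\theta \;=\; \frac{20}{\pi}\Big[-\ln\cos\theta\Big]_0^{\pi/2} \;=\; \infty$$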
Retraining our linearity-assuming brains to acknowledge non-linearity is hard. Even though the kids in the video learned something important, they’d need to see a bunch more examples of non-linearity to really get the point. The point is the journey. Retraining our (first principle-holding) brains to shift away from averages to extremes, however, is somewhat easier. It’s also a good step in the antifragile direction. One smart (extreme antifragile) trial always beats a billion dumb (random/average) ones.
This just in from the research team: most innovation is meaningless. Or, worse, serves to diminish meaning. That’s ‘meaning’ as in the raison d’etre of human lives. Humans being meaning-makers.
Or at least that’s what I thought we were. Now it seems we spend the majority of our innovation time making lives more convenient. Or more superficial.
This is what the high-level summary of the six-month analysis looks like:
The 2×2 matrix plots meaning and innovation. ‘Innovation’ is defined in our usual ‘successful step-change’ terms. The top row of the matrix shows there has been no change in the overall 98% failure rate of innovation attempts. That overall number has barely shifted in all of our analyses over the course of the last eight years.
Now we can break the number down further into successful innovation attempts that were meaningful versus those that were not. The ratio of meaningful-to-not turns out to be 0.6/1.4 (as percentages of all innovation attempts), which means 30% of successful innovation attempts deliver increased meaning, and 70% are either meaning-neutral or diminish meaning. The biggest offenders in this 70% are innovations aimed at increasing the convenience of consumers. The food and beverage sector looks particularly bad. From food delivery apps to eat-on-the-go breakfast drinks, from microwave puddings to easy-peel oranges, here’s an industry that seems to have largely forgotten that the preparation and consumption of food is supposed to be a meaningful act.
The ratio of meaningful to meaningless gets even worse when we look at the bottom row of the matrix, with a shade over 20% of failed innovation attempts seeking to increase meaning and the remaining 80% not. Looking at the ‘meaning’ columns of the matrix reveals that, overall, 79.4% of innovation attempts are meaningless and 20.6% are meaningful.
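For anyone wanting to check the arithmetic, here’s a minimal reconstruction of how the matrix numbers fit together (the 98%, the 30/70 success split and the ‘shade over 20%’ failure split are from the analysis above; the assumption that ‘a shade over 20%’ means roughly 20.4% is mine, made so the column totals land on the quoted 20.6% and 79.4%):

```python
# All numbers expressed as percentages of total innovation attempts
success_rate = 2.0                              # 98% of attempts fail, so 2% succeed
meaningful_successes = success_rate * 0.30      # 0.6% of all attempts
meaningless_successes = success_rate * 0.70     # 1.4% -> the 0.6/1.4 ratio in the top row

failure_rate = 98.0
meaningful_failures = failure_rate * 0.204      # assuming 'a shade over 20%' ~ 20.4% of failures
meaningless_failures = failure_rate - meaningful_failures

print(round(meaningful_successes + meaningful_failures, 1))    # ~20.6% meaningful overall
print(round(meaningless_successes + meaningless_failures, 1))  # ~79.4% meaningless overall
```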
That feels like an awful lot of wasted effort to me.
The ‘meaning’ data comes from the PanSensic ABC-M tool, which we used to analyse consumer feedback on several thousand novel products and services launched in the last two years.
More details will be presented in the May issue of the Systematic Innovation e-zine. Meanwhile, I thought it would be good to plant the ‘meaningless’ seed in people’s minds ahead of time. Smile.
The best part about working with Millennials is their passion to do big things and make a difference. I was working with a number of young teachers this week. Their passion was to innovate in the education sector. The Clay Christensen view of education, if you believe his book Disrupting Class, is that the whole shebang is going to be disrupted in the next 6 years. On the other hand, if you read Class Clowns, it tells a very different story. A story of how many big would-be disruptors have lost billions of dollars in the last decade trying to disrupt education and failing miserably. My take-away from Class Clowns is that the whole system is locked-in.
Tell passionate change-agent Millennial educators the Christensen story and they rub their hands with glee. Tell them the Class Clowns locked-in scenario story and they get quite depressed. How do you make a difference if the system proves to be impossible to change?
Well, one answer is that you keep chipping away at as many of the problems as you can until you find a way through the maze. The other answer – the (Nomad) pragmatist’s answer – is that if you’re an innovator a certain amount of banging-your-head-against-a-brick-wall is a prerequisite, but you should only do it for so long before allowing yourself to wonder whether there might be some other, easier, walls to knock down somewhere else.
Then came my flash-of-the-blinding-obvious moment. This desire to not bang your head against brick walls leads to a significant winner-takes-all bifurcation: innovators sooner or later all migrate to organisations (or industries) that are good at innovation. The converse of which is that organisations (or industries – like education) that aren’t good at innovation become even less likely to be able to innovate in the future, because all the innovators have migrated elsewhere.
Innovation Capability Level 4 enterprises are progressively more likely to evolve to Level 5; Level 1 enterprises are progressively more likely to stay stalled at Level 1. The good get better; the bad get worse…
…until, the bad get so bad they either collapse, and/or, more likely – thanks Clay – eventually get disrupted by the good. And so, per the trend, the winner takes it all.
Which, if I apply this to my Millennial teacher friends, means the best way for them to make the difference they’re so desperate to make is to leave the current education system and go work for one of the enterprises that do innovation well. Then, if they still want to make a difference in the education sector, the only other thing they need to do is choose an innovator that will sooner or later become one of the disruptors Christensen predicted in Disrupting Class. It might take a bit longer to get there, but it’s a strategy that will get there. The shortest path between two points is rarely a straight line in innovation world. Winner takes all…
…until, of course, the natural law of the meta S-curve kicks in. Innovators like innovating. Enterprises need innovation, but they also need operational excellence and the proper execution of the mundane day-to-day business. When the balance between operational excellence and innovation tips in the wrong direction, that’s when the trouble starts again, and a different winner takes all.
When I worked at Rolls-Royce, there was a joke that went something along the lines of, ‘how do you remember whether it was Henry Royce or Charles Rolls who was the engineering genius? Answer: Rolls was the sales and marketing person, that’s why his name comes first’.
It was telling at the time, and I still see it most places I go. The Sales and Marketing people are the ones most likely to earn the big bucks and climb the corporate ladder the fastest. Is this because they’re inherently ‘better’? Or is it because they’ve understood how the business world works far better than the Engineers and Scientists?
If I were to exaggerate what this means, I’d say the Engineers and Scientists are allergic to corporate bullshit and much more interested in the next exciting problem to work on, whereas the Sales and Marketing people know they’re supposed to let everyone know how much they contributed to the last success before taking on their next challenge. To be slightly cynical about what I see, the Sales and Marketing people use their skills to sell and market themselves as well as – if not better than – the products and services they sell and market to customers.
Case in point. I’ve been working with a group of engineers and scientists at a big multi-national for the last few months. Never have I seen a harder-working group of individuals. Nor have I seen one so frustrated at their inability to find any time to do anything other than the urgent day-to-day ‘operational excellence’ work. I’m supposed to help them to innovate. So far I’m not winning. Operational Excellence beats innovation every hour of every day of every week in their world.
One of the engineers, bless him, told me he’d just used some of our innovation tools to help solve a customer problem. Great, I said, how much was that worth to the Company? Silence. I let the silence build. Whoever spoke next was going to have to reveal something important. The silence continued. I wasn’t going to lose this one. Finally, the engineer reluctantly squirmed and offered me an embarrassed-sounding explanation to the effect that improvements in customer satisfaction were intangible and couldn’t be quantified.
Now I had a dilemma. I’d been teaching the group about complex adaptive systems and so we all knew that there was almost no way of establishing the value of any change unless a very carefully configured back-to-back experiment had been conducted. And even then, in true ‘you can never step in the same river twice’ fashion, you still can’t be sure whether your intervention had a quantifiable benefit. This message seemed to have gone in. He was right: there had been no back-to-back experiment in his situation and so customer satisfaction improvements couldn’t be translated into a project value.
But then the other side of the dilemma was the knowledge that the Sales & Marketing people know that the real world might be complex, but the business world is all about sales and marketing. Their view of the world is that it is eminently possible to make a correlation between a percentage point increase in customer satisfaction and how much monetary value that has to the business. Sometimes they even write the method up (‘someone somewhere already solved your problem’, right?). Sometimes you find papers that publish formulas like this:
This is a classic example of what I refer to as ‘crackpot rigour’. From a complex systems perspective it is complete and utter nonsense. From a Sales & Marketing perspective, however, it is ‘evidence’. At first glance, to some, very convincing evidence. Someone did some maths. Moreover, though, it is ‘plausible deniability’. Which means that when anyone challenges you on the fictitious numbers you might create using the formula for the increase in customer satisfaction you just achieved (fictional though that probably is too, no matter what ‘scientific method’ you used to demonstrate the improvement), you simply point them to the reference. Then, assuming they even bother to look, they’ll see something that looks like maths no normal human understands. If they don’t understand it, they figure, neither will their boss. And then things get even better: if they use the reference when their boss challenges them on the numbers, the boss can’t admit that they don’t understand the maths. Rather, they are more likely to be impressed that the people below them are smart enough to understand this kind of gobbledy-gook. And, hey presto, everyone is bought into the numbers.
Suddenly, the ‘intangible’ increase in customer satisfaction the engineer thought couldn’t be quantified, has a very clear number attached to it: every percentage increase in customer satisfaction in our business unit is worth $350,570 in incremental future sales, and hence this project has just delivered $848,652 in tangible value to the Company. Boom.
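Purely to show the arithmetic being performed (the published formula itself isn’t reproduced here; the per-point dollar figure is whatever the Sales & Marketing calculation spat out, and the ~2.42-point satisfaction gain is simply back-calculated from the two numbers above), here’s a sketch of the kind of ‘calculation’ that turns an intangible into a Boom:

```python
# 'Crackpot rigour', reverse-engineered from the two figures quoted above
VALUE_PER_CSAT_POINT = 350_570          # $ of 'incremental future sales' per satisfaction point

claimed_project_value = 848_652         # $ of 'tangible value' attached to the engineer's project
implied_csat_gain = claimed_project_value / VALUE_PER_CSAT_POINT
print(f"Implied customer satisfaction gain: {implied_csat_gain:.2f} points")   # ~2.42

# ...and the forward direction, which is how such numbers get created in the first place
def tangible_value(csat_gain_points: float) -> float:
    """Turn a (possibly fictional) satisfaction gain into a very precise-looking dollar figure."""
    return csat_gain_points * VALUE_PER_CSAT_POINT
```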
If I’m sounding cynical here, I propose it’s exactly the same kind of cynicism that allowed people like Jack Welch to stand up and tell the world that GE ‘saved $9B using Six Sigma’. Or Sergey Brin to say in 2012 that all his Google employees would be driving autonomous vehicles within a year, and that by 2018, we’d all be able to pop down to our local car dealership and buy one. It’s all a lie. But – and here’s the important point – if we do it right it’s a useful lie. If you don’t think you could live with your conscience making up fictitious numbers, remember what’s behind the message when people like Welch and Brin espouse this kind of quantified-fiction nonsense. They know it’s nonsense, but it’s nonsense that tells people in the organisation where they should be heading. What gets measured gets done. Connecting customer satisfaction to revenue is pure fiction, but it’s good fiction. In that it tells everyone in the organisation that improving customer satisfaction is a fundamentally good thing to do.
At the risk of beginning to sound like a broken record, here’s another rant about Nassim Nicholas Taleb’s apparent either/or worldview. In that world, people have either got ‘skin in the game’ or they don’t have skin in the game: they’re either good or bad. In this way journalists, academics, bankers, lawyers and politicians all fall on the bad side of the line and artisans, entrepreneurs and suicide bombers all fall on the other.
To overcome the futility of either/or debates, I thought it might be a good idea to create some kind of a skin-in-the-game landscape. Drawing such a landscape, I figured, might be a way to capture the relative amounts of skin that different professions might have, and the relative amount of skin the people listening to them might have. Here’s the basic idea:
One of the things that I see causing the most push-back to Taleb is the view that all of us have, to some extent, skin in whatever games we choose to play. In that if we do something bad, at work for example, there is the possibility that we lose our job and therefore our source of income. That said, some professions are more likely to sack wayward employees than others. Journalists live on a fairly precarious employment knife-edge these days, whereas few if any bankers or lawyers find themselves out of work as a result of their mistakes. Any sensible skin landscape, therefore, needs to relativise skin in some way. The landscape image attempts to do this via the diagonal ‘Taleb Line’ or ‘Requisite Skin’ line. The idea behind this line is that if I’m advising a client, the amount of skin I possess in my game should be proportional to the amount of skin the client exposes themselves to by following my advice. If my bad advice risks N% of the turnover of the client’s enterprise, then, to have requisite skin in the game, I should risk N% of my annual income as a sign to the client that I believe in what I’m telling them to do. Above this line means the client’s risk is proportionately greater than mine; below the line and my risk is proportionately greater than theirs.
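As a sketch of how a point gets placed relative to the Taleb Line (the rule is exactly the N%-for-N% proportionality described above; the 1%-versus-20% example at the end is invented purely for illustration):

```python
def requisite_skin_position(provider_risk_pct_income: float,
                            recipient_risk_pct_turnover: float) -> str:
    """Classify a provider/recipient pair relative to the diagonal 'Taleb Line'.

    provider_risk_pct_income: % of the advisor's annual income at risk if the advice is bad
    recipient_risk_pct_turnover: % of the client's turnover at risk by following the advice
    """
    if recipient_risk_pct_turnover > provider_risk_pct_income:
        return "above the line: the client's risk is proportionately greater than the advisor's"
    if recipient_risk_pct_turnover < provider_risk_pct_income:
        return "below the line: the advisor's risk is proportionately greater than the client's"
    return "on the Requisite Skin line"

# Invented example: an advisor risking 1% of their income on advice that
# puts 20% of the client's turnover on the line
print(requisite_skin_position(1.0, 20.0))
```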
The figure also allows for Taleb’s acknowledgement of a third kind of person, down in the bottom right hand corner of the landscape, the (relatively rare) individuals with ‘soul in the game’.
The next thing we can do to continue the relativisation journey is to add in a couple of vertical lines. The first one indicates the skin associated with a person’s salary. The second one, set at a much higher level of skin in the provider’s game, is a (perhaps hypothetical) level of skin beyond which no individual or provider organisation could hope to match the skin commitments of the recipient. The thinking here is that there are certain capital-intensive construction or infrastructure projects that can only be afforded by recipients capable of mitigating their risks by means other than those coming from the provider. Of the two lines, the salary line is probably the more important one in terms of hopefully helping to answer the complaints of Taleb detractors.
Now we have defined the landscape, we can start adding a few data points to it. In this next image, in green, I’ve tried to pick out one or two of Taleb’s Intellectual Yet Idiot (IYIs) victims, plus one or two of my own. And in orange some of Taleb’s ‘do have skin in the game’ heroes. Or maybe, in the case of suicide bombers, not so much ‘heroes’ as ‘people with a very high level of soul in their game’ (which is why the problems they’re associated with are so difficult for society to resolve). Finally, for good measure, I thought I’d add Nassim Nicholas Taleb himself (blue dot) onto the picture:
Looking at this populated landscape, I think, reveals a somewhat different picture of who the criminals and heroes in modern society are. Criminals in this sense being the people in the top left corner, furthest away from the Requisite Skin line: the people who have the least skin in the game while the recipients of their spoutings have the most to lose.
Most interesting to me is that this picture places Taleb pretty close to one of his favourite targets, journalists. It’s often said that the people we bear the most animosity towards are those closest to ourselves, and this picture seems to bear that out. Taleb is not far off a journalist himself. He might earn more than most, but once we relativise everything to salary units, he’s pretty much exactly the same. He’s a self-proclaimed ‘flâneur’. The same as the better journalists. Taleb’s only real advantage over them being that he understands better how the world works at the first-principle level, and that he’s had skin in multiple games.
First up, everything is complex. The only thing in doubt is how far up or down the hierarchy of turtles we have to travel before we realise we don’t really understand what’s going on any more.
I used to design jet engines. Jet engines are complex. Not that we liked to admit it because it tends to make passengers and crew nervous. I used to design compressors. They’re complex too, and we didn’t like to admit that to ourselves either. Much of the complexity occurs when we get down to the molecular level and start to think about the behaviour of the air molecules flowing over the compressor blades, especially in the boundary layer region. Especially when the layer is turbulent. With a big enough computer program you can sort of model what the air molecules are doing, but you’re not entirely sure. So what you do is do the best you can, then build in a safety margin. When every designer of every component in the engine does the same and we bolt all the components together (a jet engine is often described as a ‘quarter of a million components flying in close formation’), we then add another safety margin for good measure. Then you can say to a pilot, provided you keep your aircraft within this operating envelope, we guarantee everything will be all right. Flying the aircraft will still be complicated, but it will be guaranteed safe.
A similar thing happens at the other end of the turtle scale thinking about the people tasked with maintaining the engines. The moment we add humans to the picture, we know we’re in complex territory again. Humans are not error-free. Stuff happens outside work that affects what happens in work. The crew just had an argument about overtime with the boss. Everyone’s in a bad mood. But that doesn’t affect the safety of the aircraft because everything’s been designed so that the complexity is confined within a set of very clear protocols. If anything goes wrong the system won’t let you proceed. If any two things go wrong, ditto. And so on.
(By the way things aren’t this sophisticated in other industries. If the driver of a car does something stupid, the safety systems will usually save the day. Most car accidents occur when two or more people aren’t driving with due care and attention – driver A is sending a sneaky text and accidentally drifts out of their lane at the same time Driver B is shouting at the kids in the back of the car and isn’t really looking when they pull back into the same lane after overtaking another car.)
Meanwhile, back in the far safer air, so long as pilots stay within the allowed operating envelope, everything will be okay. Complicated, but okay. Go outside the limits, and now you’re back in the hands of complexity. Sometimes circumstances put you in such situations. If you’re a fighter pilot in a dogfight, for example. Now you can easily find yourself outside the controlled ‘complicated’ envelope and into a world of ‘not quite sure’. Keep veering into this territory and fairly swiftly ‘complex’ turns ‘chaotic’. Reach the chaotic stage and you’re really in trouble. While in complex territory, at least you know the likely dangers, because part of the job of designing and testing the aircraft has been to go and fly in those parts of the envelope to see what happens.
The next important feature is the labels on the axes. They’ve been ‘non-dimensionalised’. Which means that the numbers are relativised to the real world around the aircraft. If the environment changes, the aircraft’s position in the envelope changes. Pilots have to live in the real world, and that’s only possible when we know where we are relative to our surroundings.
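A concrete example of what non-dimensionalising an axis looks like in practice is the Mach number: the same airspeed means something very different depending on the air the aircraft happens to be flying through. A minimal sketch (the 250 m/s and the two temperatures are illustrative numbers, not anything from a real envelope):

```python
import math

def mach_number(true_airspeed_ms: float, ambient_temp_kelvin: float) -> float:
    """True airspeed divided by the local speed of sound: a non-dimensional axis.

    Speed of sound a = sqrt(gamma * R * T) for air, with gamma ~ 1.4 and R ~ 287 J/(kg K).
    """
    gamma, R = 1.4, 287.0
    return true_airspeed_ms / math.sqrt(gamma * R * ambient_temp_kelvin)

# The same 250 m/s sits at a different place in the envelope depending on the environment:
print(mach_number(250.0, 288.0))    # ~0.73 in warm sea-level air (15 C)
print(mach_number(250.0, 216.7))    # ~0.85 in the cold air at typical cruise altitude
```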
There’s another, final, feature of the flight envelope story that is important, and that’s the ‘design point’ or ‘cruise point’. This is the point in the flight envelope to which everything has been optimised. It’s the point where the aircraft will spend most of its time. It’s the point where the plane pretty much flies itself. At the cruise point, not only has the complexity been absorbed, so has the complication. The plane is in ‘obvious’ mode.
Every enterprise seeks to find these ‘obvious’ operating conditions. The primary thrust of things like Six Sigma is to minimise deviation from an optimum ‘design point’. Deviation in the Six Sigma sense is bad. Deviation brings complication. And, worse still, if we deviate too far, we can find ourselves out-of-control in a complex environment. The main difference between aircraft design and most enterprises on the planet is that in an aircraft you have to know where the obvious, complicated, complex and chaotic boundaries are, and not only that, you also have to know where you are relative to the environment that you’re operating in. When I try and quiz enterprise leaders on these issues the inevitable response is they have no idea what I’m talking about. They have no idea what their envelope is. And even less idea of how to non-dimensionalise the envelope to take account of the external environment. If pilots don’t understand these things, they don’t survive for very long. The same applies to business. If managers have no idea where they are, the likelihood of appropriately managing the business is small, and, as the environment gets increasingly turbulent, becomes ever smaller. Especially since, irrespective of our desire to make everything obvious, the reality is turtles-all-the-way complex.
“Never read a book written by a journalist.” – Nassim Nicholas Taleb
Here’s my new piece of confirmation bias: Nassim Nicholas Taleb’s either/or worldview, which seems to be rapidly spreading to all of his legions of followers. In this world, you’re either a no-skin-in-the-game charlatan, or one of the rare cluster of types – entrepreneurs, artisans, etc. – that might be described as skin-in-the-game-Taleb-friends.
The world is only either/or if we lack a modicum of creativity.
When we give ourselves permission to think creatively, we rapidly realise it is always possible to achieve both/and solutions.
Per my last post, there aren’t two types of people in the world, there are four:
There are people who are just idiots (95% of the population), there are the ‘journalists’ subjected to Taleb’s ire, there are his Friends, and – top right – there are people who both have skin in the game and can contextualise the wider world that those with only skin in the game usually can’t see.
Admittedly, this last group is the smallest of the four. But it does exist. And the only reason it’s the smallest, I’m convinced, is because influential flâneurs like Nassim Nicholas Taleb don’t seem to be able to (or want to) recognise that if we’re creative, we can have our cake and eat it too.
We’d all be a lot better off, I think, if instead of making futile either/or attacks, we’d start looking for, encouraging and recognising the people in the top right hand corner of the matrix, the ones that solve the contradictions.