Predicting The Future Of A Generation That Has Only Just Started To Hit Their Teens?

(These words formed the initial draft of the first chapter of the GenZ book we contributed to earlier this year – http://www.happen.com/48-hr-book/download-the-48hr-book. Page number limits ultimately meant it didn’t fit… so it’s now here instead.)

 

“At bottom every man knows well enough that he is a unique being, only once on this earth; and by no extraordinary chance will such a marvellously picturesque piece of diversity in unity as he is, ever be put together a second time.”

Friedrich Nietzsche

or

“There are only 40 people in the world, and five of them are hamburgers.”

Don Van Vliet (Captain Beefheart)

How and why does each of us grow up to be a unique individual? Are people’s characters fixed early in life or can they change as adults? How will our collective characters affect the future? What will the stock market be doing in two years’ time? In five? In ten? What products and services will people be buying? What won’t they be buying any more? How can parents best prepare their children for the future? What should they be doing for them? What should they not be doing?

We, all of us, like to know what’s around the corner. The human brain is, in effect, a prediction machine. Albeit one that only tends to look forward a short distance before our powers of deduction fail us. Some purport to do the job better than others. Some even write books about what the future will look like. Sadly, some of the things that emerge from these predictions tend to come back and haunt the authors. Heavier-than-air machines will never fly, there’s a global market for about a dozen computers, no-one will need more than 640K of computer memory. The inability of even the experts closest to their subject to get it right is the stuff of gleeful legend. To the extent that, if anyone approaches us claiming to be able to see into the future, our best course of action is probably to cross the road and get as far away from them as possible.

So why are we now about to do the same thing?

Well, first up, at the very least, we’re strong believers that planning for the future is important. Even if the plan that emerges ends up with a relatively short shelf-life. Secondly, because we’ve been working at this problem for the last 20 years now, we think we’ve learned a few things about the way things evolve that allow us to do a better job of mapping the future than anyone else out there. Not that that is necessarily saying very much. If the finest minds on the planet can’t do it, what chance have we got?

Actually quite a big one. It’s a chance that starts from an idea we all carry to some extent: just because we can’t predict everything about the future, doesn’t mean that we can’t predict anything. For some reason, the world seems to have developed a depressingly black-and-white view of futurology, when in reality it’s a million different shades of grey.

Some aspects of the future are nigh on guaranteed. The number of babies born into the world this year, for example, gives us a pretty good indication of how many primary school-age infants we will have in five years’ time. And a strong set of clues about the extent of government services, the number of doctors and nurses and the amount of food and water we will need for the next 80. But somehow our governments seem to be surprised when these kinds of things pan out the way they do. We might think of them as ‘inevitable surprises’.

Beyond these ‘inevitable’ things then come a bunch of things that are to some degree calculable. Staying with the primary school theme, it’s difficult to know with any kind of certainty precisely how many parents will decide to home-school their precious offspring, but not knowing precisely is not at all the same thing as being unable to make some kind of meaningful estimate based on past patterns of behaviour.

And that’s where the methodology underpinning this book comes into play. There are a host of patterns that we can look back through time and see playing out time and time again. We can also see that there are times when they don’t. Traditionally, that’s when these kinds of prediction stories come to a sticky end. The real trick is knowing why sometimes patterns repeat and sometimes they don’t. That’s the underpinning ‘DNA’ of our research, and the heart of what we’ll reveal about Orkids in this book.

Look back through history – whether it be decades or centuries – and one of the things you can observe is a host of oscillatory patterns. Between left and right wing governments for example. Or between economic boom and bust. Centralisation and decentralisation. Individual freedom and collective responsibility. Gender difference. Or between baby-booms and baby-busts. These oscillations keep occurring so long as no-one tackles the underpinning conflicts and contradictions that create them. An example: the NHS has recently undergone another traumatic re-organisation and shift of power away from ‘managers’ and back to ‘clinicians’. In addition to being traumatic for those involved, it’s a shift that has already been massively expensive in both cash and patient care (or lack thereof) terms. Crucially, too, all it has done is shift the same basic problem from one side of the trade-off back to the other. And because that’s all that’s happened, we can make a fairly safe prediction that at some point in the not too distant future, the pendulum will swing back in the other direction.

Here’s another one. If you’re a parent with young children right now you’re very likely to have them on a pretty short leash. It’s a good idea to know precisely where they are and what they’re doing at any moment in time. If only because all the stories you hear in the media tell you this is what parents are supposed to do. Go back 40 years though and parental attitudes were very different. Some of the members of this team of authors were practically feral when they were kids. If you’d asked those parents where their kids were, they would very likely have shrugged their shoulders and speculated, ‘out playing?’ Which is not to say that those parents loved their kids any less, but simply that what we’re seeing is two ends of the same pendulum. The fundamental contradiction between looking after our children while simultaneously providing them with the skills they need to, later, survive as adults in the big wide world still hasn’t been solved. And probably won’t be for a good long while yet. Which in turn means we can make a fairly safe prediction that parental leashes will start to lengthen again in the future. The only uncertainty here is when?

And, we propose, even that answer is mappable with a fair degree of precision. A precision based on a (painfully gathered) understanding that the ‘pulse rate’ of many things in society is dictated by a generational transfer of behaviours from one generation of parents to the next generation of offspring. The way you were raised by your parents, in other words, will later on affect the manner in which you raise your own.

That’s the first strand of the basic bottom-up ‘DNA’ of the model we use in this book. The way in which you, the individual reader of this paragraph, were raised by your actual parents will influence the way you raise, or will raise, your own offspring. Like your parents, you too are unique. Just like everyone else in your group of friends and peers. And that’s the next strand of the societal DNA… it’s difficult for any of us to stand out too far from those peers. In no small part because the media keeps reminding us that it can be a pretty lonely place standing too far away from the crowd. Society, in other words, has a way of putting in place correction mechanisms that mean we all tend towards a self-organising average.

The third and final strand of DNA holding this book together is an understanding of complex systems, and specifically the idea of emergent behaviour. What this translates to in practical terms is that there are a whole bunch of random events that occur in the world, some of which will quickly fade into insignificance while others will come to change all of our behaviours. ‘Shit happens’ was the oft used phrase of the 80s, but society’s reaction to whatever shit it might be is very strongly conditioned. And, moreover, is most often conditioned by our generational cohorts. Of which, contrary to the suggestion of Captain Beefheart at the start of this chapter, it turns out thus far that there are only four. Only one of which is a hamburger.

Children, to move on swiftly before we get into an argument we don’t want to start, have been the subject of kidnappings since humans evolved to live in tribes. Thousands of children a year are kidnapped. But if you look back through the last hundred years only two seem to have stuck in the collective memory. Today, it doesn’t matter where you are on the planet, people know the name Madeleine McCann, and the fact that poor little Maddie still hasn’t been found. The other name is Charles Lindbergh Jr. Okay, you may not have heard of him, but we all still remember his father, the first man to fly solo across the Atlantic. Back in 1932, though, it was the kidnapping of Lindbergh’s son that had the global media in the same Maddie-frenzy we see today. The fact that Lindbergh Jr and Madeleine McCann are precisely four generations apart is, we propose, quite significant. Out of all the thousands of kidnappings that take place, these are the ones that hit a moment in time when the world was at its most receptive to media messages reminding parents that the world is a dangerous place, and you need to keep your eyes on your precious little ones at all times.

Here’s another one. September 11, 2001. A day when, no matter where you lived, the world changed. Its four-generations-ago equivalent was the Wall Street Crash of 1929. Again, events can happen at random, but society’s reactions are strongly conditioned by generational effects. Both 9/11 and the Crash – two quite different (random) events on one level – ended up having the same basic trigger effect on societal patterns. 9/11, indeed, proves to be particularly significant as far as this book is concerned. Many things changed ‘post 9/11’, but one in particular was the behaviour of parents. A baby born into this new world was a baby born to parents who now had tangible evidence that the world was a dangerous place. A dangerous place that meant a significant shift in parenting behaviour towards making sure our kids were safe at all times. 9/11 turned out to be a significant generational turning point. And, in a classic case of ‘you reap what you sow’, we’re just about to start experiencing some of the consequences of that shift in parenting behaviour. The oldest of those post-9/11 babies hit their teens this year. And as such – no matter what their suffocating parents might think about it – they begin making their own way in life. Making their own decisions and doing what they want to do rather than what their parents might desire. And that’s precisely why we’re publishing this book now. Sure, we’ve been researching this subject for the last 20 years, and sure too that research will continue for the foreseeable future, but the reason for embarking on a ’48 hour’ book writing blitzkrieg is that moments like this only occur once every four generations.

Now, we don’t know about you, but if anyone comes to us trying to tell us that our Society emerges from a bunch of patterns that somehow keep repeating every four generations, no matter how hard they might argue their case, we’re still unlikely to believe them. That’s why our entire research rationale for the last 14 years has been to try and dis-prove the model. The fact that – no matter where or when in the world we look – we’ve as yet failed to do that means that we’re happy to present some of the things that come out of applying the model. One might say we’re at the stage of believing ‘all models are wrong, but some are useful’. We know ours is at the ‘useful’ stage because we’ve been working with clients from literally all walks of society in just about every region of the world, helping them to design and deliver what we can now rightly claim to be billions of dollars of new revenue and hundreds of millions of dollars of bottom line savings. The model, in other words, has been verified and validated in the only meaningful manner possible: did it tell us something that allowed us to deliver a successful step-change to our clients?

It’s not the job of this book to describe all of the underpinning research to readers. Any who want to delve deeper might like to explore one of our TrenDNA or GenerationDNA texts. The job of the book is rather to reveal clues and insights into a specific emerging generation of what we’ve come to think of as Orkids. In the next chapter we’ll share enough of the model to show readers why and how this generation will be classified as ‘Artists’ and what this means for the next twenty years of their evolution. After that the focus shifts to the construction of a description of likely characteristics of the Orkids and, then, to some of the likely implications, threats and opportunities for parents, teachers, government officials, product designers and marketers.

We realise, almost finally, that there are sceptics out there (hello, Generation X readers!) who wouldn’t believe this stuff even if they’d lived with us for the last 14 years. To them we say, it’s great that they bring that scepticism to bear on the words to come in the rest of the book. We’re not asking anyone to ‘believe’ every word of what we write. What we are asking is that, at the very least, you use our thoughts as provocations, stimulus and some perhaps far-fetched sounding clues to base some of your future scenarios around. Insight, we firmly believe, comes from contradiction. It is, therefore, the places where you find yourself disagreeing most vehemently with our projections, where the greatest innovation opportunities exist.

Finally, by way of a health warning for those carrying an upbeat, glass-half-full view of the world (hello, Generation Y readers!), a lot of what we’re suggesting is likely to occur in the next twenty years isn’t good news. Not for our Orkids or the world they are about to begin exploring for themselves. The next ten years, our model suggests, is likely to see the calm-to-crisis pendulum swing even further into the direction of ‘crisis’. Some people won’t like to read these words. We write them for two reasons. Firstly, in any crisis period there are always winners, and you’re more likely to be one of them if you have your eyes open and know where and how to look for the inevitable opportunities. Second, and more important, knowing that complex systems are emergent, we also know that the crisis isn’t inevitable. Or, if we’re already too late to prevent it from happening, at the very least, we might – collectively – be able to change sufficient small things to create a momentum that mitigates the worst of it. Our Orkids are depending on us.

Big Data Capability Levels

Well, it was always going to happen, but the world of Big Data has recently very likely hit its peak of over-inflated expectations on the Hype Cycle. We know this because every one of the Big Five consulting leviathans has had to create their own special version of ‘social intelligence’, ‘BigInsights’ or ‘Big Decision Analytics’. Large multi-national players get away with these puffed-up offerings largely because they know there’s an enormous market of managers and leaders who a) have been told Big Data is the future, b) don’t really understand what it is or means, and c) know that if they buy a Big Data package from a Big Five player and things go (inevitably) wrong, they can turn around to their bosses, shrug their shoulders and say they did the best they could. This is how the world of over-inflated expectations works.

Matters will right themselves soon enough. Mainly because the market will learn that some types of Big Data Analytics (BDA) are bigger than others.

The best way to start sorting the Big from the Bigger – taking a precedent from other walks of life – is to create some kind of standard or language that allows people to understand what kind of capability exists in a given BDA offering.

The immediate challenge involved in creating any kind of standard in the Big Data world, however, is that there are many different dimensions to consider – does the software have self-learning capabilities? Does it handle input from different senses? Does it segment different types of population? – to take just three relatively simple examples.

All of these and more will need to be incorporated into a mature evaluation methodology one day. Today, though, I think we need a place to start, and for me the best place to make that start is by looking at the ‘engine’ of any BDA system – the algorithms that convert the mass of incoming Data into a (hopefully) meaningful set of outputs.

Already, just looking at the world through this lens, we can see a confusing smoke-and-mirrors sea of different types and levels of capability. Probably because few if any BDA providers would like outsiders to see what is – or, more usually, is not – under the hoods of their Big Data vehicles.

Here, then, by way of trying to de-mystify things a little is a first attempt to try and distinguish between the various different engines on offer:

 

BDA Level 1

The first thing you observe when attempting to blow the obscuring smoke away is that the large majority of Big Data initiatives are based purely on the analysis of numerical input data. These kinds of quantitative algorithms define what I would say is a very clear ‘Level 1’ capability. They include things like Loyalty Cards and market research questionnaires based on Likert Scale responses from respondents. According to our research, somewhere over 80% of BDA programmes are working at this Level. Some more successfully than others. The Tesco ClubCard, for example, was done early enough and well enough that it became the main driver behind Tesco’s success over the last 20 or so years. Today, sadly for Tesco, we start to see the limitations of quant-only Big Data, with the supermarket giant currently hitting the headlines for all the wrong reasons: analysing the numbers will take you so far, but no further, in your attempts to understand what goes on in people’s heads. Better to know how many people bought your anti-dandruff shampoo last month than not to know, but not really that helpful in working out how to change things to sell more next month.
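To make the ‘Level 1’ distinction concrete, a purely quantitative engine is, at heart, just counting. A minimal sketch (the transaction data here is illustrative, not a real loyalty-card feed):

```python
from collections import Counter

def level1_summary(transactions):
    """Purely quantitative ('Level 1') analysis: tally units sold per
    product. It answers 'how many?' but says nothing about 'why?'."""
    counts = Counter()
    for customer, product in transactions:
        counts[product] += 1
    return counts

transactions = [
    ("c1", "anti-dandruff shampoo"),
    ("c2", "anti-dandruff shampoo"),
    ("c3", "conditioner"),
]
print(level1_summary(transactions))
```

The output tells us two people bought the shampoo last month – and nothing whatsoever about what was going on in their heads when they did.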

 

BDA Level 2

The key to defining any kind of capability model is to identify the step-change differences between one system and the next. The most obvious first step-change that we see having happened in the BDA world is the shift from quantitative to qualitative analysis; from numbers to words; from star-ratings to narrative. A Level 1, quantitative analysis allows us to see that a reviewer gave their anti-dandruff shampoo a five-star rating. A Level 2 qualitative analysis allows us to gain a few first clues about why they liked the product. The predominant Level 2 BDA output tool right now is the Word Cloud. Which, in essence, is merely a tool for counting the number of times different words appear in a sample of narrative.

word cloud
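Since a word cloud is merely a word-counting exercise, the whole of a basic Level 2 engine can be sketched in a few lines (the stop-word list here is an illustrative assumption; real tools ship with much longer ones):

```python
from collections import Counter
import re

STOPWORDS = frozenset({"the", "a", "an", "and", "i", "it", "is", "to", "of", "my"})

def word_frequencies(narrative):
    """The counting engine behind a word cloud ('Level 2'): tally how
    often each non-trivial word appears in a sample of free-text
    narrative. Font size in the cloud is proportional to the count."""
    words = re.findall(r"[a-z']+", narrative.lower())
    return Counter(w for w in words if w not in STOPWORDS)

reviews = "Great shampoo. The shampoo cleared my dandruff and the smell is great."
print(word_frequencies(reviews).most_common(3))
```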

BDA Levels 3, 4 and 5

So far so good in terms of mapping step-changes in BDA capability. Beyond Level 2, unfortunately, things get a deal more complicated for a while. The problem here is that BDA is a convergence technology in which multiple different research communities find themselves starting from different places, but, because they’re working on the same basic problem – namely how do we improve the accuracy of an analysis of narrative input – all eventually begin to converge on solution strategies that are ultimately complementary to one another. From where I sit, there seem to be three main step-changes that have variously been identified:

  1. Semantic/’Natural-Language-Processing’
  2. Ambiguated Signifiers
  3. Relativism

Any one on its own will improve the accuracy of a Big Data analysis activity, but because different researchers have started from different places, it’s not possible to say that a Semantic-enabled solution is ‘Level 3’, or that a capability making use of Ambiguated Signifiers is ‘Level 4’. There is no ‘right sequence’, in other words, for implementing the different step-changes. The only meaningful way to describe a given capability as one of the different Levels I propose is to say that a Level 3 BDA solution has implemented one of the three possibilities; a Level 4 solution has implemented two of them; and a Level 5 solution has implemented all three.
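The level-assignment rule itself is simple enough to state as code – it counts how many of the three step-changes are present, rather than caring which ones:

```python
def bda_level(semantic=False, ambiguated=False, relativism=False):
    """Levels 3-5 are defined by HOW MANY of the three step-changes a
    solution has implemented, not WHICH ones - there is no 'right
    sequence'. Zero implemented leaves you at the Level 2 baseline."""
    implemented = sum([semantic, ambiguated, relativism])
    return 2 + implemented

print(bda_level(semantic=True))                     # one step-change: Level 3
print(bda_level(semantic=True, relativism=True))    # any two: Level 4
print(bda_level(True, True, True))                  # all three: Level 5
```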

Here’s a quick guide of the three technologies as they apply in the BDA world:

Semantic/Natural-Language-Processing – algorithms capable of ‘reading’ narrative to the extent that the analysis can extract information relating to the structure of sentences (subject-action-object triads, for example). The main benefit obtained from a semantic-enabled BDA capability is its ability to identify and eliminate false-positives from an analysis. It is very easy, for example, to count the number of times the word ‘cross’ appears in a collection of words; it is a deal harder to work out how many of them relate to someone who is angry versus someone who visited Kings Cross recently versus someone who merely wears one. This is the sort of job a semantic analysis capability will do. If it’s a really good semantic engine it will further be able to identify sentence negations – i.e. recognising that someone who’s ‘cross’ and someone who’s ‘never cross’ are very definitely not saying the same thing.
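By way of a toy illustration of what such a false-positive filter does – a real semantic engine would parse full sentence structure, whereas this sketch just inspects the word immediately before each match:

```python
import re

NEGATORS = {"not", "never", "rarely", "hardly"}

def angry_cross_mentions(text):
    """Toy 'semantic' filter: count uses of 'cross' that plausibly mean
    'angry', discarding false positives like the place name 'Kings Cross'
    and negated uses such as 'never cross'."""
    tokens = re.findall(r"[A-Za-z']+", text)
    hits = 0
    for i, tok in enumerate(tokens):
        if tok.lower() != "cross":
            continue
        if i > 0 and tokens[i - 1] == "Kings":          # place name, not an emotion
            continue
        if i > 0 and tokens[i - 1].lower() in NEGATORS:  # negation flips the meaning
            continue
        hits += 1
    return hits

sample = "I was cross. She is never cross. He went to Kings Cross yesterday."
print(angry_cross_mentions(sample))  # only 1 of the 3 raw matches survives
```

A naive Level 2 word-count would have scored that sample as three angry customers; the filter keeps only the one genuine case.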

Ambiguated Signifiers – this step-change happens on the input side of narrative analysis. It builds on the recognition that when we, for example, ask consumers a question about a product or service, they tend to do one of two things: a) they ‘gift’ us the answer they think we want to hear, or, b) they ‘game’ the analysis by deliberately setting out to confuse it (hello, GenX’ers!). Either way, when we ask consumers questions like, ‘how likely are you to recommend this product to a friend?’ we’re very unlikely to obtain an answer that is reliable in any way. Ambiguated Signifiers are all about questioning techniques that disguise true intent in such a way that a participant no longer knows how to, or has any desire to, gift or game their answers. You can generally spot a BDA provider that has thought about this problem because they’ll tend to ask questions that are either very vague (‘tell me a story about the last time you had dandruff’) or apparently nothing to do with the topic of investigation at all (‘tell me about a situation in which you were embarrassed’).

Relativism – fundamental to the way in which our brain interprets the world is the way we map the relations between things. A person who earns £25K a year will declare themselves to be much happier if their peers all earn £20K than if they all earn £50K, even though in both cases they have exactly the same amount of money in their pocket. The BDA implication of this kind of world model is that we’ll get a much more representative answer from an analysis if we ask questions that start with ‘compared to…’ or ‘describe a time when you were in a situation like…’ A good ‘relativism’ analysis engine recognises that the relationships we build between things are at least as important as the things themselves, and that a meaningful analysis needs to examine both. ‘Two substances and a field’ if you’re familiar with TRIZ.
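A minimal sketch of the relativism idea – the same £25K salary scores very differently against two different peer groups:

```python
from statistics import median

def relative_position(own, peers):
    """Relativism sketch: what matters to the respondent is not the
    absolute figure but how it compares with their peer group - here,
    the ratio of their own value to the peer-group median."""
    return own / median(peers)

# Same 25K salary, two different peer groups:
print(relative_position(25_000, [20_000, 20_000, 21_000]))  # 1.25 - feels well off
print(relative_position(25_000, [50_000, 50_000, 48_000]))  # 0.5  - feels poor
```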

 

BDA Level 6

By the time you’ve reached Level 5, you’ve eliminated almost every one of the BDA providers on the planet. If you try and look beyond this level, you’re basically left with PanSensics. The start point, in fact, for the PanSensic development was the recognition that a lot of what people say has very little to do with their subsequent behavior. Per the J.P. Morgan aphorism, ‘people make decisions for two reasons, good reasons and real reasons’. Making use of ambiguated signifiers is a useful first step towards capturing the behavior-driving ‘real’ reasons, but the real step change in this direction only occurs when the analytics capability becomes specifically focused on capturing what comes from our limbic brain rather than from our rationalising pre-frontal cortex (PFC). Level 6 BDA is thus all about ‘reading between the lines’ of narrative input to listen to what is coming from our limbic brain. Our JupiterMu metaphor-scraping engine represents a good example of what this Level 6 capability is all about: when you ask a consumer what they think about a product, their rationalizing PFC is hard at work trying to disguise what is happening in the limbic brain. Generally speaking the PFC works fast enough to be able to construct a reasonably coherent set of reasons why we do or don’t like something. Our PFC, on the other hand, is not fast enough to massage and re-engineer the metaphors we use, and so a BDA engine that is tuned to extract and analyse metaphor content is much more likely to capture what’s happening in our behavior-driving limbic brain. In effect, the entire PanSensic research programme has been about tapping in to all of the various different ways to capture limbic-brain content.
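The JupiterMu engine itself is proprietary, so by way of a heavily simplified illustration only, a metaphor ‘scraper’ can be sketched as a search for metaphorical cue phrases – the seed list here is an assumption, and a real engine would use a far richer lexicon plus semantic context:

```python
import re

# Illustrative seed list only - a real metaphor-scraping engine would
# work from a far larger lexicon of metaphorical constructions.
METAPHOR_CUES = [
    r"\bdrowning in\b", r"\buphill battle\b",
    r"\bat a crossroads\b", r"\btreading water\b",
]

def metaphor_mentions(text):
    """Extract candidate metaphorical phrases from narrative - the idea
    being that metaphors leak limbic-brain content past the
    rationalising pre-frontal cortex."""
    found = []
    for cue in METAPHOR_CUES:
        found.extend(m.group(0) for m in re.finditer(cue, text.lower()))
    return found

answer = "Honestly I'm drowning in choices; picking a shampoo is an uphill battle."
print(metaphor_mentions(answer))  # ['drowning in', 'uphill battle']
```

The rationalising PFC would have told us the shampoo was ‘fine’; the metaphors suggest the respondent is overwhelmed.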

 

BDA Level 7

Capturing what’s happening in our limbic brain is as good as it gets as far as being able to predict how people will behave. It doesn’t, however, represent the end of the journey as far as BDA capability-building is concerned. The ultimate job BDA is there to do is to know what people will do. The key word at Level 7 being ‘will’. As in ‘in the future’. A Level 7 BDA capability – also now a key part of PanSensics thanks to our TRIZ/SI roots – is not just about analysing what’s happened, but about being able to extract insights into what people will do in the future. It is, in other words, about prescience. From the TRIZ perspective, the key to mapping the future involves finding and then resolving conflicts and contradictions. BDA Level 7, then, is about building in the capabilities to do this job. It’s about uncovering and interpreting the logical (and illogical) inconsistencies that people express when they’re telling a story, and moreover, doing it in a way that allows conflict-solving solutions and strategies to be designed. We’re in the process of testing how best to present this kind of Level 7 insight with some of the dashboards we’re building for clients. One way of doing it has involved the creation of ‘hazard warning lights’ that illuminate when a new opportunity or threat arises as a result of the emergence of some form of conflict.
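As a heavily simplified sketch of the inconsistency-spotting idea – assuming an upstream semantic stage has already reduced the narrative to verb-topic pairs, and using an illustrative sentiment lexicon:

```python
POSITIVE = {"love", "like", "trust", "enjoy"}
NEGATIVE = {"hate", "avoid", "distrust", "dread"}

def find_conflicts(statements):
    """Level-7-style sketch: flag topics about which a narrative
    expresses BOTH positive and negative sentiment - the kind of
    inconsistency from which conflict-solving strategies get designed."""
    pos, neg = set(), set()
    for verb, topic in statements:
        if verb in POSITIVE:
            pos.add(topic)
        elif verb in NEGATIVE:
            neg.add(topic)
    return sorted(pos & neg)

# (verb, topic) pairs pre-extracted by an upstream semantic stage
statements = [
    ("love", "brand"), ("dread", "brand"),
    ("like", "price"), ("trust", "staff"),
]
print(find_conflicts(statements))  # ['brand'] - a 'hazard warning light' candidate
```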

 

More on that topic, no doubt, on this blogsite and in the SI e-zine in the not too distant future. Ditto the findings of our current ‘BDA Level 8’ step-change activities. For the moment, though, I suggest that we have at least the bones of a transferable standard by which to assess any given BDA offering. Before you write that big cheque to analyse the petabytes of data sat on your company servers, you might like to think about what level of accuracy and insight you might be looking to achieve from it. Here’s a crude starter for ten:

bda capability

Aphorism Hierarchies?

I love aphorisms. I love them because they’re like little nuggets of truth. I also love them because, no matter what the situation you find yourself in might be, you can always find something to suit the mood. One of the reasons that’s possible is that, if you look a little deeper, for every aphorism saying one thing you can always find another saying the polar opposite. Aphorisms, in other words, contradict one another.

And if there’s anything I love more than aphorisms it’s contradictions. Contradictions lie at the heart of innovation. Innovation is pretty much all about solving contradictions: finding two conflicting truths and uncovering a higher level truth that allows both the original truths to hold true. So sayeth Hegel in his thesis-antithesis-synthesis model, back at the beginning of the 19th Century.

Which all goes to show there is nothing new under the sun. One of my favourite aphorisms.

Here’s another one: ‘you can never step in the same river twice’.

Taken together, I think they make an elegant example of a contradictory pair: one says that everything is new, the other says nothing is. So how can they both be true? And if they are representative of a thesis-antithesis pair, what might the synthesis look like?

One of the most effective ways to try and solve this kind of puzzle (he would say this wouldn’t he?) is to construct one of the Systematic Innovation ‘Contradiction Maps’. Here’s what I think it might look like for this pair of aphorisms:

metaphorism 1

Looking at this picture as a whole, per its intended function, presents us with a variety of different ways to try and either solve the contradiction or challenge the underlying assumptions that connect each of the bubbles. We get a good clue from looking at the physical contradiction ‘look to the past and don’t look to the past’. We get another one by thinking about the left-hand side of the Map and what ‘successful outcome’ might mean. What’s the successful outcome we’re looking for that would come if both the aphorisms are true?

I think the answer to that question has something to do with learning from the past (not re-inventing the wheel), but simultaneously being able to apply it to solve a problem in a future that will inevitably be unique.

Taking this idea a step further, I think an aphorism that successfully synthesizes a solution to this past/future thesis-antithesis pair is Sir Isaac Newton’s saying, ‘If I have seen further than others, it is by standing upon the shoulders of giants.’ In other words, looking to the past is about working out which giant’s shoulders to go and stand on, and not looking to the past is about standing on those shoulders and looking into the (unknown) future.

In other words again, I would say that Newton’s aphorism sits at a higher level to the other two:

metaphorism2

Which possibly makes it some kind of ‘meta-aphorism’?

All of which makes me wonder whether it might be possible to repeat such a thesis-antithesis-synthesis trick for every pair of contradicting aphorisms. That’s a piece of research I’d love to see. What intrigues me most about this is whether – per this example – we end up with a meta-aphorism pyramid, on the top of which sits the ‘ultimate’ aphorism, the one that explains life, the universe and everything. Or whether we end up, like a benzene ring, working our way in a gentle arc back to where we started. Do all the aphorisms of the world form a hierarchy or a circle? I think we need to know.

Metaphorical Trapdoors

As the post-match analysis of the Scottish Referendum gets into gear, many commentators have already made mention of the likely significance of the barnstorming speech of Gordon Brown given on the day before the voting took place. By any account it was a beautifully constructed and passionately delivered call to arms for the Better Together and ‘undecided’ voters.

From my perspective, it struck exactly the right chords in terms of the heart of the No campaign argument. Our analysis (see previous blog post) to try and establish the core virtuous loops of the argument for ‘Better Together’ suggested there were two things that the No side of the debate would do well to focus on: first, the 3+1>4 synergy effect of being part of the Union, and second, playing on the economic doubts associated with Scottish debt.

Read the speech (http://www.buzzfeed.com/jimwaterson/gordon-brown-delivered-a-passionate-speech-against-independe#zzcu49) and you can quickly see Mr Brown hit the nail squarely on the head on both counts.

Here’s what he had to say on the synergy part of the story:

“So let us tell people of what we have done together.

Tell them that we fought and won a war against fascism together.

Tell them there is no war cemetery in Europe where Scots, English, Welsh and Northern Irish troops do not lie side-by-side. We fought together, suffered together, sacrificed together, mourned together and then celebrated together.

“And tell them that we not only won a war together – we built a peace together, we created the NHS together, we built a welfare state together. We did all this without sacrificing within the union our identity, our culture, our tradition as Scots. Our Scottishness is not weaker, but stronger as a result.”

Not only that, when we analyse the entire speech with our PanSensic tools, especially when we compare what Mr Brown said relative to what Alex Salmond was saying in his run up to the start of the vote, we see that the ‘yes, and..’ tone indicative of the synergistic builder was the dominant one across the whole speech:

gordon brown 1

Notice too how, beyond the expected emphasis on the ‘uniter’ tone, Mr Salmond was far less focused overall in his output.

We can see a similar focus characteristic when we look at the Brown and Salmond words through another PanSensic lens, this time looking at emotional archetypes. As we might expect, both featured the ‘warrior/lover’ archetype strongly. But while Brown’s speech was almost exclusively speaking from this archetype, Salmond may well have blurred his position by also speaking from a ‘Monarch’ stance:

gordon brown 2

High ‘Warrior’ archetype scores are closely connected to authority figures positioning themselves as the ‘right’ person to tackle issues of fear. Playing the fear card is always a tricky business if you don’t get the message exactly in tune with the intended audience. Again, I think Mr Brown got it just right, both from the overall tone perspective and through his ‘economic trapdoor’ metaphor and the sentence, ‘seven deadly risks pushing us through an economic trapdoor from which there is no escape’.

Here, I think, is what will come to serve as a classic ‘word that speaks a thousand pictures’ situation. Not only was it massively subtle in its construction, it was also evocative enough that all the media picked up on it, and, as a result, it planted an unforgettably potent mental image in the mind of anyone who heard it.

And that’s perhaps the ultimate brilliance of the speech. It’s also very likely the lesson we might all take away from this story: understand what the key issues are and then connect them to a powerful visual image that supports your argument. Over the course of what will very likely go down in history as the most striking 13 minutes of his political life, Gordon Brown did exactly that for both of the core issues. Consciously or otherwise, we all went to bed on Wednesday night thinking of trapdoors and British soldiers in war cemeteries.

Scottish Referendum: Perceptions & PanSensics

scotland piechart

The smartest move any English person can make this week would most likely involve staying as far away as possible from discussing or expressing opinions about the Scottish independence referendum. On the other hand, the ongoing debate represents a rare opportunity to examine and analyse two opposing sets of perspectives to see what they might reveal.

Why, for example, is such a large percentage of the voters still apparently undecided three days before the referendum?

Is there a way to get beneath the noise of the ‘yes’ and ‘no’ campaigns to see what the core issues are?

And, if there are ‘core issues’, what might the politicians best say during these last days of electioneering to get the undecided to land their vote on their side of the debate? Or rather, what should the impartial onlooker be listening out for in the next three days?

In any kind of complex situation like the one in Scotland right now, we tend to use our Perception Mapping process to try and make sense of the things that a group of people are saying about a situation.  This time around, however, we can draw up a pair of Maps – one attempting to make sense of the Yes campaign and one making sense of the No.

In the Yes campaign, a scrape of the various media commenting on the debate revealed 19 main reasons why Scots should vote ‘Yes’ to independence. Having compiled this list we conducted our usual ‘leads to’ analysis to try and make sense of the relationships between each of the reasons in order to establish what the main issues were. Here’s what the resulting Perception Map looked like:

scotland yes

If anyone is interested in looking at the full analysis, we’ll be featuring it in the September issue of the Systematic Innovation ezine, to be published after the dust has settled on the Referendum. For now, focusing on the key points of the Map, the Yes campaign pretty much distils down to two core issues:

  1. A hope that independence will create a virtuous cycle of fairer society, better wage equality and increased importance of the family.
  2. A hope that independence will create a virtuous cycle of job creation, leading to greater individual benefits, fresh ideas and hence even more job creation.

‘Yes’ advocates would do well to convince voters of the validity of these two assumptions. Conversely, ‘No’ advocates would do well to try and show that they are not valid assumptions. Impartial outsiders might like to watch for evidence of either thing happening in the coming days. If the race is as close as current polls are suggesting, the right words in either of these two directions might just be sufficient to sway the result.

So, what about the No campaign? An equivalent search for reasons to say No was performed across the media, this time revealing a slightly shorter list of perceptions. Here’s what the final Perception Map looks like when we map the relationship between them:

scotland no

Again we end up with two core independent issues:

  1. A downward spiral concerning doubts about the future, expressed both individually and collectively, and driven by the fact that Scotland is currently financially dependent on the UK.
  2. A hope that by being part of the Union Scotland opens up the opportunity for greater cross-border unity and synergies coming from the Government in Westminster and their ability to look at the UK from a big picture ‘whole’ perspective.

‘No’ advocates would thus do well to play up the negative uncertainty issue and the positive synergy opportunity. And, conversely again, the ‘Yes’ campaign should be looking to allay the doubt concern and offer up arguments against the synergy opportunity. Impartial outsiders might like to keep a look-out for signs of either set of arguments to again see how the ‘undecided’ might be influenced.
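For readers who like to see the mechanics, the ‘leads to’ step of Perception Mapping can be sketched in a few lines of code. This assumes, as described above, that each perception is linked to the one other perception it ‘leads to first’; the core issues then fall out as the loops in that graph. The perception labels below are paraphrased for illustration, not taken from the actual analysis:

```python
# Illustrative sketch of the 'leads to' analysis: each perception points at
# the one other perception it 'leads to first', and the core issues are the
# loops (virtuous or vicious cycles) in the resulting graph.

def find_loops(leads_to):
    loops, visited = [], set()
    for start in leads_to:
        path, node = [], start
        while node not in visited and node in leads_to:
            visited.add(node)
            path.append(node)
            node = leads_to[node]
        if node in path:  # we walked back onto our own path: a loop
            loops.append(path[path.index(node):])
    return loops

# Paraphrased 'No' campaign perceptions (not the real data):
no_campaign = {
    "financial dependence on UK": "individual doubt",
    "individual doubt": "collective doubt",
    "collective doubt": "financial dependence on UK",  # the downward spiral
    "cross-border unity": "Westminster synergies",
    "Westminster synergies": "cross-border unity",     # the synergy loop
}

for loop in find_loops(no_campaign):
    print(" -> ".join(loop))
```

Run on this toy data, the routine surfaces exactly the two independent loops described above: a three-perception doubt spiral and a two-perception synergy cycle.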

So much for analysing what people are saying. Perhaps the bigger issue at this point is whether there really are over half a million Scots who haven’t made their minds up yet. This is obviously a much more difficult question to answer, which makes it a perfect challenge for our PanSensic toolkit. It’s another topic requiring a more detailed discussion than is appropriate here. That said, we’ve done a lot of scrapes of what Scots are saying about the Referendum in the past couple of weeks to get a flavour of what’s going on ‘between the lines’. Here are the results of a pair of the PanSensic tools that we’ve used to analyse what’s being said on the ‘Yes’ and ‘No’ sides of the debate:

scotland pansensic

We’ll leave those familiar with PanSensics to try and summarise what this picture is trying to tell us. From my perspective, I think the story they’re trying to tell us is that the polls are likely to be quite significantly in error. And that the ‘No’s will have it by a big enough margin for the pollsters to be concerned about the validity of their techniques. Whether that will result in a surge in interest in PanSensics is rather more difficult to predict <grin>.
People Say Things For Three Reasons

Anyone who’s spent any time at all with anyone from the SI team will have fairly rapidly grown sick of hearing us use the J.P.Morgan aphorism, ‘a man makes a decision for two reasons, the good reason and the real reason’. The idea is that it acts as a reminder to always be thinking about both the tangible and intangible factors that lie behind a decision. Or a piece of communication.

A couple of weeks ago we had a timely reminder that there is often a third reason why people say the things they do. A furore erupted in the UK when Government Minister Mark Simmonds announced that he was resigning because, to paraphrase, he wasn’t able to live on the salary his government role attracted. When the public learned that this income, including housing allowance, amounted to £120,000 a year, to say there was a lack of sympathy would be one of the understatements of the year.

simmonds

Now, working on the assumption the Right Honourable Mr Simmonds is a smart guy, one is still left wondering what the real reasons behind his resignation might have been. Some internal politicking with the rest of the Conservative Party, for example, or a desire to protest at an issue he felt strongly about. Something with a ‘pay peanuts, get monkeys’ theme perhaps? He might have a point. But, regarding his ‘good’ reason, citing a poor salary was perhaps not the best choice in the world.

Given the public outcry that followed his announcement, it feels to me like there was a clear third reason why Mr Simmonds explained the resignation in the way that he did. Maybe people say things for three reasons. The good one, the real one, and the half-baked, ‘didn’t think this one through properly’ reason.

What’s for sure is he certainly didn’t try to see how his words might be interpreted by his constituency members or the public at large. Or, if he did, there was a striking flaw in his logic somewhere.

To most people in the UK, £120,000 a year represents an awful lot of money. If Mr Simmonds had had them in mind, and had spent no more than another couple of minutes thinking about things, he might well have chosen to give a different ‘good’ reason for his resignation. Had he said, for example, that he was resigning from the Government because his role was forcing him to spend too much time at work and not enough time with his family, he might well have garnered a whole lot more sympathy. Well done, a lot of parents might well have said, someone who’s prepared to give up his career for the sake of his work/life balance and to spend more quality time with his family.

But no, Mr Simmonds succumbed to third-reason logic. Perhaps he was trying to remind us all that foot-in-mouth disease is an ever-present danger. That there is always the possibility that after we’ve thought through our good and real reasons, there’s still one more reason to go.

 

The First Rule Of TRIZ Club

About a year ago we conducted a study to investigate the impact TRIZ had made inside organisations. To say the results were disappointing was something of an understatement. Even within ‘famous’ TRIZ users like LG and Samsung, evidence that TRIZ was genuinely contributing to the success of either organisation was sparse to say the least. The problem we uncovered bears a lot of similarities to the GE Six Sigma story of the last twenty years: first up, no back-to-back experiments were ever conducted to demonstrate that the benefits purportedly being delivered by Six Sigma wouldn’t have been matched or exceeded by any other toolset or method. Second, and probably more importantly, once CEO Jack Welch had stood up and said that the company had saved $9B through Six Sigma, whether it carried any truth or not, the method began carrying a ‘good for your career’ aura that quickly turned into a self-fulfilling prophecy: anyone anywhere inside the organisation now had a vested interest in attributing any money they saved on any kind of project to their use of the method, ensuring the real truth would never be known.

Something very similar has happened at Samsung. Despite the ever-growing number of employees who have attended workshops, the self-declared statements from the TRIZ team regarding the number of patents they’ve filed suggest TRIZ has contributed very little to the wave of success being experienced by the organisation. We only have to compare the number of patents being attributed to TRIZ to the per-capita patents filed by the company as a whole to see that something doesn’t add up somewhere. According to this kind of comparison, the use of TRIZ would appear to impede the invention process by a factor of around three. Again, it is almost impossible to get to an objective truth, but the view from outside, it has to be said, doesn’t look great.

Anyway, following this disappointing result, we shifted our attention to individual TRIZ practitioners. If there was no evidence that TRIZ was good for organisations, we speculated, was there any to demonstrate that TRIZ proved to be good for a person’s career? We immediately, of course, fell into the same problems as occur at the organisational level, since there have been no – nor could there be any – back-to-back trials comparing a ‘with-TRIZ’ person to an individual who knew no TRIZ or who maybe used another method. We couldn’t even realistically go and ask individuals what they thought TRIZ might or might not have done for their career, since we felt it was a topic area where objective truth would be very difficult to obtain.

What we did instead was conduct an outsider’s look at people we knew of in and around the TRIZ world – conference attendees, TRIZ Journal authors, etc. – and look for evidence of the likely impact of TRIZ on their careers. The big hope was that we would find compelling evidence to indicate that TRIZ was good for an individual. What we found was overwhelmingly the opposite. Here’s how the overall analysis stacked up:

triz career 1.1

In less than 10% of cases could we find evidence that TRIZ had been good for a person’s career. Evidence that TRIZ had had a negative effect was present in well over half of the cases we looked at. Here are a few examples of the sorts of problems we observed:

Exhibit A: a mechanical engineer who has consistently generated a significant number of granted patents for his employer, and yet somehow finds himself frequently having to justify his continued employment at the company. While he has never been made redundant, he has been re-deployed several times over the course of the last decade, each time to a job increasingly peripheral to the company’s core business. There is no evidence that any of his patents have been commercialised.

Exhibit B: led a management-supported initiative to bring TRIZ into the organisation, organised the training of several dozen engineers, and circulated regular TRIZ bulletins and ‘case studies’ across the organisation. Is currently perceived, following a leadership change, as the main instigator behind an ‘obscure cult’ and, despite delivering several successes to the organisation, as a person who ‘does not deliver’.

Exhibit C: a career academic who made the brave move of bringing TRIZ into their engineering department’s curriculum, in the process alienating several domain experts who apparently felt that the ability of TRIZ to transpose solutions from one domain to another somehow threatened their expertise. While there is no evidence of a personal vendetta, the tangible evidence is that the academic in question has still not secured tenure after more than a decade, and finds themselves isolated within the department.

Exhibit D: a chemist, one of a cluster of people trained in TRIZ by an outside consultant. He subsequently gained a reputation within his team for hampering progress on projects by ‘asking awkward questions’. When the business was forced to reduce head-count, he was one of the first people to be made redundant. At his exit interview, he was informed that his prospects of employment elsewhere would be greatly improved if he ‘became a better team player’.

All in all, the picture looked somewhat bleak. Digging a layer deeper into the minority of people for whom TRIZ appeared to have benefited their career, however, something significant emerged:

triz career 2

Nearly 90% of the individuals in this category had learned to keep their TRIZ skills hidden from the view of others. They used TRIZ in their work (and planning their career as far as we can establish anecdotally), but they had quickly learned that ‘using the T-word’ was career-poison and so best not to mention it or any of the surrounding jargon at all.

Again, it is very difficult to gauge whether even this group would have done even better had they devoted the time they spent learning TRIZ to something else. Maybe it was merely their inquisitive nature that has stood them in good stead over the years? But I’m not sure. From where I sit, it feels like TRIZ did play a role in helping them to create a clear compass heading for everything they did, and a confidence to know that whenever bumps in the road appeared, they had a great set of tools to overcome them. All they needed beyond that was a dogged persistence that meant they didn’t just dream up some cool solutions to the right problems, but they executed too. They put in the ‘99% perspiration’ hard-yards, in other words. We probably shouldn’t be too surprised. Maybe the real message here is a story analogous to Fight Club: the first rule of TRIZ Club is don’t talk about TRIZ Club.

The CEO’s Guide To Innovation #1: Should I Innovate?

Status

‘Innovate or die’ has become something of a truism in most realms of human endeavour. The big problem with truisms in the world of innovation is they have a horrible knack of sending organisations running after things that make very little sense. So is it really true? Does every organisation really have to innovate? And if they don’t, how does a leadership team know what they should be doing instead?

 

I’m in the very fortunate position these days of having the opportunity to talk to and work with a broad spectrum of leaders and leadership teams. Here’s a general version of the conversation I’m most likely to have with a CEO, CFO or COO:

 

CEO: I’m glad you’re here. We need your help. We need more innovation. We need an innovation culture.

Darrell: Great that you invited me. Thanks for the opportunity. I’m not sure we’ll be able to help, but I’m very happy to explore options with you. Probably best to start with what you mean when you say ‘innovation’.

CEO: Err. Better ideas I think. We never seem to be able to come up with the next big thing.

Darrell: Really? Why do you need more ideas?

CEO: To create new value. Get the shareholders engaged. You just have to look at our track record to see we haven’t created anything new for years now.

Darrell: Is it lack of ideas though? I almost never go to organisations and find that lack of ideas is the problem. For me innovation is all about successful deployment of ideas. The real challenges seem to be giving people the time to try stuff. To get things wrong. To learn from failure. To explore different options. To do things that challenge the status quo. Break rules.

CEO: Oh.

Darrell: Challenging the prevailing common-sense.

CEO: (shaking head) I’m not sure that’s what I want.

Darrell: So what do you want?

CEO: New ideas. New thinking.

Darrell: Without breaking the rules.

CEO: Exactly.

 

By the end of these conversations things usually boil down to a leadership team that wants change without having to change anything. And certainly not anything they do themselves.  A tough nut to crack, but at the end of the day, it’s merely another contradiction, and any contradiction can be solved. The bigger question at this point, however, is do we need to solve it at all? Does the CEO really need to be putting the organisation through the inevitable trials and tribulations of delivering successful step change?

 

Here’s a simple flow chart designed to help C-Suite leaders to decide if they need innovation or not:

 ceo process chart

I’ve found it’s saved me and the leaders I’m privileged enough to get to talk to a lot of time in the months since I started using it. I’m not always convinced I get the whole truth and nothing but the truth when I hear CEOs walking through the process, but I think I get to know enough to know whether we’ll be seeing any innovation any time soon. In most cases we won’t. 

 

 

Crackpot Rigour #34

One of the reasons I left the academic world was that I got sick of having what I thought were really insightful papers rejected because they didn’t contain enough rigour. Because I look after a team of 20 much-smarter-than-me full-time researchers, this sort of comment used to stick in my craw a little bit. They’d done rigour until they bled; my job was to try and turn their hard work into something interesting. Which in my mind meant getting through to the other side of the complexity, to something that was both simple and meaningful. So, whenever I questioned what sort of rigour referees were looking for, it always seemed to come down to mathematical formulae. The basic correlating relationship is this: the higher the number of convoluted mathematical derivations per page, the higher the likelihood of acceptance, and the smaller the likely readership.

I used to be a mathematician, and I can still get excited by a good piece of mathematical genius, but when I saw this equation for the first time, I knew I had to begin weaning myself off the algebra and back into the real world.

This is the formula that delivered a Nobel Prize in Economics to the pair that came up with it a couple of decades ago.

crackpot 1

Today we know that it is meaningless crap. Nay, it’s worse than that: it’s dangerous meaningless crap. But because it came with a ‘Nobel’ pedigree, and it made for some really great computer models, it in effect generated its own little industry. It’s the very definition of crackpot rigour, i.e. the unquestioning acceptance that sophisticated mathematical models must automatically be correct.

The clue to the problem here is in the word ‘economics’: increasingly a terrific warning sign that what you’re about to hear next is also meaningless crap. Economists serve as the veritable archetypes of crackpot rigour. I blame the numbers. Economists live numbers and love numbers. Their problem is that the vast majority of the important numbers in life are, to quote W Edwards Deming one more time, ‘unknown and unknowable’.

Any idiot, for example, can look at the patent databases of the world and count the number of times that a patent was cited by other patents. But the crackpot-rigour idiot (CRI) wants to turn it into this:

crackpot 2

A really beautiful picture, that – best of all for the CRI – clients appear willing to spend lots of money for. After all, it looks really scientific. Really plausible. Just think of all the data and all the intricate mathematical analysis that went into its creation.

Because these days my aforementioned researchers spend a lot of their time analysing patents, this is the sort of picture guaranteed to make my blood boil. I know, I know, I should jump on the bandwagon and make some pretty pictures of my own, but sadly for me, I have the naïve desire to sleep at night without the feeling that I’ve been wasting people’s time and money.

The problem is this: when you really get down to it and try to work out what’s meaningful and relevant, if we want to study what’s happening in the world’s store of intellectual property, the number of citations a patent receives has practically nothing of value to tell us. All it means is that this many other inventors referred to this one. It doesn’t matter whether they referred to it as an example of a really bad invention that theirs improves upon, or because it was one of their own earlier inventions that they’ve subsequently improved upon; the key thing is that the cited patent has now been improved upon, making the earlier patent somewhat redundant.

Add to that rather inconvenient reality the fact that citations can only ever happen after the earlier patent has been made public, and you’ve now just made a beautiful picture that’s also about two years out of date.
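A toy sketch makes both problems concrete: the citation tally can’t distinguish praise from criticism, and its freshest data point is already years old. All patent numbers and dates below are invented for illustration:

```python
# A toy illustration of why raw citation counts mislead. The tally cannot
# distinguish 'built upon admiringly' from 'cited as the bad example', and a
# citation can only appear after the cited patent was published, so the
# signal is inherently stale. All records here are made up.

from collections import Counter

# (citing patent, cited patent, year the citing patent was published)
citations = [
    ("P4", "P1", 2012),
    ("P5", "P1", 2013),  # P5 might cite P1 as prior art it improves on...
    ("P6", "P1", 2013),  # ...or because P1 was the same inventor's earlier work
    ("P7", "P2", 2014),
]

# The 'beautiful picture' reduces to this one line: a bare tally.
# Either way it's read, the cited patent has been improved upon.
counts = Counter(cited for _, cited, _ in citations)

# The freshest signal we have about each cited patent is the year of its
# most recent citation -- already years old by the time anyone draws a chart.
newest_signal = {cited: max(y for _, c, y in citations if c == cited)
                 for cited in counts}
```

On this data P1 ‘wins’ with three citations, which under the logic above mostly tells us P1 has been superseded three times over.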

And if that’s not bad enough, some ornery new inventor might just come along tomorrow morning with a new invention (uncited by anyone, anywhere) that just made your invention completely irrelevant.

Here’s the only meaningful stuff: how much is my patent worth? (Easy one: probably zero.) How likely is it that someone will come along tomorrow and design around it? Who could I license it to? How could I use it to block my competitors? All the stuff, in other words, that has nothing to do with mathematics at all. Much as the economists might deny it, there is no mathematical model that will ever be able to take due account of the testosterone-driven foibles of the business strategist. Well, unless it’s our PanSensics tools, of course, but that’s another story. Albeit one that also doesn’t produce pictures that are nearly as pretty as the crackpot rigour ones.

Big Data Analytics apologists will argue that meaning comes from the meta-data, i.e. that we need to step far enough above the mathematical detail to see the bigger picture. The meta-data doesn’t lie, they will argue. Well, there’s certainly no rule that says it will, but it probably does. If your starting assumptions were garbage (i.e. numerical), then when you integrate it all together into your pretty picture, all you’ve really done is create meta-garbage.

 

 

Learned Helplessness #23

“It was vertigo. A heady, insuperable longing to fall. We might also call vertigo the intoxication of the weak. Aware of his weakness, a man decides to give in rather than stand up to it. He is drunk with weakness, wishes to grow even weaker, wishes to fall down in the middle of the main square in front of everybody, wishes to be down, lower than down.”

Milan Kundera, The Unbearable Lightness of Being

 

One of my longstanding theories, no doubt borne of the last twenty-some years of working with TRIZ, is that there aren’t that many problems in the world. In my mind I’ve imagined the total number is somewhere in the vicinity of a hundred. The more I think about it, the lower the number gets.

We’ve been doing lots of projects in the healthcare sector in recent months, trying to understand how to solve the crippling rock-and-a-hard-place contradictions in a system that everyone seems unable to change. We’ve also been doing some work with employment agencies, trying to get long-term unemployed people back into work. And then with transport sector bodies trying to reduce the number of complaints they receive from an ever growing, ever more demanding population of commuters.

Each of them thinks their problem is unique, but each time, when we’ve constructed a map to show how all the different opinions, policies and vested interests interact, we keep coming up with the same basic finding: as individuals we – all of us – increasingly find ourselves sliding down a slippery slope towards helplessness, and reliance on others – usually expensive officialdom – to sort out our problems for us.

How could that be? How can we find ourselves in a society in which literally millions of people find themselves on this slope? Do none of us have the gumption to say ‘enough already’, or, per the words of pub landlord philosopher and comedian Al Murray, ‘snap out of it’?

It felt like time to draw another map.

As usual, the start point was to define a question. In this case it was: ‘a culture of learned helplessness has arisen because…’

Then we set about scouring all the reasons we could find, across all the different sectors of society where we could see evidence of the slippery slope. We ended up with a list of just over twenty different answers:

learned helplessness 1

Then we looked at the interactions between these answers by asking the ‘leads to’ question: ‘which of the other ones does this one lead to first?’

Here’s the map we ended up with:

learned helplessness 2

It clearly showed a single vicious circle. If there’s a slippery slope at play in the learned helplessness problem, chances are this is where it is. Which turns out to provide something of a shock. At least to my way of thinking. Here’s what the vicious circle looks like close up:

learned helplessness 3

What it basically says is that the principal driving force behind learned helplessness is… the helpers.

W Edwards Deming famously said that nearly all (he eventually ended up at 95%) problems come from the system rather than the individuals within it. No-one comes to work to do a bad job. It’s just that somehow, all that positive intent sometimes finds itself combining in ways that create unfortunate outcomes no-one could have expected. In other words, while the vicious cycle tells me that the helpers are the problem, the real problem is the emergent complexity of their combined actions.

Blame is not the point. The point – for all of us helpless ones – is that when we’re trying to change our behaviours and finding it really difficult, the very people we go to for help aren’t helping. In the words of Nassim Nicholas Taleb, society has inadvertently made us all much more fragile. We live increasingly on a fragile knife edge, and the only way to turn things around, to make ourselves ‘antifragile’, is to realise who the enemy is. The enemy is us. Us plural.