In the spirit of not re-inventing wheels, the SI research team have spent some time in recent months looking at how the service industries measure their innovation activities. To say that the search has ended in disappointment would be something of an understatement. Put crudely, no-one in or around the service sector seems to have the first idea how to measure the impact of their innovation efforts. What does exist seems to fall into two basic camps. In the first camp, ‘measuring service innovation’ seems to mean surveying people in the service sector to see how well they think they’re innovating. This seems like a fundamentally flawed way of doing things to me: firstly because the survey instruments don’t define what they mean by innovation, and secondly because it’s difficult to imagine a scenario in which respondents would have any desire to answer the questions honestly. All in all, the whole exercise resembles a study asking gamekeepers to rate their poaching abilities.
The second service innovation measurement camp appears to have worked out that it’s a difficult problem and that ‘somebody should do something about it’. The most pro-active members of this camp appear to have tried running competitions to see if anyone is up to the challenge. As far as we can tell, no prize money has been handed out.
Given that the GDP of many developed nations is now heavily dominated by the service sector (in the UK, US and Australia it’s already over 80%), it somehow seems a little odd that no-one knows how to measure their future lifeblood.
In these kinds of situations, we’d normally expect to begin solving the problem by trying to define what the ideal solution would look like. Based on our definition of innovation as ‘successful step-change’, a meaningful measurement ought to be based on an ability to define and measure ‘success’. This might mean return on investment – something like ‘how much did we invest in creating our new service offering, versus how much did we get back in terms of new revenue from customers?’ – or some form of net value addition. Or jobs created. Or maybe even money saved. All sound both logical and plausible. Except for a couple of awkward facts. Firstly, there’s the fact that the service sector is inherently embroiled in and surrounded by complexity. Which means that it is difficult, if not impossible, to reliably connect causes and effects. Any change made to a complex system is prone to a host of unintended consequences. One part of the system, in other words, might get rewarded for successfully getting butterflies to flap their wings faster, while another ends up becoming the victim of a surprise hurricane. A bit like when the UK Government ‘innovated’ after the 2011 riots in London and locked up the gang leaders. A big tick in the ‘no more gang leaders’ box. A big disaster for everyone else when the power vacuum left in the gangs created a mass of new work for the police trying to calm the ensuing intra- and inter-gang warfare.
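The ROI framing above can be made concrete with a back-of-envelope calculation. A minimal sketch follows; the function name and all the figures in it are illustrative assumptions, not anything from the original, and the hard part – reliably attributing the revenue to the innovation in a complex system – is exactly what the paragraph above says cannot be assumed away.

```python
def service_innovation_roi(investment, new_revenue):
    """Simple ROI: net gain from the new service divided by what it cost.

    Both figures are supplied by the caller; in practice, deciding how
    much of 'new_revenue' is genuinely caused by the innovation is the
    difficult cause-and-effect problem discussed above.
    """
    if investment <= 0:
        raise ValueError("investment must be positive")
    return (new_revenue - investment) / investment

# Hypothetical figures: invest 200k creating a new service offering,
# attribute 500k of new customer revenue to it.
roi = service_innovation_roi(200_000, 500_000)
print(f"ROI: {roi:.0%}")  # prints "ROI: 150%"
```

Even this trivial sketch makes the dependency visible: the answer is only as trustworthy as the attribution behind the second argument.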
Secondly, and perhaps even more significant, is a recognition that a very large proportion of how customers decide whether the ‘innovative’ new services they’re offered are ‘good’ or not is driven by intangible, emotional factors. Which in turn creates a new problem: how on earth can we reliably measure intangible things like ‘wow’, trust, happiness, empathy or any of the other factors that might drive a change in customer behaviour?
This is, of course, the main motivation behind our PanSensic tools and our philosophy of helping organisations to measure what is meaningful rather than merely convenient. In terms of measuring service innovation, PanSensic tools are able to measure a host of different elements that might, individually, collectively or in some combination, help organisations to measure how well their service innovation activities are going. Things like:
- Number of ideas being generated
- Quality of ideas being generated
- Number of ideas being executed
- Reduction in customer frustration
- Increase in customer Autonomy, Belonging, Competence and Meaning
- Increase in customer trust
- Improvement in staff engagement
- Emotional ROI
- Increase in customer propensity to recommend to other prospective customers
- Etc.
Before we get too far ahead of ourselves, we’ve perhaps now revealed a new problem. Namely, which of these – and the host of other things we could add to the list – is more relevant than the others? Where should an organisation start? What should they be aiming for?

In classic bad-consultant language, the answer is – sadly – ‘it depends’. Fortunately, we know there are two things that dominate the answer to this dependency question. The second most important is the Innovation Capability Maturity Level of the organisation making an innovation attempt. Level 1 companies will generate success by taking on Level 1 projects, and measuring things relevant to their Level 1 capabilities. When an organisation advances to Level 2, how they measure success will change. And it will change again when they hit Levels 3, 4 and 5. In each case, what they should change to has been mapped and verified over the course of building the Capability Model during the last eight years.

The most important dependency-determining factor – and in many ways the thing that forms the underpinning DNA of the Innovation Capability Maturity Model research – is that the most important service innovation measurement parameters are those that enable the appropriate ‘innovators’ to see that they are moving in the right direction. Only when people feel, and can sense, that they are doing the right things and moving in a positive direction are they likely to keep going. Service innovation measurement job one, therefore, is providing whatever is needed to visibly show the doers and their supervisors that progress is being made.
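That ‘job one’ – visibly showing doers and supervisors that progress is being made – can be sketched as nothing more than a trend summary over whichever metric from the list above an organisation has chosen. The sketch below is a deliberately crude assumption of ours (the function, metric name and quarterly scores are all hypothetical), meant only to show how little machinery is needed to make direction of travel visible.

```python
def progress_report(metric_name, values):
    """Summarise whether successive measurements of a chosen innovation
    metric (e.g. customer trust scores per quarter) are trending up.

    Compares each period with the previous one and reports direction,
    so the people doing the work can see movement, not just a number.
    """
    if len(values) < 2:
        return f"{metric_name}: not enough data to show a trend"
    periods = len(values) - 1
    ups = sum(1 for a, b in zip(values, values[1:]) if b > a)
    direction = "improving" if ups > periods / 2 else "flat or declining"
    return f"{metric_name}: {direction} ({ups}/{periods} periods up)"

# Hypothetical quarterly customer-trust scores
print(progress_report("Customer trust", [6.1, 6.4, 6.3, 6.8]))
# prints "Customer trust: improving (2/3 periods up)"
```

The point is not the arithmetic but the framing: whatever the maturity level, the first thing worth building is the feedback loop that lets innovators sense they are moving in the right direction.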