I was debating whether I should change the title of this blog to “Statistics must-haves for CX and EX professionals,” but I feared that would lead to little enthusiasm… 😉
Therefore, it became a mix: yes, I end this blog with 5 crucial CX metrics that almost no one thinks about, and I start with some basic statistics that every (CX and EX) professional should know.
Why? Because I encounter too many professionals who lack this basic knowledge, and as a result their impact within the organization is much lower than it could be.
Why? Because every company needs steering information, and the right statistics make the fluffy experience phenomenon attractive to the more analytical, number-driven (“blue”) people.
Why? Because everyone is overwhelmed with 100,000 to-do’s and therefore must be able to prioritize.
But how do you prioritize Customer or Employee Experience initiatives if you can’t make a strong case for the impact?
Enough reasons, I would say, so let’s go!
Lesson 1: The “one question with an open answer” trend is risky.
There is a big difference between what customers think is important (rational) and what they feel is important (emotional). With the one question + open answer technique, you only get rational answers: the customer starts reasoning about why they gave, say, an 8 as their NPS score. So even if you apply the smart AI techniques that many survey tools claim to have, you are still analyzing rational answers.
The more transactional and specific your question, the better the one question + open answer technique works. Think of the customer effort score at a specific point in the app (“The app made it easy for me to view my invoice”). Open question: can you name one thing that would have made it even easier? That you can analyze with AI just fine.
The more emotional and broader your question, the less suitable the one question + open answer technique is. Think of measuring the experience of an entire journey. You can’t get away with this technique there. You really have to use other questionnaires and smart statistics to uncover the emotional, unconscious drivers.
Lesson 2: Correlation does not imply causation.
I am amazed on a daily basis by how little statistics are used in general. Usually I come across so-called straight counts (how often “unclear invoice” is chosen or mentioned as a reason), with the assumption that the more often something is mentioned, the more impact improving it will have. A reminder that this is almost never the case: how many people mention salary during an exit interview (a lot), and how often is it actually the driver (never)? Salary only becomes an issue when other things no longer suffice.
If some statistics are occasionally applied, they are usually prioritization matrices based on correlation. You know them: four quadrants on two axes, score versus “impact” (read: correlation), with the “crown jewels” in the top right (high score and high impact) and the advice to retain them and/or communicate about them.
But… correlation is not causation! Just because something is related doesn’t mean it’s the cause of a higher NPS or satisfaction. You need other techniques for that, like regression analysis for example.
Furthermore, I don’t share the conclusion that you should cherish high scores and high impact. My advice is actually that you should do even better on the high impact themes, because that’s where your NPS or satisfaction will improve the fastest! Unless you’re already at a 4.6 or higher on a scale of 5. But as long as you’re not at a 4.6 on the most important topics, that’s where you’ll get the most benefit.
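To make that difference tangible, here is a minimal sketch in Python (pandas and statsmodels, with made-up file and column names) that puts a correlation-based “impact” list next to a regression-based driver analysis. Consider it an illustration of the idea, not the exact analysis we run:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export: one row per respondent, theme scores 1-5 plus an NPS score.
df = pd.read_csv("survey_results.csv")  # assumed columns: clarity, speed, friendliness, nps
themes = ["clarity", "speed", "friendliness"]

# The classic priority matrix uses the pairwise correlation with NPS as "impact".
print(df[themes].corrwith(df["nps"]))

# A regression weighs all themes at once, so overlapping themes don't all get
# credit for the same variance; the coefficients are much closer to real drivers.
X = sm.add_constant(df[themes])
model = sm.OLS(df["nps"], X, missing="drop").fit()
print(model.params)  # relative weight of each theme in explaining NPS
```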
Lesson 3: Explained variance is your best friend
One of the most valuable outcomes of a regression analysis is a percentage called explained variance. In layman’s terms, this tells you something very important. We’re all customers. So we all think we know what our customers value. And we’ll discuss it extensively internally, which slows our transformation down.
This percentage is the solution. It tells us whether we’ve investigated the themes that matter to our customers – not based on assumptions, but on what we know for sure. In academic research, 10% explained variance is often considered good, but I always aim for at least 40%, and usually we land between 50 and 70%. That means you’re strongly in control of increasing your satisfaction or NPS.
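If you run a regression yourself, this percentage is simply the R-squared of the model. Continuing the hypothetical sketch from Lesson 2:

```python
# Explained variance is the R-squared of the regression – the share of the
# variation in NPS that your themes capture.
print(f"Explained variance: {model.rsquared:.0%}")

# Rule of thumb from the text: around 10% already passes in academic research,
# but aim for at least 40% before you trust the driver analysis for steering.
```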
Lesson 4: Ask for factor and regression analysis
I just mentioned regression analysis, but be careful not to rely solely on regression analysis. I won’t bore you with too many technical details, but the key is to first have a factor analysis done (which tells you which statements from your questionnaire belong together in the eyes of your customer) and then a regression analysis with the found factors.
I consciously write: ask for. Don’t go tinkering around yourself (unless you’re proficient in these techniques). Find a company that can do this for you, like Etil (a partner I’ve been working with for 15 years to run this kind of analysis). In principle, any research company should be able to do this for you, because these are not complex techniques, but I have yet to come across one that actually does (and trust me, I’ve reviewed a lot of surveys that organizations have had done by well-known agencies).
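Purely so you can recognize what a good agency should deliver, here is a rough sketch of the factor-analysis-then-regression combination in Python, using scikit-learn’s FactorAnalysis and invented column names. It illustrates the two steps, nothing more:

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey data: 20 statements (q1..q20) scored 1-5, plus an NPS score.
df = pd.read_csv("survey_results.csv")
statements = [c for c in df.columns if c.startswith("q")]

# Step 1: factor analysis groups statements that move together in the
# customer's mind into a handful of underlying themes (5 here is an assumption).
fa = FactorAnalysis(n_components=5, random_state=0)
factor_scores = pd.DataFrame(fa.fit_transform(df[statements]),
                             columns=[f"factor_{i+1}" for i in range(5)])

# Inspect the loadings to name the factors (which statements belong to which theme).
loadings = pd.DataFrame(fa.components_.T, index=statements,
                        columns=factor_scores.columns)
print(loadings.round(2))

# Step 2: regress NPS on the factor scores to see which theme drives the score.
model = sm.OLS(df["nps"], sm.add_constant(factor_scores)).fit()
print(model.params, model.rsquared)
```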
Lesson 5: Do a quiz before sharing your results
This is one of my most valuable lessons learned in recent years. What usually happened when presenting the results? “Yes, that makes sense” or “That’s nothing new.” Not really helpful if you want to mobilize people to improve. So that had to change…
Now, every time before we share the results, we first do a Kahoot quiz of up to 10 questions to test what they think has the most impact. Then everyone gets at least 50% of the questions wrong, which means they’re really open to hearing what the analysis says.
It’s an ideal mix of fun and energy thanks to the quiz and the competition, but the most important effect of this intervention is that the outcome of your research moves straight into execution mode.
These are the basics in statistics that make the difference between mobilizing everyone to improve with the right priorities and working hard to convince everyone that they should “do something” with Customer Experience.
Just a disclaimer: it’s still statistics, and statistics will patiently let you read into them whatever you like. But with these techniques, we’ve proven time and again that you can achieve a strong increase in satisfaction in only three months’ time, because now you have found the true drivers of satisfaction or NPS.
The real causality is proven by improving on the most important drivers and then actually achieving the increase in the next measurement.
In addition to the more extensive statistical techniques described above, there are also five basic checks you can do to get very valuable insights and better prioritization.
The 5 metrics that hardly anyone thinks of
Ready for list number two? Here it comes:
Metric 1: End-to-end journey lead time
This was one of my very first “eureka” moments (in 2007, yes, yes, I know 😉) about how valuable the customer journey is as a tool to mobilize people. I worked with an insurance company where there was an issue with complaints. The operational director didn’t think it was that important: what are 100 complaints on 100,000 customers (illustrative numbers), and besides, all the internal SLAs were green.
Then we mapped out the journey from beginning to end and what did we find? If you added up all the internal steps from all those departments, this journey took a whopping 38 working days. What did they promise on the phone? 10 working days (the complaint was that it took too long). This insight into the 38 days was completely new to the operational department and so we got them on board to work on it.
We would never have discovered this if we had kept looking at it from the individual departments and their separate process steps. So the tip is: check the total lead time of your end-to-end journey alongside the separate internal lead times of each piece of the journey.
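A simple sketch of that check, assuming you can export an event log with one row per internal step (file and column names are invented):

```python
import pandas as pd

# Hypothetical event log of complaint handling, with assumed columns
# case_id, department, step_start, step_end.
events = pd.read_csv("complaint_events.csv", parse_dates=["step_start", "step_end"])

# Internal view: average lead time per department step (often all "green").
events["step_days"] = (events["step_end"] - events["step_start"]).dt.days
print(events.groupby("department")["step_days"].mean())

# Customer view: elapsed time from the very first step to the very last one.
per_case = events.groupby("case_id").agg(start=("step_start", "min"),
                                         end=("step_end", "max"))
end_to_end = (per_case["end"] - per_case["start"]).dt.days  # calendar days;
# swap in numpy's busday_count if you want working days
print(f"Average end-to-end lead time: {end_to_end.mean():.0f} days")
```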
Metric 2: First Time Right (no, that’s not First Time Fix)
I once worked for an organization where the question was: the contact center needs to be cheaper. Now, I’m not a contact center manager and I’d rather not annoy customers by hiding the phone number, so we wanted to achieve this through better customer experience and optimizing journeys.
But… how much potential is there actually? How many contacts could we prevent? No idea, so let’s ask the customers. We added this question to both the phone and email survey: dear customer, do you feel we could have prevented this contact? On average, 30% say yes! That is both a potential 30% contact reduction and happier customers (who didn’t want to have to call about this in the first place). A business case is then quickly and easily made.
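The back-of-the-envelope version of that business case, with purely illustrative contact volumes and costs (only the 30% comes from the survey question above):

```python
# Back-of-the-envelope business case with illustrative numbers.
annual_contacts = 500_000      # hypothetical contact volume
cost_per_contact = 6.50        # hypothetical cost per contact in euros
preventable_share = 0.30       # the 30% "we could have prevented this contact"

potential_saving = annual_contacts * cost_per_contact * preventable_share
print(f"Potential annual saving: €{potential_saving:,.0f}")  # €975,000
```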
Metric 3: Percentage of customers per journey
At the highest level, an organization can never define more than 8 to 10 journeys. Think of buying a product, product usage, payment, etc. Multiple detail journeys are part of these journeys, and in the end, there may be 100 mini journeys.
Where do you start with journey mapping and journey management? A very simple tool is to start with the question: what percentage of our customers go through these journeys per year? Start with the most common journeys.
You can also add: which journeys cause the most customer contact? An important note: many people in the contact center world can have contact tunnel vision. They think: we can prioritize by looking at how many calls we get per journey. That’s true, but beware: that is not an indication of the number of customers who go through the journey!
If a journey goes well, customers don’t need to call. So be careful not to only look at contact data, but also look at operational/IT data to see how often something occurs.
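A sketch of that comparison, assuming you have a contact log and an operational log of journey occurrences (again with invented file and column names):

```python
import pandas as pd

# Hypothetical data: a contact log (customer_id, journey) and an operational/IT
# log of all journey occurrences (customer_id, journey).
contacts = pd.read_csv("contacts.csv")
journeys = pd.read_csv("journey_occurrences.csv")
total_customers = journeys["customer_id"].nunique()

summary = pd.DataFrame({
    "pct_of_customers": journeys.groupby("journey")["customer_id"].nunique()
                        / total_customers * 100,
    "pct_of_contacts": contacts["journey"].value_counts(normalize=True) * 100,
}).sort_values("pct_of_customers", ascending=False)

# A journey with many contacts but few customers is a friction signal;
# a journey with many customers but few contacts may simply be running well.
print(summary.round(1))
```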
Metric 4: Percentage of customers per channel
You run a similar risk if you focus too much on the number of contacts in the contact center instead of the number of customers. If you want to get a feel for the impact of your channel on the total (brand) experience, you don’t need the number of contacts, but the number of customers!
For example, if it turns out that 20% of your customers call you annually, this means that even with fantastic customer service you can never influence more than 20% of them, and that you need to influence other elements of the journey for the remaining 80% of the total experience or NPS.
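And the channel variant of the same check, with an assumed customer base size:

```python
import pandas as pd

# Channel reach: what share of the customer base does each channel touch per year?
# Assumed columns: customer_id, channel (phone, email, app, ...).
contacts = pd.read_csv("contacts.csv")
customer_base = 1_000_000  # assumed total number of customers

reach_pct = contacts.groupby("channel")["customer_id"].nunique() / customer_base * 100
print(reach_pct.round(1))  # e.g. phone: 20.0 → even great service touches at most 20%
```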
Metric 5: The best 80/20 checks
We can’t end this blog without mentioning the lovely Pareto rule (80/20). A few variants:
- Are 20% of my customers responsible for 80% of my contacts?
- Are 20% of my customers responsible for 80% of my revenue?
- Do 20% of my journeys affect 80% of my customers?
During a journey workshop, one of our most important mantras for not getting bogged down in countless exceptions is: does this situation occur for 20% or for 80% of the customers? It works very well every time, much to the delight of the participants.
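If you want to run the first check on your own data, a few lines of pandas are enough (assuming a contact log with a customer_id column):

```python
import pandas as pd

# One of the 80/20 checks: do 20% of customers generate 80% of contacts?
contacts = pd.read_csv("contacts.csv")
per_customer = contacts["customer_id"].value_counts()  # contacts per customer, descending

top_20pct = per_customer.head(int(len(per_customer) * 0.20))
share = top_20pct.sum() / per_customer.sum()
print(f"Top 20% of customers generate {share:.0%} of all contacts")
```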
I hope you take away some insights from this that will help you make even more business impact with customer experience!