I could have started this blog with a heartfelt argument for why we should stop measuring NPS.
There is enough evidence for that (check out my previous NPS blog to read more about that).
But as far as I’m concerned, that’s the core of the problem: a discussion about the metric.
Yes, you should roughly know that you mainly use the Customer Effort Score (CES) for user experience, the Net Promoter Score (NPS) for brand experience, and Satisfaction (SAT) and/or NPS for the journeys.
That covers the basics of the discussion about the metric.
A much bigger problem that I see in most organizations is how these metrics are used.
So the right question is not: Which metric should we use?
The right question is: How do you really find what drives a better experience?
Given the dilemma of employees who already have 1,000 to-dos, how do we ensure that we help them prioritize exactly those to-dos that have the most impact on the customer experience?
How do we extract, from all that overload of information (surveys, calls, open answers, etc.), exactly the insight that tells us: if we improve X (e.g. employees’ personal attention), the experience improves five times faster than if we put our energy into Y (e.g. MyAccount)?
What strikes me is that there is a big gap in knowledge that is needed to find those drivers.
That’s why this blog contains 5 concrete insights that will help you get into the driver’s seat (or understand why you feel like you can’t seem to get there).
1. Number of respondents too low
One of the most common complaints, especially about NPS, is that it is so volatile: month A an NPS of +30, month B an NPS of +5 and no idea what the explanation is.
In several reports (even when outsourced to research agencies), I see NPS scores based on, for example, 35 respondents.
This is less of a problem for the CES and SAT, because no subtraction is involved: you simply use the average of all individual customer scores.
Therefore, for CES and SAT, a minimum of 30 observations is acceptable for drawing conclusions about trends, for example.
But because the NPS subtracts the 0-6 group from the 9-10 group, you want at least 60 respondents before you jump to conclusions.
What I increasingly recommend is to report the average NPS (the sum of all individual scores on the 0-10 scale divided by the number of respondents) in addition to the ‘original’ NPS (% promoters – % detractors).
It is much less volatile.
Just because someone happened to think of subtracting that % 0-6 from the % 9-10 doesn’t mean you can’t also monitor the average as a shadow trend, just like satisfaction.
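The two ways of reporting can be sketched in a few lines of Python; the scores below are hypothetical, purely to show the mechanics:

```python
# Hypothetical individual scores on the 0-10 scale (one per respondent).
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8, 10, 2, 9, 7, 10, 9, 6, 8, 10, 5]

promoters  = sum(1 for s in scores if s >= 9)   # 9-10 group
detractors = sum(1 for s in scores if s <= 6)   # 0-6 group

# 'Original' NPS: % promoters minus % detractors.
classic_nps = (promoters - detractors) / len(scores) * 100

# Average NPS: sum of individual scores divided by number of respondents.
average_nps = sum(scores) / len(scores)

print(f"Classic NPS: {classic_nps:+.0f}")
print(f"Average NPS: {average_nps:.2f}")
```

Because the average uses every individual score instead of collapsing them into three buckets, a few respondents shifting between 8 and 9 moves it far less than the classic NPS.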
2. Overestimation of the impact of direct feedback
I also hear a lot from organizations about using direct feedback: contacting customers based on their individual score.
Often with NPS, but this can also be done with SAT and CES.
The principle of overestimating the impact of direct feedback is universal across the 3 metrics.
Suppose you send 100% of your customers a survey.
Suppose 20% of customers respond.
Suppose 25% of them are dissatisfied (and you want to call them).
Then you only reach 5% of your customers by using direct feedback!
In reality, this % will be much lower, because you never send 100% of your customers a survey, for example.
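The reach arithmetic above is just a multiplication of shares; as a quick sketch (using the hypothetical percentages from the example):

```python
# Back-of-the-envelope reach of direct feedback.
surveyed      = 1.00   # share of customers who get a survey (best case)
response_rate = 0.20   # share of surveyed customers who respond
dissatisfied  = 0.25   # share of respondents you would actually call

reach = surveyed * response_rate * dissatisfied
print(f"Customers reached via direct feedback: {reach:.0%}")
```

Lower any of the three inputs (and in reality you always do) and the share of customers you actually reach shrinks further.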
Direct feedback is fine if you want to realize a top individual customer experience (think of a B2B context), but it is not a quick fix to improve the total customer experience.
3. Too often 1 question with an open answer
That response rate, hey… it remains a thing.
How often do I hear: “but customers do not want to fill in a long(er) questionnaire”.
Yes, they are willing to do that, trust me, we have already proven that a thousand times.
But this belief is persistent and often the reason why very short questionnaires are chosen.
Certainly also stimulated by survey tooling vendors, there is an increasing trend of asking 1 question (NPS, SAT or CES) followed by an open text field with the why question.
But… even if you have very short questionnaires, if you can’t get the right drivers out, you’re still not in the driver’s seat.
What is the core problem of using that open question?
You get rational answers from customers.
However, we as humans are not very rational beings.
So even though you have super smart AI in your survey tool that analyzes thousands of open answers, you are still analyzing the rational answers of customers.
Consequence? You think you’ve found the right drivers, but these aren’t the ones that really matter.
Which explains why you try very hard to improve on these drivers, while you see little or no result.
We have also tested this many times, and you see that the number 1 subconscious, latent driver only comes in at place 4 or 5 in the list of rational answers.
So you can really be pretty far off. This applies to both NPS and SAT.
Only with CES is that one question with an open answer no problem.
Because you use the CES at very specific, transactional moments (for example on the order page of your website).
Then it is fine to ask the open question very specifically: can you name 1 thing that would make it easier for you (to complete the order)?
4. Measure touchpoints instead of journeys
Let 2023 be the year we stop using the word touchpoints.
It’s confusing, the scope doesn’t help and that also keeps us from getting in the driver’s seat of customer experience.
Rather use channels (call, email, WhatsApp, etc.) and journeys (I order, I have damage, etc.).
Touchpoints are now a mix of channels and moments from the journey (e.g. a visit from an account manager).
In addition, they give you a false sense of ‘we’re doing great’.
Suppose I call customer service.
I call to ask if the employee can send me form X.
“No problem ma’am, we’ll do it right away!”
Immediately after the call I get a few satisfaction questions (touchpoint).
I’m giving a high score because I’m totally happy that it was handled well.
Four days later I still have no form.
I’m not happy, but that dissatisfaction is nowhere in your perception measurements.
So you think you are creating a top experience, but you have a big blind spot.
Which explains why churn is a problem despite those high scores you seem to keep getting.
5. Insufficient knowledge of statistics
Yes, consciously the last of the 5 insights, because I can already hear you thinking: “Phew, statistics, well, what was that again? Didn’t I have something like that in school?”
But fear not, I’m going to make it very easy for you.
You don’t have to be a statistics wizard, you just need to know what to ask your research partner and/or your data science colleagues.
If your research partners in crime do anything with statistics at all, it is often correlation (but they often don’t even do that).
But… correlation (something is related) is very different from causation (something causes something else).
What would we like to know in our case? What causes a higher NPS or SAT!
That is not rocket science in the research world, these are existing techniques: factor analysis and regression analysis.
In layman’s terms, factor analysis groups the statements from your questionnaire.
So it actually says which statements belong together in the eyes of the customers (!)
This is the key to finding those latent, unconscious drivers.
In layman’s terms, regression analysis (with the factors, not the individual statements!!) tells you 2 important things:
A. Did we ask the right topics, did we forget anything important?
B. What is the impact of each topic (factor) on the NPS or SAT?
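As a minimal illustration of step B, here is a pure-Python ordinary least squares sketch that regresses the overall score on two factor scores. Everything in it is hypothetical: the two factor names, the data, and the assumption that the factor analysis (the grouping of statements) has already been done. In practice you would hand both steps to your research partner’s statistics package.

```python
# Hypothetical survey data: each position = one respondent.
# Factor scores (e.g. averaged statements per factor) and the overall score.
personal_attention = [8, 6, 9, 5, 7, 8, 4, 9, 6, 7]   # factor 1
self_service       = [6, 7, 5, 6, 8, 7, 5, 6, 7, 6]   # factor 2
overall            = [9, 6, 9, 5, 8, 8, 4, 9, 6, 7]   # NPS/SAT question

def regress(y, xs):
    """Ordinary least squares via the normal equations (Gauss-Jordan solve)."""
    n, k = len(y), len(xs)
    X = [[1.0] + [x[i] for x in xs] for i in range(n)]   # prepend intercept
    # Build X'X and X'y.
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k + 1)]
           for i in range(k + 1)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k + 1)]
    # Solve (X'X) b = X'y by Gauss-Jordan elimination.
    for col in range(k + 1):
        pivot = xtx[col][col]
        xtx[col] = [v / pivot for v in xtx[col]]
        xty[col] /= pivot
        for row in range(k + 1):
            if row != col:
                f = xtx[row][col]
                xtx[row] = [a - f * b for a, b in zip(xtx[row], xtx[col])]
                xty[row] -= f * xty[col]
    return xty  # [intercept, beta_1, beta_2, ...]

coefs = regress(overall, [personal_attention, self_service])
print(f"personal attention impact: {coefs[1]:+.2f} per scale point")
print(f"self service impact:       {coefs[2]:+.2f} per scale point")
```

With this made-up data the coefficient on personal attention dwarfs the one on self service: exactly the kind of ranking that tells you where X beats Y.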
Pretty relevant insights to be in the driver’s seat, isn’t it?
If you want to know within 10 minutes how exactly this works with journey mapping, latent needs and finding the right drivers, watch this video I made about it.
Or if you want more in-depth knowledge of how to maximize NPS for your organization, check out this NPS toolkit slide deck I made for you, including a voice-over video.
In conclusion: 2023 is not going to be the year in which we can stop using NPS.
Though let’s decide that it’s the year in which we stop using the root cause technique of the NPS.