You cannot read about trends in customer experience (CX) without reading about the huge potential of artificial intelligence (AI).
And there is not a CX tooling company that is not promoting its AI capabilities.
But is AI the way to go in CX research?
In this blog, I share my views on why most current AI applications in CX research can lead to wrong conclusions about what your customers really need, and what you can do to maximize the value of AI by tweaking where and when you use it.
Trend toward open text
The trend toward asking as few questions as possible, and using smart ways to analyze open text answers, seems unstoppable.
This is accelerated by the popularity of ratings and reviews, and the assumption that open rates in emails are dropping fast.
More and more CX research therefore focuses on one question (often NPS) and then asks for comments on that score via open text.
The open text is then analyzed to identify the drivers for improving your customers' experience.
If you’re a regular reader of my blogs, you’ll know that using open text will lead to finding the rational drivers of your customers.
And that can be very risky.
After all, those rational drivers are most probably not your customers’ real, latent drivers.
Current state of AI
When AI is applied to analyze open text answers, two techniques are combined in most cases:
- Counting the frequency of specific words / sentences
- Checking for positive or negative sentiment in the text
So when you ask the NPS question (“Would you recommend company X to your friends / colleagues?”) and analyze the answers explaining why customers gave their scores, you will get a ranking of the topics mentioned.
And for each topic, you can get an indication of whether this topic is mostly accompanied by a positive or a negative sentiment.
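The two techniques above can be sketched in a few lines of plain Python. This is a minimal illustration, with a hand-made sentiment lexicon and hypothetical comments; a real CX tool would use trained topic and sentiment models rather than word lists:

```python
from collections import Counter

# Tiny illustrative lexicon; real tools use trained sentiment models.
POSITIVE = {"great", "friendly", "fast", "helpful"}
NEGATIVE = {"slow", "annoyed", "rude", "waiting"}

# Hypothetical open-text answers to the NPS follow-up question.
comments = [
    "the waiting time really annoyed me",
    "friendly staff but slow delivery",
    "great service very helpful",
]

topic_counts = Counter()   # technique 1: word / topic frequency
sentiment = Counter()      # technique 2: sentiment per answer

for comment in comments:
    words = comment.lower().split()
    topic_counts.update(words)
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    sentiment[label] += 1

# topic_counts now ranks mentioned words; sentiment tallies the tone.
```

Note that this pipeline can only ever surface what customers literally wrote, which is exactly the limitation discussed below.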
Don’t use AI to find drivers
Of course, AI saves an enormous amount of time compared to analyzing all the text manually.
But you are actually analyzing the rational answers given by your customers.
I may say, “the waiting time really annoyed me” (and it probably did).
The AI will then count the number of times “waiting time” is mentioned and the annoyance will lead to negative sentiment.
But my real, latent need may be the personal attention I would like to receive, which I didn’t mention.
To find the latent drivers, you should therefore not use AI, but instead statistical techniques designed to find the underlying, subconscious needs.
So is AI completely useless in CX research? Definitely not.
You just have to know where to use AI in your various forms of CX research.
Use AI to deep dive into specific topics
In my opinion, there are two very valuable ways of using AI in open text analyses, for a better understanding of the needs of your customers.
- To deep dive into the latent drivers
Once you have identified the latent drivers, your employees will have plenty of ideas on how to improve them right away. So go and experiment first!
After the initial improvements, it becomes trickier to fine-tune and improve further.
That’s where the open text deep dives come in handy.
Let’s say one of the drivers you found is “Organization X empathized with my situation”.
It can then be very helpful to understand what constitutes “empathy with my situation” for your customers, and what behavior from your organization positively adds to their feeling of empathy.
Measuring the driver on a 5-point scale, with an additional open text question, will help you deep dive into it. Asking customers to share a specific experience (their story) yields the most valuable insights.
The open text question accompanying the driver score could then be:
“What specific experience did you have with our organization, that best represents your score?”
You can then apply AI analyses to really understand what this driver means to your customers and where the focus of improvements for this driver should lie.
- To deep dive into user research
As I mentioned, finding the drivers of satisfaction or recommendation is difficult when using open text, because of their complex nature.
When you’re asking for very specific feedback, it’s perfectly fine to use open text.
User research that focuses on optimizing existing functionality is such a specific context.
You ask for feedback on a specific page of the website, on a specific function of the app, etc.
I then have no problem giving a detailed answer about why a page is not user-friendly, why it did not answer my question, or where the app did not work properly.
For example, you can use the customer effort score (CES) question: “The app made it easy for me to check my order status”.
Followed by an open text: “What wasn’t super easy when checking your order status?”
AI can then analyze these specific answers, giving you a simple one-pager with the exact pain points, so you can optimize conversion and experience for that page or functionality.
That’s how you can capitalize on the growing popularity and further development of smart AI, without running the risk of applying it in the wrong context.