I was asked in a survey recently “How likely would you be to recommend this tool to your colleagues?”
At first glance it seems like a fair enough question, and one we’re all used to seeing since NPS (Net Promoter Score) became ubiquitous. But here’s the problem: the tool in question was an internal HR tool. I have no choice but to use it. Whether I would recommend it or not is therefore irrelevant.
It’s not the first time I’ve questioned the use or implementation of NPS. And I’m not alone.
What is NPS?
NPS or Net Promoter Score is a single number that represents how likely a customer is to recommend your product/service/company to their friends, family or colleagues.
People are divided into detractors, passives and promoters based on their score. The percentage of promoters (those who gave a score of 9 or 10) minus the percentage of detractors (those who scored between 0 and 6) gives you your Net Promoter Score. So the score can be negative, ranging from -100 to +100.
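Expressed as code, the calculation above looks like this (a minimal sketch; the score cutoffs and percentage-based formula follow the standard NPS definition):

```python
def nps(scores):
    """Return the Net Promoter Score for a list of 0-10 ratings."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)   # scored 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # scored 0 to 6
    # NPS = % promoters minus % detractors, so it ranges -100 to +100
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 0]))  # 50% - 20% = 30
```

Note that the passives (7s and 8s) drop out of the formula entirely, which is one reason the single number can hide a lot of detail.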
Responses tend to follow a bimodal distribution, with people either strongly supporting or strongly detracting: 65% of all scores are either 0, 9, or 10.
What’s Wrong with NPS?
It’s culturally insensitive; I used to see survey data from training courses scored on a 10-point scale, and we noticed that Dutch participants tended to score any question on quality 1-2 points lower than their US colleagues. One respondent famously took points off because the food was too good! It was incredibly unlikely to ever get a 10 from a Dutch participant. I’m sure this has implications for NPS scores.
Limited use in B2B; because the decision cycle is more complex, with multiple stakeholders and influencers, an NPS based on the score of just the one person known to the survey sender is not a useful measure.
It can be gamed; on a holiday last year I was asked to give a company feedback, and was advised that only scores of 9 or 10 would count as positive. As it happens I was happy to score a 9 without the guilt trip, but how accurate are surveys when they come with scoring instructions?
It’s not actionable; it can be really hard to understand what to improve when the NPS score is calculated across a team or across an audience as a whole. The NPS Monitoring blog gives a nice hypothetical example of how breaking out an audience according to time spent with the product might help you understand what approach to take to improve user experience (and therefore NPS).
Some of the issues above are around implementation; if frontline people don’t benefit individually from NPS, for example, the risk of the score being gamed drops.
Is it Useful?
NPS can be useful either as a single figure that allows a manager to see a changing trend of customer reaction, or compare businesses or markets across a large company – preferably using trend data rather than absolutes to limit any cultural biases.
If set up properly it can also be used to diagnose areas for attention by drilling into the reactions of specific groups or analysing where a respondent is in the product purchase cycle.
But it should never be used to assess a compulsory tool.
Image: Emoticon via pixabay