Customer ratings reveal what is working for customers and what isn’t.
As a rule, some contact channels are more satisfying than others. For example, businesses typically receive poorer satisfaction ratings for support provided via email than for support provided over the phone. Likewise, satisfaction ratings for self-service options tend to be lower still, especially when content is not regularly reviewed and updated.
Let’s look at this in more detail
Customer ratings can also reflect the amount of effort the customer believes is being put into resolving their case.
Take a typical phone call between a customer and an agent, for example. Some representatives faithfully take the customer through the script to tick the call-quality monitoring boxes, while others go a step or two further and show greater empathy. If the customer feels the agent has gone the extra mile, for instance by offering guidance on how to prevent the issue from recurring rather than simply resolving it, they will probably give the call a higher rating.
This also applies to self-service. When self-serve resources have been carefully planned, prepared, deployed and maintained with the aim of helping customers take care of their needs themselves as far as possible, customers will rate them more highly than a large block of dense text that is difficult to digest and perhaps covers several issues with little or no structure.
When analysing customer ratings, businesses should pay particular attention to high performers. If, for example, a self-service article receives particularly high ratings, consider what makes it different from the others available. Is it the topic? Or the way the information is presented? What makes it so helpful for customers (and so appreciated)?
Equally, if one agent receives consistently high customer ratings, consider what they are doing differently. The data will guide thinking, training and the changes to be implemented.
Other things to consider
Many customer service channels are siloed: the supervisor of a voice support team, for example, may have no interaction with the self-service team. As a result, inconsistencies arise across channels, and customers notice them.
Data measured for individual channels should also be compared across them. If customers are generally happier with voice support than they are with self-service, for example, the business needs to ask why. This data can also be used to justify eliminating a channel entirely. If a channel is consistently the least satisfying (in other words, the most dissatisfying), and efforts to improve its performance fail, is it worth continuing? In effect, customers are telling you it isn’t working.
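As a simple illustration of the cross-channel comparison described above, the sketch below averages 1-to-5 satisfaction scores per channel and flags the lowest-rated one as a candidate for improvement or retirement. The data, channel names and scoring scale are all hypothetical, not drawn from any real survey.

```python
from statistics import mean

# Hypothetical 1-5 satisfaction scores collected per support channel.
ratings = {
    "phone": [5, 4, 4, 5, 3],
    "email": [3, 2, 4, 3, 3],
    "self_service": [2, 3, 2, 1, 3],
}

# Average score per channel.
averages = {channel: mean(scores) for channel, scores in ratings.items()}

# The channel customers find least satisfying is the candidate for
# improvement efforts or, if those repeatedly fail, for retirement.
weakest = min(averages, key=averages.get)

for channel, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {avg:.2f}")
print(f"Lowest-rated channel: {weakest}")
```

In practice the scores would come from post-contact surveys rather than a hard-coded dictionary, but the comparison logic is the same: put every channel on one scale before deciding where to act.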