After some posts on Social CRM I feel it is time to write about one of my favorite topics. I have written about metrics in the Customer world before; most of those posts can be considered "rants" against current measurement practice in Call Centers / Customer Services.
Desired Outcomes Metrics – a design thinking approach
Out of the many things that can be wrong with metrics and measurement frameworks, the one that most hinders companies from making breakthrough improvements is the lack of measurement against Customer desired outcomes. The ability to measure against these starts with understanding them. Product, service and/or experience designers focus on understanding Customer desired outcomes (or needs) first. Only after they understand those do they start the process of ideation on how to meet those outcomes, followed by the actual designing (or prototyping) and testing.
Managers of Customer service departments, or anyone otherwise responsible for (parts of) the Customer service experience, would do themselves, as well as their Customers, a huge favor if they did the same: start with the design (for the avoidance of doubt: I'm not talking visual presentation here) of their balanced scorecards, dashboards or what have you.
Outcomes Customers desire from a company’s Customer service
All outcomes that Customers desire from a company's Customer service, in my humble opinion, can be summarized in these four categories (a rough sketch of how they might map to metrics follows the list):
- Understandable information
- Easy to use
- Right answer or advice the first time
- Speed of resolution
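To make "measuring against desired outcomes" a bit more tangible, here is a minimal sketch of a scorecard keyed on these four categories. The metric names are hypothetical placeholders I picked for illustration, not the ones from the deck:

```python
# Hypothetical mapping of desired outcomes to candidate metrics (placeholders only).
DESIRED_OUTCOME_METRICS = {
    "Understandable information": "% of answers understood without a follow-up question",
    "Easy to use": "number of steps / transfers needed to reach help",
    "Right answer or advice the first time": "first-contact resolution rate, as judged by the Customer",
    "Speed of resolution": "elapsed time from first contact to confirmed resolution",
}

def scorecard(measurements: dict) -> None:
    """Print each desired outcome, its candidate metric and the observed value (if any)."""
    for outcome, metric in DESIRED_OUTCOME_METRICS.items():
        value = measurements.get(metric, "not measured yet")
        print(f"{outcome:40s} | {metric:60s} | {value}")

# Example with one made-up measurement:
scorecard({"first-contact resolution rate, as judged by the Customer": "68%"})
```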
In this short deck I show how widely adopted metrics in Customer services are not focused on these desired outcomes. They may even hinder them.
Do you have examples of how you measure against your Customer’s desired outcomes? Or more bad examples that I should include in my little presentation? Please share your views and experiences in the comments.
Oh man, I thought your deck was hilarious. I've definitely felt that way as a customer, and I've seen it happen as a manager, and it makes me very disappointed.
How about this one? Our billing system shows that your credit card was declined. This is usually due to inadequate funds. It may also be a problem with your bank.
I got an email a few weeks ago that said that. I was trying to figure out why my payment didn't go through, and that is the email I received. I was so furious because it made me feel like I was a low life for not having enough money in my account, and then it made me upset/anxious because I thought for a second my debit card had been used fraudulently without my knowledge. It turned out that they lock anyone out of 'attempting' to make payments after 3 tries in a 24-hour period. I was not a happy camper.
I just felt like they didn’t have to ‘accuse’ me of not having enough funds. That was really low.
Interesting article. Good new ideas…
This is where John Seddon started a lot of his work, with lean for services as the way to fix it. SLAs are there to be gamed.
Great post. I like the ideas you raise.
Wim, Esteban,
I am a champion of business and I have a slightly divergent view.
I think Customer Satisfaction was relevant for ITS time. It's just that CS' time has passed.
The issue with ANY metric will be the same: what is its shelf life, and what does the metric really mean?
yadu
I'm not a customer service expert, but I think "metrics" should start much higher than a department or function. The slide show made clear that your departmental metrics may be great while another function could be killing you.
How many companies understand their customer lifecycles? Do they understand whether those are shortening or lengthening? Do they know if customer value (to them) is increasing or decreasing, which would help them determine if they are delivering value to the customer?
It seems to me that if you can’t measure these things at a high level, you have no real way of pinpointing that there is a problem, or where it might lie.
If you want to "co-create" value, i.e., deliver value to your customer in a way that delivers value to you (Dick Lee's definition of CRM), then I would argue that you need to start at the top of the organization and drill down.
Hi Mike,
You got that absolutely right: another function could be killing you. We should really stop using (Customer) metrics on the departmental level as we know them today. Linking them to the Customer Life Cycle is another good point you make. Understanding which experiences really matter, over the lifetime of a Customer, is essential, if only to understand where and when there is risk and opportunity (usually they come in pairs..).
Now, just to be picky about a very insightful comment: starting top-down is not the "right" terminology. When starting with Customer desired outcomes, we are approaching this from the Outside-In (really, just picky).
Thanks for reading and commenting. Hope to see you around for more.
Wim
I just love the way the sarcasm oozes out of this presentation!
It seems like a lot of the desired outcomes (or customer jobs, as Graham Hill would say) are highly dependent on the quality of the inputs. Not only should you measure how well customer service staff perform against your criteria for meeting the customer's needs, but you should also correlate this to the quality of the information they have to work with and how they use it – benchmarking against other staff's use of it to measure Resolution Efficiency? You might also want to add measures concerning the employee contribution to resolution efficiency (helping others resolve customer jobs faster).
Hi Mark,
One could say that 100% of the Customer (realized) Outcome depends on the input. Input can come from various stakeholders, including Customers themselves.
Research even suggests that 1/3 of mistakes are caused by Customers themselves. You can find that here: http://ariegoldshlager.posterous.com/one-third-of-all-service-problems-are-caused
Now, with regard to "resolution efficiency": you are absolutely right that the quality of the company "input" towards the Customer service rep (and the Customer!) is of vital importance. That input needs to be nearly perfect for them to provide adequate outcomes for Customers. The challenge here is that "input" is not limited to "information" or "knowledge". I think one should regard the entire "system" (no.. not the IT system, but the "business system") as "input".
Basically I'm not all that interested in monitoring the individual performance of the Customer service rep, not with respect to productivity, quality of the call nor resolution efficiency. From my personal experience the "inputs" are causing the problems and are why Customers are calling in the first place. I think the measurement framework one should develop should focus on correlating the "input" with the "outcomes" (<— can I coin this one? ;-). The main question will then be: if we know the desired outcomes, can we track them back all the way to the inputs (that matter most)?
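To illustrate the kind of tracking-back meant here, a rough sketch, assuming you can collect the "inputs" and the outcomes on the same cadence (here: weekly). All names and numbers are invented for illustration only:

```python
# Correlate hypothetical company "inputs" with a Customer desired outcome,
# measured week by week. Values are made up for illustration.
import numpy as np

weeks = {
    "knowledge_base_accuracy": [0.78, 0.80, 0.74, 0.85, 0.88, 0.90],    # input
    "policy_exceptions_allowed": [0.10, 0.12, 0.08, 0.20, 0.22, 0.25],  # input
    "first_contact_resolution": [0.61, 0.63, 0.58, 0.70, 0.73, 0.76],   # desired outcome
}

outcome = np.array(weeks["first_contact_resolution"])
for name, values in weeks.items():
    if name == "first_contact_resolution":
        continue
    r = np.corrcoef(np.array(values), outcome)[0, 1]
    print(f"{name}: correlation with first-contact resolution = {r:+.2f}")
```

A high correlation here is only a starting point, of course; the cause-and-effect question comes next (see the exchange with Esteban further down).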
For Customer services this really is not that difficult, given that we know the Desired outcomes (mostly). When we talk customer jobs, end-to-end experience & value co-creation this is a different ballgame, but basically the same "procedure" applies (I think; and now I'm also getting more into the area of Esteban's latest comment on this post..)
Well, before I completely drift away: Thx again for reading & commenting here. Looking forward to the next time.
Wim
Here is the very high-level view of where I was trying to get with correlated metrics.
You talked before about co-created value. When you co-create value there are always, in my mind at least, two metrics: what is value for the user and what is value for the enterprise. If you can come up with a framework that will measure those two, correlate them, and present them in some sort of makes-sense model, you are probably getting closer to where things should be.
In my perception it would take listing the basic metrics for each type of value and enforcing some sort of dependencies (such as: metric A cannot be used with metrics Z, V from… or something like that). Once you have the pairs you can begin to see the metrics that the company can use based on the desired outcomes from the customer.
Or something like that, better explained and implemented.
Sorry, kinda late / early over here… :)
We can talk more at another time…
Hi Esteban,
Here are my thoughts, elaborating on yours:
I think the approach of correlating value for the Customer (= Desired Outcomes) and value for the Company (= Desired Outcome for the Company = Customer Lifetime Value) is definitely the way to go. Building a measurement framework is then about defining company "inputs" that correlate with both Desired Outcomes and positively influence the correlation between the two (I have a simple picture in mind of an upside-down triangle: inputs on the bottom and correlation arrows between the points on all of the "legs", make sense?). One could define the inputs as outcomes too, meaning that the "inputs" need to have a certain value outcome to optimize the correlation between Customer Value and Company Value. Ergo: what you should aim for is a triangle of metrics (Customer Value – Company Value – Company Inputs) that all show high correlations.
The best way to go, of course, is not proving mere "correlations" but true cause-effect relations. This is all about understanding which "inputs" have the most significant effect on which outcomes, and then balancing them all (meaning: prevent a certain "input" from having a positive effect on only one outcome and not on the other, or worse, a negative effect on it). I think this is what you mean by enforcing dependencies. Please correct me if I'm wrong here.
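A minimal sketch of that triangle check, under the assumption that you can observe Customer value, Company value and the candidate "inputs" over the same periods; every name and figure below is hypothetical:

```python
# For each candidate "input", check whether it correlates with Customer value and
# Company value in the same direction, or pulls the two apart. Illustrative data only.
import numpy as np

customer_value = np.array([7.2, 7.4, 7.1, 7.7, 7.9, 8.1])   # e.g. outcome-based satisfaction score
company_value  = np.array([305, 312, 308, 300, 296, 291])   # e.g. average Customer lifetime value

inputs = {
    "knowledge_base_accuracy": np.array([0.74, 0.78, 0.76, 0.83, 0.86, 0.90]),
    "scripted_upsell_rate":    np.array([0.10, 0.12, 0.11, 0.18, 0.22, 0.25]),
}

for name, series in inputs.items():
    r_customer = np.corrcoef(series, customer_value)[0, 1]
    r_company  = np.corrcoef(series, company_value)[0, 1]
    verdict = "balanced" if (r_customer > 0) == (r_company > 0) else "pulls the two values apart"
    print(f"{name}: r(customer)={r_customer:+.2f}, r(company)={r_company:+.2f} -> {verdict}")
```

This only surfaces candidates; establishing true cause-effect relations, as noted above, still takes controlled changes or at least longitudinal analysis.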
Inputs, in my humble opinion, can be a whole lot of things. I think of pricing, customer experiences, (customer services) policies & business rules, product/service features, employee engagement, etc. The ones that matter most (it will not be just one, for sure.. because killer metrics do not exist ;-) will differ per company and may even differ per customer (job) segment. Finding those can be a lifetime of work, but a fun journey too ;-)
Well.. blow me to bits..
Wim,
This is a great summary and something that is right in line with my observations – especially around feedback and satisfaction. I still don't know why most people worry about satisfaction indexes or satisfaction levels yet don't correlate them with churn. What good is 90% satisfaction if you have churn of 23%? That means at least 13% of your customers churned despite claiming to be satisfied – either they are not really yours (since yours are 90% satisfied), or they are lying in the survey. Either way – solve that problem before you gloat about your satisfaction levels!
If I had a magic wand I would definitely wave it and make people simply correlate their metrics before they commit to them. Just doing that would solve around 75-80% of measurement follies out there.
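Esteban's 90% / 23% example can be turned into a quick sanity check; a back-of-the-envelope sketch, assuming satisfaction and churn are measured over the same customer base in the same period:

```python
# If only the dissatisfied 10% could plausibly churn, anything beyond that share
# must be churn from customers who reported being satisfied (or whose answers
# don't reflect reality). Numbers taken from the comment above.
satisfied = 0.90   # share of customers reporting they are satisfied
churned = 0.23     # share of customers who actually left

unexplained_churn = max(0.0, churned - (1 - satisfied))
print(f"At least {unexplained_churn:.0%} of all customers churned despite 'being satisfied'")
# -> At least 13% of all customers churned despite 'being satisfied'
```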
In any case, great post (and I adore the slides), and I am certain that we will see the proposed solution for avoiding these mistakes in the near future, right?
(BTW, the new look and feel of the blog is very nice – spot on!).
Thanks
Esteban
Hi Esteban,
I had a feeling you would like this one. And you're absolutely right, it is time that I get to solutions. I've been working on that for a while now and my thoughts are getting clearer every day, even when we discuss Social CRM, Outcome Driven Innovation, Experience Design and more..
One thing I know for sure: designing a measurement framework and metrics to measure against desired outcomes will be one of my key propositions when I launch my new consultancy firm early next year… Still building on a firm approach, though. So now would be a great time for you to share your thoughts & experiences on that too ;-)
Thx for reading, for the compliments on the new look & for challenging me.
Thought I'd share this link too, on my woes with the bad alignment of CS metrics: http://twurl.nl/wg3hwp
This is an excellent post and I hope the people who create and design strategy around CS get to understand it and take a reality check.
To a large extent, I am beginning to think this problem is like the common cold. ALL of us have been affected by the bad alignment you are talking about. And yet, we just don't see it.
I have been at the receiving end (and still am) of wrong metrics controlling the CS engine – an example is the "time to close call" measure. The way that turns out in practice is that the Helpdesk folks always seem to find excuses to close a call for the flimsiest of reasons, so I have had calls closed not because my issue was resolved, but because I was "not answering the phone" or "not at my desk" or some such thing. I have to repeat the process of logging the call ALL over again if I even want any attention.
That is NOT service. That is NOT solving a problem.
You got the point absolutely spot on right in your title. These metrics should not be about defining the operations in the Customer Service Department, but about serving customers. Process owners who define these metrics must sit down and visualize how they will be able to delight, or even just satisfy, customers who seek help – and then see how that can be achieved in a repeatable, measurable way.
Thx @BouncingThots (making sure readers can find you on twitter too ;-)
Nothing to add, and I like the great example post you mention in your other comment.
This slide deck is spot on.
What I've experienced in past Call Centers is completely in line with the metrics you've cited throughout this slide deck. The numbers are based solely upon call time, volume, and a note in the system that closes the case as "Resolved". The disparity between these metrics and true customer satisfaction rates is troubling. More often than not, the customer satisfaction survey is limited to an email with a 1-5 scale and a comments section that is largely ignored, if filled out at all.
Thx Dik for reading, re-tweeting & commenting! Good to see I'm not the only one seeing it this way.