A few weeks ago I posted a blog with the title: Why keep chasing the wrong Goose? At the end of that post I promised to come back to you with a more in depth post on what you should be chasing, or in other words: Which metrics should you put in place in the Customer Services environment to get “it” right?
I truly believe our “profession” is in need of a new dashboard. Current metrics have brought our customers and our companies too little. Most of the time, when a company gets bad press, it is about the bad service provided through customer services. The existence of TV watchdog shows depends on this, and their number is only increasing. Thus: we need to re-invent ourselves. We need to re-invent the way we do, act, live, breathe and measure (the value contribution to the customer and the company of) Customer Services.
Reading lots of research on the link between the Customer Service Experience and Customer Loyalty, material on metrics like the ACSI, the Customer Effort Score and NPS, ways to phrase questions in customer surveys, and discussions of the theme on Twitter and in other blog posts quite dazzled me, so I let it rest for a while.
This week I came to think: why not use the power of Social Media to co-create, through collaboration, the new Customer Services Dashboard 2.0? Together we know more, we can discuss viewpoints and we can leverage experiences. I hope you share my thoughts and are willing to contribute.
A short introduction:
Before we can build the new Customer Services Dashboard 2.0 we need to have a goal (we do not measure because we want to measure, we measure because we want to improve something).
I would like to think that the main goal of any Customer Services department should be to contribute to the company’s goal. Since I do not know every company’s goals, I propose that we set as our goal:
- Improvement of Customer Loyalty (Behavior)
Customer Loyalty behavior consists of three specific “actions” that companies aim for to boost sales and profits. These three are:
- Buy again (i.e. extend a contract or repurchase the same product when used)
- Buy more (i.e. buy additional products or services from the company)
- Spread the word (i.e. tell friends and family about the great product/service and advise them to buy it too)
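To make the three behaviors concrete, here is a minimal sketch of how they could be counted from customer records. The field names (`repeat_orders`, `distinct_products`, `referrals`) are hypothetical, chosen purely for illustration:

```python
def loyalty_behavior_rates(customers):
    """Share of customers showing each of the three loyalty behaviors.

    `customers` is a list of dicts with illustrative fields:
    - repeat_orders:     number of repeat purchases (buy again)
    - distinct_products: number of different products bought (buy more)
    - referrals:         number of people referred (spread the word)
    """
    n = len(customers)
    buy_again = sum(1 for c in customers if c["repeat_orders"] > 0) / n
    buy_more = sum(1 for c in customers if c["distinct_products"] > 1) / n
    spread_word = sum(1 for c in customers if c["referrals"] > 0) / n
    return {"buy_again": buy_again, "buy_more": buy_more,
            "spread_the_word": spread_word}

# A tiny illustrative sample of four customers.
sample = [
    {"repeat_orders": 2, "distinct_products": 1, "referrals": 0},
    {"repeat_orders": 0, "distinct_products": 3, "referrals": 1},
    {"repeat_orders": 1, "distinct_products": 2, "referrals": 0},
    {"repeat_orders": 0, "distinct_products": 1, "referrals": 0},
]
print(loyalty_behavior_rates(sample))
# → {'buy_again': 0.5, 'buy_more': 0.5, 'spread_the_word': 0.25}
```

The point is only that each behavior is observable and countable; a real dashboard would of course pull these from transaction and referral systems.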
To make it a little easier to link this desired customer behavior to Customer Services I suggest you read some of the following:
So, I hope you are all warmed-up now. I am!
How will this continue?
First of all I need to know if you are up for the challenge? Do you also believe that our Customer Services Profession is in need of building a Customer Services Dashboard 2.0?
If yes, or no, please leave a comment below and share your views. Are you interested in contributing to build this dashboard, please connect to me through LinkedIn or Twitter, and let me know you’re interested. I’ll get back to you within a couple of days, to let you know how we are going to get this done and how you can contribute (and benefit from joining in).
Looking forward to the collaboration journey. Are you in?
@piplzchoice One question: how are you going to make it actionable? The value of experience gap alone tells you nothing if you have none of the textual data to compare it to — their intent (stated and actual — these too often differ).
There’s a whole lotta triangulation that has to go on here for REAL meaning.
I am very interested in this conversation and would like to suggest a somewhat different perspective. It was previously stated that every company and business model is sufficiently different that they cannot be measured using the same criteria and methodology – and I completely agree. However, I do think that from the customer’s perspective (specifically the consumer’s, for the purpose of this discussion), the set of metrics can be extended to meaningfully measure the delta between consumer expectation and actual experience across different products of different companies. In my opinion this approach could add valuable content to the discussed dashboard, because it would allow some market-oriented reference points as well as more actionable, detailed decision-support information.
I am adding the following one cent to the #cctr Dashboard 2.0 conversation. I am particularly following @GrahamHill’s advice on how to proceed: “@ariegoldshlager Co-creation metrics from customer and company perspectives? Have fun.” I am certainly planning on having fun, but the first part of the suggestion is a tall order.
I would like to start by consulting WikipediA on Co-Creation:
As you can see the definition is very straightforward: “Co-creation is the practice of product or service development that is collaboratively executed by developers and stakeholders together”
The #cctr Contact Center, and all other touchpoints, should clearly be transformed from Enterprise 1.0 (Value Extraction) to Enterprise 1.5 (Value Exchange) or Enterprise 2.0 (Value Co-Creation).
However, as @GrahamHill already noted, the Value Co-Creation concept has, to date, been applied more frequently to Business to Business situations. I would therefore like to start by more precisely clarifying and illustrating potential Call Center Business to Consumer Value Co-Creation applications.
By the way, we should change Business to Business to Business with Business and Business to Consumer to Business with consumer.
Can you ask the #scrm community to reply with specific Current or Future State-Vision examples or illustrations of Call Center Consumer to Consumer Value Co-Creation? How would the concept apply, for example, to a Business with Consumer industry such as Retail Banking, Insurance, Credit Cards, Travel, Shipping, or (Online) Retail?
Once we create this “library” of examples, we should next outline a conceptual framework for Business with Consumer Value Co-Creation. What are the key elements of the value that is Co-Created? The concept will then be less elusive and more conducive to Dashboard 2.0 development.
I hope I am on the right track,
Let me offer some of my ‘old’ advice, now updated (which I just recalled when recommending someone else to this discussion).
To truly ‘discover’ the relevancy of metrics you have to work between two distinct toolsets that I differentiate as Transactional Analytics and Behavioral Analytics. The right ‘questions’ are found in the data between these two focuses.
There are two particular tool approaches that I’m familiar with to do this.
1. Tie iPerceptions to something like Omniture (gosh, I just discovered that HBX/WebSideStory Hitbox, which I last used, was bought by Omniture in 2007 — I’ve been out of the loop for a long time)
2. Leverage the strengths of the transactional analytics in a tool of choice with the more specific behavioral/transactional analytics of Usability Sciences WebIQ
While I have always loved iPerceptions and their great UI, without a direct tie to the specific clicktrail for each interaction they report, the data is incomplete.
WebIQ does both, but covers only the ‘random’ transactions which visitors opt in for. A full picture of all transactions is also needed for baseline evaluations.
If you don’t know why people are coming to the site, what they really wanted to do, and how successful they feel they were at it, you’ve got NOTHING.
As is illustrated in the research done for “Tribal Leadership”, the ‘language’ we use is significant. The typical terms that are being used “Customer Acquisition”, “Proactive Retention”, “Winback” — might as well say “inquisition”, “heresy”, “tribunal”. Clearly I’m being facetious to the extreme, but the languages of old business models cannot be used in the new business models required of an Enterprise 2.0 era.
If the terms are not terms that would come out of a customer’s mouth (not necessarily literally but clearly honoring their point of view), then they should be highly questioned for relevancy and applicability.
As well, even when these terms were used, they were used ‘globally’ as if they applied to all business models. They didn’t and the newer measures won’t either.
While there are a lot of business models that can be generalized (as any off-the-shelf product imposes), I’ve never worked in two similar business models, ever — even when the industries were the same. I have many examples, but one that illustrates huge differences in a single industry (and sadly the ‘individuals with responsibilities’ didn’t see these distinctions at all), was in banking. Two regional banks (both very large in their own right) merged. The technologists were attempting to standardize the systems across both companies, but they missed a critical distinction (and kept running into dead ends not realizing it): the customers were fundamentally different. While both banks had branches, the high-value (not just monetarily) customer was a ‘commercial’ one; for the other it was more ‘retail’. The data models for each of these customers are considerably different (for commercial the address would likely be a business one, for retail it would likely be residential — these are oversimplifications just to make a point).
The relevancy of the metrics depends on 3 critical dimensions with influence from 2 others:
+ customer attributes (long list here)
+ product/service attributes
+ business model/culture
+ state/relevancy of relationship (both sides)
+ health of channels (barrier-free)
Far too many major businesses I’ve been in have no idea who their customers really are and what is most important to them. They’re operating blindly.
Where a business stands on all of this dictates the relevancy of certain metrics. As is the case with all good design (and metrics fall into this category – they must be designed), the right metrics come out of the context of these dimensions. They literally ‘tell you’ what the right metrics should be (although they’re best at telling you what’s NOT appropriate).
And…what’s appropriate will change as the dimensions change. As certain issues are overcome and as new ones present themselves, what’s most relevant as a metric has to change.
Most of the measures I’ve seen mentioned here might tell you how a ‘machine’ is operating. Enterprise 2.0 is not ‘machine’ focused. I will concede that some ‘operational’ metrics will still be relevant (esp. “wait times”), but they’re supportive, not growth. They should be ‘givens’. They’re the sort of metrics for which no compromise is accepted. They’re fundamental to ‘having’ a relationship, not ‘building’ one.
To answer the question about variability — that’s a means by which to assess whether or not a metric is relevant. If a metric is ‘flat’ — always the same, with no variability — it’s not really useful. It’s not measuring real change; it’s not reflecting the true ebbs and flows of business activity in a way that allows you to see how a particular action impacts the business. You WANT to see change so that you can assess the success or failure of specific actions and be able to adjust (the ability to differentiate as well as correlate actions and reactions is by no means easy, but it evolves, often leading to new measures).
You test the relevancy of the metrics by reverse-engineering from known ‘unacceptable’ situations to see if the metrics responded (advance warning signals).
ALL metrics must be actionable.
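The “flatline” test above can be sketched numerically: a metric whose relative variability (coefficient of variation) is tiny carries no signal about the effect of your actions. This is a minimal illustration; the 5% cutoff is an arbitrary assumption, not a standard:

```python
import statistics

def metric_usefulness(values, cv_threshold=0.05):
    """Flag a metric series as a 'flatline' when its coefficient of
    variation (stdev / mean) falls below a threshold.

    The 5% threshold is purely illustrative; a real analysis would
    calibrate it against known good and bad periods.
    """
    m = statistics.mean(values)
    if m == 0:
        return "undefined"  # CV is not meaningful around a zero mean
    cv = statistics.stdev(values) / abs(m)
    return "flatline" if cv < cv_threshold else "responsive"

# A near-constant series (e.g. "calls answered %") tells you nothing;
# a series that moves with business activity might.
print(metric_usefulness([98.1, 98.0, 98.2, 98.1]))  # → flatline
print(metric_usefulness([72, 85, 64, 91, 78]))      # → responsive
```

This also matches the reverse-engineering idea: take a period you know was ‘unacceptable’ and check whether the metric actually moved there.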
Oh, one of my favorite unopened cans of worms. My time is limited at the moment and there’s SO much to say (but the stories have stayed in my head because no one was ready to have the conversations).
Categorically the standard metrics are next to useless. They’re like going into the doctor with issues and he says — “but your heart is beating”.
The true assessment of a measure is its rate of change, its variability. “Flatlines” are of no use.
Worse is having variability that is not tied to the activities that can lend some clues to the shift.
Let me just close by saying, this isn’t a ‘customer service’ issue. This is a “total customer relationship” issue. It transcends ALL touchpoints: call center, online, etc. If the dashboard is not addressed across all the points of interaction you’re trying to assess the whole with just a piece of the picture.
I would love to keep you on board for the collaboration. Please, when you have time, share your thoughts on Arie’s comment and my reply to it.
I would also appreciate a short elaboration on the “rate of change” or “variability” so I can better understand the concept.
Last but not least: great addition! It should be about the entire experience, not about one single touch-point in the experience.
- Improvement of Customer Loyalty (Behavior)
Because there are so many angles to this post, I thought I’d take one of them and go into more depth: let’s look at my participation as a customer in this engagement conversation.
On improvement of customer loyalty:
What if, for example (and I had written about Twitter and the Auto Industry, about improving their social media through the Twitter medium), a company took all their customers, tracked them, found them and followed them on Twitter? Behind that Twitter account would be a personality, a brand ambassador as it has been called, contributing to keeping that loyalty, using social skills to keep customers active in that loyalty, and seeing them as people and not just customers.
This would take one person, who would converse with these loyal customers, get them excited about an actual rep from the brand talking with them, and use strategy to keep them satisfied while improving the customer experience in a new way.
So when the time comes, and through this medium, you get to measure the behaviour of these loyal customers, and in due time offer a free test drive of a new car and set up the details.
And how about word of mouth, and how others will talk about it?
It’s a brainstorm idea, but the point is to build upon the social using the media, and that takes a social person.
This in itself is a way to keep customer loyalty.
My first question then is: what type of character or brand social-media consultant is needed, and what are the qualifications?
You’re one step ahead of this discussion. This post aims to design a new set of measurements and metrics, set up in a balanced way (the term balanced scorecard comes to mind ;-) though I still like Dashboard better). You’re already into measures for improvement.
Your idea, nevertheless, is very valid and some companies are already experimenting with it. I’m sure you will find them on Twitter or elsewhere. And that’s the good thing about Social Media: you can ask them your last question directly.
Be sure to tell me the answers you get.
Thanks for stopping by!
My thinking on the Customer Service Dashboard 2.0 is very flexible at this point, and I certainly have more questions than answers:
1) Consider predicating the Dashboard on 3 elements: Call Center Effectiveness, Call Center Efficiency, and Call Center Transformation. (How do you balance the 3 elements?)
2) Consider reflecting the following perspectives: Customer Perspective, Company Perspective, and Customer Service Representative Perspective. The Dashboard should therefore be designed to facilitate a 3-way win: Customer Win, Company Win, and CSR Win.
3) From the Company’s Perspective, the Call Center has to be both Effective and Efficient. Balancing the two considerations, however, could prove challenging.
4) From the Customer Perspective, the Call Center also has to be both Effective (helping the customer accomplish a certain purpose) and Efficient with the Customer’s Time. Balancing the two considerations, again, could prove challenging.
5) From the Company Perspective, Effectiveness could be measured in terms of Customer Profitability or Lifetime Value. Customer Loyalty will certainly be one key element, but not the only one.
6) We will have to potentially also measure several key concepts that defy easy definition, e.g., Empathy, Trust, Credibility, Engagement.
7) The Call Center Transformation could focus on Value Co-Creation or Exchange, but we need to define in what sense, e.g., Co-Product Design, Co-Service Design, Open Innovation, and more…
8) We also need to define the Call Center Role as a Customer Lifecycle Management “Instrument”: Customer Acquisition, Onboarding, Development, Proactive Retention, Winback.
More after I get feedback from you all…
Thanks for joining the discussion, or actually the collaboration! Great input.
Let me start by saying I’m as flexible and with questions as you are. I think we all are. The best we can do is share our views and try to co-create.
I also have a question beforehand: what do you think of Paula’s suggestion that this should be about all touchpoints, and not the customer services / call center only? I do agree that value with Customers is co-created through the combination of all touchpoints that make up the experience. A perfect dashboard would reflect that and would enable us to see the effects of (changes in) actions at one touchpoint on the others. Please share your thoughts.
Now into the meat of your considerations and suggestions.
@ 1) I like the three elements. I would propose adding one: Touchpoint Value. Any touchpoint in the 2.0 era should add value for the three perspectives you mention at 2). E.g. the call center could have great value in reducing systemic costs (of failure): adding value to the Company in reduced costs, adding value for the Customer in an improved experience, and adding value to the employee through fewer complaints, less work-related stress, etc.
@ 2) No argument here. My thought exactly
@ 3) Balancing effectiveness and efficiency is the main challenge in labour intensive touchpoints like the call center. That’s also why I think it is important to add the Value element above. This could provide an automatic balance. Productivity is and will be a required measure. If a productivity decrease can be balanced by a company or customer value increase, there is less of a problem (I hope).
@ 4) From a Customer perspective I think efficiency is not the right wording. Why not use Customer Effort? Companies should care about the amount of effort they require from their Customers to get their jobs done. Long waits, call-backs, answering questions for information the company already knows, etc. are all covered by the Effort element.
@ 5) Agreed with the concepts. I suggest we clarify the specific measures on this later on, when we have some common understanding of the framework and general measurement concepts.
@ 6) Could not agree more. Same as in 5). I am hoping for Paula to put in some concepts here. Or maybe we should try to get input from specialists on the measurement of semantics? They should be out there.
@ 7) I’m not fully grasping what you’re aiming at with the Transformation element. Could you elaborate a little more on this?
@ 8) There definitely is a role for customer services in all those elements, as there is for other touchpoints too. Maybe that’s why it is best to extend this effort to the “whole” customer experience / all touchpoints. I think the framework and measurement concepts will be quite similar across touchpoints. Metrics will differ, but I do not see that as a big issue here.
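To make the Customer Effort idea from point 4 tangible, here is a minimal sketch of summarizing effort-survey responses. The 1–5 scale and the “high effort” cutoff of 4 are illustrative assumptions, not part of any published Customer Effort Score methodology:

```python
from statistics import mean

def customer_effort_summary(scores):
    """Summarize Customer Effort survey responses.

    `scores` are answers on an assumed 1 (very low effort) to
    5 (very high effort) scale; the >= 4 'high effort' cutoff is
    an illustrative choice for flagging painful interactions.
    """
    avg = mean(scores)
    high_effort_share = sum(1 for s in scores if s >= 4) / len(scores)
    return {"average_effort": round(avg, 2),
            "high_effort_share": round(high_effort_share, 2)}

# Seven hypothetical responses after a service interaction.
print(customer_effort_summary([2, 1, 3, 5, 4, 2, 1]))
# → {'average_effort': 2.57, 'high_effort_share': 0.29}
```

Tracking the high-effort share alongside the average would surface exactly the long waits, call-backs and repeated questions mentioned above, rather than letting them average out.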
Let me know your thoughts. I will pro-actively invite the others too, to keep the process rolling.
Glad to collaborate!
I generally agree with your thoughtful feedback, and would like to propose the following additional observations. I am also in agreement with Paula’s suggestion to extend the work to all applicable customer touchpoints.
@ 1) I am intrigued by your “Touchpoint Value” Concept. Do you mean “Value” in the “Relative Advantage (or Cost-Effectiveness) for Accomplishing Certain Purposes” Sense?
In any event, we need to also proactively consider any implications of the following: (The) Customer Has Escaped: customers are not mindful of channel boundaries: http://bit.ly/2UtaF
We can certainly “direct” certain customers to certain touchpoints for certain purposes, but they can prove elusive.
@ 4) I agree with leveraging the “Customer Effort” Concept. Consider also “Customer Effective Efforts,” to adjust for customer systematic failure.
How do we reflect the implications of: How to Prevent Your Customers From Failing: http://bit.ly/35hkFo
@ 7) I will elaborate on the Transformation element, under a separate cover later.
As already shared with you via LinkedIn, I am always eager to learn better methods to improve customer loyalty. I have read about the Customer Effort Score method on the website of CCC.
I found this “performance management guide 2009” http://www.bright.se/english/11/page.asp?page_id=3765&type=info&sub_nr=2&item_id=9287
My opinion: too much of the same old, and too little on effectiveness and customer experience.
Also tweeted it with hashtag #cctrdb20
We can discuss there, for now.
Pretty obvious that this is going to be inflected with employee empowerment, Enterprise 2.0, Social CRM, and how transparency of interaction affects those metrics. Imagine that your whole interaction cycle was being broadcast, live, to anyone interested. Now dashboard that!
Although we have a small business, you can count me in!
John Vrakking | daVinci-Orde op Zaken
The customer values only one thing from customer service: give me what I want, and quickly. That is why I focus on effectiveness metrics, to make sure the organization is delivering on “customer values”.
This is not very complicated stuff. A company provides customer service to customers with questions and complaints. The company should measure whether those questions and complaints get answered and resolved.
Am I missing something else in reference to customer values?
I am in – but with a caveat (you did not think it was going to be that easy, did you?).
Here is my main concern: each organization is going to be different. There is no way to make the same set of metrics fit two different organizations – even if they are in the same industry and have a similar business model. This is the problem that all the metrics you mention above (NPS, PCE, ACSI, etc.) fall into: their proponents think one organization can be compared with another.
I have written about it, said it many times, and I work with my customers to make sure they understand that the metrics to use are theirs, and theirs alone, and that the correlation has to make sense for their organization and strategies. What good would it do, for example, to correlate NPS with increased revenue when the organization is interested in cost savings? In a specific function? For a specific group?
As you see, once the questions begin to fly, what worked for company A is certainly not going to work for company B. Processes, actions, functions, metrics, segments, etc. are all different between organizations.
I am in, as long as we can work something up that is very flexible and does not commit any one organization to a specific set of metrics or measurements – but we build a framework (like the one I use in my methodology – Strategic Measurement Framework) that works across all of them.
Thanks for joining in! Great to have you on board.
With regard to your caveat: Exactly how I feel about it. It is not about defining exact metrics but a measurement framework that will provide insights on what “kind of metrics” one needs to put in place. The framework tells us what is important, more important and tells us how they (measurements) are related to each other as well as to the goals.
This brings me to a tweet I received from @rotkapchen:
@wimrampen Your goals are out of balance. I read them and immediately get “authenticity” alerts. Goals not focused on consumer values.
What are your thoughts on that? I read “consumer values” as the jobs they want to get done. Should we incorporate those into the goals? Can we find any substantial research on the jobs consumers want to get done through Customer Service? Or are these “jobs” so clear to us who are passionate about customer service that we do not worry about being “out of balance”?