Measuring Customer Performance – The Value Co-Creation Way


I came across a new Dutch initiative to measure a company’s Customer performance: the Dutch Customer Performance Index (DCPI) (Dutch only), a new objective and validated index for measuring Customer performance. I thought it worthwhile to share with you.

The Dutch Customer Performance Index is an initiative of the Customer Insights Center of the University of Groningen (Dutch only), intelligence bureau MIcompany and market researcher MetrixLab. The University of Groningen is responsible for the scientific basis of the research, MIcompany determines which value companies create for themselves from their Customers, and MetrixLab is responsible for data collection and for building the benchmark database.

The DCPI research is conducted on a regular basis for 80 of the largest service providers in The Netherlands, based on responses from 4,000 Dutch consumers.

The DCPI measures and compares these 80 companies based on two perspectives of a company’s Customer performance:

  • The value a company creates FOR their Customers: Value to the Customer (V2C)
  • The value a company creates for themselves WITH their Customers: Value to the Firm (V2F)

The Value to Customer Dimension

The V2C dimension is based on articles by Rust, Lemon and Zeithaml and by Verhoef, Langerak and Donkers, and consists of four components, all carrying equal weight in the total score (a rough sketch of the scoring follows the list):

  1. Relationship Equity: Valuation by Customers of the relationship with the company.
  2. Value Equity: Valuation by Customers of the price-to-value relationship.
  3. Brand Equity: Valuation by Customers of the brand.
  4. Emotions: Valuation by Customers of both positive and negative emotions that can be associated with a company.
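
Since the four components carry equal weight, the V2C score effectively boils down to a simple average. A minimal sketch of that idea in Python (the 0–10 component scale and the function name are my own assumptions; the DCPI’s actual scaling and normalization are not published here):

```python
# Minimal sketch of an equal-weight V2C score. The 0-10 component scale and the
# function name are assumptions; the DCPI's actual scaling is not published here.
def v2c_score(relationship_equity, value_equity, brand_equity, emotions):
    components = [relationship_equity, value_equity, brand_equity, emotions]
    return sum(components) / len(components)  # each component weighs 25%

print(v2c_score(relationship_equity=7.0, value_equity=6.5, brand_equity=8.0, emotions=6.0))  # 6.875
```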

The Value to Firm Dimension

The V2F dimension is based on articles by Gupta and Zeithaml, by Reichheld, and by Gupta, Lehmann and Stuart, and also has four equally weighted components:

  1. Revenue: Customer spend on a company’s service(s).
  2. NPS: Net Promoter Score.
  3. Retention: The likelihood of Customer retention
  4. Risk: The risk to future revenue, based on the variation between the three previous components. In short: the higher the variation between the three individual scores, the higher the risk (sketched below).
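
The same equal-weight idea applies to V2F, with the twist that the fourth component is derived from the spread of the first three. Here is a sketch of how that could work, using the textbook NPS formula (percentage of promoters minus percentage of detractors) and the standard deviation as a stand-in for “variation”; the DCPI may well use a different risk measure, so treat this purely as an illustration:

```python
from statistics import pstdev

def nps(promoters, passives, detractors):
    """Textbook Net Promoter Score: % promoters minus % detractors (range -100 to 100)."""
    total = promoters + passives + detractors
    return 100 * (promoters - detractors) / total

def v2f_score(revenue_score, nps_score, retention_score):
    """Equal-weight V2F sketch. Assumes the three inputs are already normalized to 0-10.
    Risk is derived from the spread of the other three components: the higher the
    variation, the higher the risk and the lower the risk score. The DCPI's actual
    risk formula is not published here; the standard deviation is my stand-in."""
    base = [revenue_score, nps_score, retention_score]
    risk = pstdev(base)                     # higher spread = higher risk
    risk_score = max(0.0, 10.0 - risk)      # turn risk into a "low risk is good" 0-10 score
    return sum(base + [risk_score]) / 4     # four equally weighted components

print(nps(promoters=45, passives=35, detractors=20))  # 25.0
print(round(v2f_score(revenue_score=7.0, nps_score=5.0, retention_score=8.0), 2))  # ~7.19
```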

My take on this

I like this research for a few reasons:

  • It’s Dutch... but that doesn’t mean anything to most of you, probably ;-)
  • It has a scientific/academic foundation and the research is conducted under the responsibility of a respected Dutch University.
  • The two dimensions fit into my “value co-creation” thinking.
  • The Value to Firm dimension does not talk only of financial value, and it’s not based on a single number.
  • I particularly like the way the research brings the risk of future earnings into the equation in the first place, and especially that it is a component based on the variation between the three other components. That makes a whole lot of sense to me.

Additionally, I’m not a fan of NPS as an indicator, certainly not when it’s presented as a “silver bullet”. I would add at least one more question next to the NPS question:
– Did you recommend company x/y/z over the past three months?

Unfortunately I do not have insight into the questionnaire itself. Hopefully I will obtain it. If I do, and get permission, I will put it up here too.

Curious as to what you all think. Is there something similar to this elsewhere in the world? If so, how is that working? Is this the closest we get to measuring value co-creation at a level that is comparable across companies? If not, what are your suggestions for improvement?

Metrics – to fool or be fooled – that’s the question!

KPIs should be about understanding what you need to improve to better meet your Customers’ needs and desires. Designing a measurement framework, the metrics that go with it, and the cross-functional dashboards that create cross-silo understanding of how improvement in one area affects another should not be taken lightly. At the same time, I consider putting together a Customer Experience Feedback Analytics team, to keep tracking metrics, search for new correlations and continuously increase your understanding of what truly matters to Customers and what you need to do about it, a must for every company.

Unfortunately, in the perception of many, KPIs seem to exist only to please “the boss” or to show “the boss” how well one is doing. KPIs or metrics are often not well designed, and are sometimes extremely well designed, but for a very different purpose: to fool your boss (e.g. to secure a bonus, to get the money to launch that project you really want to do, or simply to avoid having to do anything at all).

Here are two examples of metrics that fit into that last category:

Pursuing your own desires, not your Customer’s:

A company understands that their Customers desire a speedy turn-around time for account-change requests. They asked their clients what they would consider a speedy turn-around time, and on average the answer was that 10 business days would be fine. The manager therefore put a metric in place: average turn-around time of account-change requests. After a big ICT project (one they had always wanted to do, but never had a sound business case for) they got the average about right. Unfortunately, Customer Satisfaction did not increase, and the (complaint) volume in their Contact Center did not decrease either; it increased!

What happened: after analyzing the data on turn-around times, it was discovered that the company had been successful in decreasing the turn-around time of requests that were already being handled within 10 days; those now took 3 days. A great achievement, but clearly not in line with the desired outcome of their Customers. Even worse, the turn-around time of requests that had been handled outside the 10-day limit increased from 15 days to 18 days. A lot of money had been spent on reducing the average turn-around time (through system automation), only to find out it did not produce any of the desired outcomes for the company. The manager is happy though, with a state-of-the-art system and a good bonus for meeting the KPI goals.
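
To make the effect concrete, here is a small back-of-the-envelope calculation. The 3, 15 and 18 day figures come from the case above; the 80/20 volume split and the 7-day starting point for the fast requests are my own illustrative assumptions:

```python
# Back-of-the-envelope: the average KPI improves while the Customer experience worsens.
# Assumption (not from the case): 80% of requests were already "fast", 20% were "slow",
# and the fast requests took about 7 business days before the project.
fast_share, slow_share = 0.8, 0.2

# Turn-around times in business days; the 3, 15 and 18 day figures come from the case.
before = {"fast": 7, "slow": 15}
after = {"fast": 3, "slow": 18}

def average_turnaround(times):
    return fast_share * times["fast"] + slow_share * times["slow"]

print(f"Average before: {average_turnaround(before):.1f} days")  # 8.6 days
print(f"Average after:  {average_turnaround(after):.1f} days")   # 6.0 days -> KPI looks great

# But the only requests Customers actually complain about (beyond the 10-day expectation)
# are still 20% of the volume, and they got slower: 15 -> 18 days.
```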

Little effort, maximum results

A company analyzed that their Call Center First Contact Resolution (FCR) rate was too low, causing high levels of dissatisfaction among the Customers who contacted the Contact Center. They also found that most of the repeat traffic occurred within 2 weeks of the first call. Hence the responsible contact center manager put a KPI in place to track and reduce the repeat volume occurring within 2 weeks. After as little as one month they saw an increase in the new First Contact Resolution KPI, and after 3 months they hit their target (95% FCR). Unfortunately, and you can feel it coming, dissatisfaction levels did not decrease, nor did call volume.

What happened: contact center management proved to be very effective. They implemented the new KPI all the way down to the level of the Customer Service Representatives, who of course know exactly how to influence it without any structural improvements. The CSRs made a great effort in managing the expectations of calling Customers: “it will take at least 2 weeks before your request will be dealt with.” No improvements were made to the actual turn-around time of the Customer requests, so Customers simply kept calling back after the two weeks had passed. A good example of: little effort, maximum result!
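
A small sketch of why this works (or rather, why the metric fails): if only repeat calls within the 2-week window count against FCR, then telling Customers to wait 15 days moves most callbacks just outside the window. The measured FCR hits its target while the number of repeat calls stays exactly the same. All numbers below are illustrative assumptions:

```python
# How a window-based "FCR" metric can be gamed. All numbers are illustrative assumptions.
WINDOW_DAYS = 14  # repeat calls within this window count against the FCR KPI

def measured_fcr(first_calls, repeat_call_days):
    """repeat_call_days: for each repeat call, the number of days after the first call."""
    repeats_in_window = sum(1 for d in repeat_call_days if d <= WINDOW_DAYS)
    return 1 - repeats_in_window / first_calls

first_calls = 1000

# Before: 400 unresolved Customers call back after ~10 days.
before_repeats = [10] * 400

# After: CSRs tell Customers "it takes at least 2 weeks", so most call back on day 15,
# just outside the measurement window; a handful still call back earlier.
after_repeats = [15] * 350 + [12] * 50

print(f"Measured FCR before: {measured_fcr(first_calls, before_repeats):.0%}")  # 60%
print(f"Measured FCR after:  {measured_fcr(first_calls, after_repeats):.0%}")   # 95% - target hit
print(f"Total repeat calls:  {len(before_repeats)} before vs {len(after_repeats)} after")  # unchanged
```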

What kind of (bad) examples do you have to share? Or: how did changing the way you measure really improve your understanding of what mattered to your Customers? Please share your stories here.
