Commentary
At a Glance
- Metrics should quantify what you really need to know.
- Useful metrics align with and balance corporate priorities; they can be measured accurately and benchmarked.
Given the proliferation of data and the new analytical tools designed to pull insights from it, one might think the measurement of business performance has greatly improved. Unfortunately, that is not always so. Too frequently, we find companies or functions within companies that simply measure the wrong things.
Focusing on metrics that are imperfect or not meaningful can have a serious impact, leading management to make poor decisions that hurt the business. Usually the root cause is that companies stick to historical metrics that measure what was possible to track when they were created, not what should have been measured. Now, with so much more data and analytic capability, it may be time to revisit some of those metrics.
Companies that do so see a clear payoff.
Recently a multibillion-dollar global technology firm reevaluated how it measures customer service. Traditionally, the company held itself to a standard of perfect order fulfillment, aiming to have items in stock when they were ordered and to deliver them quickly and flawlessly. The company, one of the most productive and efficient in its industry, measured “perfect order” across the board, with every customer. As a result, it incurred a lot of expedited shipping and other costs to meet this goal.
Universal perfection turned out to be the wrong measurement, however. After management designed a way to calculate the profit margin of each customer, they discovered that some customers were more valuable than others, and that almost all of the company’s profits came from a fairly limited group. Understanding that, the company refocused, channeling more effort to that all-important 20% of customers and improving on-time performance and its Net Promoter Score® among that group. At the same time, management stopped aiming for and measuring perfect order for the other 80%. It was simply too expensive. A more affordable level of service was acceptable to these customers and helped improve profitability.
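The underlying arithmetic is simple. Below is a minimal sketch of that kind of per-customer margin analysis; the field names and the 95% profit-share cutoff are illustrative assumptions, not details from the company described.

```python
# Minimal sketch of a per-customer profitability (Pareto) analysis.
# Field names (revenue, cost_to_serve) and the 95% cutoff are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Customer:
    name: str
    revenue: float
    cost_to_serve: float  # fulfillment, expedited shipping, service time, etc.

    @property
    def margin(self) -> float:
        return self.revenue - self.cost_to_serve


def customers_driving_profit(customers: list[Customer], profit_share: float = 0.95) -> list[Customer]:
    """Return the smallest group of customers that contributes `profit_share` of total margin."""
    ranked = sorted(customers, key=lambda c: c.margin, reverse=True)
    total_margin = sum(c.margin for c in ranked)
    cumulative, core_group = 0.0, []
    for customer in ranked:
        core_group.append(customer)
        cumulative += customer.margin
        if cumulative >= profit_share * total_margin:
            break
    return core_group
```

Ranking customers this way is what exposes the pattern the company found: a small share of accounts carrying nearly all of the margin, which in turn justifies differentiated service levels rather than universal perfect order.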
As this experience shows, the most useful metrics align with and balance corporate priorities. They can be measured accurately, and benchmarked against both internal goals and competitor performance. Good metrics illustrate to staff and management a cause and effect—how their actions meet a business need—while helping to cultivate the right activities and behaviors across the organization.
It’s hard to imagine a company seeking sustained improvement in customer centricity, for example, or in unit cost without corresponding measures to evaluate its progress. But metrics are only approximations of desired behavior and output. They are often imperfect, and sometimes dangerous.
Measurement missteps
Sometimes a single metric becomes an obsession for management, overemphasized at the expense of other signals, and eventually skewing behavior. In recent years, there have been examples in multiple consumer service segments of organizations becoming excessively focused on the number of services per customer. This metric is not useless—a customer’s willingness to buy multiple services can be a sign of a healthy business. But in situations where it becomes the only number that matters, without an appropriate balance toward customer advocacy, frontline staff have in some cases worked to boost that cross-sell metric at the cost of corroding the company’s trust and relationship with its customers.
Some metrics reflect only part of a company’s performance, missing other significant elements. A call center that tracks customers’ average hold time is fine, but tallying the percentage of problems resolved on the first call may capture something much more important. Click-to-revenue analytics, a popular feature of “performance marketing,” is another example. These numbers are far more meaningful when combined with measures like brand value, how marketing spending is affecting that value and how much company revenue can be directly traced to marketing efforts. While harder to measure than clicks, these are invaluable metrics.
Too often, measurements emphasize activity that just doesn’t add value. One example: a research and development organization measuring raw developer output, such as the number of lines of code written, regardless of the quality of the code.
Measuring sales performance can be especially tricky. Revenue per sales rep, a common metric, is easily inflated by marketing spending and price discounts, but just as problematic is the fact that not all revenue is of equal value. Companies that sell a portfolio of products of varying profitability, as most do, need to acknowledge that some revenue brings more profit. At the same time, a company may want to incent sales from a new territory or customer that are harder to get than renewals from an existing account, but valuable in the long term.
For many years, one software company counted all revenue equally when calculating sales quota attainment. This made no distinction between revenue from the software itself, which carried quite high margins, and revenue from the professional services the company offered to implement the software, which carried very low or even negative margins. The company eventually fixed the issue by carefully evaluating its gross margins per product and moving to quotas keyed off those numbers.
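To make the fix concrete, here is a hedged illustration of margin-weighted quota attainment; the product names, margin percentages and quota figure are invented for the example, not taken from the company above.

```python
# Illustrative margin-weighted quota attainment. All figures are hypothetical.
GROSS_MARGIN = {"software_license": 0.85, "professional_services": -0.05}


def margin_weighted_attainment(sales: list[tuple[str, float]], margin_quota: float) -> float:
    """Credit each sale by the gross margin dollars it generates, then compare to a margin-based quota."""
    margin_dollars = sum(revenue * GROSS_MARGIN[product] for product, revenue in sales)
    return margin_dollars / margin_quota


# Two reps with identical revenue but very different product mix:
rep_a = [("software_license", 900_000), ("professional_services", 100_000)]
rep_b = [("software_license", 100_000), ("professional_services", 900_000)]
print(margin_weighted_attainment(rep_a, margin_quota=500_000))  # 1.52
print(margin_weighted_attainment(rep_b, margin_quota=500_000))  # 0.08
```

Under a revenue-only quota the two reps look identical; weighted by margin, the difference the company cared about becomes visible.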
Some metrics are simply poor quality. Consider sales projections, which feed the broader business forecasts that CFOs make every quarter and are vitally important to a company’s future health. Yet many sales organizations rely on reps’ self-reporting, a metric that can be of suspect quality. Today, more-sophisticated companies use digital exhaust to stress-test those predictions. Email traffic and calendar analysis, for example, can reveal how frequently the sales team is interacting with a key customer in the pipeline. In the weeks before the end of a quarter, if this exhaust shows no meaningful interactions with the customer, it might be prudent to discount the probability of the sales projected to that customer, and to do the same for other opportunities the team is characterizing as highly probable.
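Here is a sketch of how such a stress test might work, assuming interaction dates have already been extracted from email and calendar data; the lookback window, interaction threshold and discount factor are illustrative assumptions rather than any firm’s actual rules.

```python
# Sketch: discount pipeline probabilities when digital exhaust shows little recent contact.
# Data structures, thresholds and the discount factor are illustrative assumptions.
from datetime import date, timedelta


def stress_test_probability(
    rep_probability: float,          # the rep's self-reported close probability
    interaction_dates: list[date],   # emails / meetings with the account, from exhaust
    quarter_end: date,
    lookback_days: int = 21,
    min_interactions: int = 2,
    discount: float = 0.5,
) -> float:
    """Reduce a self-reported probability if recent interactions look thin."""
    window_start = quarter_end - timedelta(days=lookback_days)
    recent = [d for d in interaction_dates if window_start <= d <= quarter_end]
    if len(recent) < min_interactions:
        return rep_probability * discount
    return rep_probability


# Example: a deal reported at 90%, but only one touchpoint in the final weeks of the quarter.
print(stress_test_probability(0.9, [date(2024, 3, 5)], quarter_end=date(2024, 3, 31)))  # 0.45
```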
Harnessing new data
New sources of data can be used to improve measurement in novel ways. In the 1990s, sabermetrics and big performance databases ushered in a number of improved gauges of baseball performance, such as moving from evaluating a hitter solely on batting average to evaluating his on-base plus slugging (OPS), a number that also assigns value to walks.
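For reference, on-base plus slugging is simply the sum of two standard rates, and it is the on-base component that gives walks their credit; the short sketch below uses the standard formulas.

```python
# On-base plus slugging (OPS) = on-base percentage + slugging percentage.
# Unlike batting average (hits / at-bats), on-base percentage credits walks and hit-by-pitches.

def on_base_pct(hits, walks, hit_by_pitch, at_bats, sac_flies):
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)


def slugging_pct(singles, doubles, triples, home_runs, at_bats):
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats


def ops(obp, slg):
    return obp + slg
```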
Similarly, businesses have moved from assessing customers’ satisfaction to assessing their willingness to recommend, as measured by the Net Promoter Score. Increasingly, finer cuts of data mean satisfaction can be measured not across a whole function but at a more granular level, by customer episode. An episode is anything that causes a customer to interact with a company, such as making a purchase or paying a bill.
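As a rough sketch of what episode-level measurement could look like, the snippet below groups survey scores by episode and applies the standard Net Promoter Score arithmetic (percentage of promoters minus percentage of detractors); the episode names and scores are illustrative.

```python
# Sketch: Net Promoter Score computed per customer episode rather than company-wide.
# Episode names and scores are illustrative; NPS = % promoters (9-10) minus % detractors (0-6).
from collections import defaultdict


def nps(scores: list[int]) -> float:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)


def nps_by_episode(responses: list[tuple[str, int]]) -> dict[str, float]:
    by_episode = defaultdict(list)
    for episode, score in responses:
        by_episode[episode].append(score)
    return {episode: nps(scores) for episode, scores in by_episode.items()}


responses = [("pay a bill", 9), ("pay a bill", 10), ("resolve an outage", 3), ("resolve an outage", 7)]
print(nps_by_episode(responses))  # {'pay a bill': 100.0, 'resolve an outage': -50.0}
```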
Given the payoff for getting metrics right, there is a true imperative for companies to reconsider what matters to their business and how best to measure it. With new types of data easily accessible for analysis, including through machine learning techniques, companies have an opportunity to understand their real performance much more deeply.
Many organizations and functions within companies have embraced the chance to identify the higher-fidelity metrics that will tune their business performance going forward. Among them: retailers moving from transaction-level economic assessment to assessing the lifetime value of a customer relationship, sales organizations replacing gross measures of selling effectiveness with step-by-step conversion rates, and R&D departments moving from tracking gross activity to calculating reuse and durability measures.
Any group looking to improve its assessments needs to evaluate the state of its current metrics and data opportunities. This starts with asking the right questions:
- What factors matter most to the enterprise or function’s performance?
- Have we analyzed whether those factors fully leverage current data and advanced analytic techniques?
- What, for instance, does machine learning tell us really matters to our performance?
- Are there any obvious weaknesses in our existing metrics?
- Which metrics could better reflect the actual performance in areas that matter?
- Can these metrics be accurately measured?
- Have we established an internal performance baseline against which to compare future results?
- Do we know how competitors perform on these metrics and how our results compare with theirs?
- How important is each of these metrics relative to the others, and how do we balance them?
- Does our operating model reflect that hierarchy and balance?
Chris Brahm is a Bain & Company partner based in the San Francisco office; he leads the firm’s Global Advanced Analytics practice. Mark Kovac is a partner in the Dallas office and leads Bain’s Global Commercial Excellence group. Peter Guarraia is a partner in Chicago.
Net Promoter Score® is a registered trademark of Bain & Company, Inc., Fred Reichheld and Satmetrix Systems, Inc.