This article is reprinted by permission of Harvard Business School Press and excerpted from The Ultimate Question: Driving Good Profits and True Growth by Fred Reichheld, Harvard Business School Press, 2006.
Scott Cook was worried. His financial-software company, Intuit, was on a slippery slope, and he wasn't sure what to do about it.
Granted, his problems might not have looked overwhelming to an outsider. Intuit had grown like gangbusters ever since its birth in 1983. Its three major products—Quicken, QuickBooks, and TurboTax—dominated their markets. The company had gone public in 1993, and by the end of the decade was racking up sizable profits. Intuit had also been lauded by the business press as an icon of customer service, and Cook—a mild-mannered, bespectacled Harvard MBA who had done a stint at Procter & Gamble before cofounding the company—had a gut-level grasp of the importance of customer promoters. "We have hundreds of thousands of salespeople," he told Inc. magazine as early as 1991. "They're our customers." Intuit's mission? "To make the customer feel so good about the product they'll go and tell five friends to buy it."
But now—was that really happening? Cook wasn't sure. When the company was in its start-up phase, operating out of cozy offices in Silicon Valley, he had known every employee personally, and he could coach them all on the importance of making products and delivering services that customers truly loved. They could all hear him working the service phones himself, talking to customers. They could see him taking part in Intuit's famous "follow-me-home" program, where employees asked customers if they could watch them set up the software in order to note any problems. But now the company had thousands of people in multiple locations. Like many rapidly growing businesses, it had hired a lot of professional managers, who had been trained to run things by the numbers.
And what were those numbers? There were two requirements for growth, Cook liked to say: profitable customers and happy customers. Everyone knew how to measure profits, but the only measurements of customers' happiness were vague statistics of "satisfaction"—statistics derived from surveys that nobody trusted and nobody was accountable for.
So managers naturally focused on profits, with predictable consequences. The executive who cut staffing levels in the phone-support queue to reduce costs wasn't held accountable for the increased hold times or the resulting customer frustration. The phone rep who so angered a longtime customer that he switched to another tax-software product could still receive a quarterly bonus, because she handled so many calls per hour. Her batting average on productivity was easy to measure, but her batting average on customer goodwill was invisible. The marketing manager who kept approving glitzy new features to attract more customers was rewarded for boosting revenues and profits, when in fact the added complexity created a bewildering maze that turned off new users. Now, Cook was hearing more complaints than in the past. Some market-share numbers were slipping. For lack of a good system of measurement—and for lack of the accountability that accurate measurement creates—the company seemed to be losing sight of exactly what had made it great: its relationships with its customers.
The Challenge: Measuring Customer Happiness
In a way, Cook's experience recapitulated business history. Back in the days when every business was a small business, a proprietor could know what his customers were thinking and feeling. He knew them personally. He could see with his own eyes what made them happy and what made them mad. Customer feedback was immediate and direct—and if he wanted to stay in business, he paid attention to it.
But soon companies were growing too big for their owners or managers to know every customer. Individual customers came and went; the tide of customers ebbed and flowed. Without the ability to gauge what people were thinking and feeling, corporate managers naturally focused on how much those customers were spending, a number that was easily measurable. If our revenue is growing and we're making money, so the thinking ran, we must be doing something right.
Later, of course—and particularly after the arrival of powerful computers—companies tried to assess customers' attitudes more directly. They hired market-research firms to conduct satisfaction surveys. They tried to track customer-retention rates. These endeavors were so fraught with difficulties that managers outside marketing departments generally, and wisely, ignored them. Retention rates, for example, track customer defections—how fast the customer bucket is emptying—but say nothing about the equally important question of how fast the bucket is filling up. They are a particularly poor indication of attitudes whenever customers are held hostage by high switching costs or other barriers. (Think of those US Airways Philadelphia travelers before Southwest Airlines arrived on the scene.)
Conventional customer-satisfaction measures are even less reliable. We will review their legendary shortcomings in detail later in the book (chapter 5). For the moment, it's enough to note that there is little connection between satisfaction rates and actual customer behavior, or between satisfaction rates and a company's growth. That's why investors typically ignore reports on customer satisfaction. In some cases, indeed, the relationship between satisfaction and performance is exactly backward. In the spring of 2005, for example, General Motors was taking out full-page newspaper ads trumpeting its numerous awards from J.D. Power and Associates, the biggest name in satisfaction studies. Meanwhile, the headlines in the business section were announcing that GM's market share was sinking and its bonds were being downgraded to junk status.
So as my colleagues and I continued our study of loyalty, we searched for a better measure—a simple and practical indicator of what customers were thinking and feeling about the companies they did business with. We wanted a number that reliably linked these attitudes to what customers actually did, and to the growth of the company in question.
What a chore it turned out to be! We started with the roughly twenty questions on the Loyalty Acid Test, a survey Bain designed several years ago to assess the state of relations between a company and its customers. (Sample questions: How likely are you to continue buying Company X's products or services? How would you rate the overall quality of the products and services provided by Company X?) Then we sought the assistance of Satmetrix Systems, Inc., a company that develops software to gather and analyze real-time customer feedback. (Full disclosure: I serve on Satmetrix's board of directors.)
With Satmetrix, we administered the test to thousands of customers recruited from public lists in six industries: financial services, cable and telecommunications, personal computers, e-commerce, auto insurance, and Internet service providers. We obtained a purchase history for every person surveyed. We also asked these people to name specific instances when they had referred someone else to the company in question.
When this information wasn't immediately available, we waited six to twelve months and then gathered information on subsequent purchases and referrals by those individuals. Eventually we had detailed information from more than four thousand customers, and we were able to build fourteen case studies—that is, cases for which we had sufficient sample sizes to measure the link between individual customers' survey responses and those same individuals' purchase or referral behavior.
Discovering the Ultimate Question
All this number crunching had one goal: to determine which survey questions showed the strongest statistical correlation with repeat purchases or referrals. We hoped to find for each industry at least one question that effectively predicted what customers would do and hence helped predict a company's growth. We took bets on what the question would be. My own favorite—probably reflecting my years of research on loyalty—was, "How strongly do you agree that Company X deserves your loyalty?"
But what we found was different, and it surprised us all. It turned out that one question—the Ultimate Question—worked best for most industries. And that question was, "How likely is it that you would recommend Company X to a friend or colleague?" In eleven of the fourteen cases, this question ranked first or second. In two of the three others, it was so close to the top that it could serve as a proxy for those that did rank number one or number two.
Reflecting on our findings, we realized they made perfect sense. Loyalty, after all, is a strong and value-laden concept, usually applied to family, friends, and country. People may be loyal to a company that they buy from, but they may not describe what they feel in those terms. If they really love doing business with a particular provider of goods or services, however, what's the most natural thing for them to do? Of course: recommend that company to someone they care about.
We also realized that two conditions must be satisfied before customers make a personal referral. They must believe that the company offers superior value in terms that an economist would understand: price, features, quality, functionality, ease of use, and all the other practical factors. But they also must feel good about their relationship with the company. They must believe the company knows and understands them, values them, listens to them, and shares their principles. On the first dimension, a company is engaging the customer's head. On the second, it is engaging the heart. Only when both sides of the equation are fulfilled will a customer enthusiastically recommend a company to a friend. The customer must believe that the friend will get good value—but he or she also must believe that the company will treat the friend right. That's why the "would recommend" question provides such an effective measure of relationship quality. It tests for both the rational and the emotional dimensions.
I don't want to overstate the case. Though the "would recommend" question is far and away the best predictor of customer behavior across a range of industries, it's not the best for every industry. In certain business-to-business settings, a question such as "How likely is it that you will continue to purchase products or services from Company X?" may be better. So companies need to do their homework. They need to validate the link between survey answers and behavior for their own business and their own customers. But once such a link is established, as we will see in chapter 3, the results are powerful: the metric provides the means for gauging performance, establishing accountability, and making investments. It provides a connection to growth.
Scoring the Answers
Of course, finding the right question to ask was only the beginning. We now had to establish a good way of scoring the responses.
This may seem like a trivial problem, but any statistician knows that it isn't. To be useful, the scoring of responses must be as simple and unambiguous as the question itself. The scale must make sense to customers who are answering the question. The categorization of answers must make sense to the managers and employees responsible for interpreting the results and taking action. The right categorization will effectively divide customers into groups that deserve different attention and different responses from the company. Ideally, the scale and categorization would be so easy to understand that even outsiders—investors, regulators, journalists—could grasp the basic messages without the need for a handbook and a course in statistics.
For these reasons we settled on a simple zero-to-ten scale, where ten means "extremely likely" to recommend, five is neutral, and zero means "not at all likely." When we mapped customers' behaviors on this scale, we found three logical clusters:
One segment was the customers who gave a company a nine or ten rating. We called them promoters, because they behaved like promoters. They reported the highest repurchase rates by far, and they accounted for more than 80 percent of referrals.
A second segment was the "passively satisfied" or passives; they rated the company seven or eight. This group's repurchase and referral rates were a lot lower than those of promoters, often by 50 percent or more. Motivated more by inertia than by loyalty or enthusiasm, these customers may not defect—until somebody offers them a better deal.
Finally, we called the group who gave ratings from zero to six detractors. This group accounted for more than 80 percent of negative word-of-mouth comments. Some of these customers may appear profitable from an accounting standpoint, but their criticisms and attitudes diminish a company's reputation, discourage new customers, and demotivate employees. They suck the life out of a firm.
Grouping customers into these three clusters—promoters, passives, and detractors—provides a simple, intuitive scheme that accurately predicts customer behavior. Most important, it's a scheme that can be acted upon. Frontline managers can grasp the idea of increasing the number of promoters and reducing the number of detractors a lot more readily than the idea of raising the customer-satisfaction index by one standard deviation. The ultimate test for any customer-relationship metric is whether it helps the organization tune its growth engine to operate at peak efficiency. Does it help employees clarify and simplify the job of delighting customers? Does it allow employees to compare their performance from week to week and month to month? The notion of promoters, passives, and detractors does all this.
We also found that what we began to call Net Promoter Score (NPS)—the percentage of promoters minus the percentage of detractors—provided the easiest-to-understand, most effective summary of how a company was performing in this context.
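To make the arithmetic concrete, here is a minimal sketch in Python (not from the book; the function names and sample scores are illustrative) showing how zero-to-ten responses map into the three segments and how a Net Promoter Score is computed from them:

```python
def classify(score: int) -> str:
    """Map a 0-10 "would recommend" rating to a segment."""
    if score >= 9:
        return "promoter"    # 9 or 10
    if score >= 7:
        return "passive"     # 7 or 8: the passively satisfied
    return "detractor"       # 0 through 6


def net_promoter_score(scores: list[int]) -> float:
    """NPS = percentage of promoters minus percentage of detractors."""
    segments = [classify(s) for s in scores]
    promoters = segments.count("promoter") / len(segments)
    detractors = segments.count("detractor") / len(segments)
    return 100 * (promoters - detractors)


# Six hypothetical responses: three promoters, two passives, one detractor,
# so NPS = 50.0 - 16.7 = 33.3
print(net_promoter_score([10, 9, 8, 7, 3, 10]))
```

Note that passives affect the score only by diluting the other two percentages, which is consistent with the logic above: the goal is to grow promoters and shrink detractors, not to reward mere satisfaction.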
We didn't come to this language or this precise metric lightly. For example, we considered referring to the group scoring a company nine or ten as "delighted," in keeping with the aspiration of so many companies to delight their customers. But the business goal here isn't merely to delight customers; it's to turn them into promoters—customers who buy more and who actively refer friends and colleagues. That's the behavior that contributes to growth. We also wrestled with the idea of keeping it even simpler—measuring only the percentage of customers who are promoters. But as we'll see in later chapters, a company seeking growth must increase the percentage of promoters and decrease the percentage of detractors. These are two distinct processes that are best managed separately. Companies that must serve a wide variety of customers in addition to their targeted core—retailers, banks, airlines, and so on—need to minimize detractors among noncore customers, since these customers' negative word of mouth is just as destructive as anybody's. But investing to delight customers other than those in the core rarely makes economic sense. Net Promoter Scores provide the requisite information for fine-tuning customer management in this way.
Individual customers, of course, can't have an NPS; they can only be promoters, passives, or detractors. But companies can calculate their Net Promoter Scores for particular segments of customers, for divisions or geographic regions, and for individual branches or stores. NPS is to customer relationships what a company's net profit is to financial performance. It's the one number that really matters—which is just what Intuit discovered.
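Continuing the sketch above (again, the names and data are hypothetical), the same calculation can be run per segment simply by grouping responses before scoring them, which is how a company-level program might produce separate scores for product lines, regions, or stores:

```python
from collections import defaultdict


def nps_by_segment(responses: list[tuple[str, int]]) -> dict[str, float]:
    """Compute an NPS for each segment from (segment, score) pairs."""
    by_segment: dict[str, list[int]] = defaultdict(list)
    for segment, score in responses:
        by_segment[segment].append(score)
    result = {}
    for segment, scores in by_segment.items():
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        result[segment] = 100 * (promoters - detractors) / len(scores)
    return result


# Hypothetical responses grouped by product line
responses = [("TurboTax", 10), ("TurboTax", 6), ("QuickBooks", 9), ("QuickBooks", 8)]
print(nps_by_segment(responses))  # {'TurboTax': 0.0, 'QuickBooks': 50.0}
```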
Solving Intuit's Problem
Intuit—worried as it was about slipping customer relationships—jumped at the idea of measuring its NPS and began an implementation program in the spring of 2003. ("Just one number—it makes so much sense!" exclaimed Scott Cook when he learned of the idea.) The company's experience shows some of what's involved in measuring promoters and detractors. It also shows how this measurement can transform a company's day-to-day priorities.
Intuit's first step was to determine the existing mix of promoters, passives, and detractors in each major business line. Cook suggested that this initial phone-survey process focus on only two questions. The team settled on these: "What is the likelihood you would recommend (TurboTax, for example) to a friend or colleague?" and "What is the most important reason for the score you gave?"
Customer responses revealed initial Net Promoter Scores for Intuit's business lines ranging from 27 to 52 percent. That wasn't bad, given that the average U.S. company has an NPS of less than 10 percent, but Intuit has never been interested in being average. The scores weren't consistent with the company's self-image as a firm that values doing right by its customers. The numbers convinced the management team that there was plenty of room for improvement.
The initial audit revealed something else as well: the telephone-survey process used by the company's market-research vendor was woefully inadequate. First, there was no way to close the loop with customers who identified themselves as detractors—no way to apologize, no way to develop a solution for whatever was troubling them. Second, the open-ended responses the vendor reported were intriguing, but managers had a tendency to read into them whatever they already believed. Third, the responses were often confusing and contradictory. For example, promoters frequently praised a product's simplicity, while detractors of that same product griped about its complexity. The teams obviously needed a way of drilling deeper if they were to understand the root causes of promotion and detraction.
In addition to these formal audits, some of the business units began to add the "would recommend" question to the brief transaction surveys they were already using to manage the quality of their interactions with customers. These responses provided a steady flow of NPS insights that illuminated hot spots and trouble spots relating to customers' experience with the company. For example, Intuit had decided to charge all QuickBooks customers for tech-support phone calls—even new customers who were having trouble getting the program up and running. Net Promoter Scores for customers who called tech support were drastically below the QuickBooks average, and it was immediately apparent that the policy was at fault. The business team tested several alternatives to see what effect they would have on scores; eventually the team found that the most economical solution was to offer free tech support for the first thirty days of ownership. Net Promoter Scores from customers who called tech support increased by more than thirty points as a result.
The Consumer Tax Group, home of the industry-leading TurboTax product line, faced a particularly tough challenge. TurboTax's market share in the increasingly important Web-based segment had plummeted by more than 30 points from 2001 to 2003. Managers in the division knew that they had to get a better handle on customer issues. One successful initiative was the creation of a six-thousand-member "Inner Circle" of customers whose feedback would directly influence management decisions. Customers who registered to join this e-mail community were asked some basic demographics and were also asked the "would recommend" question so that the company could determine whether they were promoters, passives, or detractors. Then they were asked to suggest their highest-priority improvements for TurboTax and to vote on suggestions made by other Inner Circle members. Software sifted the suggestions and tracked the rankings, so that over time the most valuable ideas rose to the top of the list.
The results were eye-opening. For detractors, the top priority was improved quality of technical support. To address that issue, the management team reversed a decision made two years earlier and returned all phone tech-support functions from India to the United States and Canada. The team also boosted tech-support staffing levels. The second-biggest priority for detractors was to improve the installation process. That became a top priority for TurboTax's software engineers, who in the 2004 edition of the program achieved a reduction of nearly 50 percent in installation-related tech-support contacts.
Promoters had a different set of priorities. Topping the list was the rebate process: some complained that it took longer to fill out all the rebate forms than to install TurboTax and prepare their taxes! After getting this feedback, the division general manager assigned one person to own the rebate process and held that individual accountable for results. Soon the proof of purchase was simplified, the forms were redesigned, the whole process was streamlined—and turnaround time was reduced by several weeks.
The Consumer Tax Group continued to study Net Promoter Scores, examining various customer segments. New customers, the group found, had the lowest scores of any cluster. Executives called a sample of these customers to find out why, and what they discovered was startling and unsettling. All the features that had been added year after year to appeal to diverse customer groups with complex tax needs had yielded a product that no longer simplified the lives of standard filers. In fact, more than 30 percent of new customers never used the product a second time.
In response, the management team issued new priorities for the design engineers: make the program simpler. Soon the interview screens were revised according to new design principles. Confusing tax jargon was eliminated—a new editor hired from People magazine got the job of making the language clear and easy to understand. In tax year 2004, for the first time, the NPS of first-time users was even higher than that of longtime users. In addition, the company introduced a streamlined forms-based option for people with simple, straightforward tax returns. This new product, SnapTax, was released in tax year 2004 and generated an NPS of 64 percent—scoring higher with first-time users than TurboTax.
Intuit's Results: Happy Customers and Shareholders
Over the two-year period from the spring of 2003 to the spring of 2005, Net Promoter Scores for TurboTax jumped. The desktop version, for instance, rose from 46 to 61 percent. New users' scores climbed from 48 to 58 percent. Retail market share, which had been flat for years, surged from 70 to 79 percent—no easy feat in a maturing market. Scores improved at most of Intuit's major lines of business. Thanks to this success, Net Promoter Scores became part of the company's everyday operations. "Net Promoter gave us a tool to really focus organizational energy around building a better customer experience," said CEO Steve Bennett. "It provided actionable insights. Every business line [now] addresses this as part of their strategic plan; it's a component of every operating budget; it's part of every executive's bonus. We talk about progress on Net Promoter at every monthly operating review."
At the firm's 2004 Investor Day, when executives update securities analysts and major investors on the company's progress, challenges, and outlook for the future, Cook and Bennett unveiled their renewed commitment to building customer loyalty. They described how Net Promoter Scores had enabled the team to convert the historically soft goal of building better customer relationships into a hard, quantifiable process. Just as Six Sigma had helped Intuit improve its business processes to lower costs and enhance quality, Net Promoter Scores were helping it set priorities and measure progress toward the fundamental goal of stronger customer loyalty.
Yes, there was still a long way to go. But Cook and Bennett pointed out that the new initiative was simply a return to the original roots of Intuit's success. As the company grew larger, the need increased for a common metric that could help everyone balance today's profits against the improved customer relationships that feed future growth. "We have every customer metric under the sun," said Cook, "and yet we couldn't make those numbers focus the organization on our core value of doing right by the customer. The more metrics you track, the less relevant each one becomes. Each manager will choose to focus on the number that makes his decision look good. The concept of one single metric has produced a huge benefit for us—customers, employees, and investors alike."
By showcasing Net Promoter Scores as the central metric for revitalizing growth in the core businesses, Cook and Bennett were signaling to their own organization that this was not some here-today, gone-tomorrow corporate initiative. On the contrary: it was a business-critical priority so important to Intuit's future that it deserved to be understood by shareholders. Intuit's leaders were also signaling to shareholders that at the next Investor Day, these investors would be entitled to learn more about the company's progress on Net Promoter Scores.
Maybe the event even foreshadowed the day when all investors will insist on seeing reliable performance measures for customer-relationship quality—because only then can investors understand the economic prospects for profitable growth.
© 2006 Harvard Business School Publishing