This article originally appeared on HBR.org.
If you’re trying to use advanced analytics to improve your organization’s decisions, join the club. Most of the companies I talk to have embarked on just such a quest. But it’s a rocky one.
The technological challenge is hard enough. You have to identify the right data and develop useful tools, such as predictive algorithms. But then comes an even tougher task: getting people to actually use the new tools.
Why is the people factor so important? It’s easy enough to automate routine decisions, such as identifying likely buyers for a product upgrade. But many decisions in today’s knowledge economy depend on expertise and experience. Think of bankers deciding on business loans, product developers determining tradeoffs between features and cost, or B2B sales reps figuring out which prospects to target. Analytics can help codify the logic of the best decision makers, but it can’t replace human judgment.
Moreover, the tools developed for contexts like these can be complex, often involving a steep learning curve. If decision makers aren’t willing to experiment with the tool and improve their outcomes over time, then your investment in the technology is wasted.
Right here, some say, is where a company could use gamification to encourage people to invest the time to learn how to use the new tools.
Gamification means using motivational techniques like those the videogame industry has put to such effective use. Anyone with teenagers in the house knows that they will spend long hours on their own, trying to get to the next level of their favorite game. Motivation experts like Dan Pink would say that the games are tapping into some basic human drives: for autonomy (you control your own pace), for mastery (you get better over time), and for a sense of purpose (you’re aiming at a well-defined goal). The social factor is important, too. Gamers love to match their skills against others and to compare notes on how they’re doing.
Can these motivational concepts and techniques encourage decision makers to use new analytical tools and collaborate with each other—both to improve the tools and to make better-informed decisions? We don’t yet have much evidence to answer that question. But early signs indicate that it might work.
A large property and casualty insurance company, for instance, invested several million dollars to create a new analytical tool for underwriting decisions. The tool allows underwriters to run a prospective insured’s properties (including office buildings, warehouses, and manufacturing plants in many different countries) through sophisticated risk models. The models help assess the potential for losses due to natural catastrophes such as earthquakes or floods. They produce complex spreadsheets that list the properties, score the various risks, and flag instances where underwriters might want to seek additional information.
A year after introducing the tool, the company’s chief underwriting officer suspected that not everyone was using it. She wondered whether all the underwriters knew how to interpret the results and whether they bothered to seek additional information. Her solution was to engage them in a game. The company gave four teams of underwriters the tool’s output for five example companies, along with some additional information. Team members worked together to assess the risk. They were told that more information on certain accounts was available, but they had to recognize when this was the case and specifically ask for it. Meanwhile, members of the team that had developed the tool listened in on the participants’ deliberations.
In the end, the teams’ decisions were rated by their peers and by independent judges. Teams that not only interpreted the results correctly but also identified a need for additional information and incorporated it into their analyses earned extra points. To me, the whole exercise felt much like the way teenagers collaborate to learn online games. It had the same elements of engagement and excitement, and the same thrill of “mastery,” as team members worked together to earn points in the game.
Of course, not every approach to gamification is likely to work. Employees may feel that the typical game’s points and badges trivialize their work—or, worse, that the company is somehow creating a Big Brother system that expects everyone to act like clones. One trick is to pilot a game with a small, diverse set of users and involve them in co-creating the game’s rules and rewards. These “power users” can help others as the game is rolled out to the rest of the team.
Done right, gamification seems to hold a lot of potential. But the proof will lie in experience. Can models and decision support tools be rolled out using a gamification format with rewards and explicit levels of “mastery”? I haven’t seen it yet, but I’d love to hear from others about successful (or not so successful) examples.
Lori Sherer is a partner at Bain & Company in San Francisco and heads the firm’s Advanced Analytics practice.