Blind Spots of Decision Making

Top 10 things that bias your decisions

"There is always an easy solution to every human problem - neat, plausible, and wrong." Little did he know it when he penned these words, but journalist H.L. Mencken was tapping into the very core of behavioral decision making and the need to understand and compensate for it.

Every day, senior managers are tasked with making significant strategic decisions for their companies, decisions that usually require support from teams of internal and external experts and a heavy dose of research. In theory, knowledge-based decision making underpins every successful organization. But, as Plato pointed out, "Human behavior flows from three main sources: desire, emotion, and knowledge." First-hand experience and best sellers such as Daniel Kahneman's Thinking, Fast and Slow have confirmed a broad range of behavioral vulnerabilities and vagaries in our ability to make decisions as human beings.

For those of us tasked with modeling the risk/reward potential of various business opportunities, the need to address these influential, often subconscious factors in the modeling process is compelling. In the enterprise risk management (ERM) arena in particular, incisive analysis of decision options demands rigorous steps to challenge not only the scenarios we develop but also their underlying assumptions.

Drawing on what we have learned from behavioral economists, we, as actuaries in the enterprise risk management space, have outlined some of the most prevalent biases that creep into all kinds of risk/reward decision making, personal as well as professional. By acknowledging and shedding light on these sources of distortion, we can strengthen the relevance and reliability of our decision-making strategies and our assessment of the risks that may arise from those decisions. We must consider not only our own human biases, but also those of our audience, our team, and our competitors.

Minimizing the impact of these biases is crucial. They can sneak into any risk/reward management scenario we develop unless we exercise considerable rigor at every stage of the process, from setting assumptions through presenting alternative scenarios and their attendant considerations. To address the kinds of biases outlined briefly here, we must challenge our decision-making process by recognizing that we both influence and are influenced by the format of the information. These heuristics served us well as human beings when our work was physical, such as tilling the land. Applied to today's knowledge-based work, however, they open us up to biased risk/reward decision making. To minimize their impact, we must:

  • Search relentlessly for potentially relevant or new disconfirming evidence
  • Accept the "Chief Contrarian" as part of the team
  • Seek diverse outside opinion to counter our overconfidence
  • Reward the process and refrain from penalizing errors when the intentions and efforts are sound
  • Reframe or flip the problem on its head to test whether we are viewing the situation through a positive or a negative frame
  • Redefine the problem going forward, setting aside the old framing, to avoid unnecessary escalation of commitment
  • Develop systematic review processes that preserve a committed "out" when it is time to cut losses
  • Avoid premature "public" commitment, which invites escalation and further emotional investment in faulty decisions

Throughout the process, it's crucial to recognize that most risk does not manifest from some exogenous contingent event, but rather is driven by the behaviors and decisions of people. Only by exercising the intellectual rigor to challenge our current views of the future and our long-held underlying assumptions do we gain the means to manage the real risks that face our enterprises. I have addressed the "individual" element here, but I strongly believe it doesn't end there. I encourage all to read the post by David Ingram and Mike Thompson, who argue that what matters is not only our behavior as "individuals," but also, and perhaps more so, how we make risk/reward decisions in groups.

Once heretical, behavioral economics is now mainstream. Money managers employ its insights about the limits of rationality in understanding investor behavior and exploiting stock-pricing anomalies. Policy makers use behavioral principles to boost participation in retirement-savings plans. Marketers now understand why some promotions entice consumers and others don't.

Yet very few corporate strategists making important decisions consciously take into account the cognitive biases-systematic tendencies to deviate from rational calculations-revealed by behavioral economics. It's easy to see why: unlike in fields such as finance and marketing, where executives can use psychology to make the most of the biases residing in others, in strategic decision making leaders need to recognize their own biases. So despite growing awareness of behavioral economics and numerous efforts by management writers, including ourselves, to make the case for its application, most executives have a justifiably difficult time knowing how to harness its power.

This is not to say that executives think their strategic decisions are perfect. In a recent McKinsey Quarterly survey of 2,207 executives, only 28 percent said that the quality of strategic decisions in their companies was generally good, 60 percent thought that bad decisions were about as frequent as good ones, and the remaining 12 percent thought good decisions were altogether infrequent. Our candid conversations with senior executives behind closed doors reveal a similar unease with the quality of decision making and confirm the significant body of research indicating that cognitive biases affect the most important strategic decisions made by the smartest managers in the best companies. Mergers routinely fail to deliver the expected synergies. Strategic plans often ignore competitive responses. And large investment projects are over budget and over time-over and over again.

In this article, we share the results of new research quantifying the financial benefits of processes that "debias" strategic decisions. The size of this prize makes a strong case for practicing behavioral strategy-a style of strategic decision making that incorporates the lessons of psychology. It starts with the recognition that even if we try, like Baron Münchhausen, to escape the swamp of biases by pulling ourselves up by our own hair, we are unlikely to succeed. Instead, we need new norms for activities such as managing meetings (for more on running unbiased meetings, see "Taking the bias out of meetings"), gathering data, discussing analogies, and stimulating debate that together can diminish the impact of cognitive biases on critical decisions. To support those new norms, we also need a simple language for recognizing and discussing biases, one that is grounded in the reality of corporate life, as opposed to the sometimes-arcane language of academia. All this represents a significant commitment and, in some organizations, a profound cultural change.

The value of good decision processes

Think of a large business decision your company made recently: a major acquisition, a large capital expenditure, a key technological choice, or a new-product launch. Three things went into it. The decision almost certainly involved some fact gathering and analysis. It relied on the insights and judgment of a number of executives (a number sometimes as small as one). And it was reached after a process-sometimes very formal, sometimes completely informal-turned the data and judgment into a decision.

Our research indicates that, contrary to what one might assume, good analysis in the hands of managers who have good judgment won't naturally yield good decisions. The third ingredient-the process-is also crucial. We discovered this by asking managers to report on both the nature of an important decision and the process through which it was reached. In all, we studied 1,048 major decisions made over the past five years, including investments in new products, M&A decisions, and large capital expenditures (Exhibit 1).

Exhibit 1

The research analyzed a variety of decisions.

We asked managers to report on the extent to which they had applied 17 practices in making that decision. Eight of these practices had to do with the quantity and detail of the analysis: did you, for example, build a detailed financial model or run sensitivity analyses? The others described the decision-making process: for instance, did you explicitly explore and discuss major uncertainties or discuss viewpoints that contradicted the senior leader's? We chose these process characteristics because in academic research and in our experience, they have proved effective at overcoming biases.

After controlling for factors like industry, geography, and company size, we used regression analysis to calculate how much of the variance in decision outcomes was explained by the quality of the process and how much by the quantity and detail of the analysis. The answer: process mattered more than analysis-by a factor of six (Exhibit 2). This finding does not mean that analysis is unimportant, as a closer look at the data reveals: almost no decisions in our sample made through a very strong process were backed by very poor analysis. Why? Because one of the things an unbiased decision-making process will do is ferret out poor analysis. The reverse is not true; superb analysis is useless unless the decision process gives it a fair hearing.
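
As a minimal illustration of this kind of variance comparison (not the study's actual model or data), the sketch below regresses a synthetic decision-outcome score separately on a "process quality" score and an "analysis quality" score and compares the variance each explains. All scores, coefficients, and the sample size are invented for illustration.

```python
# Illustrative sketch only: compare variance in decision outcomes explained by
# process quality versus analysis quality, using synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000

process = rng.normal(size=n)   # hypothetical process-quality score per decision
analysis = rng.normal(size=n)  # hypothetical analysis-quality score per decision
noise = rng.normal(size=n)

# Assume (for illustration) that process carries more weight than analysis.
outcome = 0.6 * process + 0.25 * analysis + noise

def r_squared(x, y):
    """Variance in y explained by a one-variable linear fit on x."""
    X = x.reshape(-1, 1)
    return LinearRegression().fit(X, y).score(X, y)

r2_process = r_squared(process, outcome)
r2_analysis = r_squared(analysis, outcome)
print(f"variance explained by process:  {r2_process:.3f}")
print(f"variance explained by analysis: {r2_analysis:.3f}")
print(f"ratio (process / analysis):     {r2_process / r2_analysis:.1f}x")
```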

Exhibit 2

Process, analysis, and industry variables explain decision-making effectiveness.

To get a sense of the value at stake, we also assessed the return on investment (ROI) of decisions characterized by a superior process. The analysis revealed that raising a company's game from the bottom to the top quartile on the decision-making process improved its ROI by 6.9 percentage points. The ROI advantage for top-quartile versus bottom-quartile analytics was 5.3 percentage points, further underscoring the tight relationship between process and analysis. Good process, in short, isn't just good hygiene; it's good business.

The building blocks of behavioral strategy

Any seasoned executive will of course recognize some biases and take them into account. That is what we do when we apply a discount factor to a plan from a direct report (correcting for that person's overoptimism). That is also what we do when we fear that one person's recommendation may be colored by self-interest and ask a neutral third party for an independent opinion.

However, academic research and empirical observation suggest that these corrections are too inexact and limited to be helpful. The prevalence of biases in corporate decisions is partly a function of habit, training, executive selection, and corporate culture. But most fundamentally, biases are pervasive because they are a product of human nature-hardwired and highly resistant to feedback, however brutal. For example, drivers laid up in hospitals for traffic accidents they themselves caused overestimate their driving abilities just as much as the rest of us do.

Improving strategic decision making therefore requires not only trying to limit our own (and others') biases but also orchestrating a decision-making process that will confront different biases and limit their impact. To use a judicial analogy, we cannot trust the judges or the jurors to be infallible; they are, after all, human. But as citizens, we can expect verdicts to be rendered by juries and trials to follow the rules of due process. It is through teamwork, and the process that organizes it, that we seek a high-quality outcome.

Building such a process for strategic decision making requires an understanding of the biases the process needs to address. In the discussion that follows, we focus on the subset of biases we have found to be most relevant for executives and classify those biases into five simple, business-oriented groupings. A familiarity with this classification is useful in itself because, as the psychologist and Nobel laureate in economics Daniel Kahneman has pointed out, the odds of defeating biases in a group setting rise when discussion of them is widespread. But familiarity alone isn't enough to ensure unbiased decision making, so as we discuss each family of bias, we also provide some general principles and specific examples of practices that can help counteract it.

Counter pattern-recognition biases by changing the angle of vision

The ability to identify patterns helps set humans apart but also carries with it a risk of misinterpreting conceptual relationships. Common pattern-recognition biases include saliency biases (which lead us to overweight recent or highly memorable events) and the confirmation bias (the tendency, once a hypothesis has been formed, to ignore evidence that would disprove it). Particularly imperiled are senior executives, whose deep experience boosts the odds that they will rely on analogies, from their own experience, that may turn out to be misleading. Whenever analogies, comparisons, or salient examples are used to justify a decision, and whenever convincing champions use their powers of persuasion to tell a compelling story, pattern-recognition biases may be at work.

Pattern recognition is second nature to all of us-and often quite valuable-so fighting biases associated with it is challenging. The best we can do is to change the angle of vision by encouraging participants to see facts in a different light and to test alternative hypotheses to explain those facts. This practice starts with things as simple as field and customer visits. It continues with meeting-management techniques such as reframing or role reversal, which encourage participants to formulate alternative explanations for the evidence with which they are presented. It can also leverage tools, such as competitive war games, that promote out-of-the-box thinking.

Sometimes, simply coaxing managers to articulate the experiences influencing them is valuable. According to Kleiner Perkins partner Randy Komisar, for example, a contentious discussion over manufacturing strategy at the start-up WebTV suddenly became much more manageable once it was clear that the preferences of executives about which strategy to pursue stemmed from their previous career experience. When that realization came, he told us, there was immediately a "sense of exhaling in the room." Managers with software experience were frightened about building hardware; managers with hardware experience were afraid of ceding control to contract manufacturers.

Getting these experiences into the open helped WebTV's management team become aware of the pattern recognition they triggered and see more clearly the pros and cons of both options. Ultimately, WebTV's executives decided both to outsource hardware production to large electronics makers and, heeding the worries of executives with hardware experience, to establish a manufacturing line in Mexico as a backup, in case the contractors did not deliver in time for the Christmas season. That in fact happened, and the backup plan, which would not have existed without a decision process that changed the angle of vision, "saved the company."

Another useful means of changing the angle of vision is to make it wider by creating a reasonably large-in our experience at least six-set of similar endeavors for comparative analysis. For example, in an effort to improve US military effectiveness in Iraq in 2004, Colonel Kalev Sepp-by himself, in 36 hours-developed a reference class of 53 similar counterinsurgency conflicts, complete with strategies and outcomes. This effort informed subsequent policy changes.

Counter action-oriented biases by recognizing uncertainty

Most executives rightly feel a need to take action. However, the actions we take are often prompted by excessive optimism about the future and especially about our own ability to influence it. Ask yourself how many plans you have reviewed that turned out to be based on overly optimistic forecasts of market potential or underestimated competitive responses. When you or your people feel-especially under pressure-an urge to take action and an attractive plan presents itself, chances are good that some elements of overconfidence have tainted it.

To make matters worse, the culture of many organizations suppresses uncertainty and rewards behavior that ignores it. For instance, in most organizations, an executive who projects great confidence in a plan is more likely to get it approved than one who lays out all the risks and uncertainties surrounding it. Seldom do we see confidence as a warning sign-a hint that overconfidence, overoptimism, and other action-oriented biases may be at work.

Superior decision-making processes counteract action-oriented biases by promoting the recognition of uncertainty. For example, it often helps to make a clear and explicit distinction between decision meetings, where leaders should embrace uncertainty while encouraging dissent, and implementation meetings, where it's time for executives to move forward together. Also valuable are tools-such as scenario planning, decision trees, and the "premortem" championed by research psychologist Gary Klein (for more on the premortem, see "Strategic decisions: When can you trust your gut?")-that force consideration of many potential outcomes. And at the time of a major decision, it's critical to discuss which metrics need to be monitored to highlight necessary course corrections quickly.
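
As a minimal sketch of what "forcing consideration of many potential outcomes" can look like in practice, here is a hypothetical scenario-weighted valuation. The scenario names, probabilities, and payoffs are all invented for illustration, not drawn from any actual tool described above.

```python
# Illustrative sketch: value a plan as a probability-weighted average over
# explicit scenarios instead of a single point forecast. Figures hypothetical.
scenarios = {
    # scenario name: (probability, NPV in $ millions)
    "strong demand, no competitive response": (0.25, 120),
    "strong demand, aggressive price war":    (0.35, 20),
    "weak demand":                            (0.40, -40),
}

# Sanity check: scenario probabilities should sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_npv = sum(p * v for p, v in scenarios.values())
worst_case = min(v for _, v in scenarios.values())
print(f"expected NPV: ${expected_npv:.0f}M, worst case: ${worst_case:.0f}M")
# 0.25*120 + 0.35*20 + 0.40*(-40) = $21M expected, -$40M worst case
```

Making the downside scenario explicit, rather than burying it in an optimistic point estimate, is precisely what helps confidence stop masquerading as evidence.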

Counter stability biases by shaking things up

In contrast to action biases, stability biases make us less prone to depart from the status quo than we should be. This category includes anchoring-the powerful impact an initial idea or number has on the subsequent strategic conversation. (For instance, last year's numbers are an implicit but extremely powerful anchor in any budget review.) Stability biases also include loss aversion-the well-documented tendency to feel losses more acutely than equivalent gains-and the sunk-cost fallacy, which can lead companies to hold on to businesses they should divest.

One way of diagnosing your company's susceptibility to stability biases is to compare decisions over time. For example, try mapping the percentage of total new investment each division of the company receives year after year. If that percentage is stable but the divisions' growth opportunities are not, this finding is cause for concern-and quite a common one. Our research indicates, for example, that in multibusiness corporations over a 15-year time horizon, there is a near-perfect correlation between a business unit's current share of the capital expenditure budget and its budget share in the previous year. A similar inertia often bedevils advertising budgets and R&D project pipelines.
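
A minimal sketch of this diagnostic, using invented capital-allocation figures, might look like the following; a year-over-year share correlation near 1.0 across divisions with diverging prospects is the warning sign described above.

```python
# Illustrative diagnostic (hypothetical data): how strongly does each division's
# share of this year's investment track its share of last year's?
import numpy as np

# rows = years, columns = divisions; entries = capital allocated
capex = np.array([
    [100.0, 60.0, 40.0],
    [110.0, 63.0, 42.0],
    [118.0, 70.0, 45.0],
    [130.0, 74.0, 48.0],
])

shares = capex / capex.sum(axis=1, keepdims=True)  # each year's allocation shares
prev, curr = shares[:-1].ravel(), shares[1:].ravel()  # paired (year t, year t+1)
inertia = np.corrcoef(prev, curr)[0, 1]
print(f"year-over-year correlation of budget shares: {inertia:.2f}")
# A value near 1.0, despite divergent growth opportunities, signals stability bias.
```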

One way to help managers shake things up is to establish stretch targets that are impossible to achieve through "business as usual." Zero-based (or clean-sheet) budgeting sounds promising, but in our experience companies use this approach only when they are in dire straits. An alternative is to start by reducing each reporting unit's budget by a fixed percentage (for instance, 10 percent). The resulting tough choices facilitate the redeployment of resources to more valuable opportunities. Finally, challenging budget allocations at a more granular level can help companies reprioritize their investments.

Counter interest biases by making them explicit

Misaligned incentives are a major source of bias. "Silo thinking," in which organizational units defend their own interests, is its most easily detectable manifestation. Furthermore, senior executives sometimes honestly view the goals of a company differently because of their different roles or functional expertise. Heated discussions in which participants seem to see issues from completely different perspectives often reflect the presence of different (and generally unspoken) interest biases.

The truth is that adopting a sufficiently broad (and realistic) definition of "interests," including reputation, career options, and individual preferences, leads to the inescapable conclusion that there will always be conflicts between one manager and another and between individual managers and the company as a whole. Strong decision-making processes explicitly account for diverging interests. For example, if before the time of a decision, strategists formulate precisely the criteria that will and won't be used to evaluate it, they make it more difficult for individual managers to change the terms of the debate to make their preferred actions seem more attractive. Similarly, populating meetings or teams with participants whose interests clash can reduce the likelihood that one set of interests will undermine thoughtful decision making.

Counter social biases by depersonalizing debate

Social biases are sometimes interpreted as corporate politics but in fact are deep-rooted human tendencies. Even when nothing is at stake, we tend to conform to the dominant views of the group we belong to (and of its leader). Many organizations compound these tendencies because of both strong corporate cultures and incentives to conform. An absence of dissent is a strong warning sign. Social biases also are likely to prevail in discussions where everyone in the room knows the views of the ultimate decision maker (and assumes that the leader is unlikely to change her mind).

Countless techniques exist to stimulate debate among executive teams, and many are simple to learn and practice. (For more on promoting debate, see suggestions from Kleiner Perkins' Randy Komisar and Xerox's Anne Mulcahy in "How we do it: Three executives reflect on strategic decision making.") But tools per se won't create debate: that is a matter of behavior. Genuine debate requires diversity in the backgrounds and personalities of the decision makers, a climate of trust, and a culture in which discussions are depersonalized.

Most crucially, debate calls for senior leaders who genuinely believe in the collective intelligence of a high-caliber management team. Such executives see themselves serving not only as the ultimate decision makers but also as the orchestrators of disciplined decision processes. They shape management teams with the humility to encourage dissent and the self-confidence and mutual trust to practice vigorous debate without damaging personal relationships. We do not suggest that CEOs should become humble listeners who rely solely on the consensus of their teams-that would substitute one simplistic stereotype for another. But we do believe that behavioral strategy will founder without their leadership and role modeling.

Four steps to adopting behavioral strategy

Our readers will probably recognize some of these ideas and tools as techniques they have used in the past. But techniques by themselves will not improve the quality of decisions. Nothing is easier, after all, than orchestrating a perfunctory debate to justify a decision already made (or thought to be made) by the CEO. Leaders who want to shape the decision-making style of their companies must commit themselves to a new path.

1. Decide which decisions warrant the effort

Some executives fear that applying the principles we describe here could be divisive, counterproductive, or simply too time consuming (for more on the dangers of decision paralysis, see the commentary by WPP's Sir Martin Sorrell in "How we do it: Three executives reflect on strategic decision making"). We share this concern and do not suggest applying these principles to all decisions. Here again, the judicial analogy is instructive. Just as higher standards of process apply in a capital case than in a proceeding before a small-claims court, companies can and should pay special attention to two types of decisions.

The first set consists of rare, one-of-a-kind strategic decisions. Major mergers and acquisitions, "bet the company" investments, and crucial technological choices fall in this category. In most companies, these decisions are made by a small subgroup of the executive team, using an ad hoc, informal, and often iterative process. The second set includes repetitive but high-stakes decisions that shape a company's strategy over time. In most companies, there are generally no more than one or two such crucial processes, such as R&D allocations in a pharmaceutical company, investment decisions in a private-equity firm, or capital expenditure decisions in a utility. Formal processes-often affected by biases-are typically in place to make these decisions.

2. Identify the biases most likely to affect critical decisions

Open discussion of the biases that may be undermining decision making is invaluable. It can be stimulated both by conducting postmortems of past decisions and by observing current decision processes. Are we at risk, in this meeting, of being too action oriented? Do I see someone who thinks he recognizes a pattern but whose choice of analogies seems misleading to me? Are we seeing biases combine to create dysfunctional patterns that, when repeated in an organization, can become cultural traits? For example, is the combination of social and status quo biases creating a culture of consensus-based inertia? This discussion will help surface the biases to which the decision process under review is particularly prone.

3. Select practices and tools to counter the most relevant biases

Companies should select mechanisms that are appropriate to the type of decision at hand, to their culture, and to the decision-making styles of their leaders. For instance, one company we know counters social biases by organizing, as part of its annual planning cycle, a systematic challenge by outsiders to its business units' plans. Another fights pattern-recognition biases by asking managers who present a recommendation to share the raw data supporting it, so other executives in this analytically minded company can try to discern alternative patterns.

If, as you read these lines, you have already thought of three reasons these techniques won't work in your own company's culture, you are probably right. The question is which ones will. Adopting behavioral strategy means not only embracing the broad principles set forth above but also selecting and tailoring specific debiasing practices to turn the principles into action.

4. Embed practices in formal processes

By embedding these practices in formal corporate operating procedures (such as capital-investment approval processes or R&D reviews), executives can ensure that such techniques are used with some regularity and not just when the ultimate decision maker feels unusually uncertain about which call to make. One reason it's important to embed these practices in recurring procedures is that everything we know about the tendency toward overconfidence suggests that it is unwise to rely on one's instincts to decide when to rely on one's instincts! Another is that good decision making requires practice as a management team: without regular opportunities, the team will agree in principle on the techniques it should use but lack the experience (and the mutual trust) to use them effectively.

The behavioral-strategy journey requires effort and the commitment of senior leadership, but the payoff-better decisions, not to mention more engaged managers-makes it one of the most valuable strategic investments organizations can make.

About the authors

Dan Lovallo is a professor at the University of Sydney, a senior research fellow at the Institute for Business Innovation at the University of California, Berkeley, and an adviser to McKinsey; Olivier Sibony is a director in McKinsey's Brussels office.

One of the most important questions facing leaders is when they should trust their gut instincts-an issue explored in a dialogue between Nobel laureate Daniel Kahneman and psychologist Gary Klein titled "Strategic decisions: When can you trust your gut?", published by McKinsey Quarterly in March 2010. Our work on flawed decisions suggests that leaders cannot prevent gut instinct from influencing their judgments. What they can do is identify situations where it is likely to be biased and then strengthen the decision process to reduce the resulting risk.

Our gut intuition accesses our accumulated experiences in a synthesized way, so that we can form judgments and take action without any logical, conscious consideration. Think about how we react when we inadvertently drive across the center line in a road or see a car start to pull out of a side turn unexpectedly. Our bodies are jolted alert, and we turn the steering wheel well before we have had time to think about what the appropriate reaction should be.

The brain appears to work in a similar way when we make more leisurely decisions. In fact, the latest findings in decision neuroscience suggest that our judgments are initiated by the unconscious weighing of emotional tags associated with our memories rather than by the conscious weighing of rational pros and cons: we start to feel something-often even before we are conscious of having thought anything. As a highly cerebral academic colleague recently commented, "I can't see a logical flaw in what you are saying, but it gives me a queasy feeling in my stomach."

Given the powerful influence of positive and negative emotions on our unconscious, it is tempting to argue that leaders should never trust their gut: they should make decisions based solely on objective, logical analysis. But this advice overlooks the fact that we can't get away from the influence of our gut instincts. They influence the way we frame a situation. They influence the options we choose to analyze. They cause us to consult some people and pay less attention to others. They encourage us to collect more data in one area but not in another. They influence the amount of time and effort we put into decisions. In other words, they infiltrate our decision making even when we are trying to be analytical and rational.

This means that to protect decisions against bias, we first need to know when we can trust our gut feelings, confident that they are drawing on appropriate experiences and emotions. There are four tests.

  1. The familiarity test: Have we frequently experienced identical or similar situations?

    Familiarity is important because our subconscious works on pattern recognition. If we have plenty of appropriate memories to scan, our judgment is likely to be sound; chess masters can make good chess moves in as few as six seconds. "Appropriate" is the key word here because many disastrous decisions have been based on experiences that turned out to be misleading-for instance, the decision General Matthew Broderick, an official of the US Department of Homeland Security, made on August 29, 2005, to delay initiating the Federal response following Hurricane Katrina.

    The way to judge appropriate familiarity is by examining the main uncertainties in a situation-do we have sufficient experience to make sound judgments about them? The main uncertainties facing Broderick were about whether the levees had been breached and how much danger people faced in New Orleans. Unfortunately, his previous experience with hurricanes was in cities above sea level. His learned response, of waiting for "ground truth," proved disastrous.

    Gary Klein's premortem technique, a way of identifying why a project could fail, helps surface these uncertainties. But we can also just develop a list of uncertainties and assess whether we have sufficient experience to judge them well.

  2. The feedback test: Did we get reliable feedback in past situations?

    Previous experience is useful to us only if we learned the right lessons. At the time we make a decision, our brains tag it with a positive emotion-recording it as a good judgment. Hence, without reliable feedback, our emotional tags can tell us that our past judgments were good, even though an objective assessment would record them as bad. For example, if we change jobs before the impact of a judgment is clear or if we have people filtering the information we receive and protecting us from bad news, we may not get the feedback we need. It is for this reason that "yes men" around leaders are so pernicious: they often eliminate the feedback process so important to the development of appropriate emotional tags.

  3. The measured-emotions test: Are the emotions we have experienced in similar or related situations measured?

    All memories come with emotional tags, but some are more highly charged than others. If a situation brings to mind highly charged emotions, these can unbalance our judgment. Knowing from personal experience that dogs can bite is different from having a traumatic childhood experience with dogs. The first will help you interact with dogs. The second can make you afraid of even the friendliest dog.

    A board chairman, for example, had personally lost a significant amount of money with a previous company when doing business in Russia. This traumatic experience made him wary of a proposal for a major Russian expansion in his new company. But he also realized that the experience could be biasing his judgment. He felt obliged to share his concerns but then asked the rest of the board to make the final decision.

  4. The independence test: Are we likely to be influenced by any inappropriate personal interests or attachments?

    If we are trying to decide between two office locations for an organization, one of which is much more personally convenient, we should be cautious. Our subconscious will have more positive emotional tags for the more convenient location. It is for this reason that it is standard practice to ask board members with personal interests in a particular decision to leave the meeting or to refrain from voting. Also for this reason, we enjoy the quip "turkeys will not vote for Christmas."

    A similar logic applies to personal attachments. When auditors, for example, were asked to demonstrate to a Harvard professor that their professional training enabled them to be objective in arriving at an audit opinion, regardless of the nature of the relationship they had with a company, they demonstrated the opposite.

If a situation fails even one of these four tests, we need to strengthen the decision process to reduce the risk of a bad outcome. There are usually three ways of doing this-stronger governance, additional experience and data, or more dialogue and challenge. Often, strong governance, in the form of a boss who can overrule a judgment, is the best safeguard. But a strong governance process can be hard to set up and expensive to maintain (think of the US Senate or a typical corporate board). So it is normally cheaper to look for safeguards based on experience and data or on dialogue and challenge.
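
As an illustration only, the four tests can be thought of as a simple screen; the field names and the example below are hypothetical, not a prescribed tool from the authors.

```python
# Illustrative sketch: the four tests as a simple screen. A "False" on any test
# suggests strengthening the process (governance, data, or challenge) rather
# than trusting the gut call.
from dataclasses import dataclass

@dataclass
class GutCheck:
    familiar: bool           # have we frequently faced identical or similar situations?
    feedback: bool           # did similar past judgments receive reliable feedback?
    measured_emotions: bool  # are the associated memories free of highly charged emotion?
    independent: bool        # are we free of personal interests or attachments?

    def safe_to_trust(self) -> bool:
        return all((self.familiar, self.feedback,
                    self.measured_emotions, self.independent))

# Example: a familiar situation, but one with a traumatic prior loss attached
# (like the board chairman's Russian expansion above).
check = GutCheck(familiar=True, feedback=True,
                 measured_emotions=False, independent=True)
if not check.safe_to_trust():
    print("Failed at least one test: add governance, data, or challenge.")
```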

In the 1990s, for example, Jack Welch knew he would face some tough decisions about how to exploit the Internet, so he chose experience as a solution to the biases he might have. He hired a personal Internet mentor who was more than 25 years his junior and encouraged his top managers to do the same. Warren Buffett recommends extra challenge as a solution to biases that arise during acquisitions. Whenever a company is paying part of the price with shares, he proposes using an "adviser against the deal," who would be compensated well only if it did not go through.

There are no universal safeguards. Premortems help surface uncertainties, but they do not protect against self-interest. Additional data can challenge assumptions but will not help a decision maker who is influenced by a strong emotional experience. If we are to make better decisions, we need to be thoughtful both about why our gut instincts might let us down and what the best safeguard is in each situation. We should never ignore our gut. But we should know when to rely on it and when to safeguard against it.

About the authors

Andrew Campbell and Jo Whitehead are directors of London's Ashridge Strategic Management Centre and coauthors, together with Sydney Finkelstein, of Think Again: Why Good Leaders Make Bad Decisions and How to Keep It From Happening to You (Harvard Business School Press, 2009).

Decision making lies at the heart of our personal and professional lives. Every day we make decisions. Some are small, domestic, and innocuous. Others are more important, affecting people's lives, livelihoods, and well-being. Inevitably, we make mistakes along the way. The daunting reality is that enormously important decisions made by intelligent, responsible people with the best information and intentions are sometimes hopelessly flawed.

Consider Jürgen Schrempp, CEO of Daimler-Benz. He led the merger of Chrysler and Daimler against internal opposition. Nine years later, Daimler was forced to virtually give Chrysler away in a private equity deal. Steve Russell, chief executive of Boots, the UK drugstore chain, launched a health care strategy designed to differentiate the stores from competitors and grow through new health care services such as dentistry. It turned out, though, that Boots managers did not have the skills needed to succeed in health care services, and many of these markets offered little profit potential. The strategy contributed to Russell's early departure from the top job. Brigadier General Matthew Broderick, chief of the Homeland Security Operations Center, who was responsible for alerting President Bush and other senior government officials if Hurricane Katrina breached the levees in New Orleans, went home on Monday, August 29, 2005, after reporting that they seemed to be holding, despite multiple reports of breaches.

All these executives were highly qualified for their jobs, and yet they made decisions that soon seemed clearly wrong. Why? And more important, how can we avoid making similar mistakes? This is the topic we've been exploring for the past four years, and the journey has taken us deep into a field called decision neuroscience. We began by assembling a database of 83 decisions that we felt were flawed at the time they were made. From our analysis of these cases, we concluded that flawed decisions start with errors of judgment made by influential individuals. Hence we needed to understand how these errors of judgment occur.

In the following pages, we will describe the conditions that promote errors of judgment and explore ways organizations can build protections into the decision-making process to reduce the risk of mistakes. We'll conclude by showing how two leading companies applied the approach we describe. To put all this in context, however, we first need to understand just how the human brain forms its judgments.

How the Brain Trips Up

We depend primarily on two hardwired processes for decision making. Our brains assess what's going on using pattern recognition, and we react to that information-or ignore it-because of emotional tags that are stored in our memories. Both of these processes are normally reliable; they are part of our evolutionary advantage. But in certain circumstances, both can let us down.

Pattern recognition is a complex process that integrates information from as many as 30 different parts of the brain. Faced with a new situation, we make assumptions based on prior experiences and judgments. Thus a chess master can assess a chess game and choose a high-quality move in as little as six seconds by drawing on patterns he or she has seen before. But pattern recognition can also mislead us. When we're dealing with seemingly familiar situations, our brains can cause us to think we understand them when we don't.

What happened to Matthew Broderick during Hurricane Katrina is instructive. Broderick had been involved in operations centers in Vietnam and in other military engagements, and he had led the Homeland Security Operations Center during previous hurricanes. These experiences had taught him that early reports surrounding a major event are often false: It's better to wait for the "ground truth" from a reliable source before acting. Unfortunately, he had no experience with a hurricane hitting a city built below sea level.

By late on August 29, some 12 hours after Katrina hit New Orleans, Broderick had received 17 reports of major flooding and levee breaches. But he also had gotten conflicting information. The Army Corps of Engineers had reported that it had no evidence of levee breaches, and a late afternoon CNN report from Bourbon Street in the French Quarter had shown city dwellers partying and claiming they had dodged the bullet. Broderick's pattern-recognition process told him that these contrary reports were the ground truth he was looking for. So before going home for the night, he issued a situation report stating that the levees had not been breached, although he did add that further assessment would be needed the next day.

Over the course of their careers, Sir Martin Sorrell, CEO of WPP; Randy Komisar, a partner at Kleiner Perkins Caufield & Byers; and Anne Mulcahy, Xerox's chairman and former CEO, have made strategic decisions of all shapes and sizes. Their experiences, which the executives share in these three commentaries, highlight a critical challenge: striking the right balance between thorough, unbiased decision-making processes, on the one hand, and timely action, on the other. While there's no silver bullet, taking concrete steps to cultivate internal critics, safeguard diversity of thought, clarify assumptions underlying different points of view, and force tough choices between business priorities can help.

WPP's Sir Martin Sorrell: 'Learn from mistakes and listen to feedback'

The reality is that leaders must, on the spur of the moment, be able to react rapidly and grasp opportunities. Ultimately, therefore, I think that the best process to reduce the risk of bad decisions-whatever series of tests, hurdles, and measuring sticks one applies-should be quick, flexible, and largely informal. It's important to experiment, to be open to intuition, and to listen to flashes of inspiration. This is not to say the process shouldn't be rigorous: run the analyses, suck up all the data, and include some formal processes as well. But don't ask hundreds of people. Carefully sound out the relevant constituencies-clients, suppliers, competitors-and try to find someone you trust who has no agenda about the issue at hand.

There will be mistakes, of course. The truth is we all make mistakes all the time. For instance, I know it's true that decision makers risk escalating their commitment to losing endeavors that they have an emotional stake in. I know because I've been guilty of that myself. However, the only way to avoid making mistakes is to avoid making decisions (or, at least, very few). But then the company would grind to a halt. Instead, learn from mistakes and listen to feedback.

About the author

Sir Martin Sorrell is chief executive officer of WPP, a leading advertising and marketing-services group. Sir Martin actively supports the advancement of international business schools, advising Harvard, IESE, the London Business School, and Indian School of Business, among others.

Kleiner Perkins' Randy Komisar: 'Balance out biases'

Before behavioral economics even had a name, it shook up Randy Komisar's career. He became aware of the then-nascent field while contemplating a graduate degree in economics, losing confidence in the dismal science as a result. Komisar ultimately shifted gears, becoming a lawyer and later pursuing a career in commerce. He cofounded Claris, served as CEO for LucasArts Entertainment and Crystal Dynamics, served as "virtual CEO" for a host of companies such as WebTV and TiVo and, since 2005, has been a partner at Kleiner Perkins Caufield & Byers, the Silicon Valley venture capital fund. Along the way, he has developed a distinct point of view on how to create executive teams and cultural environments that are conducive to good decision making. In a recent interview with McKinsey's Olivier Sibony and Allen Webb, Komisar provided practical advice for senior executives hoping to make good decisions in a world where bias is inevitable.

Harness bias

Rather than trying to tune out bias, my focus is on recognizing, encouraging, and balancing bias within effective decision making. I came to that conclusion as I was starting my career, when I had a chance to work with Bill Campbell, who is well known, particularly in Silicon Valley, as a leader and coach. Bill was the CEO of Intuit (where he's now chairman), he's on the Apple board, and he's a consigliere to Google.

What I observed back then was that Bill had this amazing ability to bring together a ragtag team of exceptionally talented people. Some had worked for successful companies, some had not. Some had been senior managers. Some had been individual contributors. Everybody brought to the table biases borne out of their domains and their experiences. Those experience-based biases probably are not that different at the psychological level from the behavioral biases that economists focus on today.

Bill was very capable at balancing out the biases around the table and coming up with really effective decisions and, more important, the groundwork for consensus-not necessarily unanimity, but consensus. I liken it to what I have always understood, true or false, about how President Kennedy ran his cabinet: that he used to assemble the smartest people he could, throw a difficult issue on the table, and watch them debate it. Then at some point he would end the debate, make a decision, and move on. It's also similar to the judicial process, where advocates come together to present every facet of a case, and a judge makes an informed determination. The advocates' biases actually work to the benefit of a good decision, rather than being something that needs to be mitigated.

Make a balance sheet

There's a methodology I've used within companies for making big, hard decisions that I introduced into Kleiner Perkins and that we have been using lately to help decide whether or not to invest in new ventures. It starts with assembling a group that is very diverse. If you look at my partners, you'd see an unruly gang of talented people with very different experiences, very different domain skills, and, consequently, very different opinions.

Starting with that, the notion is to put together a simple balance sheet where everybody around the table is asked to list points on both sides: "Tell me what is good about this opportunity; tell me what is bad about it. Do not tell me your judgment yet. I don't want to know." They start the process without having to justify and thereby freeze their opinions and instead are allowed to give their best insights and consider the ideas of others. Not surprisingly, smart people will uncover many of the same issues. But they may weigh them differently depending on their biases.

We do not ask for anyone's bottom line until everybody has spoken and the full balance sheet is revealed. I have noticed my own judgment often changes as I see the balance sheet fill out. I might have said, "We shouldn't do the deal for the following three reasons." But after creating a balance sheet, I might well look at it and say, "You know, it's probably worth doing for these two reasons."

The balance sheet process mitigates a lot of the friction that typically arises when people marshal the facts that support their case while ignoring those that don't. It also emphasizes to the group that each participant is smart and knowledgeable, that it was a difficult decision, and that there is ample room for the other judgment. By assembling everyone's insights rather than their conclusions, the discussion can focus on the biases and assumptions that lead to the opinions. An added bonus is that people start to see their own biases. Somebody will stand up and say, "You're expecting me to say the following three things, and I will. But I've also got to tell you about these other four things, which are probably not what you'd expect from me." Finally, opinion leaders have less sway because they don't signal their conclusions too early.

Although this may sound tedious and slow, we're able to move quickly. One reason is that we never try to achieve perfection-meaning 100 percent certainty-around a decision. We just can't get there in the timeframe necessary. The corollary is that we assume every decision needs to be tested, measured, and refined. If the test results come back positive, we proceed; if they're negative, we "course correct" quickly.
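
As a rough illustration of this protocol (not Kleiner Perkins' actual process), the mechanics amount to pooling everyone's insights before any conclusions are voiced. The names and entries below are hypothetical.

```python
# Illustrative sketch of the balance-sheet protocol described above: gather each
# participant's pros and cons first, reveal the combined sheet, and only then
# collect bottom-line judgments.
from collections import defaultdict

def run_balance_sheet(participants: dict) -> dict:
    """participants maps name -> {'pros': [...], 'cons': [...]}."""
    sheet = defaultdict(set)
    for views in participants.values():
        sheet["pros"].update(views["pros"])  # pool insights, not conclusions
        sheet["cons"].update(views["cons"])
    return {side: sorted(items) for side, items in sheet.items()}

inputs = {
    "partner_a": {"pros": ["large market"], "cons": ["unproven team"]},
    "partner_b": {"pros": ["large market", "defensible IP"],
                  "cons": ["long sales cycle"]},
}
full_sheet = run_balance_sheet(inputs)
print(full_sheet)  # judgments are solicited only after everyone sees this
```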

Create a culture where 'failure' is not a wrong answer

The book John Mullins and I recently wrote, Getting to Plan B, presents a way of building a culture of good decision making. The very simple premise is that Plan A most often fails, so we need a process by which to methodically test assumptions to get to a better Plan B.

The process starts with an acknowledgment that Plan A probably is based upon flawed assumptions, and that certain leap-of-faith questions are fundamental to arriving at a better answer. If we disagree on the decision, it's very likely that we have different assumptions on those critical questions-and we need to decide which assumptions are stronger, yours or mine. You end up teasing apart these assumptions through analogs: someone will say, "Joe did something like this." And then someone else will say, "Yes, but Joe's situation was different for the following reasons. Sally did something like this, and it failed." In that process, you don't get points for being right about your assumption, and I don't lose points for being wrong. We both get points for identifying the assumption, working on it, and agreeing that the facts have come in one way or the other.

What makes this culturally difficult in larger companies is that there is often a sense that Plan A is going to succeed. It's well analyzed. It's vetted. It's crisp. It looks great on an Excel spreadsheet. It becomes the plan of record to which everybody executes. And the execution of that plan does not usually contemplate testing assumptions on an ongoing basis to permit a course correction. So if the plan is wrong, which it most often is, then it is a total failure. The work has gone on too long. Too much money has been spent. Too many people have invested their time and attention on it. And careers can be hurt in the process. To create the right culture, you have to make very clear that a wrong answer is not "failure" unless it is ignored or uncorrectable.

Intuit, for instance, has found that many early-stage R&D projects went on too long. As in most companies, there was a belief that "we just need to put a little more time and money into these things." Within about 90 days after I had explained the Plan B process to Intuit, they had broken a set of projects into smaller hypotheses, put together a dashboard process for testing assumptions, and were starting to make go-no-go decisions at each step along the way. They reported back that teams were killing their own projects early because they now had metrics to guide them. And most important, they were not being blamed. Intuit's culture allows for rapid testing and "failure," and those who prove responsible and accountable in course correcting are rewarded with new projects.

Listen to the little voice

I think comfort with uncertainty and ambiguity is an important trait in a leader. That's not to say that they're ambiguous or uncertain or unclear, but they're not hiding behind some notion of black or white. When somebody's shutting down conversations because he is uncomfortable with the points of view in the room or with where the decision may be going, it usually leads to a culture where the best ideas no longer come to the top.

Now, there are cultures where that does seem to work, but I think those are exceptions. Steve Jobs seems to be able to run Apple exceedingly well in large part because Steve Jobs is an extraordinary person. But he's not a guy who tolerates a lot of diversity of opinion. Frankly, few leaders I meet, no matter how important they are in the press or how big their paychecks are, are that comfortable with diversity of opinion.

I love a leader who changes his or her opinion based upon the strength of the arguments around the table. It's great to see a leader concede that the decision's a hard one and may have to be retested. It's great to see a leader who will echo the little voice in the back of the room that has a different point of view-and thereby change the complexion of the discussion.

When I went to LucasArts, I can remember sitting down one day with a young woman two levels down in the sales organization. I said, "Do you think we could build our own sales force and distribution here? We've been going through distributors for a long time. Our margins are a lot smaller as a result. What do you think?" She shut the door, looked at me, and said, "I know that my boss would disagree with me and I know that my peers in marketing disagree with me, but I think we can do it." And so we did it, and the company's gross margin line probably grew fivefold in 12 months-all based upon this one little voice in the back of the room. You've got to be able to hear that voice.

About the author

Randy Komisar is a partner with Kleiner Perkins Caufield & Byers. He is the author of The Monk and the Riddle and coauthor, with John Mullins, of Getting to Plan B: Breaking Through to a Better Business Model. Randy has been a consulting professor of entrepreneurship at Stanford, where he still lectures.

Xerox's Anne Mulcahy: 'Timeliness trumps perfection'

When Anne Mulcahy became CEO of Xerox in 2001-as the company teetered on the edge of bankruptcy-she dove in with the confidence and decisiveness that had typified her career to date. But as she began to engineer the company's dramatic turnaround, something unexpected happened: Mulcahy started hearing rumblings that her leadership style was too decisive. As she recounts, "I got feedback that between my directness and my body language, within three nanoseconds people knew where I stood on everything and lined up to follow, and that if I didn't work on it, it really would be a problem." So Mulcahy listened. "I stopped getting on my feet," she explains, "and I worked hard at not jumping in, at making people express a point of view." This was the first of many lessons about how to ensure high-quality decision making that Mulcahy would go on to learn during her nine years as CEO. In a recent interview with McKinsey's Rik Kirkland, she distilled five suggestions for other senior leaders.

Cultivate internal critics

My own management style probably hasn't changed much in 20 years, but I learned to compensate for this by building a team that could counter some of my own weaknesses. You need internal critics: people who know what impact you're having and who have the courage to give you that feedback. I learned how to groom those critics early on, and that was really, really useful. This requires a certain comfort with confrontation, though, so it's a skill that has to be developed.

I started making a point of saying, "All right, John-Noel, what are you thinking? I need to hear." And this started to demonstrate that even if I did show my colors quickly, they could still take me on and I could still change my mind. The decisions that come out of allowing people to have different views-and treasuring the diversity of those views-are often harder to implement than what comes out of consensus decision making, but they're also better.

Force tough people choices

If you're sitting around the table with the wrong group of people, no process is going to drive good decision making. You need to lead with people decisions first. One of the easiest mistakes you can make is to compromise on people. It's very easy to close your eyes and say warm bodies are better than no bodies. The way to counter this bias is to introduce a "forced choice" process. What I mean by this is, you need a disciplined process for forcing discussion about a set of candidates and a position. At Xerox, we developed an HR process that required three candidates for every job.

We also established a group-assessment process, which helped us avoid what I call lazy people decisions, that is, biases against confrontation that could have marginalized the effectiveness of our team. You need to look for people who can strike the hard balance between courage and learning-people who have audacity in their convictions and know when to be unyielding but who are also good listeners and capable of adapting. That is the single most important leadership trait, outside of pure competence.

However you do it, you need to set a context for choice. Once you've done that, you must make sure you understand your own criteria for what first-class talent means, and you need to hold yourself accountable for creating a dialogue about it in a very honest way.

Force tough R&D choices

One of the rules of the road should be never to evaluate R&D programs individually. You should always decide on them within the context of an R&D portfolio. There needs to be an "is this better than that?" conversation-no one should get to personally champion his program in a vacuum. Any single idea can look great in isolation.

The portfolio process, like the "forced choice" process for people decisions, is really important because it gives you choices in context. It also takes some of the difficulty of killing individual projects out of the way. And it helps you hold yourself accountable for the full resourcing of the idea. If you decide to invest in a growth opportunity, it's because you've spent a lot of time making sure that it's resourced properly, that you've got the right skill sets to execute it, and that you're not just saying, "Sure, go off and do it" before you've thought through all those considerations.

This process was particularly important for us at Xerox. We kept an investment going for ten years in a technology called Solid Ink, which just came to market this year. We did this by putting a fence around it and a few other strategic priorities that we knew we wanted to protect. Portfolio decision making helped us drive those priorities forward even though most of the people who made the decisions wouldn't still be in their jobs to see the returns.

Know when to let go

One of the most important types of decision making is deciding what you are not going to do, what you need to eliminate in order to make room for strategic investments. This could mean shutting down a program. It could mean outsourcing part of the business. These are often the hardest decisions to make, and the ones that don't get nearly enough focus. Making a decision not to fund a new project is not painful. Making a decision to take out a historical program or investment is. It means taking out people and competencies and expertise. That's much, much harder.

The most difficult decisions are these legacy ones-the historical investments that are easier to chip away at than to end with one tough decision. This is where we make the most compromises-at the expense of our focus. A great example from Xerox: it took us too long to move from legacy investments in black-and-white imaging to future strategic investments in color and services.

An approach that can help this process involves establishing a decision framework (one akin to a zero-based budgeting philosophy) that says there's no preconceived commitment to a legacy business. It will get discussed in the context of opportunities for future investment like all the rest. But to make this decision process work, you need to make sure to create a balance between the people who can champion and advocate the future and those who own-and are very invested in-the past.

Strike the right risk balance

Decisiveness is about timeliness. And timeliness trumps perfection. The most damaging decisions are the missed opportunities, the decisions that didn't get made in time. If you're creating a category of bad decisions you've made, you need to include all the decisions you didn't get to make because you missed the window of opportunity.

These days, everyone is risk averse. Unfortunately, people define risk as something you avoid rather than something you take. But taking risks is critical to your decision-making effectiveness and growth, and most companies have taken a large step backwards because of the current climate. I was CEO of Xerox for five years before we really got back into the acquisition market, even though we knew we needed to acquire some things rather than develop them internally. But we got very conservative, very risk averse, and also too data driven. By the time we would reach a decision that some technology was going to be a home run, it had either already been bought or was so expensive we couldn't afford it.

Decisions have shelf lives, so you really need to put tight timeframes on your process. I would so much rather live with the outcome of making a few bad decisions than miss a boatload of good ones. Some of it flies in the face of good process and just requires good gut. So when trying to take bias out of decision making, you need to be really cautious not to take instinct, courage, and gut out as well.

About the author

Anne Mulcahy is chairman and former CEO of Xerox. She is a director on the boards of Catalyst, Citigroup, Johnson & Johnson, and the Washington Post Company, as well as chair of the board of trustees for Save the Children.

Learning to let go: Making better exit decisions

When General Motors launched Saturn, in 1985, the small-car division was GM's response to surging demand for Japanese brands. At first, consumers were very receptive to what was billed as "a new kind of car company," but sales peaked in 1994 and then drifted steadily downward. GM reorganized the division, taking away some of its autonomy in order to leverage the parent company's economies of scale, and in 2004 GM agreed to invest a further $3 billion to rejuvenate the brand. But 21 years and billions of dollars after its founding, the division has yet to earn a profit. Similarly, Polaroid, the pioneer of instant photography and the employer of more than 10,000 people in the 1980s, failed to find a niche in the digital market. A series of layoffs and restructurings culminated in bankruptcy, in October 2001.

These stories illustrate a common business problem: staying too long with a losing venture. Faced with the prospect of exiting a project, a business, or an industry, executives tend to hang on despite clear signs that it's time to bail out. Indeed, when companies do finally exit, the spur is often the arrival of a new senior executive or a crisis, such as a seriously downgraded credit rating.

Research bears out the tendency of companies to linger. One study showed that as a business ages, the average total return to shareholders tends to decline. For most of the divestitures in the sample, the seller would have received a higher price had it sold earlier. According to our analysis of a broad cross-section of US companies from 1993 to 2004, the probability that a failing business will grow appreciably or become profitable within three years was less than 35 percent. Finally, researchers who studied the entry and exit patterns of businesses across industries found that companies are more likely to exit at the troughs of business cycles-usually the worst time to sell.

Why is it so difficult to divest a business at the right time or to exit a failing project and redirect corporate resources? Many factors play a role, from the fact that managers who shepherd an exit often must eliminate their own jobs to the costs that companies incur for layoffs, worker buyouts, and accelerated depreciation. Yet a primary reason is the psychological biases that affect human decision making and lead executives astray when they confront an unsuccessful enterprise or initiative. Such biases routinely cause companies to ignore danger signs, to refrain from adjusting goals in the face of new information, and to throw good money after bad.

In contrast to other important corporate decisions, such as whether to make acquisitions or enter new markets, bad timing in exit decisions tends to go in one direction, since companies rarely exit or divest too early. An awareness of this fact should make it easier to avoid errors-and does, if companies identify the biases at play, determine where in the decision-making process they crop up, and then adopt mechanisms to minimize their impact. Techniques such as contingent road maps and tools borrowed from private equity firms can help companies to decide objectively whether they should halt a failing project or business and to navigate the complexities of the exit.

The psychological biases at play

The decision-making process for exiting a project, business, or industry has three steps. First, a well-run company routinely assesses whether its products, internal projects, and business units are meeting expectations. If they aren't, the second step is the difficult decision about whether to shut them down or divest if they can't be improved. Finally, executives tackle the nitty-gritty details of exiting.

Each step of this process is vulnerable to cognitive biases that can undermine objective decision making. Four biases have a significant impact: the confirmation bias, the sunk-cost fallacy, the escalation of commitment, and anchoring and adjustment. We explore the psychology behind each one, as well as its influence on decisions (Exhibit 1).

Exhibit 1

Four cognitive biases significantly affect exit decisions.

Analyzing the project

Let's start with a brief test of a person's ability to analyze hypotheses. Imagine that someone deals four cards from a deck, each with a number printed on one side and a letter on the other; the visible faces show a vowel (U), a consonant, an odd number (7), and an even number (8). Which two cards would you flip over to test the assertion, "If a card has a vowel on one side, then there must be an odd number on the other side"?

Most people correctly choose the U but then incorrectly select 7. This pattern illustrates the confirmation bias: people tend to seek information that supports their point of view and to discount information that doesn't. An odd number opposite U confirms the statement, while an even number refutes it. But the 7 doesn't provide any new information-a vowel on the other side confirms the assertion, but a consonant doesn't reveal anything, since consonants can have even or odd numbers on their flip sides. The correct choice is the 8 because it could reveal something: if there is a vowel on the other side, the statement is false.
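For readers who want to see the logic mechanically, here is a minimal sketch in Python that enumerates which flips could disprove the rule. The card faces and the rule come from the text above; the stand-in consonant "K" and the candidate hidden faces are illustrative assumptions, not part of the original test.

    # Wason-style card test from the text: which cards are worth flipping?
    VOWELS = set("AEIOU")

    # Visible faces of the four cards: a vowel, a consonant (any one will
    # do; "K" is a stand-in), an odd number, and an even number.
    VISIBLE = ["U", "K", 7, 8]

    def rule_holds(letter, number):
        # "If a card has a vowel on one side, then there must be an odd
        # number on the other side."
        return (letter not in VOWELS) or (number % 2 == 1)

    def hidden_candidates(face):
        # Letters hide a number; numbers hide a letter. One candidate of
        # each kind suffices to test whether a flip could be informative.
        return [1, 2] if isinstance(face, str) else ["A", "K"]

    def can_falsify(face):
        # A flip is informative only if some hidden face would break the
        # rule; otherwise it can merely confirm what we already believe.
        for hidden in hidden_candidates(face):
            letter, number = (face, hidden) if isinstance(face, str) else (hidden, face)
            if not rule_holds(letter, number):
                return True
        return False

    for face in VISIBLE:
        verdict = "can falsify the rule" if can_falsify(face) else "can only confirm it"
        print(face, "->", verdict)

Running the sketch reports that only U and 8 can falsify the rule, which is the point of the exercise: the informative choices are the ones that could prove you wrong, not the ones that could prove you right.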

Now imagine a group of executives evaluating a project to see if it meets performance hurdles and if its revenues and costs match the initial estimates. Just as most people choose cards that support a statement rather than those that could contradict it, business evaluators rarely seek data that might disprove the contention that a troubled project or business will eventually come around. Instead, they seek market research trumpeting a successful launch, quality-control estimates predicting that a product will be reliable, or forecasts of production costs and start-up times that would confirm the success of the turnaround effort. Indeed, reports of weak demand, tepid customer satisfaction, or cost overruns often prompt the commissioning of additional reports that contradict the negative ones.

Consider the fate of a US beer maker, Joseph Schlitz Brewing. In the early 1970s, executives at the company decided to use a cheaper brewing process, citing market research suggesting that consumers couldn't tell beers apart. Although they received constant evidence, in the form of falling sales, that customers found the taste of the beer brewed with the new process noticeably worse, the executives stuck with their low-cost strategy too long. Schlitz, once the third-largest brewer in the United States, went into decline and was acquired by rival Stroh in 1982. Likewise, when Unilever launched a new Persil laundry detergent in the United Kingdom, in 1994, the company successfully tested the formula on new clothes but didn't seek disconfirming evidence, such as whether it would damage older clothing or react badly with common clothing dyes. Consumers discovered that it did, and Unilever eventually had to return to the old formula.

Deciding which projects to exit

At this stage, the sunk-cost fallacy is the key bias affecting the decision-making process. In deciding whether to exit, executives often focus on the unrecoverable money already spent or on the project-specific know-how and capabilities already developed. A related bias is the escalation of commitment: yet more resources are invested, even when all indicators point to failure. This misstep, typical of failing endeavors, often goes hand in hand with the sunk-cost fallacy, since large investments can induce the people who make them to spend more in an effort to justify the original costs, no matter how bleak the outlook. When anyone in a meeting justifies future costs by pointing to past ones, red flags should go up; what's required instead is a levelheaded assessment of the future prospects of a project or business.

Vancouver's Expo 86 is a classic example. The initial budget, CAN $78 million in 1978, had ballooned to CAN $1.5 billion by 1985, leaving a deficit of more than CAN $300 million. During those seven years, the expo received several cash infusions because of the provincial government's commitment to the project, and outrageous attendance estimates were used to justify the added expense (the confirmation bias at play). Predictions of 12.5 million visitors, a number that would already have strained Vancouver's infrastructure, grew at one point to 28 million-roughly Canada's population at the time. Moreover, Canadians had seen budget deficits for big events before: the 1967 Montreal Exposition lost CAN $285 million-six times early estimates-and the 1976 Montreal Olympics lost more than CAN $1 billion, though no deficit had been expected.

Contrast that with the story of the Cincinnati subway. Construction began in 1920. When the $6 million budget ran out, in 1927, the city's leaders decided that Cincinnati no longer needed the subway, a conclusion supported by studies from independent experts. Construction was halted even though crews had already finished the tunnels. The idea for the subway had been conceived in 1884, and the project was supported by Republicans and Democrats alike, so the decision to stop was no whim; World War I and shifting demographics had altered the equation. Fortunately for Cincinnatians, referendums over the past 80 years to raise funds for completing the subway have all failed.

Proceeding with the cancellation

The final bias is anchoring and adjustment: decision makers don't sufficiently adjust future estimates away from an initial value. Early estimates can influence decisions in many business situations, and this bias is particularly relevant in divestment decisions. There are three possible anchors. One is tied to the sunk cost, which the owner may hope to recover. Another is a previous valuation, perhaps made in better times. The third-the price paid previously for other businesses in the same industry-often comes up during merger waves, as it did recently in the consolidation of dot-com companies. If the first company sold for, say, $1 billion, other owners may think that their companies are worth that much too, even though buyers often target the best, most valuable company first.

The sale of PointCast, which in the 1990s was one of the earliest providers of personalized news and information over the Internet, shows this bias at work. The company had 1.5 million users and $5 million in annual advertising revenue when Rupert Murdoch's News Corporation (NewsCorp) offered $450 million to acquire it. The deal was never finalized, however, and shortly thereafter problems arose. Customers complained of slow service and began defecting to Yahoo! and other rivals. In the next two years, a number of companies considered buying PointCast, but the offer prices kept dropping. In the end, it was sold to Infogate for $7 million. PointCast's executives may well have anchored their expectations on the first figure, making them reluctant to accept subsequent lower offers.

Axing a project that flops is relatively straightforward, but exiting a business or an industry is more complex: companies can more easily reallocate resources-especially human resources-from terminated projects than from failed businesses. Higher investments, which loom larger in decision making, are typically tied up in an ongoing business rather than in an internal project. The anguish executives often feel when they must fire colleagues also partly explains why many closures don't occur until after a change in the executive suite. Divestiture, however, is easier than closure because the business can be sold to another owner; selling a project to another company is much more difficult, if it is possible at all.

When a company decides to exit an entire business, the characteristics of the company and the industry can influence the decision-making process. If a flagging division is the only problematic unit in an otherwise healthy company, for instance, all else being equal, managers can sell or close it more easily than they could if it were the core business, where exit would likely mean the company's death. (Managers might still sell in this case, but we recognize that it will be hard to do so.) It sometimes (though rarely) does make sense to hang on in a declining industry-for instance, if rivals are likely to exit soon, leaving the remaining company with a monopoly.

Becoming unbiased

Several techniques can mitigate the effects of the human biases that confound exit decision making. One way of overcoming the confirmation bias, for instance, is to assign someone new from the management team to assess a project. At a multinational energy and raw-materials company, a manager who was not part of an initial proposal must sign off on the project. If the R&D department claims that a prototype production process can ramp up to full speed in three months, for example, the production manager has to approve it. If the target isn't met, the production manager too is held accountable. Making executives responsible for the estimates of other people is a powerful check: managers are unlikely to agree to a target they cannot reach or to overestimate the chances that a project will be profitable. The likely result is more honest opinions.

Well-run private equity firms adopt these practices too. One leading US firm assigns independent partners to conduct periodic reviews of businesses in its portfolio. If Mr. Jones buys and initially oversees a company, for example, Ms. Smith is later charged with the task of reviewing the purchase and its ensuing performance. She takes her role seriously because she is also accountable for the unit's final performance. Although the process can't eliminate the possibility that the partners' collective judgment will be biased, the reviews not only make biases less likely but also make it more likely that underperforming companies will be sold before they drain the firm's equity.

Another tool that can help executives overcome biases and make more objective decisions is a contingent road map that lays out signposts to guide decision makers through their options at predetermined checkpoints over the life of a project or business. Signposts mark the points when key uncertainties must be resolved, as well as the ensuing decisions and possible outcomes. For a contingent road map to be effective, specific choices must be assigned to each signpost before the project begins (or at least well before the project approaches the signpost). This system in effect supplies a precommitment that helps mitigate biases when the time to make the decision arrives.

One petrochemical company, for instance, created a road map for an unprofitable business unit that proposed a new catalyst technology in an attempt to turn itself around (Exhibit 2). The road map established specific targets-a tight range of outcomes-that the new technology had to achieve at a series of checkpoints over several years. It also set up exit rules if the business missed these targets.

Exhibit 2

A contingent road map establishes targets.
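To make the mechanics concrete, here is a minimal sketch in Python of how such a road map might be encoded. The signposts, metrics, target ranges, and exit rules below are hypothetical; the article describes the framework, not an implementation, and a real road map would be tailored to the business.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Signpost:
        checkpoint: str      # when the key uncertainty must be resolved
        metric: str          # what gets measured at that point
        target_low: float    # bottom of the precommitted target range
        target_high: float   # top of the precommitted target range
        on_miss: str         # exit rule agreed on before the project began

    # Hypothetical road map for a turnaround like the catalyst technology above.
    ROAD_MAP = [
        Signpost("end of year 1", "pilot-plant yield (%)", 70.0, 80.0,
                 "halt development and exit the business"),
        Signpost("mid-year 2", "unit cost ($/ton)", 90.0, 110.0,
                 "stop the scale-up and seek a buyer"),
        Signpost("end of year 3", "operating margin (%)", 5.0, 10.0,
                 "divest the unit"),
    ]

    def review(signpost: Signpost, observed: float) -> str:
        # The current signpost is never renegotiated; for simplicity, any
        # result outside the agreed range triggers the precommitted action.
        if signpost.target_low <= observed <= signpost.target_high:
            return (f"{signpost.checkpoint}: {signpost.metric} = {observed} "
                    f"-> on track, proceed to the next signpost")
        return (f"{signpost.checkpoint}: {signpost.metric} = {observed} "
                f"-> missed the target range; {signpost.on_miss}")

    print(review(ROAD_MAP[0], 74.0))   # within the range: continue
    print(review(ROAD_MAP[1], 130.0))  # outside the range: the exit rule fires

Encoding the rules up front is the precommitment the article describes: by the time a checkpoint arrives, the decision has in effect already been made, so the sunk-cost fallacy has far less room to operate.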

Road maps can also help to isolate the specific biases that may affect the corporate decision-making process. If a signpost suggests, for example, that a project or business should be shut down but executives decide that the company has invested too much time and money to stop, the sunk-cost fallacy and escalation-of-commitment bias are quite likely at work. Of course, the initial road map might have to be adjusted as new information arrives, but the changes, if any, should always be made solely to future signposts, not to the current one.

Contingent road maps prevent executives from changing the decision criteria in midstream unless there is a valid, objective reason. They help decision makers to focus on future expectations (rather than past performance) and to recognize uncertainty in an explicit way through the use of multiple potential paths. They limit the impact of the emotional sunk costs of executives in projects and businesses. And they help decision makers by removing the blame for unfavorable outcomes that have been specified in advance: the explicit recognition of problems gives an organization a chance to adapt, while a failure to recognize problems beforehand requires a change in strategy that is often psychologically and politically difficult to justify. Before the invasion of Iraq in 2003, for example, it was uncertain how US troops would be received there. If the Bush administration had publicly announced a contingency plan providing for the possibility of increased troop levels should an insurgency erupt, the president would most likely have had the political cover to adopt that strategy.

When companies are finally ready to sell a business, the decision makers can overcome any lingering anchoring and adjustment biases by using independent evaluators who have never seen the initial projections of its value. Uninfluenced by these earlier estimates, the reviews of such people will take into account nothing but the project's actual experience, such as the evolution of market share, competition, and costs. One leading private equity firm overcomes anchoring and other biases in decision making by routinely hiring independent evaluators, who bring a new set of eyes to older businesses in its portfolio.

There are ways to ease the emotional pain of shutting down or selling projects or businesses. If a company has several flagging ones, for example, they can be bundled together and exited all at once or at least in quick succession-the business equivalent of ripping a bandage off quickly. Such moves ensure that the psychological sense of failure that often accompanies an exit isn't revisited several times. A onetime disappointment is also easier to sell to stakeholders and capital markets, especially for a new CEO with a restructuring agenda.

In addition, companies can focus on exiting businesses with products and capabilities that are far from their core activities, as P&G did in 2002, when it divested and spun off certain products in order to focus on others with stronger growth prospects and a more central position in its corporate portfolio.

Although canceling a project or exiting a business may often be regarded as a sign of failure, such moves are really a perfectly normal part of the creative-destruction process. Companies need to realize that in this way they can free up their resources and improve their ability to embrace new market opportunities.

By neutralizing the cognitive biases that make it harder for executives to evaluate struggling ventures objectively, companies have a considerably better shot at making investments in ventures with strong growth prospects. The unacceptable alternative is to gamble away the company's resources on endeavors that are likely to fail in the long run no matter how much is invested in them.

About the authors

John Horn is a consultant in McKinsey's Washington, DC, office; Dan Lovallo is a professor at the Australian Graduate School of Management (of the University of New South Wales) as well as an adviser to the firm; Patrick Viguerie is a director in the Atlanta office.