Learn the art—and science—of risk management
In this exceptionally lucid, accessible book, one of the most highly regarded industry experts illuminates the delicate process of making decisions in an uncertain world and helps both lay people and professional risk managers understand the role of risk management in their work, their lives, and their businesses. This book will enable professional risk managers to truly grasp the concepts behind their tools, and it will enable their clients (investors) and their coworkers to understand them as well. Handy and easy-to-read, The Book of Risk provides a down-to-earth look at an exciting field that has practical applications for everyone.
Dan Borge, PhD (Clinton Corners, NY), was managing director and partner at Bankers Trust Company. He was with Bankers Trust for the last twenty years and was the architect of the first-ever risk management system implemented institutionally—Bankers Trust's renowned RAROC system. Prior to working at Bankers Trust, he designed airplanes at Boeing. He is an aeronautical engineer and has a PhD in finance from Harvard Business School.
What Is Risk Management and Why Should You Care?
Beliefs and Preferences.
Combining Art and Science: Volatility and Correlation.
Fundamental Strategies for Managing Risks.
The Enemy Within.
Grooming You to Be CEO.
The View from the CEO's Chair.
You Are in Charge of Your Life—What Are You Going to Do?
Risks and Opportunities.
(Please note: Figures and any other illustrations mentioned in the following text refer to the print edition of this work, and are not reproduced here.)
The term "risk management" is loaded with connotations of caution and timidity, carrying unpleasant reminders of dreary sessions with insurance agents and infuriating lectures from parents on the dangers of having a good time. People who think about risk management at all are likely to think of it as a grim necessity, at best.
From another perspective, however, risk management is absolutely riveting, for it is a way to gain more power over events that can change your life. Risk management can help you to seize opportunity, not just to avoid danger. Since good risk management can mean the difference between wealth and poverty, success and failure, life and death, it is worth some of your attention.
Risk management is now emerging as a profession in its own right. People who might have become lawyers, doctors, or engineers are now becoming risk managers, and the best of them are being handsomely rewarded for doing so. Risk managers are starting to make regular appearances at board meetings of major corporations and financial institutions. There are professional risk management journals and there are risk management conventions in Florida. The existence of risk management professionals is, on balance, a good thing, but you should not think that the arrival of the experts lets you off the hook. The experts can help you, but you cannot escape the responsibility of being the chief risk manager of your own life. Only you know what you really want and what you actually believe. You will be the one to suffer most from bad decisions made on your behalf.
The advent of risk management as a distinct profession has created a babble of obscure jargon that can confuse and frustrate the uninitiated. As usual, the experts want to discourage their paying customers from doing too much for themselves. One of the ways they do this is to give unfamiliar names to commonsense ideas, like a doctor telling you that you have a periorbital hematoma when all you have is a black eye.
Fortunately, the basic ideas of risk management are really very simple:
Risk means being exposed to the possibility of a bad outcome.
Risk management means taking deliberate action to shift the odds in your favor--increasing the odds of good outcomes and reducing the odds of bad outcomes.
The art of risk management is to adapt and apply these ideas to the particular situations you face in real life--whether you are making decisions in your profession or in your personal life.
The point is not to become a risk manager but to become a better risk manager, since we are all risk managers already. We make risk decisions every day, often without thinking about it. If you got out of bed this morning, you made a risk decision. If you lit up a cigarette, you made another. If you drove your car to work, you made another. If you put some money in the stock market, you made another. If you took a plane to Philadelphia, you made another (or perhaps two).
I am not suggesting that you agonize over every little choice you make, but I am suggesting that you can and should think more carefully about those decisions that could have important consequences for you. Without realizing it, you might be taking unnecessary or excessive risks. You might be too timid about taking reasonable risks that offer big rewards. You might not be aware of some of the choices available to you that risk management makes possible.
The possible applications of risk management are endless. The financial world is now a hotbed of risk management activity because financial institutions have been particularly vulnerable to surprising disasters, many of which could have been prevented by better risk management. Financial risks have also been easier to quantify than some other kinds of risks. The better-managed financial institutions can now estimate their risk exposure to changes in the financial markets every day.
Beyond finance, risk management in one form or another is being applied in medicine, engineering, meteorology, seismology, and many other fields where the consequences of uncertain events can be extreme. The Food and Drug Administration weighs the frequency and severity of a drug's side effects against the drug's effectiveness in fighting disease. Insurance companies price their coverage by estimating the odds and possible damage of a category five hurricane hitting Miami and of a Richter 8 earthquake hitting Los Angeles.
These kinds of calculations are becoming more relevant and more useful as the field of risk management advances, but risk management is not, and will never be, a magic formula that will always give you the right answer. It is a way of thinking that will give you better answers to better questions and by doing so helps you to shift the odds in your favor as you play the game of life.
The purpose of risk management is to improve the future, not to explain the past. This will seem obvious to everyone but risk management experts, who can become obsessed with fitting historical data to analytically convenient theoretical models, ignoring the possibility that the conditions that caused the historical events to occur will not apply in the future. The main problem with the future, of course, is that no one knows exactly what it will be. Life is uncertain.
People respond in different ways to the prospect of a life full of surprises. Fatalists adopt the attitude that what will be will be--and simply react to events as they unfold. They go with the flow. Fanatics deny uncertainty by believing passionately in their preferred vision of the future, ignoring all other possibilities. They are certain that they know what is going to happen and they act accordingly.
However, others take a more constructive attitude toward uncertainty. Scientists, for example, believe that much of life's uncertainty is due to ignorance, which can be reduced by finding truth. A modern geologist is not worried about the unpredictable actions of evil spirits living in rocks, but might be worried about the chances that a nearby volcano will erupt.
Scientists attack ignorance by applying the scientific method. The scientific method depends on logic, observable and repeatable evidence, and the suspension of judgment until that evidence is compelling. Scientists strive for objectivity, which is the absence of personal bias in forming theories and interpreting the evidence. A scientist with a personal stake in one theory is prone to overlook or dismiss evidence in favor of a competing theory. Ideally, any scientist should draw the same conclusions from the same facts. Since a personal perspective can subvert the search for truth, a scientist must be detached. Detachment not only guards against distortions of the truth; it puts aside any consideration of whether a particular discovery would be useful or valuable. In science, value judgments and personal beliefs are not admissible when weighing the evidence.
Of course, the actual scientific process does not rigidly conform to this ideal, because scientists are all too human. The history of science is as colorful as the rest of human history, a cavalcade of vanity, envy, prejudice, dishonesty, stubbornness, group-think, and other varieties of human weakness. It is amazing that science has achieved so much, given the vagaries of human nature. Perhaps it is because the scientific method gives too little credit to the creative intuition of real scientists. In any case, scientists do have a distinctive attitude toward uncertainty, characterized by a detached and patient search for verifiable truth.
Unlike the fatalist's passivity, the fanatic's blind faith, and the scientist's detachment, the risk manager has a pragmatic attitude toward uncertainty: The future may be uncertain but it is not unimaginable and what I do can shift the odds in my favor.
Unlike the scientist, the risk manager is not trying to be objective; he has an ax to grind, either for himself or someone else. Values and beliefs are to be acted upon, not dismissed. The risk manager's first concern is achieving useful results, not gaining a clearer picture of the truth for its own sake. As we will see, a risk manager has more to gain from some truths than from others, which dampens his enthusiasm for searching for truths that cannot help him decide what to do.
The risk manager, unlike the scientist, does not wait indefinitely for additional evidence to resolve uncertainty. He knows the opportunity to act might not come again so he must act now, even if the right answer is far from obvious.
The risk manager shares the scientist's intention to be rational, which sets them both apart from the fatalist and the fanatic. But this shared desire for rationality does not necessarily lead the risk manager and the scientist to the same conclusions from the same set of facts, for their assumptions and motives are often quite different. The scientist uses fact and logic to describe the world more accurately. The risk manager uses fact and logic, to the extent that it is practical, to determine what he ought to do to advance his interests.
Earlier we said that risk means being exposed to the possibility of a bad outcome. To get any further, we have to decide what we mean by a bad outcome. It is hard to exaggerate the importance of being as clear as possible about the meaning. As the saying goes, "If you don't know where you want to go, any road will get you there."
With the possible exception of death, there is no universal definition of "bad outcome." It depends on the specifics of the situation you are facing. If you are deciding which movie to go to, a bad outcome might be boredom. If you are deciding whether to take your raincoat, a bad outcome might be hypothermia. If you are deciding whether to take out a second mortgage to buy oil futures contracts, a bad outcome might be bankruptcy. To make matters worse, there may be more than one kind of "bad outcome" in a particular situation. You might have to weigh the pain of hypothermia against the pain of looking unfashionable in last year's raincoat.
One way of thinking about this need to be specific about risk is to imagine that your decision is the next move in a game. Before you decide how to move, you have to know what game you are playing and how the score is kept. The consequences of muddled objectives can be devastating.
If you are not the chief executive officer (CEO) of a major corporation, imagine that you are. You are trying to decide whether to build an expensive new factory based on untested but very promising technology. To help you think about the risks involved, I ask you to define the "bad outcome" you want to avoid in making this decision.
You say, "Losing money, obviously."
I ask, "The company's money or your bonus?"
You say, "The company's money."
I let that pass without comment. Now I ask, "By losing the company's money, do you mean taking a hit in this year's reported earnings or in the stock price?"
Being a finance major in business school, you say, "The stock price."
I let that pass also. Now I ask, "The stock price this year or three years from now, after the factory is operating?"
Since you are a finance major you know about present value and say, "This year's stock price says it all."
"Maybe," I say, "but what if the stock market doesn't understand the true potential of your new technology and unfairly discounts your stock for three years until the factory is actually finished and working?"
You tire of my annoying questions and have the security guards usher me out of your impressive new headquarters building.
I claim that none of the answers you gave me as CEO were obvious (although some were more politically correct than others). These different meanings of a bad outcome could have led to very different decisions and very different results.
Some questions almost always come up when you are deciding what risk means in your particular situation.
There are other questions that will come up when you are trying to be specific about the meaning of risk that applies to your particular situation, but we will leave these for later. Here we just want to emphasize that the clearer your understanding of what you are trying to avoid or to accomplish, the better your chances of making a good risk decision.
The Holy Grail of risk management is to find the best possible decision to make when faced with uncertainty. Usually we are thrilled to find a decision that is merely good, but there is actually an elegantly logical way to find the very best decision among all conceivable decisions. If this sounds too good to be true, it usually is. I have not forgotten my earlier statement that there are no magic formulas in risk management. But it is worth ignoring the messiness of real life for a moment to look at an idealized decision-making process. Knowing the ideal approach will help us to see what we are really doing when we make a risk decision and to judge the strengths and weaknesses of the more practical methods we use to muddle through in our actual decisions.
Suppose that I offered you the following opportunity: I will draw one card from a standard deck of playing cards. If the card is a spade, I will pay you $100. If it is the ace of spades, I will pay you $1,000. If the card is not a spade, I will pay you nothing. You must decide how much you are willing to pay me to play this game. The right answer to this problem is not apparent, despite the simplicity of the game itself. You might even doubt the existence of one and only one right answer. However, in our ideal world, there really is only one right answer and it is the very best decision you can make under the circumstances, although the answer only works for you, not your brother-in-law Harvey. His beliefs and preferences are different than yours. Let Harvey solve his own problems.
Although we do not know the right answer yet, we can eliminate some possibilities without too much difficulty. Unless you are masochistic, you won't pay me more than $1,000 to play this game, because that would leave you with no possibility whatever of coming out ahead. Although you can pay me nothing and refuse to play, you should be willing to pay me at least some small amount because the game gives you a good chance to win $100 and a shot at winning $1,000, and the worst outcome is that you lose the price of your ticket. I am sure you would pay at least a dime to play. What about a dollar? What about $50? The only question is where you stop and walk away, but you should play at some price. Is the right decision $2, or $10, or $80? At this point our idealized decision-making method comes into play. The game is pictured in Figure 1.1, assuming $20 is the price you are considering paying for a ticket to play.
The diagram shown as Figure 1.1 is an example of a decision tree, which is the foundation of risk management. In theory, any risk problem can be represented by a decision tree, although some decision trees are far too large and complex for even the fastest computer to handle.
The decision tree for our card game contains one uncertain event (draw a card). There are three possible outcomes for the event: ace of spades, spade but not the ace of spades, and not a spade. Each outcome has a payoff: $980, $80, or -$20. There is one decision to make: Play or do not play.
But we are not finished setting up the problem. We need important information from you to complete the decision tree.
First, we need your beliefs about the probability of each possible outcome--the odds that you would assign to drawing the ace of spades, the odds of drawing a spade but not the ace, and the odds of not drawing a spade.
We can use common sense to figure the odds. There are 52 cards in the deck and each card has the same chance to be drawn as any other card. So there is one chance in 52 of drawing any particular card such as the ace of spades. Therefore the probability of drawing the ace of spades is 1/52, or 1.9 percent. There are 13 spades in the deck including the ace, so there are 12 chances in 52 of drawing a spade that is not the ace, giving us a probability of 12/52, or 23.1 percent. There are 39 cards that are not spades, so there is a 75 percent probability (39/52) of drawing one of those. Because there are no other possible outcomes, our probabilities must add up to 100 percent, and they do (1.9 + 23.1 + 75 = 100).
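As a quick check of this arithmetic, the three probabilities can be computed with exact fractions. This is a minimal sketch in Python; the variable names are mine, not the book's.

```python
from fractions import Fraction

# Probabilities for one draw from a fair 52-card deck.
p_ace_of_spades = Fraction(1, 52)    # one specific card
p_spade_not_ace = Fraction(12, 52)   # 13 spades minus the ace
p_not_spade = Fraction(39, 52)       # 52 - 13 cards that are not spades

# The three outcomes are exhaustive and mutually exclusive,
# so the probabilities must sum to exactly 1.
assert p_ace_of_spades + p_spade_not_ace + p_not_spade == 1

# As rounded percentages, these match the text: 1.9, 23.1, and 75.
print(round(float(p_ace_of_spades) * 100, 1))  # 1.9
print(round(float(p_spade_not_ace) * 100, 1))  # 23.1
print(round(float(p_not_spade) * 100, 1))      # 75.0
```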
Be aware that your commonsense probability beliefs make the crucial assumptions that I am not a card shark and that the deck is not defective (no missing or duplicate cards). You are making a leap of faith that the game is not rigged against you. For example, a defective deck might be missing the ace of spades, giving you no chance at all of winning the $1,000 prize. This element of faith is always present, to some degree, in any decision you make under uncertainty, for it is you and you alone who must decide and there is never any outside source or expert that you can trust to be completely reliable. In the end, your beliefs are the only beliefs that matter. That is why we called your probability assessments beliefs--to remind us of their personal and subjective nature. To keep things simple, we will accept your assumption of a fair game.
As an aside, a rigorous scientist might take a very dim view of what you just did. After all, no one has produced any observations from well-controlled experiments with this particular dealer or deck of cards. He would not accept your assumption of a fair game without evidence. Having no data, the scientist would refuse to assign any odds, would refuse to play, and would pass up any chance of winning the $1,000 prize.
Finally, we need your preferences for the payoffs from each outcome. How much pleasure would you get from winning $980 or $80, and how much pain would you feel if you won nothing and lost your entry fee of $20? Vague descriptions of your mood state are not good enough; you must put numbers on your preferences. Is winning $980 twice as satisfying as winning $490? Probably not, but is it 1.8 times as satisfying or 1.6 times as satisfying? Every time you make a risk decision you are implicitly assigning numbers to your preferences. I am asking you to make your preferences conscious and explicit.
But how can this be done? It is easy for us to say that we like apples better than oranges. But saying how much better seems much harder and possibly irrelevant. It may be hard but it is not irrelevant, because whenever we choose to do something that involves giving up some of one thing to get more of another, we are implicitly saying by how much we prefer one to the other. One of the principal assertions of risk management is that it is better to be explicit about your preferences, because doing so allows you to apply the power of logic to make a better decision than you would make with fuzzy, dimly perceived preferences. Admittedly, having explicit preferences when choosing fruit at the grocery store may not improve your life very much, but having explicit preferences when plotting a financial strategy for your retirement may improve your life immensely.
Since your choices implicitly embody your preferences, one way to explicitly reveal your preferences is to ask you what you would choose to do in simple situations and deduce your preferences from your answers. This procedure will allow us to apply your newly explicit preferences to more complex decisions.
To explicitly reveal your preferences for money, I start by asking you the following question:
You own a lottery ticket that gives you a 50 percent chance of winning $5,000 and a 50 percent chance of winning nothing. At what price would you sell your ticket?
You think carefully and say, "I wouldn't sell my lottery ticket for less than $1,500."
I then ask you the same kind of question again and again, using different amounts of money each time. I take your answers to these questions and do some arithmetic to deduce your explicit preference, or utility, for money, which is plotted in Figure 1.2.
Note: When reading utility curves such as this, do not pay attention to the scale of the numbers, just the shape of the curve that the numbers describe. A utility of 6908 corresponding to a wealth of $0 could just as well have been a utility of 0, and a utility of 7601 corresponding to a wealth of $1,000 could just as well have been a utility of 1. What is significant is that all the other utility values, rescaled to fall between 0 and 1, would retain their relative relationships and thus preserve the shape of the utility curve. It is not the absolute amount of utility that matters but only the relative utilities of different amounts of money as compared to each other.
You can see from Figure 1.2 that your utility curve flattens as the payoff increases. Going from $500 to $1,000 is not as satisfying as going from zero to $500. The next dollar adds less satisfaction than the previous dollar. The tenth cookie is less satisfying than the first cookie. The diminishing satisfaction of getting more and more is a very common characteristic of people's preferences and when this is the case, people are willing to give something up to reduce their risk (their exposure to the possibility of a bad outcome). The experts call this attitude risk aversion.
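The utility values quoted in this chapter happen to be consistent with a logarithmic curve. The function below is my reconstruction for illustration only, not a formula given in the text, but it reproduces the quoted numbers and shows the flattening that defines risk aversion.

```python
import math

# Assumed reconstruction of the utility curve in Figure 1.2:
# u(w) = 1000 * ln(w + 1000), where w is the gain or loss in dollars.
def utility(wealth_change):
    return 1000 * math.log(wealth_change + 1000)

# It matches every utility value quoted in the chapter.
assert round(utility(0)) == 6908
assert round(utility(1000)) == 7601
assert round(utility(980)) == 7591
assert round(utility(80)) == 6985
assert round(utility(-20)) == 6888

# Diminishing satisfaction: the second $500 adds less than the first,
# which is exactly the flattening visible in the curve.
first_half = utility(500) - utility(0)
second_half = utility(1000) - utility(500)
assert second_half < first_half
```

A concave (flattening) curve like this is what makes a person willing to give something up to reduce risk.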
Just as with your beliefs, your preferences are the only preferences that matter for this decision. You are the decision maker, so your actions should be logically consistent with your preferences and your beliefs.
Now we have nearly everything we need to complete our decision tree and to find the one best decision for you. Adding your beliefs and preferences, the tree now looks like Figure 1.3, assuming for the moment that you are considering paying $20 to play the game.
If you pay $20 to play and the ace of spades is drawn, you gain $980 and experience a satisfaction of 7,591 utils (reading off your utility curve in Figure 1.2, on which $980 corresponds to 7,591 utils). If a spade other than the ace is drawn, you gain $80 and experience a satisfaction of 6,985 utils. If no spade is drawn, you lose $20 and experience a satisfaction of 6,888 utils. If you refuse to play, you gain or lose nothing and you experience a satisfaction of 6,908 utils.
Knowing all this might be interesting, but you still do not know what to do. How do you weigh the merits of playing at $20 against the merits of not playing? Playing at $20 involves risk (the possibility of a bad outcome) but also offers the possibility of a reward. Not playing avoids the risk but passes up any chance for the reward. Since you do not know in advance which outcome will occur, how do you decide? How do you weigh the risky choice against the riskless choice? We will use one of the greatest insights in the development of modern risk management.
Leonard J. Savage, a pioneer in decision theory, showed that it is logically consistent to compare the expected utility of a risky choice to the utility of a riskless choice. If the expected utility of the risky choice is higher than the utility of the riskless choice, then taking the risk is the logical thing to do. We can weigh two or more risky choices against one another by comparing their expected utilities. The best choice is the choice that has the highest expected utility.
But what, you ask, is expected utility? We will get to that shortly, but first we need to set the stage by clarifying what we mean by logical consistency.
In the end, we want to find a decision that is logically consistent with your beliefs (about the probabilities of all the possible outcomes) and your preferences (the amount of satisfaction you would experience from each possible outcome). You are the decision maker and we want to respect and reflect your interests. We also want to reject any decision that is blatantly illogical when compared to other decisions you would make in similar situations--like the simple gambles we used to assess your utility curve. You do not want to be illogical if you can avoid it. There are several requirements for consistency. As one example, if you prefer A over B, and you prefer B over C, logical consistency requires that you prefer A over C. If you are indifferent between A and B and you are indifferent between B and C, you must be indifferent between A and C. If you pick A over B and you are indifferent between B and C, you must pick A over C. These choices are nothing more than common sense, but consistency can be surprisingly hard to achieve when making decisions that involve risk.
Fortunately, using Savage's insight on expected utility, we can avoid these and other logical blunders. We are going to calculate the expected utility of each alternative decision and select the decision that has the highest expected utility. Then we are done. We will have chosen the best possible decision that is consistent with your beliefs, your preferences, and the facts of this particular situation.
Now, finally, what is expected utility? Expected utility is a weighted average of the utilities of all the possible outcomes that could flow from a particular decision, where higher-probability outcomes count more than lower-probability outcomes in calculating the average. For example, if a particular decision gives you an 80 percent chance of experiencing 1,000 utils and a 20 percent chance of experiencing -200 utils, the expected utility of making this decision is:
0.80 × 1,000 + 0.20 × (-200) = 760 expected utils
This calculation is intuitively reasonable because everything else being equal, an outcome with an 80 percent probability is much more important to your likely satisfaction than an outcome with 20 percent probability. The decision with the highest expected utility is anticipated to produce higher satisfaction, averaged over all its possible outcomes, than any other decision. In other words, each alternative decision puts you on a different path into the future and the best decision puts you on a path that offers the highest satisfaction on average, considering the likelihood of all its possible outcomes along the way.
Using expected utility to identify the best decision makes intuitive sense, but some fancy mathematics is required to demonstrate that maximizing expected utility is indeed the right thing to do (and there is lively debate among the experts on the finer points of this principle).
Finally, we have all that we need to determine the best decision for you. We have identified the decision you must make (whether to buy a $20 ticket to play this game). We have identified the uncertain event (drawing the card), all its possible outcomes (ace of spades, spade not the ace, not a spade), and the payoff from each outcome ($980, $80, or -$20). We have assessed your beliefs about the probabilities of each outcome and your preferences for the payoff from each outcome (expressed in units of utility). Last but not least, we have determined your objective (to find the decision that offers you the highest expected utility).
We now calculate the expected utility of each decision you could make. If you pay $20 to play, you have a 1.9 percent probability of 7,591 utils, a 23.1 percent probability of 6,985 utils, and a 75 percent probability of 6,888 utils. Your expected utility of playing is:
(.019 × 7,591) + (.231 × 6,985) + (.75 × 6,888) = 6,924
If you do not play, you have a 100 percent probability of 6,908 utils. Your expected utility of not playing is:
1.0 × 6,908 = 6,908
Because paying $20 to play has a higher expected utility (6,924 utils) than not playing (6,908 utils), you should be willing to pay at least $20 to play. In fact, you should be willing to pay more than $20.
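The comparison can be reproduced directly from the numbers above. This is a quick sketch; the variable names are mine.

```python
# Expected utility of playing at a $20 ticket price, using the
# probabilities and utilities quoted in the text.
eu_play = 0.019 * 7591 + 0.231 * 6985 + 0.75 * 6888

# Not playing is a sure thing: 100 percent chance of 6,908 utils.
eu_pass = 1.0 * 6908

print(round(eu_play))  # 6924
assert eu_play > eu_pass  # playing at $20 beats not playing
```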
To find the very highest price that you should be willing to pay, we find the price that offers the same expected utility as not playing, namely 6,908 utils. At that price you are indifferent between playing and not playing.
By calculating the expected utilities of a range of ticket prices, we find that a price of $35 offers the same expected utility (6,908) as not playing.
Now you know exactly what to do. If I charge you less than $35, you should play. However, if I try to charge any more than $35, you should refuse to play.
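That breakeven price can also be found numerically. The sketch below assumes the logarithmic utility curve reconstructed earlier (u(w) = 1000 × ln(w + 1000), my assumption fitted to the quoted utility values, not the book's formula) and searches for the price at which playing and not playing are equally attractive.

```python
import math

# Assumed utility curve fitted to the utility values quoted in the text.
def utility(wealth_change):
    return 1000 * math.log(wealth_change + 1000)

# Expected utility of playing at a given ticket price:
# win $1,000 on the ace (1/52), $100 on another spade (12/52),
# nothing otherwise (39/52); the ticket price is sunk in every case.
def expected_utility(price):
    return (1 / 52) * utility(1000 - price) \
         + (12 / 52) * utility(100 - price) \
         + (39 / 52) * utility(-price)

target = utility(0)  # utility of refusing to play

# Bisection: expected utility falls steadily as the price rises,
# so the breakeven price is where it crosses the target.
low, high = 0.0, 100.0
for _ in range(60):
    mid = (low + high) / 2
    if expected_utility(mid) > target:
        low = mid
    else:
        high = mid

print(round(low, 1))  # close to the $35 quoted in the text
assert 34 <= low <= 37
```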
This is the best possible decision for you to make if you are to be logically consistent with your stated preferences and beliefs. It is what you ought to do if faced with this situation. Remember that we are not trying to be scientific and search for truth; we are trying to make you better off. An academic psychologist might define the problem very differently, trying to predict what people, in general, will actually do if faced with this type of situation. Some people might be illogical and refuse to play. Others might pay too much to play. The psychologist is not giving advice, but is a neutral observer trying to discover patterns in human behavior. It is not his job to tell you what you ought to do in this particular situation. He is being descriptive and we are being prescriptive. He is detached, but we have an agenda.
Knowing the decision that you ought to make, what do you do if that decision seems wrong? If you feel uncomfortable with the decision dictated by logic, you may want to reconsider your assessments of your probability beliefs and preferences. Sometimes another iteration produces a more accurate picture of what your beliefs and preferences really are. But be careful that you do not bias your analysis by artificially forcing it to converge to a predetermined result that has an irrational appeal to you. The best decision is not always the one that you are instinctively drawn to.
Earlier we discussed the importance of precisely defining what we mean by risk. In this example, we did just that. First, we decided to keep score by quantifying the satisfaction you would derive from gaining or losing money (what we called your utility curve). Second, we did not include any other ways of keeping score, such as the forgone pleasure of my company if you refused to play the game. Third, we decided that the game ended with the drawn card and we completely ignored anything that might happen after the game, such as losing your winnings at next week's poker game.
The risk in this example is very specific. By buying a ticket, you take a risk by exposing yourself to a 75 percent chance of a bad outcome (losing the price of your ticket). You are willing to take this risk (up to a ticket price of $35) because you feel this risk is outweighed by the 1.9 percent chance at winning the $1,000 prize and the 23.1 percent chance at winning the $100 prize.
We went through this example to illustrate an idealized method for managing risks, even though many real-life risk problems are too complex to solve in this crisp and precise fashion. Again, in real life there are no magic formulas. However, our idealized risk management method captures virtually every feature of real risk problems and it tells us how we ought to solve risk problems (if only we could). It fully reflects our risk management philosophy of acting on your beliefs and your preferences to improve your future by helping you make better risk decisions. We are not conducting a scientific experiment to find new truth that describes the world more accurately. We are not detached or objective.
Our card game is a simple decision tree. To tackle harder problems, we add more decisions, more events, more outcomes, and more complex beliefs and preferences. If we can properly identify how all these additional elements relate to each other, we can construct a decision tree that can, in principle, be solved to reveal the very best decision--if only we have fast enough computers or large enough brains to do so. A large decision tree can be solved by successively transforming its bushy branches into simple branches that contain simple gambles.
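The "folding back" of a tree can be sketched in a few lines: chance nodes collapse into their expected utility, and decision nodes into their best branch. The node representation below is an illustrative assumption, not the book's notation.

```python
# Fold back a decision tree: leaves carry utilities, chance nodes are
# replaced by probability-weighted averages, decision nodes by the
# best of their alternatives.
def solve(node):
    kind = node["kind"]
    if kind == "outcome":  # leaf: the utility of a payoff
        return node["utility"]
    if kind == "chance":   # expected utility over the branches
        return sum(p * solve(child) for p, child in node["branches"])
    if kind == "decision":  # pick the alternative with highest value
        return max(solve(child) for child in node["choices"])
    raise ValueError(f"unknown node kind: {kind}")

# The card game from the text, as a two-branch decision:
# play at $20 (a gamble) or do not play (a sure 6,908 utils).
game = {"kind": "decision", "choices": [
    {"kind": "chance", "branches": [
        (0.019, {"kind": "outcome", "utility": 7591}),
        (0.231, {"kind": "outcome", "utility": 6985}),
        (0.750, {"kind": "outcome", "utility": 6888}),
    ]},
    {"kind": "outcome", "utility": 6908},
]}

print(round(solve(game)))  # 6924 -- playing at $20 is the better branch
```

A bushier tree is handled the same way; the recursion simply reduces each bushy branch to the simple gamble it is worth.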
Newcomers to decision trees often have difficulty with the concept of utility. The term "utilitarian" comes to mind and it has acquired an unpleasant connotation. A utilitarian is thought to be the sort of person who would cheerfully grind up his grandmother for soup if the grandmother's pain would be less than the diners' collective pleasure. Rest assured that we do not make such proposals here. Apart from logical consistency, we have little to say about what your preferences ought to be. You are perfectly free to value your grandmother above all else in the world, if you want to. We merely suggest that you do, in fact, have preferences. They influence your behavior every day whether you acknowledge it or not. You choose to do one thing over another partly because you prefer one thing to another, to a certain degree. We claim that you will usually be better off if you can express your preferences clearly and act on them rationally. We are not suggesting that this process is always easy or painless.
At a more technical level, one difficulty with decision trees is that they can become very large very quickly, growing exponentially as more decisions, events, and outcomes are added. A tree with 10 alternative decisions, 10 events for each decision, and 10 possible outcomes for each event has 1,000 possible outcomes to evaluate (10 × 10 × 10). Change 10 to 20 and you have 8,000 outcomes to grapple with, and so on.
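The arithmetic of that growth is easy to check, and it worsens quickly as trees deepen (a trivial sketch; the function names are mine):

```python
# Leaf count for a flat tree: decisions x events x outcomes per event.
def leaves(decisions, events, outcomes):
    return decisions * events * outcomes

assert leaves(10, 10, 10) == 1_000
assert leaves(20, 20, 20) == 8_000

# For deeper trees the growth is exponential in the number of levels:
# k branches per node, d levels deep gives k**d paths to evaluate.
def tree_paths(branching, depth):
    return branching ** depth

assert tree_paths(10, 3) == 1_000
assert tree_paths(10, 6) == 1_000_000
```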
Consequently, most real-life risk problems of any importance have to be simplified to be solved. The best risk managers are those who can simplify without sacrificing the essentials. Much of this book is about the progress we are making in doing just that.
By using our idealized risk management method as a benchmark, we have a much better grip on the essentials and what we might be sacrificing by taking shortcuts or making simplifying approximations. Judging ourselves against the ideal forces us to think more clearly about our problem: to sift the wheat from the chaff; to break the problem into smaller, more manageable pieces; to avoid unnecessary errors in logic; and to use the results more intelligently.