Superforecasting: The Art and Science of Prediction

Philip E. Tetlock, 1954-

Book - 2015

"From one of the world's most highly regarded social scientists, a transformative book on the habits of mind that lead to the best predictions. Everyone would benefit from seeing further into the future, whether buying stocks, crafting policy, launching a new product, or simply planning the week's meals. Unfortunately, people tend to be terrible forecasters. As Wharton professor Philip Tetlock showed in a landmark 2005 study, even experts' predictions are only slightly better than chance. However, an important and underreported conclusion of that study was that some experts do have real foresight, and Tetlock has spent the past decade trying to figure out why. What makes some people so good? And can this talent be taught...? In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people--including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer--who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They've beaten other benchmarks, competitors, and prediction markets. They've even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters." In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden's compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn't require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course. Superforecasting offers the first demonstrably effective way to improve our ability to predict the future--whether in business, finance, politics, international affairs, or daily life--and is destined to become a modern classic"--

Published
New York : Crown, 2015.
Language
English
Main Author
Philip E. Tetlock, 1954- (author)
Other Authors
Dan Gardner, 1968- (author)
Physical Description
240 pages; 25 cm
ISBN
9780804136693
9780804136716
  • 1. An Optimistic Skeptic
  • 2. Illusions of Knowledge
  • 3. Keeping Score
  • 4. Superforecasters
  • 5. Supersmart?
  • 6. Superquants?
  • 7. Supernewsjunkies?
  • 8. Perpetual Beta
  • 9. Superteams
  • 10. The Leader's Dilemma
  • 11. Are They Really So Super?
  • 12. What's Next?
  • Epilogue
  • An Invitation
  • Appendix: Ten Commandments for Aspiring Superforecasters
  • Acknowledgments
  • Notes
  • Index
Review by Choice Review

Tetlock (Univ. of Pennsylvania) and Gardner (editor, Policy Options) report on the findings from the ongoing Good Judgment Project, which details how amateur forecasters are often more accurately tuned than analysts from the intelligence community. Using their Website www.goodjudgement.com over four years to recruit 2,800 ordinary citizens interested in current events, researchers asked volunteers nearly 500 questions and used publicly available sources to determine the probability that various events would occur. Some volunteers inevitably performed much better than the pack. The "superforecasters'" strategies form the core of the book and can be distilled into a handful of directives: (1) Use logic and data upon which to base predictions; (2) Eliminate personal biases; (3) Keep track of your findings; (4) Break problems into sub-problems; (5) Find counterarguments to each problem; (6) Think probabilistically, always recognizing that everything is uncertain; (7) Distinguish between what is known and unknown; (8) Be wary of your assumptions. These ordinary people regularly attained considerable accuracy through meticulous application of these unsurprisingly common-sense lessons. These prescriptions offer decision makers an opportunity to understand and react more intelligently to confusing current events. This book will complement courses in corporate strategy and decision making. Summing Up: Recommended. Undergraduates, graduates, professionals. --Jerry Paul Miller, Simmons College, Boston

Copyright American Library Association, used with permission.
Review by New York Times Review

A little after midnight, while writing this review, I took a break to get some beer from my local supermarket. As I stood in line the lights suddenly dimmed throughout the store. I must have looked puzzled. "We do that because less people come in this late," the clerk explained. "There are fewer customers, so we need less light?" I asked. "Correct," he said. His non sequitur had me leaving the store fortified with both a six-pack and the reinforced conviction that books on how to think should be required reading in high schools across the country. "Mindware: Tools for Smart Thinking," by the psychologist Richard E. Nisbett, and "Superforecasting: The Art and Science of Prediction," by the psychologist Philip E. Tetlock and the journalist Dan Gardner, are two such books. The six sections of "Mindware" offer a variety of perspectives on how we think: the role of the unconscious in our judgments and decisions; the lessons of behavioral economics; the principles of probability and statistics; recommendations for how to test your ideas; and two sections on reasoning and the nature of knowledge. Nisbett is famous for his groundbreaking work in several areas of psychology; Malcolm Gladwell called him "the most influential thinker in my life." And so a book from Nisbett on this important subject is bound to be met with high expectations. My verdict is mixed. If you are looking for a survey of the topics covered in the book's six sections, this is a good one. You'll learn about our overzealousness to see patterns, our hindsight bias, our loss aversion, the illusions of randomness and the importance of the scientific method, all in under 300 pages of text. But there isn't much in "Mindware" that is new, and if you've read some of the many recent books on the unconscious, randomness, decision making and pop economics, then the material covered here will be familiar to you. Nisbett writes clearly, and he takes his time with difficult concepts ranging from multiple regression (which answers the question, Given many variables that contribute to some outcome, what is the effect of each?) to dialectical reasoning (a method of argument for resolving opposing views in order to establish truth). But the dry tone of the book, along with Nisbett's practice of telling us what he is going to say and reiterating what he has just said, gives "Mindware" a textbook feel. Where "Mindware" addresses the issue of making sense of a complex world from many angles, "Superforecasting" focuses on one issue: how we form theories of what will happen in the future. "Superforecasting" is a sequel of sorts to Tetlock's 2005 book "Expert Political Judgment," in which he analyzed 82,361 predictions made by 284 experts in fields like political science, economics and journalism. He found that about 15 percent of events they claimed had little or no chance of happening did in fact happen, while about 27 percent of those labeled sure things didn't. Tetlock concluded that the experts did little better than a "dart-throwing chimp." The primate metaphor resurfaces in this new book. The authors single out Thomas Friedman of The New York Times for being an "exasperatingly evasive" forecaster, and they point to the inaccuracy of financial pundits at CNBC, whose performance prompted Jon Stewart to remark, "If I'd only followed CNBC's advice, I'd have a million dollars today - provided I'd started with a hundred million dollars." 
But unlike "Mindware," most of the material in "Superforecasting" is new, and includes a compendium of best practices for prediction. The book describes the findings of the Good Judgment Project, an effort started by Tetlock and his collaborator (and wife), Barbara Meilers, in 2011, which was funded by an arm of the American intelligence community. National security agencies have an obvious interest in Tetlock's project. By one estimate, the United States has 20,000 intelligence analysts working full time to assess issues like the probability of an Israeli sneak attack on Iran in the next month, or the departure of Greece from the eurozone by the end of the year. That is nearly four times the number of physics faculty at American research universities. And so money spent on improving results must have seemed like a good investment. It was. The Good Judgment Project used the Internet to recruit 2,800 volunteers, ordinary people with an interest in current affairs - a retired computer programmer, a social services worker, a homemaker. Over four years, the researchers asked them to employ public news and information sources to estimate the probability that various events would occur, posing nearly 500 questions of the sort intelligence analysts must answer every day. The volunteers were also asked to reaffirm or adjust those probabilities daily, until a question "expired" at a pre-announced closing date. Some of the volunteers performed strikingly better than the pack. Tetlock and Meilers studied their strategies, and what they learned about the thinking and methodology of these "superforecasters" is the heart of what is presented in the book. The central lessons of "Superforecasting" can be distilled into a handful of directives. Base predictions on data and logic, and try to eliminate personal bias. Keep track of records so that you know how accurate you (and others) are. Think in terms of probabilities and recognize that everything is uncertain. Unpack a question into its component parts, distinguishing between what is known and unknown, and scrutinizing your assumptions. Those lessons are hardly surprising, though the accuracy that ordinary people regularly attained through their meticulous application did amaze me. Unfortunately, few of us seem to follow these principles in our daily lives. The prescriptions in both "Superforecasting" and "Mindware" should offer us all an opportunity to understand and react more intelligently to the confusing world around us. LEONARD MLODINOW is the author of "The Upright Thinkers: The Human Journey From Living in Trees to Understanding the Cosmos" and "Subliminal: How Your Unconscious Mind Rules Your Behavior."

Copyright (c) The New York Times Company [October 18, 2015]
Review by Library Journal Review

Tetlock (Annenberg Univ. Professor, Univ. of Pennsylvania; Expert Political Judgment) and journalist Gardner (Future Babble) have consolidated their efforts in a quest to figure out how best to anticipate the future. This book is one of several on the subject (e.g., Daniel Kahneman's Thinking, Fast and Slow and Nate Silver's The Signal and the Noise) that are not necessarily big-data driven, nor confined to business applications. Tetlock and Gardner examine both theory and practical instances of forecasts that were successful and unsuccessful: the Cuban Missile Crisis and the Bay of Pigs are cited, as are examples concerning professional poker, medicine, and weather. The authors introduce Brier scores (a measurement of the accuracy of probabilistic predictions). Profiled is the Good Judgment Project, a consortium of volunteer forecasters co-led by Tetlock and others. What comes through clearly in this book is that the best forecasters are bright but not necessarily Big Bang Theory smart; they have curiosity and an ability to detach themselves from preconceived notions. The appendix gives abbreviated dicta for those who aspire to be superforecasters. VERDICT In the absence of a physical absolute, insightful forecasts are invaluable. Day traders, consumer marketers, and statisticians will find this book of value. --Steven Silkunas, Fernandina Beach, FL

(c) Copyright Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.
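
The Brier score that the Library Journal review mentions is, in its simplest form, the mean squared difference between forecast probabilities and what actually happened, so lower scores are better. The sketch below is illustrative only: the function name, probabilities, and outcomes are invented for the example and are not drawn from the book, which also describes a two-category scoring variant that ranges from 0 to 2 rather than 0 to 1.

# Illustrative sketch: a simple Brier score for yes/no forecasts.
# The probabilities and outcomes here are made-up examples, not data from the book.
def brier_score(forecast_probabilities, outcomes):
    """Mean squared error between forecast probabilities and outcomes (1 = happened, 0 = did not)."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probabilities, outcomes)) / len(outcomes)

# A forecaster who said 0.9, 0.8, and 0.1 for three events, of which the first
# two occurred and the third did not, scores (0.01 + 0.04 + 0.01) / 3 = 0.02.
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # ~0.02 -- lower is better; 0.0 is perfect
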
Review by Kirkus Book Review

Superforecasting--predicting events that will occur in the future--is not only possible; it accounts for an entire industry. World-renowned behavioral scientist Tetlock (Expert Political Judgment: How Good Is It? How Can We Know?, 2005, etc.) explains why some people are so good at it and how others can cultivate the skill. Global forecasting is hardly limited to predicting the weather. In fact, much of it has significantly higher stakes: everything from the potential of conflict in the South China Sea to the 2016 presidential election is at play. Legions of intelligent, well-educated, and well-paid analysts digest data and attempt to make hundreds of nuanced predictions each year. Remarkably, in his seminal 20-year study, the author established that, on average, these "experts" are "roughly as accurate as a dart-throwing chimpanzee." On the other hand, the superforecasters Tetlock has recruited are far more accurate: his team handily beat their competitors in a forecasting tournament sponsored by a U.S. government agency, providing more accurate answers than even those with access to classified files. And here's the rub: his all-volunteer team is composed entirely of so-called ordinary people with ordinary jobs. In this captivating book, Tetlock argues that success is all about the approach: foresight is not a gift but rather a product of a particular way of thinking. Superforecasters are open-minded, careful, curious, and self-critical. They make an initial prediction and then meticulously adjust this prediction based on each new piece of related information. In each chapter, the author augments his research with compelling interviews, anecdotes, and historical context, using accessible real-world examples to frame what could otherwise be dense subject matter. His writing is so engaging and his argument so tantalizing, readers will quickly be drawn into the challenge--in the appendix, the author provides a concise training manual to do just that. A must-read field guide for the intellectually curious.

Copyright (c) Kirkus Reviews, used with permission.

1 An Optimistic Skeptic

We are all forecasters. When we think about changing jobs, getting married, buying a home, making an investment, launching a product, or retiring, we decide based on how we expect the future will unfold. These expectations are forecasts. Often we do our own forecasting. But when big events happen--markets crash, wars loom, leaders tremble--we turn to the experts, those in the know. We look to people like Tom Friedman. If you are a White House staffer, you might find him in the Oval Office with the president of the United States, talking about the Middle East. If you are a Fortune 500 CEO, you might spot him in Davos, chatting in the lounge with hedge fund billionaires and Saudi princes. And if you don't frequent the White House or swanky Swiss hotels, you can read his New York Times columns and bestselling books that tell you what's happening now, why, and what will come next.1 Millions do. Like Tom Friedman, Bill Flack forecasts global events. But there is a lot less demand for his insights. For years, Bill worked for the US Department of Agriculture in Arizona--"part pick-and-shovel work, part spreadsheet"--but now he lives in Kearney, Nebraska. Bill is a native Cornhusker. He grew up in Madison, Nebraska, a farm town where his parents owned and published the Madison Star-Mail, a newspaper with lots of stories about local sports and county fairs. He was a good student in high school and he went on to get a bachelor of science degree from the University of Nebraska. From there, he went to the University of Arizona. He was aiming for a PhD in math, but he realized it was beyond his abilities--"I had my nose rubbed in my limitations" is how he puts it--and he dropped out. It wasn't wasted time, however. Classes in ornithology made Bill an avid bird-watcher, and because Arizona is a great place to see birds, he did fieldwork part-time for scientists, then got a job with the Department of Agriculture and stayed for a while. Bill is fifty-five and retired, although he says if someone offered him a job he would consider it. So he has free time. And he spends some of it forecasting. Bill has answered roughly three hundred questions like "Will Russia officially annex additional Ukrainian territory in the next three months?" and "In the next year, will any country withdraw from the eurozone?" They are questions that matter. And they're difficult. Corporations, banks, embassies, and intelligence agencies struggle to answer such questions all the time. "Will North Korea detonate a nuclear device before the end of this year?" "How many additional countries will report cases of the Ebola virus in the next eight months?" "Will India or Brazil become a permanent member of the UN Security Council in the next two years?" Some of the questions are downright obscure, at least for most of us. "Will NATO invite new countries to join the Membership Action Plan (MAP) in the next nine months?" "Will the Kurdistan Regional Government hold a referendum on national independence this year?" "If a non-Chinese telecommunications firm wins a contract to provide Internet services in the Shanghai Free Trade Zone in the next two years, will Chinese citizens have access to Facebook and/or Twitter?" When Bill first sees one of these questions, he may have no clue how to answer it. "What on earth is the Shanghai Free Trade Zone?" he may think. But he does his homework. He gathers facts, balances clashing arguments, and settles on an answer.
No one bases decisions on Bill Flack's forecasts, or asks Bill to share his thoughts on CNN. He has never been invited to Davos to sit on a panel with Tom Friedman. And that's unfortunate. Because Bill Flack is a remarkable forecaster. We know that because each one of Bill's predictions has been dated, recorded, and assessed for accuracy by independent scientific observers. His track record is excellent. Bill is not alone. There are thousands of others answering the same questions. All are volunteers. Most aren't as good as Bill, but about 2% are. They include engineers and lawyers, artists and scientists, Wall Streeters and Main Streeters, professors and students. We will meet many of them, including a mathematician, a filmmaker, and some retirees eager to share their underused talents. I call them superforecasters because that is what they are. Reliable evidence proves it. Explaining why they're so good, and how others can learn to do what they do, is my goal in this book. How our low-profile superforecasters compare with cerebral celebrities like Tom Friedman is an intriguing question, but it can't be answered because the accuracy of Friedman's forecasting has never been rigorously tested. Of course Friedman's fans and critics have opinions one way or the other--"he nailed the Arab Spring" or "he screwed up on the 2003 invasion of Iraq" or "he was prescient on NATO expansion." But there are no hard facts about Tom Friedman's track record, just endless opinions--and opinions on opinions.2 And that is business as usual. Every day, the news media deliver forecasts without reporting, or even asking, how good the forecasters who made the forecasts really are. Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between. And every day, all of us--leaders of nations, corporate executives, investors, and voters--make critical decisions on the basis of forecasts whose quality is unknown. Baseball managers wouldn't dream of getting out the checkbook to hire a player without consulting performance statistics. Even fans expect to see player stats on scoreboards and TV screens. And yet when it comes to the forecasters who help us make decisions that matter far more than any baseball game, we're content to be ignorant.3 In that light, relying on Bill Flack's forecasts looks quite reasonable. Indeed, relying on the forecasts of many readers of this book may prove quite reasonable, for it turns out that forecasting is not a "you have it or you don't" talent. It is a skill that can be cultivated. This book will show you how.

The One About the Chimp

I want to spoil the joke, so I'll give away the punch line: the average expert was roughly as accurate as a dart-throwing chimpanzee. You've probably heard that one before. It's famous--in some circles, infamous. It has popped up in the New York Times, the Wall Street Journal, the Financial Times, the Economist, and other outlets around the world. It goes like this: A researcher gathered a big group of experts--academics, pundits, and the like--to make thousands of predictions about the economy, stocks, elections, wars, and other issues of the day. Time passed, and when the researcher checked the accuracy of the predictions, he found that the average expert did about as well as random guessing. Except that's not the punch line because "random guessing" isn't funny. The punch line is about a dart-throwing chimpanzee. Because chimpanzees are funny.
I am that researcher and for a while I didn't mind the joke. My study was the most comprehensive assessment of expert judgment in the scientific literature. It was a long slog that took about twenty years, from 1984 to 2004, and the results were far richer and more constructive than the punch line suggested. But I didn't mind the joke because it raised awareness of my research (and, yes, scientists savor their fifteen minutes of fame too). And I myself had used the old "dart-throwing chimp" metaphor, so I couldn't complain too loudly. I also didn't mind because the joke makes a valid point. Open any newspaper, watch any TV news show, and you find experts who forecast what's coming. Some are cautious. More are bold and confident. A handful claim to be Olympian visionaries able to see decades into the future. With few exceptions, they are not in front of the cameras because they possess any proven skill at forecasting. Accuracy is seldom even mentioned. Old forecasts are like old news--soon forgotten--and pundits are almost never asked to reconcile what they said with what actually happened. The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough. Many have become wealthy peddling forecasting of untested value to corporate executives, government officials, and ordinary people who would never think of swallowing medicine of unknown efficacy and safety but who routinely pay for forecasts that are as dubious as elixirs sold from the back of a wagon. These people--and their customers--deserve a nudge in the ribs. I was happy to see my research used to give it to them. But I realized that as word of my work spread, its apparent meaning was mutating. What my research had shown was that the average expert had done little better than guessing on many of the political and economic questions I had posed. "Many" does not equal all. It was easiest to beat chance on the shortest-range questions that only required looking one year out, and accuracy fell off the further out experts tried to forecast--approaching the dart-throwing-chimpanzee level three to five years out. That was an important finding. It tells us something about the limits of expertise in a complex world--and the limits on what it might be possible for even superforecasters to achieve. But as in the children's game of "telephone," in which a phrase is whispered to one child who passes it on to another, and so on, and everyone is shocked at the end to discover how much it has changed, the actual message was garbled in the constant retelling and the subtleties were lost entirely. The message became "all expert forecasts are useless," which is nonsense. Some variations were even cruder--like "experts know no more than chimpanzees." My research had become a backstop reference for nihilists who see the future as inherently unpredictable and know-nothing populists who insist on preceding "expert" with "so-called." So I tired of the joke. My research did not support these more extreme conclusions, nor did I feel any affinity for them. Today, that is all the more true. There is plenty of room to stake out reasonable positions between the debunkers and the defenders of experts and their forecasts. On the one hand, the debunkers have a point. There are shady peddlers of questionable insights in the forecasting marketplace. There are also limits to foresight that may just not be surmountable. Our desire to reach into the future will always exceed our grasp. 
But debunkers go too far when they dismiss all forecasting as a fool's errand. I believe it is possible to see into the future, at least in some situations and to some extent, and that any intelligent, open-minded, and hardworking person can cultivate the requisite skills. Call me an "optimistic skeptic."

The Skeptic

To understand the "skeptic" half of that label, consider a young Tunisian man pushing a wooden handcart loaded with fruits and vegetables down a dusty road to a market in the Tunisian town of Sidi Bouzid. When the man was three, his father died. He supports his family by borrowing money to fill his cart, hoping to earn enough selling the produce to pay off the debt and have a little left over. It's the same grind every day. But this morning, the police approach the man and say they're going to take his scales because he has violated some regulation. He knows it's a lie. They're shaking him down. But he has no money. A policewoman slaps him and insults his dead father. They take his scales and his cart. The man goes to a town office to complain. He is told the official is busy in a meeting. Humiliated, furious, powerless, the man leaves.

1. Why single out Tom Friedman when so many other celebrity pundits could have served the purpose? The choice was driven by a simple formula: (status of pundit) X (difficulty of pinning down his/her forecasts) X (relevance of pundit's work to world politics). Highest score wins. Friedman has high status; his claims about possible futures are highly difficult to pin down--and his work is highly relevant to geopolitical forecasting. The choice of Friedman was in no way driven by an aversion to his editorial opinions. Indeed, I reveal in the last chapter a sneaky admiration for some aspects of his work. Exasperatingly evasive though Friedman can be as a forecaster, he proves to be a fabulous source of forecasting questions.

2. Again, this is not to imply that Friedman is unusual in this regard. Virtually every political pundit on the planet operates under the same tacit ground rules. They make countless claims about what lies ahead but couch their claims in such vague verbiage that it is impossible to test them. How should we interpret intriguing claims like "expansion of NATO could trigger a ferocious response from the Russian bear and may even lead to a new Cold War" or "the Arab Spring might signal that the days of unaccountable autocracy in the Arab world are numbered" or . . . ? The key terms in these semantic dances, may or could or might, are not accompanied by guidance on how to interpret them. Could could mean anything from a 0.0000001 chance of "a large asteroid striking our planet in the next one hundred years" to a 0.7 chance of "Hillary Clinton winning the presidency in 2016." All this makes it impossible to track accuracy across time and questions. It also gives pundits endless flexibility to claim credit when something happens (I told you it could) and to dodge blame when it does not (I merely said it could happen). We shall encounter many examples of such linguistic mischief.

3. It is as though we have collectively concluded that sizing up the starting lineup for the Yankees deserves greater care than sizing up the risk of genocide in the South Sudan. Of course the analogy between baseball and politics is imperfect. Baseball is played over and over under standard conditions. Politics is a quirky game in which the rules are continually being contorted and contested. So scoring political forecasting is much harder than compiling baseball statistics. But "harder" doesn't mean impossible. It turns out to be quite possible. There is also another objection to the analogy. Pundits do more than forecasting. They put events in historical perspective, offer explanations, engage in policy advocacy, and pose provocative questions. All true, but pundits also make lots of implicit or explicit forecasts. For instance, the historical analogies pundits invoke contain implicit forecasts: the Munich appeasement analogy is trotted out to support the conditional forecast "if you appease country X, it will ramp up its demands"; and the World War I analogy is trotted out to support "if you use threats, you will escalate the conflict." I submit that it is logically impossible to engage in policy advocacy (which pundits routinely do) without making assumptions about whether we would be better or worse off if we went down one or another policy path. Show me a pundit who does not make at least implicit forecasts and I will show you one who has faded into Zen-like irrelevance.

Excerpted from Superforecasting: The Art and Science of Prediction by Philip E. Tetlock, Dan Gardner. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.