Review of Superforecasting by Philip E. Tetlock and Dan Gardner
The possibilities of history are far more various than the human intellect is designed to conceive.—Arthur Schlesinger, Jr. (1)
If I throw a ball in the air, I can say with rough confidence where it’s likely to land. If I wanted, I could calculate the trajectory of the ball fairly precisely. We know from careful observation that objects under the influence of gravity obey relatively simple, fixed rules.
The course of human history isn’t so easy to work out. Our collective behavior may be as much the product of physical law as any natural phenomenon. But it doesn’t seem to obey any simple, fixed rules. Human society is not a simple mechanical system, but rather a complex network of interacting agents. Groups of people can behave very differently in very similar circumstances. Social behavior is—to use the philosopher Karl Popper’s famous distinction—more cloud-like than clock-like. (2)
Forecasting is thus more art than science. Since we can’t reliably plot the course of history the way we would the trajectory of a ball, we can never be entirely sure what will happen. But we can nevertheless make better or worse guesses about the future.
In Superforecasting, Philip Tetlock and Dan Gardner show how some people are able to make significantly more accurate forecasts than others. Tetlock is a professor of political psychology at the University of Pennsylvania who has studied forecasting since the 1980s. Early on, he found that even people who analyzed political and economic events for a living—government officials, professors, journalists, and so on—weren’t much better than chance at predicting events just a few years away. Experts as a group were—to use Tetlock’s memorable phrase—not much more accurate than “a dart-throwing chimp”. (3)
But Tetlock also found that some people were consistently more accurate than others. In 2011, the Intelligence Advanced Research Projects Activity (IARPA)—which conducts research for the US intelligence community—started sponsoring a series of forecasting tournaments as part of an effort to improve forecasts in the wake of intelligence failures leading up to the Iraq war. Competitors were asked to provide real-time estimates of the probability that events of interest to intelligence analysts would happen within some set time frame: Would the president of Tunisia flee the country? Would an H5N1 bird flu outbreak kill more than 10 people in China? Competitors’ accuracy was evaluated using Brier scores, which measure the squared difference between forecast probabilities and what actually happened; the lower the score, the more accurate the forecaster.
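Since a Brier score is just arithmetic, a toy example may make it concrete. Below is a minimal Python sketch of the common two-outcome form, in which a perfect forecaster scores 0 and unvarying 50/50 guesses score 0.25; the tournament used Brier's original multi-category formulation, which runs from 0 to 2, but the idea is the same. The function and the sample numbers are mine, for illustration only.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts: probability (0.0-1.0) assigned to each event happening
    outcomes:  1 if the event happened, 0 if it did not
    Lower is better: 0.0 is perfect; always answering 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster says 90%, 20%, and 70% on three questions; the first and
# third events happen, the second does not.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ~0.047
```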
Tetlock and his collaborators found that the best forecasters weren’t just making lucky guesses. They showed relatively little regression to the mean, outperforming the control group on question after question. The top forecasters in any given year were likely to be among the top forecasters again the next year. They had a measurable forecasting skill. Over the course of four years, the top 2% of forecasters were 60% more accurate than the tournament average. They predicted events more accurately 300 days in advance than the average forecaster could 100 days in advance. Most impressively, they were substantially more accurate in their spare time than the intelligence community’s own analysts—who had access to classified information. (4)
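To see why the lack of regression to the mean is evidence of skill, consider a toy simulation (my illustration, not an analysis from the book). Each forecaster's yearly accuracy is modeled as a fixed skill component plus random noise; the skill_weight parameter is a made-up knob. If accuracy were pure luck, last year's leaders would drift back toward the middle of the pack; if it reflects stable skill, they stay near the top.

```python
import random

def average_rank_next_year(n=1000, skill_weight=0.7, seed=42):
    """Where do year one's top 2% land, on average, in year two's rankings?"""
    rng = random.Random(seed)
    skill = [rng.gauss(0, 1) for _ in range(n)]  # each forecaster's fixed ability

    def yearly_scores():
        # Accuracy = stable skill plus year-specific luck.
        return [skill_weight * s + (1 - skill_weight) * rng.gauss(0, 1) for s in skill]

    year1, year2 = yearly_scores(), yearly_scores()
    top = sorted(range(n), key=lambda i: year1[i], reverse=True)[: n // 50]  # top 2%
    rank2 = {i: r for r, i in enumerate(sorted(range(n), key=lambda i: year2[i], reverse=True))}
    return sum(rank2[i] for i in top) / len(top)

print(average_rank_next_year(skill_weight=0.7))  # mostly skill: leaders stay near the top
print(average_rank_next_year(skill_weight=0.0))  # pure luck: leaders regress to ~average rank
```

Tetlock's superforecasters looked like the first case: the same people kept landing at or near the top, year after year.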
These “superforecasters”—here I should say that I am one of the superforecasters interviewed for the book—were not well-known political experts. One of the most accurate was a computer programmer; another was a pharmacist; another worked in the Department of Agriculture. While superforecasters had ordinary-seeming day jobs, they were an unusually smart and knowledgeable group. When tested, they scored at least a standard deviation higher than the general population on tests of both fluid intelligence and political knowledge. (5) Many were retired or—like me—were employed less than full time, so they could spend hours every week researching the questions and breaking them down into manageable parts. If the question was whether Ebola would spread to Europe, they pored over epidemiological models, studied airline screening procedures, and read papers on the possible sexual transmission of the disease. They updated their forecasts often.
Superforecasters also scored highly on measures of “actively open-minded thinking”. That is, they are not committed in advance to any one idea of how the world works. They treat their ideas as hypotheses to be tested, rather than premises to be built on. They look for facts and arguments that might call their views into question. They generally see events as determined in part by chance rather than attributing them to divine will or fate. They approach problems from a variety of different angles. They are unusually willing to consider that they might be wrong.
The philosopher Isaiah Berlin famously divided thinkers into “foxes”, who look at problems from many different perspectives, and “hedgehogs”, who “relate everything to a single central vision”. (6) The dichotomy comes from the Greek poet Archilochus’ line that “The fox knows many things, but the hedgehog knows one big thing”. Tetlock found that people who were confident there are simple, readily available explanations for events—whether they were realists or liberal idealists, Marxists or supply-side economists—were practically worthless forecasters. People who saw themselves as foxes, who thought politics was complex and unpredictable, and who were willing to consider different points of view were consistently more accurate. Foxes were better forecasters.
So it shouldn’t be a surprise that our most prominent experts aren’t necessarily good forecasters. They aren’t famous for their accuracy, which for the most part has never been measured. They are famous because they sound convincing, because they are interesting or entertaining, or because they tell us what we want to hear. Their forecasts are often so vague as to be meaningless. Complex, nuanced pictures of reality—more accurate pictures of reality—frustrate policy makers and make poor television. When experts get something wrong, their mistakes are quickly forgotten or rationalized away. As a result, we reward experts who confidently make forecasts that are—as H.L. Mencken put it—“neat, plausible, and wrong”. (7)
That’s a problem. Our ability to make intelligent plans—both as individuals and as a society—depends on a realistic view of what’s ahead. But it’s a problem we can fix. We can hold the people who make predictions accountable. We can learn to think more like foxes. That means recognizing the limitations of what we can see and know. It means recognizing—as Tetlock and Gardner write—that “reality is profoundly complex, that seeing things clearly is a constant struggle when it can be done at all, and that human judgment must, therefore, be riddled with mistakes”.
Buy Superforecasting: The Art and Science of Prediction. NonProphets, the podcast I host on forecasting, is available here. Links to books I recommend, review, or cite are Amazon Affiliate links. I receive a small percentage of any purchases made through these links.
(1) Arthur Schlesinger, Jr. Speech at the JFK Presidential Library and Museum (November 27, 2006)
(2) Karl Popper, “An Approach to the Problem of Rationality and the Freedom of Man” (Lecture at the University of Washington, April 21, 1965)
(3) Philip E. Tetlock, Expert Political Judgment (2005)
(4) David Ignatius, “More Chatter than Needed” in The Washington Post (November 1, 2013)
(5) Barbara Mellers et al., “Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions” in Perspectives on Psychological Science (May 2015)
(6) Isaiah Berlin, “The Hedgehog and the Fox” (1953)
(7) H.L. Mencken, “The Divine Afflatus” in The New York Evening Mail (November 16, 1917)