During my recent holiday travels, I listened to an episode of The Ezra Klein Show featuring Philip Tetlock, a psychologist known for his work on how people make judgments, particularly in the form of predictions about future events. I found the podcast interesting and so sought out Tetlock’s book Superforecasting: The Art and Science of Prediction (written in collaboration with journalist Dan Gardner) to find out more.
The book focuses on the phenomenon of “superforecasters”, a group of people Tetlock has discovered in his research who are especially good at making accurate predictions about future events. Tetlock identified this group of skilled predictors through the Good Judgment Project, in which volunteers predict the likelihood of various events such as increases in inflation, government debt defaults, and conflicts between nations. (I promise the book is more interesting than this description makes it sound.) One important element of the kinds of forecasts Tetlock describes is that they take big, complex questions (such as “when will the supply chain get back to normal?”) and distill them into smaller, more readily quantifiable questions (like “what percentage of shipments will remain in port for more than five days in September of 2022?”). This allows for clear judgments about whether a prediction was right or wrong, and the subsequent identification of “superforecasters” who regularly outperform the group in the accuracy of their predictions. Tetlock tells the stories of a number of these forecasters and emphasizes that what distinguishes them is not who they are (their intelligence or expertise) but how they think about and approach the problem of predicting the future.
From this body of research, Tetlock and his colleagues have found several ways in which superforecasters differ from forecasters with more average performance on prediction tasks (and, Tetlock argues, from the pundits we often see making predictions in the media). First, superforecasters are humble and willing to admit what they don’t know. This relates to the second point: superforecasters are not necessarily experts in the specific area in which they are making predictions, and more expertise often isn’t helpful in improving a predictor’s accuracy. Superforecasters are generally people with broad knowledge and the ability to conduct relevant research rather than deep expertise in a particular area. (Tetlock connects this to the idea of “hedgehogs,” who know one big thing and tend to view all questions through that lens, versus “foxes,” who are more flexible and nuanced in their thinking.) Superforecasters also consider the bigger picture (how have similar situations unfolded in the past?) before focusing on the particulars of the current question. Finally, superforecasters tend to adjust their predictions more often in response to new information.
Superforecasting is engagingly written and provides lots of opportunities to reflect on how experts make predictions and the role of predictions in our everyday lives. (If you’re short on time, the podcast gives a pretty thorough overview of the key ideas as well.)
Recommended for fans of: Thinking, Fast and Slow; The Undoing Project; The Tipping Point; Freakonomics.