inquiry_from_an_anti_library's review against another edition

adventurous hopeful informative inspiring fast-paced

5.0

Is This An Overview? 
Forecasting is a skill everyone uses every day to predict the effects of potential changes, and like any skill it can be improved.  Experts are sought out to interpret events and to forecast what will come about, yet although many of their forecasts appear valuable, their quality usually goes unmeasured.  The public tends to favor those who make the future seem more certain, even though that overconfidence produces lower-quality forecasts.  On average, experts can provide a better narrative of events, but their forecasts are about as good as random guesses.  

Part of the reason forecasts perform poorly is that reality is complex and dynamic, which makes prediction difficult.  Society may have more knowledge and computational power than ever, but less confidence in predictability.  There may be limits on what can be predicted, yet people can still become better forecasters.  To find out how, and which methods to avoid, a large and diverse group of people participated in a forecasting research project.  

What made some people better forecasters, what made them superforecasters, was how they thought about and used information, not intelligence, ideology, or numeracy skills.  These forecasters doubted their own claims and sought to improve them.  Complex problems that seemed impossible to forecast were reconsidered through a variety of questions seeking ways for the event to occur, or not occur.  They looked first for the base rate, the general probability of an event happening, before turning to the unique case, anchoring their views in the outside view rather than the inside view.  They sought out what others thought about the event, looking for alternative forecasts to improve their own.  And they adapted to new information, updating their forecasts while trying neither to underreact nor to overreact.  
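
The habit described above, anchoring on a base rate and then adjusting for case-specific evidence, can be sketched as a small Bayesian update.  This example is illustrative only (the numbers and the helper are not from the book), but it shows why starting from the outside view keeps an adjustment modest rather than an overreaction:

```python
# Illustrative sketch: anchor on a base rate (the "outside view"),
# then update it with case-specific evidence using Bayes' rule.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability after observing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Base rate: roughly 5% of comparable events occur in a given year.
prior = 0.05
# New information that is three times as likely if the event will occur.
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(round(posterior, 3))  # -> 0.136: a real shift, but far from certainty
```

Without the base-rate anchor, a forecaster seeing "three times as likely" evidence might jump straight to a high probability; the update shows the evidence only moves 5% to about 14%.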

These ways of thinking are guidelines that might improve decision making, but it is better to break a guideline than to make a terrible forecast.  Individuals can become better at forecasting, yet teams get better results than even an individual superforecaster, because each member can help the others refine ideas, and no individual can do everything.  But teams take effort to make productive, and can develop processes that exacerbate bad decisions. 

How To Get Better At Forecasting?
To become better at forecasting, people need to practice.  Much of the skill is tacit knowledge that cannot be learned from how others describe forecasting.  Feedback is needed to train any skill, including forecasting, but the feedback on forecasts is usually poor: it is neither immediate nor clear-cut.  Without appropriate feedback, people become overconfident in their forecasts, and seemingly favorable random outcomes can create an illusion of control.  Judging forecasts depends on scoring many of them, as in weather forecasting, but some forecasts, such as those about history, cannot be rerun.  Experiments are needed to verify claims.  
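
One concrete form the missing feedback can take is a proper scoring rule; Tetlock's forecasting tournaments used the Brier score, the mean squared difference between stated probabilities and what actually happened.  The forecasters and numbers below are invented for illustration:

```python
# Sketch of scoring probabilistic forecasts with the Brier score.
# Lower is better: 0.0 is perfect, 0.25 matches always saying 50%.

def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

confident = [0.95, 0.95, 0.95, 0.95]  # overconfident forecaster
cautious  = [0.70, 0.70, 0.70, 0.70]  # hedged forecaster
outcomes  = [1, 1, 1, 0]              # what actually happened

print(round(brier_score(confident, outcomes), 4))  # -> 0.2275
print(round(brier_score(cautious, outcomes), 4))   # -> 0.19
```

One miss is enough to make the overconfident forecaster score worse than the cautious one, which is the point of the feedback: confidence is only rewarded when it is calibrated.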

The language people use for possibilities needs to be specific rather than ambiguous.  Two people can mean drastically different probabilities by the same phrase, which can create dangerous misunderstandings. Teams can use a chart that numerically defines possibility claims to reduce confusion.  The numbers are still opinions, but they are less ambiguous ones.  Forecasts also need timelines.  Without a timeline, a forecast remains perpetually in dispute over what it meant.  
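
Such a chart can be as simple as a shared lookup table, in the spirit of Sherman Kent's "words of estimative probability."  The exact phrases and cut points below are illustrative assumptions, not the book's; the point is only that a team agrees on one mapping:

```python
# Illustrative team chart mapping vague phrases to numeric ranges,
# so "likely" means the same thing to every member.

PROBABILITY_CHART = {
    "almost certain":  (0.93, 1.00),
    "highly likely":   (0.80, 0.93),
    "likely":          (0.60, 0.80),
    "about even":      (0.40, 0.60),
    "unlikely":        (0.20, 0.40),
    "highly unlikely": (0.07, 0.20),
    "remote":          (0.00, 0.07),
}

def classify(probability):
    """Return the agreed phrase for a numeric probability (first match wins)."""
    for phrase, (low, high) in PROBABILITY_CHART.items():
        if low <= probability <= high:
            return phrase
    raise ValueError("probability must be between 0 and 1")

print(classify(0.75))  # -> likely
```

Going the other way, a forecaster who says "highly unlikely" is committing to roughly 7–20%, which makes the forecast disputable later in a useful way.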

Caveats? 
Forecasting will always involve uncertainty.  As the book acknowledges, no matter how good the decision making, there will be uncertainty and wrong decisions.  The process of decision making matters more than the outcome: a better process creates more opportunities for good decisions than a randomly favorable outcome under a worse process ever will. 

drmilamarinova's review against another edition

informative inspiring reflective medium-paced

4.5

As a cognitive psychologist and a scientist, I found the book really insightful and helpful in understanding how predictions are, and should be, made in the context of policy, the social sciences, and the humanities. The book also introduces some useful concepts from judgment, decision making, and even game theory in a very beginner-friendly manner. 

savageadage's review against another edition

informative

3.0

pashtet31's review against another edition

5.0

Harry Truman famously said: "Give me a one-handed economist! All my economists say, 'On the one hand… on the other.'"

Philip Tetlock combines three major findings from different areas of research:

1) People don't like experts who are context-specific and cannot provide clear, simple answers about complex phenomena in a probabilistic world. People don't like it when an expert sounds less than 100% confident; they reason that confidence represents skill.

2) Experts who play the publicly acceptable role of hedgehogs (the ideologically narrow-minded) and/or express ideas with 100% certainty are wrong about most things most of the time. The general public is fooled by hindsight bias (on the part of experts) and a lack of accountability.

3) We live in a nonlinear, complex, probabilistic world, so we need to shape our thinking accordingly. Those who do ("foxes," who, unlike "hedgehogs," can think non-simplistically) become much better experts in their own fields and better forecasters in general.

I suspect nobody with sufficient IQ or relevant experience will find many new or surprising ideas in this book. However, the story is interesting in itself, and many of Tetlock's arguments and examples can be borrowed for further discussions with real people in real-life settings.

pinkgallah's review against another edition

5.0

Great read. Just when you think that (like many non-fiction books) it is about to get too long and too repetitive, it brings in a new and interesting topic. It argues convincingly for us to update how we treat and make predictions and I do hope the ideas catch on.

cloudedbyte's review against another edition

5.0

The book gave me some interesting ideas and forced me to review my thinking process. I appreciate it.
It does not have all the answers, but then nobody has them.

yates9's review against another edition

4.0

Interesting story of a research program organised to identify people with great forecasting skills and to understand what kind of thinking helped them be that way, or improve.

Much more important, and very interesting, are the final few chapters on the broader space of trying to predict future events, probability distributions, and complexity. The question about the limits of forecasting is commendable and perhaps one of the most important points of the book. We can talk about knowledge and prediction within a certain timeframe, but beyond that it is a completely different situation: the game changes, and it's impossible to really look ahead that far.

mcordell's review against another edition

informative medium-paced

3.75

y4le's review against another edition

5.0

Superforecasting opens by comparing the state of forecasting in our current society to that of medicine in the premodern world. Practitioners of forecasting today are judged not on their performance or accuracy but on the strength of the hindsight-aided narrative they can put together. Telling a compelling story that plausibly explains how your specific prediction may have been wrong but your narrative is still generally correct is what kept medicine examining humors and bloodletting for centuries. The message is clear: look for your failures, seek out ways to improve your performance, get prompt feedback, and don't ever get complacent. The state of "perpetual beta" is, according to the authors, key to achieving maximum performance in any field, a notion we apply to the training of doctors and professional athletes, but not to the people whose prognostications drive the behavior of everyone from corporate giants to you and me. We should test those who claim to have insight into complex systems, and look at a forecaster's past record before blindly accepting their story.

mwcooper11's review

informative reflective medium-paced

4.25