What I've been reading (vol. 1)

I decided to start a regular book review column on the blog, where I intend to present brief reviews of some of the (nonfiction) books I read throughout the year. One of my new year's resolutions was to read more books. Until now I was 'entrapped' by reading mostly academic papers (for my work) and newspaper articles (for my amusement). I feel I have neglected the wonders of comfortable couching with a book in my hands and a pen and paper beside me to write down moments of instant inspiration from the reading process. So I have decided to read at least two (nonfiction) books per week. This doesn't imply, however, that I've set out to read 100 books throughout the year (as I expect some other engagements during the summer and towards the end of the year). The two books per week is a starting goal for the first few months, in which I've set a total of 20 to 25 books to read, all of which I will present in my blog reviews.

So far I've averaged three books in two weeks, which is an OK pace to start with. I assume this will only worsen, however, as the post-holiday obligations pile up and once again detach me from my reading schedule (they've already started doing that!). But I've deliberately set the goal too high, following the idea that one should shoot for the moon, since even if one misses, one lands among the stars. So even if my ambitious goal of two books per week turns into two books per month, I will be satisfied, as long as I get to read all the books I initially set out to.

Let's then start with the first group. The first pile of books was about forecasting, predictions, and uncertainty. I started with Tetlock's Superforecasting, Silver's The Signal and the Noise (I had read the first few chapters of his book before, but I thought I needed a refresher, so I read it all again), and Hand's The Improbability Principle. These will be presented today. Next in line is Taleb and his trio: Fooled by Randomness, The Black Swan, and Antifragile. They will be in the next post (vol. 2).

1. Tetlock, Philip and Gardner, Dan (2015) Superforecasting: The Art and Science of Prediction. Random House. (link to blog)

We rely on forecasts on a daily basis. Making predictions is a natural response to the knowledge deficit in a world filled with uncertainty. In this fight with uncertainty we unfortunately often lose out, as we approach it with our limited approximation of reality. In other words, our daily judgments are too often clouded by our individual biases.

In addition to our own forecasts, we rely on other people's forecasts. Unfortunately, we are mostly unaware of the precision, accuracy, and past performance of the forecasts we get from others. Whether it's the weather, economic growth, sports, elections, or any current event, we are completely blind to the actual quality of the forecast we take as a given signal to address our uncertainty conundrum. The very people whose forecasts we rely on, the pundits, experts, or 'talking heads', very often don't have the slightest clue about how an event is going to unfold. They tend to be as biased as the rest of us, their listeners and readers. But once a forecast is given and the event has passed, it is seldom recalled. “Old forecasts are like old news,” say the authors. This is why the TV experts can go on and on, still being invited to give new predictions despite a continuous track record of failure, and those new predictions too have a high likelihood of being wrong. But this is a demand-side problem as well as a supply-side one: no one from the public demands evidence of the forecasters' accuracy. Because of this there is no measurement, and hence no revision. Every expert can simply go about their usual business, thinking that they are still quite good, even though they are no better than a dart-throwing chimp: sometimes they hit the bull's eye, but most of the time they miss strikingly (one of the media-catchy conclusions of Tetlock's first big research effort, summarized in his earlier book Expert Political Judgment).
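Forecast accuracy can be measured, though. Tetlock's tournaments scored forecasters with Brier scores: roughly, the mean squared difference between the probabilities you assigned and what actually happened. Here is a minimal sketch of the simple binary version (the tournaments used a variant that ranges from 0 to 2, but the idea is the same); the forecasts and outcomes below are made up purely for illustration.

```python
# Brier score: mean squared error between probability forecasts and
# outcomes. 0 is perfect; always guessing 0.5 scores 0.25 here.
def brier_score(forecasts, outcomes):
    """forecasts: predicted probabilities that each event happens;
    outcomes: 1 if the event happened, 0 if it didn't."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident pundit who is often wrong vs. a cautious, calibrated one:
print(brier_score([0.90, 0.80, 0.95], [1, 0, 0]))  # ~0.52 -- worse than guessing
print(brier_score([0.70, 0.30, 0.20], [1, 0, 0]))  # ~0.07 -- pretty good
```

Once forecasts are scored like this, "still being invited despite a track record of failure" stops being an opinion and becomes a number.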

Behind this veil of uncertainty, however, some of us tend to do better than others. In this excellent book Philip Tetlock, with co-author Dan Gardner, explores the results of a tournament experiment through which he was able to find a group of ordinary people who made predictions far better than the so-called experts and analysts with access to classified data. In fact, their accuracy was 60% better than average. He calls this group of people superforecasters.

How did he manage to find these people? He opened up the Good Judgment Project (GJP), where he invited volunteers to make regular forecasts about the future. This was all part of a bigger forecasting tournament organized by the government agency IARPA. After the intelligence community's fiasco over the missing WMDs in Iraq, the government decided to create a forecasting tournament and invite top scientific teams to apply whatever method they wanted to be as precise as possible in their predictions. The GJP was one of five teams that competed in the first tournament, and with stellar performance: its superforecasters beat the official control group by 60% in the first year, and by 78% in the second. They beat all of their competitors by between 30% and 70%, including the professional intelligence analysts with access to classified information. This is when comparing overall individual performance, but the GJP also built up teams in the subsequent years. Its teams were better than individuals by 23%. There was a distinction, though, between teams of ordinary forecasters and teams of superforecasters: ordinary teams beat the wisdom of the crowd by 10% but were themselves beaten by prediction markets by 20%, while the prediction markets were in turn beaten by superteams by 15-30%. And best of all, the GJP had a mixed crowd of regular people, not necessarily supersmart math whizzes or news junkies. It was a highly diversified crowd, but a very successful one, primarily because of the way they thought about the issues.

In the book the authors describe at length what it takes to become a superforecaster (the keyword is Bayesian reasoning), but they also offer some additional insights and a multitude of fun and interesting examples. The book is both an enjoyable read and a learning experience. It has even encouraged me to join the GJP. I have some reservations about the project itself, but I'll leave those for another time.

2. Silver, Nate (2012) The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t. Penguin Press. (link to fivethirtyeight)

“The signal is the truth. The noise is what distracts us from the truth.”


In what is quite possibly one of the best books about predictions, Nate Silver very diligently, and slightly autobiographically, teaches us how to distinguish the true signal from the distracting noise in an era of ever-increasing and easily accessible information.

In the very first figure of the book, Silver points to this ever-increasing information phenomenon. He shows the number of books produced per year and how it has skyrocketed since Gutenberg’s invention of the printing press back in 1440. Combined with the rapid development of societies after the first, second, and what is already the third Industrial Revolution, it’s not hard to notice the vast increase in the availability of data in today’s world. In this abundance of information it is easy to get lost. The vast majority of it is pure noise, and as Silver puts it: “the noise is increasing faster than the signal”. In this atmosphere, making predictions is immensely difficult. Even more so since we aren’t, on average, very good at making them, nor can we ever make perfectly objective predictions (deprived of our subjective biases).

Silver’s bottom line on why so many predictions fail is that most people have a poor understanding of probability and uncertainty. This makes them confuse noise with signal far too often, which breeds overconfidence, which in turn leads to bad predictions. On the other end of the spectrum, modesty (the willingness to accept our mistakes and learn from them) and an appreciation of uncertainty improve predictions. Most of the things he talks about (e.g. distinguishing foxes from hedgehogs) are also touched upon in Tetlock's book. In fact they both convey the same message, in a nutshell: most experts are phonies, and we can do better!

Silver, like Tetlock, offers something close to a solution: applying Bayes' theorem. Through a multitude of examples diagnosing the prediction problem, he suggests Bayes' theorem as a solution concept that makes us question our beliefs by becoming more comfortable with uncertainty and probability, and that forces us to update our beliefs whenever new evidence strikes us. The basic idea behind Bayes' theorem is to formulate probabilistic beliefs about the world and revise them when facing new data. It describes conditional probability: it tells us the probability that a theory or hypothesis is true given that an event has occurred. For a very detailed and brilliant explanation of Bayes' theorem I suggest the following page; for a shorter, but also quite intuitive explanation, I suggest this one.
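To make the mechanics concrete, here is a toy worked example of the theorem, P(H|E) = P(E|H)·P(H) / P(E), using the classic rare-condition diagnostic test. The numbers are invented for illustration, not taken from the book.

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Toy example (invented numbers): testing for a rare condition.
prior = 0.01            # P(H): 1% of people have the condition
p_e_given_h = 0.95      # P(E|H): chance the test is positive if you have it
p_e_given_not_h = 0.10  # P(E|not H): false-positive rate

# Total probability of a positive test, P(E), over both hypotheses:
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = p_e_given_h * prior / p_e
print(f"P(condition | positive test) = {posterior:.1%}")  # about 8.8%
```

Even with a positive result from a fairly accurate test, the posterior stays below 9%, because the prior is so low. This is exactly the kind of answer that surprises people with a poor intuition for conditional probability.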

Thinking like Bayesians forces us to think of events in terms of chance. It is a simple mathematical formula that helps us put things in perspective and think of every outcome in terms of how likely or unlikely it was to happen. It is wrong to believe that our prior beliefs are perfectly objective and rational. They aren’t. By acknowledging this and being ready to accept new evidence when estimating the probability of an event, we strive to be less subjective and less wrong. Science works precisely this way. Researchers are searching for the truth and are encouraged to examine evidence before making final judgments and conclusions. Every scientist starts with a prior, but the real scientist never lets his previous judgment guide him towards confirming his pre-existing bias; he relies on his experiments to convince him otherwise. As Keynes supposedly said: “When the facts change, I change my mind.” Thinking like a Bayesian essentially means thinking like a scientist: being skeptical about the worldviews we encounter, and committing to a position only once we are presented with enough conclusive evidence.
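The "always update" part is what turns Bayes from a one-off formula into a habit: yesterday's posterior becomes today's prior. A minimal sketch, using a hypothetical biased-coin question and made-up flips:

```python
# Sequential Bayesian updating: each observation turns the current
# posterior into the prior for the next observation.
# Hypothesis H: the coin is biased toward heads, P(heads) = 0.7;
# alternative: the coin is fair, P(heads) = 0.5.
p_biased = 0.5  # start agnostic: even odds that the coin is biased

flips = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails (made-up data)
for flip in flips:
    like_biased = 0.7 if flip else 0.3
    like_fair = 0.5
    evidence = like_biased * p_biased + like_fair * (1 - p_biased)
    p_biased = like_biased * p_biased / evidence
    print(f"flip = {flip}  ->  P(biased) = {p_biased:.3f}")
# Belief drifts up with each head and back down with each tail:
# no single flip settles the question, but the evidence accumulates.
```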

Oh, and he also talks a lot about weather forecasting, earthquakes, economists and political scientists, the efficient market hypothesis, poker, baseball, chess, and terrorism. Read the book.

3. Hand, David (2014) The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day. Penguin Random House. (link to blog)

"What are the odds of that happening?" How many times have we heard these words? There seems to be a paradoxically inverse relationship between how improbable we judge an event to be and how often it actually occurs.

There is no better evidence of this than the financial market itself. In the past century, in the US alone, we experienced several market crashes: the banking crisis of 1907, the Great Depression starting in 1929, the oil shocks of the 1970s, the crash of 1987, the dot-com bubble burst in 2000/2001, and finally the Great Recession of 2007/08. After each of these events we heard the experts saying “no one saw it coming” and the standard “this was a one in a million/billion/trillion event”. And yet each of them did occur, no matter how unlikely, unprecedented, or unexpected it was.

Hand’s book describes even unlikelier events: people being struck by lightning several times, experiencing and surviving several terrorist attacks, finding a copy of a long-lost book purely by accident, winning the lottery several times, exactly the same lottery numbers being drawn within a span of two weeks, hitting a hole in one, and so on.

The reason for the quite regular occurrence of such unimaginable events is what Hand calls the Improbability Principle: a set of mathematical and statistical laws that explain why extremely improbable events actually happen all the time. Five laws are tied together to explain the regular, in fact unavoidable, occurrence of unlikely events. “The extraordinarily unlikely must happen; events of vanishingly small probability will occur.” The five laws are the following:

- the law of inevitability: something must happen; if we make a complete list of all possible outcomes, one of them must occur, no matter how small its probability;
- the law of truly large numbers: with a large enough number of opportunities, any outrageous thing might happen (see the sketch after this list);
- the law of selection: an example of hindsight bias; you can assign probabilities as high as you like after the event has taken place;
- the law of the probability lever: a slight change in circumstances can have a huge impact on probabilities, transforming tiny probabilities into massive ones;
- the law of near enough: some events are just sufficiently similar that they may be regarded as identical; no two measurements agree out to infinitely many decimal places, so two very close events can look like exactly the same one.
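The law of truly large numbers is the easiest to see with a bit of arithmetic. A minimal sketch, with an invented one-in-a-million "miracle": give it enough independent opportunities and it becomes almost certain to happen at least once.

```python
# Law of truly large numbers (invented numbers): an event with a
# one-in-a-million chance per trial becomes near-certain given
# enough independent opportunities.
p = 1e-6  # probability of the "miracle" on any single opportunity

for n in (1_000, 1_000_000, 10_000_000):
    p_at_least_once = 1 - (1 - p) ** n
    print(f"{n:>12,} opportunities -> P(at least once) = {p_at_least_once:.2%}")
# ~0.10% after a thousand tries, ~63% after a million, ~100% after ten million.
```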

Unlikely events do happen, and they happen far more often than we tend to perceive, yet they catch us by surprise every time. We fail to appreciate that some things are inevitable given enough opportunities for them to happen; we don't account for hindsight bias when we draw conclusions, nor do we consider the incremental changes that made some things actually much more likely to happen than not. Another very fun book with a multitude of examples.
