Brexit: An analysis of the results and predictions

In yesterday's historic referendum, Britain voted to Leave. The margin was small, 51.9% to 48.1% in favour of Leave, with turnout at a high 72.2% (the highest since the 1990s). The outcome dealt a decisive blow to PM David Cameron, who announced his resignation in the morning. Markets reacted strongly, with stocks plummeting and the pound falling sharply to a 30-year low against the dollar. It was an outcome the markets failed to anticipate (or were hoping to avoid), which explains the investors' abrupt reactions.

Read the initial reactions: The Economist is in a state of disbelief, trying to find a solution and describing what happens next (invoking Article 50 of the Lisbon Treaty). They also have an interesting piece on the fallen legacy of David Cameron. The FT dreads "Britain's leap into the dark" and keeps warning of the negative economic consequences. Martin Wolf also had a good comment. The BBC brings reactions from abroad, discusses the possibility of another Scottish referendum, and sums up eight reasons why the Leave campaign won. Other reactions point in the same direction: "a split nation", "what will the uncertain future bring", "what have we done?", and of course the celebrations of the Brexiters.

How did we do with our predictions?

Even though our prediction of the most likely outcome was a narrow victory for Remain (50.5 to 49.5), our model correctly anticipated that Leave had almost the same probability of winning. We gave the Leave option a 47.7% chance, admittedly more than any other model, expressing clearly that our prediction was essentially a coin toss.

As can be seen from our probability distribution graph below, the probability our model assigned to the exact result of 49.5% for Leave (the outcome we decided to go with) was the highest at 7.53%, while the probability it assigned to the actual outcome of 51.9% for Leave was a close 6.91%. This is a painfully small difference that, in the end, comes down to pure luck. Or, as we said, a coin toss.

Source: Oraclum Intelligence Systems Ltd.
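
To make the distinction between these two numbers concrete, here is a minimal sketch (not our actual model; the distribution below is a made-up placeholder) of how a discretised probability distribution over the Leave vote share yields both a point prediction and a win probability:

```python
import numpy as np

# A minimal, illustrative distribution over the Leave vote share (in %).
# The shape and numbers are placeholders, NOT the model's actual output.
leave_share = np.arange(44.0, 56.5, 0.5)                 # possible outcomes in 0.5 pp steps
weights = np.exp(-0.5 * ((leave_share - 49.5) / 2.0) ** 2)
probs = weights / weights.sum()                           # normalise to a proper distribution

point_prediction = leave_share[np.argmax(probs)]          # modal (single most likely) outcome
p_leave_wins = probs[leave_share > 50.0].sum()            # total probability mass above 50%

print(f"Point prediction: Leave {point_prediction:.1f}%")
print(f"P(Leave wins) = {p_leave_wins:.1%}")
```

The point is that the single most likely outcome and the probability of each side winning are different summaries of the same distribution, which is how a Remain point prediction can coexist with a near-50% chance for Leave.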
Turns out, the coin fell on the other side. Nevertheless, we stayed within our margin of error and can honestly say that we came really close (off by 2.4%; see the graph below). We knew that the last few days had been hectic and that the Remain campaign was catching up (the high turnout suggests as much), but it was obviously not enough to overturn the result. Leave had taken the lead two weeks before the referendum, and just as our model was showing an increasing chance of Leave over the weekend, a new flock of polls swung expectations back towards a likely Remain victory by Wednesday. In addition to this late trend switch in our model, we also failed to obtain a larger sample, which proved decisive in the end.

Our results are shown in greater detail in the graph below. It compares our predictions to the actual results for the UK as a whole and for each region (in other words, it shows the calibration of the model). Most of our predictions fall within the 3% confidence interval, and almost all of them (except Northern Ireland) fall within the 5% confidence interval. The conclusion is that we have a well-calibrated model.

Model calibration. Source: Oraclum Intelligence Systems Ltd.
This is even more impressive given our very small overall sample size (N=350). Even with such a small sample we came really close to the actual result, beating a significant number of other prediction models. The small sample obviously induced larger errors in certain regions (e.g. Northern Ireland or Yorkshire and Humberside), but it is remarkable how well the model performed with so few survey respondents, even if it did ultimately predict the wrong outcome.
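
For readers who want to see what the calibration check amounts to, here is a minimal sketch. The regional figures below are hypothetical placeholders, not our data; the logic is simply to compute the per-region prediction error and flag whether it falls within the 3-point and 5-point bands:

```python
# Hypothetical predicted vs. actual Leave shares (%) by region; placeholder values only.
predictions = {"Region A": 47.2, "Region B": 55.0, "Region C": 41.5, "Region D": 39.0}
actuals     = {"Region A": 48.9, "Region B": 57.8, "Region C": 40.3, "Region D": 44.8}

for region, pred in predictions.items():
    error = pred - actuals[region]
    if abs(error) <= 3:
        band = "within 3 pp"
    elif abs(error) <= 5:
        band = "within 5 pp"
    else:
        band = "outside 5 pp"
    print(f"{region}: predicted {pred:.1f}, actual {actuals[region]:.1f}, "
          f"error {error:+.1f} pp ({band})")
```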

This was a model in its experimental phase (and it still is), and the entire process is a learning curve for us. We will adapt and adjust, aiming to make our prediction method arguably the best one out there. It certainly has the potential to be.

How did the benchmarks do?

It appears that the simplest model turned out to be the best one. The Adjusted polling average (APA), which takes only the polls from the two weeks prior to the referendum, gave Leave 51% and Remain a close 48.9%. This doesn't mean individual pollsters did well, but that pollsters as a group did (remember, polls are not predictions, they are merely snapshots of preferences at a given point in time). The problem with individual pollsters remained the large uncertainty, such as double-digit shares of undecided voters even the day before the referendum. This is hardly their fault, of course, but it tells us that looking at pollsters as a group is somewhat better than relying on any single pollster, no matter when they publish their results.
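
As a rough sketch of this kind of benchmark (the exact adjustment behind the APA is not spelled out here, so the undecided-reallocation rule below is an assumption, and the poll numbers are placeholders): keep only polls from the final two weeks, reallocate undecideds proportionally, and average.

```python
from datetime import date

# Illustrative poll records: (fieldwork end date, % Leave, % Remain, % undecided).
# Placeholder numbers, not the actual polls behind the APA.
polls = [
    (date(2016, 6, 12), 47, 44, 9),
    (date(2016, 6, 16), 49, 42, 9),
    (date(2016, 6, 20), 44, 45, 11),
    (date(2016, 6, 22), 45, 44, 11),
]

cutoff = date(2016, 6, 9)                      # two weeks before the 23 June referendum
recent = [p for p in polls if p[0] >= cutoff]

# One simple adjustment: drop undecideds and rescale Leave/Remain to sum to 100,
# then take the unweighted average across the retained polls.
leave_adj = [leave / (leave + remain) * 100 for _, leave, remain, _ in recent]
apa_leave = sum(leave_adj) / len(leave_adj)
print(f"Adjusted polling average: Leave {apa_leave:.1f}%, Remain {100 - apa_leave:.1f}%")
```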

However, the Poll of polls (taking only the last six polls) was off, as it stood at 52:48 in favour of Remain (they updated it yesterday just after I published my post, so I didn't have time to change it). And the expert forecasting models from Number Cruncher Politics and Elections Etc missed by 4% and 5%, respectively.

Most surprisingly, the prediction markets and the betting markets all failed significantly! As did the superforecasters. It turns out that putting your money where your mouth is, is still not enough for a good prediction. At least not when it comes to Britain. In some cases prediction markets were giving Remain an over 80% chance on the day of the referendum. Ours was the only model predicting a much more uncertain outcome.
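
For context on where figures like an "over 80% chance for Remain" come from, one standard way to read betting markets is to convert decimal odds into implied probabilities and normalise away the bookmaker's margin. A small sketch with illustrative odds (not the actual quotes):

```python
# Illustrative decimal odds shortly before the vote; placeholders, not real quotes.
odds = {"Remain": 1.20, "Leave": 5.00}

raw = {outcome: 1.0 / o for outcome, o in odds.items()}     # raw implied probabilities
overround = sum(raw.values())                                # bookmaker margin pushes this above 1
implied = {outcome: p / overround for outcome, p in raw.items()}

for outcome, p in implied.items():
    print(f"{outcome}: implied probability {p:.1%}")
```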
