And the Winner Is… The Prediction Game: Why Did Most Media, Pollsters, Pundits, and the Wisdom of Crowds Get It Wrong?

Clinton's odds of winning were put between 75 and 99% just two days before the 2016 presidential election… Most polls consistently gave Clinton a comfortable lead in the weeks leading up to the election… Of 67 national polls tracking the race since the start of October, only four gave Trump the lead… Of 61 national polls tracking the race during the final weeks, six gave Trump the lead… So how did so many pollsters, pundits, news media, and the wisdom of crowds get it so wrong?

One by one, all the predictions, all the forecasts, the obsessively mined data, the experience at calling previous elections fell apart. It all counted for nothing… According to Nate Silver, in this age of information overload it's more difficult than ever to distinguish a true 'signal' from the noisy universe of data… One theory holds that the problem lies in the familiar litany of cognitive biases that lead people astray despite their best efforts to be rational…

Historically, prediction markets, such as horse-race betting, have tended to demonstrate favorite-longshot bias: overestimating the chances of improbable events and underestimating those of likely ones… Remember, a prediction is a statement about an uncertain event, and it is often, but not always, based upon experience or knowledge… According to Tom Ensey, predicting the future has always been a dicey proposition, but in the age of big data, analytics, algorithms, and high-paid pollsters, a certain faith takes hold in pollsters' ability to predict election outcomes.
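To make the bias concrete, here is a minimal Python sketch of a calibration check; the quoted probabilities and outcomes below are made-up illustrative numbers, not real racing or polling data:

```python
# Minimal sketch of a calibration check for favorite-longshot bias.
# The quoted probabilities and outcomes below are made up for illustration.
from collections import defaultdict

# (quoted probability, outcome: 1 = event happened, 0 = it did not)
records = (
    [(0.80, 1)] * 9 + [(0.80, 0)] * 1 +   # "favorites": quoted 80%, won 90%
    [(0.20, 1)] * 1 + [(0.20, 0)] * 9     # "long shots": quoted 20%, won 10%
)

buckets = defaultdict(list)
for prob, outcome in records:
    buckets[prob].append(outcome)

for prob, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"quoted {prob:.0%} -> observed {observed:.0%}")
# Long shots winning less often than quoted, and favorites winning more
# often, is exactly the favorite-longshot pattern described above.
```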

According to Nate Silver, the day before the election Hillary Clinton had a better than two-to-one chance of winning… According to Larry Sabato, we were wrong; the entire punditry industry, the entire polling industry, the entire analyst industry have a lot to learn… According to Barry Kay, polls don't predict the future; they are snapshots of the past, and hopefully the recent past…

The PREDICTION:


In the article Why Pollsters Got It Wrong, Paul Cairney writes: Think about what 'pollsters' do: They take small samples and use them as a proxy for an entire population… To ensure that the samples are not biased by selection, they try to develop methods to generate respondents randomly, and they try to get enough responses from a 'representative' sample of the population… This process involves all sorts of compromises and unintended consequences…
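As a rough illustration of that proxy relationship, here is a minimal Python sketch that simulates repeated polls of a synthetic electorate; the population size, sample size, and 'true' support level are all assumed values, chosen only to show how estimates scatter around the truth:

```python
# A minimal sketch of what a poll does: estimate a population share from a
# small random sample. Population size and true support are assumed numbers.
import random

random.seed(42)
TRUE_SUPPORT = 0.52          # assumed "true" share in the full electorate
POPULATION = 100_000
SAMPLE_SIZE = 1_000

population = [1 if random.random() < TRUE_SUPPORT else 0
              for _ in range(POPULATION)]

# Repeat the "poll" a few times to see how estimates scatter around 52%.
for trial in range(5):
    sample = random.sample(population, SAMPLE_SIZE)
    estimate = sum(sample) / SAMPLE_SIZE
    print(f"poll {trial + 1}: estimated support {estimate:.1%}")
```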

Every survey result is made up of a combination of two variables: the demographic composition of the electorate… and how each group is expected to vote… In addition, some groups are far less likely to respond than others, so pollsters typically weight the answers they receive to match projections of what the electorate will likely look like…
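A minimal sketch of that arithmetic, with hypothetical group names and numbers, shows how the topline is just the sum of each group's projected share of the electorate times that group's support:

```python
# A minimal sketch of demographic weighting: the topline estimate is the sum,
# over groups, of (projected share of the electorate) x (that group's support).
# Group names and all numbers here are hypothetical, for illustration only.
electorate_share = {"young": 0.30, "middle": 0.40, "older": 0.30}
candidate_support = {"young": 0.60, "middle": 0.50, "older": 0.40}

topline = sum(electorate_share[g] * candidate_support[g]
              for g in electorate_share)
print(f"weighted topline: {topline:.1%}")
# 0.30*0.60 + 0.40*0.50 + 0.30*0.40 = 50.0%
```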

Polling errors can stem either from getting an unrepresentative sample of respondents within each group, or from incorrectly predicting how many of each type of voter will show up… For the layman, this serves as a reminder of the uncertainty in polling, and a warning against overconfidence even when the weight of the surveyed evidence seems overwhelming…
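A short sketch, again with hypothetical numbers, shows that second failure mode: even with a perfect sample inside each group, a wrong turnout projection still moves the topline:

```python
# Sketch of the second error source named above: the sample within each group
# may be fine, yet a wrong turnout model still moves the topline.
# All shares and support figures are hypothetical.
def topline(shares, support):
    return sum(shares[g] * support[g] for g in shares)

support = {"young": 0.60, "older": 0.40}

assumed_turnout = {"young": 0.50, "older": 0.50}   # pollster's projection
actual_turnout  = {"young": 0.40, "older": 0.60}   # who actually showed up

print(f"projected: {topline(assumed_turnout, support):.1%}")  # 50.0%
print(f"actual:    {topline(actual_turnout, support):.1%}")   # 48.0%
# A 10-point turnout miss shifts the topline by 2 points here, with no
# sampling error at all inside either group.
```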

According to Mona Chalabi, journalists, even statistically literate journalists, can be dumb; that is why just about everyone failed to predict this election… And it's not just because the methodology of polling is itself flawed; it's also because the analysis is affected by the humans conducting it… Personal experience, personal beliefs, and personal biases get in the way…

Many journalists dismissed Trump with a mere 2% chance of winning… they simply didn't know of, or care about, any Trump supporters; they didn't get it… And their personal beliefs also encouraged a bit of wishful thinking; they didn't want to get it… Until we can find perfectly objective robots to conduct these polls, asking 100% neutral questions and communicating the results to you, the reader, with 100% neutrality, there will always be a polling problem… Humans, flawed as they are, produce flawed polls that are imperfectly designed, imperfectly conducted, and imperfectly analyzed.

The OUTCOME:


Sensible pollsters admit that they might be a bit off, highlighting the estimated 'margin of error' implied by the size of the sample, then maybe crossing their fingers and hoping that they have not missed errors arising from non-random sampling… But there will always be the usual reaction: opinion polls are useless, and they've got it wrong again… However, there is no alternative to opinion polls. Like democracy, they are the worst system apart from all the others.
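For reference, that quoted margin of error comes from a simple formula for random sampling error; here is a minimal sketch, assuming a simple random sample and the standard 95% confidence level (note it says nothing about non-random-sampling bias, which is exactly the part pollsters cross their fingers over):

```python
# Sketch of the usual 95% margin-of-error calculation for a simple random
# sample: MoE = z * sqrt(p * (1 - p) / n), with z ~ 1.96 for 95% confidence.
# This only covers random sampling error, not non-random-sampling bias.
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    moe = margin_of_error(0.50, n)
    print(f"n={n}: +/- {moe:.1%}")
# n=1000 gives roughly +/- 3.1%, the figure most national polls quote.
```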

There is no other way to predict elections than to ask a sample of people how they intend to vote, and to use various projection techniques to extend their answers to the whole voting population… Humans want to know what is going to happen next; they demand information…

However, pollsters are fallible and their predictions are fallible, and sometimes completely wrong… But there will always be opinion polls… and as long as people understand what polls are, and what they are not, it all works fine…