As a market researcher, this election has been the gift that keeps on giving: quite apart from the excitement of the too-close-to-call predictions before the big day, there is now the fascinating debate about why those predictions were so wrong.
Just to remind ourselves… polls from 3rd to 6th May 2015 vs the election outcome:
On the night itself, just after the exit polls came out, pollsters were already tweeting their apologies and there were calls for an independent investigation. This week, some senior politicians have been calling for a ban on all polling in the immediate run-up to an election, as is already the case in France, India, Italy and Spain. The favourite theories expounded so far are:
- The “shy Tories”: people who are apparently so embarrassed about wanting to vote Tory that they will not admit it even on an anonymous online survey. But if this is the case, why were they more likely to admit it face to face to an exit pollster? The exit poll, of course, ended up much closer to the final result.
- Institutionalised respondents: “The people who volunteer to be polled online or on the phone are part of a panel who inevitably become institutionalised… It is no longer a random sample of the population,” said Lord Foulkes, while calling for the establishment of an Ofcom-style independent regulator of the polling industry. But how does he then explain that the telephone polls (which actually do use a random sampling method) also had the Labour and Tory parties running neck and neck?
I’m not an apologist for the research industry, but I want to see evidence to explain what happened rather than a knee-jerk analysis. In 1992, the last time the polls were badly out, further analysis revealed that the sampling method was at fault: the population had changed more than social scientists had realised during the 1980s, and therefore the weights used to analyse the results were skewed incorrectly in favour of Labour. Is this what has happened this time? Two interesting pieces of analysis have been done that may shed more light.
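To see how out-of-date weights can skew a headline figure, here is a toy illustration. The demographic groups, proportions and support levels below are entirely invented for the sketch; they are not based on any actual 1992 or 2015 polling data.

```python
# Toy example: how stale demographic weights can bias a poll estimate.
# All groups, proportions and support levels here are invented.

# Suppose the population splits into two hypothetical groups whose
# relative sizes changed during the 1980s, but the pollster still
# weights responses to the old census proportions.
old_weights = {"group_a": 0.60, "group_b": 0.40}   # out of date
true_shares = {"group_a": 0.50, "group_b": 0.50}   # actual population

# Hypothetical support for Party X within each group.
support = {"group_a": 0.45, "group_b": 0.30}

def weighted_estimate(weights, support):
    """Vote share for Party X implied by a given set of group weights."""
    return sum(weights[g] * support[g] for g in weights)

biased = weighted_estimate(old_weights, support)
correct = weighted_estimate(true_shares, support)

print(f"Estimate with stale weights:   {biased:.1%}")   # 39.0%
print(f"Estimate with correct weights: {correct:.1%}")  # 37.5%
```

Even though every respondent answers honestly, weighting to the wrong population shifts the estimate, and the error lands systematically on one party rather than averaging out.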
First, Peter Kellner, President of YouGov, reported that they re-interviewed 6,000 people whom they had earlier polled online. They found that while 5% changed their vote, each party gained and lost support in equal measure, so the overall Conservative–Labour split remained 34%–34%. While Peter Kellner says this demonstrates that there was no last-minute swing to the Tories, to me it is evidence of potential issues with the overall sampling method and weights.
Second, the wonderful Nate Silver’s company Five Thirty Eight, who also predicted a hung parliament, have compared the seat results their model would have predicted had it been fed the actual national vote shares. They report: “In this scenario, our predictions were wrong for only 34 of 632 individual seats. This is still not perfect, but it is a lot better [than their forecast based on the polls]. Most of the remaining error comes from the fact that the Conservatives outperformed precisely in the marginal seats where it mattered most to increasing their seat total.” In other words, Five Thirty Eight are saying that it was the results from the pollsters that were wrong, not their predictive models, which with the right data would have worked pretty well even with multi-party politics. Again, this indicates to me that, as in 1992, there may well be an issue with the sampling method and weights.
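The logic of that exercise can be sketched with a toy uniform-swing seat calculation. This is emphatically not Five Thirty Eight’s actual model, and the constituencies and margins below are invented; the point is only that feeding in the wrong national shares flips predictions in marginal seats while leaving safe seats untouched.

```python
# Toy seat projection under uniform national swing.
# Constituency margins are invented; this is not FiveThirtyEight's model.

# Each seat: Conservative lead over Labour (percentage points) at the
# previous election. Negative = Labour ahead.
seats = {"Safe Con": 20.0, "Marginal 1": 1.5, "Marginal 2": 0.5,
         "Marginal 3": -0.5, "Safe Lab": -18.0}

def project(seats, swing_to_con):
    """Seats the Conservatives win after a uniform swing of the given size.

    A swing of s points adds s to Con and subtracts s from Lab,
    so each Con-over-Lab lead moves by 2*s.
    """
    return [name for name, lead in seats.items() if lead + 2 * swing_to_con > 0]

# The polls said the national race was tied (zero swing); the actual
# result implied a swing to the Conservatives, modelled here as a
# hypothetical 1.5 points.
print(project(seats, 0.0))   # ['Safe Con', 'Marginal 1', 'Marginal 2']
print(project(seats, 1.5))   # ['Safe Con', 'Marginal 1', 'Marginal 2', 'Marginal 3']
```

With the correct input, only the knife-edge seat changes hands, which mirrors Five Thirty Eight’s point: the model translates national shares into seats reasonably well, so the residual error sits in the shares it was given.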
Only time will tell, but I am confident that analysis will be able to show what happened, and that the polling industry will be able to suggest ways to rectify it.
Here you will find details of a timely report from Policy Network exploring demographic change and its impact on UK politics.
And for further reading, a few articles that have caught our eye in the last week: