Last Updated on August 12, 2017 by Karl Thompson
Surveys which ask how people intend to vote in major elections seem to get it wrong more often than not, but why is this?
Taking the averages of the first and the final polls from all nine major pollsters for the UK general election 2017, the predictions show the Conservatives down from 46% to 44%, and Labour up from 26% to 36%.
The actual result of the general election gave the Conservatives a 42% share of the vote and Labour 40%.
Writing in The Guardian, David Lipsey notes that ‘The polls’ results in British general elections recently have not been impressive. They were rightish (in the sense of picking the right winner) in 1997, 2001, 2005 and 2010. They were catastrophically wrong in 1992 and 2015. As they would pick the right winner by chance one time in two, an actual success rate of 67%, against success by pin of 50%, is not impressive.’
So why do the pollsters get it wrong so often?
Firstly, every poll carries a statistical margin of error of plus or minus 2 or 3% – so if a poll shows the Tories on 40% and Labour on 34%, the real situation could be Tory 43%, Labour 31% (a 12-point lead), or both parties could be on 37%, neck and neck.
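The arithmetic behind that margin of error can be sketched in a few lines of Python. This is a minimal illustration, assuming a hypothetical poll of 1,000 respondents and the standard 95% confidence formula for a proportion; the sample size and the `margin_of_error` function are illustrative, not taken from any actual pollster's methodology.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll of 1,000 people: Tories on 40%, Labour on 34%
n = 1000
tory, labour = 0.40, 0.34

moe_tory = margin_of_error(tory, n)      # about 0.030, i.e. +/- 3 points
moe_labour = margin_of_error(labour, n)  # about 0.029

print(f"Tory:   {tory:.0%} +/- {moe_tory:.1%}")
print(f"Labour: {labour:.0%} +/- {moe_labour:.1%}")

# The 6-point headline lead is therefore consistent with anything from
# a 12-point Tory lead (43% v 31%) down to a dead heat (37% v 37%).
```

Note how the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the error, which is why even large polls cannot reliably distinguish a 6-point lead from a tie.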
This is demonstrated by these handy diagrams from YouGov’s polling data on voting intentions during the run up to the 2017 UK general election…
[Chart: voting intention, 2017 general election (YouGov)]
[Chart: seat estimates, 2017 general election (YouGov)]
Based on the above, once the margin of error is taken into account, it was impossible to predict which of Labour and the Tories would win a higher share of the vote, or more seats.
Secondly, the pollsters have no way of knowing whether they are interviewing a representative sample.
When approached by a pollster, most voters refuse to answer, and the pollster has very little idea whether these non-respondents are inclined any differently from those who do respond. In the trade, this is referred to as polling’s “dirty little secret”.
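The danger of non-response can be illustrated with a small sketch. The figures here are assumptions invented for illustration: suppose the true vote shares match the 2017 result, but supporters of one party are slightly more willing to answer a pollster than supporters of another. Even with a huge number of calls, the observed shares drift away from the truth.

```python
# True population shares (roughly the 2017 result) and *assumed*,
# purely illustrative response rates that differ by party.
true_shares = {"Con": 0.42, "Lab": 0.40, "Other": 0.18}
response_rate = {"Con": 0.08, "Lab": 0.05, "Other": 0.06}

# Share of all calls that end in a response from each group...
responders = {p: true_shares[p] * response_rate[p] for p in true_shares}
total = sum(responders.values())

# ...which is what the poll actually measures.
observed = {p: responders[p] / total for p in responders}

for party in true_shares:
    print(f"{party}: true {true_shares[party]:.0%}, "
          f"polled {observed[party]:.1%}")
```

Under these made-up rates the poll would put the Conservatives on roughly 52% and Labour on 31%, despite true shares of 42% and 40% – and no amount of extra calling fixes it, because the bias is in who answers, not in how many are asked.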
Thirdly, the link between demographic data and voting patterns is less clear today. It used to be possible to triangulate polling data against demographic data from previous election results, but voter de-alignment means such data is now a less reliable check on opinion-poll findings, leaving pollsters more in the dark than ever.
Fourthly, a whole load of other factors affected people’s actual voting behaviour in the 2017 election, and the polls may have failed to capture them.
David Cowley from the BBC notes that…. ‘it seems that whether people voted Leave or Remain in 2016’s European referendum played a significant part in whether they voted Conservative or Labour this time…. Did the 2017 campaign polls factor this sufficiently into the modelling of their data? If younger voters came out in bigger numbers, were the polls equipped to capture this, when all experience for many years has shown this age group recording the lowest turnout?’
So it would seem that voting-intention surveys have always had limited validity, and that, if anything, this validity problem is getting worse: after years of over-estimating the Labour vote, the polls have now swung right back the other way to underestimating the popularity of Labour.
Having said that, these polls are not entirely useless: they did still manage to predict that the Tories would win more votes and seats than Labour, they just got the difference between them oh so very wrong.
The problem of obtaining representative samples (these days)
According to The Week (July 2017), the main problem with polling these days is that finding representative samples is getting harder. When Gallup was polling, the response rate was 90%; in 2015, ICM had to call 30,000 numbers just to get 2,000 responses (a response rate of under 7%). And those who do respond are often too politically engaged to be representative.