
Why Do Voting Opinion Polls Get it Wrong So Often?

Surveys which ask how people intend to vote in major elections get it wrong surprisingly often – but why is this?

Averaging the nine main polls at the start of the 2017 UK general election campaign and then again just before polling day, the predictions for the Conservatives show them down from 46% to 44%, and Labour up from 26% to 36%.

[Chart: voting intention, 2017 general election]

The actual vote share following the result of the general election shows the Conservatives at 42% and Labour at 40% share of the vote.

[Chart: 2017 election result, share of the vote, UK]

Writing in The Guardian, David Lipsey notes that ‘The polls’ results in British general elections recently have not been impressive. They were rightish (in the sense of picking the right winner) in 1997, 2001, 2005 and 2010. They were catastrophically wrong in 1992 and 2015. As they would pick the right winner by chance one time in two, an actual success rate of 67%, against success by pin of 50%, is not impressive.’

So why do the pollsters get it wrong so often?

Firstly, there is a statistical margin of error of plus or minus 2 or 3% in any poll – so if a poll shows the Tories on 40% and Labour on 34%, the real situation could be Tory 43%, Labour 31% – a 12 point lead. Or it could be that both Tory and Labour are on 37%, neck and neck.
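The arithmetic above can be sketched in a few lines of Python, using the illustrative figures from the text (Tories 40%, Labour 34%, margin of error ±3 points):

```python
def lead_range(tory, labour, moe=3):
    """Return the smallest and largest possible leads once the
    margin of error is applied to both parties' scores."""
    best_case = (tory + moe) - (labour - moe)   # Tories high, Labour low
    worst_case = (tory - moe) - (labour + moe)  # Tories low, Labour high
    return worst_case, best_case

low, high = lead_range(40, 34)
print(low, high)  # the 'real' lead could be anywhere from 0 to 12 points
```

A reported 6-point lead is therefore consistent with anything from a dead heat to a 12-point landslide.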

This is demonstrated by these handy diagrams from YouGov’s polling data on voting intentions during the run-up to the 2017 UK general election:

[Charts from YouGov: voting intention with statistical margin of error, and seat estimates, 2017 general election]

Based on the above, once the margin of error is taken into account, it was impossible to predict whether Labour or the Tories would win a higher proportion of the votes and more seats.

Secondly, the pollsters have no way of knowing whether they are interviewing a representative sample.

When approached by a pollster, most voters refuse to answer, and the pollster has very little idea whether these non-respondents are inclined differently from those who do respond. In the trade, this is referred to as polling’s “dirty little secret”.

Thirdly, the link between demographic data and voting patterns is less clear today. It used to be possible to triangulate polling data with demographic data from previous election results, but voter de-alignment means such data is now a less reliable check on opinion poll findings, leaving pollsters more in the dark than ever.

Fourthly, a whole load of other factors affected people’s actual voting behaviour in the 2017 election, which the polls may have failed to capture.

David Cowley from the BBC notes that…. ‘it seems that whether people voted Leave or Remain in 2016’s European referendum played a significant part in whether they voted Conservative or Labour this time…. Did the 2017 campaign polls factor this sufficiently into the modelling of their data? If younger voters came out in bigger numbers, were the polls equipped to capture this, when all experience for many years has shown this age group recording the lowest turnout?’

So it would seem that voting-intention surveys have always had limited validity, and that, if anything, this validity problem is getting worse: after years of over-estimating the Labour vote, the polls have now swung right back the other way to under-estimating the popularity of Labour.

Having said that, these polls are not entirely useless: they did still manage to predict that the Tories would win more votes and seats than Labour, they just got the difference between the two parties very wrong.

The problem of obtaining representative samples (these days)

According to The Week (July 2017), the main problem with polling these days is that finding representative samples is getting harder. When Gallup was polling, the response rate was around 90%; in 2015, ICM had to call 30,000 numbers just to get 2,000 responses. And those who do respond are often too politically engaged to be representative.
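The scale of that decline is easy to see if you work the figures quoted above (from The Week) through:

```python
# Rough arithmetic behind falling poll response rates.
calls_made = 30_000   # numbers ICM had to dial in 2015
responses = 2_000     # completed interviews obtained
response_rate = responses / calls_made * 100
print(f"{response_rate:.1f}%")  # roughly 6.7%, versus ~90% in Gallup's era
```

A drop from roughly 90% to under 7% means the self-selected minority who do answer carries far more weight in the final figures.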

 


Sampling Techniques in Social Research

Selecting a sample is the process of finding and choosing the people who are going to be the target of your research.

Most researchers will have a ‘target population’ in mind before conducting research. The target population consists of all the people who have the characteristics you wish to study. If you’re interested in conducting primary research on the experiences of working class school children in 2017 (or whatever year we’re currently in!), then your target population would be all working class school children.

Many researchers use a sampling frame to choose a sample, which is simply a list from which a sample is chosen – this might be a register of all pupils in a school, if you are conducting research in a school, for example.

Positivist researchers want to make sure their research is representative – research is representative if the characteristics of the people in the sample (the people who are actually researched) reflect the characteristics of the target population.

NB – The people who are the targets of social research are also known as the ‘respondents’.

Five sampling methods used in sociology 

Random sampling – an example of random sampling would be picking names out of a hat. In random sampling everyone in the population has the same chance of getting chosen. This is easy because it is quick and can even be performed by a computer. However, because it is down to chance you could end up with an unrepresentative sample, perhaps with one demographic being missed out.
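The ‘names out of a hat’ idea above is exactly what a computer does with simple random sampling. A minimal sketch (the names are made up for illustration):

```python
import random

# A hypothetical target population of ten people.
target_population = ["Amira", "Ben", "Chloe", "Dev", "Esme",
                     "Femi", "Grace", "Harry", "Iona", "Jack"]

# Every member has the same chance of being chosen.
sample = random.sample(target_population, k=3)
print(sample)
```

Because selection is pure chance, nothing stops the draw from missing a whole demographic – which is the representativeness risk noted above.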

Systematic sampling – an example of a systematic sample would be picking every 10th person on a list or register. This carries the same risk of being unrepresentative as random sampling as, for example, every 10th person could be a girl.
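Picking every 10th person on a list translates directly into a slice over a (hypothetical) register:

```python
# A register of 100 pupils, named pupil_1 ... pupil_100 for illustration.
register = [f"pupil_{i}" for i in range(1, 101)]

# Take every 10th name, starting from the 10th.
sample = register[9::10]
print(sample)  # ['pupil_10', 'pupil_20', ..., 'pupil_100']
```

If the register happens to have a hidden pattern (say, boys and girls alternating in blocks), every 10th entry can land on the same group – the unrepresentativeness risk described above.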

Stratified sampling – this method attempts to make the sample as representative as possible, avoiding the problems that could be caused by using a completely random sample. To do this the sampling frame is divided into a number of smaller groups, such as social class, age, gender, ethnicity etc. Individuals are then drawn at random from these groups. If you were studying doctors and had split the sampling frame into ethnic groups, you would draw 8% of your participants from the Asian group, given that 8% of doctors in Britain are Asian.
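A minimal sketch of the doctors example above, assuming hypothetical strata of 8% Asian and 92% other doctors (the names and frame sizes are invented for illustration):

```python
import random

def stratified_sample(strata, total_n):
    """strata: dict mapping stratum name -> (members, population_share).
    Draws randomly within each stratum, in proportion to its share."""
    sample = []
    for name, (members, share) in strata.items():
        n = round(total_n * share)            # places this stratum gets
        sample += random.sample(members, n)   # random draw within stratum
    return sample

strata = {
    "asian": ([f"asian_dr_{i}" for i in range(50)], 0.08),
    "other": ([f"other_dr_{i}" for i in range(500)], 0.92),
}
sample = stratified_sample(strata, total_n=100)
print(len(sample))  # 100: 8 from the Asian stratum, 92 from the rest
```

The random draw still happens, but only within each stratum, so the overall proportions are guaranteed to match the target population.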

Quota sampling – in this method researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, with 30 of them being unemployed. The researcher might then find these 30 by going to a job centre. Representativeness remains a problem with the quota sampling method.
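The quota logic above can be sketched as follows – unlike stratified sampling there is no random draw from a frame; whoever the researcher encounters first fills the quota (the people and groups here are invented for illustration):

```python
def fill_quotas(people, quotas):
    """people: list of (name, group) in the order encountered;
    quotas: dict mapping group -> number still needed."""
    remaining = dict(quotas)
    sample = []
    for name, group in people:
        if remaining.get(group, 0) > 0:   # quota for this group not yet full
            sample.append(name)
            remaining[group] -= 1
    return sample

# 200 hypothetical passers-by, roughly one in three unemployed.
people = [(f"p{i}", "unemployed" if i % 3 == 0 else "employed")
          for i in range(200)]
sample = fill_quotas(people, {"unemployed": 30, "employed": 60})
print(len(sample))  # 90 once both quotas are filled
```

Because it is first-come-first-counted, the sample can end up systematically unrepresentative – e.g. all 30 unemployed participants recruited at the same job centre.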

Multistage sampling – with multistage sampling, a researcher selects a sample by combining different sampling methods. For example, in stage 1 a researcher might use systematic sampling, and in stage 2 they might use random sampling to select a subset for the final sample.

Snowball sampling – with this method, researchers find a few participants, ask them to find further participants, and so on. This is useful when a sample is difficult to obtain. For example, Laurie Taylor used this method when investigating criminals: it would have been difficult for him to find a sample himself as he didn’t know many criminals; however, the criminals he did reach knew plenty of people willing to participate, so the snowball method was more efficient.