
Surveys on Family Life in the UK

Social surveys are one of the most common methods for routinely collecting data in sociology and the social sciences more generally. There are lots of examples of social surveys throughout the families and households module in the A level sociology syllabus – so what do they tell us about family life in modern Britain, and what are their strengths and limitations?

This information should be useful both for the families and households topic and for exploring the strengths and limitations of social surveys in research methods…

Attitudes to marriage surveys

Headline Fact – in 2016, only 37% of the UK population believe people should be married before they have children.

Findings from NatCen’s 2016 British Social Attitudes survey suggest that the British public is reaching a tipping point in its views on marriage.

For the first time since NatCen started asking whether people who want to have children ought to be married, the proportion who disagree (35%) is almost the same as those who agree (37%).

Back in 1989, seven people in ten (70%) felt that people should be married if they want to have children, compared with fewer than two in ten (17%) who disagreed.

It’s worth noting how quickly attitudes have changed since the previous survey in 2012, as demonstrated in the infographic below – by 2016 the figure was down to 37%.

Percentage of the UK population who agree that parents should be married before they have children

What are the strengths of this survey (focussing on this one question)?

  • I’m tempted to say the validity is probably quite good, as this isn’t a particularly sensitive topic, and the focus of the question is the ‘generalised other’, so there should be little social desirability bias.
  • It’s very useful for making comparisons over time, given that the same question has been asked in pretty much the same way for quite a few years now.
  • Representativeness seems to be OK – NatCen sampled a range of ages and people with different political views, so we can compare all that too. No surprises here, by the way: older people and Conservatives are more likely to be in favour of marriage.

What are the limitations of this survey?

  • As with all surveys, there’s no indication of why belief in marriage is in decline, no depth or insight.
  • The question above is so generalised, it might give us a false impression of how liberal people are. I wonder how much the results would change if you made the question more personal – would you rather your own son/daughter were married before they had children? Or just different – ‘all other things being equal, it’s better for children to be brought up by married parents, rather than by non-married parents’ – and then Likert-scale it. Of course, that question itself is maybe just a little leading…

Housework Surveys 

Headline ‘fact’ – women still do 60% more housework than men (based on ONS data from 2014-15)


Women carry out an overall average of 60% more unpaid work than men, ONS analysis has shown.

Women put in more than double the proportion of unpaid work when it comes to cooking, childcare and housework: on average, men do 16 hours a week of such unpaid work, compared with 26 hours a week for women.
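As a quick sanity check on the arithmetic (my own calculation from the figures quoted above, not the ONS’s – the overall 60% headline also factors in categories such as transport, where men do more):

```python
# Sanity-checking the figures quoted above: 26 hours vs 16 hours a week.
women_hours = 26
men_hours = 16

extra = (women_hours - men_hours) / men_hours
print(f"Women do {extra:.1%} more unpaid work per week")  # -> 62.5% more
```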

The only area where men put in more unpaid work hours than women is in the provision of transport – this includes driving themselves and others around, as well as commuting to work.

This data is derived from the UK Time Diary Study (2014-15), which used a combination of time-use surveys and interviews to collect data from around 9,000 people in 4,000 households.

It’s worth noting that even though the respondents were merely filling in a few pages’ worth of diary, the accompanying technical documentation runs to over 200 pages, mainly advice on how researchers are supposed to code responses.
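To give a flavour of what that coding involves, here is a minimal sketch with invented activity labels and codes (the real study uses a harmonised coding frame, not these):

```python
# A minimal sketch of time-use diary coding, with invented codes: free-text
# diary slots are mapped to standardised activity categories for analysis.
ACTIVITY_CODES = {
    "cooking dinner": "unpaid work: cooking",
    "ironing": "unpaid work: housework",
    "school run": "unpaid work: transport",
    "watching tv": "leisure",
}

diary = ["cooking dinner", "watching tv", "school run"]
coded = [ACTIVITY_CODES.get(entry, "uncoded") for entry in diary]
print(coded)
# -> ['unpaid work: cooking', 'leisure', 'unpaid work: transport']
```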

What are the strengths of this survey?

  • The usual ease of comparison. You can clearly see the differences in hours between men and women – NB the survey also shows differences by age and social class, but I haven’t included that here (to keep things brief).
  • It’s a relatively simple topic, so there are unlikely to be any validity errors due to interpretation on the part of people completing the surveys: it’s obvious what ‘washing clothes’ means, for example.
  • This seems to suggest the continued relevance of Feminism to helping us understand and combat gender inequality in the private sphere.

What are the limitations of this data? 

  • Click on the above link and you’ll find that there is only a 50% response rate… which makes the representativeness of this data questionable. If we take into account social desirability, then surely those couples with more equal housework patterns will be more likely to return them; and the busier the couple, the less likely they are to complete the surveys. No, really not convinced about the representativeness here!
  • This research tells us nothing about why these inequalities exist – to what extent is this situation freely chosen, and to what extent is it down to an ‘oppressive socialisation into traditional gender norms’, or just straightforward coercion?
  • Given all of the coding involved, I’m not even convinced that this method is really that practically advantageous… overall this research seems to have taken quite a long time, which is a problem given the first criticism above!

Surveys on Children’s Media Usage

Headline Fact: 5-15 year olds spend an average of 38 hours a week either watching TV, online or gaming.

It’s also worth noting that for the first time, in 2016, children aged 5-15 say they spend more time online than they do watching television on a TV set.

This is based on research conducted in April-June 2016, comprising 1,375 in-home interviews with parents and children aged 5-15, along with 684 interviews with parents of children aged 3-4 (OFCOM: Children and Parents: Media Use and Attitudes Report).

Strengths of this Survey

  • It makes comparisons over time easy, as the same questions are asked over a number of different years.
  • Other than that, I think there are more problems than strengths!

Limitations of this Survey

  • There are no details of how the sample was achieved in the methodology – so I can’t comment on the representativeness.
  • These are just estimates from the children and parents, so the data may not be accurate. Children especially might exaggerate their media usage when alone, but downplay it if a parent is present.
  • I’m especially suspicious of the data for the 3-7 year olds, given that this comes from the parent, not the child… there’s a strong likelihood of social desirability leading to under-reporting… good parents don’t let their kids spend too much time online after all!

Further examples of surveys on the family

If you like this sort of thing, you might also want to explore these surveys…

The Working Families Parenting Survey – which basically shows that most parents are too busy working to spend as much time with their kids as they want….

The University of Manchester’s Online Parenting Survey (which takes 20-30 minutes)


Why Do Voting Opinion Polls Get it Wrong So Often?

Surveys which ask how people intend to vote in major elections seem to get it wrong surprisingly often, but why is this?

Taking the averages of all nine pollsters’ first and then final polls for the 2017 UK general election, the predictions for the Conservatives show them down from 46% to 44%, and Labour up from 26% to 36%.


The actual result of the general election shows the Conservatives at 42% and Labour at 40% of the vote share.
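Putting the two sets of figures side by side shows the scale of the miss (a simple calculation from the numbers above):

```python
# Final poll averages vs actual 2017 vote shares, as quoted above.
poll = {"Con": 44, "Lab": 36}
actual = {"Con": 42, "Lab": 40}

poll_lead = poll["Con"] - poll["Lab"]        # predicted lead: 8 points
actual_lead = actual["Con"] - actual["Lab"]  # actual lead: 2 points
print(f"Predicted Tory lead: {poll_lead} points; actual lead: {actual_lead} points")
```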


Writing in The Guardian, David Lipsey notes that ‘The polls’ results in British general elections recently have not been impressive. They were rightish (in the sense of picking the right winner) in 1997, 2001, 2005 and 2010. They were catastrophically wrong in 1992 and 2015. As they would pick the right winner by chance one time in two, an actual success rate of 67%, against success by pin of 50%, is not impressive.’

So why do the pollsters get it wrong so often?

Firstly, there is a plus or minus 2 or 3% statistical margin of error in a poll – so if a poll shows the Tories on 40% and Labour on 34%, this could mean that the real situation is Tory 43%, Labour 31% – a 12 point lead. Or it could mean both Tory and Labour are on 37%, neck and neck.
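A minimal sketch of that point, using the hypothetical poll figures above:

```python
# With each party's share measured to +/- 3 points, a reported 6-point
# lead is compatible with anything from a dead heat to a 12-point gap.
tory, labour, margin = 40, 34, 3

best_case = (tory + margin) - (labour - margin)   # 43 vs 31 -> 12 points
worst_case = (tory - margin) - (labour + margin)  # 37 vs 37 -> 0 points

print(f"Reported lead: {tory - labour} points")
print(f"Consistent with a lead of anywhere from {worst_case} to {best_case} points")
```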

This is demonstrated by these handy diagrams from YouGov’s polling data on voting intentions in the run-up to the 2017 UK general election…

Voting intention, 2017 general election (YouGov, with margins of error)

Seat estimates, 2017 general election (YouGov)

Based on the above, and taking into account the margin of error, it was impossible to predict which of Labour and the Tories would win the higher proportion of the votes and the most seats.

Secondly, the pollsters have no way of knowing whether they are interviewing a representative sample.

When approached by a pollster, most voters refuse to answer, and the pollster has very little idea whether these non-respondents are differently inclined from those who do respond. In the trade, this is referred to as polling’s “dirty little secret”.
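A toy simulation (with entirely made-up response rates) illustrates how much this can matter: even with a dead-even electorate, a difference in willingness to respond skews the headline figure badly.

```python
# Differential non-response: the electorate is split 50/50, but Party A
# supporters are keener to answer pollsters (response rates are invented).
import random

random.seed(42)

VOTERS = 100_000
RESPONSE_RATE = {"Party A": 0.10, "Party B": 0.06}

responses = []
for _ in range(VOTERS):
    vote = random.choice(["Party A", "Party B"])   # true split: 50/50
    if random.random() < RESPONSE_RATE[vote]:      # who actually answers
        responses.append(vote)

share_a = responses.count("Party A") / len(responses)
print(f"True Party A share: 50.0% | Polled share: {share_a:.1%}")
# With these rates the poll reads roughly 62% for Party A,
# despite a dead-even electorate.
```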

Thirdly, the link between demographic data and voting patterns is less clear today. It used to be possible to triangulate polling data with demographic data from previous election results, but voter de-alignment means that such data is now a less reliable check on opinion-poll findings, leaving pollsters more in the dark than ever.

Fourthly, a whole load of other factors affected people’s actual voting behaviour in the 2017 election, and maybe the polls failed to capture these.

David Cowley from the BBC notes that… ‘it seems that whether people voted Leave or Remain in 2016’s European referendum played a significant part in whether they voted Conservative or Labour this time…. Did the 2017 campaign polls factor this sufficiently into the modelling of their data? If younger voters came out in bigger numbers, were the polls equipped to capture this, when all experience for many years has shown this age group recording the lowest turnout?’

So it would seem that voting-intention surveys have always had limited validity, and that, if anything, this validity problem is getting worse… after years of over-estimating the number of Labour votes, they’ve now swung right back the other way to underestimating the popularity of Labour.

Having said that, these polls are not entirely useless: they did still manage to predict that the Tories would win more votes and seats than Labour, they just got the difference between them oh so very wrong.

The problem of obtaining representative samples (these days)

According to The Week (July 2017), the main problem with polling these days is that finding representative samples is getting harder. When Gallup was polling, the response rate was 90%; in 2015, ICM had to call up 30,000 numbers just to get 2,000 responses – a response rate of under 7%. And those who do respond are often too politically engaged to be representative.

 


Why Did Labour Gain Seats in the 2017 General Election?

In the recent June 2017 general election, Labour won more votes than it did in 2001, 2005, 2010 or 2015, proving almost all the forecasts and commentators wrong. According to this Guardian article, there are three main reasons for this…

Labour motivated young people to get out and vote

A lot’s been made of the historically high turnout among 18-24 year olds… It looks like in key constituencies – from Harrow West to Canterbury (a seat that had been Conservative since 1918) – the youth vote was vital. Labour showed it cared about young people by promising to scrap tuition fees, an essential move to stop the marketisation of higher education, and it proposed a house-building programme that would mean many more could get on the property ladder.

This is in stark contrast to the two other major parties – the Lib Dems in 2010 under Nick Clegg lied to them, and the Conservatives have attacked them – cutting housing benefits for 18- to 21-year-olds, excluding under-25s from the minimum wage rise and slashing the education maintenance allowance. At this election, Theresa May offered nothing to young people in her manifesto. Their message was: put up with your lot. Under the Tories, young people have been taken for granted and sneered at as too lazy to vote.

The NUS reported a 72% turnout by young people, and there is a definite thread in the media attributing the swing towards Labour to this.

However, this is contested by Jack Sommors, who suggests in this article that it was middle-aged people who swung the election result away from the Tories.

‘Lord Ashcroft’s final poll, which interviewed 14,000 people from Wednesday to Friday last week, found people aged 35 to 44 swung to Labour – 50% voted for them while just 30% voted for the Tories. This is compared to 36% of them voting Labour and 26% backing the Tories just two years ago’.

A further two reasons which might explain the swing – among the younger half of the voting population, rather than just the very youngest – are:

Labour offered localised politics, not a marketing approach

Labour rejected the marketing approach to politics in favour of a strong, localised grassroots campaign… this was not simply an election May lost; it was one in which Corbyn’s Labour triumphed. Labour proposed collectivism over individualism and a politics that people could be part of.

Labour offered a genuine alternative to neoliberalism…

Labour offered a positive agenda to an electorate that’s been told its only choice is to swallow the bitter pill of neoliberalism – offering a decisive alternative to Tory austerity in the shape of a manifesto packed with policies directly challenging what has become the economic status quo in the UK. Labour no longer accepted the Tory agenda of cuts (a form of economics long ago abandoned in the US and across Europe): it offered investment in public services, pledged not to raise taxes for 95% of the population, talked about a shift to a more peaceful foreign policy, promised to take our rail, water and energy industries out of shareholders’ hands and rebalance power in the UK.

So how is this relevant to A-level Sociology…?

  • In terms of values… it seems to show a widespread rejection of neoliberal ideas among the young, and possibly evidence that neoliberal policies really have damaged young people’s (and working-class people’s) life chances, with this result being a rejection of that.
  • In terms of the media… it’s a reminder that the mainstream media doesn’t reflect public opinion accurately – just a thin sliver of the right-wing elite. It also suggests that the mainstream media is losing its power to shape public opinion and behaviour, given the negative portrayals of Corbyn in the mainstream press.

Value-Freedom and explaining election results…

The above article is written with a clearly left-leaning bias. Students may like to reflect on whether it’s actually possible to explain the dramatic voter swing towards Labour objectively, and on how you might go about getting valid and representative data on why people voted as they did, given that there are so many possible variables feeding into the outcome of this election.

Sources

Young people voted because labour didn’t sneer at them. It’s that simple

General Election 2017: Young turn out ‘remarkable’


Outline and Explain Two Theoretical Problems of Using Social Surveys in Social Research

Firstly, social surveys suffer from the imposition problem: closed questions limit what respondents can say. Interpretivists argue that respondents have diverse motives, and that it is unlikely researchers will think up every possible relevant question and every possible response; thus questionnaires will lack validity.

This is especially true for more complex topics such as religious belief – ticking the ‘Christian’ box can mean many different things to many different people, for example.

Interpretivists thus say that surveys are socially constructed – they don’t reflect reality, but the interests of researchers.

However, this is easily rectified by including a section at the end of questionnaires in which respondents can write their explanations.

Secondly, self-completion surveys can also suffer from poor representativeness…

Postal questionnaires can suffer from a low response rate, and samples might be self-selecting – the illiterate are unlikely to respond, and people might be ashamed or scared to return questionnaires on sensitive topics.

Also, you can’t check who has filled them in, so surveys may actually misrepresent the target population.

However, it is possible to rectify this with incentives and booster samples.

The above is a suggested response to a possible 10 mark ‘pure methods’ question which might come up on either paper 1 or 3 of the AQA’s A level sociology exams. It follows the basic formula – make a point, develop it twice, and then evaluate it – which to my mind seems to work well for ‘pure methods’ 10 mark questions.

Theory and Methods A Level Sociology Revision Bundle 

If you like this sort of thing, then you might like my Theory and Methods Revision Bundle – specifically designed to get students through the theory and methods sections of A level sociology papers 1 and 3.

Contents include:

  • 74 pages of revision notes
  • 15 mind maps on various topics within theory and methods
  • Five theory and methods essays
  • ‘How to write methods in context essays’.

Outline and Explain Two Practical Advantages of Using Social Surveys in Social Research (10)

It’s possible that a 10 mark question on A level sociology papers 1 or 3 could simply ask you about a ‘pure’ research method, as with the example above.

This post suggests a strategy for answering such questions and provides one exemplar answer, which I think would get full marks in the exam.

Strategy 

  • Make two distinct points – as different from each other as possible!
  • For each of the points, explain it, develop it twice, and (if it flows) do a linked evaluation.
  • It’s good practice to link to Positivism and Interpretivism and use examples.

Exemplar Answer

Firstly, surveys are a quick and cheap means of gathering data from large numbers of people across wide areas because, once sent out, millions of people could potentially fill them in at the same time.

They are especially quick/efficient if put online, because computers can analyse pre-coded answers and quantify/compare the data instantaneously.

They also make it easier to gain government funding, because you can generalise from large data sets and thus use the findings to inform social policy – the census, for example, allows the government to plan for school places in the future.

However, Interpretivists would argue you never get in-depth/valid data with this method, and so predictions can be flawed – the polls on Brexit didn’t tell us what people really thought about this issue!

Secondly, you don’t need ‘people skills’ to use social surveys, so anyone can use them to do research.

This is because they can be written in advance and put online or sent by post, and thus the sociologist’s personal involvement with respondents can be kept to a minimum.

This also means that busy people with family commitments can easily use social surveys.

However, Interpretivists and Feminists argue this wouldn’t be an advantage for all topics – some areas are so sensitive they require personal contact, such as domestic abuse.


How Old are Twitter Users?

‘Who Tweets’ is an interesting piece of recent research which attempts to determine some basic demographic characteristics of Twitter users, relying on nothing but the data provided by the users themselves in their Twitter profiles.

Based on a sample of 1,470 Twitter profiles* in which users clearly stated** their age, the authors of ‘Who Tweets’ found that 93.9% of Twitter users were under the age of 35. The full age profile of Twitter users (according to the ‘Who Tweets’/COSMOS data), compared to the actual age profile taken from the UK Census, is below:

The age profiles of Twitter users – really?

 

Compare this to the Ipsos MORI Tech Tracker report for the third quarter of 2014 (which the above research draws on), which used face-to-face interviews based on a quota sample of 1,000 people.

Ages of Twitter users according to a face-to-face MORI poll

Clearly this shows that only 67% of Twitter users are under the age of 35 – quite a discrepancy with the user-declared data!

The researchers note that:

‘We might… hypothesise that young people are more likely to profess their age in their profile data and that this would lead to an overestimation of the ‘youthfulness’ of the UK Twitter population. As this is a new and developing field we have no evidence to support this claim, but the following discussion and estimations should be treated cautiously.

Looking again at the results from the Technology Tracker study conducted by Ipsos MORI, nearly two thirds of Twitter users were under 35 years of age in Q3 of 2014 whereas our study clearly identifies 93.9% as being 35 or younger. There are two possible reasons for this. The first is that the older population is less likely to state their age on Twitter. The second is that the age distribution in the survey data is a function of sample bias (i.e. participants over the age of 35 in the survey were particularly tech-savvy). This discrepancy between elicited (traditional) and naturally occurring (new) forms of social data warrants further investigation…’
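The first of these hypotheses is easy to illustrate with a toy calculation (the disclosure rates below are invented for the sake of the example):

```python
# If younger users are more likely to state their age in their profile,
# profile-based estimates overstate youthfulness. Disclosure rates invented.
true_users = {"under 35": 67, "35 and over": 33}           # e.g. the MORI split
disclosure_rate = {"under 35": 0.30, "35 and over": 0.05}  # assumed rates

stated = {group: n * disclosure_rate[group] for group, n in true_users.items()}
total = sum(stated.values())

for group, n in stated.items():
    print(f"{group}: {n / total:.1%} of age-stating profiles")
# Under these assumptions ~92% of age-stating profiles are under 35,
# close to the 93.9% figure, even though the true share is 67%.
```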

Comment 

This comparison clearly shows how we get some very different data on a very basic question (‘what is the age distribution of Twitter users?’) depending on the methods we use, but which is more valid? The Ipsos face-to-face poll is done every quarter, and it persistently yields results which are nothing like the COSMOS figures, and it’s unlikely that you’re going to get a persistent ‘tech-savvy’ selection bias in every sample of over-35s, so does that mean it’s a more accurate reflection of the age profile of Twitter users?

Interestingly, the Ipsos data shows a definite drift towards older users over time; it’d be interesting to know if more recent COSMOS data reflects this. More interestingly, the whole point of COSMOS is to provide us with more up-to-date, ‘live’ information – so where is it? Sort of ironic that the latest public reporting is already 12 months behind good old Ipsos…

Age profiles of Twitter users in final quarter of 2015 according to MORI

 

 

At the end of the day, I’m not going to be too harsh about the above ‘Who Tweets’ study: it is experimental, and projects like it are still working through the methodological limitations of this kind of data. It would just be nice if they, err, got on with it a bit… come on, Sociology, catch up!

One thing I am reasonably certain about is that the above comparison shows the continued importance of terrestrial methods if we want accurate demographic data.

Of course, one simple way of checking the accuracy of the COSMOS data is simply to do a face-to-face survey and ask people what their age is and whether they state it in their Twitter profiles; then again, I’m sure they’ve thought of that… maybe in 2018 we’ll get a report?

*drawn from the Collaborative Online Social Media Observatory (COSMOS)

**there’s an interesting discussion of the rules applied to determine this in the ‘Who Tweets’ article.