How many teens are on antidepressants?

A recent survey found that one third of teens have been prescribed antidepressants, but this is probably a result of sampling bias.

One in three teenagers is on antidepressants, according to an iNews article published in August 2022.

You can read the full article here: One in three teens on antidepressants as lack of mental health services puts pressure on GPs to help.

Now it may well be tough being a teenager these days, especially during the Covid-19 Pandemic, but this figure does sound alarmingly high!

And I’m not the only one who thinks so – in fact, this statistic may not even be accurate, according to some deeper research and analysis by Nathan Gower on behalf of Radio 4’s More or Less show.

The figure above comes from a survey conducted in July 2022 by Stem4, a charity which supports teenage mental health.

This was a broad-ranging survey looking at teenagers’ mental health and wellbeing overall, based on a ‘general national sample’ of 2007 teenagers. The question which yielded the results that led to the headlines above was:

“Have you been prescribed antidepressants to treat depression or other mental health conditions?”

37% of 12 to 18 year old respondents reported that they had been prescribed antidepressants at some point in their life, which is where the one in three figure above comes from.

NB: Stem4 was asked to add that question to their survey by Good Morning Britain, and the two then teamed up and had a great time discussing the findings (uncritically, of course)…

Official Statistics on Antidepressant Prescriptions

The problem with the above survey findings is that official statistics show VERY different proportions.

Dr Ruth Jack, a Senior Fellow in the Faculty of Medicine and Health Sciences at the University of Nottingham, has conducted research on the prescription rates of antidepressants to teenagers in England (rather than the whole of the U.K.).

Her methodology involved looking at hundreds of thousands of medical records from G.P.s in England up to 2017, and her findings are very different from the survey results above.

Of 12-18 year olds in 2017, just 2.3% had ever been prescribed an antidepressant.

That 2.3% should cover most prescriptions, because although specialist mental health practices and hospitals can also prescribe antidepressants to teenagers, most prescriptions revert back to G.P.s.

An alternative source is NHS England, which publishes data on how many patients are prescribed antidepressants in a year. NHS England uses different age groupings, but the findings are similar to Dr Jack’s – in the low single-digit percentages.

So both of the above sources – based on official NHS statistics and doctors’ own records – show much lower figures than the survey conducted by Stem4.

There is a MASSIVE difference: 37% of 12-18 year olds according to Stem4’s survey, based on the self-reporting of the teenagers themselves, compared to 2.3% according to the doctors’ own records – a difference of around 16 times.

Stem4’s survey reports higher rates of prescription among younger teenagers compared to older teenagers. However, both the NHS data and Dr Jack’s research show the opposite: lower rates for younger teens and higher rates for older teens, with prescription numbers getting significantly higher from age 16 onwards.

Explaining the differences

The survey data is from 2022 while Dr Jack’s data only goes up to 2017, so it could be that the antidepressant prescription rate for teenagers increased radically during the Pandemic – but this would mean a HUGE 16-fold increase!

But such a massive recent increase is unlikely: the NHS data we have runs up to 2021, and it suggests that prescriptions rose by about 10% during Covid – nowhere near 16 times!

When interviewed by More or Less, the CEO of Stem4 said that the objective of their survey was to hear the voices of young people by giving them an opportunity to express themselves, and that they saw no reason to hold back findings which tell us how young people feel, even if they are very different from the official statistics.

To support her survey findings she cites a Freedom of Information response, released in August 2021, which suggested that GP prescriptions for those aged 5 to 12 had increased by 40% between 2015 and 2021.

The More or Less interviewer seemed to be inviting her to concede that her findings were completely invalid, but she wasn’t backing down, suggesting that the real rate of teen prescriptions was probably halfway between her data and the official data.

However, it was also clear that the data scientists from More or Less were having none of this – it simply isn’t possible that one third of teens have ever been on prescription antidepressants; the official numbers just don’t support it.

Credible Data versus Eye Catching, Distorted Data

Dr Jack’s data and the NHS data are valid and reliable: they accurately reflect the underlying reality, give us the actual rate at which teenagers are prescribed medication, and can help us tackle the problems of teenage mental ill health.

Stem4’s research is an invalid data set which has produced a distorted picture of reality, making an eye-catching headline that draws people’s attention to Stem4 and the mental health support services they offer.

Explaining Stem4’s Misleading Survey Results

The first thing to note is that Stem4’s research is probably measuring something different from the NHS data – the former asks ‘have you ever been prescribed’, while the NHS data counts current prescriptions.

So if it’s a different 2.3% every year then over 7 years (12-18) we get to around 15%.

But this is still a long way off the reported 37%.
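For anyone who wants to check this arithmetic, here’s a quick back-of-the-envelope sketch in Python. The 2.3% annual figure is Dr Jack’s; the assumption that a completely different 2.3% is prescribed each year is my own generous, no-overlap scenario, so the first number is an upper bound.

```python
# Back-of-the-envelope check: if a *different* 2.3% of teenagers were
# prescribed antidepressants each year, how many would have 'ever' been
# prescribed one across the seven years from age 12 to 18?

annual_rate = 0.023   # ~2.3% prescribed in a given year (Dr Jack's figure)
years = 7             # ages 12 to 18 inclusive

# Generous no-overlap scenario: nobody is prescribed in two different years
ever_upper_bound = annual_rate * years
print(f"{ever_upper_bound:.1%}")   # -> 16.1%

# Slightly more realistic: treat each year as independent, so some teens
# are counted in several years and the 'ever' rate comes out a bit lower
ever_independent = 1 - (1 - annual_rate) ** years
print(f"{ever_independent:.1%}")   # -> 15.0%
```

Either way, you get to roughly 15-16% at most – less than half of the 37% Stem4 reported.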

Personally I think that sampling bias probably explains the rest of the difference.

Stem4’s own report on the survey results tells us that they used a company called SurveyGoo to conduct the research, and SurveyGoo specialises in online surveys.

There is a chance that the survey, as marketed, would have been more appealing to those teenagers who have had mental health problems in the past.

Say that SurveyGoo has 10K teens in its panel and the survey goes out to all of them – a higher proportion of teens who have had depression would be interested in answering it compared to those teens who hadn’t had depression.
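To see how powerful this kind of self-selection effect can be, here’s a toy simulation in Python. Every number in it – the panel size, the ‘true’ rate and the response rates – is invented by me for illustration; these are not SurveyGoo’s actual figures.

```python
# Toy illustration of self-selection bias - every number here is made up.
# Suppose the true 'ever prescribed' rate is 15%, a panel of 10,000 teens
# all receive the survey invite, but teens who have been prescribed
# antidepressants are three times as likely to respond.

panel = 10_000
true_rate = 0.15

prescribed = int(panel * true_rate)    # 1,500 teens
not_prescribed = panel - prescribed    # 8,500 teens

response_rate_prescribed = 0.30        # assumed: 3x more likely to respond
response_rate_other = 0.10

responders_prescribed = prescribed * response_rate_prescribed   # 450 responses
responders_other = not_prescribed * response_rate_other         # 850 responses

# The rate we *observe* among those who actually answered the survey:
observed_rate = responders_prescribed / (responders_prescribed + responders_other)
print(f"{observed_rate:.0%}")   # -> 35%, despite a true rate of 15%
```

With nothing more than a plausible difference in willingness to respond, a 15% true rate shows up as 35% in the survey – close to Stem4’s headline figure.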

The problem here is that we can’t go back easily and check the data as it’s not freely available for public consultation.

Biased Research…?

There is an even darker side to this. This could be a case of deliberately misleading statistics being publicised for commercial gain.

The question above was asked by Good Morning Britain, a sensationalist tabloid media show which wants eyes – and this is an eye-catching headline, so why would they care about a possibly biased sample?

And the same goes for Stem4 – they make their money selling mental health and wellbeing packages to schools and other institutions, so it is in their interests to exaggerate the extent of teen depression and especially ‘prescription abuse’, because they are offering early intervention strategies – for a cost, of course!

SignPosting and Related Posts

This should be of interest for the research studies module.

Please click here to return to the homepage – ReviseSociology.com

Bias in Presenting Quantitative Data

Newspapers can ‘bias’ the presentation of quantitative data by stretching out the scale of the data they present, making differences between bars seem larger than they actually are (or vice versa!).

Quantitative research methods are usually regarded as being more objective than qualitative research methods as there is less room for the subjective biases and interpretations of researchers to influence the data collection process in quantitative research.

However, bias can still creep into quantitative research, and one way this can happen is through the decision about how to present the data, even in a basic visualisation.

Specifically, one can take the same data and stretch out the scale of a graph displaying that data and give the impression that the differences between the subjects under investigation are wider than in the original presentation.

Bias in scaling graphs

A recent example of what I’m going to call ‘bias in scaling graphs‘ can be found in how an article by The Guardian displays recent data on how much GDP (Gross Domestic Product) grew in different European countries between 2019 and 2022.


The Guardian article (September 2022) in question is this one: UK is only G7 country with smaller economy than before Covid-19, which displays the following graphical data to show how the UK’s GDP is falling compared to other G7 nations.

Source: The Guardian, 2022

Now you might think ‘this is quantitative data so it’s objective’, and on that basis no one can argue with what it’s telling us – the U.S. economy is doing VERY WELL compared to most European nations; the impression we get is that it is growing more than TWICE as fast.

And after all, this is fair enough – a 2.6% growth rate is more than twice as fast as a 1% or less growth rate!

Same data different scale…

However you might think differently about the above when you see the same data (almost) displayed by the UK Government in this publication: GDP International Comparisons: Key Economic Indicators which features the graph below:

Source: Commons Library 2022

Note that the data is ALMOST the same – except for Britain’s figure, which is 0.6% positive rather than negative. The Guardian article was written after the UK Gov report, on the basis of the UK economic growth forecast being downgraded, but everything else is the same.

My point here is that the data above is (almost) the same, and yet the graph has been ‘squashed’ compared to the one showing the same data in The Guardian article. The relative scaling is the same – if you look above you can see that the US bar is still twice as high as the EU bars – but the difference APPEARS smaller because the graph is not as stretched.

The Guardian achieves its stretched out scale by displaying the bars horizontally rather than vertically – that way there is more room to stretch them out and make the differences appear larger in a visual sense.

And with the UK now in an economic downturn, the stretched version makes Britain seem further behind other countries than it would have in the more squished presentation in the Government’s version.
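If you want to see the trick in numbers rather than pictures, here’s a small sketch. The plot extents in millimetres are invented for illustration, and the growth figures are rounded from those cited above.

```python
# How stretching a graph changes the *visual* gap between bars without
# changing the data. The plot extents in millimetres are invented for
# illustration; the growth figures are rounded from those cited above.

def bar_length_mm(value, axis_max, plot_extent_mm):
    """Length a bar is drawn at, on a linear axis running from 0 to axis_max."""
    return value / axis_max * plot_extent_mm

us_growth, eu_growth = 2.6, 1.0   # % GDP growth since 2019 (roughly)

# 'Stretched' horizontal layout vs 'squashed' vertical layout
stretched_gap = bar_length_mm(us_growth, 3, 150) - bar_length_mm(eu_growth, 3, 150)
squashed_gap = bar_length_mm(us_growth, 3, 50) - bar_length_mm(eu_growth, 3, 50)

print(round(stretched_gap), "mm vs", round(squashed_gap), "mm")   # -> 80 mm vs 27 mm
```

The ratio of the bars (2.6 : 1) is identical in both layouts – only the absolute visual gap changes, which is exactly the effect described above.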

But aren’t they both biased…?

In a word, yes – someone has to decide the format in which to present the data, and that decision is going to skew what people see.

But there are two reasons why I’m calling out The Guardian on this:

  1. It’s unusual to display bars horizontally – the standard is vertical – but there’s no way you can stretch out the visualisation vertically without it looking very odd.
  2. The differences are quite small – we are talking 1-2 percentage points of change – so a more squished scale to represent the small differences seems appropriate. The Guardian has chosen to exaggerate these from the original display, possibly to make them seem larger than they actually are.

Signposting and Related Posts

This material should be of interest to anyone studying Research Methods.

It’s also a useful example of Left Wing bias in the media, most sociologists focus on right wing bias!


Invalid Official Statistics on Volunteering?

I caught an episode of Woman’s Hour last week in which the presenter kept mentioning that according to a recent survey 62% of people in the UK had volunteered in the last week, and inviting people to discuss their experiences of voluntary work.

The survey in question (excuse the pun) was the Volunteering and Charitable Giving Community Life Survey 2020-2021.

The show was then peppered with references to people’s volunteering efforts, such as working with the homeless at Christmas, staffing food banks, helping out with the Covid-vaccination efforts and so on.

And such examples fit very well with my own imagination of what ‘voluntary work’ involves – to my mind a volunteer is someone who commits an hour or maybe more a week (I have a low bar in terms of time!) to do something such as the above, probably in conjunction with a formal charity, or at least with other people as part of a team.

But I just couldn’t believe that 62% of people did that kind of voluntary work last year.

And it turns out that they don’t.

The government survey (a form of official statistics) that yielded these results distinguishes between formal and informal volunteering.

The former type, formal volunteering, is what I (and probably most people) think of as ‘real volunteering’ – it was this kind of thing the Woman’s Hour presenter was interested in hearing about and publicising.

However, only 17% of people did formal volunteering last year…..

Just over 50% of people did ‘informal volunteering’ but this has a VERY LOW BAR for inclusion. Basically, if you babysat your friend’s kids for one day at some point last year, you get to tick the box saying that you did ‘informal volunteering’.

This basically means that ANYONE with a young family has done what this survey defines as ‘informal volunteering’. Surely EVERY FAMILY babysits once in a while for their friends – this is just normal parenting: children have friends, parents want a day to themselves every now and then, so you ‘babysit swap’. Or sleepovers – technically you could count having your friends’ children over for a sleepover with your own kids as ‘having done voluntary work in the last year’.

Add formal and informal volunteering (/ mutual parental favours) together and you get the 62% figure the Woman’s Hour presenter was talking about.
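As a quick sanity check, the three percentages fit together via the standard inclusion-exclusion rule (figures rounded as reported):

```python
# Inclusion-exclusion check on the volunteering figures (percentages are
# approximate, as reported in the Community Life Survey).

formal = 17      # % who did formal volunteering
informal = 50    # % who did informal volunteering
either = 62      # % who did at least one of the two

# |A or B| = |A| + |B| - |A and B|, so the overlap must be:
both = formal + informal - either
print(both)      # -> 5, i.e. around 5% of people did both kinds
```

In other words, the 62% isn’t a straight sum of 17% and 50% – around 5% of people did both kinds of volunteering and are only counted once.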

However to my mind 62% is a completely misleading figure – 17% is how many people ACTUALLY volunteer every year!

It’s a bit annoying TBH – the ‘informal volunteering’ category also includes things such as buying shopping for someone who can’t get out of the house, and that’s LEGIT, or valid, volunteering in my mind – but the category is too inclusive to give us any useful data on this.

Relevance to A-Level Sociology

This is a wonderful example of how a definition which is too broad – in this case what counts as ‘volunteering’ – can give a misleading, or invalid, impression of how much actual voluntary work really goes on in the UK.

This survey is a form of official statistics, so you can use this example to be critical of them.

It is possible that government officials deliberately made the definition so broad as to give the impression that there is more community spirit, or more of a ‘big society’, around than there actually is – because if there’s lots of community work and voluntary work going on, it’s easier for the government to justify doing less.

However, even with these very broad definitions, the trend in volunteering has still been going down in recent years!

Are one in five people really disabled?

According to official statistics, 19% of working-age adults – one in five people – self-report as being ‘disabled’, and this figure has been widely used in the media to promote pro-disability programming.

How do we Define Disability?

According to the formal, legal UK definition under the 2010 Equality Act, someone is disabled if they ‘have a physical or mental impairment that has a substantial and long-term negative effect on [their] ability to do normal daily activities’.

That 19% figure sounds like a lot of people, in fact it is a lot of people – that’s 13 million people in the United Kingdom.

But maybe it only sounds like a lot because when we think of ‘disability’ we tend to immediately think of people with physical and very visible disabilities – the classic image of a disabled person being someone in a wheelchair – which the media generally doesn’t help with its over-reliance on wheelchair users to signify that they are ‘representing the disabled’.

In fact there are ‘only’ 1.2 million wheelchair users in Britain – less than one in ten of the people who classify as disabled.

How do we measure disability?

The 19%, or one in five, figure comes from the UK’s Family Resources Survey, the latest published results coming from the 2018/19 round of surveys.

This is a pretty serious set of surveys, in which respondents from 20,000 households answer questions for an hour, some of them related to disability.

The questions which determine whether someone classifies as disabled or not are as follows:

  • Have you had any long-term negative health conditions in the last 12 months? If you respond yes, you move on to the next two questions:
  • Do any of these health conditions affect you in any of the following areas? The top answers listed are: mobility/stamina, breathing or fatigue/mental health/dexterity/other.
  • Final question: do any of your conditions or illnesses impact your ability to carry out your day-to-day activities? The responses here are on a four-point Likert scale ranging from ‘not at all’ to ‘a lot’.

Anyone ticking YES/YES and either ‘my illness affects me a lot’ or ‘a little’ is classified by the UK government as disabled.
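The classification logic described above can be sketched as a small function – the function name and answer labels below are my own guesses at the survey’s wording, not the Family Resources Survey’s actual coding scheme:

```python
# Sketch of the classification logic described above - the function name
# and answer labels are my own guesses, not the Family Resources Survey's
# actual coding scheme.

def classified_as_disabled(long_term_condition, affected_areas, impact):
    """impact is the answer to the final question, e.g. 'not at all',
    'a little' or 'a lot' on the four-point scale."""
    return (long_term_condition
            and len(affected_areas) > 0
            and impact in ("a little", "a lot"))

# Someone whose asthma affects their breathing 'a little' counts as disabled:
print(classified_as_disabled(True, ["breathing"], "a little"))   # -> True

# ...but answering 'not at all' to the final question does not:
print(classified_as_disabled(True, ["breathing"], "not at all"))   # -> False
```

The point is that the ‘a little’ branch is what lets relatively mild conditions through – which is exactly the validity problem discussed next.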

Validity problems with this way of measuring disability

The problem with the above is that if you have asthma or a similarly mild condition you could be classified as disabled, and this doesn’t tie in with the government’s own definition of disability, which requires a condition that ‘substantially’ affects someone’s ability to carry out everyday tasks.

Stating that you have asthma which affects your breathing a little does NOT, IMO, qualify you as disabled – but it does in this survey.

The government doesn’t publish the breakdown of responses to the final disability question, but it’s roughly a 50-50 split between those answering ‘a lot’ and ‘a little’.

In conclusion, it might be more accurate to say that one in ten people is disabled.

Relevance to A-level sociology

This short update should be a useful contemporary example to illustrate some of the validity problems associated with using social surveys, especially for topics with a high degree of subjectivity such as what disability means!

NB – I gleaned the above information from Radio Four’s More or Less, the episode which aired on Weds 10th Feb 2021.

Autobiographies in social research

An autobiography is an account of the life of an individual, written by that individual, sometimes with the assistance of a professional biographer.

One of the most popular UK autobiographies of 2020 was Harry and Meghan’s ‘Finding Freedom’, and it is supposed to ‘dispel rumors about their relationship from both sides of the pond’.

The Amazon critics, however, disagree. The comments ranked 2 and 3 in order of usefulness (accessed 18 August 2020) both give the book one star out of five and comment thus:

Dela – 1.0 out of 5 stars Pure fantasy

“… the reader can only assume a good proportion of [this book is] made up… the reader is left with a very poor impression of the couple. As someone else said – this is very much an ‘own goal’.”

600 people found this helpful

hellsbells123 – 1.0 out of 5 stars Dross of the highest order – all time low for Harry

“Dreadful book full of ridiculous unnecessary detail from a couple who profess to want privacy. This is a book masquerading as a love story but full of bile, hatred and bitterness. “

578 people found this helpful

Source

The strengths and limitations of autobiographies as a source of data

Whether they have a readership of millions or tens, autobiographies are selective in the information they provide about the life of the author.

They thus tell you what the author wants you to know about themselves and their life history.  

However, you have no way of knowing whether the events outlined in an autobiography actually happened, and I wouldn’t even trust an autobiography to give me an accurate view of the author’s own interpretation of the most significant events in their life history.

The author may exaggerate certain events, either because they mis-remember them, or because they want their book to sell, thus they are selecting what they think their audience will want to read.

In some cases, events may even be fabricated altogether.

As a rule, I’d say that the more famous someone is, the less valid their autobiography’s contents are. An exception might be less famous ‘positive thinking’ lifestyle gurus, whose income may depend more on their book sales than really famous people’s does – the really famous could possibly afford to be honest in their autobiographies!

Either way, there are so many reasons why an autobiography might lack validity, I wouldn’t trust the content of any of them – think about it, how honest would you be in your autobiography, if you knew anyone could read it?

Using autobiography sales data may be more useful…

IMO the value of autobiographies lies in telling us what people want to hear, not necessarily in getting to the truth of people’s personal lives.

If you want to know what people want to hear, look at the sales volumes – there are really no surprises…

Top selling autobiographies of all time (source)

Relevance to A-level Sociology?

Autobiographies are a source of secondary qualitative data (public rather than private documents) and so are relevant to the research methods part of the course.

Personal Documents in social research

Personal documents are those which are intended only to be viewed by oneself or intimate relations, namely friends or family. They are generally (but not always) not intended to be seen by a wider public audience.

For the purposes of A-level sociology, the two main types of personal document are diaries and personal letters.

Today, I’m inclined to include personal ‘emails’ and certain intimate chat groups – such as circles of close friends chatting on WhatsApp, in this definition, because the data produced here will reveal personal thoughts and feelings, and isn’t intended for wider public consumption.

I think we can also include some personal blogs and vlogs in this definition, as some of these do reveal personal thoughts and feelings, even if they are written to be viewed by the general public – people sharing aspects of their daily lives on YouTube, or people writing more focused blogs about their travel experiences or how they are coping with critical illnesses, all have something of the ‘personal’ about them.

We could also include ‘naughty photos’ intended only to be shared with an intimate partner, but I think I’ll leave an analysis of those kind of documents out of this particular post!

Just a quick note on definitions – you need to be careful, I think, with the distinction between personal and private documents.

  • Personal documents = anything written which reveals one’s personal thoughts and feelings. These can either be written for consumption by oneself, by close others, or sometimes for public consumption.
  • Private documents = anything not intended to be viewed by a wider public audience. These can include someone’s personal diary or intimate letters/photos between two people, but company accounts and strategy documents can also count as private – even if shared by several dozen people – if not intended for consumption by a wider audience.

As with all definitions, just be clear what you’re talking about.

Certainly to be safe, for the sake of getting marks in an A-level sociology exam question on the topic, personal diaries and ‘intimate letters’ are certainly both types of personal document.

Examples of sociological research using Personal Documents

Thomas and Znaniecki, The Polish Peasant (1918/ 1921)

Ozana Cucu-Oancea argues that this remains the most significant work using personal documents in the history of the social sciences (source).

The study used a range of both personal and public documents, and the former included hundreds of letters between Polish immigrants and their families back home in Poland, as well as several personal diaries.

In all, the work consisted of 2,200 pages in five volumes, so it’s pretty extensive, focussing on the cultural consequences of Polish migration.

The documents touched on themes such as crime, prostitution and alcoholism, and the problem of social happiness in general.

What was significant about this study from a theoretical point of view is that it put the individual at the centre of social analysis and stood in contrast to Positivism which was popular at that time.

The limitations of using personal documents in social research

  • There is a problem of interpretation. The researchers might misinterpret the meaning of the documents. The less contextual information the researchers have, the more likely this is to happen.
  • Practically it takes a long time to sift through and organise the information.
  • Who cares? Let’s face it, are you really going to read a 2,200-page work analysing letters from Polish immigrants, written over 100 years ago?

Relevance to A-level Sociology?

Personal documents are a source of secondary qualitative data (private rather than public data) and so are relevant to the research methods part of the course.


A-Level Sociology Official Statistics Starter (Answers)

One of the supposed advantages of official statistics is that they are quick and easy to use to find out basic information.

To test this out, I use the following as a starter for my ‘official statistics’ lesson with my A-level sociology students:

I print the above off as a one-page handout and give students 10 minutes to find the approximate answers to each of the questions.

If some students manage to find all of them in less than 10 minutes, they can reflect on the final question about validity. I wouldn’t expect all students to get to this, but all of them can benefit from it during class discussion after the task.

Official statistics starter: answers

Below are the answers to the questions (put here because of the need to keep updating them!)

How many people are there in the UK?

66,800,000 estimated in 2020

Source: Office for National Statistics Population Estimates.


How many households are there in the UK?

27.8 million in 2019

Source: ONS Families and Households in the UK 2019.


How many marriages were there last year in the UK?


240,000 in 2017, latest figures available

Source: ONS Marriages in England and Wales

How many cases of Domestic Violence were there in England and Wales last year?

In the year ending March 2019, an estimated 2.4 million adults aged 16 to 74 years experienced domestic abuse in the last year (1.6 million women and 786,000 men).

Source: Domestic Abuse in England and Wales, November 2019.


What proportion of GCSE grades achieved 4 or above in 2020, how does this compare to 2019?

79% of GCSE entries in 2020 received 4 or above, up from 70% in 2019.

Source: The Guardian.

How many students sat an A level in Sociology last year?

38,015 students sat an exam in A-level sociology in 2019.

Source: Joint Council for Qualifications (curse them for PDFing their data and making it less accessible for broader analysis).

Do any of the above sources lack validity?

It’s hard to argue that the last two lack validity as counts – however, you can argue that exam results are invalid measurements of students’ ability, because of variations in the difficulty of the exams and a range of other factors.

With the DV stats, there are several reasons why these cases may go under reported such as fear and shame on the part of the victims.

With marriages, there may be a few unrecorded forced marriages in the UK.

In terms of households, the validity is pretty high, as you just count the number of houses and flats; however, definitions of what counts as a household could lead to varying interpretations of the numbers.

The population stats are an interesting one – we have records of births, deaths and migration, but illegal immigration, well, by its nature it’s difficult to measure!

The point of this starter and what comes next…

It’s a kinaesthetic demonstration of the practical advantages of official statistics, and it gives students a chance to think about validity for themselves.

Following the starter, we crack on with official statistics proper – considering in more depth the strengths and limitations of different types of official statistics, drawn from other parts of the A-level sociology specification.

A-level teaching resources

If you’re interested in receiving a paper copy of this, along with a shed load of other fully modifiable teaching resources, why not subscribe to my A-level sociology teaching resources, a bargain at only £9.99 a month.

Unlike Pearsons or Tutor to You (however you spell it), I’m independent – all subscription money comes straight to me, rather than the resource designers getting a pittance while 90% of the money goes to the corporates at the top, as with those companies.

How has Coronavirus Affected Education?

The most obvious impact of the 2020 Coronavirus on education was the cancellation of GCSE and A-level exams, with the media focusing on the chaos caused by teacher predicted grades being downgraded by the exam authority’s algorithm and then the government U-turn which reinstated the original teacher predicted grades.

While it’s fair to say that this whole ‘exam debacle’ was stressful for most students, in the end the exam-year cohorts got a good deal, on average, as they were able to pick whichever ‘result’ was best.

It’s also fair to say, maybe, that most of the students who missed their GCSEs and A-levels didn’t miss out on that much education – what they missed, mostly, was the extensive period of ‘exam training’ which comes just before the exams, and those are skills that aren’t really applicable in real life.

However, in addition to the exam-year cohorts, there were also several other year groups of students – primary and secondary school pupils, and older students doing apprenticeships and degrees – whose ‘real education’ was impacted by Covid-19.

This article focuses on some of the recent research into these ‘other’, less newsworthy students.

This post has primarily been written to get students studying A-level sociology thinking about methods in context, or how to apply research methods to the study of different topics within education.

Research studies on the impact of Coronavirus on Education.

I’ve included three sources with lots of research: the DFE, The NFER and the Sutton Trust, and then a few other sources as well.

The Department for Education (DFE)

The DFE Guidance for Schools resources seems like a sensible place to start for information on the impact of the pandemic on schools.

The Guidance for the Full Opening of Schools recommends seven main measures to control the spread of the virus.

This guidance suggests there is going to be a lot more pressure on teachers to ‘police’ pupils’ actions and interactions – although ‘social distancing’ requirements depend on the individual school’s circumstances, and face coverings are not mandatory, so schools do have some discretion.

All in all, it just looks like schools are going to be quite a lot more unpleasant and stressful places to be in, as various measures are put in place to try and ensure contact between pupils is limited.

The National Foundation for Educational Research (NFER)

The NFER has produced several, mainly survey-based, research studies looking at the impact of Coronavirus on schools.

One NFER survey asked almost 3000 senior leaders and teachers in 2200 schools across England and Wales about the challenges they faced from September 2020.

The main findings of this survey are as follows:

  • Teachers report that their students are an average of three months behind with their studies after missing school during lockdown.
  • Teachers in the most deprived schools are three times more likely to report that their pupils are four months behind compared to those in the least deprived schools.
  • Over 25% of pupils had limited access to computer facilities during lockdown. This was more of a problem for pupils from deprived areas.
  • Teachers anticipate that 44% of pupils will need catch-up lessons in the coming academic year.
  • Schools are prioritising students’ mental health and well-being ahead of getting them caught up.

The Sutton Trust

The Sutton Trust has several reports which focus on the impact of Coronavirus, specifically on education. The reports look at the impacts on early-years and apprenticeships, for example.

A report by the Sutton Trust on the impact of the school shutdown in April noted some of the following key findings:

  • Private schools were about twice as likely to have well-established online learning platforms compared to state schools; correspondingly, privately schooled children were twice as likely to receive daily online lessons compared to state school children.
  • 75% of parents with postgraduate degrees felt confident about educating their children at home, compared to less than half of parents with A-levels as their highest level of qualification
  • 50% of teachers in private schools said they’d received more than three quarters of the work back, compared to only 8% in the most deprived state schools.

Research from other organisations

  • This article from the World Economic Forum provides an interesting global perspective on the impact of coronavirus – with more than a billion children worldwide having been out of school. It highlights that online learning might become more central going forwards, but points out that access to online education varies massively from country to country.
  • The Institute for Fiscal Studies produced a report in July focusing on the financial impacts of Coronavirus on universities. They estimate that the sector will have lost £11 billion in one year, a quarter of its income, and that around 5% of providers probably won’t be able to survive without government assistance.
  • This article in The Conversation makes a cross-national comparison of how schools in four countries opened up, grading each country’s approach. It’s an interesting example of how some social policies are more effective than others!

Final Thoughts

I’ve by no means covered all the available research, rather I’ve tried to get some breadth in here, looking at the impact on teachers and pupils, and at things globally too.

By all means drop some links to further research in the comments!

Two-stage balloon rocket as an introduction to ‘experiments’ in sociology

The two-stage balloon rocket experiment is a useful ‘alternative’ starter to introduce the topic of experiments – a topic which can be both a little dry, and which some students will find challenging, what with all the heavy concepts!

Using the experiment outlined below can help by introducing students to the concepts of ‘dependent and independent variables’, ’cause and effect’, ‘controlled conditions’, ‘making predictions’ and a whole load of other concepts associated with the experimental method.

The experiment, including the materials you’ll need, and some discussion questions, is outlined here – you’ll need to sign up, but it’s easy enough to do, you can use your Google account.

Keep in mind that this link takes you to a full-on science lesson where it’s used to teach younger students about physics concepts – but modified and used as a starter it’s a useful intro to a sociology lesson!

Also, students love to revert back to their childhood, and you can call this an activity which benefits the lads and the kinaesthetic learners. Lord knows there’s precious little for them in the rest of the A-level specification, so you may as well get this in while you can!

The two-stage balloon rocket experiment

(Modified version for an intro to experiments in A-level sociology!)

  1. Set up the two-stage balloon rocket experiment in advance of the students coming into the classroom. Set it up with only a small amount of air, so it is deliberately a bit naff on its first run.
  2. Get students to discuss what they think is going to happen when you release the balloon along the wire.
  3. Release the balloon.
  4. Discuss why it didn’t work too well.
  5. Get students involved with redesigning the experiment
  6. Do round two.
  7. Use ‘balloon speed’ as an example of a ‘dependent variable’ and ‘amount of air/fuel’ as an ‘independent variable’ when introducing these often difficult-to-understand concepts in the next stage (excuse the pun) of the lesson.

Questions you might get the students to consider:

  • What variables did we find had the biggest impact on how far the rocket travelled?
  • Did any variables have a very small impact or no impact at all?
  • If we had more time or other materials available, what changes could we make to get the rocket to travel even further?

Don’t forget to save the animal modelling balloons you would have bought for this and use them for the ‘Balloon Animals Starter’ in the next lesson on field experiments.

Please click here to return to the homepage – ReviseSociology.com

Sociological Experiments

This post aims to provide some examples of some of the more unusual and interesting experiments that students can explore and evaluate.

I’ve already done a post on ‘seven field experiments‘, which outlines seven of the most interesting classic and contemporary experiments relevant to various topics within the A-level sociology syllabus. In this post I provide a much fuller list and try to present some more unusual examples, focusing on contemporary experiments with video examples where possible.

The Circle

Channel Four’s ‘The Circle’ is an experiment of sorts – contestants have to stay in one room and can only interact with each other through a bespoke, in-house social media application, competing for popularity. At the end of every day the two or three most popular people get to kick out one of the three least popular people, and then a newbie comes in to replace them.

The Twinstitute

This recent series which aired on BBC2 involves getting identical twins to do the same tasks under different circumstances – to see what the effect of ‘external stimuli’ (independent variables) are on factors such as ‘concentration’.

In one classic, and super easy to relate to, example, sets of twins are asked to do a written IQ test – one half are allowed to keep their mobile phones on the table, the other half have to put them away – all other variables remain the same. The findings are predictable – the group with their phones out get worse scores.

Conclusion – mobile phones are distracting, quite a useful fact to remind students of!

Sleep deprivation makes people less likely to want to socialise with you!

A 2017 experiment measured how respondents perceived tired people. The findings were that respondents were less likely to want to socialise with sleep-deprived people.

  • 25 Participants (aged 18-47) were photographed after normal sleep and again after two days of sleep deprivation.
  • The two photographs were then rated by 122 raters (aged 18-65), according to how much they would like to socialise with the participants. The raters also rated the photos based on attractiveness, health, sleepiness and trustworthiness.
  • The raters were less likely to want to socialise with the participants in the ‘sleep-deprived’ photos compared to the photos of them when they’d had normal sleep. They also perceived the ‘sleep-deprived’ versions as less attractive, less healthy and more sleepy.
  • There was no difference in the trustworthiness ratings.

You have to think about this one a bit to work out what the variables are:

  • The main dependent variable is the raters’ ‘desire to socialise’ with the people in the photos.
  • The independent variable is the ‘level of sleep-deprivation’ (normal sleep versus two days of sleep deprivation).

What I like about this experiment is the clear ‘control measure’ – the researchers used photos of the same participants – after regular sleep and sleep-deprivation.

Without that control measure, the experiment would probably fall apart!
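To see how that within-subjects design works in practice, here’s a minimal sketch of the analysis. The ratings below are invented purely for illustration – they are not the actual data from the 2017 study:

```python
# Illustrative sketch of a within-subjects (paired) comparison.
# Ratings are made up for illustration, NOT the study's real data.

# Each pair: (rating of rested photo, rating of the SAME person's
# sleep-deprived photo) on a 1-10 'desire to socialise' scale.
paired_ratings = [
    (7, 5), (8, 6), (6, 6), (9, 7), (7, 4),
    (8, 7), (6, 5), (7, 6), (8, 5), (9, 8),
]

# Because each participant is compared with themselves, the
# difference isolates the effect of sleep deprivation.
differences = [rested - deprived for rested, deprived in paired_ratings]
mean_difference = sum(differences) / len(differences)

print(f"Mean drop in 'desire to socialise': {mean_difference:.2f} points")
```

The point of the paired design is in the `differences` line: every other feature of the person (their face, their baseline attractiveness) is held constant because they act as their own control.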

Science Professors think female applicants are less competent

In this 2012 experiment researchers sent 127 science professors around the country (both male and female) the exact same application materials from a made-up undergraduate student applying for a lab manager position.

63 of the fake applications were in the name of a male student, ‘John’; the other 64 were in the name of a female student, ‘Jennifer’.

Every other element of the applications was identical.

The researchers also matched the two groups of professors to whom the applications were sent, in terms of age distribution, scientific fields, and tenure status.

The 127 professors were each asked to evaluate the application based on:

  • the applicant’s overall competence and hireability,
  • the salary they would offer to the student, and
  • the degree of mentoring they felt the student deserved.

The faculty were not told the purpose of the experiment, just that their feedback would be shared with the student.

The results

Both male and female professors consistently regarded the female student applicant as less competent and less hireable than the otherwise identical male student:

  • The average competency rating for the male applicant was 4.05, compared to 3.33 for the female applicant.
  • The average salary offered to the female was $26,507.94, while the male was offered $30,238.10.
  • The professors’ age and sex had insignificant effects on the level of discrimination: old and young, male and female alike tended to view the female applicant more negatively.
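The size of the salary gap is simple arithmetic using the two figures quoted in the bullet points above:

```python
# Average salary offers reported in the 2012 study (see above).
male_offer = 30238.10
female_offer = 26507.94

gap = male_offer - female_offer
relative_gap = gap / male_offer

print(f"Absolute gap: ${gap:,.2f}")          # prints $3,730.16
print(f"Relative gap: {relative_gap:.1%}")   # prints 12.3%
```

In other words, the identical female applicant was offered roughly 12% less, purely on the basis of the name at the top of the application.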

Blind auditions improve the chances of female musicians being recruited to orchestras

A comparative study by Cecilia Rouse, an associate professor in Princeton’s Woodrow Wilson School and Claudia Goldin, a professor of economics at Harvard University, seems to confirm the existence of sex-biased hiring by major symphony orchestras.

Traditionally, women have been underrepresented in American and European orchestras. Renowned conductors have asserted that female musicians have “smaller techniques,” are more temperamental and are simply unsuitable for orchestras, and some European orchestras do not hire women at all.

To overcome bias, most major U.S. orchestras implemented blind auditions in the 1970s to 1980s, in which musicians audition behind a screen that conceals their identities but does not alter sound. However, some kept non-blind auditions.

This provided the context for a nice ‘natural experiment’…

Using data from the audition records, the researchers found that:

  • When preliminary auditions were blind, about 28.6 percent of female musicians advanced from the preliminary to the final round, compared with 20.2 percent of male musicians.
  • When preliminary auditions were not blind, only 19.3 percent of the women advanced, along with 22.5 percent of the men.

The researchers calculated that blind auditions increased the probability that a woman would advance from preliminary rounds by 50 percent.
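That ‘50 percent’ figure is just the relative change in the women’s advancement rate between the two audition formats – a quick check using the rates quoted above:

```python
# Women's advancement rates from preliminary rounds, in percent
# (figures as quoted in the study's findings above).
blind_rate = 28.6      # preliminary round behind a screen
non_blind_rate = 19.3  # preliminary round with no screen

relative_increase = (blind_rate - non_blind_rate) / non_blind_rate
print(f"Relative increase: {relative_increase:.0%}")  # prints 48%
```

48% is close enough to the ‘50 percent’ the researchers report; the difference comes down to rounding and the exact model they used.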

As a result, blind auditions have had a significant impact on the face of symphony orchestras. About 10 percent of orchestra members were female around 1970, compared to about 35 percent in the mid-1990s.

Rouse and Goldin attribute about 30 percent of this gain to the advent of blind auditions.

Their report was published in the September-November issue of the American Economic Review.

The Marshmallow Test

This classic 1971 experiment was designed to measure a child’s level of self-control, or will-power. In sociological terms, this is measuring a child’s ability to ‘defer gratification’.

Researchers put a child in a room with one Marshmallow. The child was informed that they could eat it whenever they wanted, but if they could wait until the researcher returned, they could have two Marshmallows.

The researcher then left and the child was left alone to deal with their temptation for approximately 15 minutes. In the end 2/3rds of children gave into temptation and ate the Marshmallow, the other third resisted.

The researchers then tracked the children through later life and found that those who had more will power/ self control (those who hadn’t eaten the treat) were more likely to do well at school, avoid obesity and generally had a better quality of life.

NB – it’s down to you to do your research on how replicable and valid this experiment is.

Here’s one of the original researchers in 2015 explaining how they’ve evolved and replicated the experiment; he’s also written a book on the importance of teaching self-control to enhance people’s quality of life:

On the other hand, this video is critical, saying that later studies found that socio-economic background accounted for around half of later life-success, with individual will-power accounting for far less than the original research suggested.

(However, this second video appears to be by one young guy with no academic credentials, other than the lame bookshelf he’s put in the background – hardly a semiotics genius.)

