Are one in five people really disabled?

According to official statistics, 19% of working-age adults, or one in five people, self-report as being ‘disabled’, and this figure has been widely used in the media to promote pro-disability programming.

How do we Define Disability?

According to the formal legal UK definition under the Equality Act 2010, someone is disabled if they ‘have a physical or mental impairment that has a substantial and long-term negative effect on your ability to do normal daily activities’.

That 19% figure sounds like a lot of people. In fact it is a lot of people – that’s 13 million people in the United Kingdom.

But maybe it only seems like a lot because when we think of ‘disability’ we tend to think immediately of people with physical and very visible disabilities – the classic image of a disabled person being someone in a wheelchair. The media generally doesn’t help here, with its over-reliance on wheelchair users to signify that it is ‘representing the disabled’.

In fact there are ‘only’ 1.2 million wheelchair users in Britain – less than one in ten of those who are classified as disabled.

How do we measure disability?

The 19%, or one in five, figure comes from the UK’s Family Resources Survey, the latest published results coming from the 2018/19 round of the survey.

This is a pretty serious survey, in which respondents from 20,000 households answer questions for an hour, some of which relate to disability.

The questions which determine whether someone is classified as disabled or not are as follows:

  • Have you had any long-term negative health conditions in the last 12 months? If you answer yes, you move on to the next two questions.
  • Do any of these health conditions affect you in any of the following areas? The top answers listed are: mobility; stamina, breathing or fatigue; mental health; dexterity; other.
  • Final question: do any of your conditions or illnesses impact your ability to carry out your day-to-day activities? The responses here are on a four-point Likert scale ranging from ‘not at all’ to ‘a lot’.

Anyone ticking YES/YES and then answering either ‘a lot’ or ‘a little’ to the final question is classified by the UK government as disabled.

Validity problems with this way of measuring disability

The problem with the above is that if you have asthma or a similarly mild condition you could be classified as disabled, and this doesn’t tie in with the government’s own definition of disability, which requires that a condition ‘substantially’ affects someone’s ability to carry out everyday tasks.

Stating that you have asthma which affects your breathing a little does NOT, in my opinion, qualify you as disabled, but it does in this survey.
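To make the point concrete, here is a minimal sketch in Python of the classification rule described above, alongside a stricter rule that only counts people whose condition affects them ‘a lot’. The field names and response labels are invented for illustration – they are not the survey’s actual variables.

```python
# Hypothetical sketch of the FRS-style rule described above.
# Field names and values are invented for illustration only.

def frs_disabled(long_term_condition: bool, affects_activities: str) -> bool:
    """Counted as disabled if a long-term condition affects daily
    activities either 'a little' or 'a lot' (the rule described above)."""
    return long_term_condition and affects_activities in ("a little", "a lot")

def stricter_disabled(long_term_condition: bool, affects_activities: str) -> bool:
    """A stricter reading, closer to the Equality Act's 'substantial' effect:
    only count those affected 'a lot'."""
    return long_term_condition and affects_activities == "a lot"

# Someone with mild asthma that affects their breathing 'a little':
print(frs_disabled(True, "a little"))       # True  - counted in the 19% figure
print(stricter_disabled(True, "a little"))  # False - not counted under the stricter rule
```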

The government doesn’t publish the breakdown of responses to the final disability question, but it’s roughly a 50-50 split between those answering ‘a lot’ and those answering ‘a little’.
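If that 50-50 split is roughly right, the arithmetic behind the ‘one in ten’ estimate below is simply:

$$0.19 \times 0.5 \approx 0.095 \approx \tfrac{1}{10}$$

– or roughly 6.5 million of the 13 million people currently counted as disabled.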

In conclusion, it might be more accurate to say that one in ten people is disabled.

Relevance to A-level sociology

This short update should be a useful contemporary example to illustrate some of the validity problems associated with using social surveys, especially for topics with a high degree of subjectivity such as what disability means!

NB – I gleaned the above information from Radio Four’s More or Less, the episode which aired on Wednesday 10th February 2021.

Researching in Classrooms

The classic method for researching in classrooms is non-participant observation, the method used by OFSTED inspectors. However, there are other methods available to the researcher who wishes to conduct research on actual lessons within schools.

Classrooms are closed environments with very clear rules of behaviour, typically containing around 20-30 students, one teacher and maybe one learning assistant, with lessons usually lasting from 40 minutes to an hour.

The obvious choice of research method for use in a classroom is non-participant observation, where the researcher takes on a role similar to that of an OFSTED inspector.

The fact that there are so many students in one place, and potentially hundreds of micro-interactions in even just a 40-minute lesson, gives the observational researcher plenty to focus on, so classrooms are perhaps some of the most data-rich environments within education.

Arguably the most useful way of collecting observational data would be for the researcher to decide what they are looking for in advance – possibly how many times teachers praise which pupils, or how many times disruptive behaviour takes place and how the teacher responds – rather than trying to watch everything, which would be difficult.
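As a rough illustration, here is a minimal sketch in Python of what such a pre-defined observation schedule might look like if recorded digitally – the categories below are invented purely for illustration; the researcher fixes them in advance and simply tallies events as they happen.

```python
from collections import Counter

# Hypothetical structured observation schedule for one lesson;
# the categories are decided before the observation starts.
categories = ["teacher praises boy", "teacher praises girl",
              "disruptive behaviour", "teacher responds to disruption"]
tally = Counter({category: 0 for category in categories})

# During the lesson, each observed event is recorded with a single tally:
tally["teacher praises girl"] += 1
tally["disruptive behaviour"] += 1
tally["teacher responds to disruption"] += 1

# The resulting counts can then be compared across lessons, teachers or schools.
print(tally.most_common())
```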

And students will probably be used to OFSTED inspections, or to other staff in the school dropping in to observe lessons occasionally, so it should be relatively easy for a researcher to blend into the background and observe without being too obtrusive.

The fact that classrooms are usually organised in a standardised way (they tend to be similar sizes, with only a few possible variations on desk layouts) also means the researcher has a good basis for reliability – any differences they observe in teacher or student behaviour across classrooms or schools are probably down to the teachers or pupils themselves, not to differences in the environments they are in (at least to an extent!).

There are, however, some limitations with researching in classrooms.

Gaining access could be a problem – not all teachers are going to be willing to have a researcher observing them. They may regard the classroom as their own environment and feel they have little to gain from an outsider observing them – although if the researcher is a teacher themselves, they could perhaps offer some useful feedback on teaching strategies in return.

Teachers will probably act differently when observed – if you think back to OFSTED inspections, teachers usually ‘up their game’ and make sure to be more inclusive and encouraging, and this is likely to happen whenever anyone observes them.

Similarly, pupils may behave differently – they may be more reluctant to contribute because of a researcher being present, or disruptive students may act up even more.

Classrooms are highly controlled, unusual environments, with only two roles (teacher and student) and clear norms, and teachers and students alike may not be ‘themselves’ in such situations.

Finally, researchers wouldn’t be able to dig deeper and ask probing questions during a lesson, unless they took on the role of participant observer by becoming a learning assistant – but even then they would be limited in what they could ask if they didn’t want to disrupt the flow of the lesson.

It’s not all about direct non-participant observation

Researchers might choose a more participatory approach to researching in classrooms, by training to be a learning assistant or even a teacher, and doing much longer term, unstructured observational research with students.

This would enable them to really get to know the students within a lesson, and make it much easier to ask deeper questions outside of lessons.

The problem with this would be that they would then be part of the educational establishment, and students may not wish to open up to them for precisely that reason.

A further option would be to put up cameras and observe from a distance, but this might come up against some resistance from both teachers and students, and it would be more difficult to ask follow up questions if reviewing the recordings some time after the actual lesson took place.


Researching Pupils in Education

Educational researchers might reasonably expect to have to conduct research with pupils at some point in their careers, given that pupils are at the centre of the education system.

This post outlines some of the challenges researchers may face when researching pupils in the context of education. It has been primarily written for students of A-level sociology.

Why might you want to research pupils?

The whole education system is based around the pupils. Without them there are no teachers, no OFSTED, no exams, no system! So it makes sense to ask them their opinions on education from time to time!

Pupils tend to have lower status than staff in the system, so giving them a voice is ethically sound!

Schools and pupils want to portray themselves in a good light, so the portrayals they give of their institutions may not be accurate.

Failing students often don’t get a voice, and they probably don’t want to talk about education, so finding out what they think about it might be especially valuable.

The problems of researching pupils

If conducting research within schools, senior leaders and teachers may select which students researchers get to collect data from, possibly selecting some of the better behaved students to portray the school in a positive light.

Once they have gained access, pupils may be reluctant to open up to researchers because they are not used to interacting with any adults other than their parents and teachers.

Younger pupils will be less able to grasp the meaning of key concepts such as social class, or even ‘occupation’, so researchers will have to think carefully about how they might operationalise concepts so that students can understand them.

The reading ability of younger learners may mean that questionnaires will not be a suitable method of investigating their attitudes, so interviews or observations will probably be more appropriate methods, and these tend to be more time consuming.

Speech codes may be a barrier to a researcher gaining trust and understanding from certain groups of students.

Younger pupils may not be able to fully understand the purpose of the research, so it may not be possible to gain fully informed consent from them.

The attitudes pupils have towards the power structure of the school may influence the validity of the data the researcher gets. A pro-school student may be reluctant to criticise the school, whereas anti-school students may do so even if their criticisms are invalid. The latter is a criticism that has been made of David Gillborn’s research on teacher racism.

Given the general status of children as ‘vulnerable’ researchers need to take special care that students will not suffer any unnecessary harm (such as stress) during the research process.

Because of this vulnerable status, there are going to be gatekeepers to get through – probably both parents and teachers – before any research with pupils can take place.

Researchers will have to work within Child Protection legislation – they will need criminal record checks in advance, and ensure that no personal data collected is shared.

It is highly unlikely that researchers would be allowed to spend any time alone with students today, like Paul Willis was able to do for 18 months back in the 1970s! So Participant Observation as a method is probably out of the question today.


The Global Drug Survey – a good example of invalid data due to bias?

86% of the global population have used drugs in the last year, and more people have used cannabis than tobacco. Almost 30% of the world’s population have used Cocaine in the last year, at least according to the 2019 Global Drug Survey.


This survey asked adults in 36 countries about their use of drugs and alcohol.

According to the same survey, the British get drunk more often than people in any other nation.

In Britain, people stated they got drunk an average of 51 times last year, with the U.S., Canada and Australia not far behind. The average across all countries surveyed was 33 times.

Where Cocaine use was concerned, 73% of people in England said they had tried it, compared to 43% globally.

How valid is this data?

I don’t know about you, but to me these figures seem very high, and I’m left wondering if they aren’t skewed upwards by selective sampling or loose questions.

This report is produced by a private company which sells products related to addiction advice, and I guess its market is national healthcare services.

It seems to me like it’s in their interests to skew the data upwards to give themselves more of a purpose.

I certainly don’t believe that the average person in the UK gets drunk once a week, or that almost three-quarters of the population have tried Cocaine.
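As a rough illustration of how self-selection can inflate figures like these, here is a toy simulation in Python, with made-up response rates chosen purely for illustration: if drug users are far more likely than non-users to opt into a voluntary online drug survey, the sample prevalence ends up well above the true population rate.

```python
import random

random.seed(42)

TRUE_PREVALENCE = 0.10        # assumed 'real' share of drug users in the population
RESPONSE_IF_USER = 0.30       # assumed chance a drug user opts into the survey
RESPONSE_IF_NON_USER = 0.02   # assumed chance a non-user opts in

population = 1_000_000
respondents = []
for _ in range(population):
    is_user = random.random() < TRUE_PREVALENCE
    responds = random.random() < (RESPONSE_IF_USER if is_user else RESPONSE_IF_NON_USER)
    if responds:
        respondents.append(is_user)

sample_prevalence = sum(respondents) / len(respondents)
print(f"True prevalence:   {TRUE_PREVALENCE:.0%}")   # 10%
print(f"Sample prevalence: {sample_prevalence:.0%}") # roughly 60% with these made-up rates
```

None of these numbers are real – the point is simply that a self-selected sample can massively overstate how common a behaviour is in the wider population.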

Sources

The Week 25th May 2019

 

 

Exam advice from the AQA’s Examiner Reports from 2018

The AQA produces an examiner report after every exam, and it’s a very good idea to look at these reports to see the common mistakes students made last year, so you can avoid making the same mistakes this year!


Below I’ve selected FIVE choice pieces of advice based on the most common errors from the 2018 Education with Theory and Methods paper.

  1. For the short answer questions, make sure you get your identification (ID) and development the right way round – for example, last year’s 4 mark question was on ‘two reasons why marketisation policies may create social class differences in educational achievement’ – many students started with a policy rather than a reason, when they should have started with a reason and then illustrated it with a policy.
  2. The six marker was ‘outline three reasons for gender differences in educational achievement’ – the report says that many students did not get a second mark because they failed to be specific enough in their application to gender or educational achievement, so be specific!
  3. For question 5 – the methods in context question – the best answers used the hooks in the item, so use the item!
  4. At the other end of the paper – the final 10 mark theory and methods question – a lot of students seemed to run out of time, so make sure you get your timing right. Remember that it’s almost certainly going to be easier to get 4/10 on a 10 mark question than to go from 12/20 to 16/20 on a methods in context question – the bar’s lower, after all!
  5. Focussing on the final 10 marker – if you get another ‘criticise a theory’ type question, remember that the best answers simply used other perspectives to develop their criticisms.

It seems that the 10 marker with item and 30 mark essay question were OK!

Sources 

All information taken from the AQA’s 7192/1 examiner report.

You can read the full report here.

You can view the 2018 paper here.

Using contemporary examples to evaluate for theory and methods

A-level sociology students should be looking to use contemporary examples and case studies to illustrate points and evaluate theories wherever possible. In the exams, the use of contemporary evidence is something examiners look for and reward.

Below are a few examples of recent events in the news which are relevant to the theory and methods aspects of sociology.

All of the above took place in either 2019 or 2018! 

Using interviews to research education

Interviews are one of the most commonly used qualitative research methods in the sociology of education. In this post I consider some of the strengths and limitations of using interviews to research education, focussing mainly on unstructured interviews.

This post is primarily designed to get students thinking about methods in context, or ‘applied research methods’. Before reading through this students might like to brush up on methods in context by reading this introductory post. Links to other methods in context advice posts can be found at the bottom of the research methods page (link above!)

Practical issues with interviews  

Gaining access may be a problem, as schools are hierarchical institutions and the lower down the hierarchy an individual is, the more permissions the interviewer will require to gain access to interview them. For example, you might require the headmaster’s permission to interview a teacher, while to interview pupils you’ll require both the headmaster’s and their parents’ permission.

However, if you can gain consent, and get the headmaster onside, the hierarchy may make doing interviews more efficient – the headmaster can instruct teachers to release pupils from lessons to do the interviews, for example.

Interviews tend to take more time than questionnaires, so finding the time to do them may be a problem – teachers are unlikely to want to give up lesson time for interviews, and pupils are unlikely to want to spend their free time at breaks or after school taking part in them. Teachers also tend to be quite busy, so they may be reluctant to give up time in their day to be interviewed themselves.

However, if the topic is especially relevant or interesting, this will be less of a problem, and the interviewer could use incentives (rewards) to encourage respondents to take part. Group interviews would also be more time efficient.

Younger respondents tend to have more difficulty in keeping to the point, and they often pick up on unexpected details in questions, which can make interviews take longer.

Younger respondents may have a shorter attention span than adults, which means that interviews need to be kept short.

Validity issues

Students may see the interviewer as the ‘teacher in disguise’ – they may see them as part of the hierarchical structure of the institution, which could distort their responses. This could make pupils give socially desirable responses. With questions about homework, for example, students may tell the interviewer they are doing the number of hours that the school tells them they should be doing, rather than the actual number of hours they spend doing homework.

To overcome this the researcher might consider conducting interviews away from school premises and ensuring that confidentiality is guaranteed.

Young people’s intellectual and linguistic skills are less developed than adults’, and the interviewer needs to keep in mind that:

  • They may not understand longer words or more complex sentences.
  • They may lack the language to be able to express themselves clearly
  • They may have a shorter attention span than adults
  • They may read body language differently to adults.

Having said all of that, younger people with weaker reading and writing skills are probably going to be more comfortable speaking than writing, which means interviews are nearly always going to be a better choice than questionnaires where younger pupils are concerned.

To ensure greater validity in interviews, researchers should try to do the following:

  • Avoid using leading questions as young people are more suggestible than adults.
  • Use open ended questions
  • Not interrupt students’ responses
  • Learn to tolerate pauses while students think.
  • Avoid repeating questions, which can make students change their first answer because they think it was wrong.

Unstructured interviews may thus be more suitable than structured interviews, because they make it easier for the researcher to rephrase questions if necessary.

The location may affect the validity of responses – if a student associates school with authority, and the interview takes place in a school, then they are probably more likely to give socially desirable answers.

If the researcher is conducting interviews over several days, later respondents may get wind of the topics and questions, which may influence the responses they give.

Ethical issues

Schools and parents may object to students being interviewed about sensitive topics such as drugs or sexuality, so they may not give consent.

To overcome this the researcher might consider doing interviews with the school alongside their PSHE programme.

Interviews may be unsettling for some students – they are, after all, artificial situations. This could be especially true of group interviews, depending on who is making up the groups.

Group interviews

Peer group interviews may well be a good choice for researchers studying topics within the sociology of education.

Advantages 

  • Group interviews can create a safe environment for pupils
  • Peer-group discussion should be something pupils are familiar with from lessons
  • Peer-support can reduce the power imbalance between interviewer and students
  • The free-flowing nature of the group interview could allow for more information to come forth.
  • The group interview also allows the researcher to observe group dynamics.
  • They are more time efficient than one-on-one interviews.

Disadvantages

  • Peer pressure may mean students are reluctant to be honest for fear of ridicule
  • Students may also encourage each other to exaggerate or lie for laughs.
  • Group interviews are unpredictable and very difficult to standardise and repeat, which means they are low in reliability.

Blue Monday

‘Blue Monday’ is apparently the most ‘depressing’ day of the year…


Except it’s not.

It’s actually the day of the year on which people are most likely to book a holiday, based on the following formula:

[Image: the ‘Blue Monday’ formula]
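For reference, the formula as it is usually reported (I’m reproducing the widely circulated version here, so treat the exact form with a little caution) looks something like:

$$\frac{[\,W + (D - d)\,] \times T^{Q}}{M \times N_a}$$

where $W$ is weather, $D$ debt, $d$ monthly salary, $T$ time since Christmas, $Q$ time since failing our New Year’s resolutions, $M$ low motivational levels and $N_a$ the feeling of a need to take action.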

A psychologist called Cliff Arnall came up with the formula in 2005. He developed it on behalf of Sky Travel (a now defunct media channel), which wanted to know what motivated people to book a summer holiday, and the bits of the formula are (I think) supposed to represent those variables.

Industry stats suggest that late January is the period when people are most likely to book a holiday (how far this has been reinforced by the Blue Monday marketing phenomenon is hard to say), so Arnall’s variables may be valid. I’ve no idea how he came up with either them or the formula, but the idea behind it doesn’t appear to have been to calculate the most ‘depressing’ day of the year.

However, somehow the media have come up with the concept of ‘Blue Monday’, and now the third Monday in January is, in the public’s imagination, the day of the year which is the most ‘depressing’.

Intuitively this makes sense: debt, darkness, post-Christmas, all things we might think make it more likely that we will be miserable.

However, there is no actual evidence to back up the claim that Blue Monday is the most ‘depressing’ day of the year.

There are actually two ‘scientific’ sources we can use to see how happy people are: the Office for National Statistics Wellbeing Survey and the Global Happiness Survey. Both are worth checking out, but the problem is that neither of them (as far as I’m aware) collects happiness data on a daily basis. They simply don’t drill down to that level of granularity.

People have used social media sentiment analysis to look at how mood varies day to day, but this doesn’t back up the concept of Blue Monday… if anything early spring seems to be the period when people are the least happy.

[Image: Twitter mood analysis]

There’s a further problem: the official view of the mental health charity MIND is that Blue Monday trivialises depression, which tends to be a long-term mental health condition that doesn’t simply worsen as we move from Christmas into January and then gradually lift as we get closer to spring.

In the end it must be remembered that ‘Blue Monday’ is actually a marketing tool, designed to make us buy crap we don’t need in order to ‘lift our moods’, which aren’t necessarily lower in January at all!

And as a result, we get a raft of newspaper articles telling us how to ‘beat Blue Monday’, some of which suggest we should ‘book a holiday’ – which is where the whole concept started, after all!


Relevance to A level sociology 

Firstly, the concept of Blue Monday illustrates the need to think critically – this is a great example of a concept which is based on completely invalid measurements. It simply has no validity, so the only question you can ask is ‘why does it exist?’, rather than ‘why are people more miserable in late January?’ (they are not, according to the evidence!).

This is possible support for the Marxist theory of society – of ideological control through the media: Blue Monday appears to be a media fabrication designed to get us to buy more stuff.

Selected sources 

Ben Goldacre – on why Blue Monday is bad science

 

Gender and Education: Good Resources

Useful links to quantitative and qualitative research studies, statistics, researchers, and newspaper articles relevant to gender and education. These links should be of interest to students studying A-level and degree-level sociology, as well as anyone with a general interest in the relationship between gender, gender identity, differential educational achievement and differences in subject choice.

Just a few links to kick-start things for now, to be updated gradually over time…

General ‘main’ statistical sites and sources

The latest GCSE results analysed by gender, from the TES

A Level Results from the Joint Council for Qualifications – broken down by gender and region

Stats on A level STEM subjects – stats on the gender balance are at the end (70% of psychology students are female compared to only 10% of computer science students)

General ‘Hub’ Qualitative resources 

The Gender and Education Association – works to eradicate sexism and gender inequality within education. Promotes a feminist pedagogy (theory of learning).

A link to Professor Becky Francis’ research, which focuses mainly on gender differences in educational achievement – at the time of writing (November 2017) her main focus seems to be on girls’ lack of access to science, and on banding and streaming (the latter not necessarily gender-focused).

Specific resources for exploring gender and differential educational achievement

Education as a strategy for international development – despite the fact that girls are outperforming boys in the United Kingdom and most other developed countries, globally girls are underachieving compared to boys in most countries. This link takes you to a general post on education and social development, many of the links explore gender inequality in education.

Specific resources for exploring gender and subject choice 

Dolls are for Girls, Lego is for Boys – a Guardian article which summarises a study by Becky Francis on gender, toys and learning. Francis asked the parents of more than 60 three- to five-year-olds what they perceived to be their child’s favourite toy, and found that while parental choices for boys were characterised by toys that involved action, construction and machinery, there was a tendency to steer girls towards dolls and perceived “feminine” interests, such as hairdressing.

Girls are Logging Off – A BBC article which briefly alerts our attention to the small number of girls opting to do computer science.

 

 

Nudge Politics: a sociological analysis

‘Nudge politics’ involves governments implementing small social policy measures to help people make the ‘right’ decisions. This post considers some of the pros and cons of this type of social policy agenda.

It’s been 10 years since economist Richard Thaler and law professor Cass Sunstein published ‘Nudge: Improving Decisions About Health, Wealth and Happiness’.

[Image: Nudge book cover]

The idea behind ‘Nudge’ was that by exploiting traits of ‘human nature’, such as our tendencies to put off making decisions or to give in to peer pressure, it is possible to ‘nudge’ people into making certain decisions.

10 years on, it seems that governments all over the world have applied ‘nudge theory’ to achieve their desired outcomes. They have managed to implement some relatively ‘small-scale’ social policies and make huge savings at little cost to the public purse.

In the U.K., for example, David Cameron set up the Behavioural Insights Team (or ‘Nudge Unit’), which seems to have had some remarkable successes. For example:

  • Reminder letters telling people that most of their neighbours have already paid their taxes have boosted tax receipts. This was designed to appeal to the ‘herd instinct’.
  • The unit boosted tax returns from the top 1% (those owing more than £30K) from 39% to 47%. To do so they changed a punitive letter to one reminding recipients of the good that paying taxes can do.
  • Sending encouraging text messages to pupils resitting GCSEs has boosted exam results. This appeals to the well-recognised fact that people respond better to praise.
  • Sending text messages to jobseekers reminding them of job interviews, signed off with ‘good luck’, has reduced the number of missed interviews.

As with so many public-policy initiatives these days, the Behavioural Insights Team is set up as a private venture, and it now makes its money selling its ‘nudge policy’ ideas to government departments around the world.

The Limitations of Nudge Politics 

Methodologically speaking, there are at least three fairly standard problems:

Firstly, the UK’s nudge unit hasn’t been in place long enough to establish whether these are long-term, ‘embedded’ successes.

Secondly, we don’t really know why ‘nudge actions’ work. The data suggests a correlation between small changes (in how letters are worded and so on) and changes in behaviour, but we don’t really know the ‘why’ of what’s going on.

Thirdly, I’m fairly sure there aren’t that many controlled trials out there which have been done to really verify the success of some of these policies.
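To make that last point concrete, here is a minimal sketch in Python of how a nudge letter would ideally be evaluated – randomly assign people to a standard letter or a ‘nudge’ letter and test whether the difference in payment rates is bigger than chance alone would produce, using a standard two-proportion z-test. The trial and its numbers are hypothetical; they simply echo the 39% and 47% figures quoted above.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical randomised trial: 1,000 people assigned to each letter at random.
control_n, control_paid = 1_000, 390      # 39% paid after the standard letter
treatment_n, treatment_paid = 1_000, 470  # 47% paid after the 'nudge' letter

p1 = control_paid / control_n
p2 = treatment_paid / treatment_n
pooled = (control_paid + treatment_paid) / (control_n + treatment_n)

# Two-proportion z-test for the difference in payment rates.
se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / treatment_n))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Difference: {p2 - p1:.1%}, z = {z:.1f}, p-value = {p_value:.2g}")
```

The point is not that these numbers are real, but that without this kind of randomised comparison we cannot be sure it was the letter, rather than something else, that caused the change.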

Theoretically there are also quite a few problems:

The book and the ‘team’ above both talk in terms of ‘nudging’ people into making the ‘right decisions’… but who decides what is right? This theory ignores questions of power.

It could also be used towards very negative ends… in fact I think we’ve already seen that with the Brexit and Trump votes. I’m sure those campaigns used nudge theory to manipulate people’s voting decisions, and it doesn’t take a massive swing to alter political outcomes today, after all!

Finally, I cannot see how you are going to be able to ‘nudge’ people into making drastic changes to save the planet for example: I can’t imagine the government changing the message on its next round of car tax renewal letters to include messages such as: ‘have you ever thought about giving up the car and just walking everywhere instead? If you did so, the planet might stand a chance of surviving!’.

Final thoughts: the age of the ‘nudge’?

I think this book and this type of ‘steering politics’ are very reflective of the age we live in. (The whole theory is a kind of micro-version of Anthony Giddens’ ‘steering the juggernaut’ theory.) This is a policy-set very much favoured by career politicians and bureaucrats who would rather focus on ‘pragmatic politics’. It’s a bit like what realism is to Marxism in criminology: not interested in the ‘big questions’.

I just cannot see how this kind of politics is going to help us make the kind of drastic social changes that are probably going to be required to tackle the biggest problems of our times: global warming, militarism, inequality and refugees, for example.

Relevance to A-level sociology

The most obvious relevance is to the social policy aspect of the theory and methods specification.

Find out more…

Image sources 

Nudge book cover
