Bivariate Analysis for Quantitative Social Research

Bivariate analysis methods include contingency tables with chi-square, Pearson’s R, and Spearman’s rho.

Bivariate analysis involves analysing two variables at a time in order to uncover whether the two variables are related.

Exploring relationships between variables means searching for evidence that the variation in one variable coincides with variation in another variable.

There are a variety of techniques you can use to conduct bivariate analysis but their use depends on the nature of the two variables being analysed.

Type of variable | Nominal | Ordinal | Interval/Ratio | Dichotomous
Nominal | Contingency table + chi-square + Cramer’s V | Contingency table + chi-square + Cramer’s V | Contingency table + chi-square + Cramer’s V; compare means and eta | Contingency table + chi-square + Cramer’s V
Ordinal | Contingency table + chi-square + Cramer’s V | Spearman’s rho | Spearman’s rho | Spearman’s rho
Interval/ratio | Contingency table + chi-square + Cramer’s V; compare means and eta | Spearman’s rho | Pearson’s R | Spearman’s rho
Dichotomous | Contingency table + chi-square + Cramer’s V | Spearman’s rho | Spearman’s rho | Phi
Bivariate analysis for different types of variable

Bivariate Analysis: Relationships, not causality

If there is a relationship between two variables, this does not necessarily mean one causes the other.

Even if there is a causal relationship, we need to take care to make sure the direction of causality is correct. Researchers must be careful not to let their assumptions influence the direction of causality.

For example, Sutton and Rafaeli (1988) conducted bivariate analysis on the relationship between the display of positive emotions by retail staff and levels of retail sales.

Common sense might tell you that positive staff sell more. However, Sutton and Rafaeli found that the relationship ran the other way around: higher levels of sales resulted in more positive emotions among staff. This was unexpected, but it also makes sense.

Sometimes you can infer the direction of causality with 100% certainty, for example with the relationship between age and voting patterns. Younger people are less likely to vote, so age must be the independent variable: there is no way voting patterns can influence age.

Contingency Tables

A contingency table is like a frequency table but it allows two variables to be analysed simultaneously so that relationships between them can be examined.

They usually contain percentages since these make the relationships easier to see.

Subject | Male: Number | Male: Percent | Female: Number | Female: Percent
Sociology | 60 | 30 | 120 | 40
Maths | 20 | 10 | 60 | 20
English | 20 | 10 | 60 | 20
Dance | 100 | 50 | 60 | 20
Total | 200 | 100 | 300 | 100
Students studying subjects in one college, by gender.

The table above contains the raw numbers for each cell, with each number’s percentage of its column total next to it.

The percentages are column percentages: each cell is calculated as a percentage of the total number in that column, which is why the percent columns add up to 100!

In the above table we can see that there are more female students than male students and females dominate in every subject other than dance, because dance is much more popular among male students. (It’s quite an unusual college!)

Contingency tables can be applied to all types of variable, but they are not always an efficient method.
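
To make this concrete, here is a minimal sketch in Python (assuming the pandas and scipy libraries are available) of how the column percentages and a chi-square test for a table like the one above might be computed, using the hypothetical college figures.

```python
# A minimal sketch: contingency table, column percentages and chi-square.
import pandas as pd
from scipy.stats import chi2_contingency

counts = pd.DataFrame(
    {"Male": [60, 20, 20, 100], "Female": [120, 60, 60, 60]},
    index=["Sociology", "Maths", "English", "Dance"],
)

# Column percentages: each cell as a share of its column total.
column_percentages = counts.div(counts.sum(axis=0), axis=1) * 100
print(column_percentages)  # each percent column sums to 100

# Chi-square test of association between subject and gender.
chi2, p_value, dof, expected = chi2_contingency(counts)
print(chi2, p_value)
```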

Pearson’s R

Pearson’s R is a method for examining relationships between interval/ratio variables. The main features of this method of analysis are:

  • The coefficient will lie between 0 and 1 (ignoring its sign), which indicates the strength of a relationship: 0 means no relationship and 1 means a perfect relationship.
  • The closer the coefficient is to 1, the stronger the relationship; the closer to 0, the weaker the relationship.
  • The coefficient will be either positive or negative, which indicates the direction of the relationship.

Examples of Pearson’s R correlations

The table below shows the relationship between age and four other variables. (Note: this data is hypothetical and for illustrative purposes only!)

Age group | Happiness score | Wealth (£) | Hours watching TV per week | Average no. of friends
20 | 10 | 10,000 | 15 | 5
30 | 8 | 20,000 | 10 | 8
40 | 6 | 30,000 | 33 | 11
50 | 4 | 40,000 | 22 | 10
60-69 | 2 | 50,000 | 9 | 16
Pearson’s R | -1 | +1 | 0 | 0.93

The correlations are as follows:

  • between age and happiness: perfect negative correlation.
  • between age and wealth: perfect positive correlation.
  • between age and watching TV: no correlation.
  • between age and number of friends: strong positive correlation.

The scatter plots for the above data (age and happiness, age and wealth, age and TV, age and friends) illustrate each of these four patterns.
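
For anyone who wants to check these figures, here is a minimal sketch of computing the coefficients with scipy’s pearsonr function. Representing the 60-69 group by the single value 60 is my own assumption.

```python
# A minimal sketch of computing Pearson's R for the hypothetical data above.
from scipy.stats import pearsonr

age = [20, 30, 40, 50, 60]  # the 60-69 group is represented by 60 (assumption)
happiness = [10, 8, 6, 4, 2]
wealth = [10000, 20000, 30000, 40000, 50000]
tv_hours = [15, 10, 33, 22, 9]

print(round(pearsonr(age, happiness)[0], 2))  # -1.0: perfect negative
print(round(pearsonr(age, wealth)[0], 2))     # 1.0: perfect positive
print(round(pearsonr(age, tv_hours)[0], 2))   # 0.0: no correlation
```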

Spearman’s Rho

Spearman’s rho is often represented by the Greek letter ρ (rho) and is designed for use with ordinal variables. It can also be used when one variable is ordinal and the other is interval/ratio.

It is interpreted in exactly the same way as Pearson’s R: the computed value will be between 0 and 1 and either positive or negative.

Pearson’s R can only be used when both variables are interval/ratio. Spearman’s rho can be used when one of the variables is ordinal.
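
A minimal sketch of how Spearman’s rho might be computed in Python, using scipy’s spearmanr function. The ordinal data here (social class and highest qualification, coded as ranked categories) is entirely hypothetical.

```python
# A minimal sketch: Spearman's rho works on ranks, so it suits ordinal data.
from scipy.stats import spearmanr

social_class = [1, 1, 2, 2, 3, 3]   # 1 = working, 2 = middle, 3 = upper
qualification = [1, 2, 2, 3, 2, 3]  # 1 = GCSE, 2 = A-level, 3 = degree

rho, p_value = spearmanr(social_class, qualification)
print(round(rho, 2))  # positive rho: higher class, higher qualifications
```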

Phi and Cramer’s V

The Phi coefficient is used for the analysis of the relationship between two dichotomous variables. Like Pearson’s R it results in a computed statistic which is either positive or negative and varies between 0 and 1.

Cramer’s V can be used with nominal variables. It can only show the strength of relation between two variables, not the direction.

Cramer’s V is usually reported along with a contingency table and chi-square test.
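
A minimal sketch of how Cramer’s V might be computed from the chi-square statistic of a contingency table, reusing the hypothetical college data from earlier. The formula divides chi-square by the sample size multiplied by one less than the smaller table dimension, then takes the square root.

```python
# A minimal sketch of Cramer's V for a contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[60, 120], [20, 60], [20, 60], [100, 60]])
chi2, p_value, dof, expected = chi2_contingency(table)

n = table.sum()
min_dim = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * min_dim))
print(round(cramers_v, 2))  # 0 = no association, 1 = perfect association
```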

Comparing means and eta

If you need to examine the relationship between an interval/ratio variable and a nominal variable, and the latter can be relatively unambiguously identified as the independent variable, then it might be useful to compare the means of the interval/ratio variable for each subgroup of the nominal variable.

This procedure is often accompanied by a test of association between variables called eta. The statistic, which expresses the level of association between the two variables, will always be positive: it indicates strength but not direction.

Eta-squared expresses the amount of variation in the interval/ ratio variable that is due to the nominal variable.
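
A minimal sketch of eta-squared: the proportion of variation in an interval/ratio variable (here income) accounted for by a nominal variable (here region). The groups and figures are entirely hypothetical.

```python
# A minimal sketch of eta-squared via between-group and total sums of squares.
import numpy as np

groups = {
    "North": np.array([18000, 20000, 22000]),  # incomes, hypothetical
    "South": np.array([28000, 30000, 32000]),
}

all_values = np.concatenate(list(groups.values()))
grand_mean = all_values.mean()

ss_total = ((all_values - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())

eta_squared = ss_between / ss_total
print(round(eta_squared, 2))  # share of income variation due to region
```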

Signposting and sources

This material should be of interest to anyone studying quantitative social research methods.

To return to the homepage – revisesociology.com

Bryman, A. (2016) Social Research Methods. Oxford: Oxford University Press.

Structured Interviews in Social Research

Structured interviews are a standardised way of collecting data typically using closed, pre-coded surveys.

A structured interview is where interviewers ask pre-written questions to respondents in a standardised way, following an interview schedule. As far as possible the interviewer asks the same questions, in the same order and in the same way, to all respondents.

(An exception to this is filter questions, in which case the interviewer may skip sub-questions if a negative response is provided.)

Answers to structured interviews are usually closed, or pre-coded, and the interviewer ticks the appropriate box according to the respondent’s answers. However some structured interviews may be open-ended, in which case the interviewer writes in the answers for the respondent.

Social surveys are the main context in which researchers will conduct structured interviews.

This post covers:

  • the advantages of structured interviews
  • the different contexts in which they take place (phone and computer assisted).
  • the stages of conducting them: from knowing the schedule to leaving!
  • their limitations.

Advantages of Structured Interviews

The main advantage of structured interviews is that they promote standardisation in both the processes of asking questions and recording answers.

This reduces bias and error in the asking of questions and makes it easier to process respondents’ answers.

The two main advantages of structured interviews are thus:

  • Reducing error due to interviewer variability.
  • Increasing the accuracy and ease of data processing.

Reducing error due to interviewer variability

Structured interviews help to reduce the amount of error in data collection because they are standardised.

Variability and thus error can occur in two ways:

  • Intra-interviewer variability: occurs when an interviewer is not consistent with the way they ask the questions or record the answers.
  • Inter-interviewer variability: when there are two or more interviewers who are not consistent with each other in the way they ask questions or record answers.

These two sources of variability can occur together and compound the problem of reduced validity.

The common sources of error in survey research include:

  1. A poorly worded question.
  2. The way the question is asked by the interviewer.
  3. Misunderstanding on the part of the respondent being interviewed.
  4. Memory problems on the part of the respondent.
  5. The way the information is recorded by the interviewer.
  6. The way the information is processed: coding of answers or data entry.

Because the asking of questions and recording of answers are standardised, any variation in answers should be due to true or real variation in the respondents’ answers, rather than variation arising because of differences in the interview context.

Accuracy and Ease of Data Processing

Structured interviews consist of mainly closed, pre-coded questions or fixed choice questions.

With closed-questions the respondent is given a limited choice of possible answers and is asked to select which response or responses apply to them.

The interviewer then simply ticks the appropriate box.

This box-ticking procedure limits the scope for interviewer bias to introduce error. There is no scope for the interviewer to omit or modify anything the respondent says because they are not writing down the answer.

Another advantage with pre-coded data gained from the structured interview is that it allows for ‘automatic’ data processing.

If answers had been written down or transcribed from a recording, a researcher would have to examine this qualitative data, sort and assign the various answers to categories.

For example, if a survey had produced qualitative data on what respondents thought about Brexit, the researcher might categorise the range of answers into ‘for Brexit’, ‘neutral’, and ‘against Brexit’.

This process of reducing more complex and varied data into fewer and simpler ‘higher level’ categories is known as coding data, or establishing a coding frame, and is necessary for quantitative analysis to take place.

Coding (whether done before or after a structured interview takes place) introduces another source of potential error: answers may be categorised incorrectly, or the researchers may categorise answers differently to how the respondents themselves would have categorised them.

There are two sources of error in recording data:

  • Intra-rater-variability: where the person applying the coding is inconsistent in the way they apply the rules of assigning answers to categories.
  • Inter-rater-variability: where two different raters apply the rules of assigning answers to categories differently.

If either or both of the above occur then variability in responses will be due to error rather than true variability in the responses.

The closed question survey/ interview avoids the above problem because respondents assign themselves to categories, simply by picking an option and the interviewer ticking a box.

There is very little opportunity with pre-coded interviews for interviewers or analysts to misinterpret respondents’ answers or assign them to the wrong categories.

Structured Interview Contexts

Structured interviews tend to be done when there is only one respondent. Group interviews are usually more qualitative because the dynamics of having two or more respondents present mean answers tend to be more complex, and so tick-box answers are not usually sufficient to get valid data.

Besides the face to face interview, there are two particular contexts which are common with structured interviewing: telephone interviewing and computer assisted interviewing. (These are not mutually exclusive).

Telephone interviewing

Telephone interviews are very common with market research companies, and opinion polling companies such as YouGov. They are used less often by academic researchers but an exception to this was during the Covid-19 Pandemic when many studies which would usually rely on in-person interviews had to be carried out over the phone.

The advantages of telephone interviews

Compared to face to face interviews, the advantages of telephone interviews are:

  • Telephone interviews are cheaper and quicker to administer because there is no travel time or costs involved in accessing the respondents. The more dispersed the research sample is geographically the larger the advantage.
  • Telephone interviews are easier to supervise than face to face interviews. You can have one supervisor in a room with several phone interviewers. Interviewers can be recorded and monitored, although care has to be taken with GDPR.
  • Telephone interviews reduce bias due to the personal characteristics of the interviewers. It is much more difficult to tell what the class background or ethnicity of the interviewer is over the phone, for example.

The limitations of phone interviews

  • People without phones cannot be part of the sample.
  • Call screening with mobile phones has greatly reduced the response rate of phone surveys.
  • Respondents with hearing impediments will find phone interviews more difficult.
  • The length of a phone interview generally can’t be sustained over 20-25 minutes.
  • There is a general belief that telephone interviews achieve lower response rates than face to face interviews.
  • There is some evidence that phone interviews are less useful when dealing with sensitive topics but the data is not clear cut.
  • There may be validity problems because telephone interviews do not allow for observation. For example an interviewer cannot observe if a respondent is confused by a question.
  • In cases where researchers need specific types of people, telephone interviews do not allow us to check if the correct types of people are actually those being interviewed.

Computer assisted Interviewing 

With computer assisted interviewing, questions are pre-written and appear on the computer screen. Interviewers follow the instructions, read out the questions in order, and key in the respondents’ answers, either as open or closed responses.

There are two main types of Computer Assisted Interviewing:

  • CAPI – Computer Assisted Personal Interviewing. 
  • CATI – Computer Assisted Telephone Interviewing.

Most telephone interviews today are Computer Assisted. There are several survey software packages that allow for the construction of effective surveys with analytics tools for data analysis. 

They are less popular for personal interviews but have been growing in popularity. 

CATI and CAPI are more common among commercial survey organisations such as IPSOS but are used less in academic research conducted by universities. 

The advantages of computer assisted interviewing

CAPI is very useful for filter questions as the software can skip to the next question if the previous one isn’t relevant. This reduces the likelihood of the interviewer asking irrelevant questions or missing out questions.

They are also useful for prompt-questions as flash cards can be generated on the screen and shown to the respondents as required. This should mean respondents are more likely to see the flash-cards in the same way as there is no possibility for the researcher to arrange them in a different order for different respondents, as might be the case with physical flashcards. 

Another advantage of computer assisted interviewing is automatic storage on the computer or cloud upload which means there is no need to scan paper interview sheets or enter the data manually at a later date. 

Thus Computer Assisted Interviews should increase the level of standardisation and reduce the amount of variability error introduced by the interviewer. 

The disadvantages of Computer Assisted Interviewing:

  • They may create a sense of distance and disconnect between the interviewer and respondents. 
  • Miskeying may result in the interviewer entering incorrect data, and they are less likely to realise this than with paper interviews. 
  • Interviewers need to be comfortable with the technology.

Conducting Structured Interviews 

The procedures involved with conducting an effective structured interview include:

  • Knowing the interview schedule
  • Gaining access 
  • Introducing the research 
  • Establishing rapport 
  • Asking questions and recording answers 
  • Leaving the interview.

The processes above relate specifically to structured interviews, but will also apply to semi-structured interviews.

The interview schedule 

An interview schedule is the list of questions in order, with relevant instructions about how the questions are to be asked. Before conducting an interview, the interviewer should know the interview schedule inside out. 

Interviews can be stressful and pressure can cause interviewers to not follow standardised procedures. For example, interviewers may ask questions in the wrong order or miss questions out. 

When several interviewers are involved in the research process it is especially important that all of them know the interview schedule to ensure questions are asked in a standardised way. 

Gaining access

Interviewers are the interface between the research and the respondents and are thus a crucial link in ensuring a good response rate. In order to gain access interviewers need to:

  • Be prepared to keep calling back with telephone interviews. Keep in mind the most likely times to get a response. 
  • Be self-assured and confident. 
  • Reassure people that you are not a salesperson, but doing research for a deeper purpose. 
  • Dress appropriately. 
  • Be prepared to be flexible with time: finding a time that fits the respondent if first contact isn’t convenient. 

Introducing the research 

Respondents need to be provided with a rationale explaining the purposes of the research and why they are giving up their time to take part. 

The introductory rationale may be written down or spoken. A written rationale may be sent out to prospective respondents in advance of the research taking place, as is the case with those selected to take part in the British Social Attitudes survey. A verbal rationale is employed with street-based market research, cold-calling telephone surveys and may also be reiterated during house to house surveys. 

An effective introductory statement can be crucial in getting respondents to take part. 

What should an introductory statement for social research include?

  • Make clear the identity of the interviewer.
  • Identify the agency which is conducting the research: for example a university or business. 
  • Include details of how the research is being funded. 
  • Indicate the purpose of the research in broad terms: what are the overall aims?
  • Give an indication of the kind of data that will be collected. 
  • Make it clear that participation is voluntary. 
  • Make it clear that data will be anonymised and that the respondent will not be identified in any way, as data will be analysed at an aggregate level.
  • Provide reassurance about the confidentiality of information. 
  • Provide a respondent with the opportunity to ask questions. 

Establishing rapport with structured interviews

Rapport is what makes the respondent feel as if they want to cooperate with the researcher and take part in the research. Without rapport being established respondents may either not agree to take part or terminate the interview half way through! 

Rapport can be established through visual cues of friendliness such as positive body language, listening and good eye contact. 

However with structured interviews, establishing rapport is a delicate balancing act as it is crucial for interviewers to be as objective as possible and not get too close to the respondents.

Rapport can be achieved by being friendly with the interviewee, although interviewers shouldn’t take this too far. Too much friendliness can result in the interview taking too long and the interviewee getting bored. 

Too much rapport can also result in the respondent providing socially desirable answers. 

Asking Questions and Recording Answers 

With structured interviews it is important that researchers strive to ask the same questions in the same way to all respondents. They should ask questions as written in order to minimise error. 

Experiments in question-wording suggest that even minor variations in wording can influence replies. 

Interviewers may be tempted to deviate from the schedule because they feel awkward asking some questions to particular people, but training can help with this and make it more likely that standardisation is kept in place. 

Where recording answers is concerned, bias is far less likely with pre-coded answers. 

Providing clear instructions

Interviewers need to follow clear instructions throughout the interview. This is especially important if an interview schedule includes filter questions.

Filter questions require the interviewer to ask questions of some respondents but not others. Filter questions are usually indented on an interview schedule.

For example: 

  1. Did you vote in the last general election…?  YES / NO 

1a (to be asked if respondent answered yes to Q1)

Which of the following political parties did you vote for? Conservatives/ Labour/ Lib Dems/ The Green Party/ Other. 

The risk of not following instructions is that the respondent may be asked questions that are irrelevant to them, which may be irritating. 
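
This is essentially the skip logic that computer assisted interviewing automates. A minimal sketch of the branching, purely as an illustration rather than how any real survey package implements it:

```python
# A minimal sketch of filter-question skip logic (illustration only).
def ask(question):
    return input(question + " ").strip().lower()

answers = {}
answers["voted"] = ask("1. Did you vote in the last general election? (yes/no)")

# Sub-question 1a is only asked if the filter question is passed.
if answers["voted"] == "yes":
    answers["party"] = ask("1a. Which political party did you vote for?")
```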

Question order

Researchers should stick to the question order on the survey. 

Leapfrogging may result in skipped questions never being asked, because the researcher could forget to go back to them.

Changing the question order may also lead to variability in replies because questions previously asked may affect how respondents answer questions later on in the survey. 

Three specific examples demonstrate why question order matters:

People are less likely to respond that taxes should be lowered if they are asked questions about government spending beforehand. 

In victim surveys if people are asked about their attitudes to crime first they are more likely to report that they have been a victim of crime in later questions. 

One question in the 1988 British Crime Survey asked the following:

‘Taking everything into account, would you say the police in this area do a good job or a poor job?’

For all respondents this question appeared early on, but due to an admin error the question appeared twice in some surveys, and for those who answered the question twice:

  • 66% gave the same response
  • 22% gave a more positive response
  • 12% gave a less positive response. 

The fact that only two thirds of respondents gave the same response twice clearly indicates that the effect of question order can be huge. 

One theory for the change is that the survey was about crime, and as respondents thought more in-depth about crime as the interview progressed, 22% felt more favourable towards the police and 12% less favourable; this would have varied with their own experiences.

Rules for ordering questions in social surveys

  • Early questions should be clearly related to the topic of the research about which the respondent has already been informed. This is so the respondent immediately feels like the questions are relevant. 
  • Questions about age/ ethnicity/ gender etc. should not be asked at the beginning of the interview.
  • Sensitive questions should be left for later.
  • With a longer questionnaire, questions should be grouped into sections to break up the interview. 
  • Within each subgroup general questions should precede specific ones. 
  • Questions about opinions and attitudes should precede questions about behaviour and knowledge. Questions about the latter are less likely to be influenced by question order.
  • If a respondent has already answered a later question in the course of answering a previous one, that later question should still be asked. 

Probing questions in structured interviews 

Probing may be required in structured interviews when:

  • Respondents do not understand the question, and either ask for more information or make it clear that they need more information to provide an answer.
  • The respondent does not provide a sufficient answer and needs to be probed for more information. 

The problem with the interviewer asking additional probing questions is that they introduce researcher-led variability into the interview context. 

Tactics for effective probing in structured interviews:
  • Employ standardised probes. These work well when open-ended answers are required. Examples of standardised probes include: ‘Could you say a little more about that?’ or ‘Are there any other reasons why you think that?’.
  • If a response does not allow for a pre-existing box to be ticked in a closed-ended survey, the interviewer could repeat the available options.
  • If the response requires a number rather than something like ‘often’, the researcher should just persist with asking the question. They shouldn’t try and second-guess a number!

Prompting 

Prompting occurs when the interviewer suggests a possible answer to a question to the respondent. This is effectively what happens with a closed question survey or interview: the options are the prompts. The important thing is that the prompts are the same for all the respondents and asked in the same way. 

During face to face interviews there may be times when it is better for researchers to use show cards (or flash cards) to display the answers rather than say them. 

Three contexts in which flashcards are better:

  • When there is a long list of possible answers. For example if asking respondents about which newspapers they read, it would be easier to show them a list rather than reading them out!
  • With Likert scales, ranked from 1-5 for example, it would be easier to have a showcard with 1-5 which the respondent can point to, rather than reading out ‘1, 2, 3, 4, 5’.
  • With some sensitive details such as income, respondents might feel more comfortable if they are shown income bands with letters attached, then they can say the letter. This allows the respondent to not state what their income is out loud. 

Leaving the Interview 

On leaving the interview thank the respondent for taking part. 

Researchers should not engage in further communication about the purpose of the research at this point, beyond the standard introductory statement. To do so means the respondent may divulge further information to other respondents yet to take part, possibly biasing their responses.

Problems with structured interviews 

Four problems with structured interviews include:

  • the characteristics of the interviewer interfering with the results.
  • Response sets resulting in reduced validity (acquiescence and social desirability).
  • The problem of lack of shared meaning.
  • The feminist critique of the unequal power relationship between interviewer and respondent.

Interviewer characteristics

The characteristics of the interviewer such as their gender or ethnicity may affect the responses a respondent gives. For example, a respondent may be less likely to open up on sensitive issues with someone who is a different gender to them.  

Response Sets 

This is where respondents reply to a series of questions in a consistent way but one that is irrelevant to the concept being measured. 

This is a particular problem when respondents are answering several Likert Scale questions in a row. 

Two of the most prominent types of response set are ‘acquiescence’ and ‘social desirability bias’.

Acquiescence 

Acquiescence refers to a tendency of some respondents to consistently agree or disagree with a set of questions. They may do this because it is quicker for them to get through the interview. This is known as satisficing. 

Satisficing is where respondents reduce the amount of effort required to answer a question. They settle for an answer that is satisfactory rather than making the effort to generate the most accurate answer. 

Examples of satisficing include:

  • Agreeing with yes statements or ‘yeasaying’.
  • Opting for middle point answers on scales.
  • Not considering the full range of answers in a range of closed questions, for example picking the first or last answers.

The opposite of satisficing is optimising. Optimising is where respondents expend effort to arrive at the best and most appropriate answer to a question. 

It is possible to weed out respondents who do this by ensuring there is a mix of positive and negative sentiment in a batch of Likert questions. 

For example you may have a batch of three questions designed to measure attitudes towards Rishi Sunak’s performance as Prime Minister.

If you have two scales where 5 is positive and one where 5 is negative, for example:

  • Rishi Sunak is an effective leader: 1 2 3 4 5
  • Rishi Sunak has managed the economy well: 1 2 3 4 5
  • Rishi Sunak is NOT to be trusted: 1 2 3 4 5

If someone is acquiescing without thinking about their answers, they are likely to circle all 5s, which wouldn’t make sense. Hence we could disregard this response and maybe even the entire survey from this individual. 
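
A minimal sketch of this screening logic in Python: reverse-code the negatively worded item so that 5 means ‘positive’ on every scale, then flag respondents whose recoded answers span the whole scale. The variable names are my own.

```python
# A minimal sketch of screening for acquiescence (variable names are my own).
responses = {
    "effective_leader": 5,      # 5 = strongly agree (positively worded item)
    "managed_economy_well": 5,  # 5 = strongly agree (positively worded item)
    "not_to_be_trusted": 5,     # 5 = strongly agree (negatively worded item)
}

# Reverse-code the negatively worded item so 5 is 'positive' on every scale.
recoded = dict(responses)
recoded["not_to_be_trusted"] = 6 - recoded["not_to_be_trusted"]

# After recoding, circling all 5s produces internally contradictory scores.
values = list(recoded.values())
suspected_acquiescence = max(values) - min(values) >= 4
print(suspected_acquiescence)  # True: consider disregarding this response
```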

Social desirability bias 

Socially desirable behaviours and attitudes tend to be over-reported. This can especially be the case for sensitive questions.

Strategies for reducing social desirability bias:
  • Use self-completion forms rather than interviewers. 
  • Soften the question, for example: ‘Even the calmest of car drivers sometimes lose their temper when driving; has this ever happened to you?’

The problem of meaning 

Structured surveys and interviews assume that respondents share the same meanings for terms as the interviewers. 

However, from an interpretivist perspective interviewer and respondent may not share the same meanings. Respondents may be ticking boxes but mean different things to what the interviewer thinks they mean. 

The issue of meaning is side-stepped in structured interviews. 

The feminist critique of structured interviews 

The structure of the interview epitomises the asymmetrical relationship between researcher and respondent. This is a critique made of all quantitative research. 

The researcher extracts information from the respondent and gives little or nothing in return. 

Interviewers are even advised not to get too familiar with respondents as giving away too much information may bias the results. 

Interviewers should refrain from expressing their opinions, presenting any personal information and engaging in off-topic chatter. All of this is very impersonal. 

This means that structured interviews are probably not appropriate for very sensitive topics that require a more personal touch. For example with domestic violence, unstructured interviews which aim to explore the nature of violence have revealed higher levels of violence than structured interviews such as the Crime Survey for England and Wales.

Sources and signposting

Structured interviews are relevant to the social research methods module within A-level sociology.

This post was adapted from Bryman, A. (2016) Social Research Methods.

Bias in Presenting Quantitative Data

Newspapers can ‘bias’ the presentation of quantitative data by stretching out the scale of the data they present, making differences between bars seem larger than they actually are (or vice versa!).

Quantitative research methods are usually regarded as being more objective than qualitative research methods as there is less room for the subjective biases and interpretations of researchers to influence the data collection process in quantitative research.

However, bias can still creep into quantitative research, and one way this can happen is through the decision about how to present the data, even in a basic visualisation.

Specifically, one can take the same data and stretch out the scale of a graph displaying that data and give the impression that the differences between the subjects under investigation are wider than in the original presentation.

Bias in scaling graphs

A recent example of what I’m going to call ‘bias in scaling graphs’ can be found in how an article by The Guardian displays recent data on how much GDP (Gross Domestic Product) grew in different European countries between 2019 and 2022.


The Guardian article (September 2022) in question is this one: UK is only G7 country with smaller economy than before Covid-19, which displays the following graphical data to show how the UK’s GDP is falling behind other G7 nations.

Source: The Guardian, 2022

Now you might think ‘this is quantitative data so it’s objective’ and on that basis no one can argue with what it’s telling us – the U.S. economy is doing VERY WELL compared to most European nations, growing more than TWICE as fast is the impression we get.

And after all, this is fair enough – a 2.6% growth rate is more than twice as fast as a 1% or less growth rate!

Same data different scale…

However you might think differently about the above when you see the same data (almost) displayed by the UK Government in this publication: GDP International Comparisons: Key Economic Indicators which features the graph below:

Source: Commons Library 2022

Note that the data is ALMOST the same – except for Britain’s figure, which is 0.6% positive rather than negative. The Guardian article was written after the UK Government report, on the basis of the UK economic growth forecast being downgraded, but everything else is the same.

My point here is that the data above is (almost) the same, and yet the graph has been ‘squashed’ compared to the graph showing the same data in The Guardian article. The relative proportions are the same – if you look above you can see that the US bar is still twice as long as the EU bars – but the difference APPEARS smaller because it’s not as stretched.

The Guardian achieves its stretched out scale by displaying the bars horizontally rather than vertically – that way there is more room to stretch them out and make the differences appear larger in a visual sense.

And with the UK now in an economic downturn it makes Britain seem further behind other countries than would have been the case with the more squished presentation in the Government’s version.
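
A minimal sketch of the effect using matplotlib: the same bars drawn on a wide (stretched) axis and on a narrow (squashed) axis. The growth figures are illustrative, not the exact ONS values.

```python
# A minimal sketch: identical data, two aspect ratios (illustrative figures).
import matplotlib.pyplot as plt

countries = ["US", "Canada", "Italy", "France", "Germany", "UK"]
growth = [2.6, 1.9, 1.1, 0.9, 0.2, -0.2]

fig, (stretched, squashed) = plt.subplots(
    1, 2, figsize=(10, 4), gridspec_kw={"width_ratios": [3, 1]}
)
stretched.barh(countries, growth)  # wide axis: differences look dramatic
stretched.set_title("Stretched scale")
squashed.barh(countries, growth)   # narrow axis: same data looks closer
squashed.set_title("Squashed scale")
plt.tight_layout()
plt.show()
```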

But aren’t they both biased…?

In a word, yes – someone has to decide the format in which to present the data, and that decision is going to skew what people see.

But the reason I’m calling out The Guardian on this is for two reasons:

  1. It’s unusual to display bars horizontally – the standard is vertical – but there’s no way you can stretch out the visualisation vertically without it looking very odd.
  2. The differences are quite small – we are talking 1-2 percentage points of change – so having a more squished scale to represent the small differences seems appropriate. The Guardian has chosen to exaggerate these from the original display, possibly to make them seem larger than they actually are.

Signposting and Related Posts

This material should be of interest to anyone studying Research Methods.

It’s also a useful example of left wing bias in the media; most sociologists focus on right wing bias!

Please click here to return to the homepage – ReviseSociology.com

Invalid Official Statistics on Volunteering?

I caught an episode of Woman’s Hour last week in which the presenter kept mentioning that according to a recent survey 62% of people in the UK had volunteered in the last year, and inviting people to discuss their experiences of voluntary work.

The survey in question (excuse the pun) was the Volunteering and Charitable Giving Community Life Survey 2020-2021.

The show was then peppered with references to people’s volunteering efforts, such as working with the homeless at Christmas, staffing food banks, helping out with the Covid-vaccination efforts and so on.

And such examples fit very well with my own imagination of what ‘voluntary work’ involves – to my mind a volunteer is someone who commits an hour or maybe more a week (I have a low bar in terms of time!) to do something such as the above, probably in conjunction with a formal charity or at least other people as part of a team.

But I just couldn’t believe that 62% of people did that kind of voluntary work last year.

And it turns out that they don’t.

The government survey (a form of official statistics) that yielded these results distinguishes between formal and informal volunteering.

The former, formal volunteering, is what I (and probably most people) think of as ‘real volunteering’ – it was these kinds of things the Woman’s Hour presenter was interested in hearing about and publicising.

However, only 17% of people did formal volunteering last year…

Just over 50% of people did ‘informal volunteering’ but this has a VERY LOW BAR for inclusion. Basically, if you babysat your friend’s kids for one day at some point last year, you get to tick the box saying that you did ‘informal volunteering’.

This basically means that ANYONE with a young family has done what this survey defines as ‘informal volunteering’ – I mean surely EVERY FAMILY babysits once in a while for their friends. This is just normal parenting: children have friends, parents want a day to themselves every now and then, so you ‘babysit swap’ – or sleepovers: technically you could count having your friends’ children over for a sleepover with your own kids as having done ‘voluntary work’ in the last year.

Add formal and informal volunteering (/ mutual parental favours) together and you get that 62% figure the Woman’s Hour presenter was talking about.

However to my mind 62% is a completely misleading figure – 17% is how many people ACTUALLY volunteer every year!

It’s a bit annoying TBH – also in the ‘informal volunteering’ category are things such as buying shopping for someone who can’t get out of the house, and that’s LEGIT, or valid, volunteering in my mind, but the category is too inclusive to give us any useful data on this.

Relevance to A-Level Sociology

This is a wonderful example of how a definition which is too broad – in this case what counts as ‘volunteering’ – can give a misleading, or invalid, impression of how much actual voluntary work really goes on in the UK.

This survey is a form of official statistics, so you can use this example to be critical of them.

It is possible that government officials deliberately made the definition so broad as to give the impression that there is more community spirit, or more of a ‘big society’, around than there actually is – because if there’s lots of community and voluntary work going on, it’s easier for the government to justify doing less.

However, even with these very broad definitions, the trend in volunteering has still been going down in recent years!

Are one in five people really disabled?

According to official statistics 19% of working-age adults, or one in five people, self-report as being ‘disabled’, and this figure has been widely used in the media to promote pro-disability programming.

How do we Define Disability?

According to the formal, legal UK definition under the 2010 Equality Act, someone is disabled if they ‘have a physical or mental impairment that has a substantial and long-term negative effect on your ability to do normal daily activities’.

That 19% figure sounds like a lot of people – in fact it is a lot of people: 13 million people in the United Kingdom.

But maybe it’s only a lot because when we think of ‘disability’ we tend to immediately think of people with physical and very visible disabilities, the classic image of a disabled person being someone in a wheelchair – something the media generally doesn’t help with, given its over-reliance on wheelchair users to signify that it is ‘representing the disabled’.

In fact there are ‘only’ 1.2 million wheelchair users in Britain, or less than one in ten people who classify as disabled.

How do we measure disability?

The 19%, or one in five, figure comes from the UK’s Family Resources Survey, the latest published results coming from the 2018/19 round of surveys.

This is a pretty serious set of surveys in which respondents from 20,000 households answer questions for an hour, some related to disability.

The questions which determine whether someone classifies as disabled or not are as follows:

  • Have you had any long-term negative health conditions in the last 12 months? If you respond yes, you move on to the next two questions:
  • Do any of these health conditions affect you in any of the following areas? (Listed here are the top answers: mobility, stamina, breathing or fatigue, mental health, dexterity, other.)
  • Final question: do any of your conditions or illnesses impact your ability to carry out your day to day activities? The responses here are on a 4-point Likert scale ranging from ‘not at all’ to ‘a lot’.

Anyone ticking YES/YES and answering that their illness affects them either ‘a lot’ or ‘a little’ is classified by the UK government as disabled.
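
A minimal sketch of this classification rule in code; the field names are my own, not the Family Resources Survey’s actual variable names.

```python
# A minimal sketch of the survey's disability classification rule.
def classified_as_disabled(long_term_condition: bool,
                           affects_listed_area: bool,
                           impact_on_daily_activities: str) -> bool:
    return (long_term_condition
            and affects_listed_area
            and impact_on_daily_activities in ("a little", "a lot"))

# Mild asthma affecting breathing 'a little' counts as disabled here,
# which is the validity problem discussed below.
print(classified_as_disabled(True, True, "a little"))  # True
```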

Validity problems with this way of measuring disability

The problem with the above is that if you have asthma or a similar mild condition you could be classified as disabled, and this doesn’t tie in with the government’s own definition of disability, which requires that someone has a condition which ‘substantially’ affects their ability to carry out everyday tasks.

Stating that you have asthma which affects your breathing a little does NOT IMO qualify you as disabled, but it does in this survey.

The government doesn’t publish the breakdown of responses to the final disability question, but it’s roughly a 50-50 split between those answering ‘a lot’ and ‘a little’.

In conclusion, it might be more accurate to say that one in ten people is disabled.

Relevance to A-level sociology

This short update should be a useful contemporary example to illustrate some of the validity problems associated with using social surveys, especially for topics with a high degree of subjectivity such as what disability means!

NB – I gleaned the above information from Radio Four’s More or Less, the episode which aired on Weds 10th Feb 2021.

Autobiographies in social research

An autobiography is an account of the life of an individual, written by that individual, sometimes with the assistance of a professional biographer.

One of the most popular UK autobiographies of 2020 was Harry and Meghan’s ‘Finding Freedom’, and it is supposed to ‘dispel rumors about their relationship from both sides of the pond’.

The Amazon critics, however, disagree. The comments ranked at 2 and 3 (accessed 18 August 2020)  in order of usefulness both give the book 1 star out of five and comment thus:

Dela – 1.0 out of 5 stars Pure fantasy

“… the reader can only assume a good proportion of [this book is] made up… the reader is left with a very poor impression of the couple. As someone else said – this is very much an ‘own goal’.”

600 people found this helpful

hellsbells123 – 1.0 out of 5 stars Dross of the highest order – all time low for Harry

“Dreadful book full of ridiculous unnecessary detail from a couple who profess to want privacy. This is a book masquerading as a love story but full of bile, hatred and bitterness. “

578 people found this helpful


The strengths and limitations of autobiographies as a source of data

Whether they have a readership of millions or tens, autobiographies are selective in the information they provide about the life of the author.

They thus tell you what the author wants you to know about themselves and their life history.  

However, you have no way of knowing whether the events outlined in an autobiography actually happened as described, and I wouldn’t even trust an autobiography to give me an accurate view of the author’s own interpretation of what the most significant events in their life history were.

The author may exaggerate certain events, either because they mis-remember them, or because they want their book to sell, thus they are selecting what they think their audience will want to read.

In some cases, events may even be fabricated altogether.

As a rule, I’d say that the more famous someone is, the less valid the contents are. An exception to this would be less famous ‘positive thinking’ lifestyle gurus, whose income maybe depends more on their book sales than that of really famous people, who could possibly afford to be honest in their autobiographies!

Either way, there are so many reasons why an autobiography might lack validity, I wouldn’t trust the content of any of them – think about it, how honest would you be in your autobiography, if you knew anyone could read it?

Using autobiography sales data may be more useful…

IMO the value of autobiographies lies in telling us what people want to hear, not necessarily in getting to the truth of people’s personal lives.

If you want to know what people want to hear, look at the sales volumes – there are really no surprises…

Top selling autobiographies of all time (source)

Relevance to A-level Sociology?

Autobiographies are a source of secondary qualitative data and so are relevant to the research methods part of the course.

Personal Documents in social research

Personal documents are those which are intended only to be viewed by oneself or intimate relations, namely friends or family. They are generally (but not always) not intended to be seen by a wider public audience.

For the purposes of A-level sociology, the two main types of personal document are diaries and personal letters.

Today, I’m inclined to include personal ‘emails’ and certain intimate chat groups – such as circles of close friends chatting on WhatsApp – in this definition, because the data produced here will reveal personal thoughts and feelings, and isn’t intended for wider public consumption.

I think we can also include some personal blogs and vlogs in this definition, as some of these do reveal personal thoughts and feelings, even if they are written to be viewed by the general public – people sharing aspects of their daily lives on YouTube, or people writing more focused blogs about their travel experiences or how they are coping with critical illnesses, all have something of the ‘personal’ about them.

We could also include ‘naughty photos’ intended only to be shared with an intimate partner, but I think I’ll leave an analysis of those kind of documents out of this particular post!

Just a quick note on definitions – I think you need to be careful with the distinction between personal and private documents.

  • Personal documents = anything written which reveals one’s personal thoughts and feelings. These can either be written for consumption by oneself, by close others, or sometimes for public consumption.
  • Private documents = these are simply not intended to be viewed by a wider public audience, and can include someone’s personal diary or intimate letters/ photos between two people; but company accounts and strategy documents can also count as private documents, even if shared by several dozen people, if not intended for consumption by a wider audience.

As with all definitions, just be clear what you’re talking about.

To be safe, for the sake of getting marks in an A-level sociology exam question on the topic, personal diaries and ‘intimate letters’ are certainly both types of personal document.

Examples of sociological research using Personal Documents

Thomas and Znaniecki, The Polish Peasant (1918/ 1921)

Ozana Cucu-Oancea argues that this remains the most significant work using personal documents in the history of the social sciences (source).

The study used a range of both personal and public documents, and the former included hundreds of letters between Polish immigrants and their families back home in Poland, as well as several personal diaries.

In all, the work consisted of 2,200 pages in five volumes, so it’s pretty extensive, focussing on the cultural consequences of Polish migration.

The documents touched on such themes as crime, prostitution, and alcoholism, and the problem of social happiness in general.

What was significant about this study from a theoretical point of view is that it put the individual at the centre of social analysis and stood in contrast to Positivism which was popular at that time.

The limitations of using personal documents in social research

  • There is a problem of interpretation. The researchers might misinterpret the meaning of the documents. The less contextual information the researchers have, the more likely this is to happen.
  • Practically it takes a long time to sift through and organise the information.
  • Who cares? Let’s face it, are you really going to go and read a 2,200-page work analysing letters from Polish immigrants, written over 100 years ago?

Relevance to A-level Sociology?

Personal documents are a source of secondary qualitative data (private rather than public data) and so are relevant to the research methods part of the course.

Please click here to return to the homepage – ReviseSociology.com

A-Level Sociology Official Statistics Starter (Answers)

One of the supposed advantages of official statistics is that they are quick and easy to use to find out basic information.

To test this out, I use the following as a starter for my ‘official statistics’ lesson with my A-level sociology students:

I print the questions off as a one-page hand-out and give students 10 minutes to find out the approximate answers to each of the questions.

If some students manage to find all of them in less than 10 minutes, they can reflect on the final question about validity. I wouldn’t expect all students to get to this, but all of them can benefit from it during class discussion after the task.

Official statistics starter: answers

Below are the answers to the questions (put here because of the need to keep updating them!)

How many people are there in the UK?

66,800,000 estimated in 2020

Source: Office for National Statistics Population Estimates.


How many households are there in the UK?

27.8 million in 2019

Source: ONS Families and Households in the UK 2019.


How many marriages were there last year in the UK?


240,000 in 2017, latest figures available

Source: ONS Marriages in England and Wales

How many cases of Domestic Violence were there in England and Wales last year?

In the year ending March 2019, an estimated 2.4 million adults aged 16 to 74 years experienced domestic abuse in the last year (1.6 million women and 786,000 men).

Source: Domestic Abuse in England and Wales, November 2019.


What proportion of GCSE grades achieved 4 or above in 2020, how does this compare to 2019?

79% of GCSE entries in 2020 received 4 or above, up from 70% in 2019.

Source: The Guardian.

How many students sat an A level in Sociology last year?

38,015 students sat an exam in A-level sociology in 2019.

Source: Joint Council for Qualifications (curse them for PDFing their data and making it less accessible for broader analysis).

Do any of the above sources lack validity?

It’s hard to make an argument that the last two have poor validity – however, you can argue that these are invalid measurements of students’ ability, because of variations in the difficulty of the exams and a range of other factors.

With the DV stats, there are several reasons why these cases may go under-reported, such as fear and shame on the part of the victims.

With marriages, there may be a few unrecorded forced marriages in the UK.

In terms of households, the validity is pretty high, as you just count the number of houses and flats; however, definitions of what counts as a household could lead to varying interpretations of the numbers.

The population stats are an interesting one – we have records of births, deaths and migration, but illegal immigration, well, by its nature it’s difficult to measure!

The point of this starter and what comes next…

It’s a kinaesthetic demonstration of the practical advantages of official statistics, and gives students a chance to think about validity for themselves.

Following the starter, we crack on with official statistics proper – considering in more depth the strengths and limitations of different types of official statistics, drawn from other parts of the A-level sociology specification.

A-level teaching resources

If you’re interested in receiving a paper copy of this, along with a shed load of other fully modifiable teaching resources, why not subscribe to my A-level sociology teaching resources, a bargain at only £9.99 a month.

Unlike Pearson or Tutor to You (however you spell it), I’m independent: all subscription money comes straight to me, rather than the resource designers getting a pittance and 90% of the money going to the corporates at the top, as with those companies.

How has Coronavirus Affected Education?

The most obvious impact of the 2020 Coronavirus on education was the cancellation of GCSE and A-level exams, with the media focusing on the chaos caused by teacher predicted grades being downgraded by the exam authority’s algorithm and then the government U-turn which reinstated the original teacher predicted grades.

While it’s fair to say that this whole ‘exam debacle’ was stressful for most students, in the end the exam-year cohorts ended up getting a good deal, on average, as they were able to pick whichever ‘result’ was best.

It’s also fair to say, maybe, that most of the students who missed their GCSEs and A-levels didn’t miss out on that much education – what they missed, mostly, was the extensive period of ‘exam training’ which comes just before the exams, and those are skills that aren’t really applicable in real life.

However, in addition to the exam year cohorts, there were also several other years of students – primary and secondary school students, and older students, doing apprenticeships and degrees, whose ‘real education’ has been impacted by Covid-19.

This article focuses on some of the recent research that’s focused on these ‘other’ less newsworthy students.

This post has primarily been written to get students studying A-level sociology thinking about methods in context, or how to apply research methods to the study of different topics within education.

Research studies on the impact of Coronavirus on Education.

I’ve included three sources with lots of research: the DFE, The NFER and the Sutton Trust, and then a few other sources as well.

The Department for Education (DFE)

The DFE Guidance for Schools resources seems like a sensible place to start for information on the impact of the pandemic on schools.

The Guidance for the Full Opening of Schools recommends seven main measures to control the spread of the virus.

This guidance suggests there is going to be a lot more pressure on teachers to ‘police’ pupils’ actions and interactions – although ‘social distancing’ is required only depending on the individual school’s circumstances, and face coverings are not mandatory, so schools do have some discretion.

All in all, it just looks like schools are going to be quite a lot more unpleasant and stressful places to be in as various measures are put in place to try and ensure contact between pupils is being limited.

The National Foundation for Educational Research (NFER)

The NFER has produced several research studies, mainly survey-based, looking at the impact of Coronavirus on schools.

One NFER survey asked almost 3,000 senior leaders and teachers in 2,200 schools across England and Wales about the challenges they face from September 2020.

The main findings of this survey are as follows:

  • Teachers report that their students are an average of three months behind with their studies after missing school due to lockdown.
  • Teachers in the most deprived schools are three times more likely to report that their pupils are four months behind compared to those in the least deprived schools.
  • Over 25% of pupils had limited access to computer facilities during lockdown. This was more of a problem for pupils from deprived areas.
  • Teachers anticipate that 44% of pupils will need catch-up lessons in the coming academic year.
  • Schools are prioritising students’ mental health and wellbeing ahead of getting them caught up.

The Sutton Trust

The Sutton Trust has several reports which focus on the impact of Coronavirus, specifically on education. The reports look at the impacts on early-years and apprenticeships, for example.

A report by the Sutton Trust on the impact of the school shutdown in April noted some of the following key findings:

  • Private schools were about twice as likely to have well-established online learning platforms compared to state schools; correspondingly, privately schooled children were twice as likely to receive daily online lessons compared to state school children.
  • 75% of parents with postgraduate degrees felt confident about educating their children at home, compared to less than half of parents with A-levels as their highest level of qualification.
  • 50% of teachers in private schools said they’d received more than three quarters of the work back, compared to only 8% in the most deprived state schools.

Research from other organisations

  • This article from the World Economic Forum provides an interesting global perspective on the impact of coronavirus – with more than a billion children worldwide having been out of school. It highlights that online learning might become more central going forwards, but points out that access to online education varies massively from country to country.
  • The Institute for Fiscal Studies produced a report in July focusing on the financial impacts of Coronavirus on universities. They estimate that the sector will have lost £11 billion in one year, a quarter of its income, and that around 5% of providers probably won’t be able to survive without government assistance.
  • This article in The Conversation does a cross-national comparison of how schools in four countries opened up, grading each country’s approach. It’s an interesting example of how some social policies are more effective than others!

Final Thoughts

I’ve by no means covered all the available research, rather I’ve tried to get some breadth in here, looking at the impact on teachers and pupils, and at things globally too.

By all means drop some links to further research in the comments!

Two-stage balloon rocket as an introduction to ‘experiments’ in sociology

The two-stage balloon rocket experiment is a useful ‘alternative’ starter to introduce the topic of experiments – a topic which can be a little dry, and which some students will find challenging, what with all the heavy concepts!

Using the experiment outlined below can help by introducing students to the concepts of ‘dependent and independent variables’, ‘cause and effect’, ‘controlled conditions’, ‘making predictions’ and a whole load of other concepts associated with the experimental method.

The experiment, including the materials you’ll need, and some discussion questions, is outlined here – you’ll need to sign up, but it’s easy enough to do; you can use your Google account.

Keep in mind that this link takes you to a full-on science lesson where it’s used to teach younger students about physics concepts – but modified and used as a starter it’s a useful intro to a sociology lesson!

Also, students love to revert back to their childhood, and you can call this an activity which benefits the lads and the kinaesthetic learners. Lord knows there’s precious little for them in the rest of the A-level specification, so you may as well get this in while you can!

The two-stage balloon rocket experiment

(Modified version for an intro to experiments in A-level sociology!)

  1. Set up the two-stage balloon rocket experiment in advance of the students coming into the classroom. Set it up with only a small amount of air, so it is deliberately a bit naff on its first run.
  2. Get students to discuss what they think is going to happen when you release the balloon along the wire.
  3. Release the balloon.
  4. Discuss why it didn’t work too well.
  5. Get students involved with redesigning the experiment
  6. Do round two.
  7. Use ‘balloon speed’ as an example of a dependent variable and ‘amount of air/ fuel’ as an example of an independent variable when introducing these often difficult-to-understand concepts in the next stage (excuse the pun) of the lesson.

Questions you might get the students to consider:

  • What variables did we find had the biggest impact on how far the rocket travelled?
  • Did any variables have a very small impact or no impact at all?
  • If we had more time or other materials available, what changes would you make to make the rocket travel even further?

Don’t forget to save the animal modelling balloons you would have bought for this and use them for the ‘Balloon Animals Starter’ in the next lesson on field experiments.

Please click here to return to the homepage – ReviseSociology.com