Outline and explain two practical advantages of using official statistics


Official Statistics are a quick and cheap means of accessing data relevant to an entire population in a country.

They are cheap for researchers to use because they are collected by governments, who often make them available online for free—for example, the UK Census.

Marxists might point out that, because they are free to access, marginalised groups can use them to ‘keep a check on government’.

More generally, they are useful for making quick evaluations of government policy, to see whether taxpayers’ money is being spent effectively.

Official statistics are a very convenient way of making cross-national comparisons without visiting other countries.

Most governments in the developed world today collect official statistics which are made available for free.

More and more governments around the world collect official statistics, so more data becomes available every year.

The United Nations Development Programme collects the same data in the same way across many countries, so it’s easy to assess the relationship between economic and social development in a global age.

Theory and Methods A Level Sociology Revision Bundle 

If you like this sort of thing, then you might like my Theory and Methods Revision Bundle – specifically designed to get students through the theory and methods sections of A-level sociology papers 1 and 3.

Contents include:

  • 74 pages of revision notes
  • 15 mind maps on various topics within theory and methods
  • Five theory and methods essays
  • ‘How to write methods in context essays’.

 


Outline and explain two practical problems which may affect social research (10)


 

One practical problem may be gaining access to the group being studied.

Analysis/ development – Deviant and criminal groups may be unwilling to allow researchers to gain access because they may fear prosecution if the authorities find out about them.

Analysis/ development – some groups may be unwilling to take part in research because of social stigma.

Analysis/ development – the characteristics of the researcher (such as their age, gender or ethnicity) may exacerbate all of the above, making access even harder.

Analysis/ development – A further problem is that if all of the above apply, the research is very unlikely to get funding!

A second practical problem is that some studies can be very time-consuming.

Analysis/ development – gaining access can take a long time, especially with covert research.

Analysis/ development – even with overt research, gaining trust and getting respondents to feel comfortable with you can take months.

Analysis/ development – unexpected findings in participant observation may further lengthen the research process.

Analysis/ development – Some Participant Observation studies have taken so long that the findings may no longer be relevant—e.g. Gang Leader for a Day.


Outline and Explain Two Reasons Why Interpretivists Prefer to Use Qualitative Research Methods (10)

A model answer to a possible 10 mark question which could appear on the AQA’s A-level papers 1 or 3.

If you’re a bit ‘all at sea’ with Interpretivism, you might like to review your understanding of it first of all by reading this post: social action theory: a summary.


A developed model answer…

NB Warning – this is total overkill and probably completely unrepresentative of what 95% of actual A-level students are capable of producing.

The first reason is that Interpretivists believe that social realities are complex, and that individuals’ identities are the result of thousands of unique micro-interactions.

For example, labelling theorists argue that students fail because of low teacher expectations, and these expectations are communicated to students in subtle ways over many months or years, until a student ends up with a self-concept as ‘thick’.

There is simply no way that quantitative methods such as structured questionnaires can capture these complex (‘inter-subjective’) micro interactions. In order to assess whether labelling has taken place, and whether it’s had an effect, you would need to go into a school and ideally observe it happening over a long period, and talk to students about how their self-perceptions have changed, which would require qualitative methods such as unstructured interviews. Alternatively you could use diaries in which students document their changing self concept.

A further reason why qualitative methods would be good in the above example is that you could, as a researcher, check whether teachers actually do have negative perceptions of certain students (rather than it being all in the students’ minds). Again, qualitative methods are vital here: you would have to probe teachers, ask them testing questions, look for body-language clues and observe them interacting with students to really assess whether labelling is taking place.

It would be all too easy for a teacher to lie about ‘not labelling’ if they were just filling out a self-completion questionnaire.

A second reason Interpretivists prefer qualitative methods stems from Goffman’s Dramaturgical Theory – People are actors on a ‘social stage’ who actively create an impression of themselves.

Goffman distinguished between ‘front stage’ performances of social roles and the ‘back stage’ aspects of life (at home) where we are more ‘true to ourselves’.

Goffman argued that some people put on ‘genuine performances’ – e.g. one teacher might really believe in teaching, and genuinely care about their students – their professional role is who they ‘really are’. Others, however, put on what he calls ‘cynical performances’ – another teacher, for example, might act like they care, because their school tells them to, but behind the scenes they hate the job and want to do something else.

A qualitative method such as participant observation would be pretty much the only way to uncover whether someone is genuinely or cynically acting out their social roles – because the flexibility of following the respondent from front stage to back stage would allow the researcher to see ‘the mask coming off’.

If you just used a questionnaire, even a cynical teacher would know what boxes to tick to ‘carry on the performance’, and thus would not give you valid results.

Related Posts 

A brief overview of the difference between Positivism and Interpretivism


How equal are men and women in relationships these days? Student survey results

Women do the lioness’s share of the housework, but men and women seem to have equal control over the finances, at least according to two surveys conducted by my A-level sociology students last week.

This acts as a useful update to the topic of power and equality within relationships, especially the ‘domestic division of labour’ aspect.

I actually did two surveys with the students this week, both on Socrative.

For the first survey, I simply asked students, via Socrative, who did most of the domestic work when they were a child (mostly mother or mostly father – the full range of possible responses is in the results below), with ‘domestic work’ broken down into tasks such as cleaning, laundry, DIY etc.

For the second survey, I got students to write down possible survey questions on post-it notes, then I selected seven of them to make a brief questionnaire, which they then used as a basis for interviewing three couples about who did the housework.

Selected results from the initial student survey on parents’ housework

These results were based on students’ memory!

(Charts: Housework survey 2018 – general domestic tasks and DIY)

Selected results from the second survey, based on student interviews with couples

(Charts: Domestic labour questionnaire 2018; men and women finances survey 2018)

Discussion of the validity of the results

These two surveys on the domestic division of labour (and other things) provided a useful way into a discussion of the strengths and limitations of social surveys more generally. We touched on the following, among other things:

  • memory may limit validity in survey one
  • the lack of possible options limits validity in survey two, which also serves as an illustration of the imposition problem.
  • asking couples together should act as a check on validity, because men are less likely to exaggerate their contribution if their partner is present.
  • there are a few ethical problems with the ‘him’ and ‘her’ categories, which could be improved upon.

Postscript – on using student surveys to teach A-level sociology

All in all this is a great activity to do with students. It brings the research up to date, it gets them thinking about questionnaire design and, if you time it right, it even gets them out of the classroom for half an hour, so you can just put yer feet up and chillax!

If you want to use the same surveys, the links, which will allow you to modify them as you see fit, are here:

  • Quiz one – https://b.socrative.com/teacher/#import-quiz/16728393
  • Quiz two – https://b.socrative.com/teacher/#import-quiz/33508597

 

Zimbardo’s Prison Experiment

In this notorious experiment, college students volunteered to take on the role of either prison guards or prisoners and spend time in an artificial prison. The Stanford Prison Experiment was meant to last 14 days, but it had to be stopped after just six because the ‘guards’ became abusive and the ‘prisoners’ began to show signs of extreme stress and anxiety.

In 1971, psychologist Philip Zimbardo and his colleagues set out to create an experiment that looked at the impact of becoming a prisoner or prison guard. The researchers set up a mock prison in the basement of Stanford University’s psychology building, and then selected 24 undergraduate students to play the roles of prisoners and guards.

The simulated prison included three six by nine foot prison cells. Each cell held three prisoners and included three cots. Other rooms across from the cells were utilized for the prison guards and warden. One very small space was designated as the solitary confinement room, and yet another small room served as the prison yard.

The 24 volunteers were then randomly assigned to either the prisoner group or the guard group. Prisoners were to remain in the mock prison 24 hours a day for the duration of the study. Guards, on the other hand, were assigned to work in three-man teams for eight-hour shifts. After each shift, guards were allowed to return to their homes until their next shift. Researchers were able to observe the behavior of the prisoners and guards using hidden cameras and microphones.

While the prisoners and guards were allowed to interact in any way they wanted, the interactions were generally hostile or even dehumanizing. The guards began to behave in ways that were aggressive and abusive toward the prisoners, while the prisoners became passive and depressed. Five of the prisoners began to experience such severe negative emotions, including crying and acute anxiety, that they had to be released from the study early.

The Stanford Prison Experiment demonstrates the powerful role that the situation can play in human behaviour. Because the guards were placed in a position of power, they began to behave in ways they would not normally act in their everyday lives or in other situations. The prisoners, placed in a situation where they had no real control, became passive and depressed.

Criticisms of Quantitative Research

Bryman (2016) identifies four criticisms of quantitative research:

Quantitative researchers fail to distinguish people and social institutions from the world of nature

Schutz (1962) is the main critic here.

Schutz and other phenomenologists accuse quantitative social researchers of treating the social world as if it were no different from the natural world. In so doing, quantitative researchers tend to ignore the fact that people interpret the world around them, whereas this capacity for self-reflection cannot be found among the objects of the natural sciences.

The measurement process possesses an artificial and spurious sense of precision and accuracy

Cicourel (1964) is the main critic here.

He argues that the connection between the measures developed by social scientists and the concepts they are supposed to be revealing is assumed rather than real – basically measures and concepts are both effectively ‘made up’ by the researchers, rather than being ‘out there’ in reality.

A further problem is that quantitative researchers assume that everyone who answers a survey interprets the questions in the same way – in reality, this simply may not be the case.

The reliance on instruments and procedures hinders the connection between research and everyday life

This issue relates to the question of ecological validity.  

Many methods of quantitative research rely heavily on administering research instruments to participants (such as structured interviews or self-completion questionnaires), or controlling situations to determine effects.

However, these instruments simply do not ‘tap into’ people’s real life experiences – for example, many of the well known lab experiments on the A-level sociology syllabus clearly do not reflect real life, while surveys which ask people about their attitudes towards immigration, or the environment, do not necessarily tell us about how people act towards migrants or the environment on a day to day basis.

The analysis of relationships between variables creates a static view of social life that is independent of people’s lives. 

The main critic here is Blumer (1956).

Blumer (1956) argued that studies that seek to bring out the relationships between variables omit ‘the process of interpretation or definition that goes on in human groups’.

This is a combination of criticisms 1 and 3 above, but adds on an additional problem – that in isolating out variables, quantitative research creates an artificial, fixed and frozen social (un)reality – whereas social reality is (really) alive and constantly being created through processes of interaction by its various members.

In other words, the criticism here is that quantitative research is seen as carrying an objective ontology that reifies the social world.

The above criticisms have led interpretivists to prefer more qualitative research methods. However, these too have their limitations!

Sources:

Bryman (2016) Social Research Methods

 

The Four Main Concerns of Quantitative Research

Quantitative researchers generally have four main preoccupations: they want their research to be measurable, to focus on causation, to be generalisable, and to be replicable.

These preoccupations reflect epistemologically grounded beliefs about what constitutes acceptable knowledge, and can be contrasted with the preoccupations of researchers who prefer a qualitative approach.

Measurement 

It may sound like it’s stating the obvious – but quantitative researchers are primarily interested in collecting numerical data, which means they are essentially concerned with counting social phenomena, which will often require concepts to be operationalised.
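
To make ‘operationalisation’ a little more concrete, here is a minimal sketch (in Python, purely for illustration – the concept, the indicators and the figures are all invented) of how an abstract concept such as ‘religiosity’ might be turned into something countable:

```python
# A hypothetical operationalisation of the concept 'religiosity':
# the abstract concept is broken down into two measurable indicators,
# each of which can be counted from survey answers.

# Indicator 1: attendance at religious services (times per month)
# Indicator 2: self-rated importance of religion (0 = not at all, 3 = very important)

def religiosity_score(attendance_per_month: int, importance_rating: int) -> int:
    """Combine the two indicators into a single countable score (0-7).

    Attendance is capped at 4 so that neither indicator dominates.
    """
    attendance_component = min(attendance_per_month, 4)  # 0-4
    importance_component = importance_rating             # 0-3
    return attendance_component + importance_component

# Hypothetical respondents: (attendance per month, importance rating)
respondents = [(0, 0), (1, 2), (4, 3), (8, 3)]

scores = [religiosity_score(a, i) for a, i in respondents]
print(scores)                      # [0, 3, 7, 7]
print(sum(scores) / len(scores))   # average 'religiosity' for the sample
```

The point is simply that once a concept has been operationalised into indicators like these, it can be counted, averaged and compared across groups.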

Causality 

In most quantitative research there is a strong concern with explanation: quantitative researchers are more concerned with explaining why things are as they are, rather than merely describing them (which tends to be the focus of more qualitative research).

It follows that it is crucial for quantitative researchers to effectively isolate variables in order to establish causal relationships.

Generalisation 

Quantitative researchers tend to want their findings to be representative of wider populations, rather than just the sample involved in the study, thus there is a concern with making sure appropriate sampling techniques are used.

Replication

If a study is repeatable then it is possible to check that the original researchers’ own personal biases or characteristics have not influenced the findings: in other words, replication is necessary to test the objectivity of an original piece of research.

Quantitative researchers tend to be keen on making sure studies are repeatable, although most studies are never repeated because there is a lack of status attached to doing so.

Source:

Bryman (2016) Social Research Methods

 

A few thoughts on revising research methods in context/ applied research methods

The ‘applied methods*’ question appears in paper 1 of the AQA’s Education with Theory and Methods exam (paper 7192/1). This is out of 20 marks, and students are expected to apply their understanding of any of the six main research methods covered in the A-level sociology specification to any conceivable topic within education.

An example of an ‘applied methods*’ question is as follows:

‘Applying material from item B and elsewhere, evaluate the strengths and limitations of using participant observation to investigate truancy from school’ (20)

Here’s how I revise these questions with my students… NB I don’t introduce the item until later…

Warm up with the method

Firstly, I get students to talk through the theoretical, practical and ethical strengths and limitations of the method alone. I do this because students need to know the method anyway, and they can get 10/20 just for writing a decent methods essay (without applying it) – see the mark scheme here.


Warm up with the method generally applied to the topic

Students brainstorm the general ethical, practical and theoretical issues you may encounter when researching this topic with this method… I think it’s good to be as open-minded as possible early on… It’s easiest just to get them to do this on paper. 


Do a plan applying the method to the specific details in the item

I use an A3 sheet for this, with the item and question in the middle; students now read the item.


Write a detailed flow-chart

Here I get students to add in analysis and evaluation points to each original lead-point, showing a chain of reasoning (side 2 of A3 sheet).


Repeat stage two with a different topic, to emphasise the difference in answers for the same method applied to a different topic

DO NOT go over the whole process again – once is enough!


Issues with Revising Applied Research Methods 

There’s a very real possibility that students will just not ‘get it’, because they have to be so nit-pickingly overt about relating the method to the specific topic. Drilling this into students is a painful and thankless task, induced solely by the demands of this specific form of the assessment.

There is also the possibility that students may lose the will to live, especially when some past papers have examples that even I find intolerably dull, and I’m actually interested in this stuff!

*These are sometimes referred to as ‘Methods in Context’ questions. This was the term originally used by the AQA for many years, but (much like this question format itself as a means of assessing application skills) it’s pretty clumsy, so the new ‘applied methods’ phrase is IMO much better.  

What is a Likert Scale?

A Likert* scale is a multiple-indicator or multiple-item measure of a set of attitudes relating to a particular area. The goal of a Likert scale is to measure intensity of feelings about the area in question.

A Likert scale about Likert scales!

In its most common format, the Likert scale consists of a statement (e.g. ‘I love Likert scales’) and then a range of ‘strength of feeling’ options which respondents choose from – in the above example, there are five such options ranging from strongly agree to strongly disagree.

Each respondent’s reply on each item is scored, typically with a high score (5 in the above example) being given for positive feelings and a low score (1 in the above example) for negative feelings.

Once all respondents have completed the questionnaire, the scores from all responses are aggregated to give an overall score, or ‘strength of feeling’ about the issue being measured.
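
As a rough illustration of how this scoring and aggregation might work in practice, here is a minimal sketch (in Python, using made-up statements and responses, and assuming the standard five-point agree/disagree coding described above):

```python
# Hypothetical five-point Likert coding: strongly agree = 5 ... strongly disagree = 1
SCORES = {
    "strongly agree": 5,
    "agree": 4,
    "neither": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

# Each respondent answers every item (statement) on the scale.
# These statements and responses are invented purely for illustration.
responses = [
    {"I love Likert scales": "strongly agree", "Likert scales are easy to analyse": "agree"},
    {"I love Likert scales": "disagree", "Likert scales are easy to analyse": "neither"},
]

# Score each respondent by summing their scores across all items...
respondent_totals = [sum(SCORES[answer] for answer in r.values()) for r in responses]
print(respondent_totals)  # [9, 5]

# ...then aggregate across all respondents to get an overall 'strength of feeling'.
overall_average = sum(respondent_totals) / len(respondent_totals)
print(overall_average)    # 7.0 out of a possible 10 (2 items x maximum score of 5)
```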

Some examples of sociological research using Likert scales:

The World Values Survey is my favourite example – they use a simple four-point scale to measure happiness.

The results on the website show you the percentages who answer in each category, but I believe that the researchers also give scores to each response (4 to 1) and then do the same for similar questions, combine the scores and eventually come up with a happiness rating for a country out of 10. I think the USA scores around 7.2 or something like that – it might be more! Look it up if you’re interested.


Important points to remember about Likert scales

  • The items must be statements, not questions.
  • The items must all relate to the same object being measured (e.g. happiness, strength of religious belief)
  • The items that make up the scale should be interrelated so as to ensure internal reliability is strong.

*The Likert Scale is named after Rensis Likert, who developed the method.

Sources

Adapted from Bryman’s Social Research Methods

 

Validity in Social Research

Validity refers to the extent to which an indicator (or set of indicators) really measures the concept under investigation. This post outlines five ways in which sociologists and psychologists might determine how valid their indicators are: face validity, concurrent validity, convergent validity, construct validity, and predictive validity.


As with many things in sociology, it makes sense to start with an example to illustrate the general meaning of the concept of validity:

When universities question whether or not BTECs really provide a measure of academic intelligence, they are questioning the validity of BTECs to accurately measure the concept of ‘academic intelligence’.

When academics question the validity of BTECs in this way, they might be suspicious that BTECs are actually measuring something other than a student’s academic intelligence; rather, BTECs might actually be measuring a student’s ability to cut, paste and modify just enough to avoid being caught out by plagiarism software.

If this is the case, then we can say that BTECs are not a valid measurement of a student’s academic intelligence.

How can sociologists assess the validity of measures and indicators?


There are a number of ways of testing measurement validity in social research:

  • Face validity – on the face of it, does the measure fit the concept? Face validity is simply achieved by asking others with experience in the field whether they think the measure seems to be measuring the concept. This is essentially an intuitive process.
  • Concurrent validity – to establish the concurrent validity of a measure, researchers simply compare the results of one measure to another which is known to be valid (known as a ‘criterion measure’). For example, with gamblers, betting accounts give us a valid indication of how much they actually win or lose, but the wording of questions designed to measure ‘how much they win or lose in a given period’ can yield vastly different results. Questions which produce results closer to the hard financial statistics can be said to have the highest degree of concurrent validity (see the sketch after this list).
  • Predictive validity – here a researcher uses a future criterion measure to assess the validity of existing measures. For example we might assess the validity of BTECs as measurement of academic intelligence by looking at how well BTEC students do at university compared to A-level students with equivalent grades.
  • Construct validity – here the researcher is encouraged to deduce hypotheses from a theory that is relevant to the concept. However, there are problems with this approach as the theory and the process of deduction might be misguided!
  • Convergent validity – here the researcher compares her measures to measures of the same concept developed through other methods. Probably the most obvious example of this is the British Crime Survey as a test of the ‘validity’ of police-recorded crime statistics. The BCS shows us that different crimes, as measured by the police statistics, have different levels of convergent validity – vehicle theft is relatively high, vandalism is relatively low, for example.
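
To illustrate the gamblers example under concurrent validity above, here is a minimal sketch (in Python, with invented figures) of one common way of quantifying how closely a survey question tracks a criterion measure – the Pearson correlation between self-reported losses and the ‘hard’ figures from betting accounts. This is only an illustrative sketch under assumed data, not a description of how any actual study was done:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical criterion measure: monthly losses (in pounds) taken from betting accounts.
account_losses = [120, 40, 300, 15, 80, 200]

# Two hypothetical survey questions asking the same gamblers to estimate their monthly losses.
question_a_estimates = [100, 50, 280, 20, 90, 180]  # tracks the account figures closely
question_b_estimates = [50, 70, 90, 60, 40, 80]     # bears little relation to the accounts

print(round(pearson(account_losses, question_a_estimates), 2))  # close to 1: higher concurrent validity
print(round(pearson(account_losses, question_b_estimates), 2))  # noticeably lower: weaker concurrent validity
```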

Source 

Bryman (2016) Social Research Methods