Invalid Official Statistics on Volunteering?

I caught an episode of Woman’s Hour last week in which the presenter kept mentioning that according to a recent survey 62% of people in the UK had volunteered in the last week, and inviting people to discuss their experiences of voluntary work.

The survey in question (excuse the pun) was the Volunteering and Charitable Giving Community Life Survey 2020-2021.

The show was then peppered with references to people’s volunteering efforts, such as working with the homeless at Christmas, staffing food banks, helping out with the Covid-vaccination efforts and so on.

And such examples fit very well with my own image of what ‘voluntary work’ involves – to my mind a volunteer is someone who commits an hour or maybe more a week (I have a low bar in terms of time!) to do something such as the above, probably in conjunction with a formal charity, or at least with other people as part of a team.

But I just couldn’t believe that 62% of people did that kind of voluntary work last year.

And it turns out that they don’t.

The government survey (a form of official statistics) that yielded these results distinguishes between formal and informal volunteering.

The former type, formal volunteering, is what I (and probably most people) think of as ‘real volunteering’ – it was these kinds of things the Woman’s Hour presenter was interested in hearing about and publicising.

However, only 17% of people did formal volunteering last year…

Just over 50% of people did ‘informal volunteering’ but this has a VERY LOW BAR for inclusion. Basically, if you babysat your friend’s kids for one day at some point last year, you get to tick the box saying that you did ‘informal volunteering’.

This basically means that ANYONE with a young family has done what this survey defines as ‘informal volunteering’. I mean, surely EVERY FAMILY babysits once in a while for their friends – this is just normal parenting. Children have friends, parents want a day to themselves every now and then, so you ‘babysit swap’. Or take sleepovers: technically you could count having your friends’ children over for a sleepover with your own kids as having done ‘voluntary work’ in the last year.

Add formal and informal volunteering (/ mutual parental favours) together and you get that 62% figure the Woman’s Hour presenter was talking about.
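A quick back-of-the-envelope sketch of how these figures combine. The overlap between formal and informal volunteers (people who did both) isn’t something the survey headline reports, but it is implied by the numbers above:

```python
# Published figures from the Community Life Survey (percent of people):
formal = 17.0      # did formal volunteering
informal = 50.0    # did informal volunteering ("just over 50%", rounded here)
combined = 62.0    # the headline 'any volunteering' figure

# If the combined figure is the union of the two groups, then:
# combined = formal + informal - overlap, so the overlap is implied:
implied_overlap = formal + informal - combined
print(f"Implied overlap (did both): {implied_overlap:.0f}%")
```

In other words, on these (rounded) figures, around 5% of people did both kinds of volunteering, and the 62% headline counts everyone who did either.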

However, to my mind 62% is a completely misleading figure – 17% is how many people ACTUALLY volunteer every year!

It’s a bit annoying TBH – the ‘informal volunteering’ category also includes things such as buying shopping for someone who can’t get out of the house, and that’s legit, valid volunteering in my mind, but the category is too inclusive to give us any useful data on this.

Relevance to A-Level Sociology

This is a wonderful example of how a definition which is too broad – in this case, of what counts as ‘volunteering’ – can give a misleading, or invalid, impression of how much actual voluntary work really goes on in the UK.

This survey is a form of official statistics, so you can use this example to be critical of them.

It is possible that government officials deliberately made the definition so broad so as to give the impression that there is more community spirit, or more of a ‘big society’, around than there actually is – because if there’s lots of community and voluntary work going on, it’s easier for the government to justify doing less.

However, even with these very broad definitions, the trend in volunteering has still been going down in recent years!

Are one in five people really disabled?

According to official statistics, 19% of working-age adults, or one in five people, self-report as being ‘disabled’, and this figure has been widely used in the media to promote pro-disability programming.

How do we Define Disability?

According to the formal, legal UK definition under the 2010 Equality Act, someone is disabled if they ‘have a physical or mental impairment that has a substantial and long-term negative effect on your ability to do normal daily activities’.

That 19% figure sounds like a lot of people, in fact it is a lot of people – that’s 13 million people in the United Kingdom.

But maybe it’s only a lot because when we think of ‘disability’ we tend to immediately think of people with physical and very visible disabilities – the classic image of a disabled person being someone in a wheelchair – which the media generally doesn’t help with, given its over-reliance on wheelchair users to signify that it is ‘representing the disabled’.

In fact there are ‘only’ 1.2 million wheelchair users in Britain, or less than one in ten of the people who classify as disabled.

How do we measure disability?

The 19%, or one-in-five, figure comes from the UK’s Family Resources Survey, the latest published results coming from the 2018/19 round of surveys.

This is a pretty serious set of surveys in which respondents from 20,000 households answer questions for an hour, some related to disability.

The questions which determine whether someone classifies as disabled or not are as follows:

  • Have you had any long-term negative health conditions in the last 12 months? If you respond yes, you move on to the next two questions:
  • Do any of these health conditions affect you in any of the following areas? Listed here are the top answers: mobility/stamina, breathing or fatigue/mental health/dexterity/other
  • Final question: do any of your conditions or illnesses impact your ability to carry out your day-to-day activities? The responses here are on a 4-point Likert scale ranging from ‘not at all’ to ‘a lot’.

Anyone ticking YES/YES and either ‘my illness affects me a lot’ or ‘a little’ is classified by the UK government as disabled.

Validity problems with this way of measuring disability

The problem with the above is that if you have asthma or a similar mild condition you could be classified as disabled, and this doesn’t tie in with the government’s own definition of disability, which requires that someone has a condition which ‘substantially’ affects their ability to carry out everyday tasks.

Stating that you have asthma which affects your breathing a little does NOT, IMO, qualify you as disabled, but it does in this survey.

The government doesn’t publish the breakdown of responses to the final disability question, but it’s roughly a 50-50 split between those answering ‘a lot’ and those answering ‘a little’.

In conclusion, it might be more accurate to say that one in ten people is disabled.
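The arithmetic behind that ‘one in ten’ suggestion can be sketched as follows; note that the 50-50 split is an estimate described above, not a published figure:

```python
# 19% of working-age adults are classified as disabled by the survey.
disabled_rate = 0.19
# Assumed: roughly half of those say their condition affects them 'a lot'
# (the exact breakdown isn't published, as noted above).
share_a_lot = 0.5

stricter_rate = disabled_rate * share_a_lot
print(f"Affected 'a lot': {stricter_rate:.1%}")  # roughly one in ten
```

Counting only those whose conditions affect them ‘a lot’ gives roughly 9.5% of working-age adults, i.e. about one in ten rather than one in five.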

Relevance to A-level sociology

This short update should be a useful contemporary example to illustrate some of the validity problems associated with using social surveys, especially for topics with a high degree of subjectivity such as what disability means!

NB – I gleaned the above information from Radio Four’s More or Less, the episode which aired on Weds 10th Feb 2021.

The limitations of School Exclusion Statistics

The Department for Education publishes an annual report on exclusions, the latest edition, published in August 2018, being ‘Permanent and fixed-period exclusions in England: 2016 to 2017’.

The 2018 report shows that the overall rate of permanent exclusions was 0.1 per cent of pupil enrolments in 2016/17. The number of exclusions was 7,720.


The report also goes into more detail, for example….

  • The vast majority of exclusions – over 85% – were from secondary schools.
  • The three main reasons for permanent exclusions (not counting ‘other’) were:
    • Persistent disruptive behaviour
    • Physical assault against a pupil
    • Physical assault against an adult.

Certain groups of students are far more likely to be permanently excluded:

  • Free School Meals (FSM) pupils had a permanent exclusion rate four times higher than non-FSM pupils
  • FSM pupils accounted for 40.0% of all permanent exclusions
  • The permanent exclusion rate for boys was over three times higher than that for girls
  • Over half of all permanent exclusions occur in national curriculum year 9 or above. A quarter of all permanent exclusions were for pupils aged 14
  • Black Caribbean pupils had a permanent exclusion rate nearly three times higher than the school population as a whole.
  • Pupils with identified special educational needs (SEN) accounted for around half of all permanent exclusions

The ‘reasons why’ and ‘types of pupil’ data probably hold no surprises, but NB there are quite a few limitations with the above data, and so these stats should be treated with caution!

Limitations of data on permanent exclusions

Validity problems…

According to this Guardian article, the figures do not take into account ‘informal exclusions’ or ‘off-rolling’, where schools convince parents to withdraw their children without making a formal exclusion order. Technically it is then down to the parents to enrol their child at another institution or home-educate them, but in many cases this doesn’t happen.

According to research conducted by FFT Education Datalab, up to 7,700 students go missing from the school roll between year 7 and year 11, when they are supposed to sit their GCSEs – equivalent to a 1.4% drop-out rate from first enrolment at secondary school to GCSEs.

Datalab took their figures from the annual school census and the DfE’s national pupil database. The cohort’s numbers were traced from year seven, the first year of secondary school, up until taking their GCSEs in 2017.

The entire cohort enrolled in year 7 in state schools in England in 2013 was 550,000 children.

However, by time of sitting GCSEs:

  • 8,700 pupils were in alternative provision or pupil referral units,
  • nearly 2,500 had moved to special schools
  • 22,000 had left the state sector (an increase from 20,000 in 2014). Of the 22,000:
    • 3,000 had moved to mainstream private schools
    • Just under 4,000 were enrolled or sat their GCSEs at a variety of other education institutions.
    • Around 60% of the remaining 15,000 children were likely to have moved away from England, in some cases to other parts of the UK such as Wales (Datalab used emigration data by age and internal migration data to estimate this).
    • This leaves between 6,000 and 7,700 former pupils unaccounted for, who appear not to have sat any GCSE or equivalent qualifications or been counted in school data.

Working out the percentages, this means that by GCSEs the following proportions of the original year 7 cohort had been ‘moved on’ to other schools:

  • 6%, or around 32,000 students in all, were moved on, roughly 10,000 of them to state-funded alternative provision, e.g. Pupil Referral Units.
  • 4%, or 22,000, left the mainstream state sector altogether, presumably due to exclusion or ‘coerced withdrawal’ (i.e. off-rolling), of which
  • 1.4%, or 7,700, cannot be found in any educational records!
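As a quick check, these percentages can be reproduced from the raw figures quoted above:

```python
cohort = 550_000  # pupils enrolled in year 7 in state schools in 2013

# Figures quoted from the Datalab research above
moved_on = 32_000     # 'moved on' to other schools in total
left_state = 22_000   # left the mainstream state sector
unaccounted = 7_700   # cannot be found in any educational records

for label, n in [("moved on", moved_on),
                 ("left state sector", left_state),
                 ("unaccounted for", unaccounted)]:
    print(f"{label}: {n / cohort:.1%} of the cohort")
```

Note the last figure: 7,700 out of 550,000 is 1.4% of the cohort, which matches the drop-out rate quoted earlier.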

This Guardian article provides a decent summary of the research.

Further limitations of data on school exclusions

  • There is very little detail on why pupils were excluded, other than the ‘main reason’ formally recorded by the head teacher in each school. There is no information at all about the specific act or the broader context. Labelling theorists might have something to say about this!
  • There is a significant time gap between recording and publication of the data. This data was published in summer 2018 and covers exclusions in the academic year 2016-2017. Given that you might be looking at this in 2019 (data is published annually) and that there is probably a ‘long history’ behind many exclusions (i.e. pupils probably get more than one second chance), this data refers to events that happened 2 or more years ago.

Relevance of this to A-level sociology

This is of obvious relevance to the education module… it might be something of a wake-up call that 4% of students leave mainstream secondary education before making it to GCSEs, and that 1.4% seem to end up out of education and not sitting GCSEs!

It’s also a good example of why independent longitudinal studies provide a more valid figure of exclusions (and ‘informal’ exclusions) than the official government statistics on this.

 

I’ll be producing more posts on why students get excluded, and on what happens to them when they do and the consequences for society in coming weeks.

 

This is a topic that interests me, shame it’s not a direct part of the A level sociology education spec!

Do 25% of children really have their own mobiles? Invalid Research Example #01

This is a ‘new thread’ idea… posting up examples of naff research. I figure there are two advantages to this…

  1. It’s useful for students to have good examples of naff research, to show them the meaning of ‘invalid data’ or ‘unrepresentative samples’, or in this case, just plain unreferenced material which may as well be ‘Fake News’.
  2. At least I get some kind of pay back (in the form of the odd daily post) for having wasted my time wading through this drivel.

My first example is from The Independent, the ex-newspaper turned click-bait website.

I’ve been doing a bit of research on smart phone usage statistics this week and I came across this 2018 article in the Independent: Quarter of Children under 6 have a smartphone, study finds.


The article provides the following statistics

  • 25% of children under 6 now have their own mobile
  • 12% of children under 6 spend more than 24 hours a week on their mobile
  • 80% of parents admit to not limiting the amount of time their children spend on games

Eventually it references a company called musicMagpie (an online store) but fails to provide a link to the research, and provides no information at all about the sampling methods used or other details of the survey (i.e. the actual questions, or how it was administered). I dug around for a few minutes, but couldn’t find the original survey either.

The above figures just didn’t sound believable to me, and they don’t tie in with OFCOM’s 2017 findings which say that only 5% of 5-7 year olds and 1% of 3-4 year olds have their own mobiles.

As it stands, because of the simple fact that I can’t find details of the survey, these research findings from musicMagpie cannot be verified, and so should be treated as totally invalid.

I’m actually quite suspicious that the two companies have colluded to generate some misleading click-bait statistics to drive people to their websites to increase advertising and sales revenue.

If you cannot validate your sources, then do not use the data!

Validity in Social Research

Validity refers to the extent to which an indicator (or set of indicators) really measure the concept under investigation. This post outlines five ways in which sociologists and psychologists might determine how valid their indicators are: face validity, concurrent validity, convergent validity, construct validity, and predictive validity. 


As with many things in sociology, it makes sense to start with an example to illustrate the general meaning of the concept of validity:

When universities question whether or not BTECs really provide a measure of academic intelligence, they are questioning the validity of BTECs to accurately measure the concept of ‘academic intelligence’.

When academics question the validity of BTECs in this way, they might be suspicious that BTECs are actually measuring something other than a student’s academic intelligence; BTECs might instead be measuring a student’s ability to cut, paste and modify just enough to avoid being caught out by plagiarism software.

If this is the case, then we can say that BTECs are not a valid measurement of a student’s academic intelligence.

How can sociologists assess the validity of measures and indicators?


There are a number of ways of testing measurement validity in social research:

  • Face validity – on the face of it, does the measure fit the concept? Face validity is simply achieved by asking others with experience in the field whether they think the measure seems to be measuring the concept. This is essentially an intuitive process.
  • Concurrent validity – to establish the concurrent validity of a measure, the researchers simply compare the results of one measure to another which is known to be valid (known as a ‘criterion measure’). For example, with gamblers, betting accounts give us a valid indication of how much they actually win or lose, but the wording of questions designed to measure ‘how much they win or lose in a given period’ can yield vastly different results. Some questions provide results which are closer to the hard financial statistics, and these can be said to have the highest degree of concurrent validity.
  • Predictive validity – here a researcher uses a future criterion measure to assess the validity of existing measures. For example we might assess the validity of BTECs as measurement of academic intelligence by looking at how well BTEC students do at university compared to A-level students with equivalent grades.
  • Construct validity – here the researcher is encouraged to deduce hypotheses from a theory that is relevant to the concept. However, there are problems with this approach as the theory and the process of deduction might be misguided!
  • Convergent validity – here the researcher compares her measures to measures of the same concept developed through other methods. Probably the most obvious example of this is using the British Crime Survey as a test of the validity of police-recorded crime statistics. The BCS shows us that different crimes, as measured by the police statistics, have different levels of convergent validity – vehicle theft is relatively high, vandalism relatively low, for example.

Source 

Bryman (2016) Social Research Methods

 

 

Why Do Voting Opinion Polls Get it Wrong So Often?

Surveys which ask how people intend to vote in major elections seem to get it wrong more often than not, but why is this?

Taking the averages of all nine pollsters’ first and then final polls for the 2017 UK general election, the predictions showed the Conservatives down from 46% to 44%, and Labour up from 26% to 36%.

voting intention 2017 general election

The actual vote share following the result of the general election shows the Conservatives at 42% and Labour at 40% share of the vote.

2017 election result share of vote UK

Writing in The Guardian, David Lipsey notes that ‘The polls’ results in British general elections recently have not been impressive. They were rightish (in the sense of picking the right winner) in 1997, 2001, 2005 and 2010. They were catastrophically wrong in 1992 and 2015. As they would pick the right winner by chance one time in two, an actual success rate of 67%, against success by pin of 50%, is not impressive.’

So why do the pollsters get it wrong so often?

Firstly, there is a plus or minus 2 or 3% statistical margin of error in a poll – so if a poll shows the Tories on 40% and Labour on 34%, this could mean that the real situation is Tory 43%, Labour 31% – a 12 point lead. Or it could mean both Tory and Labour are on 37%, neck and neck.
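To illustrate where that margin comes from, and what it does to a headline lead: for a simple random sample, the 95% margin of error on a proportion p is roughly 1.96 × √(p(1−p)/n). A sketch in Python, where the sample size of 1,000 is an assumption for illustration (typical of published polls), not a figure from this article:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of ~1,000 people gives roughly +/- 3 points in the worst case (p = 0.5):
print(f"+/-{margin_of_error(0.5, 1000):.1%}")

# Applying +/- 3 points to the example above (Tories on 40%, Labour on 34%):
moe = 0.03
tory, labour = 0.40, 0.34
low_gap = (tory - moe) - (labour + moe)   # Tories at their low, Labour at their high
high_gap = (tory + moe) - (labour - moe)  # Tories at their high, Labour at their low
print(f"True gap could be anywhere from {low_gap:.0%} to {high_gap:.0%}")
```

So a reported 6-point lead is consistent with anything from a dead heat to a 12-point lead, exactly as described above.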

This is demonstrated by these handy diagrams from YouGov’s polling data on voting intentions during the run up to the 2017 UK general election…

Voting Intention 2017 Election 


Seat estimates 2017 General Election


Based on the above, taking into account the margin of error, it was impossible to predict which of Labour and the Tories would win a higher proportion of the votes and more seats.

Secondly, the pollsters have no way of knowing whether they are interviewing a representative sample.

When approached by a pollster, most voters refuse to answer, and the pollster has very little idea whether these non-respondents are differently inclined from those who do respond. In the trade, this is referred to as polling’s ‘dirty little secret’.

Thirdly, the link between demographic data and voting patterns is less clear today. It used to be possible to triangulate polling data with demographic data from previous election results, but voter de-alignment means such data is now a less reliable source for triangulating opinion-poll data, leaving pollsters more in the dark than ever.

Fourthly, a whole load of other factors affected people’s actual voting behaviour in the 2017 election, and maybe the polls failed to capture this?

David Cowley from the BBC notes that…. ‘it seems that whether people voted Leave or Remain in 2016’s European referendum played a significant part in whether they voted Conservative or Labour this time…. Did the 2017 campaign polls factor this sufficiently into the modelling of their data? If younger voters came out in bigger numbers, were the polls equipped to capture this, when all experience for many years has shown this age group recording the lowest turnout?’

So it would seem that voting-intention surveys have always had limited validity, and that, if anything, this validity problem is getting worse…. after years of over-estimating the number of Labour votes, they’ve now swung right back the other way to underestimating the popularity of Labour.

Having said that, these polls are not entirely useless: they did still manage to predict that the Tories would win more votes and seats than Labour, they just got the difference between the two oh so very wrong.

The problem of obtaining representative samples (these days)

According to The Week (July 2017), the main problem with polling these days is that finding representative samples is getting harder. When Gallup was polling, the response rate was 90%; in 2015, ICM had to call 30,000 numbers just to get 2,000 responses. And those who do respond are often too politically engaged to be representative.
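Those figures work out to a strikingly low response rate; a quick sketch using the numbers quoted:

```python
calls_made = 30_000   # numbers ICM had to call in 2015
responses = 2_000     # completed responses obtained

response_rate = responses / calls_made
print(f"Response rate: {response_rate:.1%}")  # versus ~90% in Gallup's day
```

A response rate of under 7% leaves enormous room for the self-selected respondents to differ systematically from everyone who refused.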

 

Qualitative Data – Strengths and Limitations

A summary of the theoretical, practical and ethical strengths and weaknesses of qualitative data sources such as unstructured interviews, participant observation and documents.

Examples of Qualitative Data

  • Open-question questionnaires
  • Unstructured interviews
  • Participant observation
  • Public and private documents such as newspapers and letters.

Theoretical strengths

  • Better validity than for quantitative data
  • More insight (Verstehen)
  • More in-depth data
  • More respondent-led, avoids the imposition problem.
  • Good for exploring issues the researcher knows little about.
  • Preferred by Interpretivists

Practical strengths

  • A useful way of accessing groups who don’t like formal methods/ authority

Ethical strengths

  • Useful for sensitive topics
  • Allows respondents to ‘speak for themselves’
  • Treats respondents as equals

Theoretical limitations

  • Difficult to make comparisons
  • Not useful for finding trends or correlations.
  • Typically small samples, low representativeness
  • Low reliability as difficult to repeat the exact context of research.
  • Subjective bias of researcher may influence data (interviewer bias)
  • Disliked by Positivists

Practical limitations

  • Time consuming
  • Expensive per person researched compared to quantitative data
  • Difficult to gain access (PO)
  • Analyzing data can be difficult

Ethical limitations

  • Close contact means more potential for harm
  • Close contact means more difficult to guarantee anonymity and confidentiality
  • Informed consent can be an issue with PO.

Nature of Topic – When would you use it, when would you avoid using it?

  • Useful for complex topics you know little about
  • Not necessary for simple topics.

Signposting

This post has been written as a revision summary for students revising the research methods aspect of A-level sociology.

More in-depth versions of qualitative data topics can be found below…

Overt and Covert Participant Observation

The strengths and limitations of covert participant observation 

Interviews in Social Research 

Secondary Qualitative Data Analysis in Sociology 

Please click here to return to the homepage – ReviseSociology.com

Evaluate the View that Theoretical Factors are the most Important Factor Influencing Choice of Research Method (30)

Just a few thoughts on how you might answer this in the exam. 

Introduction – A variety of factors influence a Sociologist’s decision as to what research method they use: the nature of topic, theoretical, practical and ethical factors.

Theoretical factors – Positivism vs Interpretivism – Positivists are interested in uncovering the underlying general laws that lie behind human action. They thus prefer quantitative methods because these enable large samples to be drawn and allow for the possibility of findings being generalised to the wider population.

They also prefer quantitative methods because the data can be put into graphs and charts, allowing for easy comparisons to be made at a glance.

Another method that is linked to the positivist tradition is the experiment – laboratory experiments allow researchers to examine human behaviour in controlled environments and so accurately measure the effects of one specific variable on another.

Interpretivists generally prefer qualitative methods which are regarded as having high validity. Validity is the extent to which research provides a true and accurate picture of the aspect of social life that is being studied. Most sociologists would agree that there is little point doing sociological research if it is invalid.

Theoretical factors – Validity – Qualitative methods should be more valid because they are suitable for gaining an in depth and empathetic understanding of the respondent’s views of life. Qualitative methods are flexible, and allow for the respondents to speak for themselves, which avoids the imposition problem as they set the research agenda. Qualitative methods also allow for rapport to be built up between the respondent and the researcher which should encourage more truthful and in depth information to flow from the respondents.

The final reason why qualitative methods such as Participant Observation should yield valid data is that it allows for the researcher to see the respondents in their natural environment.

Theoretical factors – Reliability – this is the extent to which research can be repeated and the same results achieved. Positivists point out that it is more difficult for someone else to replicate the exact same conditions of a qualitative research project, because the researcher is involved in sustained contact with the respondents, and the characteristics and values of the researcher may influence the reactions of respondents.

Moreover, because the researcher is not ‘detached’ from the respondents, this may detract from his or her objectivity. Participant Observers such as Willis and Venkatesh have, for example, been accused of going native – where they become overly sympathetic with the respondents.

Interpretivists would react to this by pointing out that human beings are not machines and there are some topics that require close human contact to get to the truth – sensitive issues such as abuse and crime may well require sympathetic researchers that share characteristics in common with the respondents. Interpretivists are happy to forgo reliability if they gain in more valid and in depth data.

Representativeness – Obviously if one wants large samples one should use quantitative methods – as with the UK National Census. However, one may not need a large sample, depending on the research topic.

 Practical Factors – Practical issues also have an important influence on choices of research method. As a general rule quantitative methods cost less and are quicker to carry out compared to more qualitative methods, and the data is easier to analyse once collected, especially with pre-coded questionnaires which can simply be fed into a computer. It is also easier to get government funding for quantitative research because this is regarded as more scientific and objective and easier to generalise to the population as a whole. Finally, researchers might find respondents more willing to participate in the research if it is less invasive – questionnaires over PO.

However, qualitative methods, although less practical, may be the only sensible way of gaining valid data, or any data at all for certain topics – as mentioned above UI are best for sensitive topics while participant observation may be the only way to gain access to deviant and criminal groups.

Ethical Factors – Ethical factors also influence the choice of research methods. In order for research to gain funding it will need to meet the ethical guidelines of the British Sociological Association. How ethical a research method is depends on the researcher’s efforts to ensure that informed consent is achieved and that data is kept confidential and not used for purposes other than the research.

Real ethical dilemmas can occur with covert participant observation. However, sometimes the ethical benefits gained from a study may outweigh the ethical problems. McIntyre, for example, may have deceived the hooligans he researched but at least he exposed their behaviour.

Howard Becker also argued that there is an ethical imperative to doing qualitative research – these should be used to research the underdog, giving a voice to the marginalised whose opinions are often not heard in society.

Nature of topic – There are certain topics which lend themselves naturally to certain modes of research. Measuring how people intend to vote naturally lends itself to phone surveys for example while researching sensitive and emotive topics would be better approached through UI.

Conclusion – In conclusion there are a number of different factors that interrelate to determine a sociologist’s choice of research method – practical, ethical, theoretical and the nature of the topic under investigation. In addition, sociologists will evaluate these factors depending on their own individual values. Furthermore it is too simplistic to suggest that sociologists simply fall into two separate camps, Positivists or Interpretivists.  Many researchers use triangulation, combining different types of method so that the advantages of one will compensate for the disadvantages of another.

Theory and Methods A Level Sociology Revision Bundle 

If you like this sort of thing, then you might like my Theory and Methods Revision Bundle – specifically designed to get students through the theory and methods sections of  A level sociology papers 1 and 3.

Contents include:

  • 74 pages of revision notes
  • 15 mind maps on various topics within theory and methods
  • Five theory and methods essays
  • ‘How to write methods in context essays’.

Participant Observation in Social Research

Participant Observation is a qualitative research method in which the researcher joins in with the group under investigation. This post explores the theoretical, practical and ethical advantages and disadvantages of participant observation.

Participant Observation is where the researcher joins in with the group being studied and observes their behaviour. This post covers the theoretical, practical and ethical strengths and limitations of using overt and covert participant observation in social research.

It has been written primarily for students studying the research methods aspect of A-level sociology.


Participant observation is closely related to the ethnographic method (or ‘ethnography’), which consists of an in-depth study of the way of life of a group of people.

Ethnography is traditionally associated with anthropology, wherein the anthropologist visits a (usually) foreign land, gains access to a group (for example a tribe or village), and spends several years living with them with the aim of uncovering their culture. The ethnographic method involves watching what participants do, listening to them, engaging in probing conversations, and joining them in day to day tasks as necessary; it also involves investigating any cultural artefacts such as art work and any written work if it exists, as well as analysing what religious rituals and popular stories can tell us about the culture. Ethnographic research has traditionally involved taking copious field notes, and the resulting ‘monographs’ which are produced can take several months, if not a year or more to write up.

To cut a long-winded definition short, ethnography is basically the same as participant observation, but includes the writing up of a detailed account of one’s findings:

Ethnography = participant observation + a detailed written account of one’s findings.

Participant Observation and the use of other methods

Most participant observers (or ‘ethnographers’) will combine their observations with other methods – most obviously unstructured interviews – and some will combine them with more formal questionnaire-based research, normally towards the end of the study period, meaning many of these studies are actually mixed-methods studies. Nonetheless, participant observation is still classified, for the purposes of A-level sociology, as a ‘qualitative’ method.

Overt and Covert Observation

An important distinction in participant observation/ethnography is between covert and overt observation.

  • Overt Observation – this is where the group being studied knows they are being observed.
  • Covert Observation – this is where the group being studied does not know they are being observed, or where the researcher goes ‘undercover’.

These both have their strengths and limitations. Overt research is obviously more ethical because of the lack of deception, and it allows the researcher to ask probing questions and use other research methods. Covert research may be the only way to gain access to deviant groups; it may enable fuller ‘immersion’ in the host culture and avoids the ‘Hawthorne effect’. However, ethically it involves deception, and it can be very stressful for the researcher.

The Strengths of Participant Observation

Theoretical Advantages

The most significant strength of both types of participant observation is the high degree of validity the method achieves. There are at least five reasons for this:

You can observe what people do, not just what they say they do – in contrast to methods that rely on self-reporting, such as questionnaires and interviews, participant observation allows the researcher to witness actual behaviour first-hand.

Participant observation takes place in natural settings – this should mean respondents act more naturally than they would in a laboratory or during a more formal interview, so the Hawthorne effect should be reduced, especially with covert research. You also get more of a feel for respondents’ actions in context, actions which might seem out of place if observed in an artificial research environment.

Digging deep and gaining insight – the length of time ethnographers spend with a community means that close bonds can be established, enabling the researcher to dig deeper than with other methods and uncover things which may be hidden from all other means of enquiry.

Verstehen/empathetic understanding – participant observation allows the researcher to fully join the group and to see things through the eyes (and actions) of the people in the group. Joining in allows the researcher to gain empathy through personal experience. This closeness to people’s lived reality means that participant observation can yield uniquely personal, authentic data.

Flexibility and generating new ideas – with questionnaires, researchers begin with pre-set questions, so even before collecting any data they have decided what is important. The problem is that the questions the researcher thinks are important may not be the ones the subjects think are important. By contrast, participant observation is much more flexible: it allows the researcher to enter the situation with an open mind, and new lines of enquiry can be followed up as they emerge.

Practical Advantages

There are few practical advantages to this method, but participant observation may be the only method for gaining access to certain groups. For example, a researcher using questionnaires to study street gangs is likely to be seen as an authority figure and unlikely to be accepted.

Ethical Advantages

Interpretivists prefer this method because it is respondent led – it allows respondents to speak for themselves and thus avoids a master-client relationship which you get with more quantitative methods.

The Limitations of Participant Observation

Theoretical Disadvantages

One theoretical disadvantage is the low degree of reliability. It would be almost impossible for another researcher to repeat a participant observation study, given that it relies on the personal skills and characteristics of the lone researcher.

Another theoretical disadvantage is the low degree of representativeness. Sociologists who use quantitative research methods study large, carefully selected, representative samples that provide a sound basis for making generalisations. In contrast, the groups used in participant observation studies are usually unrepresentative, because they are often accessed through snowball sampling and are thus haphazardly selected.

Critics also question how valid participant observation really is, arguing that the method lacks objectivity. It can be very difficult for the researcher to avoid subjectivity and forming biased views of the group being studied. Researchers also decide what is significant and worth recording and what is not, so the data collected depends on the values of the researcher. In extreme cases, researchers might ‘go native’: they become so sympathetic to the respondents that they omit any negative analysis of their way of life.

A further threat to validity is the Hawthorne Effect, where people act differently because they know they are being observed, although participant observers would counter this by saying that people can’t keep up an act over long time periods: they will eventually relax and be themselves.

Also, the method lacks a concept of social structures such as class, gender or ethnicity. By focussing on the participants’ own interpretations of events, the researcher tends to ignore the wider social structures, which means giving only a partial explanation.

Practical Disadvantages

Firstly, this method tends to be time-consuming and expensive in relation to the relatively small number of respondents. It can take time to gain trust and build rapport, so it may take days, weeks or even months before the respondents really start to relax in the presence of the researcher.

Participant Observation also requires observational and interpersonal skills that not everyone possesses – you have to be able to get on with people and understand when to take a back seat and when to probe for information.

Gaining access can also be a problem – many people will not want to be researched this way, and where covert research is concerned, researchers are limited by their own characteristics. Not everyone can pass as a Hells Angel if covert observation is being used!

Ethical Disadvantages

Ethical problems are mainly limited to Covert Participant Observation, in which respondents are deceived and thus cannot give informed consent to participate in the research.

Legality can also be an issue in covert research, where researchers working with deviant groups may have to commit illegal acts to maintain their cover.

Some advantages of Overt compared to Covert Observation

Students often think that covert observation is superior to overt observation; however, there are five reasons why overt observation might be the better choice:

1. You can ask awkward, probing questions

2. You can combine it with other methods

3. You can take on the role of the ‘professional stranger’ – respondents might tell you things because they know you are not ‘one of them’

4. It is less stressful and risky for the researcher

5. It is easier to do follow up studies.

Related Posts

Some recent examples of participant observation studies within sociology

Learning to Labour by Paul Willis – A Summary

Please click here to return to the homepage – ReviseSociology.com

Sources:

Bryman (2016) Social Research Methods

Chapman et al (2016) Sociology AQA A-level Year 1 and AS Student Book