How many teens are on antidepressants?

A recent survey found that one third of teens have been prescribed antidepressants, but this is probably a result of sampling bias.

One in three teenagers are on antidepressants, according to an iNews article published in August 2022.

You can read the full article here: One in three teens on antidepressants as lack of mental health services puts pressure on GPs to help.

Now it may well be tough being a teenager these days, especially during the Covid-19 Pandemic, but this figure does sound alarmingly high!

And I’m not the only one who thinks so – in fact this statistic may not even be accurate, according to some deeper research and analysis by Nathan Gower on behalf of Radio 4’s More or Less show.

The figure above comes from a survey conducted in July 2022 by a charity called Stem4, which supports teenage mental health.

This was a broad-ranging survey looking at teenagers’ mental health and wellbeing overall, based on a ‘general national sample’ of 2007 teenagers. The question which yielded the results that led to the headlines above was:

“Have you been prescribed antidepressants to treat depression or other mental health conditions?”

37% of 12 to 18 year old respondents reported that they had been prescribed antidepressants at some point in their life, which is where the one in three figure above comes from.

NB Stem4 was asked to add that question to their survey by Good Morning Britain, and the two then teamed up and had a great time discussing the findings (uncritically of course)…

Official Statistics on Antidepressant Prescriptions

The problem with the above survey findings is that official statistics show VERY different proportions.

Dr Ruth Jack, a Senior Fellow in the Faculty of Medicine and Health Sciences at the University of Nottingham, has also conducted research on the prescription rates of antidepressants to teenagers in England (rather than the whole of the U.K.).

Her methodology involved looking at hundreds of thousands of medical records from G.P.s in England up to 2017, and her findings are very different from the survey results above.

Of 12-18 year olds in 2017, just 2.3% had ever been prescribed an antidepressant.

That 2.3% should cover most prescriptions because although specialist mental health practices and hospitals can also prescribe antidepressants to teenagers, most prescriptions revert back to G.P.s.

Another alternative source is NHS England, which publishes data on how many patients are prescribed antidepressants in a year. NHS England uses different age groupings, but the findings are similar to Dr Jack’s – in the low single-digit percentages.

So both of the above sources, based on official NHS statistics and doctors’ own records, show much lower figures than the survey conducted by Stem4.

There is a MASSIVE difference: 37% of 12-18 year olds according to Stem4’s survey, based on the self-reporting of the teenagers themselves, compared to 2.3% according to the doctors’ own records – a difference of more than ten times.

Stem4’s survey reports higher rates of prescription among younger teenagers compared to older teenagers. However, both the NHS data and Dr Jack’s research show the opposite: lower rates for younger teens and higher rates for older teens – with prescription numbers getting significantly higher from age 16 onwards.

Explaining the differences

The survey data is from 2022 while Dr Jack’s data only goes up to 2017, so it could be that the antidepressant prescription rate for teenagers increased radically during the pandemic – but this would mean there had been a HUGE 20-fold increase!

But this massive recent increase is unlikely: the NHS data we have runs up to 2021, and it suggests that prescriptions rose by about 10% during Covid – not 20-fold!

When interviewed by More or Less, the CEO of Stem4 said that the objective of their survey was to hear the voices of young people by giving them an opportunity to express themselves, and that they saw no reason to hold back findings which tell us what young people feel, even if they are very different from the official statistics.

To support her survey findings she cited a Freedom of Information request, released in August 2021, which suggested that GP prescriptions for those aged 5 to 12 had increased by 40% between 2015 and 2021.

The More or Less interviewer seemed to be inviting her to concede that her findings were completely invalid, but she wasn’t backing down, suggesting that the real rate of teen prescriptions was probably halfway between her survey’s figures and the official data.

However, it was also clear that the data scientists from More or Less were having none of this – it simply isn’t possible that one third of teens have ever been on prescription antidepressants; the survey figure just doesn’t square with the official numbers.

Credible Data versus Eye Catching, Distorted Data

Dr Jack’s data and the NHS data are valid and reliable: they accurately reflect the underlying reality, giving us the actual rate at which teenagers are prescribed medication, and this data can help us tackle the problems of teenage mental ill health.

Stem4’s research is an invalid data set which has produced a distorted picture of reality in order to make an eye-catching headline and bring people’s attention to Stem4 and the mental health support services they offer.

Explaining Stem4’s Misleading Survey Results

The first thing to note is that Stem4’s research is probably telling us something different from the NHS data – the former asks ‘have you ever been prescribed’ while the NHS data counts current prescriptions in a given year.

So if it’s a different 2.3% every year, then over the seven years from 12 to 18 we get to around 15%.

But this is still a long way off the reported 37%.
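This back-of-envelope arithmetic can be sketched in a few lines of Python (purely my own illustration of the ‘ever prescribed’ calculation, not a figure from the programme):

```python
# Rough check: if a different 2.3% of teenagers were prescribed
# antidepressants each year, what share would have EVER been
# prescribed by the end of the 12-18 age range?

annual_rate = 0.023  # Dr Jack's 2017 figure
years = 7            # ages 12 through 18 inclusive

# Simple upper bound: assume no teenager is counted twice
simple_sum = annual_rate * years

# Slightly more realistic: treat each year as an independent
# 2.3% chance, so the 'ever prescribed' share compounds
compounded = 1 - (1 - annual_rate) ** years

print(f"simple sum: {simple_sum:.1%}")   # 16.1%
print(f"compounded: {compounded:.1%}")   # 15.0%
```

Either way of doing the sum lands at around 15-16%, nowhere near the survey’s 37%.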

Personally I think that sampling bias probably explains the rest of the difference.

Stem4’s own report on the survey results tells us that they used a company called SurveyGoo to conduct the research, and SurveyGoo specialises in online surveys.

There is a chance that the way the survey was marketed made it more appealing to those teenagers who have had mental health problems in the past.

Say that SurveyGoo has 10,000 teens on its panel and the survey goes out to all of them – a higher proportion of teens who have had depression would be interested in answering it compared to those who hadn’t.
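A quick simulation shows how this kind of self-selection could inflate the headline figure. All the numbers here are invented for illustration – we don’t know SurveyGoo’s actual panel size or response rates:

```python
import random

random.seed(42)

# Invented numbers purely to illustrate the mechanism of
# self-selection bias - none of these come from Stem4 or SurveyGoo.
PANEL_SIZE = 10_000
TRUE_PREVALENCE = 0.15        # assume 15% have ever been prescribed
RESPONSE_IF_AFFECTED = 0.60   # affected teens are keener to respond
RESPONSE_IF_NOT = 0.15        # most unaffected teens ignore the survey

responders = 0
affected_responders = 0
for _ in range(PANEL_SIZE):
    affected = random.random() < TRUE_PREVALENCE
    p_respond = RESPONSE_IF_AFFECTED if affected else RESPONSE_IF_NOT
    if random.random() < p_respond:
        responders += 1
        affected_responders += affected  # bool counts as 0 or 1

observed = affected_responders / responders
print(f"true prevalence in panel:    {TRUE_PREVALENCE:.0%}")
print(f"prevalence among responders: {observed:.0%}")
```

With these made-up response rates the observed prevalence among responders comes out at roughly 40%, even though the true rate in the panel is only 15% – exactly the kind of gap we see between the survey and the official records.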

The problem here is that we can’t easily go back and check the data, as it’s not freely available for public scrutiny.

Biased Research…?

There is an even darker side to this. This could be a case of deliberately misleading statistics being publicised for commercial gain.

The question above was asked by Good Morning Britain, a sensationalist tabloid media show which wants eyeballs, and this is an eye-catching headline – so why would they care about a possibly biased sample?

And the same goes for Stem4 – they make their money selling mental health and wellbeing packages to schools and other institutions, so it is in their interests to exaggerate the extent of teen depression, and especially ‘prescription abuse’, because they are offering earlier intervention strategies – at a cost, of course!

SignPosting and Related Posts

This should be of interest for the research studies module.

Please click here to return to the homepage – ReviseSociology.com

Teenage girls think there’s a lot of sexual harassment in schools, but is there?!?

A recent OFSTED report on sexual harassment in schools and colleges examined the extent of sexual harassment in schools, but to my mind it tells us very little about the actual extent of sexual harassment in schools.

The researchers visited 32 schools and colleges and interviewed 900 students about their experiences of sexual harassment, and at first glance the results look pretty bleak, but you need to be VERY CAREFUL with what these results tell us.

They tell us the perception of sexual harassment, not the actual rates of sexual harassment.

Girls’ perception of sexual harassment in schools

The following types of sexual harassment were reported as happening ‘a lot’ or ‘sometimes’ to ‘people my age’:

  • sexist name-calling (92%)
  • rumours about their sexual activity (81%)
  • unwanted or inappropriate comments of a sexual nature (80%)
  • sexual assault of any kind (79%)
  • feeling pressured to do sexual things that they did not want to (68%)
  • unwanted touching (64%)

Girls’ Perception of Sexual harassment online

And the perceived extent of sexual harassment online…

  • being sent pictures or videos they did not want to see (88%)
  • being put under pressure to provide sexual images of themselves (80%)
  • having pictures or videos that they sent being shared more widely without their knowledge or consent (73%)
  • being photographed or videoed without their knowledge or consent (59%)
  • having pictures or videos of themselves that they did not know about being circulated (51%)

The problem is in the wording of the questions….

Students were basically asked ‘how bad is sexual harassment among people my age?’ and, for example, 88% of girls said that being sent pictures they don’t want to see is common among their peers.

This isn’t the same as ‘88% of female students have received pictures they didn’t want’.

All this research tells us is about teenage girls’ perceptions of sexual harassment among their peers, not the actual rate of sexual harassment.
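A small simulation illustrates why perception questions can produce near-universal figures even when individual victimisation is much rarer. The numbers below are entirely invented, purely to show the mechanism:

```python
import random

random.seed(1)

# Invented illustration: each student answers about her whole peer
# group, so one incident can show up in dozens of answers.
N_GROUPS = 40        # peer groups (e.g. classes)
GROUP_SIZE = 25      # students per group
INCIDENCE = 0.08     # 8% have personally experienced it

groups = [[random.random() < INCIDENCE for _ in range(GROUP_SIZE)]
          for _ in range(N_GROUPS)]

n_students = N_GROUPS * GROUP_SIZE
personal_rate = sum(sum(g) for g in groups) / n_students
# a student reports 'this happens to people my age' if ANYONE
# in her peer group has experienced it
perception_rate = sum(any(g) for g in groups) / N_GROUPS

print(f"personally experienced: {personal_rate:.0%}")
print(f"say it happens to 'people my age': {perception_rate:.0%}")
```

With an 8% personal incidence, typically the large majority of peer groups contain at least one affected student, so most respondents can truthfully say it happens to ‘people my age’ – a perception figure far higher than the victimisation rate.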

The report also found that many girls think schools are completely ineffective at dealing with cases of sexual harassment.

So teenage girls think there’s a lot of sexual harassment, but is there?

It is worth knowing that teenage girls THINK there’s a lot of sexual harassment going on.

But having read the report I’m left wanting to know the ACTUAL extent of sexual harassment, which is much more difficult to measure, of course.

I guess ethics got in the way of OFSTED doing real research

I get it, asking young people about their ACTUAL PERSONAL EXPERIENCES of sexual harassment isn’t something you can do just by rocking up for a day or two and doing a few interviews.

So instead OFSTED have got around this by generalising the questions.

The problem is that ‘90% of girls think that sexual harassment of some kind occurs in their school’ really tells us NOTHING about the extent of the problem.

It’s a bleak topic, matched by the bleak pointlessness of this research.

Researching in Classrooms

The classic method for researching in classrooms is non-participant observation, the method used by OFSTED inspectors. However, there are other methods available to the researcher who wishes to conduct research on actual lessons within schools.

Classrooms are closed environments with very clear rules of behaviour, typically containing around 20-30 students, one teacher and maybe one learning assistant, with lessons usually lasting from 40 minutes to an hour.

Non-participant observation is the obvious choice of method for use in a classroom, with the researcher taking on a role much like that of the OFSTED inspector.

The fact that there are so many students in one place, and potentially hundreds of micro-interactions in even just a 40-minute lesson gives the observational researcher plenty to focus on, so classrooms are perhaps some of the most data rich environments within education.

Arguably the most useful way of collecting observational data would be for the researcher to have an idea about what they are looking for in advance – possibly how many times teachers praise which pupils, or how many times disruptive behaviour takes place, and how the teacher responds, rather than trying to watch everything, which would be difficult.
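A pre-defined observation schedule of this kind is essentially a structured tally sheet. As a sketch of the idea (the categories and names here are my own invention, not from any standard instrument):

```python
from collections import Counter

# Minimal sketch of a structured observation schedule: the researcher
# decides the event categories in advance and simply tallies them.
EVENT_TYPES = {"praise", "reprimand", "disruption", "question_asked"}

tallies: Counter = Counter()

def record(pupil: str, event: str) -> None:
    """Tally one pre-categorised classroom event."""
    if event not in EVENT_TYPES:
        raise ValueError(f"uncategorised event: {event}")
    tallies[(pupil, event)] += 1

# during the lesson the observer just ticks boxes...
record("pupil_A", "praise")
record("pupil_A", "praise")
record("pupil_B", "disruption")

print(tallies[("pupil_A", "praise")])   # 2
```

The point of fixing the categories in advance is that two observers using the same schedule should produce comparable counts – the quantitative, reliable end of observational research.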

And students will probably be used to OFSTED inspections, or other staff in the school dropping in to observe lessons occasionally, thus it should be relatively easy for a researcher to blend into the background and observe without being too obtrusive.

The fact that classrooms are usually organised in a standardised way (they tend to be similar sizes, with only a few possible variations on desk layouts) also means the researcher has a good basis for reliability – any differences they observe in teacher or student behaviour across classrooms or schools are probably because of the teachers or pupils themselves, not differences in the environments they are in (at least to an extent!).

There are, however, some limitations with researching in classrooms.

Gaining access could be a problem – not all teachers are going to be willing to have a researcher observing them. They may regard their classroom as their environment and think they have little to gain from an outsider observing them – although if a researcher is a teacher themselves, they could maybe offer some useful feedback about teaching strategies applied by teachers.

Teachers will probably act differently when observed – if you think back to OFSTED inspections, teachers usually ‘up their game’ and make sure to be more inclusive and encouraging; the same is likely to happen whenever anyone observes.

Similarly, pupils may behave differently – they may be more reluctant to contribute because of a researcher being present, or disruptive students may act up even more.

Classrooms are unique, controlled environments, with only two roles (teachers and students) and clear norms. Teachers and students alike will not quite be themselves in these highly unusual situations.

Finally, researchers wouldn’t be able to dig deeper and ask probing questions during a lesson, unless they took on the role of participant observer by becoming a learning assistant – but even then they would be limited in what they could ask if they didn’t want to disrupt the flow of the lesson.

It’s not all about direct non-participant observation

Researchers might choose a more participatory approach to researching in classrooms, by training to be a learning assistant or even a teacher, and doing much longer term, unstructured observational research with students.

This would enable them to really get to know the students within a lesson, and make it very easy to ask deeper questions outside of lessons.

The problem with this would be that they would then be part of the educational establishment and students may not wish to open up to them precisely because of that reason.

A further option would be to put up cameras and observe from a distance, but this might come up against some resistance from both teachers and students, and it would be more difficult to ask follow up questions if reviewing the recordings some time after the actual lesson took place.
The Global Drug Survey – a good example of invalid data due to bias?

86% of the global population have used drugs in the last year, more people have used cannabis than tobacco, and almost 30% of the world’s population have used Cocaine in the last year – at least according to the 2019 Global Drug Survey.

This survey asked adults in 36 countries about their use of drugs and alcohol.

According to the same survey, the British get drunk more often than people in any other nation.

In Britain, people stated they got drunk an average of 51 times last year, with the U.S., Canada and Australia not far behind; the global average was 33 times.

Where Cocaine use was concerned, 73% of people in England said they had tried it compared to 43% globally.

How valid is this data?

I don’t know about you, but to me these figures seem very high, and I’m left wondering whether they’ve been skewed upwards by selective sampling or loosely worded questions.

This report is produced by a private company who sell products related to addiction advice, and I guess their market is national health care services.

Seems to me like it’s in their interests to skew the data upwards to give themselves more of a purpose.

I certainly don’t believe the average person in the UK gets drunk once a week, or that almost three quarters of the population have tried Cocaine.

Sources

The Week 25th May 2019

Applying material from Item C and your knowledge of research methods, evaluate the strengths and limitations of using participant observation to investigate pupil exclusions

This 20-mark methods in context question came up in the 2018 A-level sociology 7192/1 paper; below is the full question and some thoughts about how you might go about answering it!

Applying material from Item C and your knowledge of research methods, evaluate the strengths and limitations of using participant observation to investigate pupil exclusions

Hints for answering

The item mentions several different types of exclusion; you should address them all and contrast the usefulness of participant observation for researching the different types, e.g.…

  1. Permanent (although you are really directed away from this)
  2. Fixed (1/20 pupils)
  3. Pupils excluded from lessons (‘no reliable data’)
  4. Self-exclusion for truanting
  5. Self-exclusion by ‘switching off’.

You’re also directed to discuss particular types of students – those with special educational needs and those from traveller backgrounds for example.

The paragraph on the method directs you to discuss the role you would take amongst other things. NB the method is participant observation in general, so you could contrast overt and covert.

Here are some of the points you could develop:

  • Overt participant observation as a learning support assistant is probably the only way you could do this – useful for gaining insight into pupils being excluded from lessons and those self-excluding by switching off, but not for truancy.
  • If you took that role you could get close to SEN students – some of the students more likely to be excluded, but less so for traveller children.
  • An SEN learning support assistant could view more than one teacher/ classroom over the course of a few weeks, so reasonable representativeness.
  • You could check for teacher bias against certain students in terms of why they get excluded – but this might be difficult IF you are actively trying to support learners in your role.
  • Also, your presence might improve behaviour and lessen the likelihood of exclusion.
  • Practically you’re limited to one school.
  • To be ethical you would have to tell management your true purpose for wanting to join in as an assistant (maybe investigating the teachers with the highest exclusion rates), but for validity purposes you would want to withhold this, which would be unethical.
  • Practically you would still have to be trained as an LA.
  • Exclusions are rare, so you might be hanging around a long time waiting for one to happen.
  • You could embed yourself within a group of traveller or SEN children to get their take on school, which might give you insight, but this is not practical for an adult researcher.
  • Ultimately you’d have to combine it with unstructured interviews to really find out why exclusions take place, which is possible if you’re overt, but not if you’re covert.

Not an exhaustive list, just a few ideas…. NB you would have to use more methods concepts.

Sources 

Click to access AQA-71921-QP-JUN18.PDF

Using interviews to research education

Interviews are one of the most commonly used qualitative research methods in the sociology of education. In this post I consider some of the strengths and limitations of using interviews to research education, focussing mainly on unstructured interviews.

This post is primarily designed to get students thinking about methods in context, or ‘applied research methods’. Before reading through this students might like to brush up on methods in context by reading this introductory post. Links to other methods in context advice posts can be found at the bottom of the research methods page (link above!)

Practical issues with interviews  

Gaining access may be a problem as schools are hierarchical institutions, and the lower down the hierarchy an individual is, the more permissions the interviewer will require to gain access to interview them. For example, you might require the headmaster’s permission to interview a teacher, while to interview pupils you’ll require both the headmaster’s and their parents’ permission.

However, if you can gain consent, and get the headmaster onside, the hierarchy may make doing interviews more efficient – the headmaster can instruct teachers to release pupils from lessons to do the interviews, for example.

Interviews tend to take more time than questionnaires, so finding the time to do them may be a problem – busy teachers are unlikely to want to give up lesson time for interviews, and pupils are unlikely to want to spend their free time in breaks or after school taking part in them.

However, if the topic is especially relevant or interesting, this will be less of a problem, and the interviewer could use incentives (rewards) to encourage respondents to take part. Group interviews would also be more time efficient.

Younger respondents tend to have more difficulty in keeping to the point, and they often pick up on unexpected details in questions, which can make interviews take longer.

Younger respondents may have a shorter attention span than adults, which means that interviews need to be kept short.

Validity issues

Students may see the interviewer as the ‘teacher in disguise’ – they may see them as part of the hierarchical structure of the institution, which could distort their responses. This could make pupils give socially desirable responses. With questions about homework, for example, students may tell the interviewer they are doing the number of hours that the school tells them they should be doing, rather than the actual number of hours they spend doing homework.

To overcome this the researcher might consider conducting interviews away from school premises and ensuring that confidentiality is guaranteed.

Young people’s intellectual and linguistic skills are less developed than adults’, and the interviewer needs to keep in mind that:

  • They may not understand longer words or more complex sentences.
  • They may lack the language to be able to express themselves clearly
  • They may have a shorter attention span than adults
  • They may read body language differently from adults

Having said all of that, younger people are probably going to be more comfortable speaking rather than reading and writing if they have poor communication skills, which means interviews are nearly always going to be a better choice than questionnaires where younger pupils are concerned.

To ensure greater validity in interviews, researchers should try to do the following:

  • Avoid using leading questions as young people are more suggestible than adults.
  • Use open ended questions
  • Not interrupt students’ responses
  • Learn to tolerate pauses while students think.
  • Avoid repeating questions, which can make students change their first answer because they think it was wrong.

Unstructured interviews may thus be more suitable than structured interviews, because they make it easier for the researcher to rephrase questions if necessary.

The location may affect the validity of responses – if a student associates school with authority, and the interview takes place in a school, then they are probably more likely to give socially desirable answers.

If the researcher is conducting interviews over several days, later respondents may get wind of the topics/ questions which may influence the responses they give.

Ethical issues

Schools and parents may object to students being interviewed about sensitive topics such as drugs or sexuality, so they may not give consent.

To overcome this the researcher might consider doing interviews with the school alongside their PSHE programme.

Interviews may be unsettling for some students – they are, after all, artificial situations. This could be especially true of group interviews, depending on who is making up the groups.

Group interviews

Peer group interviews may well be a good choice for researchers studying topics within the sociology of education.

Advantages 

  • Group interviews can create a safe environment for pupils
  • Peer-group discussion should be something pupils are familiar with from lessons
  • Peer-support can reduce the power imbalance between interviewer and students
  • The free-flowing nature of the group interview could allow for more information to come forth.
  • The group interview also allows the researcher to observe group dynamics.
  • They are more time efficient than one on one interviews.

Disadvantages

  • Peer pressure may mean students are reluctant to be honest for fear of ridicule
  • Students may also encourage each other to exaggerate or lie for laughs.
  • Group interviews are unpredictable, and very difficult to standardise and repeat, which means they are low in reliability.

Gender and Education: Good Resources

Useful links to quantitative and qualitative research studies, statistics, researchers, and newspaper articles relevant to gender and education. These links should be of interest to students studying A-level and degree-level sociology, as well as anyone with a general interest in the relationship between gender, gender identity, differential educational achievement and differences in subject choice.

Just a few links to kick-start things for now, to be updated gradually over time…

General ‘main’ statistical sites and sources

The latest GCSE results analysed by gender from the TES

A Level Results from the Joint Council for Qualifications – broken down by gender and region

Stats on A level STEM subjects – stats on the gender balance are at the end (70% of psychology students are female compared to only 10% of computer science students)

General ‘Hub’ Qualitative resources 

The Gender and Education Association – works to eradicate sexism and gender inequality within education. Promotes a Feminist pedagogy (theory of learning).

A link to Professor Becky Francis’ research, which focuses mainly on gender differences in educational achievement – at the time of writing (November 2017) her main focus seems to be on girls’ lack of access to science, and on banding and streaming (the latter not necessarily gender-focused)

Specific resources for exploring gender and differential educational achievement

Education as a strategy for international development – despite the fact that girls are outperforming boys in the United Kingdom and most other developed countries, globally girls are underachieving compared to boys in most countries. This link takes you to a general post on education and social development, many of the links explore gender inequality in education.

Specific resources for exploring gender and subject choice 

Dolls are for Girls, Lego is for Boys – a Guardian article which summarises a study by Becky Francis on gender, toys and learning. Francis asked the parents of more than 60 three- to five-year-olds what they perceived to be their child’s favourite toy, and found that while parental choices for boys were characterised by toys that involved action, construction and machinery, there was a tendency to steer girls towards dolls and perceived “feminine” interests, such as hairdressing.

Girls are Logging Off – A BBC article which briefly alerts our attention to the small number of girls opting to do computer science.

Sociology Crime and Deviance Research Project, Summer Term 2018

This is my very simple ‘research’ project task for the summer timetable 2018. I’m experimenting with going back to a very open-ended project!

The AQA Sociology specification states that you should be able to cite examples of your own research, hence this summer term research project (which is also useful for introducing theories of crime and deviance).

Task

Select one ‘type’ of crime from the list below and produce a 1500 -2000 word report applying perspectives and incorporating some independent research exploring how and why this crime occurs.

Examples of crimes you might look at

  • Burglary
  • Theft
  • Domestic violence
  • Corporate crime
  • State violence
  • Fraud
  • Knife crime/ gun crime
  • Subcultures
  • Drug dealing
  • Terrorism
  • Any other type of crime or deviance of your choice

Section 1: Introduction

Outline which crime you’ve chosen to focus on, define it, and provide a few basic statistics to outline its extent.

Section 2: Theoretical context

Summarise how conflict, consensus and action theories would explain this crime. Use the following links or your main textbooks as necessary:

Section 3: Research summary

Find at least three (ideally more) pieces of contemporary (last 10 years) independent research conducted on this crime – this might be by official government sources, or specialist criminologists.

Summarise these pieces of research and use them to evaluate the above perspectives (which are supported, which are not).

Section 4: Methods section (optional)

If you find there are significant gaps in your knowledge not covered by available literature, outline what research methods you might employ to find out more.

Timing: you have until the end of summer term timetable to hand in a 1500-2000 word research project.

Criticisms of Quantitative Research

Bryman (2016) identifies four criticisms of quantitative research:

Quantitative researchers fail to distinguish people and social institutions from the world of nature

Schutz (1962) is the main critic here.

Schutz and other phenomenologists accuse quantitative social researchers of treating the social world as if it were no different from the natural world. In so doing, quantitative researchers tend to ignore the fact that people interpret the world around them, whereas this capacity for self-reflection cannot be found among the objects of the natural sciences.

The measurement process possesses an artificial and spurious sense of precision and accuracy

Cicourel (1964) is the main critic here.

He argues that the connection between the measures developed by social scientists and the concepts they are supposed to be revealing is assumed rather than real – basically measures and concepts are both effectively ‘made up’ by the researchers, rather than being ‘out there’ in reality.

A further problem is that quantitative researchers assume that everyone who answers a survey interprets the questions in the same way – in reality, this simply may not be the case.

The reliance on instruments and procedures hinders the connection between research and everyday life

This issue relates to the question of ecological validity.  

Many methods of quantitative research rely heavily on administering research instruments to participants (such as structured interviews or self-completion questionnaires), or controlling situations to determine effects.

However, these instruments simply do not ‘tap into’ people’s real life experiences – for example, many of the well known lab experiments on the A-level sociology syllabus clearly do not reflect real life, while surveys which ask people about their attitudes towards immigration, or the environment, do not necessarily tell us about how people act towards migrants or the environment on a day to day basis.

The analysis of relationships between variables creates a static view of social life that is independent of people’s lives. 

The main critic here is Blumer (1956).

Blumer (1956) argued that studies that seek to bring out the relationships between variables omit ‘the process of interpretation or definition that goes on in human groups’.

This is a combination of criticisms 1 and 3 above, but adds on an additional problem – that in isolating out variables, quantitative research creates an artificial, fixed and frozen social (un)reality – whereas social reality is (really) alive and constantly being created through processes of interaction by its various members.

In other words, the criticism here is that quantitative research is seen as carrying an objective ontology that reifies the social world.

The above criticisms have led interpretivists to prefer more qualitative research methods. However, these too have their limitations!

Sources:

Bryman (2016) Social Research Methods

The Four Main Concerns of Quantitative Research

Quantitative researchers generally have four main preoccupations: they want their research to be measurable, to focus on causation, to be generalisable, and to be replicable.

These preoccupations reflect epistemologically grounded beliefs about what constitutes acceptable knowledge, and can be contrasted with the preoccupations of researchers who prefer a qualitative approach.

Measurement 

It may sound like it’s stating the obvious – but quantitative researchers are primarily interested in collecting numerical data, which means they are essentially concerned with counting social phenomena, which will often require concepts to be operationalised.
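As a concrete sketch of operationalisation (using religiosity, a standard textbook example of my own choosing, not one taken from Bryman’s text):

```python
# Operationalising an abstract concept ('religiosity') into countable
# indicators. The indicators and weights are arbitrary choices by the
# researcher, which is exactly why critics say such measures are
# 'made up' rather than found in reality.

def religiosity_score(attendance_per_month: int,
                      prays_daily: bool,
                      importance_0_to_3: int) -> float:
    """Combine three indicators into a single 0-10 score.

    A different researcher could operationalise 'religiosity'
    quite differently and get different results.
    """
    score = float(min(attendance_per_month, 4))        # up to 4 points
    score += 3.0 if prays_daily else 0.0               # up to 3 points
    score += float(min(max(importance_0_to_3, 0), 3))  # up to 3 points
    return score

print(religiosity_score(2, True, 3))   # 8.0
```

Once the concept has been reduced to a number like this, it can be counted, correlated and compared across a sample – which is precisely what the quantitative researcher wants.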

Causality 

In most quantitative research there is a strong concern with explanation: quantitative researchers are more concerned with explaining why things are as they are, rather than merely describing them (description tends to be the focus of more qualitative research).

It follows that it is crucial for quantitative researchers to effectively isolate variables in order to establish causal relationships.

Generalisation 

Quantitative researchers tend to want their findings to be representative of wider populations, rather than just the sample involved in the study, thus there is a concern with making sure appropriate sampling techniques are used.

Replication

If a study is repeatable then it is possible to check that the original researchers’ own personal biases or characteristics have not influenced the findings: in other words, replication is necessary to test the objectivity of an original piece of research.

Quantitative researchers tend to be keen on making sure studies are repeatable, although in practice most studies are never repeated because there is little status attached to doing so.

Source:

Bryman (2016) Social Research Methods
