Structured Interviews in Social Research

Structured interviews are a standardised way of collecting data typically using closed, pre-coded surveys.

A structured interview is one in which interviewers ask pre-written questions to respondents in a standardised way, following an interview schedule. As far as possible the interviewer asks the same questions, in the same order and in the same way, to all respondents.

(An exception to this is filter questions, in which case the interviewer may skip sub-questions if a negative response is given.)

Answers to structured interviews are usually closed, or pre-coded, and the interviewer ticks the appropriate box according to the respondent’s answers. However, some structured interviews may be open-ended, in which case the interviewer writes in the answers for the respondent.

Social surveys are the main context in which researchers will conduct structured interviews.

This post covers:

  • the advantages of structured interviews
  • the different contexts in which they take place (phone and computer-assisted)
  • the stages of conducting them: from knowing the schedule to leaving!
  • their limitations.

Advantages of Structured Interviews

The main advantage of structured interviews is that they promote standardisation in both the processes of asking questions and recording answers.

This reduces bias and error in the asking of questions and makes it easier to process respondents’ answers.

The two main advantages of structured interviews are thus:

  • Reducing error due to interviewer variability.
  • Increasing the accuracy and ease of data processing.

Reducing error due to interviewer variability

Structured interviews help to reduce the amount of error in data collection because they are standardised.

Variability and thus error can occur in two ways:

  • Intra-interviewer variability: occurs when an interviewer is not consistent in the way they ask the questions or record the answers.
  • Inter-interviewer variability: occurs when two or more interviewers are not consistent with each other in the way they ask questions or record answers.

These two sources of variability can occur together and compound the problem of reduced validity.

The common sources of error in survey research include:

  1. A poorly worded question.
  2. The way the question is asked by the interviewer.
  3. Misunderstanding on the part of the respondent being interviewed.
  4. Memory problems on the part of the respondent.
  5. The way the information is recorded by the interviewer.
  6. The way the information is processed: coding of answers or data entry.

Because the asking of questions and recording of answers are standardised, any variation in answers should be due to true or real variation in respondents’ answers, rather than variation arising from differences in the interview context.

Accuracy and Ease of Data Processing

Structured interviews consist of mainly closed, pre-coded questions or fixed choice questions.

With closed questions the respondent is given a limited choice of possible answers and is asked to select which response or responses apply to them.

The interviewer then simply ticks the appropriate box.

This box-ticking procedure limits the scope for interviewer bias to introduce error: there is no scope for the interviewer to omit or modify anything the respondent says, because they are not writing down the answer.

Another advantage with pre-coded data gained from the structured interview is that it allows for ‘automatic’ data processing.

If answers had been written down or transcribed from a recording, a researcher would have to examine this qualitative data, sort and assign the various answers to categories.

For example, if a survey had produced qualitative data on what respondents thought about Brexit, the researcher might categorise the range of answers into ‘for Brexit’, ‘neutral’ and ‘against Brexit’.

This process of reducing more complex and varied data into fewer and simpler ‘higher level’ categories is known as coding data, or establishing a coding frame, and it is necessary for quantitative analysis to take place.
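The coding-frame idea can be sketched in a few lines of Python. The three categories come from the Brexit example above, but the keyword lists and function name are hypothetical illustrations, not a real coding scheme:

```python
# A minimal sketch of a coding frame: mapping free-text answers about Brexit
# to three 'higher level' categories. The keyword lists below are invented
# for illustration only.

CODING_FRAME = {
    "for Brexit": ["sovereignty", "take back control", "good decision"],
    "against Brexit": ["remain", "rejoin", "mistake"],
}

def code_answer(answer: str) -> str:
    """Assign a free-text answer to a category; default to 'neutral'."""
    text = answer.lower()
    for category, keywords in CODING_FRAME.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "neutral"

responses = [
    "Leaving was about sovereignty",
    "I think it was a huge mistake",
    "Not sure either way",
]
print([code_answer(r) for r in responses])
# ['for Brexit', 'against Brexit', 'neutral']
```

In a real project the coding frame would be far richer, and answers that match no rule would be reviewed by hand rather than defaulted to ‘neutral’.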

Coding (whether done before or after a structured interview takes place) introduces another source of potential error: answers may be categorised incorrectly by the researchers, or categorised differently to how the respondents themselves would have categorised them.

There are two sources of error in recording data:

  • Intra-rater variability: where the person applying the coding is inconsistent in the way they apply the rules of assigning answers to categories.
  • Inter-rater variability: where two different raters apply the rules of assigning answers to categories differently.

If either or both of the above occur then variability in responses will be due to error rather than true variability in the responses.
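As a rough illustration of checking for inter-rater variability, the sketch below computes the percentage agreement between two hypothetical raters who have coded the same ten answers (the codings are invented for illustration):

```python
# A minimal sketch of checking inter-rater variability: two raters code the
# same ten answers, and we compute their percentage agreement. All codings
# below are invented for illustration.

rater_a = ["for", "against", "neutral", "for", "against",
           "for", "neutral", "against", "for", "neutral"]
rater_b = ["for", "against", "for", "for", "against",
           "for", "neutral", "neutral", "for", "neutral"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"{percent_agreement:.0f}% agreement")  # 80% agreement
```

Raw percentage agreement is the simplest measure; more robust statistics such as Cohen’s kappa also correct for agreement that would occur by chance.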

The closed-question survey or interview avoids the above problem because respondents assign themselves to categories, simply by picking an option which the interviewer then ticks.

There is very little opportunity with pre-coded interviews for interviewers or analysts to misinterpret respondents’ answers or assign them to the wrong categories.

Structured Interview Contexts

Structured interviews tend to be done when there is only one respondent. Group interviews are usually more qualitative because the dynamics of having two or more respondents present mean answers tend to be more complex, so tick-box answers are not usually sufficient to get valid data.

Besides the face to face interview, there are two particular contexts which are common with structured interviewing: telephone interviewing and computer assisted interviewing. (These are not mutually exclusive).

Telephone interviewing

Telephone interviews are very common with market research companies, and opinion polling companies such as YouGov. They are used less often by academic researchers but an exception to this was during the Covid-19 Pandemic when many studies which would usually rely on in-person interviews had to be carried out over the phone.

The advantages of telephone interviews

Compared to face-to-face interviews, the advantages of telephone interviews are:

  • Telephone interviews are cheaper and quicker to administer because there is no travel time or costs involved in accessing the respondents. The more dispersed the research sample is geographically the larger the advantage.
  • Telephone interviews are easier to supervise than face to face interviews. You can have one supervisor in a room with several phone interviewers. Interviewers can be recorded and monitored, although care has to be taken with GDPR.
  • Telephone interviews reduce bias due to the personal characteristics of the interviewers. It is much more difficult to tell what the class background or ethnicity of the interviewer is over the phone, for example.

The limitations of phone interviews

  • People without phones cannot be part of the sample.
  • Call screening with mobile phones has greatly reduced the response rate of phone surveys.
  • Respondents with hearing impediments will find phone interviews more difficult.
  • The length of a phone interview generally can’t be sustained over 20-25 minutes.
  • There is a general belief that telephone interviews achieve lower response rates than face to face interviews.
  • There is some evidence that phone interviews are less useful when dealing with sensitive topics but the data is not clear cut.
  • There may be validity problems because telephone interviews do not allow for observation. For example, an interviewer cannot observe whether a respondent is confused by a question.
  • In cases where researchers need specific types of people, telephone interviews do not allow us to check if the correct types of people are actually those being interviewed.

Computer assisted Interviewing 

With computer-assisted interviewing, questions are pre-written and appear on the computer screen. Interviewers follow the instructions, read out the questions in order and key in the respondents’ answers, either as open or closed responses.

There are two main types of Computer Assisted Interviewing:

  • CAPI – Computer Assisted Personal Interviewing. 
  • CATI – Computer Assisted Telephone Interviewing.

Most telephone interviews today are Computer Assisted. There are several survey software packages that allow for the construction of effective surveys with analytics tools for data analysis. 

They are less popular for personal interviews but have been growing in popularity. 

CATI and CAPI are more common among commercial survey organisations such as IPSOS but are used less in academic research conducted by universities. 

The advantages of computer assisted interviewing

CAPI is very useful for filter questions, as the software can automatically skip questions that aren’t relevant given a previous answer. This reduces the likelihood of the interviewer asking irrelevant questions or missing out questions.

They are also useful for prompt-questions as flash cards can be generated on the screen and shown to the respondents as required. This should mean respondents are more likely to see the flash-cards in the same way as there is no possibility for the researcher to arrange them in a different order for different respondents, as might be the case with physical flashcards. 

Another advantage of computer assisted interviewing is automatic storage on the computer or cloud upload which means there is no need to scan paper interview sheets or enter the data manually at a later date. 

Thus Computer Assisted Interviews should increase the level of standardisation and reduce the amount of variability error introduced by the interviewer. 

The disadvantages of Computer Assisted Interviewing:

  • They may create a sense of distance and disconnect between the interviewer and respondents. 
  • Miskeying may result in the interviewer entering incorrect data, and they are less likely to realise this than with paper interviews. 
  • Interviewers need to be comfortable with the technology.

Conducting Structured Interviews 

The procedures involved in conducting an effective structured interview include:

  • Knowing the interview schedule
  • Gaining access 
  • Introducing the research 
  • Establishing rapport 
  • Asking questions and recording answers 
  • Leaving the interview.

The processes above relate specifically to structured interviews, but they will also apply to semi-structured interviews.

The interview schedule 

An interview schedule is the list of questions in order, with relevant instructions about how the questions are to be asked. Before conducting an interview, the interviewer should know the interview schedule inside out. 

Interviews can be stressful and pressure can cause interviewers to not follow standardised procedures. For example, interviewers may ask questions in the wrong order or miss questions out. 

When several interviewers are involved in the research process it is especially important that all of them know the interview schedule to ensure questions are asked in a standardised way. 

Gaining access

Interviewers are the interface between the research and the respondents and are thus a crucial link in ensuring a good response rate. In order to gain access, interviewers need to:

  • Be prepared to keep calling back with telephone interviews. Keep in mind the most likely times to get a response. 
  • Be self-assured and confident. 
  • Reassure people that you are not a salesperson, but doing research for a deeper purpose. 
  • Dress appropriately. 
  • Be prepared to be flexible with time: finding a time that fits the respondent if first contact isn’t convenient. 

Introducing the research 

Respondents need to be provided with a rationale explaining the purposes of the research and why they are giving up their time to take part. 

The introductory rationale may be written down or spoken. A written rationale may be sent out to prospective respondents in advance of the research taking place, as is the case with those selected to take part in the British Social Attitudes survey. A verbal rationale is employed with street-based market research, cold-calling telephone surveys and may also be reiterated during house to house surveys. 

An effective introductory statement can be crucial in getting respondents to take part. 

What should an introductory statement for social research include?

  • Make clear the identity of the interviewer.
  • Identify the agency which is conducting the research: for example a university or business. 
  • Include details of how the research is being funded. 
  • Indicate the purpose of the research in broad terms: what are the overall aims?
  • Give an indication of the kind of data that will be collected. 
  • Make it clear that participation is voluntary. 
  • Make it clear that data will be anonymised and that the respondent will not be identified in any way, for example because data will be analysed at an aggregate level. 
  • Provide reassurance about the confidentiality of information. 
  • Provide a respondent with the opportunity to ask questions. 

Establishing rapport with structured interviews

Rapport is what makes the respondent feel as if they want to cooperate with the researcher and take part in the research. Without rapport being established respondents may either not agree to take part or terminate the interview half way through! 

Rapport can be established through visual cues of friendliness such as positive body language, listening and good eye contact. 

However, with structured interviews establishing rapport is a delicate balancing act, as it is crucial for interviewers to be as objective as possible and not get too close to the respondents.

Rapport can be achieved by being friendly with the interviewee, although interviewers shouldn’t take this too far. Too much friendliness can result in the interview taking too long and the interviewee getting bored. 

Too much rapport can also result in the respondent providing socially desirable answers. 

Asking Questions and Recording Answers 

With structured interviews it is important that researchers strive to ask the same questions in the same way to all respondents. They should ask questions as written in order to minimise error. 

Experiments in question-wording suggest that even minor variations in wording can influence replies. 

Interviewers may be tempted to deviate from the schedule because they feel awkward asking some questions to particular people, but training can help with this and make it more likely that standardisation is kept in place. 

Where recording answers is concerned, bias is far less likely with pre-coded answers. 

PROVIDING Clear instructions 

Interviewers need to follow clear instructions throughout the interview. This is especially important if an interview schedule includes filter questions.

Filter questions require the interviewer to ask questions of some respondents but not of others. Filter questions are usually indented on an interview schedule.

For example: 

  1. Did you vote in the last general election…?  YES / NO 

1a. (to be asked if respondent answered YES to Q1)

Which of the following political parties did you vote for? Conservatives / Labour / Lib Dems / The Green Party / Other.

The risk of not following instructions is that the respondent may be asked questions that are irrelevant to them, which may be irritating. 
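The filter logic in the example above can be sketched as a simple function: Q1a is asked only when the respondent answers ‘yes’ to Q1, which is essentially what CAPI/CATI software automates. The function name and answer handling here are illustrative assumptions:

```python
# A minimal sketch of the filter-question logic: Q1a is only included in the
# schedule when the respondent answered 'yes' to Q1. Function name and
# normalisation choices are illustrative.

def questions_to_ask(answer_q1: str) -> list[str]:
    """Return the questions the interviewer should ask, given the Q1 answer."""
    questions = ["Q1: Did you vote in the last general election? YES / NO"]
    if answer_q1.strip().lower() == "yes":  # the filter condition
        questions.append(
            "Q1a: Which of the following political parties did you vote for? "
            "Conservatives / Labour / Lib Dems / The Green Party / Other"
        )
    return questions

print(len(questions_to_ask("YES")))  # 2 -> Q1a is asked
print(len(questions_to_ask("no")))   # 1 -> Q1a is skipped
```

Encoding the skip rule in software, rather than leaving it to the interviewer, is exactly how computer-assisted interviewing reduces the risk of irrelevant questions being asked.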

Question order

Researchers should stick to the question order on the survey. 

Leapfrogging questions may result in questions skipped not being asked because the researcher could forget to go back to them. 

Changing the question order may also lead to variability in replies because questions previously asked may affect how respondents answer questions later on in the survey. 

Three specific examples demonstrate why question order matters:

People are less likely to respond that taxes should be lowered if they are asked questions about government spending beforehand. 

In victim surveys if people are asked about their attitudes to crime first they are more likely to report that they have been a victim of crime in later questions. 

One question in the 1988 British Crime Survey asked the following question:

‘Taking everything into account, would you say the police in this area do a good job or a poor job?’

For all respondents this question appeared early on, but due to an admin error the question appeared twice in some surveys, and for those who answered the question twice:

  • 66% gave the same response
  • 22% gave a more positive response
  • 12% gave a less positive response. 

The fact that only two thirds of respondents gave the same response twice clearly indicates that the effect of question order can be huge. 

One theory for the change is that the survey was about crime: as respondents thought more in-depth about crime as the interview progressed, 22% came to feel more favourable towards the police and 12% less favourable, varying with their own experiences.

Rules for ordering questions in social surveys

  • Early questions should be clearly related to the topic of the research about which the respondent has already been informed. This is so the respondent immediately feels like the questions are relevant. 
  • Questions about age/ethnicity/gender etc. should not be asked at the beginning of the interview.
  • Sensitive questions should be left until later.
  • With a longer questionnaire, questions should be grouped into sections to break up the interview. 
  • Within each subgroup, general questions should precede specific ones. 
  • Questions about opinions and attitudes should precede questions about behaviour and knowledge, as questions about the latter are less likely to be influenced by question order. 
  • If a respondent has already answered a later question in the course of answering a previous one, that later question should still be asked. 

Probing questions in structured interviews 

Probing may be required in structured interviews when:

  • respondents do not understand the question, and either ask for more information or clearly need it in order to provide an answer.
  • respondents do not provide a sufficient answer and need to be probed for more information.

The problem with the interviewer asking additional probing questions is that they introduce researcher-led variability into the interview context. 

Tactics for effective probing in structured interviews:
  • Employ standardised probes. These work well when open-ended answers are required. Examples include: ‘Could you say a little more about that?’ or ‘Are there any other reasons why you think that?’
  • If a response does not allow for a pre-existing box to be ticked in a closed-ended survey, the interviewer could repeat the available options.
  • If the response requires a number rather than something like ‘often’, the researcher should simply persist with asking the question. They shouldn’t try and second-guess a number!


Prompting in structured interviews

Prompting occurs when the interviewer suggests a possible answer to a question. This is effectively what happens with a closed-question survey or interview: the options are the prompts. The important thing is that the prompts are the same for all respondents and presented in the same way.

During face to face interviews there may be times when it is better for researchers to use show cards (or flash cards) to display the answers rather than say them. 

Three contexts in which flashcards are better:

  • When there is a long list of possible answers. For example if asking respondents about which newspapers they read, it would be easier to show them a list rather than reading them out!
  • With Likert scales, ranked from 1-5 for example, it would be easier to have a showcard with 1-5 which the respondent can point to, rather than reading out ‘1, 2, 3, 4, 5’. 
  • With some sensitive details such as income, respondents might feel more comfortable if they are shown income bands with letters attached, then they can say the letter. This allows the respondent to not state what their income is out loud. 

Leaving the Interview 

On leaving the interview thank the respondent for taking part. 

Researchers should not engage in further communication about the purpose of the research beyond the standard introductory statement. To do so means the respondent may divulge further information to other respondents yet to take part, possibly biasing their responses.

Problems with structured interviews 

Four problems with structured interviews include:

  • the characteristics of the interviewer interfering with the results.
  • Response sets resulting in reduced validity (acquiescence and social desirability).
  • The problem of lack of shared meaning.
  • The feminist critique of the unequal power relationship between interviewer and respondent.

Interviewer characteristics

The characteristics of the interviewer such as their gender or ethnicity may affect the responses a respondent gives. For example, a respondent may be less likely to open up on sensitive issues with someone who is a different gender to them.  

Response Sets 

This is where respondents reply to a series of questions in a consistent way but one that is irrelevant to the concept being measured. 

This is a particular problem when respondents are answering several Likert Scale questions in a row. 

Two of the most prominent types of response set are ‘acquiescence’ and ‘social desirability bias’.


Acquiescence refers to a tendency of some respondents to consistently agree or disagree with a set of questions. They may do this because it is quicker for them to get through the interview. This is known as satisficing. 

Satisficing is where respondents reduce the amount of effort required to answer a question. They settle for an answer that is satisfactory rather than making the effort to generate the most accurate answer. 

Examples of satisficing include:

  • Agreeing with yes statements or ‘yeasaying’.
  • Opting for middle point answers on scales.
  • Not considering the full-range of answers in a range of closed questions, for example picking the first or last answers. 

The opposite of satisficing is optimising. Optimising is where respondents expend effort to arrive at the best and most appropriate answer to a question. 

It is possible to weed out respondents who do this by ensuring there is a mix of positive and negative sentiment in a batch of Likert questions. 

For example, you may have a batch of three questions designed to measure attitudes towards Rishi Sunak’s performance as Prime Minister.

If you have two scales where ‘5’ is positive and one where ‘5’ is negative, for example:

  • Rishi Sunak is an effective leader
  • Rishi Sunak has managed the economy well 
  • Rishi Sunak is NOT to be trusted  

If someone is acquiescing without thinking about their answers, they are likely to circle all 5s, which wouldn’t make sense. Hence we could disregard this response and maybe even the entire survey from this individual. 
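The reverse-scoring check described above can be sketched in Python. The item wordings come from the example; the scores, function names and the simple ‘straight-lining’ rule are illustrative assumptions, not a validated screening method:

```python
# A minimal sketch of screening for acquiescence: the third item is
# reverse-worded ('NOT to be trusted'), so a coherent attitude should score
# low there when the first two score high. All scores are invented.

ITEMS = [
    ("Rishi Sunak is an effective leader", False),   # False = normally worded
    ("Rishi Sunak has managed the economy well", False),
    ("Rishi Sunak is NOT to be trusted", True),      # True = reverse-worded
]

def reverse_scored(scores):
    """Reverse-score the negatively worded items on a 1-5 scale."""
    return [6 - s if rev else s for s, (_, rev) in zip(scores, ITEMS)]

def looks_acquiescent(scores) -> bool:
    """Flag respondents who circle the same value for every item."""
    return len(set(scores)) == 1

print(looks_acquiescent([5, 5, 5]))  # True  -> possible acquiescence
print(looks_acquiescent([5, 4, 1]))  # False -> a coherent positive attitude
print(reverse_scored([5, 4, 1]))     # [5, 4, 5] -> consistently positive
```

A respondent who circles all 5s is flagged, because after reverse-scoring their answers are contradictory; a genuine positive attitude produces a consistent pattern instead.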

Social desirability bias 

Socially desirable behaviours and attitudes tend to be over-reported. This can especially be the case for sensitive questions.

Strategies for reducing social desirability bias:
  • Use self-completion forms rather than interviewers. 
  • Soften the question, for example: ‘Even the calmest of car drivers sometimes lose their temper when driving. Has this ever happened to you?’

The problem of meaning 

Structured surveys and interviews assume that respondents share the same meanings for terms as the interviewers. 

However, from an interpretivist perspective interviewer and respondent may not share the same meanings. Respondents may be ticking boxes but mean different things to what the interviewer thinks they mean. 

The issue of meaning is side-stepped in structured interviews. 

The feminist critique of structured interviews 

The structured interview epitomises the asymmetrical relationship between researcher and respondent. This is a critique made of all quantitative research.

The researcher extracts information from the respondent and gives little or nothing in return. 

Interviewers are even advised not to get too familiar with respondents as giving away too much information may bias the results. 

Interviewers should refrain from expressing their opinions, presenting any personal information and engaging in off-topic chatter. All of this is very impersonal. 

This means that structured interviews are probably not appropriate for very sensitive topics that involve a more personal touch. For example with domestic violence, unstructured interviews which aim to explore the nature of violence have revealed higher levels of violence than structured interviews such as the Crime Survey for England and Wales.

Sources and signposting

Structured interviews are relevant to the social research methods module within A-level sociology.

This post was adapted from Bryman, A (2016) Social Research Methods.

Reality TV School Shows – How Valid are They?

Reality shows featuring schools have become commonplace on British TV over the last decade.

One well-known example is the ‘Educating’ series, which started in Essex in 2011, then visited Yorkshire in 2014, and then another three series, with the latest airing in 2017.

Each series followed one school through an entire year, with cameras going into lessons, and interviews with several students, teachers and managers.

Another example, which is more of a creative work done in conjunction with the children, is ‘Our School’ on CBBC.

In research methods terms this method is a combination of ‘non-participant observation’ and semi-structured interviews. These sources shouldn’t be dismissed out of hand: real-life educational researchers rarely get access to one school for an entire year, so there is a rich vein of data here.

However, these are not works of sociological research, they are documentaries, produced for entertainment purposes and for a profit, so we need to be cautious about how useful they are.

Practical issues

Given the problems of a researcher gaining access to a school, having these shows done for us is great, as someone else has already gained access!

Theoretical issues

Representativeness may be limited – it’s likely that only schools which are doing OK will agree to take part – schools in special measures probably wouldn’t.

Also, these shows tend to focus on the dramatic cases of students – rather than the ‘normal’ ones!

Validity may be an issue – both schools and teachers may well act differently because they know there are cameras present.

Having said that, we do get something of an insight into the stories of a limited number of students.

However, if the data is not valid, there’s little point!

Ethical issues

These documentaries do seem to be done with the co-operation of the students – so I guess this gives them a voice.

I’m not convinced the teachers would be that happy about this as a whole – maybe quite a lot of railroading by the SLT?

Voices of Guinness: Oral Histories of Work in Modernity

Voices of Guinness: An Oral History of the Park Royal Brewery (2019) is a recent academic work by Tim Strangleman which explores the experience of work in one Guinness factory from the 1940s to the early 2000s.

The research took place over several years and consists of oral histories (presumably based on in-depth, semi-structured or even unstructured interviews) with people who used to work in the factory, plus a range of secondary documents such as photos, pictures and the Guinness factory magazine.

Strangleman puts together a kind of collage of life histories to present various stories about how workers made sense of going to work: what work meant to them and how they coped with its challenges.

This is a useful example of ‘work in modernity’ – Strangleman describes how the Guinness company established a kind of ‘industrial citizenship’: their aim was to build workers who were fully rounded humans with a sense of ownership over their work, a concept which may seem very alien now in the era of ‘zero hours contracts’.

The workers, in the 1940s to 1970s at least, for the most part bought into this: they felt at home in the workplace, and because of this they felt able to criticise the management – a situation which may have been uncomfortable for managers, but which helped to keep the workers happy enough.

In the 1940s to 60s, leisure was broadly focused around the factory and work colleagues – there were several social clubs, such as sports clubs and even theatre clubs – but this started to change in the 1960s when rising incomes led to more privatised forms of leisure.

The workers in modernity also expected to be employed for life, which is one of the most notable changes to date: most students today don’t even want a job for life, and you can see the idea of ‘temporary employment’ built into the modern-day site of the factory. NB the Guinness factory is now closed; it has been replaced with logistics warehouses, the kind of temporary structures which stand in contrast with the more permanent nature of work in modernity.



Relevance to A-level sociology

This is an excellent study to show what work used to be like in Modernity, and as Strangleman says, it reminds us what we have lost in Postmodernity.

It’s also interesting to contrast the solidness of the factory then, which tied in with the stable idea of ‘jobs for life’, with the situation now: people no longer expect or even want jobs for life, and we see more temporary buildings forming the basis for working-class jobs, most obviously the prefab Amazon warehouses.

In terms of methods this is a useful example of a study that uses secondary qualitative data and interviews to construct oral histories.

Theoretically, there are definite links here to what Bauman would have called more solid forms of Modernity!

Using interviews to research education

Interviews are one of the most commonly used qualitative research methods in the sociology of education. In this post I consider some of the strengths and limitations of using interviews to research education, focussing mainly on unstructured interviews.

This post is primarily designed to get students thinking about methods in context, or ‘applied research methods’. Before reading through this students might like to brush up on methods in context by reading this introductory post. Links to other methods in context advice posts can be found at the bottom of the research methods page (link above!)

Practical issues with interviews  

Gaining access may be a problem as schools are hierarchical institutions, and the lower down the hierarchy an individual is, the more permissions the interviewer will require to gain access to them. For example, you might require the headmaster’s permission to interview a teacher, while to interview pupils you’ll require both the headmaster’s and their parents’ permission.

However, if you can gain consent, and get the headmaster onside, the hierarchy may make doing interviews more efficient – the headmaster can instruct teachers to release pupils from lessons to do the interviews, for example.

Interviews tend to take more time than questionnaires, so finding the time to do them may be a problem: teachers tend to be quite busy and are unlikely to want to give up lesson time for interviews, and pupils are unlikely to want to spend their free time in breaks or after school taking part in interviews.

However, if the topic is especially relevant or interesting, this will be less of a problem, and the interviewer could use incentives (rewards) to encourage respondents to take part. Group interviews would also be more time efficient.

Younger respondents tend to have more difficulty in keeping to the point, and they often pick up on unexpected details in questions, which can make interviews take longer.

Younger respondents may have a shorter attention span than adults, which means that interviews need to be kept short.

Validity issues

Students may see the interviewer as a ‘teacher in disguise’ – part of the hierarchical structure of the institution – which could lead them to give socially desirable responses. With questions about homework, for example, students may tell the interviewer they are doing the number of hours the school says they should be doing, rather than the actual number of hours they spend doing homework.

To overcome this the researcher might consider conducting interviews away from school premises and ensuring that confidentiality is guaranteed.

Young people’s intellectual and linguistic skills are less developed than adults’, and the interviewer needs to keep in mind that:

  • They may not understand longer words or more complex sentences.
  • They may lack the language to express themselves clearly.
  • They may have a shorter attention span than adults.
  • They may read body language differently to adults.

Having said all of that, younger people with poor communication skills are probably going to be more comfortable speaking than reading and writing, which means interviews are nearly always going to be a better choice than questionnaires where younger pupils are concerned.

To ensure greater validity in interviews, researchers should try to do the following:

  • Avoid using leading questions, as young people are more suggestible than adults.
  • Use open-ended questions.
  • Do not interrupt students’ responses.
  • Learn to tolerate pauses while students think.
  • Avoid repeating questions, which can make students change their first answer because they assume it was wrong.

Unstructured interviews may thus be more suitable than structured interviews, because they make it easier for the researcher to rephrase questions if necessary.

The location may affect the validity of responses – if a student associates school with authority, and the interview takes place in a school, then they are probably more likely to give socially desirable answers.

If the researcher is conducting interviews over several days, later respondents may get wind of the topics and questions, which may influence the responses they give.

Ethical issues

Schools and parents may object to students being interviewed about sensitive topics such as drugs or sexuality, so they may not give consent.

To overcome this the researcher might consider running the interviews within the school’s PSHE programme.

Interviews may be unsettling for some students – they are, after all, artificial situations. This could be especially true of group interviews, depending on who is making up the groups.

Group interviews

Peer group interviews may well be a good choice for researchers studying topics within the sociology of education.


The strengths of group interviews:

  • Group interviews can create a safe environment for pupils
  • Peer-group discussion should be something pupils are familiar with from lessons
  • Peer-support can reduce the power imbalance between interviewer and students
  • The free-flowing nature of the group interview could allow for more information to come forth.
  • The group interview also allows the researcher to observe group dynamics.
  • They are more time efficient than one on one interviews.


The limitations of group interviews:

  • Peer pressure may mean students are reluctant to be honest for fear of ridicule.
  • Students may also encourage each other to exaggerate or lie for laughs.
  • Group interviews are unpredictable, and very difficult to standardise and repeat, which means they are low in reliability.

Interviews in Social Research: Advantages and Disadvantages

The strengths of unstructured interviews are that they are respondent-led, flexible, allow empathy and can be empowering; the limitations are poor reliability due to interviewer characteristics and bias, the time they take, and low representativeness.

An interview involves an interviewer asking questions verbally to a respondent, so interviews involve more direct interaction between researcher and respondent than questionnaires. Interviews can be conducted face to face, or via phone, video link or social media.

This post has primarily been written for students studying the Research Methods aspect of A-level sociology, but it should also be useful for students studying methods for psychology, business studies and maybe other subjects too!

Types of interview

Structured or formal interviews are those in which the interviewer asks the same questions, in the same way, to each respondent. This will typically involve reading out questions from a pre-written and pre-coded structured questionnaire, which forms the interview schedule. The most familiar form of this is market research: you may have been stopped on the street by a researcher ticking boxes based on your responses.

Unstructured or informal interviews (also called discovery interviews) are more like a guided conversation. Here the interviewer has a list of topics they want the respondent to talk about, but complete freedom to vary the specific questions from respondent to respondent, following whatever lines of enquiry seem most appropriate given the responses each respondent provides.

Semi-structured interviews are those in which the interviewer has a list of set questions but is free to ask further, differentiated questions based on the responses given. This allows more flexibility than the structured interview yet more structure than the informal interview.

Group interviews – interviews can be conducted either one to one (individual interviews) or in a group, in which the interviewer interviews two or more respondents at a time. Group discussion among respondents may lead to deeper insight than interviewing people alone, as respondents ‘encourage’ each other.

Focus groups are a type of group interview in which respondents are asked to discuss certain topics.

Interviews: key terms

The Interview Schedule – a list of questions or topic areas the interviewer wishes to ask or cover in the course of the interview. The more structured the interview, the more rigid the interview schedule will be. Before conducting an interview it is usual for the researcher to know something about the topic area and the respondents themselves, so they will have at least some idea of the questions they are likely to ask. Even researchers doing ‘unstructured interviews’ will have some kind of interview schedule, even if it is just a list of broad topic areas to discuss, or an opening question.

Transcription of interviews – transcription is the process of writing down (or typing up) what respondents say in an interview. In order to transcribe effectively, interviews need to be recorded.

The problem of Leading Questions – in unstructured interviews, the interviewer should aim to avoid asking leading questions, that is, questions whose wording suggests a particular answer.

The Strengths and Limitations of Unstructured Interviews 

Unstructured Interviews Mind Map

The strengths of unstructured interviews

The key strength of unstructured interviews is good validity, but for this to happen questioning should be as open-ended as possible, to gain genuine, spontaneous information rather than ‘rehearsed responses’, and probing needs to be sufficient to elicit in-depth answers rather than glib, easy ones.

Respondent led – unstructured interviews are ‘respondent led’ because the researcher listens to what the respondent says and asks further questions based on those responses. This should allow respondents to express themselves and explain their views more fully than with structured interviews.

Flexibility – the researcher can change his or her mind about what the most important questions are as the interview develops. Unstructured interviews thus avoid the imposition problem – respondents are less constrained than with structured interviews or questionnaires, in which the questions are written in advance by the researcher. This is especially advantageous in group interviews, where interaction between respondents can spark conversations the interviewer hadn’t anticipated, which can then be probed further with an unstructured methodology.

Rapport and empathy – unstructured interviews encourage a good rapport between interviewee and interviewer. Because of their informal nature, like guided conversations, unstructured interviews are more likely to make respondents feel at ease than with the more formal setting of a structured questionnaire or experiment. This should encourage openness, trust and empathy.

Checking understanding – unstructured interviews also allow the interviewer to check understanding. If an interviewee doesn’t understand a question, the interviewer is free to rephrase it, or to ask follow up questions to clarify aspects of answers that were not clear in the first instance.

Unstructured interviews are good for sensitive topics because they are more likely to make respondents feel at ease with the interviewer. They also allow the interviewer to show more sympathy (if required) than with the colder more mechanical quantitative methods.

They are good for finding out why respondents do not do certain things. For example, postal surveys asking why people do not claim benefits have very low response rates – partly because non-claimants may have low literacy skills – whereas informal interviews are well suited to researching such groups.

Empowerment for respondents – the researcher and respondents are on a more equal footing than with more quantitative methods; the researcher doesn’t assume they know best, which empowers the respondents. Feminist researchers in particular believe that the unstructured interview can neutralise the hierarchical, exploitative power relations they see as inherent in the more traditional interview structure. They see the traditional interview as a site for the exploitation and subordination of women, with interviewers potentially creating outcomes against their interviewees’ interests. In traditional interview formats the interviewer directs the questioning and takes ownership of the material; in the feminist (unstructured) interview method the woman recounts her experiences in her own words, with the interviewer serving only as a guide to the account.

Practical advantages – there are few practical advantages with this method, but compared to full-blown participant observation, unstructured interviews are a relatively quick way of gaining in-depth data. They are also a good method to combine with overt participant observation in order to get respondents to explain the meanings behind their actions. In short, they are impractical, unless you’re in the middle of a year-long participant observation study (it’s all relative!).

The Limitations of unstructured interviews

The main theoretical disadvantage is the lack of reliability – unstructured interviews lack reliability because each interview is unique: a variety of different questions are asked, and phrased in a variety of different ways, to different respondents.

They are also difficult to repeat, because the success of the interview depends on the bond of trust between the researcher and the respondent – another researcher who does not relate to the respondent may thus get different answers. Group interviews are especially difficult to repeat, given that the dynamics of the interview are influenced not just by the values of the researcher, but also by group dynamics. One person can change the dynamic of a group of three or four people enormously.

Validity can be undermined in several ways:

  • respondents might prefer to give rational responses rather than fuller emotional ones (it’s harder to talk frankly about emotions with strangers)
  • respondents may not reveal thoughts and feelings that do not coincide with their self-image, and so simply withhold information
  • respondents may give answers they think the interviewer wants to hear, in an attempt to please them!

We also need to keep in mind that interviews can only tap into what people SAY about their values, beliefs and actions; we don’t actually get to see these in action, as we would with observational methods such as participant observation. This has been a particular problem with self-report studies of criminal behaviour: when tested against polygraphs and follow-up studies of school and criminal records, responses were found to be lacking in validity – so much so that victim surveys, rather than self-report studies, have become the standard method for measuring crime.

Interviewer bias might undermine the validity of unstructured interviews – this is where the values of the researcher interfere with the results. The researcher may give away whether they approve or disapprove of certain responses in their body language or tone of voice (or wording of probing questions) and this in turn might encourage or discourage respondents from being honest.

The characteristics of the interviewer might also bias the results and undermine the validity – how honest the respondent is in the course of an hour long interview might depend on the class, gender, or ethnicity of the interviewer.

Sudman and Bradburn (1974) conducted a review of the literature and found that responses varied depending on the relative demographics of the interviewer and respondent. For example, white interviewers received more socially acceptable responses from black respondents than they did from white respondents. Similar effects have been found for other ethnicities, and for age, social class and religion.

Unstructured interviews also lack representativeness – because they are time consuming, it is difficult to get a large enough sample to be representative of large populations.

Because the data gained is qualitative, it is difficult to quantify, to compare answers, and to identify statistics and trends.

Practical disadvantages – unstructured Interviews may take a relatively long time to conduct. Some interviews can take hours. They also need to be taped and transcribed, and in the analysis phase there may be a lot of information that is not directly relevant to one’s research topic that needs to be sifted through.

Interpersonal skills and training – A further practical problem is that some researchers may lack the interpersonal skills required to conduct informal unstructured interviews. Training might need to be more thorough for researchers undertaking unstructured interviews – to avoid the problem of interviewer bias.

Shapiro and Eberhart (1947) showed that interviewers who were more prepared to probe received fuller answers, and both response rate and extensiveness of response are greater for more experienced interviewers.

There are few ethical problems, assuming that informed consent is gained and confidentiality ensured. Having said this, the fact that the researcher is getting more in-depth data – more of an insight into who the person really is – does offer the potential for the information to do more harm to the respondent if it got into the wrong hands (though this in turn depends on the topics discussed and the exact content of the interviews).

Sociological perspectives on interviews

Interviews of any kind are not a preferred method for positivists because there is no guarantee that responses aren’t artefacts of the interview situation, rather than a reflection of underlying social reality.

If interviews must be used, Positivists prefer structured interviews that follow a standardised schedule, with each question asked to each respondent in the same way. Interviewers should be neutral, show no emotion, avoid suggesting replies, and not skip questions.

For interactionists, interviews are based on mutual participant observation. The context of the interview is intrinsic to understanding responses, and no distinction is recognised between research interviews and other social interaction. Data are valid when mutual understanding between interviewer and respondent is reached.

Interactionists prefer non-standardised interviews because they allow respondents to shape the interview according to their own world view.

Denzin (2009) goes as far as to argue that what positivists perceive as problems with interviews are not problems at all, just part of the process, and thus as valid as the data collected. Issues of self-presentation, the power relations between interviewer and respondent, and opportunities for fabrication are all part of the context, and part of the reality the interview is trying to get at.

Related Posts

For more posts on research methods please see my research methods page.

Examples of studies using interviews – Using Interviews to research education.

Participant Observation –  A related qualitative research method – detailed class notes on overt and covert participant observation. 


Recommended further reading: Gilbert and Stoneman (2016) Researching Social Life