A structured interview is where interviewers ask pre-written questions to candidates in a standardised way, following an interview schedule. As far as possible the interviewer asks the same questions in the same order and the same way to all candidates.
(An exception is filter questions, where the interviewer may skip sub-questions if a negative response is given.)
Answers to structured interviews are usually closed, or pre-coded, and the interviewer ticks the appropriate box according to the respondent's answer. However, some structured interviews may be open-ended, in which case the interviewer writes in the answers for the respondent.
Social surveys are the main context in which researchers will conduct structured interviews.
This post covers:
- the advantages of structured interviews
- the different contexts in which they take place (phone and computer assisted).
- the stages of conducting them: from knowing the schedule to leaving!
- their limitations.
Advantages of Structured Interviews
The main advantage of structured interviews is that they promote standardisation in both the processes of asking questions and recording answers.
This reduces bias and error in the asking of questions and makes it easier to process respondents’ answers.
The two main advantages of structured interviews are thus:
- Reducing error due to interviewer variability.
- Increasing the accuracy and ease of data processing.
Reducing error due to interviewer variability
Structured interviews help to reduce the amount of error in data collection because they are standardised.
Variability and thus error can occur in two ways:
- Intra-interviewer variability: when an individual interviewer is inconsistent in the way they ask questions or record answers.
- Inter-interviewer variability: when two or more interviewers are inconsistent with each other in the way they ask questions or record answers.
These two sources of variability can occur together and compound the problem of reduced validity.
The common sources of error in survey research include:
- A poorly worded question.
- The way the question is asked by the interviewer.
- Misunderstanding on the part of the respondent being interviewed.
- Memory problems on the part of the respondent.
- The way the information is recorded by the interviewer.
- The way the information is processed: coding of answers or data entry.
Because the asking of questions and the recording of answers are standardised, any variation in respondents' answers should be due to true or real variation, rather than variation arising from differences in the interview context.
Accuracy and Ease of Data Processing
Structured interviews consist of mainly closed, pre-coded questions or fixed choice questions.
With closed-questions the respondent is given a limited choice of possible answers and is asked to select which response or responses apply to them.
The interviewer then simply ticks the appropriate box.
This box-ticking procedure limits the scope for interviewer bias to introduce error. There is no scope for the interviewer to omit or modify anything the respondent says, because they are not writing down the answer in their own words.
Another advantage with pre-coded data gained from the structured interview is that it allows for ‘automatic’ data processing.
If answers had been written down or transcribed from a recording, a researcher would have to examine this qualitative data, sort and assign the various answers to categories.
For example, if a survey had produced qualitative data on what respondents thought about Brexit, the researcher might categorise the range of answers into 'for Brexit', 'neutral', and 'against Brexit'.
This process of reducing more complex and varied data into fewer, simpler 'higher level' categories is known as coding data, or establishing a coding frame, and is necessary for quantitative analysis to take place.
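The coding-frame process described above can be sketched in code. This is a minimal illustration assuming simple keyword rules; the categories come from the Brexit example, but the keywords and the helper name `code_answer` are hypothetical, not from any real survey:

```python
# Hypothetical coding frame: keyword rules that assign free-text answers
# about Brexit to the three categories described in the text.
CODING_FRAME = {
    "for Brexit": ["good for the country", "take back control", "support leaving"],
    "against Brexit": ["a mistake", "damaged the economy", "wish we had remained"],
}

def code_answer(answer: str) -> str:
    """Assign an open-ended answer to a category using keyword matching."""
    text = answer.lower()
    for category, keywords in CODING_FRAME.items():
        if any(kw in text for kw in keywords):
            return category
    return "neutral"  # default when no rule matches

answers = [
    "I think it was a mistake and has damaged the economy.",
    "It let us take back control of our laws.",
    "I don't really have a view either way.",
]
print([code_answer(a) for a in answers])
# → ['against Brexit', 'for Brexit', 'neutral']
```

In practice coding frames are applied by human coders using written rules rather than keyword matching, which is exactly why the rater-variability problems discussed below arise.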
Coding (whether done before or after a structured interview takes place) introduces another source of potential error: answers may be categorised incorrectly, or the researchers may categorise answers differently to how the respondents themselves would have categorised them.
There are two sources of error in recording data:
- Intra-rater-variability: where the person applying the coding is inconsistent in the way they apply the rules of assigning answers to categories.
- Inter-rater-variability: where two different raters apply the rules of assigning answers to categories differently.
If either or both of the above occur then variability in responses will be due to error rather than true variability in the responses.
The closed question survey or interview avoids the above problem because respondents assign themselves to categories, simply by picking an option and the interviewer ticking a box.
There is very little opportunity with pre-coded interviews for interviewers or analysts to misinterpret respondents' answers or assign them to the wrong categories.
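One standard way to quantify the inter-rater variability described above is Cohen's kappa, which corrects the raw agreement rate between two raters for the agreement expected by chance. A minimal sketch, with two hypothetical coders assigning the same ten answers to categories:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category at random.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders, disagreeing on two of ten answers.
a = ["for", "for", "against", "neutral", "for", "against", "against", "neutral", "for", "for"]
b = ["for", "for", "against", "for",     "for", "against", "neutral", "neutral", "for", "for"]
print(round(cohens_kappa(a, b), 2))
# → 0.67
```

A kappa of 1 means perfect agreement and 0 means agreement no better than chance; the raw 80% agreement here shrinks to 0.67 once chance agreement is removed.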
Structured Interview Contexts
Structured interviews tend to be done with one respondent at a time. Group interviews are usually more qualitative: the dynamics of having two or more respondents present mean answers tend to be more complex, so tick-box answers are not usually sufficient to get valid data.
Besides the face to face interview, there are two particular contexts which are common with structured interviewing: telephone interviewing and computer assisted interviewing. (These are not mutually exclusive).
Telephone interviews are very common with market research companies, and opinion polling companies such as YouGov. They are used less often by academic researchers but an exception to this was during the Covid-19 Pandemic when many studies which would usually rely on in-person interviews had to be carried out over the phone.
The advantages of telephone interviews
Compared to face to face interviews, the advantages of telephone interviews are:
- Telephone interviews are cheaper and quicker to administer because there is no travel time or costs involved in accessing the respondents. The more dispersed the research sample is geographically the larger the advantage.
- Telephone interviews are easier to supervise than face to face interviews. You can have one supervisor in a room with several phone interviewers. Interviewers can be recorded and monitored, although care has to be taken with GDPR.
- Telephone interviews reduce bias due to the personal characteristics of the interviewers. It is much more difficult to tell what the class background or ethnicity of the interviewer is over the phone, for example.
The limitations of phone interviews
- People without phones cannot be part of the sample.
- Call screening with mobile phones has greatly reduced the response rate of phone surveys.
- Respondents with hearing impediments will find phone interviews more difficult.
- The length of a phone interview generally can’t be sustained over 20-25 minutes.
- There is a general belief that telephone interviews achieve lower response rates than face to face interviews.
- There is some evidence that phone interviews are less useful when dealing with sensitive topics but the data is not clear cut.
- There may be validity problems because telephone interviews do not allow for observation. For example, an interviewer cannot observe if a respondent is confused by a question.
- In cases where researchers need specific types of people, telephone interviews do not allow us to check if the correct types of people are actually those being interviewed.
Computer assisted Interviewing
With computer assisted interviewing, questions are pre-written and appear on the computer screen. Interviewers follow the on-screen instructions, read out questions in order, and key in the respondents' answers, either as open or closed responses.
There are two main types of Computer Assisted Interviewing:
- CAPI – Computer Assisted Personal Interviewing.
- CATI – Computer Assisted Telephone Interviewing.
Most telephone interviews today are Computer Assisted. There are several survey software packages that allow for the construction of effective surveys with analytics tools for data analysis.
They are less popular for personal interviews but have been growing in popularity.
CATI and CAPI are more common among commercial survey organisations such as IPSOS but are used less in academic research conducted by universities.
The advantages of computer assisted interviewing
CAPI is very useful for filter questions, as the software can skip to the next relevant question if the previous one doesn't apply. This reduces the likelihood of the interviewer asking irrelevant questions or missing questions out.
It is also useful for prompt questions, as flash cards can be generated on screen and shown to respondents as required. This should mean respondents are more likely to see the flash cards in the same way, as there is no possibility of the researcher arranging them in a different order for different respondents, as might happen with physical flash cards.
Another advantage of computer assisted interviewing is automatic storage on the computer or cloud upload which means there is no need to scan paper interview sheets or enter the data manually at a later date.
Thus Computer Assisted Interviews should increase the level of standardisation and reduce the amount of variability error introduced by the interviewer.
The disadvantages of Computer Assisted Interviewing:
- They may create a sense of distance and disconnect between the interviewer and respondents.
- Miskeying may result in the interviewer entering incorrect data, and they are less likely to realise this than with paper interviews.
- Interviewers need to be comfortable with the technology.
Conducting Structured Interviews
The procedures involved with conducting an effective structured interview include:
- Knowing the interview schedule
- Gaining access
- Introducing the research
- Establishing rapport
- Asking questions and recording answers
- Leaving the interview.
The processes above are specifically in relation to structured interviews, but will also apply to semi-structured interviews.
The interview schedule
An interview schedule is the list of questions in order, with relevant instructions about how the questions are to be asked. Before conducting an interview, the interviewer should know the interview schedule inside out.
Interviews can be stressful and pressure can cause interviewers to not follow standardised procedures. For example, interviewers may ask questions in the wrong order or miss questions out.
When several interviewers are involved in the research process it is especially important that all of them know the interview schedule to ensure questions are asked in a standardised way.
Gaining access
Interviewers are the interface between the research and the respondents and are thus a crucial link in ensuring a good response rate. In order to gain access interviewers need to:
- Be prepared to keep calling back with telephone interviews. Keep in mind the most likely times to get a response.
- Be self-assured and confident.
- Reassure people that you are not a salesperson, but doing research for a deeper purpose.
- Dress appropriately.
- Be prepared to be flexible with time: finding a time that fits the respondent if first contact isn’t convenient.
Introducing the research
Respondents need to be provided with a rationale explaining the purposes of the research and why they are giving up their time to take part.
The introductory rationale may be written down or spoken. A written rationale may be sent out to prospective respondents in advance of the research taking place, as is the case with those selected to take part in the British Social Attitudes survey. A verbal rationale is employed with street-based market research, cold-calling telephone surveys and may also be reiterated during house to house surveys.
An effective introductory statement can be crucial in getting respondents to take part.
What should an introductory statement for social research include?
- Make clear the identity of the interviewer.
- Identify the agency which is conducting the research: for example a university or business.
- Include details of how the research is being funded.
- Indicate the purpose of the research in broad terms: what are the overall aims?
- Give an indication of the kind of data that will be collected.
- Make it clear that participation is voluntary.
- Make it clear that data will be anonymised and that the respondent will not be identified in any way, by data being analysed at an aggregate level.
- Provide reassurance about the confidentiality of information.
- Provide a respondent with the opportunity to ask questions.
Establishing rapport with structured interviews
Rapport is what makes the respondent feel as if they want to cooperate with the researcher and take part in the research. Without rapport being established respondents may either not agree to take part or terminate the interview half way through!
Rapport can be established through visual cues of friendliness such as positive body language, listening and good eye contact.
However, with structured interviews, establishing rapport is a delicate balancing act, as it is crucial for the interviewers to be as objective as possible and not get too close to the respondents.
Rapport can be achieved by being friendly with the interviewee, although interviewers shouldn’t take this too far. Too much friendliness can result in the interview taking too long and the interviewee getting bored.
Too much rapport can also result in the respondent providing socially desirable answers.
Asking Questions and Recording Answers
With structured interviews it is important that researchers strive to ask the same questions in the same way to all respondents. They should ask questions as written in order to minimise error.
Experiments in question-wording suggest that even minor variations in wording can influence replies.
Interviewers may be tempted to deviate from the schedule because they feel awkward asking some questions to particular people, but training can help with this and make it more likely that standardisation is kept in place.
Where recording answers is concerned, bias is far less likely with pre-coded answers.
PROVIDING Clear instructions
Interviewers need to follow clear instructions throughout the interview. This is especially important if the interview schedule includes filter questions.
Filter questions require the interviewer to ask some questions of some respondents but not others. Filter questions are usually indented on an interview schedule:
- Q1: Did you vote in the last general election? YES / NO
- Q1a (to be asked only if the respondent answered YES to Q1): Which of the following political parties did you vote for? Conservatives / Labour / Lib Dems / The Green Party / Other.
The risk of not following instructions is that the respondent may be asked questions that are irrelevant to them, which may be irritating.
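This filter logic is exactly what CAPI/CATI software automates. A minimal sketch of the skip rule, where the question ids and the `ask_if` condition format are illustrative assumptions rather than any real package's API:

```python
# Hypothetical interview schedule with one filter question.
SCHEDULE = [
    {"id": "Q1", "text": "Did you vote in the last general election?",
     "options": ["YES", "NO"]},
    {"id": "Q1a", "text": "Which party did you vote for?",
     "options": ["Conservatives", "Labour", "Lib Dems", "The Green Party", "Other"],
     "ask_if": ("Q1", "YES")},  # only asked when Q1 was answered YES
]

def questions_to_ask(responses):
    """Yield ids of the questions whose filter condition is met."""
    for q in SCHEDULE:
        condition = q.get("ask_if")
        if condition is None or responses.get(condition[0]) == condition[1]:
            yield q["id"]

print(list(questions_to_ask({"Q1": "NO"})))   # → ['Q1']
print(list(questions_to_ask({"Q1": "YES"})))  # → ['Q1', 'Q1a']
```

Encoding the skip rule in the schedule itself, rather than relying on the interviewer's memory, is what removes the risk of irrelevant or forgotten questions.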
Researchers should stick to the question order on the survey.
Leapfrogging (skipping ahead) may result in skipped questions never being asked, because the researcher could forget to go back to them.
Changing the question order may also lead to variability in replies because questions previously asked may affect how respondents answer questions later on in the survey.
Three specific examples demonstrate why question order matters:
People are less likely to respond that taxes should be lowered if they are asked questions about government spending beforehand.
In victim surveys if people are asked about their attitudes to crime first they are more likely to report that they have been a victim of crime in later questions.
One question in the 1988 British Crime Survey asked the following question:
‘Taking everything into account, would you say the police in this area do a good job or a poor job?’
For all respondents this question appeared early on, but due to an admin error the question appeared twice in some surveys, and for those who answered the question twice:
- 66% gave the same response
- 22% gave a more positive response
- 12% gave a less positive response.
The fact that only two thirds of respondents gave the same response twice clearly indicates that the effect of question order can be huge.
One theory for the change is that the survey was about crime: as respondents thought more deeply about crime as the interview progressed, 22% came to feel more favourable towards the police and 12% less favourable, varying with their own experiences.
Rules for ordering questions in social surveys
- Early questions should be clearly related to the topic of the research about which the respondent has already been informed. This is so the respondent immediately feels like the questions are relevant.
- Questions about age/ethnicity/gender etc. should not be asked at the beginning of the interview.
- Sensitive questions should be left for later.
- With a longer questionnaire, questions should be grouped into sections to break up the interview.
- Within each subgroup general questions should precede specific ones.
- Opinion and attitude questions should precede questions about behaviour and knowledge. Questions about the latter are less likely to be influenced by question order.
- If a respondent has already answered a later question in the course of answering a previous one, that later question should still be asked.
Probing questions in structured interviews
Probing may be required in structured interviews when:
- the respondent does not understand the question and asks for, or clearly needs, more information to provide an answer.
- the respondent does not provide a sufficient answer and needs to be probed for more information.
The problem with the interviewer asking additional probing questions is that they introduce researcher-led variability into the interview context.
Tactics for effective probing in structured interviews:
- Employ standardised probes. These work well when open ended answers are required. Examples of standardised probes include: ‘Could you say a little more about that?’ or ‘are there any other reasons why you think that?’.
- If a response does not allow for a pre-existing box to be ticked in a closed-ended survey, the interviewer could repeat the available options.
- If the response requires a number rather than something vague like 'often', the researcher should persist with asking the question. They shouldn't try to second-guess a number!
Prompting occurs when the interviewer suggests a possible answer to a question to the respondent. This is effectively what happens with a closed question survey or interview: the options are the prompts. The important thing is that the prompts are the same for all the respondents and asked in the same way.
During face to face interviews there may be times when it is better for researchers to use show cards (or flash cards) to display the answers rather than say them.
Three contexts in which flashcards are better:
- When there is a long list of possible answers. For example if asking respondents about which newspapers they read, it would be easier to show them a list rather than reading them out!
- With Likert scales, ranked from 1-5 for example, it is easier to have a show card with 1-5 on it that the respondent can point to, rather than reading out '1, 2, 3, 4, 5'.
- With some sensitive details such as income, respondents might feel more comfortable if they are shown income bands with letters attached, then they can say the letter. This allows the respondent to not state what their income is out loud.
Leaving the Interview
On leaving the interview thank the respondent for taking part.
Researchers should not engage in further communication about the purpose of the research at this point, beyond the standard introductory statement. To do so means the respondent may divulge further information to other respondents yet to take part, possibly biasing their responses.
Problems with structured interviews
Four problems with structured interviews include:
- the characteristics of the interviewer interfering with the results.
- Response sets resulting in reduced validity (acquiescence and social desirability).
- The problem of lack of shared meaning.
- The feminist critique of the unequal power relationship between interviewer and respondent.
The characteristics of the interviewer such as their gender or ethnicity may affect the responses a respondent gives. For example, a respondent may be less likely to open up on sensitive issues with someone who is a different gender to them.
Response sets
A response set is where respondents reply to a series of questions in a consistent way that is irrelevant to the concept being measured.
This is a particular problem when respondents are answering several Likert Scale questions in a row.
Two of the most prominent types of response set are ‘acquiescence’ and ‘social desirability bias’
Acquiescence refers to a tendency of some respondents to consistently agree or disagree with a set of questions. They may do this because it is quicker for them to get through the interview. This is known as satisficing.
Satisficing is where respondents reduce the amount of effort required to answer a question. They settle for an answer that is satisfactory rather than making the effort to generate the most accurate answer.
Examples of satisficing include:
- Agreeing with yes statements or ‘yeasaying’.
- Opting for middle point answers on scales.
- Not considering the full range of answers to a closed question, for example picking the first or last answer.
The opposite of satisficing is optimising. Optimising is where respondents expend effort to arrive at the best and most appropriate answer to a question.
It is possible to weed out respondents who do this by ensuring there is a mix of positive and negative sentiment in a batch of Likert questions.
For example, you may have a batch of three questions designed to measure attitudes towards Rishi Sunak's performance as Prime Minister.
If you have two scales where '5' is positive and one where '5' is negative, for example:
- Rishi Sunak is an effective leader 1 2 3 4 5
- Rishi Sunak has managed the economy well 1 2 3 4 5
- Rishi Sunak is NOT to be trusted 1 2 3 4 5
If someone is acquiescing without thinking about their answers, they are likely to circle all 5s, which wouldn’t make sense. Hence we could disregard this response and maybe even the entire survey from this individual.
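This consistency check can be sketched in code. The reverse-coding rule (adjusted score = 6 − raw score on a 1-5 scale) is a standard convention for negatively worded items; the function names are illustrative:

```python
# The third item ("NOT to be trusted") is negatively worded, so a genuine
# respondent with a consistent view shouldn't circle 5 on all three items.
REVERSED = [False, False, True]

def adjusted_scores(raw):
    """Flip reverse-coded items on a 1-5 scale so high always means positive."""
    return [6 - s if rev else s for s, rev in zip(raw, REVERSED)]

def looks_like_straight_lining(raw):
    """Flag a response where every raw answer is identical (e.g. all 5s)."""
    return len(set(raw)) == 1

print(looks_like_straight_lining([5, 5, 5]))  # → True: candidate for removal
print(adjusted_scores([5, 4, 1]))             # → [5, 4, 5]: internally consistent
```

A respondent who circles all 5s gets adjusted scores of [5, 5, 1], which is internally contradictory, whereas the thoughtful [5, 4, 1] response becomes a consistent [5, 4, 5].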
Social desirability bias
Socially desirable behaviours and attitudes tend to be over-reported. This can especially be the case for sensitive questions.
Strategies for reducing social desirability bias
- Use self-completion forms rather than interviewers.
- Soften the question, for example: 'even the calmest of car drivers sometimes lose their temper when driving, has this ever happened to you?'
The problem of meaning
Structured surveys and interviews assume that respondents share the same meanings for terms as the interviewers.
However, from an interpretivist perspective interviewer and respondent may not share the same meanings. Respondents may be ticking boxes but mean different things to what the interviewer thinks they mean.
The issue of meaning is side-stepped in structured interviews.
The feminist critique of structured interviews
The structure of the interview epitomises the asymmetrical relationship between researcher and respondent. This is a critique made of all quantitative research.
The researcher extracts information from the respondent and gives little or nothing in return.
Interviewers are even advised not to get too familiar with respondents as giving away too much information may bias the results.
Interviewers should refrain from expressing their opinions, presenting any personal information and engaging in off-topic chatter. All of this is very impersonal.
This means that structured interviews are probably not appropriate for very sensitive topics that involve a more personal touch. For example with domestic violence, unstructured interviews which aim to explore the nature of violence have revealed higher levels of violence than structured interviews such as the Crime Survey of England and Wales.
Sources and signposting
Structured interviews are relevant to the social research methods module within A-level sociology.
This post was adapted from Bryman, A (2016) Social Research Methods.