Article Open Access

Expanded and non-conforming answers in standardized survey interviews

  • Sanne Unger

    Sanne Unger is associate professor at Lynn University in Boca Raton, Florida, where she teaches philosophy and research methods. Her research interests include interaction in survey interviews and action research in higher education, such as student perception of hybrid courses and assessment choice. Her work is published in the Online Learning Journal, the Long Island Education Review, and as a multi-touch book on social science research methods.

    Yfke Ongena

    Yfke Ongena is a senior lecturer and member of the Discourse and Communication group, Faculty of Arts, University of Groningen. She is a communication expert, specializing in survey research methodology. Her interests lie in verbal interaction between interviewers and respondents, and the phenomenon of social desirability and its effects on answering behavior. She has published in international journals such as the Journal of Survey Statistics and Methodology, International Journal of Public Opinion Research, Journal of Official Statistics, Quality & Quantity, Survey Methods: Insights from the Field, and Applied Cognitive Psychology.

    and Tom Koole

    Tom Koole is emeritus professor of Language and Social Interaction at the University of Groningen and visiting professor at the Health Communication Research Unit of the University of the Witwatersrand in Johannesburg. His research is primarily concerned with communication in health care, emergency calls and education. He has published in international journals such as Research on Language and Social Interaction, Journal of Pragmatics, Discourse Studies, Linguistics and Education, Classroom Discourse and International Journal of Health Psychology.

Published/Copyright: January 30, 2024

Abstract

Respondents in standardized survey interviews do not always answer closed-ended questions with just a type-conforming answer, such as “yes” or “three.” Instead, they sometimes expand the type-conforming answer or provide a response that does not contain a type-conforming answer. Standardized survey methodology aims to avoid such answers because they are found to cause interviewers to deviate from their script. However, we found that many expanded and non-conforming responses do not lead to intervention by the interviewer and are treated as unproblematic. A Conversation Analytic study of survey interviews, drawing on three different surveys with between four and 430 recorded interviews each, shows that answer attempts can be divided into five types: four turn expansions (serial extras, uncertainty markers, prefaced answers, answers followed by elaborations), and non-conforming answers. Each of these targets a specific aspect of the interview situation. A follow-up quantitative analysis of 610 Computer-Assisted Telephone Interviews (CATI) shows that expanded answers are overwhelmingly accepted by interviewers, while non-conforming answers are in most cases followed by interviewer probing.

1 Introduction

In survey research, standardization of interviewer behavior is used to reduce measurement error. When interviewers stick to the script, respondents are all exposed to the same stimuli, and variance is reduced, minimizing interviewer error (Cicourel 1982; Fowler and Mangione 1990; Schaeffer and Maynard 2002). A primary reason for interviewers to deviate from the script is when respondents give answers that do not exactly match one of the answer categories. Respondents can do this in various ways; for example, by stating an answer option with uncertainty (Schaeffer et al. 1993), prefacing or elaborating an answer option with more information than requested (Raymond 2003), or by providing a reporting that does not contain any of the answer options (Drew 1984). These responses can lead to interviewer-initiated repair, possibly undermining the survey’s standardization. However, we will show that many of these expanded or ill-fitting answers are unproblematic, not only because they contain enough information for interviewers to enter a response but also because interviewers overwhelmingly accept them.

Increased knowledge of the ways survey respondents adjust and modify the answer options is important for survey researchers. We will show that participants’ deviations from the script can be a sign of cooperation. The expanded and non-conforming responses we categorize in this article do not always display trouble but instead target different aspects of the interview situation. We further support the qualitative findings with a quantitative analysis, showing that interviewers indeed accept many of the minimal turn expansions, while they probe the more drastic deviations: answers that lack a type-conforming component.

We start with a review of the literature about Conversation Analytic research on question-answer sequences in survey interviews and natural interaction. The literature review also addresses survey pretesting and validity. In the data and methodology section, we explain how we first used Conversation Analysis to identify what survey participants do when they do more than just answer the question. We then follow up on these findings with a quantitative analysis of our categories’ frequency and the interviewers’ acceptance rates. The analysis section similarly consists of two main parts. First, in Section 4, we provide excerpts from three different surveys to explain each identified category. In Section 5, drawing on data from a fourth survey, we show how frequently each of these categories occurs and how likely interviewers are to accept or probe them. Finally, we will discuss the implications of our study.

2 Literature review

2.1 Standardization of interaction in survey interviews

To minimize deviations from the script and thus reduce measurement error, survey questions are designed to result in so-called paradigmatic question-answer (QA) sequences, which are unexpanded sequences consisting of a question and unelaborated answer and sometimes an acknowledgment of the answer (Maynard and Schaeffer 2013), after which the interviewer can move on to the next question. What contributes to preventing QA sequences from expanding – for example, by a repair sequence – is when question recipients provide type-conforming answers. In the case of a yes/no-question, a version of yes or no is type-conforming (Raymond 2003). For wh-questions, type-conforming answers correspond to the projected answer type (Schegloff and Lerner 2009). For example, “who” projects a person and “how many” projects a number. Subsequently, Koole and Verberg (2017) have extended the concept of type-conformity to answers that repeat one of the options presented in an alternative question, like “cash or credit?”

In excerpt 1, we can see three QA sequences that are paradigmatic because all answers are type-conforming. The “how old” question (line 2) receives an “age” answer, “what education” (line 5) is answered with a school type (“MTS”), and the yes/no-interrogative (line 7) receives a yes. (For transcription conventions, please see Appendix A.)[1]

(1)
1 IR: and eh I’d also like to ask some questions about the ↑other persons↑
2 first about your ↓husband= =how old is your husband↑= Q1
3 IE: =.hh my husband is eh forty nine↓ A1
4 (0.2)
5 IR: and e- what- is ↓his highest school ⎡education↑⎤ Q2
6 IE:                 ⎣ .hhh    ⎦.HHH MTS: A2
7 IR: and did he com↓plete that education↑ Q3
8 IE: y↓es↑ A3
9 (0.4)
1 IR: en eh ik zou graag ook wat vragen willen stellen over de ↑andere
2 personen↑ eerst over uw ↓man= =hoe oud is uw man↑= Q1
3 IE: =.hh me man is eh negenenveertig↓ A1
4 (0.2)
5 IR: en e- wat- is ↓zijn hoogst genoten school ⎡opleiding↑⎤ Q2
6 IE:                    ⎣ .hhh    ⎦.HHH MTS: A2
7 IR: en heeft hij die ↓opleiding voltooid↑ Q3
8 IE: j↓a↑ A3
9 (0.4)

2.2 Pre-testing in survey research

To ensure paradigmatic QA sequences, survey designers pre-test questions, so interviewers can effortlessly read them out as worded, and respondents can answer by choosing one of the options provided or implied (Presser et al. 2004). Essentially, pre-testing methods are evaluative in nature and are used to determine whether questions require revision (Yan et al. 2012). Several different pre-testing methods are available, such as expert reviews (Forsyth and Lessler 1991), cognitive interviewing (Willis 2005), interviewer debriefing (Blair and Presser 1992), behavior coding (Fowler 2011), and statistical procedures (Biemer 2004; Yan et al. 2012). These methods differ with respect to reliability, validity, and the types of problems they detect (Presser et al. 2004). A method that evaluates the effect of question-wording on the flow of the interview is behavior coding. This method quantifies deviations from QA sequences that are considered “paradigmatic.” Behavior coding provides an overview of the questions that are not read as scripted, the times respondents ask for clarification of a question, and many more possible deviations.

2.3 Deviations in the answering process

When asked a question, people sometimes do more than answer. A fairly elaborate overview of survey interviewers’ and respondents’ interactional behavior is presented in Ongena and Dijkstra (2007). In their model, respondents’ interactional behaviors are discussed in light of respondents understanding the question and cognitive problems with respect to retrieval, judgment and formatting. The options respondents have in explicit requests for clarification, and actions that clearly constitute a side-track in the conversation (i.e., comments and digressions) will not be further discussed in this article. However, what Ongena and Dijkstra (2007) call “implicit requests” partly overlaps with deviations described in this article. For example, rather than providing just an answer, respondents can indicate that their answer has a degree of imprecision or uncertainty. Schaeffer et al. (1993) describe these markers and how interviewers accept responses with low levels of uncertainty, while probing those presented as more uncertain. In addition to the answer’s level of uncertainty, interviewers, especially in computer-assisted telephone interview (CATI) surveys, can be motivated by time constraints to accept answers with uncertainty or imprecision markers to retain a fast pace of interviewing (Holbrook et al. 2003).

Another example of a deviation in the answering process is not providing a type-conforming answer. According to Ongena and Dijkstra (2010), responses that are not type-conforming are the most important cause for survey interviewers to deviate from their script. In Conversation Analysis (CA), type-conforming responses are defined as “responses that conform to the constraints embodied in the grammatical form” of the question (Raymond 2003: 946). Depending on the question, a type-conforming response can be a yes or no, a number, or a category such as a brand name (Koole and Verberg 2017; Schegloff and Lerner 2009).

Non-type-conforming answers can take the form of “reportings” (Drew 1984), designed to leave it to the recipient to gather the upshot. Such reportings allow respondents to defer judgment, letting the interviewer assign an answer category (Moore 2004; Schaeffer and Maynard 2002).

Finally, responses can be prefaced before or elaborated after the type-conforming answer desired by survey researchers. According to Raymond (2000), a preface to a type-conforming answer attempts to modify the terms of the question and, consequently, the action performed by the following type-conforming response. In other words, the respondent adjusts the question before answering it (Clayman 1993, 2001; Raymond 2000). Elaborations placed after type-conforming answers, however, accomplish a “modification of the stance taken by that type-conforming token” (Raymond 2000: 207).

2.4 Survey data validity

It is unclear how deviations in the question-answering process relate to the validity of survey interview data. Validity refers to the agreement of observed variables (responses given in an interview) with latent variables (the variable or construct of interest). Due to measurement error – for instance, because of respondents’ misunderstanding of questions – observed variables are not always the same as latent variables, and systematic differences lower validity, whereas coincidental differences lower reliability (Alwin 2010). Findings on the relation between deviations and data quality are complex. For example, interruptions may decrease the reliability of responses for questions followed by a definition but have no effects for questions preceded by a definition (Schaeffer and Dykema 2011). Such complications necessitate further development of interaction analysis as a method to improve question-wording. In this article, we have used CA to study deviations in the question-answering process in the context of the interaction in which they occur, thus reformulating existing interaction codes into more meaningful categories.
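The validity/reliability contrast just described can be given a compact classical-test-theory sketch. This formalization is our own illustration, added for clarity; the article itself gives no formulas:

```latex
% Observed response x, latent variable of interest \tau:
x = \tau + \beta + \varepsilon
% \beta: systematic error (e.g. a consistently misunderstood question),
%        which lowers validity;
% \varepsilon: random error with E[\varepsilon] = 0, which lowers reliability,
%              often summarized as the latent-to-observed variance ratio:
\mathrm{reliability} = \frac{\operatorname{Var}(\tau)}{\operatorname{Var}(x)}
```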

3 Data and methodology

This research has a two-stage design consisting of a qualitative study into interactional behavior in standardized survey interviews, followed by quantitative interaction analysis to substantiate the findings of the former.

3.1 Data

For the qualitative analysis in this article, we used three corpora of telephone survey interviews carried out by leading Dutch survey centers, two of which were collected by Hanneke Houtkoop-Steenstra. For the quantitative study, we have used a fourth dataset. For all data, informed consent was obtained, and ethics approval was granted prior to conducting and recording the interviews. Table 1 provides an overview of the surveys included.

Table 1:

Overview of available data.

Data | Purpose | Recording year
Qualitative study
CATI Survey 1 | Adult education | Unknown
CATI Survey 2 (Houtkoop-Steenstra and Van den Bergh 2000) | Introductions and response rates | 1995
CATI Survey 3 | Magazine | 2004
Quantitative study
CATI Survey 4 (Ongena 2005) | Health, spare time and nutrition | 2004

Computer-Assisted Telephone Interviews (CATI) in survey 1 used standardized interviewing techniques (see Houtkoop-Steenstra 2000: 15). Houtkoop-Steenstra’s second corpus was a nationwide survey recorded for a quantitative analysis of how different introductions affect response rates (see Houtkoop-Steenstra and Van den Bergh 2000). The third corpus is a collection of interviews carried out in 2004. These interviews were not recorded for any research purpose beyond the survey itself, which aimed to study respondents’ satisfaction with a magazine subscription.

For the quantitative analysis (Section 5), we turned to a fourth corpus, collected in 2004 as part of a question-wording experiment, which has since been reused for several quantitative projects involving behavior coding (Ongena and Dijkstra 2010). As most questions were derived from existing, frequently used questionnaires, the questionnaire can be considered representative of the average survey. However, the interviewer training emphasized handling non-conforming answers. Half of the interviewers were instructed to always probe with a full list of response alternatives, whereas the other half were allowed to probe with only a selection of alternatives based on the respondent’s answer. The interviewers were given examples of non-conforming answers and how to probe adequately. The 610 completed survey interviews resulted in a dataset of 25,670 QA sequences with 140,117 utterances.

We will display excerpts of the interviews transcribed according to the Conversation Analysis conventions developed by Gail Jefferson (2004) (see Appendix A for an overview of the symbols used). The original data are in Dutch and each fragment is accompanied by an English translation.

3.2 Methods of analysis

For the qualitative research, we used the methodology of Conversation Analysis (CA) (Sidnell and Stivers 2013; see Houtkoop-Steenstra 2000 for a detailed account of CA research of survey interaction). The goal of Conversation Analysis is “the description and explication of the competencies that ordinary speakers use and rely on in participating in intelligible, socially organized interaction” (Heritage and Atkinson 1984: 1). We used CA to explore what respondents accomplish by deviating from or expanding on type-conforming answers and how these turns are understood and treated by the interviewers. The examples in this article are selected from our findings to illustrate patterns found throughout our data; online supplement 2 contains more excerpts for each finding.

For the quantitative study, the transcribed recordings of all 610 interviews of the CATI Survey 4 dataset were coded employing Dijkstra’s (2002) coding scheme, using the Sequence Viewer Program (www.sequenceviewer.nl). Dijkstra’s (2002) scheme is multivariate: every utterance in the conversation is evaluated on a set of variables, each with different values. The combination of values for an utterance then yields a code string that is a meaningful description of that utterance. For instance, the code string RA0M means that the respondent (R) gives an answer (A) that is directly related to the question (0) but does not match the pre-specified response options (M), whereas the code string IQ0A means that the interviewer (I) asks a question (Q) that is directly related to the question as given in the questionnaire (0), and is posed adequately (A). The reliability of the initial coding was assessed by coding a random sample of 10 % of all QA sequences (Cohen’s Kappa = 0.81). For this study, the original coding was reviewed by a research assistant. More details on this review procedure are given in Section 5.
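To make the coding scheme concrete, a minimal sketch follows. The four-field layout is inferred from the two example code strings above and is hypothetical; the actual Dijkstra (2002) scheme distinguishes more variables and values. The kappa function is the standard computation of Cohen’s Kappa used for reliability checks of this kind:

```python
from collections import Counter

def parse_code(code_string):
    """Split a code string like 'RA0M' into its component variables.

    Field layout inferred from the examples in the text (hypothetical):
    speaker (I/R), action (Q/A), relation to the scripted question
    (0 = directly related), evaluation (A = adequate, M = mismatch
    with the pre-specified response options).
    """
    return {
        "speaker": code_string[0],
        "action": code_string[1],
        "relation": code_string[2],
        "evaluation": code_string[3],
    }

def cohens_kappa(coder1, coder2):
    """Cohen's Kappa for two coders' parallel lists of utterance codes."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    # Observed agreement: proportion of utterances coded identically.
    p_observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: product of the two coders' marginal proportions.
    counts1, counts2 = Counter(coder1), Counter(coder2)
    p_expected = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

A kappa of 1 indicates perfect agreement and 0 indicates chance-level agreement, so the 0.81 reported above reflects agreement well above chance on the 10 % sample.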

In what follows, we first explore respondents’ expanded and non-conforming responses using Conversation Analysis, clarifying what is accomplished by such responses. The answers fall on a continuum, ranging from minimally expanded answers, to more drastically expanded ones that still contain a type-conforming response, to turns that omit a type-conforming response. Then, our quantitative analysis will show how likely interviewers and respondents are to accept or initiate repair on each of these types of expanded and non-conforming responses.
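This continuum can be caricatured in code. The sketch below is our own illustrative toy, not the authors’ instrument: given a checker for what counts as a type-conforming token for a question type, it decides whether a turn is exactly such a token, contains one plus extra material, or lacks one altogether.

```python
# Toy illustration (our addition): place a response turn on the continuum
# from paradigmatic via expanded to non-conforming.
NUMBER_WORDS = {"zero", "one", "two", "three", "four", "five", "six"}

CHECKERS = {
    # yes/no-interrogatives project a version of yes or no (Raymond 2003)
    "yes/no": lambda tok: tok in {"yes", "no"},
    # "how many" projects a number (Schegloff and Lerner 2009)
    "how many": lambda tok: tok.isdigit() or tok in NUMBER_WORDS,
}

def classify_turn(question_type, turn):
    """Return 'type-conforming', 'expanded', or 'non-conforming'."""
    check = CHECKERS[question_type]
    tokens = [t.strip(".,") for t in turn.lower().split()]
    if len(tokens) == 1 and check(tokens[0]):
        return "type-conforming"   # paradigmatic: just the token
    if any(check(t) for t in tokens):
        return "expanded"          # token plus extra material
    return "non-conforming"        # no token: the interviewer must act
```

For example, under the "yes/no" checker, a bare "no" comes out type-conforming, "no also not" comes out expanded, and a reporting with no yes/no token at all comes out non-conforming, mirroring the three-way distinction drawn in the analysis below.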

4 Qualitative data analysis

Our analysis below focuses on the interviewees’ expansions on and deviations from type-conforming answers, and how the interviewers respond. We found that many of the deviations and expansions do not interfere with the interviewer’s ability to record the answer and move on to the next question. The turn expansions are organized by size and impact, starting with the smallest, which do not endanger the survey’s standardization process, since they do not necessitate intervention by the interviewer. The excerpts in this section were chosen from a collection of similar exchanges to exemplify and illustrate each finding.

4.1 Serial extras

We have called the first type of turn expansions discussed here “serial extras.” These extras signal the answer’s relation to previous answers and thus its position in a series. In survey interviews, questions often come in coherent series, with questions about related topics grouped together. When such related questions occur, the respondents sometimes display sensitivity to the position of their answer within the list.

In the following example, we come across a related block of questions with one parent question. The respondent is asked whether they read certain magazines (lines 1–2), projecting a list of related questions. Although the overarching question (“which […] magazines do you read”) is a wh-question, the subsequent list of magazines makes acceptance or rejection relevant. Yes and no are the most straightforward way of accomplishing this task, making those two options type-conforming responses.

In this excerpt, a notable pattern unfolds. The respondent signals similarity between consecutive items by responding first with a simple “no” (arrows “a”) and next with “also not” (arrows “b”).

(2)
1 IR: and which of the following types of ↑magazines do you read
2 regularly [broad]casting magazines↑
3 IE:        [ oh ]           no↑
(lines 4-15 deleted)
16 IR: =and expensive ↓monthlies↑ such as
17 Marie ↓Claire↑ or Ele↓gance↑
18 IE: a→ no↑
19 (0.2)
20 IR: o↓pinion ↑magazines↑= =Elsevier and HP ↓de Tijd↑=
21 IE: b→ =m- no also not↑=
22 IR: =s↓ports magazines↑
23 (0.8)
24 IE: a→ no↑
25 IR: youth and ↓juvenile magazines↑
26 IE: b→ also not,
1 IR: en welke van de volgende soorten ↑tijd↑schriften leest u
2 geregeld [om]roepbladen↑
3 IE:      [oh ]      nee↑
(lines 4-15 deleted)
16 IR: = en duurdere ↓maandbladen↑ zoals
17 Marie ↓Claire↑ of Ele↓gance↑
18 IE: a→ nee↑
19 (0.2)
20 IR: o↓pinie↑bladen↑= =Elsevier en HP ↓de Tijd↑=
21 IE: b→ =m- nee ook niet↑=
22 IR: =s↓portbladen↑
23 (0.8)
24 IE: a→ neej↑
25 IR: jeugd en ↓jongeren blade↑
26 IE: b→ ook niet,

In lines 18 and 24, the respondent replies “no” to whether they read a particular type of magazine. Their response consists of a type-conforming answer and nothing else. In both cases, the next question receives another negative reply, but this time the respondent expands it (“no also not” in line 21, and “also not” in line 26), signaling sensitivity to the sequential position of this particular item. With “also,” the respondent displays awareness that the answer is part of a list and that they have given the same answer more than once. Because they thus treat their answer as part of a series, we refer to this as a “serial extra.” The serial extra avoids the impression that they are answering on automatic pilot and instead marks each answer as genuine.

The serial extra does not influence the way this sequence runs off. The interviewer proceeds by asking the next question, implicitly accepting the answer. This reaction is no different from those in unelaborated, paradigmatic instances: a short pause and the subsequent delivery of the next item with a drop-rise question contour. In other words, the paradigmatic QA sequence is left intact.

Since the interviewer’s behavior does not change due to the serial extras, we can assert that the standardization of the interview is not endangered by this type of turn expansion. Our quantitative study reported below enhances the reliability of this finding.

4.2 Uncertainty markers

Another type of answer expansion is the imprecision marker or uncertainty marker (see also Schaeffer et al. 1993). Like the serial extras discussed above, uncertainty markers are often small and appear to have little impact on the QA sequence. Respondents sometimes design their answers as guesses or approximations by adding items like “I think” or “just about” to their answers.

The following excerpt takes place after the respondent has indicated which newspapers they sometimes read. The respondent now needs to answer how many of the latest six issues they have read, so a type-conforming answer consists of a number between zero and six.

(3)
1 IR: and of the ↓Telegraaf how many of the last six issues↑
2 (0.7)
3 IE: → eh about two,
4 (0.8) ((tick tick))
5 IR: and of the Volkskrant↑
6 (0.8)
7 IE: → a↓bout three↑
8 (0.3)
9 IR: yes, .h
10 (.) ((tick))
11 IR: and of your regional ↓daily↓
1 IR: en van de ↓telegraaf hoeve van de laatste zes nummers↑
2 (0.7)
3 IE: → eh stuk of twee,
4 (0.8) ((tik tik))
5 IR: en van de Volkskrant↑
6 (0.8)
7 IE: → stuk ↓of drie↑
8 (0.3)
9 IR: ja, .h
10 (.) ((tik))
11 IR: en van uw regionale ↓dagblad↓

In lines 3 and 7, the respondent marks a slight lack of precision in answering by saying “about.” The answers are designed to convey that they make something of a guess at how many newspapers they have read. Paradoxically, this imprecise design carefully calibrates the quality of information provided to the interviewer’s needs in conducting the survey.

Despite the display of imprecision, the answers are treated by both participants as final. Following the answer, the interviewer takes a moment to type, and the respondent could, but does not, use this gap for repair, letting the answer stand. Similarly, the interviewer treats the level of certainty as acceptable for this question. After the response, they can be heard to type information into their computer (see Komter 2002 on how typing after an answer signifies acceptance), and in lines 5 and 11 they accept the prior turn by moving on to the next question. These uncertainty markers do not cause the interviewer to deviate from the script. While the respondent’s answers may be less precise than what the survey designers had in mind, the research instrument itself remains standardized.

4.3 Prefaces before and elaborations after type-conforming answers: working with the terms of the question

Just like in the previous sections, the turn expansions discussed in this section occur in addition to a type-conforming answer. The respondent chooses one of the answer categories and adds something before or after the answer. Although these expansions are not as small as serial extras or uncertainty markers, it will become clear that they do not necessarily require the interviewer to stray from the script. In fact, some of them are designed by the respondent to make their answer even more precise than the type-conforming answer alone.

4.3.1 Answer + elaboration

First, we discuss an initial type-conforming response with an elaboration placed after it. By giving a type-conforming response in the slot immediately following the question, respondents accept the terms of the question. The post-elaboration displays how the interviewer should hear the type-conforming answer, showing that the type-conforming response alone is not an ideal match with the respondent’s situation. Elaborating on the type-conforming answer changes the meaning of that answer. Many survey questions allow the respondent to choose from a limited number of options, sometimes as few as two, such as yes/no questions. However, the respondent’s real-life situation is usually more nuanced than the available options, and they sometimes express these nuances by amending their response with a post-elaboration.

In the following excerpt, the interviewer asks a yes/no-question to which the respondent answers “no,” after which they elaborate on the answer to make it more informative.

(4)
1 IR: and are you planning to move house in the next ↓twelve months↑=
2 IE: → =↓no, we’ll be staying here↓ [ heh .h ]
3 IR:            [ okay↑] nowadays the terms modal
1 IR: en bent u van plan in de komende twaalf maanden te ver↓huizen↑=
2 IE: → =↓nee, we blijven hier wel↓ [ heh .h ]
3 IR:            [ okee↑] tegenwoordig worden vaak de

The respondent’s answer in line 2 is delivered without delay or hesitation markers, thus giving the impression of being unproblematic. The respondent then tags something onto the answer. This elaboration targets the length of time in which the respondent is “not moving house,” in the question limited to “the next twelve months” (line 1). The respondent’s elaboration “we’ll be staying here” implies that they will not be moving house at all.

While this elaboration may serve a purpose for the respondent, it does not interfere with the standardized path of the survey interview. The interviewer receipts the elaborated answer (“okay”) and asks the next question without straying from the path of standardization.

4.3.2 Preface + answer

Next, we explore a response turn consisting of a preface followed by a type-conforming answer. Recipients of yes/no questions are confronted with an either/or choice. When they use just the type-conforming response, they align with all the terms of the question. Prefaces can modify those terms, showing the interviewer how the respondent arrived at the answer and which question the respondent is answering.

In the following excerpt, the respondent prefaces their affirmative answer to the yes/no question of whether they ever smoke, “even if that is ever so rarely” (lines 1–2). This question is designed to include all respondents who smoke, at whatever frequency, in the affirmative category. This respondent does not simply answer the question affirmatively but prefaces the answer with “I smoke” (line 4).

(5)
1 IR: =.hhh then a completely different ↑sub↓ject,=
2 =do you ever ↑smoke ↓even if that is ever so rarely↓
3 (.)
4 IE: → eh I ↓smoke yes↑
5 (0.8) ((tik))
6 IR: and then I’ll now name a few ↓smoking articles↑=
1 IR: =.hhh dan ’n heel ander ↑onder↓werp,=
2 =↑rookt u wel eens ↓al is dat zelden↓
3 (.)
4 IE: → eh ik ↓rook ja↑
5 (0.8) ((tik))
6 IR: en dan noem ik nu ‘n aantal ↓rookwaren op↑=

Because of the way the question in line 2 is formulated, the affirmative answer category is so loosely defined that a simple “yes” would not make clear where on the continuum from “party smoker” to “chain smoker” this respondent can be placed. The respondent reformulates part of the question before giving an affirmative type-conforming response. Thus, even though neither the questionnaire nor the interviewer asks for this information, they make a point of telling the interviewer where they fall on the yes/no-continuum.

This respondent changes the terms of the question before giving a type-conforming answer. As a result, their answer is responsive to an adapted version of the question, without the parts about smoking “at times” or “ever so rarely.” This way, the respondent displays that these portions of the question do not apply. What is left is the everyday way of saying that they are a habitual smoker. This rephrased and reduced version of the question is then “responded to” with “yes.”

The prefaces in our data redefine a portion of the question before the answer is given. In contrast to evasive politicians who manipulate news interview questions to provide prepared answers, survey respondents employ these prefaces to give precise type-conforming answers. This allows the interviewer to interfere if the respondent displays that they misinterpreted the question or the scope of an answer category. Issues with the questions are thus brought to the surface where both participants can address them. However, the interviewer is not forced to go off-script since the respondent provides a type-conforming response and even a clarified one. As shown in the example above, the interviewer moves on to the next question.

4.4 Responses that do not contain a type-conforming answer

Now, we look at sequences in which respondents produce an answer to the question without supplying a type-conforming response in the first turn after the question. Interviewers accept some of these non-conforming answers, while others are probed, forcing the interviewer to go off-script. These non-conforming responses are most problematic for standardization, as they do not contain a type-conforming component that can be entered as an answer. Instead, interviewers need to interfere one way or another: either they interpret the response and enter an answer in the computer without expanding the sequence, or they expand the QA sequence by initiating repair.

Some non-conforming answers take the shape of reportings: descriptions of behavior or attitudes. By describing, respondents avoid committing to one of the answer options because they either are not sure which applies to their situation or resist the choices offered. We start by discussing a case where the respondent employs such a reporting to defer judgment, allowing the interviewer to assign one of the answer categories.

4.4.1 Non-conforming answers that get accepted

The following excerpt shows the interviewer accepting a reporting answer. The interviewer asks how the respondent buys his lottery tickets and gives two answer options: cash at a sales point or through the giro or bank (lines 1–3). A repeat or partial repeat of one of these options would constitute a type-conforming response; however, the respondent uses a different formulation to answer the question (line 7).

(6)
1 IR: eh state lottery tickets can be bought cash at one of the
2 ↑sales ↓points but one can also take part through giro or
3 bank↑ .h=
4 IE: =yes↑=
5 IR: =which way do you ↑usually take ↓part↑
6 (0.4)
7 IE: → eh just↑ I get them myself from the post office,
8 (.)
9 IR: ↑cash↓=
10 IE: =-yes- H=
11 IR: =and do you ever take part in the state lottery↓ jackpot↑
1 IR: eh staatsloten ko- ks staatsloten kunnen contant worde gekocht bij
2 een van de ↑verkoop ↓punten maar men kan ook via giro of
3 bank ↑meespelen↑ .h=
4 IE: =ja↑=
5 IR: =welke wijze speelt u ↑meestal ↓mee↑
6 (0.4)
7 IE: → eh gwoen↑ ik haal ze zelf van ’t postkantoor af,
8 (.)
9 IR: ↑contant↓=
10 IE: =-ja- H=
11 IR: =en speelt u mee in de staatsloterij↓ jackpot↑

In line 7, the respondent reports the location where they collect their lottery tickets. In line 9, the interviewer formulates this response into one of the answer options: “cash” (as opposed to an automatic debit from one’s bank account), thus providing their understanding of the respondent’s talk. This understanding is stated matter-of-factly, with a falling intonation, rather than as a probe redoing the question. After the respondent has confirmed this understanding without delay (line 10), the interviewer immediately moves on to the next question (line 11).

This non-conforming response provides enough information for the interviewer to select one of the answer categories. However, the interviewer briefly goes off-script to confirm their interpretation of the non-conforming response, creating an unscripted expansion of the QA sequence.

4.4.2 Non-conforming answers that do not get accepted

Non-conforming answers do not always get accepted. In lines 1–2 of excerpt 7 below, the interviewer asks how many of the past six issues of a certain newspaper the respondent has read. In answer to an earlier question, the respondent has indicated that they sometimes read this newspaper. Here, a number between zero and six qualifies as a type-conforming response (no newspapers on Sundays), but the respondent provides a description rather than a number (lines 3–5).

(7)
1 IR: =.h and ↑how many of the past six issues of the Telegraaf↑ did
2 you re[ad ]
3 IE: →    [WE]ll↑ most of the time eh I buy eh I’m not subscribed to it
4 ↓but if I’m in a supermarket ↓or in town↓ then on
5 Saturdays I always buy the nicely thick newspaper for the ↓weekend
6 (.)
7 IR: did you do that- this Saturday as well↑
8 (0.2)
9 IE: .h Eh the pAst Saturday indeed- NnoT↓=
10 IR: =so you read ↑no Telegraaf for the whole week↓=
11 IE: =↑-no↓=
12 IR: =okay .h how many of the past six issues of the
1 IR: =.h en ↑hoeveel van de laatste zes nummers van de Telegraaf↑ heeft
2 u gele[zen ]
3 IE: →   [NOU] ↑meestal eh koop ik eh ik ben d’r niet op geabonneerd
4 ↓maar als ik in een supermarkt kom ↓of in ‘t dorp↓ dan koop ik
5 ’s Zaterdags altijd de lekker dikke krant voort ↓weekend
6 (.)
7 IR: heeft u dat- deze zaterdag ook gedaan↑
8 (0.2)
9 IE: .h Eh de Aafgelope zaterdag inderdaad- NnieT↓=
10 IR: =dus u heeft de hele week ↑geen Telegraaf gelezen↓=
11 IE: =↑-nee↓=
12 IR: =okee .h hoeveel van de laatste zes nummers van de

In line 3, the respondent starts with a loud and stretched “well,” which alerts the interviewer that a nonstraightforward, multi-unit response is coming up (see Heritage 2015; Schegloff and Lerner 2009). What follows is an extended response turn without a type-conforming answer. Indeed, the response does the work of avoiding saying “zero,” which, although it would be type-conforming, is oriented to as dispreferred (see Pomerantz 1984), possibly because it would raise questions about the veracity of their earlier claim. Instead, they describe their usual behavior, starting the description with “most of the time,” setting up a contrast with “this time.”

When the interviewer does not accept this reporting and probes for a different answer (line 7), the respondent communicates more than just a negative response (line 9). By including a timeframe, the respondent displays that the answer may have been different outside the timeframe. This way, the respondent leaves intact the image created in the previous series of questions, that they sometimes read this newspaper, just not recently, which also allows the interviewer to pick either the typical behavior or what happened this week for entry in the survey.

4.5 Summary of qualitative analysis

Our qualitative analysis shows that departures from standardization vary, and different expanded and non-conforming responses have different impacts on the interaction. We have shown type-conforming answers accompanied by serial extras, uncertainty markers, and prefaces before and elaborations after type-conforming answers, as well as responses without a type-conforming answer. We also found that type-conforming responses with expansions are more compatible with the paradigmatic QA sequence than non-conforming responses. Table 2 lists the category, excerpt number, short examples, (interactional) purpose of the action, and whether the interviewer accepts the answer.

Table 2:

Overview of expansions.

Response type | Excerpt | Example | Purpose | Accepted?
Serial extra | 2 | nee ook niet (“no also not”); ook niet (“also not”) | Indicate similarity to previous answer | Yes
Uncertainty marker | 3 | een stuk of twee (“about two”) | Indicate imprecision | Yes
Answer + elaboration | 4 | nee, we blijven hier wel (“no, we’ll be staying here”) | Change scope of the answer | Yes
Preface + answer | 5 | ik rook ja (“I smoke yes”) | Change the terms of the question | Yes
Non-conforming answer | 6 | ik haal ze zelf van ‘t postkantoor af (“I get them myself from the post office”) | Reporting, letting IR select correct answer | Yes
Non-conforming answer | 7 | ‘s Zaterdags altijd (“On Saturdays I always”) | Reporting, letting IR select correct answer | No

5 Quantitative data analysis

In Table 2, we claim that, except for certain non-conforming answers, the interviewer tends to accept expanded answers (e.g., by refraining from probing and proceeding to the next question). We also claim that most turn expansions are unproblematic, but from the qualitative analyses, we cannot tell how frequently each of the types of expansions occur in survey interviews. Therefore, we test our claim and explore the occurrence of answer expansions in a quantitative analysis. As explained in Section 3, a separate dataset of 610 interviews was used for this quantitative study. A research assistant systematically reviewed the original coding to assess whether it adequately identified prefaces before and elaborations after type-conforming answers. In addition, serial extras and uncertainty markers were coded through a text search using the Sequence Viewer program.

Since non-conforming answers had already been reliably coded with the same definition, these were not reviewed again. A distinction was made between non-conforming answers that do not require probing and non-conforming answers that do. The former are answers that are not worded literally as one of the listed response alternatives but that interviewers can easily translate into one (for instance, the respondent answers “yes” whereas the response options are “agree” or “disagree”). Non-conforming answers that require probing (for example, the respondent answers “more or less” when the response options are “agree” or “disagree”) do not allow such a translation.
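The coding distinction can be illustrated with a toy classifier. The mapping table and function below are hypothetical illustrations, not the coders' actual scheme; the sketch assumes a non-conforming answer is translatable only when it maps unambiguously onto exactly one of the listed response alternatives.

```python
# Hypothetical sketch of the coding distinction described above.
# An answer is translatable (no probing needed) only when it maps
# onto exactly one of the listed response alternatives.

TRANSLATIONS = {  # illustrative mappings only, not the actual coding scheme
    "yes": {"agree"},
    "no": {"disagree"},
    "more or less": {"agree", "disagree"},  # ambiguous: probing required
}

def needs_probing(answer: str, options: list[str]) -> bool:
    """True if the answer cannot be entered without interviewer probing."""
    if answer in options:       # type-conforming: enter it directly
        return False
    candidates = TRANSLATIONS.get(answer, set()) & set(options)
    return len(candidates) != 1  # probe unless exactly one option matches

print(needs_probing("yes", ["agree", "disagree"]))           # False
print(needs_probing("more or less", ["agree", "disagree"]))  # True
```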

5.1 Results

The first column of Table 3 shows the frequency of occurrence of the six types of answer attempts. Non-conforming answers are the most frequent; as a collapsed category, they occur in 16 percent (7.1 + 8.9) of all QA sequences (see second column) and comprise 63.6 percent (28.6 + 35.0) of all expansion utterances. The next most frequent are uncertainty markers, post-elaborations, and prefaces (about 3 percent each), whereas serial extras are least frequent (1 percent). In 19,717 QA sequences (75 %), none of the turn expansions occur.

Table 3:

Number of QA sequences with expansions or non-conforming answers and interviewer reactions.

Response type | Freq. | % of QA sequences | % of expansion utterances | IR accepts: Freq. (%) | IR does not accept: Freq. (%) | Respondent repair: Freq. (%)
Serial extra | 269 | 1.0 | 4.1 | 244 (90.7) | 17 (6.3) | 8 (3.0)
Uncertainty marker | 764 | 2.9 | 11.6 | 523 (68.5) | 172 (22.5) | 69 (9.0)
Answer + elaboration | 723 | 2.7 | 11.0 | 411 (56.8) | 217 (30.1) | 95 (13.1)
Preface + answer | 639 | 2.4 | 9.7 | 471 (73.7) | 145 (22.7) | 23 (3.6)
Non-conforming, not requiring probing | 2,307 | 8.9 | 35.0 | 597 (25.9) | 1,314 (57.0) | 396 (17.1)
Non-conforming, requiring probing | 1,880 | 7.1 | 28.6 | 203 (10.8) | 1,432 (76.2) | 245 (13.0)
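The collapsed percentages discussed in the text can be recomputed from the raw frequencies in Table 3. A minimal Python sketch; the total number of QA sequences is inferred from the 19,717 sequences reported as 75 %:

```python
# Raw frequencies of the six answer types (Table 3, "Freq." column).
freqs = {
    "serial extra": 269,
    "uncertainty marker": 764,
    "answer + elaboration": 723,
    "preface + answer": 639,
    "non-conforming, not requiring probing": 2307,
    "non-conforming, requiring probing": 1880,
}
total_expansions = sum(freqs.values())  # all expansion utterances

# Collapsed non-conforming category.
nc = (freqs["non-conforming, not requiring probing"]
      + freqs["non-conforming, requiring probing"])

# Share of all expansion utterances that is non-conforming.
print(round(100 * nc / total_expansions, 1))  # 63.6

# Total QA sequences implied by "19,717 sequences (75 %) with no expansion".
total_qa = round(19717 / 0.75)
print(total_qa, round(100 * nc / total_qa, 1))  # 26289 15.9 (reported as 16 %)
```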

Exploratory analysis shows three major outcomes of expansions: the interviewer accepts (by means of a short acknowledgment or by asking the next question); the interviewer does not accept (by means of a probe, repeating the question or the response alternatives); or the respondent self-repairs (i.e., restates their answer).

The type of turn expansion and subsequent outcome are significantly related (χ²(10) = 1899.28, p < 0.01). After a serial extra, the interviewer is much more likely to accept the answer (90.7 %) than after an uncertainty marker (68.5 %), an answer with post-elaboration (56.8 %), a prefaced answer (73.7 %), a non-conforming answer that does not require probing (25.9 %), or a non-conforming answer that does require probing (10.8 %). The analysis shows that, except for non-conforming answers, the interviewer accepts expansions in more than half of the cases. Since interviewers are trained to probe after non-conforming answers that require probing, it is no surprise that these are followed by interviewer probing in 76.2 % of the cases. However, these answers are also often self-repaired (13.0 %), leaving just 10.8 % accepted as given. Serial extras are least likely to be followed by interviewer probing (6.3 %) or respondent repair (3.0 %), and similar results are found for prefaced type-conforming answers.
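The reported test statistic can be reproduced from the frequency counts in Table 3. Below is a minimal pure-Python sketch of Pearson's chi-square test of independence on the 6 × 3 table of response types by outcomes (no statistics library assumed):

```python
# Observed counts from Table 3: rows = response types, columns =
# (IR accepts, IR does not accept, respondent self-repairs).
observed = [
    [244, 17, 8],      # serial extra
    [523, 172, 69],    # uncertainty marker
    [411, 217, 95],    # answer + elaboration
    [471, 145, 23],    # preface + answer
    [597, 1314, 396],  # non-conforming, not requiring probing
    [203, 1432, 245],  # non-conforming, requiring probing
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# with expected count E = row_total * col_total / grand_total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)  # (6-1)*(3-1) = 10

# The article reports chi2(10) = 1899.28, far above the 0.01 critical
# value for df = 10 (about 23.2), hence p < 0.01.
print(round(chi2, 2), dof)
```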

6 Discussion and conclusion

In survey methodology, the assumption is that unexpanded, type-conforming answers provide the most reliable information. Such answers require no probing or interpretation from the interviewer. This way, no interviewer error is introduced, and the recorded answers reflect characteristics of that particular respondent only. Survey researchers devote much time and energy to designing questions so that respondents can provide such unexpanded, type-conforming answers. Most of the time, this is precisely what respondents do (see Ongena and Dijkstra 2006). But sometimes, they offer expanded responses or responses not formulated in terms of any of the answer options.

The qualitative analysis reported in this article showed that the actions performed by survey participants in expanded and non-conforming responses are limited in number. Respondents use serial extras, uncertainty markers, prefaces and post-elaborations, as well as reportings. These actions certainly do not all display trouble answering the question, nor do they all signal that the question was poorly designed. Expansions may simply arise from the order in which the respondent happens to give their answers, or aim to adjust the meaning of the type-conforming answer to the respondent’s specific situation.

Respondents in standardized survey interviews are usually presented with a choice of answers from a restricted set of categories. Questions with just two answer categories are not unusual (such as yes/no), but even when there are more than two categories, they still restrict the respondent’s freedom of answering. A mundane consequence of this limited choice is that respondents may find themselves repeatedly giving the same answer. In Section 4.1, we showed that respondents can mark such patterns by inserting a serial extra.

Another consequence of limited answer options is that none of the type-conforming alternatives may match the respondent’s particular situation, even if the options are perfectly mutually exclusive. In Section 4.3, respondents prefaced or elaborated on their type-conforming responses to resolve dichotomies that were too strict or too broad for their circumstances, or to challenge presumptions that did not apply to them.

In Section 4.4, we discussed respondent turns that do not contain a type-conforming answer at all. In some cases, respondents display trouble mapping their behavior onto one category, offering a reporting instead. They leave it to the interviewer to gather from the reporting which answer category fits their situation best. Interviewers accept some of these, while not accepting others.

In Section 5, we reported a quantitative analysis to empirically test the claims in Section 4. This analysis showed that non-problematic expansions are much more likely to be accepted (i.e., not followed by interviewer probing) than problematic expansions. In the CATI survey analyzed, interviewers were trained to probe after non-conforming answers, but as soon as a conforming answer is given (either after probing or after respondent-initiated repair), interviewers accept the answer given. We recommend future research on which non-conforming responses elicit interviewer probing, and how they differ from non-conforming responses that are accepted.

We especially see acceptance without probing in the case of serial extras. Since serial extras only indicate similarity with a previous answer, there is little reason for interviewers to intervene. Similarly, prefaced type-conforming answers received little intervention. With prefaces, respondents externalize thought processes (see also Ongena and Dijkstra 2007). Because the answer follows the preface, interviewers have less reason to intervene than with elaborations placed after a type-conforming answer, and respondents have little reason to self-repair.

This article shows that, overwhelmingly, respondents expand their type-conforming responses or give non-conforming responses for reasons other than having trouble understanding the question or the answer options. Respondents give expanded or non-conforming answers when the question and its answer categories are clear, and even when it is clear which category applies to them. Questions like “are you planning to move house in the next twelve months,” “do you ever smoke,” or “how many of the past six newspapers have you read” do not in themselves cause problems in understanding. The parameters of the question and the way it needs to be answered are unproblematic. Instead, when respondents do not simply pick a category but expand their answer, they try to be understood correctly and represent their situation accurately.


Corresponding author: Yfke Ongena, University of Groningen, Oude Kijk in ’t Jatstraat 26, 9712 EK, Groningen, The Netherlands, E-mail:
This article is an updated and expanded version of chapter 4 of Sanne van ’t Hof’s (2006) PhD dissertation From Text to Talk: Answers and their Uptake in Standardised Survey Interviews (Utrecht University). The chapter contains more examples than could be included in this shorter journal article; the current version, however, contains a new quantitative analysis. This research was also presented at the QDET2 conference in 2016.


Acknowledgments

We thank Paul Drew for his comments on parts of the original analysis and Geoffrey Raymond for his comments on parts of the current version.

Competing interests: No potential conflict of interest was reported by the authors.

Appendix A: Transcription conventions

IR: Indicates that it is the interviewer speaking
IE: Indicates that it is the respondent/interviewee speaking
(0.4) Silence (of 0.4 s)
(.) A beat of silence, less than 0.2 s
⌈ ⌉ ⎣ ⎦ Overlapping talk between brackets
word= =word Speaker latches his talk onto that of the previous speaker, leaving no gap. Also used when a speaker ‘rushes through’ to a next TCU without leaving a gap. The equal sign is placed at the end of the previous TCU and at the beginning of the next.
word Stressed syllable or sound
WORD Upper case indicates increased volume compared to the surrounding talk
°word The degree sign indicates talk that is quieter than the surrounding talk
wo:rd Stretching of the sound that is followed by the colons. One colon indicates a stretching of about 0.2 s
word↑ Rise in pitch
word↓ Drop in pitch
↑ word↓ Pitch rises and then drops (or the other way around)
word, The pitch rises to mid
wo- Indicates that a word or sound is cut off, usually with a stop
>word< The talk between these brackets is sped up relative to the surrounding talk
<word> The talk between these brackets is slower than the surrounding talk
hH Hearable breathing, probably breathing out. Upper case indicates that the breathing is particularly loud. One ‘h’ indicates a breath of about 0.2 s.
.hh Hearable inbreath. Upper case indicates that the breathing is particularly loud. One ‘h’ indicates a breath of about 0.2 s.
(h) Laughter particles during talk
heheh A rendition of laughter
(word) Between brackets is talk that is difficult to understand. It is a candidate hearing.
(***) In the absence of a candidate hearing, these symbols are provided. Each star indicates one syllable of indistinguishable talk.
((title)) The transcriber has inserted a comment or replaced a word or name
.t .pt .mt .kl etc. Clicks, often turn-initial. Transcription takes into account where the sound is produced (p, m for labial clicks, t for alveolar clicks, etc.)
((tik)) Audible strike on the interviewer’s keyboard

References

Alwin, Duane F. 2010. How good is survey measurement? Assessing the reliability and validity of survey measures. In Peter V. Marsden & James D. Wright (eds.), Handbook of survey research, 405–434. Bingley, UK: Emerald Group.

Biemer, Paul. 2004. Modeling measurement error to identify flawed questions. In Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin & Eleanor Singer (eds.), Methods for testing and evaluating survey questionnaires, 225–246. New York: Wiley. https://doi.org/10.1002/0471654728.ch12.

Blair, Johnny & Stanley Presser. 1992. An experimental comparison of alternative pretest techniques: A note on preliminary findings. Journal of Advertising Research 32(2). 2–5.

Cicourel, Aaron V. 1982. Interviews, surveys, and the problem of ecological validity. The American Sociologist 17. 11–20.

Clayman, Steven E. 1993. Reformulating the question: A device for answering/not answering questions in news interviews and press conferences. Text 13(2). 159–188. https://doi.org/10.1515/text.1.1993.13.2.159.

Clayman, Steven E. 2001. Answers and evasions. Language in Society 30(3). 403–442. https://doi.org/10.1017/S0047404501003037.

Dijkstra, Wil. 2002. Transcribing, coding, and analyzing verbal interactions in survey interviews. In Douglas W. Maynard, Hanneke Houtkoop-Steenstra, Nora C. Schaeffer & Johannes Van der Zouwen (eds.), Standardization and tacit knowledge: Interaction and practice in the survey interview, 401–426. New York: Wiley.

Drew, Paul. 1984. Speakers’ reportings in invitation sequences. In J. Maxwell Atkinson & John Heritage (eds.), Structures of social action. Studies in conversation analysis, 129–151. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511665868.010.

Forsyth, Barbara H. & Judith T. Lessler. 1991. Cognitive laboratory methods: A taxonomy. In Paul P. Biemer, Robert M. Groves, Lars E. Lyberg, Nancy A. Mathiowetz & Seymour Sudman (eds.), Measurement errors in surveys, 393–418. New York: Wiley. https://doi.org/10.1002/9781118150382.ch20.

Fowler, Floyd J. 2011. Coding the behavior of interviewers and respondents to evaluate survey questions. In Jennifer Madans, Kristen Miller, Aaron Maitland & Gordon Willis (eds.), Question evaluation methods: Contributing to the science of data quality, 7–22. New York: Wiley. https://doi.org/10.1002/9781118037003.ch2.

Fowler, Floyd J. & Thomas W. Mangione. 1990. Standardized survey interviewing: Minimizing interviewer-related error. London: Sage. https://doi.org/10.4135/9781412985925.

Heritage, John. 2015. Well-prefaced turns in English conversation: A conversation analytic perspective. Journal of Pragmatics 88. 88–104. https://doi.org/10.1016/j.pragma.2015.08.008.

Heritage, John & J. Maxwell Atkinson. 1984. Introduction. In J. Maxwell Atkinson & John Heritage (eds.), Structures of social action. Studies in conversation analysis, 1–15. Cambridge: Cambridge University Press.

Holbrook, Allyson L., Melanie C. Green & Jon A. Krosnick. 2003. Telephone versus face-to-face interviewing of national probability samples with long questionnaires: Comparisons of respondent satisficing and social desirability response bias. Public Opinion Quarterly 67(1). 79–125. https://doi.org/10.1086/346010.

Houtkoop-Steenstra, Hanneke. 2000. Interaction and the standardized survey interview: The living questionnaire. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511489457.

Houtkoop-Steenstra, Hanneke & Huub van den Bergh. 2000. Effects of introductions in large-scale telephone survey interviews. Sociological Methods and Research 28(3). 281–300. https://doi.org/10.1177/0049124100028003002.

Jefferson, Gail. 2004. Glossary of transcript symbols with an introduction. In Gene H. Lerner (ed.), Conversation analysis: Studies from the first generation, 13–23. Amsterdam: John Benjamins. https://doi.org/10.1075/pbns.125.02jef.

Komter, Martha L. 2002. The construction of records in Dutch police interrogations. Information Design Journal + Document Design 11(2/3). 201–213. https://doi.org/10.1075/idj.11.2.12kom.

Koole, Tom & Nina Verberg. 2017. Aligning caller and call-taker: The opening phrase of Dutch emergency calls. Pragmatics and Society 8(1). 129–153. https://doi.org/10.1075/ps.8.1.07koo.

Maynard, Douglas W. & Nora C. Schaeffer. 2013. Conversation analysis and interaction in standardized survey interviews. In Carol A. Chapelle (ed.), The encyclopedia of applied linguistics, 1016–1022. Malden, MA: Wiley-Blackwell. https://doi.org/10.1002/9781405198431.wbeal1309.

Moore, Robert J. 2004. Managing troubles in answering survey questions: Respondents’ uses of projective reporting. Social Psychology Quarterly 67(1). 50–69. https://doi.org/10.1177/019027250406700106.

Ongena, Yfke P. 2005. Interviewer and respondent interaction in survey interviews. Amsterdam: Vrije Universiteit doctoral dissertation.

Ongena, Yfke P. & Wil Dijkstra. 2006. Methods of behavior coding of survey interviews. Journal of Official Statistics 22. 419–451.

Ongena, Yfke P. & Wil Dijkstra. 2007. A model of cognitive processes and conversational principles in survey interview interaction. Applied Cognitive Psychology 21(2). 145–163. https://doi.org/10.1002/acp.1334.

Ongena, Yfke P. & Wil Dijkstra. 2010. Preventing mismatch answers in standardized survey interviews. Quality and Quantity 44(4). 641–659. https://doi.org/10.1007/s11135-009-9227-x.

Pomerantz, Anita. 1984. Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. Maxwell Atkinson & John Heritage (eds.), Structures of social action. Studies in conversation analysis, 57–101. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511665868.008.

Presser, Stanley, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, Jennifer M. Rothgeb & Eleanor Singer. 2004. Methods for testing and evaluating survey questions. Public Opinion Quarterly 68(1). 109–130. https://doi.org/10.1093/poq/nfh008.

Raymond, Geoffrey T. 2000. The structure of responding: Conforming and nonconforming responses to yes-no type interrogatives. Santa Barbara, CA: UCSB dissertation.

Raymond, Geoffrey T. 2003. Grammar and social organization: Yes/no interrogatives and the structure of responding. American Sociological Review 68(6). 939–967. https://doi.org/10.2307/1519752.

Schaeffer, Nora C. & Jennifer Dykema. 2011. Response 1 to Fowler’s chapter: Coding the behavior of interviewers and respondents to evaluate survey questions. In Jennifer Madans, Kristen Miller, Aaron Maitland & Gordon Willis (eds.), Question evaluation methods: Contributing to the science of data quality, 23–40. New York: Wiley. https://doi.org/10.1002/9781118037003.ch3.

Schaeffer, Nora C. & Douglas W. Maynard. 2002. Occasions for intervention: Interactional resources for comprehension in standardized survey interviews. In Douglas W. Maynard, Hanneke Houtkoop-Steenstra, Nora C. Schaeffer & Johannes Van der Zouwen (eds.), Standardization and tacit knowledge: Interaction and practice in the survey interview, 261–280. New York: Wiley.

Schaeffer, Nora C., Douglas W. Maynard & Robert Cradock. 1993. Negotiating certainty: Uncertainty proposals and their disposal in standardized survey interviews. Working Paper No. 93–25.

Schegloff, Emanuel A. & Gene H. Lerner. 2009. Beginning to respond: Well-prefaced responses to wh-questions. Research on Language and Social Interaction 42(2). 91–115. https://doi.org/10.1080/08351810902864511.

Sidnell, Jack & Tanya Stivers. 2013. The handbook of conversation analysis. Malden, MA: Wiley-Blackwell. https://doi.org/10.1002/9781118325001.

Willis, Gordon. 2005. Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage. https://doi.org/10.1037/e538062007-001.

Yan, Ting, Frauke Kreuter & Roger Tourangeau. 2012. Evaluating survey questions: A comparison of methods. Journal of Official Statistics 28(4). 503–529.

Received: 2022-10-10
Accepted: 2024-01-15
Published Online: 2024-01-30
Published in Print: 2025-01-29

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
