Article Open Access

Data collection methods applied in studies in the journal Intercultural Pragmatics (2004–2020): a scientometric survey and mixed corpus study

Published/Copyright: August 22, 2022

Abstract

Methods in Intercultural Pragmatics are inherently multifaceted and varied, given the discipline’s breaching of numerous cross-disciplinary boundaries. In fact, research in Intercultural Pragmatics represents genuinely new ways of thinking about language and, thus, of researching interactants’ (non-)verbal behaviors: With core common ground and shared knowledge about conventionalized frames of the target language being limited, intercultural communication features a number of unique characteristics in comparison to L1 communication. This being said, the range of methods employed in data collection and analysis in Intercultural Pragmatics is not only wide, but also highly heterogeneous. The present paper takes a scientometric approach to data collection methods and data types in Intercultural Pragmatics research. In order to provide an extensive diachronic survey of methods and approaches featured in empirical studies published specifically in the journal Intercultural Pragmatics (edited by Istvan Kecskés), this study includes a self-compiled corpus of 358 papers in 17 volumes published from the journal’s launch in 2004 through 2020. The aim is to carve out diachronic method preferences as well as emerging and declining trends in the data collection methods and data types adhered to within this discipline. These are further discussed within the context of relevant state-of-the-art accounts that have specifically offered surveys of methods and methodologies pertaining to issues in data collection and data analysis in (Intercultural) Pragmatics in recent years.

1 Contents, structure, and rationale of this study[1]

Research methods commonly refer to such procedures within any given intellectual field or discipline that are systematically and purposefully employed as modes of investigating or inquiring about an object of study (cf. e.g. Gülich 2001). With specific regard to Intercultural Pragmatics research, Kecskés already claimed several years ago that “a great variety of research tools, data collection methods, and data analysis [was being] used” (2014: 219). The present paper offers a comprehensive, scientometric overview of data collection methods in the discipline of Intercultural Pragmatics (ICUP), as well as a discussion and critical assessment of the strengths and weaknesses of the most salient research methods applied within the wide-ranging context of the discipline. The guiding question overarching the present paper is: Which data collection methods are most salient and trending in current and recent research, based on data from the journal Intercultural Pragmatics?

The sole focus of this study shall be on empirical data, i.e. such data that is conventionally collected through observation, extraction (‘field methods’) or elicitation (‘laboratory methods’). Solely intuitive and introspective lines of scholarly argumentation will not be considered here.

The paper is structured as follows: Section 2 offers a review of literature concerned with data collection methods and the most salient types of data obtainable, i.e. observed, extracted and elicited (2.1, 2.2), with Section 2.3 reviewing the pertinent set of studies that have taken scientometric approaches to research methods in pragmatic areas of research. Section 3 lays out the methods adopted in the present study and offers details on the data used, Section 4 presents the results and a complementary discussion, and Section 5 concludes this study.

2 A literature review on data collection methods

Overall, relatively few publications have addressed data collection methods in ICUP specifically. They are limited to a forthcoming handbook chapter by Kirner-Ludwig (2022) and several path-breaking chapters by Kecskés, which, however, strongly focus on methods of data analysis rather than data collection in ICUP research (2012, 2014, 2017, 2018). All of these certainly stand within the much wider context and tradition of literature on methods in Pragmatics research in general, which is extensive, to say the least (cf. e.g. Barron et al. 2017; Herring et al. 2013; Jucker et al. 2018; Liedtke and Tuchen 2018; Norrick and Bublitz 2011; Noveck 2018; Senft et al. 2009; Taguchi 2019). Apart from those, a good number of comprehensive reviews of such research designs saliently and successfully ‘transferred’ into ICUP research have been provided by researchers in Interlanguage Pragmatics (cf. e.g. Bardovi-Harlig 1999; Beebe and Cummings 2006; Kasper 2008; Kasper and Dahl 1991; Kasper and Roever 2005; Kasper and Rose 2002; Martínez-Flor and Usó-Juan 2011; Noveck 2018; Trosborg 1995, 2010).

Since the present paper focuses on collecting and collected empirical data only, the various categories of data types and collection procedures that I shall apply and discuss in the following are briefly introduced in subsections 2.1 and 2.2. Section 2.3 reviews the small body of literature that pertains to applied scientometric approaches to research trends in Pragmatics methodology.

2.1 On observing and extracting data

Traditionally, most sociolinguistic and pragmatic scholars would agree on naturally occurring discourse (NOD) – due to its untainted, i.e. non-elicited nature – being the ne plus ultra kind of data desirable for empirical research (Bataller and Shively 2011; Bou-Franch and Lorenzo-Dus 2008; Félix-Brasdefer 2007; Golato 2017; Martínez-Flor 2006; Turnbull 2001; Sampietro et al. 2022). NOD represents the kind of data one will usually obtain through external, non-biasing observation of subjects and communicative scenarios (thus, “field method”; cf. Clark and Bangerter 2004). It is data that “has not been elicited by the researcher for the purpose of his or her research project but that occurs for communicative reasons outside of the research project for which it is used” (Jucker 2009: 1615).

Certain drawbacks have recurrently been raised when it comes to collecting data through observation – one of them being the “lack of control of speaker and context variables” (cf. Houck and Gass 1996: 47; also cf. Blum-Kulka et al. 1989b: 13). What is more, various factors determining the validity (Golato 2017: 24; Yuan 2001) and, overall, the reliability of observed data need to be considered (cf. McKay and Hornberger 2005: 391–392; Leung et al. 2004: 242).

Apart from methods of observation, the extraction of certain data is a commonly applied field method. Extracted data points may be represented just as much by quantifiable, large sets of materials (available as electronic corpora) as they can be by individual texts that the researcher is zooming in on. Compared to observed data, which is, in principle, authentic as well as naturally occurring, extracted data may range between various categorizations of ‘naturalness’. As Schneider puts it,

corpus data do not all qualify as observational data. They are naturally occurring to the extent that their existence does not depend on a researcher. Yet there are significant differences between the data types included in machine-readable corpora, sometimes even in the same corpus. (2018: 50)

I therefore find the distinction between ‘observed’ and ‘extracted’ data rather useful, even though it is not generally made (cf. Kirner-Ludwig 2022).

2.2 On eliciting data through production and comprehension tasks

Data elicitation refers to the obtaining of data through applying systematically controlled settings, prompts, and variables (cf. Clark and Bangerter 2004). The present study distinguishes between non-experimental versus experimental settings based on the added complexities that come with experimental elicitation (see Section 4.2.3). This notional decision is in line with the trending research foci in (Intercultural) Pragmatics and particularly in acknowledgement of the field of Experimental Pragmatics (cf. Meibauer and Steinbach 2011; Noveck 2018: ch. 4; Noveck and Sperber 2004). Non-experimental research designs usually share their primary focus on data reflecting subjects’ actual language use (cf. 4.2.2), while experimental setups are saliently concerned with eliciting speaker intentions and shedding light on cognitive processing. They seek to elicit such data through the manipulation of certain factors and, frequently, by adhering to psycholinguistic methods (cf. 4.2.3).

I follow Schneider (2018) in my fluid categorizing of elicitation tasks according to the low, medium or high interaction/collaboration level required. More specifically, he proposes that elicitation tasks should be viewed on a “continuum […] decreasing [in] interactionality and, at the same time, increasing [with regard to] researcher control” (2018: 58). Accordingly, low-interaction tasks are those that require only a very restricted level of interaction in order to elicit productive data. On the most basic level, there are intuitive tasks that will ask the participant to provide personal information, such as age, gender, occupation etc. In questionnaires, these tend to come as single-choice items. A more sophisticated instrument to elicit productive intuitive and introspective data is the diary or verbal-report method, which requires participants to note down relevant anecdotes or self-observational aspects (cf. e.g. Cohen 1996; Kasper 2008: 297; Schneider 2018: 73). Open-ended questionnaires, too, may be used to elicit rich introspective data, but will usually require significantly longer and more complex coding phases and analytical procedures.

The most salient low-interaction task (both on the meta- as well as the application level) is the so-called discourse-completion task (DCT), which typically requires participants to produce (written or oral) utterances that (in the participant’s view) appropriately complete or complement a prompt provided by the researcher. This prompt generally describes a specific socioculturally embedded situation in lockstep with the beginning of a (seemingly) authentic dialogue. As Mey puts it, “[t]his method basically consists in creating a (written) ‘role play’ situation” (2004: 39). The situational context provided in the prompt is deliberately constructed so as to elicit the specific (pragmatic) aspect aimed for (often without the participant being aware of it, so as to avoid bias).[2]

In general, DCTs have been used saliently within the context of research on speech acts (cf. Blum-Kulka et al. 1989a), formulaic language (cf. e.g. Kecskés 2000), and pragmatic knowledge (cf. Félix-Brasdefer and Hasler-Barker 2017), with colleagues acknowledging this instrument’s time efficiency, potential of cross- and interdisciplinary replicability as well as the high level of variable control (cf. Kasper and Roever 2005; Houck and Gass 1996). Yet, DCTs’ significant shortcomings are repeatedly highlighted in the literature, too, concerning, for instance, the unavoidable factor of artificiality entailed in presenting short written segments that are actually prompted and analyzed as if they were oral (cf. Martínez-Flor and Usó-Juan 2011: 53; cf. Golato 2017: 22; Turnbull 2001; Bou-Franch and Lorenzo-Dus 2008; Félix-Brasdefer 2010; Economidou-Kogetsidis 2013; Schauer and Adolphs 2006). At the end of the day, there seems to be a consensus that, while DCT responses may not adequately reflect natural speech, they do “accurately reflect the content expressed in natural speech” as well as “the values [and norms] of the native culture” (Beebe and Cummings 1995: 75; cf. Kasper 2008: 329). This is why DCTs remain a popular go-to data collection method in many subareas of Pragmatics research, including ICUP.

Medium-interaction tasks cover most kinds of elicited conversations, i.e. include all such tasks in which “researchers specify topics, interactional goals or discourse roles” (Kasper 2008: 287). Interviews have been found to be the most frequent subtype of elicited talk in this category (Schneider 2018: 62). They are usually conducted orally with the researcher eliciting responses through verbal (sometimes in combination with visual) prompts[3] that start and structure the conversation to certain extents. Thus, interviews may be fully or in part narrative (i.e. unstructured, open-ended), semi-structured or structured, formal or informal, while the conversation between interviewee and interviewer (researcher) is usually audio- or video-recorded (provided the interviewee’s informed consent).

The category of high-interaction tasks pertains to collaborative learning activities of various kinds employed to elicit productive data. These may range from collaborative writing or translation assignments, to video-conferencing sessions, group discussions and peer feedback tasks. Role-play tasks (RPTs), too, belong to this highly collaborative group. They are used relatively saliently in L2 contexts (cf. e.g. Abdoola et al. 2017; Ross and Kasper 2013; Taguchi and Kim 2018; Youn 2020), where they have occasionally even been found to “yield more realistic data than other data elicitation methods” (Golato 2017: 22; cf. also Félix-Brasdefer 2007; Kasper and Dahl 1991; Turnbull 2001). However, RPTs inherently pose significant logistical challenges and do not guarantee that the elicited data will be authentic and usable, which is the main reason why scholars refrain from this method.

Within any of these low-to-high interaction spheres, research designs may additionally feature experimental elements so as to elicit both productive and receptive data. In fact, pragmatic methods have been shifting significantly into experimental spheres in recent years (cf. Noveck and Sperber 2004: 8; Schlesewsky 2009).

2.3 On scientometric approaches in (Intercultural) Pragmatics

With the present study taking a scientometric approach to the trends in data collection methods, there is a small set of studies that deserves specific attention for having paved the way. In general, scientometric studies carving out trends and developments in Pragmatics are still relatively rare in our discipline, but have been presented by Bardovi-Harlig (2010), Culpeper and Gillings (2019), Hu and Fan (2011), Jucker and Staley (2017), Kecskés and Kirner-Ludwig (2020), and Nguyen (2019). Kecskés and Kirner-Ludwig (2020) include a scientometric case study in an introductory chapter to a volume on new directions in Pragmatics research, drawing on the Web of Science corpus. Nguyen’s handbook chapter (2019) presents a survey of methods in L2 Pragmatics research based on Bardovi-Harlig’s data published in 2010 (i.e. overall a survey of 246 empirical studies in L2 pragmatics published from 1979 to 2017), and employs database meta-information from LLBA, ERIC, and ProQuest. Jucker and Staley’s (2017) survey presents figures specifically concerned with preferred data types in research on politeness and impoliteness and uses journal materials from the Journal of Pragmatics. Culpeper and Gillings (2019), too, extracted a corpus of 200 papers published in the Journal of Pragmatics and carved out data trends in Pragmatics research over the course of 1999–2018. Hu and Fan (2011) surveyed journal data (dating from 2001 to 2005) with regard to research contents and methods in intercultural communication research with a focus on China.

3 Methods adopted in the present study

I take a meta-approach and combine certain empirical methods in order to demonstrate their applications in ICUP research. Data types will be described and discussed in lockstep with the specific empirical methods used to collect these data. In doing so, each one of the upcoming (sub)sections is enriched by a combined discussion of various resource types. It should be emphasized that the present paper not only addresses but also applies a range of empirical (i.e. evidence- and data-based) methods for the sake of both demonstration and research. I choose a data-driven approach in order to guarantee an adequate description of the research trends in ICUP since 2004.

Note that – while my study focuses on data collection methods only – my understanding of the highly elusive notion of ‘method’ incorporates procedures not only of data collection, but also of processing and analysis alike. I distinguish method from methodology, as the latter pertains to the rationale for any research approach to begin with. This study solely focuses on methods and collected data. What is more, methods are here understood as hypernymous to ‘tasks’, i.e. such assignments that subjects complete for the researcher so as to obtain elicited data. Such tasks include comprehension and production tasks that are completed under low (e.g. discourse completion) to high interactional conditions (cf. 2.2 and 4.2.2). Subsections 3.1 and 3.2 offer details on the corpus data to be analyzed here as well as on the coding protocol specifically developed for this study.

3.1 Browsing electronic corpus data for data collection methods

All upcoming elaborations follow the premise that only empirical data can be analyzed. Thus, non-collected data (i.e. what Bednarek refers to as “non-attested data”, cf. 2011) is here used as a notion to refer to non-empirical data and solely introspective or intuitive lines of argumentation, as opposed to data obtained systematically via empirical methods. The latter include observed, extracted and elicited data collection procedures, of which the first two inherently aim for naturally occurring data, while all three pertain to (quasi-)authentic data (cf. also Al-Surmi 2012: 673f.; Bednarek 2018, 2010; Dynel 2015; Rose 2001).

The present study draws on electronic corpus data in order to provide an extensive diachronic, scientometric survey of methods and data collection approaches within ICUP since 2004. My research design is similar to Culpeper and Gillings’ (2019), Hu and Fan’s (2011), and Nguyen’s (2019) in terms of data surveyed and coding schemes developed, but does exceed their dataset sizes significantly. Out of the 200 papers that Culpeper and Gillings (2019) extracted from the Journal of Pragmatics, they only discussed the top 50 most-cited ones from four designated time periods. Nguyen’s (2019) corpus was comprised of 246 studies, out of which 105 were borrowed from Bardovi-Harlig’s (2010) dataset covering studies published between 1979 and 2008.

In comparison, the corpus that I have compiled for the present paper is composed of the entirety of papers published by Intercultural Pragmatics since the journal’s launch in 2004 and until the end of 2020, altogether adding up to 358 papers in 17 volumes (excluding book reviews) and to a total of 3,385,001 words. This corpus will henceforth be referred to by the acronym CICUP (i.e. ‘Corpus of Intercultural Pragmatics’). Each paper in full (including titles, keywords if provided, and references) was downloaded, individually labelled as P1–P358,[4] and compiled into one searchable corpus that was then uploaded to and quantitatively analyzed on Sketch Engine via concordance searches. This corpus approach was combined with the application of four sequential rounds of extensive manual coding protocols to each individual paper.
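The compilation step described here can be sketched in a few lines of code. This is a minimal illustration only, assuming the downloaded papers are available as plain-text files in one directory; the actual analysis was carried out on Sketch Engine, whose workflow is not reproduced here:

```python
from pathlib import Path

def compile_corpus(paper_dir):
    """Label each downloaded paper P1..Pn (in filename order) and merge
    them into one searchable corpus, keeping a running word count."""
    corpus = {}          # label (e.g. "P1") -> full text of the paper
    total_words = 0
    for i, path in enumerate(sorted(Path(paper_dir).glob("*.txt")), start=1):
        text = path.read_text(encoding="utf-8")
        corpus[f"P{i}"] = text
        total_words += len(text.split())
    return corpus, total_words
```

Run over the full CICUP dataset, such a routine would yield the 358 labels P1–P358 and the total word count reported above.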

3.2 Coding protocol[5]

While most of my coding foci also feature in the studies by Bardovi-Harlig (2010), Culpeper and Gillings (2019), and Nguyen (2019), I take a more in-depth and comprehensive approach to my significantly larger dataset. Moreover, my coding decisions took into account both meta- and object-data points, the former of which were not systematically or straightforwardly identifiable through word and concordance searches alone. Therefore, coding steps 1 and 2 included decision-making procedures based on manual and close-reading approaches.

In a first step, each one of the 358 papers in the dataset was categorized (upon close reading) according to whether the study was based on intuitive/introspective, observational or elicited data. Note again that papers that took a solely intuitive or introspective approach, i.e. “did not analyze actual language data but work with reflections on language” (Jucker 2009: 1615), were not included in any of the following deliberations, but only served me as a big-picture orientation point: out of 358 papers in total, I categorized 138 papers (i.e. 38.5%) as introspective/intuitive studies and excluded them (cf. Figure 1),[6] which left 220 papers (i.e. 61.5%) for me to analyze in detail. In the following, these 220 papers will serve as my focus dataset (i.e. as 100% of those studies using empirical data).

Figure 1: Distribution of data types and modes in CICUP.

In a second round of coding, the 220 papers were browsed for whether they contained sections explicitly dedicated to describing their methods (rather than their methodology). Finally, the browse function on Sketch Engine was used to double-check the data for specific features, such as ‘authentic data’ or ‘participant observer’. These hits had to be investigated one by one within their context so as to make sure that they were actually referring to the main data in the respective study (rather than, e.g., to literature reviewed).

In a third go, the 220 papers were coded according to the collection mode of data, i.e. as to whether the data collected was written or spoken. As Figure 1 anticipates, the relation is more or less balanced, with a slightly higher preference for spoken data in CICUP.

Note that, while extracted data will necessarily be obtained in a written or transcribed form, these data may have been spoken to begin with – consider any spoken electronic corpus consisting of interviews, or group discussions. Special in-between cases are posed by such text types that are commonly (semi-)scripted in order to be performed in spoken mode (e.g. TV news reports, public speeches, certain kinds of telecinematic discourse). I therefore did acknowledge the original mode of data in my coding scheme.

In a final round, all papers were coded according to subcategories such as dataset size, languages and cultures addressed, and any specific elicitation tasks employed. Many, yet not all, of these variables could be uncovered by way of concordance and key word searches. All of those data-based papers that did not offer methodological descriptions in the conventionally expected register had to be reviewed yet again at this stage. Note also that additional spot tests were conducted throughout, as not all concordance results could simply be taken at face value. Even with keywords such as ‘DCT’ or ‘experimental’ featuring in a paper, it still needed to be confirmed that these characteristics or specifications were in fact pertaining to methods actually adhered to rather than merely mentioned as approaches excluded by the authors.
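The spot-testing logic described in this section – retrieving each keyword hit together with its surrounding context so that it can be verified manually – can be sketched as a simple keyword-in-context (KWIC) routine. This is a hypothetical stand-in for the Sketch Engine concordance function, again assuming plain-text papers:

```python
import re

def kwic(text, keyword, window=40):
    """Return each occurrence of `keyword` together with `window`
    characters of context on either side, so that hits can be checked
    one by one rather than taken at face value."""
    hits = []
    for m in re.finditer(re.escape(keyword), text, flags=re.IGNORECASE):
        start = max(0, m.start() - window)
        end = min(len(text), m.end() + window)
        hits.append(text[start:end])
    return hits
```

A hit whose context reads “we deliberately excluded DCT data” would then be discarded during manual review, while “responses were elicited via a written DCT” would be coded as a genuine use of the method.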

4 Results and discussion

4.1 Observed and extracted data in CICUP

Following the notional distinction between extracted and observed data as elaborated upon in 2.1, the distributions in CICUP are shown in Figures 2a and b. Note that the absolute sum of numbers in Figure 2a exceeds the number of studies in the focus dataset (i.e. 220), as a number of studies employ more than one data collection method and data type.
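Because a single study may combine several data collection methods, tallies of data types can legitimately exceed the number of studies. A minimal sketch of this multi-label counting (with invented labels, not the actual CICUP coding):

```python
from collections import Counter

# Hypothetical coding: each study is tagged with one or more data types.
studies = {
    "P1": ["extracted"],
    "P2": ["elicited", "observed"],    # triangulated design
    "P3": ["extracted", "elicited"],
}

# Tally every tag across all studies.
type_counts = Counter(t for types in studies.values() for t in types)

# The sum of all tallies (5) exceeds the number of studies (3),
# exactly as with the absolute numbers in Figure 2a.
assert sum(type_counts.values()) > len(studies)
```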

Figure 2a: Absolute number of data types chosen.

As can be seen, the data preferences are equally divided between non-experimentally elicited data on the one hand and extracted data on the other. The latter in particular reflects the fact that, with the emergence of computer-aided research, extracting authentic data can nowadays be done in immensely efficient ways. This includes, for instance, access to already-transcribed conversational data.[7] Extracted data points may be represented by quantifiable, large sets of materials (electronic corpora, lexicographical resources, cf. e.g. P204) just as much as by individual texts or smaller datasets that the researcher is zooming in on. These may include newspaper articles (cf. e.g. P235, P236), multimodal data from websites (cf. P352), lifestyle weblogs and online fora (P303), or specific text types like electronic self-reviews (cf. P188). Email conversations are generally considered highly private and will thus not be shared widely by their composers, which is why such authentic data is particularly hard to come by unless the researcher was a conversant themselves (but cf. P216, P240). As for, e.g., Facebook posts, extracting data will often involve the researcher falling back on their personal account contacts (cf. e.g. P217). As shown in Figure 2b, the 106 studies that use extracted data are nearly balanced with regard to using large corpus data (not self-compiled, 48%) versus self-compiled corpora (47%). 36 studies primarily focus on observed data, with 10 of these obtaining their NOD via field notes.

Figure 2b: Extracted data sources.

Despite authentic data arguably being the ne plus ultra kind of data desirable for empirical research (Bataller and Shively 2011; Bou-Franch and Lorenzo-Dus 2008; Félix-Brasdefer 2007; Martínez-Flor 2006; Turnbull 2001), only 51 papers in the CICUP focus dataset (i.e. 23.2%) actually emphasize their own use of authentic data (cf. P151, p. 668; P239; P285). As mentioned above (2.1), such data is not necessarily straightforward to come by, and this is certainly reflected strongly in the data at hand (cf. again Figure 2a). The logistical complexities of data collection and the level of reliability of thus-obtained data are particularly precarious in such observational designs where the researcher keeps themselves out of the equation, i.e. out of the communicative scenario observed: In P156, for instance, in order to “ensure the naturalness of the data”, the researchers had

the participants record […] themselves without the presence of the researcher. Randomly selected participants were provided with cassette recorders and instructed to record their interactions (p. 99)

Participant or interactive observation designs are thus increasingly emerging as field methods of choice. In CICUP, 7 out of 36 papers take an interactional-observational approach (cf. P95, P121, P160, P211, P215, P224, P319). Particularly elegant are such research designs that allow the researcher to take on the role of participant observer whilst actually being a genuine part of the NOD scenario, as in meetings conventionally minuted or audio-recorded anyway (cf. e.g. P211).

As Figure 3 shows, the salient standard design – diachronically speaking – has been the cross-sectional study, with numbers rising from 9 to an average of 12 papers published per year since 2004. Longitudinal and case study designs are significantly rarer, yet also stable over time, with an average of 1 and 2 papers published in Intercultural Pragmatics per year respectively.

Figure 3: Research designs as applied in CICUP (2004–2020).

Note that Culpeper and Gillings (2019: 8) found case studies to be declining dramatically in Journal of Pragmatics papers between 2000 and 2008 and since then plateauing far beneath corpus studies. What we see in CICUP seems to be the continuance of this low-plateau trend of case studies in pragmatics research overall, which is, however, not entirely surprising, given that case studies do not conventionally allow for strong claims regarding the representativeness of their results.

4.2 Task-elicited data types in CICUP

The hypothesis that ICUP research widely prefers obtaining data by systematically controlling the settings and variables for data collection and by employing certain elicitation prompts to get a hold of the data of interest (cf. Clark and Bangerter 2004) finds affirmation in CICUP. As stated by P218, “[e]ven the authors who realize the deficiencies of elicitation tasks view their use as inevitable in contrastive studies” (p. 650). In CICUP, 64 studies (29.1% of the data-based papers) make use of questionnaires as direct lines to intuitive, meta- (e.g. P301) as well as task-elicited data. Subsections 4.2.1 and 4.2.2 outline the results in detail.

4.2.1 Intuitive, self-observational, and retrospective/reflective production tasks

Specific tasks in this category include e.g. pragmatic assessment tasks, often appearing in the format of appropriateness or acceptability judgment questionnaires (cf. e.g. P176, P198, P269, P300, P326, P330). For instance, P300 distributed an online questionnaire with several 3-turn dialogue items in English, of which participants were to assess the final turn’s acceptability on a 7-point scale.

The diary or verbal-report method has been employed by several studies in CICUP: P303 had their participants note down offensive and impolite incidents. P259 even features three sets of diary-keeping tasks (on meta-awareness of pragmatics) in its research design. To complement participants’ diary documentations, some researchers additionally include their own field notes in the design, both as a backup and as an additional data source (cf. P303). As P198 observes,

[t]he rationale behind collecting verbalization data comes from information-processing theory (Ericsson and Simon 1993): Information processed in short-term memory is open to conscious inspection during task completion and may remain accessible for a short period of time following the task. Coupled with learners’ responses to assessment items, verbal protocols can assist in arriving at more fine-grained evaluations of pragmatic competence than are possible when performance data are considered alone. (pp.71f.)

CICUP also shows that retrospective verbal reports and think-aloud tasks have emerged as one salient method for monitoring and understanding learners’ development in pragmatic competence (e.g. P194, P198). While 12 studies in CICUP employ retrospective interview tasks (i.e. P65, P67, P69, P84, P93, P105, P116, P149, P209, P222, P223, P224), 4 papers resort to retrospective verbal reports (P28, P176, P324, P330).

All in all, 44 studies (i.e. 20%) employ introspective, intuitive or retrospective tasks in their designs. P160 puts it in a nutshell when claiming that “[t]he use of introspective data […] is a compromise between the use of DCT data, which is alleged to be unnatural, and naturally occurring data, which needs much time and many resources to collect” (p. 232). Overall, the common practice is to use introspective data to complement (non-)experimentally elicited data (cf. e.g. P198, p. 76). 38 of the 44 studies mentioned above combine introspective tasks with other productive tasks for triangulation.

4.2.2 Production tasks in (semi-)designed and controlled settings

Given that the focus of task-based elicitation of data is on production overall, comprehension tasks are used very rarely in CICUP (cf. P313, P164; also cf. Takimoto 2009; but see 4.2.3). In line with Schneider’s categorization proposal of low- to high-interaction elicitation tasks (2018: 58), one might hypothesize that the higher the level of interactionality in a production task, the more worthwhile it might be found for ICUP research. This would lead to the assumption that, e.g., role-plays and interviews would feature rather frequently in CICUP, whilst, e.g., DCTs would not. As Figure 4 shows, this is, however, not exactly reflected in the data when it comes to the salient representatives of low-, medium- and high-interaction tasks, i.e. DCTs, interviews and role-plays respectively. While Martínez-Flor and Usó-Juan (2011: 51) found role-plays and DCTs to be the most widely used methods for eliciting pragmatic data in Interlanguage Pragmatics ten years ago, CICUP studies turn out to have used interview data more than twice as often as role-plays and DCTs each (cf. Figure 4). However, one should keep in mind here that these proportions change if one allows for the possibility that the notions ‘DCT’ and ‘questionnaire’ may be used interchangeably by some researchers. Subsections 4.2.2.1 through 4.2.2.3 offer detailed discussions of the results pertaining to low-, medium- and high-interaction tasks.

Figure 4: 
Distribution of data elicitation tasks in CICUP.

4.2.2.1 Low-interaction tasks: written and oral DCTs

Amongst the various kinds of tasks created to elicit productive data and categorized here as ‘low-interaction’ are writing assignments (P168), read-aloud tasks for intonation research (P22), elicited narratives (P205), and, as the most salient group (both on the meta- and the application level), DCTs. In line with the respective state-of-the-art literature cited in 2.2, colleagues in ICUP making use of DCTs will usually appreciate their time-efficiency, their potential for cross- and interdisciplinary replicability (P93), and their high level of variable control (cf. P140, p. 317). Overall, DCTs are used by 31 papers in CICUP, with the vast majority (25 studies) administering them in written rather than in oral format.

CICUP also provides evidence that researchers have been trying to address much of the criticism that has been raised against DCTs (cf. e.g. Martínez-Flor and Usó-Juan 2011: 53). As a result, purposeful enhancements of this data collection method have been proposed and put into action in a number of studies in the corpus, with e.g. P289 prioritizing a content-enriched description of the scenario (also cf. Billmyer and Varghese 2000), or P93 creating a DCT on the basis of large spoken (authentic) corpus data rather than self-created and potentially inauthentic examples and prompts. Most frequently, though, DCTs have been triangulated with other methods, e.g. observation (cf. P93), role-plays (cf. P239), or extracted data (cf. P237).

4.2.2.2 Medium-level interaction tasks: elicited dyadic conversations

Examples of elicited conversations that are not actual interviews in CICUP include various research designs in which dyadic conversations were specifically set up by the researchers (cf. P114, P144, P225, P242, P317). For instance, in P225, informants were instructed to make a table reservation at a restaurant via Skype within the context of language proficiency assessment; P114 triangulates a set of interactive oral tasks, i.e. gap tasks, a dialogue reading task and an informal conversation.

Of the 220 papers working with collected data, more than a quarter (i.e. 61 papers, 27.7%) make use of interview data, which seems to align with e.g. Bardovi-Harlig and Salsbury’s (2004) reported impression that the richness of the interview-elicited data that they collected longitudinally far outweighed the transcription workload.[8] At the same time, the emergence of internet-mediated research has given rise to interviews also being conducted online, e.g. via computer-mediated chat functions or teleconferencing platforms (cf. P238), which has rendered interviews a much more efficient method nowadays. Several papers in CICUP emphasize the added value of web-based communication particularly in intercultural scenarios (cf. e.g. P326).

The range of interview (sub)types is multifaceted, including what authors call, amongst others, ‘informal’ (P36, P197), ‘spontaneous’ (P108), ‘off-the-record’ (P131), ‘open-ended’ (P224, P95), ‘free’ (P209), and ‘(semi-)structured’ (P228, P260, P288, P332, P35, P39, P105, P127, P160) interviews. Specifically relating to L2 contexts are e.g. ‘language awareness interviews’ (P198) and ‘oral proficiency interviews’ (P7). The most frequent kind, however, are retrospective (post factum) interviews, usually conducted for affirming triangulation with other data types collected (P67, P83, P84, P93, P94, P105, P116, P148, P159, P166, P194, P216, P222, P223, P224, P33). In rare cases, researchers make the effort of incorporating interviews in a pre-data-collection stage in order to proactively justify or strengthen their framework or terminology (cf. P332). In a few cases, structured interviews are even used to collect background and demographic data from participants (cf. e.g. P260).

Finally, a particularly rich yet generally still under-used interview type has emerged with focus group interview designs.[9] These “allow individuals to respond in their own words, using their own categorizations and perceived associations” (Stewart et al. 2007: 13), thus producing the type of interactional data that is “suitable for a detailed discursive analysis” (Goodman and Burke 2010: 328). Four CICUP papers employ this instrument (P121, P147, P152, P206), with e.g. P152 providing only one conversation trigger and thus leaving the conversants to themselves from there on. Such designs clearly border on or even overlap with high-interaction tasks (cf. 4.2.2.3).

Note that yet another 14 papers (not counted among the 61 just discussed) have extracted various kinds of interview data from electronic corpora and other already existing datasets (e.g. political or live interviews; e.g. P79, P112). All of these observations suggest that data elicitation through interview techniques has been frequently applied by scholars in ICUP. At the same time, interview data is rarely considered a sufficient primary data source in and of itself. In fact, interviews often fulfil the function of confirming assumptions deduced from data collected and analyzed beforehand (cf. e.g. Moreland and Cowie 2005) and are thus usually triangulated with other, complementing data, which is the case in CICUP as well.

4.2.2.3 High-interaction and collaboration tasks

The range of collaborative, high-interaction activities used as data elicitation tasks in CICUP covers writing or translation assignments (e.g. P226, P257), video-conferencing sessions (P139), group discussions that may be set up face-to-face (cf. e.g. P187, P222, P283) or via computer-mediation (P117, P118, P155, P226, P33), and peer assessment as well as peer feedback tasks (cf. P36, P326).

Further, 27 papers in CICUP employ RPTs (cf. e.g. P330, P260), whilst acknowledging that method’s limitations and drawbacks already mentioned above (2.2; cf. P353, P216, P218, P239). One can certainly agree with P218 that

[t]he choice of situations to be put to a test in role plays […] should take into consideration that speakers of different cultural backgrounds may be sensitive to different aspects of the context related to multiparty interaction. (p. 671)

4.2.3 Data elicited under experimental conditions

Based on the CICUP data at hand, Experimental Pragmatics has hardly informed ICUP research so far. Out of the total of 358 studies in CICUP, 15 do cite the relevant literature (e.g. Garrett and Harnish 2007; Meibauer 2012; Meibauer and Steinbach 2011; Noveck and Sperber 2004), but only one study, i.e. P306, explicitly positions itself as experimental.[10] Reasons for this seeming reluctance are probably mainly logistical (sampling procedures, technical affordances, etc.). For instance, Experimental Pragmatics focuses not only on learners, but, more so than other subfields, also on e.g. subjects with neuro-developmental disorders (Pijnacker et al. 2009) and language impairment (Katsos et al. 2011). Recruiting participants must therefore come with a fully transparent and ethically impeccable research design, including recruitment procedures that ensure that participants are able to freely give their informed consent to participate in the study. What is more, an experimental study will usually employ

a hypothesis testing procedure in which certain variables are manipulated while others are held constant. […] The manipulation is powerful and occurs in an experimentally controlled setting yet it is a common part of daily life and the study occurs “n the “rea” world” rather than in the laboratory. For these reasons, the data generated under these conditions should be highly representative or generalizable. (Turnbull 2001: 37f.)

As for the technical equipment required for many experimental research designs, not a single paper in CICUP draws upon data obtained through neuroimaging or neurophysiological techniques such as event-related potentials, functional magnetic resonance imaging or electroencephalography.[11] Only eye tracking is occasionally employed (cf. e.g. P101).

Still, 20 papers (including P306) do adhere to experimentally elicited data in their studies, even if they may not explicitly claim so (cf. Figure 2a above). Diachronically, experimental designs in CICUP peaked in 2008 (Figure 6 below).

While the 20 experimental studies in CICUP demonstrate quite a multifaceted scope of topics addressed (negation, context effects, intonation, speech acts, etc.) as well as a range of diversified experimental designs overall, most of them (i.e. 18) include a quantitative analysis of the obtained data, with 15 of these also employing descriptive or inferential statistics. Only two studies (P124, P266) zoom in on a solely qualitative interpretation of results. Ten studies apply mixed analytical approaches. One of these is P99, which investigates to what extent incongruent context in presented stories enhances the subjects’ use of negation. As visualized in Figure 5, the majority of the experimental papers target aspects of comprehension and cognitive processing (13 papers), e.g. measuring processing times under self-paced reading conditions (cf. P100, P101, P243). P101 uses eye-tracking technology to do so.

Figure 5: 
Distribution of experimental elicitation task approaches in CICUP.

In contrast, fewer designs in CICUP adhere to experimental production tasks (7 papers, cf. Figure 5), with only one of them (i.e. P342) explicitly eliciting both production and comprehension data. Yet, these 7 studies on the whole do provide an intriguing range of designed components to elicit production data. For instance, P266 sets up an experimental, qualitative study to investigate speakers’ choices of overtness in discourse relations; P124 uses RPTs to elicit experimental data; P334 poses counter-expectational questions in order to elicit gestural data; and P242 manages to manipulate subjects into producing NOD, with the following conditions created:

At the end of the semester, two final exams for each level were intentionally scheduled to be administered on the same day. Teachers of those subjects informed students of the possibility of altering the timeslot of one of those exams, provided that half the group individually ask the administrator to defer or bring forward one of those exams to another day. […] The administrator was informed that his interactions with the learners were being recorded in order for the department to evaluate the progress of learners’ language. The learners, however, were not informed that their interactions with the administrator were recorded, making the data collected genuine. (p. 627)

While the absolute number of experimental designs in CICUP is admittedly small, Figure 6 shows that experimental designs have after all been following a steady trend in CICUP since 2004.

Figure 6: 
Trends of experimental and non-experimental data collection methods in CICUP (2004–2020).

5 Conclusions

This study has not only provided an outline and survey of data collection methods in the relatively young field of Intercultural Pragmatics, but has presented an empirical study, the outcomes of which may be summarized as follows:

In contrast to what Culpeper and Gillings projected based on their Journal of Pragmatics data, i.e. that “Pragmatics [would] remain overwhelmingly focused on data” and that “[n]o increase in the few theory-focused and methodology-focused papers [was] likely” (2019: 13), there is certainly not an “overwhelming” majority of data-based studies amongst the 358 papers in CICUP. In fact, research in ICUP has been fairly balanced with regard to introspective versus observational and elicitatory approaches, which may be taken to attest that colleagues in our field are as interested in expanding and deepening the theoretical groundwork as they are in unlocking new data and exploring new angles in Interactional and Intercultural Pragmatics research. Overall, though, introspective papers seem to have been trending downwards continuously, while data-based research designs have been trending consistently upwards (Figure 7).

Figure 7: 
Trends with regard to introspective versus data collection approaches as demonstrated in CICUP.

Accordingly, elicited data remains strong, featuring in a little under 50% of the 220 papers in my focus dataset. As stated by P218,

collecting natural data is costly and time-consuming, [which is why] it is unlikely that the researchers of [intercultural] politeness will give up the convenient tools of data elicitations sanctioned by the multitude of research that has already been conducted by this means. (p. 650)

This is in alignment with the recurrent opinion that elicited data may even be preferable to authentic data, with the former having “obvious advantages over authentic data in research studies where situational control is important” (Hendriks 2008: 338). These trends are visualized in Figure 8.

Figure 8: 
Trends with regard to data collection methods as demonstrated in CICUP.

While e.g. Bibok [P272] states as a “fact that pragmatics studies seem to prefer corpora as data sources” (2016: 408), this too is not exactly confirmed by the CICUP data at hand: 48.2% of the papers in the focus dataset extract their data from already existing corpora, datasets and archives, while 57.3% employ elicitation tasks and 16.4% observation (the shares sum to more than 100% because many papers combine several methods). This suggests that there is a prevailing tendency in the field to rely on one’s own datasets, research methods and collection procedures rather than on prefabricated materials. However, a close look at the triangulation trends offers a much more comprehensive view (cf. Figure 9): while elicitation tasks may be trending upwards overall (cf. Figure 8), this might in fact be because intuitive and introspective elicitation tasks provide a strong and steady counterbalance and complementation, employed mostly in combination with DCTs, RPTs and interviews. On their own, each of these three data elicitation task types has in fact been trending downwards since 2004 (cf. Figure 10), but that is probably due to research designs on the whole becoming more complex and inclusive of various data types.
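The arithmetic behind these overlapping shares can be made concrete with a short sketch. Note that the absolute counts below are merely back-computed from the reported percentages of the 220-paper focus dataset for illustration, not taken from the CICUP coding itself: since a triangulating paper is counted in every method category it employs, category shares may legitimately sum to well over 100%.

```python
# Illustrative sketch: overlapping (non-exclusive) method categories.
# Counts are reconstructed from the reported shares, purely for illustration.

focus_total = 220  # papers in the focus dataset

# approximate counts back-computed from the reported percentages
extraction = round(0.482 * focus_total)   # data extracted from existing corpora/datasets
elicitation = round(0.573 * focus_total)  # data gathered via elicitation tasks
observation = round(0.164 * focus_total)  # data gathered via observation

shares = {
    "extraction": extraction / focus_total,
    "elicitation": elicitation / focus_total,
    "observation": observation / focus_total,
}

# Because of triangulation, one paper can appear in several categories,
# so the shares need not partition the dataset:
total_share = sum(shares.values())
print(f"sum of shares: {total_share:.1%}")  # exceeds 100%
```

The point of the sketch is simply that these percentages describe non-exclusive categories, so no single category needs a majority for the dataset to be fully covered.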

Figure 9: 
Data type triangulation trends as demonstrated in CICUP.

Figure 10: 
Trends with regard to low-, medium- and high-interaction tasks as demonstrated in CICUP.

Several papers in CICUP put forth the tenet that, ideally, “triangulation should entail methods that are different in nature. Thus, the data obtained with an intuitive method, such as responses to a questionnaire, should be contrasted with data obtained empirically” (P47, p. 217; cf. also P53). This is in fact what the largest share of CICUP papers, i.e. 36%, do: they rely on combinations of methods and task types rather than on isolated methods of data collection, which suggests that scholars in our field share a high commitment to and awareness of the internal validity and reliability required in quality research. Note, however, that only 11 papers mention or describe their inter-rater measures, for instance, a procedure that can be expected to be applied more and more in future studies.

As has been shown, ICUP research systematically continues many of the trends that have been found to be ongoing for pragmatics research in general (cf. e.g. Culpeper and Gillings 2019), whilst, at the same time, having brought forth a rather multifaceted and distinct set of methods. Just as much as the ICUP scholar follows their calling with regard to conversational, interactional and intercultural data, multimethod designs have clearly emerged as an ICUP-specific signature approach that can be expected to become even more established in the years to come.


Corresponding author: Monika Kirner-Ludwig, University of Innsbruck, Innsbruck, Austria

References

Abdoola, Fareeaa, Penelope S. Flack & Saira B. Karrim. 2017. Facilitating pragmatic skills through role-play in learners with language learning disability. South African Journal of Communication Disorders 64(1). 1–12. https://doi.org/10.4102/sajcd.v64i1.187.

Al-Surmi, Mansoor. 2012. Authenticity and TV shows: A multidimensional analysis perspective. TESOL Quarterly 46(4). 671–694. https://doi.org/10.1002/tesq.33.

Archer, Dawn & Peter Grundy (eds.). 2011. The pragmatics reader. Abingdon: Routledge.

Bardovi-Harlig, Kathleen. 1999. Researching method. In Lawrence F. Bouton (ed.), Pragmatics and language learning, vol. 9, 237–267. Urbana-Champaign: University of Illinois Press.

Bardovi-Harlig, Kathleen. 2010. Exploring the pragmatics of interlanguage pragmatics: Definition by design. In Anna Trosborg (ed.), Pragmatics across languages and cultures, 219–259. Berlin & New York: de Gruyter Mouton.

Bardovi-Harlig, Kathleen & Tom Salsbury. 2004. The organization of turns in the disagreements of L2 learners: A longitudinal perspective. In Diana Boxer & Andrew D. Cohen (eds.), Studying speaking to inform second language learning (Second language acquisition 8), 199–227. Clevedon: Multilingual Matters.

Barron, Anne, Yueguo Gu & Gerard Steen (eds.). 2017. The Routledge handbook of pragmatics (Routledge handbooks in applied linguistics). Milton Park, Abingdon & New York: Routledge.

Bataller, Rebecca & Rachel Shively. 2011. Role-plays and naturalistic data in pragmatics research: Service encounters during study abroad. Journal of Linguistics and Language Learning 2(1). 15–50.

Beebe, Leslie & Louise Cummings. 1995. Natural speech act data versus written questionnaire data: How data collection method affects speech act performance. In Susan M. Gass & Joyce Neu (eds.), Speech acts across cultures: Challenges to communication in a second language (Studies on language acquisition 11), 65–86. Berlin: Mouton de Gruyter.

Bednarek, Monika. 2010. The language of fictional television: Drama and identity. New York, NY: Continuum. http://site.ebrary.com/lib/academiccompletetitles/home.action.

Bednarek, Monika. 2011. Approaching the data of pragmatics. In Neal R. Norrick & Wolfram Bublitz (eds.), Handbook of pragmatics: Volume 1: Foundations of pragmatics. Berlin & New York: de Gruyter Mouton.

Bednarek, Monika. 2018. Language and television series: A linguistic approach to TV dialogue (The Cambridge applied linguistics series). Cambridge & New York: Cambridge University Press.

Bibok, Károly. 2016. Encyclopedic information and pragmatic interpretation. Intercultural Pragmatics 13(3). 407–437. https://doi.org/10.1515/ip-2016-0017.

Billmyer, Kristine & Manka Varghese. 2000. Investigating instrument-based pragmatic variability: Effects of enhancing discourse completion tests. Applied Linguistics 21(4). 517–552. https://doi.org/10.1093/applin/21.4.517.

Blum-Kulka, Shoshana, Juliane House & Gabriele Kasper (eds.). 1989a. Cross-cultural pragmatics: Requests and apologies (Advances in discourse processes 31). Norwood, NJ: Ablex.

Blum-Kulka, Shoshana, Juliane House & Gabriele Kasper. 1989b. Investigating cross-cultural pragmatics: An introductory overview. In Shoshana Blum-Kulka, Juliane House & Gabriele Kasper (eds.), Cross-cultural pragmatics: Requests and apologies (Advances in discourse processes 31), 1–34. Norwood, NJ: Ablex.

Bou-Franch, Patricia & Nuria Lorenzo-Dus. 2008. Natural versus elicited data in cross-cultural speech act realisation: The case of requests in Peninsular Spanish and British English. Spanish in Context 5(2). 246–277. https://doi.org/10.1075/sic.5.2.06lor.

Briggs, Charles. 2009. Interview. In Gunter Senft, Jan-Ola Östman & Jef Verschueren (eds.), Culture and language use (Handbook of pragmatics highlights), 202–209. Amsterdam: Benjamins.

Clark, Herbert H. & Adrian Bangerter. 2004. Changing ideas about reference. In Ira Noveck & Dan Sperber (eds.), Experimental pragmatics (Palgrave Studies in Pragmatics, Language and Cognition), 25–49. New York: Palgrave Macmillan.

Cohen, Andrew D. 1996. Verbal reports as a source of insights into second language learner strategies. Applied Language Learning 7. 5–24.

Culpeper, Jonathan & Mathew Gillings. 2019. Pragmatics: Data trends. Journal of Pragmatics 145. 4–14. https://doi.org/10.1016/j.pragma.2019.01.004.

Culpeper, Jonathan, Michael Haugh & Dániel Z. Kádár (eds.). 2017. The Palgrave handbook of linguistic (im)politeness. London & Boston: Palgrave Macmillan.

Du Bois, John W., Stephan Schuetze-Coburn, Susanna Cumming & Danae Paolino. 1993. Outline of discourse transcription. In Jane A. Edwards & Martin D. Lampert (eds.), Talking data: Transcription and coding methods for language research, 45–87. Hillsdale: Erlbaum.

Dynel, Marta. 2015. Impoliteness in the service of verisimilitude in film interaction. In Marta Dynel & Jan Chovanec (eds.), Participation in public and social media interactions (Pragmatics & Beyond New Series 256), 157–182. Amsterdam & Philadelphia: Benjamins.

Economidou-Kogetsidis, Maria. 2013. Strategies, modification and perspective in native speakers’ requests: A comparison of WDCT and naturally occurring requests. Journal of Pragmatics 53. 21–38. https://doi.org/10.1016/j.pragma.2013.03.014.

Félix-Brasdefer, J. César. 2007. Natural speech vs. elicited data: A comparison of natural and role play requests in Mexican Spanish. Spanish in Context 4(2). 159–185.

Félix-Brasdefer, J. César. 2010. Data collection methods in speech act performance. In Alicia Martínez-Flor & Esther Usó-Juan (eds.), Speech act performance: Theoretical, empirical and methodological issues (Language Learning & Language Teaching 26), 41–56. Amsterdam: Benjamins.

Félix-Brasdefer, J. César & Maria Hasler-Barker. 2017. Elicited data. In Anne Barron, Yueguo Gu & Gerard Steen (eds.), The Routledge handbook of pragmatics (Routledge handbooks in applied linguistics), 27–40. Milton Park, Abingdon & New York: Routledge.

Garrett, Merrill & Robert M. Harnish. 2007. Experimental pragmatics: Testing for implicitures. Pragmatics and Cognition 15(1). 65–90. https://doi.org/10.1075/pc.15.1.07gar.

Golato, Andrea. 2017. Naturally occurring data. In Anne Barron, Yueguo Gu & Gerard Steen (eds.), The Routledge handbook of pragmatics (Routledge handbooks in applied linguistics), 21–26. Milton Park, Abingdon & New York: Routledge.

Golato, Andrea & Peter Golato. 2013. Pragmatics research methods. In Carol A. Chapelle (ed.), The encyclopedia of applied linguistics. Oxford: Wiley-Blackwell.

Goodman, Simon & Shani Burke. 2010. ‘Oh you don’t want asylum seekers, oh you’re just racist’: A discursive analysis of discussions about whether it’s racist to oppose asylum seeking. Discourse & Society 21(3). 325–340. https://doi.org/10.1177/0957926509360743.

Grucza, Sambor & Silvia Hansen-Schirra. 2016. Eyetracking and applied linguistics (Translation and Multilingual Natural Language Processing 2). Berlin: Language Science Press.

Gülich, Elisabeth. 2001. Zum Zusammenhang von alltagsweltlichen und wissenschaftlichen ‘Methoden’ (On the connection between lay and specialist ‘methods’). In Klaus Brinker, Gerd Antos, Wolfgang Heinemann & Sven F. Sager (eds.), Text- und Gesprächslinguistik: Linguistics of text and conversation: An international handbook of contemporary research (Handbücher zur Sprach- und Kommunikationswissenschaft 16.2), vol. XVI, 103. Berlin: de Gruyter.

Heine, Bernd & Heiko Narrog. 2015. The Oxford handbook of linguistic analysis (Oxford handbooks in linguistics). Oxford: Oxford University Press.

Hendriks, Berna. 2008. Dutch English requests: A study of request performance by Dutch learners of English. In Martin Pütz & JoAnne Neff-van Aertselaer (eds.), Developing contrastive pragmatics: Interlanguage and cross-cultural perspectives, 335–354. Berlin: de Gruyter Mouton.

Herring, Susan, Dieter Stein & Tuija Virtanen (eds.). 2013. Pragmatics of computer-mediated communication. Berlin & Boston: de Gruyter.

Houck, Noel & Susan Gass. 1996. Non-native refusals: A methodological perspective. In Susan Gass & Joyce Neu (eds.), Speech acts across cultures, 45–64. Berlin: de Gruyter.

Hu, Yanhong & Weiwei Fan. 2011. An exploratory study on intercultural communication research contents and methods: A survey based on the international and domestic journal papers published from 2001 to 2005. International Journal of Intercultural Relations 35(5). 554–566. https://doi.org/10.1016/j.ijintrel.2010.12.004.

Jefferson, Gail. 2004. Glossary of transcript symbols with an introduction. In Gene Lerner (ed.), Conversation analysis: Studies from the first generation, 13–31. Amsterdam: Benjamins.

Jucker, Andreas H. 2009. Speech act research between armchair, field and laboratory. Journal of Pragmatics 41(8). 1611–1635. https://doi.org/10.1016/j.pragma.2009.02.004.

Jucker, Andreas H., Klaus P. Schneider & Wolfram Bublitz (eds.). 2018. Methods in pragmatics (Handbook of Pragmatics 10). Berlin: de Gruyter Mouton.

Jucker, Andreas H. & Larssyn Staley. 2017. (Im)politeness and developments in methodology. In Jonathan Culpeper, Michael Haugh & Dániel Z. Kádár (eds.), The Palgrave handbook of linguistic (im)politeness (Palgrave handbooks), 403–429. London: Palgrave Macmillan.

Kanik, Mehmet. 2016. Reverse discourse completion task as an assessment tool for intercultural competence. Studies in Second Language Learning and Teaching 3(4). 621–644.

Kasper, Gabriele. 2008. Data collection in pragmatics research. In Helen Spencer-Oatey (ed.), Culturally speaking: Culture, communication and politeness theory, 316–341. London: Continuum.

Kasper, Gabriele & Merete Dahl. 1991. Research methods in interlanguage pragmatics. Studies in Second Language Acquisition 13(2). 215–247. https://doi.org/10.1017/s0272263100009955.

Kasper, Gabriele & Carsten Roever. 2005. Pragmatics in second language learning. In Eli Hinkel (ed.), Handbook of research in second language teaching and learning, 317–334. Mahwah, NJ: Erlbaum.Search in Google Scholar

Kasper, Gabriele & Kenneth R. Rose. 2002. Pragmatic development in a second language (Language learning monograph series). Malden, MA: Blackwell.Search in Google Scholar

Katsos, Napoleon, Clara A. Roqueta, Rosa A. C. Estevan & Chris Cummins. 2011. Are children with specific language impairment competent with the pragmatics and logic of quantification? Cognition 119(1). 43–57. https://doi.org/10.1016/j.cognition.2010.12.004.Search in Google Scholar

Kecskés, Istvan. 2000. Conceptual fluency and the use of situation-bound utterances in L2. Links and Letters 7. 145–161.Search in Google Scholar

Kecskés, Istvan. 2012. Interculturality and intercultural pragmatics. In Jane Jackson (ed.), The Routledge handbook of language and intercultural communication (Routledge handbook of applied linguistics), 67–84. London: Routledge.Search in Google Scholar

Kecskés, Istvan. 2014. Intercultural pragmatics. New York: Oxford University Press.Search in Google Scholar

Kecskés, Istvan. 2017. Cross-cultural and intercultural pragmatics. In Yan Huang (ed.), The Oxford handbook of pragmatics (Oxford handbooks in linguistics). Oxford, United Kingdom: Oxford University Press.10.1093/oxfordhb/9780199697960.013.29Search in Google Scholar

Kecskés, Istvan. 2018. Intercultural pragmatics. In Frank Liedtke & Astrid Tuchen (eds.), Handbuch Pragmatik, 140–149. Stuttgart: Metzler.10.1007/978-3-476-04624-6_14Search in Google Scholar

Kecskés, Istvan & Monika Kirner-Ludwig. 2020. Introduction: New waves in pragmatics. In Monika Kirner-Ludwig (ed.), Fresh perspectives on issues in pragmatics (Routledge Research on New Waves in Pragmatics 1). New York, NY: Routledge. https://doi.org/10.4324/9781003017462-1.

Kirner-Ludwig, Monika. 2022. Research methods in intercultural pragmatics. In Istvan Kecskés (ed.), The Cambridge handbook of intercultural pragmatics (CHIP), 361–394. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108884303.015.

Krueger, Richard A. & Mary A. Casey. 2009. Focus groups: A practical guide for applied research, 4th edn. Los Angeles, CA: Sage.

Leech, Geoffrey N. 2014. The pragmatics of politeness (Oxford Studies in Sociolinguistics). New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195341386.001.0001.

Leung, Constant, Roxy Harris & Ben Rampton. 2004. Living with inelegance in qualitative research on task-based learning. In Bonny Norton & Kelleen Toohey (eds.), Critical pedagogies and language learning (The Cambridge applied linguistics series), 242–268. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139524834.013.

Liedtke, Frank & Astrid Tuchen (eds.). 2018. Handbuch Pragmatik. Stuttgart: Metzler. https://doi.org/10.1007/978-3-476-04624-6.

Martínez-Flor, Alicia. 2006. Task effects on EFL learner’s production of suggestions: A focus on elicited phone messages and emails. Miscelanea: A Journal of English and American Studies 33. 47–64. https://doi.org/10.26754/ojs_misc/mj.200610088.

Martínez-Flor, Alicia & Esther Usó-Juan. 2011. Research methodologies in pragmatics: Eliciting refusals to requests. Estudios de lingüística inglesa aplicada 11. 47–87.

McKay, Sandra & Nancy H. Hornberger (eds.). 2005. Sociolinguistics and language teaching, 10th printing (The Cambridge applied linguistics series). Cambridge: Cambridge University Press.

Meibauer, Jörg. 2012. Pragmatic evidence, context, and story design: An essay on recent developments in experimental pragmatics. Language Sciences 34(6). 768–776. https://doi.org/10.1016/j.langsci.2012.04.014.

Meibauer, Jörg & Markus Steinbach. 2011. Experimental pragmatics/semantics (Linguistik Aktuell/Linguistics Today 175). Amsterdam & Philadelphia: Benjamins. https://doi.org/10.1075/la.175.

Mey, Jacob L. 2004. Between culture and pragmatics: Scylla and Charybdis? The precarious condition of intercultural pragmatics. Intercultural Pragmatics 1(1). 27–48. https://doi.org/10.1515/iprg.2004.006.

Moreland, Judy & Bronwen Cowie. 2005. Exploring the methods of auto-photography and photo-interviews: Children taking pictures of science and technology. Waikato Journal of Education 11(1). 73–87. https://doi.org/10.15663/wje.v11i1.320.

Nguyen, Thi Thuy Minh. 2019. Data collection methods in L2 pragmatics research: An overview. In Naoko Taguchi (ed.), The Routledge handbook of SLA and pragmatics, 195–211. New York: Routledge. https://doi.org/10.4324/9781351164085-13.

Norrick, Neal R. & Wolfram Bublitz (eds.). 2011. Handbook of pragmatics: Volume 1: Foundations of pragmatics. Berlin & New York: de Gruyter Mouton.

Noveck, Ira & Dan Sperber (eds.). 2004. Experimental pragmatics (Palgrave Studies in Pragmatics, Language and Cognition). New York: Palgrave Macmillan. https://doi.org/10.1057/9780230524125.

Noveck, Ira A. 2018. Experimental pragmatics: The making of a cognitive science. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316027073.

Pijnacker, Judith, Peter Hagoort, Jan Buitelaar, Jan-Pieter Teunisse & Bart Geurts. 2009. Pragmatic inferences in high-functioning adults with autism and Asperger syndrome. Journal of Autism and Developmental Disorders 39(4). 607–618. https://doi.org/10.1007/s10803-008-0661-8.

Rose, Kenneth R. 2000. An exploratory cross-sectional study of interlanguage pragmatic development. Studies in Second Language Acquisition 22(1). 27–67. https://doi.org/10.1017/s0272263100001029.

Rose, Kenneth R. 2001. Compliments and compliment responses in film: Implications for pragmatics research and language teaching. International Review of Applied Linguistics in Language Teaching 39(4). https://doi.org/10.1515/iral.2001.007.

Salmons, Janet (ed.). 2012. Cases in online interview research. Thousand Oaks, CA: SAGE Publications. https://doi.org/10.4135/9781506335155.

Sampietro, Agnese, Samuel Felder & Beat Siebenhaar. 2022. Do you kiss when you text? Cross-cultural differences in the use of the kissing emojis in three WhatsApp corpora. Intercultural Pragmatics 19(2). 183–208. https://doi.org/10.1515/ip-2022-2002.

Schauer, Gila A. & Svenja Adolphs. 2006. Expressions of gratitude in corpus and DCT data: Vocabulary, formulaic sequences, and pedagogy. System 34(1). 119–134. https://doi.org/10.1016/j.system.2005.09.003.

Schlesewsky, Matthias. 2009. Linguistische Daten aus experimentellen Umgebungen: Eine multiexperimentelle und multimodale Perspektive. Zeitschrift für Sprachwissenschaft 28(1). 169–178. https://doi.org/10.1515/zfsw.2009.020.

Schmidt, Thomas & Kai Wörner. 2009. EXMARaLDA – creating, analysing and sharing spoken language corpora for pragmatic research. Pragmatics 19(4). 565–582. https://doi.org/10.1075/prag.19.4.06sch.

Schneider, Klaus P. 2018. Methods and ethics of data collection. In Andreas H. Jucker, Klaus P. Schneider & Wolfram Bublitz (eds.), Methods in pragmatics, 37–94. Berlin & Boston: de Gruyter. https://doi.org/10.1515/9783110424928-002.

Selting, Margret, Peter Auer, Dagmar Barth-Weingarten, Jörg Bergmann, Pia Bergmann, Karin Birkner, Elizabeth Couper-Kuhlen, Arnulf Deppermann, Peter Gilles, Susanne Günthner, Martin Hartung, Friederike Kern, Christine Mertzlufft, Christian Meyer, Miriam Morek, Frank Oberzaucher, Jörg Peters, Uta Quasthoff, Wilfried Schütte, Anja Stukenbrock & Susanne Uhmann. 2009. Gesprächsanalytisches Transkriptionssystem 2 (GAT 2). Gesprächsforschung – Online-Zeitschrift zur verbalen Interaktion 10. 353–402.

Senft, Gunter, Jan-Ola Östman & Jef Verschueren (eds.). 2009. Culture and language use (Handbook of Pragmatics Highlights 2). Amsterdam: Benjamins. https://doi.org/10.1075/hoph.2.

Sidnell, Jack & Tanya Stivers (eds.). 2014. The handbook of conversation analysis (Blackwell handbooks in linguistics). Chichester: Wiley-Blackwell.

Sobreira, Catarina, Joyce K. Klu, Christian Cole, Niamh Nic Daéid & Hervé Ménard. 2020. Reviewing research trends—a scientometric approach using Gunshot Residue (GSR) literature as an example. Publications 8(1). 1–17. https://doi.org/10.3390/publications8010007.

Stewart, David W., Prem N. Shamdasani & Dennis W. Rook. 2007. Focus groups: Theory and practice (Applied social research methods series 20), 2nd edn. Thousand Oaks: SAGE Publications. https://doi.org/10.4135/9781412991841.

Taguchi, Naoko (ed.). 2019. The Routledge handbook of second language acquisition and pragmatics (Routledge handbooks in second language acquisition 1). London & New York: Routledge. https://doi.org/10.4324/9781351164085-1.

Taguchi, Naoko & YouJin Kim. 2018. Task-based approaches to teaching and assessing pragmatics (Task-Based Language Teaching 10). Amsterdam & Philadelphia: Benjamins. https://doi.org/10.1075/tblt.10.

Takimoto, Masahiro. 2009. Input-based task and interlanguage pragmatics: The effects of input-based task on the development of learners’ pragmatic proficiency. Saarbrücken: VDM Verlag Dr. Müller. https://doi.org/10.1016/j.pragma.2008.12.001.

Tracy, Sarah J. 2013. Qualitative research methods: Collecting evidence, crafting analysis, communicating impact. Chichester: Wiley-Blackwell.

Trosborg, Anna. 1995. Interlanguage pragmatics: Requests, complaints, and apologies (Studies in anthropological linguistics 7). Berlin & New York: Mouton de Gruyter. https://doi.org/10.1515/9783110885286.

Trosborg, Anna (ed.). 2010. Pragmatics across languages and cultures. Berlin & New York: de Gruyter Mouton. https://doi.org/10.1515/9783110214444.

Turnbull, William. 2001. An appraisal of pragmatic elicitation techniques for the social psychological study of talk. Pragmatics 11(1). 31–61. https://doi.org/10.1075/prag.11.1.03tur.

Van Olmen, Daniël & Vittorio Tantucci. 2022. Getting attention in different languages: A usage-based approach to parenthetical look in Chinese, Dutch, English, and Italian. Intercultural Pragmatics 19(2). 141–181. https://doi.org/10.1515/ip-2022-2001.

Youn, Soo J. 2020. Interactional features of L2 pragmatic interaction in role-play speaking assessment. TESOL Quarterly 54(1). 201–233. https://doi.org/10.1002/tesq.542.

Yuan, Yi. 2001. An inquiry into empirical pragmatics data-gathering methods: Written DCTs, oral DCTs, field notes, and natural conversations. Journal of Pragmatics 33(2). 271–292. https://doi.org/10.1016/s0378-2166(00)00031-x.

Zhu, Hua (ed.). 2016. Research methods in intercultural communication: A practical guide (Guides to research methods in language and linguistics 8). Chichester & Malden, MA: Wiley-Blackwell. https://doi.org/10.1002/9781119166283.

Published Online: 2022-08-22
Published in Print: 2022-09-27

© 2022 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
