Chapter 11. Mobile dual eye-tracking in face-to-face interaction
Anja Stukenbrock
Abstract
In face-to-face interaction, deixis, i.e. (the use of) a particular class of linguistic items that have grammaticalised the space-, time- and person-bound structure of the participants’ subjective orientation in the speech event (Bühler, 1965[1934]), is intricately connected to visible acts of demonstration (prototypically pointing) and joint attention. A growing body of publications within the field of conversation analysis and research on multimodality acknowledges the central role that pointing plays in acts of deictic reference (Eriksson, 2009; Fricke, 2007; Goodwin, 2003; Kendon, 2004; Kita, 2003; Mondada, 2012a; Stukenbrock, 2009, 2014a, 2014b, 2015). Surprisingly, eye gaze has remained a largely unexplored area, although it serves a variety of crucial functions in the participants’ on-line organisation of a joint focus of attention on deictically foregrounded entities in the immediate spatial surroundings (Stukenbrock, 2009, 2010, 2014a, 2014b, 2015). The few existing studies mainly rely on video recordings that do not allow a precise analysis of eye gaze.
Drawing on innovative mobile eye-tracking technology, my paper explores different forms of gaze behaviour that systematically occur when participants direct their interlocutor’s attention to visible entities in the surroundings by means of deictic pointing. My data consists of mobile eye-tracking recordings made with two pairs of eye-tracking glasses worn by participants in non-laboratory, everyday settings: (1) shopping together at a market, (2) searching for a book in a library, and (3) conducting an informal conversation. The analysis is based on frame-precisely synchronised split-screen videos composed of the two complementary eye-tracking videos, which allow a moment-by-moment reconstruction of how the participants coordinate talk, body movements and gaze in the emergent interaction.
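
The frame-precise synchronisation described above can, in principle, be approximated with standard video tools. The following is a minimal sketch, not the author’s actual workflow: it assumes two scene-camera exports from the eye-tracking glasses (the file names `wearer_A.mp4` and `wearer_B.mp4`, the sync frame indices and the output size are hypothetical placeholders), a synchronisation event visible in both recordings (e.g. a clap or LED flash), and uses OpenCV to write a split-screen video in which corresponding frames sit side by side.

```python
# A minimal sketch (not the study's actual workflow): combine two mobile
# eye-tracking scene videos into a frame-synchronised split-screen video.
# File names, frame indices and the output height are hypothetical placeholders.
import cv2
import numpy as np


def make_split_screen(path_a, path_b, sync_frame_a, sync_frame_b,
                      out_path="split_screen.mp4", height=720):
    cap_a, cap_b = cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)

    # Jump to the common synchronisation event (e.g. a clap or LED flash
    # visible in both scene cameras) so that, from here on, frame n of one
    # recording corresponds to frame n of the other.
    cap_a.set(cv2.CAP_PROP_POS_FRAMES, sync_frame_a)
    cap_b.set(cv2.CAP_PROP_POS_FRAMES, sync_frame_b)

    fps = cap_a.get(cv2.CAP_PROP_FPS)  # assumes both cameras share one frame rate
    writer = None

    def scale(frame):
        # Bring a frame to the common display height, preserving aspect ratio.
        h, w = frame.shape[:2]
        return cv2.resize(frame, (int(w * height / h), height))

    while True:
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()
        if not (ok_a and ok_b):
            break  # stop as soon as either recording ends

        combined = np.hstack([scale(frame_a), scale(frame_b)])
        if writer is None:
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (combined.shape[1], combined.shape[0]))
        writer.write(combined)

    cap_a.release()
    cap_b.release()
    if writer is not None:
        writer.release()


# Hypothetical usage: the sync event appears at frame 120 in one recording
# and at frame 95 in the other.
make_split_screen("wearer_A.mp4", "wearer_B.mp4", sync_frame_a=120, sync_frame_b=95)
```

In practice, the gaze-cursor overlays come from the eye-tracker manufacturer’s software, and truly frame-precise alignment would also have to account for the two cameras’ possibly differing frame rates; the sketch only illustrates the basic split-screen principle.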
Chapters in this book
- Prelim pages
- Table of contents
- Chapter 1. Introduction

Part 1. Theoretical considerations
- Chapter 2. Eye gaze as a cue for recognizing intention and coordinating joint action
- Chapter 3. Effects of a speaker’s gaze on language comprehension and acquisition
- Chapter 4. Weaving oneself into others
- Chapter 5. On the role of gaze for successful and efficient communication

Part 2. Methodological considerations
- Chapter 6. Quantifying the interplay of gaze and gesture in deixis using an experimental-simulative approach
- Chapter 7. Gaze and face-to-face interaction
- Chapter 8. Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection

Part 3. Case studies
- Chapter 9. Gaze, addressee selection and turn-taking in three-party interaction
- Chapter 10. Gaze as a predictor for lexical and gestural alignment
- Chapter 11. Mobile dual eye-tracking in face-to-face interaction
- Chapter 12. Displaying recipiency in an interpreter-mediated dialogue
- Index