Chapter 8. Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection
Stijn De Beugher
Abstract
In this chapter, we discuss a novel method for the analysis of mobile eye-tracking data in natural environments. Mobile eye-tracking systems generate large amounts of continuous data, making manual analysis extremely time-consuming. Solutions provided by commercial eye-tracking systems, such as marker-based analysis, minimize the manual labor but require experimental control, making real-life experiments practically infeasible, and generally apply only to the analysis of objects. Here, we discuss a novel method for processing mobile eye-tracking data based on the integration of computer vision techniques. This approach allows us to automatically detect specific objects, faces and human bodies or body parts in images captured by a mobile eye-tracker. By mapping the gaze data onto these detections, we gain insight into the visual behavior of the recorded participants. As an important step towards integrating this method into the analysis of multimodal interaction, we developed an output format compatible with annotation tools such as ELAN, making our software interoperable with existing annotations. In this chapter we give an overview of relevant image processing techniques and their application in interaction studies. We also present a thorough comparison between manual analysis and our automatic analysis, in both speed and accuracy, on challenging real-life experiments.
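The core mapping step the abstract describes, namely overlaying gaze coordinates on detector output, can be sketched as a simple point-in-box test. This is a minimal illustration, not the chapter's actual implementation: the function name, box format and coordinates below are hypothetical, and a real pipeline would obtain the boxes from an object, face or person detector run on each scene-camera frame.

```python
# Minimal sketch of mapping gaze data onto detections (hypothetical example).
# A real pipeline would get `boxes` from a detector run on the scene video.

def gaze_hits(gaze, boxes):
    """Return labels of all detection boxes containing the gaze point.

    gaze  -- (x, y) gaze coordinates in scene-camera pixels
    boxes -- list of (label, x_min, y_min, x_max, y_max) detections
    """
    gx, gy = gaze
    return [label for label, x0, y0, x1, y1 in boxes
            if x0 <= gx <= x1 and y0 <= gy <= y1]

# One example frame with a face and an object detection (made-up coordinates).
detections = [("face", 200, 80, 320, 220), ("cup", 400, 300, 480, 380)]
print(gaze_hits((250, 150), detections))  # → ['face']
print(gaze_hits((10, 10), detections))    # → []
```

Aggregating these per-frame labels over time yields the fixation targets that can then be exported in an ELAN-compatible annotation format.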
Chapters in this book
- Prelim pages i
- Table of contents v
- Chapter 1. Introduction 1
-
Part 1. Theoretical considerations
- Chapter 2. Eye gaze as a cue for recognizing intention and coordinating joint action 21
- Chapter 3. Effects of a speaker’s gaze on language comprehension and acquisition 47
- Chapter 4. Weaving oneself into others 67
- Chapter 5. On the role of gaze for successful and efficient communication 91
-
Part 2. Methodological considerations
- Chapter 6. Quantifying the interplay of gaze and gesture in deixis using an experimental-simulative approach 109
- Chapter 7. Gaze and face-to-face interaction 139
- Chapter 8. Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection 169
-
Part 3. Case studies
- Chapter 9. Gaze, addressee selection and turn-taking in three-party interaction 197
- Chapter 10. Gaze as a predictor for lexical and gestural alignment 233
- Chapter 11. Mobile dual eye-tracking in face-to-face interaction 265
- Chapter 12. Displaying recipiency in an interpreter-mediated dialogue 301
- Index 323