
6 Modeling the Relationship between Driver Gaze Behavior and Traffic Context during Lane Changes Using a Recurrent Neural Network

Daiki Hayashi, Chiyomi Miyajima, and Kazuya Takeda
This chapter is in the book Vehicles, Drivers, and Safety

Abstract

Lane changes are among the most difficult tasks drivers face because they require a high degree of situational awareness. Previous studies have shown that drivers exhibit typical gaze patterns prior to lane changes. In this study, we assume that driver gaze behavior is correlated with the surrounding traffic context and propose a method for modeling this correlation. Driver gaze behavior during lane changes is modeled using a recurrent neural network (RNN). The input to the RNN consists of eight features calculated from the positions of surrounding vehicles in the eight areas around the ego-vehicle. We regard the output of the RNN as the probabilities of the driver looking in each of ten gaze directions, for example, “front,” “right,” “mirror,” etc. The RNN generates a sequence of these probabilities for a given traffic context. To evaluate our gaze behavior model, we conducted a risky lane change detection experiment. We collected driving data from nine drivers operating an instrumented vehicle on expressways while passing other vehicles, yielding a total of 859 lane change scenes. Ten evaluators then watched the front-view video of each lane change scene and rated its perceived risk. The normalized average of these risk scores was used as the ground-truth risk level for each scene. We then selected the 10% safest and the 10% riskiest scenes to train two RNNs, one for safe lane changes and one for risky lane changes. The ratio of the similarity of driver gaze behavior during the ten seconds of each lane change to the risky model versus the safe model was used to detect risky lane change scenes. Detection performance was evaluated using the leave-one-out method. Our experimental results showed that the proposed model achieved AUCs of 0.90 and 0.61 for right and left lane changes, respectively.
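
The abstract outlines a concrete pipeline: an RNN maps a sequence of eight traffic-context features to per-frame probabilities over ten gaze directions, and each scene is scored by comparing its fit to a risky model against its fit to a safe model. The following is a minimal PyTorch sketch of that idea. The hidden size, the choice of an LSTM cell, and the log-likelihood-ratio scoring are assumptions for illustration, not details given in the abstract.

```python
import torch
import torch.nn as nn

N_CONTEXT_FEATURES = 8   # one feature per area around the ego-vehicle
N_GAZE_DIRECTIONS = 10   # e.g., "front", "right", "mirror", ...

class GazeRNN(nn.Module):
    """Sketch of a gaze-behavior model: traffic context in, gaze distribution out."""
    def __init__(self, hidden_size: int = 64):  # hidden size is an assumption
        super().__init__()
        self.rnn = nn.LSTM(N_CONTEXT_FEATURES, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, N_GAZE_DIRECTIONS)

    def forward(self, context_seq: torch.Tensor) -> torch.Tensor:
        # context_seq: (batch, time, 8) -> per-frame gaze log-probabilities (batch, time, 10)
        h, _ = self.rnn(context_seq)
        return torch.log_softmax(self.head(h), dim=-1)

def risk_score(safe_model: GazeRNN, risky_model: GazeRNN,
               context_seq: torch.Tensor, gaze_labels: torch.Tensor) -> float:
    """Score a scene by how much better the observed gaze sequence fits the
    risky model than the safe model (a log-likelihood ratio; the chapter's
    exact similarity measure is not specified here).
    gaze_labels: (batch, time) integer gaze-direction indices (dtype long)."""
    with torch.no_grad():
        ll_safe = torch.gather(safe_model(context_seq), -1,
                               gaze_labels.unsqueeze(-1)).sum()
        ll_risky = torch.gather(risky_model(context_seq), -1,
                                gaze_labels.unsqueeze(-1)).sum()
    return (ll_risky - ll_safe).item()
```

In this sketch, a scene would be flagged as risky when risk_score exceeds a threshold chosen on the training folds of the leave-one-out evaluation; the authors' actual similarity measure and decision rule may differ.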

Chapters in this book

  1. Frontmatter
  2. Contents
  3. Contributing Authors
  4. Introduction
  5. Part A: Driver/Vehicle Interaction Systems
  6. 1 MobileUTDrive: A Portable Device Platform for In-vehicle Driving Data Collection
  7. 2 Semantic Analysis of Driver Behavior by Data Fusion
  8. 3 Predicting When Drivers Need AR Guidance
  9. 4 Driver’s Mental Workload Estimation with Involuntary Eye Movement
  10. 5 Neurophysiological Driver Behavior Analysis
  11. 6 Modeling the Relationship between Driver Gaze Behavior and Traffic Context during Lane Changes Using a Recurrent Neural Network
  12. 7 A Multimodal Control System for Autonomous Vehicles Using Speech, Gesture, and Gaze Recognition
  13. 8 Head Pose as an Indicator of Drivers’ Visual Attention
  14. Part B: Models & Theories of Driver/Vehicle Systems
  15. 9 Evolving Neural Network Controllers for Tractor-Trailer Vehicle Backward Path Tracking
  16. 10 Spectral Distance Analysis for Quality Estimation of In-Car Communication Systems
  17. 11 Combination of Hands-Free and ICC Systems
  18. 12 Insights into Automotive Noise PSD Estimation Based on Multiplicative Constants
  19. 13 In-Car Communication: From Single- to Four-Channel with the Frequency Domain Adaptive Kalman Filter
  20. Part C: Self-Driving and the Mobility in 2050
  21. 14 The PIX Moving KuaiKai: Building a Self-Driving Car in Seven Days
  22. 15 Vehicle Ego-Localization with a Monocular Camera Using Epipolar Geometry Constraints
  23. 16 Connected and Automated Vehicles: Study of Platooning
  24. 17 Epilogue – Future Mobility 2050
  25. Index