How does your HMI Design affect the visual attention of the driver?

In this tutorial, participants learn the basic mechanisms by which humans tune their visual attention to monitor automated systems. We introduce participants to basic models from psychology, selected and discussed based on their applicability to concrete human factors issues in manual and automated driving.

With the theoretical foundations of visual attention in mind, we present state-of-the-art techniques and tools that support modeling and analyzing how humans spend and tune their attention while driving.

Finally, participants apply what they have learned during the tutorial, either to their own use cases or to exemplary ones that we provide along with a software tool. Participants can take this tool home to further analyze how their HMI designs can improve drivers' visual attention allocation and reaction times.

Targeted Audience and Prerequisites

The tutorial is aimed at those interested in visual HMIs or the integration of assistance systems in vehicles. Ideally, participants have designed or implemented in-car HMIs before, or have an idea for a future HMI that they can use as an application example for visual attention modeling in the hands-on practical session. No further prerequisites are required; everyone from graduate students to experienced researchers and industry practitioners is invited. Participants with a Windows notebook can install the required software beforehand, but we also offer computers with the software preinstalled in our lab.

The lab size limits participation to 12. Registration is available via the Automotive UI conference site.

Workshop Agenda

  1. Lecture (60 min)
  2. Use case presentations (30 min)
  3. Hands-on modelling session (90 min)
  4. Results discussion and presentation (45 min)

1. Introduction to Model-based Visual Attention Prediction

Figure 1

We introduce participants to the approach of simulating human behavior based on psychologically and physiologically plausible models. Such model-based prediction methods have been applied to a wide variety of tasks: e.g., to evaluate drivers’ monitoring behavior while approaching intersections [1], to explore how characteristics of in-vehicle tasks impact visual scanning behavior [4], or to explore the impact of different urban ACC assistance system HMI designs on the visual attention allocation of drivers [3]. In the lecture we focus specifically on the SEEV model [6] and the AIE model [7]. The SEEV (Salience, Effort, Expectancy, and Value) model of human attention allocation proposed by Wickens et al. [6] predicts human attention distribution based on four influencing factors. Two of these factors, Expectancy and Value, are task-related, top-down factors and the main drivers of attention while driving a car [4]. Determining the SEEV model’s expectancy and value parameters is difficult, because it requires expertise in both cognitive modelling and the application domain. Techniques like the lowest-ordinal heuristic and tools like the Human Efficiency Evaluator (HEE) [2] have recently been proposed to ease the creation of SEEV models based on these top-down factors. Figure 1 illustrates the four principal steps of attention modeling.
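To give a feel for how such a prediction works, here is a minimal illustrative sketch (not the tutorial's tool) of a simplified, additive SEEV computation. The areas of interest, parameter values, and weighting are hypothetical; the small integer parameters mimic the lowest-ordinal heuristic, which only ranks the areas of interest rather than measuring them.

```python
def seev_score(salience, effort, expectancy, value):
    """One common additive SEEV formulation: salience and the top-down
    factors expectancy and value attract attention, effort detracts.
    A higher score means more attention is predicted for that area."""
    return salience - effort + expectancy + value

# Hypothetical areas of interest (AOIs) for a driving scene that
# includes an in-vehicle HMI display; all parameter values are
# illustrative ordinal assignments, not empirical data.
aois = {
    "road ahead":  dict(salience=2, effort=0, expectancy=3, value=3),
    "mirror":      dict(salience=1, effort=1, expectancy=1, value=2),
    "HMI display": dict(salience=2, effort=1, expectancy=1, value=1),
}

scores = {name: seev_score(**params) for name, params in aois.items()}
total = sum(scores.values())

# Normalize the scores into predicted shares of attention (dwell time).
prediction = {name: score / total for name, score in scores.items()}

for name, share in sorted(prediction.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {share:.0%}")
```

With these example values, the road ahead receives the largest predicted attention share, reflecting that expectancy and value dominate attention allocation while driving [4].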

Figure 2

2. Use Case Presentations

Ideally, participants bring their own HMI designs with them: a graphical interface sketch (images of around 1024 px × 765 px in PNG/JPG), in either 2-3 design variants or 2-3 different traffic situations, on a USB stick. Alternatively, we offer an exemplary HMI use case to model. Participants who bring their own designs are asked to briefly explain the functionality of their design to the other participants.

3. Hands-on Modelling Session and 4. Results Exploration

Figure 3

We guide participants through the driver monitoring simulation and aggregate the predictions from participants who modeled the same use cases. The results are then explored and summarized in small groups and finally presented to the entire tutorial audience. Figure 2 shows an exemplary result: a heatmap illustrating a driver’s attention allocation. Figure 3 illustrates exemplary visualizations for further model exploration, e.g., graphs depicting prediction differences between modelers that reveal differing understandings of the situation in which an HMI is intended to support the driver.

5. After the Tutorial

The Human Efficiency Evaluator is offered free of charge to all participants, and access to the generated models and predictions will remain available after the tutorial, so that participants can improve their models or examine those of the other participants.

References

1. Alexander J. Bos, Daniele Ruscio, Nicholas D. Cassavaugh, Justin Lach, Pujitha Gunaratne, and Richard W. Backs. Comparison of novice and experienced drivers using the SEEV model to predict attention allocation at intersections during simulated driving. In Proceedings of the Eighth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, June 2015.

2. Sebastian Feuerstack and Bertram Wortelen. A model-driven tool for getting insights into car drivers’ monitoring behavior. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV’17) (in press), 2017.

3. Sebastian Feuerstack, Bertram Wortelen, Carmen Kettwich, and Anna Schieben. Theater-system technique and model-based attention prediction for the early automotive HMI design evaluation. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2016.

4. William J. Horrey, Christopher D. Wickens, and Kyle P. Consalus. Modeling drivers’ visual attention allocation while interacting with in-vehicle technologies. Journal of Experimental Psychology: Applied, 12(2):67–78, 2006.

5. Visual-manual driver distraction guidelines for portable and aftermarket devices. Technical Report ID: NHTSA-2013-0137-0059, National Highway Traffic Safety Administration, 2016.

6. Christopher D. Wickens, John Helleberg, Juliana Goh, Xidong Xu, and William J. Horrey. Pilot task management: Testing an attentional expected value model of visual scanning. Technical report, NASA Ames Research Center Moffett Field, CA, 2001.

7. Bertram Wortelen, Martin Baumann, and Andreas Lüdtke. Dynamic simulation and prediction of drivers’ attention distribution. Transportation research part F: traffic psychology and behaviour, 21:278–294, 2013.