A soft barrier model for predicting human visuomotor behavior in a driving task

Abstract

We present a task-based model of human gaze allocation in a driving environment. When humans are engaged in natural tasks, gaze is predominantly directed towards task-relevant objects. In particular, in a multi-task scenario such as driving, drivers must access multiple perceptual cues for effective control. Our model uses visual task modules that require multiple independent sources of information for control, analogous to human foveation on different task-relevant objects. Building on the framework described by Sprague and Ballard (2003), we use a modular structure to feed information to a set of PID controllers that drive a simulated car, and we introduce a softmax barrier model for gaze selection. The model uses performance thresholds to represent task importance across modules and allows noise to be added to any module to represent task uncertainty. Results from the model compare favorably with human gaze data gathered from subjects driving in a virtual environment.
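As a rough illustration of the gaze-selection scheme sketched in the abstract, the following Python snippet simulates softmax gaze allocation across task modules. The module names, threshold and noise values, and the priority formula (uncertainty divided by a per-module performance threshold, so that more important tasks with tighter thresholds are fixated sooner) are all illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task modules (names and values are illustrative only):
# each has a performance threshold (task importance) and a noise level
# (how fast its state estimate degrades while it is not fixated).
MODULES = {
    "lane_following": {"threshold": 0.5, "noise": 0.10},
    "car_following":  {"threshold": 0.8, "noise": 0.25},
    "speed_control":  {"threshold": 1.0, "noise": 0.05},
}

def softmax(x, temperature=1.0):
    z = np.asarray(x, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def simulate_gaze(steps=200):
    """Toy simulation: uncertainty grows for unattended modules and
    resets when a module is fixated; fixations are drawn from a
    softmax over uncertainty relative to each module's threshold."""
    names = list(MODULES)
    uncertainty = dict.fromkeys(names, 0.0)
    fixation_counts = dict.fromkeys(names, 0)
    for _ in range(steps):
        # Barrier-style priority: how close each module's uncertainty
        # has risen towards (or past) its performance threshold.
        priority = [uncertainty[n] / MODULES[n]["threshold"] for n in names]
        probs = softmax(priority)
        fixated = rng.choice(names, p=probs)
        fixation_counts[fixated] += 1
        for n in names:
            if n == fixated:
                uncertainty[n] = 0.0                   # fixation restores the estimate
            else:
                uncertainty[n] += MODULES[n]["noise"]  # drift while unattended
    return fixation_counts

print(simulate_gaze())
```

Under these assumed parameters, modules with tighter thresholds or faster uncertainty growth attract proportionally more fixations, which is the qualitative behavior the softmax barrier model is meant to capture.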
