SUNDAy: Saliency Using Natural Statistics for Dynamic Analysis of Scenes

Abstract

The notion that novelty attracts attention is core to many accounts of visual saliency. However, no consensus has been reached on how best to define novelty. Different interpretations of novelty lead to different bottom-up saliency models, which have been proposed for static images and more recently for dynamic scenes. In previous work, we assumed that a basic goal of the visual system is to locate targets such as predators and food that are potentially important for survival, and developed a probabilistic model of salience (Zhang, Tong, Marks, Shan, & Cottrell, 2008). The probabilistic description of this goal naturally leads to a definition of novelty as self-information, an idea that has appeared in other work. However, our formulation assumes that the statistics used to determine novelty are learned from prior experience rather than estimated from the current image, leading to an efficient implementation that explains several search asymmetries other models fail to predict. In this paper, we generalize our saliency framework to dynamic scenes and develop a simple, efficient, and online bottom-up saliency algorithm. Our algorithm matches the performance of more complex state-of-the-art algorithms in predicting human fixations during free viewing of videos.
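To make the self-information idea concrete, the sketch below scores each feature response f by -log p(f), where p is fit to prior data rather than to the current frame. This is only an illustrative outline under assumed choices (a generalized Gaussian response model, the function names fit_generalized_gaussian and saliency, and synthetic stand-in data); the actual features and fitting procedure of the model are those described in the cited paper.

    # Minimal sketch: saliency as self-information, with feature statistics
    # learned from prior experience (not from the current frame).
    # The generalized Gaussian model and the crude fit below are assumptions
    # for illustration, not the authors' exact implementation.
    import numpy as np

    def fit_generalized_gaussian(samples):
        """Rough moment-based fit of (sigma, shape) to prior feature responses."""
        sigma = samples.std()
        kurt = ((samples - samples.mean()) ** 4).mean() / sigma ** 4
        # Kurtosis-matching heuristic: Gaussian (kurt=3) -> shape 2, heavier tails -> smaller shape.
        shape = 2.0 if kurt <= 3.0 else 2.0 * 3.0 / kurt
        return sigma, shape

    def saliency(feature_map, sigma, shape):
        """Self-information -log p(f), up to an additive constant, for a generalized Gaussian."""
        return (np.abs(feature_map) / sigma) ** shape

    # Usage: fit once on prior data, then score incoming frames online.
    prior_responses = np.random.laplace(size=100_000)   # stand-in for prior experience
    sigma, shape = fit_generalized_gaussian(prior_responses)
    frame_features = np.random.laplace(size=(48, 64))   # stand-in for filter responses on one frame
    saliency_map = saliency(frame_features, sigma, shape)

Because the distribution parameters come from prior data, nothing needs to be re-estimated per frame, which is what allows the online, low-cost computation described in the abstract.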
