Gaze Control in Scene Perception
In human vision, acuity and color sensitivity are best at the point of fixation, and the visual-cognitive system exploits this fact by actively controlling gaze to direct fixation towards important and informative scene regions in real time as needed. How gaze control operates over complex real-world scenes has recently become of central concern in several core cognitive science disciplines including cognitive psychology, visual neuroscience, and machine vision. This article reviews current approaches and empirical findings in human gaze control during real-world scene perception.

Corresponding author: John M. Henderson ([email protected]).

During human scene perception, high quality visual information is acquired only from a limited spatial region surrounding the center of gaze (the fovea). Visual quality falls off rapidly and continuously from the center of gaze into a low-resolution visual surround. We move our eyes about three times each second via rapid eye movements (saccades) to reorient the fovea through the scene. Pattern information is only acquired during periods of relative gaze stability (fixations) owing to ‘saccadic suppression’ during the saccades themselves [1–3]. Gaze control is the process of directing fixation through a scene in real time in the service of ongoing perceptual, cognitive and behavioral activity (Figure 1).

Figure 1. Scan pattern of one viewer during visual search. The viewer was counting the number of people in the scene. The circles represent fixations (scaled in size to their durations, which are shown in milliseconds) and the lines represent saccades. The figure illustrates that fixation durations are variable even for a single viewer examining a single scene.

There are at least three reasons why gaze control is an important topic in scene perception. First, vision is an active process in which the viewer seeks out task-relevant visual information. In fact, virtually all animals with developed visual systems actively control their gaze using eye, head, and/or body movements [4]. Active vision ensures that high quality visual information is available when it is needed, and also simplifies a variety of otherwise difficult computational problems [5,6]. A complete theory of vision and visual cognition requires understanding how ongoing visual and cognitive processes control the orientation of the eyes in real time, and in turn how vision and cognition are affected by gaze direction over time.

Second, because attention plays a central role in visual and cognitive processing, and because eye movements are an overt behavioral manifestation of the allocation of attention in a scene, eye movements serve as a window into the operation of the attentional system. Indeed, although behavioral and neurophysiological evidence suggest that internal visual attentional systems (covert visual attention) and eye movements (overt visual attention) can be dissociated [7], the strong natural relationship between covert and overt attention has led investigators to suggest that studying covert visual attention independently of overt attention is misguided [8].

Third, eye movements provide an unobtrusive, sensitive, real-time behavioral index of ongoing visual and cognitive processing. This fact has been exploited to a significant degree in the study of perceptual and linguistic processes in reading [9–11], and is coming to play a similarly important role in studies of language production and spoken language comprehension [12,13]. Eye movements have been exploited to a lesser extent to understand visual and cognitive processes in scene perception, although after 25 years of relative inactivity, the study of gaze control in scenes has recently experienced a rebirth. Several advances in technology have sparked this renewed interest, including more accurate and robust stationary eyetrackers, new mobile eyetrackers that can be used in the natural environment, progress in computer graphics technology enabling presentation of full color scene images under precisely controlled conditions, and new computational methods for analyzing image properties (Figure 2).

Early studies of gaze control demonstrated that empty, uniform, and uninformative scene regions are often not fixated. Viewers instead concentrate their fixations, including the very first fixation in a scene, on interesting and informative regions (Box 1 and Figure 3). What constitutes an ‘interesting and informative’ scene region? Recent work on gaze control has focused on two potential answers: bottom-up stimulus-based information generated from the image, and top-down memory-based knowledge generated from internal visual and cognitive systems.
Figure 2. Participant on a dual-Purkinje-image eyetracker. The availability of eyetrackers with high spatial and temporal resolution, along with high quality visual displays, has greatly contributed to recent gains in our understanding of gaze control in scenes. (Photo courtesy of Dan Gajewski.)

Figure 3. Distribution of fixations over a scene. Representation of all fixations (indicated by red dots) produced by 20 participants viewing a scene in preparation for a later memory test. Note that the fixations are clustered on regions containing objects; relatively homogeneous regions of a scene receive few if any fixations.

Stimulus-based gaze control
Three general approaches have been adopted to investigate the image properties that influence where a viewer will fixate in a scene. First, scene patches centered at each fixation position are analyzed to determine whether they differ in some image property from unselected patches. Using this ‘scene statistics’ approach, investigators have found that high spatial frequency content and edge density are somewhat greater at fixation sites [14,15], and that local contrast (the standard deviation of intensity in a patch) is higher and two-point correlation (between the intensity of the fixated point and nearby points) is lower for fixated scene patches than for unfixated patches [16–18].
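As an illustration of the kind of computation the scene-statistics approach involves, the sketch below measures local contrast and a simple two-point intensity correlation in patches centred at a set of image locations, so that fixated patches can be compared with randomly chosen control patches. This is a minimal sketch rather than any published pipeline; the grayscale image array, the fixation and control coordinates, and the patch size are assumed inputs.

import numpy as np

def local_contrast(patch):
    # Standard deviation of intensity within the patch.
    return patch.std()

def two_point_correlation(patch, offset=2):
    # Correlation between each pixel and a neighbour `offset` pixels away.
    a = patch[:, :-offset].ravel()
    b = patch[:, offset:].ravel()
    return np.corrcoef(a, b)[0, 1]

def patch_stats(image, points, half=16):
    # `image` is a 2-D grayscale array; `points` is a list of (x, y)
    # coordinates; `half` is half the patch width in pixels.
    stats = []
    for x, y in points:
        patch = image[y - half:y + half, x - half:x + half].astype(float)
        if patch.shape == (2 * half, 2 * half):   # skip points too close to the border
            stats.append((local_contrast(patch), two_point_correlation(patch)))
    return np.array(stats)

# Hypothetical comparison of fixated and random control patches:
# fixated = patch_stats(image, fixation_points)
# control = patch_stats(image, control_points)
# print(fixated.mean(axis=0), control.mean(axis=0))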
Second, properties of early vision are instantiated in a computational model and used to predict fixation positions. One prominent model of this type generates visual saliency based on the known properties of primary visual cortex [19–22]. In this ‘saliency-map’ approach, the visual properties present in an image give rise to a representation (the saliency map) that explicitly marks regions that are different from their surround on one or more image dimensions, such as color, intensity, contrast, and edge orientation, over multiple spatial scales. The maps generated for each image dimension are then combined to create a single saliency map. The intuition behind this approach is that regions that are uniform along some image dimension are uninformative, whereas those that differ from neighboring regions across spatial scales are potentially informative. The salient points in the map serve as a prediction about the spatial distribution of gaze in a scene, and these points can be correlated with observed human fixations [23,24]. The saliency-map approach serves an important heuristic function in the study of gaze control because it provides an explicit model that generates precise quantitative predictions about fixation locations and their sequences. Important questions remaining to be answered within this approach include the following:

How many times is a saliency map computed for a given scene?
It may be that one saliency map is computed across the entire scene during the first fixation on that scene, or that a new saliency map is computed in each fixation. In the former approach, the initial map could be used to generate an ordered set of sites that are fixated in turn. This approach assumes that a single map is retained over multiple fixations, an assumption that is suspect given the evidence that metrically precise sensory information about a scene is not retained across saccades [25,26]. The alternative approach is to compute the saliency map anew following each successive fixation.
This approach does away with the need to retain the saliency map across fixations, but potentially increases computational load because a new saliency map must be generated every few hundred milliseconds. This approach also requires that ‘inhibition of return’ (IOR) [27] be retained across fixations and properly assigned to points within the regenerated saliency map to ensure that gaze does not oscillate between highly salient points. Given that IOR appears to be object-based as well as space-based in humans [28], this problem seems tractable.
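The role of IOR in such a scheme can be made concrete with a simple winner-take-all sketch that reads an ordered set of predicted fixation sites off a single saliency map, suppressing each selected region so that gaze does not immediately return to it. This is a generic illustration under assumed parameters (number of fixations, inhibition radius), not a specific published model.

import numpy as np

def predicted_scanpath(saliency, n_fixations=8, ior_radius=30):
    # Repeatedly select the most salient location, then apply inhibition of
    # return by zeroing a disc of `ior_radius` pixels around it.
    s = saliency.astype(float).copy()
    ys, xs = np.indices(s.shape)
    path = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        path.append((x, y))
        s[(ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2] = 0.0
    return path

Recomputing the map after every fixation would require this inhibition to be carried over and reassigned to the corresponding locations (or objects) in each new map.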
What image properties should be included in the saliency map?
One prominent model assumes that the saliency map can be derived from a weighted linear combination of spatial orientation, intensity, and color [19], but there is as yet no strong evidence that these specific features have a unique or even central role in determining fixation placement in scenes. Predictions of fixation positions based on these features correlate with observed human fixations, with the correlations decreasing as a visual pattern becomes more meaningful [24]. Evidence from the scene statistics method suggests that additional image properties might need to be implemented in saliency-map models to account for gaze control completely [14–18].
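To make the weighted-combination idea concrete, the sketch below builds a centre-surround map for each feature channel at a few spatial scales and then sums the channels with feature weights. It is only an illustration of the general scheme described above, not the model of [19]; the particular channels, scales, and weights are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, center_sigma, surround_sigma):
    # Difference-of-Gaussians: responds where a region differs from its surround.
    channel = np.asarray(channel, dtype=float)
    return np.abs(gaussian_filter(channel, center_sigma) -
                  gaussian_filter(channel, surround_sigma))

def saliency_map(channels, weights, scales=((1, 4), (2, 8), (4, 16))):
    # `channels` maps a feature name (e.g. 'intensity', 'color', 'orientation')
    # to a 2-D feature image; `weights` gives one weight per feature.
    saliency = np.zeros_like(next(iter(channels.values())), dtype=float)
    for name, channel in channels.items():
        feature_map = sum(center_surround(channel, c, s) for c, s in scales)
        feature_map /= feature_map.max() + 1e-9      # normalize each feature map
        saliency += weights[name] * feature_map      # weighted linear combination
    return saliency / (saliency.max() + 1e-9)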
How should stimulus-based and knowledge-based information be combined?
The fact that gaze control draws on stored knowledge implies that image properties about potential fixation targets must somehow be combined with top-down constraints. How is this accomplished? One approach is to construct the initial stimulus-based saliency map taking relevant knowledge (e.g. visual properties of a search target) into account from the outset [29]. Another approach is to compute a stimulus-based saliency map independently of other knowledge-based maps. For example, Oliva et al. [23] filtered an image-based saliency map using a separate knowledge-based map highlighting regions likely to contain a specific target. Other methods are certainly possible. Which approach best accounts for human gaze control, and which will best support artificial active foveated vision systems, is an important current topic of investigation.
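The two strategies can be sketched as follows, treating the stimulus-based map and a knowledge-based map (for example, a map of regions likely to contain the search target) as 2-D arrays of the same size. The pointwise product used for the filtering case is an illustrative assumption in the spirit of [23], not the exact operation used there.

import numpy as np

def combine_during_construction(feature_maps, target_weights):
    # Option 1: fold knowledge in from the outset, e.g. by up-weighting
    # feature channels that match known target properties [29].
    return sum(w * m for w, m in zip(target_weights, feature_maps))

def combine_by_filtering(saliency, knowledge_map):
    # Option 2: compute bottom-up saliency independently, then filter it
    # with a separately computed knowledge-based map of likely target regions.
    combined = saliency * knowledge_map
    return combined / (combined.max() + 1e-9)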
Where in the brain is the saliency map computed and represented?
A final issue concerns the neural implementation of the saliency map. Is there a single neural map, perhaps computed directly over image properties in V1 [30]? Or might there be multiple maps computed over multiple brain areas combining input from a variety of bottom-up and top-down sources, as has been suggested in the spatial attention literature [31]? This issue is likely to receive increased scrutiny in the coming years [30,32,33].

Correlation or causation?
A shortcoming of both the scene statistics and saliency map approaches to human gaze control is that they are correlational techniques, so they do not allow a causal link to be established between image properties and fixation site selection. A third method establishes causality by directly manipulating the information present in an image. For example, foveal and extra-foveal visual information have been manipulated independently using the ‘moving-window technique’ [34]. Results from these studies indicate that high spatial frequency information (edges) is preferentially used over low spatial frequency information to direct gaze to peripheral scene regions. More studies of this type will be needed to directly test hypotheses about the influence of scene properties on gaze control.
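The moving-window manipulation itself can be illustrated with a simple gaze-contingent rendering step: on each display frame the scene is left intact inside a window centred on the current gaze position and degraded outside it. The window radius and the use of blur as the degradation are assumptions for illustration; [34] describes the technique as actually implemented.

import numpy as np
from scipy.ndimage import gaussian_filter

def moving_window_frame(scene, gaze_xy, radius=80, blur_sigma=6):
    # `scene` is a 2-D grayscale image; `gaze_xy` is the current gaze position
    # in pixels. Outside a disc of `radius` pixels around gaze the image is
    # low-pass filtered; inside it is shown intact.
    gx, gy = gaze_xy
    ys, xs = np.indices(scene.shape)
    inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
    degraded = gaussian_filter(scene.astype(float), blur_sigma)
    return np.where(inside, scene.astype(float), degraded)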
Knowledge-driven gaze control
Human eye movement control is ‘smart’ in the sense that it draws not only on currently available visual input, but also on several cognitive systems, including short-term memory for previously attended information in the current scene, stored long-term visual, spatial and semantic information about other similar scenes, and the goals and plans of the viewer. In fact, fixation sites are less strongly tied to visual saliency when meaningful scenes are viewed during active tasks [23,35,36,37]. The modulation or replacement of visual saliency by knowledge-driven control can increase over time within a scene-viewing episode as more knowledge is acquired about the identities and meanings of previously fixated objects and their relationships to each other and to the scene [35]. But even the very first saccade in a scene can take the eyes in the likely direction of a search target, whether or not the target is present, presumably because the global scene gist and spatial layout acquired from the first fixation provide important information about where a particular object is likely to be found [23,35].

Episodic scene knowledge
Henderson and Ferreira [13] provided a typology of the knowledge available to the human gaze control system. This knowledge includes information about a specific scene that can be learned over the short term in the current perceptual encounter (short-term episodic scene knowledge) and over the longer term across multiple encounters (long-term episodic scene knowledge). An example of short-term episodic knowledge is the memory that the latest issue of Trends in Cognitive Sciences is on my computer table. Short-term knowledge supports a viewer's propensity to refixate areas of the current scene that are semantically interesting or informative [35,38,39,40], and ensures that objects are fixated when needed during motor interaction with the environment [36]. Long-term episodic knowledge involves information about a particular scene acquired and retained over time, such as knowing that my office clock resides on my filing cabinet. Recent
Figure I. Fixation landscape over a scene. Representation of the positions of all fixations by all viewers on a scene (a), and the same fixations weighted by fixation duration (b). These landscapes were created by placing a Gaussian with a diameter of 2 degrees of visual angle (equivalent to the fovea) centered at each fixation point, summing the Gaussians, and normalizing the height of the resulting sums [74]. For the duration-weighted landscape, the height of each Gaussian was proportional to the duration of that fixation in milliseconds. Comparison of the unweighted and duration-weighted landscapes illustrates that although fixations are distributed over a good deal of a scene, the majority of fixation time is concentrated on specific objects. The duration-weighted landscape can be interpreted as an ‘attentional landscape’ associated with scene interpretation. In this scene, fixation time is concentrated on the interesting object: the dog asleep on the couch.
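A minimal sketch of the landscape construction described in the caption, assuming fixation positions in pixels, durations in milliseconds, and a Gaussian width (sigma_px) derived from the stated 2-degree extent at the experiment's viewing distance:

import numpy as np

def fixation_landscape(shape, fixations, sigma_px, durations=None):
    # `fixations` is a list of (x, y) positions in pixels. If `durations` is
    # given, each Gaussian is weighted by that fixation's duration (ms),
    # yielding the duration-weighted 'attentional landscape'.
    ys, xs = np.indices(shape)
    weights = durations if durations is not None else np.ones(len(fixations))
    landscape = np.zeros(shape, dtype=float)
    for (x, y), w in zip(fixations, weights):
        landscape += w * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma_px ** 2))
    return landscape / (landscape.max() + 1e-9)   # normalize the summed heights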
References
1 Matin, E. (1974) Saccadic suppression: a review and an analysis. Psychol. Bull. 81, 899–917
2 Thiele, A. et al. (2002) Neural mechanisms of saccadic suppression. Science 295, 2460–2462
3 Volkmann, F.C. (1986) Human visual suppression. Vision Res. 26, 1401–1416
4 Land, M.F. (1999) Motion and vision: why animals move their eyes. J. Comp. Physiol. Ser. A 185, 341–352
5 Ballard, D.H. et al. (1997) Deictic codes for the embodiment of cognition. Behav. Brain Sci. 20, 723–767
6 Churchland, P.S. et al. (1994) A critique of pure vision. In Large Scale Neuronal Theories of the Brain (Koch, C. and Davis, S., eds), pp. 23–60, MIT Press
7 Luck, S.J. and Vecera, S.P. (2002) Attention. In Steven's Handbook of Experimental Psychology (3rd edn) (Pashler, H. and Yantis, S., eds), pp. 235–286
8 Findlay, J.M. (2004) Eye scanning and visual search. In The Interface of Language, Vision, and Action: Eye Movements and the Visual World (Henderson, J.M. and Ferreira, F., eds), Psychology Press
9 Liversedge, S.P. and Findlay, J.M. (2000) Saccadic eye movements and cognition. Trends Cogn. Sci. 4, 6–14
10 Rayner, K. (1998) Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 124, 372–422
11 Sereno, S. and Rayner, K. (2003) Measuring word recognition in reading: eye movements and event-related potentials. Trends Cogn. Sci. 7, 489–493
12 Tanenhaus, M.K. et al. (1995) Integration of visual and linguistic information in spoken language comprehension. Science 268, 1632–1634
13 Henderson, J.M. and Ferreira, F. (2004) Scene perception for psycholinguists. In The Interface of Language, Vision, and Action: Eye Movements and the Visual World (Henderson, J.M. and Ferreira, F., eds), Psychology Press
14 Mannan, S.K. et al. (1996) The relationship between the locations of spatial features and those of fixations made during visual examination of briefly presented images. Spat. Vis. 10, 165–188
15 Mannan, S.K. et al. (1997) Fixation patterns made during brief examination of two-dimensional images. Perception 26, 1059–1072
16 Krieger, G. et al. (2000) Object and scene analysis by saccadic eye-movements: an investigation with higher-order statistics. Spat. Vis. 13, 201–214
17 Parkhurst, D.J. and Niebur, E. (2003) Scene content selected by active vision. Spat. Vis. 16, 125–154
18 Reinagel, P. and Zador, A.M. (1999) Natural scene statistics at the centre of gaze. Network 10, 341–350
19 Itti, L. and Koch, C. (2000) A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Res. 40, 1489–1506
20 Itti, L. and Koch, C. (2001) Computational modeling of visual attention. Nat. Rev. Neurosci. 2, 194–203
21 Koch, C. and Ullman, S. (1985) Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227
22 Torralba, A. (2003) Modeling global scene factors in attention. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 20, 1407–1418
23 Oliva, A. et al. (2003) Top-down control of visual attention in object detection. In IEEE Proceedings of the International Conference on Image Processing (Vol. I), pp. 253–256, IEEE
24 Parkhurst, D. et al. (2002) Modeling the role of salience in the allocation of overt visual attention. Vision Res. 42, 107–123
25 Henderson, J.M. and Hollingworth, A. (2003) Global transsaccadic change blindness during scene perception. Psychol. Sci. 14, 493–497
26 Irwin, D.E. (1992) Visual memory within and across fixations. In Eye Movements and Visual Cognition: Scene Perception and Reading (Rayner, K., ed.), pp. 146–165, Springer-Verlag
27 Posner, M.I. et al. (1985) Inhibition of return: neural basis and function. Cogn. Neuropsychol. 2, 211–228
28 Leek, E.C. et al. (2003) Inhibition of return for objects and locations in static displays. Percept. Psychophys. 65, 388–395
29 Rao, R.P.N. et al. (2002) Eye movements in iconic visual search. Vision Res. 42, 1447–1463
30 Li, Z. (2002) A saliency map in primary visual cortex. Trends Cogn. Sci. 6, 9–16
31 Corbetta, M. and Shulman, G.L. (2002) Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3, 201–215
32 Findlay, J.M. and Walker, R. (1999) A model of saccade generation based on parallel processing and competitive inhibition. Behav. Brain Sci. 22, 661–721
33 Gottlieb, J.P. et al. (1998) The representation of salience in monkey parietal cortex. Nature 391, 481–484
34 van Diepen, P.M.J. et al. (1998) Functional division of the visual field: moving masks and moving windows. In Eye Guidance in Reading and Scene Perception (Underwood, G., ed.), Elsevier
35 Henderson, J.M. et al. (1999) Effects of semantic consistency on eye movements during scene viewing. J. Exp. Psychol. Hum. Percept. Perform. 25, 210–228
36 Land, M.F. and Hayhoe, M. (2001) In what ways do eye movements contribute to everyday activities? Vision Res. 41, 3559–3565
37 Turano, K.A. et al. (2003) Oculomotor strategies for the direction of gaze tested with a real-world activity. Vision Res. 43, 333–346
38 Buswell, G.T. (1935) How People Look at Pictures, University of Chicago Press
39 Loftus, G.R. and Mackworth, N.H. (1978) Cognitive determinants of fixation location during picture viewing. J. Exp. Psychol. Hum. Percept. Perform. 4, 565–572
40 Yarbus, A.L. (1967) Eye Movements and Vision, Plenum Press
41 Henderson, J.M. and Hollingworth, A. (2003) Eye movements, visual memory, and scene representation. In Perception of Faces, Objects, and Scenes: Analytic and Holistic Processes (Peterson, M. and Rhodes, G., eds), Oxford University Press
42 Hollingworth, A. and Henderson, J.M. (2002) Accurate visual memory for previously attended objects in natural scenes. J. Exp. Psychol. Hum. Percept. Perform. 28, 113–136
43 Henderson, J.M. and Ferreira, F., eds (2004) The Interface of Language, Vision, and Action: Eye Movements and the Visual World, Psychology Press
44 Biederman, I. et al. (1982) Scene perception: detecting and judging objects undergoing relational violations. Cognit. Psychol. 14, 143–177
45 Friedman, A. (1979) Framing pictures: the role of knowledge in automatized encoding and memory for gist. J. Exp. Psychol. Gen. 108, 316–355
46 Mandler, J.M. and Johnson, N.S. (1976) Some of the thousand words a picture is worth. J. Exp. Psychol. [Hum. Learn.] 2, 529–540
47 Schyns, P. and Oliva, A. (1994) From blobs to boundary edges: evidence for time- and spatial-scale-dependent scene recognition. Psychol. Sci. 5, 195–200
48 Land, M.F. and Lee, D.N. (1994) Where we look when we steer. Nature 369, 742–744
49 Potter, M.C. (1999) Understanding sentences and scenes: the role of conceptual short-term memory. In Fleeting Memories (Coltheart, V., ed.), pp. 13–46, MIT Press
50 Thorpe, S.J. et al. (1996) Speed of processing in the human visual system. Nature 381, 520–522
51 Li, F-F. et al. (2002) Natural scene categorization in the near absence of attention. Proc. Natl. Acad. Sci. U. S. A. 99, 9596–9601
52 Oliva, A. and Torralba, A. (2001) Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vis. 42, 145–175
53 Torralba, A. and Oliva, A. (2003) Statistics of natural image categories. Network 14, 391–412
54 Henderson, J.M. et al. (2003) Eye movements and picture processing during recognition. Percept. Psychophys. 65, 725–734
55 Henderson, J.M. and Hollingworth, A. (1999) The role of fixation position in detecting scene changes across saccades. Psychol. Sci. 10, 438–443
56 Hollingworth, A. et al. (2001) Change detection in the flicker paradigm: the role of fixation position within the scene. Mem. Cognit. 29, 296–304
57 Hollingworth, A. et al. (2001) To see and remember: visually specific information is retained in memory from previously attended objects in natural scenes. Psychon. Bull. Rev. 8, 761–768
58 Ballard, D.H. et al. (1995) Memory representations in natural tasks. J. Cogn. Neurosci. 7, 66–80
59 Nelson, W.W. and Loftus, G.R. (1980) The functional visual field during picture viewing. J. Exp. Psychol. [Hum. Learn.] 6, 391–399
60 Brooks, R. and Meltzoff, A.N. (2002) The importance of eyes: how infants interpret adult looking behavior. Dev. Psychol. 38, 958–966
61 Macrae, C.N. et al. (2002) Are you looking at me? Eye gaze and perception. Psychol. Sci. 13, 460–464
62 Noton, D. and Stark, L. (1971) Scan paths in eye movements during pattern perception. Science 171, 308–311
63 Noton, D. and Stark, L. (1971) Scan paths in saccadic eye movements while viewing and recognizing patterns. Vision Res. 11, 929–944
64 Groner, R. et al. (1984) Looking at faces: local and global aspects of scanpaths. In Theoretical and Applied Aspects of Eye Movement Research (Gale, A.G. and Johnson, F., eds), pp. 523–533, Elsevier
65 Mannan, S.K. et al. (1997) Fixation sequences made during visual examination of briefly presented 2D images. Spat. Vis. 11, 157–178
66 Reichle, E.D. et al. (1998) Toward a model of eye movement control in reading. Psychol. Rev. 105, 125–157
67 Henderson, J.M. and Hollingworth, A. (1998) Eye movements during scene viewing: an overview. In Eye Guidance in Reading and Scene Perception (Underwood, G., ed.), pp. 269–283, Elsevier
68 Loftus, G.R. (1985) Picture perception: effects of luminance on available information and information-extraction rate. J. Exp. Psychol. Gen. 114, 342–356
69 Loftus, G.R. et al. (1992) Effects of visual degradation on eye-fixation durations, perceptual processing, and long-term visual memory. In Eye Movements and Visual Cognition: Scene Perception and Reading (Rayner, K., ed.), pp. 203–226, Springer
70 van Diepen, P.M.J. (1995) Chronometry of foveal information extraction during scene perception. In Eye Movement Research: Mechanisms, Processes and Applications (Findlay, J.M. et al., eds), pp. 349–362, Elsevier
71 Henderson, J.M. and Hollingworth, A. (2003) Eye movements and visual memory: detecting changes to saccade targets in scenes. Percept. Psychophys. 65, 58–71
72 Hayhoe, M.M. et al. (1998) Task constraints in visual working memory. Vision Res. 38, 125–137
73 De Graef, P. et al. (1990) Perceptual effects of scene context on object identification. Psychol. Res. 52, 317–329
74 Pomplun, M. et al. (1996) Disambiguating complex visual information: towards communication of personal views of scenes. Perception 25, 931–948