Optical Versus Video See-Through Head-Mounted Displays
Jannick P. Rolland and Henry Fuchs
Department of Computer Science
University of North Carolina
Chapel Hill, NC 27599-3175

Abstract
We compare two technological approaches to augmented reality for 3-D medical
visualization: optical and video see-through devices. We provide a context to discuss
the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and
in Europe. We then discuss the issues for each approach, optical versus video, from
both a technology and human-factor point of view. Finally, we point to potentially
promising future developments of such devices including eye tracking and multifocus
planes capabilities, as well as hybrid optical/video technology.
1 Introduction
that system was that viewing the data in situ allows surgeons to make better surgical plans because they will be able to see the complex relationships between the bone and soft tissue more clearly. Holloway found that the largest registration error between real and virtual objects in optical see-through HMDs was caused by delays in presenting updated information associated with tracking. Extensive research in tracking has since been pursued at UNC-CH (Welch & Bishop, 1997).

One of the authors and colleagues are currently developing an augmented-reality tool for the visualization of human anatomical joints in motion (Wright et al., 1995; Kancherla et al., 1995; Rolland & Arthur, 1997; Parsons & Rolland, 1998; Baillot & Rolland, 1998; Baillot et al., 1999). An illustration of the tool using an optical see-through HMD for visualization of anatomy is shown in figure 3. In the first prototype, we have concentrated on the positioning of the leg around the knee joint. The joint is accurately tracked optically by using three infrared video cameras to locate active infrared markers placed around the joint. Figure 4 shows the results of the optical superimposition of the graphical knee joint on a leg model, seen through one of the lenses of our stereoscopic bench prototype display.

An optical see-through HMD coupled with optical tracking devices positioned along the knee joint of a model patient is used to visualize the 3-D computer-rendered anatomy directly superimposed on the real leg in motion. The user may further manipulate the joint and investigate the joint motions. From a technological aspect, the field of view (FOV) of the HMD should be sufficient to capture the knee-joint region, and the tracking devices and image-generation system must be fast enough to track typical knee-joint motions during ma-
290 PRESENCE: VOLUME 9, NUMBER 3
with demonstrated accuracy and precision of 0.01 mm (Peuchot, 1998). Peuchot chose the hybrid system over a video see-through approach because "it allows the operator to work in his real environment with a perception space that is real." Peuchot judged this point to be critical in a medical application like surgery.

3 A Comparison of Optical and Video See-Through Technology

As suggested by the applications described above, the main goal of augmented-reality systems is to merge virtual objects into the view of the real scene so that the user's visual system suspends disbelief into perceiving the virtual objects as part of the real environment. Current systems are far from perfect, and system designers typically end up making a number of application-dependent tradeoffs. We shall list and discuss these tradeoffs in order to guide the choice of technology depending upon the type of application considered.

Both systems, optical and video, have two image sources: the real world and the computer-generated world. These image sources are to be merged. Optical see-through HMDs take what might be called a "minimally obtrusive" approach; that is, they leave the view of the real world nearly intact and attempt to augment it by merging a reflected image of the computer-generated scene into the view of the real world. Video see-through HMDs are typically more obtrusive in the sense that they block out the real-world view in exchange for the ability to merge the two views more convincingly. In recent developments, narrow fields of view in video see-through HMDs have replaced large field-of-view HMDs, thus reducing the area where the real world (captured through video) and the computer-generated images are merged to a small part of the visual scene. In any case, a fundamental consideration is whether the additional features afforded by video see-through HMDs justify the loss of the unobstructed real-world view.

Our experience indicates that there are many tradeoffs between optical and video see-through HMDs with respect to technological and human-factors issues that affect designing, building, and assessing these HMDs. The specific issues are laid out in figure 11. While most of these issues could be discussed from both a technological and a human-factors standpoint (because the two are closely interrelated in HMD systems), we have chosen to classify each issue where it is most adequately addressed at this time, given the present state of the technology. For example, delays in HMD systems are addressed under technology because technological improvements are actively being pursued to minimize delays. Delays also
certainly have impact on various human-factor issues (such as the perceived location of objects in depth and user acceptance). Therefore, the multiple arrows shown in figure 11 indicate that the technological and human-factor categories are highly interrelated.

3.1 Technological Issues

The technological issues for HMDs include latency of the system, resolution and distortion of the real scene, field of view (FOV), eyepoint matching of the see-through device, and engineering and cost factors. While we shall discuss properties of both optical and video see-through HMDs, it must be noted that, contrary to optical see-through HMDs, there are no commercially available products for video see-through HMDs. Therefore, discussions of such systems should be considered carefully, as findings may be particular to only a few current systems. Nevertheless, we shall provide as much insight as possible into what we have learned with such systems as well.

3.1.1 System Latency. An essential component of see-through HMDs is the capacity to properly register a user's surroundings and the synthetic space. A geometric calibration between the tracking devices and the HMD optics must be performed. The major impediment to achieving registration is the gap in time, referred to as lag, between the moment when the HMD position is measured and the moment when the synthetic image for that position is fully rendered and presented to the user.

Lag is the largest source of registration error in most current HMD systems (Holloway, 1995). The lag of typical systems is between 60 ms and 180 ms. The head of a user can move during such a period of time, and the discrepancy between the perceived scene and the superimposed scene can destroy the illusion that the synthetic objects are fixed in the environment. The synthetic objects can "swim" around significantly, in such a way that they may not even seem to be part of the real object to which they belong. For example, in the case of ultrasound-guided biopsy, the computer-generated tumor may appear to be located outside the breast while tracking the head of the user. This swimming effect has been demonstrated and minimized by predicting HMD position instead of simply measuring positions (Azuma & Bishop, 1994).

Current HMD systems are lag limited as a consequence of tracker lag, the complexity of rendering, and the time needed to display the images. Tracker lag is often not the limiting factor in performance. If displaying the image is the limiting factor, novel display architectures supporting frameless rendering can help solve the problem (Bishop et al., 1994). Frameless rendering is a procedure for continuously updating a displayed image as information becomes available, instead of updating entire frames at a time. The tradeoffs between lag and image quality are currently being investigated (Scher-Zagier, 1997). If we assume that we are limited by the speed of rendering an image, eye-tracking capability may be useful to quickly update information only around the gaze point of the user (Thomas et al., 1989; Rolland, Yoshida, et al., 1998; Vaissie & Rolland, 1999).

One of the major advantages of video see-through HMDs is the potential capability of reducing the relative latencies between the 2-D real and synthetic images as a consequence of both types of images being digital (Jacobs et al., 1997). Manipulation of the images in space and in time is applied to register them. Three-dimensional registration is computationally intensive, if at all robust, and challenging at interactive speed. The spatial approach to forcing registration in video see-through systems is to correct registration errors by imaging landmark points in the real world and registering virtual objects with respect to them (State et al., 1996). One approach to eliminating temporal delays between the real and computer-generated images in such a case is to capture a video image and draw the graphics on top of the video image. Then the buffer is swapped, and the combined image is presented to the HMD user. In such a configuration, no delay apparently exists between the real and computer-generated images. If the actual latency of the computer-generated image is large with respect to the video image, however, it may cause sensory conflicts between vision and proprioception, because the video images no longer correspond to the real-world scene. Any manual interactions with real objects could suffer as a result.
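The predictive-tracking idea mentioned above (Azuma & Bishop, 1994) can be illustrated with a minimal sketch. The constant-velocity extrapolation below is our own simplification, not the authors' actual predictor (which combined Kalman filtering with inertial sensing); the function and the numbers in the example are illustrative assumptions only.

```python
# Minimal sketch of head-pose prediction to compensate for system lag:
# extrapolate the most recent tracker samples forward by the known
# end-to-end latency, and render for the predicted pose. Real systems
# use Kalman filters and inertial sensors rather than this two-sample
# finite difference.

def predict_position(p_prev, p_curr, dt, latency):
    """Linearly extrapolate each coordinate `latency` seconds ahead,
    given two tracker samples taken `dt` seconds apart."""
    return tuple(c + (c - p) / dt * latency
                 for p, c in zip(p_prev, p_curr))

# Example: samples 10 ms apart, 60 ms total system lag (hypothetical).
p0 = (0.000, 1.60, 0.0)   # head position at t - dt, in meters
p1 = (0.002, 1.60, 0.0)   # head position at t: moving +x at 0.2 m/s
predicted = predict_position(p0, p1, dt=0.010, latency=0.060)
print(predicted)          # x is extrapolated ~12 mm ahead of p1
```

With a 60 ms lag and a head moving at 0.2 m/s, rendering for the measured rather than the predicted position would misplace virtual objects by roughly 12 mm, which is exactly the kind of registration error that makes synthetic objects appear to swim.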
Rolland and Fuchs 295
Another approach to minimizing delays in video see-through HMDs is to delay the video image until the computer-generated image is rendered. Bajura and Neumann (1995) applied chroma keying, for example, to dynamically image a pair of red LEDs placed on two real objects (one stream) and then registered two virtual objects with respect to them (second stream). By tracking more landmarks, better registration of real and virtual objects may be achieved (Tomasi & Kanade, 1991). The limitation of this approach is the attempt to register 3-D scenes using 2-D constraints. If the user rotates his head rapidly, or if a real-world object moves, there may be no "correct" transformation for the virtual scene image. To align all the landmarks, one must either allow errors in registration of some of the landmarks or perform a nonlinear warping of the virtual scene that may create undesirable distortions of the virtual objects. The nontrivial solution to this problem is to increase the speed of the system until scene changes between frames are small and can be approximated with simple 2-D transformations.

In a similar vein, it is also important to note that the video view of the real scene will normally have some lag due to the time it takes to acquire and display the video images. Thus, the image in a video see-through HMD will normally be slightly delayed with respect to the real world, even without adding delay to match the synthetic images. This delay may increase if an image-processing step is applied to either enforce registration or perform occlusion. The key issue is whether the delay in the system is too great for the user to adapt to it (Held & Durlach, 1987).

Systems using optical see-through HMDs have no means of introducing artificial delays into the real scene. Therefore, the system may need to be optimized for low latency, perhaps less than 60 ms, where predictive tracking can be effective (Azuma & Bishop, 1994). For any remaining lag, the user may have to limit his actions to slow head motions. Applications in which speed of movement can be readily controlled, such as in the VRDA tool described earlier, can benefit from optical see-through technology (Rolland & Arthur, 1997). The advantage of having no artificial delays is that real objects will always be where they are perceived to be, and this may be crucial for a broad range of applications.

3.1.2 Real-Scene Resolution and Distortion. If real-scene resolution refers to the resolution of the real-scene object, the best real-scene resolution that a see-through device can provide is that perceived with the naked eye under unit magnification of the real scene. (Certainly, under microscopic observation as described by Hill (Edwards et al., 1995), the best scene resolution goes beyond that obtained with a naked eye.) It is also assumed that the see-through device has no image-processing capability.

A resolution extremely close to that obtained with the naked eye is easily achieved with a nonmicroscopic optical see-through HMD, because the optical interface to the real world is simply a thin parallel plate (such as a glass plate) positioned between the eyes and the real scene. Such an interface typically introduces only very small amounts of optical aberrations to the real scene: for example, for a real-point object seen through a 2 mm planar parallel plate placed in front of a 4 mm dia. eye pupil, the diffusion spot due to spherical aberration would subtend a 2 × 10⁻⁷ arc-minute visual angle for a point object located 500 mm away. Spherical aberration is one of the most common and simple aberrations in optical systems that lead to blurring of the images. Such a degradation of image quality is negligible compared to the ability of the human eye to resolve a visual angle of 1 minute of arc. Similarly, planar plates introduce low distortion of the real scene, typically below 1%. Distortion vanishes only for the chief rays that pass the plate parallel to its normal.¹

¹ A chief ray is defined as a ray that emanates from a point in the FOV and passes through the center of the pupils of the system. The exit pupil in an HMD is the entrance pupil of the human eye.

In the case of a video see-through HMD, real-scene images are digitized by miniature cameras (Edwards et al., 1993) and converted into an analog signal that is fed to the HMD. The images are then viewed through the HMD viewing optics, which typically use an eyepiece design. The perceived resolution of the real scene can thus be limited by the resolution of the video cameras or the HMD viewing optics. Currently available miniature
video cameras typically have a resolution of 640 × 480, which is also near the resolution limit of the miniature displays currently used in HMDs.² Depending upon the magnification and the field of view of the viewing optics, various effective visual resolutions may be reached. While the miniature displays and the video cameras seem to currently limit the resolution of most systems, such performance may improve with higher-resolution detectors and displays.

In assessing video see-through systems, one must distinguish between narrow- and wide-FOV devices. Large-FOV (≥50 deg.) eyepiece designs are known to be extremely limited in optical quality as a consequence of factors such as the optical aberrations that accompany large FOVs, pixelization that may become more apparent under large magnification, and the exit pupil size that must accommodate the size of the pupils of a person's eyes. Thus, even with higher-resolution cameras and displays, video see-through HMDs may remain limited in their ability to provide a real-scene view of high resolution if conventional eyepiece designs continue to be used. In the case of small- to moderate-FOV (10 deg. to 20 deg.) video see-through HMDs, the resolution is still typically much less than the resolving power of the human eye.

A new technology, referred to as tiling, may overcome some of the current limitations of conventional eyepiece design for large FOVs (Kaiser, 1994). The idea is to use multiple narrow-FOV eyepieces coupled with miniature displays to completely cover (or tile) the user's FOV. Because the individual eyepieces have a fairly narrow FOV, higher resolution (nevertheless currently less than that of the human visual system) can be achieved. One of the few demonstrations of high-resolution, large-FOV displays is the tiled display. Challenges include the minimization of seams in assembling the tiles and the rendering of multiple images at interactive speed. Tiled displays certainly bring new practical and computational challenges that need to be confronted. If a see-through capability is desired (for example, to display virtual furniture in an empty room), it is currently unclear whether the technical problems associated with providing overlay can be solved.

Theoretically, distortion is not a problem in video see-through systems, because the cameras can be designed to compensate for the distortion of the optical viewer, as demonstrated by Edwards et al. (1993). However, if the goal is to merge real and virtual information, as in ultrasound echography, having a warped real scene significantly increases the complexity of the synthetic-image generation (State et al., 1994). Real-time video correction can be used at the expense of an additional delay in the image-generation sequence. An alternative is to use low-distortion video cameras at the expense of a narrower FOV, merge unprocessed real scenes with virtual scenes, and warp the merged images. Warping can be done using (for example) real-time texture mapping to compensate for the distortion of the HMD viewing optics as a last step (Rolland & Hopkins, 1993; Watson & Hodges, 1995).

The need for high real-scene resolution is highly task dependent. Demanding tasks such as surgery or engineering training, for example, may not be able to tolerate much loss in real-scene resolution. Because the large-FOV video see-through systems that we have experienced are seriously limited in terms of resolution, narrow-FOV video see-through HMDs are currently preferred. Independently of resolution, an additional critical issue in aiming towards narrow-FOV video see-through HMDs is the need to match the viewpoint of the video cameras with the viewpoint of the user. Matching is challenging with large-FOV systems. Also, methods for matching video and real scenes for large-FOV tiled displays must be developed. At this time, considering the growing availability of high-resolution flat-panel displays, we foresee that the resolution of see-through HMDs could gradually increase for both small- and large-FOV systems. The development and marketing of miniature high-resolution technology must be undertaken to achieve resolutions that match that of human vision.
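The tradeoff between FOV and effective visual resolution described above can be made concrete with a back-of-the-envelope calculation (our own illustration, with assumed numbers): dividing the horizontal FOV by the 640 horizontal pixels of a typical camera or display gives the visual angle subtended per pixel, to be compared against the roughly 1 arc-minute resolving power of the naked eye.

```python
# Angular resolution of a 640-pixel-wide video see-through path as a
# function of horizontal FOV, compared with the ~1 arc-minute acuity
# of the naked eye. Illustrative numbers only.

def arcmin_per_pixel(fov_deg, pixels):
    """Visual angle subtended by one pixel, in arc-minutes."""
    return fov_deg * 60.0 / pixels

for fov in (10, 20, 50):   # horizontal FOV in degrees
    print(f"{fov:2d} deg FOV: {arcmin_per_pixel(fov, 640):.2f} arcmin/pixel")
```

A 10 deg. FOV spread over 640 pixels is close to the eye's 1 arc-minute limit, whereas a 50 deg. FOV is nearly five times coarser, which is consistent with the preference stated above for narrow-FOV video see-through systems.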
the binocular FOV means that fewer head movements are required to perceive an equivalently large scene. We believe that a large FOV is especially important for tasks that require grabbing and moving objects, and that it provides increased situation awareness when compared to narrow-FOV devices (Slater & Wilbur, 1997). The situation with see-through devices is somewhat different from that of fully opaque HMDs, in that the aim of using the technology is different from that of immersing the user in a virtual environment.

3.1.3.1 Overlay and Peripheral FOV. The term overlay FOV is defined as the region of the FOV where graphical information and real information are superimposed. The peripheral FOV is the real-world FOV beyond the overlay FOV. For immersive opaque HMDs, no such distinction is made; one refers simply to the FOV. It is important to note that the overlay FOV may need to be narrow only for certain augmented-reality applications. For example, in a visualization tool such as the VRDA tool, only the knee-joint region is needed in the overlay FOV. In the case of video HMD-guided breast biopsy, the overlay FOV could be as narrow as the synthesized tumor; the rest of the real scene need not necessarily be synthesized. The available peripheral FOV, however, is critical for situation awareness and is most often required for various applications, whether it is provided as part of the overlay or around the overlay. If provided around the overlay, the transition from real to virtual imagery must be made as seamless as possible. This is an investigation that has not yet been addressed in video see-through HMDs.

Optical see-through HMDs typically provide from 20 deg. to 60 deg. of overlay FOV via the half-transparent mirrors placed in front of the eyes, a characteristic that may seem somewhat limited but is promising for a variety of medical applications whose working visualization distance is within arm's reach. Larger FOVs have been obtained, up to 82.5 × 67 deg., at the expense of reduced brightness, increased complexity, and massive, expensive technology (Welch & Shenker, 1984). Such FOVs may have been required for performing navigation tasks in real and virtual environments, but they are likely not required in most augmented-reality applications. Optical see-through HMDs, however, whether or not they have a large overlay FOV, have typically been designed open enough that users can use their peripheral vision around the device, thus increasing the total real-world FOV to closely match one's natural FOV. An annulus of obstruction usually results from the mounts of the thin see-through mirror, similar to the way that our vision may be partially occluded by a frame when wearing eyeglasses.

In the design of video see-through HMDs, a difficult engineering task is matching the frustum of the eye with that of the camera (as we shall discuss in section 3.1.4). While such matching is not so critical for far-field viewing, it is important for near-field visualization, as in various medical visualizations. This difficult matching problem has led to the consideration of narrower-FOV systems. A compact, 40 × 30 deg. FOV design, designed for an optical see-through HMD but adaptable to video see-through, was proposed by Manhart, Malcolm, & Frazee (1993). Video see-through HMDs, on the other hand, can provide (in terms of a see-through FOV) the FOV displayed with the opaque-type viewing optics, which typically ranges from 20 deg. to 90 deg. In such systems, where the peripheral FOV of the user is occluded, the effective real-world FOV is often smaller than in optical see-through systems. When using a video see-through HMD in a hand-eye coordination task, we found in a recent human-factor study that users needed to perform larger head movements to scan an active field of vision than when performing the task with the unaided eye (Biocca & Rolland, 1998). We predict that the need to make larger head movements would not arise as much with see-through HMDs with equivalent overlay FOVs but larger peripheral FOVs, because users are provided with increased peripheral vision, and thus additional information, to more naturally perform the task.

3.1.3.2 Increasing Peripheral FOV in Video See-Through HMDs. An increase in peripheral FOV in video see-through systems can be accomplished in two ways: in a folded optical design, as used for optical see-through HMDs but with an opaque mirror instead of a half-transparent mirror, or in a nonfolded design with nonenclosed mounts. The latter calls for innovative optomechanical design because heavier optics
(including cost) have also hindered the construction of a first prototype as well. New generations of computer-controlled fabrication and testing are expected to change this trend.

Since their beginning, high-resolution HMDs have been CRT based. Early systems were even monochrome, but color CRTs using color wheels or frame-sequential color have been fabricated and incorporated into HMDs (Allen, 1993). Five years ago, we may have thought that, today, high-resolution, color, flat-panel displays would be the first choice for HMDs. While this is slowly happening, miniature CRTs are not yet fully obsolete. The current optimism, however, is prompted by new technologies such as reflective LCDs, microelectromechanical systems (MEMS)-based displays, laser-based displays, and nanotechnology-based displays.

3.2 Human-Factor and Perceptual Issues

Assuming that many of the technological challenges described have been addressed and high-performance HMDs can be built, a key human-factor issue for see-through HMDs is that of user acceptance and safety, which will be discussed first. We shall then discuss the technicalities of perception in such displays. The ultimate see-through display is one that provides quantitative and qualitative visual representations of scenes that conform to a predictive model (for example, conform to that given by the real world, if that is the intention). Issues include the accuracy and precision of the rendered and perceived locations of objects in depth, the accuracy and precision of the rendered and perceived sizes of real and virtual objects in a scene, and the need for an unobstructed peripheral FOV (which is important for many tasks that require situation awareness and the simple manipulation of objects and accessories).

3.2.1 User Acceptance and Safety. A fair question for either type of technology is "will anyone actually wear one of these devices for extended periods?" The answer will doubtless be specific to the application and the technology included, but it will probably center upon whether the advanced capabilities afforded by the technology offset the problems induced by the encumbrance and sensory conflicts that are associated with it. In particular, one of us thinks that video see-through HMDs may be met with resistance in the workplace because they remove the direct, real-world view in order to augment it. This issue of trust may be difficult to overcome for some users. If wide-angle-FOV video see-through HMDs are used, this problem is exacerbated in safety-critical applications. A key difference in such applications may turn out to be the failure mode of each technology. A technology failure in the case of optical see-through HMDs may leave the subject without any computer-generated images but still with the real-world view. In the case of video see-through, it may leave the user with the complete suppression of the real-world view, as well as of the computer-generated view. However, it may be that this issue has been greatly lessened because the video view occupies such a small fraction (approximately 10 deg. of visual angle) of the scene in recent developments of the technology. This is especially true of flip-up and flip-down devices such as that developed at UNC-CH and shown in figure 13.

Image quality and its tradeoffs are definitely critical issues related to user acceptance for all types of technology. In a personal communication, Martin Shenker, a senior optical engineer with more than twenty years of experience designing HMDs, pointed out that there are currently no standards of image quality and technology specifications for the design, calibration, and maintenance of HMDs. This is a current concern at a time when the technology may be adopted in various medical visualizations.

3.2.2 Perceived Depth. 3.2.2.1 Occlusion. The ability to perform occlusion in see-through HMDs is an important issue of comparison between optical and video see-through HMDs. One of the most important differences between these two technologies is how they handle the depth cue known as occlusion (or interposition). In real life, an opaque object can block the view of another object so that part or all of it is not visible. While there is no problem in making computer-generated objects occlude each other in either system, it is considerably more difficult to make real objects occlude
virtual objects (and vice versa) unless the real world for an application is predefined and has been modeled in the computer. Even then, one would need to know the exact location of a user with respect to that real environment. This is not the case in most augmented-reality applications, in which the real world is constantly changing and on-the-fly acquisition is all the information one will ever have of the real world. Occlusion is a strong monocular cue to depth perception and may be required in certain applications (Cutting & Vishton, 1995).

In both systems, computing occlusion between the real and virtual scenes requires a depth map of both scenes. A depth map of the virtual scene is usually available (for z-buffered image generators), but a depth map of the real scene is a much more difficult problem. While one could create a depth map in advance from a static real environment, many applications require on-the-fly image acquisition of the real scene. Assuming the system has a depth map of the real environment, video see-through HMDs are perfectly positioned to take advantage of this information. They can, on a pixel-by-pixel basis, selectively block the view of either scene, or even blend them to minimize edge artifacts. One of the chief advantages of video see-through HMDs is that they handle this problem so well.

The situation for optical see-through HMDs can be more complex. Existing optical see-through HMDs blend the two images with beam splitters, which blend the real and virtual images uniformly throughout the FOV. Normally, the only control the designer has is the amount of reflectance versus transmittance of the beam splitter, which can be chosen to match the brightness of the displays with the expected light levels in the real-world environment. If the system has a model of the real environment, it is possible to have real objects occlude virtual ones by simply not drawing the occluded parts of the virtual objects. The only light will then be from the real objects, giving the illusion that they are occluding the virtual ones. Such an effect requires a darkened room with light directed where it is needed. This technique has been used by CAE Electronics in their flight simulator. When the pilots look out the window, they see computer-generated objects. If they look inside the cockpit, however, the appropriate pixels of the computer-generated image are masked so that they can see the real instruments. The room is kept fairly dark so that this technique will work (Barrette, 1992). David Mizell (from Boeing Seattle) and Tom Caudell (University of New Mexico) are also using this technique; they refer to it as "fused reality" (Mizell, 1998).

While optical see-through HMDs can allow real objects to occlude virtual objects, the reverse is even more challenging, because normal beam splitters have no way of selectively blocking out the real environment. This problem has at least two possible partial solutions. The first solution is to spatially control the light levels in the real environment and to use displays that are bright enough so that the virtual objects mask the real ones by reason of contrast. (This approach is used in the flight simulator just mentioned for creating the virtual instruments.) This may be a solution for a few applications. A possible second solution would be to locally attenuate the real-world view by using an addressable filter device placed on the see-through mirror. It is possible to generate partial occlusion in this manner because the effective beam of light entering the eye from some point in the scene covers only a small area of the beam splitter, the eye pupil being typically 2 mm to 4 mm in photopic vision. A problem with this approach is that the user does not focus on the beam splitter, but rather somewhere in the scene. A point in the scene maps to a disk on the beam splitter, and various points in the scene map to overlapping disks on the beam splitter. Thus, any blocking done at the beam splitter may occlude more of the scene than expected, which might lead to odd visual effects. A final possibility is that some applications may work acceptably without properly rendered occlusion cues. That is, in some cases, the user may be able to use other depth cues, such as head-motion parallax, to resolve the ambiguity caused by the lack of occlusion cues.

3.2.2.2 Rendered Locations of Objects in Depth. We shall distinguish between errors in the rendered and perceived locations of objects in depth. The former yields the latter. One can conceive, however, that errors in the perceived location of objects in depth can also occur
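The pixel-by-pixel selection described above can be sketched in a few lines. This is a minimal illustration, not code from any cited system; it assumes the HMD software already has registered color images and depth maps for both scenes, and all array names are ours:

```python
import numpy as np

def composite_frame(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion for one video see-through frame.

    real_rgb, virtual_rgb: (H, W, 3) color images.
    real_depth, virtual_depth: (H, W) depth maps in meters;
    use np.inf in virtual_depth where no virtual content exists.
    """
    # Show a virtual pixel only where it lies nearer than the real scene;
    # everywhere else, the real camera image wins.
    virtual_in_front = virtual_depth < real_depth
    return np.where(virtual_in_front[..., None], virtual_rgb, real_rgb)
```

Feathering the boundary of `virtual_in_front` (a soft alpha mask rather than this hard switch) is one way to realize the blending that minimizes edge artifacts.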
3.2.2.2 Rendered Locations of Objects in Depth. We shall distinguish between errors in the rendered and the perceived locations of objects in depth; the former yields the latter. One can conceive, however, that errors in the perceived location of objects in depth can also occur even in the absence of errors in rendered depth, as a result of an incorrect computational model for stereo pair generation or a suboptimal presentation of the stereo images. This is true for both optical and video see-through HMDs. Indeed, if the technology is adequate to support a computational model, and the model accounts for the required technology and corresponding parameters, the rendered locations of objects in depth—as well as the resulting perceived locations—will follow expectations. Vaissie recently showed some limitations of the choice of a static eyepoint in computational models for stereo pair generation in virtual environments, which yield errors in the rendered, and thus perceived, location of objects in depth (Vaissie & Rolland, 2000). The ultimate goal is to derive a computational model and develop the required technology that together yield the desired perceived location of objects in depth. Errors in rendered depth typically result from inaccurate display calibration and parameter determination: the FOV, the frame-buffer overscan, the eyepoints' locations, conflicting or noncompatible cues to depth, and remaining optical aberrations, including residual optical distortion.

3.2.2.3 FOV and Frame-Buffer Overscan. Inaccuracies of a few degrees in FOV are easily made if no calibration is conducted, and they can lead to significant errors in rendered depth, depending on the imaging geometry. For some medical and computer-guided surgery applications, for example, errors of several millimeters are likely to be unacceptable. The FOV and the overscan of the frame buffer are critical parameters for stereo pair generation in HMDs and must be measured and accounted for to yield accurate rendered depths (Rolland et al., 1995). These parameters must be set correctly regardless of whether the technology is optical or video see-through.

3.2.2.4 Specification of Eyepoint Location. The locations of the user's eyepoints, which are used to render the stereo images from the correct viewpoints, must be specified for accurate rendered depth. This applies to both optical and video see-through HMDs. In addition, for video see-through HMDs, the real-scene video images must be acquired from the correct viewpoint (Biocca & Rolland, 1998).

For the computer-graphics-generation component, three choices of eyepoint location within the human eye have been proposed: the nodal point of the eye³ (Robinett & Rolland, 1992; Deering, 1992), the entrance pupil of the eye (Rolland, 1994; Rolland et al., 1995), and the center of rotation of the eye (Holloway, 1995). Rolland (1995) argues that choosing the nodal point would in fact yield errors in rendered depth in all cases, whether the eyes are tracked or not. For a device with eye-tracking capability, the entrance pupil of the eye should be taken as the eyepoint. If eye movements are ignored, meaning that the computer-graphics eyepoints are fixed, it has been proposed that it is best to select the center of rotation of the eye as the eyepoint (Fry, 1969; Holloway, 1995). An in-depth analysis of this issue reveals that, while the center of rotation yields higher positional accuracy, the center of the entrance pupil yields higher angular accuracy (Vaissie & Rolland, 2000). Therefore, depending on the task involved, and on whether angular or positional accuracy is most important, the center of rotation or the center of the entrance pupil may be selected as the best eyepoint location in HMDs.

³ Nodal points are conjugate points in an optical system that have an angular magnification of 1. Two points are conjugates of each other if they are images of each other.

3.2.2.5 Residual Optical Distortions. Optical distortion is one of the few optical aberrations that do not affect image sharpness; rather, it introduces warping of the image. It occurs only for optics that include lenses; if the optics include only plane mirrors, there is no distortion (Peuchot, 1994). Warping of the images leads to errors in rendered depths. Distortion results from the locations of the user's pupils away from the nodal points of the optics, and it varies as a function of where the user looks through the optics. However, if the optics are well calibrated to account for the user's IPD, distortion will be fairly constant for typical eye movements behind the optics. Prewarping of the computer-generated image can thus be conducted to compensate for the optical residual distortions (Robinett & Rolland, 1992; Rolland & Hopkins, 1993; Watson & Hodges, 1995).
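As a sketch of such prewarping, assume the optics are described by a single cubic radial-distortion term with coefficient k — a simplification of the calibrations in the work cited above; the function name and parameters are ours. A point meant to appear at radius r_ideal is drawn at the radius r that the optics will distort back onto r_ideal:

```python
import numpy as np

def predistort(points, k, iters=8):
    """Predistort ideal normalized image points (N, 2) so that optics
    with radial distortion r -> r * (1 + k * r**2) display them at
    their ideal locations. Solves r * (1 + k * r**2) = r_ideal per
    point with a few Newton iterations (inverse distortion)."""
    r_ideal = np.linalg.norm(points, axis=1)
    r = r_ideal.copy()
    for _ in range(iters):
        f = r * (1.0 + k * r**2) - r_ideal
        r -= f / (1.0 + 3.0 * k * r**2)
    # Rescale each point radially; points at the origin are untouched.
    scale = np.where(r_ideal > 0.0, r / np.maximum(r_ideal, 1e-12), 1.0)
    return points * scale[:, None]
```

A full system would apply this mapping (or a polynomial fit of it) when resampling the rendered frame, per eye and per color channel if chromatic effects matter.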
3.2.2.6 Perceived Location of Objects in Depth. Once depths are accurately rendered according to a given computational model and the stereo images are presented accordingly, the perceived location and size of objects in depth become an important issue in the assessment of the technology and the model. Accuracy and precision can be defined only statistically: given an ensemble of measured perceived locations of objects in depth, the depth percept is accurate if objects appear, on average, at the location predicted by the computational model, and it is precise if objects appear within a small spatial zone around that average location. We shall distinguish between overlapping and nonoverlapping objects.

In the case of nonoverlapping objects, one may resort to depth cues other than occlusion, including familiar size, stereopsis, perspective, texture, and motion parallax. A psychophysical investigation of the perceived location of objects in depth in an optical see-through HMD, using stereopsis and perspective as the visual cues to depth, is given in Rolland et al. (1995) and Rolland et al. (1997). The HMD shown in figure 14 is mounted on a bench for calibration purposes and for flexibility in various parameter settings.

In a first investigation, a systematic shift of 50 mm between the perceived and the predicted locations of objects in depth was found (Rolland et al., 1995). Moreover, the precision of the measures varied significantly across subjects.
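The accuracy/precision distinction above amounts to simple statistics over repeated depth judgments. The numbers below are invented for illustration (they merely echo the order of magnitude of the 50 mm systematic shift reported above) and are not data from the cited studies:

```python
import numpy as np

def depth_error_stats(perceived_mm, predicted_mm):
    """Accuracy is the mean signed error of perceived depth against the
    model's prediction; precision is the spread of the measurements."""
    errors = np.asarray(perceived_mm, dtype=float) - predicted_mm
    return errors.mean(), errors.std(ddof=1)

# Four hypothetical trials of a subject judging an object rendered at 800 mm:
bias, spread = depth_error_stats([848.0, 853.0, 851.0, 849.0], 800.0)
# A large bias with a small spread means the percept is precise but inaccurate.
```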
As we learn more about the interface optics and the computational model used in generating the stereo image pairs, and improve on the technology, we have demonstrated errors on the order of 2 mm. The technology is now ready to deploy for extensive testing in specific applications, and the VRDA tool is one of the applications we are currently pursuing.

Studies of the perceived location of objects in depth for overlapping objects in an optical see-through HMD have been conducted by Ellis and Bucher (1994). They showed that the perceived location of virtual objects can be affected by the presence of a nearby opaque physical object. When a physical object was positioned in front of (or at) the initial perceived location of a 3-D virtual object, the virtual object appeared to move closer to the observer. When the opaque physical object was positioned substantially in front of the virtual object, human subjects often perceived the opaque object to be transparent. In current investigations with the VRDA tool, the opaque leg model appears transparent when a virtual knee model is projected on the leg, as seen in figure 4. The virtual anatomy subjectively appears to be inside the leg model (Baillot, 1999; Outters et al., 1999; Baillot et al., 2000).

3.2.3 Adaptation. When a system does not offer what the user ultimately wants, two paths may be taken: improving the current technology, or first studying the ability of the human system to adapt to an imperfect technological unit and then developing adaptation training when appropriate. The latter is possible because of the astonishing ability of the human visual and proprioceptive systems to adapt to new environments, as has been shown in studies on adaptation (for example, Rock, 1966).

Biocca and Rolland (1998) conducted a study of adaptation to visual displacement using a large-FOV video see-through HMD. Users saw the real world through two cameras located 62 mm higher than and 165 mm forward from their natural eyepoints, as shown in figure 2. Subjects showed evidence of perceptual adaptation to sensory disarrangement during the course of the study. This revealed itself as improved performance over time while wearing the see-through HMD, and as negative aftereffects once they removed it. More precisely, the negative aftereffect manifested itself clearly as a large overshoot in a depth-pointing task, as well as an upward translation in a lateral pointing task, after wearing the HMD. Moreover, some participants experienced early signs of cybersickness.

The presence of negative aftereffects has some potentially disturbing practical implications for the diffusion of large-FOV video see-through HMDs (Kennedy & Stanney, 1997). Some of the intended early users of these HMDs are surgeons and other individuals in the medical profession. Hand-eye sensory recalibration for highly skilled users (such as surgeons) could have potentially disturbing consequences if the surgeon were to enter surgery within some period after using an HMD. It is an empirical question how long the negative aftereffects might persist, and whether a program of gradual adaptation (Welch, 1994) or dual adaptation (Welch, 1993) might minimize the effect altogether. In any case, any shift in the camera eyepoints needs to be minimized as much as possible to facilitate the adaptation process. As we learn more about these issues, we will build devices with less error and more similarity between using these systems and wearing a pair of eyeglasses, so that adaptation takes less time and aftereffects decrease as well.

A remaining issue is the conflict between accommodation and convergence in such displays. The issue can be solved at some cost (Rolland et al., 2000). For lower-end systems, a question to investigate is how users adapt to various settings of the technology. For high-end systems, much research is still needed to understand the importance of perceptual conflicts and how best to minimize them.

3.2.4 Peripheral FOV. Given that peripheral vision can be provided in both optical and video see-through systems, the next question is whether it is used effectively in both. In optical see-through, there is almost no transition or discrepancy between the real scene captured by the see-through device and the peripheral vision seen around the side of the device. For video see-through, the peripheral FOV has been provided by letting the user see around the device, as with optical see-through. However, it remains to be seen whether the difference in presentation between the superimposed real scene and the peripheral real scene will cause
discomfort or provide conflicting cues to the user. The issue is that, in various cases, the virtual displays call for a different accommodation from that of the real scene.

3.2.5 Depth of Field. One important property of optical systems, including the visual system, is depth of field: the range of distances from the detector (such as the eye) over which an object appears to be in focus without any change in the optics' focus (such as eye accommodation). In the case of the human visual system, for example, if an object is accurately focused monocularly, objects somewhat nearer and farther away are also seen clearly without any change in accommodation; still nearer or farther objects are blurred. Depth of field reduces the necessity for precise accommodation and is markedly influenced by the diameter of the pupil: the larger the pupil, the smaller the depth of field. For a 2 mm and a 4 mm pupil, the depths of field are ±0.06 and ±0.03 diopters, respectively. For a 4 mm pupil, for example, such a depth of field translates to a clear focus from 0.94 m to 1.06 m for an object 1 m away, and from 11 m to 33 m for an object 17 m away (Campbell, 1957; Moses, 1970). An important point is that accommodation plays an important role only at close working distances, where the depth of field is narrow.

With video see-through systems, the miniature cameras that acquire the real-scene images must provide a depth of field equivalent to the required working distance for a task. For a large range of working distances, the camera may need to be focused at the middle working distance; for closer distances, the small depth of field may require an autofocus rather than a fixed-focus camera.

With optical see-through systems, the available depth of field for the real scene is essentially that of the human visual system, but for a larger pupil than would be accessible with unaided eyes. This can be explained by the brightness attenuation of the real scene by the half-transparent mirror: as a result, the pupils are dilated (we assume here that the real and virtual scenes are matched in brightness), and the effective depth of field is slightly less than with unaided eyes. This is a problem only if the user is working with nearby objects and the virtual images are focused outside the depth of field required for those objects. For the virtual images, with no autofocus capability, the depth of field is imposed by the human visual system around the location of the displayed virtual images. When the retinal images are not sharp, following some discrepancy in accommodation, the visual system constantly processes somewhat blurred images and tends to tolerate blur up to the point at which essential detail is obscured. This tolerance for blur considerably extends the apparent depth of field, so that the eye may be as much as ±0.25 diopters out of focus without stimulating accommodative change (Moses, 1970).

3.2.6 Qualitative Aspects. The representation of virtual objects, and in some cases of real objects, is altered by see-through devices. Aspects of perceptual representation include the shape of objects, their color, brightness, contrast, shading, texture, and level of detail. In the case of optical see-through HMDs, folding the optical path with a half-transparent mirror is necessary because it is the only configuration that leaves the real scene almost unaltered. A thin folding mirror will introduce a small apparent shift in the depth of real objects precisely equal to e(n − 1)/n, where e is the thickness of the plate and n is its index of refraction, in addition to a small amount of distortion (< 1%) of the scene at the edges of a 60-degree FOV. Consequently, real objects are seen essentially unaltered.

Virtual objects, on the other hand, are formed from the fusion of stereo images seen through magnifying optics. Each optical virtual image formed of the display associated with each eye is typically optically aberrated. For large-FOV optics such as HMDs, astigmatism and chromatic aberrations are often the limiting factors. Custom-designed HMD optics can be analyzed from a visual-performance point of view (Shenker, 1994; Rolland, 2000); such analysis allows the prediction of the expected visual performance of HMD users.
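The depth-of-field figures quoted in section 3.2.5 follow from simple vergence arithmetic: convert the focus distance to diopters and apply the tolerance as a half-width. A short check, assuming this simple model (for instance, a ±0.06 diopter tolerance at 1 m reproduces the 0.94–1.06 m range):

```python
def clear_range_m(distance_m, tol_d):
    """Near and far limits (in meters) of apparent sharp focus for an eye
    focused at distance_m, with a depth-of-field half-width of tol_d diopters."""
    focus_d = 1.0 / distance_m          # focus distance expressed as a vergence
    near = 1.0 / (focus_d + tol_d)      # nearer limit: add the tolerance
    # Beyond the "hyperfocal" condition, the far limit extends to infinity.
    far = 1.0 / (focus_d - tol_d) if focus_d > tol_d else float("inf")
    return near, far

# clear_range_m(1.0, 0.06) gives roughly (0.94, 1.06) meters.
```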
It must be noted that real and virtual objects in such systems may be seen sharply by accommodating in different planes under most visualization settings. This yields conflicts in accommodation between real and virtual imagery. For applications in which the virtual objects are presented in a small working volume around some mean display distance (such as arm's-length visualization), the 2-D optical images of the miniature displays can be located at that same distance to minimize conflicts in accommodation and convergence between real and virtual objects. Another approach to minimizing these conflicts is multifocal-plane technology, as described in Rolland et al. (2000).

Besides brightness attenuation and distortion, other aspects of object representation are altered in video see-through HMDs. The authors' experience with at least one system is that the color and brightness of real objects are altered, along with a loss of texture and level of detail, due to the limited resolution of the miniature video cameras and the wide-angle optical viewer. This alteration includes spatial, luminance, and color resolution. It is perhaps resolvable with improved technology, but it currently limits the ability of the HMD user to perceive real objects as they would appear to unaided eyes. In wide-FOV video see-through HMDs, both real and virtual objects call for the same accommodation; however, conflicts of accommodation and convergence are still present. As with optical see-through HMDs, these conflicts can be minimized if objects are perceived at a relatively constant depth near the plane of the optical images. In narrow-FOV systems, in which the real scene is seen in large part outside the overlay imagery, conflicts in accommodation can also arise between the real and computer-generated scenes.

For both technologies, a solution to these various conflicts in accommodation may be to allow autofocus of the 2-D virtual images as a function of the location of the user's gaze point in the virtual environment, or to implement multifocal planes (Rolland et al., 2000). Given eye-tracking capability, autofocus could be provided because small displacements of the miniature display near the focal plane of the optics yield large axial displacements of the 2-D virtual images in the projected virtual space; the 2-D virtual images would move in depth according to the user's gaze point. Multifocal planes also allow autofocusing, but with no need for eye tracking.

4 Conclusion

We have discussed issues involving optical and video see-through HMDs. The most important issues are system latency, occlusion, the fidelity of the real-world view, and user acceptance. Optical see-through systems offer an essentially unhindered view of the real environment; they also provide an instantaneous real-world view that ensures that visual and proprioceptive information is synchronized. Video systems forfeit the unhindered view in return for an improved ability to see real and synthetic imagery simultaneously.

Some of us working with optical see-through devices strongly feel that providing the real scene through optical means is important for applications, such as medical visualization, in which human lives are at stake. Others, working with video see-through devices, feel that a flip-up view is adequate for the safety of the patient. Also, how to render occlusion of the real scene at given spatial locations may be important. Video see-through systems can also guarantee registration of the real and virtual scenes, at the expense of a mismatch between vision and proprioception. This may or may not be perceived as a penalty if the human observer is able to adapt to such a mismatch. Hybrid solutions, such as that developed by Peuchot (1994), which combine optical see-through technology for visualization with video technology for tracking objects in the real environment, may play a key role in future developments of technology for 3-D medical visualization.

Clearly, there is no "right" system for all applications: each of the tradeoffs discussed in this paper must be examined with respect to specific applications and available technology to determine which type of system is most appropriate. Furthermore, additional HMD features such as multiplane focusing and eye tracking are currently being investigated at various research and development sites and may provide solutions to current perceptual
conflicts in HMDs. A shared concern among scientists developing this technology further is the lack of standards, not only in the design but also, most importantly, in the calibration and maintenance of HMD systems.

Acknowledgments

This review was expanded from an earlier paper in a SPIE proceeding by Rolland, Holloway, and Fuchs (1994), and the authors would like to thank Rich Holloway for his earlier contribution to this work. We thank Myron Krueger from Artificial Reality Corp. for stimulating discussions on various aspects of the technology, as well as Martin Shenker from M.S.O.D. and Brian Welch from CAE Electronics for discussions on current optical technologies. Finally, we thank Bernard Peuchot, Derek Hill, and Andrei State for providing information about their research that has significantly contributed to the improvement of this paper. We deeply thank our various sponsors, not only for their financial support, which has greatly facilitated our research in see-through devices, but also for the stimulating discussions they have provided over the years. Contracts and grants include ARPA DABT 63-93-C-0048; NSF Cooperative Agreement ASC-8920219 (Science and Technology Center for Computer Graphics and Scientific Visualization); ONR N00014-86-K-0680; ONR N00014-94-1-0503; ONR N000149710654; NIH 5-R24-RR-02170; NIH 1-R29LM06322-O1A1; and DAAH04-96-C-0086.

References

Allen, D. (1993). A high resolution field sequential display for head-mounted applications. Proc. IEEE Virtual Reality Annual International Symposium (VRAIS '93) (pp. 364–370).
Azuma, R., & Bishop, G. (1994). Improving static and dynamic registration in an optical see-through HMD. Computer Graphics: Proceedings of SIGGRAPH '94 (pp. 197–204).
Bajura, M., Fuchs, H., & Ohbuchi, R. (1992). Merging virtual objects with the real world. Computer Graphics, 26, 203–210.
Bajura, M., & Neumann, U. (1995). Dynamic registration correction in video-based augmented reality systems. IEEE Computer Graphics and Applications, 15(5), 52–60.
Baillot, Y., & Rolland, J. P. (1998). Modeling of a knee joint for the VRDA tool. Proc. of Medicine Meets Virtual Reality (pp. 366–367).
Baillot, Y., Rolland, J. P., & Wright, D. (1999). Kinematic modeling of knee-joint motion for a virtual reality tool. Proc. of Medicine Meets Virtual Reality (pp. 30–35).
Baillot, Y., Rolland, J. P., Lin, K., & Wright, D. L. (2000). Automatic modeling of knee-joint motion for the Virtual Reality Dynamic Anatomy (VRDA) tool. Presence: Teleoperators and Virtual Environments, 9(3), 223–235.
Baillot, Y. (1999). First implementation of the Virtual Reality Dynamic Anatomy (VRDA) tool. Unpublished master's thesis, University of Central Florida.
Barrette, R. E. (1992). Wide field of view, full-color, high-resolution, helmet-mounted display. Proc. of SID '92 Symposium (pp. 69–72).
Biocca, F. A., & Rolland, J. P. (1998). Virtual eyes can rearrange your body: Adaptation to virtual-eye location in see-thru head-mounted displays. Presence: Teleoperators and Virtual Environments, 7(3), 262–277.
Bishop, G., Fuchs, H., McMillan, L., & Scher-Zagier, E. J. (1994). Frameless rendering: Double buffering considered harmful. Proc. of SIGGRAPH '94 (pp. 175–176).
Brooks, F. P. (1992). Walkthrough project: Final technical report to National Science Foundation Computer and Information Science and Engineering. Technical Report TR92-026, University of North Carolina at Chapel Hill.
Buchroeder, R. A., Seeley, G. W., & Vukobratovich, D. (1981). Design of a catadioptric VCASS helmet-mounted display. Optical Sciences Center, University of Arizona, under contract to U.S. Air Force Armstrong Aerospace Medical Research Laboratory, Wright-Patterson Air Force Base, Dayton, Ohio, AFAMRL-TR-81-133.
Campbell, F. W. (1957). The depth of field of the human eye. Optica Acta, 4, 157–164.
Cutting, J. E., & Vishton, P. M. (1995). Perceiving the layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of Space and Motion (pp. 69–117). Academic Press.
Deering, M. (1992). High resolution virtual reality. Proc. of SIGGRAPH '92, Computer Graphics, 26(2), 195–201.
Desplat, S. (1997). Caracterisation des elements actuels et futurs de l'oculometre Metrovision-Sextant. Technical Report, Ecole Nationale Superieure de Physique de Marseille, November.
Dhome, M., Richetin, M., Lapreste, J. P., & Rives, G. (1989). Determination of the attitude of 3-D objects from a single perspective view. IEEE Trans. Pattern Analysis and Machine Intelligence, 11(12), 1265–1278.
Droessler, J. G., & Rotier, D. J. (1990). Tilted cat helmet-mounted display. Optical Engineering, 29(8), 849–854.
Edwards, E. K., Rolland, J. P., & Keller, K. P. (1993). Video see-through design for merging of real and virtual environments. Proc. of IEEE VRAIS '93 (pp. 223–233).
Edwards, P. J., Hawkes, D. J., Hill, D. L. G., Jewell, D., Spink, R., Strong, A., & Gleeson, M. (1995). Augmentation of reality using an operating microscope for otolaryngology and neurosurgical guidance. J. Image Guided Surgery, 1(3), 172–178.
Ellis, S. R., & Bucher, U. J. (1994). Distance perception of stereoscopically presented virtual objects optically superimposed on physical objects in a head-mounted see-through display. Proc. of the Human Factors and Ergonomics Society, Nashville.
Fernie, A. (1995). A 3.1 helmet-mounted display with dual resolution. CAE Electronics, Ltd., Montreal, Canada, SID 95 Applications Digest, 37–40.
Fernie, A. (1995). Improvements in area of interest helmet-mounted displays. Proc. of SID 95.
Fry, G. A. (1969). Geometrical Optics. Chilton Book Company.
Furness, T. A. (1986). The super cockpit and its human factors challenges. Proceedings of the Human Factors Society, 30, 48–52.
Held, R., & Durlach, N. (1987). Telepresence, time delay and adaptation. NASA Conference Publication 10032.
Holloway, R. (1995). An analysis of registration errors in a see-through head-mounted display system for craniofacial surgery planning. Unpublished doctoral dissertation, University of North Carolina at Chapel Hill.
Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics (in press).
Jacobs, M. C., Livingston, M. A., & State, A. (1997). Managing latency in complex augmented reality systems. Proc. of 1997 Symposium on Interactive 3D Graphics, ACM SIGGRAPH, 235–240.
Kaiser Electro-Optics. (1994). Personal communication from Frank Hepburn of KEO, Carlsbad, CA. A general description of Kaiser's VIM system ("Full immersion head-mounted display system") is available via ARPA's ESTO home page: http://esto.sysplan.com/ESTO/.
Kancherla, A., Rolland, J. P., Wright, D., & Burdea, G. (1995). A novel virtual reality tool for teaching dynamic 3D anatomy. Proc. of CVRMed '95, 163–169.
Kandebo, S. W. (1988). Navy to evaluate Agile Eye helmet-mounted display system. Aviation Week & Space Technology, August 15, 94–99.
Kennedy, R. S., & Stanney, K. M. (1997). Aftereffects in virtual environment exposure: Psychometric issues. In M. J. Smith, G. Salvendy, & R. J. Koubek (Eds.), Design of Computing.
Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proc. of VRAIS '97, 130–137.
Manhart, P. K., Malcom, R. J., & Frazee, J. G. (1993). Augeye: A compact, solid Schmidt optical relay for helmet-mounted displays. Proc. of IEEE VRAIS '93, 234–245.
Mizell, D. (1998). Personal communication.
Moses, R. A. (1970). Adler's Physiology of the Eye. St. Louis, MO: Mosby.
Oster, P. J., & Stern, J. A. (1980). Measurement of eye movement. In I. Martin & P. H. Venables (Eds.), Techniques of Psychophysiology. New York: John Wiley & Sons.
Outters, V., Argotti, Y., & Rolland, J. P. (1999). Knee motion capture and representation in augmented reality. Technical Report TR99-006, University of Central Florida.
Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proc. of Medicine Meets Virtual Reality, 246–251.
Peuchot, B. (1993). Camera virtual equivalent model: 0.01 pixel detectors. Special issue on 3D Advanced Image Processing in Medicine, Computerized Medical Imaging and Graphics, 17(4/5), 289–294.
———. (1994). Utilisation de detecteurs subpixels dans la modelisation d'une camera: verification de l'hypothese stenope. 9e Congres AFCET, Reconnaissance des Formes et Intelligence Artificielle, Paris.
———. (1998). Personal communication.
Peuchot, B., Tanguy, A., & Eude, M. (1994). Dispositif optique pour la visualisation d'une image virtuelle tridimensionnelle en superimposition avec un objet, notamment pour des applications chirurgicales. Depot CNRS #94106623, May 31.
———. (1995). Virtual reality as an operative tool during scoliosis surgery. Proc. of CVRMed '95, 549–554.
Robinett, W., & Rolland, J. P. (1992). A computational model for the stereoscopic optics of a head-mounted display. Presence: Teleoperators and Virtual Environments, 1(1), 45–62.
Rock, I. (1966). The Nature of Perceptual Adaptation. New York: Basic Books.
Rolland, J. P. (1994). Head-mounted displays for virtual environments: The optical interface. International Optical Design Conference 94, Proc. OSA, 22, 329–333.
Rolland, J. P. (1998). Mounted displays. Optics and Photonics News, 9(11), 26–30.
Rolland, J. P. (2000). Wide-angle, off-axis, see-through head-mounted display. Optical Engineering—Special Issue on Pushing the Envelope in Optical Design Software, 39(7) (in press).
Rolland, J. P., Ariely, D., & Gibson, W. (1995). Towards quantifying depth and size perception in virtual environments. Presence: Teleoperators and Virtual Environments, 4(1), 24–49.
Rolland, J. P., & Arthur, K. (1997). Study of depth judgments in a see-through mounted display. Proceedings of SPIE, 3058, AEROSENSE, 66–75.
Rolland, J. P., Holloway, R. L., & Fuchs, H. (1994). A comparison of optical and video see-through head-mounted displays. Proceedings of SPIE, 2351, 293–307.
Rolland, J. P., & Hopkins, T. (1993). A method for computational correction of optical distortion in head-mounted displays. Technical Report TR93-045, University of North Carolina at Chapel Hill.
Rolland, J. P., Krueger, M., & Goon, A. (2000). Multifocus planes in head-mounted displays. Applied Optics, 39(19) (in press).
Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998). Conformal optics for 3D visualization. Proc. of the International Lens Design Conference, Hawaii (June), 760–764.
Rolland, J. P., Yoshida, A., Davis, L., & Reif, J. H. (1998). High-resolution inset head-mounted display. Applied Optics, 37(19), 4183–4193.
Roscoe, S. N. (1984). Judgments of size and distance with imaging displays. Human Factors, 26(6), 617–629.
Roscoe, S. N. (1991). The eyes prefer real images. In S. R. Ellis (Ed.), Pictorial Communication in Virtual and Real Environments. Taylor and Francis.
Scher-Zagier, E. J. (1997). A human's eye view: motion, blur, […]
[…] presence in virtual environments. Presence: Teleoperators and Virtual Environments, 6(6), 603–616.
State, A., Chen, D., Tector, C., Brandt, C., Chen, H., Ohbuchi, R., Bajura, M., & Fuchs, H. (1994). Case study: Observing a volume-rendered fetus within a pregnant patient. Proceedings of Visualization '94, Washington, DC, 364–373.
State, A., Hirota, G., Chen, D. T., Garrett, W. E., & Livingston, M. (1996). Superior augmented-reality registration by integrating landmark tracking and magnetic tracking. Proc. of SIGGRAPH 1996, ACM SIGGRAPH, 429–438.
Sutherland, I. (1965). The ultimate display. Information Processing 1965: Proc. of IFIP Congress 65, 506–508.
Sutherland, I. E. (1968). A head-mounted three-dimensional display. Fall Joint Computer Conference, AFIPS Conference Proceedings, 33, 757–764.
Thomas, M. L., Siegmund, W. P., Antos, S. E., & Robinson, R. M. (1989). Fiber optic development for use on the fiber optic helmet-mounted display. In J. T. Colloro (Ed.), Helmet-Mounted Displays, Proc. of SPIE, 1116, 90–101.
Tomasi, C., & Kanade, T. (1991). Shape and motion from image streams: A factorization method. Part 3: Detection and tracking of point features. Carnegie Mellon Technical Report CMU-CS-91-132.
Vaissie, L., & Rolland, J. P. (1999). Eye-tracking integration in head-mounted displays. Patent filed January.
Vaissie, L., & Rolland, J. P. (2000). Albertian errors in head-mounted displays: Choice of eyepoints location. Technical Report TR 2000-001, University of Central Florida.
Watson, B., & Hodges, L. F. (1995). Using texture maps to correct for optical distortion in head-mounted displays. Proc. of VRAIS '95, 172–178.
Welch, B., & Shenker, M. (1984). The fiber-optic helmet-mounted display. Image III, 345–361.
Welch, R. B. (1993). Alternating prism exposure causes dual adaptation and generalization to a novel displacement. Perception and Psychophysics, 54(2), 195–204.
Welch, R. B. (1994). Adapting to virtual environments and teleoperators. Unpublished manuscript, NASA-Ames Re-
and frameless rendering. ACM Crosswords 97. search Center, Moffett Field, CA.
Shenker, M. (1994). Image quality considerations for head- Welch, G., & Bishop, G. (1997). SCAAT: Incremental track-
mounted displays. Proc. of the OSA: International Lens De- ing with incomplete information, Proc. of ACM
sign Conference, 22, 334–338. SIGGRAPH, (pp. 333–344).
Shenker, M. (1998). Personal Communication. Wright, D. L., Rolland, J. P., & Kancherla, A. R. (1995). Us-
Slater, M., and Wilbur, S. (1997). A framework for immersive ing virtual reality to teach radiographic positioning, Radio-
virtual environments (FIVE): Speculations on the role of logic Technology, 66(4), 167–172.