


Optical Versus Video See-Through Head-Mounted Displays in Medical Visualization

Jannick P. Rolland
School of Optics/CREOL, and School of Electrical Engineering and Computer Science
University of Central Florida
Orlando, FL 32816–2700

Henry Fuchs
Department of Computer Science
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3175

Presence, Vol. 9, No. 3, June 2000, 287–309
© 2000 by the Massachusetts Institute of Technology

Abstract
We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.

1 Introduction

One of the most promising and challenging future uses of head-mounted displays (HMDs) is in applications in which virtual environments enhance rather than replace real environments. This is referred to as augmented reality (Bajura, Fuchs, & Ohbuchi, 1992). To obtain an enhanced view of the real environment, users wear see-through HMDs to see 3-D computer-generated objects superimposed on their real-world view. This see-through capability can be accomplished using either an optical HMD, as shown in figure 1, or a video see-through HMD, as shown in figure 2. We shall discuss the tradeoffs between optical and video see-through HMDs with respect to technological and human-factor issues, and discuss our experience designing, building, and testing these HMDs in medical visualization.

With optical see-through HMDs, the real world is seen through half-transparent mirrors placed in front of the user's eyes, as shown in figure 1. These mirrors are also used to reflect the computer-generated images into the user's eyes, thereby optically combining the real- and virtual-world views. With a video see-through HMD, the real-world view is captured with two miniature video cameras mounted on the head gear, as shown in figure 2, and the computer-generated images are electronically combined with the video representation of the real world (Edwards, Rolland, & Keller, 1993; State et al., 1994).

See-through HMDs were first developed in the 1960s. Ivan Sutherland's 1965 and 1968 optical see-through and stereo HMDs were the first computer graphics-based HMDs that used miniature CRTs for display devices, a mechanical tracker to provide head position and orientation in real time, and a hand-tracking device (Sutherland, 1965, 1968). While most of the developments in see-through HMDs aimed at military applications (Buchroeder,

Seeley, & Vukobratovich, 1981; Furness, 1986; Droessler & Rotier, 1990; Barrette, 1992; Kandebo, 1988; Desplat, 1997), developments in 3-D scientific and medical visualization were initiated in the 1980s at the University of North Carolina at Chapel Hill (Brooks, 1992).

In this paper, we shall first review several medical visualization applications developed using optical and video see-through technologies. We shall then discuss technological and human-factors and perceptual issues related to see-through devices, some of which are employed in the various applications surveyed. Finally, we shall discuss what the technology may evolve to become.

2 Some Past and Current Applications of Optical and Video See-Through HMDs

The need for accurate visualization and diagnosis in health care is crucial. One of the main developments of medical care has been imaging. Since the discovery of X-rays in 1895 by Wilhelm Roentgen, and the first clinical X-ray application a year later by two Birmingham (UK) doctors, X-ray imaging and other medical imaging modalities (such as CT, ultrasound, and NMR) have emerged. Medical imaging allows doctors to view aspects of the interior architecture of living beings that were unseen before. With the advent of imaging technologies, opportunities for minimally invasive surgical procedures have arisen. Imaging and visualization can be used to guide needle biopsy, laparoscopic, endoscopic, and catheter procedures. Such procedures do require additional training because the physicians cannot see the natural structures that are visible in open surgery. For example, the natural eye-hand coordination is not available during laparoscopic surgery. Visualization techniques associated with see-through HMDs promise to help restore some of the lost benefits of open surgery (for example, by projecting a virtual image directly on the patient, eliminating the need for a remote monitor).

Figure 1. Optical see-through head-mounted display (Photo courtesy of Kaiser Electro-Optics).

Figure 2. A custom-optics video see-through head-mounted display developed at UNC-CH. Edwards et al. (1993) designed the miniature video cameras. The viewer was a large-FOV opaque HMD from Virtual Research.

The following paragraphs briefly discuss examples of recent and current research conducted with optical see-through HMDs at the University of North Carolina at Chapel Hill (UNC-CH), the University of Central Florida (UCF), and the United Medical and Dental Schools of Guy's and Saint Thomas's Hospitals in England; video see-through at UNC-CH; and hybrid optical-video see-through at the University of Blaise Pascal in France.

A rigorous error analysis for an optical see-through HMD targeted toward the application of optical see-through HMD to craniofacial reconstruction was conducted at UNC-CH (Holloway, 1995). The superimposition of CT skull data onto the head of the real patient would give the surgeons "X-ray vision." The premise of that system was that viewing the data in situ allows surgeons to make better surgical plans because they will be able to see the complex relationships between the bone and soft tissue more clearly. Holloway found that the largest registration error between real and virtual objects in optical see-through HMDs was caused by delays in presenting updated information associated with tracking. Extensive research in tracking has since been pursued at UNC-CH (Welch & Bishop, 1997).

Figure 3. (a) The VRDA tool will allow superimposition of virtual anatomy on a model patient. (b) An illustration of the view of the HMD user (Courtesy of Andrei State). (c) A rendered frame of the knee-joint bone structures animated based on a kinematic model of motion developed by Baillot and Rolland that will be integrated in the tool (1998).

Figure 4. First demonstration of the superimposition of a graphical knee joint on a leg model for use in the VRDA tool: (a) a picture of the bench prototype setup; a snapshot of the superimposition through one lens of the setup in (b) a diagonal view and (c) a side view (1999).

One of the authors and colleagues are currently developing an augmented-reality tool for the visualization of human anatomical joints in motion (Wright et al., 1995; Kancherla et al., 1995; Rolland & Arthur, 1997; Parsons & Rolland, 1998; Baillot & Rolland, 1998; Baillot et al., 1999). An illustration of the tool using an optical see-through HMD for visualization of anatomy is shown in figure 3. In the first prototype, we have concentrated on the positioning of the leg around the knee joint. The joint is accurately tracked optically by using three infrared video cameras to locate active infrared markers placed around the joint. Figure 4 shows the results of the optical superimposition of the graphical knee joint on a leg model, seen through one of the lenses of our stereoscopic bench prototype display.

An optical see-through HMD coupled with optical tracking devices positioned along the knee joint of a model patient is used to visualize the 3-D computer-rendered anatomy directly superimposed on the real leg in motion. The user may further manipulate the joint and investigate the joint motions. From a technological aspect, the field of view (FOV) of the HMD should be sufficient to capture the knee-joint region, and the tracking devices and image-generation system must be fast enough to track typical knee-joint motions during manipulation at interactive speed. The challenge of capturing accurate knee-joint motion using optical markers located on the external surface of the joint was addressed by Rolland and Arthur (1997). The application aims at developing a more advanced tool for teaching dynamic anatomy (advanced in the sense that the tool allows combination of the senses of touch and vision). We aim this tool to specifically impart better understanding of bone motions during radiographic positioning for the radiological science (Wright et al., 1995).

To support the need for accurate motions of the knee joint in the Virtual Reality Dynamic Anatomy (VRDA) tool, an accurate kinematic model of joint motion based on the geometry of the bones and collision detection algorithms was developed (Baillot & Rolland, 1998; Baillot et al., 1999). This component of the research is described in another paper of this special issue (Baillot et al., 2000). The dynamic registration of the leg with the simulated bones is reported elsewhere (Outters et al., 1999). High-accuracy optical tracking methods, carefully designed and calibrated HMD technology, and appropriate computer graphics models for stereo pair generation play an important role in achieving accurate registration (Vaissie and Rolland, 2000; Rolland et al., 2000).
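To make the tracking geometry concrete, the sketch below triangulates a single marker as the point nearest, in the least-squares sense, to the back-projected rays of the cameras that observe it. This is our illustration of the general principle, not the authors' implementation; the two-camera setup, noise-free rays, and all names are assumptions.

```python
import numpy as np

def triangulate_midpoint(origins, directions):
    """Least-squares point closest to a set of 3-D rays.

    origins: (N, 3) camera centers; directions: (N, 3) rays toward
    the observed marker. Solves sum_i (I - d_i d_i^T)(p - o_i) = 0
    for the marker position p.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras 0.6 m apart observing a marker near the knee joint.
origins = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0]])
marker = np.array([0.3, 0.2, 1.0])
directions = marker - origins                      # ideal, noise-free rays
print(triangulate_midpoint(origins, directions))   # ~[0.3, 0.2, 1.0]
```

With real detections the rays do not intersect exactly, and the same least-squares formulation returns the point that best reconciles them.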
At the United Medical and Dental Schools of Guy's and Saint Thomas's Hospitals in England, researchers are projecting simple image features derived from preoperative magnetic resonance and computer-tomography images into the light path of a stereo operating microscope, with the goal of eventually allowing surgeons to visualize underlying structures during surgery. The first prototype used low-contrast color displays (Edwards et al., 1995). The current prototype uses high-contrast monochrome displays. The microscope is tracked intraoperatively, and the optics are calibrated (including zoom and focus) using a pinhole camera model. The intraoperative coordinate frame is registered using anatomical features and fiducial markers. The image features used in the display are currently segmented by hand. These include the outline of a lesion, the track of key nerves and blood vessels, and bone landmarks. This computer-guided surgery system can be said to be equivalent to an optical see-through system operating on a microscopic scale. In this case, the real scene is now seen through magnifying optics, but the eye of the observer is still the direct detecting device as in optical see-through.

Figure 5. Real-time acquisition and superimposition of ultrasound slice images on a pregnant woman (1992).

One of the authors and colleagues at the UNC-CH are currently developing techniques that merge video and graphical images for augmented reality. The goal is to develop a system displaying live, real-time, ultrasound data properly registered in 3-D space on a scanned subject. This would be a powerful and intuitive visualization tool as well. The first application developed was the visualization of a human fetus during ultrasound echography. Figure 5 shows the real-time ultrasound images, which appear to be pasted in front of the patient's body rather than fixed within it (Bajura et al., 1992). Real-time imaging and visualization remains a challenge. Figure 6 shows a more recent, non-real-time implementation of the visualization in which the fetus is rendered more convincingly within the body (State et al., 1994).

Recently, knowledge from this video and ultrasound technology has also been applied to developing a visualization method for ultrasound-guided biopsies of breast lesions that were detected during mammography screening procedures (Figure 7) (State et al., 1996). This application was motivated from the challenges we observed during a biopsy procedure while collaborating on research with Etta Pisano, head of the Mammography Research Group at UNC-CH. The goal was to be able to locate any tumor within the breast as quickly and accurately as possible. The technology of video see-through
already developed was thus applied to this problem. The conventional approach to biopsy is to follow the insertion of a needle in the breast tissue with a remote monitor displaying real-time, 2-D, ultrasound depth images. Such a procedure typically requires five insertions of the needle to maximize the chances of biopsy of the lesion. In the case in which the lesion is located fairly deep in the breast tissue, the procedure is difficult and can be lengthy (one to two hours is not atypical for deep lesions). Several challenges remain to be overcome before the technology developed can actually be tested in the clinic, including accurate and precise tracking and a technically reliable HMD. The technology may have applications in guiding laparoscopy, endoscopy, or catheterization as well.

Figure 6. Improved rendering of fetus inside the abdomen (1994).

Figure 7. Ultrasound-guided biopsy. (a) Laboratory setup during evaluation of the technology with Etta Pisano and Henry Fuchs. (b) A view through the HMD (1996).

At the University of Blaise Pascal in Clermont-Ferrand, France, researchers developed several augmented-reality visualization tools based on hybrid optical and video see-through to assist in surgery to correct scoliosis (abnormal curvature of the spine column) (Peuchot, Tanguy, & Eude, 1994, 1995). This application was developed in collaboration with a surgeon of infantile scoliosis. The visualization system shown in figure 8 is, from an optics point of view, the simplest see-through system one may conceive. It is first of all fixed on a stand, and it is designed as a viewbox positioned above the patient.

Figure 8. Laboratory prototype of the hybrid optical/video see-through AR tool for guided scoliosis surgery developed by Peuchot at the University of Blaise Pascal, France (1995).
Figure 9. Graphics illustration of current and future use of computer-guided surgery according to Bernard Peuchot.

Figure 10. Optical scheme of the hybrid optical/video see-through AR tool shown in Fig. 8.

The surgeon positions himself above the viewbox to see the patient, and the graphical information is superimposed on the patient as illustrated in figure 9. The system includes a large monitor where a stereo pair of images is displayed, as well as half-silvered mirrors that allow the superimposition of the real and virtual objects. The monitor is optically imaged on a plane through the semi-transparent mirrors, and the spine under surgery is located within a small volume around that plane. An optical layout of the system is shown in figure 10.

In the above hybrid optical-video system, vertebrae are located in space by automatic analysis of the perspective view from a single video camera of the vertebrae. A standard algorithm such as the inverse perspective algorithm is used to extract the 3-D information from the projections observed in the detector plane (Dhome et al., 1989). The method relies heavily on accurate video tracking of vertebral displacements. High-accuracy algorithms were developed to support the application, including development of subpixel detectors and calibration techniques. The method has been validated on vertebral specimens, and accuracy of submillimeters in depth has been demonstrated (Peuchot, 1993, 1994).
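The following sketch shows only the pinhole-projection relationship on which such inverse perspective methods rest; it is a simplified illustration of ours, not Peuchot's algorithm, and the focal length, feature separation, and function names are assumed for the example.

```python
import numpy as np

# Pinhole model: a point (X, Y, Z) in camera coordinates projects onto
# the detector plane at (u, v) = (f * X / Z, f * Y / Z) for focal length f.
# Inverse perspective methods invert this relation using known geometry.

def project(points, f):
    """Project (N, 3) camera-space points onto the detector plane."""
    return f * points[:, :2] / points[:, 2:3]

def depth_from_known_separation(u1, u2, separation, f):
    """Depth of two features a known distance apart, assumed to lie
    roughly parallel to the detector plane: Z ~ f * L / |u2 - u1|."""
    return f * separation / abs(u2 - u1)

f = 25.0  # focal length in mm (assumed value)
pts = np.array([[0.0, 0.0, 500.0], [30.0, 0.0, 500.0]])  # two features, mm
(u1, _), (u2, _) = project(pts, f)
print(depth_from_known_separation(u1, u2, 30.0, f))      # -> 500.0 mm
```

Recovering full vertebral pose rather than a single depth requires solving the same relation for several features simultaneously, which is what the algorithm of Dhome et al. addresses.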
The success of the method can be attributed to the fine calibration of the system, which, contrary to most systems, does not assume a pinhole camera model for the video camera. Moreover, having a fixed viewer with no optical magnification (contrary to typical HMDs) and a constant average plane of surgical operation reduces the complexity of problems such as registration and visualization. It can be shown, for example, that rendered depth errors are minimized when the virtual image plane through the optics (a simple semi-transparent mirror in Peuchot's case) is located in the average plane of the 3-D virtual object visualized (Rolland et al., 1995). Furthermore, the system avoids the challenging problems of tracking, optical distortion compensation, and conflicts of accommodation and convergence related to HMDs (Robinett & Rolland, 1992; Rolland & Hopkins, 1993). Some tracking and distortion issues will be further discussed in sections 3.1 and 3.2, respectively. However, good registration of real and virtual objects in a static framework is a first step to good calibration in a dynamic framework, and Peuchot's results are state of the art in this regard.

It is important to note that the method developed for this application employs a hybrid optical-video technology. In this case, video is essentially used to localize real objects in the surgical field, and optical see-through is used as the visualization tool for the surgeon. While the first system developed used one video camera, the methods have been extended to include multiple cameras, with demonstrated accuracy and precision of 0.01 mm (Peuchot, 1998). Peuchot chose the hybrid system over a video see-through approach because "it allows the operator to work in his real environment with a perception space that is real." Peuchot judged this point to be critical in a medical application like surgery.

Figure 11. Outline of sections 3.1 and 3.2 of this paper.

3 A Comparison of Optical and Video See-Through Technology

As suggested by the applications just described, the main goal of augmented-reality systems is to merge virtual objects into the view of the real scene so that the user's visual system suspends disbelief into perceiving the virtual objects as part of the real environment. Current systems are far from perfect, and system designers typically end up making a number of application-dependent tradeoffs. We shall list and discuss these tradeoffs in order to guide the choice of technology depending upon the type of application considered.

Both systems, optical and video, have two image sources: the real world and the computer-generated world. These image sources are to be merged. Optical see-through HMDs take what might be called a "minimally obtrusive" approach; that is, they leave the view of the real world nearly intact and attempt to augment it by merging a reflected image of the computer-generated scene into the view of the real world. Video see-through HMDs are typically more obtrusive in the sense that they block out the real-world view in exchange for the ability to merge the two views more convincingly. In recent developments, narrow fields of view in video see-through HMDs have replaced large field-of-view HMDs, thus reducing the area where the real world (captured through video) and the computer-generated images are merged into a small part of the visual scene. In any case, a fundamental consideration is whether the additional features afforded by video see-through HMDs justify the loss of the unobstructed real-world view.

Our experience indicates that there are many tradeoffs between optical and video see-through HMDs with respect to technological and human-factors issues that affect designing, building, and assessing these HMDs. The specific issues are laid out in figure 11. While most of these issues could be discussed from both a technological and human-factors standpoint (because the two are closely interrelated in HMD systems), we have chosen to classify each issue where it is most adequately addressed at this time, given the present state of the technology. For example, delays in HMD systems are addressed under technology because technological improvements are actively being pursued to minimize delays. Delays also
certainly have impact on various human-factor issues (such as the perceived location of objects in depth and user acceptance). Therefore, the multiple arrows shown in figure 11 indicate that the technological and human-factor categories are highly interrelated.

3.1 Technological Issues

The technological issues for HMDs include latency of the system, resolution and distortion of the real scene, field of view (FOV), eyepoint matching of the see-through device, and engineering and cost factors. While we shall discuss properties of both optical and video see-through HMDs, it must be noted that, contrary to optical see-through HMDs, there are no commercially available products for video see-through HMDs. Therefore, discussions of such systems should be considered carefully, as findings may be particular to only a few current systems. Nevertheless, we shall provide as much insight as possible into what we have learned with such systems as well.

3.1.1 System Latency. An essential component of see-through HMDs is the capacity to properly register a user's surroundings and the synthetic space. A geometric calibration between the tracking devices and the HMD optics must be performed. The major impediment to achieving registration is the gap in time, referred to as lag, between the moment when the HMD position is measured and the moment when the synthetic image for that position is fully rendered and presented to the user.

Lag is the largest source of registration error in most current HMD systems (Holloway, 1995). This lag in typical systems is between 60 ms and 180 ms. The head of a user can move during such a period of time, and the discrepancy in perceived scene and superimposed scene can destroy the illusion of the synthetic objects being fixed in the environment. The synthetic objects can "swim" around significantly, in such a way that they may not even seem to be part of the real object to which they belong. For example, in the case of ultrasound-guided biopsy, the computer-generated tumor may appear to be located outside the breast while tracking the head of the user. This swimming effect has been demonstrated and minimized by predicting HMD position instead of simply measuring positions (Azuma & Bishop, 1994).

Current HMD systems are lag limited as a consequence of tracker lag, the complexity of rendering, and displaying the images. Tracker lag is often not the limiting factor in performance. If displaying the image is the limiting factor, novel display architectures supporting frameless rendering can help solve the problem (Bishop et al., 1994). Frameless rendering is a procedure for continuously updating a displayed image as information becomes available, instead of updating entire frames at a time. The tradeoffs between lag and image quality are currently being investigated (Scher-Zagier, 1997). If we assume that we are limited by the speed of rendering an image, eye-tracking capability may be useful to quickly update information only around the gaze point of the user (Thomas et al., 1989; Rolland, Yoshida, et al., 1998; Vaissie & Rolland, 1999).

One of the major advantages of video see-through HMDs is the potential capability of reducing the relative latencies between the 2-D real and synthetic images, as a consequence of both types of images being digital (Jacobs et al., 1997). Manipulation of the images in space and in time is applied to register them. Three-dimensional registration is computationally intensive, if at all robust, and challenging for interactive speed. The spatial approach to forcing registration in video see-through systems is to correct registration errors by imaging landmark points in the real world and registering virtual objects with respect to them (State et al., 1996). One approach to eliminating temporal delays between the real and computer-generated images in such a case is to capture a video image and draw the graphics on top of the video image. Then the buffer is swapped, and the combined image is presented to the HMD user. In such a configuration, no delay apparently exists between the real and computer-generated images. If the actual latency of the computer-generated image is large with respect to the video image, however, it may cause sensory conflicts between vision and proprioception because the video images no longer correspond to the real-world scene. Any manual interactions with real objects could suffer as a result.
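As an illustration of the buffer-compositing approach just described, the following hedged sketch lays the rendered graphics over the most recent captured frame before the combined image is displayed. The array shapes and names are our assumptions, not the UNC-CH implementation.

```python
import numpy as np

def composite(video_frame, graphics_rgb, graphics_alpha):
    """Overlay rendered graphics on the most recent video frame.

    video_frame, graphics_rgb: (H, W, 3) float arrays in [0, 1];
    graphics_alpha: (H, W, 1) coverage, 0 where no virtual object.
    Because both images are digital, they can be merged pixel by
    pixel before the combined frame is presented to the user.
    """
    return graphics_alpha * graphics_rgb + (1.0 - graphics_alpha) * video_frame

h, w = 480, 640
video = np.random.rand(h, w, 3)            # stand-in for a captured frame
graphics = np.zeros((h, w, 3))
graphics[200:280, 280:360] = (1.0, 0.2, 0.2)
alpha = np.zeros((h, w, 1))
alpha[200:280, 280:360] = 1.0              # virtual object coverage
frame_out = composite(video, graphics, alpha)  # then swap buffers / display
```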

Another approach to minimizing delays in video see-through HMDs is to delay the video image until the computer-generated image is rendered. Bajura and Neumann (1995) applied chroma keying, for example, to dynamically image a pair of red LEDs placed on two real objects (one stream) and then registered two virtual objects with respect to them (second stream). By tracking more landmarks, better registration of real and virtual objects may be achieved (Tomasi and Kanade, 1991). The limitation of the approach taken is the attempt to register 3-D scenes using 2-D constraints. If the user rotates his head rapidly or if a real-world object moves, there may be no "correct" transformation for the virtual scene image. To align all the landmarks, one must either allow errors in registration of some of the landmarks or perform a nonlinear warping of the virtual scene that may create undesirable distortions of the virtual objects. The nontrivial solution to this problem is to increase the speed of the system until scene changes between frames are small and can be approximated with simple 2-D transformations.

In a similar vein, it is also important to note that the video view of the real scene will normally have some lag due to the time it takes to acquire and display the video images. Thus, the image in a video see-through HMD will normally be slightly delayed with respect to the real world, even without adding delay to match the synthetic images. This delay may increase if an image-processing step is applied to either enforce registration or perform occlusion. The key issue is whether the delay in the system is too great for the user to adapt to it (Held & Durlach, 1987).

Systems using optical see-through HMDs have no means of introducing artificial delays into the real scene. Therefore, the system may need to be optimized for low latency, perhaps less than 60 ms, where predictive tracking can be effective (Azuma & Bishop, 1994). For any remaining lag, the user may have to limit his actions to slow head motions. Applications in which speed of movement can be readily controlled, such as in the VRDA tool described earlier, can benefit from optical see-through technology (Rolland & Arthur, 1997). The advantage of having no artificial delays is that real objects will always be where they are perceived to be, and this may be crucial for a broad range of applications.
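A minimal sketch of what predictive tracking can look like, assuming a deliberately simplified constant-velocity model; published predictors such as Azuma and Bishop's use richer motion models and inertial sensing, so this is an illustration of the idea rather than their method.

```python
def predict_position(p_prev, p_curr, dt_sample, dt_ahead):
    """Constant-velocity extrapolation of one tracker coordinate.

    Estimate velocity from the last two samples and extrapolate by
    the expected rendering-plus-display latency dt_ahead, so the
    image is drawn for where the head will be, not where it was.
    """
    velocity = (p_curr - p_prev) / dt_sample
    return p_curr + velocity * dt_ahead

# Head x-position samples 10 ms apart; predict 60 ms ahead.
print(predict_position(0.100, 0.103, 0.010, 0.060))  # -> 0.121
```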
3.1.2 Real-Scene Resolution and Distortion. If real-scene resolution refers to the resolution of the real-scene object, the best real-scene resolution that a see-through device can provide is that perceived with the naked eye under unit magnification of the real scene. Certainly under microscopic observation as described by Hill (Edwards et al., 1995), the best scene resolution goes beyond that obtained with a naked eye. It is also assumed that the see-through device has no image-processing capability.

A resolution extremely close to that obtained with the naked eye is easily achieved with a nonmicroscopic optical see-through HMD, because the optical interface to the real world is simply a thin parallel plate (such as a glass plate) positioned between the eyes and the real scene. Such an interface typically introduces only very small amounts of optical aberrations to the real scene: for example, for a real-point object seen through a 2 mm planar parallel plate placed in front of a 4 mm dia. eye pupil, the diffusion spot due to spherical aberration would subtend a 2 × 10⁻⁷ arc-minute visual angle for a point object located 500 mm away. Spherical aberration is one of the most common and simple aberrations in optical systems that lead to blurring of the images. Such a degradation of image quality is negligible compared to the ability of the human eye to resolve a visual angle of 1 minute of arc. Similarly, planar plates introduce low distortion of the real scene, typically below 1%; distortion vanishes only for the chief rays that pass through the plate parallel to its normal.¹

1. A chief ray is defined as a ray that emanates from a point in the FOV and passes through the center of the pupils of the system. The exit pupil in an HMD is the entrance pupil of the human eye.

In the case of a video see-through HMD, real-scene images are digitized by miniature cameras (Edwards et al., 1993) and converted into an analog signal that is fed to the HMD. The images are then viewed through the HMD viewing optics that typically use an eyepiece design. The perceived resolution of the real scene can thus be limited by the resolution of the video cameras or the HMD viewing optics. Currently available miniature

video cameras typically have a resolution of 640 × 480, which is also near the resolution limit of the miniature displays currently used in HMDs.² Depending upon the magnification and the field of view of the viewing optics, various effective visual resolutions may be reached. While the miniature displays and the video cameras seem to currently limit the resolution of most systems, such performance may improve with higher-resolution detectors and displays.

2. The number of physical elements is typically 640 × 480. One can use signal processing to interpolate between lines to get higher resolutions.

In assessing video see-through systems, one must distinguish between narrow- and wide-FOV devices. Large-FOV (≥50 deg.) eyepiece designs are known to be extremely limited in optical quality as a consequence of factors such as optical aberrations that accompany large FOVs, pixelization that may become more apparent under large magnification, and the exit pupil size that must accommodate the size of the pupils of a person's eyes. Thus, even with higher-resolution cameras and displays, video see-through HMDs may remain limited in their ability to provide a real-scene view of high resolution if conventional eyepiece designs continue to be used. In the case of small- to moderate-FOV (10 deg. to 20 deg.) video see-through HMDs, the resolution is still typically much less than the resolving power of the human eye.

A new technology, referred to as tiling, may overcome some of the current limitations of conventional eyepiece design for large FOVs (Kaiser, 1994). The idea is to use multiple narrow-FOV eyepieces coupled with miniature displays to completely cover (or tile) the user's FOV. Because the individual eyepieces have a fairly narrow FOV, higher resolution (nevertheless currently less than the human visual system) can be achieved. One of the few demonstrations of high-resolution, large-FOV displays is the tiled displays. A challenge is the minimization of seams in assembling the tiles, and the rendering of multiple images at interactive speed. The tiled displays certainly bring new practical and computational challenges that need to be confronted. If a see-through capability is desired (for example, to display virtual furniture in an empty room), it is currently unclear whether the technical problems associated with providing overlay can be solved.

Theoretically, distortion is not a problem in video see-through systems because the cameras can be designed to compensate for the distortion of the optical viewer, as demonstrated by Edwards et al. (1993). However, if the goal is to merge real and virtual information, as in ultrasound echography, having a warped real scene significantly increases the complexity of the synthetic-image generation (State et al., 1994). Real-time video correction can be used at the expense of an additional delay in the image-generation sequence. An alternative is to use low-distortion video cameras at the expense of a narrower FOV, merge unprocessed real scenes with virtual scenes, and warp the merged images. Warping can be done using (for example) real-time texture mapping to compensate for the distortion of the HMD viewing optics as a last step (Rolland & Hopkins, 1993; Watson & Hodges, 1995).

The need for high real-scene resolution is highly task dependent. Demanding tasks such as surgery or engineering training, for example, may not be able to tolerate much loss in real-scene resolution. Because the large-FOV video see-through systems that we have experienced are seriously limited in terms of resolution, narrow-FOV video see-through HMDs are currently preferred. Independently of resolution, an additional critical issue in aiming towards narrow-FOV video see-through HMDs is the need to match the viewpoint of the video cameras with the viewpoint of the user. Matching is challenging with large-FOV systems. Also, methods for matching video and real scenes for large-FOV tiled displays must be developed. At this time, considering the growing availability of high-resolution flat-panel displays, we foresee that the resolution of see-through HMDs could gradually increase for both small- and large-FOV systems. The development and marketing of miniature high-resolution technology must be undertaken to achieve resolutions that match that of human vision.
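The texture-mapping warp mentioned above can be sketched as an inverse radial resampling of the merged image. The one-coefficient radial model and nearest-neighbor lookup below are our simplifying assumptions; a production system would use the measured distortion profile of its viewing optics and filtered texture mapping.

```python
import numpy as np

def predistort(image, k1):
    """Resample an image with an inverse radial warp so that the
    viewing optics' distortion is cancelled when viewed.

    Simple one-coefficient radial model (assumed): the output pixel
    at normalized radius r is sampled from radius r * (1 + k1 * r^2).
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # normalized coordinates in [-1, 1] about the optical axis
    nx, ny = (xs - w / 2) / (w / 2), (ys - h / 2) / (h / 2)
    r2 = nx**2 + ny**2
    sx = np.clip((nx * (1 + k1 * r2) + 1) * w / 2, 0, w - 1).astype(int)
    sy = np.clip((ny * (1 + k1 * r2) + 1) * h / 2, 0, h - 1).astype(int)
    return image[sy, sx]

merged = np.random.rand(480, 640, 3)   # combined real + virtual frame
out = predistort(merged, k1=-0.15)     # warp as the last step before display
```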

3.1.3 Field of View. A generally challenging issue of HMDs is providing the user with an adequate FOV for a given application. For most applications, increasing the binocular FOV means that fewer head movements are required to perceive an equivalently large scene. We believe that a large FOV is especially important for tasks that require grabbing and moving objects, and that it provides increased situation awareness when compared to narrow-FOV devices (Slater & Wilbur, 1997). The situation with see-through devices is somewhat different from that of fully opaque HMDs in that the aim of using the technology is different from that of immersing the user in a virtual environment.

3.1.3.1 Overlay and Peripheral FOV. The term overlay FOV is defined as the region of the FOV where graphical information and real information are superimposed. The peripheral FOV is the real-world FOV beyond the overlay FOV. For immersive opaque HMDs, no such distinction is made; one refers simply to the FOV. It is important to note that the overlay FOV may need to be narrow only for certain augmented-reality applications. For example, in a visualization tool such as the VRDA tool, only the knee-joint region is needed in the overlay FOV. In the case of video HMD-guided breast biopsy, the overlay FOV could be as narrow as the synthesized tumor. The real scene need not necessarily be synthesized. The available peripheral FOV, however, is critical for situation awareness and is most often required for various applications, whether it is provided as part of the overlay or around the overlay. If provided around the overlay, the transition from real to virtual imagery must be made as seamless as possible. This is an investigation that has not yet been addressed in video see-through HMDs.

Optical see-through HMDs typically provide from 20 deg. to 60 deg. overlay FOV via the half-transparent mirrors placed in front of the eyes, a characteristic that may seem somewhat limited but is promising for a variety of medical applications whose working visualization distance is within arm reach. Larger FOVs have been obtained, up to 82.5 × 67 deg., at the expense of reduced brightness, increased complexity, and massive, expensive technology (Welch & Shenker, 1984). Such FOVs may have been required for performing navigation tasks in real and virtual environments, but are likely not required in most augmented-reality applications. Optical see-through HMDs, however, whether or not they have a large overlay FOV, have typically been designed open enough that users can use their peripheral vision around the device, thus increasing the total real-world FOV to closely match one's natural FOV. An annulus of obstruction usually results from the mounts of the thin see-through mirror, similar to the way that our vision may be partially occluded by a frame when wearing eyeglasses.

In the design of video see-through HMDs, a difficult engineering task is matching the frustum of the eye with that of the camera (as we shall discuss in section 3.1.4). While such matching is not so critical for far-field viewing, it is important for near-field visualization as in various medical visualizations. This difficult matching problem has led to the consideration of narrower-FOV systems. A compact, 40 × 30 deg. FOV design, designed for an optical see-through HMD but adaptable to video see-through, was proposed by Manhart, Malcolm, & Frazee (1993). Video see-through HMDs, on the other hand, can provide (in terms of a see-through FOV) the FOV displayed with the opaque-type viewing optics, which typically ranges from 20 deg. to 90 deg. In such systems where the peripheral FOV of the user is occluded, the effective real-world FOV is often smaller than in optical see-through systems. When using a video see-through HMD in a hand-eye coordination task, we found in a recent human-factor study that users needed to perform larger head movements to scan an active field of vision than when performing the task with the unaided eye (Biocca & Rolland, 1998). We predict that the need to make larger head movements would not arise as much with see-through HMDs with equivalent overlay FOVs but larger peripheral FOVs, because users are provided with increased peripheral vision, and thus additional information, to more naturally perform the task.

3.1.3.2 Increasing Peripheral FOV in Video See-Through HMDs. An increase in peripheral FOV in video see-through systems can be accomplished in two ways: in a folded optical design, as used for optical see-through HMDs but with an opaque mirror instead of a half-transparent mirror, or in a nonfolded design with nonenclosed mounts. The latter calls for innovative optomechanical design because heavier optics

have to be supported than in either optical or folded video see-through. Folded systems require only a thin mirror in front of the eyes, and the heavier optical components are placed around the head. However, the tradeoff with folded systems is a significant reduction in the overlay FOV.

3.1.3.3 Tradeoff of Resolution and FOV. While the resolution of a display in an HMD is defined in the graphics community by the number of pixels, the relevant measure of resolution is the number of pixels per angular FOV, which is referred to as angular resolution. Indeed, what is of importance for usability is the angular subtense of a pixel at the eye of the HMD user. Most current high-resolution HMDs achieve higher resolution at the expense of a reduced FOV. That is, they use the same miniature, high-resolution CRTs but with optics of less magnification in order to achieve higher angular resolution. This results in a FOV that is often narrow. The approach that employs large high-resolution displays, or light valves, and transports the high-resolution images to the eyes by imaging optics coupled to a bundle of optical fibers achieves high resolution at fairly large FOVs (Thomas et al., 1989). Currently proposed solutions that improve resolution without trading FOV are either tiling techniques, high-resolution inset displays (Fernie, 1995; Rolland, Yoshida, et al., 1998), or projection HMDs (Hua et al., 2000).
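The tradeoff is easy to quantify. The helper below is a back-of-the-envelope calculation of ours; the 640-pixel width and the FOV values are taken only as representative figures from the discussion above.

```python
def angular_resolution_arcmin(pixels, fov_deg):
    """Average angular subtense of one pixel, in minutes of arc."""
    return fov_deg * 60.0 / pixels

# 640 pixels spread over a 40 deg. horizontal FOV: each pixel subtends
# ~3.75 arcmin, well short of the ~1 arcmin resolving power of the eye.
print(angular_resolution_arcmin(640, 40.0))   # -> 3.75

# The same display behind optics of lower magnification (20 deg. FOV)
# halves the pixel subtense at the cost of half the field of view.
print(angular_resolution_arcmin(640, 20.0))   # -> 1.875
```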
Projective HMDs differ from conventional HMDs in that projection optics are used instead of eyepiece optics to project real images of miniature displays into the environment. A screen placed in the environment reflects the images back to the eyes of the user. Projective HMDs have been designed and demonstrated, for example, by Kijima and Ojika (1997) and Parsons and Rolland (1998). Kijima used a conventional projection screen in his prototype. Parsons and Rolland developed a first-prototype projection HMD system to demonstrate that an undistorted virtual 3-D image could be rendered when projecting a stereo pair of images on a bent sheet of microretroreflector cubes. The first proof-of-concept system is shown in figure 12. A comprehensive investigation of the optical characteristics of projective HMDs is given by Hua et al. (2000). We are also developing next-generation prototypes of the technology using custom-made miniature lightweight optics. The system presents various advantages over conventional HMDs, including distortion-free images, occlusion of virtual objects by interposed real objects, no image cross-talk for multiuser participants in the virtual world, and the potential for a wide FOV (up to 120 deg.).

Figure 12. Proof-of-concept prototype of a projection head-mounted display with microretroreflector sheeting (1998).

3.1.4 Viewpoint Matching. In video see-through HMDs, the camera viewpoint (that is, the entrance pupil) must be matched to the viewpoint of the observer (the entrance pupil of the eye). The viewpoint of a camera or eye is equivalent to the center of projection used in the computer graphics model that computes the stereo images, and is taken here to be the center of the entrance pupil of the eye or camera (Vaissie & Rolland, 2000). In earlier video see-through designs, Edwards et al. (1993) investigated ways to mount the cameras to minimize errors in viewpoint matching. The error minimization, versus exact matching, was a consequence of working with wide-FOV systems. If the viewpoints of the cameras do not match the viewpoints of the eyes, the user experiences a spatial shift in the perceived scene that may lead to perceptual anomalies (as further

discussed under human-factors issues) (Biocca & Rolland, 1998). Error analysis should then be conducted in such a case to match the need of the application.

For cases in which the FOV is small (less than approximately 20 deg.), exact matching in viewpoints is possible. Because the cameras cannot be physically placed at the actual eyepoints, mirrors can be used to fold the optical path (much like a periscope) to make the cameras' viewpoints correspond to the real eyepoints, as shown in figure 13 (Edwards et al., 1993). While such geometry solves the problem of the shift in viewpoint, it increases the length of the optical path, which reduces the field of view, for the same reason that optical see-through HMDs tend to have smaller fields of view. Thus, video see-through HMDs must either trade their large FOVs for correct real-world viewpoints or require the user to adapt to the shifted viewpoints, as further discussed in section 3.2.3.

Finally, correctly mounting the video cameras in a video see-through HMD requires that the HMD have an interpupillary distance (IPD) adjustment. Given the IPD of a user, the lateral separation of the video cameras must then be adjusted to that value in order for the views obtained by the video cameras to match those that would have been obtained with naked eyes. If one were to account for eye movements in video see-through HMDs, the level of complexity in slaving the camera viewpoint to the user viewpoint would be highly increased. To our knowledge, such complexity has not yet been considered.
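The bookkeeping behind the IPD adjustment can be sketched as follows; this is an illustrative helper of our own, not a published procedure, and the numeric values are assumed.

```python
def camera_offsets(user_ipd_mm, camera_separation_mm):
    """Lateral mismatch per eye between camera and eye viewpoints.

    Each camera should sit at +/- IPD/2 from the head midline; any
    residual offset becomes a horizontal shift of the perceived
    scene for that eye.
    """
    mismatch = camera_separation_mm / 2.0 - user_ipd_mm / 2.0
    return mismatch, -mismatch   # right-eye, left-eye offsets

# A 65 mm IPD user wearing a rig with cameras fixed 70 mm apart sees
# each viewpoint displaced 2.5 mm outward unless the rig is adjusted.
print(camera_offsets(65.0, 70.0))   # -> (2.5, -2.5)
```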

Figure 13. A 10-degree-FOV video see-through HMD: Dglasses developed at UNC-CH. Lipstick cameras and a double-fold mirror arrangement were used to match the viewpoints of the camera and user (1997).

3.1.5 Engineering and Cost Factors. HMD designs often suffer from fairly low resolution, limited FOV, poor ergonomic designs, and excessive weight. A good ergonomic design requires an HMD whose weight is similar to a pair of eyeglasses, or which folds around the user's head so the device's center of gravity falls near the center of rotation of the head (Rolland, 1994). The goal here is maximum comfort and usability. Reasonably lightweight HMD designs currently suffer narrow FOVs, on the order of 20 deg. To our knowledge, at present, no large-FOV stereo see-through HMDs of any type are comparable in weight to a pair of eyeglasses. Rolland predicts that this could be achieved with some emerging technology of projection HMDs (Rolland, Parsons, et al., 1998). However, it must be noted that such technology may not be well suited to all visualization schemes, as it requires a projection screen somewhere in front of the user that is not necessarily attached to the user's head.

With optical see-through HMDs, the folding can be accomplished with either an on-axis or an off-axis design. Off-axis designs are more elegant and also far more attractive because they eliminate the ghost images that currently plague users of on-axis HMDs (Rolland, 2000). Off-axis designs are not commercially available because very few prototypes have been built (and those that have been built are classified) (Shenker, 1998). Moreover, off-axis systems are difficult to design and are thus expensive to build as a result of off-axis components (Shenker, 1994). A nonclassified, off-axis design has been designed by Rolland (1994, 2000). Several factors (including cost) have also hindered the construction of a first prototype as well. New generations of computer-controlled fabrication and testing are expected to change this trend.

Since their beginning, high-resolution HMDs have been CRT based. Early systems were even monochrome, but color CRTs using color wheels or frame-sequential color have been fabricated and incorporated into HMDs (Allen, 1993). Five years ago, we may have thought that, today, high-resolution, color, flat-panel displays would be the first choice for HMDs. While this is slowly happening, miniature CRTs are not fully obsolete. The current optimism, however, is prompted by new technologies such as reflective LCDs, microelectromechanical systems (MEMS)-based displays, laser-based displays, and nanotechnology-based displays.

3.2 Human-Factor and Perceptual Issues

Assuming that many of the technological challenges described have been addressed and high-performance HMDs can be built, a key human-factor issue for see-through HMDs is that of user acceptance and safety, which will be discussed first. We shall then discuss the technicalities of perception in such displays. The ultimate see-through display is one that provides quantitative and qualitative visual representations of scenes that conform to a predictive model (for example, conform to that given by the real world if that is the intention). Issues include the accuracy and precision of the rendered and perceived locations of objects in depth, the accuracy and precision of the rendered and perceived sizes of real and virtual objects in a scene, and the need for an unobstructed peripheral FOV (which is important for many tasks that require situation awareness and the simple manipulation of objects and accessories).

3.2.1 User Acceptance and Safety. A fair question for either type of technology is "will anyone actually wear one of these devices for extended periods?" The answer will doubtless be specific to the application and the technology included, but it will probably center upon whether the advanced capabilities afforded by the technology offset the problems induced by the encumbrance and sensory conflicts that are associated with it. In particular, one of us thinks that video see-through HMDs may be met with resistance in the workplace because they remove the direct, real-world view in order to augment it. This issue of trust may be difficult to overcome for some users. If wide-angle-FOV video see-through HMDs are used, this problem is exacerbated in safety-critical applications. A key difference in such applications may turn out to be the failure mode of each technology. A technology failure in the case of optical see-through HMDs may leave the subject without any computer-generated images but still with the real-world view. In the case of video see-through, it may leave the user with the complete suppression of the real-world view, as well as the computer-generated view.

However, it may be that the issue has been greatly lessened because the video view occupies such a small fraction (approximately 10 deg. visual angle) of the scene in recent developments of the technology. This is especially true of flip-up and flip-down devices such as that developed at UNC-CH and shown in figure 13.

Image quality and its tradeoffs are definitely critical issues related to user acceptance for all types of technology. In a personal communication, Martin Shenker, a senior optical engineer with more than twenty years of experience designing HMDs, pointed out that there are currently no standards of image quality and technology specifications for the design, calibration, and maintenance of HMDs. This is a current concern at a time when the technology may be adopted in various medical visualizations.

3.2.2 Perceived Depth. 3.2.2.1 Occlusion. The ability to perform occlusion in see-through HMDs is an important issue of comparison between optical and video see-through HMDs. One of the most important differences between these two technologies is how they handle the depth cue known as occlusion (or interposition). In real life, an opaque object can block the view of another object so that part or all of it is not visible. While there is no problem in making computer-generated objects occlude each other in either system, it is considerably more difficult to make real objects occlude

virtual objects (and vice versa) unless the real world for an application is predefined and has been modeled in the computer. Even then, one would need to know the exact location of a user with respect to that real environment. This is not the case in most augmented-reality applications, in which the real world is constantly changing and on-the-fly acquisition is all the information one will ever have of the real world. Occlusion is a strong monocular cue to depth perception and may be required in certain applications (Cutting & Vishton, 1995).

In both systems, computing occlusion between the real and virtual scenes requires a depth map of both scenes. A depth map of the virtual scene is usually available (for z-buffered image generators), but a depth map of the real scene is a much more difficult problem. While one could create a depth map in advance from a static real environment, many applications require on-the-fly image acquisition of the real scene. Assuming the system has a depth map of the real environment, video see-through HMDs are perfectly positioned to take advantage of this information. They can, on a pixel-by-pixel basis, selectively block the view of either scene or even blend them to minimize edge artifacts. One of the chief advantages of video see-through HMDs is that they handle this problem so well.
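A minimal sketch of this pixel-by-pixel decision, assuming both depth maps are available (obtaining the real-scene depth map on the fly is, as noted above, the hard part); the NumPy arrays and all names are our assumptions.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Pixel-by-pixel occlusion between real and virtual scenes.

    Whichever surface is nearer at a pixel wins that pixel; edge
    blending is omitted to keep the sketch short.
    """
    nearer_virtual = (virt_depth < real_depth)[..., None]
    return np.where(nearer_virtual, virt_rgb, real_rgb)

h, w = 480, 640
real_rgb = np.random.rand(h, w, 3)
real_depth = np.full((h, w), 2.0)             # real surface 2 m away
virt_rgb = np.zeros((h, w, 3))
virt_depth = np.full((h, w), np.inf)          # empty virtual scene...
virt_rgb[100:200, 100:200] = (0.2, 0.8, 0.2)  # ...except one patch
virt_depth[100:200, 100:200] = 1.5            # 0.5 m in front of the wall
out = composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth)
```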
The situation for optical see-through HMDs can be more complex. Existing optical see-through HMDs blend the two images with beam splitters, which blend the real and virtual images uniformly throughout the FOV. Normally, the only control the designer has is the amount of reflectance versus transmittance of the beam splitter, which can be chosen to match the brightness of the displays with the expected light levels in the real-world environment. If the system has a model of the real environment, it is possible to have real objects occlude virtual ones by simply not drawing the occluded parts of the virtual objects. The only light will then be from the real objects, giving the illusion that they are occluding the virtual ones. Such an effect requires a darkened room with light directed where it is needed. This technique has been used by CAE Electronics in their flight simulator. When the pilots look out the window, they see computer-generated objects. If they look inside the cockpit, however, the appropriate pixels of the computer-generated image are masked so that they can see the real instruments. The room is kept fairly dark so that this technique will work (Barrette, 1992). David Mizell (from Boeing Seattle) and Tom Caudell (University of New Mexico) are also using this technique; they refer to it as "fused reality" (Mizell, 1998).

While optical see-through HMDs can allow real objects to occlude virtual objects, the reverse is even more challenging because normal beam splitters have no way of selectively blocking out the real environment. This problem has at least two possible partial solutions. The first solution is to spatially control the light levels in the real environment and to use displays that are bright enough so that the virtual objects mask the real ones by reason of contrast. (This approach is used in the flight simulator just mentioned for creating the virtual instruments.) This may be a solution for a few applications. A possible second solution would be to locally attenuate the real-world view by using an addressable filter device placed on the see-through mirror. It is possible to generate partial occlusion in this manner because the effective beam of light entering the eye from some point in the scene covers only a small area of the beam splitter, the eye pupil being typically 2 mm to 4 mm in photopic vision. A problem with this approach is that the user does not focus on the beam splitter, but rather somewhere in the scene. A point in the scene maps to a disk on the beam splitter, and various points in the scene map to overlapping disks on the beam splitter. Thus, any blocking done at the beam splitter may occlude more of the scene than expected, which might lead to odd visual effects. A final possibility is that some applications may work acceptably without properly rendered occlusion cues. That is, in some cases, the user may be able to use other depth cues, such as head-motion parallax, to resolve the ambiguity caused by the lack of occlusion cues.

3.2.2.2 Rendered Locations of Objects in Depth. We shall distinguish between errors in the rendered and perceived locations of objects in depth. The former yields the latter. One can conceive, however, that errors in the perceived location of objects in depth can also occur

3.2.2.3 FOV and Frame-Buffer Overscan. Inaccuracies of a few degrees in FOV are easily made if no calibration is conducted. Such inaccuracies can lead to significant errors in rendered depth, depending on the imaging geometry. For some medical and computer-guided surgery applications, for example, errors of several millimeters are likely to be unacceptable. The FOV and the overscan of the frame buffer must therefore be measured and accounted for; they are critical parameters for accurate stereo pair generation in HMDs (Rolland et al., 1995). These parameters must be set correctly regardless of whether the technology is optical or video see-through.
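To see how a small FOV miscalibration propagates into rendered depth, consider a point centered between the eyes under a simple pinhole model of stereo rendering. The following back-of-the-envelope sketch is our own illustration, not a calibration procedure from the literature:

```python
import numpy as np

def perceived_depth_mm(z_mm, fov_render_deg, fov_display_deg):
    """Depth at which a stereoscopically rendered point is triangulated
    when the renderer assumes one FOV but the display provides another.
    For a point centered between the eyes, the interpupillary distance
    cancels, leaving a pure ratio of half-FOV tangents."""
    t = lambda deg: np.tan(np.radians(deg) / 2)
    return z_mm * t(fov_render_deg) / t(fov_display_deg)

# A 2 deg. FOV error on a nominal 60 deg. display shifts a target
# rendered at 500 mm to ~480 mm, i.e., a ~20 mm depth error.
print(perceived_depth_mm(500.0, 60.0, 62.0))
```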
3.2.2.4 Specification of Eyepoint Location. The locations of the user's eyepoints, which are used to render the stereo images from the correct viewpoints, must be specified for accurate rendered depth. This applies to both optical and video see-through HMDs. In addition, for video see-through HMDs, the real-scene video images must be acquired from the correct viewpoint (Biocca & Rolland, 1998).

For the computer-graphics generation component, three choices of eyepoint location within the human eye have been proposed: the nodal point of the eye³ (Robinett & Rolland, 1992; Deering, 1992), the entrance pupil of the eye (Rolland, 1994; Rolland et al., 1995), and the center of rotation of the eye (Holloway, 1995). Rolland (1995) argues that the choice of the nodal point would in fact yield errors in rendered depth in all cases, whether the eyes are tracked or not. For a device with eye-tracking capability, the entrance pupil of the eye should be taken as the eyepoint. If eye movements are ignored, meaning that the computer-graphics eyepoints are fixed, then it was proposed that it is best to select the center of rotation of the eye as the eyepoint (Fry, 1969; Holloway, 1995). An in-depth analysis of this issue reveals that while the center of rotation yields higher accuracy in position, the center of the entrance pupil yields in fact higher angular accuracy (Vaissie & Rolland, 2000). Therefore, depending on the task involved, and on whether angular accuracy or position accuracy matters most, the center of rotation or the center of the entrance pupil may be selected as the best eyepoint location in HMDs.

3. Nodal points are conjugate points in an optical system that satisfy an angular magnification of 1. Two points are conjugates of each other if they are images of each other.

3.2.2.5 Residual Optical Distortions. Optical distortion is one of the few optical aberrations that do not affect image sharpness; rather, it introduces warping of the image. It occurs only for optics that include lenses. If the optics include only plane mirrors, there are no distortions (Peuchot, 1994). Warping of the images leads to errors in rendered depth. Distortion results from the locations of the user's pupils away from the nodal points of the optics. Moreover, it varies as a function of where the user looks through the optics. However, if the optics are well calibrated to account for the user's IPD, distortion will be fairly constant for typical eye movements behind the optics. Prewarping of the computer-generated image can thus be conducted to compensate for the residual optical distortion (Robinett & Rolland, 1992; Rolland & Hopkins, 1993; Watson & Hodges, 1995).
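As an illustration of such prewarping, a first-order radial model often suffices. The sketch below is our own, with a hypothetical distortion coefficient, and is not the specific correction used by the authors cited above:

```python
import numpy as np

def prewarp(xy, k1):
    """Predistort normalized image coordinates (N, 2), measured from the
    optical axis, so that optics with radial distortion r -> r(1 + k1*r^2)
    approximately restore the intended geometry. The factor (1 - k1*r^2)
    is the first-order inverse, valid when k1*r^2 is small."""
    r2 = np.sum(xy * xy, axis=1, keepdims=True)
    return xy * (1.0 - k1 * r2)

# Round trip for a point near the edge of the field, with k1 = 0.08:
x = np.array([[0.9, 0.0]])
xd = prewarp(x, 0.08)
print(xd * (1.0 + 0.08 * np.sum(xd * xd, axis=1, keepdims=True)))  # ~[0.89, 0]
```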
[Figure 14. (a) Bench prototype head-mounted display with head-motion parallax developed in the VGILab at UCF (1997). (b) Schematic of the optical imaging from a top view of the setup.]

3.2.2.6 Perceived Location of Objects in Depth. Once depths are accurately rendered according to a given computational model and the stereo images are presented according to that model, the perceived location and size of objects in depth become an important issue in the assessment of the technology and the model. Accuracy and precision can be defined only statistically. Given an ensemble of measured perceived locations of objects in depth, the depth percept is accurate if objects appear on average at the location predicted by the computational model. The perceived location of objects in depth is precise if objects appear within a small spatial zone around that average location. We shall distinguish between overlapping and nonoverlapping objects.
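In code, these definitions reduce to the bias and spread of the signed depth errors. The snippet below uses made-up trial data, chosen only to echo the roughly 50 mm shift reported in the investigation described below; it is not data from the studies cited:

```python
import numpy as np

def accuracy_precision_mm(measured_mm, predicted_mm):
    """Accuracy = mean signed error of the perceived-depth settings;
    precision = their standard deviation."""
    errors = np.asarray(measured_mm, dtype=float) - predicted_mm
    return errors.mean(), errors.std(ddof=1)

# Hypothetical depth settings (mm) for a virtual object predicted at 800 mm:
trials = [852, 847, 860, 838, 855, 849, 858, 844, 851, 846]
bias, spread = accuracy_precision_mm(trials, 800.0)
print(f"bias = {bias:.1f} mm, precision = {spread:.1f} mm")  # 50 mm shift
```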
In the case of nonoverlapping objects, one may resort to depth cues other than occlusion. These include familiar size, stereopsis, perspective, texture, and motion parallax. A psychophysical investigation of the perceived location of objects in depth in an optical see-through HMD, using stereopsis and perspective as the visual cues to depth, is given in Rolland et al. (1995) and Rolland and Arthur (1997). The HMD shown in figure 14 is mounted on a bench for calibration purposes and for flexibility in various parameter settings.

In a first investigation, a systematic shift of 50 mm in the perceived location of objects in depth versus predicted values was found (Rolland et al., 1995). Moreover, the precision of the measures varied significantly across subjects. As we have learned more about the interface optics and the computational model used in the generation of the stereo image pairs, and have improved on the technology, we have demonstrated errors on the order of 2 mm. The technology is now ready to deploy for extensive testing in specific applications, and the VRDA tool is one of the applications we are currently pursuing.

Studies of the perceived location of objects in depth for overlapping objects in an optical see-through HMD have been conducted by Ellis and Bucher (1994). They showed that the perceived location of virtual objects can be affected by the presence of a nearby opaque physical object. When a physical object was positioned in front of (or at) the initial perceived location of a 3-D virtual object, the virtual object appeared to move closer to the observer. In the case in which the opaque physical object was positioned substantially in front of the virtual object, human subjects often perceived the opaque object to be transparent. In current investigations with the VRDA tool, the opaque leg model appears transparent when a virtual knee model is projected on the leg, as seen in figure 4. The virtual anatomy subjectively appears to be inside the leg model (Baillot, 1999; Outters et al., 1999; Baillot et al., 2000).

3.2.3 Adaptation. When a system does not offer what the user ultimately wants, two paths may be taken: improving on the current technology, or first studying the ability of the human system to adapt to an imperfect technological unit and then developing adaptation training when appropriate. The latter is possible because of the astonishing ability of the human visual and proprioceptive systems to adapt to new environments, as has been shown in studies on adaptation (Rock, 1966, for example).

Biocca and Rolland (1998) conducted a study of adaptation to visual displacement using a large-FOV video see-through HMD. Users see the real world through two cameras that are located 62 mm higher than and 165 mm forward from their natural eyepoints, as shown in figure 2. Subjects showed evidence of perceptual adaptation to sensory disarrangement during the course of the study. This revealed itself as improvement in performance over time while wearing the see-through HMD, and as negative aftereffects once they removed it. More precisely, the negative aftereffect manifested itself clearly as a large overshoot in a depth-pointing task, as well as an upward translation in a lateral pointing task, after wearing the HMD. Moreover, some participants experienced early signs of cybersickness.

The presence of negative aftereffects has some potentially disturbing practical implications for the diffusion of large-FOV video see-through HMDs (Kennedy & Stanney, 1997). Some of the intended early users of these HMDs are surgeons and other individuals in the medical profession. Hand-eye sensory recalibration for highly skilled users (such as surgeons) could have potentially disturbing consequences if the surgeon were to enter surgery within some period after using an HMD. It is an empirical question how long the negative aftereffects might persist and whether a program of gradual adaptation (Welch, 1994) or dual adaptation (Welch, 1993) might minimize the effect altogether. In any case, any shift in the camera eyepoints needs to be minimized as much as possible to facilitate the adaptation process that is taking place. As we learn more about these issues, we will build devices with less error and more similarity between using these systems and a pair of eyeglasses (so that adaptation takes less time and aftereffects decrease as well).

A remaining issue is the conflict between accommodation and convergence in such displays. The issue can be solved at some cost (Rolland et al., 2000). For lower-end systems, a question to investigate is how users adapt to various settings of the technology. For high-end systems, much research is still needed to understand the importance of perceptual conflicts and how best to minimize them.

3.2.4 Peripheral FOV. Given that peripheral vision can be provided in both optical and video see-through systems, the next question is whether it is used effectively in both systems. In optical see-through, there is almost no transition or discrepancy between the real scene captured by the see-through device and the peripheral vision seen on the side of the device.

For video see-through, the peripheral FOV has been provided by letting the user see around the device, as with optical see-through. However, it remains to be seen whether the difference in presentation between the superimposed real scene and the peripheral real scene will cause discomfort or provide conflicting cues to the user. The issue is that, in various cases, the virtual displays call for a different accommodation for the user than the real scene.
3.2.5 Depth of Field. One important property of optical systems, including the visual system, is depth of field: the range of distances from the detector (such as the eye) over which an object appears to be in focus without the need for a change in the optics' focus (such as eye accommodation). In the human visual system, for example, if an object is accurately focused monocularly, other objects somewhat nearer and farther away are also seen clearly without any change in accommodation. Still nearer or farther objects are blurred. Depth of field reduces the necessity for precise accommodation and is markedly influenced by the diameter of the pupil; the larger the pupil, the smaller the depth of field. For a 2 mm and a 4 mm pupil, the depths of field are ±0.06 and ±0.03 diopters, respectively. For a 4 mm pupil, for example, such a depth of field translates into clear focus from 0.94 m to 1.06 m for an object 1 m away, and from 11 m to 33 m for an object 17 m away (Campbell, 1957; Moses, 1970). An important point is that accommodation plays an important role only at close working distances, where the depth of field is narrow.
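Converting a dioptric depth of field into near and far distances in object space is a one-line computation; this small sketch (our own) approximately reproduces the ranges quoted above:

```python
def clear_focus_range_m(distance_m, dof_diopters):
    """Near/far limits of apparently sharp vision around a fixation
    distance, for a depth of field of +/- dof_diopters (1 D = 1 per m)."""
    vergence = 1.0 / distance_m
    far_vergence = vergence - dof_diopters
    near = 1.0 / (vergence + dof_diopters)
    far = float("inf") if far_vergence <= 0 else 1.0 / far_vergence
    return near, far

print(clear_focus_range_m(1.0, 0.06))   # ~(0.94, 1.06) m
print(clear_focus_range_m(17.0, 0.03))  # ~(11.3, 34.7) m
```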
With video see-through systems, the miniature cameras that acquire the real-scene images must provide a depth of field equivalent to the required working distance for a task. For a large range of working distances, the camera may need to be focused at the middle working distance. For closer distances, the small depth of field may require an autofocus instead of a fixed-focus camera.
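One common way to choose a fixed focus, sketched below with hypothetical lens parameters of our own choosing, is the hyperfocal distance:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.01):
    """Hyperfocal distance: a lens focused there renders everything from
    about half that distance out to infinity acceptably sharp, given a
    circle of confusion of coc_mm on the sensor."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Hypothetical miniature camera: 4 mm lens at f/2.8, 10 micron blur circle.
print(hyperfocal_mm(4.0, 2.8))  # ~575 mm; sharp from ~0.29 m to infinity
```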
With optical see-through systems, the available depth of field for the real scene is essentially that of the human visual system, but for a larger pupil than with unaided eyes. This can be explained by the brightness attenuation of the real scene by the half-transparent mirror: as a result, the pupils are dilated (we assume here that the real and virtual scenes are matched in brightness). Therefore, the effective depth of field is slightly less than with unaided eyes. This is a problem only if the user is working with nearby objects and the virtual images are focused outside of the depth of field that is required for nearby objects. For the virtual images, if there is no autofocus capability, the depth of field is imposed by the human visual system around the location of the displayed virtual images. When the retinal images are not sharp, following some discrepancy in accommodation, the visual system is constantly processing somewhat blurred images and tends to tolerate blur up to the point at which essential detail is obscured. This tolerance for blur considerably extends the apparent depth of field, so that the eye may be as much as ±0.25 diopters out of focus without stimulating accommodative change (Moses, 1970).
3.2.6 Qualitative Aspects. The representation of virtual objects, and in some cases of real objects, is altered by see-through devices. Aspects of perceptual representation include the shape of objects, their color, brightness, contrast, shading, texture, and level of detail. In the case of optical see-through HMDs, folding the optical path by using a half-transparent mirror is necessary because it is the only configuration that leaves the real scene almost unaltered. A thin folding mirror will introduce a small apparent shift in the depth of real objects, precisely equal to e(n − 1)/n, where e is the thickness of the plate and n is its index of refraction; for example, a 3 mm-thick plate of index 1.5 shifts real objects in depth by 1 mm. This is in addition to a small amount of distortion (less than 1%) of the scene at the edges of a 60 deg. FOV. Consequently, real objects are seen basically unaltered.

Virtual objects, on the other hand, are formed from the fusion of stereo images formed through magnifying optics. Each optical virtual image of the display associated with each eye is typically optically aberrated. For large-FOV optics such as HMDs, astigmatism and chromatic aberrations are often the limiting factors. Custom-designed HMD optics can be analyzed from a visual-performance point of view (Shenker, 1994; Rolland, 2000). Such analysis allows the prediction of the expected visual performance of HMD users.

It must be noted that real and virtual objects in such systems may be seen sharply by accommodating in different planes under most visualization settings. This yields conflicts in accommodation for real and virtual imagery. For applications in which the virtual objects are presented in a small working volume around some mean display distance (such as arm's-length visualization), the 2-D optical images of the miniature displays can be located at that same distance to minimize conflicts in accommodation and convergence between real and virtual objects. Another approach to minimizing conflicts in accommodation and convergence is multifocal-planes technology, as described in Rolland et al. (2000).

Besides brightness attenuation and distortion, other aspects of object representation are altered in video see-through HMDs. The authors' experience with at least one system is that the color and brightness of real objects are altered, along with a loss in texture and level of detail, due to the limited resolution of the miniature video cameras and the wide-angle optical viewer. This alteration includes spatial, luminance, and color resolution. This is perhaps resolvable with improved technology, but it currently limits the ability of the HMD user to perceive real objects as they would appear with unaided eyes. In wide-FOV video see-through HMDs, both real and virtual objects call for the same accommodation; however, conflicts of accommodation and convergence are also present. As with optical see-through HMDs, these conflicts can be minimized if objects are perceived at a relatively constant depth near the plane of the optical images. In narrow-FOV systems, in which the real scene is seen in large part outside the overlay imagery, conflicts in accommodation can also result between the real and computer-generated scenes.

For both technologies, a solution to these various conflicts in accommodation may be to allow autofocus of the 2-D virtual images as a function of the location of the user's gaze point in the virtual environment, or to implement multifocal planes (Rolland et al., 2000). Given eye-tracking capability, autofocus could be provided because small displacements of the miniature display near the focal plane of the optics would yield large axial displacements of the 2-D virtual images in the projected virtual space; the 2-D virtual images would move in depth according to the user's gaze point. Multifocal planes also allow autofocusing, but with no need for eye tracking.
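The leverage that makes such autofocus practical follows from Newton's imaging relation: near the focal plane, an axial display displacement dz changes the image vergence by approximately dz/f². A small sketch of this relation (our own, with a hypothetical focal length):

```python
def vergence_shift_diopters(display_shift_mm, focal_mm):
    """Change in virtual-image vergence when a microdisplay is translated
    axially near the focal plane of collimating optics of focal length
    focal_mm; from Newton's imaging relation the change is ~ dz / f^2."""
    return (display_shift_mm / 1000.0) / (focal_mm / 1000.0) ** 2

# With hypothetical 30 mm eyepiece optics, a 0.9 mm display translation
# refocuses the virtual image by ~1 diopter (optical infinity -> 1 m).
print(vergence_shift_diopters(0.9, 30.0))  # ~1.0
```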
4 Conclusion

We have discussed issues involving optical and video see-through HMDs. The most important issues are system latency, occlusion, the fidelity of the real-world view, and user acceptance. Optical see-through systems offer an essentially unhindered view of the real environment; they also provide an instantaneous real-world view that ensures that visual and proprioceptive information is synchronized. Video systems forfeit the unhindered view in return for an improved ability to see real and synthetic imagery simultaneously.

Some of us working with optical see-through devices strongly feel that providing the real scene through optical means is important for applications, such as medical visualization, in which human lives are at stake. Others, working with video see-through devices, feel that a flip-up view is adequate for the safety of the patient. Also, how to render occlusion of the real scene at given spatial locations may be important. Video see-through systems can also guarantee registration of the real and virtual scenes, at the expense of a mismatch between vision and proprioception. This may or may not be perceived as a penalty if the human observer is able to adapt to such a mismatch. Hybrid solutions, such as that developed by Peuchot (1994), which include optical see-through technology for visualization and video technology for tracking objects in the real environment, may play a key role in future developments of technology for 3-D medical visualization.

Clearly, there is no "right" system for all applications: each of the tradeoffs discussed in this paper must be examined with respect to specific applications and available technology to determine which type of system is most appropriate. Furthermore, additional HMD features such as multiplane focusing and eye tracking are currently being investigated at various research and development sites and may provide solutions to current perceptual conflicts in HMDs. A shared concern among scientists developing this technology further is the lack of standards, not only in the design but also, most importantly, in the calibration and maintenance of HMD systems.

Acknowledgments

This review was expanded from an earlier SPIE proceedings paper by Rolland, Holloway, and Fuchs (1994), and the authors would like to thank Rich Holloway for his earlier contribution to this work. We thank Myron Krueger from Artificial Reality Corp. for stimulating discussions on various aspects of the technology, as well as Martin Shenker from M.S.O.D. and Brian Welch from CAE Electronics for discussions on current optical technologies. Finally, we thank Bernard Peuchot, Derek Hill, and Andrei State for providing information about their research that has significantly contributed to the improvement of this paper. We deeply thank our various sponsors, not only for their financial support, which has greatly facilitated our research in see-through devices, but also for the stimulating discussions they have provided over the years. Contracts and grants include ARPA DABT 63-93-C-0048, NSF Cooperative Agreement ASC-8920219 (Science and Technology Center for Computer Graphics and Scientific Visualization), ONR N00014-86-K-0680, ONR N00014-94-1-0503, ONR N000149710654, NIH 5-R24-RR-02170, NIH 1-R29LM06322-O1A1, and DAAH04-96-C-0086.
References

Allen, D. (1993). A high-resolution field sequential display for head-mounted applications. Proc. of IEEE Virtual Reality Annual International Symposium (VRAIS '93) (pp. 364–370).

Azuma, R., & Bishop, G. (1994). Improving static and dynamic registration in an optical see-through HMD. Computer Graphics: Proceedings of SIGGRAPH '94 (pp. 197–204).

Baillot, Y. (1999). First implementation of the Virtual Reality Dynamic Anatomy (VRDA) tool. Unpublished master's thesis, University of Central Florida.

Baillot, Y., & Rolland, J. P. (1998). Modeling of a knee joint for the VRDA tool. Proc. of Medicine Meets Virtual Reality (pp. 366–367).

Baillot, Y., Rolland, J. P., & Wright, D. (1999). Kinematic modeling of knee-joint motion for a virtual reality tool. Proc. of Medicine Meets Virtual Reality (pp. 30–35).

Baillot, Y., Rolland, J. P., Lin, K., & Wright, D. L. (2000). Automatic modeling of knee-joint motion for the Virtual Reality Dynamic Anatomy (VRDA) tool. Presence: Teleoperators and Virtual Environments, 9(3), 223–235.

Bajura, M., Fuchs, H., & Ohbuchi, R. (1992). Merging virtual objects with the real world. Computer Graphics, 26, 203–210.

Bajura, M., & Neumann, U. (1995). Dynamic registration correction in video-based augmented reality systems. IEEE Computer Graphics and Applications, 15(5), 52–60.

Barrette, R. E. (1992). Wide field of view, full-color, high-resolution, helmet-mounted display. Proc. of SID '92 Symposium (pp. 69–72).

Biocca, F. A., & Rolland, J. P. (1998). Virtual eyes can rearrange your body: Adaptation to virtual-eye location in see-thru head-mounted displays. Presence: Teleoperators and Virtual Environments, 7(3), 262–277.

Bishop, G., Fuchs, H., McMillan, L., & Scher-Zagier, E. J. (1994). Frameless rendering: Double buffering considered harmful. Proc. of SIGGRAPH '94 (pp. 175–176).

Brooks, F. P. (1992). Walkthrough project: Final technical report to National Science Foundation Computer and Information Science and Engineering. Technical Report TR92-026, University of North Carolina at Chapel Hill.

Buchroeder, R. A., Seeley, G. W., & Vukobratovich, D. (1981). Design of a catadioptric VCASS helmet-mounted display. Optical Sciences Center, University of Arizona, under contract to U.S. Air Force Armstrong Aerospace Medical Research Laboratory, Wright-Patterson Air Force Base, Dayton, Ohio, AFAMRL-TR-81-133.

Campbell, F. W. (1957). The depth of field of the human eye. Optica Acta, 4, 157–164.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving the layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of Space and Motion (pp. 69–117). Academic Press.

Deering, M. (1992). High resolution virtual reality. Proc. of SIGGRAPH '92, Computer Graphics, 26(2), 195–201.

Desplat, S. (1997). Caractérisation des éléments actuels et futurs de l'oculomètre Metrovision-Sextant. Technical report, École Nationale Supérieure de Physique de Marseille, November.

Dhome, M., Richetin, M., Lapreste, J. P., & Rives, G. (1989). Determination of the attitude of 3-D objects from a single perspective view. IEEE Trans. Pattern Analysis and Machine Intelligence, 11(12), 1265–1278.

Droessler, J. G., & Rotier, D. J. (1990). Tilted cat helmet-mounted display. Optical Engineering, 29(8), 849–854.

Edwards, E. K., Rolland, J. P., & Keller, K. P. (1993). Video see-through design for merging of real and virtual environments. Proc. of IEEE VRAIS '93 (pp. 223–233).

Edwards, P. J., Hawkes, D. J., Hill, D. L. G., Jewell, D., Spink, R., Strong, A., & Gleeson, M. (1995). Augmentation of reality using an operating microscope for otolaryngology and neurosurgical guidance. J. Image Guided Surgery, 1(3), 172–178.

Ellis, S. R., & Bucher, U. J. (1994). Distance perception of stereoscopically presented virtual objects optically superimposed on physical objects in a head-mounted see-through display. Proc. of the Human Factors and Ergonomics Society, Nashville.

Fernie, A. (1995). A 3.1 helmet-mounted display with dual resolution. CAE Electronics, Ltd., Montreal, Canada, SID 95 Applications Digest, 37–40.

Fernie, A. (1995). Improvements in area-of-interest helmet-mounted displays. Proc. of SID 95.

Fry, G. A. (1969). Geometrical Optics. Chilton Book Company.

Furness, T. A. (1986). The super cockpit and its human factors challenges. Proceedings of the Human Factors Society, 30, 48–52.

Held, R., & Durlach, N. (1987). Telepresence, time delay and adaptation. NASA Conference Publication 10032.

Holloway, R. (1995). An analysis of registration errors in a see-through head-mounted display system for craniofacial surgery planning. Unpublished doctoral dissertation, University of North Carolina at Chapel Hill.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics (in press).

Jacobs, M. C., Livingston, M. A., & State, A. (1997). Managing latency in complex augmented reality systems. Proc. of 1997 Symposium on Interactive 3D Graphics, ACM SIGGRAPH, 235–240.

Kaiser Electro-Optics. (1994). Personal communication from Frank Hepburn of KEO, Carlsbad, CA. A general description of Kaiser's VIM system ("Full immersion head-mounted display system") is available via ARPA's ESTO home page: http://esto.sysplan.com/ESTO/.

Kancherla, A., Rolland, J. P., Wright, D., & Burdea, G. (1995). A novel virtual reality tool for teaching dynamic 3D anatomy. Proc. of CVRMed '95, 163–169.

Kandebo, S. W. (1988). Navy to evaluate Agile Eye helmet-mounted display system. Aviation Week & Space Technology, August 15, 94–99.

Kennedy, R. S., & Stanney, K. M. (1997). Aftereffects in virtual environment exposure: Psychometric issues. In M. J. Smith, G. Salvendy, & R. J. Koubek (Eds.), Design of Computing.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proc. of VRAIS '97, 130–137.

Manhart, P. K., Malcom, R. J., & Frazee, J. G. (1993). Augeye: A compact, solid Schmidt optical relay for helmet mounted displays. Proc. of IEEE VRAIS '93, 234–245.

Mizell, D. (1998). Personal communication.

Moses, R. A. (1970). Adler's Physiology of the Eye. St. Louis, MO: Mosby.

Oster, P. J., & Stern, J. A. (1980). Measurement of eye movement. In I. Martin & P. H. Venables (Eds.), Techniques of Psychophysiology. New York: John Wiley & Sons.

Outters, V., Argotti, Y., & Rolland, J. P. (1999). Knee motion capture and representation in augmented reality. Technical Report TR99-006, University of Central Florida.

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proc. of Medicine Meets Virtual Reality, 246–251.

Peuchot, B. (1993). Camera virtual equivalent model: 0.01 pixel detectors. Special issue on 3D Advanced Image Processing in Medicine, Computerized Medical Imaging and Graphics, 17(4/5), 289–294.

Peuchot, B. (1994). Utilisation de détecteurs subpixels dans la modélisation d'une caméra: vérification de l'hypothèse sténopé. 9e Congrès AFCET, Reconnaissance des Formes et Intelligence Artificielle, Paris.

Peuchot, B. (1998). Personal communication.

Peuchot, B., Tanguy, A., & Eude, M. (1994). Dispositif optique pour la visualisation d'une image virtuelle tridimensionnelle en superposition avec un objet, notamment pour des applications chirurgicales. Dépôt CNRS #94106623, May 31.

Peuchot, B., Tanguy, A., & Eude, M. (1995). Virtual reality as an operative tool during scoliosis surgery. Proc. of CVRMed '95, 549–554.

Robinett, W., & Rolland, J. P. (1992). A computational model for the stereoscopic optics of a head-mounted display. Presence: Teleoperators and Virtual Environments, 1(1), 45–62.

Rock, I. (1966). The Nature of Perceptual Adaptation. New York: Basic Books.

Rolland, J. P. (1994). Head-mounted displays for virtual environments: The optical interface. International Optical Design Conference 94, Proc. OSA, 22, 329–333.

Rolland, J. P. (1998). Head-mounted displays. Optics and Photonics News, 9(11), 26–30.

Rolland, J. P. (2000). Wide-angle, off-axis, see-through head-mounted display. Optical Engineering (Special Issue on Pushing the Envelope in Optical Design Software), 39(7) (in press).

Rolland, J. P., Ariely, D., & Gibson, W. (1995). Towards quantifying depth and size perception in virtual environments. Presence: Teleoperators and Virtual Environments, 4(1), 24–49.

Rolland, J. P., & Arthur, K. (1997). Study of depth judgments in a see-through head-mounted display. Proceedings of SPIE, 3058 (AEROSENSE), 66–75.

Rolland, J. P., Holloway, R. L., & Fuchs, H. (1994). A comparison of optical and video see-through head-mounted displays. Proceedings of SPIE, 2351, 293–307.

Rolland, J. P., & Hopkins, T. (1993). A method for computational correction of optical distortion in head-mounted displays. Technical Report TR93-045, University of North Carolina at Chapel Hill.

Rolland, J. P., Krueger, M., & Goon, A. (2000). Multifocus planes in head-mounted displays. Applied Optics, 39(19) (in press).

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998). Conformal optics for 3D visualization. Proc. of the International Lens Design Conference, Hawaii (June), 760–764.

Rolland, J. P., Yoshida, A., Davis, L., & Reif, J. H. (1998). High-resolution inset head-mounted display. Applied Optics, 37(19), 4183–4193.

Roscoe, S. N. (1984). Judgments of size and distance with imaging displays. Human Factors, 26(6), 617–629.

Roscoe, S. N. (1991). The eyes prefer real images. In S. R. Ellis (Ed.), Pictorial Communication in Virtual and Real Environments. Taylor and Francis.

Scher-Zagier, E. J. (1997). A human's eye view: Motion blur and frameless rendering. ACM Crossroads 97.

Shenker, M. (1994). Image quality considerations for head-mounted displays. Proc. of the OSA International Lens Design Conference, 22, 334–338.

Shenker, M. (1998). Personal communication.

Slater, M., & Wilbur, S. (1997). A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 6(6), 603–616.

State, A., Chen, D., Tector, C., Brandt, C., Chen, H., Ohbuchi, R., Bajura, M., & Fuchs, H. (1994). Case study: Observing a volume-rendered fetus within a pregnant patient. Proceedings of Visualization '94, Washington, DC, 364–373.

State, A., Hirota, G., Chen, D. T., Garrett, W. E., & Livingston, M. (1996). Superior augmented-reality registration by integrating landmark tracking and magnetic tracking. Proc. of SIGGRAPH 96, ACM SIGGRAPH, 429–438.

Sutherland, I. (1965). The ultimate display. Information Processing 1965: Proc. of IFIP Congress 65, 506–508.

Sutherland, I. E. (1968). A head-mounted three-dimensional display. Fall Joint Computer Conference, AFIPS Conference Proceedings, 33, 757–764.

Thomas, M. L., Siegmund, W. P., Antos, S. E., & Robinson, R. M. (1989). Fiber optic development for use on the fiber optic helmet-mounted display. In J. T. Colloro (Ed.), Helmet-Mounted Displays, Proc. of SPIE, 1116, 90–101.

Tomasi, C., & Kanade, T. (1991). Shape and motion from image streams: A factorization method. Part 3: Detection and tracking of point features. Carnegie Mellon Technical Report CMU-CS-91-132.

Vaissie, L., & Rolland, J. P. (1999). Eye-tracking integration in head-mounted displays. Patent filed January.

Vaissie, L., & Rolland, J. P. (2000). Albertian errors in head-mounted displays: Choice of eyepoint location. Technical Report TR 2000-001, University of Central Florida.

Watson, B., & Hodges, L. F. (1995). Using texture maps to correct for optical distortion in head-mounted displays. Proc. of VRAIS '95, 172–178.

Welch, B., & Shenker, M. (1984). The fiber-optic helmet-mounted display. Image III, 345–361.

Welch, G., & Bishop, G. (1997). SCAAT: Incremental tracking with incomplete information. Proc. of ACM SIGGRAPH (pp. 333–344).

Welch, R. B. (1993). Alternating prism exposure causes dual adaptation and generalization to a novel displacement. Perception and Psychophysics, 54(2), 195–204.

Welch, R. B. (1994). Adapting to virtual environments and teleoperators. Unpublished manuscript, NASA-Ames Research Center, Moffett Field, CA.

Wright, D. L., Rolland, J. P., & Kancherla, A. R. (1995). Using virtual reality to teach radiographic positioning. Radiologic Technology, 66(4), 167–172.