
CHAPTER 13

Visible-Surface Detection Methods
A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position. There are many approaches we can take to solve this problem, and numerous algorithms have been devised for efficient identification of visible objects for different types of applications. Some methods require more memory, some involve more processing time, and some apply only to special types of objects. Deciding upon a method for a particular application can depend on such factors as the complexity of the scene, the type of objects to be displayed, the available equipment, and whether static or animated displays are to be generated. The various algorithms are referred to as visible-surface detection methods. Sometimes these methods are also referred to as hidden-surface elimination methods, although there can be subtle differences between identifying visible surfaces and eliminating hidden surfaces. For wireframe displays, for example, we may not want to actually eliminate the hidden surfaces, but rather to display them with dashed boundaries or in some other way to retain information about their shape. In this chapter, we explore some of the most commonly used methods for detecting visible surfaces in a three-dimensional scene.

13-1
CLASSIFICATION OF VISIBLE-SURFACE DETECTION ALGORITHMS

Visible-surface detection algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images. These two approaches are called object-space methods and image-space methods, respectively. An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible. In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane. Most visible-surface algorithms use image-space methods, although object-space methods can be used effectively to locate visible surfaces in some cases. Line-display algorithms, on the other hand, generally use object-space methods to identify visible lines in wireframe displays, but many image-space visible-surface algorithms can be adapted easily to visible-line detection.
Although there are major differences in the basic approach taken by the various visible-surface detection algorithms, most use sorting and coherence methods to improve performance. Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a scene according to their distance from the view plane. Coherence methods are used to take advantage of regularities in a scene. An individual scan line can be expected to contain intervals (runs) of constant pixel intensities, and scan-line patterns often change little from one line to the next. Animation frames contain changes only in the vicinity of moving objects. And constant relationships often can be established between objects and surfaces in a scene.
13-3
DEPTH-BUFFER METHOD

A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane. This procedure is also referred to as the z-buffer method, since object depth is usually measured from the view plane along the z axis of a viewing system. Each surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement. But the method can be applied to nonplanar surfaces as well.

With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values. Figure 13-4 shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the xy plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.

Figure 13-4  At view-plane position (x, y), surface S1 has the smallest depth from the view plane and so is visible at that position.

We can implement the depth-buffer algorithm in normalized coordinates, so that z values range from 0 at the back clipping plane to z_max at the front clipping plane. The value of z_max can be set either to 1 (for a unit cube) or to the largest value that can be stored on the system.

As implied by the name of this method, two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position. Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position. If the calculated depth is greater than the value stored in the depth buffer, the new depth value is stored, and the surface intensity at that position is determined and placed in the same xy location in the refresh buffer.
We summarize the steps of a depth-buffer algorithm as follows:

1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),

       depth(x, y) = 0,    refresh(x, y) = I_backgnd

2. For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility.
   - Calculate the depth z for each (x, y) position on the polygon.
   - If z > depth(x, y), then set

         depth(x, y) = z,    refresh(x, y) = I_surf(x, y)

     where I_backgnd is the value for the background intensity, and I_surf(x, y) is the projected intensity value for the surface at pixel position (x, y).

After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.
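The two steps above can be sketched in a short Python program. This is a minimal illustration, not the text's implementation: surfaces are simplified to axis-aligned rectangles of constant depth, whereas a real depth-buffer pass would rasterize each polygon and evaluate z from its plane equation. Buffer dimensions and intensity values are made up for the example.

```python
W, H = 8, 8          # illustrative frame-buffer size
I_BACKGND = 0        # background intensity (assumed value)

# Step 1: initialize both buffers.
depth = [[0.0] * W for _ in range(H)]          # 0 = minimum depth (back clipping plane)
refresh = [[I_BACKGND] * W for _ in range(H)]  # background intensity everywhere

def process_surface(x0, y0, x1, y1, z, intensity):
    """Step 2 for one surface: a constant-depth rectangle stands in for
    a rasterized polygon; keep the nearer (larger z) value at each pixel."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z > depth[y][x]:
                depth[y][x] = z
                refresh[y][x] = intensity

# Two overlapping surfaces; the nearer one (z = 0.8) wins where they overlap.
process_surface(0, 0, 5, 5, z=0.5, intensity=1)
process_surface(3, 3, 8, 8, z=0.8, intensity=2)

print(refresh[4][4])  # overlap: the nearer surface's intensity, 2
print(refresh[1][1])  # covered only by the farther surface: 1
print(refresh[7][0])  # uncovered: background, 0
```

Processing order does not matter here: swapping the two process_surface calls leaves both buffers identical, which is the main practical appeal of the method.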

Depth values for a surface position (x, y) are calculated from the plane equation for each surface:

    z = (-Ax - By - D) / C        (13-4)
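Because A, B, and C are constant for a given surface, Eq. 13-4 need not be evaluated in full at every pixel: moving from (x, y) to (x + 1, y) along a scan line changes z by the constant amount -A/C, so each successive depth needs only one subtraction. A quick check of this, with made-up plane coefficients:

```python
# Illustrative plane coefficients for Eq. 13-4.
A, B, C, D = 1.0, 2.0, 4.0, -8.0

def depth(x, y):
    """Full evaluation of Eq. 13-4: z = (-A x - B y - D) / C."""
    return (-A * x - B * y - D) / C

z = depth(2.0, 3.0)
z_next = z - A / C            # incremental step to (x + 1, y)
print(abs(z_next - depth(3.0, 3.0)) < 1e-12)  # True: increment matches
```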

13-4
A-BUFFER METHOD

An extension of the ideas in the depth-buffer method is the A-buffer method (at the other end of the alphabet from "z-buffer", where z represents depth). The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method developed by Lucasfilm for implementation in the surface-rendering system called REYES (an acronym for "Renders Everything You Ever Saw").

A drawback of the depth-buffer method is that it can only find one visible surface at each pixel position. In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed (Fig. 13-8). The A-buffer method expands the depth buffer so that each position in the buffer can reference a linked list of surfaces. Thus, more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased.

Each position in the A-buffer has two fields:

- depth field: stores a positive or negative real number
- intensity field: stores surface-intensity information or a pointer value
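One plausible way to realize these two fields is sketched below. The class and method names are illustrative, not from the text: a non-negative depth marks a single opaque surface whose intensity sits directly in the cell, while a negative depth signals that the data field instead holds a list of surface fragments (the "linked list of surfaces") kept nearest-first for later blending.

```python
class Fragment:
    """One surface contribution at a pixel (fields are illustrative)."""
    def __init__(self, depth, intensity, opacity):
        self.depth = depth          # larger z = nearer, as in Section 13-3
        self.intensity = intensity
        self.opacity = opacity      # 1.0 = fully opaque

class ABufferCell:
    def __init__(self):
        self.depth = 0.0   # >= 0: single surface; < 0: multiple fragments
        self.data = None   # an intensity value, or a list of Fragments

    def add(self, frag):
        if self.data is None:           # first surface at this pixel
            self.depth = frag.depth
            self.data = frag.intensity
        elif self.depth >= 0:           # second surface: switch to a list
            first = Fragment(self.depth, self.data, 1.0)
            self.depth = -1.0
            self.data = sorted([first, frag], key=lambda f: -f.depth)
        else:                           # already a list: insert, keep sorted
            self.data.append(frag)
            self.data.sort(key=lambda f: -f.depth)

cell = ABufferCell()
cell.add(Fragment(0.4, intensity=7, opacity=1.0))
print(cell.depth >= 0)     # True: one opaque surface so far
cell.add(Fragment(0.6, intensity=3, opacity=0.5))
print(cell.depth < 0)      # True: the cell now references a fragment list
print(cell.data[0].depth)  # 0.6, the nearest fragment comes first
```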

Figure 13-8  Viewing an opaque surface through a transparent surface (a foreground transparent surface in front of a background opaque surface) requires multiple surface-intensity contributions for pixel positions.
13-5
SCAN-LINE METHOD

This image-space method for removing hidden surfaces is an extension of the scan-line algorithm for filling polygon interiors. Instead of filling just one surface, we now deal with multiple surfaces. As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.
We assume that tables are set up for the various surfaces, as discussed in Chapter 10, which include both an edge table and a polygon table. The edge table contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line. The polygon table contains coefficients of the plane equation for each surface, intensity information for the surfaces, and possibly pointers into the edge table. To facilitate the search for surfaces crossing a given scan line, we can set up an active list of edges from information in the edge table. This active list will contain only edges that cross the current scan line, sorted in order of increasing x. In addition, we define a flag for each surface that is set on or off to indicate whether a position along a scan line is inside or outside of the surface. Scan lines are processed from left to right. At the leftmost boundary of a surface, the surface flag is turned on; and at the rightmost boundary, it is turned off.
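The left-to-right pass described above can be sketched as follows. Edge x-intersections and surface depths are illustrative values loosely modeled on scan line 2 of Fig. 13-10 (edges AD, EH, BC, FG); as in Section 13-3, a larger depth value is taken to mean nearer the view plane.

```python
# Active list for one scan line: (x intersection, surface whose boundary
# the edge belongs to), sorted by increasing x. Values are illustrative.
active_list = [(2, "S1"), (5, "S2"), (8, "S1"), (11, "S2")]
depth_of = {"S1": 0.7, "S2": 0.4}   # constant depths for the example

flags = {"S1": False, "S2": False}
spans = []          # (x_start, x_end, visible surface or None)
prev_x = 0

for x, surface in active_list + [(14, None)]:   # 14 = end of the scan line
    on = [s for s, f in flags.items() if f]
    if on:
        # A depth comparison is only really needed when several flags are on.
        visible = max(on, key=lambda s: depth_of[s])
    else:
        visible = None                           # background
    spans.append((prev_x, x, visible))
    if surface is not None:
        flags[surface] = not flags[surface]      # toggle at each boundary
    prev_x = x

print(spans)
```

Running this yields spans [(0, 2, None), (2, 5, 'S1'), (5, 8, 'S1'), (8, 11, 'S2'), (11, 14, None)]: surface S1 alone, then S1 winning the overlap, then S2 alone, with background at both ends.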
Figure 13-10 illustrates the scan-line method for locating visible portions of surfaces for pixel positions along the line. The active list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG. For positions along this scan line between edges AB and BC, only the flag for surface S1 is on. Therefore, no depth calculations are necessary, and intensity information for surface S1 is entered from the polygon table into the refresh buffer. Similarly, between edges EH and FG, only the flag for surface S2 is on. No other positions along scan line 1 intersect surfaces, so the intensity values in the other areas are set to the background intensity. The background intensity can be loaded throughout the buffer in an initialization routine.
For scan lines 2 and 3 in Fig. 13-10, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. But between edges EH and BC, the flags for both surfaces are on. In this interval, depth calculations must be made using the plane coefficients for the two surfaces. For this example, the depth of surface S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.

We can take advantage of coherence along the scan lines as we pass from one scan line to the next. In Fig. 13-10, scan line 3 has the same active list of edges as scan line 2. Since no changes have occurred in line intersections, it is unnecessary to make the depth calculations between edges EH and BC again: the two surfaces have the same relative depth on scan line 3 as on scan line 2, so the results from the previous line can be reused.
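This coherence check can be sketched by comparing active edge lists between successive scan lines; when the list is unchanged, the previous line's depth results can simply be reused. The edge y-extents below are illustrative, arranged so that scan lines 2 and 3 cross the same four edges:

```python
def active_edges(edges, y):
    """Labels of edges crossing scan line y. Edge: (y_min, y_max, label)."""
    return sorted(label for y_min, y_max, label in edges if y_min <= y < y_max)

# Illustrative edges loosely following Fig. 13-10.
edges = [(0, 6, "AD"), (1, 5, "EH"), (0, 6, "BC"), (1, 5, "FG")]

prev = active_edges(edges, 2)
curr = active_edges(edges, 3)
print(curr == prev)   # True: reuse scan line 2's depth results on line 3
```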

Figure 13-10  Scan lines 1, 2, and 3 crossing the projection of two surfaces, S1 and S2, in the view plane. Dashed lines indicate the boundaries of hidden surfaces.