VISIBLE SURFACE DETECTION METHODS
• A major consideration in the generation of realistic graphics displays is identifying those parts
of a scene that are visible from a chosen viewing position.
• There are many approaches we can take to solve this problem, and numerous algorithms have
been devised for efficient identification of visible objects for different types of applications.
• The various algorithms are referred to as visible-surface detection methods.
• Sometimes these methods are also referred to as hidden-surface elimination methods.
Classification of Visible Surface Detection Algorithms
Visible-surface detection algorithms are broadly classified according to whether they deal with
object definitions directly or with their projected images.
1. Object-Space Methods
2. Image-Space Methods
• An object-space method compares objects and parts of objects to each other within the scene
definition to determine which surfaces, as a whole, we should label as visible.
• In an image-space algorithm, visibility is decided point by point at each pixel position on the
projection plane.
• Most visible-surface algorithms use image-space methods, although object space methods can
be used effectively to locate visible surfaces in some cases.
• Although there are major differences in the basic approach taken by the various visible-surface
detection algorithms, most use sorting and coherence methods to improve performance.
• Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a scene
according to their distance from the view plane.
• Coherence methods are used to take advantage of regularities in a scene.
Back Face Detection
• A fast and simple object-space method for identifying the back faces of a polyhedron is based
on the "inside-outside" test.
• A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if
Ax + By + Cz + D < 0
• When an inside point is along the line of sight to the surface, the polygon must be a back face
(we are inside that face and cannot see the front of it from our viewing position).
• We can simplify this test by considering the normal vector N to a polygon surface, which has
Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the
eye (or "camera") position, then this polygon is a back face if
V · N > 0
• Furthermore, if object descriptions have been converted to projection coordinates and our
viewing direction is parallel to the viewing z-axis, then V = (0, 0, Vz) and
V · N = Vz C
• So we only need to consider the sign of C, the z component of the normal vector N.
• In a right-handed viewing system with the viewing direction along the negative zv axis, the
polygon is a back face if C < 0.
• Also, we cannot see any face whose normal has z component C = 0, since our viewing direction
is grazing that polygon.
• Thus, in general, we can label any polygon as a back face if the z component of its normal
vector satisfies
C <= 0
• In a left-handed viewing system with the viewing direction along the positive zv axis, the
polygon is a back face if C > 0.
• By examining the sign of C we can therefore identify all back faces, as illustrated in the
sketch below.
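• The following is a minimal Python sketch of this test, assuming each polygon carries its
outward plane normal N = (A, B, C) and that we view along the negative zv axis of a
right-handed viewing system; the function names and the per-polygon data layout are
illustrative assumptions, not a prescribed implementation.

    # Back-face test sketch (right-handed viewing system, viewing direction
    # along the negative z_v axis).  Helper names are illustrative.

    def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
        """Return True if a polygon with outward normal (A, B, C) is a back face.

        Implements the general test: back face if V . N > 0,
        where V is a vector in the viewing direction from the eye.
        """
        a, b, c = normal
        vx, vy, vz = view_dir
        return a * vx + b * vy + c * vz > 0

    def is_back_face_projected(c_component):
        """Simplified test after projection, with V = (0, 0, Vz) and Vz < 0.

        Only the sign of C matters; C == 0 (grazing faces) is also culled,
        matching the C <= 0 rule above.
        """
        return c_component <= 0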
Drawback of Back Face Detection
• For other objects, such as a concave polyhedron, more tests need to be carried out to determine
whether there are additional faces that are totally or partly obscured by other faces.
• In addition, a general scene can be expected to contain overlapping objects along the line of sight.
• We then need to determine whether the obscured objects are partially or completely hidden by
other objects.
• In general back-face removal can be expected to eliminate about half of the polygon surfaces in
a scene from further visibility tests.
Fig. View of a concave polyhedron with one face partially hidden by other faces
Depth Buffer Method
• A commonly used image-space approach to detecting visible surfaces is the depth-buffer
method, which compares surface depths at each pixel position on the projection plane.
• Because object depth is calculated and compared at every pixel, it is referred to as the depth-buffer method.
• This procedure is also referred to as the z-buffer method since object depth is usually measured
from the view plane along the z-axis of a viewing system.
• Each surface of a scene is processed separately, one point at a time across the surface.
• The method is usually applied to scenes containing only polygon surfaces because depth values
can be computed very quickly and the method is easy to implement.
• Each (x, y, z) position on a polygon surface corresponds to the orthographic projection point
(x, y) on the view plane.
• For each pixel position (x, y) on the view plane, object depths can be compared by comparing z
values. The surface with the greatest z value is then closest to the viewing position, so it is
visible, and its surface intensity value at (x, y) is saved for display.
Fig. At view plane position (x,y) surface S1 has the smallest depth from the view
plane and so is visible at that position.
• Two buffer areas are required. A depth buffer is used to store depth values for each (x, y)
position as surfaces are processed, and the refresh buffer stores the intensity values for each
position.
• Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is
initialized to the background intensity.
• Each surface listed in the polygon tables is then processed, one scan line at a time, calculating
the depth (z value) at each (x, y) pixel position. The calculated depth is compared to the value
previously stored in the depth buffer at that position.
• We can summarize the steps of the depth-buffer algorithm as follows:
1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x,y),
depth(x,y) = 0, refresh(x,y) = Ibackgnd
2. For each position on each polygon surface, compare depth values to previously stored values
in the depth buffer to determine visibility.
• Calculate the depth z for each (x,y) position on the polygon.
• If z > depth(x,y), then set
depth(x,y) = z, refresh(x,y) = Isurf(x,y)
where Ibackgnd is the value of the background intensity and Isurf(x,y) is the projected
intensity value of the surface at pixel position (x,y). After all surfaces have been processed,
the depth buffer contains the depth values of the visible surfaces and the refresh buffer
contains the corresponding intensity values for those surfaces.
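• A compact Python sketch of these two steps is given below, using the convention above
(depth buffer initialized to 0, larger z meaning closer to the viewing position). The
view-plane size, the background intensity value, and the per-surface covered_pixels()
iterator are illustrative assumptions about the surface representation, not a fixed interface.

    # Depth-buffer (z-buffer) sketch for a WIDTH x HEIGHT view plane.
    # Assumed convention: depth buffer starts at 0 and larger z is closer,
    # so a pixel is overwritten only when z > depth(x, y).

    WIDTH, HEIGHT = 640, 480        # assumed view-plane resolution
    I_BACKGND = 0.0                 # assumed background intensity

    def z_buffer(surfaces):
        # Step 1: initialize both buffers.
        depth = [[0.0] * WIDTH for _ in range(HEIGHT)]
        refresh = [[I_BACKGND] * WIDTH for _ in range(HEIGHT)]

        # Step 2: process each surface, one covered pixel at a time.
        # Each surface is assumed to yield (x, y, z, intensity) tuples
        # for the pixels it projects onto -- an illustrative interface.
        for surf in surfaces:
            for x, y, z, intensity in surf.covered_pixels():
                if z > depth[y][x]:            # closer than anything stored so far
                    depth[y][x] = z
                    refresh[y][x] = intensity  # Isurf(x, y)

        return refresh  # intensities of the visible surfaces

The two buffers divide the work: the depth buffer resolves visibility at each pixel, while the
refresh buffer accumulates the corresponding intensity values for display.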
• Depth values for a surface position (x, y) are calculated from the plane equation for each
surface:
z = (-Ax - By - D) / C
• If the depth of position (x, y) has been determined to be z, then the depth z' of the next
position (x + 1, y) along the scan line is obtained as
z' = (-A(x + 1) - By - D) / C = z - A/C
• The ratio -A/C is constant for each surface, so succeeding depth values across a scan line are
obtained from preceding values with a single addition.
• On each scan line, we start by calculating the depth on a left edge of the polygon that intersects
that scan line.
• Depth values at each successive position across the scan line are then calculated with the
previous equation. We first determine the y-coordinate extents of each polygon and process the
surface from the topmost scan line to the bottom scan line. Starting at a top vertex, we can
recursively calculate x positions down a left edge of the polygon as x' = x - 1/m, where m is
the slope of the edge; the corresponding depth is
z' = z + (A/m + B) / C
• If we are processing down a vertical edge, the slope is infinite and the recursive calculations
reduce to
z' = z + B / C
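• A small sketch of this incremental depth calculation is shown below, assuming the plane
coefficients A, B, C, D of the surface are known and C is nonzero; the helper names are
illustrative.

    # Incremental depth sketch from the plane equation Ax + By + Cz + D = 0
    # (assuming C != 0).  The first depth on a scan line needs one full
    # evaluation; each step to the right then costs a single addition.

    def depth_at(a, b, c, d, x, y):
        """Full evaluation: z = (-A*x - B*y - D) / C."""
        return (-a * x - b * y - d) / c

    def scanline_depths(a, b, c, d, x_start, x_end, y):
        """Depths at (x_start, y), (x_start + 1, y), ..., (x_end, y)."""
        z = depth_at(a, b, c, d, x_start, y)
        dz = -a / c                    # the constant ratio -A/C for this surface
        depths = []
        for _ in range(x_start, x_end + 1):
            depths.append(z)
            z += dz                    # z' = z - A/C, one addition per pixel
        return depths

    # Stepping one scan line down a non-vertical left edge (x' = x - 1/m):
    #     z' = z + (A/m + B) / C
    # Stepping down a vertical edge:
    #     z' = z + B / C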
Advantages
• It is easy to implement
• It processes one object at a time
Disadvantages
• It requires a large amount of memory, since a full depth buffer and refresh buffer must be maintained
• It can be time consuming, since a depth value must be computed and compared at every pixel of every projected surface