Mod 3-Complete Notes PDF
Mod 2-part 2
Line Clipping
Line clipping involves several possible cases.
• 1. Completely inside the clipping window.
• 2. Completely outside the clipping window.
• 3. Partially inside and partially outside the
clipping window.
• A line that is completely inside is displayed completely.
• A line that is completely outside is eliminated from the display.
• For a line that is partially inside, we need to calculate its intersection with the window boundary and find which part is inside the clipping boundary and which part is eliminated.
• Algorithm of Cohen-Sutherland Line Clipping:
• Step 1: Assign a 4-bit region code to both endpoints of the line.
• Step 2: If both region codes are 0000, the line is completely inside (visible); accept it.
• Step 3: Perform a logical AND of the two region codes. If the result is not equal to 0000, the line is completely outside (invisible); reject it.
• Step 4: Otherwise the line is partially visible and needs to be clipped. Choose an endpoint that lies outside the window and compute its intersection with a window boundary.
• Step 5: Replace that endpoint with the intersection point and update its region code.
• Step 6: Repeat from Step 2 until the line is accepted or rejected, then repeat for the other lines.
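The steps above can be sketched in Python. The region-code bit layout and the function names are illustrative assumptions, not part of the notes:

```python
# Region-code bits (assumed layout: one bit per window boundary)
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    """Step 1: assign a 4-bit code describing where (x, y) lies."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped segment, or None if the line is invisible."""
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if c1 == 0 and c2 == 0:           # Step 2: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2:                       # Step 3: trivially reject
            return None
        c = c1 or c2                      # Step 4: pick an outside endpoint
        if c & TOP:
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        else:                             # LEFT
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        if c == c1:                       # Step 5: replace endpoint, recode
            x1, y1 = x, y
            c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
```

On the first practice problem below, this sketch clips P1(70,20)-P2(100,10) against the window (50,10)-(80,40): P2 carries the RIGHT code, so it is replaced by the intersection with x = 80, giving the visible segment from (70,20) to (80, 50/3).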
Problems
• Use the Cohen-Sutherland algorithm to clip the line P1(70,20) and P2(100,10) against a window with lower left-hand corner (50,10) and upper right-hand corner (80,40).
• Given a clipping window A(20,20), B(60,20), C(60,40), D(20,40), use the Sutherland-Cohen algorithm to find the visible portion of the line segment joining the points P(40,80) and Q(120,30).
Mod 3-part 3
Polygon clipping - Sutherland-Hodgman polygon clipping algorithm
• A polygon can also be clipped by specifying
the clipping window.
• The Sutherland-Hodgman polygon clipping algorithm is used for polygon clipping.
• In this algorithm, all the vertices of the
polygon are clipped against each edge of the
clipping window.
• First, the polygon is clipped against the left edge of the clipping window to get the new vertices of the polygon.
• These new vertices are used to clip the polygon against the right edge, top edge, and bottom edge of the clipping window.
• While processing a polygon edge against a clipping-window boundary, an intersection point is found if the edge is not completely inside the clipping window, and the partial edge from the intersection point to the outside is clipped off.
• Beginning with the initial set of polygon
vertices, we first clip the polygon against left
boundary to produce new sequence of
vertices.
• The new set of vertices is then successively passed to a right-boundary, a bottom-boundary, and a top-boundary clipper. At each step, a new set of vertices is generated and passed to the next window clipper.
There are 4 possible cases when processing vertices in sequence around the polygon edges. For each pair of vertices, we make the following tests:
• Both vertices are inside : Only the second vertex is
added to the output list
• First vertex is outside while second one is inside :
Both the point of intersection of the edge with the clip
boundary and the second vertex are added to the
output list
• First vertex is inside while second one is outside : Only
the point of intersection of the edge with the clip
boundary is added to the output list
• Both vertices are outside : No vertices are added to
the output list
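The four cases above can be sketched as a clipper for one boundary, applied in turn to all four window boundaries. The function names and the (x, y) tuple representation of vertices are illustrative assumptions:

```python
def clip_polygon(poly, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman clipping of polygon 'poly' (a list of (x, y)
    vertices) against an axis-aligned window; returns the new vertex list."""

    def clip_edge(verts, inside, cross):
        out = []
        for i in range(len(verts)):
            v1, v2 = verts[i], verts[(i + 1) % len(verts)]
            if inside(v1):
                if inside(v2):
                    out.append(v2)                 # case 1: in -> in
                else:
                    out.append(cross(v1, v2))      # case 3: in -> out
            elif inside(v2):
                out.append(cross(v1, v2))          # case 2: out -> in
                out.append(v2)
            # case 4: out -> out adds nothing
        return out

    def x_cross(v1, v2, x):   # intersection with a vertical boundary
        t = (x - v1[0]) / (v2[0] - v1[0])
        return (x, v1[1] + t * (v2[1] - v1[1]))

    def y_cross(v1, v2, y):   # intersection with a horizontal boundary
        t = (y - v1[1]) / (v2[1] - v1[1])
        return (v1[0] + t * (v2[0] - v1[0]), y)

    boundaries = [
        (lambda v: v[0] >= xmin, lambda a, b: x_cross(a, b, xmin)),  # left
        (lambda v: v[0] <= xmax, lambda a, b: x_cross(a, b, xmax)),  # right
        (lambda v: v[1] >= ymin, lambda a, b: y_cross(a, b, ymin)),  # bottom
        (lambda v: v[1] <= ymax, lambda a, b: y_cross(a, b, ymax)),  # top
    ]
    for inside, cross in boundaries:
        poly = clip_edge(poly, inside, cross)
        if not poly:          # polygon entirely clipped away
            break
    return poly
```

For instance, a square spanning (10,10)-(30,30) clipped against the window (0,0)-(20,20) is reduced to the quadrilateral with corners (10,10), (20,10), (20,20), (10,20).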
• Disadvantage: The algorithm correctly clips convex polygons, but may display extraneous lines for concave polygons. So this algorithm is not suited for concave polygons.
3D viewing pipeline
• The 3D viewing process is more complex than the
2D viewing process.
• The complexity added in 3D viewing arises from the added dimension and from the fact that, even though the objects are three-dimensional, the display devices are only 2D.
• The mismatch between 3D objects and 2D
displays is compensated by introducing
projections.
• The projections transform 3D objects into a 2D
projection plane.
• In 3D viewing, we specify a view volume in the world co-ordinates using the modelling transformation.
• The world co-ordinate positions of the objects are then
converted into viewing co-ordinates by viewing
transformation.
• The projection transformation is then used to convert
3D description of objects in viewing co-ordinates to the
2D projection co-ordinates.
• Finally, the workstation transformation transforms the
projection co-ordinates into the device co-ordinates.
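The stages above can be sketched for a single world-coordinate point (with the modelling transformation assumed already applied). The camera position, view-plane distance, window, and viewport used here are illustrative assumptions:

```python
def viewing_pipeline(p_world):
    # Illustrative pipeline for one world-coordinate point (x, y, z).
    # Assumed setup: viewer at (1, 1, 0) looking along +z, view plane at
    # distance d = 4, a [-2, 2] x [-2, 2] window, and a 400 x 400 viewport.
    x, y, z = p_world

    # 1. Viewing transformation: translate so the viewer sits at the origin.
    ex, ey, ez = 1.0, 1.0, 0.0
    x, y, z = x - ex, y - ey, z - ez

    # 2. Projection transformation: perspective projection onto the plane
    #    z = d (similar triangles give x' = x*d/z and y' = y*d/z).
    d = 4.0
    xp, yp = x * d / z, y * d / z

    # 3. Workstation transformation: map the window to device co-ordinates
    #    (y is flipped because screen y usually grows downward).
    wmin, wmax, size = -2.0, 2.0, 400.0
    sx = (xp - wmin) / (wmax - wmin) * size
    sy = (1 - (yp - wmin) / (wmax - wmin)) * size
    return (sx, sy)
```

Under these assumptions the world point (2, 3, 8) is first moved to viewing co-ordinates (1, 2, 8), projects to (0.5, 1.0), and maps to device co-ordinates (250.0, 100.0).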
Mod 3-part 4
Projections
Perspective projection
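In a perspective projection with the centre of projection at the origin and the view plane at z = d, similar triangles give x' = x·d/z and y' = y·d/z; a parallel (orthographic) projection onto the view plane simply drops z. A minimal sketch (the function names are assumptions):

```python
def perspective_project(x, y, z, d):
    # Perspective projection onto the view plane z = d with the centre of
    # projection at the origin: by similar triangles, x'/d = x/z.
    return (x * d / z, y * d / z)

def parallel_project(x, y, z):
    # Orthographic (parallel) projection onto the view plane: drop z.
    return (x, y)
```

For example, the point (4, 8, 10) projects to (2.0, 4.0) on the plane z = 5, but to (4, 8) under parallel projection; distant objects shrink only in the perspective case.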
Mod 3-part 5
Visible Surface Detection Methods
• A major consideration in the generation of
realistic graphics displays is identifying those
parts of a scene that are visible from a chosen
viewing position.
• There are numerous methods for the efficient identification of visible objects for different types of applications.
• Some methods require more memory, some
involve more processing time, and some apply
only to special types of objects.
• The choice of a method for a particular application can depend on such factors as the complexity of the scene, the type of objects to be displayed, the available equipment, and whether static or animated displays are to be generated.
• The various algorithms are referred to as visible-
surface detection methods. Sometimes these
methods are also referred to as hidden-surface
elimination methods.
• Visible-surface detection algorithms are
broadly classified according to whether they
deal with object definitions directly or with
their projected images. These two approaches
are :
• Object-space methods
• Image-space methods
• An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled as visible.
• In an image- space algorithm, visibility is
decided point by point at each pixel position
on the projection plane.
Depth-Buffer Method
• A commonly used image-space approach for
detecting visible surfaces is the depth-buffer
method, which compares surface depths at
each pixel position on the projection plane.
• This procedure is also referred to as the z-
buffer method, since object depth is usually
measured from the view plane along the z axis
of a viewing system.
• Figure shows three surfaces at varying
distances along the orthographic projection
line from position (x,y) in a view plane.
• Surface 1, is closest at this position, so its
surface intensity value at (x, y) is saved.
• Z-buffer is a simple extension of the frame
buffer idea. A frame buffer is used to store the
intensity of each pixel in image space.
• The z-buffer is a separate depth buffer used to store the z-coordinate, or depth, of every visible pixel in image space.
• In this algorithm a buffer of the same size as
the frame buffer is set up which holds depth
information.
• Each element of the depth buffer corresponds to
a pixel in the frame buffer and initially holds the
maximum depth in the scene. The frame buffer is
cleared to the background color.
• As each polygon is scan-converted, the depth at each pixel is calculated and compared with the corresponding depth in the depth buffer. If the depth is less than that stored in the depth buffer, then that pixel is set in the frame buffer with the polygon color at that point, and the depth buffer is set to the polygon depth.
• If the polygon depth is greater than the depth
buffer depth at that point then that pixel is
not written to the frame buffer.
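The update rule above can be sketched as follows. Representing each surface by a colour and a depth-lookup function is an illustrative assumption; a real rasteriser computes depths incrementally while scan-converting the polygon:

```python
import math

def depth_buffer_render(surfaces, width, height, background=0):
    # Frame buffer starts at the background colour; depth buffer starts at
    # the maximum depth in the scene (infinity here, for simplicity).
    frame = [[background] * width for _ in range(height)]
    depth = [[math.inf] * width for _ in range(height)]
    for color, depth_at in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_at(x, y)          # None where the surface is absent
                if z is not None and z < depth[y][x]:
                    depth[y][x] = z         # nearer: record the new depth
                    frame[y][x] = color     # and write the pixel
                # greater or equal: the pixel is not written
    return frame
```

Two overlapping constant-depth surfaces show the effect: on a 4-pixel row, a surface at depth 3 covering the left half wins over one at depth 5 covering the whole row.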
Scan line algorithm
• This image-space method for removing hidden surfaces
is an extension of the scan-line algorithm for filling
polygon interiors.
• As each scan line is processed, all polygon surfaces
intersecting that line are examined to determine which
are visible.
• Across each scan line, depth calculations are made for
each overlapping surface to determine which is nearest
to the view plane. When the visible surface has been
determined, the intensity value for that position is
entered into the refresh buffer.
Two important tables are maintained for this.
1. The edge table contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line.
2. The polygon table contains coefficients of the
plane equation for each surface, intensity
information for the surfaces, and possibly
pointers into the edge table.
Fig: Scan lines crossing the projection of two surfaces S1 and S2 in the view plane. Dashed lines indicate the boundaries of hidden surfaces
• To facilitate the search for surfaces crossing a given
scan line, we can set up an active list of edges from
information in the edge table.
• This active list will contain only edges that cross the
current scan line, sorted in order of increasing x. In
addition, we define a flag for each surface that is set on
or off to indicate whether a position along a scan line is
inside or outside of the surface.
• Scan lines are processed from left to right. At the
leftmost boundary of a surface, the surface flag is
turned on; and at the rightmost boundary, it is turned
off.
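A minimal sketch of the per-pixel decision along one scan line, assuming each surface carries its plane coefficients (from the polygon table) and the x interval over which its flag is on (from the active edge list); the data layout is illustrative:

```python
def plane_depth(plane, x, y):
    # Depth of a surface at (x, y) from its plane equation
    # Ax + By + Cz + D = 0, solved for z (assumes C != 0).
    A, B, C, D = plane
    return (-A * x - B * y - D) / C

def visible_surface(surfaces, x, y):
    # Among surfaces whose flag is on at position x of scan line y,
    # return the name of the one nearest the view plane (smallest depth).
    best, best_z = None, float("inf")
    for name, (plane, (x_on, x_off)) in surfaces.items():
        if x_on <= x <= x_off:            # flag on between the boundaries
            z = plane_depth(plane, x, y)
            if z < best_z:
                best, best_z = name, z
    return best
```

With S1 lying in the plane z = 2 over x in [0, 10] and S2 in the plane z = 1 over x in [5, 15], S1 is visible at x = 3 and S2 at x = 7: where the two projections overlap, the nearer surface wins.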