Mod 4

Jan 2019

7a. Explain the perspective projection with reference point and vanishing point with neat diagrams

4.7 Perspective Projections

 In a perspective projection, we approximate the geometric-optics effect of a camera or the eye by projecting objects to the view plane along converging paths to a position called the projection reference point (or center of projection).
 Objects are then displayed with foreshortening effects, and projections of distant objects are smaller than the projections of objects of the same size that are closer to the view plane.

Perspective-Projection Transformation Coordinates

 Figure below shows the projection path of a spatial position (x, y, z) to a general
projection reference point at (xprp, yprp, zprp).
 The projection line intersects the view plane at the coordinate position (xp, yp, zvp), where
zvp is some selected position for the view plane on the zview axis.

 We can write equations describing coordinate positions along this perspective-projection line in parametric form as
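With parameter u ranging from 0 at the point (x, y, z) to 1 at the projection reference point:

x' = x - (x - x_{prp})\,u, \qquad y' = y - (y - y_{prp})\,u, \qquad z' = z - (z - z_{prp})\,u, \qquad 0 \le u \le 1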

 On the view plane, z’ = zvp and we can solve the z’ equation for parameter u at this
position along the projection line:
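Setting z' = z_vp in the parametric form above gives:

u = \frac{z_{vp} - z}{z_{prp} - z}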

 Substituting this value of u into the equations for x’ and y’, we obtain the general
perspective-transformation equations
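In terms of the view-plane position z_vp and the projection reference point, these general equations are:

x_p = x\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) + x_{prp}\left(\frac{z_{vp} - z}{z_{prp} - z}\right), \qquad
y_p = y\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) + y_{prp}\left(\frac{z_{vp} - z}{z_{prp} - z}\right)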

Perspective-Projection Equations: Special Cases

Case 1:

 To simplify the perspective calculations, the projection reference point could be limited to positions along the zview axis, then xprp = yprp = 0:
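With xprp = yprp = 0, the general equations reduce to:

x_p = x\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right), \qquad y_p = y\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right)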

Case 2:

 Sometimes the projection reference point is fixed at the coordinate origin, and (xprp, yprp, zprp) = (0, 0, 0):
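With the projection reference point at the origin:

x_p = x\left(\frac{z_{vp}}{z}\right), \qquad y_p = y\left(\frac{z_{vp}}{z}\right)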


Case 3:

 If the view plane is the uv plane and there are no restrictions on the placement of the projection reference point, then we have zvp = 0:
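With zvp = 0, the general equations become:

x_p = x\left(\frac{z_{prp}}{z_{prp} - z}\right) - x_{prp}\left(\frac{z}{z_{prp} - z}\right), \qquad
y_p = y\left(\frac{z_{prp}}{z_{prp} - z}\right) - y_{prp}\left(\frac{z}{z_{prp} - z}\right)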

Case 4:

 With the uv plane as the view plane and the projection reference point on the zview axis, so that xprp = yprp = zvp = 0, the perspective equations are:
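Under these conditions the projection equations reduce to:

x_p = x\left(\frac{z_{prp}}{z_{prp} - z}\right) = \frac{x}{1 - z/z_{prp}}, \qquad
y_p = y\left(\frac{z_{prp}}{z_{prp} - z}\right) = \frac{y}{1 - z/z_{prp}}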

 The view plane is usually placed between the projection reference point and the scene,
but, in general, the view plane could be placed anywhere except at the projection point.

 If the projection reference point is between the view plane and the scene, objects are
inverted on the view plane (refer below figure)
 Perspective effects also depend on the distance between the projection reference point
and the view plane, as illustrated in Figure below.

 If the projection reference point is close to the view plane, perspective effects are emphasized; that is, closer objects will appear much larger than more distant objects of the same size.

 Similarly, as the projection reference point moves farther from the view plane, the
difference in the size of near and far objects decreases

Vanishing Points for Perspective Projections

 The point at which a set of projected parallel lines appears to converge is called a
vanishing point.

 Each set of projected parallel lines has a separate vanishing point.

 For a set of lines that are parallel to one of the principal axes of an object, the vanishing
point is referred to as a principal vanishing point.
 We control the number of principal vanishing points (one, two, or three) with the
orientation of the projection plane, and perspective projections are accordingly classified
as one-point, two-point, or three-point projections

Principal vanishing points for perspective-projection views of a cube: when the cube in (a) is projected to a view plane that intersects only the z axis, a single vanishing point in the z direction (b) is generated. When the cube is projected to a view plane that intersects both the z and x axes, two vanishing points (c) are produced.
Perspective-Projection View Volume

 A perspective-projection view volume is often referred to as a pyramid of vision because it approximates the cone of vision of our eyes or a camera.

 The displayed view of a scene includes only those objects within the pyramid, just as we
cannot see objects beyond our peripheral vision, which are outside the cone of vision.

 By adding near and far clipping planes that are perpendicular to the zview axis (and parallel to the view plane), we chop off parts of the infinite perspective-projection view volume to form a truncated pyramid, or frustum, view volume.
 But with a perspective projection, we could also use the near clipping plane to take out
large objects close to the view plane that could project into unrecognizable shapes within
the clipping window.

 Similarly, the far clipping plane could be used to cut out objects far from the projection
reference point that might project to small blots on the view plane.

Perspective-Projection Transformation Matrix

 We can use a three-dimensional, homogeneous-coordinate representation to express the perspective-projection equations in the form
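In this form the projected coordinates are recovered by dividing by the homogeneous parameter:

x_p = \frac{x_h}{h}, \qquad y_p = \frac{y_h}{h}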

where the homogeneous parameter has the value

h = zprp – z
 The perspective-projection transformation of a viewing-coordinate position is then
accomplished in two steps.

 First, we calculate the homogeneous coordinates using the perspective-transformation matrix:
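This first step can be written compactly as:

P_h = M_{pers} \cdot P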

where Ph is the column-matrix representation of the homogeneous point (xh, yh, zh, h) and P is the column-matrix representation of the coordinate position (x, y, z, 1).

 Second, after other processes have been applied, such as the normalization
transformation and clipping routines, homogeneous coordinates are divided by
parameter h to obtain the true transformation-coordinate positions.

 The following matrix gives one possible way to formulate a perspective-projection matrix.
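One formulation consistent with the general projection equations and with h = zprp - z (following the usual textbook convention; sz and tz are the z-normalization factors described below) is:

M_{pers} =
\begin{bmatrix}
z_{prp}-z_{vp} & 0 & -x_{prp} & x_{prp}\,z_{vp}\\
0 & z_{prp}-z_{vp} & -y_{prp} & y_{prp}\,z_{vp}\\
0 & 0 & s_z & t_z\\
0 & 0 & -1 & z_{prp}
\end{bmatrix}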

 Parameters sz and tz are the scaling and translation factors for normalizing the
projected values of z-coordinates.
 Specific values for sz and tz depend on the normalization range we select.
7b. Discuss depth buffer method with algorithm (ia3)

4.12 Depth-Buffer Method


 A commonly used method for detecting visible surfaces is the depth-buffer method, which compares surface depth values throughout a scene for each pixel position on the projection plane.

 The algorithm is usually applied to scenes containing only polygon surfaces, because
depth values can be computed very quickly and the method is easy to implement.

 This visibility-detection approach is also frequently referred to as the z-buffer method, because object depth is usually measured along the z axis of a viewing system.

Figure above shows three surfaces at varying distances along the orthographic projection line
from position (x, y) on a view plane.

 These surfaces can be processed in any order.

 If a surface is closer than any previously processed surfaces, its surface color is
calculated and saved, along with its depth.

 The visible surfaces in a scene are represented by the set of surface colors that have
been saved after all surface processing is completed
 As implied by the name of this method, two buffer areas are required. A depth buffer
is used to store depth values for each (x, y) position as surfaces are processed, and
the frame buffer stores the surface-color values for each pixel position.

Depth-Buffer Algorithm

1. Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
   depthBuff (x, y) = 1.0, frameBuff (x, y) = backgndColor

2. Process each polygon in a scene, one at a time, as follows:

• For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already known).

• If z < depthBuff (x, y), compute the surface color at that position and set
  depthBuff (x, y) = z, frameBuff (x, y) = surfColor (x, y)

After all surfaces have been processed, the depth buffer contains depth values for the visible
surfaces and the frame buffer contains the corresponding color values for those surfaces.
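As a quick illustration (not the textbook's code), the C sketch below implements the two steps above. The Color and Fragment types, the WIDTH/HEIGHT raster size, and the rasterizePolygon helper that scan-converts polygon k into projected pixels with interpolated depths and shaded colors are assumptions made only for this example.

#include <stddef.h>

#define WIDTH  640
#define HEIGHT 480

typedef struct { float r, g, b; } Color;
typedef struct { int x, y; float z; Color color; } Fragment;

/* Hypothetical helper: rasterizes polygon k into projected fragments
   (pixel position, interpolated depth, shaded color). */
extern int rasterizePolygon (int k, Fragment *frags, int maxFrags);

void depthBufferRender (int numPolys, Color backgndColor)
{
    static float    depthBuff[HEIGHT][WIDTH];   /* closest depth found so far      */
    static Color    frameBuff[HEIGHT][WIDTH];   /* color of closest surface so far */
    static Fragment frags[WIDTH * HEIGHT];

    /* Step 1: initialize both buffers. */
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            depthBuff[y][x] = 1.0f;              /* maximum normalized depth */
            frameBuff[y][x] = backgndColor;
        }

    /* Step 2: process each polygon, in any order. */
    for (int k = 0; k < numPolys; k++) {
        int n = rasterizePolygon (k, frags, WIDTH * HEIGHT);
        for (int i = 0; i < n; i++) {
            Fragment f = frags[i];
            if (f.z < depthBuff[f.y][f.x]) {     /* closer than anything seen so far */
                depthBuff[f.y][f.x] = f.z;
                frameBuff[f.y][f.x] = f.color;
            }
        }
    }
}

Because each fragment is tested independently against the stored depth, the polygons can be supplied in any order, as noted above.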

 Given the depth values for the vertex positions of any polygon in a scene, we can
calculate the depth at any other point on the plane containing the polygon.

 At surface position (x, y), the depth is calculated from the plane equation as
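From the plane equation Ax + By + Cz + D = 0 for the polygon's plane:

z = \frac{-Ax - By - D}{C}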

 If the depth of position (x, y) has been determined to be z, then the depth z’ of the next
position (x + 1, y) along the scan line is obtained as
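Since the plane coefficients are constant over the surface:

z' = \frac{-A(x+1) - By - D}{C} = z - \frac{A}{C}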
 The ratio −A/C is constant for each surface, so succeeding depth values across a scan
line are obtained from preceding values with a single addition.
 We can implement the depth-buffer algorithm by starting at a top vertex of the
polygon.

 Then, we could recursively calculate the x-coordinate values down a left edge of the
polygon.

 The x value for the beginning position on each scan line can be calculated from the
beginning (edge) x value of the previous scan line as
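Moving down one scan line (y decreases by 1), the edge intersection shifts by the inverse slope:

x' = x - \frac{1}{m}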

where m is the slope of the edge (Figure below).

 Depth values down this edge are obtained recursively from the value on the preceding scan line. If we are processing down a vertical edge, the slope is infinite and the recursive calculation reduces to a single addition per scan line, as shown below.
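Using the plane equation together with the edge recurrence x' = x - 1/m, the depth down the edge is obtained as shown; for a vertical edge the 1/m term drops out:

z' = z + \frac{A/m + B}{C} \quad\text{(general edge)}, \qquad z' = z + \frac{B}{C} \quad\text{(vertical edge)}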

 One slight complication with this approach is that while pixel positions are at integer (x, y) coordinates, the actual point of intersection of a scan line with the edge of a polygon may not be.

 As a result, it may be necessary to adjust the intersection point by rounding its


fractional part up or down, as is done in scan-line polygon fill algorithms.

 An alternative approach is to use a midpoint method or Bresenham-type algorithm for determining the starting x values along edges for each scan line.

 The method can be applied to curved surfaces by determining depth and color values
at each surface projection point.

 In addition, the basic depth-buffer algorithm often performs needless calculations.

 Objects are processed in an arbitrary order, so that a color can be computed for a
surface point that is later replaced by a closer surface.

8a. Demonstrate how transformation from world coordinate to viewing coordinate with matrix
representation

The Three-Dimensional Viewing Pipeline

 First of all, we need to choose a viewing position corresponding to where we would place a
camera.

 We choose the viewing position according to whether we want to display a front, back, side,
top, or bottom view of the scene.

 We could also pick a position in the middle of a group of objects or even inside a single
object, such as a building or a molecule.

 Then we must decide on the camera orientation.


 Finally, when we snap the shutter, the scene is cropped to the size of a selected clipping
window, which corresponds to the aperture or lens type of a camera, and light from the visible
surfaces is projected onto the camera film.

 Some of the viewing operations for a three-dimensional scene are the same as, or similar to,
those used in the two-dimensional viewing pipeline.

 A two-dimensional viewport is used to position a projected view of the three-dimensional scene on the output device, and a two-dimensional clipping window is used to select a view that is to be mapped to the viewport.

 Clipping windows, viewports, and display windows are usually specified as rectangles with
their edges parallel to the coordinate axes.

 The viewing position, view plane, clipping window, and clipping planes are all specified
within the viewing-coordinate reference frame.

 Figure above shows the general processing steps for creating and transforming a three-
dimensional scene to device coordinates.
 Once the scene has been modeled in world coordinates, a viewing-coordinate system is
selected and the description of the scene is converted to viewing coordinates

 A two-dimensional clipping window, corresponding to a selected camera lens, is defined on the projection plane, and a three-dimensional clipping region is established.

 This clipping region is called the view volume.

 Projection operations are performed to convert the viewing-coordinate description of the scene to coordinate positions on the projection plane.

 Objects are mapped to normalized coordinates, and all parts of the scene outside the view
volume are clipped off.

 The clipping operations can be applied after all device-independent coordinate transformations.

 We will assume that the viewport is to be specified in device coordinates and that
normalized coordinates are transferred to viewport coordinates, following the clipping operations.

 The final step is to map viewport coordinates to device coordinates within a selected display
window

4.4 Transformation from World to Viewing Coordinates

 In the three-dimensional viewing pipeline, the first step after a scene has been
constructed is to transfer object descriptions to the viewing-coordinate reference
frame.

 This conversion of object descriptions is equivalent to a sequence of transformations that superimposes the viewing reference frame onto the world frame:

1. Translate the viewing-coordinate origin to the origin of the world-coordinate system.
2. Apply rotations to align the xview, yview, and zview axes with the world xw, yw, and zw axes, respectively.

 The viewing-coordinate origin is at world position P0 = (x0, y0, z0). Therefore, the
matrix for translating the viewing origin to the world origin is
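With the viewing origin at P0 = (x0, y0, z0):

T = \begin{bmatrix} 1 & 0 & 0 & -x_0\\ 0 & 1 & 0 & -y_0\\ 0 & 0 & 1 & -z_0\\ 0 & 0 & 0 & 1 \end{bmatrix}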

 For the rotation transformation, we can use the unit vectors u, v, and n to form the
composite rotation matrix that superimposes the viewing axes onto the world frame.
This transformation matrix is
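Writing the unit axis vectors as u = (ux, uy, uz), v = (vx, vy, vz), and n = (nx, ny, nz):

R = \begin{bmatrix} u_x & u_y & u_z & 0\\ v_x & v_y & v_z & 0\\ n_x & n_y & n_z & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}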

where the elements of matrix R are the components of the uvn axis vectors.

 The coordinate transformation matrix is then obtained as the product of the preceding
translation and rotation matrices:
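That is:

M_{WC,VC} = R \cdot T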

 Translation factors in this matrix are calculated as the vector dot product of each of the u, v, and n unit vectors with P0, which represents a vector from the world origin to the viewing origin.
 These matrix elements are evaluated as
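Expanding the product R · T, with the translation factors appearing as negated dot products:

M_{WC,VC} = \begin{bmatrix} u_x & u_y & u_z & -\mathbf{u}\cdot\mathbf{P_0}\\ v_x & v_y & v_z & -\mathbf{v}\cdot\mathbf{P_0}\\ n_x & n_y & n_z & -\mathbf{n}\cdot\mathbf{P_0}\\ 0 & 0 & 0 & 1 \end{bmatrix}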
b. Explain orthogonal projections in detail

4.6 Orthogonal Projections

 A transformation of object descriptions to a view plane along lines that are all parallel to the view-plane normal vector N is called an orthogonal projection (also termed an orthographic projection).

 This produces a parallel-projection transformation in which the projection lines are perpendicular to the view plane.

 Orthogonal projections are most often used to produce the front, side, and top views
of an object

 Front, side, and rear orthogonal projections of an object are called elevations; and a
top orthogonal projection is called a plan view
Axonometric and Isometric Orthogonal Projections

 We can also form orthogonal projections that display more than one face of an
object. Such views are called axonometric orthogonal projections.

 The most commonly used axonometric projection is the isometric projection, which is
generated by aligning the projection plane (or the object) so that the plane intersects
each coordinate axis in which the object is defined, called the principal axes, at the
same distance from the origin

Orthogonal Projection Coordinates

 With the projection direction parallel to the zview axis, the transformation equations for an orthogonal projection are trivial. For any position (x, y, z) in viewing coordinates, as in Figure below, the projection coordinates are xp = x, yp = y, and the z value is retained for the depth information needed in visibility-detection procedures.

Clipping Window and Orthogonal-Projection View Volume

 In OpenGL, we set up a clipping window for three-dimensional viewing just as we did for two-dimensional viewing, by choosing two-dimensional coordinate positions for its lower-left and upper-right corners.
 For three-dimensional viewing, the clipping window is positioned on the view plane
with its edges parallel to the xview and yview axes, as shown in Figure below . If we
want to use some other shape or orientation for the clipping window, we must develop
our own viewing procedures

 The edges of the clipping window specify the x and y limits for the part of the scene
that we want to display.

 These limits are used to form the top, bottom, and two sides of a clipping region
called the orthogonal-projection view volume.

 Because projection lines are perpendicular to the view plane, these four boundaries
are planes that are also perpendicular to the view plane and that pass through the
edges of the clipping window to form an infinite clipping region, as in Figure below.
 To form a finite view volume, two additional boundary planes parallel to the view plane are added; these two planes are called the near-far clipping planes, or the front-back clipping planes.

 The near and far planes allow us to exclude objects that are in front of or behind the
part of the scene that we want to display.

 When the near and far planes are specified, we obtain a finite orthogonal view volume
that is a rectangular parallelepiped, as shown in Figure below along with one
possible placement for the view plane

Normalization Transformation for an Orthogonal Projection

 Once we have established the limits for the view volume, coordinate descriptions
inside this rectangular parallelepiped are the projection coordinates, and they can be
mapped into a normalized view volume without any further projection processing.

 Some graphics packages use a unit cube for this normalized view volume, with each
of the x, y, and z coordinates normalized in the range from 0 to 1.
 Another normalization-transformation approach is to use a symmetric cube, with
coordinates in the range from −1 to 1

 We can convert projection coordinates into positions within a left-handed normalized-coordinate reference frame, and these coordinate positions will then be transferred to left-handed screen coordinates by the viewport transformation.

 To illustrate the normalization transformation, we assume that the orthogonal-projection view volume is to be mapped into the symmetric normalization cube within a left-handed reference frame.

 Also, z-coordinate positions for the near and far planes are denoted as znear and zfar,
respectively. Figure below illustrates this normalization transformation

 The normalization transformation for the orthogonal view volume is
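One common form of this matrix, mapping the view volume bounded by (xwmin, xwmax), (ywmin, ywmax), and (znear, zfar) into the symmetric cube from -1 to 1 (following the usual textbook convention), is:

M_{ortho,norm} = \begin{bmatrix}
\frac{2}{xw_{max}-xw_{min}} & 0 & 0 & -\frac{xw_{max}+xw_{min}}{xw_{max}-xw_{min}}\\
0 & \frac{2}{yw_{max}-yw_{min}} & 0 & -\frac{yw_{max}+yw_{min}}{yw_{max}-yw_{min}}\\
0 & 0 & \frac{-2}{z_{near}-z_{far}} & \frac{z_{near}+z_{far}}{z_{near}-z_{far}}\\
0 & 0 & 0 & 1
\end{bmatrix}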


Aug 2022

7a. explain with example depth buffer algorithm used for visible surface detection. Discuss the
advantages and disadvantages (ia3)

Here is an example to illustrate the depth buffer algorithm:

Consider a scene with two objects, a red cube and a blue sphere, in front of a grey background. The
viewer is positioned at (0, 0, 5), looking towards the origin.

The depth buffer is initialized with the maximum depth value (conceptually infinity; in normalized form, 1.0). The cube and the sphere are projected onto the screen and rasterized, computing a depth value for each pixel they cover.

Suppose the red cube has a depth value of 3 at pixel (100, 100), while the blue sphere has a depth
value of 2 at the same pixel. Since the depth value of the sphere is less than that of the cube, the
depth buffer is updated with the new depth value and the color of the pixel, which is blue.

Similarly, at pixel (150, 150), the cube has a depth value of 2, while the sphere has a depth value of 4. In this case the sphere's fragment fails the depth test against the stored value of 2, so the buffers are not updated; the cube is closer to the viewer and therefore visible.

After all objects have been rasterized, the final image is output by reading the color values from the frame buffer (the depth buffer itself holds only depth values). In this example, the pixels with the updated colors are those where the sphere is visible and the cube is hidden behind it.

Overall, the depth buffer algorithm is a fast and effective method for visible surface detection that is
widely used in computer graphics.

The depth buffer algorithm has several advantages and disadvantages that are important to consider
when using it for visible surface detection in computer graphics:

Advantages:

1. Accuracy: The depth buffer algorithm is a highly accurate method for visible surface detection. It
can accurately identify which surfaces are visible and which are hidden, even in complex scenes with
overlapping objects.

2. Speed: The depth buffer algorithm is a very fast method for visible surface detection. It can
quickly process large amounts of data, making it ideal for real-time applications such as video games
and simulations.

3. Flexibility: The depth buffer algorithm is a highly flexible method for visible surface detection. It
can be easily combined with other techniques such as texture mapping and lighting to create
complex and realistic scenes.
4. Easy to implement: The depth buffer algorithm is relatively easy to implement and does not
require a lot of computational resources. This makes it accessible to a wide range of developers and
users.

Disadvantages:

1. Memory usage: The depth buffer algorithm requires a large amount of memory to store the depth
values for each pixel in the scene. This can be a problem in scenes with a large number of objects or
high resolution images.

2. Limited depth precision: The depth buffer algorithm is limited in its ability to represent depth
values with high precision. This can cause problems in scenes with objects that are very close
together or very far apart.

3. Z-fighting: The depth buffer algorithm can suffer from a phenomenon known as z-fighting, where
objects with very similar depth values are rendered incorrectly. This can cause flickering or flashing
in the final image.

4. Wasted computation: Because objects are processed in an arbitrary order (no sorting is required), the algorithm may compute and store a color for a surface point that is later overwritten by a closer surface, which wastes shading work in scenes with high depth complexity.

Overall, the depth buffer algorithm is a powerful and widely used method for visible surface
detection in computer graphics. While it has some limitations, it remains one of the most accurate
and efficient methods available for this task.

7b. Explain 3D viewing pipeline with neat diagram and transformation from world to viewing
coordinates (Repeat)

8a. Explain orthogonal projection in detail (Repeat)

b. Explain perspective projection with reference point and vanishing point with the neat diagram
(REPEAT)

c. Explain the symmetric perspective projection frustum

Sep 2020

7a. Explain orthogonal projections (REPEAT)

7b. Discuss the opengl visibility detection functions (ia3)

OpenGL Visibility-Detection Functions


OpenGL Polygon-Culling Functions

 Back-face removal is accomplished with the functions:

glEnable (GL_CULL_FACE);
glCullFace (mode);

 where parameter mode is assigned the value GL_BACK, GL_FRONT, or GL_FRONT_AND_BACK.

 By default, parameter mode in the glCullFace function has the value GL_BACK.

 The culling routine is turned off with:

glDisable (GL_CULL_FACE);

OpenGL Depth-Buffer Functions

 To use the OpenGL depth-buffer visibility-detection routines, we first need to modify the GL Utility Toolkit (GLUT) initialization function for the display mode to include a request for the depth buffer, as well as for the refresh buffer:

glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);

 Depth buffer values can then be initialized with

glClear (GL_DEPTH_BUFFER_BIT);

 The preceding initialization sets all depth-buffer values to the maximum value 1.0 by default.

 The OpenGL depth-buffer visibility-detection routines are activated with the following function:

glEnable (GL_DEPTH_TEST);
And we deactivate the depth-buffer routines with:

glDisable (GL_DEPTH_TEST);

 We can also apply depth-buffer visibility testing using some other initial value for the maximum depth, and this initial value is chosen with the OpenGL function:

glClearDepth (maxDepth);

 Parameter maxDepth can be set to any value between 0.0 and 1.0.

 Projection coordinates in OpenGL are normalized to the range from −1.0 to 1.0, and the depth values between the near and far clipping planes are further normalized to the range from 0.0 to 1.0. As an option, we can adjust these normalization values with:

glDepthRange (nearNormDepth, farNormDepth);

 By default, nearNormDepth = 0.0 and farNormDepth = 1.0.

 But with the glDepthRange function, we can set these two parameters
to any values within the range from 0.0 to 1.0, including
nearNormDepth > farNormDepth

 Another option available in OpenGL is the test condition that is to be used for the depth-buffer routines. We specify a test condition with the following function:

glDepthFunc (testCondition);

O Parameter testCondition can be assigned any one of the following eight symbolic constants: GL_LESS, GL_GREATER, GL_EQUAL, GL_NOTEQUAL, GL_LEQUAL, GL_GEQUAL, GL_NEVER (no points are processed), and GL_ALWAYS.

O The default value for parameter testCondition is GL_LESS.

 We can also set the status of the depth buffer so that it is in a read-only state or in a read-
write state. This is accomplished with
glDepthMask (writeStatus);

O When writeStatus = GL_TRUE (the default value), we can both read from and
write to the depth buffer.

O With writeStatus = GL_FALSE, the write mode for the depth buffer is disabled
and we can retrieve values only for comparison in depth testing.

OpenGL Wire-Frame Surface-Visibility Methods

 A wire-frame display of a standard graphics object can be obtained in OpenGL by requesting that only its edges are to be generated.

 We do this by setting the polygon-mode function as, for example:

glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);

 But this displays both visible and hidden edges. The following sequence displays only the visible edges, by redrawing the polygon interiors in the background color with a small depth offset so that they cover the hidden edges:

glEnable (GL_DEPTH_TEST);
glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);
glColor3f (1.0, 1.0, 1.0);

/* Invoke the object-description routine. */

glPolygonMode (GL_FRONT_AND_BACK, GL_FILL);
glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (1.0, 1.0);
glColor3f (0.0, 0.0, 0.0);

/* Invoke the object-description routine again. */

glDisable (GL_POLYGON_OFFSET_FILL);
OpenGL Depth-Cueing Function

 We can vary the brightness of an object as a function of its distance from the viewing
position with

glEnable (GL_FOG);

glFogi (GL_FOG_MODE, GL_ LINEAR);

This applies the linear depth function to object colors using dmin = 0.0 and dmax = 1.0. But we
can set different values for dmin and dmax with the following function calls:

glFogf (GL_FOG_START, minDepth);

glFogf (GL_FOG_END, maxDepth);
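For reference, the linear depth-cueing factor that the GL_LINEAR fog mode applies to an object at depth d (a sketch of the standard relation, in the notation used above) is:

f_{depth}(d) = \frac{d_{max} - d}{d_{max} - d_{min}}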

 In these two functions, parameters minDepth and maxDepth are assigned floating-
point values, although integer values can be used if we change the function suffix to i.

 We can use the glFog function to set an atmosphere color that is to be combined with the color of an object after applying the linear depth-cueing function.

8a. explain the perspective projections (REPEAT)

b. Discuss the depth buffer method (ia3) (REPEAT)

July 2019

7a. What is 3D viewing? With the help of a block diagram, explain the 3D viewing pipeline
architecture (REPEAT)

7b. Design the transformation matrix for orthogonal and perspective projections (REPEAT)

7c. Explain depth buffer method and give the opengl visibility detection functions (ia3) (REPEAT)

8a. Explain the steps for transformation from world to viewing coordinate system (REPEAT)

b. Design the transformation matrix for perspective projection (REPEAT) and give openGL 3D
viewing functions
OpenGL Three-Dimensional Viewing Functions

OpenGL Viewing-Transformation Function

 We set the modelview mode with the statement:

glMatrixMode (GL_MODELVIEW);

 A viewing matrix is then formed and concatenated with the current modelview matrix. Viewing parameters are specified with the GLU function:

gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);


 This function designates the origin of the viewing reference frame as the world-
coordinate position P0 = (x0, y0, z0), the reference position as Pref =(xref, yref, zref), and
the view-up vector as V = (Vx, Vy, Vz).

 If we do not invoke the gluLookAt function, the default OpenGL viewing parameters
are

P0 = (0, 0, 0)

Pref = (0, 0, −1)

V=(0,1,0)

OpenGL Orthogonal-Projection Function

 We set up a projection-transformation matrix with:

glMatrixMode (GL_PROJECTION);

 Then, when we issue any transformation command, the resulting matrix will be concatenated with the current projection matrix.

 Orthogonal-projection parameters are chosen with the function:

glOrtho (xwmin, xwmax, ywmin, ywmax, dnear, dfar);

 All parameter values in this function are to be assigned double-precision, floating-point numbers.

 Function glOrtho generates a parallel projection that is perpendicular to the view plane.

 Parameters dnear and dfar denote distances in the negative zview direction from the
viewing-coordinate origin

 We can assign any values (positive, negative, or zero) to these parameters, so long as dnear < dfar.

 Example: glOrtho (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

OpenGL General Perspective-Projection Function


glFrustum (xwmin, xwmax, ywmin, ywmax, dnear, dfar);

 This function specifies a perspective projection that has either a symmetric frustum view volume or an oblique frustum view volume.

 All parameters in this function are assigned double-precision, floating-point numbers.


 The first four parameters set the coordinates for the clipping window on the near plane, and the last two parameters specify the distances from the coordinate origin to the near and far clipping planes along the negative zview axis.

OpenGL Viewports and Display Windows


glViewport (xvmin, yvmin, vpWidth, vpHeight);

 A rectangular viewport is defined.

 The first two parameters in this function specify the integer screen position of the
lower-left corner of the viewport relative to the lower-left corner of the display
window.

 And the last two parameters give the integer width and height of the viewport.

 To maintain the proportions of objects in a scene, we set the aspect ratio of the
viewport equal to the aspect ratio of the clipping window.

 Display windows are created and managed with GLUT routines. The default viewport
in OpenGL is the size and position of the current display window
c. Give the general classification of visible detection algorithm and explain anyone algorithm in
detail (ia3) (REPEAT)

4.10 Classification of Visible-Surface Detection Algorithms

 We can broadly classify visible-surface detection algorithms according to whether they deal with the object definitions or with their projected images.

 Object-space methods: compares objects and parts of objects to each other within
the scene definition to determine which surfaces, as a whole, we should label as
visible.

 Image-space methods: visibility is decided point by point at each pixel position on the projection plane.
 Although there are major differences in the basic approaches taken by the various
visible-surface detection algorithms, most use sorting and coherence methods to
improve performance.

 Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a scene according to their distance from the view plane.

 Coherence methods are used to take advantage of regularities in a scene.

Jan 2020

7a. explain the two classification of visible surface detection algorithm (ia3)

Visible surface detection algorithms are used in computer graphics to determine which surfaces or
objects are visible in a scene and which are hidden from view. There are two main types of visible
surface detection algorithms:

1. Object-space Algorithms:

Object-space algorithms operate in object space, meaning they analyze the objects themselves
rather than the image they produce. These algorithms are used to identify the visible surfaces of
each object in the scene before rendering the final image. Object-space algorithms include:

- Back-face culling: This algorithm removes all the faces of an object that face away from the viewer.
This is done by computing the normal vector of each face and checking if it points towards or away
from the viewer.

- Object-space depth ordering: This algorithm sorts the objects in the scene based on their distance from the viewer. The objects are then rendered back to front, so closer objects are drawn last and cover the more distant ones (the painter's algorithm).

2. Image-space Algorithms:

Image-space algorithms operate in image space, meaning they analyze the final image produced by
rendering the scene. These algorithms are used to identify the visible surfaces in the final image.
Image-space algorithms include:

- Scan-line algorithms: This algorithm scans the image line by line and identifies the visible surfaces
by comparing the depth values of the pixels.
- Depth-buffer (or z-buffer) algorithm: This algorithm uses a buffer called the depth buffer or z-buffer
to store the depth values of each pixel in the scene. By comparing the depth values of each pixel, the
algorithm determines which surfaces are visible and which are hidden.

Each type of algorithm has its own strengths and weaknesses. Object-space algorithms are generally
more accurate but can be computationally expensive, while image-space algorithms are generally
faster but may not be as accurate. The choice of which algorithm to use depends on the specific
needs of the application and the trade-offs between accuracy and performance.

7b. Explain with example the depth buffer algorithm used for visible surface detection. And also
list the advantages and disadvantages of depth buffer algorithm (ia3) (REPEAT)

c. Bring out the differences between perspective and parallel projections

8a. Explain the openGL 3 dimensional viewing functions (REPEAT)


b. what is projection reference point? Obtain the general and special case perspective
transformation equations (REPEAT)

c. explain back-face detection method with example (ia3)

4.11 Back-Face Detection

 A fast and simple object-space method for locating the back faces of a polyhedron is
based on front-back tests. A point (x, y, z) is behind a polygon surface if
A x + B y + Cz + D < 0

where A, B,C, and Dare the plane parameters for the polygon
 We can simplify the back-face test by considering the direction of the normal vector N for a polygon surface. If Vview is a vector in the viewing direction from our camera position, as shown in Figure below, then a polygon is a back face if

Vview · N > 0

 In a right-handed viewing system with the viewing direction along the negative zv
axis (Figure below), a polygon is a back face if the z component, C, of its normal
vector N satisfies C < 0.

 Also, we cannot see any face whose normal has z component C = 0, because our viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value that satisfies the inequality C ≤ 0 (see the code sketch at the end of this section).

 Similar methods can be used in packages that employ a left-handed viewing system.
In these packages, plane parameters A, B, C, and D can be calculated from polygon
vertex coordinates specified in a clockwise direction.

 Inequality 1 then remains a valid test for points behind the polygon.

 By examining parameter C for the different plane surfaces describing an object, we


can immediately identify all the back faces.

 For other objects, such as the concave polyhedron in Figure below, more tests must be
carried out to determine whether there are additional faces that are totally or partially
obscured by other faces

 In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
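A minimal C sketch of this back-face test for a right-handed viewing system is given below; the Vec3 type, the assumption of counterclockwise vertex order, and the helper names are illustrative choices, not part of the textbook material.

typedef struct { double x, y, z; } Vec3;

/* Polygon normal from three consecutive vertices (counterclockwise order),
   computed as the cross product (v2 - v1) x (v3 - v1). */
static Vec3 polygonNormal (Vec3 v1, Vec3 v2, Vec3 v3)
{
    Vec3 a = { v2.x - v1.x, v2.y - v1.y, v2.z - v1.z };
    Vec3 b = { v3.x - v1.x, v3.y - v1.y, v3.z - v1.z };
    Vec3 n = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return n;
}

/* Back-face test: with the viewing direction along the negative z-view axis,
   a polygon is a back face if the z component (C) of its normal satisfies C <= 0. */
static int isBackFace (Vec3 v1, Vec3 v2, Vec3 v3)
{
    Vec3 n = polygonNormal (v1, v2, v3);
    return n.z <= 0.0;
}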

Jan 2023

7a. Define orthogonal projections. Explain clipping window and orthogonal projection view
volume in 3D (REPEAT)

7b. Explain 3 dimensional view pipeline (REPEAT)


8a. Construct perspective projection transformation coordinates and perspective projection equations special cases (REPEAT)

b. Explain the depth buffer method and develop its algorithm (ia3) (REPEAT)

Mar 2022

7a. Explain how clipping window and orthogonal projection view volume are determined. give
normalization transformation matrix for an Orthogonal projection (REPEAT)

7b. Explain in detail about depth buffer algorithm for visible surface detection (ia3) (REPEAT)

8a. Briefly explain the concept of symmetric perspective projection frustum (REPEAT)

b. Write the code snippet for depth buffer functions and depth cueing function (ia3) (REPEAT)

#include <GL/glut.h>

// Define the window dimensions

const int WIDTH = 800;

const int HEIGHT = 600;

// Define the depth buffer parameters

float maxDepth = 1.0f;

float nearNormDepth = 0.0f;

float farNormDepth = 1.0f;

GLenum testCondition = GL_LESS;

GLboolean writeStatus = GL_TRUE;

// Define the fog parameters

GLenum fogMode = GL_LINEAR;

float minDepth = 0.0f;

float fogColor[4] = { 0.5f, 0.5f, 0.5f, 1.0f };

// Function to initialize the OpenGL context

void init() {

// Set the display mode

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);


// Set the window size

glutInitWindowSize(WIDTH, HEIGHT);

// Create the window

glutCreateWindow("Depth Buffer and Depth Cueing Example");

// Set the clear color

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

// Enable depth testing

glEnable(GL_DEPTH_TEST);

// Set the clear depth value

glClearDepth(maxDepth);

// Set the depth range

glDepthRange(nearNormDepth, farNormDepth);

// Set the depth testing function

glDepthFunc(testCondition);

// Set the depth buffer write status

glDepthMask(writeStatus);

// Enable fog

glEnable(GL_FOG);

// Set the fog mode

glFogi(GL_FOG_MODE, fogMode);

// Set the fog start and end depths


glFogf(GL_FOG_START, minDepth);

glFogf(GL_FOG_END, maxDepth);

// Set the fog color

glFogfv(GL_FOG_COLOR, fogColor);
}

// Function to display the scene

void display() {

// Clear the color and depth buffers

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Draw the scene here

// Flush the drawing commands (the display mode above is single-buffered)
glFlush();
}

// Main function

int main(int argc, char** argv) {

// Initialize the GLUT library

glutInit(&argc, argv);

// Initialize the OpenGL context

init();

// Register the display function

glutDisplayFunc(display);

// Enter the main loop

glutMainLoop();

return 0;
}

This code initializes the OpenGL context with the specified display mode and window size. It also
enables depth testing, sets the clear depth value and range, and specifies the depth testing function
and buffer write status. Additionally, it enables fog and sets the fog mode, start and end depths, and
color.

The `display()` function can be used to draw the scene using OpenGL drawing functions. In this
example, no objects are drawn, but the function could be modified to draw simple primitives or
more complex models.

Overall, this code provides a basic example of how to use depth buffer functions and depth cueing
function in an OpenGL application.

Aug 2021

7a. List and explain different projections supported in computer graphics in detail with example

Refer notes

7b. explain two visible surface detection methods, back-face detection methods and depth buffer
method (ia3) (REPEAT)

8a. write a simple program on to demonstrate 3D viewing of an object using opengL functions

#include <GL/glut.h>

GLint winWidth = 600, winHeight = 600;           // Initial display-window size.

GLfloat x0 = 100.0, y0 = 50.0, z0 = 50.0;        // Viewing-coordinate origin.
GLfloat xref = 50.0, yref = 50.0, zref = 0.0;    // Look-at point.
GLfloat Vx = 0.0, Vy = 1.0, Vz = 0.0;            // View-up vector.

/* Set coordinate limits for the clipping window: */
GLfloat xwMin = -40.0, ywMin = -60.0, xwMax = 40.0, ywMax = 60.0;

/* Set positions for near and far clipping planes: */
GLfloat dnear = 25.0, dfar = 125.0;

void init (void)
{
   glClearColor (1.0, 1.0, 1.0, 0.0);

   glMatrixMode (GL_MODELVIEW);
   gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);

   glMatrixMode (GL_PROJECTION);
   glFrustum (xwMin, xwMax, ywMin, ywMax, dnear, dfar);
}

void displayFcn (void)
{
   glClear (GL_COLOR_BUFFER_BIT);

   glColor3f (0.0, 1.0, 0.0);              // Set fill color to green.
   glPolygonMode (GL_FRONT, GL_FILL);
   glPolygonMode (GL_BACK, GL_LINE);       // Wire-frame back face.

   glBegin (GL_QUADS);
      glVertex3f (0.0, 0.0, 0.0);
      glVertex3f (100.0, 0.0, 0.0);
      glVertex3f (100.0, 100.0, 0.0);
      glVertex3f (0.0, 100.0, 0.0);
   glEnd ( );

   glFlush ( );
}

void reshapeFcn (GLint newWidth, GLint newHeight)
{
   glViewport (0, 0, newWidth, newHeight);
   winWidth = newWidth;
   winHeight = newHeight;
}

void main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowPosition (50, 50);
   glutInitWindowSize (winWidth, winHeight);
   glutCreateWindow ("Perspective View of A Square");

   init ( );
   glutDisplayFunc (displayFcn);
   glutReshapeFunc (reshapeFcn);
   glutMainLoop ( );
}

b. Compare parallel projections and perspective projection (REPEAT)

Jan 2021

7a. Explain the three dimensional viewing coordinate parameters

Three-Dimensional Viewing-Coordinate Parameters

 Select a world-coordinate position P0 = (x0, y0, z0) for the viewing origin, which is called the view point or viewing position, and specify a view-up vector V, which defines the yview direction.

 Figure below illustrates the positioning of a three-dimensional viewing-coordinate frame within a world system.
A right-handed viewing-coordinate system, with axes x view, y view, and z view, relative to a right-
handed world-coordinate frame.
The View-Plane Normal Vector

 Because the viewing direction is usually along the zview axis, the view plane, also
called the projection plane, is normally assumed to be perpendicular to this axis.

 Thus, the orientation of the view plane, as well as the direction for the positive zview axis, can be defined with a view-plane normal vector N.

 An additional scalar parameter is used to set the position of the view plane at some
coordinate value zvp along the zview axis,
 This parameter value is usually specified as a distance from the viewing origin along
the direction of viewing, which is often taken to be in the negative zview direction.

 Vector N can be specified in various ways. In some graphics systems, the direction for
N is defined to be along the line from the world-coordinate origin to a selected point
position.
 Other systems take N to be in the direction from a reference point Pref to the viewing
origin P0,

Specifying the view-plane normal vector N as the direction from a selected reference point P ref
to the viewing-coordinate origin P0.

The View-Up Vector

 Once we have chosen a view-plane normal vector N, we can set the direction for the
view-up vector V.
 This vector is used to establish the positive direction for the yview axis.
 Usually, V is defined by selecting a position relative to the world-coordinate origin,
so that the direction for the view-up vector is from the world origin to this selected
position

 Because the view-plane normal vector N defines the direction for the zview axis, vector
V should be perpendicular to N.

 But, in general, it can be difficult to determine a direction for V that is precisely perpendicular to N.

 Therefore, viewing routines typically adjust the user-defined orientation of vector V so that V is projected onto a plane that is perpendicular to the normal vector N.
The uvn Viewing-Coordinate Reference Frame

 Left-handed viewing coordinates are sometimes used in graphics packages, with the
viewing direction in the positive zview direction.

 With a left-handed system, increasing zview values are interpreted as being farther
from the viewing position along the line of sight.

 But right-handed viewing systems are more common, because they have the same
orientation as the world-reference frame.

 Because the view-plane normal N defines the direction for the zview axis and the view-
up vector V is used to obtain the direction for the yview axis, we need only determine
the direction for the xview axis.
 Using the input values for N and V,we can compute a third vector, U, that is
perpendicular to both N and V.
 Vector U then defines the direction for the positive xview axis.

 We determine the correct direction for U by taking the vector cross product of V and
N so as to form a right-handed viewing frame.

 The vector cross product of N and U also produces the adjusted value for V,
perpendicular to both N and U, along the positive yview axis.

 Following these procedures, we obtain the following set of unit axis vectors for a
right-handed viewing coordinate system.
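The resulting unit axis vectors (the standard uvn construction) are:

\mathbf{n} = \frac{\mathbf{N}}{|\mathbf{N}|} = (n_x, n_y, n_z), \qquad
\mathbf{u} = \frac{\mathbf{V}\times\mathbf{n}}{|\mathbf{V}\times\mathbf{n}|} = (u_x, u_y, u_z), \qquad
\mathbf{v} = \mathbf{n}\times\mathbf{u} = (v_x, v_y, v_z)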

 The coordinate system formed with these unit vectors is often described as a uvn
viewing-coordinate reference frame

7b. explain the orthogonal projection (REPEAT)

8a. explain the depth buffer method (ia3) (REPEAT)

b. Explain perspective projection transformation matrix (REPEAT)

c. Explain 3 dimensional viewing functions (REPEAT)

July 2018
7a. Explain in detail perspective projection transformation coordinates (REPEAT)

7b. write and explain depth buffer algorithm (ia3) (REPEAT)

8a. Explain in detail the symmetric perspective projection frustum (REPEAT)

Symmetric Perspective-Projection Frustum

 The line from the projection reference point through the center of the clipping window and on through the view volume is the centerline for a perspective-projection frustum.

 If this centerline is perpendicular to the view plane, we have a symmetric frustum (with respect to its centerline).

 Because the frustum centerline intersects the view plane at the coordinate location
(xprp, yprp, zvp), we can express the corner positions for the clipping window in terms
of the window dimensions:
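Because the centerline passes through (xprp, yprp, zvp), the corners of a clipping window of a given width and height can be written as (a sketch of the standard relations):

xw_{min} = x_{prp} - \frac{width}{2}, \quad xw_{max} = x_{prp} + \frac{width}{2}, \quad
yw_{min} = y_{prp} - \frac{height}{2}, \quad yw_{max} = y_{prp} + \frac{height}{2}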

 Another way to specify a symmetric perspective projection is to use parameters that approximate the properties of a camera lens.

 A photograph is produced with a symmetric perspective projection of a scene onto the film plane.

 Reflected light rays from the objects in a scene are collected on the film plane from
within the “cone of vision” of the camera.

 This cone of vision can be referenced with a field-of-view angle, which is a measure
of the size of the camera lens.

 A large field-of-view angle, for example, corresponds to a wide-angle lens.

 In computer graphics, the cone of vision is approximated with a symmetric frustum, and we can use a field-of-view angle to specify an angular size for the frustum.
 For a given projection reference point and view-plane position, the field-of-view angle determines the height of the clipping window. From the right triangles in the diagram of Figure below, we see that
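With θ denoting the field-of-view angle:

\tan\left(\frac{\theta}{2}\right) = \frac{height/2}{z_{prp} - z_{vp}}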

so that the clipping-window height can be calculated as
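Rearranging the relation above:

height = 2\,(z_{prp} - z_{vp})\tan\left(\frac{\theta}{2}\right)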

 Therefore, the diagonal elements with the value zprp −zvp could be replaced by either
of the following two expressions
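Using the height relation above and aspect = width/height, those two equivalent expressions are (a sketch, following the usual convention):

z_{prp} - z_{vp} = \frac{height}{2\tan(\theta/2)} = \frac{width}{2\,aspect\,\tan(\theta/2)}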

b. Explain the OpenGL visibility detection functions (ia3) (REPEAT)
