Mod 4
7a. Explain the perspective projection with reference point and vanishing point with neat diagrams
We can approximate this geometric-optics effect by projecting objects to the view plane
along converging paths to a position called the projection reference point (or center of
projection).
Objects are then displayed with foreshortening effects, and projections of distant objects
are smaller than the projections of objects of the same size that are closer to the view
plane
Figure below shows the projection path of a spatial position (x, y, z) to a general
projection reference point at (xprp, yprp, zprp).
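The coordinate positions along this projection line can be described in parametric form (a standard formulation, consistent with the parameter u used below):
x′ = x − (x − xprp) u
y′ = y − (y − yprp) u
z′ = z − (z − zprp) u,   0 ≤ u ≤ 1
Parameter u has the value 0 at position (x, y, z) and the value 1 at the projection reference point.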
The projection line intersects the view plane at the coordinate position (xp, yp, zvp), where
zvp is some selected position for the view plane on the zview axis.
On the view plane, z’ = zvp and we can solve the z’ equation for parameter u at this
position along the projection line:
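Setting z′ = zvp in the parametric equation for z′ gives
u = (zvp − z) / (zprp − z)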
Substituting this value of u into the equations for x’ and y’, we obtain the general
perspective-transformation equations
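Carrying out this substitution yields the standard forms
xp = x (zprp − zvp)/(zprp − z) + xprp (zvp − z)/(zprp − z)
yp = y (zprp − zvp)/(zprp − z) + yprp (zvp − z)/(zprp − z)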
Case 1: To simplify the perspective calculations, the projection reference point could be limited to positions along the zview axis, so that xprp = yprp = 0.
Case 2: Sometimes the projection reference point is fixed at the coordinate origin, so that xprp = yprp = zprp = 0.
Case 3: If the view plane is the uv plane (zvp = 0) and there are no restrictions on the placement of the projection reference point, the general equations simplify by setting zvp = 0.
Case 4: With the uv plane as the view plane and the projection reference point on the zview axis, we have xprp = yprp = zvp = 0, and the perspective equations take their simplest form (see below).
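For instance, substituting the Case 4 conditions (xprp = yprp = zvp = 0) into the general equations gives
xp = x zprp/(zprp − z) = x / (1 − z/zprp)
yp = y zprp/(zprp − z) = y / (1 − z/zprp)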
The view plane is usually placed between the projection reference point and the scene,
but, in general, the view plane could be placed anywhere except at the projection point.
If the projection reference point is between the view plane and the scene, objects are
inverted on the view plane (refer below figure)
Perspective effects also depend on the distance between the projection reference point
and the view plane, as illustrated in Figure below.
If the projection reference point is close to the view plane, perspective effects are
emphasized; that is, closer objects will appear much larger than more distant objects of
the same size.
Similarly, as the projection reference point moves farther from the view plane, the
difference in the size of near and far objects decreases
The point at which a set of projected parallel lines appears to converge is called a
vanishing point.
For a set of lines that are parallel to one of the principal axes of an object, the vanishing
point is referred to as a principal vanishing point.
We control the number of principal vanishing points (one, two, or three) with the
orientation of the projection plane, and perspective projections are accordingly classified
as one-point, two-point, or three-point projections
The displayed view of a scene includes only those objects within the pyramid, just as we
cannot see objects beyond our peripheral vision, which are outside the cone of vision.
By adding near and far clipping planes that are perpendicular to the zview axis (and
parallel to the view plane), we chop off parts of the infinite, perspective-projection view
volume to form a truncated pyramid, or frustum, view volume.
But with a perspective projection, we could also use the near clipping plane to take out
large objects close to the view plane that could project into unrecognizable shapes within
the clipping window.
Similarly, the far clipping plane could be used to cut out objects far from the projection
reference point that might project to small blots on the view plane.
The perspective-projection transformation of a viewing-coordinate position is then
accomplished in two steps.
First, we calculate the homogeneous coordinates using the perspective-transformation
matrix (given below), where the homogeneous parameter is h = zprp − z.
Second, after other processes have been applied, such as the normalization
transformation and clipping routines, homogeneous coordinates are divided by
parameter h to obtain the true transformation-coordinate positions.
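A sketch of the homogeneous perspective-transformation matrix used in the first step, written so that it is consistent with the projection equations above and with h = zprp − z (sz and tz are the z-normalization parameters discussed next):
| xh |   | zprp − zvp      0       −xprp    xprp zvp |   | x |
| yh | = |     0       zprp − zvp  −yprp    yprp zvp | . | y |
| zh |   |     0           0         sz        tz    |   | z |
| h  |   |     0           0        −1        zprp   |   | 1 |
Dividing by h then gives xp = xh/h and yp = yh/h.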
Parameters sz and tz are the scaling and translation factors for normalizing the
projected values of z-coordinates.
Specific values for sz and tz depend on the normalization range we select.
7b. Discuss depth buffer method with algorithm (ia3)
The algorithm is usually applied to scenes containing only polygon surfaces, because
depth values can be computed very quickly and the method is easy to implement.
The approach is also referred to as the z-buffer method, because object depth is usually
measured along the z axis of a viewing system.
Figure above shows three surfaces at varying distances along the orthographic projection line
from position (x, y) on a view plane.
If a surface is closer than any previously processed surfaces, its surface color is
calculated and saved, along with its depth.
The visible surfaces in a scene are represented by the set of surface colors that have
been saved after all surface processing is completed
As implied by the name of this method, two buffer areas are required. A depth buffer
is used to store depth values for each (x, y) position as surfaces are processed, and
the frame buffer stores the surface-color values for each pixel position.
Depth-Buffer Algorithm
1. Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
depthBuff (x, y) = 1.0, frameBuff (x, y) = backgndColor
2. Process each polygon in the scene, one at a time:
• For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already known).
• If z < depthBuff (x, y), compute the surface color at that position and set
depthBuff (x, y) = z, frameBuff (x, y) = surfColor (x, y)
After all surfaces have been processed, the depth buffer contains depth values for the visible
surfaces and the frame buffer contains the corresponding color values for those surfaces.
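A minimal C sketch of this per-pixel test (the buffer sizes, the packed-color type, and the function name are illustrative assumptions, not part of the notes):

#define XMAX 800
#define YMAX 600

float depthBuff[XMAX][YMAX];      /* depth values, initialized to 1.0 */
int   frameBuff[XMAX][YMAX];      /* packed surface colors, initialized to backgndColor */

/* Called for every projected pixel (x, y) of every polygon, with its depth z and color. */
void processPixel (int x, int y, float z, int surfColor)
{
    if (z < depthBuff[x][y]) {        /* closer than any previously processed surface */
        depthBuff[x][y] = z;          /* save the new depth */
        frameBuff[x][y] = surfColor;  /* save the corresponding surface color */
    }
}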
Given the depth values for the vertex positions of any polygon in a scene, we can
calculate the depth at any other point on the plane containing the polygon.
At surface position (x, y), the depth is calculated from the plane equation Ax + By + Cz + D = 0 as
z = (−Ax − By − D) / C
If the depth of position (x, y) has been determined to be z, then the depth z′ of the next
position (x + 1, y) along the scan line is obtained as
z′ = z − A/C
The ratio −A/C is constant for each surface, so succeeding depth values across a scan
line are obtained from preceding values with a single addition.
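As a small illustration (the function name and array layout are assumptions, not from the notes), the constant ratio is computed once and each successive depth needs only one addition:

/* Sketch: depths across one scan line of a polygon, using incremental updates.
   A and C are plane parameters of the polygon; zStart is the depth at the left edge. */
void scanLineDepths (float zStart, float A, float C, int xStart, int xEnd, float depths[])
{
    float step = -A / C;      /* constant for this surface, computed once */
    float z = zStart;
    for (int x = xStart; x <= xEnd; ++x) {
        depths[x] = z;        /* depth at pixel (x, y) */
        z += step;            /* next depth obtained with a single addition */
    }
}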
We can implement the depth-buffer algorithm by starting at a top vertex of the
polygon.
Then, we could recursively calculate the x-coordinate values down a left edge of the
polygon.
The x value for the beginning position on each scan line can be calculated from the
beginning (edge) x value of the previous scan line as x′ = x − 1/m, where m is the slope of the edge.
Depth values down this edge are then obtained recursively as z′ = z + (A/m + B)/C.
If we are processing down a vertical edge, the slope is infinite and the recursive
calculations reduce to z′ = z + B/C.
One slight complication with this approach is that while pixel positions are at integer
(x, y) coordinates, the actual point of intersection of a scan line with the edge of a
polygon may not be.
The method can be applied to curved surfaces by determining depth and color values
at each surface projection point.
Objects are processed in an arbitrary order, so that a color can be computed for a
surface point that is later replaced by a closer surface.
8a. Demonstrate the transformation from world coordinates to viewing coordinates with matrix
representation
First of all, we need to choose a viewing position corresponding to where we would place a
camera.
We choose the viewing position according to whether we want to display a front, back, side,
top, or bottom view of the scene.
We could also pick a position in the middle of a group of objects or even inside a single
object, such as a building or a molecule.
Some of the viewing operations for a three-dimensional scene are the same as, or similar to,
those used in the two-dimensional viewing pipeline.
Clipping windows, viewports, and display windows are usually specified as rectangles with
their edges parallel to the coordinate axes.
The viewing position, view plane, clipping window, and clipping planes are all specified
within the viewing-coordinate reference frame.
Figure above shows the general processing steps for creating and transforming a three-
dimensional scene to device coordinates.
Once the scene has been modeled in world coordinates, a viewing-coordinate system is
selected and the description of the scene is converted to viewing coordinates
Objects are mapped to normalized coordinates, and all parts of the scene outside the view
volume are clipped off.
We will assume that the viewport is to be specified in device coordinates and that
normalized coordinates are transferred to viewport coordinates, following the clipping operations.
The final step is to map viewport coordinates to device coordinates within a selected display
window
In the three-dimensional viewing pipeline, the first step after a scene has been
constructed is to transfer object descriptions to the viewing-coordinate reference
frame.
The viewing-coordinate origin is at world position P0 = (x0, y0, z0). Therefore, the
matrix for translating the viewing origin to the world origin is
For the rotation transformation, we can use the unit vectors u, v, and n to form the
composite rotation matrix that superimposes the viewing axes onto the world frame.
This transformation matrix is
where the elements of matrix R are the components of the uvn axis vectors.
The coordinate transformation matrix is then obtained as the product of the preceding
translation and rotation matrices:
Translation factors in this matrix are calculated as the vector dot product of each of
the u, v, and n unit vectors with P0, which represents a vector from the world origin to
the viewing origin.
These matrix elements are evaluated as
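That is, the translation terms evaluate to −u · P0, −v · P0, and −n · P0. Written out (the standard uvn formulation), the individual matrices and their product are:
T = | 1  0  0  −x0 |
    | 0  1  0  −y0 |
    | 0  0  1  −z0 |
    | 0  0  0   1  |

R = | ux  uy  uz  0 |
    | vx  vy  vz  0 |
    | nx  ny  nz  0 |
    |  0   0   0  1 |

MWC,VC = R . T = | ux  uy  uz  −u . P0 |
                 | vx  vy  vz  −v . P0 |
                 | nx  ny  nz  −n . P0 |
                 |  0   0   0     1    |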
b. Explain orthogonal projections in detail
A transformation of object descriptions to a view plane along lines that are all
parallel to the view-plane normal vector N is called an orthogonal projection also
termed as orthographic projection.
Orthogonal projections are most often used to produce the front, side, and top views
of an object
Front, side, and rear orthogonal projections of an object are called elevations; and a
top orthogonal projection is called a plan view
Axonometric and Isometric Orthogonal Projections
We can also form orthogonal projections that display more than one face of an
object. Such views are called axonometric orthogonal projections.
The most commonly used axonometric projection is the isometric projection, which is
generated by aligning the projection plane (or the object) so that the plane intersects
each coordinate axis in which the object is defined, called the principal axes, at the
same distance from the origin
With the projection direction parallel to the zview axis, the transformation equations
for an orthogonal projection are trivial. For any position (x, y, z) in viewing
coordinates, as in Figure below, the projection coordinates are xp = x, yp = y
The edges of the clipping window specify the x and y limits for the part of the scene
that we want to display.
These limits are used to form the top, bottom, and two sides of a clipping region
called the orthogonal-projection view volume.
Because projection lines are perpendicular to the view plane, these four boundaries
are planes that are also perpendicular to the view plane and that pass through the
edges of the clipping window to form an infinite clipping region, as in Figure below.
We can limit the extent of the view volume in the zview direction by selecting positions
for two additional boundary planes that are parallel to the view plane.
These two planes are called the near-far clipping planes, or the front-back clipping
planes.
The near and far planes allow us to exclude objects that are in front of or behind the
part of the scene that we want to display.
When the near and far planes are specified, we obtain a finite orthogonal view volume
that is a rectangular parallelepiped, as shown in Figure below along with one
possible placement for the view plane
Once we have established the limits for the view volume, coordinate descriptions
inside this rectangular parallelepiped are the projection coordinates, and they can be
mapped into a normalized view volume without any further projection processing.
Some graphics packages use a unit cube for this normalized view volume, with each
of the x, y, and z coordinates normalized in the range from 0 to 1.
Another normalization-transformation approach is to use a symmetric cube, with
coordinates in the range from −1 to 1
Also, z-coordinate positions for the near and far planes are denoted as znear and zfar,
respectively. Figure below illustrates this normalization transformation
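One common form of this normalization matrix (a sketch, assuming the symmetric range from −1 to 1, with znear mapping to −1 and zfar to +1):
Mortho,norm =
| 2/(xwmax − xwmin)          0                  0           −(xwmax + xwmin)/(xwmax − xwmin) |
|         0          2/(ywmax − ywmin)          0           −(ywmax + ywmin)/(ywmax − ywmin) |
|         0                  0          −2/(znear − zfar)     (znear + zfar)/(znear − zfar)  |
|         0                  0                  0                           1                |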
7a. Explain with an example the depth buffer algorithm used for visible surface detection. Discuss the
advantages and disadvantages (ia3)
Consider a scene with two objects, a red cube and a blue sphere, in front of a grey background. The
viewer is positioned at (0, 0, 5), looking towards the origin.
The depth buffer is initialized with the maximum depth value, which is set to infinity. The cube and
the sphere are projected onto the screen and rasterized into the depth buffer, computing the depth
value for each pixel.
Suppose the red cube has a depth value of 3 at pixel (100, 100), while the blue sphere has a depth
value of 2 at the same pixel. Since the depth value of the sphere is less than that of the cube, the
depth buffer is updated with the new depth value and the color of the pixel, which is blue.
Similarly, at pixel (150, 150), the cube has a depth value of 2, while the sphere has a depth value of
4. In this case, the depth buffer is not updated since the cube is closer to the viewer and therefore
visible.
After all objects have been rasterized, the final image is output by reading the color values from the
frame buffer (the depth buffer holds only the depth of the closest surface at each pixel). In this
example, the pixels with the updated colors are those where the sphere is visible and the cube is
hidden behind it.
Overall, the depth buffer algorithm is a fast and effective method for visible surface detection that is
widely used in computer graphics.
The depth buffer algorithm has several advantages and disadvantages that are important to consider
when using it for visible surface detection in computer graphics:
Advantages:
1. Accuracy: The depth buffer algorithm is a highly accurate method for visible surface detection. It
can accurately identify which surfaces are visible and which are hidden, even in complex scenes with
overlapping objects.
2. Speed: The depth buffer algorithm is a very fast method for visible surface detection. It can
quickly process large amounts of data, making it ideal for real-time applications such as video games
and simulations.
3. Flexibility: The depth buffer algorithm is a highly flexible method for visible surface detection. It
can be easily combined with other techniques such as texture mapping and lighting to create
complex and realistic scenes.
4. Easy to implement: The depth buffer algorithm is relatively easy to implement and does not
require a lot of computational resources. This makes it accessible to a wide range of developers and
users.
Disadvantages:
1. Memory usage: The depth buffer algorithm requires a large amount of memory to store the depth
values for each pixel in the scene. This can be a problem in scenes with a large number of objects or
high resolution images.
2. Limited depth precision: The depth buffer algorithm is limited in its ability to represent depth
values with high precision. This can cause problems in scenes with objects that are very close
together or very far apart.
3. Z-fighting: The depth buffer algorithm can suffer from a phenomenon known as z-fighting, where
objects with very similar depth values are rendered incorrectly. This can cause flickering or flashing
in the final image.
4. Overdraw: Because objects are processed in an arbitrary order, the algorithm may compute colors
for surface points that are later overwritten by closer surfaces, wasting computation in scenes with
high depth complexity.
Overall, the depth buffer algorithm is a powerful and widely used method for visible surface
detection in computer graphics. While it has some limitations, it remains one of the most accurate
and efficient methods available for this task.
7b. Explain 3D viewing pipeline with neat diagram and transformation from world to viewing
coordinates (Repeat)
b. Explain perspective projection with reference point and vanishing point with the neat diagram
(REPEAT)
Sep 2020
OpenGL back-face removal is performed with the culling routines
glEnable (GL_CULL_FACE);
glCullFace (mode);
where parameter mode is GL_BACK (the default), GL_FRONT, or GL_FRONT_AND_BACK.
Depth-buffer visibility testing is activated by clearing the depth buffer and enabling the depth test:
glClear (GL_DEPTH_BUFFER_BIT);
glEnable (GL_DEPTH_TEST);
and we deactivate the depth-buffer routines
with glDisable (GL_DEPTH_TEST);
We can also apply depth-buffer visibility testing using some other initial value for
the maximum depth, and this initial value is chosen with the OpenGL function:
glClearDepth (maxDepth);
Parameter maxDepth can be set to any value between 0.0 and 1.0.
By default, depth values are normalized to the range from 0.0 (near) to 1.0 (far). But with the function
glDepthRange (nearNormDepth, farNormDepth);
we can set these two parameters to any values within the range from 0.0 to 1.0, including
nearNormDepth > farNormDepth
Another option available in OpenGL is the test condition that is to be used for the depth-
buffer routines. We specify a test condition with the following function:
glDepthFunc (testCondition);
We can also set the status of the depth buffer so that it is in a read-only state or in a read-
write state. This is accomplished with
glDepthMask (writeStatus);
• When writeStatus = GL_TRUE (the default value), we can both read from and
write to the depth buffer.
• With writeStatus = GL_FALSE, the write mode for the depth buffer is disabled
and we can retrieve values only for comparison in depth testing.
glEnable (GL_DEPTH_TEST);
glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);
glColor3f (1.0, 1.0, 1.0);
glEnable (GL_POLYGON_OFFSET_FILL);
We can vary the brightness of an object as a function of its distance from the viewing
position with
glEnable (GL_FOG);
This applies the linear depth function to object colors using dmin = 0.0 and dmax = 1.0. But we
can set different values for dmin and dmax with the following function calls:
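The standard GL fog-parameter calls referred to here are
glFogi (GL_FOG_MODE, GL_LINEAR);
glFogf (GL_FOG_START, minDepth);
glFogf (GL_FOG_END, maxDepth);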
In these two functions, parameters minDepth and maxDepth are assigned floating-
point values, although integer values can be used if we change the function suffix to i.
We can use the glFog function to set an atmosphere color that is to be combined with
the color of an object after applying the linear depth-cueing function.
July 2019
7a. What is 3D viewing? With the help of a block diagram, explain the 3D viewing pipeline
architecture (REPEAT)
7b. Design the transformation matrix for orthogonal and perspective projections (REPEAT)
7c. Explain depth buffer method and give the opengl visibility detection functions (ia3) (REPEAT)
8a. Explain the steps for transformation from world to viewing coordinate system (REPEAT)
b. Design the transformation matrix for perspective projection (REPEAT) and give openGL 3D
viewing functions
OpenGL Three-Dimensional Viewing Functions
glMatrixMode (GL_MODELVIEW);
We set the modelview mode with the statement above; the viewing matrix that is formed
next is then concatenated with the current modelview matrix.
gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
If we do not invoke the gluLookAt function, the default OpenGL viewing parameters
are
P0 = (0, 0, 0)
Pref = (0, 0, −1)
V = (0, 1, 0)
glMatrixMode (GL_PROJECTION);
Then, when we issue any transformation command, the resulting matrix will be
concatenated with the current projection matrix.
glOrtho (xwmin, xwmax, ywmin, ywmax, dnear, dfar);
Parameters dnear and dfar denote distances in the negative zview direction from the
viewing-coordinate origin
We can assign any values (positive, negative, or zero) to these parameters, so long as
dnear < dfar.
A perspective projection that has either a symmetric frustum view volume or
an oblique frustum view volume is specified with
glFrustum (xwmin, xwmax, ywmin, ywmax, dnear, dfar);
where the first four parameters set the clipping-window limits and the last two
parameters specify the distances from the coordinate origin to the near and far
clipping planes along the negative zview axis.
A viewport is then selected with
glViewport (xvmin, yvmin, vpWidth, vpHeight);
The first two parameters in this function specify the integer screen position of the
lower-left corner of the viewport relative to the lower-left corner of the display
window, and the last two parameters give the integer width and height of the viewport.
To maintain the proportions of objects in a scene, we set the aspect ratio of the
viewport equal to the aspect ratio of the clipping window.
Display windows are created and managed with GLUT routines. The default viewport
in OpenGL is the size and position of the current display window
c. Give the general classification of visible surface detection algorithms and explain any one algorithm in
detail (ia3) (REPEAT)
Object-space methods: compare objects and parts of objects to each other within
the scene definition to determine which surfaces, as a whole, we should label as
visible.
Image-space methods: visibility is decided point by point at each pixel position on the
projection plane.
Jan 2020
7a. Explain the two classifications of visible surface detection algorithms (ia3)
Visible surface detection algorithms are used in computer graphics to determine which surfaces or
objects are visible in a scene and which are hidden from view. There are two main types of visible
surface detection algorithms:
1. Object-space Algorithms:
Object-space algorithms operate in object space, meaning they analyze the objects themselves
rather than the image they produce. These algorithms are used to identify the visible surfaces of
each object in the scene before rendering the final image. Object-space algorithms include:
- Back-face culling: This algorithm removes all the faces of an object that face away from the viewer.
This is done by computing the normal vector of each face and checking if it points towards or away
from the viewer.
- Object-space depth ordering: This algorithm sorts the objects in the scene based on their distance
from the viewer. The objects are then rendered in order, with the closest object appearing first.
2. Image-space Algorithms:
Image-space algorithms operate in image space, meaning they analyze the final image produced by
rendering the scene. These algorithms are used to identify the visible surfaces in the final image.
Image-space algorithms include:
- Scan-line algorithms: This algorithm scans the image line by line and identifies the visible surfaces
by comparing the depth values of the pixels.
- Depth-buffer (or z-buffer) algorithm: This algorithm uses a buffer called the depth buffer or z-buffer
to store the depth values of each pixel in the scene. By comparing the depth values of each pixel, the
algorithm determines which surfaces are visible and which are hidden.
Each type of algorithm has its own strengths and weaknesses. Object-space algorithms are generally
more accurate but can be computationally expensive, while image-space algorithms are generally
faster but may not be as accurate. The choice of which algorithm to use depends on the specific
needs of the application and the trade-offs between accuracy and performance.
7b. Explain with example the depth buffer algorithm used for visible surface detection. And also
list the advantages and disadvantages of depth buffer algorithm (ia3) (REPEAT)
A fast and simple object-space method for locating the back faces of a polyhedron is
based on front-back tests. A point (x, y, z) is behind a polygon surface if
Ax + By + Cz + D < 0
where A, B, C, and D are the plane parameters for the polygon.
We can simplify the back-face test by considering the direction of the normal vector
N for a polygon surface. If Vview is a vector in the viewing direction from our camera
position, then the polygon is a back face if
Vview . N > 0
In a right-handed viewing system with the viewing direction along the negative zv
axis (Figure below), a polygon is a back face if the z component, C, of its normal
vector N satisfies C < 0.
Also, we cannot see any face whose normal has z component C = 0, because our
viewing direction is grazing that polygon. Thus, in general, we can label any polygon
as a back face if its normal vector has a z component value that satisfies the inequality
C ≤ 0
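A minimal C sketch of these two tests (the function names are illustrative, not from the notes):

#include <stdbool.h>

/* General test: Vview is the viewing-direction vector, N = (A, B, C) is the polygon normal. */
bool isBackFace (const float Vview[3], const float N[3])
{
    float dot = Vview[0]*N[0] + Vview[1]*N[1] + Vview[2]*N[2];
    return dot > 0.0f;               /* back face if Vview . N > 0 */
}

/* Special case: right-handed viewing system looking along the negative zview axis,
   where only the z component C of the polygon normal is needed. */
bool isBackFaceAlongNegZ (float C)
{
    return C <= 0.0f;                /* back face (or grazing) if C <= 0 */
}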
Similar methods can be used in packages that employ a left-handed viewing system.
In these packages, plane parameters A, B, C, and D can be calculated from polygon
vertex coordinates specified in a clockwise direction.
Inequality 1 then remains a valid test for points behind the polygon.
For other objects, such as the concave polyhedron in Figure below, more tests must be
carried out to determine whether there are additional faces that are totally or partially
obscured by other faces
In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
Jan 2023
7a. Define orthogonal projections. Explain clipping window and orthogonal projection view
volume in 3D (REPEAT)
b. Explain the depth buffer method and develop its algorithm (ia3) (REPEAT)
Mar 2022
7a. Explain how clipping window and orthogonal projection view volume are determined. give
normalization transformation matrix for an Orthogonal projection (REPEAT)
7b. Explain in detail about depth buffer algorithm for visible surface detection (ia3) (REPEAT)
8a. Briefly explain the concept of symmetric perspective projection frustum (REPEAT)
b. Write the code snippet for depth buffer functions and depth cueing function (ia3) (REPEAT)
A runnable sketch, with illustrative values assumed for the parameters that the notes leave symbolic:

#include <GL/glut.h>

/* Assumed parameter values for the depth-buffer and depth-cueing (fog) functions */
#define WIDTH  400
#define HEIGHT 400
GLfloat maxDepth = 1.0, minDepth = 0.0, nearNormDepth = 0.0, farNormDepth = 1.0;
GLenum  testCondition = GL_LESS;
GLboolean writeStatus = GL_TRUE;
GLint   fogMode = GL_LINEAR;
GLfloat fogColor[4] = { 0.5, 0.5, 0.5, 1.0 };

void init() {
    glEnable(GL_DEPTH_TEST);                     /* activate depth-buffer visibility testing */
    glClearDepth(maxDepth);                      /* initial (maximum) depth value */
    glDepthRange(nearNormDepth, farNormDepth);   /* normalized depth range */
    glDepthFunc(testCondition);                  /* depth-test condition */
    glDepthMask(writeStatus);                    /* read-write status of the depth buffer */

    /* Enable fog (depth cueing) */
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, fogMode);
    glFogf(GL_FOG_START, minDepth);
    glFogf(GL_FOG_END, maxDepth);
    glFogfv(GL_FOG_COLOR, fogColor);
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* Scene objects would be drawn here */
    glutSwapBuffers();
}

/* Main function */
int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("Depth buffer and depth cueing");
    init();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
This code initializes the OpenGL context with the specified display mode and window size. It also
enables depth testing, sets the clear depth value and range, and specifies the depth testing function
and buffer write status. Additionally, it enables fog and sets the fog mode, start and end depths, and
color.
The `display()` function can be used to draw the scene using OpenGL drawing functions. In this
example, no objects are drawn, but the function could be modified to draw simple primitives or
more complex models.
Overall, this code provides a basic example of how to use depth buffer functions and depth cueing
function in an OpenGL application.
Aug 2021
7a. List and explain different projections supported in computer graphics in detail with example
Refer notes
7b. Explain two visible surface detection methods: the back-face detection method and the depth buffer
method (ia3) (REPEAT)
8a. Write a simple program to demonstrate 3D viewing of an object using OpenGL functions
A runnable sketch assembled from these fragments, with the viewing parameters, projection volume, and square vertices filled in as assumed values (they are not given in the notes):

#include <GL/glut.h>
GLint winWidth = 600, winHeight = 600;   /* assumed initial display-window size */
void init (void)
{
    glMatrixMode (GL_MODELVIEW);
    gluLookAt (100.0, 50.0, 50.0, 50.0, 50.0, 0.0, 0.0, 1.0, 0.0);  /* assumed P0, Pref, V */
    glMatrixMode (GL_PROJECTION);
    glFrustum (-40.0, 40.0, -40.0, 40.0, 25.0, 125.0);   /* assumed perspective view volume */
}
void displayFcn (void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glColor3f (0.0, 1.0, 0.0);          // Set fill color to green.
    glPolygonMode (GL_FRONT, GL_FILL);
    glBegin (GL_QUADS);                 /* square in the plane z = 0 (assumed vertices) */
        glVertex3f (0.0, 0.0, 0.0);
        glVertex3f (100.0, 0.0, 0.0);
        glVertex3f (100.0, 100.0, 0.0);
        glVertex3f (0.0, 100.0, 0.0);
    glEnd ( );
    glFlush ( );
}
void reshapeFcn (GLint newWidth, GLint newHeight)
{
    glViewport (0, 0, newWidth, newHeight);
    winWidth = newWidth;
    winHeight = newHeight;
}
int main (int argc, char **argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition (50, 50);
    glutInitWindowSize (winWidth, winHeight);
    glutCreateWindow ("Perspective View of A Square");
    init ( );
    glutDisplayFunc (displayFcn);
    glutReshapeFunc (reshapeFcn);
    glutMainLoop ( );
    return 0;
}
Jan 2021
Select a world-coordinate position P0 = (x0, y0, z0) for the viewing origin, which is called
the view point or viewing position, and specify a view-up vector V, which defines
the yview direction.
called the projection plane, is normally assumed to be perpendicular to this axis.
Thus, the orientation of the view plane, as well as the direction for the positive zview
axis, can be definedwith a view-plane normal vector N,
An additional scalar parameter is used to set the position of the view plane at some
coordinate value zvp along the zview axis.
This parameter value is usually specified as a distance from the viewing origin along
the direction of viewing, which is often taken to be in the negative zview direction.
Vector N can be specified in various ways. In some graphics systems, the direction for
N is defined to be along the line from the world-coordinate origin to a selected point
position.
Other systems take N to be in the direction from a reference point Pref to the viewing
origin P0.
Figure: specifying the view-plane normal vector N as the direction from a selected reference point Pref
to the viewing-coordinate origin P0.
Once we have chosen a view-plane normal vector N, we can set the direction for the
view-up vector V.
This vector is used to establish the positive direction for the yview axis.
Usually, V is defined by selecting a position relative to the world-coordinate origin,
so that the direction for the view-up vector is from the world origin to this selected
position
Because the view-plane normal vector N defines the direction for the zview axis, vector
V should be perpendicular to N.
Left-handed viewing coordinates are sometimes used in graphics packages, with the
viewing direction in the positive zview direction.
With a left-handed system, increasing zview values are interpreted as being farther
from the viewing position along the line of sight.
But right-handed viewing systems are more common, because they have the same
orientation as the world-reference frame.
Because the view-plane normal N defines the direction for the zview axis and the view-
up vector V is used to obtain the direction for the yview axis, we need only determine
the direction for the xview axis.
Using the input values for N and V, we can compute a third vector, U, that is
perpendicular to both N and V.
Vector U then defines the direction for the positive xview axis.
We determine the correct direction for U by taking the vector cross product of V and
N so as to form a right-handed viewing frame.
The vector cross product of N and U also produces the adjusted value for V,
perpendicular to both N and U, along the positive yview axis.
Following these procedures, we obtain the following set of unit axis vectors for a
right-handed viewing coordinate system.
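In equation form, these unit axis vectors are
n = N / |N| = (nx, ny, nz)
u = (V × N) / |V × N| = (ux, uy, uz)
v = n × u = (vx, vy, vz)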
The coordinate system formed with these unit vectors is often described as a uvn
viewing-coordinate reference frame
July 2018
7a. Explain in detail perspective projection transformation coordinates (REPEAT)
The line from the projection reference point through the center of the clipping
window and on through the view volume is the centerline for a perspective-projection
frustum.
Because the frustum centerline intersects the view plane at the coordinate location
(xprp, yprp, zvp), we can express the corner positions for the clipping window in terms
of the window dimensions:
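For this symmetric case, the corner positions follow directly from the clipping-window width and height:
xwmin = xprp − width/2,   xwmax = xprp + width/2
ywmin = yprp − height/2,  ywmax = yprp + height/2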
Reflected light rays from the objects in a scene are collected on the film plane from
within the “cone of vision” of the camera.
This cone of vision can be referenced with a field-of-view angle, which is a measure
of the size of the camera lens.
Therefore, the diagonal elements with the value zprp −zvp could be replaced by either
of the following two expressions
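One consistent pair of expressions, derived from the field-of-view relation tan(θ/2) = (height/2)/(zprp − zvp), is
zprp − zvp = (height/2) cot(θ/2) = height / (2 tan(θ/2))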