Computer Graphics
BCA Part-III
Bindiya Patel
Revised By : Ms Ujjwala
Lecturer
Deptt. of Information Technology
Biyani Girls College, Jaipur
Published by :
Think Tanks
Biyani Group of Colleges
ISBN: 978-93-82801-34-4
While every effort has been taken to avoid errors or omissions in this publication, any mistake or omission that may have crept in is not intentional. It may be noted that neither the publisher nor the author will be responsible for any damage or loss of any kind arising to anyone in any manner on account of such errors and omissions.
Preface
I am glad to present this book, especially designed to serve the needs of the students. The book has been written keeping in mind the general weakness in understanding the fundamental concepts of the topics. The book is self-explanatory and adopts the “Teach Yourself” style. It is based on a question-answer pattern. The language of the book is quite easy and understandable, based on a scientific approach.
This book covers the basic concepts of computer graphics: graphics applications and raster graphics, transformations, output primitives, clipping, visible-surface detection, curves and image processing.
Any further improvement in the contents of the book, by way of corrections, omissions and inclusions, is keenly sought on the basis of suggestions from the readers, for which the author shall be obliged.
I acknowledge special thanks to Mr. Rajeev Biyani, Chairman, and Dr. Sanjay Biyani, Director (Acad.), Biyani Group of Colleges, who are the backbone and main concept providers and who have been a constant source of motivation throughout this endeavour. They played an active role in coordinating the various stages of this endeavour and spearheaded the publishing work.
I look forward to receiving valuable suggestions from professors of various educational institutions, other faculty members and students for improvement of the quality of the book. Readers may feel free to send their comments and suggestions to the undermentioned address.
Author
Syllabus
BCA III
Computer Graphics
Content
S.No. Name of Topic
1. Graphics Application and raster graphics
1.1 Introduction to Computer Graphics
1.2 Application of Computer Graphics
1.3 Video Display Devices
1.4 Raster Scan Displays
1.5 Random Scan Displays
1.6 Color CRT Monitor
1.7 Shadow Mask Methods
2. Transformation
2.1 Transformation in 2-dimension & 3-dimension
2.2 Rotation in 2-dimension & 3-dimension
2.3 Scaling in 2-dimension & 3-dimension
2.4 Composite Transformation
2.5 Reflection
2.6 Shear
3. Output Primitives
3.1 Line Drawing Algorithms
(a) DDA
(b) Bresenham's Algorithm
3.2 Circle Drawing Algorithm
3.3 Ellipse Drawing Algorithm
3.4 Boundary Fill Algorithm
3.5 Flood Fill Algorithm
4. Clipping Algorithm
4.1 Introduction to Clipping
4.2 Application of Clipping
4.3 Line Clipping Methods
(a) Cohen Sutherland Method
(b) Cyrus – Beck Algorithm
7. Image Processing
7.1 Introduction to Image Processing
7.2 Operations of Image Processing
7.3 Application of Image Processing
7.4 Image Enhancement Techniques
□□□
Chapter-1
Graphics Applications and Raster Graphics
Fig.1
Fig.2
The primary components of an electron gun in a CRT are the heated metal cathode and a control grid, as in Fig. 2.
through the coils. When electrostatic deflection is used, two pairs of parallel plates are mounted inside the CRT envelope. One pair of plates is mounted horizontally to control the vertical deflection, and the other pair is mounted vertically to control horizontal deflection (Fig. 2-4).
Spots of light are produced on the screen by the transfer of the CRT beam
energy to the phosphor. When the electrons in the beam collide with the
phosphor coating, they are stopped and their energy is absorbed by the
phosphor. Part of the beam energy is converted by friction into heat energy,
and the remainder causes electrons in the phosphor atoms to move up to
higher quantum-energy levels. After a short time, the “excited” phosphor
electrons begin dropping back to their stable ground state, giving up their
extra energy as small quanta of light energy. What we see on the screen is
the combined effect of all the electron light emissions: a glowing spot that
quickly fades after all the excited phosphor electrons have returned to their
ground energy level. The frequency (or color) of the light emitted by the
phosphor is proportional to the energy difference between the excited
quantum state and the ground state.
Different kinds of phosphors are available for use in a CRT. Besides color, a major difference between phosphors is their persistence : how long they continue to emit light (that is, have excited electrons returning to the ground state) after the CRT beam is removed. Persistence is defined as the time it takes the emitted light from the screen to decay to one-tenth of its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker. A phosphor with low persistence is useful for animation, while a high-persistence phosphor is useful for displaying highly complex, static pictures. Graphics monitors are usually constructed with persistence in the range from 10 to 60 microseconds.
Figure 2-5 shows the intensity distribution of a spot on the screen. The
intensity is greatest at the center of the spot, and decreases with a Gaussian
distribution out to the edge of the spot. This distribution corresponds to the
cross-sectional electron density distribution of the CRT beam.
The maximum number of points that can be displayed without overlap on a
CRT is referred to as the resolution. A more precise definition of resolution is
the number of points per centimeter that can be plotted horizontally and
vertically, although it is often simply stated as the total number of points in
each direction. Spot intensity has a Gaussian distribution (Fig. 2-5), so two adjacent spots will appear distinct as long as their separation is greater than the diameter at which each spot has an intensity of about 60 percent of that at the center of the spot (Fig. 2-6). Spot size also
depends on intensity. As more electrons are accelerated toward the phosphor
per second, the CRT beam diameter and the illuminated spot increase. In
addition, the increased excitation energy tends to spread to neighboring
phosphor atoms not directly in the path of the beam, which further increases
the spot diameter. Thus, resolution of a CRT is dependent on the type of
phosphor, the intensity to be displayed, and the focusing and deflection
system. Typical resolution on a high-quality system is 1280 by 1024, with higher resolutions available on many systems. High-resolution systems are often referred to as high-definition systems. The physical size of a graphics monitor is given as the length of the screen diagonal, with sizes varying from
about 12 inches to 27 inches or more. A CRT monitor can be attached to a
variety of computer systems, so the number of screen points that can actually
be plotted depends on the capabilities of the system to which it is attached.
Another property of video monitors is aspect ratio. This number gives the
ratio of vertical points to horizontal points necessary to produce equal-length
lines in both directions on the screen. (Sometimes aspect ratio is stated in
terms of the ratio of horizontal to vertical points.) An aspect ratio of ¾ means
that a vertical line plotted with three points has the same length as a
horizontal line plotted with four points.
Q.3 Write short note on Raster–Scan Displays and Random Scan Displays.
Ans.: Raster–Scan Displays : The most common type of graphics monitor
employing a CRT is the raster-scan display, based on television technology.
In a raster-scan system, the electron beam is swept across the screen, one row
at a time from top to bottom. As the electron beam moves across each row,
the beam intensity is turned on and off to create a pattern of illuminated
spots. Picture definition is stored in a memory area called the refresh buffer
or frame buffer. This memory area holds the set of intensity values for all the
screen points. Stored intensity values are then retrieved from the refresh
buffer and “Painted” on the screen one row (scan line) at a time (Fig. 2-7).
Each screen point is referred to as a pixel or pel (shortened forms of picture
element). The capability of a raster-scan system to store intensity information
for each screen point makes it well suited for the realistic display of scenes
containing subtle shading and color patterns. Home television sets and printers are examples of other systems using raster-scan methods.
Intensity range for pixel positions depends on the capability of the raster system. In a simple black-and-white system, each screen point is either on or off, so only one bit per pixel is needed to control the intensity of screen positions. For a bi-level system, a bit value of 1 indicates that the electron beam is to be turned on at that position, and a value of 0 indicates that the beam intensity is to be off. Additional bits are needed when color and intensity variations can be displayed. Up to 24 bits per pixel are included in high-quality systems, which can require several megabytes of storage for the frame buffer, depending on the resolution of the system. A system with 24 bits per pixel and a screen resolution of 1024 by 1024 requires 3 megabytes of storage for the frame buffer. On a black-and-white system with one bit per pixel, the frame buffer is commonly called a bitmap. For systems with multiple bits per pixel, the frame buffer is often referred to as a pixmap.
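The storage figure above follows directly from multiplying the pixel count by the bits per pixel. A minimal sketch of the arithmetic (plain Python; the resolutions used below are illustrative):

```python
def frame_buffer_megabytes(width, height, bits_per_pixel):
    """Return the frame-buffer storage requirement in megabytes."""
    total_bits = width * height * bits_per_pixel
    total_bytes = total_bits / 8
    return total_bytes / (1024 * 1024)

# 24 bits per pixel at 1024 x 1024 needs about 3 MB, as stated above.
print(frame_buffer_megabytes(1024, 1024, 24))   # 3.0
# A 1-bit-per-pixel bitmap at the same resolution needs only 0.125 MB.
print(frame_buffer_megabytes(1024, 1024, 1))    # 0.125
```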
Q.4 Write short note on Color CRT Monitor. Explain Shadow Mask Method.
Ans.: A CRT monitor displays color pictures by using a combination of phosphors that emit different-colored light. By combining the emitted light from the different phosphors, a range of colors can be generated. The two basic
techniques for producing color displays with a CRT are the beam-penetration
method and the shadow-mask method.
The beam-penetration method for displaying color pictures has been used
with random-scan monitors. Two layers of phosphor, usually red and green,
are coated onto the inside of the CRT screen, and the displayed color depends
on how far the electron beam penetrates into the phosphor layers. A beam of
slow electrons excites only the outer red layer. A beam of very fast electrons penetrates through the red layer and excites the inner green layer. At
intermediate beam speeds, combinations of red and green light are emitted to
show two additional colors, orange and yellow. The speed of the electrons,
and hence the screen color at any point, is controlled by the beam-
acceleration voltage. Beam penetration has been an inexpensive way to produce color in random-scan monitors, but only four colors are possible, and the quality of pictures is not as good as with other methods.
systems can set intermediate intensity levels for the electron beam, allowing several million different colors to be generated.
Color graphics systems can be designed to be used with several types of CRT display devices. Some inexpensive home-computer systems and video games are designed for use with a color TV set and an RF (radio-frequency) modulator. The purpose of the RF modulator is to simulate the signal from a broadcast TV station. This means that the color and intensity information of the picture must be combined and superimposed on the broadcast-frequency carrier signal that the TV needs to have as input. Then the circuitry in the TV takes this signal from the RF modulator, extracts the picture information, and paints it on the screen. As we might expect, this extra handling of the picture information by the RF modulator and TV circuitry decreases the quality of the displayed images.
Chapter-2
Transformation
Q.1 What is Transformation? Explain the basic transformation techniques.
Ans.: Transformations are basically applied to an object to reposition and resize two-dimensional objects. There are three basic transformation techniques.
(1) Translation : A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y').
x' = x + tx , y' = y + ty _ _ _ (1)
The translation distance pair (tx, ty) is called the translation vector or shift vector :
P = [x1]    P' = [x'1]    T = [tx]     _ _ _ (2)
    [x2]         [x'2]         [ty]
so that, in matrix form, P' = P + T.
(2) Rotation : A two-dimensional rotation is applied to an object by repositioning it along a circular path in the xy plane. This transformation can be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point (xr, yr). We can express the transformation in terms of the angles θ and Ø as :
x' = r cos(Ø + θ) = r cos Ø cos θ - r sin Ø sin θ
y' = r sin(Ø + θ) = r cos Ø sin θ + r sin Ø cos θ _ _ _ (4)
The original coordinates of the point in polar coordinates are :
x = r cos Ø , y = r sin Ø _ _ _ (5)
Substituting expression (5) into (4) we obtain transformation equations
for rotating a point at position (x, y) through an angle θ about the
origin :
x' = x cos θ – y sin θ , y' = x sin θ + y cos θ _ _ _ (6)
We can write the rotation equation for rotating a point at position (x,
y) through an angle θ about the origin equation in matrix form :
P' = R . P _ _ _ (7)
where the rotation matrix is :
R = [cosθ  -sinθ]
    [sinθ   cosθ]
[Figure : Rotation of a point from position (x, y) to position (x', y') through an angle θ about the pivot point (xr, yr).]
(3) Scaling : A scaling transformation alters the size of an object. It is carried out by multiplying the coordinate values (x, y) by scaling factors Sx and Sy :
[x']   [Sx  0 ]   [x]
[y'] = [0   Sy] . [y]     _ _ _ (11)
Or P' = S . P _ _ _ (12)
where S is the 2x2 scaling matrix in eq.(11). Any positive numeric values can be assigned to the scaling factors Sx and Sy. Values less than 1 reduce the size of the object; values greater than 1 produce an enlargement; and specifying a value of 1 for both Sx and Sy leaves the size of the object unchanged. When Sx and Sy are assigned the same value, a uniform scaling is produced that maintains relative object proportions. Unequal values of Sx and Sy result in a differential scaling that is often used in design applications, where pictures are constructed from a few basic shapes that can be adjusted by scaling and positioning transformations.
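To make the matrix forms above concrete, here is a small sketch that builds 3x3 homogeneous matrices for translation, rotation and scaling and composes them (numpy is assumed, and the helper names are illustrative, not from the text):

```python
import numpy as np

def translate(tx, ty):
    """Homogeneous 2-D translation matrix."""
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def rotate(theta):
    """Homogeneous 2-D rotation about the origin (theta in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]], dtype=float)

def scale(sx, sy):
    """Homogeneous 2-D scaling relative to the origin."""
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0,  0, 1]], dtype=float)

# Composite transformation: scale, then rotate, then translate.
M = translate(3, 2) @ rotate(np.pi / 2) @ scale(2, 2)
p = np.array([1, 0, 1])          # point (1, 0) in homogeneous form
print(M @ p)                     # transformed point, e.g. [3. 4. 1.]
```

Using one composite matrix instead of transforming the point three separate times is exactly the idea behind composite transformations.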
Fig. 1, Fig. 2 : Reflection of an object about the y-axis.
(i) Reflection about the line y = 0 (the x-axis) is accomplished with the transformation matrix :
[1  0  0]
[0 -1  0]     _ _ _ (8)
[0  0  1]
This transformation keeps x-values the same but flips the y-
values of coordinate positions.
(ii) Reflection about the y-axis flips x-coordinates while keeping y-coordinates the same. The matrix for this transformation is :
[-1  0  0]
[ 0  1  0]     _ _ _ (9)
[ 0  0  1]
Now the elements of the reflection matrix can be set to values other than ±1. Values whose magnitudes are greater than 1 shift the mirror image farther from the reflection axis, and values with magnitudes less than 1 bring the mirror image closer to the reflection axis.
(2) Shear : A transformation that distorts the shape of an object such that the transformed shape appears as if the object were composed of internal layers that had been caused to slide over each other is called a shear. Two common shearing transformations are those that shift coordinate x values and those that shift y values.
An x-direction shear relative to the x-axis is produced with the transformation matrix :
[1  Shx  0]
[0   1   0]     _ _ _ (3)
[0   0   1]
Or P' = T . P _ _ _ (2)
Parameters tx, ty, tz, specifying translation distances for the coordinate directions x, y, z, can be assigned any real values.
The matrix representation in eq.(1) is equivalent to the three equations :
x' = x + tx , y' = y + ty , z' = z + tz _ _ _ (3)
[Figure : Translation of a point from (x, y, z) to (x', y', z') in three-dimensional coordinates.]
For rotation about the z-axis, the transformation in matrix form is :
[x']   [cosθ  -sinθ   0   0]   [x]
[y'] = [sinθ   cosθ   0   0] . [y]     _ _ _ (5)
[z']   [ 0      0     1   0]   [z]
[1 ]   [ 0      0     0   1]   [1]
that is, P' = Rz(θ) . P. For rotation about the x-axis the corresponding form is
P' = Rx(θ) . P _ _ _ (9)
Now the equation for y-axis rotation is :
[x']   [ cosθ   0   sinθ   0]   [x]
[y'] = [  0     1    0     0] . [y]     _ _ _ (8)
[z']   [-sinθ   0   cosθ   0]   [z]
[1 ]   [  0     0    0     1]   [1]
The equations are :
z' = z cosθ – x sinθ
x' = z sinθ + x cosθ _ _ _ (11)
y' = y
P' = Ry(θ) . P _ _ _ (12)
(3) Scaling : The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate origin can be written as :
[x']   [Sx  0   0   0]   [x]
[y'] = [0   Sy  0   0] . [y]     _ _ _ (13)
[z']   [0   0   Sz  0]   [z]
[1 ]   [0   0   0   1]   [1]
P' = S . P _ _ _ (14)
where the scaling parameters Sx, Sy and Sz are assigned any positive values.
x' = x . Sx , y' = y . Sy , z' = z . Sz _ _ _ (15)
Scaling an object with this transformation changes the size of the object and repositions the object relative to the coordinate origin. Also, if the scaling parameters are not all equal, relative dimensions in the object are changed. We preserve the original shape of an object with uniform scaling (Sx = Sy = Sz).
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following transformation sequence (sketched in code below) :
(i) Translate the fixed point to the origin.
(ii) Scale the object relative to the coordinate origin.
(iii) Translate the fixed point back to its original position.
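A minimal sketch of this fixed-point scaling sequence, assuming numpy and 4x4 homogeneous coordinates (the helper names are illustrative):

```python
import numpy as np

def translate3(tx, ty, tz):
    M = np.identity(4)
    M[:3, 3] = [tx, ty, tz]          # translation terms in the last column
    return M

def scale3(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def scale_about_point(sx, sy, sz, xf, yf, zf):
    """Composite matrix: translate fixed point to origin, scale, translate back."""
    return translate3(xf, yf, zf) @ scale3(sx, sy, sz) @ translate3(-xf, -yf, -zf)

M = scale_about_point(2, 2, 2, 1, 1, 1)
p = np.array([2, 2, 2, 1])           # homogeneous point
print(M @ p)                         # [3. 3. 3. 1.] -- scaled about (1, 1, 1)
```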
□□□
Chapter-3
Output Primitives
Q.1 Explain the Line Drawing Algorithms? Explain DDA and Bresenham’s
Line Algorithm.
Ans.: Slope intercept equation for a straight line is
y=mx+b _ _ _ (1)
with m representing the slope of the line and b as the y intercept. Where the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), as shown in Fig.(1), we can determine values for the slope m and y intercept b with the following calculations :
m = (y2 - y1) / (x2 - x1) _ _ _ (2)
b = y1 – m . x1 _ _ _ (3)
These equations give the y interval ∆y corresponding to a given x interval ∆x :
∆y = m . ∆x _ _ _ (4)
Similarly, we can obtain the ∆x interval :
∆x = ∆y / m _ _ _ (5)
Fig.(1) : Line path between endpoint positions (x1, y1) and (x2, y2).
For lines with slope magnitude |m| < 1, ∆x can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to ∆y.
For lines with slope magnitude |m| > 1, ∆y can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to ∆x.
For lines with m = 1, ∆x = ∆y.
DDA Algorithm : The Digital Differential Analyzer (DDA) is a scan-conversion line algorithm based on calculating either ∆y or ∆x using equations (4) and (5). We sample the line at unit intervals in one coordinate and determine the corresponding integer values nearest the line path for the other coordinate.
Now consider first a line with positive slope, as shown in Fig.(1). If the slope is less than or equal to 1, we sample at unit x intervals (∆x = 1) and compute each successive y value as :
yk+1 = yk + m _ _ _ (6)
Subscript k takes integer values starting from 1 for the first point, and increases by 1 until the final endpoint is reached.
For lines with positive slope greater than 1, we reverse the roles of x and y. That is, we sample at unit y intervals (∆y = 1) and calculate each succeeding x value as :
xk+1 = xk + 1/m _ _ _ (7)
Equations (6) and (7) are based on the assumption that lines are to be processed from the left endpoint to the right endpoint.
If this processing is reversed, so that the starting endpoint is at the right, then either ∆x = -1 and
yk+1 = yk – m _ _ _ (8)
or ∆y = -1 and
xk+1 = xk – 1/m _ _ _ (9)
Equations (6) through (9) can also be used to calculate pixel positions along a line with negative slope. When the absolute value of the slope is less than 1 and the start endpoint is at the right, we set ∆x = -1 and obtain y positions from equation (8). Similarly, when the absolute value of a negative slope is greater than 1, we use ∆y = -1 and eq.(9), or we use ∆y = 1 and eq.(7).
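The sampling rule above can be written compactly by always stepping along the axis of greatest change. A minimal sketch of the DDA algorithm in Python (the function name and rounding choice are illustrative):

```python
def dda_line(x1, y1, x2, y2):
    """Return the list of pixel positions along a line using the DDA method."""
    dx, dy = x2 - x1, y2 - y1
    # Number of samples: step along the coordinate that changes faster.
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x1, y1)]
    x_inc, y_inc = dx / steps, dy / steps      # one of these has magnitude 1
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))    # nearest integer pixel
        x += x_inc
        y += y_inc
    return points

print(dda_line(20, 10, 30, 18))   # same endpoints as the Bresenham example below
```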
Bresenham’s Line Algorithm : An accurate and efficient raster line-generating algorithm, developed by Bresenham, scan converts lines using only incremental integer calculations that can be adapted to display circles and other curves. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns, as shown in Fig.(5) and Fig.(6).
Fig.5 and Fig.6 : Sections of the screen grid showing pixel columns and scan lines along a specified line path.
To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1. Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the left endpoint (x0, y0) of a given line, we step to each successive column (x position) and plot the pixel whose scan-line y value is closest to the line path.
Now, assuming we have determined that the pixel at (xk, yk) is to be displayed, we next need to decide which pixel to plot in column xk+1. Our choices are the pixels at positions (xk+1, yk) and (xk+1, yk+1).
The sign of Pk is the same as the sign of d1 - d2, since ∆x > 0 for our example. Parameter C is constant and has the value 2∆y + ∆x(2b - 1), which is independent of pixel position. If the pixel at yk is closer to the line path than the pixel at yk+1 (that is, d1 < d2), then the decision parameter Pk is negative. In that case we plot the lower pixel; otherwise we plot the upper pixel.
Coordinate changes along the line occur in unit steps in either the x or y direction. Therefore we can obtain the values of successive decision parameters using incremental integer calculations. At step k + 1, the decision parameter is evaluated from eq.(12) as :
Pk+1 = 2∆y . xk+1 - 2∆x . yk+1 + C
Fig.8 : Distances d1 and d2 between the candidate pixel positions and the line y coordinate at sampling position xk+1.
Subtracting eq.(12) from the preceding equation we have
Pk+1 – Pk = 2∆y (xk+1 – xk) - 2∆x (yk+1 – yk)
But xk+1 = xk + 1
So that, Pk+1 = Pk + 2∆y - 2∆x (yk+1 – yk) _ _ _ (13)
Where the term yk+1 - yk is either 0 or 1, depending on sign of Parameter P k.
This recursive calculation of the decision parameter is performed at each integer x position, starting at the left coordinate endpoint of the line. The first parameter P0 is evaluated from equation (12) at the starting pixel position (x0, y0) and with m evaluated as ∆y/∆x :
P0 = 2∆y - ∆x _ _ _ (14)
Bresenham’s Line Drawing Algorithm for |m| < 1 :
(1) Input the two line endpoints and store the left endpoint in (x0, y0).
(2) Load (x0, y0) into the frame buffer; that is, plot the first point.
(3) Calculate the constants ∆x, ∆y, 2∆y and 2∆y - 2∆x, and obtain the starting value for the decision parameter as P0 = 2∆y - ∆x.
(4) At each xk along the line, starting at k = 0, perform the following test : if Pk < 0, the next point to plot is (xk+1, yk) and Pk+1 = Pk + 2∆y; otherwise the next point to plot is (xk+1, yk+1) and Pk+1 = Pk + 2∆y - 2∆x.
(5) Repeat step 4 ∆x times. (A code sketch of these steps is given below.)
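A minimal Python sketch of the steps above, restricted (as in the text) to lines with slope between 0 and 1 processed left to right; the function name is illustrative:

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham's algorithm for a line with slope between 0 and 1, left to right."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                 # initial decision parameter P0
    two_dy = 2 * dy
    two_dy_minus_dx = 2 * dy - 2 * dx
    x, y = x0, y0
    points = [(x, y)]               # plot the first endpoint
    for _ in range(dx):             # repeat step 4, dx times
        x += 1
        if p < 0:
            p += two_dy             # keep the same scan line
        else:
            y += 1                  # move up to the next scan line
            p += two_dy_minus_dx
        points.append((x, y))
    return points

# Endpoints of the worked example that follows: (20, 10) and (30, 18).
print(bresenham_line(20, 10, 30, 18))
```

Running it with the endpoints of the worked example below reproduces the table of decision parameters given there.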
Q.2 Digitize the line with end points (20, 10) & (30, 18) using Bresenham’s Line
Drawing Algorithm.
Ans.: Slope of the line, m = (y2 - y1) / (x2 - x1) = (18 - 10) / (30 - 20) = 8/10 = 0.8
∆x = 10 , ∆y = 8
Initial decision parameter has the value
P0 = 2∆y - ∆x = 2 × 8 – 10 = 6
Since P0 > 0, the next point is (xk+1, yk+1) = (21, 11)
Now k = 0, Pk+1 = Pk + 2∆y - 2∆x
P1 = P0 + 2∆y - 2∆x
= 6 + (-4)
= 2
Since P1 > 0, Next point is (22, 12)
Now k = 1, Pk+1 = Pk + 2∆y - 2∆x
P2 = 2 + (- 4)
= -2
Since P2 < 0, Next point is (23, 12)
Now k = 2, Pk+1 = Pk + 2∆y
P3 = - 2 + 16 = 14
Since P3 > 0, Next point is (24, 13)
Now k = 3, Pk+1 = Pk + 2∆y - 2∆x
P4 = 14 + (- 4) = 10
Since P4 > 0, Next point is (25, 14)
Now k = 4, Pk+1 = Pk + 2∆y - 2∆x
P5 = 10 + (- 4) = 6
Since P5 > 0, Next point is (26, 15)
Now k = 5 Pk+1 = Pk + 2∆y - 2∆x
P6 = 6–4
= 2
Since P6 > 0, Next point is (27, 16)
Now k = 6 Pk+1 = Pk + 2∆y - 2∆x
P7 = 2 + (- 4)
= -2
Since P7 < 0, Next point is (28, 16)
K Pk (xk+1 , yk+1 )
0 6 (21, 11)
1 2 (22, 12)
2 -2 (23, 12)
3 14 (24, 13)
4 10 (25, 14)
5 6 (26, 15)
6 2 (27, 16)
7 -2 (28, 16)
8 14 (29, 17)
9 10 (30, 18)
Plot the graph with the above points.
Q.3 Write the properties of a Circle. Explain the Mid Point Circle Algorithm.
Ans.: A circle is the set of points that are at a given distance r from the center position (xc, yc). This distance relationship is given as :
(x – xc)2 + (y – yc)2 – r2 = 0
This equation can be used to calculate the position of points along the circle path by moving in the x direction from (xc - r) to (xc + r) and determining the corresponding y values as :
y = yc ± √(r2 – (x – xc)2)
However, this method is not the best way to calculate circle points, as it requires heavy computation. Moreover, the spacing between the plotted points is not uniform. Another method is to calculate points using the polar coordinates r and θ, where
x = xc + r cos θ
y = yc + r sin θ
Although this method results in equal spacing between the points, it also requires heavy computation. The efficient method is the incremental calculation of a decision parameter.
Mid Point Algorithm : We move in unit steps in the x direction and calculate the closest pixel position along the circle path at each step. For a given radius r and screen center position (xc, yc), we first set up our algorithm to calculate the positions of points along a circle centered on the coordinate origin, starting at (x0, y0) = (0, r).
These calculated positions are then placed at their proper screen positions by adding xc to x and yc to y.
[Figure : Eight-way symmetry of a circle — a calculated point (x, y) in the 45º octant also gives the points (y, x), (-x, y), (-y, x), (-x, -y), (-y, -x), (x, -y) and (y, -x).]
[Figure : Candidate pixels at yk and yk - 1 at sampling positions xk, xk+1, xk+2 along a circular path.]
Assuming we have just plotted a pixel at (xk, yk), we next need to determine whether the pixel at (xk+1, yk) or the one at (xk+1, yk-1) is closer to the circle.
Our decision parameter is the circle function evaluated at the mid point
between these two pixels.
Pk = fcircle (xk + 1, yk - ½)
Or Pk = (xk + 1 )2 +(yk - ½)2 – r2 _ _ _ (3)
If Pk < 0, Mid point is inside the circle boundary and the pixel on the
scan line yk is closer to the circle boundary.
Otherwise, Mid point is on or outside the circle boundary and the point on
the scan line yk - 1 is closer.
Successive decision parameters are obtained by incremental calculations.
The next decision parameter is evaluated at the next sampling position, xk+1 + 1 = xk + 2 :
Pk+1 = fcircle(xk+1 + 1, yk+1 - ½)
Or Pk+1 = [(xk + 1) + 1]2 + (yk+1 - ½)2 – r2
Or Pk+1 = Pk + 2(xk + 1) + (yk+12 – yk2) – (yk+1 – yk) + 1 _ _ _ (4)
The successive increment for Pk is 2xk+1 + 1 (if Pk < 0), otherwise 2xk+1 + 1 – 2yk+1, where
2xk+1 = 2xk + 2 & 2yk+1 = 2yk – 2
The initial decision parameter P0 is obtained by evaluating the circle function at the start position (x0, y0) = (0, r) :
P0 = fcircle(1, r - ½) = 1 + (r - ½)2 – r2
Or P0 = 5/4 – r
If r is an integer, then P0 = 1 – r.
Algorithm :
(1) Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as (x0, y0) = (0, r).
(2) Calculate the initial value of the decision parameter as P0 = 5/4 – r.
(3) At each xk position, starting at k = 0 : if Pk < 0, the next point along the circle is (xk+1, yk) and Pk+1 = Pk + 2xk+1 + 1; otherwise the next point along the circle is (xk + 1, yk - 1) and Pk+1 = Pk + 2xk+1 + 1 – 2yk+1, where 2xk+1 = 2xk + 2 & 2yk+1 = 2yk – 2.
(4) Determine symmetry points in the other seven octants.
(5) Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the coordinate values x = x + xc & y = y + yc.
(6) Repeat steps (3) through (5) until x ≥ y. (A code sketch of these steps follows.)
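A minimal Python sketch of the algorithm above (the function name is illustrative); it computes one octant incrementally and reflects the points into the other seven:

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle algorithm: returns the set of pixel positions on the circle."""
    points = set()

    def plot_octants(x, y):
        # Eight-way symmetry about the center (xc, yc).
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            points.add((xc + px, yc + py))

    x, y = 0, r
    p = 1 - r                      # initial decision parameter (integer form of 5/4 - r)
    plot_octants(x, y)
    while x < y:
        x += 1
        if p < 0:
            p += 2 * x + 1
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y
        plot_octants(x, y)
    return points

# The r = 10 example from the next question: first-octant points (0,10), (1,10), ..., (7,7).
print(sorted(midpoint_circle(0, 0, 10)))
```

The decision-parameter values produced by this loop match the r = 10 worked example that follows.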
Q.4 Demonstrate the Mid Point Circle Algorithm with circle radius, r = 10.
Ans.: P0 = 1 – r =1 - 10 = - 9
Now the initial point (x0 , y0 ) = (0, 10) and initial calculating terms for
calculating decision parameter are
2x0 = 0 , 2y0 = 20. Since P0 < 0, the next point is (1, 10)
P1 = - 9 +3 = - 6 Now P1 < 0, Next point is (2, 10)
P2 = - 6 + 5 = - 1 Now P2 < 0, Next point is (3, 10)
P3 = -1+ 7 = 6 Now P3 > 0, Next point is (4, 9)
P4 = 6 + 9 - 18 = - 3 Now P4 < 0, Next point is (5, 9)
P5 = - 3 + 11 = 8 Now P5 > 0, Next point is (6, 8)
P6 = 8 +13 - 16 = 5 Now P6 > 0, Next point is (7, 7)
Q.5 Explain the Mid Point Ellipse Drawing Algorithm.
Ans.: An ellipse is the set of points such that the sum of the distances d1 and d2 from two fixed positions (the foci F1 and F2) is the same for all points :
d1 + d2 = constant _ _ _ (1)
In terms of the focal coordinates F1 = (x1, y1) & F2 = (x2, y2), the equation is :
[Figure : Ellipse centered at (xc, yc) with semi-axes rx and ry.]
((x - xc) / rx)2 + ((y - yc) / ry)2 = 1 _ _ _ (4)
Now, using the polar coordinates r & θ, the parametric equations are :
x = xc + rx cos θ
y = yc + ry sin θ
Mid Point Ellipse Algorithm : The approach here is similar to that used for the circle. Given rx, ry and (xc, yc), we obtain points (x, y) for an ellipse centered on the origin, and then shift the points so that the ellipse is centered at (xc, yc).
The midpoint ellipse algorithm processes the first quadrant in two parts. The figure below shows the division of the quadrant for rx < ry.
[Figure : Division of the first quadrant into Region 1 and Region 2, separated by the point where the ellipse slope is -1 (shown for rx < ry).]
Thus the ellipse function fellipse(x, y) serves as the decision parameter in the midpoint algorithm. Starting at (0, ry), we step in the x direction until we reach the boundary between Region 1 and Region 2. The slope is calculated at each step as :
dy/dx = – (2ry2 x) / (2rx2 y) {from eq.(5)}
At the boundary, dy/dx = -1, so 2ry2 x = 2rx2 y.
We move out of Region 1 when 2ry2 x ≥ 2rx2 y _ _ _ (7)
Fig.1 shows the midpoint between the two candidate pixels at sampling position xk + 1. Assuming we have selected the pixel at (xk, yk), we need to determine the next pixel.
Fig.1 : Midpoint between candidate pixels at sampling position xk + 1 along an elliptical path.
P1k = fellipse(xk + 1, yk – ½) = ry2 (xk + 1)2 + rx2 (yk - ½)2 – rx2 ry2 _ _ _ (8)
At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter is :
P1k+1 = P1k + 2ry2 (xk + 1) + ry2 + rx2 [(yk+1 – ½)2 – (yk – ½)2] _ _ _ (9)
where yk+1 is either yk or yk - 1, depending on the sign of P1k.
Increment = 2ry2 xk+1 + ry2                      if P1k < 0
            2ry2 xk+1 – 2rx2 yk+1 + ry2          if P1k ≥ 0
With initial position (0, ry) the two terms evaluate to
2ry2 x = 0 , 2rx2 y = 2rx2 ry
Now when x & y are incremented the updated values are
2ry2 xk+1 = 2ry2 xk + 2ry2 , 2rx2 yk+1 = 2rx2 yk – 2rx2
These values are compared at each step, and we move out of Region 1 when condition (7) is satisfied. The initial decision parameter for Region 1 is calculated as :
P10 = fellipse(1, ry – ½) = ry2 + rx2 (ry - ½)2 – rx2 ry2
[Figure : Candidate pixels at sampling position xk + 1 in Region 2 of the ellipse.]
If P2k > 0, the midpoint is outside the ellipse boundary and we select the pixel at xk.
Initial decision parameter for region (2) is calculated by taking (x 0 , y0 ) as last
point in Region (1)
P2k+1 = fellipse(xk+1 + ½, yk+1 – 1) = ry2 (xk+1 + ½)2 + rx2 [(yk – 1) - 1]2 – rx2 ry2
Or P2k+1 = P2k – 2rx2 (yk – 1) + rx2 + ry2 [(xk+1 + ½)2 – (xk + ½)2] _ _ _ (12)
At initial position (x0 , y0 )
P20 = fellipse(x0 +½, y0 – 1) = ry2 ( x0 + ½)2 + rx2 (y0 – 1)2 – rx2 ry2 _ _ _ (13)
(4) Calculate the initial value of the decision parameter in Region 2, using the last point (x0, y0) calculated in Region 1, as P20 = ry2 (x0 + ½)2 + rx2 (y0 – 1)2 – rx2 ry2.
(5) At each yk position in Region 2, starting at k = 0, perform the following test : if P2k > 0, the next point is (xk, yk – 1) and P2k+1 = P2k – 2rx2 yk+1 + rx2; otherwise the next point is (xk + 1, yk – 1) and P2k+1 = P2k + 2ry2 xk+1 – 2rx2 yk+1 + rx2.
(6) Determine the symmetry points in the other three quadrants.
(7) Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values x = x + xc & y = y + yc.
(8) Repeat the steps for Region 1 until 2ry2 x ≥ 2rx2 y. (A code sketch of the full algorithm follows.)
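A compact Python sketch of the two-region midpoint ellipse algorithm described above (function name illustrative); it traces the first quadrant and uses four-way symmetry:

```python
def midpoint_ellipse(xc, yc, rx, ry):
    """Midpoint ellipse algorithm: returns pixel positions on the ellipse."""
    points = set()

    def plot_quadrants(x, y):
        # Four-way symmetry about the center (xc, yc).
        for px, py in [(x, y), (-x, y), (x, -y), (-x, -y)]:
            points.add((xc + px, yc + py))

    rx2, ry2 = rx * rx, ry * ry
    x, y = 0, ry
    plot_quadrants(x, y)

    # Region 1: step in x until 2*ry2*x >= 2*rx2*y (slope reaches -1).
    p1 = ry2 - rx2 * ry + 0.25 * rx2
    while 2 * ry2 * x < 2 * rx2 * y:
        x += 1
        if p1 < 0:
            p1 += 2 * ry2 * x + ry2
        else:
            y -= 1
            p1 += 2 * ry2 * x - 2 * rx2 * y + ry2
        plot_quadrants(x, y)

    # Region 2: step in y down to the x axis.
    p2 = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
    while y > 0:
        y -= 1
        if p2 > 0:
            p2 += -2 * rx2 * y + rx2
        else:
            x += 1
            p2 += 2 * ry2 * x - 2 * rx2 * y + rx2
        plot_quadrants(x, y)
    return points

print(len(midpoint_ellipse(0, 0, 8, 6)))   # number of distinct ellipse pixels
```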
(2) Moreover, the round-off operations and floating-point incrementation are still time consuming.
Q.8 What do you understand by Area Filling? Discuss any one Algorithm.
Ans.: A standard output primitive in a general graphics package is a solid-color or patterned polygon area. There are two basic approaches to area filling on raster systems :
(1) One is to determine the overlap intervals for scan lines that cross the area.
(2) Another is to start from a given interior position and paint outward from this point until we encounter the specified boundary conditions.
The scan-line approach is typically used in general graphics packages to fill polygons, circles, ellipses and other simple curves.
Fill methods starting from an interior point are useful with more complex boundaries and in interactive painting systems.
Boundary Fill Algorithm : Another approach to area filling is to start at a
point inside a region and paint the interior outward toward the boundary. If
the boundary is specified in a single color, the fill algorithm proceeds outward pixel by pixel until the boundary color is encountered. This method is called the Boundary Fill Algorithm.
The boundary fill procedure accepts as input the coordinates of an interior point (x, y), a fill color and a boundary color. Starting with (x, y), neighbouring points are tested and filled if they are neither the boundary color nor already the fill color. This process continues until all pixels up to the boundary color for the area have been tested.
Flood Fill Algorithm : Sometimes we want to fill in (or recolor) an area that is not defined within a single color boundary. Suppose we take an area which is bordered by several different colors. We can paint such an area by replacing a specified interior color instead of searching for a boundary color value. This approach is called the Flood Fill Algorithm.
We start from an interior point (x, y) and reassign all pixel values that are currently set to a given interior color with the desired fill color. If the area we want to paint has more than one interior color, we can first reassign pixel values so that all interior points have the same color.
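A minimal sketch of both fill methods on a 2-D grid of color values (plain Python, iterative with an explicit stack to avoid deep recursion; the names are illustrative):

```python
def flood_fill(grid, x, y, fill_color):
    """Replace the connected region of grid[y][x]'s color with fill_color (4-connected)."""
    old_color = grid[y][x]
    if old_color == fill_color:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old_color:
            grid[cy][cx] = fill_color
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

def boundary_fill(grid, x, y, fill_color, boundary_color):
    """Fill outward from (x, y) until pixels of boundary_color are reached (4-connected)."""
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if (0 <= cy < len(grid) and 0 <= cx < len(grid[0])
                and grid[cy][cx] not in (boundary_color, fill_color)):
            grid[cy][cx] = fill_color
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

# Tiny example: 'B' marks the boundary, '.' the interior, 'F' the fill color.
img = [list("BBBB"), list("B..B"), list("BBBB")]
boundary_fill(img, 1, 1, "F", "B")
print(["".join(row) for row in img])   # ['BBBB', 'BFFB', 'BBBB']
```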
□□□
Chapter-4
Clipping Algorithm
A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved.
A line with both endpoints outside any one of the clip boundaries (line P3 P4 in the figure) is outside the window.
All other lines cross one or more clipping boundaries and may require the calculation of multiple intersection points. For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, the parametric representation
x = x1 + u(x2 – x1 )
y = y1 + u(y2 – y1 ), 0 ≤ u ≤ 1
could be used to determine values of the parameter u for intersections with the clipping boundary coordinates. If the value of u for an intersection with a rectangle boundary edge is outside the range 0 to 1, the line does not enter the interior of the window at that boundary. If the value of u is within the range 0 to 1, the line segment does indeed cross into the clipping area.
Cohen–Sutherland Line Clipping : This is one of the oldest and most popular line-clipping procedures. Generally, the method speeds up the processing of line segments by performing initial tests that reduce the number of intersections that must be calculated. Every line endpoint is assigned a four-digit binary region code that identifies its position relative to the boundaries of the clipping rectangle :
1001 | 1000 | 1010
0001 | 0000 (window) | 0010
0101 | 0100 | 0110
Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window : to the left, right, top or bottom. By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions can be correlated with the bit positions as :
bit 1 : left ; bit 2 : right ; bit 3 : below ; bit 4 : above
A value of 1 in any bit position indicates that the point is in that relative position; otherwise the bit position is set to 0.
The bit values in the region code are determined by comparing endpoint coordinate values (x, y) to the clip boundaries :
Bit 1 is set to 1 if x < xwmin
Bit 2 is set to 1 if x > xwmax
Equivalently, using sign bits :
Bit 1 = sign bit of x – xwmin
Bit 2 = sign bit of xwmax – x
Bit 3 = sign bit of y – ywmin
Bit 4 = sign bit of ywmax – y
A few points are to be kept in mind while checking :
(1) Any line that is completely inside the clip window has a region code of 0000 for both endpoints.
(2) Any line that has a 1 in the same bit position for both endpoints is completely outside the window.
(3) Here we use the AND operation on the two region codes; if the result is not 0000, the line is completely outside.
Lines that cannot be identified as completely inside or completely outside the window by these tests are checked for intersection with the window boundaries.
For example : [Figure showing lines clipped against the window boundaries, with intersection points P1', P2', P2'' and P3'.]
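A minimal sketch of the region-code computation and the trivial accept/reject tests described above (Python; the window bounds and function names are illustrative):

```python
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8    # bit 1, bit 2, bit 3, bit 4

def region_code(x, y, xw_min, xw_max, yw_min, yw_max):
    """Four-bit Cohen-Sutherland region code for point (x, y)."""
    code = 0
    if x < xw_min: code |= LEFT
    if x > xw_max: code |= RIGHT
    if y < yw_min: code |= BELOW
    if y > yw_max: code |= ABOVE
    return code

def classify_line(p1, p2, xw_min, xw_max, yw_min, yw_max):
    """Trivial accept / trivial reject test for a line segment."""
    c1 = region_code(*p1, xw_min, xw_max, yw_min, yw_max)
    c2 = region_code(*p2, xw_min, xw_max, yw_min, yw_max)
    if c1 == 0 and c2 == 0:
        return "completely inside"          # both codes 0000
    if c1 & c2 != 0:
        return "completely outside"         # shared 1 bit -> reject
    return "needs intersection tests"       # clip against window edges

print(classify_line((2, 2), (8, 6), 0, 10, 0, 10))    # completely inside
print(classify_line((-5, 2), (-1, 6), 0, 10, 0, 10))  # completely outside
print(classify_line((-5, 5), (5, 5), 0, 10, 0, 10))   # needs intersection tests
```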
Q.3 Discuss the Cyrus Beck Algorithm for Clipping in a Polygon Window.
Ans. Cyrus–Beck Technique : The Cyrus–Beck technique can be used to clip a 2-D line against a rectangle or any convex polygon, or a 3-D line against an arbitrary convex polyhedron in 3-D space.
Liang and Barsky later developed a more efficient parametric line-clipping algorithm. Here we follow the Cyrus–Beck development to introduce parametric clipping. In the parametric representation of a line, the algorithm computes a parameter value t for the point at which the segment intersects the infinite line on which each clip edge lies. Because all clip edges are, in general, intersected by the line, four values of t are calculated for a rectangular window. A series of comparisons is then used to check which of the four values of t correspond to actual intersections; only then are the (x, y) values of the two (or one) actual intersections calculated.
Advantage of this over Cohen–Sutherland :
(1) It saves time because it avoids the repetitive looping needed to clip to multiple clip-rectangle edges.
An edge Ei and an arbitrary point PEi on it define the edge line by the condition Ni • [P(t) – PEi] = 0, where Ni is the edge normal.
We can determine in which region a point lies by looking at the sign of the dot product Ni • [P(t) – PEi] :
Ni • [P(t) – PEi] < 0 if the point is inside the half plane
Ni • [P(t) – PEi] = 0 if the point is on the line containing the edge
Ni • [P(t) – PEi] > 0 if the point lies outside the half plane
Now solve for the value of t at the intersection of P0P1 with the edge :
Ni • [P(t) – PEi] = 0
Substitute P(t) = P0 + (P1 – P0) t :
Ni • [P0 + (P1 – P0) t – PEi] = 0
Now distribute the product :
Ni • [P0 – PEi] + Ni • [P1 – P0] t = 0
Let D = (P1 – P0) be the vector from P0 to P1, and solve for t :
t = Ni • [P0 – PEi] / (– Ni • D) _ _ _ (1)
This gives a valid value of t only if the denominator of the expression is non-zero.
Note : Cyrus and Beck use the inward normal Ni, but we prefer to use the outward normal for consistency with plane normals in 3-D, which are outward. Our formulation differs only in the testing of a sign.
(1) For the condition that the denominator be non-zero, we check the following :
Ni ≠ 0 (i.e. the normal should not be zero)
D ≠ 0 (i.e. P1 ≠ P0)
Hence Ni • D ≠ 0 (i.e. the edge Ei and the line from P0 to P1 are not parallel; if they were parallel, there could be no intersection point).
Now eq.(1) can be used to find the intersection between the line and each edge.
We similarly determine the normal and an arbitrary point PEi (say an endpoint of the edge) for each clip edge, and then find the four values of the parameter t.
(2) The next step, after finding the four values of t, is to find out which values correspond to internal intersections of the line segment.
(i) First, any value of t outside the interval [0, 1] can be discarded, since it lies outside P0P1.
(ii) Next, we determine whether the intersection lies on the clip boundary.
Fig.2 : Lines 1, 2 and 3 intersecting the clip edges between P0 (t = 0) and P1 (t = 1); each intersection is classified as potentially entering (PE) or potentially leaving (PL) the clip region.
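A compact sketch of parametric clipping in the Cyrus–Beck style against a convex clip polygon (Python; outward edge normals are assumed, and the helper names are illustrative):

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def cyrus_beck_clip(p0, p1, edges):
    """Clip segment p0->p1 against a convex region.

    edges: list of (PE, N) pairs, one point on each edge and its outward normal.
    Returns the clipped (start, end) points, or None if the line is outside."""
    d = (p1[0] - p0[0], p1[1] - p0[1])          # direction vector D
    t_enter, t_leave = 0.0, 1.0                 # valid parameter interval
    for pe, n in edges:
        denom = -dot(n, d)                      # -N . D
        num = dot(n, (p0[0] - pe[0], p0[1] - pe[1]))
        if denom == 0:                          # line parallel to this edge
            if num > 0:
                return None                     # parallel and outside
            continue
        t = num / denom
        if denom > 0:                           # potentially entering (PE)
            t_enter = max(t_enter, t)
        else:                                   # potentially leaving (PL)
            t_leave = min(t_leave, t)
        if t_enter > t_leave:
            return None                         # no visible portion
    point = lambda t: (p0[0] + d[0] * t, p0[1] + d[1] * t)
    return point(t_enter), point(t_leave)

# Unit-square window described by (point on edge, outward normal) pairs.
square = [((0, 0), (-1, 0)), ((1, 0), (1, 0)), ((0, 0), (0, -1)), ((0, 1), (0, 1))]
print(cyrus_beck_clip((-0.5, 0.5), (1.5, 0.5), square))   # ((0.0, 0.5), (1.0, 0.5))
```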
□□□
Chapter-5
Visible Surface Detection
Q.1 Explain the Depth Buffer (Z-Buffer) Method for visible surface detection.
Ans.: Fig.(1) shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the xv-yv plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.
As implied by the name of this method, two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position.
Fig.(1) : At view-plane position (x, y), surface S1 has the smallest depth from the view plane and so is visible at that position.
Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Each surface listed in the polygon table is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position. If the calculated depth is greater than the value stored in the depth buffer, the new depth value is stored, and the surface intensity at that position is determined and placed in the same xy location in the refresh buffer.
Depth Buffer Algorithm :
(1) Initialize the depth buffer and refresh buffer so that for all buffer
position (x, y).
depth ( x, y) = 0 , refresh (x, y) = IBackgnd
(2) For each position on each polygon surface, compare depth values to
previously stored values in the depth buffer, to determine visibility.
Calculate the depth z for each (x, y) position on the polygon.
If z > depth(x, y), then set
depth(x, y) = z , refresh(x, y) = Isurf(x, y)
where Ibackgnd is the value for the background intensity and Isurf(x, y) is the projected intensity value for the surface at pixel position (x, y). After all surfaces have been processed, the depth buffer contains the depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.
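A conceptual sketch of the depth-buffer loops above (Python; the surface representation — a stream of per-pixel (x, y, z, intensity) samples — is an assumption made for illustration):

```python
def depth_buffer_render(width, height, surfaces, background=0):
    """surfaces: iterable of (x, y, z, intensity) samples, one per covered pixel."""
    depth = [[0.0] * width for _ in range(height)]          # initialized to minimum depth
    refresh = [[background] * width for _ in range(height)] # initialized to background
    for x, y, z, intensity in surfaces:
        if z > depth[y][x]:          # closer to the view plane in this convention
            depth[y][x] = z
            refresh[y][x] = intensity
    return refresh

# Two overlapping one-pixel "surfaces"; the one with the larger depth value wins.
samples = [(1, 1, 0.4, "A"), (1, 1, 0.7, "B")]
print(depth_buffer_render(3, 3, samples)[1][1])   # B
```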
Depth values for a surface position (x, y) are calculated from the plane equation for each surface :
z = (– Ax – By – D) / C _ _ _ (1)
Fig.(2) : From position (x, y) on a scan line, the next position across the line has coordinates (x + 1, y), and the position immediately below on the next line has coordinates (x, y – 1).
The depth z' of the next position (x + 1, y) along the scan line is obtained from equation (1) as :
z' = (– A(x + 1) – By – D) / C _ _ _ (2)
Or z' = z – A/C _ _ _ (3)
The ratio –A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single addition.
Depth values for each successive position across the scan line are then calculated by equation (3).
Now we first determine the y–coordinate extents of each polygon and
process the surface from the topmost scan line to the bottom scan line
as shown in Fig.(3).
[Fig.(3) : A polygon surface processed one scan line at a time, from the topmost scan line down to the bottom scan line, using the left-edge intersections.]
Q.2 Explain the Depth Sorting Method (Painter's Algorithm).
Ans.: Using both image-space and object-space operations, the depth-sorting method performs the following basic functions :
(1) Surfaces are sorted in order of decreasing depth.
(2) Surfaces are scan converted in order, starting with the surface of greatest depth.
Sorting operations are carried out in both image and object space; the scan conversion of the polygon surfaces is performed in image space.
For example, in creating an oil painting, an artist first paints the background colors. Next, the most distant objects are added, then the nearer objects, and so forth. At the final step, the foreground objects are painted on the canvas over the background and the other objects that have been painted on the canvas. Each layer of paint covers up the previous layer. Using a similar technique, we first sort surfaces according to their distance from the view plane. The intensity values for the farthest surface are then entered into the refresh buffer. Taking each succeeding surface in turn (in order of decreasing depth), we paint the surface intensities onto the frame buffer over the intensities of the previously processed surfaces.
Painting polygon surfaces onto the frame buffer according to depth is carried out in several steps. Viewing along the z direction, surfaces are ordered on the first pass according to the smallest z value on each surface. The surface S at the end of the list (the one with the greatest depth) is then compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlap occurs, S is scan converted.
Fig.1 : Two surfaces with no depth overlap — their z extents [zmin, zmax] do not overlap along the zv direction.
This process is then repeated for the next surface in the list. As long as no
overlaps occur, each surface is processed in depth order until all have been
scan converted.
If a depth overlap is detected at any point in the list, we need to make some additional comparison tests. We make the following tests for each surface that overlaps with S. If any one of these tests is true, no reordering is necessary for that surface.
(1) The bounding rectangle in the xy plane for the two surfaces does not
overlap.
(2) Surface S is completely behind the overlapping surface relative to the
viewing position.
(3) The overlapping surface is completely in front of S relative to the
viewing position.
(4) The projection of the two surfaces onto the view plane do not overlap.
We perform these tests in the order listed and proceed to the next overlapping surface as soon as we find that one of the tests is true. If all the overlapping surfaces pass at least one of these tests, none of them is behind S; no reordering is then necessary and S is scan converted.
(1) Test 1 is performed in two parts. We first check for overlap in the x direction, then we check for overlap in the y direction. If either of these directions shows no overlap, the two planes cannot obscure one another. Fig.(2) shows depth overlap (overlap in the z direction) but no overlap in the x direction.
(2) Tests 2 and 3 are performed with an inside-outside polygon test. That is, we substitute the coordinates of all vertices of S into the plane equation of the overlapping surface and check the sign of the result. If the plane equations are set up so that the outside of the surface is toward the viewing position, then S is behind S' if all vertices of S are inside S'. Similarly, S' is completely in front of S if all vertices of S are outside of S'.
[Figure : (a) Surface S completely behind the overlapping surface S'. (b) Overlapping surface S' completely in front of S.]
Fig.(1) : Vector V in the viewing direction and a back-face normal vector N = (A, B, C) of the polyhedron.
If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percentage of pixel coverage, as shown in Fig.(1).
Data for each surface in the linked list includes :
(1) RGB intensity components
(2) Opacity parameter
(3) Depth
(4) Percent of area coverage
(5) Surface identifier
(6) Other surface rendering parameters
(7) Pointer to the next surface
Scan-Line Method : This image-space method for removing hidden surfaces is an extension of the scan-line algorithm for filling polygon interiors; instead of filling just one surface, we now deal with multiple surfaces. As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.
Two tables are used for the various surfaces :
(i) Edge Table (ii) Polygon Table
Edge Table : contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line.
Polygon Table : contains the coefficients of the plane equation for each surface, intensity information for the surfaces, and possibly pointers into the edge table.
□□□
Chapter-6
Q.1 What are Bezier Curves and Surfaces. Write the properties of Bezier
Curves.
Ans.: This spline approximation method was developed by the French engineer Pierre Bezier for use in the design of Renault automobile bodies. Bezier splines have a number of properties that make them highly useful and convenient for curve and surface design. They are easy to implement, and Bezier splines are widely available in CAD systems.
Bezier Curves : A Bezier curve can be fitted to any number of control points.
The number of control point to be approximated and their relative position
determine the degree of Bezier polynomial. As with interpolation splines, a
Bezier curve can be specified with boundary conditions, with characterizing
matrix or with blending functions.
Suppose we have (n + 1) control-point positions Pk = (xk, yk, zk), with k varying from 0 to n. These coordinate points can be blended to produce the following position vector P(u), which describes the path of an approximating Bezier polynomial function between P0 and Pn :
P(u) = Σ (k = 0 to n) Pk BEZk,n(u) , 0 ≤ u ≤ 1 _ _ _ (1)
The Bezier blending functions BEZk,n(u) are the Bernstein polynomials :
BEZk,n(u) = C(n, k) uk (1 – u)n-k _ _ _ (2)
where the binomial coefficients are
C(n, k) = n! / (k! (n – k)!) _ _ _ (3)
Equivalently, we can define the Bezier blending functions with the recursive calculation :
BEZk,n(u) = (1 – u) BEZk,n-1(u) + u BEZk-1,n-1(u) , n > k ≥ 1 _ _ _ (4)
The three parametric equations for the individual curve coordinates are :
x(u) = Σ xk BEZk,n(u) , y(u) = Σ yk BEZk,n(u) , z(u) = Σ zk BEZk,n(u) (sums over k = 0 to n)
A Bezier curve is a polynomial of degree one less than the number of control points used : three points generate a parabola, and four points generate a cubic curve.
Fig. : Bezier curves generated from (a) three and (b) four control points. Dashed lines connect the control-point positions.
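A small sketch that evaluates eq.(1) directly from the Bernstein polynomials (Python; `math.comb` supplies the binomial coefficient C(n, k), and the function name is illustrative):

```python
from math import comb

def bezier_point(control_points, u):
    """Evaluate a Bezier curve at parameter u (0 <= u <= 1) from its control points."""
    n = len(control_points) - 1               # curve degree = number of points - 1
    x = y = 0.0
    for k, (xk, yk) in enumerate(control_points):
        blend = comb(n, k) * (u ** k) * ((1 - u) ** (n - k))   # BEZ k,n(u)
        x += xk * blend
        y += yk * blend
    return x, y

# Four control points -> cubic Bezier curve.
pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
for u in (0.0, 0.5, 1.0):
    print(u, bezier_point(pts, u))
# The curve passes through the first and last control points (property 1 below).
```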
Properties of Bezier Curve :
(1) It always passes through the first and last control points. That is, the boundary conditions at the two ends of the curve are :
For two end points P (0) = P0
P (1) = Pn _ _ _ (6)
The values of the first derivative of a Bezier curve at the endpoints can be calculated from the control-point coordinates as :
P'(0) = – nP0 + nP1 , P'(1) = – nPn-1 + nPn (first derivatives at the two endpoints)
Q.2 What are B-Spline Line, Curves and Surfaces? Write the properties of B-
Spline Curves?
Ans.: These are the most widely used class of approximating splines. B-splines have two advantages over Bezier splines :
(1) The degree of a B-spline polynomial can be set independently of the number of control points (with certain limitations).
(2) B-splines allow local control over the shape of a spline curve or surface.
The trade-off is that B-splines are more complex than Bezier splines.
B-spline Curves : The blending-function formulation for a B-spline curve is :
P(u) = Σ (k = 0 to n) Pk Bk,d(u) , umin ≤ u ≤ umax , 2 ≤ d ≤ n + 1
where the Pk are an input set of (n + 1) control points. There are several differences between this B-spline formulation and that for Bezier splines. The range of parameter u now depends on how we choose the B-spline parameters, and the B-spline blending functions Bk,d are polynomials of degree d – 1, where parameter d can be chosen to be any integer in the range from 2 up to the number of control points, n + 1. Local control for B-splines is achieved by defining the blending functions over subintervals of the total range of u.
The blending functions for B-spline curves are defined by the Cox–de Boor recursion formulas :
Bk,1(u) = 1 if uk ≤ u < uk+1
          0 otherwise
Bk,d(u) = ((u – uk) / (uk+d-1 – uk)) Bk,d-1(u) + ((uk+d – u) / (uk+d – uk+1)) Bk+1,d-1(u)
where each blending function is defined over d subintervals of the total range of u. The values of umin and umax then depend on the number of control points we select, and we can increase the number of values in the knot vector to aid in curve design.
We need to specify the knot values to obtain the blending functions using the recurrence relation.
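A direct sketch of the Cox–de Boor recursion above (Python; a recursive implementation chosen for clarity rather than efficiency, with an illustrative function name):

```python
def bspline_basis(k, d, u, knots):
    """Cox-de Boor recursion: B-spline blending function B(k, d) evaluated at u."""
    if d == 1:
        return 1.0 if knots[k] <= u < knots[k + 1] else 0.0
    left = right = 0.0
    denom_left = knots[k + d - 1] - knots[k]
    if denom_left != 0:
        left = (u - knots[k]) / denom_left * bspline_basis(k, d - 1, u, knots)
    denom_right = knots[k + d] - knots[k + 1]
    if denom_right != 0:
        right = (knots[k + d] - u) / denom_right * bspline_basis(k + 1, d - 1, u, knots)
    return left + right

# Uniform (periodic) knot vector; quadratic blending functions (d = 3).
knots = [0, 1, 2, 3, 4, 5, 6]
print([round(bspline_basis(k, 3, 2.5, knots), 3) for k in range(4)])
# At any u the non-zero blending functions sum to 1 (partition of unity).
```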
Classification of B – splines according to the knot vectors :
Uniform, Periodic B-splines : When the spacing between knot values is constant, the resulting curve is called a uniform B-spline.
For e.g. : { - 1.5, -1.0, -0.5, 0.0}
P(u) = [u3 u2 u 1] . MB . [P1]
                          [P2]
                          [P3]
Hermite Interpolation : A Hermite spline section between two control points Pk and Pk+1 is defined by the following boundary conditions :
P(0) = Pk
P(1) = Pk+1 _ _ _ (1)
P'(0) = DPk
P'(1) = DPk+1
with DPk and DPk+1 specifying the values of the parametric derivatives (slopes of the curve) at the control points Pk and Pk+1 respectively.
The vector equivalent equation is :
P(u) = au3 + bu2 + cu + d , 0 ≤ u ≤ 1 _ _ _ (2)
where the x component of P is x(u) = axu3 + bxu2 + cxu + dx, and similarly for the y and z components. The matrix form is :
P(u) = [u3 u2 u 1] . [a]
                     [b]     _ _ _ (3)
                     [c]
                     [d]
The derivative of the point function is :
P'(u) = [3u2 2u 1 0] . [a]
                       [b]     _ _ _ (4)
                       [c]
                       [d]
Fig.(1) : Parametric point function P(u) = [x(u), y(u), z(u)] for a Hermite curve section between control points Pk and Pk+1.
[Pk   ]   [0 0 0 1]   [a]
[Pk+1 ] = [1 1 1 1] . [b]     _ _ _ (5)
[DPk  ]   [0 0 1 0]   [c]
[DPk+1]   [3 2 1 0]   [d]
[a]   [0 0 0 1]-1   [Pk   ]         [Pk   ]
[b] = [1 1 1 1]   . [Pk+1 ]  = MH . [Pk+1 ]     _ _ _ (6)
[c]   [0 0 1 0]     [DPk  ]         [DPk  ]
[d]   [3 2 1 0]     [DPk+1]         [DPk+1]
Where MH, the Hermite Matrix is the inverse of the boundary constraint
Matrix.
P(u) = [u3 u2 u 1] . MH . [Pk   ]
                          [Pk+1 ]
                          [DPk  ]
                          [DPk+1]
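A short sketch that evaluates a Hermite section numerically by inverting the boundary-constraint matrix of eq.(5) (numpy assumed; function name illustrative):

```python
import numpy as np

# Boundary-constraint matrix from eq.(5); its inverse is the Hermite matrix MH.
BOUNDARY = np.array([[0, 0, 0, 1],
                     [1, 1, 1, 1],
                     [0, 0, 1, 0],
                     [3, 2, 1, 0]], dtype=float)
MH = np.linalg.inv(BOUNDARY)

def hermite_point(pk, pk1, dpk, dpk1, u):
    """Point on the Hermite section for parameter u in [0, 1] (2-D control data)."""
    g = np.array([pk, pk1, dpk, dpk1], dtype=float)   # geometry: endpoints and tangents
    coeffs = MH @ g                                   # polynomial coefficients a, b, c, d
    return np.array([u**3, u**2, u, 1]) @ coeffs

p0, p1 = (0.0, 0.0), (1.0, 0.0)
t0, t1 = (1.0, 1.0), (1.0, -1.0)
print(hermite_point(p0, p1, t0, t1, 0.0))   # [0. 0.]  -> passes through Pk
print(hermite_point(p0, p1, t0, t1, 1.0))   # [1. 0.]  -> passes through Pk+1
```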
Q.5 Explain Zero Order, First Order, and Second Order Continuity in Curve
Blending?
Ans.: To ensure a smooth transition from one section of a piecewise curve to the next, we can impose various continuity conditions at the connection point. Each section of a spline is described with a set of parametric coordinate functions of the form :
x = x(u) , y = y(u) , z = z(u) , u1 ≤ u ≤ u2 _ _ _ (1)
Fig.(1) : Convex hull shapes for two sets of control points.
Parametric Continuity : We set parametric continuity by matching the parametric derivatives of adjoining sections at their common boundary.
Zero-Order Parametric Continuity : Described as C0 continuity, this means simply that the curves meet. That is, the values of x, y and z evaluated at u2 for the first curve section are equal to the values of x, y and z evaluated at u1 for the next curve section.
First-Order Parametric Continuity : Referred to as C1 continuity, this means that the first parametric derivatives (tangent lines) of the coordinate functions in eq.(1) for two successive curve sections are equal at their joining point.
Second-Order Parametric Continuity : Described as C2 continuity, this means that both the first and second parametric derivatives of the two curve sections are the same at the intersection.
With second-order continuity, the rates of change of the tangent vectors for connecting sections are equal at their intersection. Thus the tangent line transitions smoothly from one section of the curve to the next, as in Fig.(2).
Applications :
(1) First order continuity is often sufficient for digitizing drawings and
some design applications.
(2) Second order continuity is useful for setting up animation paths for
camera motion and for many precision CAD Requirements.
Geometric Continuity Conditions : An alternate method for joining two
successive curve sections is to specify conditions for geometric continuity. In
this case, we only require parametric derivatives of the two sections to be
proportional to each other at their common boundaries instead of equal to
each other.
Zero-Order Geometric Continuity : Described as G0 continuity, this is the same as zero-order parametric continuity. That is, the two curve sections must have the same coordinate position at the boundary point.
First-Order Geometric Continuity : Described as G1 continuity, this means that the parametric first derivatives are proportional at the intersection of two successive sections.
If we denote the parametric position on the curve as P(u), then the direction of the tangent vector P'(u), but not necessarily its magnitude, will be the same for two successive curve sections at their joining point under G1 continuity.
Second-Order Geometric Continuity : Described as G2 continuity, this means that both the first and second parametric derivatives of the two curve sections are proportional at their boundary. Under G2 continuity, the curvatures of the two curve sections will match at the joining position.
A curve generated with geometric continuity conditions is similar to one generated with parametric continuity, but with slight differences in curve shape.
□□□
Chapter-7
Image Processing
Q.1 What is Image Processing? What are the operations of Image Processing?
Ans.: Image processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible.
Operations of Image Processing :
(1) Geometric Transformation (Such as enlargement, reduction and
rotation)
(2) Color Correction (Such as brightness and contrast adjustment
quantization or conversion of different color)
(3) Digital compositing or optical compositing (combination of two or
more Images)
(4) Interpolation and recovery of a full image from a raw image format.
(5) Image editing
(6) Image registration
(7) Image stabilization
(8) Image segmentation
(9) Extending dynamic range by combining differently exposed Images.
Applications of Image Processing :
(1) Computer Vision : This is the science and technology of machines that can see. Computer vision is concerned with the theory for building artificial systems that obtain information from images.
This include system for :
(i) Controlling Process (e.g. : Industrial Robot)
(ii) Detecting Events (e.g. counting people)
(iii) Organizing Information
(iv) Modeling Objects or Environment (Medical Image Analyses)
(v) Interaction (as the Input to a Device for Computer Human
Interaction)
(2) Face Detection : This is computer technology that determines the locations and sizes of human faces in arbitrary digital images. It detects facial features and ignores everything else, such as trees and buildings.
(3) Feature Detection : This refers to methods that aim at computing abstractions of image information and making a local decision at every image point as to whether there is an image feature of a given type at that point or not.
(4) Lane Departure Warning System : This mechanism is designed to warn a driver when the vehicle begins to move out of its lane.
(5) Medical Image Processing : This refers to the techniques and processes used to create images of the human body for clinical purposes, as in radiology.
(6) Microscope Image Processing : This is the broad term that covers the use of digital image-processing techniques to process, analyze and present images obtained from a microscope, as in drug testing and metallurgical processes.
(7) Morphological Image Processing : A theoretical model for digital images built upon lattice theory and topology; it also uses image-processing techniques.
(8) Remote Sensing : This is the small- or large-scale acquisition of information about an object or phenomenon by the use of recording or real-time sensing devices that are not in physical contact with the object, but operate by way of aircraft, spacecraft and satellites.
(9) Photo Manipulation
Image Enhancement Techniques :
(1) Point Operations : These are zero-memory operations in which a given grey level u ∈ [0, L] is mapped into a grey level v ∈ [0, L] according to a transformation.
(a) Contrast Stretching : Low-contrast images occur often due to poor or non-uniform lighting conditions, or due to non-linearity or the small dynamic range of the image sensor. In contrast-stretching transformations, the slope of the transformation is chosen greater than unity in the region of stretch.
A typical piecewise-linear contrast stretching transformation is
v = α·u for 0 ≤ u < a
v = β·(u − a) + v_a for a ≤ u < b
v = γ·(u − b) + v_b for b ≤ u ≤ L
with breakpoints a = L/3 and b = 2L/3, where v_a and v_b denote the
output values at the breakpoints. For the dark region the stretch
slope α > 1, for the mid region β > 1, and for the brighter region γ > 1.
(b) Thresholding : [Figure: the threshold transformation, which maps
grey levels u below a threshold to 0 and those above it to L.]
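The following is a minimal sketch of this piecewise transformation (assuming Python with NumPy; the slope values and sample grey levels are illustrative, and thresholding corresponds to making the outer slopes zero and the middle slope steep):

import numpy as np

def contrast_stretch(u, L=255, alpha=0.5, beta=2.0, gamma=0.5):
    # Map grey level(s) u in [0, L] through the three-segment transformation.
    u = np.asarray(u, dtype=float)
    a, b = L / 3.0, 2.0 * L / 3.0
    v_a = alpha * a                     # output value at the first breakpoint
    v_b = v_a + beta * (b - a)          # output value at the second breakpoint
    v = np.where(u < a, alpha * u,
        np.where(u < b, beta * (u - a) + v_a,
                        gamma * (u - b) + v_b))
    return np.clip(v, 0, L)

print(contrast_stretch([0, 100, 200, 255]))   # mid-range levels are stretched apart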
(c) Window Slicing (Intensity Level Slicing) :
Without background :
v = L for a ≤ u ≤ b
v = 0 otherwise
With background :
v = L for a ≤ u ≤ b
v = u otherwise
These transformations permit segmentation of certain grey level
regions from the rest of the image. This technique is useful
when different features of an image are contained in different
grey levels.
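A minimal sketch of both slicing variants (assuming Python with NumPy; the window [a, b] and the sample levels are illustrative):

import numpy as np

def slice_without_background(u, a, b, L=255):
    # Levels inside [a, b] go to L, everything else is suppressed to 0.
    u = np.asarray(u)
    return np.where((u >= a) & (u <= b), L, 0)

def slice_with_background(u, a, b, L=255):
    # Levels inside [a, b] go to L, the rest of the image is kept unchanged.
    u = np.asarray(u)
    return np.where((u >= a) & (u <= b), L, u)

print(slice_without_background([10, 120, 240], 100, 200))   # [0 255 0]
print(slice_with_background([10, 120, 240], 100, 200))      # [10 255 240]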
Bit Extraction (a minor point operation) : Suppose
each image pixel is uniformly quantized to b bits. It is desired to
extract the nth most significant bit and display it, and the
corresponding transformation is then applied. This transformation
is useful in determining the number of visually significant bits in an image.
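A minimal sketch of bit extraction (assuming Python with NumPy and b = 8 bit pixels; the sample values are illustrative):

import numpy as np

def extract_bit_plane(u, n, b=8):
    # Return a 0/1 plane holding the n-th most significant bit (n = 1 is the MSB).
    u = np.asarray(u).astype(np.uint16)
    shift = b - n
    return (u >> shift) & 1

pixels = np.array([0, 37, 128, 200, 255], dtype=np.uint8)
print(extract_bit_plane(pixels, 1))   # most significant bit plane  -> [0 0 1 1 1]
print(extract_bit_plane(pixels, 8))   # least significant bit plane -> [0 1 0 0 1]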
(d) Histogram Modeling : The histogram of an image represents
the relative frequency of occurrence of the various grey levels in
the image. Histogram modeling techniques modify an image so
that its histogram has a desired shape; this is useful in image enhancement.
Filtering : A high-pass filter hHP can be obtained from a low-pass filter hLP as
hHP(m, n) = δ(m, n) − hLP(m, n)
where δ(m, n) is the unit impulse. Such a filter can be implemented by
simply subtracting the low-pass filter output from its input. Low-pass
filters are useful for noise smoothing and interpolation. High-pass filters
are useful in extracting edges and in sharpening images. Band-pass filters
are useful in the enhancement of edges and other high-pass image
characteristics in the presence of noise.
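A minimal sketch of this relation (assuming Python with NumPy; the 3 × 3 averaging filter and the tiny test image are illustrative):

import numpy as np

def box_lowpass(image):
    # 3 x 3 moving-average low-pass filter with edge padding at the borders.
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = padded[r:r + 3, c:c + 3].mean()
    return out

image = np.array([[10, 10, 10, 10],
                  [10, 10, 200, 10],
                  [10, 10, 10, 10]], dtype=float)
low = box_lowpass(image)
high = image - low        # high-pass output = input minus low-pass output
print(high.round(1))      # large values appear only around the bright spot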
□□□
4) The relationship among the data and objects which are stored in the database,
called the application database, and referred to by the?
A. Application programs
B. application model
C. graphics display
D. both a and b
10) RGB system needs __of storage for the frame buffer?
A. 100 megabytes
B. 10 megabytes
C. 3 megabytes
D. 2 Gb
11) The SRGP package provides the __ to a wide variety of display devices?
A. interface
B. connection
C. link
D. way
15) The midpoint circle drawing algorithm also uses the __of the circle to generate?
A. two-way symmetry
B. four-way symmetry
C. eight-way symmetry
D. both a & b
16) a polygon in which the line segment joining any 2 points within the polygon lies
completely inside the polygon is called__?
A. convex polygon
B. concave polygon
C. both a and b
D. both a and b
17) A polygon in which the line segment joining any 2 points within the polygon may
not lie completely inside the polygon is called __?
A. convex polygon
B. concave polygon
C. both a and b
D. Hexagon
20) Line produced by moving pen is __ at the end points than the line produced by the
pixel replication?
A. thin
B. straight
C. thicker
D. both a and b
21) The process of selecting and viewing the picture with different views is called__?
A. Windowing
B. clipping
C. projecting
D. both a and b
22) The process which divides each element of the picture into its visible and invisible
portions, allowing the invisible portion to be discarded is called__?
A. clipping
B. Windowing
C. both a and b
D. Projecting
25) A method used to test lines for total clipping is equivalent to the__?
A. logical XOR operator
B. logical OR operator
C. logical AND operator
D. both a and b
26) A process of changing the position of an object in a straight line path from one
coordinate location to another is called__?
A. Translation
B. rotation
C. motion
D. both b and c
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
A B A A C B C A A C A C A A C
16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
A B A A C A A A A C A C A A A
Case study
1.) Implement the polyline function using the DDA algorithm, given any number (n) of
input points.
2.) How much time is used up scanning across each row of pixels for the duration of
screen refresh on a raster system with a resolution of 1290 by 1024 and a refresh rate
of 60 frames per second?
3.) Implement a procedure to perform a one-point perspective projection of an object.
Glossary
2D Graphics
Displayed representation of a scene or an object along two axes of reference: height and width (x and
y).
3D Graphics
Displayed representation of a scene or an object that appears to have three axes of reference: height,
width, and depth (x, y, and z).
3D Pipeline
The process of 3D graphics can be divided into three stages: tessellation, geometry, and rendering. In
the tessellation stage, a described model of an object is created, and the object is then converted to a
set of polygons. The geometry stage includes transformation, lighting, and setup. The rendering stage,
which is critical for 3D image quality, creates a two dimensional display from the polygons created in
the geometry stage.
Anti-aliasing
Anti-aliasing is sub pixel interpolation, a technique that makes edges appear to have better resolution.
Bitmap
A Bitmap is a pixel by pixel image.
Blending
Blending is the combining of two or more objects by adding them on a pixel-by-pixel basis.
Depth Cueing
Depth cueing is the lowering of intensity as objects move away from the viewpoint.
Dithering
Dithering is a technique for achieving 24-bit quality in 8 or 16-bit frame buffers. Dithering uses two
colors to create the appearance of a third, giving a smooth appearance to an otherwise abrupt
transition.
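One common way to realise this is ordered dithering with a small threshold matrix; here is a minimal sketch (assuming Python with NumPy, a 2 × 2 Bayer matrix and a flat grey test patch, all illustrative):

import numpy as np

bayer2 = np.array([[0, 2],
                   [3, 1]]) / 4.0        # normalised 2 x 2 threshold matrix

def dither(image):
    # Threshold each pixel (grey levels in 0..1) against the tiled Bayer matrix.
    h, w = image.shape
    thresh = np.tile(bayer2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return (image > thresh).astype(np.uint8)

grey = np.full((4, 4), 0.5)              # a flat 50% grey patch
print(dither(grey))                      # roughly half the pixels turn on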
Flat Shading
The flat shading method is also called constant shading. For rendering, it assigns a uniform color
throughout an entire polygon. This shading results in the lowest quality, an object surface with a
faceted appearance and a visible underlying geometry that looks 'blocky'.
Hidden Surface Removal
Hidden surface removal, or visible surface determination, entails displaying only those surfaces that
are visible to a viewer, because objects are a collection of surfaces or solids.
Interpolation
Interpolation is a mathematical way of regenerating missing or needed information. For example, an
image needs to be scaled up by a factor of two, from 100 pixels to 200 pixels. The missing pixels are
generated by interpolating between the two pixels that are on either side of the pixel that needs to be
generated. After all of the 'missing' pixels have been interpolated, 200 pixels exist where only 100
existed before, and the image is twice as big as it used to be.
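A minimal sketch of this idea in one dimension (assuming Python with NumPy; the input row of pixel values is illustrative):

import numpy as np

def upscale_row_2x(row):
    # Double the number of samples by linearly interpolating between neighbours.
    row = np.asarray(row, dtype=float)
    n = len(row)
    x_new = np.linspace(0, n - 1, 2 * n)          # resample at twice the density
    return np.interp(x_new, np.arange(n), row)

print(upscale_row_2x([0, 100, 50, 200]))          # 4 pixel values become 8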
Lighting
There are many techniques for creating realistic graphical effects to simulate a real-life 3-D object on
a 2-D display. One technique is lighting. Lighting creates a real-world environment by means of
rendering the different grades of darkness and brightness of an object's appearance to make the object
look solid.
Line Buffer
A line buffer is a memory buffer used to hold one line of video. If the horizontal resolution of the
screen is 640 pixels and RGB is used as the color space, the line buffer would have to be 640
locations long by 3 bytes wide. This amounts to one location for each pixel and each color plane. Line
buffers are typically used in filtering algorithms.
Projection
The process of reducing three dimensions to two dimensions for display is called Projection. It is the
mapping of the visible part of a three dimensional object onto a two dimension screen.
Rasterization
The process of converting geometric primitives such as lines and polygons into the pixels of a raster
image for display.
Rendering
The process of creating life-like images on a screen using mathematical models and formulas to add
shading, color, and lamination to a 2D or 3D wireframe.
Transformation
Change of coordinates; a series of mathematical operations that act on output primitives and
geometric attributes to convert them from modeling coordinates to device coordinates.
Z-buffer
A part of off-screen memory that holds the distance from the viewpoint for each pixel, the Z-value.
When objects are rendered into a 2D frame buffer, the rendering engine must remove hidden surfaces.
Z-buffering
A process of removing hidden surfaces using the depth value stored in the Z-buffer. Before bringing
in a new frame, the rendering engine clears the buffer, setting all Z-values to 'infinity'. When
rendering objects, the engine assigns a Z-value to each pixel: the closer the pixel to the viewer, the
smaller the Z value. When a new pixel is rendered, its depth is compared with the stored depth in the
Z-buffer. The new pixel is written into the frame buffer only if its depth value is less than the stored
one.
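A minimal sketch of the per-pixel depth test described above (assuming Python with NumPy; the buffer sizes and the sample fragments are illustrative):

import numpy as np

WIDTH, HEIGHT = 4, 3
frame = np.zeros((HEIGHT, WIDTH))          # colour/intensity value per pixel
zbuf = np.full((HEIGHT, WIDTH), np.inf)    # cleared to 'infinity' before the frame

def plot(x, y, z, colour):
    # Write the pixel only if it is nearer than the depth already stored.
    if z < zbuf[y, x]:
        zbuf[y, x] = z
        frame[y, x] = colour

plot(1, 1, z=5.0, colour=200)   # far fragment drawn first
plot(1, 1, z=2.0, colour=90)    # nearer fragment overwrites it
plot(1, 1, z=7.0, colour=10)    # farther fragment is rejected
print(frame[1, 1], zbuf[1, 1])  # -> 90.0 2.0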
Z-sorting
A process of removing hidden surfaces by sorting polygons in back-to-front order prior to rendering.
Thus, when the polygons are rendered, the forward-most surfaces are rendered last. The rendering
results are correct unless objects are close to or intersect each other. The advantage is not requiring
memory for storing depth values. The disadvantage is the cost in more CPU cycles and limitations
when objects penetrate each other.
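A minimal sketch of the back-to-front ordering (the polygon records and the draw stub are illustrative assumptions):

polygons = [
    {"name": "near quad", "depth": 2.0},
    {"name": "far quad", "depth": 9.0},
    {"name": "mid quad", "depth": 5.0},
]

def draw(polygon):
    print("drawing", polygon["name"])

# Larger depth means farther away; render the farthest first so nearer
# surfaces are painted over it.
for polygon in sorted(polygons, key=lambda p: p["depth"], reverse=True):
    draw(polygon)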
Filtering
This is a broad word which can mean the removal of coffee grinds from the coffee. However,
within the narrow usage of this book, a filtering operation is the same as a convolution
operation (see "convolution"). Anti-aliasing is usually done by filtering.
flat projection
A method of projecting a 3D scene onto a 2D image such that the resulting object sizes are
not dependent on their position. Flat projection can be useful when a constant scale is needed
throughout an image, such as in some mechanical drawings.
Frame
One complete video image. When interlacing is used, a frame is composed of two fields,
each containing only half the scan lines.
GIF
A file format for storing images. GIF stands for Graphics Interchange Format, and is owned
by CompuServe, Inc.
key frame
A selected frame of an animation at which all the scene state is defined. In the key frame
animation method, the scene state at key frames is interpolated to create the scene state at the
in-between frames.
An animation control method that works by specifying the complete scene state at selected,
or key, frames. The scene state for the remaining frames is interpolated from the state at the
key frames.
Raster Scan
The name for the pattern the electron beam sweeps out on a CRT face. The image is made of
closely spaced scan lines, or horizontal sweeps.
Refresh Rate
The rate at which parts of the image on a CRT are re-painted, or refreshed. The horizontal
refresh rate is the rate at which individual scan lines are drawn. The vertical refresh rate is the
rate at which fields are drawn in interlaced mode, or whole frames are drawn in
non-interlaced mode.
The rate at which scan lines are drawn when the image on a CRT is re-drawn, or refreshed.
The rate at which fields are re-drawn on a CRT when in interlaced mode, or the rate at which
the whole image is re-drawn when in non-interlaced mode.
Scan Line
One line in a raster scan. Also used to mean one horizontal row of pixels.
Answers of all the questions (objectives as well as descriptive) are to be given in the
main answer-book only. Answers of objective type questions must be given in
sequential order. Similarly, all the parts of one question of the descriptive part should
be answered at one place in the answer book. One complete question should not be
answered at different places in the answer book.
No supplementary answer-book will be given to any candidate, hence the
candidates should write their answers precisely.
Objective : Part-I
Maximum Marks: 20
It contains 40 multiple choice questions with four choices and students will have to
pick the correct one (each carrying ½ mark).
1. Sub-dividing the total area and determining the number of sub-pixels inside the area
boundary is called:
(a) Pixel Phasing
(b) Pixel Weighting
(c) Filtering
(d) Super Sampling ( )
(a) Clipping
(b) Clip Window
(c) View Port
3. DAC is:
(a) Direct Access Code
(b) Digital Align Code
(c) Direct Area Clipping
(d) Digital to Analog converter ( )
11. In LCDs, picture definition is stored in a refresh buffer, and the screen is refreshed
at the rate of:
(a) 50 Frames per second
(b) 60 Frames per second
(c) 70 Frames per second
(d) 80 Frames per second ( )
13. In the Weiler-Atherton algorithm for polygon clipping, the subject polygon is:
(a) Clipping window
(b) Polygon to be clipped
(c) Any of the above
(d) None of the above ( )
14. In which LCD displays transistors are used to control the voltage at pixel locations
and to prevent charges from leaking out of the liquid crystal cells;
(a) Passive matrix LCD
(b) Active Matrix LCD
(c) Both (a) and (b)
(d) None of the above ( )
17. Conceptually drawing a line from any position P to a distant point outside the
coordinate extents of the object and counting the number of edge crossing along the
lines:
(a) Odd-even rule
(b) Odd priority rule
(c) Even-odd rule
19. 45 degree rotation about y axis will move a vector on x axis to:
(a) xz-plane
(b) xy-plane
(c) yz - plane
(d) z-axis ( )
21. A random scan (Vector) system stores point plotting instruction in the:
(a) View list
(b) Display list
(c) Both (a) and (b)
(d) None of the above ( )
27. The set of points such that the sum of the distances from two fixed positions is the
same for all points is called:
(a) Mid-point circle
(b) Ellipse
(c) Line function
(d) All of the above ( )
32. Beam penetration method for displaying color pictures has been used with:
(a) Random scan monitors
(b) Raster scan monitors
(c) Both (a) and (b)
(d) None of the above ( )
33. A video sequence is usually captured by a video recorder at the rate of:
(a) 1 Frame/sec.
(b) 2 frames/sec
(c) 25 frame/sec.
(d) 100 frames/sec. ( )
35. The transformation that distorts the shape of the object such that the shape appears as
if the object were composed of internal layers is:
(a) Scaling (b) Shearing
(c) Rotation (d) Reflection ( )
38. Which of the following algorithms has the usage of parametric line equation?
(a) Z-buffer algorithm (b) Painter's Algorithm
(c) Sutherland-Cohen Algorithm (d) Cyrus-Beck Algorithm ( )
40. The intensity distribution of the spot on the screen of CRT follows:
(a) Poisson distribution
(b) Gaussian distribution
(c) Any of the above
(d) None of the above ( )
DESCRIPTIVE PART – II
Year – 2011
Time allowed : 2 Hours Maximum Marks : 30
Attempt any four questions out of the six. All questions carry 7½ marks each.
4. What is two-dimensional clipping? Explain depth sorting method for visible surface
detection method.
Year – 2010
Time : 1 Hr. M. M. : 20
The question paper contains 40 multiple choice questions with four choices and students will
have to pick the correct one (each carrying ½ mark).
2. The perspective anomaly in which the object behind the centre of projection is
projected
(a) Perspective foreshortening
(b) Vanishing view
(c) View confusion
(d) Topological distortion ( )
3. The entire graph of the function f(x) = x² + kx – x + 9 is strictly above the x-axis if and
only if:
(a) -3 <k <5 (b) -3 <k <2
(c) -3 <k <7 (d) -5 <k <7 ( )
7. The best suited hidden surface algorithm to deal with non-polygonal non-planar
surface patches is:
8. A view graph is :
(a) An oversized slide designed for presentation on an O.H.P.
(b) Designed and created by exposing film to the output of the graphics system.
(c) A hard copy chart
(d) None of the above ( )
11. The point at which a set of projected parallel lines appear to converge is called a:
(a) Convergence point (b) Vanishing point
(c) Point of illusion (d) Point of delusion ( )
16. When the computer is not able to maintain operations and display, bright spots occur
on the screen. This is called:
(a) Dropping out (b) Snowing
(c) Flickering (d) Blanking ( )
18. All the following hidden surface algorithm employ image space approach except :
(a) Depth buffer method (b) Back face removal
(c) Scan line method (d) Depth sort method ( )
23. The process by which one object acquires the properties of another object is :
(a) Polymorphism (b) Inheritance
(c) Encapsulation (d) All of the above ( )
26. The …………….. layer lies between the transport and application layer.
(a) Data link (b) Physical
(c) Network (d) Session ( )
27. To provide comfort to computer user, the graphics screen must be refreshed at the
rate of ………………
(a) 5 frames / sec (b) 20 frames / sec
(c) 90 frames / sec (d) 200 f/s ( )
29. 45 degree rotation about the y-axis will move a vector on the x-axis to:
(a) xz-plane (b) xy-plane
(c) yz-plane (d) z-axis ( )
36. The period of time between an allocation and its subsequent disposal is called:
(a) Scope (b) Binding
(c) Lifetime (d) Longevity ( )
38. Assuming that one allows 256 depth value levels to be used how much memory
would a 512 x 512 pixel display require to store the z-buffer :
(a) 512 k (b) 256 k
(c) 1024 k (d) 128 k ( )
DESCRIPTIVE PART – II
Year – 2010
Time allowed : 2 Hours Maximum Marks : 30
Attempt any four questions out of the six. All questions carry 7½ marks each.
Prob.1 (a) What is gray scale? Write the applications of raster scan displays.
(b) What is a homogeneous coordinate system?
Prob. 2 (a) Write the line drawing algorithm. What is graphical data?
(b) What are the elements of a digital image processing system? What is image
enhancement?
Prob. 4 (a) Explain the terms ‘scaling’ and translation in the context of 3-D
transformation.
(b) Write the Sutherland-Cohen sub-division algorithm.
Prob. 5 (a) What is LCD technology? What do you mean by visual quantization?
(b) How is a digital image captured and stored? What is parallel projection?
Year – 2009
Time : 1 Hr. M. M. : 20
The question paper contains 40 multiple choice questions with four choices and students will
have to pick the correct one (each carrying ½ mark).
1. Pixel is:
(a) The smallest addressable point on the screen
(b) An input device
(c) A memory block
(d) A data structure ( )
5. Frame buffer is :
(a) The memory area in which the image being displayed is stored
6. Aliasing means :
(a) Rendering effect (b) Shading effect
(c) Staircase effect (d) Cueing effect ( )
9. A simple 3-bit plane frame buffer can have ........ number color combinations.
(a) 8 (b) 16
(c) 24 (d) 3 ( )
13. A 24-bit plane color frame buffer with three 10-bit wide color look-up tables can have
............ number of colors.
(a) 2²⁴ (b) 2⁸
(c) 2⁴⁸ (d) 2³⁰ ( )
16. The slope of the line joining the points (l, 2) and (3, 4) is :
(a) 0 (b) 1
(c) 2 (d) 3 ( )
18. In Bresenham's circle generation algorithm, if (x, y) is the current pixel position,
then the y-value of the next pixel position is:
(a) y or y + 1 (b) y alone
(c) y + 1 or y – 1 (d) y or y – 1 ( )
20. The property that adjacent pixels on a scan line are likely to have the same
characteristics is called:
(a) Spatial coherence (b) Area coherence
(c) Scan line coherence (d) Pixel coherence ( )
21. The technique of using a minimum number of intensity levels to obtain increased
visual resolution is called:
(a) Dithering (b) Half toning
(c) Depth cueing (d) Rendering ( )
22. If XL, XR, YB, YT represent the four parameters of x-left, x-right, y—bottom and y-
top of a clipping window and (x, y) is a point such that y > yT then (x, y) lies:
(a) Inside the Window
(b) Outside the Window
(c) On the boundary of the Window
(d) None of the above ( )
23. The Cohen-Sutherland line clipping algorithm divides the entire region into ...............
number of sub-regions.
(a) 4 (b) 8
(c) 9 (d) 10 ( )
24. ................ number of bits are used for representing each subregion of the Cohen-
Sutherland line clipping algorithm :
(a) 1 (b) 2
(c) 3 (d) 4 ( )
25. If two bits are zeros and two bits are ones in the code of a subregion in the Cohen-Sutherland
line clipping algorithm, then the subregion is:
(a) Corner region (b) Middle region
(c) Central region (d) None of the above ( )
26. In the Cohen-Sutherland line clipping algorithm, if codes of the two points P and Q
are 0101 and 0001 then the line segment joining the points P and Q will be the
clipping window:
(a) Totally outside (b) Partially outside
(c) Totally inside (d) None of the above ( )
27. If XL, XR, YB, YT represent the four parameters of x-left, x-right, y-bottom and y-
top of a clipping window and (x, y) is a point inside the window such that
XL < x < XR and YB < y < YT, then the code of the point (x, y) in the Cohen-Sutherland
algorithm is:
(a) 1100 (b) 1000
(c) 1110 (d) 0000 ( )
28. Suppose (x1 , y1 ), (x2 , y2 ) .......... (xn , yn ) are n vertices of a closed polygon and (x ,y) is
a point such that x is less than the minimum of { x1 , x2 .............. xn } then the point (x,
y) lies ......... the polygon.
(a) inside (b) outside
(c) on (d) as vertex of ( )
29. Suppose (x1 , y1 ), (x2 , y2 ) .......... (xn , yn ) are n vertices of a closed polygon and (x, y)
is a point such that y is less than the minimum of {y1 , y2 .......... yn } then the point (x,
y) lies .................. the polygon.
(a) inside (b) outside
(c) on (d) as vertex of ( )
30. In a polygon inside test, if the winding number of a point is zero, then the point lies
........... the polygon.
(a) inside (b) outside
(c) on (d) as vertex of ( )
35. If (x, y, w), w ≠ 0, is a point in a homogeneous co-ordinate system then its equivalent
in the two-dimensional system is:
(a) (my. 1) (b) (x,y,0)
(c) (x/w, y/w) (d) (x,y, x—y) ( )
36. The two-dimensional matrix transformation for scaling with a units along the x-axis
and b units along the y-axis is:
(a) [a 0 0; 0 b 0; 0 0 1] (b) [a 0 0; 0 b 0; a b 1]
(c) [1 0 0; 0 1 0; a b 1] (d) [1 0 0; 0 1 0; b a 1] ( )
37. The two-dimensional matrix transformation for reflection of a point with respect to
the x-axis is:
(a) [1 0 0; 0 1 0; 0 0 1] (b) [1 0 0; 0 –1 0; 0 0 1]
(c) [1 0 0; 1 0 0; 0 0 1] (d) [1 0 0; 0 1 0; 0 0 1] ( )
40. The three-dimensional matrix transformation for rotation by an angle θ with respect
to the y-axis in the positive direction is:
(a) – (d) [four candidate 4 × 4 rotation matrices] ( )
Answer Key:
1. (a) 2. (a) 3. (a) 4. (c) 5. (d) 6. (c) 7. (b) 8. (a) 9. (d) 10. (c)
11. (b) 12. (d) 13. (a) 14. (a) 15. (a) 16. (b) 17. (a) 18. (d) 19. (c) 20. (c)
21. (b) 22. (b) 23. (c) 24. (d) 25. (a) 26. (a) 27. (d) 28. (b) 29. (b) 30. (b)
31. (a) 32. (b) 33. (a) 34. (a) 35. (a) 36. (a) 37. (b) 38. (c) 39. (c) 40. (a)
DESCRIPTIVE PART – II
Year – 2009
Prob. 1 (a) Discuss the terms lookup table and aspect ratio in detail.
(b) Explain the RGB color model in brief.
Prob. 2 (a) Discuss the terms: pixel, frame buffer, bit plane and dpi.
(b) What is the rate of a 1024 x 1024 frame buffer with an average access rate per
pixel of 200 nanoseconds on a simple color display?
Prob. 2 Write and discuss the line drawing algorithms. Which one is best? Why?
Prob. 3 (a) How many geometrical transformations are there? Write their names. Discuss one
of them in brief.
(b) Derive the transformation that rotates an object point through an angle θ about the
origin. Write the matrix representation for the rotation.
Prob. 5 (a) Explain the scaling and translation in the context of three dimensional
transformations.
(b) Define tilting as a rotation about the x-axis followed by a rotation about the y-
axis. Find the tilting matrix for the same.
Prob. 6 (a) Define digital image processing. Write and explain the difference between DIP
and computer graphics.
(b) How can you store and capture a digital image? Discuss.
Year – 2008
Time : 1 Hr. M. M. : 20
The question paper contains 40 multiple choice questions with four choices and students will
have to pick the correct one (each carrying ½ mark).
2. In LCDs, picture definition is stored in a refresh buffer, and the screen is refreshed
at the rate of:
(a) 50 frames per second (b) 60 frames per second
(c) 70 frames per second (d) 80 frames per second ( )
3. In which LCD displays transistors are used to control the voltage at pixel locations
and to prevent charges from leaking out of the liquid crystal cells:
(a) Passive Matrix LCD (b) Active Matrix LCD
(c) Both (a) and (b) (d) None of the above ( )
7. If a user wants to recolor an area that is not defined within a single color boundary,
then the algorithm used is:
(a) Boundary fill algorithm
8. Subdividing the total area and determining the number of sub pixels inside the area
boundary is called:
(a) Pixel phasing (b) Pixel weighting
(c) Filtering (d) Super sampling ( )
11. In Cohen-Sutherland line clipping, every line end point in a picture is assigned a four
digit binary code, called:
(a) Front point (b) End point
(c) Region point (d) All of the above ( )
16. A random scan (vector) system stores point plotting instructions in the :
17. The set of points such that the sum of the distances from two fixed positions is the
same for all points, is called :
(a) Midpoint circle (b) Ellipse
(c) Line function (d) All of the above ( )
18. Conceptually drawing a line from any position P to a distant point outside the
coordinate extents of the object and counting the number of edge crossing along the
lines:
(a) Odd-even rule (b) Odd priority rule
(c) Even-odd rule (d) All of the above ( )
23. Beam penetration method for displaying color pictures has been used with :
(a) Random scan monitors (b) Raster scan monitors
(c) Both (a) and (b) (d) None of the above ( )
24. In the Weiler-Atherton algorithm for polygon clipping, the subject polygon is:
(a) Clipping window (b) Polygon to be clipped
(c) Any of the above (d) None of the above ( )
25. Dragging in computer graphics can be achieved through the following transformation
:
(a) Translation (b) Rotation
(c) Scaling (d) Mirror reflection ( )
33. The transformation that distorts the shape of the object such that the shape appears as
if the object were composed of internal layers is:
(a) Scaling (b) Shearing
(c) Rotation (d) Reflection ( )
39. Which of the following algorithms has the usage of parametric line equation:
(a) Z-buffer Algorithm
(b) Painter’s Algorithm
(c) Sutherland Cohen Algorithm
(d) Cyrus-Beck Algorithm ( )
DESCRIPTIVE PART – II
Year – 2008
Prob. 3 (a) Describe the generalized Bresenham's line algorithm. Also give a suitable
example.
Prob. 4 What do you mean by clipping operations? Explain different types of clipping
operation algorithms. Also give a suitable example.
Year – 2007
Time : 1 Hr. M. M. : 20
The question paper contains 40 multiple choice questions with four choices and
students will have to pick the correct one (each carrying ½ mark).
2. To provide comfort to the computer user, the graphics screen must be refreshed at the
rate of :
(a) 5 frames per second (b) 20 frames per second
(c) 50 frames per second (d) 200 frames per second ( )
7. A rectangle has been drawn on the screen. It is desired to carry out a zoom-in process
to double the size of the rectangle. This process would involve:
11. Which of the following attributes is important for presenting text in multimedia
document:
(a) Font (b) Character format
(c) Colour (d) All the three mentioned. above ( )
12. The dot product of two vectors is 12. If one of the vectors is 3I, the other vector is:
(a) 4J (b) 4I + 4J + 4K
(c) 4K (d) 2J + 2K ( )
17. The end points of a given line are (0, 0) and (6, 8). The slope and y intercept are
computed as :
(a) 3, 3 (b) 0, 3
(c) 3, 0 (d) 3,-3 ( )
19. The principal vanishing points for the standard perspective transformation are:
(a) Three (b) Two
(c) One (d) None of the above ( )
20. If the window parameters are Xwmin = 1, Xwmax = 3 and the viewport parameters are
XVmin = 0, XVmax = 1, the scale Sx is:
(a) 1 (b) 1/2
(c) 2 (d) 1/4 ( )
31. For carrying out Sutherland-Cohen clipping the end point codes of 4 line are given
below. Find out which one will be totally invisible from the clipping window :
(a) 0000, 0000 (b) 0100, 1000
(c) 0001, 1000 (d) 0110, 1010 ( )
32. Hypermedia :
(a) Is another medium like graphics, text etc.
(b) Provides a link between two media
(c) Is a facility to permit two media to be played together
(d) Is another name for multimedia ( )
33. AutoCAD is :
(a) An abbreviation for automatic calculation and drawing
(b) A standard software for drawing and building machine parts
(c) A software to generate scenarios
(d) Used to provide shading effects in drawings ( )
35. The slope of a cubic Bezier curve at the start of the curve is controlled by:
(a) First control point (b) First 2 control points
(c) First 3 control points (d) All 4 control points ( )
40. To store good quality sound the audio signal in a multimedia PC is sampled at the rate
of :
(a) 44.1 Hz (b) 4.41 kHz
(c) 44 kHz (d) 4.41 MHz ( )
DESCRIPTIVE PART – II
Year – 2007
Time allowed : 2 Hours Maximum Marks : 30
Attempt any four questions out of the six. All questions carry 7½ marks each.
Prob. 2 What are the various input devices used for graphics? Explain Scan Line Seed Fill
Algorithm.
Prob. 3 (a) Discuss the merits and demerits of various clipping algorithms.
(b) Distinguish between windows and viewport.
Year – 2006
Time : 1 Hr. M. M. : 20
The question paper contains 40 multiple choice questions with four choices and students will
have to pick the correct one (each carrying ½ mark).
5. In the Sutherland-Cohen algorithm for clipping, an end point lying outside the left
margin of the window has the following code:
(a) 0010 (b) 1001
(c) 1000 (d) 0001 ( )
14. In which type of parallel projection are the foreshortening factors all different?
(a) Axonometric (b) Trimetric
(c) Isometric (d) Dimetric ( )
16. In cavalier oblique projection, the angle between the oblique projector and the plane
of projection is:
(a) 63.43° (b) 30°
(c) 45° (d) 60° ( )
17. What does ‘K’ in the CMYK colour model stand for?
(a) Krimson
(b) Black
(c) Kohen
(d) There is no such model as CMYK ( )
18. If a line is drawn by using DDA, brightness will be uniform only in the case of:
(a) Horizontal lines (b) Vertical lines
(c) Inclined lines with slopes (d) All of the above ( )
20. Which of the following term is not associated with computer graphics ?
(a) Locator (b) Stroke e
(c) Pick (d) Roll-over ( )
27. The process of determining the portion of a primitive lying within a region is called :
(a) Back Face Removal (b) Visible Surface Detection
(c) Clipping (d) Area Filling ( )
28. How many bits per pixel are generally required for ‘true colour’ system :
(a) 32 bits (b) 24 bits
(c) 16 bits (d) 8 bits ( )
30. In the mid-point ellipse algorithm, the basic equation that is used to identify whether a
point (x, y) is outside, inside or on the boundary of the ellipse is:
(a) x² + y² = r² (b) x²/a² + y²/b² = 1
(c) x²/a² + y²/b² – 1 = 0 (d) None of the above ( )
31. The means of accomplishing changes in orientation, size and shape that alter the
coordinate description of objects :
(a) Translation (b) Transformation
(c) Projection (d) Objection ( )
34. Which of the following algorithms has the usage of the parametric line equation?
(a) Painter’s Algorithm (b) Z-Buffer Algorithm
(c) Sutherland-Cohen Algorithm (d) Cyrus-Beck Algorithm ( )
36. Creating an enlarged view of a portion of the scene in the image is called:
39. Which of the following criteria must be fulfilled by a line drawing algorithm?
(a) Line should appear as a straight line
(b) It should start and end accurately
(c) Line should have constant brightness .
(d) All of the above ( )
DESCRIPTIVE PART – II
Year – 2006
Time allowed : 2 Hours Maximum Marks : 30
Attempt any four questions out of the six. All questions carry 7½ marks each.
Prob. 2 Describe the generalized Bresenham's line drawing algorithm in detail. Give an
example.
(Hint: The general Bresenham's line drawing algorithm is different from DDA)
Prob. 3 Discuss the Cyrus-Beck Algorithm for clipping lines in a non-rectangular clipping
window.
(Hint: Consider the clipping window to be polygonal)
Prob. 4 Write the transformation matrix for the following 2 – dimensional transformations in
homogeneous coordinate system:
(a) Reflection about y = - x axis.
(b) Rotation by θ in the counterclockwise direction.
(c) Moving an object 2.5 units right and 4 units down.
(d) Move an object 3 unit left and reflect about the x axis.
(e) Simultaneous shearing.
Prob. 6 (a) Explain how the light (and its path) is changed in context of liquid crystal
displays.
(b) How can a digital image be captured? List out four file formats to store
digital image.
(c) Discuss the HSV colour model.
Bibliography
1. http://www.cgsociety.org/
2. http://forums.cgsociety.org/
3. http://www.maacindia.com/
4. http://cg.tutsplus.com/
5. http://programmedlessons.org/VectorLessons/index.html