
KAMARAJ COLLEGE

SELF FINANCING COURSES


(Reaccredited with “A+” Grade by NAAC)
(Affiliated to Manonmaniam Sundaranar University)
THOOTHUKUDI - 628 003

STUDY MATERIAL FOR


B.Sc. COMPUTER SCIENCE

COMPUTER GRAPHICS & VISUALIZATION


VI – SEMESTER

ACADEMIC YEAR 2022 - 2023


PREPARED BY
DEPARTMENT OF COMPUTER SCIENCE (SF)
KAMARAJ COLLEGE,
THOOTHUKUDI
STUDY MATERIAL FOR B.Sc. COMPUTER SCIENCE
COMPUTER GRAPHICS & VISUALIZATION
VI - SEMESTER, ACADEMIC YEAR 2022-2023

UNIT CONTENT

I OVERVIEW OF GRAPHICS SYSTEM

II ATTRIBUTES OF OUTPUT PRIMITIVES

III TWO DIMENSIONAL VIEWING

IV GRAPHICAL USER INTERFACES AND INTERACTIVE

V THREE-DIMENSIONAL VIEWING


UNIT - I
OVERVIEW OF GRAPHICS SYSTEM

Visual Display Devices:


The primary output device in a graphics system is a video monitor. Although many technologies exist, the operation of most video monitors is based on the standard Cathode Ray Tube (CRT) design.
Cathode Ray Tubes (CRT):
A cathode ray tube (CRT) is a specialized vacuum tube in which images are produced when an electron beam strikes a phosphorescent surface. It modulates, accelerates, and deflects electron beam(s) onto the screen to create the images. Most desktop computer displays make use of a CRT for image display.
Construction of a CRT:
 The primary components are the heated metal cathode and a control grid.
 Heat is supplied to the cathode by passing current through its filament; the heated cathode then ejects a stream of electrons.
 This stream of negatively charged electrons is accelerated towards the phosphor screen by a high positive voltage.
 This acceleration is generally produced by means of an accelerating anode.
 The next component is the focusing system, which forces the electron beam to converge to a small spot on the screen.
 Without a focusing system, the electrons would be scattered by their mutual repulsion, and we would not get a sharp image of the object.
 Focusing can be accomplished by either electrostatic or magnetic fields.


Types of Deflection:
1. Electrostatic Deflection:
The electron beam (cathode rays) passes through a highly positively charged metal cylinder that forms an electrostatic lens. This electrostatic lens focuses the cathode rays onto the center of the screen, just as an optical lens focuses a beam of light. Two pairs of parallel plates are mounted inside the CRT tube.

2. Magnetic Deflection:
Here, two pairs of coils are used. One pair is mounted on the top and bottom of the CRT tube, and the other pair on the two opposite sides; that is, one pair is mounted horizontally and the other vertically. The magnetic field produced by each pair generates a force on the electron beam in a direction perpendicular both to the magnetic field and to the direction of flow of the beam.
 Different kinds of phosphors are used in a CRT. They differ in how long the phosphor continues to emit light after the CRT beam has been removed. This property is referred to as persistence.
 The maximum number of points that can be displayed on a CRT is referred to as the resolution (e.g., 1024 × 768).
Raster-Scan
 The electron beam is swept across the screen one row at a time from top to
bottom. As it moves across each row, the beam intensity is turned on and
off to create a pattern of illuminated spots. This scanning process is called
refreshing.
 Each complete scanning of a screen is normally called a frame. The
refreshing rate, called the frame rate, is normally 60 to 80 frames per
second, or described as 60 Hz to 80 Hz.
 Picture definition is stored in a memory area called the frame buffer. This
frame buffer stores the intensity values for all the screen points. Each
screen point is called a pixel (picture element or pel).
 On black and white systems, the frame buffer storing the values of the pixels is called a bitmap. Each entry in the bitmap is a single bit that determines whether the pixel is on (1) or off (0).
 On color systems, the frame buffer storing the values of the pixels is called a pixmap (though nowadays many graphics libraries call it a bitmap too). Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a true-color display, each entry is 24 bits (8 bits per red/green/blue channel, giving each channel 2^8 = 256 levels of intensity, i.e. 256 voltage settings for each of the red/green/blue electron guns).
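As a quick illustration of these bit depths, the following sketch (the resolutions are assumed for illustration, not taken from the text) computes the frame-buffer memory needed for a monochrome bitmap versus a true-color pixmap:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Return the frame-buffer size in bytes for a given screen
    resolution and bits per pixel (1 for a bitmap, 24 for true color)."""
    total_bits = width * height * bits_per_pixel
    return total_bits // 8

# A 1024 x 768 monochrome bitmap: 1 bit per pixel -> 98304 bytes (96 KB).
mono = framebuffer_bytes(1024, 768, 1)

# The same resolution as a 24-bit true-color pixmap -> 2359296 bytes (2.25 MB).
true_color = framebuffer_bytes(1024, 768, 24)
```

The 24-fold growth from bitmap to pixmap is exactly the jump from 1 bit to 24 bits per pixel.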


Random-Scan (Vector Display) or stroke-writing or calligraphic displays:

 The CRT's electron beam is directed only to the parts of the screen where
a picture is to be drawn. The picture definition is stored as a set of line-
drawing commands in a refresh display file or a refresh buffer in memory.
 Random-scan displays generally have higher resolution than raster systems and can produce smooth line drawings; however, they cannot display realistic shaded scenes.
Color CRT Monitors:
A color CRT monitor displays color pictures by using a combination of phosphors that emit different colors of light.
The two popular approaches for producing color displays with a CRT are:
 Beam Penetration Method
 Shadow-Mask Method

1. Beam Penetration Method:


The beam-penetration method has been used with random-scan monitors. In this method, the CRT screen is coated with two layers of phosphor, red and green, and the displayed color depends on how far the electron beam penetrates the phosphor layers. This method produces only four colors: red, green, orange, and yellow. A beam of slow electrons excites only the outer red layer, so the screen shows red. A beam of high-speed electrons penetrates to the inner green layer, so the screen shows green. Intermediate beam speeds excite both layers, producing the combination colors orange and yellow.


Advantages:
 Inexpensive
Disadvantages:
 Only four colors are possible.
 Quality of pictures is not as good as with the shadow-mask method.
2. Shadow-Mask Method:
 The shadow-mask method is commonly used in raster-scan systems because it produces a much wider range of colors than the beam-penetration method.
 It is used in the majority of color TV sets and monitors.

Construction:
A shadow mask CRT has 3 phosphor color dots at each pixel position.
 One phosphor dot emits: red light
 Another emits: green light
 Third emits: blue light

This type of CRT has 3 electron guns, one for each color dot and a shadow mask
grid just behind the phosphor coated screen. Shadow mask grid is pierced with
small round holes in a triangular pattern.


Working:
 The deflection system of the CRT operates on all 3 electron beams
simultaneously; the 3 electron beams are deflected and focused as a group
onto the shadow mask, which contains a sequence of holes aligned with the
phosphor- dot patterns.
 When the three beams pass through a hole in the shadow mask, they
activate a dotted triangle, which occurs as a small color spot on the screen.
 The phosphor dots in the triangles are organized so that each electron beam
can activate only its corresponding color dot when it passes through the
shadow mask.
Advantages:
 Realistic images
 Millions of different colors can be generated
 Shaded scenes are possible

Disadvantages:
 Relatively expensive compared with the monochrome CRT
 Relatively poor resolution
 Convergence problems
Direct View Storage Tubes:
DVST terminals also use the random scan approach to generate the image
on the CRT screen. The term "storage tube" refers to the ability of the screen to
retain the image which has been projected against it, thus avoiding the need to
rewrite the image constantly.
Function of guns: Two guns are used in DVST
 Primary gun: used to store the picture pattern.
 Flood gun (secondary gun): used to maintain the picture display.


Advantage:
 No refreshing is needed.
 High Resolution
 Cost is very less
Disadvantages:
 It is not possible to erase a selected part of the picture.
 It is not suitable for dynamic graphics applications.
 Modifying any part of the picture requires redrawing the entire picture, which takes time.
Flat Panel Display:
A flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT.
Examples: small TV monitors, calculators, pocket video games, laptop computers, advertisement boards in elevators.
1. Emissive Displays:
Emissive displays are devices that convert electrical energy into light. Examples are the plasma panel, the thin-film electroluminescent display, and LEDs (Light Emitting Diodes).
2. Non-Emissive Displays:
Non-emissive displays use optical effects to convert sunlight or light from some other source into graphics patterns. An example is the LCD (Liquid Crystal Display).
Plasma Panel Display:
Plasma panels are also called gas-discharge displays. A plasma panel consists of an array of small lights that are fluorescent in nature. The essential components of a plasma-panel display are:
1. Cathode:
It consists of fine wires. It delivers a negative voltage to the gas cells, applied along the negative axis.
2. Anode:
It also consists of fine wires. It delivers a positive voltage, applied along the positive axis.
3. Fluorescent cells:
These are small pockets of gas (neon); when a voltage is applied, the gas emits light.
4. Glass Plates:
These plates act as capacitors. Once a voltage has been applied, the cell glows continuously.
A gas cell glows when there is a significant voltage difference between its horizontal and vertical wires. The voltage level is kept between 90 and 120 volts. A plasma panel does not require refreshing; erasing is done by reducing the voltage to 90 volts.
Each cell of the plasma panel has two states, so the cell is said to be stable. A displayable point in the plasma panel is formed at the crossing of a horizontal and a vertical grid wire. The resolution of a plasma panel can be up to 512 × 512 pixels.
Advantages:
 Large screen size is possible
 Less volume
 Less weight
 Flicker-free display
Disadvantages:
 Poor resolution (up to 512 × 512)
 The wiring requirement for the anode and the cathode is complex.
 Its addressing is also complex.


LED (Light Emitting Diode):


In an LED display, a matrix of diodes is arranged to form the pixel positions, and the picture definition is stored in a refresh buffer. Data is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern of the display.
LCD (Liquid Crystal Display):
 Liquid crystal displays are devices that produce a picture by passing polarized light, from the surroundings or from an internal light source, through a liquid-crystal material that transmits the light.
 An LCD places the liquid-crystal material between two glass plates set at right angles to each other, with the liquid filling the space between them. One glass plate carries rows of conductors arranged vertically; the other carries rows of conductors arranged horizontally. A pixel position is defined by the intersection of a vertical and a horizontal conductor; this position is an active part of the screen.
 A liquid crystal display is temperature dependent, operating between zero and seventy degrees Celsius. It is flat and requires very little power to operate.
Advantage:
 Low power consumption.
 Small Size
 Low Cost
Disadvantages:
 LCDs are temperature dependent (0-70 °C).
 LCDs do not themselves emit light; as a result, the image has very little contrast.
 Simple LCDs have no color capability.
 The resolution is not as good as that of a CRT.


Input Devices
Input devices are the hardware used to transfer input to the computer. The data can be in the form of text, graphics, and sound. Output devices display data from the memory of the computer; output can be text, numeric data, lines, polygons, and other objects.

These Devices include:


 Keyboard
 Mouse
 Trackball
 Spaceball
 Joystick
 Light Pen
 Digitizer
 Touch Panels
 Voice Recognition
 Image Scanner
Keyboard:
The most commonly used input device is the keyboard. Data is entered by pressing keys, all of which are labeled. The standard 101-key layout is commonly called a QWERTY keyboard.
The keyboard has alphabetic as well as numeric keys. Some special keys are also available.
1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9


2. Alphabetic keys: a to z (lower case), A to Z (upper case)


3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1, F2, F3, …
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used
for fast entry of numeric data.
Functions of the Keyboard:
1. Alphanumeric keyboards are used in CAD (Computer Aided Drafting).
2. Keyboards are available with special features like screen co-ordinate entry, menu selection, graphics functions, etc.
3. Special-purpose keyboards are available having buttons, dials, and switches. Dials are used to enter scalar values and real numbers, while buttons and switches are used to enter predefined function values.
Advantage:
 Suitable for entering numeric data.
 Function keys are a fast and effective method of using commands, with
fewer errors.
Disadvantage:
Keyboard is not suitable for graphics input.
Mouse

A mouse is a pointing device used to position the pointer on the screen. It is a small palm-sized box with two or three buttons on top. Movement of the mouse along the x-axis produces horizontal movement of the cursor, and movement along the y-axis produces vertical movement of the cursor on the screen. The mouse cannot be used to enter text; therefore, it is used in conjunction with a keyboard.
Advantage:
 Easy to use
 Not very expensive

Trackball
It is a pointing device similar to a mouse, used mainly in notebook or laptop computers in place of a mouse. It is a ball which is half inserted in a socket; by moving fingers over the ball, the pointer can be moved.

Advantage
 Trackball is stationary, so it does not require much space to use it.
 Compact Size
Space ball
It is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two. The movement is recorded by strain gauges, which respond to the pressure applied as the ball is pushed and pulled in various directions. The ball has a diameter of around 7.5 cm and is mounted in the base using rollers; one-third of the ball is inside the box, and the rest is outside.


Applications
 It is used for three-dimensional positioning of the object.
 It is used to select various functions in the field of virtual reality.
 It is applicable in CAD applications.
 Animation is also done using a spaceball.
 It is used in the area of simulation and modeling.
Joystick

A joystick is also a pointing device, used to change the cursor position on a monitor screen. A joystick is a stick with a spherical ball at both its lower and upper ends, as shown in the figure. The lower spherical ball moves in a socket, and the stick can be tilted in all four directions. The function of a joystick is similar to that of a mouse. It is mainly used in Computer Aided Design (CAD) and for playing computer games.
Light Pen

A light pen (similar to an ordinary pen) is a pointing device used to select a displayed menu item or draw pictures on the monitor screen. It consists of a photocell and an optical system placed in a small tube. When its tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the screen location and sends the corresponding signal to the CPU.
Uses
1. Light pens can be used to input coordinate positions by providing the necessary arrangements.
2. With a suitable background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.

Digitizers:

The digitizer is an operator input device which contains a large, smooth board (similar in appearance to a mechanical drawing board) and an electronic tracking device which can be moved over the surface to follow existing lines. The electronic tracking device contains a switch for the user to record the desired x and y coordinate positions. The coordinates can be entered into computer memory or stored on an off-line storage medium such as magnetic tape.
Advantages:
 Drawing can easily be changed.
 It provides the capability of interactive graphics.
Disadvantages:
 Costly
 Suitable only for applications which require high-resolution graphics.
Touch Panels:
 A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A touch screen registers input when a finger or other object comes in contact with the screen.


 When the wave signals are interrupted by contact with the screen, that location is recorded. Touch screens have long been used in military applications.
Voice Systems (Voice Recognition):
 Voice Recognition is one of the newest, most complex input techniques
used to interact with the computer. The user inputs data by speaking into a
microphone. The simplest form of voice recognition is a one-word
command spoken by one person. Each command is isolated with pauses
between the words.
 Voice Recognition is used in some graphics workstations as input devices
to accept voice commands. The voice-system input can be used to initiate
graphics operations or to enter data. These systems operate by matching an
input against a predefined dictionary of words and phrases.
Advantage:
 More efficient device.
 Easy to use
 Unauthorized speakers can be identified
Disadvantages:
 Very limited vocabulary
 Voice of different operators can't be distinguished.
Image Scanner
 It is an input device. The data or text is written on paper, and the paper is fed to the scanner. The written information is converted into electronic format and stored in the computer. The input documents can contain text, handwritten material, pictures, etc.
 By storing a document in the computer, it becomes safe for a longer period of time: the document is stored permanently for the future, can be changed when needed, and can be printed when needed.
 Scanning can be of black-and-white or colored pictures. On a stored picture, 2D or 3D rotations, scaling, and other operations can be applied.
Types of image Scanner:
1. Flat Bed Scanner

It resembles a photocopy machine. It has a glass plate on top, which is further covered by a lid. The document to be scanned is kept on the glass plate.

Light is passed underneath the glass plate and moved from left to right; the scanning is done line by line, and the process is repeated until the complete document is scanned. A 4" × 6" document can be scanned within 20-25 seconds.
2. Hand Held Scanner:


It has a number of LEDs (Light Emitting Diodes) arranged in a small case. It is called a hand-held scanner because it is held in the hand while scanning. For scanning, the scanner is moved over the document from top to bottom with its light on, and it must be dragged very slowly over the document. If the dragging of the scanner over the document is not steady, the conversion will not be correct.
Graphics software
There are mainly two types of graphics software:
 General programming package
 Special−purpose application package
General programming package
 A general programming package provides an extensive set of graphics functions that can be used in a high-level programming language such as C or FORTRAN.
 It includes basic drawing elements such as lines, curves, and polygons, the color of elements, transformations, etc.
 Example: GL (Graphics Library).

Special-purpose application package
 Special-purpose application packages are customized for a particular application; they implement the required facilities and provide an interface so that the user need not worry about how the programming works. The user simply works through the application's interface.
 Examples: CAD, medical and business systems.
Coordinate representations
 Except for a few, all general packages are designed to be used with Cartesian coordinate specifications.
 If coordinate values for a picture are specified in some other reference frame, they must be converted to Cartesian coordinates before being given as input to the graphics package.
 Special-purpose packages may allow the use of other coordinate frames that suit the application.
 In general, several different Cartesian reference frames are used to construct and display a scene.
 We construct the shape of each object in a separate coordinate system called modeling coordinates (sometimes local coordinates or master coordinates).
 Once the individual object shapes have been specified, we place the objects into appropriate positions in the scene using world coordinates.
 Finally, the world-coordinate description of the scene is transferred to one or more output-device reference frames for display. These display coordinate systems are referred to as device coordinates or screen coordinates.
 Generally, a graphics system first converts a world-coordinate position to normalized device coordinates, in the range 0 to 1, before the final conversion to specific device coordinates.
 An initial modeling-coordinate position (Xmc, Ymc) is thus transferred to a device-coordinate position (Xdc, Ydc) with the sequence:
(Xmc, Ymc) → (Xwc, Ywc) → (Xnc, Ync) → (Xdc, Ydc).
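The last two stages of this sequence can be sketched in code. The window boundaries and screen size below are assumed values for illustration; the two mappings follow the usual linear window-to-viewport form:

```python
def world_to_normalized(xw, yw, xw_min, xw_max, yw_min, yw_max):
    """Map a world-coordinate point into normalized device
    coordinates in the range 0 to 1."""
    xn = (xw - xw_min) / (xw_max - xw_min)
    yn = (yw - yw_min) / (yw_max - yw_min)
    return xn, yn

def normalized_to_device(xn, yn, width, height):
    """Map normalized coordinates to integer device (screen) coordinates."""
    xd = round(xn * (width - 1))
    yd = round(yn * (height - 1))
    return xd, yd

# A world window spanning 0..100 in x and y, mapped onto an assumed
# 640 x 480 screen:
xn, yn = world_to_normalized(50, 25, 0, 100, 0, 100)
xd, yd = normalized_to_device(xn, yn, 640, 480)
```

Splitting the conversion this way means only the last stage depends on the particular output device, which is exactly why the intermediate normalized frame is used.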

Graphic Functions
 A general-purpose graphics package provides the user with a variety of functions for creating and manipulating pictures.
 The basic building blocks for pictures are referred to as output primitives. They include character strings and geometric entities such as points, straight lines, curved lines, filled areas, and shapes defined with arrays of color points.
 Input functions are used to control and process input from the various input devices, such as a mouse, tablet, etc.
 Control operations are used for controlling and housekeeping tasks, such as clearing the display screen.
 All such built-in functions which we can use for our purposes are known as graphics functions.
Software Standards
 The primary goal of standardizing graphics software is portability, so that the software can be used on any hardware system and rewriting of programs for different systems is avoided.
 Some of these standards are discussed below.

Graphical Kernel System (GKS)


 This system was adopted as the first graphics software standard by the International Standards Organization (ISO) and by various national standards organizations, including ANSI.
 GKS was originally designed as a two-dimensional graphics package; an extension was later developed for three dimensions.

PHIGS (Programmer's Hierarchical Interactive Graphics Standard)


 PHIGS is an extension of GKS. Increased capabilities for object modeling, color specification, surface rendering, and picture manipulation are provided in PHIGS.
 An extension of PHIGS, called PHIGS+, was developed to provide three-dimensional surface-shading capabilities not available in PHIGS.

OUTPUT PRIMITIVES
Points and Lines
 Point plotting is done by converting a single coordinate position
furnished by an application program into appropriate operations for the
output device in use.
 Line drawing is done by calculating intermediate positions along the
line path between two specified end point positions.


 The output device is then directed to fill in those positions between the end points with some color.
 For some devices, such as a pen plotter or a random-scan display, a straight line can be drawn smoothly from one end point to the other.
 Digital devices display a straight line segment by plotting discrete points
between the two endpoints.
 Discrete coordinate positions along the line path are calculated from
the equation of the line.
 For a raster video display, the line intensity is loaded into the frame buffer at the corresponding pixel positions.
 Reading from the frame buffer, the video controller then plots the screen
pixels.
 Screen locations are referenced with integer values, so plotted positions may only approximate the actual line positions between the two specified end points.
 For example, a line position of (12.36, 23.87) would be converted to the pixel position (12, 24).
 This rounding of coordinate values to integers causes lines to be displayed with a stair-step appearance (the "jaggies"), as represented in the figure.


Fig. Stair-step effect produced when a line is generated as a series of pixel positions.
 The stair-step shape is noticeable in low-resolution systems, and we can improve its appearance somewhat by displaying lines on high-resolution systems.
 More effective techniques for smoothing raster lines are based on
adjusting pixel intensities along the line paths.
 For the raster-graphics device-level algorithms discussed here, object positions are specified directly in integer device coordinates.
 Pixel positions are referenced by scan-line number and column number, as illustrated in the following figure.
[Figure: a pixel grid with scan lines numbered 0-6 vertically and columns numbered 0-6 horizontally. Pixel positions are referenced by scan-line number and column number.]


 To load a specified color into the frame buffer at a particular position, we assume we have available a low-level procedure of the form setpixel(x, y).
 Similarly, to retrieve the current frame-buffer intensity, we assume a procedure getpixel(x, y).
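These two procedures can be modeled with a plain 2-D array standing in for the frame buffer. This is only a sketch: the names setpixel and getpixel come from the text, while the buffer dimensions, array layout, and color convention are assumptions:

```python
WIDTH, HEIGHT = 8, 6
# Frame buffer indexed as [scan line][column]; 0 means background.
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def setpixel(x, y, color=1):
    """Load the given color into the frame buffer at position (x, y)."""
    frame_buffer[y][x] = color

def getpixel(x, y):
    """Retrieve the current frame-buffer intensity at position (x, y)."""
    return frame_buffer[y][x]

setpixel(3, 2)
# getpixel(3, 2) now reports the stored intensity; untouched positions stay 0.
```

The line algorithms that follow only ever need these two operations, which is why they transfer directly to real hardware frame buffers.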
DDA Algorithm
DDA stands for Digital Differential Analyzer. It is an incremental method of scan conversion of a line: the calculation at each step uses the results of the previous step. The DDA algorithm is the simplest line-generation algorithm and is explained step by step here.
Suppose at step i the pixel is (xi, yi). The equation of the line at step i is:
yi = m·xi + b ..................... (1)
At the next step:
yi+1 = m·xi+1 + b ................. (2)
Subtracting (1) from (2), with ∆x = xi+1 - xi and ∆y = yi+1 - yi:
∆y = m·∆x ......................... (3)
Hence:
yi+1 = yi + m·∆x and xi+1 = xi + ∆y/m

Case 1: when |m| < 1 (assume x1 < x2)
Set ∆x = 1 and start with x = x1, y = y1. At each step compute
yi+1 = yi + m, x = x + 1, until x = x2.

Case 2: when |m| > 1 (assume y1 < y2)
Set ∆y = 1 and start with x = x1, y = y1. At each step compute
xi+1 = xi + 1/m, y = y + 1, until y = y2.

Advantages:
1. It is faster than directly evaluating the line equation at every pixel.
2. It eliminates the multiplication in the line equation, using additions instead.
3. It follows the change in the values of x and y, so the same point is not plotted twice.
4. It gives an overflow indication when a point is repositioned.
5. It is an easy method because each step involves just two additions.


Disadvantages:
1. It involves floating-point additions followed by rounding; accumulation of round-off errors can cause the plotted line to drift away from the true line.
2. The rounding operations and floating-point arithmetic consume a lot of time.
3. It is more suitable for generating a line in software, but less suited for hardware implementation.

DDA Algorithm:
Step1 : Start Algorithm
Step2 : Declare x1,y1,x2,y2,dx,dy,x,y as integer variables.
Step3 : Enter value of x1,y1,x2,y2.
Step4 : Calculate dx = x2-x1
Step5 : Calculate dy = y2-y1
Step6 : If ABS (dx) > ABS (dy)
Then step = ABS (dx)
Else step = ABS (dy)
Step7 : xinc = dx / step
yinc = dy / step
assign x = x1, y = y1
Step8 : Set pixel (x, y)
Step9 : x = x + xinc, y = y + yinc
Set pixel (Round (x), Round (y))
Step10: Repeat step 9 until x = x2
Step11: End Algorithm
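The steps above can be sketched in Python. This is an illustrative sketch, not part of the algorithm as stated: the function name dda_line is an assumption, and the returned list of rounded pixels stands in for calls to a real set-pixel routine.

```python
def dda_line(x1, y1, x2, y2):
    """Generate the pixels of a line from (x1, y1) to (x2, y2) with DDA."""
    dx = x2 - x1
    dy = y2 - y1
    steps = max(abs(dx), abs(dy))     # Step 6: the larger of |dx|, |dy|
    xinc = dx / steps                 # Step 7: per-step increments
    yinc = dy / steps
    x, y = float(x1), float(y1)
    points = [(round(x), round(y))]   # Step 8: plot the starting pixel
    for _ in range(steps):            # Steps 9-10: repeat until the endpoint
        x += xinc
        y += yinc
        points.append((round(x), round(y)))
    return points
```

For the worked example below, dda_line(2, 3, 6, 15) takes max(4, 12) = 12 steps and returns 13 points, ending at (6, 15).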

Example: If a line is drawn from (2, 3) to (6, 15) with the use of DDA, how many points will be needed to generate the line?


Solution
P1 = (2, 3), P2 = (6, 15)
x1 = 2, y1 = 3
x2 = 6, y2 = 15
dx = 6 - 2 = 4
dy = 15 - 3 = 12
m = dy/dx = 12/4 = 3
Since |m| > 1, we set ∆y = 1 and calculate the next value of x as x = x + 1/m. The number of steps is max(|dx|, |dy|) = 12, so 13 points (including the starting point) are needed to generate the line.


Bresenham's Line Algorithm


This algorithm is used for scan converting a line. It was developed by Bresenham. It is an efficient method because it involves only integer addition and subtraction (the only multiplications are by 2, which can be done with a bit shift). These operations can be performed very rapidly, so lines can be generated quickly. In this method, the next pixel selected is the one that has the least distance from the true line.
The method works as follows:
Assume a pixel P1'(x1', y1'); then select subsequent pixels, one pixel position at a time in the horizontal direction, toward P2'(x2', y2'). Once a pixel is chosen at any step, the next pixel is either the one to its right (the lower candidate for the line) or the one to its right and up (the upper candidate for the line). The line is best approximated by those pixels that fall the least distance from the path between P1' and P2'.
To choose the next pixel between the bottom pixel S and the top pixel T:
If S is chosen, we have xi+1 = xi + 1 and yi+1 = yi.
If T is chosen, we have xi+1 = xi + 1 and yi+1 = yi + 1.
The actual y coordinate of the line at x = xi + 1 is
y = m(xi + 1) + b ................. (1)
The distance from S to the actual line in the y direction: s = y - yi
The distance from T to the actual line in the y direction: t = (yi + 1) - y
Now consider the difference between these two distance values, s - t.
When (s - t) < 0 ⟹ s < t, the closest pixel is S.
When (s - t) ≥ 0 ⟹ s ≥ t, the closest pixel is T.
This difference is
s - t = (y - yi) - [(yi + 1) - y] = 2y - 2yi - 1


Bresenham's Line Algorithm:


Step1: Start Algorithm
Step2: Declare variable x1,x2,y1,y2,d,i1,i2,dx,dy
Step3: Enter value of x1,y1,x2,y2
Where x1,y1are coordinates of starting point
And x2,y2 are coordinates of Ending point
Step4: Calculate dx = x2-x1 Calculate dy = y2-y1 Calculate i1=2*dy
Calculate i2=2*(dy-dx) Calculate d=i1-dx
Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2, y = y2, xend = x1
If dx > 0
Then x = x1, y = y1, xend = x2
Step6: Generate point at (x, y) coordinates.
Step7: Check if the whole line is generated.
If x >= xend, stop.
Step8: Calculate the coordinates of the next pixel.
If d < 0
Then d = d + i1
Else d = d + i2 and increment y = y + 1
Step9: Increment x = x + 1
Step10: Draw a point of latest (x, y) coordinates
Step11: Go to step 7
Step12: End of Algorithm
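A Python sketch of these steps for the first-octant case (slope between 0 and 1, x1 < x2); the function name bresenham_line is illustrative, and the decision-parameter updates follow the i1, i2, d definitions above.

```python
def bresenham_line(x1, y1, x2, y2):
    """Integer-only Bresenham for 0 <= slope <= 1 and x1 < x2."""
    dx = x2 - x1
    dy = y2 - y1
    i1 = 2 * dy              # increment when the lower pixel S is kept
    i2 = 2 * (dy - dx)       # increment when the upper pixel T is kept
    d = i1 - dx              # initial decision parameter
    x, y = x1, y1
    points = [(x, y)]
    while x < x2:
        if d < 0:
            d += i1          # stay on the same scan line
        else:
            d += i2          # step up one scan line
            y += 1
        x += 1
        points.append((x, y))
    return points
```

Running it on the worked example below, bresenham_line(1, 1, 8, 5) reproduces the table of intermediate points.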


Example: The starting and ending positions of the line are (1, 1) and (8, 5). Find the intermediate points.

Solution: x1 = 1, y1 = 1, x2 = 8, y2 = 5
dx = x2 - x1 = 8 - 1 = 7
dy = y2 - y1 = 5 - 1 = 4
I1 = 2∆y = 2 × 4 = 8
I2 = 2(∆y - ∆x) = 2 × (4 - 7) = -6
d = I1 - ∆x = 8 - 7 = 1

X y d=d+I1 or I2
1 1 d+I2=1+(-6)=-5
2 2 d+I1=-5+8=3
3 2 d+I2=3+(-6)=-3
4 3 d+I1=-3+8=5
5 3 d+I2=5+(-6)=-1
6 4 d+I1=-1+8=7
7 4 d+I2=7+(-6)=1
8 5

Advantage:
1. It involves only integer arithmetic, so it is simple.
2. It avoids the generation of duplicate points.
3. It can be implemented using hardware because it does not use multiplication
and division.
4. It is faster as compared to DDA (Digital Differential Analyzer) because it
does not involve floating point calculations like DDA Algorithm.


Disadvantage:
1. This algorithm is meant for basic line drawing only; anti-aliasing is not a part of Bresenham's line algorithm, so to draw smooth lines you should look into a different algorithm.

Comparison: DDA Algorithm vs. Bresenham's Line Algorithm

 DDA uses floating point, i.e., real arithmetic; Bresenham's line algorithm uses fixed point, i.e., integer arithmetic.
 DDA uses multiplication and division in its operation; Bresenham's line algorithm uses only addition and subtraction.
 DDA is slower in line drawing because it uses real (floating-point) arithmetic; Bresenham's algorithm is faster because it involves only integer addition and subtraction.
 DDA is not as accurate and efficient as Bresenham's line algorithm; Bresenham's line algorithm is more accurate and efficient.
 DDA can draw circles and curves, but not as accurately as Bresenham's line algorithm; Bresenham's line algorithm can draw circles and curves more accurately than DDA.

Mid-Point Circle Drawing Algorithm-


Given the center point and radius of a circle, the Mid-Point Circle Drawing Algorithm generates the points of one octant. The points for the other octants are generated using the eight-way symmetry property.


Procedure-
Given- Centre point of Circle = (X0, Y0) Radius of Circle = R
The point’s generation using Mid-Point Circle Drawing Algorithm involves the
following steps-
Step-01:
Assign the starting point coordinates (X0, Y0) as- X0 = 0
Y0 = R
Step-02:
Calculate the value of initial decision parameter P0 as- P0 = 1 – R
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point of the first octant depending on the value of the decision parameter Pk, following the two cases below:
Case-01: If Pk < 0, then
Xk+1 = Xk + 1, Yk+1 = Yk
Pk+1 = Pk + 2Xk+1 + 1
Case-02: If Pk >= 0, then
Xk+1 = Xk + 1, Yk+1 = Yk - 1
Pk+1 = Pk + 2Xk+1 + 1 - 2Yk+1
Step-04:
If the given center point (X0, Y0) is not (0, 0), then do the following and plot the
point- Xplot = Xc + X0
Yplot = Yc + Y0
Here, (Xc, Yc) denotes the current value of X and Y coordinates.


Step-05:
Keep repeating Step-03 and Step-04 until Xplot >= Yplot.
Step-06:
Step-05 generates all the points for one octant.
To find the points for other seven octants, follow the eight symmetry property of
circle. This is depicted by the following figure-

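The procedure can be sketched in Python for a circle centred at the origin; the function name midpoint_circle_octant is illustrative. Note that, unlike the worked table in Problem-01 below, this sketch also emits the boundary point on the 45° line (where X = Y), which is shared with octant 2.

```python
def midpoint_circle_octant(r):
    """First-octant points of a circle of radius r centred at (0, 0)."""
    x, y = 0, r                 # Step-01: start at (0, R)
    p = 1 - r                   # Step-02: initial decision parameter P0
    points = [(x, y)]
    while x < y:                # Step-05: stop at the 45-degree line
        x += 1
        if p < 0:               # Case-01: midpoint inside, keep y
            p += 2 * x + 1
        else:                   # Case-02: midpoint outside, decrement y
            y -= 1
            p += 2 * x + 1 - 2 * y
        points.append((x, y))
    return points
```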
Problem-01:
Given the center point coordinates (0, 0) and radius as 10, generate all the
points to form a circle.
Solution-
Given-
Centre Coordinates of Circle (X0, Y0) = (0, 0) Radius of Circle = 10
Step-01:
Assign the starting point coordinates (X0, Y0) as X0 = 0, Y0 = R = 10
Step-02:
Calculate the value of the initial decision parameter P0 as:
P0 = 1 - R = 1 - 10 = -9
Step-03:
As P0 < 0, case-01 is satisfied. Thus,
Xk+1 = Xk + 1 = 0 + 1 = 1


Yk+1 = Yk = 10
Pk+1 = Pk + 2 x Xk+1 + 1 = -9 + (2 x 1) + 1 = -6
Step-04:
This step is not applicable here as the given center point coordinates is (0, 0).

Step-05:
Step-03 is executed similarly until Xk+1 >= Yk+1 as follows

Pk Pk+1 (Xk+1, Yk+1)


(0, 10)
-9 -6 (1, 10)
-6 -1 (2, 10)
-1 6 (3, 10)
6 -3 (4, 9)
-3 8 (5, 9)
8 5 (6, 8)

Algorithm Terminates
These are all points for Octant-1.
Algorithm calculates all the points of octant-1 and terminates.
Now, the points of octant-2 are obtained using the mirror effect by swapping X
and Y coordinates.


Octant-1 Points Octant-2 Points


(0, 10) (8, 6)
(1, 10) (9, 5)
(2, 10) (9, 4)
(3, 10) (10, 3)
(4, 9) (10, 2)
(5, 9) (10, 1)
(6, 8) (10, 0)

Now, the points for the rest of the circle are generated by following the signs of the other quadrants. The other points can also be generated by calculating each octant separately.


Here, all the points have been generated with respect to quadrant-1-

Quadrant-1 (X, Y)   Quadrant-2 (-X, Y)   Quadrant-3 (-X, -Y)   Quadrant-4 (X, -Y)
(0, 10) (0, 10) (0, -10) (0, -10)

(1, 10) (-1, 10) (-1, -10) (1, -10)

(2, 10) (-2, 10) (-2, -10) (2, -10)

(3, 10) (-3, 10) (-3, -10) (3, -10)

(4, 9) (-4, 9) (-4, -9) (4, -9)

(5, 9) (-5, 9) (-5, -9) (5, -9)

(6, 8) (-6, 8) (-6, -8) (6, -8)

(8, 6) (-8, 6) (-8, -6) (8, -6)

(9, 5) (-9, 5) (-9, -5) (9, -5)

(9, 4) (-9, 4) (-9, -4) (9, -4)

(10, 3) (-10, 3) (-10, -3) (10, -3)

(10, 2) (-10, 2) (-10, -2) (10, -2)

(10, 1) (-10, 1) (-10, -1) (10, -1)

(10, 0) (-10, 0) (-10, 0) (10, 0)


These are all points of the Circle.
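The eight-way expansion can be sketched in Python. OCTANT1 below is the octant-1 list from the worked example above, and eight_way_points is an illustrative helper name: it mirrors each point across y = x and then applies the four sign combinations, optionally offsetting by the centre.

```python
# Octant-1 points from the worked example (R = 10, centre at the origin).
OCTANT1 = [(0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8)]

def eight_way_points(octant1, xc=0, yc=0):
    """Expand octant-1 points to the full circle and offset by the centre."""
    pts = set()
    for x, y in octant1:
        for px, py in ((x, y), (y, x)):        # mirror across y = x
            pts.update({(xc + px, yc + py),    # the four sign combinations
                        (xc - px, yc + py),    # give the four quadrants
                        (xc + px, yc - py),
                        (xc - px, yc - py)})
    return pts
```

For the table above, eight_way_points(OCTANT1) yields 52 distinct points: 14 per quadrant, with the four axis points shared between neighbouring quadrants.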


Advantages of Mid-Point Circle Drawing Algorithm-
 It is a powerful and efficient algorithm.
 The entire algorithm is based on the simple equation of circle X2 + Y2 =
R2.
 It is easy to implement from the programmer’s perspective.
 This algorithm is used to generate curves on raster displays.

Disadvantages of Mid-Point Circle Drawing Algorithm-

 Accuracy of the generating points is an issue in this algorithm.


 The circle generated by this algorithm is not smooth.
 This algorithm is time consuming
Important Points
 Circle-drawing algorithms take advantage of the eight-way symmetry property of a circle.
 Every circle has 8 octants, and the circle-drawing algorithm generates all the points for one octant.
 The points for the other 7 octants are generated by changing the signs of the X and Y coordinates.
 To take advantage of the eight-way symmetry property, the circle must be formed assuming that the centre point coordinates are (0, 0).
 If the centre coordinates are other than (0, 0), then we add the centre's X and Y coordinate values to each circle point generated by assuming (0, 0) as the centre point.
Filled Area Primitives
Region filling is the process of filling an image or region. Filling can be of a boundary-defined or an interior-defined region, as shown in the figure. Boundary-fill algorithms are used to fill a boundary-defined region, and flood-fill algorithms are used to fill an interior-defined region.


Boundary Filled Algorithm:

This algorithm uses the recursive method. First of all, a starting pixel, called the seed, is considered. The algorithm checks whether adjacent pixels are boundary pixels or are already coloured. If an adjacent pixel is already filled or coloured, it is left alone; otherwise it is filled. The filling is done using the four-connected or eight-connected approach.

Four-connected pixels:
After painting a pixel, the function is called for four neighboring points: the pixel positions to the right, left, above, and below the current pixel. Areas filled by this method are called 4-connected. The algorithm is given below.
Algorithm for boundary fill (4-connected)
Boundary fill (x, y, fill, boundary)


 Initialize the boundary of the region, and the variable fill with the fill color.
 Let (x, y) be the interior seed pixel.
 Get the current color of the pixel:
current = getpixel(x, y)
 If current is not equal to boundary and current is not equal to fill, then
set pixel (x, y, fill)
boundary fill4 (x+1, y, fill, boundary)
boundary fill4 (x-1, y, fill, boundary)
boundary fill4 (x, y+1, fill, boundary)
boundary fill4 (x, y-1, fill, boundary)
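The 4-connected boundary fill can be sketched in Python on a small grid of colour codes; the grid stands in for the frame buffer, so getpixel/setpixel become list indexing, and the name boundary_fill4 is illustrative.

```python
def boundary_fill4(grid, x, y, fill, boundary):
    """Recursive 4-connected boundary fill on a grid of colour codes."""
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
        return                                 # outside the "frame buffer"
    current = grid[y][x]                       # stands in for getpixel(x, y)
    if current != boundary and current != fill:
        grid[y][x] = fill                      # stands in for setpixel(x, y, fill)
        boundary_fill4(grid, x + 1, y, fill, boundary)
        boundary_fill4(grid, x - 1, y, fill, boundary)
        boundary_fill4(grid, x, y + 1, fill, boundary)
        boundary_fill4(grid, x, y - 1, fill, boundary)
```

Seeding at any interior pixel of a region bounded by colour 1 repaints the whole interior with the fill colour while leaving the boundary untouched.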
Eight-connected pixels:
More complex figures are filled using this approach. The pixels to be tested are the 8 neighboring pixels: the pixels to the right, left, above, and below, plus the 4 diagonal pixels. Areas filled by this method are called 8-connected.

Algorithm:
void boundaryFill8(int x, int y, int fill_color, int boundary_color)
{
    if (getpixel(x, y) != boundary_color && getpixel(x, y) != fill_color)
    {
        putpixel(x, y, fill_color);
        boundaryFill8(x + 1, y, fill_color, boundary_color);
        boundaryFill8(x, y + 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y, fill_color, boundary_color);
        boundaryFill8(x, y - 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y - 1, fill_color, boundary_color);
        boundaryFill8(x - 1, y + 1, fill_color, boundary_color);
        boundaryFill8(x + 1, y - 1, fill_color, boundary_color);
        boundaryFill8(x + 1, y + 1, fill_color, boundary_color);
    }
}

The four-connected approach is more suitable than the eight-connected approach.


1. Four-connected approach:
In this approach the left, right, above, and below pixels are tested.
2. Eight-connected approach:
In this approach the left, right, above, below, and four diagonal pixels are tested. The boundary can be checked by examining pixels from left to right first, and then from top to bottom. The algorithm takes time and memory because many recursive calls are needed.
Problem with the recursive boundary-fill algorithm:
It may not fill regions correctly when some interior pixel is already filled with the fill colour. The algorithm checks such a pixel, finds it already filled, and that recursive branch terminates, possibly leaving other interior pixels unfilled. So check the colours of all pixels before applying the algorithm.
Flood Fill Algorithm
The flood-fill algorithm has many characteristics similar to boundary fill. But this method is more suitable when the boundary has multiple colours: when the boundary is of many colours and the interior is to be filled with one colour, we use this algorithm.
Algorithm for flood fill
floodfill4 (x, y, fill color, old color: integer)
Begin
If getpixel (x, y) = old color then
Begin
Set pixel (x, y, fill color)
floodfill4 (x+1, y, fill color, old color)
floodfill4 (x-1, y, fill color, old color)
floodfill4 (x, y+1, fill color, old color)
floodfill4 (x, y-1, fill color, old color)
End


End.
In the flood-fill algorithm, we start from a specified interior point (x, y) and reassign all pixel values that are currently set to a given interior colour to the desired fill colour. Using either the 4-connected or the 8-connected approach, we then step through pixel positions until all interior points have been repainted.
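The flood fill differs from the boundary fill only in its test: it repaints pixels that match the old interior colour rather than stopping at a single boundary colour. A Python sketch on a grid of colour codes (flood_fill4 is an illustrative name):

```python
def flood_fill4(grid, x, y, fill, old):
    """Recursive 4-connected flood fill: repaint old-coloured pixels with fill."""
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
        return                                 # outside the "frame buffer"
    if grid[y][x] == old:                      # getpixel(x, y) = old colour?
        grid[y][x] = fill                      # setpixel(x, y, fill)
        flood_fill4(grid, x + 1, y, fill, old)
        flood_fill4(grid, x - 1, y, fill, old)
        flood_fill4(grid, x, y + 1, fill, old)
        flood_fill4(grid, x, y - 1, fill, old)
```

Because only the old colour is tested, it works even when the surrounding boundary is made of several different colours.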
Disadvantage:
 It is a very slow algorithm.
 It may fail for large polygons because of deep recursion.
 Choosing the initial seed pixel requires knowledge about the surrounding pixels.


UNIT - II
ATTRIBUTES OF OUTPUT PRIMITIVES

Attributes of Output Primitives


Any parameter that affects the way a primitive is to be displayed is
referred to as an attribute parameter. Example attribute parameters are color,
size etc. A line drawing function for example could contain parameter to set
color, width and other properties.
 Line Attributes
 Curve Attributes
 Color and Gray scale Levels
 Area Fill Attributes
 Character Attributes
 Bundled Attributes
Line Attributes
Basic attributes of a straight line segment are its type, its width, and
its color. In some graphics packages, lines can also be displayed using
selected pen or brush options
 Line Type
 Linewidth
 Pen and Brush Options
 Line Color
Line type
Possible selections for the line-type attribute include solid lines, dashed lines, and dotted lines. To set line-type attributes in a PHIGS application program, a user invokes the function
set Line type (lt)


where parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate lines that are solid, dashed, dotted, or dash-dotted, respectively. Other values for the line-type parameter lt could be used to display variations in dot-dash patterns.
Line width
Implementation of line width option depends on the capabilities of the
output device to set the line width attributes.
Set Line width Scale Factor (lw)
Line width parameter lw is assigned a positive number to indicate the
relative width of line to be displayed. A value of 1 specifies a standard width
line. A user could set lw to a value of 0.5 to plot a line whose width is half
that of the standard line. Values greater than 1 produce lines thicker than the
standard.
Line Cap
We can adjust the shape of the line ends to give them a better
appearance by adding line caps. There are three types of line cap.
They are:
 Butt cap
 Round cap
 Projecting square cap
Butt cap
Obtained by adjusting the end positions of the component parallel lines so
that the thick line is displayed with square ends that are perpendicular to the
line path.


Round cap
Obtained by adding a filled semicircle to each butt cap. The circular
arcs are centered on the line endpoints and have a diameter equal to the line
thickness.
Projecting square cap
Extend the line and add butt caps that are positioned one- half of the
line width beyond the specified endpoints.

Three possible methods for smoothly joining two line segments


 Miter Join
 Round Join
 Bevel Join
A miter join is accomplished by extending the outer boundaries of each of the two lines until they meet.
A round join is produced by capping the connection between the two segments with a circular boundary whose diameter is equal to the line width.
A bevel join is generated by displaying the line segments with butt caps and filling in the triangular gap where the segments meet.


Pen and Brush Options

With some packages, lines can be displayed with pen or brush


selections. Options in this category include shape, size, and pattern. Some
possible pen or brush shapes are given in Figure
Line color
A polyline routine displays a line in the current color by setting this color value in the frame buffer at pixel locations along the line path using the set pixel procedure. We set the line color value in PHIGS with the function
Set Polyline Colour Index (lc)
Non-negative integer values, corresponding to allowed color choices,
are assigned to the line color parameter lc
Example: The use of various line-attribute commands in an application program is given by the following sequence of statements:
set Line type (2);
set Line width Scale Factor (2);
set Polyline Colour Index (5);
polyline (n1, wcpoints1);
set Polyline Colour Index (6);
polyline (n2, wcpoints2);
This program segment would display two figures drawn with double-wide dashed lines. The first is displayed in the color corresponding to code 5, and the second in color 6.
Curve attributes
Parameters for curve attribute are same as those for line segments.
Curves displayed with varying colors, widths, dot – dash patterns and
available pen or brush options.


Character Attributes
The appearance of displayed character is controlled by attributes such
as font, size, color and orientation. Attributes can be set both for entire
character strings (text) and for individual characters defined as marker
symbols
Text Attributes
The choice of font (or typeface) selects a set of characters with a particular design style, such as Courier, Helvetica, Times Roman, or various symbol groups. The characters in a selected font can also be displayed with assorted styles (solid, dotted, double), in boldface, in italics, and in outline or shadow styles. A particular font and associated style is selected in a PHIGS program by setting an integer code for the text-font parameter tf in the function
Set Text Font (tf)
Control of text color (or intensity) is managed from an application
program with
Set Text Colour Index (tc)
where the text color parameter tc specifies an allowable color code. Text size can be adjusted without changing the width-to-height ratio of characters with
Set Character Height (ch)
Parameter ch is assigned a real value greater than 0 to set the coordinate height of capital letters. The width alone of text can be set with the function
Set Character Expansion Factor (cw)


where the character-width parameter cw is set to a positive real value that scales the body width of characters.
Set Character Spacing (cs)

Spacing between characters is controlled separately, where the character-spacing parameter cs can be assigned any real value.
The orientation for a displayed character string is set according to the
direction of the character up vector
Set Character Up Vector (upvect)

Parameter upvect in this function is assigned two values that specify


the x and y vector components. For example, with upvect = (1, 1), the direction of the up vector is 45° and text would be displayed as shown in the figure.
To arrange character strings vertically or horizontally
Set Text Path (tp)
Can be assigned the value: right, left, up, or down

Another handy attribute for character strings is alignment. This attribute


specifies how text is to be positioned with respect to the start coordinates.


Alignment attributes are set with Set Text Alignment (h,v)
Where parameters h and v control horizontal and vertical alignment.
Horizontal alignment is set by assigning h a value of left, center, or right.
Vertical alignment is set by assigning v a value of top, cap, half, base or
bottom. A precision specification for text display is given with
Set Text Precision (tpr)
where tpr is assigned one of the values string, char, or stroke.
Marker Attributes
A marker symbol is a single character that can be displayed in different
colors and in different sizes. Marker attributes are implemented by procedures
that load the chosen character into the raster at the defined positions with the
specified color and size. We select a particular character to be the marker
symbol with
Set Marker Type (mt)
Where marker type parameter mt is set to an integer code. Typical codes
for marker type are the integers 1 through 5, specifying, respectively, a dot (.)
a vertical cross (+), an asterisk (*), a circle (o), and a diagonal cross (X).
We set the marker size with
Set Marker Size Scale Factor (ms)
With parameter marker size ms assigned a positive number. This
scaling parameter is applied to the nominal size for the particular marker
symbol chosen. Values greater than 1 produce character enlargement; values
less than 1 reduce the marker size.
Marker color is specified with
Set Poly marker Colour Index (mc)
A selected color code parameter mc is stored in the current attribute list
and used to display subsequently specified marker primitives.
Two Dimensional Geometric Transformations
Changes in orientations, size and shape are accomplished with
geometric transformations that alter the coordinate description of objects.


Basic transformations
 Translation T(tx, ty): translation distances tx, ty
 Scaling S(sx, sy): scale factors sx, sy
 Rotation R(θ): rotation angle θ
Translation
Translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another, adding translation distances tx, ty to the original coordinate position (x, y) to move the point to a new position (x', y'):
x' = x + tx, y' = y + ty
The translation-distance pair (tx, ty) is called the translation vector or shift vector.
The translation equations can be expressed as a single matrix equation by using column vectors to represent the coordinate position and the translation vector as


Moving a polygon from one position to another position with the


translation vector (-5.5, 3.75)
Rotations:
A two-dimensional rotation is applied to an object by repositioning it
along a circular path on xy plane. To generate a rotation, specify a rotation
angle θ and the position (xr, yr) of the rotation point (pivot point) about which
the object is to be rotated.
Positive values for the rotation angle define counter clock wise rotation
about pivot point. Negative value of angle rotate objects in clock wise
direction. The transformation can also be described as a rotation about a
rotation axis perpendicular to xy plane and passes through pivot point.

Rotation of a point from position (x, y) to position (x’, y’) through


angle θ relative to coordinate origin
The transformation equations for rotation of a point position P when the pivot point is at the coordinate origin: in the figure, r is the constant distance of the point from the origin, Ф is the original angular position of the point from the horizontal, and θ is the rotation angle.
The transformed coordinates in terms of angles θ and Ф:
x' = r cos(θ + Ф) = r cosθ cosФ - r sinθ sinФ
y' = r sin(θ + Ф) = r sinθ cosФ + r cosθ sinФ
The original coordinates of the point in polar coordinates:
x = r cosФ, y = r sinФ
Substituting, the transformation equations for rotating a point at position (x, y) through an angle θ about the origin:
x' = x cosθ - y sinθ
y' = x sinθ + y cosθ


Rotation Equation
P' = R · P, where the rotation matrix is
R = | cosθ  -sinθ |
    | sinθ   cosθ |
Note: Positive values for the rotation angle define counterclockwise rotations about the rotation point, and negative values rotate objects in the clockwise direction.
Scaling
A scaling transformation alters the size of an object. This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors Sx and Sy to produce the transformed coordinates (x', y'):
x' = x · Sx, y' = y · Sy
Scaling factor Sx scales the object in the x direction, while Sy scales it in the y direction. In matrix form the transformation equation is
| x' |   | Sx   0 |   | x |
| y' | = |  0  Sy | · | y |
or P' = S · P, where S is the 2-by-2 scaling matrix.


Turning a square (a) Into a rectangle (b) with scaling factors sx = 2 and
sy = 1
Any positive numeric values are valid for scaling factors sx and sy.
Values less than 1 reduce the size of the objects and values greater than 1
produce an enlarged object.
There are two types of Scaling. They are
 Uniform scaling
 Non Uniform Scaling
To get uniform scaling it is necessary to assign same value for sx
and sy. Unequal values for sx and sy result in a non-uniform scaling.
Matrix Representation and Homogeneous Coordinates
Many graphics applications involve sequences of geometric
transformations. An animation, for example, might require an object to be
translated and rotated at each increment of the motion. In order to combine
sequence of transformations we have to eliminate the matrix addition. To
achieve this we have represent matrix as 3X3 instead of 2X2 introducing an
additional dummy coordinate. Here points are specified by three numbers
instead of two. This coordinate system is called the homogeneous coordinate system, and it allows us to express transformation equations as matrix multiplication. A Cartesian coordinate position (x, y) is represented as the homogeneous coordinate triple (x, y, h).
 Represent coordinates as (x, y, h)
 Actual coordinates drawn will be (x/h, y/h)
For translation:
T(tx, ty) = | 1  0  tx |
            | 0  1  ty |
            | 0  0   1 |
For scaling:
S(sx, sy) = | sx   0  0 |
            |  0  sy  0 |
            |  0   0  1 |
For rotation:
R(θ) = | cosθ  -sinθ  0 |
       | sinθ   cosθ  0 |
       |    0      0  1 |
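These 3×3 homogeneous matrices can be sketched directly in Python as nested lists; the helper names T, S, R, and apply are illustrative, and apply divides out the homogeneous coordinate h as described above.

```python
import math

def T(tx, ty):
    """Homogeneous translation matrix."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def S(sx, sy):
    """Homogeneous scaling matrix."""
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def R(theta):
    """Homogeneous rotation matrix (theta in radians, counterclockwise)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, x, y):
    """Apply a 3x3 matrix to (x, y, 1) and divide out the h coordinate."""
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    h = m[2][0] * x + m[2][1] * y + m[2][2]
    return xh / h, yh / h
```

For example, apply(T(-5.5, 3.75), 1, 1) moves the point (1, 1) by the translation vector (-5.5, 3.75) used in the polygon figure above.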
Composite Transformations
A composite transformation is a sequence of transformations; one
followed by the other. We can set up a matrix for any sequence of
transformations as a composite transformation matrix by calculating the
matrix product of the individual transformations
Translation If two successive translation vectors (tx1,ty1) and (tx2,ty2) are
applied to a coordinate position P, the final transformed location P’ is calculated
as
P’ = T(tx2, ty2). {T(tx1, ty1).P}
= {T(tx2, ty2).T(tx1,ty1)}.P
where P and P' are represented as homogeneous-coordinate column vectors. This demonstrates that two successive translations are additive.
Rotations
Two successive rotations applied to point P produce the transformed position
P’ = R(θ2).{R(θ1).P} = {R(θ2).R(θ1)}.P
By multiplying the two rotation matrices, we can verify that two successive rotations are additive:
R(θ2) · R(θ1) = R(θ1 + θ2)
So that the final rotated coordinates can be calculated with the composite
rotation matrix as


P’ = R(θ1 + θ2).P
Scaling
Concatenating the transformation matrices for two successive scaling operations produces the following composite scaling matrix:
S(sx2, sy2) · S(sx1, sy1) = S(sx1 · sx2, sy1 · sy2)
so successive scalings are multiplicative.
General Pivot-Point Rotation
 Translate the object so that the pivot position is moved to the coordinate origin.
 Rotate the object about the coordinate origin.
 Translate the object so that the pivot point is returned to its original position.
The composite transformation matrix for this sequence is obtained by concatenation, which can be expressed as
T(xr, yr) · R(θ) · T(-xr, -yr) = R(xr, yr, θ)
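The pivot-point composition can be checked numerically. This sketch builds the three matrices and multiplies them; matmul, translation, rotation, and pivot_rotation are illustrative names.

```python
import math

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def pivot_rotation(xr, yr, theta):
    """R(xr, yr, theta) = T(xr, yr) . R(theta) . T(-xr, -yr)."""
    return matmul(translation(xr, yr),
                  matmul(rotation(theta), translation(-xr, -yr)))
```

A quick sanity check on the result: the pivot point itself must stay fixed under the composite rotation.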


General Fixed-Point Scaling

1. Translate the object so that the fixed point coincides with the coordinate origin.
2. Scale the object with respect to the coordinate origin.
3. Use the inverse translation of step 1 to return the object to its original position.

Concatenating the matrices for these three operations produces the required scaling matrix, which can be expressed as
T(xf, yf) · S(sx, sy) · T(-xf, -yf) = S(xf, yf, sx, sy)
Note: Transformations can be combined by matrix multiplication.

Other Transformations
 Reflection
 Shear


Reflection
A reflection is a transformation that produces a mirror image of an object. The mirror image for a two-dimensional reflection is generated relative to an axis of reflection. We can choose an axis of reflection in the xy plane, an axis perpendicular to the xy plane, or the coordinate origin.
Reflection of an object about the x axis
Reflection about the x axis is accomplished with the transformation matrix
| 1   0  0 |
| 0  -1  0 |
| 0   0  1 |

Reflection of an object about the y axis

Reflection about the y axis is accomplished with the transformation matrix
| -1  0  0 |
|  0  1  0 |
|  0  0  1 |
Reflection of an object about the coordinate origin


Reflection about the origin is accomplished with the transformation matrix
| -1   0  0 |
|  0  -1  0 |
|  0   0  1 |

Reflection axis as the diagonal line y = x
To obtain the transformation matrix for reflection about the diagonal y = x, the transformation sequence is
 Clockwise rotation by 45°
 Reflection about the x axis
 Counterclockwise rotation by 45°
Reflection about the diagonal line y = x is accomplished with the transformation matrix
0   1   0
1   0   0
0   0   1
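The rotate-reflect-rotate sequence above can be multiplied out to confirm that it yields the coordinate-swap matrix. A small check in plain Python (helper names are illustrative):

```python
import math

def mat_mult(A, B):
    # multiply two 3x3 homogeneous matrices
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

REFLECT_X = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]

# counterclockwise 45° rotation . reflection about x . clockwise 45° rotation
M = mat_mult(rotate(math.pi / 4),
             mat_mult(REFLECT_X, rotate(-math.pi / 4)))
# M is, up to rounding, [[0, 1, 0], [1, 0, 0], [0, 0, 1]]:
# it maps (x, y) to (y, x), i.e. reflection about y = x
for row in M:
    print([round(v, 6) for v in row])
```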


Reflection axis as the diagonal line y = -x

To obtain the transformation matrix for reflection about the diagonal y = -x, the transformation sequence is
 Clockwise rotation by 45°
 Reflection about the y axis
 Counterclockwise rotation by 45°
Reflection about the diagonal line y = -x is accomplished with the transformation matrix
 0  -1   0
-1   0   0
 0   0   1
Shear
A transformation that slants the shape of an object is called a shear transformation. Two common shearing transformations are used: one shifts x coordinate values and the other shifts y coordinate values. However, in both cases only one coordinate (x or y) changes its values and the other preserves its values.


X - Shear
The x shear preserves the y coordinates but changes the x values, which causes vertical lines to tilt right or left, as shown in the figure.
The transformation matrix for x-shear transforms the coordinates as
x' = x + shx . y
y' = y
Y - Shear
The y shear preserves the x coordinates but changes the y values, which causes horizontal lines to slope up or down.
The transformation matrix for y-shear transforms the coordinates as

x' = x
y' = y + shy . x

XY - Shear
The transformation matrix for xy-shear transforms the coordinates as
x' = x + shx . y
y' = y + shy . x

Shearing Relative to other reference lines


We can apply x-shear and y-shear transformations relative to other reference lines. In x-shear transformations we can use a y reference line, and in y-shear transformations we can use an x reference line.


X - shear with y reference line


We can generate x-direction shears relative to other reference lines with a transformation matrix that transforms the coordinates as

x' = x + shx (y - yref)
y' = y
Example: shx = 1/2 and yref = -1
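The worked example (shx = 1/2, yref = -1) can be reproduced in a couple of lines of Python (a sketch; the function name is ours):

```python
def x_shear_ref(shx, yref, x, y):
    # x' = x + shx * (y - yref), y' = y
    return x + shx * (y - yref), y

# shx = 0.5 and yref = -1 applied to the unit square corners:
for corner in [(0, 0), (1, 0), (1, 1), (0, 1)]:
    print(corner, "->", x_shear_ref(0.5, -1, *corner))
```

The corner (0, 0) moves to (0.5, 0) and (1, 1) moves to (2, 1); y values are untouched, so horizontal edges stay horizontal while vertical edges tilt.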

Y - shear with x reference line


We can generate y-direction shears relative to other reference lines with a transformation matrix that transforms the coordinates as

x' = x
y' = shy (x - xref) + y


UNIT - III
TWO DIMENSIONAL VIEWING

The process of selecting and viewing the picture with different views is
called windowing and a process which divides each element of the picture into
its visible and invisible portions, allowing the invisible portion to be discarded is
called clipping.
The viewing pipeline
A world coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed.
The mapping of a part of a world coordinate scene to device coordinates is referred to as the viewing transformation. The two dimensional viewing transformation is also referred to as the window to viewport transformation, or windowing transformation.
A viewing transformation using standard rectangles for the window and
viewport


The two dimensional viewing transformation pipeline


The viewing transformation is performed in several steps, as indicated in the figure. First, we construct the scene in world coordinates using the output primitives. Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing-coordinate system in the world coordinate plane and define a window in the viewing-coordinate system.
The viewing-coordinate reference frame is used to provide a method for setting up arbitrary orientations for rectangular windows. Once the viewing reference frame is established, we can transform descriptions in world coordinates to viewing coordinates.
We then define a viewport in normalized coordinates (in the range from 0
to 1) and map the viewing- coordinate description of the scene to normalized
coordinates.
At the final step all parts of the picture that lie outside the viewport are
clipped, and the contents of the viewport are transferred to device coordinates.
By changing the position of the viewport, we can view objects at different
positions on the display area of an output device.
Viewing coordinate reference frame

The window defined in world coordinates is first transformed into viewport coordinates. For this, the viewing coordinate reference frame is used: it provides a method for setting up arbitrary orientations for rectangular windows.


To obtain the matrix for converting world coordinate positions to viewing coordinates, we have to translate the viewing origin to the world origin and then align the two coordinate reference frames by rotation.
The matrix used for this composite transformation is given by Mwc,vc = T.R
Where T is the translation matrix that takes the viewing origin point P0 to the world origin, and R is the rotation matrix that aligns the axes of the two reference frames.
Window to viewport coordinates function
A point at position (xw, yw) in a designated window is mapped to viewport coordinates (xv, yv) so that relative positions in the two areas are the same. The figure illustrates the window to viewport mapping. To maintain the same relative placement in the viewport as in the window, we require that

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (yw max - ywmin) .....equation 1

Solving these expressions for the viewport position (xv, yv), we have
xv = xvmin + (xw - xwmin) sx
yv = yvmin + (yw - ywmin) sy .....equation 2
Where the scaling factors are
sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
The conversion is performed with the following sequence of transformations:

 Perform a scaling transformation using the fixed-point position (xwmin, ywmin) that scales the window area to the size of the viewport.
 Translate the scaled window area to the position of the viewport. Relative proportions of objects are maintained if the scaling factors are the same

(sx = sy).
Otherwise world objects will be stretched or contracted in either the x or y direction when displayed on the output device. For normalized coordinates, object descriptions are mapped to various display devices.
Any number of output devices can be open in a particular application, and another window-to-viewport transformation can be performed for each open output device. This mapping, called the workstation transformation, is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device.
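Equation 2 and its scaling factors translate directly into code. A minimal sketch in plain Python (function and parameter names are illustrative):

```python
def window_to_viewport(xw, yw, window, viewport):
    # window and viewport are (xmin, xmax, ymin, ymax) rectangles
    xwmin, xwmax, ywmin, ywmax = window
    xvmin, xvmax, yvmin, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)  # scaling factors of equation 2
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    # xv = xvmin + (xw - xwmin) sx ; yv = yvmin + (yw - ywmin) sy
    return xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy

# the centre of the world window maps to the centre of the viewport
print(window_to_viewport(5, 5, (0, 10, 0, 10), (0.25, 0.75, 0.25, 0.75)))
```

Because sx = sy here, relative proportions are preserved; unequal factors would stretch or contract the scene as described above.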

Mapping selected parts of a scene in normalized coordinates to different video monitors with workstation transformations.

Two Dimensional viewing functions


1. The viewing reference system in a PHIGS application program is set with the function
Evaluate View Orientation Matrix (x0, y0, xv, yv, error, view Matrix)
Where x0, y0 are the coordinates of the viewing origin and parameters xv, yv are the world coordinate positions for the view up vector. An integer error code is generated if the input parameters are in error; otherwise the view matrix for the world-to-viewing transformation is calculated. Any number of viewing transformation matrices can be defined in an application.


2. To set up the elements of the window to viewport mapping

Evaluate View Mapping Matrix (xwmin, xwmax, ywmin, ywmax, xvmin, xvmax, yvmin, yvmax, error, view Mapping Matrix)
Here the window limits in viewing coordinates are chosen with parameters xwmin, xwmax, ywmin, ywmax, and the viewport limits are set with the normalized coordinate positions xvmin, xvmax, yvmin, yvmax.

3. The combinations of viewing and window-viewport mapping for various workstations are stored in a viewing table with
Set View Representation (ws, view Index, view Matrix, view Mapping Matrix, xclipmin, xclipmax, yclipmin, yclipmax, clipxy)
Where parameter ws designates the output device and parameter view Index sets an integer identifier for this window-viewport pair. The matrices view Matrix and view Mapping Matrix can be concatenated and referenced by view Index.
Set View Index (view Index)

4. At the final stage we apply a workstation transformation by selecting a workstation window-viewport pair.
Set Workstation Window (ws, xwsWindmin, xwsWindmax, ywsWindmin, ywsWindmax)
Set Workstation Viewport (ws, xwsVPortmin, xwsVPortmax, ywsVPortmin, ywsVPortmax)
where ws gives the workstation number. Window-coordinate extents are specified in the range from 0 to 1 and viewport limits are in integer device coordinates.
Clipping operation
Any procedure that identifies those portions of a picture that are inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an object is to be clipped is called the clip window.
Algorithms for clipping primitive types:
 Point clipping
 Line clipping (straight-line segments)
 Area clipping


 Curve clipping
 Text clipping

Line and polygon clipping routines are standard components of graphics packages.
Point Clipping
Assuming the clip window is a rectangle in standard position, a point P = (x, y) is saved for display if the following inequalities are satisfied:
xwmin <= x <= xwmax
ywmin <= y <= ywmax
Where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world coordinate window boundaries or viewport boundaries. If any one of these four inequalities is not satisfied, the point is clipped (not saved for display).
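The four inequalities reduce to a one-line test. A sketch in plain Python (the function name is ours):

```python
def clip_point(x, y, xwmin, xwmax, ywmin, ywmax):
    # the point is saved only if all four window-edge inequalities hold
    return xwmin <= x <= xwmax and ywmin <= y <= ywmax

print(clip_point(5, 5, 0, 10, 0, 10))   # True: inside the window
print(clip_point(5, 12, 0, 10, 0, 10))  # False: above the window
```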
Line Clipping
A line clipping procedure involves several parts. First we test a given line segment to determine whether it lies completely inside the clipping window. If it does not, we try to determine whether it lies completely outside the window. Finally, if we cannot identify a line as completely inside or completely outside, we perform intersection calculations with one or more clipping boundaries. We process lines through "inside-outside" tests by checking the line endpoints. A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved. A line with both endpoints outside any one of the clip boundaries, such as line P3P4, is outside the window.
Line clipping against a rectangular clip window


All other lines cross one or more clipping boundaries. For a line segment with endpoints (x1, y1) and (x2, y2), where one or both endpoints are outside the clipping rectangle, the parametric representation
x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1), 0 <= u <= 1
could be used to determine values of u for an intersection with the clipping boundary coordinates. If the value of u for an intersection with a rectangle boundary edge is outside the range of 0 to 1, the line does not enter the interior of the window at that boundary. If the value of u is within the range from 0 to 1, the line segment does indeed cross into the clipping area. This method can be applied to each clipping boundary edge to determine whether any part of the line segment is to be displayed.

Cohen-Sutherland Line Clipping


This is one of the oldest and most popular line-clipping procedures. The
method speeds up the processing of line segments by performing initial tests that
reduce the number of intersections that must be calculated.
Every line endpoint in a picture is assigned a four digit binary code called
a region code that identifies the location of the point relative to the boundaries
of the clipping rectangle.


Binary region codes assigned to line endpoints according to relative position with respect to the clipping rectangle.
Regions are set up in reference to the boundaries. Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window: to the left, right, top or bottom. By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions are correlated with the bit positions as
bit 1: left, bit 2: right, bit 3: below, bit 4: above
A value of 1 in any bit position indicates that the point is in that relative position. Otherwise the bit position is set to 0. If a point is within the clipping rectangle, the region code is 0000. A point that is below and to the left of the rectangle has a region code of 0101.
Bit values in the region code are determined by comparing endpoint coordinate values (x, y) to the clip boundaries. Bit 1 is set to 1 if x < xwmin.
For programming languages in which bit manipulation is possible, region-code bit values can be determined with the following two steps:

 Calculate differences between endpoint coordinates and clipping boundaries.
 Use the resultant sign bit of each difference calculation to set the corresponding value in the region code:
Bit 1 is the sign bit of x - xwmin
Bit 2 is the sign bit of xwmax - x
Bit 3 is the sign bit of y - ywmin
Bit 4 is the sign bit of ywmax - y
Once we have established region codes for all line endpoints, we can
quickly determine which lines are completely inside the clip window and which
are clearly outside.
Any lines that are completely contained within the window boundaries have a region code of 0000 for both endpoints, and we accept these lines. Any lines that have a 1 in the same bit position in the region codes for each endpoint are completely outside the clipping rectangle, and we reject these lines.
We would discard the line that has a region code of 1001 for one endpoint
and a code of 0101 for the other endpoint. Both endpoints of this line are left of


the clipping rectangle, as indicated by the 1 in the first bit position of each region
code.
A method that can be used to test lines for total clipping is to perform the logical and operation on both region codes. If the result is not 0000, the line is completely outside the clipping region.
Lines that cannot be identified as completely inside or completely outside
a clip window by these tests are checked for intersection with window boundaries.

Lines extending from one coordinate region to another may pass through the clip window, or they may intersect clipping boundaries without entering the window.
 Starting with the bottom endpoint of the line from P1 to P2, we check P1 against the left, right, and bottom boundaries in turn and find that this point is below the clipping rectangle. We then find the intersection point P1' with the bottom boundary and discard the line section from P1 to P1'.
 The line has now been reduced to the section from P1' to P2. Since P2 is outside the clip window, we check this endpoint against the boundaries and find that it is to the left of the window. Intersection point P2' is calculated, but this point is above the window. So the final intersection calculation yields P2'', and the line from P1' to P2'' is saved. This completes processing for this line, so we save this part and go on to the next line. Point P3 in the next line is to the left of the clipping rectangle, so we determine the intersection P3' and eliminate the line section from P3 to P3'.
 By checking region codes for the line section from P3' to P4 we find that the

remainder of the line is below the clip window and can be discarded also.
 Intersection points with a clipping boundary can be calculated using the slope-intercept form of the line equation. For a line with endpoint coordinates (x1, y1) and (x2, y2), the y coordinate of the intersection point with a vertical boundary can be obtained with the calculation
y = y1 + m (x - x1)
where the x value is set either to xwmin or to xwmax, and the slope of the line is calculated as m = (y2 - y1) / (x2 - x1). For the intersection with a horizontal boundary, the x coordinate can be calculated as
x = x1 + (y - y1) / m
with y set either to ywmin or to ywmax.
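Putting the region codes, the trivial accept/reject tests, and the boundary intersections together, a compact Cohen-Sutherland clipper might look like the following Python sketch (names are ours):

```python
# region-code bits: 1 = left, 2 = right, 4 = below, 8 = above
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8

def region_code(x, y, xwmin, xwmax, ywmin, ywmax):
    code = 0
    if x < xwmin:
        code |= LEFT
    elif x > xwmax:
        code |= RIGHT
    if y < ywmin:
        code |= BELOW
    elif y > ywmax:
        code |= ABOVE
    return code

def cohen_sutherland(x1, y1, x2, y2, xwmin, xwmax, ywmin, ywmax):
    # returns the clipped segment, or None if the line is rejected
    c1 = region_code(x1, y1, xwmin, xwmax, ywmin, ywmax)
    c2 = region_code(x2, y2, xwmin, xwmax, ywmin, ywmax)
    while True:
        if c1 == 0 and c2 == 0:      # both codes 0000: trivially accept
            return (x1, y1), (x2, y2)
        if c1 & c2:                  # shared 1 bit: trivially reject
            return None
        out = c1 if c1 else c2       # pick an endpoint outside the window
        if out & ABOVE:
            x = x1 + (x2 - x1) * (ywmax - y1) / (y2 - y1); y = ywmax
        elif out & BELOW:
            x = x1 + (x2 - x1) * (ywmin - y1) / (y2 - y1); y = ywmin
        elif out & RIGHT:
            y = y1 + (y2 - y1) * (xwmax - x1) / (x2 - x1); x = xwmax
        else:                        # LEFT
            y = y1 + (y2 - y1) * (xwmin - x1) / (x2 - x1); x = xwmin
        if out == c1:
            x1, y1, c1 = x, y, region_code(x, y, xwmin, xwmax, ywmin, ywmax)
        else:
            x2, y2, c2 = x, y, region_code(x, y, xwmin, xwmax, ywmin, ywmax)

# a horizontal line crossing the whole window is trimmed at both edges
print(cohen_sutherland(-5, 5, 15, 5, 0, 10, 0, 10))
```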
Polygon Clipping
To clip polygons, we need to modify the line-clipping procedures. A
polygon boundary processed with a line clipper may be displayed as a series of
unconnected line segments (Fig.), depending on the orientation of the polygon to
the clipping window.
Display of a polygon processed by a line clipping algorithm

For polygon clipping, we require an algorithm that will generate one or


more closed areas that are then scan converted for the appropriate area fill. The
output of a polygon clipper should be a sequence of vertices that defines the
clipped polygon boundaries.
Sutherland – Hodgeman polygon clipping:
A polygon can be clipped by processing the polygon boundary as a whole
against each window edge. This could be accomplished by processing all polygon


vertices against each clip rectangle boundary.


There are four possible cases when processing vertices in sequence around the perimeter of a polygon. As each pair of adjacent polygon vertices is passed to a window boundary clipper, we make the following tests:
 If the first vertex is outside the window boundary and the second vertex is inside, both the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list.
 If both input vertices are inside the window boundary, only the second vertex is added to the output vertex list.
 If the first vertex is inside the window boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
 If both input vertices are outside the window boundary, nothing is added to the output list.
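The four cases map directly onto a single-boundary clipper. In the sketch below (plain Python; `inside` and `intersect` are supplied by the caller for each window edge, and all names are illustrative), one vertex list is clipped against one boundary:

```python
def clip_polygon_edge(vertices, inside, intersect):
    # clip a vertex list against one window boundary;
    # inside(p) tests a point, intersect(p, q) returns the boundary crossing
    output = []
    for i in range(len(vertices)):
        p, q = vertices[i - 1], vertices[i]   # edge from p to q, with wraparound
        if inside(q):
            if not inside(p):                 # case 1: outside -> inside
                output.append(intersect(p, q))
            output.append(q)                  # case 2: inside -> inside
        elif inside(p):                       # case 3: inside -> outside
            output.append(intersect(p, q))
        # case 4: outside -> outside adds nothing
    return output

# demo: clip a triangle against the left boundary x = 0
def inside_left(p):
    return p[0] >= 0

def cross_left(p, q):
    t = (0 - p[0]) / (q[0] - p[0])
    return (0.0, p[1] + t * (q[1] - p[1]))

print(clip_polygon_edge([(-2, 0), (2, 0), (2, 4)], inside_left, cross_left))
```

Running the full Sutherland-Hodgeman algorithm would apply this function four times, once per window boundary, feeding each output list to the next clipper.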

Clipping a polygon against successive window boundaries

Successive processing of pairs of polygon vertices against the left window boundary: clipping a polygon against the left boundary of a window, starting with vertex 1.


Primed numbers are used to label the points in the output vertex list for this
window boundary.
Vertices 1 and 2 are found to be on the outside of the boundary. Moving along to vertex 3, which is inside, we calculate the intersection and save both the intersection point and vertex 3. Vertices 4 and 5 are determined to be inside and are saved. Vertex 6 is outside, so we find and save the intersection point. Using the five saved points, we repeat the process for the next window boundary.

A polygon overlapping a rectangular clip window is clipped by passing its vertex list through the boundary clippers in sequence:

Input → Left Clipper → Right Clipper → Bottom Clipper → Top Clipper → Output

Implementing the algorithm as described requires setting up storage for an output list of vertices as the polygon is clipped against each window boundary. We can eliminate the intermediate output vertex lists by simply clipping individual vertices at each step and passing the clipped vertices on to the next boundary clipper.
A point is added to the final output vertex list only after it has been determined to be inside or on a window boundary by all boundary clippers. Otherwise the point does not continue in the pipeline.


UNIT - IV
GRAPHICAL USER INTERFACES AND INTERACTIVE INPUT
METHODS

In order to be able to interact with the graphical image, input methods are required. These can be used to just change the location and orientation of the camera, or to change specific settings of the rendering itself.
Different devices are more suitable for changing some settings than others. In this chapter we will specify different types of these devices and discuss their advantages.
Input methods can be classified using the following categories:
 Locator
 Stroke
 String
 Valuator
 Choice
 Pick

Locator
A device that allows the user to specify one co-ordinate position. Different
methods can be used, such as a mouse cursor, where a location is chosen by
clicking a button, or a cursor that is moved using different keys on the keyboard.
Touch screens can also be used as locators; the user specifies the location by
inducing force onto the desired coordinate on the screen.
Stroke
A device that allows the user to specify a set of coordinate positions. The positions can be specified, for example, by dragging the mouse across the screen while a mouse button is kept pressed. On release, the final coordinate can, together with the first, be used to define a rectangular area.


String
A device that allows the user to specify text input. A text input widget in
combination with the keyboard is used to input the text. Also, virtual keyboards
displayed on the screen where the characters can be picked using the mouse can
be used if keyboards are not available to the application.
Valuator
A device that allows the user to specify a scalar value. Similar to string
inputs, numeric values can be specified using the keyboard. Often, up-down-
arrows are added to increase or decrease the current value. Rotary devices, such as wheels, can also be used for specifying numerical values. Oftentimes, it is useful to limit the range of the numerical value.
Choice
A device that allows the user to specify a menu option. Typical choice
devices are menus or radio buttons which provide various options the user can
choose from. For radio buttons, often only one option can be chosen at a time.
Once another option is picked, the previous one gets cleared.
Pick
A device that allows the user to specify a component of a picture. Similar to locator devices, a coordinate is specified using the mouse or another cursor input device and then back-projected into the scene to determine the selected 3-D object. It is often useful to allow a certain "error tolerance" so that an object is picked even though the user did not click exactly onto the object but close enough next to it. Also, highlighting objects within the scene can be used to traverse through a list of objects that fulfill the proximity criterion.
INPUT FUNCTIONS
Input Modes
Specify how the program and input device should interact including
request mode, sample mode and event mode


Request Mode
The application program initiates data entry.
Sample Mode
The application program and input devices operate independently.
Event Mode
The input device initiates data input to the application program. Other functions are used to specify physical devices for the logical data classes.
THREE DIMENSIONAL DISPLAY METHODS
To obtain a display of a three-dimensional scene that has been modeled in world coordinates, we must first set up a coordinate reference for the "camera". This coordinate reference defines the position and orientation for the plane of the camera film, which is the plane we want to use to display a view of the objects in the scene. Object descriptions are then transferred to the camera reference coordinates and projected onto the selected display plane. We can then display the objects in wireframe (outline) form, or we can apply lighting and surface rendering techniques to shade the visible surfaces.
Parallel Projection
In a parallel projection, parallel lines in the world-coordinate scene are projected into parallel lines on the two-dimensional display plane.
Perspective Projection
Another method for generating a view of a three-dimensional scene is to
project points to the display plane along converging paths. This causes objects
farther from the viewing position to be displayed smaller than objects of the same
size that are nearer to the viewing position. In a perspective projection, parallel
lines in a scene that are not parallel to the display plane are projected into
converging lines
Depth Cueing
A simple method for indicating depth with wireframe displays is to vary the intensity of objects according to their distance from the viewing position. Lines closest to the viewing position are displayed with the highest intensities, and lines farther away are displayed with decreasing intensities.


Visible Line and Surface Identification


We can also clarify depth relationships in a wireframe display by
identifying visible lines in some way. The simplest method is to highlight the
visible lines or to display them in a different color. Another technique, commonly
used for engineering drawings, is to display the non-visible lines as dashed lines.
Another approach is to simply remove the non-visible lines.
Surface Rendering
Added realism is attained in displays by setting the surface intensity of
objects according to the lighting conditions in the scene and according to assigned
surface characteristics. Lighting specifications include the intensity and positions
of light sources and the general background illumination required for a scene.
Surface properties of objects include degree of transparency and how rough or
smooth the surfaces are to be. Procedures can then be applied to generate the
correct illumination and shadow regions for the scene.
Exploded and Cutaway View
Exploded and cutaway views of such objects can then be used to show
the internal structure and relationship of the object Parts.
Three-Dimensional and Stereoscopic View
Three-dimensional views can be obtained by reflecting a raster image from
a vibrating flexible mirror. The vibrations of the mirror are synchronized with the
display of the scene on the CRT. As the mirror vibrates, the focal length varies so
that each point in the scene is projected to a position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the
other for the right eye.
THREE DIMENSIONAL GEOMETRIC AND MODELING
TRANSFORMATIONS
Geometric transformations and object modeling in three dimensions are
extended from two dimensional methods by including considerations for the z-
coordinate.


Translation
In a three dimensional homogeneous coordinate representation, a point or an object is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation

x'     1   0   0   tx     x
y'  =  0   1   0   ty  .  y ------------------- (1)
z'     0   0   1   tz     z
1      0   0   0   1      1

(or) P' = T.P ------------------- (2)

Parameters tx, ty and tz, specifying translation distances for the coordinate directions x, y and z, can be assigned any real values. The matrix representation in equation (1) is equivalent to the three equations
x' = x + tx
y' = y + ty
z' = z + tz ------------------- (3)
Translating a point with translation vector T = (tx, ty, tz)

 The inverse of the translation matrix in equation (1) can be obtained by negating the translation distances tx, ty and tz.
 This produces a translation in the opposite direction, and the product of a translation matrix and its inverse produces the identity matrix.
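Both properties are easy to verify numerically. A plain-Python sketch (function names are ours) using 4×4 homogeneous matrices:

```python
def translate3d(tx, ty, tz):
    # homogeneous 4x4 translation matrix of equation (1)
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def mat_vec(M, v):
    # apply a 4x4 matrix to a homogeneous point [x, y, z, 1]
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

print(mat_vec(translate3d(5, -1, 2), [1, 2, 3, 1]))  # -> [6, 1, 5, 1]

# T(tx, ty, tz) . T(-tx, -ty, -tz) is the identity matrix
I = mat_mult(translate3d(5, -1, 2), translate3d(-5, 1, -2))
```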


Rotation
 To generate a rotation transformation for an object, an axis of rotation must be designated, about which the object is to be rotated, and the amount of angular rotation must also be specified.
 Positive rotation angles produce counterclockwise rotations about a coordinate axis.
Co-ordinate Axes Rotations
The 2D z-axis rotation equations are easily extended to 3D:
x' = x cosθ - y sinθ
y' = x sinθ + y cosθ
z' = z ------------------- (2)

Parameter θ specifies the rotation angle. In homogeneous coordinate form, the 3D z-axis rotation equations are expressed as

Z axis Rotation
x'     cosθ  -sinθ   0   0     x
y'  =  sinθ   cosθ   0   0  .  y ------------------- (3)
z'      0      0     1   0     z
1       0      0     0   1     1

which we can write more compactly as
P' = Rz(θ).P ------------------- (4)
The figure below illustrates rotation of an object about the z axis.
Transformation equations for rotation about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y and z in equation (2), i.e., we use the replacements x → y → z → x ------------------- (5)
Substituting permutations (5) in equation (2), we get the equations for an x-axis


rotation:

x' = x
y' = y cosθ - z sinθ
z' = y sinθ + z cosθ ------------------- (6)

which can be written in the homogeneous coordinate form

x'     1    0      0     0     x
y'  =  0   cosθ  -sinθ   0  .  y ------------------- (7)
z'     0   sinθ   cosθ   0     z
1      0    0      0     1     1

(or)
P' = Rx(θ).P ------------------- (8)
Rotation of an object around the x axis is demonstrated in the figure below. Cyclically permuting the coordinates in equation (6) gives the transformation equations for a y-axis rotation.


z' = z cosθ - x sinθ
x' = z sinθ + x cosθ
y' = y ------------------- (9)

The matrix representation for y-axis rotation is

x'      cosθ   0   sinθ   0     x
y'  =    0     1    0     0  .  y ------------------- (10)
z'     -sinθ   0   cosθ   0     z
1        0     0    0     1     1

(or)
P' = Ry(θ).P ------------------- (11)
An example of y-axis rotation is shown in the figure below.
 An inverse rotation matrix is formed by replacing the rotation angle θ by -θ.
 Negative values for rotation angles generate rotations in a clockwise direction, so the identity matrix is produced when any rotation matrix is multiplied by its inverse.
 Since only the sine function is affected by the change in sign of the rotation angle, the inverse matrix can also be obtained by interchanging rows and columns, i.e., we can calculate the inverse of any rotation matrix R by evaluating its transpose (R-1 = RT).
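The three coordinate-axis rotation matrices, and the inverse-equals-transpose property, can be sketched in plain Python (names are ours):

```python
import math

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def rot_x(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def transpose(M):
    return [[M[j][i] for j in range(4)] for i in range(4)]

# a 90° z-axis rotation carries (1, 0, 0) to (approximately) (0, 1, 0)
p = mat_vec(rot_z(math.pi / 2), [1, 0, 0, 1])

# the inverse of a rotation is its transpose: R(-theta) == R(theta)^T
A = rot_y(-0.7)
B = transpose(rot_y(0.7))
```

Comparing `A` and `B` entry by entry confirms R⁻¹ = Rᵀ for the y-axis case; the same holds for rot_x and rot_z.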
General Three Dimensional Rotations
A rotation matrix for any axis that does not coincide with a coordinate axis
can be set up as a composite transformation involving combinations of
translations and the coordinate axes rotations. We obtain the required composite
matrix by
1. Setting up the transformation sequence that moves the selected rotation axis onto one of the coordinate axes.
2. Then set up the rotation matrix about that coordinate axis for the specified
rotation angle.


3. Obtaining the inverse transformation sequence that returns the rotation


axis to its original position.
In the special case where an object is to be rotated about an axis that is
parallel to one of the coordinate axes, we can attain the desired rotation with the
following transformation sequence:
1. Translate the object so that the rotation axis coincides with the parallel
coordinate axis.
2. Perform the specified rotation about the axis.
3. Translate the object so that the rotation axis is moved back to its original
position.
 When an object is to be rotated about an axis that is not parallel to
one of the coordinate axes, we need to perform some additional
transformations.
 In such cases, we need rotations to align the axis with a selected coordinate
axis and to bring the axis back to its original orientation.
 Given the specifications for the rotation axis and the rotation angle, we can
accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate
origin.
2. Rotate the object so that the axis of rotation coincides with one of the
coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original
orientation.
5. Apply the inverse translation to bring the rotation axis back to its original
position.
Five transformation steps
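The five-step sequence can be sketched as a matrix composition. In the NumPy sketch below (all helper names are our own), the axis is given by a point `p1` and a unit direction `u`, and steps 2 and 4 use the classic pair of rotations that align the axis with the z axis:

```python
import numpy as np

def translate(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = (tx, ty, tz)
    return M

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

def rotate_about_axis(p1, u, theta):
    """Rotate by theta about the axis through point p1 with unit direction u."""
    a, b, c = u
    d = np.hypot(b, c)
    # rotate about x to bring the axis into the xz-plane (skip if already there)
    Rx = np.eye(4)
    if d > 1e-12:
        Rx[1:3, 1:3] = [[c / d, -b / d], [b / d, c / d]]
    # rotate about y to align the axis with the z axis
    Ry = np.eye(4)
    Ry[0, 0], Ry[0, 2], Ry[2, 0], Ry[2, 2] = d, -a, a, d
    A = Ry @ Rx                       # rotation part of steps 1-2
    T = translate(*p1)
    # steps 1..5 composed right-to-left: translate, align, rotate, un-align, un-translate
    return T @ A.T @ rot_z(theta) @ A @ np.linalg.inv(T)

# 90-degree turn about the vertical axis through (1, 2, 3):
M = rotate_about_axis((1, 2, 3), (0, 0, 1), np.pi / 2)
assert np.allclose(M @ [2, 2, 3, 1], [1, 3, 3, 1])   # offset (1,0,0) maps to (0,1,0)
assert np.allclose(M @ [1, 2, 5, 1], [1, 2, 5, 1])   # points on the axis are fixed
```

Note that the inverse rotations of step 4 appear as `A.T`, using the transpose property discussed earlier.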


Scaling
The matrix expression for the scaling transformation of a position P = (x, y, z)
relative to the coordinate origin can be written as

x′      sx   0    0    0     x
y′  =   0    sy   0    0     y
z′      0    0    sz   0     z -------------------- (11)
1       0    0    0    1     1

(or) P′ = S · P --------- (12)
where the scaling parameters sx, sy, and sz are assigned any positive values.
Explicit expressions for the coordinate transformations for scaling relative to the
origin are
x′ = x·sx    y′ = y·sy    z′ = z·sz --------- (13)

 Scaling an object changes the size of the object and repositions the object
relative to the coordinate origin.
 If the transformation parameters are not equal, relative dimensions in the
object are changed.
 The original shape of the object is preserved with a uniform scaling (sx =
sy = sz).
 Scaling with respect to a selected fixed position (xf, yf, zf) can be
represented with the following transformation sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using Eq. (11).
3. Translate the fixed point back to its original position.
This sequence of transformations is shown in the figure below.


The matrix representation for an arbitrary fixed-point scaling can be expressed as
the concatenation of translate-scale-translate transformations:

T(xf, yf, zf) · S(sx, sy, sz) · T(−xf, −yf, −zf) =

sx   0    0    (1−sx)xf
0    sy   0    (1−sy)yf ------------- (14)
0    0    sz   (1−sz)zf
0    0    0    1

 An inverse scaling matrix is formed by replacing the scaling parameters sx,
sy, and sz with their reciprocals.


 The inverse matrix generates an opposite scaling transformation, so the
concatenation of any scaling matrix and its inverse produces the identity
matrix.
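As a quick numerical check (a NumPy sketch; the helper names are ours), composing translate-scale-translate reproduces the last column (1 − s)·f of Eq. (14) and leaves the fixed point unchanged:

```python
import numpy as np

def translate(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = (tx, ty, tz)
    return M

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def scale_about_point(sx, sy, sz, fixed):
    """Steps 1-3 above composed right-to-left: translate, scale, translate back."""
    xf, yf, zf = fixed
    return translate(xf, yf, zf) @ scale(sx, sy, sz) @ translate(-xf, -yf, -zf)

M = scale_about_point(2.0, 2.0, 2.0, (1.0, 1.0, 1.0))
assert np.allclose(M @ [1, 1, 1, 1], [1, 1, 1, 1])   # the fixed point is unchanged
assert np.allclose(M[:3, 3], [-1, -1, -1])           # last column is (1 - s) * f, Eq. (14)
assert np.allclose(M @ [2, 2, 2, 1], [3, 3, 3, 1])   # other points scale away from it
```

Replacing `scale(2, 2, 2)` with `scale(0.5, 0.5, 0.5)` gives the inverse matrix, and the product of the two is the identity, as stated above.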


UNIT - V
THREE – DIMENSIONAL VIEWING
Three Dimensional Viewing
In three dimensional graphics applications,
 We can view an object from any spatial position, from the front, from
above or from the back.
 We could generate a view of what we could see if we were standing in the
middle of a group of objects or inside an object, such as a building.

Viewing Pipeline:
To take a snapshot of a three-dimensional scene, we need to do the following
steps:
 Position the camera at a particular point in space.
 Decide the camera orientation, i.e., point the camera and rotate it
around the line of sight to set up the direction for the picture.
 When we snap the shutter, the scene is cropped to the size of the "window" of
the camera, and light from the visible surfaces is projected onto the camera
film.
In this way, the figure below shows the three-dimensional transformation
pipeline, from modeling coordinates to final device coordinates.

Modeling Coordinates → World Coordinates → Viewing Coordinates → Projection Coordinates → Device Coordinates


Processing Steps
 Once the scene has been modeled, world coordinate positions are converted
to viewing coordinates.
 The viewing coordinates system is used in graphics packages as a reference
for specifying the observer viewing position and the position of the
projection plane.
 Projection operations are performed to convert the viewing coordinate
description of the scene to coordinate positions on the projection plane,
which will then be mapped to the output device.
 Objects outside the viewing limits are clipped from further consideration,
and the remaining objects are processed through visible surface
identification and surface rendering procedures to produce the display
within the device view port.
Viewing Coordinates: Specifying the View Plane

 The view for a scene is chosen by establishing the viewing coordinate
system, also called the view reference coordinate system.
 A view plane, or projection plane, is set up perpendicular to the viewing
zv axis.
 World coordinate positions in the scene are transformed to viewing
coordinates, then viewing coordinates are projected onto the view plane.
 The view reference point is a world coordinate position, which is the
origin of the viewing coordinate system. It is chosen to be close to or on
the surface of some object in a scene.
 Then we select the positive direction for the viewing zv axis, and the
orientation of the view plane, by specifying the view plane normal vector
N. Here a world coordinate position establishes the direction for N
relative either to the world origin or to the viewing coordinate origin.

Transformation from World to Viewing Coordinates

Before object descriptions can be projected to the view plane, they must
be transferred to viewing coordinates.
This transformation sequence is:
 Translate the view reference point to the origin of the world coordinate
system.
 Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and
zw axes, respectively.
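One common way to build this translate-then-rotate matrix (a sketch of one convention, not the only one) is the uvn camera frame: the rotation rows are the viewing axes expressed in world coordinates. The helper below assumes a view reference point `p0`, a view plane normal `n_vec`, and a view-up vector `v_up`:

```python
import numpy as np

def world_to_viewing(p0, n_vec, v_up):
    """Translate the view reference point p0 to the origin, then rotate so the
    viewing axes (u, v, n) line up with the world axes (uvn camera frame)."""
    n = np.asarray(n_vec, float)
    n = n / np.linalg.norm(n)                  # viewing z_v axis
    u = np.cross(v_up, n)
    u = u / np.linalg.norm(u)                  # viewing x_v axis
    v = np.cross(n, u)                         # viewing y_v axis
    R = np.eye(4)
    R[:3, :3] = np.vstack([u, v, n])           # rotation rows are the viewing axes
    T = np.eye(4)
    T[:3, 3] = -np.asarray(p0, float)          # translate p0 to the origin
    return R @ T

M = world_to_viewing(p0=(4, 2, 0), n_vec=(0, 0, 1), v_up=(0, 1, 0))
# the view reference point becomes the viewing-coordinate origin
assert np.allclose(M @ [4, 2, 0, 1], [0, 0, 0, 1])
```

Because the rotation rows form an orthonormal frame, no explicit rotation angles are needed.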
PROJECTIONS
Once world coordinate descriptions of the objects are converted to viewing
coordinates, we can project the three-dimensional objects onto the two-dimensional
view plane.
There are two basic types of projection:
1. Parallel Projection:
Here the coordinate positions are transformed to the view plane along parallel
lines.


Parallel projection of an object to the view plane

2. Perspective Projection:
Here, object positions are transformed to the view plane along lines that converge
to a point called the projection reference point.

Perspective projection of an object to the view plane


Parallel Projections
 Parallel projections are specified with a projection vector that defines the
direction for the projection lines.
 When the projection is perpendicular to the view plane, it is said to be an
orthographic parallel projection; otherwise, it is said to be an oblique
parallel projection.
Orthographic Projection:
 Orthographic projections are used to produce the front, side and top views
of an object.
 Front, side and rear orthographic projections of an object are called
elevations.
 A top orthographic projection is called a plan view.
 This projection gives the measurement of lengths and angles accurately.


Orthographic projections of an object, displaying plan and elevation views

The orthographic projection that displays more than one face of an object is called
an axonometric orthographic projection.
ISOMETRIC PROJECTION
 The most commonly used axonometric projection is the isometric
projection.
 It can be generated by aligning the projection plane so that it intersects
each coordinate axis in which the object is defined at the same distance
from the origin.

Isometric projection for a cube


 Transformation equations for an orthographic parallel projection are
straightforward.
 If the view plane is placed at position zvp along the zv axis, then any
point (x, y, z) in viewing coordinates is transformed to projection
coordinates as xp = x, yp = y,
where the original z-coordinate value is kept for the depth information needed
in depth cueing and visible-surface determination procedures.
Oblique Projection


 An oblique projection is obtained by projecting points along parallel lines
that are not perpendicular to the projection plane.
 In the figure below, α and φ are two angles.
 Point (x, y, z) is projected to position (xp, yp) on the view plane.
 The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with
the line on the projection plane that joins (xp, yp) and (x, y).
 This line, of length L, is at an angle φ with the horizontal direction in the
projection plane.
 The projection coordinates are expressed in terms of x, y, L, and φ as
xp = x + L cosφ --------- (1)
yp = y + L sinφ
 Length L depends on the angle α and the z coordinate of the point to be
projected:
tanα = z / L, thus L = z / tanα = z·L1
where L1 is the inverse of tanα, which is also the value of L when z = 1.
The oblique projection equations (1) can be written as
xp = x + z(L1 cosφ)
yp = y + z(L1 sinφ)
 The transformation matrix for producing any parallel projection onto the
xv yv plane is


              1   0   L1cosφ   0
Mparallel =   0   1   L1sinφ   0
              0   0   1        0
              0   0   0        1

 An orthographic projection is obtained when L1 = 0 (which occurs at a
projection angle α of 90°).
 Oblique projections are generated with nonzero values for L1.
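The matrix Mparallel can be sketched directly (NumPy; `oblique_matrix` is our own helper name). Choosing α = 45° gives L1 = 1, the cavalier projection; L1 = 0 recovers the orthographic case:

```python
import numpy as np

def oblique_matrix(L1, phi):
    """Parallel-projection matrix onto the x_v y_v plane (Mparallel above).
    L1 = 1 / tan(alpha); L1 = 0 gives an orthographic projection."""
    M = np.eye(4)
    M[0, 2] = L1 * np.cos(phi)
    M[1, 2] = L1 * np.sin(phi)
    return M

# cavalier projection: alpha = 45 degrees so L1 = 1, with phi = 30 degrees
M = oblique_matrix(L1=1.0, phi=np.radians(30))
xp, yp, z, _ = M @ [1, 1, 2, 1]
assert np.isclose(xp, 1 + 2 * np.cos(np.radians(30)))   # xp = x + z * L1 * cos(phi)
assert np.isclose(yp, 1 + 2 * np.sin(np.radians(30)))   # yp = y + z * L1 * sin(phi)
# L1 = 0 reduces to the orthographic projection xp = x, yp = y
assert np.allclose(oblique_matrix(0, 0) @ [1, 1, 2, 1], [1, 1, 2, 1])
```

Note that the z row is the identity, so depth information survives the projection, as in the orthographic case.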
Perspective Projections
 To obtain a perspective projection of a 3D object, we transform points along
projection lines that meet at the projection reference point.
 If the projection reference point is set at position zprp along the zv axis and
the view plane is placed at zvp, as in the figure, we can write equations
describing coordinate positions along this perspective projection line in
parametric form as
x′ = x − x·u
y′ = y − y·u
z′ = z − (z − zprp)·u

Perspective projection of a point P with coordinates (x, y, z) to position
(xp, yp, zvp) on the view plane.
 Parameter u takes values from 0 to 1, and coordinate position (x′, y′, z′)
represents any point along the projection line.
 When u = 0, the point is at P = (x, y, z).
 At the other end of the line, u = 1 and we have the projection reference point
coordinates (0, 0, zprp).
 On the view plane, z′ = zvp, and z′ can be solved for parameter u at this
position along the projection line.
 Substituting this value of u into the equations for x′ and y′, we obtain the
perspective transformation equations:
xp = x((zprp − zvp) / (zprp − z)) = x(dp / (zprp − z))
yp = y((zprp − zvp) / (zprp − z)) = y(dp / (zprp − z)) ------------------------- (2)
where dp = zprp − zvp is the distance of the view plane from the projection
reference point.
Using a 3D homogeneous coordinate representation, we can write the
perspective projection transformation (2) in matrix form as

xh      1   0   0          0              x
yh  =   0   1   0          0              y
zh      0   0   −zvp/dp    zvp(zprp/dp)   z -------------- (3)
h       0   0   −1/dp      zprp/dp        1

In this representation, the homogeneous factor is
h = (zprp − z) / dp --------- (4)
and the projection coordinates on the view plane are calculated from the
homogeneous coordinates as
xp = xh / h
yp = yh / h --------- (5)
where the original z-coordinate value is retained in the projection coordinates for
depth processing.
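Equations (2)-(5) can be sketched in a few lines (the function name is our own). A point twice as far from the projection reference point as the view plane projects at half its offset:

```python
def perspective_point(x, y, z, zprp, zvp):
    """Perspective projection of (x, y, z) onto the view plane at z = zvp,
    with the projection reference point at z = zprp (Eqs. 2-5)."""
    dp = zprp - zvp                 # view-plane distance from the reference point
    h = (zprp - z) / dp             # homogeneous factor, Eq. (4)
    xp = x * (dp / (zprp - z))      # Eq. (2)
    yp = y * (dp / (zprp - z))
    return xp, yp, h

# reference point at z = 10, view plane at z = 0, point at z = -10:
# the point is twice dp from the reference point, so offsets are halved
xp, yp, h = perspective_point(4, 2, z=-10, zprp=10, zvp=0)
assert (xp, yp, h) == (2.0, 1.0, 2.0)
```

Dividing by the homogeneous factor h is exactly the perspective division of Eq. (5).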
VISIBLE SURFACE DETECTION
A major consideration in the generation of realistic graphics displays
is identifying those parts of a scene that are visible from a chosen viewing
position.
Classification of Visible Surface Detection Algorithms
These are classified into two types based on whether they deal with object
definitions directly or with their projected images
1. Object space methods: compares objects and parts of objects to each other
within the scene definition to determine which surfaces as a whole we should
label as visible.
2. Image space methods: visibility is decided point by point at each pixel
position on the projection plane. Most Visible Surface Detection Algorithms
use image space methods.


Back Face Detection (Back Face Removal)


 A point (x, y, z) is "inside" a polygon surface with plane parameters A, B,
C, and D if
Ax + By + Cz + D < 0 --------- (1)
 When an inside point is along the line of sight to the surface, the polygon
must be a back face.
 We can simplify this test by considering the normal vector N to a polygon
surface, which has Cartesian components (A, B, C). In general, if V is a
vector in the viewing direction from the eye position, as shown in the figure,
then this polygon is a back face if
V · N > 0


Furthermore, if object descriptions have been converted to projection
coordinates and our viewing direction is parallel to the viewing zv axis, then
V = (0, 0, Vz) and V · N = Vz·C, so that we only need to consider the sign of C,
the z component of the normal vector N.
In a right-handed viewing system with viewing direction along the negative zv
axis, as in the figure below, the polygon is a back face if C < 0.

Thus, in general, we can label any polygon as a back face if its normal vector has
a z-component value
C ≤ 0
By examining parameter C for the different planes defining an object, we can
immediately identify all the back faces.
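The test above is a one-liner in code. This sketch (our own helper name) assumes the right-handed convention just described, with the viewing direction along the negative zv axis, so the check reduces to the sign of C:

```python
import numpy as np

def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Back-face test: the polygon faces away from the viewer when V . N >= 0.
    With V along the negative z_v axis this reduces to C <= 0."""
    return np.dot(view_dir, normal) >= 0

# normal pointing toward the viewer (C > 0): front face
assert not is_back_face((0, 0, 1))
# normal pointing away from the viewer (C < 0): back face
assert is_back_face((0, 0, -1))
# edge-on polygon (C = 0) is labeled a back face per the C <= 0 rule
assert is_back_face((1, 5, 0))
```

Culling back faces this way typically removes about half the polygons of a closed object before any per-pixel work is done.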


Depth Buffer Method (Z Buffer method)


A commonly used image-space approach to detecting visible surfaces is
the depth-buffer method, which compares surface depths at each pixel position
on the projection plane. This procedure is also referred to as the z-buffer method.
Each surface of a scene is processed separately, one point at a time across
the surface. The method is usually applied to scenes containing only polygon
surfaces, because depth values can be computed very quickly and the method is
easy to implement. But the method can also be applied to non-planar surfaces.
With object descriptions converted to projection coordinates, each (x, y, z)
position on a polygon surface corresponds to the orthographic projection point (x,
y) on the view plane.

Therefore, for each pixel position (x, y) on the view plane, object depths
can be compared by comparing z values. The figure shows three surfaces at
varying distances along the orthographic projection line from position (x, y) in a
view plane taken as the (xv, yv) plane. Surface S1 is closest at this position, so
its surface intensity value at (x, y) is saved.
We can implement the depth-buffer algorithm in normalized coordinates, so
that z values range from 0 at the back clipping plane to Zmax at the front clipping
plane.
Two buffer areas are required. A depth buffer is used to store depth values
for each (x, y) position as surfaces are processed, and the refresh buffer stores
the intensity values for each position.
Initially, all positions in the depth buffer are set to 0 (minimum depth), and the
refresh buffer is initialized to the background intensity.

We summarize the steps of a depth-buffer algorithm as follows:

 Initialize the depth buffer and refresh buffer so that for all buffer positions
(x, y), depth(x, y) = 0, refresh(x, y) = Ibackgnd.
 For each position on each polygon surface, compare depth values to
previously stored values in the depth buffer to determine visibility.
Calculate the depth z for each (x, y) position on the polygon.
If z > depth(x, y), then set depth(x, y) = z, refresh(x, y) = Isurf(x, y), where
Ibackgnd is the value for the background intensity, and Isurf(x, y) is the
projected intensity value for the surface at pixel position (x, y).
After all surfaces have been processed, the depth buffer contains depth
values for the visible surfaces and the refresh buffer contains the corresponding
intensity values for those surfaces.
Depth values for a surface position (x, y) are calculated from the plane
equation for each surface:

z = (−Ax − By − D) / C --------- (1)

For any scan line, adjacent horizontal positions across the line differ by 1,
and a vertical y value on an adjacent scan line differs by 1. If the depth of
position (x, y) has been determined to be z, then the depth z′ of the next position
(x + 1, y) along the scan line is obtained from Eq. (1) as

z′ = (−A(x + 1) − By − D) / C ----------------------- (2)
or
z′ = z − A/C ----------------------- (3)


On each scan line, we start by calculating the depth on a left edge of the
polygon that intersects that scan line, as in the figure below. Depth values at each
successive position across the scan line are then calculated by Eq. (3).
Scan lines intersecting a polygon surface
We first determine the y-coordinate extents of each polygon, and process
the surface from the topmost scan line to the bottom scan line. Starting at a top
vertex, we can recursively calculate x positions down a left edge of the polygon
as x′ = x − 1/m, where m is the slope of the edge.
Depth values down the edge are then obtained recursively as

z′ = z + (A/m + B) / C --------------------- (4)

Intersection positions on successive scan lines along a left polygon edge

If we are processing down a vertical edge, the slope is infinite and the
recursive calculation reduces to
z′ = z + B/C --------------------- (5)
An alternate approach is to use a midpoint method or Bresenham-type
algorithm for determining x values on left edges for each scan line. Also the
method can be applied to curved surfaces by determining depth and intensity
values at each surface projection point.
For polygon surfaces, the depth-buffer method is very easy to implement,
and it requires no sorting of the surfaces in a scene. But it does require the
availability of a second buffer in addition to the refresh buffer.
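The two-buffer loop above can be sketched in a few lines. This is an illustrative version rather than a production rasterizer: each surface is represented by an intensity and a callable returning its depth at a pixel (or `None` where the surface does not cover that pixel), using the normalized convention above where larger z means closer to the viewer:

```python
def depth_buffer_render(width, height, surfaces, background=0):
    """Minimal depth-buffer sketch. `surfaces` is a list of
    (intensity, depth_fn) pairs; depth_fn(x, y) returns the surface depth at a
    pixel or None when the surface does not cover that pixel."""
    depth = [[0.0] * width for _ in range(height)]         # depth buffer, minimum depth
    frame = [[background] * width for _ in range(height)]  # refresh buffer
    for intensity, depth_fn in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                if z is not None and z > depth[y][x]:      # closer than what is stored?
                    depth[y][x] = z
                    frame[y][x] = intensity
    return frame

# two overlapping constant-depth surfaces: the closer one (z = 0.8) wins
frame = depth_buffer_render(2, 1, [
    (1, lambda x, y: 0.5),                       # farther surface, covers both pixels
    (2, lambda x, y: 0.8 if x == 0 else None),   # closer surface, covers pixel x = 0 only
])
assert frame == [[2, 1]]
```

Note that the surfaces are processed in arbitrary order with no sorting, which is exactly the property the text highlights.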


A-BUFFER METHOD
An extension of the ideas in the depth-buffer method is the A-buffer
method. The A-buffer method represents an antialiased, area-averaged,
accumulation-buffer method developed by Lucasfilm for implementation in
the surface-rendering system called REYES (an acronym for "Renders
Everything You Ever Saw").
A drawback of the depth-buffer method is that it can only find one visible
surface at each pixel position. The A-buffer method expands the depth buffer so
that each position in the buffer can reference a linked list of surfaces.
Thus, more than one surface intensity can be taken into consideration at
each pixel position, and object edges can be antialiased.
Each position in the A-buffer has two fields:
1) Depth field - stores a positive or negative real number
2) Intensity field - stores surface-intensity information or a pointer value.
If the depth field is positive, the number stored at that position is the depth
of a single surface overlapping the corresponding pixel area. The intensity field
then stores the RGB components of the surface color at that point and the percent
of pixel coverage, as illustrated in Fig. A.
If the depth field is negative, this indicates multiple-surface contributions
to the pixel intensity. The intensity field then stores a pointer to a linked list of
surface data, as in Fig. B.


Organization of an A-buffer pixel position: (A) single-surface overlap of the
corresponding pixel area, (B) multiple-surface overlap.
Data for each surface in the linked list includes
 RGB intensity components
 opacity parameter (percent of transparency)
 depth
 percent of area coverage
 surface identifier
 other surface-rendering parameters
 pointer to next surface
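The two-field pixel layout and linked surface list might be modeled as below (a sketch with illustrative field names, following the list above; the sign of the depth field distinguishes the two cases):

```python
from dataclasses import dataclass

@dataclass
class SurfaceData:
    """One node in an A-buffer linked surface list."""
    rgb: tuple            # RGB intensity components
    opacity: float        # opacity parameter (percent of transparency)
    depth: float
    coverage: float       # percent of pixel area covered
    surface_id: int
    next: "SurfaceData | None" = None   # pointer to next surface

@dataclass
class APixel:
    depth: float = 0.0    # positive: single-surface depth; negative: list follows
    data: object = None   # intensity info, or the head of a SurfaceData list

# single-surface pixel: depth field holds the (positive) depth, data holds color
p1 = APixel(depth=0.6, data=(255, 0, 0))
assert p1.depth > 0

# multiple contributions: a negative depth flags a linked surface list
s2 = SurfaceData((0, 255, 0), 0.5, 0.4, 0.5, surface_id=2)
s1 = SurfaceData((255, 0, 0), 1.0, 0.6, 0.5, surface_id=1, next=s2)
p2 = APixel(depth=-1.0, data=s1)
assert p2.depth < 0 and p2.data.next is s2
```

At display time, the list entries would be composited using their coverage and opacity values to produce one antialiased pixel color.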
SCAN-LINE METHOD
This image-space method for removing hidden surfaces is an extension of
the scan-line algorithm for filling polygon interiors. As each scan line is
processed, all polygon surfaces intersecting that line are examined to determine
which are visible. Across each scan line, depth calculations are made for each
overlapping surface to determine which is nearest to the view plane. When the
visible surface has been determined, the intensity value for that position is entered
into the refresh buffer.
We assume that tables are set up for the various surfaces, which include
both an edge table and a polygon table. The edge table contains coordinate
endpoints for each line in-the scene, the inverse slope of each line, and pointers
into the polygon table to identify the surfaces bounded by each line.
The polygon table contains coefficients of the plane equation for each
surface, intensity information for the surfaces, and possibly pointers into the edge
table.
To facilitate the search for surfaces crossing a given scan line, we can set
up an active list of edges from information in the edge table. This active list will
contain only edges that cross the current scan line, sorted in order of increasing
x.
In addition, we define a flag for each surface that is set on or off to indicate
whether a position along a scan line is inside or outside of the surface. Scan lines
are processed from left to right. At the leftmost boundary of a surface, the surface
flag is turned on; and at the rightmost boundary, it is turned off.

Scan lines crossing the projection of two surfaces S1 and S2 in the view plane.
Dashed lines indicate the boundaries of hidden surfaces. The figure illustrates
the scan-line method for locating visible portions of surfaces for pixel positions
along the line.
The active list for scan line 1 contains information from the edge table for
edges AB, BC, EH, and FG. For positions along this scan line between edges AB
and BC, only the flag for surface S1 is on.
Therefore no depth calculations are necessary, and intensity information
for surface S1, is entered from the polygon table into the refresh buffer.
Similarly, between edges EH and FG, only the flag for surface S2 is on. No
other positions along scan line 1 intersect surfaces, so the intensity values in the
other areas are set to the background intensity.
For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and
FG. Along scan line 2, from edge AD to edge EH, only the flag for surface S1 is
on. But between edges EH and BC, the flags for both surfaces are on.
In this interval, depth calculations must be made using the plane
coefficients for the two surfaces. For this example, the depth of surface S1 is
assumed to be less than that of S2, so intensities for surface S1 are loaded into
the refresh buffer until boundary BC is encountered. Then the flag for surface S1
goes off, and intensities for surface S2 are stored until edge FG is passed.
Any number of overlapping polygon surfaces can be processed with this
scan-line method. Flags for the surfaces are set to indicate whether a position is
inside or outside, and depth calculations are performed when surfaces overlap.
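A simplified per-pixel sketch of this idea follows (it tests span membership directly instead of maintaining the active edge list, and uses the larger-is-closer depth convention from the depth-buffer section; all names are ours):

```python
def scan_line_intensities(width, surfaces, background=0):
    """One scan line, simplified. Each surface is (intensity, x_left, x_right,
    depth_fn): its flag is "on" for x_left <= x < x_right, and depth_fn(x)
    gives its depth there. Depths are compared only where spans overlap."""
    line = [background] * width
    for x in range(width):
        # surfaces whose flag is on at this position
        inside = [s for s in surfaces if s[1] <= x < s[2]]
        if len(inside) == 1:
            line[x] = inside[0][0]          # single surface: no depth calculation
        elif inside:
            # overlap: the surface nearest the view plane wins
            line[x] = max(inside, key=lambda s: s[3](x))[0]
    return line

# S1 spans [0, 4), S2 spans [2, 6); S1 is nearer where they overlap
line = scan_line_intensities(8, [
    (1, 0, 4, lambda x: 0.8),   # surface S1
    (2, 2, 6, lambda x: 0.5),   # surface S2
])
assert line == [1, 1, 1, 1, 2, 2, 0, 0]
```

The real algorithm avoids the per-pixel membership test by toggling flags only at edge crossings taken from the sorted active edge list, so depth calculations occur only inside overlap intervals.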
