
CHAPTER THREE

Introduction to the Rendering Process with OpenGL

By: Firomsa L. (Ass. Lecturer)
Contents

• The Role of OpenGL in the Reference Model
• Coordinate Systems
• Viewing Using a Synthetic Camera
• Output Primitives and Attributes
WHAT IS OPENGL?
• OpenGL is a software interface to graphics hardware. This interface consists of more than 700 distinct commands (about 670 commands as specified for OpenGL Version 3.0, plus another 50 in the OpenGL Utility Library) that you use to specify the objects and operations needed to produce interactive three-dimensional applications.
• OpenGL is designed as a streamlined, hardware-independent interface to be implemented on many different hardware platforms.
• To achieve these qualities, no commands for performing windowing tasks or obtaining user input are included in OpenGL; instead, you must work through whatever windowing system controls the particular hardware you're using.
CONT…
• The Rendering Pipeline is the sequence of steps that OpenGL takes when rendering objects. Vertex attributes and other data go through a sequence of steps to generate the final image on the screen. There are usually 9 steps in this pipeline, several of which are optional and some of which are programmable.
• Sequence of steps taken by OpenGL to generate an image:
CONT…
1. Vertex Specification: an ordered list of vertices that define the boundaries of the primitive is provided. Along with this, one can define other vertex attributes such as color and texture coordinates. This data is then sent down the pipeline and manipulated there.
2. Vertex Shader: the vertex data defined above now passes through the Vertex Shader, a program written in GLSL that manipulates the vertex data. Vertex shaders execute once for every vertex the GPU processes (for a triangle, three times), so if the scene consists of one million vertices, the vertex shader executes one million times, once for each vertex. The main job of a vertex shader is to calculate the final position of each vertex. (A sketch of steps 1 and 2 follows this slide.)
3. Tessellation: this is an optional stage, in which primitives are tessellated, i.e. subdivided into a finer, smoother mesh of triangles.
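As a rough sketch of steps 1 and 2, the following C++ fragment uploads three vertex positions and compiles a minimal GLSL vertex shader. It assumes a modern OpenGL 3.x context with the GLEW loader already initialized; all identifiers are illustrative, not taken from the slides.

    #include <GL/glew.h>

    // Step 1: vertex specification -- an ordered list of vertex positions.
    float vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f
    };

    void specifyVertices() {
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);            // container for attribute state
        glGenBuffers(1, &vbo);                 // GPU buffer for the raw data
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
        glEnableVertexAttribArray(0);          // attribute 0 = position
    }

    // Step 2: a minimal GLSL vertex shader -- outputs each final vertex position.
    const char* vertexSrc = R"(
        #version 330 core
        layout (location = 0) in vec3 aPos;
        void main() { gl_Position = vec4(aPos, 1.0); }
    )";

    GLuint compileVertexShader() {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vertexSrc, nullptr);
        glCompileShader(vs);                   // check the compile log in real code
        return vs;
    }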
CONT…
4. Geometry Shader: this shader stage is also optional. The job of the Geometry Shader is to take an input primitive and generate zero or more output primitives; if a triangle strip is sent as a single primitive, the geometry shader sees it as a series of individual triangles. A Geometry Shader can remove primitives, or tessellate them by outputting many primitives for a single input; it can also convert primitives to different types (for example, a point primitive can become triangles). (A sketch follows this slide.)
5. Vertex Post-Processing: this is a fixed-function stage, i.e. the user has very limited to no control over it. The most important part of this stage is clipping, which discards the portions of primitives that lie outside the viewing volume.
6. Primitive Assembly: this stage collects the vertex data into an ordered sequence of simple primitives (points, lines or triangles).
7. Rasterization: this is an important step in the pipeline; it determines which pixels each primitive covers. The output of rasterization is fragments.
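As a sketch of step 4, here is a minimal pass-through geometry shader in GLSL (shown as a C++ string constant, consistent with the other sketches). It re-emits each incoming triangle unchanged; emitting more or fewer vertices at this point is how a geometry shader amplifies, removes, or converts primitives.

    // Minimal pass-through geometry shader: one triangle in, one triangle out.
    const char* geometrySrc = R"(
        #version 330 core
        layout (triangles) in;
        layout (triangle_strip, max_vertices = 3) out;
        void main() {
            for (int i = 0; i < 3; ++i) {
                gl_Position = gl_in[i].gl_Position;  // copy each input vertex
                EmitVertex();
            }
            EndPrimitive();                          // close the output primitive
        }
    )";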
CONT…
8. Fragment Shader: although not strictly a required stage, it is used the vast majority of the time. This user-written program in GLSL calculates the color of each fragment that the user sees on the screen; it runs once for each fragment in the geometry, and its job is to determine the final color of each fragment.
9. Per-Sample Operations: a few tests are performed depending on whether the user has activated them. Examples include the pixel ownership test, scissor test, stencil test and depth test. (A sketch of steps 8 and 9 follows.)
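A sketch of steps 8 and 9: a constant-color GLSL fragment shader, plus the calls that activate some of the per-sample tests named above (the scissor rectangle values are illustrative).

    #include <GL/glew.h>

    // Step 8: a minimal GLSL fragment shader -- decides each fragment's color.
    const char* fragmentSrc = R"(
        #version 330 core
        out vec4 FragColor;
        void main() { FragColor = vec4(1.0, 0.5, 0.2, 1.0); }  // constant orange
    )";

    // Step 9: per-sample operations run only if the user has activated them.
    void enablePerSampleTests() {
        glEnable(GL_DEPTH_TEST);    // keep only the nearest fragment per pixel
        glEnable(GL_STENCIL_TEST);  // compare fragments against the stencil buffer
        glEnable(GL_SCISSOR_TEST);  // discard fragments outside a rectangle
        glScissor(0, 0, 320, 240);  // hypothetical 320x240 scissor rectangle
    }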
THE ROLE OF OPENGL IN THE REFERENCE MODEL
• OpenGL (Open Graphics Library) is a cross-platform, hardware-accelerated, language-independent, industry-standard API for producing 3D (including 2D) graphics. Modern computers have a dedicated GPU (Graphics Processing Unit) with its own memory to speed up graphics rendering. OpenGL is the software interface to this graphics hardware: graphics rendering commands issued by your application can be directed to the graphics hardware and accelerated.
We use three sets of libraries in our OpenGL programs:
1. Core OpenGL (GL): consists of hundreds of commands, which begin with the prefix "gl" (e.g., glColor, glVertex, glTranslate, glRotate). Core OpenGL models an object via a set of geometric primitives such as points, lines and polygons.
2. OpenGL Utility Library (GLU): built on top of core OpenGL to provide important utilities (such as setting the camera view and projection) and additional building models (such as quadric surfaces and polygon tessellation). GLU commands start with the prefix "glu" (e.g., gluLookAt, gluPerspective).
THE ROLE OF OPENGL IN THE REFERENCE MODEL
3. OpenGL Utility Toolkit (GLUT): OpenGL is designed to be independent of the windowing system and operating system. GLUT is needed to interact with the operating system (for example, creating a window and handling key and mouse input); it also provides additional building models (such as the sphere and torus). GLUT commands start with the prefix "glut" (e.g., glutCreateWindow, glutMouseFunc). GLUT is platform-independent; it is built on top of platform-specific OpenGL extensions such as GLX for the X Window System, WGL for Microsoft Windows, and AGL, CGL or Cocoa for Mac OS. Quoting from opengl.org: "GLUT is designed for constructing small to medium sized OpenGL programs. While GLUT is well-suited to learning OpenGL and developing simple OpenGL applications, GLUT is not a full-featured toolkit, so large applications requiring sophisticated user interfaces are better off using native window system toolkits. GLUT is simple, easy, and small." (A minimal GLUT skeleton follows this slide.)
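A minimal GLUT skeleton in C++, illustrating the window-system role described above (legacy immediate-mode drawing is used only to keep the sketch short):

    #include <GL/glut.h>   // on macOS: <GLUT/glut.h>

    // Called by GLUT whenever the window needs repainting.
    void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);            // legacy immediate-mode primitive
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);                        // connect to the window system
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  // double-buffered RGB window
        glutInitWindowSize(640, 480);
        glutCreateWindow("GLUT demo");
        glutDisplayFunc(display);                     // register the draw callback
        glutMainLoop();                               // hand control to GLUT
        return 0;
    }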
THE ROLE OF OPENGL IN THE REFERENCE MODEL
Alternatives to GLUT include SDL, ....
4. OpenGL Extension Wrangler Library (GLEW): "GLEW is a cross-platform open-source C/C++ extension loading library. GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform." Source and pre-built binaries are available at http://glew.sourceforge.net/. A standalone utility called "glewinfo.exe" (under the "bin" directory) can be used to produce the list of OpenGL functions supported by your graphics system. (An initialization sketch follows this slide.)
5. Others
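A sketch of typical GLEW initialization; glewInit must be called after an OpenGL context exists (here a GLUT window provides one):

    #include <cstdio>
    #include <GL/glew.h>   // include before other GL headers
    #include <GL/glut.h>

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutCreateWindow("GLEW demo");   // GLEW needs a current GL context first

        GLenum err = glewInit();         // load the available extension entry points
        if (err != GLEW_OK) {
            std::fprintf(stderr, "GLEW error: %s\n", glewGetErrorString(err));
            return 1;
        }
        // Query support for a specific extension at run time:
        if (GLEW_ARB_vertex_program) {
            // safe to call ARB vertex program functions here
        }
        return 0;
    }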
COORDINATE SYSTEMS
• A point can be mapped into different coordinate systems using matrix multiplications.
• Let's follow a box through the coordinate systems it has to go through to be displayed on your monitor.
• The box starts in local space. It can be helpful to imagine that this is a space where only the vertices of the box exist, and their coordinates are all relative to its origin (0, 0, 0).
• When we hard-code a triangle's vertices like below:
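The slide showed this code as an image; a typical example of such hard-coded local-space vertices looks like the following (the exact values are illustrative):

    // Local-space coordinates, all relative to the object's own origin (0, 0, 0).
    float vertices[] = {
        -0.5f, -0.5f, 0.0f,   // bottom-left
         0.5f, -0.5f, 0.0f,   // bottom-right
         0.0f,  0.5f, 0.0f    // top
    };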
COORDINATE SYSTEMS
• We were specifying the vertices of the triangle by their local-space coordinates. Try imagining where the origin would be visualized on the triangle.
• The box then needs to be placed in world space, a coordinate system which describes how it exists in the "world" along with every other object. The box is scaled, then rotated, and finally translated into world space.
• The three transformations correspond to three different matrices, and the order of multiplication matters! Their product is what we call a model matrix. (A sketch follows this slide.)
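A sketch using the GLM mathematics library (a common companion to OpenGL, though not named on the slides). Because the matrix closest to the vertex applies first, "scale, then rotate, then translate" is written as translate × rotate × scale:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 makeModelMatrix() {
        glm::mat4 scale     = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
        glm::mat4 rotate    = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                                          glm::vec3(0.0f, 1.0f, 0.0f)); // about +Y
        glm::mat4 translate = glm::translate(glm::mat4(1.0f),
                                             glm::vec3(5.0f, 0.0f, -3.0f));
        // Order matters: vertices are scaled first, then rotated, then translated.
        return translate * rotate * scale;
    }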
COORDINATE SYSTEMS
• Then, for a camera observing this world, imagine how it perceives the world: it knows what is in front, left, right, back, up, or down of it. These directions form another coordinate system describing what the camera sees. Our box is then sent into this coordinate system, the view space. The transformation used is represented by a view matrix.
• Every camera has its own view space, but for rendering to the screen we usually only care about one particular camera's view space.
• Usually, the basis of the coordinate system is taken to be the up, right, and front vectors of the camera in world space. (A sketch follows this slide.)
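With GLM, the view matrix can be built directly from the camera's position, its target, and an up vector (the values here are illustrative):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // glm::lookAt re-expresses world coordinates in the camera's own basis
    // (its right, up, and front vectors), producing the view matrix.
    glm::mat4 makeViewMatrix() {
        return glm::lookAt(
            glm::vec3(0.0f, 2.0f, 5.0f),    // camera position in world space
            glm::vec3(0.0f, 0.0f, 0.0f),    // target point the camera looks at
            glm::vec3(0.0f, 1.0f, 0.0f));   // world-space up direction
    }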
COORDINATE SYSTEMS
• From the camera's point of view, there are regions it simply cannot see. We don't need to render what it cannot see, so we clip it, and only what the camera can see is left.
• You can imagine this process to be like how everything around you reflects light and projects its appearance into your eyes. The back of your chair, for example, would not be able to reflect any light that projects onto your eyes, so it is "clipped" from your point of view.
• Note that this is a projection from a three-dimensional space to a two-dimensional space, represented by a projection matrix.
• Coordinates in clip space are then mapped onto your viewport, into screen coordinates, which directly correspond to the fragments your fragment shader will process.
COORDINATE SYSTEMS
• Just like how you can switch out a camera's lens for different photographic effects, you can change how the camera perceives the view space by changing how it is projected into its clip space; in other words, by changing its projection matrix, which describes a frustum box where everything that falls outside of it gets clipped.
• The orthographic projection is best described by how every parallel line in world space remains parallel after the projection. It is often used in engineering.
• The frustum of the orthographic projection is represented by two identically-sized planes, the near plane and the far plane, and the distance between them. Note that the planes are parallel to each other.
COORDINATE SYSTEMS
• Below is a function that generates the projection matrix of an orthographic projection.
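The slide showed this function as an image; with GLM, the equivalent call would be glm::ortho (an assumption, since the slides do not name the library):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Orthographic projection: left/right/bottom/top define the identically
    // sized near and far planes; near and far give their distances.
    glm::mat4 makeOrthoMatrix() {
        return glm::ortho(-10.0f, 10.0f,    // left, right
                          -10.0f, 10.0f,    // bottom, top
                            0.1f, 100.0f);  // near, far
    }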
• The perspective projection is more realistic than the orthographic projection, taking into account the curved surface of an eye or a camera's lens. In short, parallel lines coincide at an infinitely far point.
COORDINATE SYSTEMS
• This is achieved by changing the w component of a vertex such that it increases as the vertex gets farther away from the camera. The x, y, and z components of the vertex are divided by w in a step called the perspective division, to normalize the coordinates and achieve the effect of points seeming to converge to a single point infinitely far away.
• Below is a function that generates the projection matrix of a perspective projection.
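Again, the slide's code is an image; the GLM equivalent is presumably glm::perspective:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Perspective projection: vertical field of view, viewport aspect ratio,
    // and the near/far plane distances define the view frustum.
    glm::mat4 makePerspectiveMatrix() {
        return glm::perspective(glm::radians(45.0f),  // vertical FOV
                                16.0f / 9.0f,         // aspect ratio
                                0.1f, 100.0f);        // near, far
    }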
COORDINATE SYSTEMS
• The product of the transformation matrices M (Model), V (View), and P (Projection), P × V × M (in this order!), is called the Model-View-Projection matrix, which, when applied to a point, sends it all the way from local space to clip space. You should send the MVP matrix to your shader as a uniform, since it remains constant throughout the entire frame, across every point the shader is going to process.
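A sketch of computing the MVP matrix and sending it as a uniform (the uniform name "uMVP" and the helper function are illustrative):

    #include <GL/glew.h>
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Upload the combined Model-View-Projection matrix once per frame.
    void uploadMVP(GLuint program, const glm::mat4& model,
                   const glm::mat4& view, const glm::mat4& proj) {
        glm::mat4 mvp = proj * view * model;          // P * V * M, in this order
        glUseProgram(program);                        // an already-linked program
        GLint loc = glGetUniformLocation(program, "uMVP");
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
    }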
VIEWING USING A SYNTHETIC CAMERA
• A synthetic camera is a type of rendering technique that seeks to replicate the characteristics, especially the distortions (e.g. out of focus, aberration), of a real camera or the human eye, rather than the perfectly sharp achromatic pictures usually produced by computer graphics.
• The object of the exercise is to generate a rendering of a 3D scene, just as you might take a photograph. The DATA within the computer program act as objects in the real world. What, then, takes the place of our EYE or CAMERA? What about the camera lens? And film?
VIEWING USING A SYNTHETIC CAMERA
• The rendering process, or graphics pipeline, takes the place of the optics and sensory nerves of the eye, or of the lens, shutter, and film of the camera. However, as with a real camera, the image produced by the graphics pipeline depends on several factors, including:
• The position of the camera in space,
• The orientation of the camera (the direction in which it is facing),
• The projection type (analogous to the lens of the camera), and
• The kind of 'film' that stores the image (in this case, a product of the rendering process).
VIEWING USING A SYNTHETIC CAMERA
• Pinhole camera, aka camera obscura:
VIEWING USING A SYNTHETIC CAMERA
• Pinhole camera with object and its projection:
• Pinhole camera showing similar triangles:
VIEWING USING A SYNTHETIC CAMERA
• The synthetic camera moves the image plane in front of the center of projection to eliminate negatives:
VIEWING USING A SYNTHETIC CAMERA
• We'll use (X, Y, Z) to refer to the 3D coordinates of a point in space, and (x, y, z) to refer to the projected image coordinates. In the above diagram, the negative z axis points to the right, so z = −d.
• Consider the y coordinate in the above pictures. Using similar triangles, Y/Z = y/d, so y = d(Y/Z), which can be written as y = Y/(Z/d).
• We can derive an expression for x in a similar way: x = X/(Z/d).
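As a small worked example of the formulas above: with d = 1, the point (X, Y, Z) = (2, 3, 4) projects to (x, y) = (0.5, 0.75). In code, the projection is a two-line function (names illustrative):

    // Pinhole projection with image-plane distance d:
    // x = X / (Z / d), y = Y / (Z / d).
    void pinholeProject(float X, float Y, float Z, float d,
                        float& x, float& y) {
        x = X / (Z / d);
        y = Y / (Z / d);
    }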
VISUALIZATION OF THE CAMERA API
• The diagram below summarizes the key elements of the camera setup:
VISUALIZATION OF THE CAMERA API
• In the above code example, the camera parameters (adjusted in the GUI) are stored in an object named cameraParams:
VISUALIZATION OF THE CAMERA API
• Note that you also need to do something like this at the end of your code:
• TW's camera setup did this for us automatically. With this code, though, we can cut the TW apron strings.
• Question: Can we set up multiple cameras in the same scene? Not really, but see this example from the Three.js site.
CREATING DESIRED CAMERA VIEWS
Exercise: Setting up a Camera
• Using this town-view-before.html, set up a camera to view the snowman from above and to the right. Something like this:
OUTPUT PRIMITIVES AND ATTRIBUTES
• Any parameter that affects the way a primitive is to be displayed is called an attribute parameter.
Line Attributes
• The basic attributes of a straight line segment are line type, line width and line colour.
Line type:
• Lines are categorized into solid lines, dashed lines and dotted lines.
• Dashed lines can be displayed by generating an inter-dash spacing that is equal to the length of the solid sections.
• Dotted lines can be displayed by generating very short dashes with the spacing equal to or greater than the dash size.
OUTPUT PRIMITIVES AND ATTRIBUTES
• In a PHIGS program, the function setLinetype (lt); is used to set the line type, where the parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate solid, dashed, dotted or dash-dotted lines respectively.
OUTPUT PRIMITIVES AND ATTRIBUTES
LINE WIDTH
• The line width attribute is set with the command setLinewidthScaleFactor (lw);
• The line width parameter lw is assigned a positive number to indicate the relative width of the line to be displayed.
• The value 1 specifies the standard-width line.
• Values greater than 1 produce lines thicker than the standard.
OUTPUT PRIMITIVES AND ATTRIBUTES
LINE COLOUR
• The line colour value is set with the function setPolylineColourIndex (lc);
• Nonnegative integer values, corresponding to allowed colour choices, are assigned to the line colour parameter lc. (Legacy OpenGL analogues of these line-attribute functions are sketched after this slide.)
CURVE ATTRIBUTES
• Curves can be displayed with varying colours, widths, dot-dash patterns and available pen or brush options.
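The slides use PHIGS function names; the closest legacy (fixed-function) OpenGL analogues of the three line attributes look like this. Note that glLineStipple is deprecated in core profiles from OpenGL 3.2 onward:

    #include <GL/glut.h>

    void setLineAttributes() {
        glLineWidth(2.0f);             // line width: 1.0f is the standard width
        glEnable(GL_LINE_STIPPLE);     // line type: enable dash/dot patterns
        glLineStipple(1, 0x00FF);      // repeat factor 1, pattern 0x00FF = dashed
        glColor3f(1.0f, 0.0f, 0.0f);   // line colour: red
    }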