Chapter 3
WHAT IS OPENGL?
OpenGL is a software interface to graphics hardware. This
interface consists of more than 700 distinct commands (about
670 commands as specified for OpenGL Version 3.0 and
another 50 in the OpenGL Utility Library) that you use to
specify the objects and operations needed to produce
interactive three-dimensional applications.
OpenGL is designed as a streamlined, hardware-independent
interface to be implemented on many different hardware
platforms.
To achieve these qualities, no commands for performing
windowing tasks or obtaining user input are included in
OpenGL; instead, you must work through whatever
windowing system controls the particular hardware you're
using.
THE RENDERING PIPELINE
The rendering pipeline is the sequence of steps that OpenGL takes when rendering objects: vertex attributes and other data go through a series of stages to generate the final image on the screen. There are usually nine steps in this pipeline, most of which are optional and many of which are programmable.
The sequence of steps taken by OpenGL to generate an image is as follows:
1. Vertex Specification: you provide an ordered list of vertices that define the boundaries of the primitive. Along with this, one can define other vertex attributes such as color and texture coordinates. This data is then sent down the pipeline and manipulated along the way (see the sketch after this list).
2. Vertex Shader: the vertex data specified above now passes through the vertex shader, a program written in GLSL that manipulates the vertex data. Its main job is to calculate the final position of each vertex. The vertex shader executes once for every vertex the GPU processes (for a triangle, three times), so a scene of one million vertices runs the vertex shader one million times.
3. Tessellation: this is an optional stage in which primitives are tessellated, i.e. subdivided into a finer mesh of triangles.
4. Geometry Shader: this stage is also optional. The job of the geometry shader is to take an input primitive and generate zero or more output primitives; if a triangle strip is sent as a single primitive, the geometry shader sees it as a series of triangles. A geometry shader can remove primitives, or tessellate them by outputting many primitives for a single input. Geometry shaders can also convert primitives to different types: for example, a point primitive can become a triangle.
5. Vertex Post-Processing: this is a fixed-function stage, i.e. the user has very limited or no control over it. Its most important part is clipping, which discards the portions of primitives that lie outside the viewing volume.
6. Primitive Assembly: this stage collects the vertex data into an ordered sequence of simple primitives (points, lines, or triangles).
7. Rasterization: this is an important step in the pipeline. Rasterization determines which pixels each primitive covers; its output is a set of fragments.
8. Fragment Shader: another programmable GLSL stage, executed once for each fragment, whose main job is to compute the final color of each fragment.
9. Per-Sample Operations: a final sequence of tests (such as the depth test) and operations (such as blending) applied to each fragment before it is written to the framebuffer.
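To make steps 1 and 2 concrete, here is a minimal sketch in C++, assuming an OpenGL 3.3 context has already been created (e.g. with GLFW) and that GLEW provides the function pointers; the vertex positions and attribute layout are illustrative:

#include <GL/glew.h>

void setupTriangle() {
    // Step 1, Vertex Specification: an ordered list of vertices for one
    // triangle, uploaded into a GPU buffer and described to OpenGL.
    float vertices[] = {
        -0.5f, -0.5f,   // bottom-left
         0.5f, -0.5f,   // bottom-right
         0.0f,  0.5f    // top
    };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // Step 2, Vertex Shader: a GLSL program that runs once per vertex and
    // computes the final position of each vertex.
    const char* vertexShaderSrc =
        "#version 330 core\n"
        "layout(location = 0) in vec2 aPos;\n"
        "void main() {\n"
        "    gl_Position = vec4(aPos, 0.0, 1.0);\n"
        "}\n";
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexShaderSrc, nullptr);
    glCompileShader(vs);
}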
COORDINATE SYSTEMS
So far we have been specifying the vertices of the triangle by their local-space coordinates. Try to imagine where the origin would sit relative to the triangle.
The box then needs to be placed in world space, a coordinate system that describes how it exists in the "world" along with every other object. The box is scaled, then rotated, and finally translated into world space.
The three transformations correspond to three different
matrices, and the order of multiplication matters! Their
product is what we call a model matrix.
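As a sketch of how these three matrices combine, assuming the GLM mathematics library (the scale, angle, and translation values here are illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Scale first, then rotate, then translate. With column vectors the matrix
// applied first is written last, so the model matrix is T * R * S.
glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                          glm::vec3(0.0f, 1.0f, 0.0f));   // 45 degrees about y
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(3.0f, 0.0f, 0.0f));
glm::mat4 model = T * R * S;   // order matters: v' = T * (R * (S * v))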
Then, for a camera observing this world, imagine how it perceives the world: it knows what is in front of, behind, left of, right of, above, or below it. These directions form another coordinate system describing what the camera sees. Our box is then transformed into this coordinate system, called view space. The transformation used is represented by a view matrix.
Every camera has its own view space, but for rendering to the
screen, we usually only care about one particular camera’s
view space.
Usually, the basis of the coordinate system is taken to be
the up, right, and front vectors of the camera in world space.
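A sketch of building a view matrix with GLM's lookAt, which derives exactly this basis from the camera's position, target, and up direction (the values are illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 eye(0.0f, 2.0f, 5.0f);      // camera position in world space
glm::vec3 center(0.0f, 0.0f, 0.0f);   // the point the camera looks at
glm::vec3 up(0.0f, 1.0f, 0.0f);       // world-space up direction
glm::mat4 view = glm::lookAt(eye, center, up);   // world space -> view space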
From the camera's point of view, there are regions it simply cannot see. We don't need to render what it cannot see, so we clip it, and only what the camera can see is left.
• You can imagine this process to be like how everything around you reflects light and projects its appearance into your eyes. The back of your chair, for example, cannot reflect any light that reaches your eyes, so it is "clipped" from your point of view.
• Note that this is a projection from a three-dimensional space to a
two-dimensional space, represented by a projection matrix.
Coordinates in clip space are then mapped onto your viewport, into screen coordinates, which directly correspond to the fragments that your fragment shader processes.
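A sketch of this whole chain for a single point, assuming GLM and a perspective projection; the matrices and viewport size are taken as given:

#include <glm/glm.hpp>

// Clip-space coordinates are divided by w (the perspective divide) to get
// normalized device coordinates in [-1, 1], which are then mapped to the
// viewport in screen coordinates.
glm::vec2 toScreen(const glm::vec3& p, const glm::mat4& model,
                   const glm::mat4& view, const glm::mat4& proj,
                   float viewportW, float viewportH) {
    glm::vec4 clip = proj * view * model * glm::vec4(p, 1.0f);
    glm::vec3 ndc = glm::vec3(clip) / clip.w;          // perspective divide
    float sx = (ndc.x * 0.5f + 0.5f) * viewportW;      // [-1, 1] -> [0, w]
    float sy = (ndc.y * 0.5f + 0.5f) * viewportH;      // [-1, 1] -> [0, h]
    return glm::vec2(sx, sy);
}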
Just like how you can switch out a camera’s lens for
different photographic effects, you can change how the
camera perceives the view space by changing how it
is projected into its clip space. In other words, you change its projection matrix, which describes a frustum outside of which everything gets clipped.
The orthographic projection is best described by
how every parallel line in the world space remains parallel
after the projection. It is often used in engineering.
The frustum of the orthographic projection is represented by two identically-sized planes, the near plane and the far plane, and the distance between them. Note that the planes are parallel to each other.
Below is a function that generates the projection matrix of an orthographic projection.
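This is a minimal sketch assuming OpenGL's clip-space conventions and GLM's column-major mat4; it mirrors what glm::ortho computes:

#include <glm/glm.hpp>

// Maps the box [left, right] x [bottom, top] x [-zNear, -zFar] into the
// [-1, 1] clip cube; parallel lines stay parallel because w stays 1.
glm::mat4 orthographic(float left, float right, float bottom, float top,
                       float zNear, float zFar) {
    glm::mat4 m(1.0f);
    m[0][0] = 2.0f / (right - left);
    m[1][1] = 2.0f / (top - bottom);
    m[2][2] = -2.0f / (zFar - zNear);
    m[3][0] = -(right + left) / (right - left);
    m[3][1] = -(top + bottom) / (top - bottom);
    m[3][2] = -(zFar + zNear) / (zFar - zNear);
    return m;
}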
VIEWING USING A SYNTHETIC CAMERA
[Figure: pinhole camera with an object and its projection]
The synthetic camera moves the image plane in front of the center of projection, so that the projected coordinates are no longer negated (the image is not inverted):
[Figure: synthetic camera with the image plane in front of the center of projection]
We'll use (X, Y, Z) to refer to the 3D coordinates of a point in space, and (x, y, z) to refer to the projected image coordinates. In the above diagram, the negative z axis points to the right, so z = −d.
Consider the y coordinate in the above pictures. Using similar triangles, Y/Z = y/d, so y = d(Y/Z), which can be written as y = Y/(Z/d).
We can derive an expression for x in a similar way: x = X/(Z/d).
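A sketch of this projection in C++ (d is the distance from the center of projection to the image plane):

// Projects a 3D point (X, Y, Z) onto the image plane at distance d, using
// the similar-triangles result above: x = X/(Z/d), y = Y/(Z/d).
struct Point2 { float x, y; };

Point2 project(float X, float Y, float Z, float d) {
    Point2 p;
    p.x = X / (Z / d);
    p.y = Y / (Z / d);
    return p;
}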
VISUALIZATION OF THE CAMERA API
The diagram below summarizes the key elements of the camera setup:
[Figure: key elements of the camera setup]
In the code example, the camera parameters (adjusted in the GUI) are stored in an object named cameraParams.
Note that at the end of your code you also need to render the scene with your camera, via something like a renderer.render(scene, camera) call. TW's camera setup did this for us automatically; with this code, though, we can cut the TW apron strings.
Question: Can we set up multiple cameras in the same scene? Not
really, but see this example from the Three.js site
CREATING DESIRED CAMERA VIEWS
OUTPUT PRIMITIVES AND ATTRIBUTES
Any parameter that affects the way a primitive is to be displayed is called an attribute parameter.
Line Attributes
Basic attributes of a straight line segment are its type, its width, and its colour.
Line type:
Lines are categorized into solid lines, dashed lines, and dotted lines. Dashed lines can be displayed by generating an inter-dash spacing equal to the length of the solid dash sections.
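The line-type attribute has a direct analogue in legacy (fixed-function) OpenGL, where a stipple pattern selects solid, dashed, or dotted lines. A minimal sketch, assuming a compatibility-profile context (this is the OpenGL mechanism, not the textbook's function):

#include <GL/gl.h>

void drawDashedLine() {
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1, 0x00FF);     // 8 bits on, 8 bits off: a dashed line
    // glLineStipple(1, 0x0101);  // isolated bits: a dotted line
    glBegin(GL_LINES);
    glVertex2f(-0.8f, 0.0f);
    glVertex2f( 0.8f, 0.0f);
    glEnd();
    glDisable(GL_LINE_STIPPLE);
}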
LINE WIDTH
The line width attribute is set with the command
setLinewidthScaleFactor (lw);
The linewidth parameter lw is assigned a positive number to indicate the relative width of the line to be displayed. The value 1 specifies the standard-width line.
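The legacy OpenGL analogue (not the textbook's function) is glLineWidth, which sets the width in pixels, with 1.0 as the standard width:

#include <GL/gl.h>

void drawWideLine() {
    glLineWidth(3.0f);            // three times the standard width
    glBegin(GL_LINES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f,  0.5f);
    glEnd();
    glLineWidth(1.0f);            // restore the standard width
}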
LINE COLOUR
The line colour value is set with the function
setPolylineColourIndex (lc);
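In legacy OpenGL the equivalent is glColor3f, which sets the current colour (RGB components in [0, 1]) for subsequent primitives; a minimal sketch:

#include <GL/gl.h>

void drawRedLine() {
    glColor3f(1.0f, 0.0f, 0.0f);  // red: full R, no G, no B
    glBegin(GL_LINES);
    glVertex2f(-0.5f, 0.0f);
    glVertex2f( 0.5f, 0.0f);
    glEnd();
}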