The rendering pipeline involves several steps: 1. Geometry processing converts 3D world coordinates to 2D screen coordinates and performs operations like lighting and clipping. 2. Pixel processing determines which surfaces are visible by comparing depth values at each pixel position. The depth buffer algorithm stores a depth value for each pixel and only draws pixels from surfaces closer to the viewpoint. 3. Shading calculates the final color value at each pixel using lighting and shading models. Pixels are drawn to the frame buffer to compose the final image.

Uploaded by arve1111
Copyright: © Attribution Non-Commercial (BY-NC)

Rendering pipeline

Viewing (Geometry Processing) → Rendering (Pixel Processing)

The stages take the scene from world coordinates (floating point) to screen coordinates (integer). Stages 1–5 operate per polygon; stages 6–7 operate per pixel of a polygon.

1. Selective traversal of the object database (or traverse the scene graph to get the CTM)
2. Transform vertices to the canonical view volume
3. Light at vertices: calculate light intensity at the vertices (lighting model of choice)
4. Conservative VSD: back-face culling
5. Conservative VSD: view-volume clipping
6. Image-precision VSD: compare pixel depth (Z-buffer)
7. Shading: interpolate color values (Gouraud) or normals (Phong)


Two Corrections!
• The projection equations in the assignment problem correspond to a COP on the −z axis and a view plane at the XY plane.
• Interpolated unit normals must be re-normalized to keep them unit length – this is why Phong shading is expensive!
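The second correction can be seen numerically. A minimal sketch (the `lerp` and `normalize` helpers are illustrative, not from the slides): linearly interpolating two unit normals yields a vector shorter than unit length, so Phong shading must re-normalize at every pixel.

```python
import math

def lerp(a, b, t):
    """Linearly interpolate two 3-vectors (hypothetical helper)."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

def normalize(v):
    """Rescale to unit length - the extra per-pixel cost of Phong shading."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Two unit normals at adjacent vertices.
n0 = (1.0, 0.0, 0.0)
n1 = (0.0, 1.0, 0.0)

# Halfway along the edge the interpolated vector is no longer unit length:
mid = lerp(n0, n1, 0.5)      # (0.5, 0.5, 0.0), length ~ 0.707
renorm = normalize(mid)      # unit length restored
```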
Rasterization converts all objects to pixels in the image, but we must make sure we don't draw occluded objects.

For each pixel in the viewport, what is the nearest object in the scene (provided the object isn't transparent)?
Thus, we need to determine the visible surfaces.
Definition
Given a set of 3-D objects and a view specification (camera), determine which lines or surfaces of the objects are visible.

Also called Hidden Surface Removal (HSR).

[Figure: the canonical house]
VSD algorithms
• We can broadly classify VSD algorithms according to whether they deal with object definitions or with their projected images. The former are called object-space methods and the latter image-space methods.
• Object-space: compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
• Image-space: visibility is decided point by point at each pixel position on the projection plane.
• (Most algorithms studied are image-space.)

Back-Face Culling

• Line-of-Sight Interpretation
  – The approach assumes objects are defined as closed polyhedra, with the eye point always outside of them.
  – Use the outward normal (ON) of a polygon to test for rejection.
  – LOS = Line of Sight, the projector from the center of projection (COP) to any point P on the polygon. (For parallel projections, LOS = DOP = direction of projection.)
  – If the normal faces in the same direction as the LOS, it's a back face:
    • if LOS · ON > 0, then the polygon is invisible – discard it
    • if LOS · ON < 0, then the polygon may be visible
• To render one lone polyhedron, back-face culling is the only VSD you need.
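The LOS · ON test above can be sketched as follows (the helper names are illustrative, not from any graphics API):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def is_back_face(outward_normal, line_of_sight):
    """LOS . ON > 0 means the face points away from the eye: cull it."""
    return dot(line_of_sight, outward_normal) > 0

# Eye looks down -z; the LOS points from the COP into the scene.
los = (0.0, 0.0, -1.0)
assert is_back_face((0.0, 0.0, -1.0), los)       # normal faces away -> back face
assert not is_back_face((0.0, 0.0, 1.0), los)    # normal faces the eye -> keep
```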
Painter's Algorithm

• Sort objects back to front, then draw each object in depth order; near objects overwrite far objects:

  Loop over objects (from back to front)
      rasterize current object
      write pixels

– Create a drawing order, each polygon overwriting the previous ones, that guarantees correct visibility at any pixel resolution.
– The strategy is to work back to front: find a way to sort polygons by depth (z), then draw them in that order.
– Do a rough sort of the polygons by the largest (farthest) z-coordinate in each polygon.
– Scan-convert the most distant polygon first, then work forward towards the viewpoint ("painter's algorithm").
– We can either do a complete sort and then scan-convert, or we can paint as we go.
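The back-to-front strategy amounts to a sort on the farthest z of each polygon. A minimal sketch, assuming larger z means farther from the viewpoint (the dictionaries are made-up stand-ins for real polygon records):

```python
# Each polygon carries its largest (farthest) z-coordinate.
polygons = [
    {"name": "near", "max_z": 0.2},
    {"name": "far",  "max_z": 0.9},
    {"name": "mid",  "max_z": 0.5},
]

# Rough sort by farthest z, descending, then paint back to front:
draw_order = sorted(polygons, key=lambda p: p["max_z"], reverse=True)
names = [p["name"] for p in draw_order]   # ['far', 'mid', 'near']
```

Drawing in this order lets each nearer polygon overwrite the farther ones it overlaps.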
Depth Buffer Method
• A commonly used image-space approach for VSD.
• Compares surface depth values throughout the scene for each pixel position on the projection plane.
• Each surface of a scene is processed separately, one pixel position at a time across the surface.
• Usually applied to scenes containing polygon surfaces.
• Implementation of the depth-buffer algorithm is typically carried out in normalized coordinates, so that depth values range from 0 at the near clipping plane to 1.0 at the far clipping plane. (Window-to-viewport mapping is then done and lighting is calculated for each pixel.)
• Also called the z-buffer method.
• The Z-buffer algorithm
  – The Z-buffer is initialized to the background value (furthest plane of the view volume = 1.0).
  – As each object is traversed, z-values of all its sample points are compared to the z-value at the same (x, y) location in the Z-buffer.
    • z could be determined by plugging x and y into the plane equation for the polygon (Ax + By + Cz + D = 0)
    • in reality, we calculate z at the vertices and interpolate the rest
  – If the new point has a z-value less than the previous one (i.e., closer to the eye), its z-value is placed in the Z-buffer and its color is placed in the frame buffer at the same (x, y); otherwise the previous z-value and frame-buffer color are unchanged.
  – Depth can be stored as integers, floats, or fixed point.
    • e.g., for an 8-bit (1 byte) integer Z-buffer, map 0.0 → 0 and 1.0 → 255
    • each representation has its advantages in terms of precision
  – Doesn't handle transparency well.
  – Z-buffers typically use integer depth values.
  – Requires two "buffers":
    • Intensity buffer – our familiar RGB pixel buffer (initialized to the background color)
    • Depth ("Z") buffer – the depth of the scene at each pixel (initialized to the far depth = 255)
  – Polygons are scan-converted in arbitrary order. When pixels overlap, use the Z-buffer to decide which polygon "gets" that pixel.
Example (integer Z-buffer with near = 0, far = 255): the buffer starts filled with 255. Compositing a polygon at depth 127 writes 127 into every pixel it covers, since 127 < 255. Compositing a second polygon at depth 63 then overwrites both the background (255) and the first polygon (127) wherever they overlap, since 63 is closer to the eye.
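The compositing rule in the example (a pixel is taken only when the incoming depth is smaller) can be sketched on a small buffer. The 4×4 size and the two toy polygons are illustrative; the slides use an 8×8 grid:

```python
FAR = 255          # far-plane depth for an 8-bit integer Z-buffer
SIZE = 4           # toy 4x4 buffer

# Depth buffer initialized to the far plane.
zbuf = [[FAR] * SIZE for _ in range(SIZE)]

def composite(zbuf, polygon):
    """Scan-convert one polygon: a pixel is taken only if the polygon
    is closer (smaller z) than what the buffer already holds."""
    for (x, y), z in polygon.items():
        if z < zbuf[y][x]:
            zbuf[y][x] = z

# A polygon at depth 127 covering an upper-left triangle:
tri127 = {(x, y): 127 for y in range(SIZE) for x in range(SIZE - y)}
composite(zbuf, tri127)

# A closer polygon at depth 63 covering the bottom row:
row63 = {(x, SIZE - 1): 63 for x in range(SIZE)}
composite(zbuf, row63)
# The bottom row is now all 63: it beat both 255 (background) and 127.
```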


– Draw every polygon that we can't reject trivially.
– If we find a piece (one or more pixels) of a polygon that is closer to the front, we paint over whatever was behind it.

void zBuffer()
{
    int x, y;
    /* Clear the frame buffer and initialize the Z-buffer to the far plane. */
    for (y = 0; y < YMAX; y++)
        for (x = 0; x < XMAX; x++) {
            WritePixel(x, y, BACKGROUND_VALUE);
            WriteZ(x, y, 1.0);
        }
    for (each polygon)
        for (each pixel in polygon's projection) {
            double pz = polygon's Z-value at pixel (x, y);
            if (pz < ReadZ(x, y)) {
                /* New point is closer to front of view */
                WritePixel(x, y, polygon's color at pixel (x, y));
                WriteZ(x, y, pz);
            }
        }
}
Z-Buffer applet: http://www.cs.technion.ac.il/~cs234325/Homepage/Applets/applets/zbuffer/GermanApplet.html
– Once we have za and zb for each edge, we can incrementally calculate zp as we scan.
– Its simplicity lends itself well to hardware implementations: FAST
  • used by all graphics cards
– Polygons do not have to be compared in any particular order: no presorting in z is necessary.
– Only considers one polygon at a time
  • brute force, but it is fast!
– The Z-buffer can be stored with an image; this allows you to correctly composite multiple images (easy!) without having to merge the models (hard!)
  • great for incremental addition to a complex scene
– Can be used for non-polygonal surfaces, CSG, and any z = f(x, y).
– In some systems, the user can provide a region to z-buffer, thus saving computation time.
– Also, z-buffering can be performed for a small region and moved around to finish the entire viewport.
A-Buffer
• A drawback of the depth buffer is that it identifies only one visible surface at each pixel position, i.e., it deals only with opaque surfaces.
• For transparent surfaces, it is necessary to accumulate color values for more than one surface.
• In the A-buffer, each depth-buffer position references a linked list of surfaces.
• This allows a pixel color to be computed as a combination of different surface colors, for transparency and anti-aliasing effects.
• Surface information in the A-buffer (accumulation buffer) includes: RGB intensity, opacity, depth, percent of area coverage (used for anti-aliasing effects), and other rendering parameters.
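A sketch of the per-pixel fragment list idea, assuming simple back-to-front "over" compositing (the fragment records and the `resolve` helper are hypothetical, not the A-buffer's actual data layout):

```python
# One pixel's fragment list: each fragment has a depth, an RGB color, and an opacity.
fragments = [
    {"depth": 0.8, "color": (0.0, 0.0, 1.0), "alpha": 1.0},  # opaque blue, farthest
    {"depth": 0.3, "color": (1.0, 0.0, 0.0), "alpha": 0.5},  # semi-transparent red
]

def resolve(fragments):
    """Blend the fragments back to front: each layer's color is mixed
    over the accumulated color by its opacity."""
    color = (0.0, 0.0, 0.0)
    for f in sorted(fragments, key=lambda f: f["depth"], reverse=True):
        a = f["alpha"]
        color = tuple(a * c + (1.0 - a) * old for c, old in zip(f["color"], color))
    return color

pixel = resolve(fragments)   # (0.5, 0.0, 0.5): half red over blue
```

A plain Z-buffer would keep only the red fragment; the fragment list lets the blue surface show through.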
Scan-Line Algorithm
(Wylie, Romney, Evans and Erdahl)
• For each horizontal scan line:
    find all intersections with edges of all polygons (ignore horizontal boundaries);
    sort intersections by increasing x and store them in the Edge Table;
    for each intersection on the scan line do
        if the intersected edge is a left edge then {entering polygon}
            set the in-code of the polygon;
            determine if the polygon is visible, and if so use its color (from the Polygon Table) up to the next intersection;
        else {leaving polygon}
            determine which polygon is visible to the right of the edge, and use its color up to the next intersection;

[Figure: Active Edge Table contents]

Ray Casting

– Ray casting is based on geometric optics: tracing the paths of light rays.
– It is a special case of ray-tracing algorithms, which trace multiple ray paths to pick up global reflection and refraction contributions from multiple objects in the scene.
– Consider the line of sight from a pixel position on the view plane through the scene; we can determine which objects in the scene (if any) the ray intersects.
– After calculating all ray-surface intersections, we identify the visible surface as the one whose intersection point is closest to the pixel.
– It works with any primitive for which we can write an intersection test.
– But it is slow.

Can be used for shadows, refractive objects, reflections, etc.

Loop over every pixel (x, y):
    shoot a ray from the eye through (x, y)
    intersect it with all surfaces
    find the first intersection point
    write the pixel
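The loop above can be sketched with a ray-sphere intersection test (the scene, the helper names, and the choice of spheres as the primitive are illustrative assumptions):

```python
import math

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def ray_sphere(origin, direction, center, radius):
    """Nearest positive t with origin + t*direction on the sphere, else None.
    Assumes `direction` is unit length (quadratic coefficient a = 1)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def cast_ray(origin, direction, spheres):
    """Visible-surface determination: keep the hit closest to the eye."""
    hits = [(t, s) for s in spheres
            if (t := ray_sphere(origin, direction, s["center"], s["radius"])) is not None]
    return min(hits, default=None, key=lambda h: h[0])

eye = (0.0, 0.0, 0.0)
ray = (0.0, 0.0, -1.0)   # the ray through one pixel, into the scene
scene = [{"name": "far",  "center": (0.0, 0.0, -10.0), "radius": 1.0},
         {"name": "near", "center": (0.0, 0.0, -5.0),  "radius": 1.0}]
hit = cast_ray(eye, ray, scene)   # nearest hit: the "near" sphere at t = 4
```

Both spheres intersect the ray, but only the nearer intersection determines the pixel, exactly as in the Z-buffer, just computed per ray instead of per polygon.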
