Computer Unit 5 Notes

The document discusses Bundle Adjustment (BA), an optimization technique used to refine 3D coordinates and camera parameters to minimize reprojection error, applicable in various fields like Structure from Motion and panoramic image stitching. It outlines the input requirements, optimization process, and real-time examples, emphasizing its importance in enhancing the accuracy of 3D reconstructions. Additionally, it covers concepts related to panorama rendering, including techniques like image stitching, gap closing, and cylindrical and spherical projections.

Uploaded by Bhuvana H

Unit 5 Notes – Image Rendering

Bundle Adjustments
Bundle Adjustment (BA) is an iterative optimization technique that refines the 3D
coordinates of a scene and the camera parameters (like position and orientation) to minimize
reprojection error.

It’s called “bundle adjustment” because light rays from the 3D points to the cameras form a
“bundle” — and the method adjusts this bundle for the best fit.

Where is it used?
 Structure from Motion (SfM)
 3D reconstruction
 Panoramic image stitching
 SLAM (Simultaneous Localization and Mapping)
 Augmented reality

Input to Bundle Adjustment


 A set of 2D image points (feature correspondences)
 A set of initial camera parameters (intrinsic & extrinsic)
 An initial estimate of 3D point positions

Goal of Bundle Adjustment


Minimize the difference between the observed 2D points and the projected 3D points.

This is done by adjusting:

 The 3D point locations
 The camera poses (position + orientation)
 Optionally, the intrinsic camera parameters
Reprojection Error (Cost Function)
The core idea is to minimize the sum of squared reprojection errors:

E = Σᵢ Σⱼ ‖ xᵢⱼ − P(Cⱼ, Xᵢ) ‖²

where xᵢⱼ is the observed 2D position of point i in image j, and P(Cⱼ, Xᵢ) is the projection of the 3D point Xᵢ using the parameters Cⱼ of camera j.

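As an illustration, this cost can be computed directly. The sketch below assumes a simplified pinhole camera described only by a focal length and a principal point, with no rotation or translation; this is a toy model, not the full camera parameterization used in practice.

```python
def project(point3d, camera):
    """Project a 3D point with a simplified pinhole camera.

    camera = (f, cx, cy): focal length and principal point.
    The camera sits at the origin looking down +Z (no rotation or
    translation), purely to keep the sketch short.
    """
    f, cx, cy = camera
    X, Y, Z = point3d
    return (f * X / Z + cx, f * Y / Z + cy)

def reprojection_error(observations, points3d, camera):
    """Sum of squared distances between observed and reprojected 2D points."""
    total = 0.0
    for (u, v), p in zip(observations, points3d):
        pu, pv = project(p, camera)
        total += (u - pu) ** 2 + (v - pv) ** 2
    return total
```

Bundle adjustment drives this quantity toward its minimum by varying both the 3D points and the camera parameters.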
Optimization Process
1. Initialize 3D points and camera parameters (e.g., from triangulation).
2. Compute reprojection errors.
3. Adjust 3D points and camera parameters to minimize reprojection error.
4. Repeat until convergence.

Tools: the Levenberg–Marquardt algorithm (a non-linear least-squares solver) is often used.
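The four steps above can be sketched on a toy problem. The sketch below adjusts only a single 3D point with the cameras held fixed, and replaces Levenberg–Marquardt with plain numeric-gradient descent; the camera model (a focal length plus a translation, no rotation) is an assumption made purely to keep the example short.

```python
def residuals(point, cameras, observations):
    """Reprojection residuals of one 3D point seen by several cameras.

    Each camera is (f, tx, ty, tz): a focal length plus a translation
    applied to the point before pinhole projection (no rotation,
    to keep the sketch short).
    """
    res = []
    for (f, tx, ty, tz), (u, v) in zip(cameras, observations):
        X, Y, Z = point[0] + tx, point[1] + ty, point[2] + tz
        res.append(f * X / Z - u)
        res.append(f * Y / Z - v)
    return res

def refine_point(point, cameras, observations, iters=1000, lr=1e-4, eps=1e-6):
    """Steps 2-4 of the loop: compute errors, adjust, repeat.

    Uses forward-difference gradients and plain gradient descent;
    a real implementation would use Levenberg-Marquardt instead.
    """
    p = list(point)
    for _ in range(iters):
        e0 = sum(r * r for r in residuals(p, cameras, observations))
        grad = []
        for k in range(3):
            q = list(p)
            q[k] += eps
            e1 = sum(r * r for r in residuals(q, cameras, observations))
            grad.append((e1 - e0) / eps)
        p = [pk - lr * g for pk, g in zip(p, grad)]
    return p
```

Real bundle-adjustment solvers exploit the sparse structure of the problem's Jacobian so that thousands of points and cameras can be optimized jointly.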

Real-Time Example: Panorama Stitching


In panoramic stitching:

 Each image has a different view of the same scene.
 Feature points (like corners) are matched across images.
 Using bundle adjustment:
o Camera rotation angles are adjusted.
o Overlapping points are aligned for seamless stitching.
Real-Time Example: SfM in Drone Mapping
 Drone captures images from different altitudes.
 SfM reconstructs the terrain using feature matches.
 Bundle adjustment refines:
o Drone's camera positions
o The 3D terrain points
 Final output is an accurate 3D model of the land.

Why is it Important?
 Increases accuracy of 3D reconstructions
 Reduces accumulated error drift
 Aligns multi-view geometry precisely
 Essential in autonomous driving, robotics, and AR

✅ Summary Table
Goal: Minimize reprojection error
Adjusts: 3D points, camera poses (and intrinsics)
Used in: SfM, 3D mapping, panorama, SLAM
Algorithm: Nonlinear least squares (e.g., Levenberg–Marquardt)
Benefits: Accurate reconstruction, better alignment, realistic rendering

Example Step-by-Step
Step 1: Capture & Feature Matching
Image 1 feature point: (x₁, y₁) = (120, 80)
Image 2 feature point: (x₂, y₂) = (100, 75)

These 2D points correspond to the same 3D point on the bottle cap.
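For a rectified stereo pair (cameras side by side with aligned image rows), the depth of such a matched point follows from its disparity via Z = f · B / d. The focal length and baseline below are assumed values chosen for illustration; they are not given in the notes.

```python
def stereo_depth(x_left, x_right, f, baseline):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f * baseline / disparity

# Matched x-coordinates from the example: 120 in image 1, 100 in image 2.
# Assumed intrinsics: f = 500 px, baseline = 0.1 m -> Z = 500 * 0.1 / 20 = 2.5 m.
depth = stereo_depth(120.0, 100.0, f=500.0, baseline=0.1)
```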


Real-World Use Case: Drone Terrain Mapping
 A drone flies over a field and captures images at intervals.
 Feature points on the ground (rocks, road edges) are matched across images.
 Initial 3D terrain model is rough.
 Bundle Adjustment:
o Refines drone GPS + camera poses
o Refines 3D ground points
 Result: accurate 3D terrain map, used in agriculture, disaster response, etc.

Key Points to Remember for Exams


Purpose: Refine 3D points + camera poses
Input: Initial camera poses, 3D points, 2D projections
Output: Optimized parameters that reduce reprojection error
Used in: SfM, SLAM, 3D rendering, panoramic stitching
Techniques: Levenberg–Marquardt, nonlinear least squares

Panorama and Related Concepts in Image Rendering Techniques

1. Panorama in Image Rendering

Definition: A panorama in image rendering refers to a wide-angle view or representation of a physical space, created by stitching multiple overlapping images captured from different angles or viewpoints.

Objective: To seamlessly blend individual images into a single continuous image to represent a broader field of view than what a single image can capture.

Applications:

 Virtual tours (e.g., real estate walkthroughs)
 Surveillance systems (e.g., panoramic CCTV systems)
 Landscape photography (e.g., mountain range captures)
 Robotics and autonomous navigation (e.g., visual SLAM)

Techniques Used:

 Image stitching
 Feature detection (e.g., SIFT, SURF)
 Homography estimation
 Blending and warping

Steps Involved:

1. Capture multiple overlapping images.
2. Detect keypoints in each image.
3. Match features between overlapping regions.
4. Estimate transformation matrices (homography).
5. Warp and blend images into a common reference frame.
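Step 4 can be sketched with the Direct Linear Transform (DLT): each correspondence contributes two linear equations in the nine entries of H, and the solution is the null vector of the stacked system. A minimal version, without the coordinate normalization or outlier rejection that real stitchers add:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst points via DLT.

    Needs at least 4 correspondences in general position.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of the system: the right singular vector
    # belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, point):
    """Warp one point: homogeneous multiply, then divide by w."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return x / w, y / w
```

With the homography in hand, step 5 warps every pixel of one image into the other's frame before blending.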

2. Rotational Panorama

Definition: A rotational panorama is a type of panorama captured by rotating the camera around its optical center (nodal point), typically using a tripod or a robotic mount.

Characteristics:

 Captures a 360-degree field of view.
 Minimizes parallax since the camera rotates around a fixed point.
 Produces smoother and more accurate panoramic images.

Real-Time Use Cases:

 360-degree virtual tours in museums
 Immersive experiences in tourism apps (e.g., Google Street View)
 Photography using panoramic tripod heads

For a pure rotation about the optical center, pixels in two views are related by the homography x′ = K R K⁻¹ x, where K is the intrinsic matrix and R the relative rotation. This relation is valid when the camera rotates around its center, or when the scene is distant enough to be treated as planar.
3. Gap Closing

Definition: Gap closing refers to the process of removing visual seams, misalignments, or
empty regions (gaps) in the stitched panoramic image.

Causes of Gaps:

 Parallax errors
 Inaccurate camera calibration
 Poor feature matching
 Exposure differences

Techniques for Gap Closing:

 Seam carving or optimal seam selection
 Image blending (multiband blending, feathering)
 Inpainting using CNN-based generative models
 Mesh warping for local adjustments

Example Use Case:

 Street-level image stitching where lamp posts or moving vehicles create parallax — seam optimization and mesh warping help smooth the final panorama.

Mathematical Technique (Multiband Blending): Combines the Laplacian pyramids of the images with a Gaussian pyramid of the mask to blend overlapping regions.
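Feathering, the simplest of the blending techniques listed above, weights each image by a linear ramp across the overlap. A 1-D sketch of the idea:

```python
def feather_blend(left, right, overlap):
    """Blend two 1-D signals whose last/first `overlap` samples overlap.

    The weight of the right signal ramps linearly from 0 to 1 across
    the overlap (and the left signal's from 1 to 0), hiding the seam.
    """
    n = overlap
    blended = list(left[:-n])
    for i in range(n):
        w = (i + 1) / (n + 1)  # right-image weight grows across the overlap
        blended.append((1 - w) * left[len(left) - n + i] + w * right[i])
    blended.extend(right[n:])
    return blended
```

Applied row by row to two exposures of different brightness, the hard seam becomes a gradual transition; multiband blending refines this by using different transition widths per frequency band.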

4. Cylindrical Coordinates in Panorama

Definition: Cylindrical projection maps image pixels onto a virtual cylinder wrapped around
the camera. The unwrapped cylinder gives a panoramic image.

Advantages:

 Suitable for wide horizontal FOV (field of view).
 Minimizes distortion for vertical lines.

Use Cases:

 Virtual reality environments
 Wide-angle sports and concert photography
 Surveillance cameras with panoramic lenses
Example:

 A stadium panorama captured from the center using a wide-angle cylindrical lens and
stitched using cylindrical warping.
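The cylindrical warp maps a pixel (x, y), measured from the principal point, onto a cylinder of radius f via θ = atan(x / f) and h = y / √(x² + f²); multiplying both by f keeps the result in pixel units. A small sketch (the focal length used in the test is an assumed value):

```python
import math

def cylindrical_coords(x, y, f):
    """Map a pixel (x, y), centered on the principal point, onto a
    cylinder of radius f. Returns the unwrapped panorama coordinates
    (f * theta, f * h), in pixel units."""
    theta = math.atan2(x, f)          # angle around the cylinder axis
    h = y / math.hypot(x, f)          # height on the unit cylinder
    return f * theta, f * h
```

Warping every image this way turns the stitching problem into simple horizontal translation, which is why cylindrical panoramas are easy to assemble.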

5. Spherical Coordinates in Panorama

Definition: Spherical projection maps image points onto a sphere centered at the camera’s
optical center. The final image is created by unwrapping this sphere.

Advantages:

 Captures both horizontal and vertical FOV.
 Best suited for 360-degree panoramic scenes.

Use Cases:

 VR headsets and immersive gaming
 Planetarium projections
 Autonomous vehicle camera systems

Example:

 360° camera capturing interior of a car — spherical projection ensures full vertical
and horizontal coverage with minimal distortion.
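The spherical warp uses two angles instead of one: θ = atan(x / f) for longitude and φ = atan(y / √(x² + f²)) for latitude, again scaled by f to stay in pixel units. A small sketch (the focal length in the test is an assumed value):

```python
import math

def spherical_coords(x, y, f):
    """Map a pixel (x, y), centered on the principal point, onto a
    sphere of radius f. Returns (f * theta, f * phi): longitude and
    latitude of the unwrapped panorama, in pixel units."""
    theta = math.atan2(x, f)                  # longitude
    phi = math.atan2(y, math.hypot(x, f))     # latitude
    return f * theta, f * phi
```

Because latitude saturates at ±90°, the whole vertical field of view fits on the sphere, which is what makes this projection the natural choice for full 360° content.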
