Color Processing
UNIT-5
What is Color?
• Color corresponds to electromagnetic waves with wavelengths between 380 nm and 780 nm.
• Waves outside this range are invisible to the human eye, but they can be detected by special technical devices.
• Our eyes perceive the visible range with receptors, and our brain processes their signals into a color impression.
Importance of Color in Computer Vision
• Color helps distinguish objects, segment images, and detect specific patterns.
• Object Detection: Recognizing objects based on their color (e.g., traffic signs, fruits).
• Medical Imaging: Identifying diseases based on color variations in medical scans.
• Remote Sensing: Analyzing satellite images for vegetation, water bodies, etc.
Color Models in Computer Vision
• A color model is a mathematical representation of colors in a way that computers can process.
• Different models are used for different applications:
• RGB (Red, Green, Blue)
• HSV (Hue, Saturation, Value)
• HSL (Hue, Saturation, Lightness)
• LAB (CIELAB)
• YCrCb (Luminance and Chrominance)
• CMYK (Cyan, Magenta, Yellow, Black)
1. RGB (Red, Green, Blue) Color Model
• RGB is an additive color model used in electronic displays such as TVs, monitors, and cameras.
• Colors are created by mixing different intensities of Red, Green, and Blue.
• Each pixel is represented by three values: (R, G, B)
• Each channel has a range of 0 to 255 in an 8-bit system.
1. RGB (Red, Green, Blue) Color Model
• Black: (0, 0, 0) – No light
• White: (255, 255, 255) – Maximum intensity of all colors
• Red: (255, 0, 0), Green: (0, 255, 0), Blue: (0, 0, 255)
• Applications:
• Used in computer screens, cameras, and web applications.
• Suitable for displaying images, but less suited to processing because RGB values change strongly with lighting conditions (see the code example below).
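• A minimal Python/OpenCV sketch of how RGB pixels are stored and accessed; the filename is hypothetical, and note that OpenCV loads color images in BGR channel order rather than RGB.

import cv2
import numpy as np

img = cv2.imread("sample.jpg")       # hypothetical file; returns a BGR array of shape (height, width, 3)
b, g, r = cv2.split(img)             # three 8-bit channels, each in the range 0-255
print(img[0, 0])                     # BGR triple of the top-left pixel

# A pure red pixel, (255, 0, 0) in RGB, is stored as (0, 0, 255) in OpenCV's BGR order.
red_pixel = np.array([0, 0, 255], dtype=np.uint8)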
2. HSV (Hue, Saturation, Value) Color Model
• HSV separates the color information (Hue) from intensity (Value) and saturation.
• Components:
• Hue (H) → Defines the type of color (0°-360°)
• Red = 0°, Green = 120°, Blue = 240°
• Saturation (S) → Intensity of color (0-100%)
• 0% = Grayish, 100% = Pure color
• Value (V) → Brightness (0-100%)
• 0% = Black, 100% = Full brightness
2. HSV (Hue, Saturation, Value) Color Model
• Why Use HSV?
• Unlike RGB, HSV is less sensitive to lighting changes.
• Useful for object detection and segmentation (see the code example below).
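• A minimal OpenCV conversion sketch (hypothetical filename). For 8-bit images OpenCV scales Hue to 0-179 (degrees divided by 2) and Saturation/Value to 0-255 instead of 0-100%.

import cv2

img = cv2.imread("sample.jpg")                  # hypothetical BGR input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)      # convert BGR -> HSV
h, s, v = cv2.split(hsv)                        # H: 0-179, S: 0-255, V: 0-255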
3. HSL (Hue, Saturation, Lightness) Color Model
• HSL is similar to HSV but replaces Value (V) with Lightness (L); see the code example below.
• Components:
• Hue (H) → 0° to 360° (color)
• Saturation (S) → 0 to 100% (intensity)
• Lightness (L) → 0 to 100% (brightness)
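• A minimal OpenCV sketch (hypothetical filename). OpenCV calls this space HLS and orders the channels Hue, Lightness, Saturation.

import cv2

img = cv2.imread("sample.jpg")                  # hypothetical BGR input
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)      # convert BGR -> HLS (note the channel order)
h, l, s = cv2.split(hls)                        # 8-bit Hue is again scaled to 0-179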
4. LAB (CIELAB) Color Model
• LAB is a perceptually uniform color model used in professional color correction.
• CIE stands for the International Commission on Illumination.
• Components:
• L (Lightness) → 0 (black) to 100 (white)
• A (Green to Red scale) → -128 (green) to +127 (red)
• B (Blue to Yellow scale) → -128 (blue) to +127 (yellow)
• Why Use LAB?
• Closely matches human vision.
• Used in Photoshop, color calibration, and printing (see the code example below).
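• A minimal OpenCV conversion sketch (hypothetical filename). In OpenCV's 8-bit representation L is scaled to 0-255 and the a/b channels are offset so that 128 corresponds to the neutral value 0.

import cv2

img = cv2.imread("sample.jpg")                  # hypothetical BGR input
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)      # convert BGR -> CIELAB
L, a, b = cv2.split(lab)                        # L: 0-255, a/b: 0-255 with 128 = neutral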
5. YCrCb (Luminance and Chrominance) Color Model
• YCrCb is used in video compression and broadcasting (JPEG, MPEG).
• It separates brightness and color information.
• Components:
• Y (Luminance) → Brightness (0-255)
• Cr (Chrominance-Red) → Red difference (-128 to 127)
• Cb (Chrominance-Blue) → Blue difference (-128 to 127)
5. YCrCb (Luminance and Chrominance) Color Model
• Why Use YCrCb?
• Helps in reducing the file size for images/videos.
• Used in TV broadcasting and JPEG compression (see the code example below).
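• A minimal OpenCV conversion sketch (hypothetical filename). Separating Y from Cr/Cb is what lets JPEG and video codecs subsample the color channels with little visible loss.

import cv2

img = cv2.imread("sample.jpg")                    # hypothetical BGR input
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)    # convert BGR -> YCrCb
y, cr, cb = cv2.split(ycrcb)                      # Y: brightness, Cr/Cb: color differences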
6. CMYK (Cyan, Magenta, Yellow, Black) Color Model
• CMYK is a subtractive color model used in printing.
• Components:
• Cyan (C) → 0-100%
• Magenta (M) → 0-100%
• Yellow (Y) → 0-100%
• Black (K) → 0-100%
• Why Use CMYK?
• Used in inkjet and laser printers.
• Produces accurate colors for physical printing (a simple conversion example follows below).
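• OpenCV has no built-in CMYK conversion, so the sketch below uses the standard simplified RGB-to-CMYK formula in NumPy; real printing pipelines apply device ICC profiles, which this ignores.

import numpy as np

def rgb_to_cmyk(rgb):
    """Simplified RGB -> CMYK (results in 0..1); ignores printer ICC profiles."""
    rgb = rgb.astype(np.float64) / 255.0
    k = 1.0 - rgb.max(axis=-1)                    # black = 1 - max(R, G, B)
    denom = np.where(k < 1.0, 1.0 - k, 1.0)       # avoid division by zero for pure black
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return np.stack([c, m, y, k], axis=-1)

print(rgb_to_cmyk(np.array([[255, 0, 0]])))       # pure red -> C=0, M=1, Y=1, K=0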
Example: Detecting Red Objects in an Image
• Let’s say we have an image with red apples and green leaves.
• To extract only the red apples, we can use the HSV (Hue, Saturation, Value) color model to filter out the red pixels.
Example: Detecting Red Objects in an Image
• RGB Red Color: (255, 0, 0)
• In HSV: (0°, 100%, 100%)
• Define the lower and upper HSV range for detecting red (see the code example below):
• Lower Red: [0, 120, 70]
• Upper Red: [10, 255, 255]
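• A minimal OpenCV masking sketch using the ranges above (hypothetical filename; hue values follow OpenCV's 0-179 scale). Because red wraps around the hue circle, a second range near 170-179 is often added.

import cv2
import numpy as np

img = cv2.imread("apples.jpg")                  # hypothetical image of red apples and green leaves
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower_red = np.array([0, 120, 70])              # lower red range from the slide
upper_red = np.array([10, 255, 255])            # upper red range from the slide
mask = cv2.inRange(hsv, lower_red, upper_red)   # 255 where a pixel falls inside the range

# Optional second mask for the deep reds that wrap around the hue circle.
mask2 = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([179, 255, 255]))
mask = cv2.bitwise_or(mask, mask2)

red_only = cv2.bitwise_and(img, img, mask=mask) # keeps only the red apple pixels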
Color Augmentation Techniques
• Color augmentation improves the robustness of machine learning models by applying transformations such as brightness and contrast adjustments.
• Brightness & Contrast Adjustment
• Formula: output = α × image + β
• α = contrast factor (1.0 = no change)
• β = brightness offset (0 = no change)
Color Augmentation Techniques
• Numerical Example
• Original Pixel Value: 150
• If α = 1.2 and β = 30, the new pixel value = (150 × 1.2) + 30 = 210 (see the code example below).
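• A minimal OpenCV sketch of the same adjustment applied to a whole image (hypothetical filename); cv2.convertScaleAbs computes α × image + β and clips the result to 0-255.

import cv2

img = cv2.imread("sample.jpg")                         # hypothetical input
alpha, beta = 1.2, 30                                  # values from the numerical example
adjusted = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)

print(150 * alpha + beta)                              # single-pixel check from the slide: 210.0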
Color Constancy (White Balance)
• White balance ensures that objects appear with consistent colors under varying lighting conditions.
• Gray World Algorithm
• Assumes that the average color in an image is neutral gray and scales the RGB values accordingly.
Color Constancy (White Balance)
• Steps (see the code example below):
• Compute the average R, G, and B values.
• Compute a scaling factor for each channel:
Scale Factor = (mean of the R, G, B averages) / (individual channel average)
• Multiply each channel by its scaling factor.
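• A minimal NumPy/OpenCV sketch of the Gray World steps above (hypothetical filename): each channel is scaled so that its mean matches the overall gray level.

import cv2
import numpy as np

def gray_world(img):
    """Gray World white balance on a BGR image."""
    img = img.astype(np.float64)
    avg_b, avg_g, avg_r = (img[..., c].mean() for c in range(3))
    gray = (avg_b + avg_g + avg_r) / 3.0               # assumed-neutral gray level
    scale = np.array([gray / avg_b, gray / avg_g, gray / avg_r])
    return np.clip(img * scale, 0, 255).astype(np.uint8)

balanced = gray_world(cv2.imread("scene.jpg"))         # hypothetical input image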
Range Image Processing
• A range image is an image in which each pixel represents the distance between the sensor and an object in the scene.
• Unlike traditional 2D intensity images (which store color or grayscale values), range images store depth information.
• This makes them crucial for 3D vision, robotics, and depth sensing.
Key Characteristics of Range Images
• Each pixel value represents distance (depth) from the sensor instead of intensity.
• Typically captured using LiDAR, structured light, or stereo vision.
• Used in applications like 3D reconstruction, object recognition, and autonomous driving.
Range Image Processing
• A depth map where each pixel contains a distance value (shown as an array in the example below):
[ [2.1, 2.3, 2.5, 2.7],   # Row 1 (distances in meters)
  [2.0, 2.2, 2.4, 2.6],   # Row 2
  [1.9, 2.1, 2.3, 2.5] ]  # Row 3
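• The same depth map expressed as a NumPy array, which is how range images are commonly held in code:

import numpy as np

depth = np.array([[2.1, 2.3, 2.5, 2.7],     # distances in meters, one row per scan line
                  [2.0, 2.2, 2.4, 2.6],
                  [1.9, 2.1, 2.3, 2.5]])

print(depth.shape)                          # (3, 4): 3 rows x 4 columns of depth samples
print(depth.min(), depth.max())             # nearest point 1.9 m, farthest point 2.7 m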
Active Range Sensors
• Active sensors emit energy (such as laser, infrared, or ultrasound) and measure the reflected signal to compute depth.
• Useful for creating range images.
Types of Active Sensors
• LiDAR (Light Detection and Ranging)
• Uses laser pulses to measure distances.
• Accurate for long-range applications, e.g., self-driving cars and topographic mapping (mapping the Earth's surface).
• Example: Velodyne LiDAR.
Types of Active Sensors
• Time-of-Flight (ToF) Cameras
• Measures the time light takes to travel to an object and back.
• Used in smartphone depth cameras, AR/VR, and robotics.
• Example: Microsoft Kinect, iPhone LiDAR.
Types of Active Sensors
• Structured Light Sensors
• Projects a pattern of light onto a scene and analyzes its distortions to determine depth.
• Used in 3D scanning and facial recognition.
• Example: Intel RealSense.
Types of Active Sensors
• Stereo Vision
• Uses two cameras to compute depth by triangulation.
• Used in robotic vision and 3D photography.
• Example: human binocular vision, OpenCV stereo matching (see the code example below).
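• A minimal OpenCV block-matching sketch (hypothetical rectified image pair). The matcher produces a disparity map; depth follows from disparity once the focal length and baseline are known.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)                  # larger disparity = closer object

# depth = focal_length * baseline / disparity, given the camera geometry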
Preprocessing of Range Data
• Range images often contain noise, missing data, and misalignment.
• Preprocessing improves accuracy.
• Common Preprocessing Steps
• Noise Reduction
• Apply Gaussian filter or median filter to remove sensor noise.
• Hole Filling (Interpolation)
• Fill missing depth values.
Preprocessing of Range Data
• Outlier Removal
• Remove extreme depth values caused by sensor errors.
• Depth Normalization
• Scale depth values between 0 and 1 for better processing.
• Edge Detection & Segmentation
• Identify object boundaries in depth maps (a combined preprocessing example follows below).
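• A minimal sketch chaining the steps above with OpenCV/NumPy; the depth file, valid-range limits, and filter sizes are hypothetical and sensor-dependent.

import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)   # hypothetical depth map

# 1. Noise reduction: a median filter suppresses speckle-like sensor noise.
depth = cv2.medianBlur(depth, 5)

# 2. Hole filling: a crude interpolation that replaces missing readings (0)
#    with values from a blurred copy of the map.
blurred = cv2.GaussianBlur(depth, (7, 7), 0)
depth[depth == 0] = blurred[depth == 0]

# 3. Outlier removal: clip implausible depths (limits are sensor-specific).
depth = np.clip(depth, 0.2, 10.0)

# 4. Depth normalization: scale values to the 0-1 range.
depth = (depth - depth.min()) / (depth.max() - depth.min())

# 5. Edge detection on the 8-bit version highlights object boundaries.
edges = cv2.Canny((depth * 255).astype(np.uint8), 50, 150)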
Applications of Range Data
• 3D Object Recognition
• Helps robots and AI systems recognize objects based on shape rather than just color.
• Used in warehouse automation and industrial inspection.
• Autonomous Vehicles
• LiDAR-based depth maps enable self-driving cars to detect obstacles and pedestrians.
• Used in autonomous-driving platforms such as Waymo and NVIDIA Drive.
Applications of Range Data
• Augmented Reality (AR) and Virtual Reality (VR)
• Depth sensors in AR/VR devices improve real-world interaction.
• Used in Microsoft HoloLens and Meta Quest.
• Medical Imaging
• 3D range data is used in MRI and CT scans and helps in surgical planning.
• 3D Reconstruction & Mapping
• Used for building 3D models of environments.
• Applied in Google Maps 3D reconstruction and gaming.