CH-6 Terminologies
1. Image Processing
Image processing refers to the manipulation and analysis of images to improve their quality,
extract useful information, or prepare them for further analysis. It can be performed on both
analog and digital images, but digital image processing is more common today due to the
widespread use of computers.
Types of Image Processing:
• Low-level processing: Involves operations like noise reduction, contrast enhancement, and sharpening. These processes are pixel-based and do not require understanding the content of the image (see the sketch after the applications list below).
• Mid-level processing: Includes tasks like segmentation, object detection, and feature extraction. These processes involve grouping pixels into meaningful regions or objects.
• High-level processing: Involves interpretation and understanding of the image content, such as recognizing objects and describing the scene.
Applications:
Applications:
• Medical Imaging: MRI, CT scans, and X-rays.
• Remote Sensing: Satellite imagery for weather forecasting and land use analysis.
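As an illustration of a low-level operation, the following is a minimal noise-reduction sketch, assuming only NumPy; the input array stands in for any grayscale image.

import numpy as np

# 3x3 mean filtering: replace each pixel with the average of its neighborhood.
img = np.random.randint(0, 256, (8, 8)).astype(np.float64)

smoothed = np.zeros_like(img)
padded = np.pad(img, 1, mode="edge")          # pad borders so every pixel has a 3x3 window
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        smoothed[y, x] = padded[y:y + 3, x:x + 3].mean()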
3. Image
An image is a 2D representation of visual information, typically represented as a matrix of pixel
values. Each pixel corresponds to a specific location in the image and has an intensity or color
value.
Difference Between Image, Picture, and Photo:
• Image: A general term for any 2D representation of visual data, including digital and analog
forms.
• Picture: A visual representation of an object or scene, often used informally to refer to
photographs or drawings.
• Photo: A specific type of image captured by a camera, typically representing a real-world
scene.
4. Morphological Processing
Morphological processing involves operations that process images based on their shapes and
structures. It is used to extract, modify, or analyze the form of objects in an image.
Types of Morphological Operations:
• Erosion: Shrinks the boundaries of objects in an image by removing pixels from object edges.
• Dilation: Expands the boundaries of objects in an image.
• Opening: Erosion followed by dilation. It removes small objects while preserving the shape
of larger objects.
• Closing: Dilation followed by erosion. It fills small holes and gaps in objects.
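A minimal sketch of these operations, assuming OpenCV (cv2) and NumPy are installed; the image and kernel sizes are arbitrary illustrative choices.

import cv2
import numpy as np

# Binary test image: a white square plus a few isolated noise pixels.
img = np.zeros((100, 100), dtype=np.uint8)
img[30:70, 30:70] = 255
img[5, 5] = img[90, 10] = 255                            # small "noise" objects

kernel = np.ones((3, 3), dtype=np.uint8)                 # structuring element
eroded = cv2.erode(img, kernel)                          # shrinks objects
dilated = cv2.dilate(img, kernel)                        # expands objects
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)   # removes the noise specks
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)  # fills small holes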
Applications:
• Noise Removal: Removing small noise particles from an image.
5. Image Acquisition
Image acquisition is the process of capturing a real-world scene and converting it into a digital image:
• Capture: Sensors (e.g., CCD or CMOS) capture the reflected light and convert it into electrical signals.
• Digitization: The electrical signals are sampled and quantized into digital pixel values.
• Analog-to-Digital Converter (ADC): The hardware component that performs this conversion from analog signals to digital values.
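A rough simulation of the quantization step in NumPy; the "analog" signal here is simply random values in [0, 1].

import numpy as np

# Simulated analog intensities sampled on a 4x4 grid.
analog = np.random.rand(4, 4)

# 8-bit quantization, as an ADC would perform it: 256 discrete levels (0-255).
digital = np.round(analog * 255).astype(np.uint8)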
6. Pixel
A pixel (short for "picture element") is the smallest unit of a digital image. Each pixel represents
a single color or intensity value at a specific location in the image.
Properties of Pixels:
• Intensity: The brightness level of the pixel (e.g., 0 = black, 255 = white in an 8-bit grayscale
image).
• Coordinates: The position of the pixel in the image matrix (e.g., (x, y)).
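A tiny sketch of these properties in NumPy; note that NumPy indexes a matrix as [row, column], i.e., [y, x].

import numpy as np

# A 3x3 grayscale image as a matrix of 8-bit intensity values.
img = np.array([[0, 128, 255],
                [64, 32, 16],
                [200, 100, 50]], dtype=np.uint8)

intensity = img[1, 2]   # pixel at row 1, column 2 -> 16
img[0, 0] = 255         # set the top-left pixel to white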
7. Image Quality
Image quality describes how faithfully an image represents the original scene. High-quality images have high resolution, good contrast, and minimal noise.
8. 8-bit Image
An 8-bit image is a digital image where each pixel is represented by 8 bits, allowing 256 possible
intensity levels (0 to 255). In grayscale images, 0 represents black, and 255 represents white. In
color images, each channel (Red, Green, Blue) is represented by 8 bits, resulting in 16.7 million
possible colors.
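The counts above follow directly from the bit depth, as this small arithmetic check shows.

# Bit-depth arithmetic for an 8-bit image.
levels_per_channel = 2 ** 8            # 256 intensity levels (0-255)
rgb_colors = levels_per_channel ** 3   # 16,777,216 ("16.7 million") colors
print(levels_per_channel, rgb_colors)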
9. Resolution of CCD Camera vs. Human Eye
• CCD Camera: Resolution is measured in megapixels (e.g., 12 MP). A higher megapixel count means finer detail can be captured.
• Human Eye: Often estimated to be equivalent to roughly 576 megapixels. However, the brain processes only a small portion of this detail at any given time.
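For reference, megapixel counts are just the product of the sensor dimensions; the 4000 x 3000 figures below are an illustrative example.

# A 4000 x 3000 sensor yields 12 million pixels, i.e., 12 MP.
width, height = 4000, 3000
print(width * height / 1_000_000, "MP")   # 12.0 MP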
Contrast: The difference in intensity between the light and dark regions of an image.
• Low contrast: Small difference between light and dark regions, resulting in a flat appearance.
• High contrast: Large difference between light and dark regions, resulting in a more vivid appearance.
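A minimal contrast-stretching sketch in NumPy, assuming the image is not perfectly flat; it remaps a low-contrast image onto the full 8-bit range.

import numpy as np

# Values squeezed into [100, 150] are linearly stretched to [0, 255].
low = np.random.randint(100, 151, (4, 4)).astype(np.float64)
stretched = (low - low.min()) / (low.max() - low.min()) * 255.0
print(stretched.astype(np.uint8))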
Color Models:
• CMYK Model: Adds a Black (K) channel to the CMY model for better printing results.
• HSI Model: Represents colors in terms of Hue (color type), Saturation (purity of color), and Intensity (brightness).
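A small sketch of the RGB-to-CMYK idea on normalized values in [0, 1], assuming the color is not pure black (so the rescaling below is defined).

import numpy as np

# RGB -> CMY is a complement; CMYK then pulls out a shared black (K) component.
rgb = np.array([0.2, 0.6, 0.4])
c, m, y = 1.0 - rgb                                 # CMY
k = min(c, m, y)                                    # black channel
c, m, y = ((v - k) / (1 - k) for v in (c, m, y))    # rescale the remaining ink
print(c, m, y, k)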
Image Degradations:
• False contouring: Banding caused by insufficient intensity levels, resulting in visible steps in smooth gradients.
• Noise: Random variations in pixel values caused by sensor limitations or environmental factors.
• Blurring: Loss of sharpness caused by improper focusing or motion.
• Degraded Image: An image that has been affected by both blurring and noise.
• Space-Variant PSF: The point spread function (PSF), which describes the blurring, varies across the image.
Image Restoration: Involves removing noise and blurring from an image to recover the original scene. Techniques include deblurring and denoising.
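The degradation and restoration just described are commonly summarized by the standard linear model (the notes do not write it out, so this is a conventional addition):

g(x, y) = h(x, y) * f(x, y) + n(x, y)

where f is the original scene, h is the point spread function (PSF), * denotes convolution, n is additive noise, and g is the observed degraded image. Restoration estimates f from g, for example by inverse or Wiener filtering.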
Image Compression: Image compression reduces the size of image data for efficient storage
and transmission. Techniques include lossless compression (e.g., PNG) and lossy compression
(e.g., JPEG).
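A minimal sketch of the two compression styles, assuming Pillow (PIL) is installed; the file names and image content are arbitrary, and the resulting sizes depend on the content.

from PIL import Image
import numpy as np
import os

# Save the same image losslessly (PNG) and lossily (JPEG), then compare sizes.
gradient = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
img = Image.fromarray(gradient)
img.save("demo.png")               # lossless: exact pixel values recoverable
img.save("demo.jpg", quality=75)   # lossy: approximate pixels, detail discarded
print(os.path.getsize("demo.png"), os.path.getsize("demo.jpg"))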
28. Texture
Texture is one of the most important features in digital image processing, allowing us to interpret the surface characteristics of objects within an image. It is the visual pattern that emerges from the spatial arrangement of pixel intensities and plays a vital role in image segmentation, classification, and recognition. Texture properties can be analyzed with mathematical models and quantified with corresponding equations.
In digital image processing, texture refers to the repetitive and spatial pattern of intensity values
in an image. It helps in differentiating regions, objects, and surfaces within the image based on
their pixel intensity distributions. These patterns may vary in terms of regularity, scale, and
directionality, and they provide critical information about the underlying structure of the image.
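One standard way to quantify texture (my choice of example; the notes do not name a specific model) is the gray-level co-occurrence matrix (GLCM), available in scikit-image 0.19+ as graycomatrix/graycoprops.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# A small 4-level patch; the GLCM counts how often pairs of intensity values
# occur at a given offset (here: distance 1, angle 0).
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=4,
                    symmetric=True, normed=True)
print(graycoprops(glcm, "contrast"), graycoprops(glcm, "homogeneity"))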
29. Features
Features are distinct and measurable properties of an image or region that can be used to
represent or describe it. They are often used to identify objects, patterns, or regions of interest
in an image.
Characteristics:
• Local or Global: Features can describe local regions (e.g., edges, corners) or the entire image.
• Compact Representation: Features are often used to reduce the dimensionality of image data.
Examples of Features:
• Low-Level Features: Edges, corners, blobs, color, intensity.
Applications:
• Feature extraction is used in object detection, image matching, classification, and machine
learning.
• It is a critical step in tasks like facial recognition, optical character recognition (OCR), and
autonomous driving.
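A minimal low-level feature-extraction sketch, assuming OpenCV; the synthetic square and all parameter values are illustrative.

import cv2
import numpy as np

# Extract edges (Canny) and corners (Shi-Tomasi) from a synthetic image.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(img, (25, 25), (75, 75), 255, -1)

edges = cv2.Canny(img, 100, 200)                     # binary edge map
corners = cv2.goodFeaturesToTrack(img, maxCorners=10,
                                  qualityLevel=0.01, minDistance=5)
print(int(edges.sum() / 255), "edge pixels;",
      0 if corners is None else len(corners), "corners")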
30. Recognition
Definition: Recognition is the process of identifying objects or patterns in an image based on their features. (Regression, by contrast, predicts continuous values from image data; see below.)
31. Classification
Definition: Classification is the process of assigning a label or category to an image,
segment, or feature based on its characteristics. It involves grouping data into predefined
classes based on learned patterns from a set of labeled training data. The aim of
classification is to identify which category an image or region belongs to. This is
commonly used for object detection, image categorization, and identifying specific
features.
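A minimal classification sketch using scikit-learn's bundled 8x8 digit images; the classifier choice (k-nearest neighbors) and split parameters are arbitrary.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Assign each digit image one of ten predefined class labels, learned from
# labeled training examples.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))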
32. Identification
Definition: Identification involves determining the specific instance or identity of an
object or feature in an image. Unlike classification, which categorizes objects into broad
groups, identification is concerned with recognizing specific objects (e.g., identifying a
specific person or object within an image). Identification is used when you need to match
an object to a known instance from a database or set of possible candidates.
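A toy identification sketch in NumPy: match a query feature vector against a database of known instances. The names and vectors are made up for illustration.

import numpy as np

# Pick the database entry with the smallest feature distance to the query.
database = {"person_A": np.array([0.90, 0.10, 0.30]),
            "person_B": np.array([0.20, 0.80, 0.50])}
query = np.array([0.85, 0.15, 0.25])

best = min(database, key=lambda name: np.linalg.norm(database[name] - query))
print("identified as:", best)   # person_A (closest match)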
33. Regression
Definition: Regression is a process where the goal is to predict a continuous value from the
features of an image. It is used when the output is not a category but a quantity, such as
the estimation of physical properties or parameters from an image. Regression models
predict values like size, distance, or temperature based on the visual content of an image.
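A minimal regression sketch, assuming scikit-learn; the "object size in mm" target and the pixel-area feature are hypothetical synthetic data.

import numpy as np
from sklearn.linear_model import LinearRegression

# Predict a continuous quantity (object size) from a simple image feature
# (object area in pixels).
rng = np.random.default_rng(0)
area_px = rng.integers(10, 200, size=50).astype(np.float64)
size_mm = 0.3 * area_px + rng.normal(0.0, 1.0, size=50)   # synthetic ground truth

model = LinearRegression().fit(area_px.reshape(-1, 1), size_mm)
print(model.predict([[100.0]]))   # close to 30 mm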
34. Difference Between X-Ray, CT, MRI, MRA, and PET Images
1. X-ray Image
• Imaging Principle:
Uses a single beam of X-rays that passes through the body. Dense tissues (e.g., bones) absorb
more X-rays and appear white, while less dense tissues (e.g., lungs) appear darker.
• Output:
A 2D projection image in which dense structures appear bright.
2. CT Image
• Imaging Principle:
Uses a rotating X-ray source and detector to capture multiple 2D X-ray projections from different angles. These projections are reconstructed into a 3D image using computational algorithms.
• Output:
Cross-sectional slice images that can be combined into a 3D volume.
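A sketch of the CT reconstruction idea using scikit-image's radon/iradon; the rectangular "phantom" and angle count are arbitrary.

import numpy as np
from skimage.transform import radon, iradon

# Simulate projections of a simple phantom at many angles (the sinogram),
# then reconstruct it with filtered back projection.
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                       # a dense object

theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phantom, theta=theta)            # the multiple 2D projections
reconstruction = iradon(sinogram, theta=theta)    # computational reconstruction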
3. MRI Image
• Imaging Principle:
Uses strong magnetic fields and radio waves to align hydrogen atoms in the body. When the radio-frequency pulse is turned off, the atoms emit signals that are detected and reconstructed into images.
• Output:
High-contrast images of soft tissues such as the brain, muscles, and ligaments.
4. MRA Image
• Imaging Principle:
A specialized type of MRI that focuses on imaging blood vessels. It often uses contrast agents (e.g., gadolinium) to enhance vessel visibility.
• Limitations:
o Requires contrast agents, which may cause allergic reactions in some patients.
5. PET Image
• Imaging Principle:
Uses a radioactive tracer (e.g., fluorodeoxyglucose, FDG) injected into the body. The tracer emits positrons, which interact with electrons to produce gamma rays. These rays are detected and reconstructed into images.
• Radiation Dose:
Moderate; it comes from the injected radioactive tracer rather than an external beam.
• Output:
Images of metabolic activity (e.g., glucose uptake) in tissues.
Summary:
• X-ray: Fast, inexpensive 2D imaging; best for dense structures such as bone; uses ionizing radiation.
• CT: Excellent for detailed 3D imaging of bones and soft tissues; higher radiation dose.
• MRI: Superior for soft tissues and functional imaging; no radiation but expensive and time-
consuming.
• MRA: Specialized MRI for blood vessels; no radiation but requires contrast agents.
• PET: Provides metabolic and functional information; useful for cancer and brain disorders.