CH-6 Terminologies

The document provides an overview of image processing, including its types (low, mid, high-level), aims, and applications across various fields such as medical imaging and computer vision. It explains concepts like image acquisition, pixel properties, intensity, resolution, and discusses various image processing techniques including enhancement, restoration, and compression. Additionally, it covers morphological processing, features, segmentation, classification, and the differences between various imaging modalities like X-ray, CT, and MRI.

Uploaded by

mahdy demarea


1. Image Processing
Image processing refers to the manipulation and analysis of images to improve their quality,
extract useful information, or prepare them for further analysis. It can be performed on both
analog and digital images, but digital image processing is more common today due to the
widespread use of computers.
Types of Image Processing:
• Low-level processing: Involves operations like noise reduction, contrast enhancement, and sharpening. These processes are pixel-based and do not require understanding the content of the image.
• Mid-level processing: Includes tasks like segmentation, object detection, and feature extraction. These processes involve grouping pixels into meaningful regions or objects.
• High-level processing: Involves interpretation and understanding of the image content, such as object recognition, scene understanding, and decision-making.


Applications:
• Medical Imaging: MRI, CT scans, and X-rays.
• Remote Sensing: Satellite imagery for weather forecasting and land use analysis.
• Computer Vision: Facial recognition, autonomous vehicles, and robotics.
• Entertainment: Image and video editing, special effects in movies.
2. Aim of Digital Image Processing
The primary aims of digital image processing are:
• Improving Image Quality: Enhancing visual appearance for human interpretation (e.g., contrast adjustment, noise reduction).
• Extracting Information: Analyzing images to extract meaningful data (e.g., object detection, feature extraction).
• Image Compression: Reducing the size of image data for efficient storage and transmission (e.g., JPEG, PNG).
• Machine Perception: Preparing images for automated analysis by machines (e.g., facial recognition, autonomous driving).

3. Image
An image is a 2D representation of visual information, typically represented as a matrix of pixel
values. Each pixel corresponds to a specific location in the image and has an intensity or color
value.
Difference Between Image, Picture, and Photo:
• Image: A general term for any 2D representation of visual data, including digital and analog forms.
• Picture: A visual representation of an object or scene, often used informally to refer to photographs or drawings.
• Photo: A specific type of image captured by a camera, typically representing a real-world scene.
4. Morphological Processing
Morphological processing involves operations that process images based on their shapes and
structures. It is used to extract, modify, or analyze the form of objects in an image.
Types of Morphological Operations:
• Dilation: Expands the boundaries of objects in an image.
• Erosion: Shrinks the boundaries of objects in an image.
• Opening: Erosion followed by dilation. It removes small objects while preserving the shape of larger objects.
• Closing: Dilation followed by erosion. It fills small holes and gaps in objects.

Applications:
• Noise Removal: Removing small noise particles from an image.
• Object Detection: Identifying and isolating specific shapes in an image.
• Fingerprint Recognition: Enhancing fingerprint patterns for analysis.
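As a sketch, dilation, erosion, and opening can be illustrated on a tiny binary image in plain Python. The `erode` and `dilate` helpers and the 3×3 square structuring element are illustrative assumptions, not from the text:

```python
# Binary morphology sketch with a 3x3 square structuring element.
# Pixels outside the image are treated as 0 (background).

def erode(img, rows, cols):
    # A pixel stays 1 only if every pixel under the 3x3 element is 1.
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            out[y][x] = 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < rows and 0 <= nx < cols) or img[ny][nx] == 0:
                        out[y][x] = 0
    return out

def dilate(img, rows, cols):
    # A pixel becomes 1 if any pixel under the 3x3 element is 1.
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and img[ny][nx] == 1:
                        out[y][x] = 1
    return out

# A 5x5 image with a 3x3 square of ones in the middle.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]

eroded = erode(img, 5, 5)                # shrinks: only the centre pixel survives
dilated = dilate(img, 5, 5)              # expands: the square grows to fill 5x5
opened = dilate(erode(img, 5, 5), 5, 5)  # opening: erosion followed by dilation
```

Opening the original image recovers the 3×3 square: the erosion removed everything a 3×3 element could not fit inside, and the dilation restored the shape of what survived.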
5. Image Acquisition
Image acquisition is the process of capturing an image using sensors (e.g., cameras, scanners)
and converting it into a digital format for processing.
Steps in Image Acquisition:
• Illumination: Light is used to illuminate the scene.
• Reflection: Light reflects off objects in the scene.
• Capture: Sensors (e.g., CCD or CMOS) capture the reflected light and convert it into electrical signals.
• Digitization: The electrical signals are converted into digital pixel values.

Components of Image Acquisition Systems:
• Lens: Focuses light onto the sensor.
• Sensor: Captures the light and converts it into electrical signals.
• Analog-to-Digital Converter (ADC): Converts the analog signals into digital pixel values.
6. Pixel
A pixel (short for "picture element") is the smallest unit of a digital image. Each pixel represents
a single color or intensity value at a specific location in the image.
Properties of Pixels:
• Intensity: The brightness level of the pixel (e.g., 0 = black, 255 = white in an 8-bit grayscale image).
• Coordinates: The position of the pixel in the image matrix (e.g., (x, y)).

7. Intensity, Resolution, and Quality of the Image


• Intensity: The brightness level of a pixel. In grayscale images, intensity ranges from 0 (black) to 255 (white) in an 8-bit image.
• Resolution: The number of pixels in an image, typically expressed as width × height (e.g., 1920 × 1080). Higher resolution means more detail.
• Quality: Determined by factors such as resolution, contrast, noise, and sharpness. High-quality images have high resolution, good contrast, and minimal noise.

8. 8-bit Image
An 8-bit image is a digital image where each pixel is represented by 8 bits, allowing 256 possible
intensity levels (0 to 255). In grayscale images, 0 represents black, and 255 represents white. In
color images, each channel (Red, Green, Blue) is represented by 8 bits, resulting in 16.7 million
possible colors.
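The level and colour counts above follow directly from the bit depth; a quick check:

```python
bits = 8
levels = 2 ** bits        # 2^8 = 256 intensity levels per channel (0..255)
rgb_colors = levels ** 3  # three 8-bit channels: 16,777,216 (~16.7 million) colours
```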
9. Resolution of CCD Camera vs. Human Eye
• CCD Camera: Resolution is measured in megapixels (e.g., 12 MP). A higher megapixel count means more detail can be captured.
• Human Eye: The resolution of the human eye is estimated to be equivalent to ~576 megapixels. However, the brain processes only a small portion of this detail at any given time.

10. Yellow-Green Sensation


The human eye is most sensitive to green light (wavelength of ~555 nm). This is why green is
often used in displays and sensors to maximize visibility and efficiency.

11. Low, Mid, and High-Level Processes


• Low-level processes: Pixel-based operations such as noise reduction, contrast enhancement, and edge detection. These processes do not require understanding the content of the image.
• Mid-level processes: Operations like segmentation, object detection, and feature extraction. These processes involve grouping pixels into meaningful regions or objects.
• High-level processes: Tasks like object recognition, scene understanding, and decision-making. These processes involve interpreting the content of the image.


12. Contrast
• Contrast: The difference in intensity between the lightest and darkest regions of an image.
• Low contrast: Small difference between light and dark regions, resulting in a flat appearance.
• High contrast: Large difference between light and dark regions, resulting in a more dynamic appearance.
• Narrow contrast: Limited range of intensities, often resulting in loss of detail.
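A common way to raise low contrast is min-max stretching to the full 8-bit range; a minimal sketch (the `stretch` helper is illustrative, not from the text):

```python
# Min-max contrast stretch: map the darkest pixel to 0 and the
# brightest to 255, scaling everything in between linearly.
def stretch(pixels):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels[:]  # flat image: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

low_contrast = [100, 110, 120, 130]   # narrow intensity range
stretched = stretch(low_contrast)     # spans the full 0..255 range
```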

13. High and Low Frequency Components


• High frequency: Represents edges, fine details, and rapid changes in intensity.

• Low frequency: Represents smooth regions and gradual changes in intensity.

14. Gray Image and Color Image


• Gray image: A single-channel image where each pixel represents an intensity value (0 = black, 255 = white).
• Color image: A multi-channel image where each pixel represents a combination of color values (e.g., RGB).


15. Image Sampling and Quantization
• Sampling: The process of converting a continuous image into discrete pixels by measuring intensity at regular intervals.
• Quantization: The process of assigning discrete intensity levels to the sampled pixels.
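Quantization can be sketched by mapping each 8-bit intensity to the midpoint of one of a smaller number of uniform bins (the `quantize` helper is an illustrative assumption, not from the text):

```python
# Uniform quantisation of 8-bit intensities to a smaller number of
# levels: each pixel is replaced by the midpoint of its bin.
def quantize(pixels, levels):
    step = 256 / levels
    return [int(min(int(p / step), levels - 1) * step + step / 2)
            for p in pixels]

pixels = [0, 64, 128, 200, 255]
q4 = quantize(pixels, 4)  # at most 4 distinct output values remain
```

Reducing the number of levels this way is exactly what produces false contouring in smooth gradients.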
16. Image Models
• RGB Model: Represents colors as combinations of Red, Green, and Blue.
• CMY Model: Represents colors as combinations of Cyan, Magenta, and Yellow.
• CMYK Model: Adds a Black (K) channel to the CMY model for better printing results.
• HSI Model: Represents colors in terms of Hue (color type), Saturation (purity of color), and Intensity (brightness).
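For 8-bit channels, the CMY model is the complement of the RGB model; a minimal sketch (the `rgb_to_cmy` helper is illustrative):

```python
# CMY is the complement of RGB: each ink channel absorbs the light
# its paired RGB channel would emit (8-bit channels assumed).
def rgb_to_cmy(r, g, b):
    return (255 - r, 255 - g, 255 - b)

pure_red = rgb_to_cmy(255, 0, 0)  # red needs no cyan, full magenta + yellow
```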

17. Blockiness, False Contouring, Noise, Blurring, and Aliasing
• Blockiness: Artifacts caused by compression, where the image appears divided into blocks.
• False contouring: Banding caused by insufficient intensity levels, resulting in visible steps in smooth gradients.
• Noise: Random variations in pixel values caused by sensor limitations or environmental factors.
• Blurring: Loss of sharpness caused by improper focusing or motion.
• Aliasing: Distortion caused by undersampling, resulting in jagged edges or moiré patterns.
• Degraded image: An image that has been affected by both blurring and noise.

18. Histogram of the Image


A histogram is a graphical representation of the distribution of pixel intensities in an image. It
shows the frequency of each intensity level, helping to analyze contrast and brightness.
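Computing a grayscale histogram is a single counting pass over the pixels; a minimal sketch:

```python
# Grayscale histogram: count how often each intensity level occurs.
def histogram(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return hist

pixels = [0, 0, 128, 255, 255, 255]
hist = histogram(pixels)  # hist[i] = number of pixels with intensity i
```

The histogram sums to the total pixel count, and its spread over the intensity axis indicates the image's contrast.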
19. Blurring Function (Point Spread Function - PSF)
The PSF describes how a point of light is spread in an image due to blurring. It is a mathematical
representation of the blurring effect. Types of PSF:
• Space-invariant PSF: The blurring effect is the same across the entire image.
• Space-variant PSF: The blurring effect varies across the image.

20. Transfer Function (TF)


The transfer function describes the system's response to an input signal. In image processing, it
is used to model how an image is transformed by a system.

21. Effects of Reducing Intensity Levels or Samples


• Reducing Intensity Levels: Results in loss of detail and false contouring, as fewer intensity levels are available to represent the image.
• Reducing Samples: Results in aliasing and loss of resolution, as fewer pixels are used to represent the image.


22. Image Enhancement, Image Restoration, Image Compression
Image Enhancement: Improving the visual quality of an image for human interpretation or machine analysis. Techniques include contrast adjustment, noise reduction, and sharpening.

Image Restoration: Removing noise and blurring from an image to recover the original scene. Techniques include deblurring and denoising.

Image Compression: Reducing the size of image data for efficient storage and transmission. Techniques include lossless compression (e.g., PNG) and lossy compression (e.g., JPEG).

23. Power Spectrum and Phase Spectrum


• Power Spectrum: Represents the magnitude of frequencies in an image.

• Phase Spectrum: Represents the phase information of frequencies in an image.

24. Parseval’s Formula


Parseval’s formula states that the total energy in the spatial domain is equal to the total energy
in the frequency domain.
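For the 1-D discrete Fourier transform the relation reads sum_n |f[n]|^2 = (1/N) sum_k |F[k]|^2, and it can be checked numerically (a sketch using a naive DFT, not a library routine):

```python
import cmath

# Naive O(N^2) discrete Fourier transform of a real signal.
def dft(signal):
    n_samples = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                for n in range(n_samples))
            for k in range(n_samples)]

f = [1.0, 2.0, 0.0, -1.0]
F = dft(f)

# Parseval: energy in the spatial domain equals energy in the
# frequency domain (with the 1/N factor for this DFT convention).
spatial_energy = sum(abs(v) ** 2 for v in f)
freq_energy = sum(abs(v) ** 2 for v in F) / len(f)
```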
25. Convolution Process
Convolution is a mathematical operation that combines two functions to produce a third
function. In image processing, it is used for filtering and feature extraction.
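A minimal 2-D convolution can be sketched as follows: the output covers only the "valid" region, and the kernel is flipped as convolution requires (the `convolve2d` helper is illustrative, not from the text):

```python
# 2-D convolution of a grayscale image with a small kernel,
# "valid" output region only (no padding).
def convolve2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(img), len(img[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    # Flip the kernel: convolution, not correlation.
                    acc += img[y + j][x + i] * kernel[kh - 1 - j][kw - 1 - i]
            row.append(acc)
        out.append(row)
    return out

# Smoothing a constant image with a 3x3 box (averaging) kernel
# leaves its values unchanged.
img = [[9] * 4 for _ in range(4)]
box = [[1 / 9] * 3 for _ in range(3)]
smoothed = convolve2d(img, box)  # 2x2 "valid" output
```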
26. Auto-correlation and Cross-correlation
• Auto-correlation: Measures the similarity of a signal with itself.

• Cross-correlation: Measures the similarity between two signals.
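Both can be sketched with one helper that correlates two 1-D signals at a given lag (an illustrative assumption, not from the text); auto-correlation is simply cross-correlation of a signal with itself:

```python
# Correlation of signal `a` with signal `b` shifted by `lag`,
# treating samples outside `b` as zero.
def correlate_at(a, b, lag):
    return sum(a[i] * b[i - lag]
               for i in range(len(a)) if 0 <= i - lag < len(b))

signal = [1, 2, 3]
auto_0 = correlate_at(signal, signal, 0)  # lag 0: the signal's energy
auto_1 = correlate_at(signal, signal, 1)  # similarity with a 1-sample shift
```

The auto-correlation always peaks at lag 0, since a signal matches itself best with no shift.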

27. Gaussian Function and Distribution


• Gaussian Function: A bell-shaped curve used for smoothing and filtering.

• Gaussian Distribution: A probability distribution that describes many natural phenomena.
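A normalized 1-D Gaussian kernel, as used for smoothing, can be sketched as (the `gaussian_kernel` helper is illustrative, not from the text):

```python
import math

# Discrete, normalised 1-D Gaussian kernel: sample the bell curve
# at integer offsets and scale so the weights sum to 1.
def gaussian_kernel(size, sigma):
    half = size // 2
    vals = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-half, half + 1)]
    total = sum(vals)
    return [v / total for v in vals]

k = gaussian_kernel(5, 1.0)  # symmetric, peaked at the centre
```

Convolving an image with this kernel (once per axis) performs Gaussian smoothing; normalising the weights keeps the overall image brightness unchanged.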

28. Texture
Texture is one of the most crucial features in digital image processing, allowing us to interpret
the surface characteristics of objects within an image. It is the visual pattern that emerges from
the spatial arrangement of pixel intensities and plays a vital role in image segmentation,
classification, and recognition. This section outlines the properties of image texture and how they can be quantified with mathematical models.
In digital image processing, texture refers to the repetitive and spatial pattern of intensity values
in an image. It helps in differentiating regions, objects, and surfaces within the image based on
their pixel intensity distributions. These patterns may vary in terms of regularity, scale, and
directionality, and they provide critical information about the underlying structure of the image.

29. Features
Features are distinct and measurable properties of an image or region that can be used to
represent or describe it. They are often used to identify objects, patterns, or regions of interest
in an image.
Characteristics:
• Local or Global: Features can describe local regions (e.g., edges, corners) or the entire image (e.g., color histogram).
• Discriminative: Features are chosen to distinguish between different objects or classes.
• Compact Representation: Features are often used to reduce the dimensionality of image data.
Examples of Features:
• Low-Level Features: Edges, corners, blobs, color, intensity.

• Mid-Level Features: Shapes, contours, texture descriptors.

• High-Level Features: Object parts, semantic labels.

Applications:
• Feature extraction is used in object detection, image matching, classification, and machine learning.
• It is a critical step in tasks like facial recognition, optical character recognition (OCR), and autonomous driving.

30. Segmentation, Classification, Identification, Regression, and Recognition
• Segmentation: Dividing an image into regions or objects.
• Classification: Assigning labels to regions or objects.
• Identification: Recognizing specific instances of objects.
• Regression: Predicting continuous values from image data.
• Recognition: Identifying objects or patterns in an image.
31. Classification
Definition: Classification is the process of assigning a label or category to an image,
segment, or feature based on its characteristics. It involves grouping data into predefined
classes based on learned patterns from a set of labeled training data. The aim of
classification is to identify which category an image or region belongs to. This is
commonly used for object detection, image categorization, and identifying specific
features.

32. Identification
Definition: Identification involves determining the specific instance or identity of an
object or feature in an image. Unlike classification, which categorizes objects into broad
groups, identification is concerned with recognizing specific objects (e.g., identifying a
specific person or object within an image). Identification is used when you need to match
an object to a known instance from a database or set of possible candidates.
33. Regression
Definition: Regression is a process where the goal is to predict a continuous value from the
features of an image. It is used when the output is not a category but a quantity, such as
the estimation of physical properties or parameters from an image. Regression models
predict values like size, distance, or temperature based on the visual content of an image.
34. Difference Between X-Ray, CT, MRI, MRA, and PET Images
1. X-ray Image

• Imaging Principle:

Uses a single beam of X-rays that passes through the body. Dense tissues (e.g., bones) absorb
more X-rays and appear white, while less dense tissues (e.g., lungs) appear darker.
• Output:

A 2D projection of the 3D body structure.

• Applications:
o Diagnosing fractures and bone abnormalities.
o Detecting lung conditions (e.g., pneumonia, tuberculosis).
o Dental imaging.
• Advantages:
o Quick, inexpensive, and widely available.
o Excellent for visualizing bones and dense structures.
• Limitations:
o Poor visualization of soft tissues.
o Overlapping structures can create artifacts.
• Radiation Dose:
Low to moderate, depending on the procedure.

2. CT (Computed Tomography) Image


• Imaging Principle:

Uses a rotating X-ray source and detector to capture multiple 2D X-ray projections from
different angles. These projections are reconstructed into a 3D image using computational
algorithms.
• Output:

A 3D volumetric image composed of cross-sectional slices (2D images) of the body.


• Applications:
o Detailed imaging of soft tissues (e.g., brain, liver, kidneys).
o Detecting tumors, internal bleeding, and vascular diseases.
o Planning surgeries and radiation therapy.
• Advantages:
o High resolution and detailed visualization of both bones and soft tissues.
o Ability to create 3D reconstructions.
• Limitations:
o Higher radiation dose compared to X-rays.
o Expensive and less accessible than X-rays.
• Radiation Dose:
Higher than X-rays but lower than older CT machines due to modern dose-reduction techniques.
3. MRI (Magnetic Resonance Imaging) Image
• Imaging Principle:

Uses strong magnetic fields and radio waves to align hydrogen atoms in the body. When the
magnetic field is turned off, the atoms emit signals that are detected and reconstructed into
images.
• Output:

High-resolution 2D or 3D images of soft tissues.


• Applications:
o Imaging the brain, spinal cord, muscles, and joints.
o Detecting tumors, strokes, and multiple sclerosis.
o Evaluating soft tissue injuries (e.g., ligaments, tendons).
• Advantages:
o No ionizing radiation.
o Excellent soft tissue contrast and resolution.
o Can visualize functional and structural details.
• Limitations:
o Expensive and time-consuming.
o Not suitable for patients with metal implants or claustrophobia.
• Radiation Dose:
None (uses magnetic fields and radio waves).


4. MRA (Magnetic Resonance Angiography) Image
• Imaging Principle:

A specialized type of MRI that focuses on imaging blood vessels. It uses contrast agents (e.g., gadolinium) to enhance the visibility of blood vessels.
• Output:
Detailed 2D or 3D images of blood vessels.
• Applications:
o Diagnosing vascular diseases (e.g., aneurysms, stenosis).
o Evaluating blood flow and blockages.
o Planning vascular surgeries.
• Advantages:
o No ionizing radiation.
o Excellent visualization of blood vessels and soft tissues.
• Limitations:
o Expensive and time-consuming.
o Requires contrast agents, which may cause allergic reactions in some patients.
• Radiation Dose:
None (uses magnetic fields and radio waves).


5. PET (Positron Emission Tomography) Image
• Imaging Principle:

Uses a radioactive tracer (e.g., fluorodeoxyglucose, FDG) injected into the body. The tracer
emits positrons, which interact with electrons to produce gamma rays. These rays are
detected and reconstructed into images.
• Output:

Functional 2D or 3D images showing metabolic activity.


• Applications:
o Detecting cancer and monitoring treatment.
o Evaluating brain disorders (e.g., Alzheimer's, epilepsy).
o Assessing heart function.
• Advantages:
o Provides functional and metabolic information.
o Can detect diseases at an early stage.
• Limitations:
o High cost and limited availability.
o Requires radioactive tracers, which expose patients to radiation.
• Radiation Dose:
Moderate to high, depending on the tracer used.

[Figure: example images from the imaging modalities above]


Conclusion
• X-ray: Best for bones and lungs; quick and inexpensive but limited soft tissue detail.
• CT: Excellent for detailed 3D imaging of bones and soft tissues; higher radiation dose.
• MRI: Superior for soft tissues and functional imaging; no radiation but expensive and time-consuming.
• MRA: Specialized MRI for blood vessels; no radiation but requires contrast agents.
• PET: Provides metabolic and functional information; useful for cancer and brain disorders but involves radiation exposure.


Each imaging modality has its strengths and is chosen based on the clinical requirements and the specific part of the body being examined.
