Unit 1 Digital Image Fundamentals (DIP)

The document provides a comprehensive overview of Digital Image Processing (DIP), covering its fundamentals, applications, and key concepts such as image acquisition, enhancement, and recognition. It details the characteristics of DIP, the role of image sensors, and the importance of color models like RGB and HSI. Additionally, it explains the processes of sampling and quantization necessary for converting physical images into digital formats.


Digital Image Processing

Unit -1
Digital Image Fundamentals
This Digital Image Processing tutorial covers both basic and advanced concepts of image processing and is designed for beginners and professionals alike.

Digital Image Processing manipulates images by means of algorithms. Among end-user software for processing digital images, Adobe Photoshop is one of the most widely used.

The tutorial covers all major topics of Digital Image Processing, such as the introduction, computer graphics, signals, photography, camera mechanisms, pixels, transformations, types of images, etc.

What is Digital Image Processing (DIP)?

Digital Image Processing (DIP) is the use of a computer system and algorithms to manipulate digital images. It is also used to enhance images and to extract useful information from them.

For example: Adobe Photoshop, MATLAB, etc.

It is also used in the conversion of the signals from an image sensor into digital images.

Digital Image Processing


o Digital Image Processing draws on related areas such as computer graphics, signals, photography, camera mechanisms, and pixel representation.
o Digital Image Processing provides a platform to perform various operations such as image enhancement and the processing of analog and digital signals, including image signals and voice signals.
o It can produce images in different formats.

Digital Image Processing allows users the following tasks


o Image sharpening and restoration: Common applications include zooming, blurring, sharpening, grayscale conversion, edge detection, image recognition, and image retrieval.
o Medical field: Common applications include gamma-ray imaging, PET scans, X-ray imaging, medical CT, and UV imaging.
o Remote sensing: Scanning the earth with satellites and observing activity on its surface from space.
o Machine/Robot vision: Giving robots the ability to see things, identify them, and act on what they see.

Characteristics of Digital Image Processing


o It uses software tools, some of which are free of cost.
o It produces clearer images.
o It applies image enhancement to recover information from images.
o It is widely used in many fields.
o It reduces the complexity of working with images.
o It supports applications that improve everyday life.

What is an Image
An image is nothing more than a two-dimensional signal. It is defined by a mathematical function f(x, y), where x and y are the horizontal and vertical co-ordinates.

The value of f(x, y) at any point gives the pixel value of the image at that point, for example the intensity displayed on a computer screen.
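This function view of an image can be made concrete with a small sketch. The pixel values below are made up purely for illustration:

```python
# A tiny grayscale "image" as a discrete function f(x, y):
# each entry is an intensity value in the range 0-255.
image = [
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255],
]

def f(x, y):
    """Return the pixel value at column x, row y."""
    return image[y][x]

print(f(2, 0))  # intensity at x=2, y=0 -> 128
```

A real image behaves the same way, only with far more samples per axis.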

Steps In Digital Image Processing


Purpose of Image processing

The main purposes of DIP can be divided into the following 5 groups:


1. Visualization : Observing objects that are not directly visible.
2. Image sharpening and restoration : Creating a better, higher-resolution image.
3. Image retrieval : Seeking out an image of interest.
4. Measurement of pattern : Measuring the objects in an image.
5. Image Recognition : Distinguishing the objects in an image.
Following are Fundamental Steps of Digital Image Processing:

1. Image Acquisition
Image acquisition is the first of the fundamental steps of DIP. In this stage, the image is obtained in digital form. Generally, pre-processing such as scaling is also done in this stage.

2. Image Enhancement
Image enhancement is the simplest and most appealing area of DIP. In this stage, obscured details and interesting features of an image, such as brightness and contrast, are brought out.

3. Image Restoration
Image restoration is the stage in which the appearance of an image is improved.

4. Color Image Processing


Color image processing has gained importance because of the increased use of digital images on the internet. It includes color modeling, processing in a digital domain, etc.

5. Wavelets and Multi-Resolution Processing


In this stage, an image is represented at various degrees of resolution. The image is divided into smaller regions for data compression and for pyramidal representation.

6. Compression
Compression is a technique used to reduce the storage required for an image, or the bandwidth required to transmit it. It is a very important stage, because compressing data is essential for use on the internet.
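As a toy illustration of the idea (not of any particular compression standard), run-length encoding, one of the simplest lossless schemes, replaces repeated pixel values with (value, count) pairs:

```python
def rle_encode(pixels):
    """Run-length encode a flat list of pixel values:
    store [value, run_length] pairs instead of every pixel."""
    encoded = []
    for value in pixels:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([value, 1])   # start a new run
    return encoded

row = [255, 255, 255, 255, 0, 0, 255]
print(rle_encode(row))  # [[255, 4], [0, 2], [255, 1]]
```

Images with large uniform areas compress well under such schemes; noisy images do not, which is why practical formats use more sophisticated methods.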

7. Morphological Processing
This stage deals with tools for extracting image components that are useful in the representation and description of shape.
8. Segmentation
In this stage, an image is partitioned into its constituent objects. Segmentation is one of the most difficult tasks in DIP, and a great deal of effort usually goes into it, because imaging problems that require objects to be identified individually depend on successful segmentation.

9. Representation and Description


Representation and description follow the output of the segmentation stage. That output is raw pixel data comprising all the points of a region. Representation transforms the raw data into a form suitable for further processing, whereas description extracts the information used to differentiate one class of objects from another.

10. Object recognition


In this stage, a label is assigned to each object based on its descriptors.

11. Knowledge Base


The knowledge base is the last component of DIP. It records prior information about the image domain, such as the regions where important information is located, which limits the search processes. The knowledge base can be very complex, for example when the image database consists of high-resolution satellite images.

Components of Image Processing System

An image processing system is the combination of the different elements involved in digital image processing, that is, the processing of an image by means of a digital computer, using different computer algorithms.
It consists of the following components:
Image Sensors:
Image sensors sense the intensity, amplitude, co-ordinates, and other features of the scene (the problem domain) and pass the result to the image processing hardware.

Image Processing Hardware:


Image processing hardware is dedicated hardware that processes the signals obtained from the image sensors. It passes the result to a general-purpose computer.

Computer:
The computer in an image processing system can be a general-purpose computer of the kind used in daily life.

Image Processing Software:


Image processing software is the software that includes all the mechanisms and algorithms
that are used in image processing system.

Mass Storage:
Mass storage stores the pixels of the images during the processing.

Hard Copy Device:


Once the image is processed, it can be recorded on a hard copy device such as a printer, or saved to external storage such as a pen drive.

Image Display:
It includes the monitor or display screen that displays the processed images.

Network:
Network is the connection of all the above elements of the image processing system.

Image Sensing And Acquisition In Digital Image Processing


Image sensing and Acquisition
Image sensing and acquisition are crucial initial steps in digital image processing, where
physical scenes are converted into digital representations. Here's an overview of these
processes:
1. Image Sensing : Image sensing refers to the process of capturing physical scenes and
converting them into electrical signals. This is typically done using imaging devices such
as cameras or scanners.
2. Optical System : In cameras, the optical system focuses light from the scene onto an
image sensor. This system includes components like lenses, apertures, and filters, which
affect the quality and characteristics of the captured image.
3. Image Sensor : The image sensor is a semiconductor device that converts the optical
image formed by the optical system into an electrical signal. Common types of image
sensors include CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-
Semiconductor) sensors.
4. Sampling : Once the optical image is converted into electrical signals by the image
sensor, it is sampled to discretize it into a digital format. Sampling involves measuring
the intensity of light at discrete points in the image.
5. Quantization : After sampling, the continuous intensity values obtained are quantized
into discrete levels to represent digital images. This process involves assigning digital
values to each sampled point based on the intensity range of the captured scene.
6. Analog-to-Digital Conversion (ADC) : ADC converts the analog electrical signals
obtained from the image sensor into digital data that can be processed by a computer.
This conversion involves assigning digital values to the sampled points based on their
intensity levels.
7. Digital Image Formation : The final result of image sensing and acquisition is a digital
image, which consists of a grid of pixels, where each pixel represents a discrete sample
of the scene's intensity at a specific location.
Overall, image sensing and acquisition lay the foundation for subsequent image processing
tasks such as enhancement, analysis, and recognition. The quality and characteristics of the
acquired digital image greatly influence the effectiveness of these subsequent processing
steps.

Image Sampling and Quantization


To create a digital image, we need to convert the continuous sensed data into digital form.
This conversion involves two processes:

1. Sampling : Digitizing the co-ordinate values is called sampling.


2. Quantization : Digitizing the amplitude values is called quantization.

To convert a continuous image f(x, y) into digital form, we have to sample the function in both
co-ordinates and amplitude.
Sampling vs. Quantization:

o Sampling digitizes the co-ordinate values; quantization digitizes the amplitude values.
o In sampling, the x-axis (time) is discretized while the y-axis (amplitude) stays continuous; in quantization, the y-axis (amplitude) is discretized while the x-axis (time) stays continuous.
o Sampling is done prior to quantization; quantization is done after sampling.
o Sampling determines the spatial resolution of the digitized image; quantization determines the number of grey levels in it.
o Sampling reduces the continuous curve to a series of tent poles over time; quantization reduces it to a continuous series of stair steps.
o In sampling, a single amplitude value is selected from the values within each time interval to represent it; in quantization, the sampled values are rounded off to a defined set of possible amplitude values.
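The two processes can be made concrete with a tiny sketch, using a 1-D sine signal in place of an image row. The sample count (8) and number of levels (5) are arbitrary illustrative choices:

```python
import math

def sample(signal, num_samples, length):
    """Sampling: evaluate a continuous signal at discrete x positions."""
    return [signal(i * length / num_samples) for i in range(num_samples)]

def quantize(values, levels, vmin=-1.0, vmax=1.0):
    """Quantization: round each sampled amplitude to one of `levels`
    evenly spaced discrete values between vmin and vmax."""
    step = (vmax - vmin) / (levels - 1)
    return [round((v - vmin) / step) * step + vmin for v in values]

samples = sample(math.sin, 8, 2 * math.pi)   # discretize the x-axis
digital = quantize(samples, 5)               # discretize the amplitude
print(digital)
```

For a 2-D image the same two steps happen along both spatial axes (sampling) and on the intensity axis (quantization).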
Relationship Between Pixels
A pixel, short for "picture element," is the smallest unit of a digital image or display. It is a single small dot that represents one color and is the most basic building block of a digital image.

In digital imaging, the image is a grid of pixels, and the combination of thousands or millions of such pixels creates the overall visual representation that users see on their screens. The term "pixel" came about when digital imaging technologies were developed in the mid-20th century.

Defining Key Terminologies


 Pixel (Picture Element): A pixel is the smallest element of a digital picture, representing one spot in the whole image. Each pixel carries information about color, brightness, and position; put together, pixels make up the letters, pictures, and videos shown on digital screens.

 Resolution: Resolution is the number of pixels in a digital image, usually expressed as width by height. Higher resolution captures more detail. For printed images, resolution is also measured in pixels per inch (PPI) or pixels per centimeter (PPCM). For example, a 1920 x 1080 display has 1920 pixels horizontally and 1080 pixels vertically.

 Pixel Density: Pixel density is the number of pixels per unit of physical screen size, often expressed as pixels per inch (PPI). It determines how sharp a picture looks: the higher the density, the sharper the image. Mobile phones with high pixel density render images that are colorful and clear.

 Color Depth: Bit depth, also called color depth, is the number of bits used to represent the color of each pixel. Usual values are 8-bit, 16-bit, and 24-bit. The more bits a pixel has, the more distinct colors it can show, giving a wider and deeper range of colors.

 Raster and Vector Graphics: In raster graphics, images are built from a grid of pixels. In contrast, vector graphics use mathematical equations to define shapes, which lets them scale up without losing quality. Because vector graphics are not tied to pixels, they are well suited to jobs like making logos and illustrations.
 Aspect Ratio: Aspect ratio is the ratio between an image's width and height. Common aspect ratios include 4:3, 16:9, and 1:1. Different devices and mediums can impose particular aspect ratios, affecting how pictures are shown or captured.
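The arithmetic behind these definitions is simple enough to sketch. The numbers below are common example values, not tied to any particular device:

```python
import math

# Resolution as a pixel count (a common Full HD example).
width, height = 1920, 1080
total_pixels = width * height
print(total_pixels)                    # 2073600, i.e. about 2.1 megapixels

# Aspect ratio: reduce width:height by their greatest common divisor.
g = math.gcd(width, height)
print(f"{width // g}:{height // g}")   # 16:9

# Color depth: an n-bit pixel can represent 2**n distinct values.
for bits in (8, 16, 24):
    print(f"{bits}-bit -> {2 ** bits} colors")
```

Note how quickly color depth grows: 24-bit color already allows over 16 million distinct values per pixel.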
Basic Relationships Between Pixels
 Neighborhood
 Adjacency
 Paths
 Connectivity
 Regions
 Boundaries
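The first of these relationships, the neighborhood, can be sketched in code: the 4-neighborhood N4(p) and 8-neighborhood N8(p) of a pixel p = (x, y). This is a minimal sketch that ignores image borders, where some listed neighbors would fall outside the image:

```python
def neighbors_4(x, y):
    """N4(p): the horizontal and vertical neighbors of pixel (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def neighbors_8(x, y):
    """N8(p): N4(p) plus the four diagonal neighbors ND(p)."""
    diagonal = [(x + 1, y + 1), (x + 1, y - 1),
                (x - 1, y + 1), (x - 1, y - 1)]
    return neighbors_4(x, y) + diagonal

print(neighbors_4(1, 1))  # [(2, 1), (0, 1), (1, 2), (1, 0)]
```

Adjacency, paths, connectivity, regions, and boundaries are all defined in terms of these neighborhoods.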

Color Image Fundamentals


 In automated image analysis, color is a powerful descriptor, which simplifies object
identification and extraction.
 The human eye can distinguish between thousands of color shades and intensities but
only about 20-30 shades of gray. Hence, use of color in human image processing would
be very effective.
 Color image processing consists of two parts: Pseudo-color processing and Full color
processing.
 In pseudo-color processing, (false) colors are assigned to a monochrome image. For example, objects with different intensity values may be assigned different colors, enabling easy identification and recognition by humans.
Color Models
• The purpose of a color model (or color space or color system) is to facilitate the specification
of color in some standard fashion.

• A color model is a specification of a 3-D coordinate system and a subspace within that
system where each color is represented by a single point.

• Most color models in use today are either based on hardware (color camera, printer) or on
applications involving color manipulation (computer graphics, animation).

• In image processing, the hardware based color models mainly used are: RGB, CMYK, and
HSI.

• The RGB (red, green, blue) color system is used mainly in color monitors and video cameras.

• The CMYK (cyan, magenta, yellow, black) color system is used in printing devices.

• The HSI (hue, saturation, intensity) is based on the way humans describe and interpret color.
It also helps in separating the color and grayscale information in an image.

RGB and HSI models


RGB Color Space
RGB stands for Red, Green, and Blue. This color space is widely used in computer graphics. Red, green, and blue are the primary colors from which many other colors can be made.

RGB can be represented in 3-dimensional form as a color cube, with red, green, and blue along the three axes.

RGB : The RGB colour model is the most common colour model used in digital image
processing and OpenCV. A colour image consists of 3 channels, one channel per
colour. Red, Green, and Blue are the main colour components of this model. All other colours
are produced by proportional mixtures of these three colours. A value of 0 represents black,
and the colour intensity increases as the value increases.

Properties:
 This is an additive colour model: colours are added on top of black.
 3 main channels: Red, Green and Blue.
 Used in DIP, openCV and online logos.

Colour combination:

Green(255) + Red(255) = Yellow

Green(255) + Blue(255) = Cyan

Red(255) + Blue(255) = Magenta

Red(255) + Green(255) + Blue(255) = White
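These combinations follow directly from additive mixing, which can be sketched as follows (the `mix` helper is illustrative, not a library function):

```python
def mix(*colors):
    """Additive mixing: sum each channel, clamped to the 0-255 range."""
    return tuple(min(255, sum(channel)) for channel in zip(*colors))

RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)

print(mix(GREEN, RED))         # (255, 255, 0)   -> yellow
print(mix(GREEN, BLUE))        # (0, 255, 255)   -> cyan
print(mix(RED, BLUE))          # (255, 0, 255)   -> magenta
print(mix(RED, GREEN, BLUE))   # (255, 255, 255) -> white
```

The clamp to 255 reflects the fixed 8-bit range of each channel in a typical 24-bit image.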


The HSI Color Model
HSI stands for hue, saturation, intensity. This model is interesting because, although it may initially seem less familiar than the RGB model, it describes color in a way that is much more consistent with human visual perception.

 Hue is the color itself. When you look at something and try to assign a word to the color
that you see, you are identifying the hue. The concept of hue is consistent with the way
in which a particular wavelength of light corresponds to a particular perceived color.

 RGB is useful for hardware implementations and is serendipitously related to the way in
which the human visual system works.
 However, RGB is not a particularly intuitive way in which to describe colors.
 Rather when people describe colors they tend to use hue, saturation and brightness.
 RGB is great for color generation, but HSI is great for color description.
 Hue: A color attribute that describes a pure color (pure yellow, orange or red).
 Saturation: Gives a measure of how much a pure color is diluted with white light.
 Intensity: Brightness is nearly impossible to measure because it is so subjective. Instead
we use intensity.
 Intensity is the same achromatic notion that we have seen in grey level images

The Hue component describes the color itself as an angle in the range [0, 360] degrees: 0 degrees means red, 120 degrees means green, and 240 degrees means blue; 60 degrees is yellow and 300 degrees is magenta.
The Saturation component signals how much the color is diluted with white light. The range of the S component is [0, 1].
The Intensity range is [0, 1], where 0 means black and 1 means white.
Hue is more meaningful when saturation approaches 1 and less meaningful when saturation approaches 0 or when intensity approaches 0 or 1. Intensity also limits the attainable saturation values.
The formulas that convert from RGB to HSI and back are more complicated than those of other color models, so we will not elaborate on the detailed specifics of this process here.
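Although the full derivation is outside the scope of this unit, the commonly used geometric RGB-to-HSI formula is short enough to sketch. Inputs are assumed normalized to [0, 1]; the handling of black and gray (where hue and saturation are undefined) is a convention chosen here, not part of the formula itself:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (each in [0, 1]) to HSI:
    hue in degrees [0, 360), saturation and intensity in [0, 1]."""
    i = (r + g + b) / 3.0
    if i == 0:
        return 0.0, 0.0, 0.0               # black: hue/saturation undefined
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                            # gray: hue undefined
    else:
        # Clamp to guard against floating-point drift outside [-1, 1].
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, saturation 1
```

Pure green maps to a hue of 120 degrees and pure blue to 240 degrees, matching the angles given above.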
