
University of Barisal

An Assignment
On
Introduction and Digital Image Fundamentals (Chapter 1-2)

Course Title: Digital Image Processing


Course Code: CSE-4201

Submitted To

Sohely Jahan
Lecturer,
Department of Computer Science and Engineering
University of Barishal

Submitted By

Group (1)

Md. Saimun Islam (16CSE010)
Md. Showkat Imam (16CSE029)
Md. Kamruzzaman (14CSE037)
Md. Jane Alam (16CSE005)
Md. Foysal Sheikh (16CSE035)

Date of Submission: November 22, 2021
Digital Image Processing (DIP)

Digital image processing is the manipulation of digital data with the help of computer hardware and software, for example to produce digital maps in which specific information has been extracted and highlighted.
Digital image processing deals with the manipulation of digital images through a digital computer. It is a subfield of signals and systems but focuses particularly on images. DIP is concerned with developing computer systems that are able to perform processing on an image.
Image processing is a method of performing operations on an image in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image.
Digital image processing provides a platform to perform various operations such as image enhancement and the processing of analog and digital signals, including image signals and voice signals.

How DIP works

Fig.1. Process of Capturing image using DIP


Characteristics of Digital Image Processing
o It uses software tools, some of which are free of cost.
o It produces clear, high-quality images.
o It performs image enhancement to recover information from images.
o It is widely used in many fields.
o It reduces the complexity of image-analysis tasks.
o It supports a better quality of everyday life.

Advantages of Digital Image Processing


o Image reconstruction (CT, MRI, SPECT, PET)
o Image reformatting (Multi-plane, multi-view reconstructions)
o Fast image storage and retrieval
o Fast and high-quality image distribution.
o Controlled viewing (windowing, zooming)

Disadvantages of Digital Image Processing


o It is very time-consuming.
o It can be very costly, depending on the particular system.
o It requires qualified persons to operate.

Applications of Digital Image Processing


o Image sharpening and restoration: Common applications include zooming, blurring, sharpening, grayscale conversion, edge detection, image recognition, and image retrieval.
o Medical field: Common applications include gamma-ray imaging, PET scans, X-ray imaging, medical CT, and UV imaging.
o Remote sensing: The process of scanning the earth using satellites and analyzing the acquired imagery.
o Machine/Robot vision: Works on the vision of robots so that they can see things, identify them, etc.
o Pattern recognition: Involves the study of image processing combined with artificial intelligence, so that computer-aided diagnosis, handwriting recognition and image recognition can be implemented. Nowadays, image processing is widely used for pattern recognition.
o Video processing: A video is a collection of frames or pictures arranged so that playing them rapidly creates the appearance of motion. Video processing involves frame rate conversion, motion detection, noise reduction, color space conversion, etc.

Fundamental Steps in Digital Image Processing

Fig.2. Steps of Digital Image Processing


1. Image Acquisition
 This is the first stage of the fundamental steps of digital image processing. Image acquisition could be as simple as being given an image that is already in digital form.
 Generally, the image acquisition stage involves preprocessing, such as scaling.

Fig.3. An Example of Steps of Digital Image Acquisition

2. Image Enhancement
Image enhancement is among the simplest and most appealing areas of digital image processing.
Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image, for example by changing brightness and contrast.

Fig.4. Enhancing grayscale images with histogram equalization.
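A minimal sketch of histogram equalization in Python with OpenCV, assuming an 8-bit grayscale input file named input.jpg (a placeholder name):

    import cv2

    # Histogram equalization sketch; "input.jpg" is a placeholder file name.
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # read as 8-bit grayscale
    equalized = cv2.equalizeHist(img)  # redistribute intensities across 0-255
    cv2.imwrite("equalized.jpg", equalized)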


3. Image Restoration
Image restoration is an area that also deals with improving the appearance of an image.
However, unlike enhancement, which is subjective, image restoration is objective, in the sense
that restoration techniques tend to be based on mathematical or probabilistic models of image
degradation.

Fig.5. An example of Image restoration in DIP

4. Color Image Processing


Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It includes color modeling and processing in a digital domain.

Fig.6. Color Image Processing approach in DIP

5. Wavelets and Multiresolution Processing


Wavelets are the foundation for representing images in various degrees of resolution. Images are subdivided successively into smaller regions for data compression and for pyramidal representation.
6. Compression
Compression deals with techniques for reducing the storage required to save an image or the bandwidth needed to transmit it. Compression is particularly important when images are transmitted over the internet.

Fig.7. An example process of image compression in DIP
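One classical lossless idea, shown here only as an illustrative sketch, is run-length encoding: runs of identical pixel values are stored as (value, count) pairs. The function name rle_encode is ours, not a standard library routine:

    def rle_encode(row):
        # Encode one non-empty row of pixel values as (value, run-length) pairs.
        runs = []
        count = 1
        for prev, cur in zip(row, row[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((row[-1], count))
        return runs

    print(rle_encode([0, 0, 0, 255, 255, 0]))  # [(0, 3), (255, 2), (0, 1)]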

7. Morphological Processing
Morphological processing deals with tools for extracting image components that are useful in the
representation and description of shape.

Fig.8. An example of Morphological Processing in DIP

8. Segmentation
Segmentation procedures partition an image into its constituent parts or objects. In general,
autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged
segmentation procedure brings the process a long way toward successful solution of imaging
problems that require objects to be identified individually.
Fig.9. An example of Segmentation in DIP

9. Representation and Description


Representation and description almost always follow the output of a segmentation stage, which
usually is raw pixel data, constituting either the boundary of a region or all the points in the
region itself. Choosing a representation is only part of the solution for transforming raw data into
a form suitable for subsequent computer processing. Description deals with extracting attributes
that result in some quantitative information of interest or are basic for differentiating one class of
objects from another.
10. Object recognition
Recognition is the process that assigns a label, such as "vehicle", to an object based on its descriptors.

Fig.10. An example of Object recognition approach in DIP

11. Knowledge Base


Knowledge may be as simple as detailing regions of an image where the information of interest
is known to be located, thus limiting the search that has to be conducted in seeking that
information. The knowledge base also can be quite complex, such as an interrelated list of all
major possible defects in a materials inspection problem or an image database containing high-
resolution satellite images of a region in connection with change-detection applications.
Components of a Digital Image Processing System
 Image Sensors
 Specialized Image Processing Hardware
 Specialized Image Processing Software
 Computer
 Mass Storage
 Image Display
 Hard Copy Device
 Network

Fig.11. Components of a DIP

Image Sensors
Image sensing refers to the acquisition of the image signal.
An image sensor is a 2D array of light-sensitive elements that convert photons into electrons.
Two elements are required to capture digital images.
The first is a physical device (the sensor) that is sensitive to the energy radiated by the object we wish to image.
The second is a digitizer, which converts the output of the physical sensing device into digital form.
CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor) image sensors are widely used in image-capturing devices such as digital cameras.

Fig.12. CMOS Image Sensors (CIS)


Specialized Image Processing Hardware

 It consists of the digitizer and hardware that performs primitive operations, such as an
Arithmetic Logic Unit (ALU), which performs arithmetic and logical operations in
parallel on entire images.
 This type of hardware sometimes is called a frontend subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs functions that
require fast data throughputs that the typical main computer cannot handle.

Specialized Image Processing Software

 Software for image processing consists of specialized modules that perform specific
tasks.
 A well-designed package also includes the capability for the user to write code that, as a
minimum, utilizes the specialized modules.
 Examples: Photoshop, GIMP (GNU Image Manipulation Program), Fireworks, Pixelmator and Inkscape.

Fig.13. Photoshop

Computer
 The computer in an image processing system is a general-purpose computer and
can range from a PC to a supercomputer.
 In dedicated applications, sometimes specially designed computers are used to
achieve a required level of performance.
Mass Storage
 Mass storage refers to the storage of a large amount of data in a persistent and machine-readable fashion.
 Mass storage capability is a must in image processing applications.
 Digital storage for image processing applications falls into three principal categories:
1. Short-term storage for use during processing.
2. On-line storage for relatively fast recall.
3. Archival storage, characterized by infrequent access.

Hard Copy Device
Hard copy devices, used for recording images, include laser printers, film cameras, heat-sensitive devices, inkjet units and digital units such as optical and CD-ROM disks.

Image Display

 Image display is the final link in the digital image processing chain.
 Image displays are mainly color TV monitors.

Networking

 It is a required component for transmitting image information between networked computers.


 Because of the large amount of data inherent in image processing applications, the key
consideration in image transmission is bandwidth.
What is an Image?
An image is a two-dimensional signal, analog or digital, that contains intensity or color information arranged along the x and y spatial axes.
An image is also a two-dimensional array arranged in rows and columns. A digital image is composed of picture elements, also called image elements, pels, or pixels; "pixel" is the term most widely used to denote the elements of a digital image.
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.

Fig.14. An example of a digital image in DIP

Analog Image Processing vs. Digital Image Processing

1. Analog image processing is applied to analog signals and processes only two-dimensional signals. Digital image processing is applied to digital signals and works on analyzing and manipulating images.

2. Analog signals are time-varying, so the images formed under analog image processing vary. Digital image processing improves the digital quality of the image and controls the intensity distribution precisely.

3. Analog image processing is a slower and costlier process. Digital image processing offers cheaper and faster image storage and retrieval.

4. Analog signals capture the real world but do not give good image quality. Digital image processing uses good image compression techniques that reduce the amount of data required and produce good-quality images.
Difference between the Analog signals and Digital signals

1. Analog signals are difficult to analyse at first. Digital signals are easy to analyse.

2. Analog signals are more accurate than digital signals. Digital signals are less accurate.

3. Analog signals take time to store and would require infinite memory. Digital signals can be easily stored.

4. Recording an analog signal preserves the original signal. Recording a digital signal preserves sampled versions of the signal.

5. Analog signals give a continuous representation. Digital signals give a discontinuous (discrete) representation.

6. Analog signals are affected by too much noise. Digital signals are far less affected by noise.

7. Examples of analog signals: the human voice, thermometers, analog phones. Examples of digital signals: computers, digital phones, digital pens.

Dimensions of image
Image dimensions are the length and width of a digital image; an image has no depth. An image is measured in pixels. An image is two-dimensional, which is why it is defined as a two-dimensional signal. Many graphics programs let you view and work with an image in inches or centimeters. Depending on the plane, the size of the image can be changed.

How does television work?


As we have studied above, an image has only two dimensions. If we want to extend an image into a third dimension, we need one more variable. Let time be the third dimension: two-dimensional images can then move along the time dimension.

The same concept is used in television. For example, when we play a video, it is nothing but two-dimensional pictures moving over the time dimension. Since two-dimensional objects are moving along a third dimension, time, we can call the result three-dimensional.
Different dimensions of signals
1-Dimension Signal
A noisy voice signal is an example of a one-dimensional signal. In mathematics, it can be represented as:

F(x) = waveform

As it is a one-dimensional signal, only one independent variable is used.

A question arises here: if it is a one-dimensional signal, why does it have two axes?

Although it is a one-dimensional signal, it is drawn in a two-dimensional space. In other words, to represent a one-dimensional signal we use a two-dimensional space.

2-Dimension Signal
Any object that has length and height is a two-dimensional signal. It has two independent variables. In mathematics, it can be represented as:

F(x, y) = Object

For example, a picture is a two-dimensional signal over coordinates X and Y. At each point an intensity value is given; mapped onto a computer screen, these values form a two-dimensional image.
3-Dimension Signal
Any object that has length, height and depth is a three-dimensional signal. It has three independent variables. In mathematics, it can be represented as:

F (x, y, z) = Animated object

Our earth is a three-dimensional world. A cube is also an example of a three-dimensional signal.

4-Dimension Signal
Any object that has length, height, depth and time is a four-dimensional signal. It has four independent variables. In mathematics, it can be represented as:

F(x, y, z, t) = Animated movie

In reality, animated movies are 4D: three spatial dimensions plus the fourth dimension, time. In animated movies, each character is 3D and moves with respect to time.

Pixel
A pixel is the smallest element of an image. Each pixel corresponds to one value. In an 8-bit grayscale image, the value of a pixel lies between 0 and 255. The value of a pixel at any point corresponds to the intensity of the light photons striking that point; each pixel stores a value proportional to the light intensity at that particular location.

PEL
A pixel is also known as a PEL (picture element). You can get a better understanding of pixels from the pictures given below. In the picture above there may be thousands of pixels that together make up the image. If we zoom into that image far enough, we become able to see individual pixel divisions, as shown in the image below.

Calculation of total number of pixels


We have defined an image as a two-dimensional signal or matrix. In that case, the number of PELs is equal to the number of rows multiplied by the number of columns. This can be represented mathematically as:

Total number of pixels = number of rows × number of columns

In other words, the number of (x, y) coordinate pairs makes up the total number of pixels. We will look in more detail, in the tutorial on image types, at how we calculate the pixels in a color image.

Gray level
The value of a pixel at any point denotes the intensity of the image at that location, which is also known as the gray level. We will see the value of pixels in more detail in the image storage and bits-per-pixel tutorial; for now we will just look at the concept of a single pixel value.

Pixel value
As already defined at the beginning of this tutorial, each pixel can have only one value, and each value denotes the intensity of light at that point of the image. We will now look at a very unique value: 0. The value 0 means absence of light; that is, 0 denotes dark, so whenever a pixel has a value of 0, black color is formed at that point.

Have a look at this image matrix

0 0 0

0 0 0

0 0 0

Now this image matrix is entirely filled with 0s. All the pixels have a value of 0. If we were to calculate the total number of pixels in this matrix, this is how we would do it:

Total no. of pixels = total no. of rows × total no. of columns

= 3 × 3

= 9
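The same count can be checked with a short Python sketch (using NumPy to hold the matrix):

    import numpy as np

    image = np.zeros((3, 3), dtype=np.uint8)  # the all-zero (all-black) matrix above
    rows, cols = image.shape
    print(rows * cols)  # 9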

Bits in mathematics:

If we devise a formula for the total number of combinations that can be made from a given number of bits, it is:

Number of colors = 2^bpp

where bpp denotes bits per pixel. Put 1 in the formula and you get 2; put in 2 and you get 4. It grows exponentially.

Number of different colors:

Now, as we said at the beginning, the number of different colors depends on the number of bits per pixel:

1 bpp = 2 colors
2 bpp = 4 colors
3 bpp = 8 colors
4 bpp = 16 colors
5 bpp = 32 colors
6 bpp = 64 colors
7 bpp = 128 colors
8 bpp = 256 colors
10 bpp = 1024 colors
16 bpp = 65536 colors
24 bpp = 16777216 colors
32 bpp = 4294967296 colors

Shades

You can easily notice the pattern of exponential growth. The famous grayscale image format is 8 bpp, meaning it has 256 different gray values, or 256 shades. The number of shades can be represented as 2^bpp; for 8 bpp this gives 2^8 = 256 shades.

Color images are usually of the 24 bpp format, or 16 bpp. We will see more about other color formats and image types in the tutorial on image types.
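The table above is simply the formula (number of colors = 2^bpp) evaluated at different bit depths; a few lines of Python reproduce it:

    # Number of representable colors for each bit depth: colors = 2 ** bpp
    for bpp in (1, 2, 3, 4, 5, 6, 7, 8, 10, 16, 24, 32):
        print(bpp, "bpp ->", 2 ** bpp, "colors")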

Image storage requirements

After the discussion of bits per pixel, we now have everything we need to calculate the size of an image.

Image size: the size of an image depends upon three things.

 Number of rows

 Number of columns

 Number of bits per pixel

The formula for calculating the size is given below:

Size of an image = rows × cols × bpp

Suppose an image has 1024 rows and 1024 columns. Since it is a grayscale image, it has 256 different shades of gray, i.e., 8 bits per pixel. Putting these values in the formula, we get

Size of an image = rows * cols * bpp

= 1024 * 1024 * 8

= 8388608 bits.

But since bits are not the unit we usually work in, we convert the result into more familiar units.

Converting to bytes = 8388608 / 8 = 1048576 bytes.

Converting to kilobytes = 1048576 / 1024 = 1024 KB.

Converting to megabytes = 1024 / 1024 = 1 MB.
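The whole calculation is easy to script; a minimal sketch reproducing the worked example:

    # A 1024 x 1024 grayscale image at 8 bits per pixel.
    rows, cols, bpp = 1024, 1024, 8
    size_bits = rows * cols * bpp   # 8388608 bits
    size_bytes = size_bits // 8     # 1048576 bytes
    size_kb = size_bytes // 1024    # 1024 KB
    size_mb = size_kb // 1024       # 1 MB
    print(size_bits, size_bytes, size_kb, size_mb)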

Elements of Visual Perception


The field of digital image processing is built on a foundation of mathematical and probabilistic formulation, but human intuition and analysis play a central role in choosing among the various techniques, and that choice is often based on subjective, visual judgements. In human visual perception, the eyes act as the sensor or camera, the neurons act as the connecting cable, and the brain acts as the processor.

The basic elements of visual perceptions are:

1. Structure of Eye
2. Image Formation in the Eye
3. Brightness Adaptation and Discrimination
Structure of Eye

Fig.18. Structure of Human Eye

The human eye is a slightly asymmetrical sphere with an average diameter of 20 mm to 25 mm and a volume of about 6.5 cc. The eye works much like a camera: the external object is seen the way a camera takes a picture of an object. Light enters the eye through a small hole called the pupil, a black-looking aperture that contracts when the eye is exposed to bright light, and is focused on the retina, which acts like camera film.
The lens, iris, and cornea are nourished by a clear fluid (the aqueous humor) held in the anterior chamber. The fluid flows from the ciliary body to the pupil and is absorbed through channels in the angle of the anterior chamber. The delicate balance of aqueous production and absorption controls the pressure within the eye.
The cones in the eye number between 6 and 7 million and are highly sensitive to color. Humans perceive colored images in daylight due to these cones. Cone vision is also called photopic or bright-light vision.
The rods in the eye are far more numerous, between 75 and 150 million, and are distributed over the retinal surface. Rods are not involved in color vision and are sensitive to low levels of illumination.
Image Formation in the Eye
The image is formed when the lens of the eye focuses an image of the outside world onto the light-sensitive membrane at the back of the eye, called the retina. The lens focuses light on the photoreceptive cells of the retina, which detect photons of light and respond by producing neural impulses.

Fig.19. Image formation in the eye

Brightness Adaptation and Discrimination

Digital images are displayed as a discrete set of intensities. The eye's ability to discriminate between black and white at different intensity levels is an important consideration in presenting image processing results.

Fig.20. Brightness adaptation and discrimination

The range of light intensity levels to which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit. In photopic vision alone, the range is about 10^6.
Sampling, Quantization and Resolution

Image Acquisition

Images are typically generated by illuminating a scene and absorbing the energy reflected by the
objects in that scene.

Typical notions of illumination and scene can be way off:

• X-rays of a skeleton
• Ultrasound of an unborn baby
• Electron-microscope images of molecules
Image Sensing

• Incoming energy lands on a sensor material responsive to that type of energy and this
generates a voltage
• Collections of sensors are arranged to capture images

(Figure: a single imaging sensor, a line of image sensors, and an array of image sensors.)
Sampling and quantization

In order to become suitable for digital processing, an image function f(x, y) must be digitized both spatially and in amplitude. Typically, a frame grabber or digitizer is used to sample and quantize the analogue video signal. Hence, in order to create a digital image, we need to convert continuous data into digital form. There are two steps in which this is done:

 Sampling
 Quantization
The sampling rate determines the spatial resolution of the digitized image, while the quantization level determines the number of grey levels in the digitized image. The magnitude of the sampled image is expressed as a digital value in image processing. The transition between continuous values of the image function and its digital equivalent is called quantization.

The number of quantization levels should be high enough for human perception of fine shading details in the image. The occurrence of false contours is the main problem in images that have been quantized with insufficient brightness levels.
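A minimal sketch of both steps in Python, using a sine wave as a stand-in for the analogue signal (all names and parameter values here are illustrative):

    import numpy as np

    # Sample a stand-in "analogue" signal, then quantize its amplitudes.
    x = np.linspace(0, 2 * np.pi, 16)   # sampling: 16 points along the x axis
    samples = np.sin(x)                 # continuous amplitudes in [-1, 1]
    levels = 5                          # number of quantization levels
    quantized = np.round((samples + 1) / 2 * (levels - 1)).astype(int)  # map to 0..4
    print(quantized)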

Digitizing a signal

As we have seen in the previous tutorials, digitizing an analog signal into a digital one requires two basic steps: sampling and quantization. Sampling is done on the x axis; it is the conversion of the x axis (infinite values) to digital values. The figure below shows the sampling of a signal.
Sampling with relation to digital images

The concept of sampling is directly related to zooming: the more samples you take, the more pixels you get. Oversampling can also be thought of as zooming; this is discussed further in the sampling and zooming tutorial.

But the story of digitizing a signal does not end with sampling; there is another step involved, known as quantization.

What is quantization?

Quantization is the counterpart of sampling: it is done on the y axis. When you quantize an image, you are actually dividing the signal into quanta (partitions).

On the x axis of the signal are the coordinate values, and on the y axis we have the amplitudes. Digitizing the amplitudes is known as quantization.

Here is how it is done:

Quantization

You can see in this image that the signal has been quantized into three different levels. That means that when we sample an image we actually gather a lot of values, and in quantization we assign levels to these values. This is made clearer in the image below.
Quantization levels

In the figure shown for sampling, although the samples had been taken, they were still spanning vertically over a continuous range of gray level values. In the figure shown above, these vertically ranging values have been quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white). The number of levels can vary according to the type of image you want.

The relation of quantization with gray levels has been further discussed below.

Relation of Quantization with gray level resolution:

The quantized figure shown above has 5 different levels of gray. It means that the image formed from this signal would have only 5 different colors: more or less a black and white image with some shades of gray. To make the quality of the image better, you can increase the number of levels, i.e., raise the gray level resolution. If you increase this level to 256, you have a grayscale image, which is far better than a simple black and white image.

Now 256, or 5, or whatever level you choose, is called the gray level. Remember the formula we discussed in the previous tutorial on gray level resolution:

L = 2^k

We have discussed that gray level can be defined in two ways, which were these two:

 Gray level = number of bits per pixel (bpp) (k in the equation)
 Gray level = number of levels per pixel (L in the equation)

In this case the gray level is equal to 256. If we have to calculate the number of bits, we simply put the values in the equation. In the case of 256 levels, we have 256 different shades of gray and 8 bits per pixel (2^8 = 256); hence the image would be a grayscale image.
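A short sketch connecting L = 2^k to requantization (the synthetic ramp image is only a test input):

    import numpy as np

    k = 8
    L = 2 ** k  # 256 gray levels in an 8-bit image
    img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # synthetic ramp image
    five = np.minimum(img // (256 // 5), 4)  # requantize 256 levels down to 5 (0..4)
    print(L, np.unique(five))  # 256 [0 1 2 3 4]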

Image Representation

Before we discuss image acquisition, recall that a digital image is composed of M rows and N columns of pixels, each storing a value. Pixel values are most often grey levels in the range 0-255 (black to white). We will see later that an image can easily be represented as a matrix. A coordinate convention is used to represent digital images: because coordinate values are integers, there is a one-to-one correspondence between x and y and the rows (r) and columns (c) of a matrix.

(a) Image plotted as a surface. (b) Image displayed as a visual intensity array. (c) Image shown as a 2-D numerical array. (The numbers 0, 0.5, and 1 represent black, gray, and white, respectively.)
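A tiny sketch of this matrix view of an image (the values are chosen arbitrarily):

    import numpy as np

    # A digital image as an M x N matrix of gray levels (0 = black, 255 = white).
    img = np.array([[0, 128, 255],
                    [64, 192, 32]], dtype=np.uint8)
    M, N = img.shape
    print(M, N)       # 2 rows, 3 columns
    print(img[0, 2])  # pixel at row 0, column 2 -> 255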

Spatial Resolution

• The spatial resolution of an image is determined by how sampling was carried out
• Spatial resolution simply refers to the smallest discernible detail in an image
• Vision specialists will often talk about pixel size
• Graphic designers will talk about dots per inch (DPI)

Intensity Level Resolution

• Intensity level resolution refers to the number of intensity levels used to represent the image
• The more intensity levels used, the finer the level of detail discernible in an image
• Intensity level resolution is usually given in terms of the number of bits used to store each intensity level

Saturation & Noise

Image Saturation

 Saturation is the purity of a color.


 Saturation is a very important aspect in photography, perhaps as important as contrast.
 In addition to our eyes being naturally attracted to vibrant tones, colors have their own
unique way of telling a story that plays a crucial part in making a photograph.
Image Noise

 Image noise is a random variation of brightness or color information in images.
 It is usually an aspect of electronic noise.
 It can be produced by the image sensor and circuitry of a scanner or digital camera.

Basic Relationships between Pixels


⮚ Neighborhood
⮚ Adjacency
⮚ Connectivity
⮚ Paths
⮚ Regions and boundaries

Neighbors of a Pixel
⮚ Any pixel p(x, y) has two vertical and two horizontal neighbors, given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).

⮚ This set of pixels is called the 4-neighbors of p, and is denoted by N4(p).

⮚ Each of them is at a unit distance from p.

⮚ The four diagonal neighbors of p(x, y) are given by (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1).

⮚ This set is denoted by ND(p).

⮚ Each of them is at a Euclidean distance of 1.414 from p.

⮚ The points in ND(p) and N4(p) together are known as the 8-neighbors of the point p, denoted by N8(p).

⮚ Some of the points in N4, ND and N8 may fall outside the image when p lies on the border of the image.

❖ The 4-neighbors of a pixel p are its vertical and horizontal neighbors, denoted by N4(p).

❖ The 8-neighbors of a pixel p are its vertical, horizontal and 4 diagonal neighbors, denoted by N8(p). The layout around p is:

ND N4 ND
N4 p  N4
ND N4 ND

▪ N4 - 4-neighbors
▪ ND - diagonal neighbors
▪ N8 - 8-neighbors (N4 ∪ ND)
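These neighbor sets are easy to generate; a sketch in Python (border checks deliberately omitted for brevity):

    def n4(x, y):
        # 4-neighbors of pixel (x, y): horizontal and vertical neighbors
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def nd(x, y):
        # Diagonal neighbors of pixel (x, y)
        return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

    def n8(x, y):
        # 8-neighbors: the union of N4 and ND
        return n4(x, y) + nd(x, y)

    print(n8(2, 2))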

Adjacency

⮚ Two pixels are connected if they are neighbors and their gray levels satisfy some specified criterion of similarity.
⮚ For example, in a binary image two pixels are connected if they are 4-neighbors and have the same value (0/1).
⮚ Let V be the set of gray level values used to define adjacency.
⮚ 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
⮚ 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
⮚ m-adjacency: Two pixels p and q with values from V are m-adjacent if:

▪ q is in N4(p), or
▪ q is in ND(p) and the set N4(p) ∩ N4(q) is empty (has no pixels whose values are from V).
Connectivity

❑ Connectivity is used to determine whether pixels are adjacent in some sense.

❑ Let V be the set of gray-level values used to define connectivity; then two pixels p, q that have values from the set V are:

a. 4-connected, if q is in the set N4(p)
b. 8-connected, if q is in the set N8(p)
c. m-connected, iff
i. q is in N4(p), or
ii. q is in ND(p) and the set N4(p) ∩ N4(q) is empty.

Example with V = {1, 2}:

0 1 1
0 2 0
0 0 1

Under 8-connectivity the nonzero pixels of this matrix are connected by multiple (ambiguous) paths; under m-connectivity the ambiguity disappears because only a single path remains.
Adjacency/Connectivity

❑ Pixel p is adjacent to pixel q if they are connected.


❑ Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.


Paths & Path lengths


⮚ A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), (x2, y2), ..., (xn, yn), where (x0, y0) = (x, y) and (xn, yn) = (s, t), and (xi, yi) is adjacent to (xi-1, yi-1) for 1 ≤ i ≤ n.
⮚ Here n is the length of the path.
⮚ We can define 4-, 8-, and m-paths based on the type of adjacency used.

Connected Components

⮚ If p and q are pixels of an image subset S, then p is connected to q in S if there is a path from p to q consisting entirely of pixels in S.

⮚ For every pixel p in S, the set of pixels in S that are connected to p is called a connected component of S.

⮚ If S has only one connected component, then S is called a connected set.
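A minimal sketch of extracting the connected component containing a given pixel, using breadth-first search under 4-adjacency (the set S and all names here are illustrative):

    from collections import deque

    def connected_component(S, start):
        # All pixels of subset S (a set of (x, y) coordinates) reachable from
        # `start` through 4-adjacent paths lying entirely inside S.
        seen = {start}
        queue = deque([start])
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in S and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        return seen

    S = {(0, 0), (0, 1), (1, 1), (3, 3)}
    print(connected_component(S, (0, 0)))  # (3, 3) belongs to a separate component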


Regions and Boundaries

⮚ A subset R of pixels in an image is called a region of the image if R is a connected set.

⮚ The boundary of the region R is the set of pixels in the region that have one or more neighbors that are not in R.

⮚ If R happens to be the entire image, its boundary is defined as the set of pixels in the first and last rows and columns of the image.

Distance measures

❑ Given pixels p, q and z with coordinates (x, y), (s, t), (u, v) respectively, the distance function D has the following properties:

❖ D(p, q) ≥ 0 [D(p, q) = 0, iff p = q]


❖ D(p, q) = D(q, p)
❖ D(p, z) ≤ D(p, q) + D(q, z)

The following are the different Distance measures:

⮚ Euclidean distance:
De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)

⮚ City block distance:
D4(p, q) = |x - s| + |y - t|

⮚ Chessboard distance:
D8(p, q) = max(|x - s|, |y - t|)
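All three metrics take only a line each to implement; a sketch:

    import math

    def d_e(p, q):  # Euclidean distance
        (x, y), (s, t) = p, q
        return math.sqrt((x - s) ** 2 + (y - t) ** 2)

    def d_4(p, q):  # city block distance
        (x, y), (s, t) = p, q
        return abs(x - s) + abs(y - t)

    def d_8(p, q):  # chessboard distance
        (x, y), (s, t) = p, q
        return max(abs(x - s), abs(y - t))

    p, q = (0, 0), (3, 4)
    print(d_e(p, q), d_4(p, q), d_8(p, q))  # 5.0 7 4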

Neighborhood-Based Arithmetic/Logic Operations


The value assigned to a pixel at position 'e' is a function of its neighbors and a set of window functions.
Arithmetic/Logic Operations:
Tasks done using neighborhood processing:

❖ Smoothing / averaging (see the sketch after this list).
❖ Noise removal / filtering.
❖ Edge detection.
❖ Contrast enhancement.
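As an illustration of the first task, a minimal 3x3 neighborhood-averaging (smoothing) sketch; the loop-based form is chosen for clarity, not speed:

    import numpy as np

    def mean_filter_3x3(img):
        # Smooth by averaging each pixel's 3x3 neighborhood; borders unchanged.
        out = img.astype(float).copy()
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
        return out.astype(img.dtype)

    noisy = np.random.randint(0, 256, (5, 5), dtype=np.uint8)
    print(mean_filter_3x3(noisy))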

Light and the Electromagnetic Spectrum

⮚ Light is just a particular part of the electromagnetic spectrum that can be sensed by the
human eye.
⮚ The electromagnetic spectrum is split up according to the wavelengths of different forms
of energy.

Types of electromagnetic radiation, ordered from longest to shortest wavelength, include radio waves, microwaves, infrared, visible light, ultraviolet, X-rays and gamma rays.
