Department of CE/IT
Image Processing Unit-3
Spatial domain &
frequency domain
(01CE0507)
By: Prof. Ankita Chavda
Enhancement Techniques
Image enhancement techniques fall into two broad classes:
1. Spatial domain: operates directly on the pixels of an image
2. Frequency domain: operates on the Fourier transform of the image
R.C.Gonzalez & R.E.Woods
Enhancement:
Enhancement is the process of manipulating an image so that the
result is more suitable than the original for a specific application.
Enhancement techniques are problem oriented.
E.g., a method which is quite useful for enhancing X-ray images may not necessarily be the best approach for enhancing satellite images.
Image Enhancement
Good images:
For human visual perception:
The visual evaluation of image quality is a highly subjective process.
It is hard to standardize the definition of a good image.
For machine perception:
The evaluation task is easier: a good image is one which gives the best machine recognition results.
A certain amount of trial and error is usually required before a particular image enhancement approach is selected.
Spatial domain refers to the image plane itself; spatial methods are based on direct manipulation of the pixels in an image.
The 2 principal categories of spatial processing are:
1. Intensity transformation
2. Spatial filtering
Spatial domain
Intensity transformation operates on single pixels of an image, for the purpose of contrast manipulation and image thresholding.
Spatial filtering performs operations, such as image sharpening, by working in a neighborhood of every pixel in an image.
In the frequency domain, operations are performed on the Fourier transform of an image, rather than on the image itself.
Generally, spatial domain techniques are computationally more efficient and require fewer processing resources to implement.
Spatial domain
In these methods an operation (linear or non-linear) is performed on the pixels in the neighborhood of coordinate (x, y) in the input image f(x, y), giving the enhanced image g(x, y):
g(x, y) = T[f(x, y)]
where f(x, y) is the input image, g(x, y) is the output (processed) image, and T is an operator on f defined over a neighborhood of the point (x, y).
The point (x, y) shown is an arbitrary location in the image, and the
small region containing the point is a neighborhood of (x, y).
The neighborhood can be any shape, but generally it is rectangular (3x3, 5x5, 9x9, etc.).
Example: “Compute the average intensity of the neighborhood”.
Typically the process starts at the top left of the input image and
proceeds pixel by pixel in a horizontal scan, one row at a time.
Spatial domain
In spatial filtering, the neighborhood together with a predefined operation is called a spatial filter (also spatial mask, kernel, template or window).
Two related concepts are important to know when performing linear spatial filtering:
1. Correlation
2. Convolution
The smallest possible neighborhood is of size 1x1. In this case g
depends only on the value of f at single point (x, y) and T becomes
an intensity transformation function.
Intensity / Gray-level transformation function
s = T(r)
where s denotes the intensity of g and r the intensity of f at any point (x, y).
a) Contrast stretching function b) Thresholding function
Contrast stretching:
Applying the transformation to every pixel of f to generate the corresponding pixels in g produces an image of higher contrast than the original, by darkening the intensity levels below m and brightening the levels above m. This technique is called contrast stretching.
Thresholding function:
T(r) produces a two-level (binary) image. A mapping of this form is called a thresholding function.
Intensity transformation functions
➢ Linear function: negative and identity transformations
➢ Logarithm function: log and inverse-log transformations
➢ Power-law function: nth power and nth root transformations
Image Negatives
➢ An image with gray levels in the range [0, L-1], where L = 2^n; n = 1, 2, ...
➢ Negative transformation: s = L - 1 - r
➢ Reverses the intensity levels of an image.
➢ Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
Example of Negative Image
Log Transformations
s = c log(1 + r)
➢ c is a constant and r ≥ 0
➢ The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.
➢ The inverse-log transformation does the opposite: it expands the values of bright pixels while compressing the darker levels.
Log Transformations
Any curve with the general shape of the log function would accomplish this spreading/compressing of intensity levels in an image.
The shape of the log curve in the figure shows that the transformation maps a narrow range of low intensity values in the input into a wider range of output levels.
The inverse-log transformation does the opposite: it expands the values of bright pixels while compressing the darker levels.
The log function has the important characteristic that it compresses the dynamic range of images with large variations in pixel values.
Example: an application in which pixel values have a large dynamic range is the Fourier spectrum, which ranges from 0 to 10^6 or higher. If we scale these values linearly into an 8-bit system, only the brightest values remain visible; the log transformation brings out the detail.
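The Fourier-spectrum example above can be sketched in NumPy. The constant c is chosen so the maximum input maps to L-1 (an illustrative choice, not mandated by the slides); the spectrum values below are invented for the example:

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r), with c chosen so the maximum maps to L-1."""
    r = img.astype(np.float64)
    c = (L - 1) / np.log1p(r.max())        # log1p(x) = log(1 + x)
    return np.round(c * np.log1p(r)).astype(np.uint8)

# A "spectrum-like" array with a huge dynamic range (0 to 10^6):
spec = np.array([[0, 10], [1000, 10**6]], dtype=np.float64)
out = log_transform(spec)
```

Linear scaling of `spec` to 8 bits would leave every value except 10^6 at (nearly) zero; the log transform spreads them across the output range.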
Examples of Logarithm Image
Power-Law Transformations
s = c r^γ
➢ c and γ are positive constants
➢ Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values.
➢ c = γ = 1 → identity function
Power-Law Transformations
Unlike the log transformation, a whole family of transformation curves is obtained simply by varying the gamma value.
A variety of devices used for image capture, printing and display respond according to a power law.
The process used to correct these power-law response phenomena is called gamma correction.
If gamma < 1, the mapping is weighted toward higher (brighter) output values; if gamma > 1, the mapping is weighted toward lower (darker) output values.
In the figure, CRT devices have an intensity-to-voltage response that is a power function, with exponent varying from approximately 1.8 to 2.5.
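A sketch of the power-law mapping in NumPy (illustrative; intensities are normalized to [0, 1] before applying the exponent, an assumption not spelled out in the slides):

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, with r normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.round(s * (L - 1)).astype(np.uint8)

img = np.array([0, 64, 128, 255], dtype=np.uint8)
dark_boost = power_law(img, 0.4)   # gamma < 1: brightens mid/dark tones
bright_cut = power_law(img, 2.5)   # gamma > 1: darkens mid tones
```

With gamma = 1/2.5 this is exactly the preprocessing step used for gamma correction of a CRT-like display.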
Gamma correction
✓ Gamma correction is done by preprocessing the image before inputting it to the monitor, with s = c r^(1/γ)
Another Example: MRI
Another Example
Piecewise-Linear Transformation Functions
➢ A complementary approach to the methods discussed in the previous three sections is to use piecewise linear functions.
➢ Advantage: the form of a piecewise function can be arbitrarily complex (more options to design); some important transformations can be formulated only as piecewise functions.
➢ Disadvantage: their specification requires considerably more user input.
Contrast Stretching
✓ (a) Increase the dynamic range of the gray levels in the image
✓ (b) A low-contrast image: can result from poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of the lens aperture during image acquisition
✓ (c) Result of contrast stretching: (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1)
✓ (d) Result of thresholding
Contrast Stretching
Contrast stretching is the process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording medium or display device.
The locations of the points (r1, s1) and (r2, s2) control the shape of the transformation function.
If r1 = s1 and r2 = s2, the transformation is a linear function that produces no change in intensity levels.
If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a binary image.
Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the intensity levels of the output image.
In general, r1 <= r2 and s1 <= s2 is assumed, so that the function is single valued and monotonically increasing.
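The piecewise-linear mapping through (r1, s1) and (r2, s2) can be sketched with `np.interp` (an implementation choice of this example; it assumes 0 < r1 < r2 < L-1 so the breakpoints are strictly increasing):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretch through control points (r1,s1) and (r2,s2).
    Assumes 0 < r1 < r2 < L-1 (the thresholding case r1 == r2 needs
    separate handling, since np.interp requires increasing breakpoints)."""
    r = img.astype(np.float64)
    out = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return np.round(out).astype(np.uint8)

img = np.array([10, 100, 200, 250], dtype=np.uint8)
# stretch [rmin, rmax] = [10, 250] to the full range [0, 255]
out = contrast_stretch(img, r1=10, s1=0, r2=250, s2=255)
```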
Intensity-level slicing / Gray-level slicing
✓ Highlighting a specific range of gray levels in an image
✓ Display a high value for all gray levels in the range of interest and a low value for all other gray levels
✓ (a) Transformation highlights range [A, B] of gray levels and reduces all others to a constant level
✓ (b) Transformation highlights range [A, B] but preserves all other levels
Intensity-level slicing / Gray-level slicing
Highlighting a specific range of intensities in an image is often of interest; this process is called intensity-level slicing.
One approach is to display one value (e.g. white) for the range of interest and another (e.g. black) for all other intensities.
Here, the transformation of figure (a) produces a binary image, while figure (b) brightens or darkens the desired range of intensities but leaves all other intensity levels in the image unchanged.
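Both variants, (a) binary output and (b) range preserved, can be sketched with a boolean mask in NumPy (an illustrative sketch; the range [100, 200] is invented for the example):

```python
import numpy as np

def level_slice(img, a, b, high=255, preserve=False):
    """Highlight intensities in [a, b]; others -> 0, or left unchanged."""
    mask = (img >= a) & (img <= b)
    if preserve:
        out = img.copy()
        out[mask] = high                                # variant (b)
    else:
        out = np.where(mask, high, 0).astype(img.dtype)  # variant (a): binary
    return out

img = np.array([50, 120, 180, 240], dtype=np.uint8)
binary = level_slice(img, 100, 200)                 # [0, 255, 255, 0]
kept = level_slice(img, 100, 200, preserve=True)    # [50, 255, 255, 240]
```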
Bit-plane slicing
➢ Highlighting the contribution made to total image appearance by specific bits
➢ Suppose each pixel is represented by 8 bits
➢ Higher-order bits contain the majority of the visually significant data
➢ Useful for analyzing the relative importance of each bit of the image
Bit-plane slicing
Pixels are digital numbers composed of bits. For example, the intensity of each pixel in a 256-level gray-scale image is composed of 8 bits.
Instead of highlighting intensity-level ranges, we can highlight the contribution made to total image appearance by specific bits.
In the figure, plane 1 contains the lowest-order bit of all pixels in the image and plane 8 all the highest-order bits.
In the example, you can observe that the last two planes contain a significant amount of the visually significant data.
The lower-order planes contribute the more subtle intensity details in the image.
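Extracting a bit plane is a shift-and-mask operation. A minimal NumPy sketch (illustrative; planes are indexed 0-7 here rather than 1-8 as in the figure):

```python
import numpy as np

def bit_plane(img, plane):
    """Extract bit plane `plane` (0 = LSB ... 7 = MSB) as a 0/1 image."""
    return (img >> plane) & 1

img = np.array([[0b10110101, 0b01001010]], dtype=np.uint8)  # 181, 74
msb = bit_plane(img, 7)   # [[1, 0]]
lsb = bit_plane(img, 0)   # [[1, 0]]
```

Multiplying `bit_plane(img, 7)` by 255 reproduces the thresholded binary image described in the next slide (0-127 → 0, 128-255 → 1).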
Bit-plane slicing example: bit-planes 7 and 6 (top row), 5-3 (middle row), 2-0 (bottom row)
Bit-plane slicing
The binary image of the 8th bit plane can be obtained by a thresholding function that maps all intensities between 0 and 127 to 0, and all intensities between 128 and 255 to 1.
Decomposing an image into its bit planes is useful for analyzing the relative importance of each bit in the image, and for determining the number of bits needed to quantize the image.
This type of decomposition is also used for image compression.
Histogram Processing
➢ The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function
h(rk) = nk
➢ where
rk : the kth gray level
nk : the number of pixels in the image having gray level rk
h(rk) : histogram of the digital image
➢ Normalized histogram:
p(rk) = nk / n, or p(rk) = nk / MN
n: total number of pixels in the image, n = MN (M: row dimension, N: column dimension)
Normalized Histogram
➢ Divide each histogram count at gray level rk by the total number of pixels in the image:
p(rk) = nk / n, for k = 0, 1, ..., L-1
➢ p(rk) gives an estimate of the probability of occurrence of gray level rk.
➢ The sum of all components of a normalized histogram is equal to 1.
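The definition p(rk) = nk / MN maps directly onto `np.bincount`. A minimal sketch (illustrative; the tiny 2x2 image is invented for the example):

```python
import numpy as np

def normalized_histogram(img, L=256):
    """p(r_k) = n_k / MN for k = 0 .. L-1."""
    n_k = np.bincount(img.ravel(), minlength=L)   # n_k for each gray level
    return n_k / img.size                          # divide by MN

img = np.array([[0, 0], [1, 255]], dtype=np.uint8)
p = normalized_histogram(img)   # p[0] = 0.5, p[1] = 0.25, p[255] = 0.25
```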
Histogram Processing
➢ Basis for numerous spatial domain processing techniques.
➢ Used effectively for image enhancement.
➢ Information inherent in histograms is also useful in image compression and segmentation.
➢ Histograms are very simple to calculate in software.
➢ The horizontal axis of each histogram plot corresponds to intensity values rk.
➢ The vertical axis corresponds to the value of h(rk) = nk, or p(rk) = nk / MN if values are normalized.
Example
✓ Dark image: the components of the histogram are concentrated on the low side of the gray scale
✓ Bright image: the components of the histogram are concentrated on the high side of the gray scale
Example
✓ Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale
✓ High-contrast image: the histogram covers a broad range of the gray scale, and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others
Histogram Equalization
Histogram transformation:
r: intensities of the image to be enhanced, in the range [0, L-1] (r = 0: black, r = L-1: white)
s: processed gray level for every pixel value r
s = T(r), 0 ≤ r ≤ L-1
Requirements on the transformation function T:
a) T(r) is (strictly) monotonically increasing in the interval 0 ≤ r ≤ L-1
b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1
Inverse transformation: r = T^-1(s), 0 ≤ s ≤ L-1
Histogram
Equalization
Conditions for T(r)
The single-valued (one-to-one) condition guarantees that the inverse transformation will exist.
The monotonicity condition preserves the increasing order from black to white in the output image, so the transformation cannot produce a negative-like reversal of intensities.
With intensities normalized to [0, 1], the condition 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels will be in the same range as the input levels.
The inverse transformation from s back to r is r = T^-1(s), 0 ≤ s ≤ 1.
Histogram and Probability Density Function
The gray levels in an image may be viewed as random variables in the interval [0, 1].
The normalized histogram may be viewed as a probability density function (PDF).
The PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function.
If pr(r) and T(r) are known and T^-1(s) satisfies condition (a), then ps(s) can be obtained from the formula
ps(s) = pr(r) |dr/ds|
where pr(r) and ps(s) denote the PDFs of the random variables r and s.
Cumulative Distribution Function (CDF)
The transformation function used is the cumulative distribution function (CDF) of the random variable r:
s = T(r) = (L-1) ∫[0..r] pr(w) dw
where w is a dummy variable of integration.
Note that T(r) depends on pr(r) and satisfies the conditions on a transformation function; the right-hand side of this equation is recognized as the CDF of r.
Using the CDF as the transformation function
PDF of the image transformed by T(r) = CDF:
ds/dr = dT(r)/dr = (L-1) d/dr [ ∫[0..r] pr(w) dw ] = (L-1) pr(r)
Substituting this result for dr/ds in the earlier equation:
ps(s) = pr(r) |dr/ds| = pr(r) · 1/((L-1) pr(r)) = 1/(L-1),  0 ≤ s ≤ L-1
This is a uniform probability density function.
Histogram
Equalization
Discrete Transformation Function
The probability of occurrence of gray level rk in an image is approximated by
pr(rk) = nk / MN,  k = 0, 1, 2, ..., L-1
MN: total number of pixels in the image
nk: number of pixels having gray level rk
L: total number of possible gray levels
sk = T(rk) = (L-1) Σ[j=0..k] pr(rj) = ((L-1)/MN) Σ[j=0..k] nj,  k = 0, 1, 2, ..., L-1
Discrete Transformation Function
A plot of pr(rk) versus rk is referred to as a histogram.
A processed (output) image is obtained by mapping each pixel in the input image with intensity rk into a corresponding pixel with level sk in the output image.
The transformation (mapping) T(rk) in this equation is called the histogram equalization or histogram linearization transformation.
Histogram Equalization
➢ As a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image.
➢ We can do this by adjusting the probability density function of the original histogram of the image so that the probability is spread equally.
➢ Equalization can be achieved by the CDF transformation function given earlier.
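The discrete mapping sk = (L-1)/MN · Σ nj can be sketched in a few NumPy lines (illustrative; the 4x4 image values are invented, crowded into a narrow dark band to show the spread):

```python
import numpy as np

def histogram_equalize(img, L=256):
    """s_k = round((L-1) * sum_{j<=k} n_j / MN), the discrete CDF mapping."""
    hist = np.bincount(img.ravel(), minlength=L)   # n_k
    cdf = np.cumsum(hist) / img.size               # running sum / MN
    s = np.round((L - 1) * cdf).astype(np.uint8)   # s_k for every level
    return s[img]                                   # map each r_k to s_k

img = np.array([[52, 55, 61, 59],
                [79, 61, 76, 61],
                [110, 61, 55, 52],
                [63, 65, 66, 78]], dtype=np.uint8)
eq = histogram_equalize(img)   # intensities now span up to 255
```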
Example
Example Result
Example-2
Histogram equalization Example
(1) Histogram equalization from dark image
(2) Histogram equalization from light image
(3) Histogram equalization from low-contrast image
(4) Histogram equalization from high-contrast image
Histogram equalization Example
Histogram equalization disadvantage
Example
✓ The image is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale
Histogram equalization disadvantage
Example
✓ Histogram equalization does not make the result look better than the original image. Considering the histogram of the result, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale. As a consequence, the output image is light and has a washed-out appearance.
Histogram Specification
Histogram equalization has the disadvantage that it can generate only one type of output image.
With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
It does not have to be a uniform histogram.
The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.
Histogram Specification
Let pr(r) be the PDF of the input image and pz(z) the specified probability density function that we wish the output image to have.
Procedure:
1. Obtain pr(r) from the input image.
2. Use the specified PDF to obtain the transformation function G(z).
3. Obtain the inverse transformation z = G^-1(s); because z is obtained from s, this is a mapping from s to z.
4. Obtain the output image by equalizing the input image, then performing the inverse mapping; the PDF of the output image will be equal to the specified PDF.
Histogram Specification
- Histogram-equalize the input image using T(r).
- Histogram-equalize the desired histogram of the output image using G(z).
- In continuous space, the two equalized images should be the same.
- Construct the LUT as follows:
  + Map rk to sk using T(r)
  + Find vk = G(zk) which is closest to sk
  + Compute zk using G^-1(vk)
  + Repeat for all rk
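The LUT construction above can be sketched in NumPy (an illustrative sketch: the target histogram and tiny image below are invented, and G^-1 is realized as a nearest-value search over G, the "find vk closest to sk" step):

```python
import numpy as np

def match_histogram(img, target_hist, L=256):
    """Map r -> s via T (equalize input), then s -> z via the G^-1 LUT."""
    hist = np.bincount(img.ravel(), minlength=L)
    T = np.round((L - 1) * np.cumsum(hist) / img.size)                    # s_k
    G = np.round((L - 1) * np.cumsum(target_hist) / np.sum(target_hist))  # G(z_k)
    # for each s_k, pick the z whose G(z) is closest (the v_k step)
    lut = np.array([np.argmin(np.abs(G - s)) for s in T], dtype=np.uint8)
    return lut[img]

img = np.array([[10, 10], [20, 30]], dtype=np.uint8)
target = np.zeros(256)
target[200:256] = 1            # specified PDF: all mass in the bright end
out = match_histogram(img, target)
```

Because the specified histogram puts all its mass above level 200, every output pixel lands in the bright region, unlike equalization, whose output shape we cannot choose.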
Histogram Specification
(1) The transformation function G(z) is obtained from
G(zk) = (L-1) Σ[i=0..k] pz(zi) = sk,  k = 0, 1, 2, ..., L-1
(2) The inverse transformation: zk = G^-1(sk)
✓ Notice that the output histogram's low end has shifted right toward the lighter region of the gray scale, as desired.
Histogram Specification
Histogram specification is a trial-and-error process.
There are no rules for specifying histograms, and one must resort to analysis on a case-by-case basis for any given enhancement task.
Histogram processing methods are global, in the sense that pixels are modified by a transformation function based on the gray-level content of the entire image.
Sometimes we may need to enhance details over small areas in an image, which is called local enhancement.
Local histogram processing
• Histogram processing methods in the previous sections are global
• Global methods are suitable for overall enhancement
• Histogram processing techniques are easily adapted to local enhancement
• Example in the figure:
b) Global histogram equalization: considerable enhancement of noise
c) Local histogram equalization using a 3x3 neighborhood: reveals (enhances) the small squares inside the dark squares and the finer noise texture
Local histogram processing: original image, global histogram equalized image, local histogram equalized image
Local histogram processing
Define a square or rectangular neighborhood and move the center of this area from pixel to pixel.
At each location, the histogram of the points in the neighborhood is computed, and either a histogram equalization or a histogram specification transformation function is obtained.
Another approach, used to reduce computation, is to utilize non-overlapping regions, but this usually produces an undesirable checkerboard effect.
Spatial Filtering
Spatial filtering is one of the principal tools used in this field for a broad spectrum of applications.
The term "filter" is borrowed from frequency domain processing, where filtering refers to accepting or rejecting certain frequency components.
For example, a filter that passes low frequencies is called a lowpass filter.
Spatial filtering operates with the values of the image pixels in the neighborhood and the corresponding values of a subimage.
The subimage is called a filter, mask, kernel, template or window; the values in the filter subimage are referred to as coefficients rather than pixels.
Spatial filtering operations are performed directly on the pixels of an image.
There is a one-to-one correspondence between linear spatial filters and filters in the frequency domain.
Mechanics of Spatial Filtering
• A spatial filter consists of
1. a neighborhood (typically a small rectangle)
2. a predefined operation
• A processed (filtered) image is generated as the center of the filter visits each pixel in the input image.
• Linear spatial filtering using a 3x3 neighborhood: at any point (x, y), the response g(x, y) of the filter is
g(x,y) = w(-1,-1)f(x-1,y-1) + w(-1,0)f(x-1,y) + ... + w(0,0)f(x,y) + ... + w(1,0)f(x+1,y) + w(1,1)f(x+1,y+1)
Mechanics of
Spatial
Filtering
Mechanics of Spatial Filtering
• Simply move the filter mask from point to point in the image.
• At each point (x, y), the response of the filter is calculated using a predefined relationship:
R = w1 z1 + w2 z2 + ... + wmn zmn = Σ[i=1..mn] wi zi
• Filtering of an image f of size M x N with a filter w of size m x n is given by the expression
g(x, y) = Σ[s=-a..a] Σ[t=-b..b] w(s, t) f(x+s, y+t)
where a = (m-1)/2 and b = (n-1)/2, i.e. m = 2a+1, n = 2b+1 (a, b: positive integers)
Spatial
Correlation
and
Convolution
Spatial Correlation and Convolution
Correlation is the process of moving the filter mask over the image and computing the sum of products at each location.
Convolution is exactly the same as correlation, except that the filter is first rotated by 180°.
Correlation of a filter w(x, y) of size m x n with an image f(x, y):
g(x, y) = Σ[s=-a..a] Σ[t=-b..b] w(s, t) f(x+s, y+t)
Convolution of w(x, y) and f(x, y):
g(x, y) = Σ[s=-a..a] Σ[t=-b..b] w(s, t) f(x-s, y-t)
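The 180° rotation is the only difference between the two sums, which shows up clearly when the input is a discrete impulse: correlation copies a rotated filter into the output, convolution copies the filter unchanged. A minimal NumPy sketch (illustrative; "valid" output size, no padding):

```python
import numpy as np

def correlate2d(f, w):
    """'Valid' correlation: slide w over f, sum of products at each offset."""
    m, n = w.shape
    M, N = f.shape
    g = np.zeros((M - m + 1, N - n + 1))
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            g[x, y] = np.sum(w * f[x:x + m, y:y + n])
    return g

def convolve2d(f, w):
    """Convolution = correlation with the filter rotated by 180 degrees."""
    return correlate2d(f, np.rot90(w, 2))

f = np.zeros((5, 5)); f[2, 2] = 1.0        # discrete impulse
w = np.arange(1.0, 10.0).reshape(3, 3)     # filter with entries 1..9

corr = correlate2d(f, w)   # a 180°-rotated copy of w
conv = convolve2d(f, w)    # an unrotated copy of w
```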
Spatial Correlation and Convolution: Application
Correlation is often used in applications where we need to measure the similarity between images or parts of images (e.g., template matching).
Vector Representation of Linear Filtering
Linear spatial filtering by an m x n filter:
R = w1 z1 + w2 z2 + ... + wmn zmn = Σ[i=1..mn] wi zi = w^T z
Linear spatial filtering by a 3 x 3 filter:
R = w1 z1 + w2 z2 + ... + w9 z9 = Σ[i=1..9] wi zi = w^T z
• Spatial filtering at the border of an image:
  • Limiting the center of the mask to be no less than (n-1)/2 pixels from the border -> smaller filtered image
  • Padding -> effects near the border
    • adding rows and columns of 0's
    • replicating rows and columns
Generating Spatial Filter Masks
Linear spatial filtering by a 3 x 3 filter:
R = w1 z1 + w2 z2 + ... + w9 z9 = Σ[i=1..9] wi zi
Average value in a 3 x 3 neighborhood:
R = (1/9) Σ[i=1..9] zi
Gaussian function:
h(x, y) = e^(-(x^2 + y^2) / (2σ^2))
Smoothing Spatial Filters
Smoothing filters are used for blurring and for noise reduction.
Blurring is used in preprocessing tasks, such as removal of small details from an image prior to (large) object extraction.
Noise reduction can be accomplished by blurring with a linear filter or a non-linear filter.
Smoothing linear filters (averaging filters, lowpass filters):
• Noise reduction
• Undesirable side effect: blurred edges
Smoothing Spatial Filters
The output of a smoothing linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask.
These filters are sometimes called averaging filters or lowpass filters.
The idea is to replace the value of every pixel in an image by the average of the intensity levels in the neighborhood defined by the filter mask.
The result is reduced sharp transitions in intensities, and random noise typically consists of sharp transitions in intensity.
Averaging filters have the undesirable side effect that they blur edges, which are a desirable feature of an image and are characterized by sharp intensity transitions.
Smoothing Spatial Filters: standard average, weighted average
In the first mask, all coefficient values are 1, and the sum is finally divided by 9.
A spatial averaging filter in which all coefficients are equal is known as a box filter.
The second mask is a weighted average, where the coefficients differ: the center of the mask has the highest weight and is given the most importance. The sum of all coefficients is 16, which is an attractive feature because it is an integer power of 2.
Smoothing Spatial Filters
Standard averaging by a 3 x 3 filter:
R = (1/9) Σ[i=1..9] zi
• Weighted averaging reduces blurring compared to standard averaging.
• General implementation for filtering with a weighted averaging filter of size m x n (m = 2a+1, n = 2b+1):
g(x, y) = Σ[s=-a..a] Σ[t=-b..b] w(s, t) f(x+s, y+t) / Σ[s=-a..a] Σ[t=-b..b] w(s, t)
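The normalized sum above can be sketched directly in NumPy for both masks (illustrative; zero padding is one of the border choices mentioned earlier, and the 5x5 test image with one bright outlier is invented):

```python
import numpy as np

def smooth(img, w):
    """Weighted-average smoothing: g = sum(w*f) / sum(w), zero-padded border."""
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(img.astype(np.float64), ((a, a), (b, b)))  # zero padding
    g = np.zeros(img.shape, dtype=np.float64)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return np.round(g / w.sum()).astype(np.uint8)

box = np.ones((3, 3))                                    # box filter, sum 9
weighted = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]])   # weighted, sum 16

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                                          # one bright outlier
box_out = smooth(img, box)           # center: (8*100 + 255)/9  -> 117
wgt_out = smooth(img, weighted)      # center: (12*100 + 4*255)/16 -> 139
```

The weighted mask leaves the outlier pixel closer to its original value because the center coefficient dominates.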
Smoothing Spatial Filters: result of smoothing with square averaging filter masks (original, mn = 3x3, 5x5, 9x9, 15x15, 35x35)
Smoothing Spatial Filters: application example of spatial averaging (original, 15x15 averaging, result of thresholding)
Order-Statistic (Nonlinear) Filters
• Order-statistic filters are nonlinear spatial filters whose response is based on ordering (ranking) the pixels in the neighborhood
• Median filter
  • Replaces the pixel value by the median of the gray levels in the neighborhood of that pixel
  • Effective for impulse noise (salt-and-pepper noise)
  • Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than n²/2, are eliminated by an n x n median filter
• Median
  • 3x3 neighborhood: 5th largest value
  • 5x5 neighborhood: 13th largest value
• Max filter: selects the maximum value in the neighborhood
• Min filter: selects the minimum value in the neighborhood
Order-Statistic
(Nonlinear)
Filters
Median Filters
The median filter replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
Median filters are quite popular because, for certain types of random noise (impulse noise, i.e. salt-and-pepper noise), they provide excellent noise-reduction capabilities with considerably less blurring than linear smoothing filters of similar size.
The median filter forces points with distinct gray levels to be more like their neighbors.
Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than n²/2 (one-half the filter area), are eliminated by an n x n median filter: they are forced to the median intensity of the neighbors.
Larger clusters are affected considerably less.
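A direct NumPy sketch of the median filter (illustrative; border handled by replicating rows and columns, one of the padding options mentioned earlier, and the salt-and-pepper test image is invented):

```python
import numpy as np

def median_filter(img, n=3):
    """Replace each pixel by the median of its n x n neighborhood."""
    a = n // 2
    fp = np.pad(img, a, mode='edge')     # replicate border rows/columns
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.median(fp[x:x + n, y:y + n])
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[1, 1] = 255      # salt
img[3, 3] = 0        # pepper
clean = median_filter(img)   # both isolated impulses are eliminated
```

Each impulse is a 1-pixel cluster, well under n²/2 = 4.5 for the 3x3 filter, so both are forced to the median (100) of their neighbors.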
Median Filters
Max and Min Filters
The median represents the 50th percentile of a ranked set of numbers, but ranking lends itself to many other possibilities.
For example, using the 100th percentile results in the max filter, given by f(x, y) = max {g(s, t)}, (s, t) ∈ Sxy.
This filter is useful for finding the brightest points in an image.
The 0th percentile filter is the min filter, given by f(x, y) = min {g(s, t)}, (s, t) ∈ Sxy.
This filter is useful for finding the darkest points in an image.
Sharpening Spatial Filters
Objective of sharpening: highlight fine detail, or enhance detail that has been blurred.
Image blurring can be accomplished by digital averaging; averaging is analogous to spatial integration.
Image sharpening can be done by digital differentiation; differentiation is analogous to a spatial derivative.
Image differentiation enhances edges and other discontinuities.
Sharpening spatial filter: Foundation
• Image sharpening uses first- and second-order derivatives
• Derivatives are defined in terms of differences
• Requirements for the first derivative:
1) Must be zero in flat areas
2) Must be nonzero at the onset (start) of a step or ramp
3) Must be nonzero along ramps
• Requirements for the second derivative:
1) Must be zero in flat areas
2) Must be nonzero at the onset and end of a step or ramp
3) Must be zero along ramps of constant slope
Sharpening spatial filter: Foundation
First-order derivative:
∂f/∂x = f(x+1) - f(x)
∂f/∂x (x-1) = f(x) - f(x-1)
Second-order derivative:
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
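The two difference formulas can be checked numerically against the requirement lists. A small NumPy sketch (illustrative; the 1-D profile values, a flat area, a down-ramp, and a step, are invented):

```python
import numpy as np

# 1-D profile: flat (5,5,5), ramp down (4,3,2,1), flat (1,1), step up (6,6,6)
f = np.array([5, 5, 5, 4, 3, 2, 1, 1, 1, 6, 6, 6], dtype=np.float64)

d1 = f[1:] - f[:-1]                    # first derivative  f(x+1) - f(x)
d2 = f[2:] + f[:-2] - 2 * f[1:-1]      # second derivative f(x+1)+f(x-1)-2f(x)
```

`d1` is nonzero (-1) all along the ramp, while `d2` is zero along the ramp and nonzero only at its onset and end; at the step, `d2` shows the double (sign-changing) response.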
Sharpening
spatial filter
Foundation
• At the ramp
• First-order derivative is nonzero along the ramp
• Second-order derivative is zero along the ramp
• Second-order derivative is nonzero only at the onset and end of the
ramp
• At the step
Sharpening • Both the first- and second-order derivatives are nonzero
spatial filter • Second-order derivative has a transition from positive to negative
(zero crossing)
Foundation • Some conclusions
• First-order derivatives generally produce thicker edges
• Second-order derivatives have stronger response to fine detail
• First-order derivatives generally produce stronger response to gray-
level step
• Second-order derivatives produce a double response at step
Image
Subtraction
Unsharp Masking and Highboost Filtering
Unsharp masking: the process of subtracting an unsharp (smoothed) image from the original image is known as unsharp masking.
Used to sharpen images; it has long been used in the printing and publishing industry.
Steps:
1. Blur the original image.
2. Subtract the blurred image from the original (the difference is called the mask).
3. Add the mask to the original.
g_mask(x, y) = f(x, y) - f_blur(x, y)
Mask = original image - blurred image
Unsharp Masking and Highboost Filtering
Sharpened image = original image + mask
Highboost filtering:
g(x, y) = f(x, y) + k · g_mask(x, y)
• When k = 1: unsharp masking
• When k > 1: highboost filtering
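The three steps, blur, subtract, add back with weight k, can be sketched in NumPy (illustrative; a 3x3 box blur with edge replication stands in for the smoothing step, and the vertical-edge test image is invented):

```python
import numpy as np

def unsharp_mask(img, k=1.0):
    """g = f + k * (f - blur(f)); k = 1 is unsharp masking, k > 1 highboost."""
    f = img.astype(np.float64)
    # step 1: blur with a 3x3 box filter (edge-replicated border)
    fp = np.pad(f, 1, mode='edge')
    blur = np.zeros_like(f)
    for dx in range(3):
        for dy in range(3):
            blur += fp[dx:dx + f.shape[0], dy:dy + f.shape[1]]
    blur /= 9.0
    # steps 2 and 3: form the mask and add it back, scaled by k
    g = f + k * (f - blur)
    return np.clip(np.round(g), 0, 255).astype(np.uint8)

# a vertical edge: sharpening overshoots on both sides of it
img = np.tile(np.array([50, 50, 200, 200], dtype=np.uint8), (4, 1))
sharp = unsharp_mask(img, k=2.0)     # highboost
```

The over/undershoot on either side of the edge (values pushed toward 0 and 255) is exactly the edge-enhancement effect the mask adds.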
Unsharp
Masking and
Highboost
Filtering
Thank you