
UNIT II: IMAGE ENHANCEMENT AND FILTERING
Digital Image Processing, 3rd edition, by Gonzalez and Woods
L-7: Gray Level Transformations
Image Enhancement
• The objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application.
• There are two main approaches:
• Image enhancement in the spatial domain: direct manipulation of the pixels in an image
   • Point processing: changing pixel intensities
   • Spatial filtering
• Image enhancement in the frequency domain: modifying the Fourier transform of an image
Introduction
• The spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels.
• Spatial domain processes are denoted by the expression
g(x, y) = T[f(x, y)]
where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f defined over some neighborhood of (x, y).
• T can also operate on a set of input images, for example computing the pixel-by-pixel sum of K images for noise reduction.
…contd
• The simplest form of T is when the neighborhood is of size 1*1 (a single pixel). In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form
s = T(r)
where, for simplicity of notation, r and s are variables denoting, respectively, the gray levels of f(x, y) and g(x, y) at any point (x, y).
• If T(r) has the form shown in Fig. 3.2(a), the effect of this transformation is to produce an image of higher contrast than the original by darkening the levels below m and brightening the levels above m (contrast stretching): the values of r below m are compressed by the transformation function into a narrow range of s, toward black.
…contd
• The opposite effect takes place for values of r above m. In the limiting case shown in Fig. 3.2(b), T(r) produces a two-level (binary) image; a mapping of this form is called a thresholding function.
• Because enhancement at any point depends only on the gray level at that point, techniques in this category are often referred to as point processing.
• In s = T(r), T is a transformation that maps a pixel value r into a pixel value s.
Some Basic Intensity
Transformation Functions
Image Negatives
s = L - 1 - r
• s is the output intensity value
• L is the number of intensity levels (so L - 1 is the highest level)
• r is the input intensity value
• Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative.
• Particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
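As a minimal sketch (pure Python, treating an image as a list of rows of integers), the negative transformation s = L - 1 - r can be applied per pixel:

```python
def negative(img, L=256):
    """Image negative: s = L - 1 - r applied to every pixel."""
    return [[L - 1 - r for r in row] for row in img]

# A dark pixel (low r) maps to a bright one, and vice versa.
print(negative([[0, 64], [128, 255]]))  # [[255, 191], [127, 0]]
```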
Some Basic Intensity
Transformation Functions
Three basic types of functions are used frequently for image enhancement:
1. linear (negative and identity transformations),
2. logarithmic (log and inverse-log transformations), and
3. power-law (nth power and nth root transformations).
The identity function is the trivial case in which output intensities are identical to input intensities; it is included in the graph only for completeness.
Some Basic Intensity Transformation Functions
2. Log Transformations
• s = c log(1 + r), where c is a constant and r ≥ 0
• Maps a narrow range of low intensity values in the input into a wide range of output levels; the opposite is true of higher input values.
• It therefore expands the values of dark pixels in an image while compressing the higher-level values.
• This compression of the dynamic range is the characteristic use of the log function for images with large variations in pixel values.
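A sketch of the log transformation, choosing c so that the largest input value maps to L - 1 (a common normalization, assumed here rather than taken from the slides):

```python
import math

def log_transform(img, L=256):
    """s = c*log(1 + r), with c scaled so that max(r) maps to L - 1."""
    r_max = max(max(row) for row in img)
    c = (L - 1) / math.log(1 + r_max)
    return [[round(c * math.log(1 + r)) for r in row] for row in img]

out = log_transform([[0, 10, 255]])
# The dark value 10 is pushed far up the output range, while 255 stays at 255.
```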
Some Basic Intensity Transformation
Functions
• Fig. 3.5(a) shows a Fourier spectrum with values in the range 0 to 1.5*10^6. When these values are scaled linearly for display in an 8-bit system, the brightest pixels dominate the display at the expense of lower (and just as important) values of the spectrum.
• Figure 3.5(b) shows the result of first applying the log transformation and then scaling the new range linearly in the same 8-bit display.
• The wealth of detail visible in this image as compared to a straight display of the spectrum is evident from these pictures.
Some Basic Intensity
Transformation Functions
3. Power-Law (Gamma) Transformations
• s = c r^γ, where c and γ are positive constants
• Fractional values of gamma (0 < γ < 1) map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of gamma (γ > 1).
• c = γ = 1 gives the identity transformation.
• A variety of devices used for image capture, printing, and display respond according to a power law.
• The process used to correct these power-law response phenomena is called gamma correction.
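A sketch of the power-law transformation on intensities normalized to [0, 1] and rescaled to [0, L - 1] (the normalization is an assumption for illustration):

```python
def gamma_correct(img, gamma, c=1.0, L=256):
    """s = c * r^gamma on normalized intensities, rescaled to [0, L-1]."""
    return [[round((L - 1) * c * (r / (L - 1)) ** gamma) for r in row]
            for row in img]

# gamma < 1 brightens dark regions, gamma > 1 darkens them, gamma = 1 is identity.
print(gamma_correct([[0, 128, 255]], 1.0))  # [[0, 128, 255]]
```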
Some Basic Intensity
Transformation Functions
• Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
• Notice the family of possible transformation curves obtained simply by varying γ.
• In Fig. 3.6, curves generated with values of γ > 1 have exactly the opposite effect as those generated with γ < 1.
• Finally, we note that Eq. (3.2-3) reduces to the identity transformation when c = γ = 1.
Some Basic Intensity
Transformation Functions
• CRT devices have an intensity-to-voltage response that is a power function, with exponents varying from approximately 1.8 to 2.5.
• With reference to the curve for γ = 2.5 in Fig. 3.6, such display systems tend to produce images that are darker than intended, as illustrated in Fig. 3.7.
• Figure 3.7(a) shows a simple gray-scale linear wedge input into a CRT monitor.
• As expected, the output of the monitor appears darker than the input.
Some Basic Intensity
Transformation Functions
Power-Law (Gamma) Transformations
• Images that are not corrected properly look either bleached out or too dark.
• Varying gamma changes not only the intensity but also the ratio of red to green to blue in a color image.
• Useful for general-purpose contrast manipulation.
• Gamma correction is applied to CRTs (televisions, monitors), printers, scanners, etc.
• The gamma value depends on the device.
Piecewise-Linear Transformation Functions
• Contrast Stretching
• Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even the wrong setting of a lens aperture during image acquisition.
• Contrast stretching expands the range of intensity levels in an image so that it spans the full intensity range of the display device.
• The simplest stretch is obtained by setting
(r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1)
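With those two control points the stretch is a single linear segment; a sketch (pure Python, list-of-rows image):

```python
def contrast_stretch(img, L=256):
    """Map (r_min, 0) and (r_max, L-1) linearly onto the full range."""
    r_min = min(min(row) for row in img)
    r_max = max(max(row) for row in img)
    scale = (L - 1) / (r_max - r_min)
    return [[round((r - r_min) * scale) for r in row] for row in img]

# Intensities confined to [100, 200] are stretched to the full [0, 255] range.
print(contrast_stretch([[100, 160], [120, 200]]))  # [[0, 153], [51, 255]]
```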
Piecewise-Linear Transformation Functions
• Intensity-Level Slicing
• Highlights a specific range of intensities in an image.
• Enhances features such as masses of water in satellite imagery or flaws in X-ray images.
• It can be implemented in two ways:
• 1) Display one value (say, white) for intensities in the range of interest and black for the rest, producing a binary image.
• 2) Brighten (or darken) the desired range of intensities but leave all other intensity levels in the image unchanged.
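Both variants can be sketched with one function (pure Python; the parameter names are illustrative, not from the slides):

```python
def intensity_slice(img, lo, hi, highlight=255, binary=True):
    """Approach 1 (binary=True): white inside [lo, hi], black elsewhere.
    Approach 2 (binary=False): brighten [lo, hi], leave the rest unchanged."""
    return [[highlight if lo <= r <= hi else (0 if binary else r)
             for r in row] for row in img]

row = [[10, 100, 200]]
print(intensity_slice(row, 90, 150))                # [[0, 255, 0]]
print(intensity_slice(row, 90, 150, binary=False))  # [[10, 255, 200]]
```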
Piecewise-Linear Transformation Functions
• Bit-Plane Slicing
• Pixels are digital numbers composed of bits; a 256-level gray-scale image is composed of 8 bits per pixel.
• Instead of highlighting intensity-level ranges, we can highlight the contribution made to total image appearance by specific bits.
• An 8-bit image may be considered as being composed of eight 1-bit planes, with plane 1 containing the lowest-order bit of all pixels in the image and plane 8 all the highest-order bits.
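Extracting a single bit plane is a shift and a mask; a sketch:

```python
def bit_plane(img, plane):
    """Extract bit plane `plane` (1 = least significant, 8 = most significant)."""
    return [[(r >> (plane - 1)) & 1 for r in row] for row in img]

# 131 = 0b10000011: plane 8 (the MSB) is 1, plane 2 is 1, plane 3 is 0.
print(bit_plane([[131]], 8), bit_plane([[131]], 2))  # [[1]] [[1]]
```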
L-8: HISTOGRAM EQUALIZATION AND SPECIFICATION
Histogram
• A histogram is a graphical representation of the
distribution of numerical data. It is an estimate of the
probability distribution of a continuous variable.
• A histogram may also be normalized displaying relative
frequencies. It then shows the proportion of cases that fall
into each of several categories, with the sum of the heights
equaling 1. The bins are usually specified as consecutive,
non-overlapping intervals of a variable
• An image histogram is a graph showing the number of pixels in an image at each intensity value found in that image.
• For an 8-bit grayscale image there are 256 possible intensities, so the histogram graphically displays 256 numbers showing the distribution of pixels among those grayscale values.
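Counting those intensities is a single pass over the pixels; a sketch (pure Python, list-of-rows image):

```python
def histogram(img, L=256):
    """h[k] = n_k, the number of pixels with intensity k.
    Divide each entry by M*N to obtain the normalized histogram."""
    h = [0] * L
    for row in img:
        for r in row:
            h[r] += 1
    return h

h = histogram([[0, 0, 5], [5, 5, 255]])
# h[0] == 2, h[5] == 3, h[255] == 1, and the counts sum to the pixel count.
```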
Application of Histograms
• One of the more common applications is deciding what threshold value to use when converting a grayscale image to a binary one by thresholding.
• If the image is suitable for thresholding, the histogram will be bi-modal, i.e. the pixel intensities will be clustered around two well-separated values. A suitable threshold for separating the two groups lies somewhere between the two peaks of the histogram.
Histogram Equalization
• Histogram equalization is a powerful point processing
enhancement technique that seeks to optimize the contrast
of an image at all points.
• Histogram equalization seeks to improve image contrast
by flattening, or equalizing, the histogram of an image.
• A histogram is a table that simply counts the number of
times a value appears in some data set.
• In image processing, a histogram is a histogram of sample
values.
Histogram Equalization
• For an 8-bit image there are 256 possible sample values, and the histogram simply counts the number of times each sample value actually occurs in the image.
• In other words, the histogram gives the frequency distribution of sample values within the image.
• Histogram equalization uses the cumulative distribution function (CDF) as the lookup table.
HISTOGRAM PROCESSING
• The histogram of a digital image with intensity levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the kth intensity value and n_k is the number of pixels in the image with intensity r_k.
• Normalized histogram: p(r_k) = n_k / MN, for k = 0, 1, 2, …, L-1, where M and N are the image dimensions.
• Histogram manipulation can be used for image enhancement.
• Information inherent in histogram also is quite useful in other image
processing applications, such as image compression and
segmentation.
Figure: four basic image types (dark, light, low contrast, high contrast) and their corresponding histograms. (Original image courtesy of Dr. Roger Heady, Research School of Biological Sciences, Australian National University, Canberra, Australia.)
…contd…
• In a dark image, the components of the histogram are concentrated on the low (dark) side of the gray scale. Similarly, the components of the histogram of a bright image are biased toward the high side of the gray scale.
• An image with low contrast has a histogram that is narrow and centered toward the middle of the gray scale; for a monochrome image this implies a dull, washed-out gray look.
• Finally, the components of the histogram of a high-contrast image cover a broad range of the gray scale and, further, the distribution of pixels is not too far from uniform, with very few vertical lines much higher than the others.
Histogram Equalization
• Intensity mapping form:
s = T(r), 0 ≤ r ≤ L-1
Conditions:
a) T(r) is a monotonically increasing function in the interval [0, L-1], and
b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1
• In some formulations we use the inverse mapping
r = T^(-1)(s), 0 ≤ s ≤ L-1
in which case condition (a) changes to: T(r) is a strictly monotonically increasing function in the interval [0, L-1].
Histogram
Equalization
• Intensity levels in an image may be viewed as random variables in
the interval [0,L-1]
• Fundamental descriptor of a random variable is its probability density
function (PDF)
• Let pr(r) and ps(s) denote the PDFs of r and s respectively
p_s(s) = p_r(r) |dr/ds|

s = T(r) = (L-1) ∫_0^r p_r(w) dw   (w is a dummy variable of integration)

ds/dr = dT(r)/dr = (L-1) d/dr [ ∫_0^r p_r(w) dw ] = (L-1) p_r(r)

so that

p_s(s) = p_r(r) |dr/ds| = p_r(r) · 1/[(L-1) p_r(r)] = 1/(L-1), 0 ≤ s ≤ L-1

i.e. the equalized intensities have a uniform PDF. In discrete form:

s_k = T(r_k) = (L-1) Σ_{j=0}^{k} p_r(r_j) = (L-1)/MN · Σ_{j=0}^{k} n_j, k = 0, 1, 2, …, L-1
HISTOGRAM
EQUALIZATION
• Result of performing histogram equalization on each of these images.
• The first three results (top to bottom) show significant improvement.
• As expected, histogram equalization did not produce a significant visual difference in the fourth image, because its histogram already spans the full range of the gray scale.
• The histograms of the equalized images are shown in (c).
• While these histograms are all different, the histogram-equalized images themselves are visually very similar.
• This is not unexpected, because the difference between the images in the left column is one of contrast, not of content.
• Since the images have the same content, the increase in contrast resulting from histogram equalization was enough to render any gray-level differences between the resulting images visually indistinguishable.
Histogram
Equalization
• Transformation functions
Histogram Matching
(Specification)
• Histogram equalization automatically determines a transformation function that produces a uniform histogram.
• When automatic enhancement is desired, equalization is a good approach.
• There are applications, however, in which basing enhancement on a uniform histogram is not the best approach.
• In particular, it is sometimes useful to be able to specify the shape of the histogram that we wish the processed image to have.
• The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.
Histogram Matching
(Specification)
• Histogram Specification Procedure:
1) Compute the histogram p_r(r) of the given image and use it to find the histogram-equalization transformation
s_k = T(r_k) = (L-1) Σ_{j=0}^{k} n_j / MN, k = 0, 1, 2, …, L-1
and round the resulting values to the integer range [0, L-1].
2) Compute all values of the transformation function G from the specified histogram p_z, using the same equation:
G(z_q) = (L-1) Σ_{i=0}^{q} p_z(z_i), q = 0, 1, 2, …, L-1
and round the values of G to the integer range [0, L-1].
3) For every value of s_k, k = 0, 1, …, L-1, use the stored values of G to find the corresponding value of z_q so that G(z_q) is closest to s_k, and store these mappings from s to z.
Histogram Matching
(Specification)
• Histogram Specification Procedure:
4) Form the histogram-specified image by first histogram-equalizing the input image and then mapping every equalized pixel value s_k of this image to the corresponding value z_q in the histogram-specified image, using the mappings found in step 3.
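The four steps can be sketched as follows (pure Python; `target_pdf` stands for the specified histogram p_z, given as a list of L probabilities, and is an illustrative parameter name):

```python
def match_histogram(img, target_pdf, L=256):
    # Step 1: histogram-equalization transform T of the input image
    h = [0] * L
    for row in img:
        for r in row:
            h[r] += 1
    mn = sum(h)
    T, run = [], 0
    for count in h:
        run += count
        T.append(round((L - 1) * run / mn))
    # Step 2: transform G computed from the specified histogram p_z
    G, acc = [], 0.0
    for p in target_pdf:
        acc += p
        G.append(round((L - 1) * acc))
    # Step 3: for each s, find the z whose G(z) is closest to s
    inv = [min(range(L), key=lambda z: abs(G[z] - s)) for s in range(L)]
    # Step 4: equalize, then map each s_k to its z_q
    return [[inv[T[r]] for r in row] for row in img]

# Matching to a uniform target behaves like plain equalization.
print(match_histogram([[0, 0], [0, 255]], [1 / 256] * 256))
```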
Local Histogram
Processing
• The histogram processing methods discussed in the previous sections are global, in the sense that pixels are modified by a transformation function based on the intensity distribution of the entire image.
• There are cases in which it is necessary to enhance detail over small areas of an image.
• The procedure is to define a neighborhood and move its center from pixel to pixel.
• At each location, the histogram of the points in the neighborhood is computed, and either a histogram equalization or a histogram specification transformation function is obtained.
• This function is used to map the intensity of the pixel centered in the neighborhood.
• The center of the neighborhood is then moved to an adjacent pixel location and the procedure is repeated.
Local Histogram
Processing
• This approach (updating the histogram as the window moves) has obvious advantages over repeatedly computing the histogram of all pixels in the neighborhood each time the region is moved one pixel location.
• Another approach sometimes used to reduce computation is to utilize non-overlapping regions, but this method usually produces an undesirable "blocky" effect.
L-9: PIXEL-DOMAIN SMOOTHING FILTERS
• Linear
• Order-statistics
Spatial Filtering
• Spatial filters are also called spatial masks, kernels, templates, and windows.
• The concept of filtering has its roots in the use of the Fourier transform for signal processing in the so-called frequency domain; the term spatial filtering differentiates this type of process from the more traditional frequency-domain filtering.
• A spatial filter consists of (1) a neighborhood (typically a small window), and (2) a predefined operation that is performed on the image pixels encompassed by the neighborhood.
• Filtering creates a new pixel with coordinates equal to the center of the neighborhood.
• If the operation is linear, the filter is called a linear spatial filter; otherwise it is nonlinear.
Mechanics of Spatial Filtering
• At each point (x, y), the response of the filter at that point is calculated using a predefined relationship.
• For linear spatial filtering, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.
• For a 3*3 mask, the result (or response) R of linear filtering at a point (x, y) in the image is
R = w(-1,-1) f(x-1, y-1) + w(-1, 0) f(x-1, y) + … + w(0, 0) f(x, y) + … + w(1, 0) f(x+1, y) + w(1, 1) f(x+1, y+1)
the sum of products of the mask coefficients with the corresponding pixels directly under the mask (nine terms in all).
• The coefficient w(0, 0) coincides with image value f(x, y), indicating that the mask is centered at (x, y) when the computation of the sum of products takes place.
Mechanics of Spatial Filtering
• In general, linear filtering of an image f of size M*N with a filter mask of size m*n is given by the expression
g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x+s, y+t)
where a = (m-1)/2 and b = (n-1)/2.
• To generate a complete filtered image this equation must be applied for x = 0, 1, 2, …, M-1 and y = 0, 1, 2, …, N-1.
Linear Spatial Filtering Methods
• Two main linear spatial filtering methods:
• Correlation
• Convolution
• Correlation is often used in applications where we need to measure the similarity between images or parts of images (e.g., pattern matching).
Correlation
A mask w(i, j) slides over the input image f(i, j) to produce the output image g(i, j):
g(x, y) = Σ_{s=-K/2}^{K/2} Σ_{t=-K/2}^{K/2} w(s, t) f(x+s, y+t)
Convolution
• Similar to correlation except that the mask is first flipped both horizontally and vertically (rotated by 180°):
g(x, y) = Σ_{s=-K/2}^{K/2} Σ_{t=-K/2}^{K/2} w(s, t) f(x-s, y-t)
• Note: if w(x, y) is symmetric, that is w(x, y) = w(-x, -y), then convolution is equivalent to correlation.
Spatial Correlation & Convolution
• Correlation is the process of moving a filter mask over the image and computing the sum of products at each location.
• The convolution process is the same except that the filter is first rotated by 180 degrees.
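A sketch of both operations on small list-of-lists images, with zero padding at the borders (the padding choice is an assumption for illustration):

```python
def correlate(f, w):
    """Slide w over f and take the sum of products at each location (no flip)."""
    K = len(w) // 2
    H, W = len(f), len(f[0])
    g = [[0] * W for _ in range(H)]
    for x in range(H):
        for y in range(W):
            g[x][y] = sum(w[s + K][t + K] * f[x + s][y + t]
                          for s in range(-K, K + 1) for t in range(-K, K + 1)
                          if 0 <= x + s < H and 0 <= y + t < W)  # zero padding
    return g

def convolve(f, w):
    """Convolution = correlation with the mask rotated by 180 degrees."""
    flipped = [row[::-1] for row in w[::-1]]
    return correlate(f, flipped)

# Classic check with a unit impulse: correlation yields the 180-degree-rotated
# mask, while convolution reproduces the mask itself.
imp = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
w = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(convolve(imp, w))   # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(correlate(imp, w))  # [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
```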
Assignment
Vector Representation of Linear Filtering
• The response R of an m*n mask at any point (x, y) can be written compactly as a sum of products:
R = w_1 z_1 + w_2 z_2 + … + w_mn z_mn = Σ_{k=1}^{mn} w_k z_k
• where the w's are the mask coefficients (the coefficients of the m*n matrix),
• the z's are the values of the image gray levels corresponding to those coefficients, and
• mn is the total number of coefficients in the mask.
Image Smoothing
• Image smoothing is a function to smooth a data set.
• It creates an approximating function that attempts to capture important patterns in the data, while leaving out noise and other fine-scale structure or rapid phenomena.
• In smoothing, the data points of a signal are modified so that points higher than their neighbors are reduced and points lower than their neighbors are increased, leading to a smoother signal.
Image Smoothing
• Smoothing may be used in two important ways that can aid in data analysis
– Extract more information from the data as long as the assumption of
smoothing is reasonable and
– able to provide analyses that are both flexible and robust.
• Many different algorithms are used in smoothing.
• Smoothing is also usually based on a single value representing the image, such as
the average value of the image or the middle (median) value.
− Smoothing with Average Values
− Smoothing with Median Values
Smoothing Spatial
Linear Filters
• Smoothing filters are used for blurring and for noise reduction.
• Blurring is used in preprocessing steps, such as removal of small details from an image
prior to (large) object extraction, and bridging of small gaps in lines or curves.
• Noise reduction can be accomplished by blurring with a linear filter and also by
nonlinear filtering.
• Also called averaging filters or Lowpass filter.
• By replacing the value of every pixel in an image by the average of the intensity levels in
the neighborhood defined by the filter mask.
• Reduced “sharp” transition in intensities.
• Random noise typically consists of sharp transition.
• Edges also characterized by sharp intensity transitions, so averaging filters have the
undesirable side effect that they blur edges.
• If all coefficients of the filter are equal, it is also called a box filter.
Smoothing Spatial Linear Filters
• The other mask is called a weighted average, terminology used to indicate that pixels are multiplied by different coefficients.
• The center point is weighted more heavily than any other.
• The strategy of weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process.
• The intensity of smaller objects blends with the background.
Smoothing Linear Filter
• Two 3*3 smoothing filters: use of the first filter yields the standard average of the pixels under the mask. This is best seen by substituting the coefficients of the mask into
R = (1/9) Σ_{i=1}^{9} z_i
which is the average of the gray levels of the pixels in the 3*3 neighbourhood defined by the mask.
• Note: instead of being 1/9, the coefficients of the filter are all 1s (it is computationally more efficient to have coefficients valued 1); at the end of the filtering process the entire image is divided by 9.
• An m*n mask would have a normalizing constant equal to 1/(mn).
• A spatial averaging filter in which all coefficients are equal is sometimes called a box filter.
Smoothing Linear Filter
• The second mask yields a so-called weighted average, terminology used to indicate that pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others.
• Here the pixel at the center of the mask is multiplied by a higher value than any other, giving this pixel more importance in the calculation of the average.
• The other pixels are inversely weighted as a function of their distance from the center of the mask.
• The diagonal terms are farther away from the center than the orthogonal neighbors (by a factor of √2) and are thus weighted less than these immediate neighbors of the center pixel.
• The basic strategy of weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process.
• The sum of all the coefficients in the mask of Fig. 3.34(b) is 16, an attractive feature for computer implementation because 16 is an integer power of 2.
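Both masks can be applied with the same machinery; a sketch with zero padding at the borders (the padding choice is an assumption):

```python
def smooth(f, mask, norm):
    """Sum of mask-weighted neighbors, divided by the normalizing constant."""
    K = len(mask) // 2
    H, W = len(f), len(f[0])
    g = [[0] * W for _ in range(H)]
    for x in range(H):
        for y in range(W):
            total = sum(mask[s + K][t + K] * f[x + s][y + t]
                        for s in range(-K, K + 1) for t in range(-K, K + 1)
                        if 0 <= x + s < H and 0 <= y + t < W)
            g[x][y] = total // norm
    return g

box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]        # normalize by 9
weighted = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # normalize by 16 (a power of 2)

# An isolated bright pixel is spread into its neighborhood (blurred).
f = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
print(smooth(f, box, 9)[1][1], smooth(f, weighted, 16)[1][1])  # 10 22
```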
Order-Statistic (Nonlinear) Filters
• The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.
• The best-known filter in this category is the median filter.
• It replaces the value of the center pixel by the median of the intensity values in the neighborhood of that pixel.
• Used to remove impulse or salt-and-pepper noise; larger clusters of noise are affected considerably less.
• The median represents the 50th percentile of a ranked set of numbers, while the 100th or 0th percentile results in the so-called max filter or min filter, respectively.
1. Median Filter (Nonlinear)
• An image processed with the averaging filter has less visible noise, but the price paid is significant blurring.
• The superiority in all respects of median filtering over average filtering in this case is quite evident.
• In general, median filtering is much better suited than averaging for the removal of additive salt-and-pepper noise.
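A 3*3 median filter sketch (border pixels use clamped, i.e. replicated, edges, which is an assumption for illustration):

```python
import statistics

def median_filter(f):
    """Replace each pixel with the median of its 3x3 neighborhood
    (edges handled by clamping coordinates to the image)."""
    H, W = len(f), len(f[0])
    g = [[0] * W for _ in range(H)]
    for x in range(H):
        for y in range(W):
            hood = [f[min(max(x + s, 0), H - 1)][min(max(y + t, 0), W - 1)]
                    for s in (-1, 0, 1) for t in (-1, 0, 1)]
            g[x][y] = statistics.median(hood)
    return g

# A single salt-noise pixel is removed completely; an averaging filter
# would instead smear it into its neighbors.
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(median_filter(noisy))  # [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```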
L-10: PIXEL-DOMAIN SHARPENING FILTERS
• First and second derivatives
Sharpening
Spatial Filters
• Objective of sharpening is to highlight transitions in
intensity.
• Uses in printing and medical imaging to industrial
inspection and autonomous guidance in military systems.
• Averaging is analogous to integration, so sharpening is
analogous to spatial differentiation.
• Thus, image differentiation enhances edges and other
discontinuities (such as noise) and deemphasizes areas
with slowly varying intensities.
Foundation
• A definition for a first-order derivative (1) must be zero in areas of constant intensity, (2) must be nonzero at the onset of an intensity step or ramp, and (3) must be nonzero along ramps.
• A definition for a second-order derivative (1) must be zero in constant areas, (2) must be nonzero at the onset and end of an intensity step or ramp, and (3) must be zero along ramps of constant slope.
• The first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x+1) - f(x)
• The second-order derivative is
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
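The two difference formulas can be sketched on a 1-D scan line:

```python
def derivatives(profile):
    """First derivative f(x+1)-f(x); second derivative f(x+1)+f(x-1)-2f(x)."""
    n = len(profile)
    first = [profile[x + 1] - profile[x] for x in range(n - 1)]
    second = [profile[x + 1] + profile[x - 1] - 2 * profile[x]
              for x in range(1, n - 1)]
    return first, second

# A ramp followed by a flat region: the first derivative is nonzero along
# the whole ramp, the second only where the slope changes (end of the ramp).
first, second = derivatives([0, 1, 2, 3, 3, 3])
print(first)   # [1, 1, 1, 0, 0]
print(second)  # [0, 0, -1, 0]
```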
Sharpening Spatial Filters.
• (a) shows a simple image that contains various solid objects, a line, and a single noise point.
• (b) shows a horizontal gray-level profile (scan line) of the image along the center, including the noise point: a one-dimensional function.
• (c) shows a simplification of the profile, with just enough numbers to make it possible to analyze how the first- and second-order derivatives behave as they encounter a noise point, a line, and then the edge of an object.
• In the simplified diagram the transition in the ramp spans four pixels, the noise point is a single pixel, the line is three pixels thick, and the transition into the gray-level step takes place between adjacent pixels.
• The number of gray levels was simplified to only eight.

Sharpening Spatial Filters
• Comparing the properties of the two derivatives: the first-order derivative is nonzero along the entire ramp, while the second-order derivative is nonzero only at the onset and end of the ramp.
• Because edges in an image resemble this type of transition, we conclude that first-order derivatives produce "thick" edges and second-order derivatives much finer ones.
• For the isolated noise point, the response at and around the point is much stronger for the second-order than for the first-order derivative.
• A second-order derivative is much more aggressive than a first-order derivative in enhancing sharp changes; it therefore enhances fine detail (including noise) much more than a first-order derivative.
• The thin line is a fine detail, and we see essentially the same difference between the two derivatives.
• If the maximum gray level of the line had been the same as that of the isolated point, the response of the second derivative would have been stronger for the latter.
• The second derivative has a transition from positive back to negative, shown as a thin double line.
CONCLUSIONS
(1) First-order derivatives generally produce thicker edges in an image.
(2) Second-order derivatives have a stronger response to fine detail, such as thin lines and isolated points.
(3) First-order derivatives generally have a stronger response to a gray-level step.
(4) Second-order derivatives produce a double response at step changes in gray level.
Sharpening Filters: Derivatives
• Taking the derivative of an image sharpens the image.
• The derivative of an image can be computed using the gradient.
• The gradient is a vector, which has magnitude and direction:
• Magnitude: provides information about edge strength.
• Direction: perpendicular to the direction of the edge.
Example (figure): the partial derivatives ∂f/∂x and ∂f/∂y of a sample image.
Second Derivatives: The Laplacian
• The simplest isotropic derivative operator is the Laplacian, which, for a function (image) f(x, y) of two variables, is defined as
∇²f = ∂²f/∂x² + ∂²f/∂y²
• Because derivatives of any order are linear operations, the Laplacian is a linear operator.
• For digital image processing, this equation is expressed in discrete form; the definition of the digital second derivative given earlier is one of the most used.
• Taking into account that we now have two variables, the partial second-order derivative in the x-direction is
∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
and similarly in the y-direction, giving the discrete Laplacian
∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
Second Derivatives: The Laplacian
• This equation can be implemented using the mask shown in Fig. 3.39(a), which gives an isotropic result for rotations in increments of 90°.
• The diagonal directions can be incorporated in the definition of the digital Laplacian by adding two more terms to Eq. (3.7-4), one for each of the two diagonal directions.
• Since each diagonal term also contains a -2f(x, y) term, the total subtracted from the difference terms now would be -8f(x, y). This mask yields isotropic results for increments of 45°.
• The other two masks shown in Fig. 3.39 are also used frequently in practice; they are based on a definition of the Laplacian that is the negative of the first one.
• They yield equivalent results, but the difference in sign must be kept in mind when combining (by addition or subtraction) a Laplacian-filtered image with another image.
Second Derivatives-The
Laplacian
• Because the Laplacian is a derivative operator, its use highlights gray-level discontinuities in an image and deemphasizes regions with slowly varying gray levels.
• It produces images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.
• Background features can be "recovered" while still preserving the sharpening effect of the Laplacian simply by adding the original and Laplacian images.
• If the definition of the Laplacian used has a negative center coefficient, then we subtract, rather than add, the Laplacian image to obtain a sharpened result.
• The basic way of using the Laplacian for image enhancement is therefore
g(x, y) = f(x, y) - ∇²f(x, y)  (if the Laplacian mask has a negative center coefficient)
g(x, y) = f(x, y) + ∇²f(x, y)  (if it has a positive center coefficient)
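A sketch of Laplacian sharpening using the discrete Laplacian with the -4 center coefficient (so the Laplacian image is subtracted); interior pixels only, with borders left unchanged as a simplifying assumption:

```python
def laplacian_sharpen(f):
    """g(x,y) = f(x,y) - lap(x,y), where
    lap = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)
    (this Laplacian mask has a negative center, hence the subtraction)."""
    H, W = len(f), len(f[0])
    g = [row[:] for row in f]  # borders copied through unchanged
    for x in range(1, H - 1):
        for y in range(1, W - 1):
            lap = (f[x + 1][y] + f[x - 1][y] + f[x][y + 1] + f[x][y - 1]
                   - 4 * f[x][y])
            g[x][y] = f[x][y] - lap
    return g

# Flat areas are untouched; an intensity discontinuity is exaggerated.
print(laplacian_sharpen([[5, 5, 5], [5, 5, 5], [5, 5, 5]])[1][1])   # 5
print(laplacian_sharpen([[0, 0, 0], [0, 10, 0], [0, 0, 0]])[1][1])  # 50
```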
E.g. North Pole of the Moon
• Figure 3.40(a) shows an image of the North Pole of the moon.
• Figure 3.40(b) shows the result of filtering this image with the Laplacian mask in Fig. 3.39(b).
• Since the Laplacian image contains both positive and negative values, a typical way to scale it is to shift and stretch its values into the display range.
• The image shown in Fig. 3.40(c) was scaled in this manner for display purposes.
• The dominant features of the image are edges and sharp gray-level discontinuities of various gray-level values.
• The background, previously near black, is now gray due to the scaling; this grayish appearance is typical of Laplacian images that have been scaled properly.
• The detail in this image is unmistakably clearer and sharper than in the original image.
• Adding the image to the Laplacian restored the overall gray-level variations in the image, with the Laplacian increasing the contrast at the locations of gray-level discontinuities.
First Derivative – The
Gradient
• First derivatives in image processing are implemented using the magnitude of the gradient.
• For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector
  ∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
• Its magnitude is
  ∇f = mag(∇f) = [Gx² + Gy²]^(1/2)
First Derivative – The Gradient
• The computational burden of implementing the magnitude equation is reduced by approximating the magnitude of the gradient with absolute values instead of squares and square roots:
  ∇f ≈ |Gx| + |Gy|
• This equation is simpler to compute and it still preserves relative changes in gray levels, but the isotropic feature property is lost in general.
• However, the isotropic properties of the digital gradient are preserved only for a limited number of rotational increments that depend on the masks used to approximate the derivatives.
• As it turns out, the most popular masks used to approximate the gradient give the same result only for vertical and horizontal edges, and thus the isotropic properties of the gradient are preserved only for multiples of 90°.
Assignment
• Figure 3.45(a) shows an optical image of a contact lens, illuminated by a lighting arrangement designed to highlight imperfections, such as the two edge defects in the lens boundary seen at 4 and 5 o'clock.
• The edge defects also are quite visible in the gradient image, but with the added advantage that constant or slowly varying shades of gray have been eliminated, thus simplifying considerably the computational task required for automated inspection.
• Note also that the gradient process highlighted small specks that are not readily visible in the gray-scale image (specks like these can be foreign matter, air pockets in a supporting solution, or minuscule imperfections in the lens).
• The ability to enhance small discontinuities in an otherwise flat gray field is another important feature of the gradient.
L-11
• Two-dimensional DFT and its inverse
• Frequency domain filters – low-pass and high-pass
ASSIGNMENT – 2D Fourier transform and its inverse (Fourier transform pair)
• Fourier transform:
  F(u, v) = ∫∫ f(x, y) e^(−j2π(ux + vy)) dx dy,  both integrals from −∞ to ∞    (4.2-3)
• Inverse Fourier transform:
  f(x, y) = ∫∫ F(u, v) e^(j2π(ux + vy)) du dv,  both integrals from −∞ to ∞    (4.2-4)
• Discrete Fourier transform (1-D):
  F(u) = (1/M) Σ_{x=0}^{M−1} f(x) e^(−j2πux/M)   for u = 0, 1, 2, ..., M−1    (4.2-5)
• Inverse discrete Fourier transform:
  f(x) = Σ_{u=0}^{M−1} F(u) e^(j2πux/M)   for x = 0, 1, 2, ..., M−1    (4.2-6)
Frequency Domain
• Euler's formula:
  e^(jθ) = cos θ + j sin θ    (4.2-7)
• Substituting this expression into Eq. (4.2-5) gives
  F(u) = (1/M) Σ_{x=0}^{M−1} f(x)[cos(2πux/M) − j sin(2πux/M)]   for u = 0, 1, 2, ..., M−1    (4.2-8)
• To express F(u) in polar coordinates:
  F(u) = |F(u)| e^(−jφ(u))    (4.2-9)
…Contd
• |F(u)| = [R²(u) + I²(u)]^(1/2) is called the magnitude or spectrum    (4.2-10)
• φ(u) = tan⁻¹[I(u)/R(u)] is called the phase angle or phase spectrum    (4.2-11)
• P(u) = |F(u)|² = R²(u) + I²(u) is called the power spectrum or spectral density    (4.2-12)
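The spectrum, phase angle, and power spectrum all come directly from the real and imaginary parts of F(u), as in Eqs. (4.2-10)–(4.2-12). A small NumPy sketch (`polar_form` is a name chosen here; `arctan2` serves as the quadrant-aware tan⁻¹):

```python
import numpy as np

def polar_form(F):
    """Split complex F(u) into magnitude |F(u)|, phase angle phi(u), and
    power spectrum P(u) = |F(u)|^2 = R^2(u) + I^2(u)."""
    R, I = F.real, F.imag
    magnitude = np.sqrt(R ** 2 + I ** 2)   # Eq. (4.2-10)
    phase = np.arctan2(I, R)               # Eq. (4.2-11), quadrant-aware
    power = magnitude ** 2                 # Eq. (4.2-12)
    return magnitude, phase, power
```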
…contd
• 2D DFT and its inverse
• Spectrum, phase angle, and spectral density of F(u, v)
Smoothing Frequency-Domain Filters
• G(u, v) = H(u, v) F(u, v)
Ideal Lowpass Filters
• Transfer function:
  H(u, v) = 1 if D(u, v) ≤ D₀
  H(u, v) = 0 if D(u, v) > D₀
• D(u, v) is the distance between a point (u, v) and the center of the frequency rectangle:
  D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2)
• where M and N are the padded sizes
• D₀ is called the cutoff frequency
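The ILPF transfer function on a centered M×N frequency rectangle can be sketched as follows (an illustration assuming NumPy; `distance_grid` and `ideal_lowpass` are names chosen here):

```python
import numpy as np

def distance_grid(M, N):
    """D(u, v) = sqrt((u - M/2)^2 + (v - N/2)^2): distance of each point
    from the center of the (padded) M x N frequency rectangle."""
    u = np.arange(M).reshape(-1, 1)
    v = np.arange(N).reshape(1, -1)
    return np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)

def ideal_lowpass(M, N, D0):
    """H(u, v) = 1 inside the cutoff radius D0, 0 outside."""
    return (distance_grid(M, N) <= D0).astype(float)
```

To filter an image one would compute G(u, v) = H(u, v) F(u, v) with F taken as a centered spectrum, e.g. `np.fft.fftshift(np.fft.fft2(f))`, then invert.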
Ideal LPF
• D₀ is picked based upon the fraction of the total image power to be enclosed (everything outside the circle is removed).
• Choosing the cutoff frequency according to the power spectrum of an image:
  P_T = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} P(u, v)
  α = 100 [Σ_u Σ_v P(u, v) / P_T]
  where the sums in α are taken over points inside a circle of radius D₀, so that the circle encloses α percent of the total power.
Filtering the padded test image with an ILPF in the frequency domain: blurring and ringing effects
• Frequency domain: G(u, v) = H(u, v) F(u, v)
• Spatial domain: g(x, y) = h(x, y) * f(x, y)
• Blurring: high frequencies are removed
• Ringing: the cutoff is too sharp
2. Butterworth Lowpass Filters
• Transfer function of the BLPF of order n:
  H(u, v) = 1 / [1 + (D(u, v)/D₀)^(2n)]
3. Gaussian Lowpass Filters
• Transfer function of the GLPF:
  H(u, v) = e^(−D²(u,v)/2σ²)
• With σ = D₀ (the cutoff frequency):
  H(u, v) = e^(−D²(u,v)/2D₀²)
Sharpening Frequency-Domain Filters
• H_hp(u, v) = 1 − H_lp(u, v)
HPF (SPATIAL DOMAIN)
1. Test Image: Ideal HPF
• 2-D IHPF:
  H(u, v) = 0 if D(u, v) ≤ D₀
  H(u, v) = 1 if D(u, v) > D₀
2. Test Image: Butterworth HPF
  H(u, v) = 1 / [1 + (D₀/D(u, v))^(2n)]
3. Test Image: Gaussian HPF
  H(u, v) = 1 − e^(−D²(u,v)/2D₀²)
HPF Mathematical Definitions
4. The Laplacian in the Frequency Domain
• The Laplacian has the frequency-domain transfer function (for a centered DFT):
  ℱ[∇²f(x, y)] = −[(u − M/2)² + (v − N/2)²] F(u, v)
• An image can be enhanced by subtracting the Laplacian from the original image:
  g(x, y) = f(x, y) − ∇²f(x, y)
• Both steps can be combined into a single frequency-domain operation:
  g(x, y) = ℱ⁻¹{[1 + (u − M/2)² + (v − N/2)²] F(u, v)}
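The combined operation above can be sketched end-to-end with NumPy's FFT routines. Note this follows the slide's unnormalized transfer function −[(u − M/2)² + (v − N/2)²]; textbook versions often carry an extra 4π² factor and scale the result, so treat this as an illustration, not a reference implementation (the function name is mine):

```python
import numpy as np

def laplacian_sharpen_freq(img):
    """Sharpen with the single composite filter
       H(u, v) = 1 + (u - M/2)^2 + (v - N/2)^2,
    which realizes g = f - laplacian(f) in one frequency-domain pass."""
    M, N = img.shape
    u = np.arange(M).reshape(-1, 1)
    v = np.arange(N).reshape(1, -1)
    H = 1.0 + (u - M / 2) ** 2 + (v - N / 2) ** 2
    F = np.fft.fftshift(np.fft.fft2(img))      # centered spectrum
    g = np.fft.ifft2(np.fft.ifftshift(H * F))  # filter and invert
    return np.real(g)
```

At the center of the rectangle H = 1, so a constant image (whose only spectral content is at zero frequency) passes through unchanged, consistent with the Laplacian vanishing on flat regions.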
UNIT 2 END