Meta Electronics
ENHANCEMENT
AND FILTERING
Digital Image Processing, 3rd edition by Gonzalez and
Woods
L-7
Gray Level Transformations
Image
Enhancement
The objective of image enhancement is to process an image so that the
result is more suitable than the original image for a specific application.
There are two main approaches:
• Image enhancement in the spatial domain: direct manipulation of the pixels of an image
– Point processing: change pixel intensities
– Spatial filtering: operate on pixel neighborhoods
• Image enhancement in the frequency domain: modification of the Fourier transform of an image
2. Log Transformations
• The general form of the log transformation is s = c log(1 + r), where c is a constant and r ≥ 0.
• It maps a narrow range of low intensity values in the input into a wide range of output levels; the opposite is true of higher values of input levels.
• It expands the values of dark pixels in an image while compressing the higher-level values.
• The log function therefore compresses the dynamic range of images with large variations in pixel values (see the sketch after the Fourier-spectrum example below).
Some Basic Intensity Transformation
Functions
• Fig. 3.5(a) shows a Fourier spectrum with values in the range 0 to 1.5×10⁶.
• When these values are scaled linearly for display in an 8-bit system, the brightest pixels dominate the display at the expense of lower (and just as important) values of the spectrum; most of the lower values are perceived as black.
• Applying the log transformation first compresses the spectrum values into a much narrower range; Fig. 3.5(b) shows the result of scaling this new range linearly and displaying the spectrum on the same 8-bit display.
• The wealth of detail visible in this image, as compared with a straight display of the spectrum, is evident from these pictures (a minimal sketch follows).
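A minimal NumPy sketch of this idea (not from the slides), using the general log transformation s = c·log(1 + r) and a placeholder random image in place of Fig. 3.5(a):

import numpy as np

def log_transform(r, out_max=255.0):
    # s = c * log(1 + r), with c chosen so the largest input maps to out_max
    r = np.asarray(r, dtype=np.float64)
    c = out_max / np.log(1.0 + r.max())
    return c * np.log(1.0 + r)

img = np.random.rand(256, 256)                        # placeholder image (stand-in for the original)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))  # values can span several orders of magnitude
display = log_transform(spectrum).astype(np.uint8)    # compressed range, suitable for an 8-bit display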
Some Basic Intensity
Transformation Functions
3. Power Law (Gamma) Transformations
• $s = c\,r^{\gamma}$, where c and γ are both positive constants.
• Fractional values of gamma (0 < γ < 1) map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels (γ > 1).
• c = γ = 1 gives the identity transformation.
• A variety of devices used for image capture, printing, and display respond according to a power law.
• The process used to correct these power-law response phenomena is called gamma correction (sketched below).
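A minimal sketch of gamma correction in NumPy (an illustration, not the book's code); the toy image and the example display gamma of 2.5 are assumptions:

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    # s = c * r**gamma on intensities normalized to [0, 1], rescaled back to 8 bits
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0.0, 255.0).astype(np.uint8)

img = np.full((4, 4), 64, dtype=np.uint8)       # assumed toy image
corrected = gamma_transform(img, gamma=1/2.5)   # pre-correct for a display with gamma 2.5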
Some Basic Intensity
Transformation Functions
• Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
Correlation
[Figure: a mask w slides over the input image f(i, j) to produce the output image]
• Each output pixel is the sum of products of the mask coefficients and the image pixels under the mask:
$g(x, y) = w(x, y) \star f(x, y) = \sum_{s=-K/2}^{K/2} \sum_{t=-K/2}^{K/2} w(s, t)\, f(x+s,\, y+t)$
Convolution
• Similar to correlation except that the mask is first flipped both horizontally
and vertically.
$g(x, y) = w(x, y) * f(x, y) = \sum_{s=-K/2}^{K/2} \sum_{t=-K/2}^{K/2} w(s, t)\, f(x-s,\, y-t)$
• The response, R, of an m×n mask at any point (x, y) can be written compactly as a sum of products (SOP):
$R = w_1 z_1 + w_2 z_2 + \cdots + w_{mn} z_{mn} = \sum_{i=1}^{mn} w_i z_i$
• where the w's are the mask coefficients (the coefficients of the m × n matrix),
• the z's are the values of the image gray levels corresponding to those coefficients,
• mn is the total number of coefficients in the mask (a minimal sketch follows).
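A minimal sketch of this sum-of-products response and of naive correlation in NumPy (an illustration under the definitions above; the function names are placeholders):

import numpy as np

def mask_response(w, z):
    # R = w1*z1 + w2*z2 + ... + w_mn*z_mn for a mask w and neighborhood z of equal shape
    return float(np.sum(w * z))

def correlate2d(f, w):
    # g(x, y) = sum_s sum_t w(s, t) * f(x+s, y+t), with zero padding at the borders
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode="constant")
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = mask_response(w, fp[x:x + m, y:y + n])
    return g

# Convolution is the same operation with the mask flipped in both directions:
# convolve2d(f, w) == correlate2d(f, np.flipud(np.fliplr(w)))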
Image Smoothing
• Image smoothing is a function used to smooth a data set.
• It creates an approximating function that attempts to capture important patterns in the data, while leaving out noise and other fine-scale structures or rapid phenomena.
• In smoothing, the data points of a signal are modified: points that are higher than the adjacent points (presumably because of noise) are reduced, and points that are lower than the adjacent points are increased, leading to a smoother signal.
Image Smoothing
• Smoothing may be used in two important ways that aid data analysis:
– it can extract more information from the data, as long as the assumption of smoothing is reasonable, and
– it can provide analyses that are both flexible and robust.
• Many different algorithms are used in smoothing.
• Smoothing is usually based on a single value representing a neighborhood of the image, such as the average value or the middle (median) value:
− Smoothing with Average Values
− Smoothing with Median Values
Smoothing Spatial
Linear Filters
• Smoothing filters are used for blurring and for noise reduction.
• Blurring is used in preprocessing steps, such as removal of small details from an image prior to (large) object extraction, and bridging of small gaps in lines or curves.
• Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
• Smoothing linear filters are also called averaging filters or lowpass filters.
• They replace the value of every pixel in an image by the average of the intensity levels in the neighborhood defined by the filter mask.
• This reduces “sharp” transitions in intensities; random noise typically consists of such sharp transitions.
• Edges are also characterized by sharp intensity transitions, so averaging filters have the undesirable side effect that they blur edges.
• If all coefficients in the filter are equal, it is also called a box filter.
Smoothing Spatial
Linear Filters
• The other mask is called a weighted average, terminology used to indicate that pixels are multiplied by different coefficients.
• The center point is weighted more heavily than any other point.
• The strategy behind weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process.
• The intensity of smaller objects blends with the background.
Smoothing
Linear Filter
• Consider two 3×3 smoothing filters. Use of the first filter yields the standard average of the pixels under the mask. This can best be seen by substituting the coefficients of the mask into
$R = \frac{1}{9}\sum_{i=1}^{9} z_i$
which is the average of the gray levels of the pixels in the 3×3 neighbourhood defined by the mask.
• Note: instead of being 1/9, the coefficients of the filter are all 1s (it is computationally more efficient to have coefficients valued 1); at the end of the filtering process the entire image is divided by 9.
• An m×n mask would have a normalizing constant equal to 1/(mn).
• A spatial averaging filter in which all coefficients are equal is sometimes called a box filter (sketched below).
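A minimal sketch of the 3×3 box filter using SciPy's correlation routine (one possible choice, not prescribed by the slides; the toy image is an assumption):

import numpy as np
from scipy.ndimage import correlate

box = np.ones((3, 3)) / 9.0                         # all coefficients equal; normalizing constant 1/(m*n) = 1/9
f = np.arange(25, dtype=np.float64).reshape(5, 5)   # assumed toy image
g = correlate(f, box, mode="nearest")               # each output pixel is the average of its 3x3 neighborhood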
Smoothing Linear Filter
• The second mask yields a so-called weighted average, terminology used to indicate that pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others.
• Here the pixel at the center of the mask is multiplied by a higher value than any other, thus giving this pixel more importance in the calculation of the average.
• The other pixels are inversely weighted as a function of their distance from the center of the mask.
• The diagonal terms are further away from the center than the orthogonal neighbors (by a factor of √2) and, thus, are weighted less than these immediate neighbors of the center pixel.
• The basic strategy behind weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process.
• The sum of all the coefficients in the mask of Fig. 3.34(b) is equal to 16, an attractive feature for computer implementation because it is an integer power of 2 (a minimal sketch follows).
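A minimal sketch of the center-weighted mask described above (coefficient sum 16), applied with the same correlation routine; the toy image is an assumption:

import numpy as np
from scipy.ndimage import correlate

weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=np.float64) / 16.0   # center weighted highest, diagonals lowest

f = np.arange(25, dtype=np.float64).reshape(5, 5)
g = correlate(f, weighted, mode="nearest")   # weighted average over each 3x3 neighborhood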
Order-Statistic (Nonlinear)
Filters
• Response is based on ordering (ranking) the pixels contained in the image
area encompassed by the filter, and then replacing the value of the center
pixel with the value determined by the ranking result.
• Best-known filter is median filter.
• Replaces the value of a center pixel by the median of the intensity values
in the neighborhood of that pixel.
• Used to remove impulse or salt-and-pepper noise; larger noise clusters are affected considerably less.
• The median represents the 50th percentile of a ranked set of numbers, while the 100th or 0th percentile results in the so-called max filter or min filter, respectively.
1. Median Filter
(Nonlinear)
• The image processed with the averaging filter has less visible noise, but the price paid
is significant blurring.
• The superiority in all respects of median over average filtering in this
case is quite evident.
• In general, median filtering is much better suited than averaging for the removal of
additive salt-and-pepper noise.
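A minimal sketch of median filtering with SciPy (an illustration; the toy image with a single impulse is an assumption):

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

f = np.full((7, 7), 100, dtype=np.uint8)
f[3, 3] = 255                          # a single salt (impulse) noise point
g_med = median_filter(f, size=3)       # the impulse is removed outright
g_avg = uniform_filter(f, size=3)      # the 3x3 average merely spreads and blurs the impulse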
L-10
PIXEL-DOMAIN SHARPENING
FILTERS
• (b) shows a horizontal gray-level profile (scan line) of the image along the center and including the noise
point. (one-dimensional function)
• (c) shows a simplification of the profile, with just enough numbers to make it possible for us to analyze how
the first- and second-order derivatives behave as they encounter a noise point, a line, and then the edge of an
object.
• In the simplified diagram the transition in the ramp spans four pixels, the noise point is a single pixel, the line is three pixels thick, and the transition into the gray-level step takes place between adjacent pixels.
• The first-order derivative is nonzero along the entire ramp, while the second-order derivative is nonzero only at the onset and end of the ramp.
• Because edges in an image resemble this type of transition, we conclude that first-order derivatives produce “thick” edges and second-order derivatives, much finer ones.
• Next, for the isolated noise point, the response at and around the point is much stronger for the second-order than for the first-order derivative.
• A second-order derivative is much more aggressive than a first-order derivative in enhancing sharp changes; thus a second-order derivative enhances fine detail (including noise) much more than a first-order derivative.
• The thin line is a fine detail, and we see essentially the same difference between the two derivatives. If the maximum gray level of the line had been the same as the isolated point, the response of the second derivative would have been stronger for the latter.
• The second derivative has a transition from positive back to negative at the step, shown as a thin double line.
CONCLUSIONS
(1) First-order derivatives generally produce thicker edges in an image.
(2) Second-order derivatives have a stronger response to fine detail, such as thin lines and isolated points.
(3) First-order derivatives generally have a stronger response to a gray-level step.
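A minimal 1-D sketch of these conclusions, using the discrete definitions f(x+1) − f(x) and f(x+1) + f(x−1) − 2f(x) on an assumed profile containing a ramp and an isolated noise point:

import numpy as np

profile = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 6, 1, 1], dtype=float)  # ramp, flat region, isolated point

first = profile[1:] - profile[:-1]                        # f(x+1) - f(x)
second = profile[2:] - 2 * profile[1:-1] + profile[:-2]   # f(x+1) + f(x-1) - 2 f(x)

# The first derivative is nonzero along the whole ramp (thick edge response),
# while the second derivative is nonzero only at the ramp's onset and end and
# responds much more strongly at the isolated noise point.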
• First-order partial derivatives: ∂f/∂x and ∂f/∂y.
Second Derivatives-The
Laplacian
The simplest isotropic derivative operator is the Laplacian, which, for a function (image) f(x, y) of two variables, is defined as
$\nabla^2 f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2}$
• Because derivatives of any order are linear operations, the Laplacian is a linear operator.
• For digital image processing, this equation must be expressed in discrete form; the definition of the digital second derivative given earlier is one of the most used.
• Taking into account that we now have two variables, we use the following notation for the partial second-order derivative in the x-direction:
$\dfrac{\partial^2 f}{\partial x^2} = f(x+1, y) + f(x-1, y) - 2f(x, y)$
and similarly in the y-direction, so that the discrete Laplacian is
$\nabla^2 f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)$
Second Derivatives-The Laplacian
• This equation can be implemented using the mask shown in Fig. 3.39(a), which gives an isotropic result for rotations in increments of 90°.
• The diagonal directions can be incorporated in the definition of the digital Laplacian by adding two more terms to Eq. (3.7-4), one for each of the two diagonal directions.
• Since each diagonal term also contains a −2f(x, y) term, the total subtracted from the difference terms now would be −8f(x, y). This mask yields isotropic results for increments of 45°.
• The other two masks shown in Fig. 3.39 also are used frequently in practice. They are based on a definition of the Laplacian that is the negative of the one used here; they yield equivalent results, but the difference in sign must be kept in mind when combining (by addition or subtraction) a Laplacian-filtered image with another image.
• Figure 3.40(b) shows the result of filtering this image with the Laplacian mask in Fig. 3.39(b).
• Since the Laplacian image contains both positive and negative values, a typical way to scale it for display is to shift and rescale its values so that the minimum maps to 0; the image shown in Fig. 3.40(c) was scaled in this manner for display purposes.
• The dominant features of the image are edges and sharp gray-level discontinuities of various gray-level values. The background, previously near black, is now gray due to the scaling; this grayish appearance is typical of Laplacian images that have been scaled properly.
• The detail in this image is unmistakably clearer and sharper than in the original image.
• Adding the image to the Laplacian restored the overall gray-level variations in the image, with the Laplacian increasing the contrast at the locations of gray-level discontinuities (a minimal sketch of this sharpening step follows).
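A minimal sketch of Laplacian sharpening along these lines (an illustration, assuming a normalized image f); with a negative-center mask the Laplacian is subtracted from the image:

import numpy as np
from scipy.ndimage import convolve

lap90 = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=np.float64)   # isotropic for rotations in increments of 90°
lap45 = np.array([[1,  1, 1],
                  [1, -8, 1],
                  [1,  1, 1]], dtype=np.float64)   # diagonals included: isotropic in increments of 45°

f = np.random.rand(64, 64)                  # assumed image with values in [0, 1]
lap = convolve(f, lap45, mode="nearest")    # Laplacian image (positive and negative values)
g = np.clip(f - lap, 0.0, 1.0)              # f - Laplacian sharpens when the center coefficient is negative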
First Derivatives - The Gradient
• The gradient magnitude is commonly approximated by the sum of absolute values, |∂f/∂x| + |∂f/∂y|. This equation is simpler to compute and it still preserves relative changes in gray levels, but the isotropic property is lost in general.
• However, the isotropic properties of the digital gradient defined in the following paragraph are preserved only for a limited number of rotational increments that depend on the masks used to approximate the derivatives.
• As it turns out, the most popular masks used to approximate the gradient (e.g., the Sobel masks) give the same result only for vertical and horizontal edges, so the isotropic properties of the gradient are preserved only for these directions (a minimal sketch follows).
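A minimal sketch of the |gx| + |gy| gradient approximation using Sobel masks (one common choice; the image is an assumption):

import numpy as np
from scipy.ndimage import sobel

f = np.random.rand(64, 64)
gx = sobel(f, axis=0, mode="nearest")   # approximation to the partial derivative in x
gy = sobel(f, axis=1, mode="nearest")   # approximation to the partial derivative in y
grad = np.abs(gx) + np.abs(gy)          # simpler than sqrt(gx**2 + gy**2); isotropy is partly lost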
Assignment
• Figure 3.45(a) shows an optical image of a contact lens, illuminated by a lighting arrangement designed to highlight imperfections, such as the two edge defects in the lens boundary seen at 4 and 5 o'clock.
• The edge defects also are quite visible in the gradient image, but with the added advantage that constant or slowly varying shades of gray have been eliminated, thus considerably simplifying the computational task required for automated inspection.
• Note also that the gradient process highlighted small specks that are not readily visible in the gray-scale image (specks like these can be foreign matter, air pockets in a supporting solution, or minuscule imperfections in the lens).
• The ability to enhance small discontinuities in an otherwise flat gray field is another important feature of the gradient.
L-11
• The two-dimensional DFT and its inverse follow from the continuous Fourier transform.
• Fourier transform:
$F(u, v) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, e^{-j2\pi(ux + vy)}\, dx\, dy$   (4.2-3)
• Discrete Fourier Transform (1-D):
$F(u) = \dfrac{1}{M}\sum_{x=0}^{M-1} f(x)\, e^{-j2\pi ux/M}, \quad u = 0, 1, 2, \ldots, M-1$   (4.2-5)
• Inverse DFT:
$f(x) = \sum_{u=0}^{M-1} F(u)\, e^{j2\pi ux/M}, \quad x = 0, 1, 2, \ldots, M-1$   (4.2-6)
(a 1-D sketch follows)
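A minimal NumPy sketch of the 1-D pair in Eqs. (4.2-5) and (4.2-6) (an illustration; note that np.fft.fft places no 1/M factor on the forward transform, unlike this formulation):

import numpy as np

def dft_1d(f):
    # F(u) = (1/M) * sum_x f(x) exp(-j 2 pi u x / M)
    M = len(f)
    u = np.arange(M).reshape(-1, 1)
    x = np.arange(M).reshape(1, -1)
    return (1.0 / M) * np.sum(f * np.exp(-2j * np.pi * u * x / M), axis=1)

def idft_1d(F):
    # f(x) = sum_u F(u) exp(j 2 pi u x / M)
    M = len(F)
    x = np.arange(M).reshape(-1, 1)
    u = np.arange(M).reshape(1, -1)
    return np.sum(F * np.exp(2j * np.pi * u * x / M), axis=1)

f = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(idft_1d(dft_1d(f)), f)   # the pair inverts exactly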
Frequency Domain
• Euler's formula:
$e^{j\theta} = \cos\theta + j\sin\theta$   (4.2-7)
• Substituting this expression into Eq. (4.2-5):
$F(u) = \dfrac{1}{M}\sum_{x=0}^{M-1} f(x)\left[\cos(2\pi ux/M) - j\sin(2\pi ux/M)\right]$
…contd
• 2-D DFT and its inverse:
$F(u, v) = \dfrac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x, y)\, e^{-j2\pi(ux/M + vy/N)}$
$f(x, y) = \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u, v)\, e^{j2\pi(ux/M + vy/N)}$
• Percentage of the total image power enclosed by the filter:
$\alpha = 100\left[\sum_{u}\sum_{v} P(u, v) \big/ P_T\right]$
2. Butterworth Lowpass Filter (BLPF)
$H(u, v) = \dfrac{1}{1 + \left[D(u, v)/D_0\right]^{2n}}$
3. Gaussian Lowpass Filters
$H(u, v) = e^{-D^2(u, v)/2\sigma^2}$
• With $\sigma = D_0$ (the cutoff frequency):
$H(u, v) = e^{-D^2(u, v)/2D_0^2}$
(a minimal filtering sketch follows)
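A minimal sketch of Gaussian lowpass filtering in the frequency domain (an illustration; the image, cutoff D0 = 30, and helper names are assumptions):

import numpy as np

def gaussian_lowpass(shape, d0):
    # H(u, v) = exp(-D^2(u, v) / (2 * d0^2)), with D measured from the center of the shifted spectrum
    M, N = shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    V, U = np.meshgrid(v, u)
    D2 = U**2 + V**2
    return np.exp(-D2 / (2.0 * d0**2))

f = np.random.rand(128, 128)
F = np.fft.fftshift(np.fft.fft2(f))             # center the spectrum
G = F * gaussian_lowpass(f.shape, d0=30)        # attenuate high frequencies
g = np.real(np.fft.ifft2(np.fft.ifftshift(G)))  # back to the spatial domain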
Sharpening Frequency-Domain Filters
1. 2-D Ideal Highpass Filter (IHPF)
$H(u, v) = \begin{cases} 0 & \text{if } D(u, v) \le D_0 \\ 1 & \text{if } D(u, v) > D_0 \end{cases}$
2. Test Image: Butterworth HPF
$H(u, v) = \dfrac{1}{1 + \left[D_0/D(u, v)\right]^{2n}}$
3. Test Image: Gaussian HPF
$H(u, v) = 1 - e^{-D^2(u, v)/2D_0^2}$
(a minimal sketch of all three highpass transfer functions follows)
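A minimal sketch that builds the three highpass transfer functions on the same grid (an illustration; the grid size, D0, and n are assumed):

import numpy as np

def distance_grid(shape):
    # D(u, v): distance of each frequency sample from the center of the (shifted) spectrum
    M, N = shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    V, U = np.meshgrid(v, u)
    return np.sqrt(U**2 + V**2)

D = distance_grid((128, 128))
D0, n = 30.0, 2

H_ihpf = (D > D0).astype(np.float64)                            # 0 inside the cutoff, 1 outside
H_bhpf = 1.0 / (1.0 + (D0 / np.maximum(D, 1e-12)) ** (2 * n))   # guard against division by zero at the center
H_ghpf = 1.0 - np.exp(-(D ** 2) / (2.0 * D0 ** 2))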
HPF Mathematical Definitions
4. The Laplacian in the Frequency Domain