Image Enhancement
To process an image so that the result is
more suitable than the original image for a
specific application.
Spatial domain methods and frequency
domain methods.
Image Enhancement Methods
Spatial Domain Methods (Image Plane)
Techniques are based on direct manipulation of
pixels in an image
Frequency Domain Methods
Techniques are based on modifying the Fourier
transform of the image.
Combination Methods
There are some enhancement techniques based
on various combinations of methods from the
first two categories
Image Enhancement
Point Operations: i) contrast stretching, ii) noise clipping, iii) gray-level slicing, iv) bit-plane slicing, v) histogram model, vi) gray-level transformation
Spatial Operations: i) noise smoothing, ii) median filtering, iii) LPF, HPF, BPF, iv) unsharp masking
Transform Operations: i) linear filtering, ii) root filtering, iii) homomorphic filtering
Pseudocoloring: i) false coloring, ii) pseudocoloring
Spatial Domain Methods
As indicated previously, the term spatial domain refers
to the aggregate of pixels composing an image.
Spatial domain methods are procedures that operate
directly on these pixels. Spatial domain processes will
be denoted by the expression:
g(x,y) = T [f(x,y)]
where f(x,y) is the input image, g(x,y) is the processed
image, and T is an operator on f, defined over some
neighborhood of (x,y).
In addition, T can operate on a set of input images.
Image Enhancement in the Spatial Domain
The simplest form of T is when the neighborhood is of size 1X1
(that is a single pixel). In this case, g depends only on the value
of f at (x,y), and T becomes a grey-level (also called intensity
or mapping) transformation function of the form:
s = T (r)
where, for simplicity in notation, r and s are variables
denoting, respectively, the grey level of f(x,y) and g(x,y) at any
point (x,y).
Examples of Enhancement Techniques
Contrast Stretching:
If T(r) has the form shown in the figure below, the effect of
applying the transformation to every pixel of f, to generate the
corresponding pixels in g, would be to:
Produce higher contrast than the original image, by:
Darkening the levels below m in the original
image
Brightening the levels above m in the
original image
So, Contrast Stretching: is a simple image
enhancement technique that improves the contrast
in an image by ‘stretching’ the range of intensity values it contains
to span a desired range of values. Typically, it uses a linear function
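As a minimal sketch of this idea (assuming NumPy and 8-bit grayscale arrays; the function name `stretch_contrast` is our own), a full-range linear stretch maps the input's [min, max] onto [0, 255]:

```python
import numpy as np

def stretch_contrast(img, out_min=0, out_max=255):
    """Linearly map [img.min(), img.max()] onto [out_min, out_max]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.uint8)
    out = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return out.astype(np.uint8)

# A low-contrast image whose intensities occupy only [100, 150] ...
low = np.array([[100, 120], [130, 150]], dtype=np.uint8)
high = stretch_contrast(low)          # ... now spans the full [0, 255]
```
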
Examples of Enhancement Techniques
Thresholding
is a limiting case of contrast stretching; it produces a two-level
(binary) image.
Some fairly simple, yet powerful, processing approaches can be
formulated with grey-level transformations. Because
enhancement at any point in an image depends only on the gray
level at that point, techniques in this category often are referred
to as point processing.
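A sketch of thresholding as a point operation (NumPy assumed; the name `threshold` and the example threshold m=128 are illustrative):

```python
import numpy as np

def threshold(img, m):
    """Binary image: high (255) where intensity > m, low (0) elsewhere."""
    return np.where(img > m, 255, 0).astype(np.uint8)

img = np.array([[10, 200], [90, 130]], dtype=np.uint8)
binary = threshold(img, m=128)        # two-level (binary) output
```
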
Examples of Enhancement Techniques
Larger neighborhoods allow considerably more flexibility.
The general approach is to use a function of the values of f
in a predefined neighborhood of (x,y) to determine the value
of g at (x,y).
One of the principal approaches in this formulation is based
on the use of so-called masks (also referred to as filters)
So, a mask/filter: is a small (say 3X3) 2-D
array, such as the one shown in the
figure, in which the values of the mask
coefficients determine the nature of the
process, such as image sharpening.
Enhancement techniques based on this
type of approach often are referred to as
mask processing or filtering.
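The mask-processing idea above can be sketched as follows (a naive loop, assuming NumPy, zero padding at the borders, and a 3X3 averaging mask as the example; `apply_mask` is our own name, not a library call): at each pixel, the mask coefficients multiply the neighborhood values and the products are summed.

```python
import numpy as np

def apply_mask(img, mask):
    """Slide a 3x3 mask over img (zero-padded) and sum the products."""
    padded = np.pad(img.astype(np.float64), 1)   # zero padding by default
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y+3, x:x+3] * mask)
    return out

box = np.ones((3, 3)) / 9.0          # averaging (smoothing) mask
img = np.full((5, 5), 90.0)
smoothed = apply_mask(img, box)      # interior pixels stay at 90
```

Changing the nine coefficients changes the nature of the process (e.g., a center-weighted mask with negative surround sharpens instead of smooths).
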
Some Basic Intensity (Gray-level) Transformation Functions
The three basic types of functions used
frequently for image enhancement:
Linear Functions:
Negative Transformation
Identity Transformation
Logarithmic Functions:
Log Transformation
Inverse-log Transformation
Power-Law Functions:
nth power transformation
nth root transformation
Linear Functions
Identity Function
Output intensities are identical to input
intensities
This function has no effect on an image;
it is included in the graph only for
completeness
Its expression:
s=r
Linear Functions
Image Negatives (Negative Transformation)
The negative of an image with gray levels in the range [0, L-
1], where L is the number of gray levels, is obtained by using
the negative transformation’s expression:
s=L–1–r
This reverses the intensity levels of an input image, and in this
manner produces the equivalent of a photographic negative.
The negative transformation is suitable for enhancing white
or gray detail embedded in dark regions of an image,
especially when the black areas are dominant in size
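The expression s = L - 1 - r is a one-line point operation. A sketch for 8-bit images (NumPy assumed; L = 256 gray levels):

```python
import numpy as np

L = 256                               # number of gray levels for 8-bit images

def negative(img):
    """s = L - 1 - r : reverses the intensity levels."""
    return (L - 1 - img.astype(np.int32)).astype(np.uint8)

img = np.array([[0, 64], [191, 255]], dtype=np.uint8)
neg = negative(img)                   # black becomes white and vice versa
```
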
Example of image negative
Logarithmic Transformations
Log Transformation
The general form of the log transformation:
s = c log (1+r)
Where c is a constant, and r ≥ 0
Log curve maps a narrow range of low gray-level
values in the input image into a wider range of the
output levels.
Used to expand the values of dark pixels in an image
while compressing the higher-level values.
It compresses the dynamic range of images with large
variations in pixel values.
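A sketch of s = c log(1+r) for 8-bit images (NumPy assumed; the particular scaling c = (L-1)/log(L), which maps the maximum input to L-1, is our own choice, since the slide leaves c as a free constant):

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r), with c chosen so the maximum input maps to L-1."""
    c = (L - 1) / np.log(L)           # assumed scaling: 255/log(256) for 8-bit
    return np.round(c * np.log1p(img.astype(np.float64))).astype(np.uint8)

img = np.array([[0, 10], [100, 255]], dtype=np.uint8)
out = log_transform(img)              # dark values are expanded
```

Note how the dark pixel r = 10 is pushed far up the output range while the bright end is compressed.
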
Piecewise-Linear Transformation Functions
Principal advantage: Some important
transformations can be formulated only as a
piecewise function.
Principal disadvantage: Their
specification requires more user input than
previous transformations
Types of Piecewise transformations are:
Contrast Stretching
Gray-level Slicing
Bit-plane slicing
Contrast Stretching
One of the simplest piecewise linear
functions is a contrast-stretching
transformation, which is used to enhance
low-contrast images.
Idea behind the stretching is to increase the
dynamic range of the gray levels in the image
being processed.
Low contrast images may result from:
Poor illumination
Wrong setting of lens aperture during image
acquisition.
Contrast Stretching
Figure shows a typical transformation
used for contrast stretching.
The locations of points (r1, s1) and (r2, s2)
control the shape of the transformation function.
If r1 = s1 and r2 = s2, the transformation is a linear
function that produces no changes in gray levels.
If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes
a thresholding function that creates a binary image. As
shown previously in slide 7.
Intermediate values of (r1, s1) and (r2, s2) produce various
degrees of spread in the gray levels of the output image,
thus affecting its contrast.
In general, r1 ≤ r2 and s1 ≤ s2 is assumed, so the function
is always increasing.
(a) An 8-bit low-contrast image; (b) result of contrast stretching; (c) result of thresholding
Fig. (b) shows the result of contrast stretching, obtained by
setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax,L-1) where rmin and
rmax denote the minimum and maximum gray levels in the image,
respectively. Thus, the transformation function stretched the
levels linearly from their original range to the full range [0, L-1].
Finally, Fig. (c) shows the result of using the thresholding
function defined previously, with r1=r2=m, the mean gray level in
the image.
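The three-segment transformation controlled by (r1, s1) and (r2, s2) can be sketched as below (NumPy assumed; `piecewise_stretch` is an illustrative name). The example reproduces the Fig. (b) setting, (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1):

```python
import numpy as np

def piecewise_stretch(img, r1, s1, r2, s2, L=256):
    """Three-segment linear map controlled by (r1, s1) and (r2, s2)."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    lo = img <= r1
    mid = (img > r1) & (img <= r2)
    hi = img > r2
    out[lo] = img[lo] * s1 / max(r1, 1)                    # segment [0, r1]
    out[mid] = (img[mid] - r1) * (s2 - s1) / (r2 - r1) + s1  # segment (r1, r2]
    out[hi] = (img[hi] - r2) * (L - 1 - s2) / max(L - 1 - r2, 1) + s2
    return out.astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
# (r1, s1) = (img.min(), 0) and (r2, s2) = (img.max(), 255): full-range stretch
out = piecewise_stretch(img, 50, 0, 200, 255)
```

With r1 = r2 = m, s1 = 0, s2 = 255, the same function degenerates into the thresholding case described earlier.
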
Gray-level Slicing
This technique is used to highlight a specific range of gray
levels in a given image. It can be implemented in several
ways, but the two basic themes are:
One approach is to display a high value for all gray
levels in the range of interest and a low value for all
other gray levels. This transformation, shown in Fig 3.11
(a), produces a binary image.
The second approach, based on the transformation
shown in Fig 3.11 (b), brightens the desired range of
gray levels but leaves all other gray levels unchanged.
Fig 3.11 (c) shows a gray scale image, and fig 3.11 (d)
shows the result of using the transformation in Fig 3.11
(a).
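Both themes are short point operations. A sketch (NumPy assumed; the function names, the range [100, 150], and the high value 255 are our own illustrative choices):

```python
import numpy as np

def slice_binary(img, a, b):
    """High value inside [a, b], low value elsewhere (Fig 3.11(a) style)."""
    return np.where((img >= a) & (img <= b), 255, 0).astype(np.uint8)

def slice_preserve(img, a, b):
    """Brighten [a, b] but leave all other gray levels unchanged (Fig 3.11(b) style)."""
    out = img.copy()
    out[(img >= a) & (img <= b)] = 255
    return out

img = np.array([[10, 120], [140, 250]], dtype=np.uint8)
bin_out = slice_binary(img, 100, 150)    # binary output
prs_out = slice_preserve(img, 100, 150)  # background preserved
```
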
Bit-plane Slicing
Pixels are digital numbers, each one composed of
bits. Instead of highlighting a gray-level range, we
could highlight the contribution made by each bit.
This method is useful and used in image
compression.
Most significant bits contain the majority of visually
significant data.
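Extracting a bit plane is a shift-and-mask operation. A sketch for 8-bit pixels (NumPy assumed; `bit_plane` is our own name):

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k (0 = least significant) as a 0/1 image."""
    return (img >> k) & 1

img = np.array([[0b10110100, 0b01001011]], dtype=np.uint8)
plane7 = bit_plane(img, 7)            # most significant bit plane
plane0 = bit_plane(img, 0)            # least significant bit plane
```

The high-order planes (7, 6, ...) carry most of the visually significant structure, which is why discarding low-order planes is useful for compression.
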
Histogram Processing
Histogram Equalization
Histogram Specification
Local Processing
Global Processing
Grayscale Histogram
• Count intensities
• Normalize
• What determines the histogram?
– Contrast, aperture, lighting levels, scene
Histogram Processing
• The histogram of a digital image with gray levels
in the range [0, L-1] is a discrete function

p(r_k) = n_k / n

where r_k is the kth gray level,
n_k is the number of pixels in the image with gray level r_k,
n is the total number of pixels in the image,
k = 0, 1, 2, ..., L-1
• Simply, p(r_k) gives an estimate of the probability of
occurrence of gray level r_k.
• A plot of this function for all values of k provides a
global description of the appearance of an image.
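Computing a normalized histogram is a direct translation of the definition p(r_k) = n_k / n. A sketch (NumPy assumed; `normalized_histogram` is an illustrative name):

```python
import numpy as np

def normalized_histogram(img, L=256):
    """p(r_k) = n_k / n for k = 0 .. L-1."""
    counts = np.bincount(img.ravel(), minlength=L)   # n_k for each level
    return counts / img.size                         # divide by n

img = np.array([[0, 0], [1, 255]], dtype=np.uint8)
p = normalized_histogram(img)         # probabilities, summing to 1
```
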
Histogram Processing
Fig (a) shows gray levels concentrated toward the dark end of the gray
scale, corresponding to a dark image.
Fig (b) shows levels concentrated toward the bright end, the opposite of fig (a).
Fig (c) shows a narrow histogram with little dynamic range, corresponding to a murky gray image.
Fig (d) shows a histogram with significant spread, corresponding to an image with
high contrast.
Histogram Equalization
The idea is to spread out the histogram so that it
makes full use of the dynamic range of the image.
For example, if an image is very dark, most of the
intensities might lie in the range 0-50. By choosing
f to spread out the intensity values, we can make
fuller use of the available intensities, and make
darker parts of an image easier to understand.
If we choose f to make the histogram of the new
image, J, as uniform as possible, we call this
histogram equalization.
· Therefore, the output histogram is given by

p_out(s) = [ p_in(r) · dr/ds ]_{r = T^{-1}(s)} = [ p_in(r) · 1/p_in(r) ]_{r = T^{-1}(s)} = 1,   0 ≤ s ≤ 1
· The output probability density function is uniform,
regardless of the input.
· Thus, using a transformation function equal to the CDF of
input gray values r, we can obtain an image with uniform gray
values.
· This usually results in an enhanced image, with an increase
in the dynamic range of pixel values.
How to implement histogram equalization?
Step 1: For images with discrete gray values, compute:

p_in(r_k) = n_k / n,   0 ≤ r_k ≤ 1,   0 ≤ k ≤ L-1

L: total number of gray levels
n_k: number of pixels with gray value r_k
n: total number of pixels in the image

Step 2: Based on the CDF, compute the discrete version of the
previous transformation:

s_k = T(r_k) = Σ_{j=0}^{k} p_in(r_j),   0 ≤ k ≤ L-1
Example:
· Consider an 8-level 64 x 64 image with gray values (0, 1, …,
7). The normalized gray values are (0, 1/7, 2/7, …, 1). The
normalized histogram is given below:
NB: The gray values in output are also (0, 1/7, 2/7, …, 1).
[Table: gray value, normalized gray value, number of pixels, and fraction of pixels for each level]
· Applying the transformation s_k = T(r_k) = Σ_{j=0}^{k} p_in(r_j), we have:
· Notice that there are only five distinct gray levels --- (1/7, 3/7,
5/7, 6/7, 1) in the output image. We will relabel them as (s0,
s1, …, s4 ).
· With this transformation, the output image will have the
histogram shown below.
[Figure: histogram of the output image (# pixels vs. gray values)]
· Note that the histogram of output image is only approximately, and
not exactly, uniform. This should not be surprising, since there is no
result that claims uniformity in the discrete case.
Example Original image and its histogram
Histogram equalized image and its
histogram
· Comments:
Histogram equalization may not always produce desirable
results, particularly if the given histogram is very narrow. It
can produce false edges and regions. It can also increase
image “graininess” and “patchiness.”
Histogram Specification (Histogram Matching)
· Histogram equalization yields an image whose pixels are (in
theory) uniformly distributed among all gray levels.
· Sometimes, this may not be desirable. Instead, we may want a
transformation that yields an output image with a pre-specified
histogram. This technique is called histogram specification.
· Given Information
(1) Input image, from which we can compute its histogram.
(2) Desired histogram.
· Goal
Derive a point operation, H(r), that maps the input image into
an output image that has the user-specified histogram.
· Again, we will assume, for the moment, continuous-gray values.
Approach of derivation
z = H(r) = G^{-1}(v = s = T(r))

[Diagram: input image → s = T(r) → uniform image → z = G^{-1}(v) → output image, where v = G(z)]
· Suppose the input image has probability density p_in(r). We
want to find a transformation z = H(r), such that the probability
density of the new image obtained by this transformation is p_out(z),
which is not necessarily uniform.
· First apply the transformation

s = T(r) = ∫_0^r p_in(w) dw,   0 ≤ r ≤ 1   (*)

This gives an image with a uniform probability density.
· If the desired output image were available, then the following
transformation would generate an image with uniform density:

v = G(z) = ∫_0^z p_out(w) dw,   0 ≤ z ≤ 1   (**)
· From the gray values v we can obtain the gray values z by
using the inverse transformation, z = G^{-1}(v).
· If instead of using the gray values v obtained from (**), we
use the gray values s obtained from (*) above (both are
uniformly distributed!), then the point transformation

z = H(r) = G^{-1}[v = s = T(r)]

will generate an image with the specified density p_out(z),
from an input image with density p_in(r)!
· For discrete gray levels, we have

s_k = T(r_k) = Σ_{j=0}^{k} p_in(r_j),   0 ≤ k ≤ L-1

v_k = G(z_k) = Σ_{j=0}^{k} p_out(z_j) = s_k,   0 ≤ k ≤ L-1
· If the transformation z_k → G(z_k) is one-to-one, the inverse
transformation s_k → G^{-1}(s_k) can be easily determined, since
we are dealing with a small set of discrete gray values.
· In practice, this is not usually the case (i.e., z_k → G(z_k) is not
one-to-one), and we assign gray values to match the given
histogram as closely as possible.
Algorithm for histogram specification:
(1) Equalize the input image to get an image with uniform gray
values, using the discrete equation:

s_k = T(r_k) = Σ_{j=0}^{k} p_in(r_j),   0 ≤ k ≤ L-1

(2) Based on the desired histogram, compute the transformation G
in the same way, using the discrete equation:

v_k = G(z_k) = Σ_{j=0}^{k} p_out(z_j) = s_k,   0 ≤ k ≤ L-1

(3) z = G^{-1}(v = s)   →   z = G^{-1}[T(r)]
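The three steps above can be sketched as follows (NumPy assumed; `specify_histogram` is our own name). Since G is generally not one-to-one, this sketch resolves step (3) by picking, for each s_k, the z whose G(z) is closest, which matches the "as closely as possible" rule stated above; the two-spike target histogram in the example is purely illustrative.

```python
import numpy as np

def specify_histogram(img, p_out, L=256):
    """Match img's histogram to the desired p_out via z = G^{-1}(T(r))."""
    counts = np.bincount(img.ravel(), minlength=L)
    s = np.cumsum(counts) / img.size      # (1) T(r_k): CDF of the input
    v = np.cumsum(p_out)                  # (2) G(z_k): CDF of the target
    # (3) G is not invertible in general: pick the z with G(z) closest to s_k
    lut = np.array([np.argmin(np.abs(v - sk)) for sk in s], dtype=np.uint8)
    return lut[img]

img = np.array([[0, 1], [2, 3]], dtype=np.uint8)
p_out = np.zeros(256)
p_out[[100, 200]] = 0.5                   # target: half at 100, half at 200
out = specify_histogram(img, p_out)       # output levels drawn from {0, 100, 200}
```
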
Example:
· Consider the previous 8-level 64 x 64 image.
[Figure: histogram of the input image (# pixels vs. gray value)]
· It is desired to transform this image into a new image, using a
transformation Z=H(r)= G-1[T(r)], with histogram as specified
below:
[Figure: desired histogram (# pixels vs. gray values)]
· The transformation T(r) was obtained earlier (reproduced
below):
· Now we compute the transformation G as before.
· Compute z = G^{-1}(s). Notice that G is not invertible.
G-1(0) = ?
G-1(1/7) = 3/7
G-1(2/7) = 4/7
G-1(3/7) = ?
G-1(4/7) = ?
G-1(5/7) = 5/7
G-1(6/7) = 6/7
G-1(1) = 1
· Combining the two transformations T and G^{-1}, compute
z = H(r) = G^{-1}[v = s = T(r)]
· Applying the transformation H to the original image yields an
image with histogram as below:
· Again, the actual histogram of the output image only approximately,
and not exactly, matches the specified histogram. This is
because we are dealing with discrete histograms.
Original image and its histogram
Histogram specified image and its histogram
Desired histogram
Local Enhancement
Using a neighborhood around each pixel makes it
possible to map the gray levels locally.
Applying the method over nonoverlapping regions
reduces computation, but this usually produces an
undesirable checkerboard effect.
Local enhancement can also be based on pixel intensity
statistics: the intensity mean and variance (or standard
deviation).
The mean is a measure of average brightness.
The variance is a measure of contrast.
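Computing those local statistics can be sketched as below (NumPy assumed; `local_stats` is our own name, and replicating edge pixels at the border is our own choice). The per-pixel mean and standard deviation are the quantities a local enhancement rule would then compare against their global counterparts.

```python
import numpy as np

def local_stats(img, half=1):
    """Per-pixel neighborhood mean and std over a (2*half+1)^2 window."""
    padded = np.pad(img.astype(np.float64), half, mode='edge')
    h, w = img.shape
    mean = np.zeros((h, w))
    std = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * half + 1, x:x + 2 * half + 1]
            mean[y, x] = win.mean()       # local average brightness
            std[y, x] = win.std()         # local contrast
    return mean, std

img = np.array([[10, 10, 10], [10, 100, 10], [10, 10, 10]], dtype=np.uint8)
mean, std = local_stats(img)
```
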