SEVENTH EDITION
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Introduction . . . . . . . . . . . . . . . . . . . . . . xiii
About this text.................................................. xiii
A word of caution............................................. xv
A personal note............................................... xvi
1 Acquiring Images . . . . . . . . . . . . . . . . 1
Human reliance on images.................................. 1
Extracting information......................................... 4
Video cameras................................................... 6
CCD cameras.................................................... 8
CMOS detectors...............................................12
Camera artifacts and limitations..........................13
Color cameras..................................................15
Camera resolution.............................................18
Electronics and bandwidth limitations.................. 20
Handling color data..........................................21
Color encoding................................................ 22
Other image sources..........................................24
Pixels.............................................................. 25
Tonal resolution................................................ 29
The image contents........................................... 30
Camera limitations.............................................31
Noise............................................................. 33
High-depth images............................................34
Focusing.......................................................... 35
Color displays.................................................. 36
Image types......................................................37
Multiple images............................................... 40
Imaging requirements........................................ 44
4 Correcting Imaging Defects . . . . . . . 163
Color adjustments............................................164
Hue, saturation, intensity..................................166
Other spaces..................................................168
Color correction.............................................. 171
Noisy images................................................. 174
Neighborhood averaging................................. 178
Gaussian smoothing........................................ 181
Neighborhood ranking....................................185
The color median............................................190
More median filters..........................................193
Weighted, conditional, and adaptive neighborhoods..............................196
Other neighborhood noise reduction methods.... 204
Defect removal, maximum entropy, and
maximum likelihood...................................... 208
Nonuniform illumination................................... 214
Fitting a background function............................ 217
Rank leveling..................................................222
Color images..................................................225
Nonplanar views.............................................227
Computer graphics..........................................228
Geometric distortion........................................230
Alignment.......................................................234
Interpolation...................................................236
Morphing.......................................................241
Principal component analysis........................... 305
Principal component analysis for contrast enhancement............................... 310
Other image combinations............................... 313
Cross-correlation............................................. 317
Masks............................................................447
From pixels to features.................................... 449
Filling holes................................................... 450
Measurement grids..........................................452
Boolean logic with features...............................454
Selecting features by location...........................458
Double thresholding.........................................462
Erosion and dilation.........................................463
Opening and closing.......................................465
Isotropy..........................................................469
Measurements using erosion and dilation...........470
Extension to grayscale images...........................473
Neighborhood parameters............................... 474
Examples of use..............................................476
Euclidean distance map...................................479
Watershed segmentation................................. 482
Ultimate eroded points.................................... 488
Skeletons....................................................... 490
Topology........................................................492
Boundary lines................................................497
Combining skeleton and Euclidean distance map............................................ 500
Separation distance.........................................579
Alignment.......................................................582
The linear Hough transform...............................586
The circular Hough transform............................588
Counting........................................................592
Special counting procedures.............................597
Feature size................................................... 602
Circles and ellipses..........................................607
Caliper dimensions......................................... 609
Perimeter........................................................ 612
12 Correlation, Classification, Identification, and Matching . . . . . . 683
A variety of purposes...................................... 683
Matching.......................................................685
Cross-correlation.............................................689
Curvature scale space......................................692
Classification..................................................696
Distributions and decision points........................697
Linear discriminant analysis (LDA) and principal component analysis (PCA)............................. 700
Class definition...............................................702
Unsupervised learning......................................709
Are groups different?....................................... 712
Neural nets..................................................... 716
k-Nearest neighbors.........................................720
Parametric description......................................722
Bayesian statistics............................................724
A comparison.................................................726
Harmonic analysis and invariant moments..........729
Species examples............................................732
Correlation.....................................................736
Landmark data................................................743
13 3D Imaging. . . . . . . . . . . . . . . . . . . 749
More than two dimensions................................749
Volume imaging versus sections.........................750
Serial sections.................................................754
Removing layers..............................................756
Reconstruction.................................................760
Confocal microscopy.......................................762
Stereo viewing................................................764
Tomography................................................... 767
Tomographic reconstruction..............................770
Reconstruction artifacts.....................................774
Algebraic reconstruction...................................776
Maximum entropy...........................................780
Imaging geometries......................................... 781
Other signals..................................................783
Beam hardening and other issues......................786
3D tomography...............................................790
Dual energy methods.......................................795
Microtomography............................................799
3D reconstruction and visualization................... 803
Slices and surfaces......................................... 806
Marching cubes.............................................. 810
Volumetric displays.......................................... 813
Ray tracing..................................................... 815
Spherical harmonics, wavelets, and fractal dimension.....................................881
Other applications and future possibilities.......... 888
References . . . . . . . . . . . . . . . . . . . . . . 957
Index . . . . . . . . . . . . . . . . . . . . . . . . . 1013
Introduction
About this text

Image processing is used for two somewhat different purposes:

1. improving the visual appearance of images for a human observer, including their printing and transmission, and
2. preparing images for the measurement and analysis of the features and structures that they reveal.
The techniques appropriate for each of these tasks are not always the same, but there
is considerable overlap, and this book explains, illustrates, and compares methods used
for both purposes. To get the best possible results, it is important to know about the
intended uses of the processed images. For visual enhancement, this means having some
familiarity with the human visual process and an appreciation of what cues the viewer
responds to or normally overlooks in images. The chapter on human vision addresses
those issues. It also is useful to know about printing and storage methods, since many
images are processed in the context of reproduction, storage, or transmission.
This handbook presents and illustrates an extensive collection of image processing tools
to help the user and prospective user of computer-based systems understand the methods provided in various software packages and determine those steps that may be best
suited for particular applications. Comparisons are presented for different algorithms
that may be employed for similar purposes, using a selection of representative pictures
from various microscopy techniques, as well as macroscopic, forensic, remote sensing,
and astronomical images. Throughout the text, a conscious effort has been made to
include examples of image processing and analysis from a wide variety of disciplines,
and at all scales, from the nano- to astro-, including real-world macroscopic images.
It is very important to emphasize that the scale of an image matters very little to the
techniques used to process or analyze it. Microscopes that have a resolution of nanometers and telescopes that produce images covering light years produce images that
require many of the same algorithms. People trying to use image processing and analysis
methods for their particular area of interest should understand that the same basic tools
are useful at other scales and in other disciplines, and that a solution to their problems
may already exist just down the street. It may also help to recall that image processing,
like food processing or word processing, does not reduce the amount of data present
but simply rearranges it. Some arrangements may be more appealing to the senses, and
some may make the information more accessible, but these two goals might not call for
identical methods.
The measurement of images is often a principal method for acquiring scientific and
forensic data, and generally requires that objects or structure be well defined, either
by edges or by unique brightness, color, texture, or some combination of these factors.
The types of measurements that can be performed on entire scenes or on individual
features are important in determining the appropriate processing steps. Several chapters
deal with measurement in detail. Measurements of size, position, and brightness are
subjects that humans generally understand, although human vision is not quantitative
and is easily fooled. Shape is a more subtle concept, dealt with in a separate chapter.
Measurement data may be used for classification or recognition of objects. There are several different strategies that can be applied, and examples are shown. The topics covered
are generally presented in the same order in which the methods would be applied in a
typical workflow.
For many years, in teaching this material to students, I have described achieving mastery of these techniques as being much like becoming a skilled journeyman carpenter. The number of distinct tools (saws, planes, drills, etc.) is relatively small, and although there are some variations (slotted or Phillips-head screwdrivers, or saw blades with fine or coarse teeth, for example), knowing how to use each type of tool is closely linked to understanding and visualizing what it can do. With a set of these tools, the skilled craftsman can produce a house, a boat, or a piece of furniture. So it is with image processing tools, which are conveniently grouped into only a few classes, such as histogram modification, neighborhood operations, Fourier-space processing, and so on, that can be used to accomplish a broad range of purposes. Visiting your local hardware store and purchasing the appropriate woodworking tools does not provide the skills to use them. Understanding their use requires practice, which develops the ability to visualize beforehand what each will do. The same is true of the tools for image processing.
The emphasis throughout this seventh edition continues to be explaining and illustrating
methods so that they can be clearly understood, rather than providing dense mathematics and derivations. There are excellent texts on Fourier transforms, image compression,
mathematical morphology, stereology, and so on that provide all of the equations and
rigor that may be desired; many of them, as well as original publications, are referenced
here. But the thrust of this book remains teaching by example. Few people learn the
principles of image processing from equations. Just as we use images to communicate
ideas and to “do science,” so most of us rely on images to learn about things, including
imaging itself. The hope is that by seeing and comparing what various operations do to
representative images, you will discover how and why to use them. Then, if you should
need to look up the mathematical foundations or the computer code, they will be easier
to understand.
This edition includes a greater range of “high end” or computationally intensive algorithms than previous versions. With the continuing increase in the power and speed of
desktop and laptop computers, more of these methods are now practical for most users,
and consequently more of the available software programs tend to include them. The
algorithms themselves are not necessarily new, but they have become more accessible.
However, the simpler tools that have been available to most users for decades are still
viable, and sometimes give equal or even superior results.
A word of caution
A very real concern for everyone involved in imaging, particularly in scientific and forensic fields, is the question of what constitutes proper and appropriate processing, and
what constitutes unethical or even fraudulent manipulation. The short answer is that
anything that alters an image so as to create a false impression on the part of the viewer
is wrong. The problem with that answer is that it does not take into account the fact that
different viewers will see different things in the image anyway, and that what constitutes
a false impression for one person may not for another. The first rule is always to store a
permanent copy of the original image along with relevant data on its acquisition. The
second rule is to carefully document whatever steps are taken to process the image and
generally to report those steps when the processed image is published.
The word “photoshopping” has become an everyday expression, with generally negative connotations. Most scientific publications and the editors who review submitted papers have become very aware of the ease with which images can be processed or created, and the dangers of inadequate documentation. For example, see M. Rossner and K. M. Yamada's “What's in a Picture?” (J. Cell Biology 166:11–15, 2004) for the Journal of Cell Biology's policy on image ethics and examples of improper manipulation. For
forensic purposes, there is an additional responsibility to fully record the entire step-
by-step procedures that are used and to make sure that those methods are acceptable
in court according to the U.S. Supreme Court’s Daubert ruling (Daubert v. Merrell Dow
Pharmaceuticals, 92–102, 509 U.S. 579, 1993). This generally means that not only are
the methods widely accepted by professionals, but also that they have been rigorously
tested and have known performance outcomes. In a forensic setting, there will often be
a need to explain a procedure to a nontechnical jury. This frequently requires showing
that the details obtained from the image are present in the original but become visually
more evident and measurable with the processing.
Some procedures, such as rearranging features or combining them within a single image,
or differently adjusting the contrast of several images to make them appear more alike,
are potentially misleading and usually wrong. Some, such as using copy-and-paste to insert something into an image or selectively erasing portions of an image, are out-and-out fraudulent. Even selective cropping of an image (or choosing which field of view to record in the first place) can create a false impression. A general guideline to be considered is that it is never acceptable to add anything to an image, but it may be acceptable to suppress or remove some information if it makes the remaining details more accessible, either visually for presentation and communication, or to facilitate measurement. Of course, the steps used must be documented and reported, and it is better to use an algorithmic procedure than to manually tweak settings until the results “look good.”
Any of the procedures shown here may be appropriate in a particular instance, but they
can also be misused and should in any case never be used without understanding and
careful documentation. The heart of the scientific method is replicability. If adequate
information is provided on the processing steps applied and the original image data are
preserved, then the validity of the results can be independently verified.
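As an illustration of what such a replicable, algorithmic record can look like, the short Python sketch below applies a fixed sequence of operations and writes every step and parameter to a small log file. This is only a hedged sketch, not a prescription: it assumes the scikit-image package is available, and the file names, parameter values, and log format are hypothetical choices.

import json
from skimage import io, filters, exposure, img_as_ubyte

steps = []  # a human-readable record of every operation and its parameters

image = io.imread("original_image.tif", as_gray=True)  # hypothetical file name
steps.append({"step": "load", "file": "original_image.tif"})

smoothed = filters.gaussian(image, sigma=1.0)  # documented, repeatable smoothing
steps.append({"step": "gaussian_smoothing", "sigma": 1.0})

stretched = exposure.rescale_intensity(smoothed, in_range=(0.05, 0.95), out_range=(0.0, 1.0))
steps.append({"step": "contrast_stretch", "in_range": [0.05, 0.95]})

io.imsave("processed_image.tif", img_as_ubyte(stretched))  # the processed result
with open("processing_log.json", "w") as log:
    json.dump(steps, log, indent=2)  # the step-by-step record published with the image

Because the original file is preserved and every operation and setting is recorded, anyone can rerun the identical sequence and verify the result.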
The scientist who does not understand his or her instruments and what they do risks
making serious errors. But after acquiring an image, some are content to use software,
possibly downloaded free from the web, to apply algorithms they do not understand and
have not tested. The results can be misleading or worse. An important but often overlooked concern is avoiding the use of programs that alter the image without the user being aware of it. For example, placing an image into a slide presentation, web page, or word processing document may alter colors, discard pixels, and introduce unwanted compression. Saving an image with a lossy compression method such as JPEG will discard potentially important information that cannot be recovered and is strongly discouraged.
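To make the cost of lossy storage concrete, here is a small Python sketch, assuming the Pillow and NumPy packages; the file names and quality setting are hypothetical. It saves the same image losslessly (PNG) and lossily (JPEG), reloads both, and counts the pixel values that no longer match the original.

import numpy as np
from PIL import Image

original = Image.open("specimen.tif").convert("L")  # hypothetical source image, forced to 8-bit grayscale
pixels = np.asarray(original)

original.save("specimen.png")              # PNG is lossless
original.save("specimen.jpg", quality=90)  # JPEG is lossy even at a high quality setting

png_back = np.asarray(Image.open("specimen.png"))
jpg_back = np.asarray(Image.open("specimen.jpg"))

print("pixels altered by PNG: ", int(np.count_nonzero(png_back != pixels)))  # expected to be 0
print("pixels altered by JPEG:", int(np.count_nonzero(jpg_back != pixels)))  # typically a large fraction
print("largest JPEG error:    ", int(np.abs(jpg_back.astype(int) - pixels.astype(int)).max()))

The altered values cannot be recovered from the JPEG file, which is why the original, losslessly stored image should always be retained.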
This seventh edition brings many more references, plus new examples and images
throughout. The characterization of shape and the statistical analysis of data are covered
in greater detail, and examples of forensic applications of imaging have been expanded.
A major expansion is in the area of 3D imaging. The availability of instruments such as
synchrotron and microfocus X-ray tomography, and the ongoing development of computer software that can present visualizations as well as make measurements with 3D
voxel arrays offer exciting possibilities for many research disciplines as well as routine
industrial and medical applications, and I try to present a current overview of these new
capabilities and their uses.
A great many scientists, worldwide and representing a broad range of disciplines, are
represented by example images from their work, and many have provided assistance,
raw data, references, and other information. They are acknowledged throughout, and
their help is gratefully appreciated. Many thanks to you all. Particular thanks are due
to Dr. Brian Metscher, at the University of Vienna, who performed microtomography on
specimens; to Dr. Loes Brabant, at Inside Matters, who provided data and the Octopus
software (based on the Morpho+ programs developed at the University of Gent) for processing and measurement of 3D data sets; to Dr. Andreas Wiegmann, at Math2Market GmbH, who provided valuable references, examples, and reconstructions; to Patrick Barthelemy, at FEI Visualization Science, who loaned their Amira 3D visualization software; and to Dr. Paul Shearing, at the Department of Chemical Engineering, University
College, London, who provided access to data from ongoing research projects. Access to
advanced 2D algorithms was provided by Andy Thé at The MathWorks. Christian Russ
at Ocean Systems provided valuable input on video technology and image compression.
A personal note
A brief personal and historical comment seems appropriate for this seventh edition.
Nearly 60 years ago I undertook my first serious foray into image processing—and it
was analog, not digital. As a student project, I tried to design an optical solution to a
problem that seemed to be beyond the capability of then-available (at least to me) computer solutions. The problem was to obtain dimensionally corrected images of the land
surface using aerial photography for surveying (with satellite imagery unimaginably far
in the future, the intended vehicle was a small plane flying at 12,000 feet, which tied in
nicely with my interest in flying). My solution was a continuously moving strip recorder
using photographic film and a wide angle slit lens system that “morphed” the image
projected onto the film one line at a time so that the geometric distortion was canceled. It worked on paper. In practice, the physical optics were impractical to fabricate
and specific to a fixed elevation and speed, and many difficulties—such as maintaining
the stability and knowing the exact position of the airplane—would have frustrated its
actual use. The project succeeded in achieving its immediate purpose of fulfilling the
course requirements.
Since then I have been continuously involved in many ways with imaging throughout
my career, both in industry and academia. I've used, and in some cases helped design, light and electron microscopes, surface metrology instruments, and X-ray, gamma ray, and neutron tomography; have mounted digital cameras onto telescopes as a hobby; and
have programmed computers (ranging from a Cray to the early Apple IIs) to implement
most of the known, and some novel, processing and measurement algorithms. My name
is on several patents, and I’ve been involved in forensic cases and court trials in the
United States, Canada, and England. Most satisfying to me personally is that I’ve taught
image processing and measurement to several thousands of students, in settings ranging from formal semester-long university courses to 3- or 4-day intensive workshops
under the auspices of professional societies for their members and corporations for their
own employees.
As I approach the start of my ninth decade, and marvel at the progress in the capabilities to extract useful information from images, I am especially pleased to work with the
next generation of researchers who will carry on this work. I have previously published
a book (Introduction to Image Processing and Analysis) and several papers with my
son, Christian Russ, who is involved in software developments for imaging. My present
coauthor, Brent Neal, was once a student who worked with me at North Carolina State
University and is now an accomplished researcher and the leader of the core instrumentation facility at Milliken Research, where his expertise in polymer characterization and
measurement science uses a broad spectrum of 2D and 3D imaging technologies. I’ve
worked with Brent on a previous book (Measuring Shape), as well as several technical
publications, and a set of materials science teaching aids (“Visualizations in Materials
Science,” sponsored by the National Science Foundation), and have enormous respect
for his knowledge, intelligence, wide-ranging curiosity, and sense of humor. Putting
this volume together with him has been fun, and it is a much stronger text because of
his involvement.
The encouragement, support, and patience of our “girls”—Helen, Sarah, Meg, and Claire—
have also been a vital and most appreciated assistance for this effort.
1 Acquiring Images
Figure 1.1 The pinwheel galaxy (M101) imaged in visible (green, Hubble image), infrared (red, Spitzer
image), and X-ray (blue, Chandra image) wavelengths. (Image courtesy of NASA and ESA.)
Figure 1.2 Combining visible light and radio astronomy produces images such as this view of HH 46/47
(generated with data from the Atacama Large Millimeter Array). These are often displayed with false
colors to emphasize subtle variations in signal strength or, as in this example, Doppler shift. The red
and orange colors identify the jet moving away from us, and the blue and magenta show the jet moving toward us. (Courtesy of European Space Agency.)
data as images (Figure 1.4). The data so collected may represent the surface elevation and
topography, but other signals, such as surface compliance or drag force on the probe, may
also be used. Acoustic waves at low frequency produce sonar images, while at gigahertz frequencies the acoustic microscope produces images with resolution similar to that of the light
microscope, but with image contrast that is produced by local variations in the attenuation
and refraction of sound waves rather than light. Figure 1.5 shows an acoustic microscope
image of a subsurface defect, and Figure 1.6 shows a sonogram of a baby in the womb.
Figure 1.5 Microscope image of voids in solder bond beneath a GaAs die: (a) die surface (visible light);
(b) acoustic image showing strong signal reflections (white areas) from the surface of the voids.
(Courtesy of J. E. Semmens, Sonoscan Inc., Elk Grove Village, Illinois).