Plant Image Analysis: Fundamentals and Applications, edited by S. Dutta Gupta and Yasuomi Ibaraki

The document is a comprehensive overview of the book 'Plant Image Analysis Fundamentals and Applications', edited by S. Dutta Gupta and Yasuomi Ibaraki, which explores various imaging techniques used in plant science. It discusses the importance of image analysis for assessing plant growth, nutrient status, and photosynthetic efficiency, and includes contributions from international experts in the field. The book serves as a valuable resource for students and professionals involved in plant biology and agricultural technology.

Plant Image
Analysis
Fundamentals and Applications

Edited by
S. Dutta Gupta and Yasuomi Ibaraki
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2015 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20140717

International Standard Book Number-13: 978-1-4665-8302-3 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at


http://www.crcpress.com
Contents
Preface................................................................................................................ vii
Contributors........................................................................................................ ix

Chapter 1 An introduction to images and image analysis..................... 1
Michael P. Pound and Andrew P. French

Chapter 2 Image analysis for plants: Basic procedures and techniques........... 25
Yasuomi Ibaraki and S. Dutta Gupta

Chapter 3 Applications of RGB color imaging in plants...................... 41
S. Dutta Gupta, Yasuomi Ibaraki, and P. Trivedi

Chapter 4 RGB imaging for the determination of the nitrogen content in plants........... 63
Gloria Flor Mata-Donjuan, Adán Mercado-Luna, and Enrique Rico-García

Chapter 5 Sterile dynamic measurement of the in vitro nitrogen use efficiency of plantlets........... 77
Yanyou Wu and Kaiyan Zhang

Chapter 6 Noninvasive measurement of in vitro growth of plantlets by image analysis........... 115
Yanyou Wu and Kaiyan Zhang

Chapter 7 Digital imaging of seed germination.................................. 147
Didier Demilly, Sylvie Ducournau, Marie-Hélène Wagner, and Carolyne Dürr

Chapter 8 Thermal imaging for evaluation of seedling growth........... 165
Étienne Belin, David Rousseau, Landry Benoit, Didier Demilly,
Sylvie Ducournau, François Chapeau-Blondeau, and Carolyne Dürr

Chapter 9 Anatomofunctional bimodality imaging for plant phenotyping: An insight through depth imaging coupled to thermal imaging........... 179
Yann Chéné, Étienne Belin, François Chapeau-Blondeau,
Valérie Caffier, Tristan Boureau, and David Rousseau

Chapter 10 Chlorophyll fluorescence imaging for plant health monitoring........... 207
Kotaro Takayama

Chapter 11 PRI imaging and image-based estimation of light intensity distribution on plant canopy surfaces........... 229
Yasuomi Ibaraki and S. Dutta Gupta

Chapter 12 ROS and NOS imaging using microscopical techniques........... 245
Nieves Fernandez-Garcia and Enrique Olmos

Chapter 13 Fluorescent ROS probes in imaging leaves........................ 265
Éva Hideg and Ferhan Ayaydin

Chapter 14 Analysis of root growth using image analysis.................. 279
Andrew P. French and Michael P. Pound

Chapter 15 Advances in imaging methods on plant chromosomes........... 299
Toshiyuki Wako, Seiji Kato, Nobuko Ohmido, and Kiichi Fukui

Chapter 16 Machine vision in estimation of fruit crop yield.............. 329
A. Payne and K. Walsh
Preface
Image analysis is a useful tool for obtaining quantitative information
for target objects. The application of imaging techniques to plant and
agricultural sciences has previously been confined to images obtained
through remote sensing techniques. Technological advancement in the
development of powerful hardware, picture capturing tools, and robust
algorithms in a cost-effective manner paves the path of image analysis
toward nondestructive and objective evaluation of biological objects, and
opens up a new window to look into the field of plant science. The com-
plex, dynamic nature of plant responses to unexpected changes in the
environment compelled scientists to contemplate the application of image
analysis in high-throughput phenotyping for different purposes. Various
types of imaging techniques, such as red, green, and blue (RGB) imag-
ing, hyperspectral imaging, fluorescence imaging, and thermal imaging,
have contributed significantly to different aspects of crop performance
and improvement.
Predicting crop performance as a function of genome architecture
is one of the major challenges for crop improvement in the twenty-first
century to ensure agricultural production that will satisfy the needs of
a human population likely to exceed 9 billion by 2050. Compared to the
advancements made in the “next generation” genotyping tools, plant
phenotyping technology has progressed slowly over the past 25 years.
Constraints in plant phenotyping capability limit our approaches to dis-
sect complex traits such as stress tolerance and yield potential. In recent
years, phenomics facilities are popping up with the development of new
methodological applications of nonconventional optical imaging coupled
with computer vision algorithms and widening the set of tools available
for automated plant phenotyping.
The present book provides a comprehensive treatise of recent devel-
opments in image analysis of higher plants. The book introduces read-
ers to the fundamentals of images and image analysis and then features
various types of image analysis techniques covering a diverse domain
of plant sciences. It covers imaging techniques that include RGB imag-
ing, hyperspectral imaging at the small canopy level, thermal imaging,
photochemical reflectance index (PRI) imaging, chlorophyll fluorescence
imaging, reactive oxygen species (ROS) imaging, and chromosome imag-
ing. The book includes 16 chapters presenting a wide spectrum of applica-
tions of image analysis that are relevant to assessment of plant growth,
nutrient status, and photosynthetic efficiency both in vivo and in vitro,
early detection of diseases and stress, cellular detection of reactive oxygen
species, plant chromosome analysis, fruit crop yield, and plant phenotyp-
ing. The chapters are written by international experts who are pioneers
and have made significant contributions to this fascinating field.
The book is designed for graduate students, research workers, and
teachers in the fields of cell and developmental biology, stress physiology,
precision agriculture, and agricultural biotechnology, as well as profes-
sionals involved in areas that utilize machine vision in plant science.
We express our deep sense of gratitude to all the contributors for their
kind support and cooperation in our humble approach to present the cur-
rent status, state of the art, and future outlook of plant image analysis.
Thanks are also due to Dr. Rina Dutta Gupta for her support and encour-
agement throughout the preparation of this volume. Finally, we thank
CRC Press for giving us the opportunity to bring out this book.

S. Dutta Gupta
Kharagpur, India

Y. Ibaraki
Yamaguchi, Japan
Contributors
Ferhan Ayaydin Valérie Caffier
Cellular Imaging Laboratory INRA
Biological Research Center Institut de Recherche en
Szeged, Hungary Horticulture et Semences
Beaucouzé, France
and
Étienne Belin
Laboratoire d’Ingénierie des Agrocampus-Ouest
Systèmes Automatisés (LISA) Université d’Angers
Université d’Angers Angers, France
Angers, France
François Chapeau-Blondeau
Laboratoire d’Ingénierie des
Systèmes Automatisés (LISA)
Landry Benoit Université d’Angers
Laboratoire d’Ingénierie des Angers, France
Systèmes Automatisés (LISA)
Université d’Angers Yann Chéné
Angers, France Laboratoire d’Ingénierie des
Systèmes Automatisés (LISA)
Université d’Angers
Tristan Boureau Angers, France
Université d’Angers
Institut de Recherche en Didier Demilly
Horticulture et Semences GEVES
INRA, Agrocampus-Ouest Station Nationale d’Essais de
Université d’Angers Semences (SNES)
Beaucouzé, France Beaucouzé, France

ix
x Contributors

Sylvie Ducournau Yasuomi Ibaraki


GEVES Faculty of Agriculture
Station Nationale d’Essais de Yamaguchi University
Semences (SNES) Yamaguchi, Japan
Beaucouzé, France
Seiji Kato
Carolyne Dürr Yamanashi Prefectural
INRA Agritechnology Center
Institut de Recherche en Yamanashi, Japan
Horticulture et Semences
Beaucouzé, France Gloria Flor Mata-Donjuan
Department of Mechatronics
Nieves Fernandez-Garcia Polytechnic Queretaro University
Department of Abiotic Stress and Querétaro, México
Plant Pathology
Centro de Edafologia y Biologia Adán Mercado-Luna
Aplicada del Segura Department of Biosystems
Consejo Superior de School of Engineering
Investigaciones Cientificas Queretaro State University
Murcia, Spain Querétaro, México

Andrew P. French Nobuko Ohmido


Centre for Plant Integrative Graduate School of Human
Biology Development and Environment
University of Nottingham Kobe University Nada Ku
Nottingham, UK Kobe, Japan

Kiichi Fukui Enrique Olmos


Department of Biotechnology Department of Abiotic Stress and
Graduate School of Engineering Plant Pathology
Osaka University Centro de Edafologia y Biologia
Osaka, Japan Aplicada del Segura
Consejo Superior de
S. Dutta Gupta Investigaciones Cientificas
Agricultural and Food Murcia, Spain
Engineering Department
Indian Institute of Technology A. Payne
Kharagpur, India Central Queensland University
Queensland, Australia
Éva Hideg
Institute of Biology Michael P. Pound
Faculty of Sciences Centre for Plant Integrative Biology
University of Pécs University of Nottingham
Pécs, Hungary Nottingham, UK
Contributors xi

Enrique Rico-García K. Walsh


Department of Biosystems Central Queensland University
School of Engineering Centre for Plant and Water
Queretaro State University Science
Querétaro, México Queensland, Australia

David Rousseau
Yanyou Wu
Université de Lyon
Key Laboratory of Modern
Université Lyon 1
Agricultural Equipment and
Villeurbanne, France
Technology
Chinese Ministry of Education
Kotaro Takayama
Jiangsu University
Ehime University
Zhenjiang, People’s Republic of
Matsuyama, Japan
China
P. Trivedi and
Agricultural and Food
State Key Laboratory of
Engineering Department
Environmental Geochemistry
Indian Institute of Technology
Institute of Geochemistry
Kharagpur, India
Chinese Academy of Sciences
Guiyang, People’s Republic of
Marie-Hélène Wagner
China
GEVES—Station Nationale
d’Essais de Semences
Beaucouzé, France Kaiyan Zhang
State Key Laboratory of
Toshiyuki Wako Environmental Geochemistry
Division of Plant Sciences Institute of Geochemistry
National Institute of Chinese Academy of Sciences
Agrobiological Sciences Guiyang, People’s Republic of
Tsukuba, Japan China
chapter one

An introduction to images
and image analysis
Contents
1.1 Introduction................................................................................................ 2
1.2 What is an image?...................................................................................... 3
1.2.1 Image structure.............................................................................. 3
1.2.2 Pixels................................................................................................ 4
1.2.3 Bit depth and color channels........................................................ 4
1.2.4 Image file formats.......................................................................... 5
1.2.5 Color spaces.................................................................................... 8
1.2.5.1 RGB.................................................................................. 10
1.2.5.2 HSV................................................................................. 11
1.2.5.3 HSL.................................................................................. 11
1.2.5.4 YCbCr.............................................................................. 11
1.3 Analyzing images.................................................................................... 12
1.3.1 Image filtering.............................................................................. 12
1.3.2 Kernel convolution....................................................................... 13
1.3.2.1 Mean filter...................................................................... 14
1.3.2.2 Gaussian filter................................................................ 15
1.3.2.3 Median filter.................................................................. 16
1.3.3 Segmentation................................................................................ 17
1.3.3.1 Binary thresholding...................................................... 17
1.3.3.2 Adaptive thresholding................................................. 18
1.3.3.3 Region-based segmentation........................................ 18
1.3.3.4 Advanced segmentation.............................................. 19
1.3.4 Morphological operations........................................................... 19
1.3.5 Edge detection.............................................................................. 20
1.4 Conclusion................................................................................................ 22
References........................................................................................................... 23

Michael P. Pound and Andrew P. French


1.1 Introduction
In this chapter, the reader will be presented with a basic introduction
to images, image data, and some basic and widely used image process-
ing techniques. When developing or understanding image analysis
approaches in general, not just for the study of plant growth, it is nec-
essary to have an understanding of the underlying data representations,
within which is buried the information we wish to extract in the analy-
sis stage. With a good understanding of the raw data, the reader will be
well placed to comprehend the function and limitations of more specific
image analysis methods. But understanding that an image is essentially a
matrix of numbers that can represent different kinds of spatial informa-
tion—dependent on the sensor type, image type, resolution, etc.—is the
first step toward forming an image analysis solution.
Storing images for automated image analysis is a different techni-
cal problem than that of storing digital images for later manual analysis.
Using high-quality raw data is crucial, as recapturing the data at a later
stage is at best costly, at worst impossible, and in both cases clearly unde-
sirable. Choice of image type, format, and compression is crucial here.
Most people have heard of JPEG images, and some may know they com-
press the data, but do you know why you need to be careful when using them
for data collected for scientific image analysis research? In this chapter, we
hope to answer such fundamental questions.
Data storage these days is often thought of as plentiful and cheap.
Certainly, it is cheap to store terabytes of data. But over the course of an
imaging-intensive research project, it may be that petabytes of data storage
have to be allocated, and often in triplicate to allow for a backup strategy.
An automated plant phenotyping setup using a variety of image sen-
sors and capturing 3D data could easily end up accumulating this much
data. Then, decisions relating to compression and bit depth of the images
become serious considerations, so an understanding of these is essential.
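The arithmetic behind such estimates is straightforward: an uncompressed image occupies width × height × channels × bytes per channel. The sketch below illustrates how quickly the numbers grow; the sensor resolution and capture schedule are hypothetical figures chosen for illustration, not values from any particular phenotyping setup:

```python
# Uncompressed image size = width x height x channels x (bits per channel / 8).
# The 12-megapixel sensor and capture schedule here are hypothetical examples.
def raw_image_bytes(width, height, channels=3, bits_per_channel=8):
    return width * height * channels * bits_per_channel // 8

per_image = raw_image_bytes(4000, 3000)   # 12-megapixel, 24-bit RGB
print(per_image)                          # 36,000,000 bytes, i.e. ~36 MB

# One camera capturing every 10 minutes for a year, stored in triplicate
# to allow for a backup strategy:
images_per_year = 6 * 24 * 365
total_bytes = per_image * images_per_year * 3
print(total_bytes / 1e12)                 # ~5.7 TB, from a single camera
```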
A comparison of some commonly encountered color spaces is pre-
sented next. Some approaches in image analysis perform best on indi-
vidual channels of an image (or components of a color space), and so an
understanding of what is available and the differences between them will
help the user to prepare data for a particular processing technique. For
example, we may wish to segment an image into areas of the same color
(hue), but we might not be interested in different brightness values (inten-
sity) within the same color regions. Here, choosing a color space where we
can separate hue from brightness would be a sensible choice.
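This hue/brightness separation can be seen with Python's standard colorsys module; the two green shades below are made-up values used only to illustrate the point:

```python
import colorsys

# Two shades of the same green, one bright and one in shadow (values in 0..1).
bright_green = (0.2, 0.8, 0.2)
dark_green = (0.1, 0.4, 0.1)

h1, s1, v1 = colorsys.rgb_to_hsv(*bright_green)
h2, s2, v2 = colorsys.rgb_to_hsv(*dark_green)

# The hue channel is identical while the value (brightness) channel differs,
# so a segmentation based on hue alone groups both pixels as "green".
print(h1 == h2)   # True
print(v1, v2)     # 0.8 0.4
```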
Following a description of common data and file formats, and color
spaces, some basic image analysis processing techniques are then pre-
sented. These techniques are often used as part of a preprocessing stage,
prior to running more complex image analysis algorithms. Ways of
Chapter one: An introduction to images and image analysis 3

removing different kinds of noise from an image are discussed, and use-
ful terms are defined, including what segmentation means. Simple example
segmentation approaches are described—in this case, related to binary
thresholding. Morphological operations, which allow us to process geo-
metric structures in the binary image, are then described in the context of
cleaning up binary plant-related images. Finally, an introduction to some
basic image features, such as edges and how to detect them, is presented.
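As a small preview of the thresholding idea mentioned here, binary thresholding can be sketched in a few lines; this is a toy example with made-up intensity values, not code from this book:

```python
# Binary thresholding: every pixel at or above the threshold becomes
# foreground (1), everything else background (0).
def threshold(gray_rows, t):
    return [[1 if v >= t else 0 for v in row] for row in gray_rows]

# A tiny grayscale "image" with a bright blob on a dark background.
image = [
    [10, 12, 200, 210],
    [11, 205, 220, 13],
    [9, 10, 14, 12],
]
print(threshold(image, 128))
# [[0, 0, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
```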
If the reader becomes familiar with the concepts in this chapter, his or
her understanding of the more involved image analysis techniques in the
rest of the book should have a good foundation.

1.2 What is an image?


1.2.1 Image structure
An image is most often represented as a two-dimensional, rectangu-
lar grid of pixels. Images represented this way are called raster images.
Each position in the image is located using positive integer values on a
Cartesian coordinate system. The main distinguishing feature between
images and regular Cartesian coordinates is that the origin of the image,
pixel (0, 0), is found in the upper left corner of the image. At each coordi-
nate, a pixel represents the color at that point. An example image can be
seen in Figure 1.1.
This chapter will deal exclusively with raster images. Images captured
by biologists using cameras, scanners, and microscopes will all use this
representation, and image analysis algorithms assume that an image is in
this form. However, it should be noted that there is a theoretical opposite
of a raster image, often called a vector graphic. In these images, objects are
represented as a series of points, lines, and more complex paths, generated
using mathematical expressions. The benefits of vector graphics are scale
and resolution, and device independence; if you scale a vector graphic, it
simply becomes larger, rather than becoming pixelated due to inadequate
resolution. Nevertheless, computer displays consist of a grid of pixels, like
a raster image, so any vector graphics must first be converted into a raster
image before they are shown. This process is called rasterization.

Figure 1.1 An example image of width 12 pixels and height 6 pixels.

Figure 1.2 Two representations of the same image. Left: The common
interpretation of pixels, as small squares. Computer monitors display pixels
in this manner. Right: A smoothed image, treating each pixel as a point in
the center of each square, and linearly interpolating between each value for
positions between pixels.

1.2.2 Pixels
As discussed above, at each position in the image a pixel represents the
brightness and the color at that point. Although pixels are usually thought
of as a small square section of the image, mainly because of the similarities
between image pixels and display pixels in computer monitors, strictly
speaking they represent some sampled point of the image. Thus, in real-
ity, pixels represent a single point of color, or intensity level, assigned to a
coordinate. Figure 1.2 shows this distinction.
In most cases the distinction between a pixel as a square and a pixel
as a point is largely arbitrary. However, there are times in image analysis
where it might be necessary to calculate the color or gray value between
two pixels, in which case the rectangle representation would be inade-
quate, and interpolation should be used.
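Bilinear interpolation is the usual choice for sampling between pixel centers. A minimal sketch (not code from this book) over a grayscale image stored as rows:

```python
# Bilinear interpolation: linearly blend the four surrounding pixel values
# according to the fractional part of the sampling coordinate.
def sample_bilinear(gray_rows, x, y):
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(gray_rows[0]) - 1)
    y1 = min(y0 + 1, len(gray_rows) - 1)
    fx, fy = x - x0, y - y0
    top = gray_rows[y0][x0] * (1 - fx) + gray_rows[y0][x1] * fx
    bottom = gray_rows[y1][x0] * (1 - fx) + gray_rows[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

image = [[0, 100],
         [50, 150]]
print(sample_bilinear(image, 0.5, 0.5))  # 75.0, the average of all four neighbors
```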

1.2.3 Bit depth and color channels


Along with width and height values in pixels, images are also described
using their bit depth. This is the total number of bits, zeros, and ones that
describe the color or intensity at each pixel location. As with all values
stored on a computer, the number of bits used to store some information
tells us how many different values can be stored. For example, 8 bits is a
block of 8 zeros and ones, and can distinguish between 2⁸ = 256 different
values. Eight-bit images are usually, but not always, grayscale, and the
pixels are usually stored consecutively in a list (see Figure 1.3).

Image data:             00000000  01000000  00100110  11110000  ...  10101100  01000110  11111111
Coordinate (x, y):      (0,0)     (1,0)     (2,0)     (3,0)     ...  (9,5)     (10,5)    (11,5)
Gray value (intensity): 0         64        38        240       ...  172       70        255

Figure 1.3 An example of grayscale image data, and the respective values that
these data represent. Each row in the image is listed one after another in order
from top to bottom. Pixel values range from 0 (black) to 255 (white).
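Storing pixels "consecutively in a list" means row-major order: with the origin at the upper left, pixel (x, y) of an image of a given width sits at flat index y × width + x. A quick sketch, using the 12 × 6 dimensions of the example image:

```python
# Row-major storage: row 0 comes first, then row 1, and so on.
def flat_index(x, y, width):
    return y * width + x

WIDTH, HEIGHT = 12, 6
print(flat_index(0, 0, WIDTH))    # 0   (upper-left pixel)
print(flat_index(11, 0, WIDTH))   # 11  (end of the first row)
print(flat_index(0, 1, WIDTH))    # 12  (start of the second row)
print(flat_index(11, 5, WIDTH))   # 71  (last pixel of a 12 x 6 image)
```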
For color images the theory is the same, except that additional chan-
nels are used to store separate values for each color component. For exam-
ple, in a 24-bit color image, 8 bits are used for each value of red, green,
and blue, and a tuple of RGB represents a single pixel. At each pixel, the
combination of red, green, and blue produces the final color in the image.
Table 1.1 provides details of some common bit and channel combinations
for images used in image analysis.
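The per-channel layout of a 24-bit RGB pixel can be made concrete with a little bit arithmetic. This is an illustrative sketch, not code from this book, and it packs in RGB order for clarity:

```python
# Pack three 8-bit channel values (0-255 each) into one 24-bit integer,
# then unpack them again with shifts and masks.
def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

leaf_green = pack_rgb(34, 139, 34)
print(hex(leaf_green))          # 0x228b22
print(unpack_rgb(leaf_green))   # (34, 139, 34)
print(2 ** 24)                  # 16,777,216 possible color combinations
```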

1.2.4 Image file formats


While there are many file formats, most image analysis in the biologi-
cal domain will encounter only a select few dedicated image formats. It
is true that there are many proprietary microscope formats that contain
image data, and include metadata such as microscope settings, etc., but for
the purpose of image analysis, often we want to export from these files to
image data in a more conventional format (that said, software such as Fiji
(Schindelin et al., 2012) is capable of opening a wide variety of microscope
formats and working with the data directly). The file format dictates not
only how the image data are stored, but also what compression is used.
Some file formats such as TIFF will also allow a user to tag an image with
relevant information, such as date of capture, which can be particularly
helpful when capturing images during an experiment.
Image compression is a technique whereby the raw image data are
transformed in such a way as to make them more memory efficient.
Compression can take one of two forms. Lossless compression aims to
shrink the size of the image data, while preserving all of the information
held within them. Most images will contain some degree of repetition, for
example, a solid area in a single color. Where this occurs, these contiguous
blocks can be compressed into a single instruction, allowing a decoding
algorithm to reconstruct the entire image. For example, pixel data such as
000000111111112222223333 may be compressed using run length encoding
in a form such as 6-0, 8-1, 6-2, 4-3, which requires less space.
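The run-length idea in this example can be sketched in a few lines; this is illustrative only, since real formats store the runs in compact binary encodings rather than strings:

```python
from itertools import groupby

# Run-length encode a sequence of pixel values into (count, value) pairs,
# and decode the pairs back into the original sequence.
def rle_encode(pixels):
    return [(len(list(run)), value) for value, run in groupby(pixels)]

def rle_decode(pairs):
    return "".join(value * count for count, value in pairs)

data = "000000111111112222223333"
encoded = rle_encode(data)
print(encoded)                       # [(6, '0'), (8, '1'), (6, '2'), (4, '3')]
print(rle_decode(encoded) == data)   # True -- the compression is lossless
```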
6

Table 1.1 Descriptions of Notable Image Bit and Channel Combinations


Number of Bits per Total possible
Name channels channel color combinations Description
1-bit binary 1 1 2 In a binary image, values can be either 0 (black) or 1 (white).
There are no intermediate values. Cameras and other capture
devices do not usually capture a binary image; rather, they are
obtained by thresholding a higher-quality image into
foreground and background pixels.
8-bit grayscale 1 8 256 As seen in Figure 1.3, an 8-bit grayscale image stores up to 256
levels of gray, referred to as intensity levels due to their
correspondence with image brightness.
16-bit grayscale 1 16 65,536 Structurally identical to an 8-bit grayscale image, but with
double the number of bits per pixel. This grayscale image can
be used when 256 intensity levels is insufficient to capture the
information required.
8-bit indexed 1 8 256 of 16 m Contains 256 grayscale or color values, but these values are
stored separately from the pixel data that index them. Can be
used if some color is required, but only 8 bits per pixel is
available.
Michael P. Pound and Andrew P. French
24-bit RGB 3 8 16 m 3 bytes per pixel, allows for a total of 16 million color
combinations. A very common image format. In practice, each
pixel is usually stored in BGR order, but this makes little
difference to image analysis algorithms.
32-bit ARGB 4 8 16 m + Much like 24 bpp RGB, this includes a separate alpha channel
transparency for transparency. This image format is very common in web
applications. The similar 32 bpp RGB format simply ignores
any transparency data in the last 8 bits.
RAW — — — This is a platform-dependent format that allows images to be
captured without any alteration. This will usually require
conversion into a more common format, using proprietary
software included with the camera or capture device. This
format is only beneficial if some aspect of the image would be
lost if it was preconverted to another format.
Note: The only remaining consideration with regard to bits per pixel is the resulting image size. Assuming there is no compression in the image (a
topic covered in the next section), high bit depths can cause file sizes to become very large. In previous decades, when magnetic storage
was smaller and more expensive, this was more of a concern than it is now. However, even now RAW and uncompressed image formats can
be very large.
Chapter one: An introduction to images and image analysis

The obvious advantage of lossless compression is that images are
unaltered, and all information is retained and can be used in any image
analysis steps. The disadvantage is that this approach can still be memory
inefficient where images are hard to compress. Images that are repetitive
or have large blocks in the same color can be compressed easily, but pho-
tos of biological subjects may not have these properties. The common ZIP
format is an example of lossless compression that can be used on any file
type. TIFF file formats allow different types of compression behind the
scenes, including ZIP-like lossless compression.
Lossy compression attempts to reduce the file size further than loss-
less approaches, but at the cost of some image information, which will be
irrevocably lost. Traditionally, lossy compression is used in domains
where memory efficiency is paramount, such as transmitting images
across the Internet. Many compression algorithms also attempt to exploit
properties of the human visual system such that the loss in quality is not
noticeable to most observers. For example, the JPEG image format compresses
the color information in an image more heavily than the
brightness information. This is because human eyes are more sensitive to
contrast than they are to changes in color, and hence more color compres-
sion can be applied before a human observer will notice.
In image analysis, the appearance of an image to a human is of little
importance, and modern storage is cheap. Users capturing images
should think carefully before using lossy compression methods, and
in most cases lossless compression will ensure that all information is
retained for image analysis. If JPEG must be used, choose the
highest-quality setting.
Table 1.2 shows a selection of common image file formats and their
properties.

1.2.5 Color spaces


The color image data described above were concerned with 8-bit RGB
color, that is, color separated into three separate red, green, and blue com-
ponents. This is one of many ways to represent color, and not all image
formats store RGB data. In addition, many image analysis routines can be
used effectively in different color spaces, so knowledge of these spaces
is helpful. Diagrammatic representations of common color spaces can be
seen in Figure 1.4.
Biologists who are familiar with confocal microscopy should note
that confocal images are often stored using arbitrary colors, sometimes
correlating with the colors of the lasers used. In fact, the confocal micro-
scope measures only the intensity of fluorescence at a point and at given
wavelengths, so in reality confocal images are similar to a group of
grayscale intensity images, and do not adhere to any of the color spaces
discussed below.

Table 1.2 A Comparison of Common Image File Formats

Bitmap (BMP), .bmp — lossless compression; no tagging. Can make use of limited compression, but is often uncompressed, which results in very large file sizes.

Portable Network Graphic (PNG), .png — lossless; no tagging. Now very common; reasonable lossless encoding makes PNG preferable to BMP in many situations.

Tagged Image File Format (TIFF), .tiff — lossless; allows tagging. Often used in scientific research. Uses lossless encoding but also allows a significant amount of extra information to be included in tags.

Joint Photographic Experts Group (JPEG), .jpg/.jpeg — lossy; allows tagging. Generally used to store photographs; the lossy compression in JPEG may be unsuitable for scientific use. However, the amount of compression can be altered, and at low compression levels a large decrease in file size can still be obtained with minimal loss in quality.

Graphics Interchange Format (GIF), .gif — lossy in effect; no tagging. GIF's encoding is itself lossless, but images are restricted to a palette of only 256 colors, so converting full-color images discards information; GIF is unsuitable for scientific use in most cases.

1.2.5.1 RGB
The RGB color space splits each pixel into three colors, representing the
three primary color components, red, green, and blue. RGB can be visu-
alized as a three-dimensional cube (Figure 1.4a), where each axis rep-
resents one of the color channels. The color black is found at the origin,
where RGB values are (0, 0, 0). White is found at the opposite corner,
with RGB values of (255, 255, 255) for an 8-bit image. Grayscale pixels
are found along the line between the black and white corners, where
R, G, and B have the same value. The RGB format is popular because it
matches the structure of pixels in monitors and other displays. However,
a notable drawback of RGB is that it combines color and bright-
ness information in the same space. Conversions between RGB and the other color
spaces exist; thus, the required color component can be separated from
the brightness component by converting the image into a color space that makes
that distinction.
Although this representation is almost exclusively referred to as RGB,
many bitmap files, in particular the Windows BMP format, actually store
the data in BGR order. In most cases pixels are stored in 32-bit blocks; thus,
a pixel will usually comprise 24 bits of BGR, followed by an additional
unused 8 bits. In other formats such as PNG, these 8 bits are used as a
transparency, or alpha, channel.

Figure 1.4 (See color insert.) Diagrammatic representations of common color
spaces. (a) The RGB cube. (b) The HSL cylinder. (c) The HSV cylinder. (d) The
YCbCr cube. (e) A selection of planes taken from the YCbCr cube, demonstrating
how color changes with Y, Cb, and Cr.

1.2.5.2 HSV
The HSV color space represents the separate hue, saturation, and
value components of a pixel. HSV is most easily viewed as a cylinder
(Figure 1.4c), with hue being the position around the edge, saturation the
distance from the center to the edge, and value the position from the top
to the bottom.
Hue represents the color of the pixel, and is usually given as a value in degrees,
from 0 to 360. At 0°, the hue is red; rotating around the hue wheel,
the colors pass through green and then blue, before finally returning to
red. Saturation represents the intensity of the color, from vivid through
to grayscale. The closer to the center of the cylinder cross section the satu-
ration value lies, the less the hue value will be expressed.
tion up and down the cylinder represents value, or brightness. Toward the
bottom there will be darker pixels, with lighter pixels above. Any position
on the HSV cylinder can be matched by a pixel in RGB, and conversion
between the two color spaces is simple. Because HSV separates color (H +
S) from intensity (V), the HSV color space separates the color components
of a pixel from the grayscale component in a way that RGB does not. HSV
is therefore more stable during changing lighting conditions, which may
occur when analyzing images over time.
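This separation can be demonstrated with Python's standard colorsys module; note that colorsys scales hue, saturation, and value to the 0–1 range rather than degrees, so this is only a sketch of the idea:

```python
import colorsys

# Pure red: hue 0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 1.0

# Halving the brightness of the same color changes only V,
# leaving hue and saturation untouched: the separation of
# color from intensity described above.
h2, s2, v2 = colorsys.rgb_to_hsv(0.5, 0.0, 0.0)
print(h2, s2, v2)  # 0.0 1.0 0.5

# The conversion is reversible.
r, g, b = colorsys.hsv_to_rgb(h2, s2, v2)
print(r, g, b)  # 0.5 0.0 0.0
```

A threshold on hue alone, for example, would classify both the bright and the dark red pixel identically, which is exactly the stability under changing lighting that the text describes.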

1.2.5.3 HSL
Similar to HSV, the hue, saturation, and lightness (HSL) color space con-
verts RGB into distinct color and brightness components. However,
there are slight differences, as can be seen in the color space diagram
(Figure 1.4b). While the saturation value still governs whether a color
is vivid or washed out, the lightness value now determines the blackness
or whiteness of a color. For image analysis purposes, HSV and HSL are
similar. However, HSL is often considered the more intuitive space,
as a high lightness value will produce a white pixel, rather than a pixel
whose whiteness depends on the additional saturation variable.

1.2.5.4 YCbCr
YCbCr (sometimes referred to as YUV) exists to reduce the redundancy
inherent in signals sent using RGB. The Y component, scaled between 0
and 1, represents the luminance of a pixel. The color components Cb and
Cr represent the blue difference and red difference. Any RGB color can
be located in the Cb–Cr plane, with the luminance Y specifying the
shade of that color. A diagram of the color space is shown in Figure 1.4d;
however, it is often easier to visualize as only CbCr using constant Y
values, as in Figure 1.4e. While the amount of information per pixel in
YCbCr is not any greater than RGB, by separating the luminance and
color information, much like with HSV, different compression algo-
rithms can be applied to the color or to the intensity of an image. It is
the YCbCr color space that the JPEG file format uses to highly compress
the color information. Human observers cannot resolve color informa-
tion as accurately as grayscale intensity, so video encoding for TV is also
processed as YCbCr.
While compression of the color information is not ideal for images
meant for use in image analysis, it should be noted that many algorithms,
such as segmentation and stereo reconstruction, can work well on gray-
scale images. Other algorithms, like common edge filters, operate exclu-
sively on grayscale images. It is therefore more important to preserve
intensity information than color information when capturing images.
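As a hedged sketch of the conversion itself, the weights below are the ITU-R BT.601 coefficients adopted by the JPEG/JFIF convention for 8-bit channels (other standards, such as BT.709, use slightly different weights; the 128 offsets simply center the chroma values in an unsigned byte):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr using ITU-R BT.601 weights
    (the JPEG/JFIF convention). Y is the luma; Cb and Cr are the
    blue-difference and red-difference chroma, offset by 128."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# Grayscale pixels carry no chroma: Cb and Cr sit at the 128 midpoint,
# so all the information lives in the Y channel.
white = rgb_to_ycbcr(255, 255, 255)
black = rgb_to_ycbcr(0, 0, 0)
print(white)  # close to (255.0, 128.0, 128.0)
print(black)  # (0.0, 128.0, 128.0)
```

Because the chroma of neutral pixels collapses to a constant, an encoder can subsample or compress Cb and Cr aggressively while leaving Y intact, which is the behavior described above.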

1.3 Analyzing images


Image analysis is the process of working from raw pixel data to obtain
useful information from the image, typically a measurement of
the objects within. While some image analysis algorithms can be quite
complex (as will be seen in later chapters), there are a variety of simple
techniques that see widespread use due to their varied applicability. For
example, image filtering is often used to reduce or remove image noise
prior to further processing. Image segmentation can be used to locate
regions of interest, separating foreground from background, and ranges
in complexity, from very simple techniques to very complex ones.

1.3.1 Image filtering


Image noise occurs in all captured images, regardless of the quality of
the sensor. It is caused by a number of factors, but primarily by natu-
rally occurring electronic noise. The different causes of image noise pro-
duce unwanted variations in pixel color or intensity, away from the true
color of the object being viewed. These variations are drawn from differ-
ent probability distributions, depending on the nature of the noise. For
example, the majority of noise generated by a camera sensor will follow
a Gaussian distribution.
Image filtering is an effective way of reducing noise in an image while
preserving important aspects of the subject. The nature of the filter used
should depend on the nature of the noise, and multiple filters can be used
where there are multiple sources of noise. This section discusses a variety
of noise types and suggests appropriate image filters to reduce them.

1.3.2 Kernel convolution


Many image filters use the mathematical convolution operation to con-
volve an image with a mask, often called a kernel. An example kernel may
be structured like this:

1 1 1

2 3 2

1 1 1

where coordinates of values in the kernel are defined relative to the center,
not the top corner like an image:

(–1, –1) (0, –1) (1, –1)

(–1, 0) (0, 0) (1, 0)

(–1, 1) (0, 1) (1, 1)

The kernel is altered depending on the effect that the filter requires.
The discrete convolution operation at pixel coordinate (u, v) is defined as

I′(u, v) = [1 / Σ H(i, j)] Σ_{i = −∞..∞} Σ_{j = −∞..∞} I(u − i, v − j) H(i, j)

where I is the original image, I′ is the filtered image, and H is the kernel
to be applied to I. In other words, for each pixel in the source image, we
apply the kernel at that point. We then multiply all neighboring pixels
under the mask by the corresponding value in the kernel, and sum the
result for all neighbors. How many pixels are considered at each location
is dependent on the size of the kernel. The division by ΣH(i, j), the sum of
all elements in the kernel, ensures that the image intensity is not altered,
should the sum of the values in the kernel not be 1. The following example
aims to illustrate the process of convolution.
Given the following kernel:

1 1 1

2 3 2

1 1 1

and the following image:



1 3 5 2 8 8 5 1

6 4 6 9 3 1 9 9

2 7 1 5 3 6 8 2

1 3 8 2 4 9 7 3

7 3 5 6 4 1 5 4

3 1 4 7 9 2 9 9

The filtered value at coordinate (4, 2) is calculated as

(1/13)(9 ∗ 1 + 3 ∗ 1 + 1 ∗ 1 + 5 ∗ 2 + 3 ∗ 3 + 6 ∗ 2 + 2 ∗ 1 + 4 ∗ 1 + 9 ∗ 1) = 59/13 ≈ 4.54

where 13 is the sum of the kernel values.

This process is repeated for all values in the image, resulting in a new,
filtered image as output.
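The procedure can be sketched in plain Python; this is a minimal illustration using the kernel and image above, and it skips border pixels (a real implementation would pad or crop the image instead):

```python
def convolve(image, kernel):
    """Apply a kernel to an image by discrete convolution,
    dividing by the kernel sum so overall intensity is preserved.
    Border pixels, where the kernel would overhang the image,
    are left unfiltered for brevity."""
    kh, kw = len(kernel), len(kernel[0])
    rh, rw = kh // 2, kw // 2               # kernel radii
    norm = sum(sum(row) for row in kernel)  # sum of kernel elements
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]         # copy; borders unchanged
    for v in range(rh, h - rh):
        for u in range(rw, w - rw):
            acc = 0.0
            for j in range(-rh, rh + 1):
                for i in range(-rw, rw + 1):
                    acc += image[v - j][u - i] * kernel[j + rh][i + rw]
            out[v][u] = acc / norm
    return out

kernel = [[1, 1, 1],
          [2, 3, 2],
          [1, 1, 1]]
image = [[1, 3, 5, 2, 8, 8, 5, 1],
         [6, 4, 6, 9, 3, 1, 9, 9],
         [2, 7, 1, 5, 3, 6, 8, 2],
         [1, 3, 8, 2, 4, 9, 7, 3],
         [7, 3, 5, 6, 4, 1, 5, 4],
         [3, 1, 4, 7, 9, 2, 9, 9]]
filtered = convolve(image, kernel)
print(round(filtered[2][4], 2))  # value at (x=4, y=2): 59/13 = 4.54
```

Note how the normalization divides by the sum of the kernel values (13 for this kernel), exactly as the formula prescribes.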

1.3.2.1 Mean filter


The mean filter averages local pixel values in a region, reducing the
magnitude of uniform noise. The filter also has the effect of blurring the
image. The main form of noise in an image is quantization noise, that is,
noise that forms where pixels have been quantized into a discrete range,
for example, 0–255 in 8-bit images. If a sensor can measure intensity or
color with a higher degree of accuracy than 8 bits, the encoding of the
image will cause pixels to take the nearest appropriate value, and some
additional information will be lost. This loss will cause a uniform error
throughout the image.
The kernel for a mean filter is given as

1 1 1
1 1 1     or
1 1 1

1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1

Two examples are shown: a 3 × 3 kernel and a 5 × 5 kernel. A mean
filter kernel can be any size, N × N. The larger the kernel, the more pixels
are included in the mean calculation, and the larger the blur. Because a
mean filter treats all pixels under the kernel as equal in weight, there is a
disadvantage that pixels far from the center pixels can influence its color.

1.3.2.2 Gaussian filter


Unlike a mean filter, a Gaussian filter gives higher weight to pixels closer to
the center of the kernel. These values are drawn from a two-dimensional
normal distribution, where the mean of this distribution is the center of
the kernel, and the standard deviation is varied depending on how strong
the effect of the filter should be. The general formula for a 2D normal dis-
tribution is given as

f(x, y) = A exp( −[ (x − x₀)² / (2σ²) + (y − y₀)² / (2σ²) ] )

where A is the amplitude, (x₀, y₀) is the center, and σ is the standard devia-
tion (s.d.) of both directions of the Gaussian. The s.d. is kept identical for
both directions, to ensure the same amount of blur is applied horizontally
and vertically in the image.
The dimensions of the kernel necessary to represent a normal dis-
tribution are dependent on the standard deviation. While theoretically
normal distributions are of infinite extent, in practice the majority (around 98%) of
the distribution can be represented by a kernel with a radius of 2.5σ.
However, for even relatively small σ, this produces kernels large enough
to become computationally inefficient; with σ = 4, for example, the
appropriate kernel dimensions are roughly 20 × 20. One benefit of using a normal
distribution is that the x and y components can be separated into two
separate passes over the image; an N × N kernel can be split into two ker-
nels of size N × 1 and 1 × N. The result of convolution with the first kernel
is convoluted with the second, to produce the same result that would be
obtained if convoluting with the much less efficient N × N kernel. The x
and y component kernels for a Gaussian filter of σ = 3 are

f(x) = [0.004 0.009 0.018 0.033 0.055 0.081 0.106 0.126 0.133 0.126 0.106 0.081 0.055 0.033 0.018 0.009 0.004]

f(y) = f(x)ᵀ (the same values arranged as a column vector)
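The separability can be checked directly: convolving with an N × 1 pass and then a 1 × N pass gives the same result as one pass with the full N × N kernel. A pure-Python sketch on a small random image (borders cropped rather than padded, purely for brevity):

```python
import math
import random

def gaussian_1d(sigma, radius):
    """A 1-D Gaussian kernel, normalized to sum to 1."""
    g = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(g)
    return [v / total for v in g]

def convolve_rows(img, k):  # horizontal (1 x N) pass
    r = len(k) // 2
    return [[sum(img[y][x + i] * k[i + r] for i in range(-r, r + 1))
             for x in range(r, len(img[0]) - r)]
            for y in range(len(img))]

def convolve_cols(img, k):  # vertical (N x 1) pass
    r = len(k) // 2
    return [[sum(img[y + i][x] * k[i + r] for i in range(-r, r + 1))
             for x in range(len(img[0]))]
            for y in range(r, len(img) - r)]

random.seed(0)
img = [[random.random() for _ in range(12)] for _ in range(12)]
k = gaussian_1d(sigma=1.0, radius=2)

sep = convolve_cols(convolve_rows(img, k), k)  # two 1-D passes

# One 2-D pass with the outer-product kernel k2[j][i] = k[j] * k[i].
r = len(k) // 2
full = [[sum(img[y + j][x + i] * k[j + r] * k[i + r]
             for j in range(-r, r + 1) for i in range(-r, r + 1))
         for x in range(r, 12 - r)]
        for y in range(r, 12 - r)]

diff = max(abs(sep[y][x] - full[y][x])
           for y in range(len(full)) for x in range(len(full[0])))
print(diff)  # effectively zero: only floating-point rounding
```

For a kernel of size N, the two-pass version costs 2N multiplications per pixel instead of N², which is the efficiency gain the text refers to.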

Compared to a mean filter, a Gaussian filter is much better at preserving
edges, and distant pixels do not influence the target pixel value as
much as closer pixels. An example of an image filtered using a Gaussian
blur can be seen in Figure 1.5a and b.


Figure 1.5 (a) Image of an Arabidopsis seedling; the image has had artificially
added Gaussian noise, followed by salt-and-pepper noise. (b) The same image
filtered with a Gaussian blur. Much of the Gaussian noise has been removed, but
the salt-and-pepper noise has been spread out to neighboring pixels. (c) The same
image filtered with a median filter; the majority of the salt-and-pepper noise has
been removed, but the Gaussian noise remains. (d) A median filter followed by a
Gaussian filter. Combining the two filters first removes the salt-and-pepper noise,
and then reduces the Gaussian noise. (Courtesy of Ric Traini, Centre for Plant
Integrative Biology, University of Nottingham.)

1.3.2.3 Median filter


A median filter works in a similar way to the operation of a mean filter,
except that the median values of the pixels under the mask are calculated,
rather than the mean value. A median filter does not use a specific kernel,
simply a window size, below which the median is calculated. For exam-
ple, given the following image, using a window size of 3 × 3:

1 3 5 2 8 8 5 1
6 4 6 9 3 1 9 9
2 7 1 5 3 6 8 2
1 3 8 2 4 9 7 3
7 3 5 6 4 1 5 4
3 1 4 7 9 2 9 9

the filtered value at coordinate (4, 2) is calculated as the median of {9, 3, 1,
5, 3, 6, 2, 4, 9}. The median of these values is 4.
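The windowed median is straightforward to sketch (border pixels are left unfiltered purely for brevity):

```python
def median_filter(image, size=3):
    """Replace each interior pixel with the median of the
    size x size window centered on it."""
    r = size // 2
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; borders unchanged
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(image[y + j][x + i]
                            for j in range(-r, r + 1)
                            for i in range(-r, r + 1))
            out[y][x] = window[len(window) // 2]
    return out

image = [[1, 3, 5, 2, 8, 8, 5, 1],
         [6, 4, 6, 9, 3, 1, 9, 9],
         [2, 7, 1, 5, 3, 6, 8, 2],
         [1, 3, 8, 2, 4, 9, 7, 3],
         [7, 3, 5, 6, 4, 1, 5, 4],
         [3, 1, 4, 7, 9, 2, 9, 9]]
print(median_filter(image)[2][4])  # median of {9,3,1,5,3,6,2,4,9} = 4
```

Unlike the mean, the median ignores extreme outliers entirely, which is why a single salt or pepper pixel in the window leaves the output unaffected.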
A median filter is very effective at removing salt-and-pepper noise,
where errors in the capture device lead to some pixels having extremely
low or high intensities. High levels of Gaussian noise will also lead to
some extremely high or low values. As the noise is normally distributed,
these values will be uncommon, but a median filter will remove them
where they do occur. Figure 1.5c shows a typical output of a median fil-
ter on an image containing both Gaussian and salt-and-pepper noise.
Figure 1.5d shows a more effective approach of first using a median filter
to remove salt-and-pepper noise, and following this with a Gaussian filter
to smooth the remaining Gaussian noise.

1.3.3 Segmentation
Segmentation is the process of splitting the pixels of an image into groups,
where each group has some meaningful distinction from the others. The
simplest form of segmentation would split pixels into two classes, where
one class represents the areas of interest in the image, and the other rep-
resents the background. Segmentation into two groups of pixels is often
achieved using thresholding, the process of grouping pixels based on
their intensity or color.

1.3.3.1 Binary thresholding


The simplest form of thresholding into two classes is called binary thresh-
olding. In binary thresholding, some intensity or color value is chosen;
all pixels below this level are classed as background, and all pixels above
are classed as foreground. This approach essentially separates bright fore-
ground from darker background. An example can be seen in Figure 1.6.

Figure 1.6 Left: Grayscale image of a wheat root grown on germination paper.
Right: A binary image resulting from thresholding the left image. A threshold
value of 177 was manually chosen.
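As a sketch, binary thresholding reduces to one comparison per pixel; the small grayscale array below is invented for illustration, using the threshold value of 177 quoted above:

```python
def binary_threshold(image, level):
    """Classify each pixel as 1 (foreground) if its intensity is
    above the threshold level, or 0 (background) otherwise."""
    return [[1 if p > level else 0 for p in row] for row in image]

# A tiny invented grayscale patch: dark background columns flanking
# two bright foreground columns.
gray = [[20, 180, 200, 30],
        [25, 190, 210, 15],
        [10, 175, 220, 40]]
print(binary_threshold(gray, 177))
# [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0]]
```

Note the pixel with intensity 175: it is foreground to a human eye but falls just below the cutoff, illustrating how sensitive the result is to the chosen level.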

While binary thresholding can be appropriate for images where the
foreground and background pixels are clearly defined, it can become
problematic where intensity levels are not so clear. The user must also
specify the level at which thresholding occurs. A level midway between
0 and 255 may work on many images, but not on images that are darker
or brighter than usual. Binary thresholding over an entire image will also
fail to adequately handle images that include gradual changes in intensity
over the entire image. Where one side of the image is lighter or darker
than another, the results of the segmentation will change in the different
regions of the image.

1.3.3.2 Adaptive thresholding


One disadvantage inherent in binary thresholding is that the thresh-
old itself must be set manually by the user. This process can be time-
consuming and subjective; a better approach would be to automatically
calculate the optimum threshold. A popular approach to so-called adap-
tive thresholding is the Otsu method (Otsu, 1979).
Otsu thresholding operates by minimizing the intraclass variance,
calculated as a weighted sum of the variances of both the foreground and
background classes. In other words, Otsu chooses a threshold level t that
minimizes the variance of the pixels in both classes. This intraclass vari-
ance calculation is made easier by instead calculating the interclass vari-
ance, that is, the variance between the foreground and background pixels.
Otsu showed that the threshold t that maximizes the interclass variance is
the same value that minimizes the intraclass variance.
Interclass variance is given as

σB² = wb ∗ wf ∗ (μb − μf)²

where the weights of the background and foreground classes, wb and wf,
are calculated as the number of pixels in the respective classes, divided
by the total number of pixels, and μb and μf are the mean intensities of the
background and foreground classes.
Otsu thresholding calculates σB² for each possible threshold value,
and then selects the optimum threshold:

Threshold = arg max_t σB²(t)
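This sweep over candidate thresholds can be sketched in a few lines; the bimodal pixel list is invented for illustration, and pixels ≤ t are treated as background:

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold t maximizing the interclass variance
    sigma_B^2 = w_b * w_f * (mu_b - mu_f)^2, where pixels <= t
    are background and pixels > t are foreground."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    total = sum(i * h for i, h in enumerate(hist))  # sum of intensities
    best_t, best_var = 0, -1.0
    nb = 0      # background pixel count so far
    sum_b = 0   # background intensity sum so far
    for t in range(levels - 1):
        nb += hist[t]
        sum_b += t * hist[t]
        nf = n - nb
        if nb == 0 or nf == 0:
            continue  # one class empty: variance undefined
        mu_b = sum_b / nb
        mu_f = (total - sum_b) / nf
        var = (nb / n) * (nf / n) * (mu_b - mu_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# An invented bimodal set: a dark background and a bright object.
pixels = [10, 12, 11, 13, 10, 12, 200, 205, 198, 202]
t = otsu_threshold(pixels)
print(t)  # lands between the two intensity modes
```

Keeping running totals for the background class makes the whole sweep a single pass over the histogram, rather than recomputing both class means for every candidate t.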

1.3.3.3 Region-based segmentation


While adaptive thresholding is preferable to manual thresholding in
many cases, broad changes in image intensity over the entire image can
still lead to poor results. To account for global changes in image inten-
sity, it is often beneficial to split an image into smaller subregions, before
separately applying an adaptive threshold to each region. If some regions
appear brighter than others, local adaptive thresholding can treat these
regions as distinct, and apply a different threshold value.
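As a toy illustration of why local thresholding helps, the strip below has a bright left half and a dark right half; no single global threshold can separate foreground from background in both halves, but thresholding each tile against its own statistics can. (A per-tile mean stands in for per-tile Otsu purely for brevity, and the image values are invented.)

```python
def tile_mean_threshold(image, tile_w):
    """Threshold each vertical strip of width tile_w against the
    mean intensity of that strip alone."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for x0 in range(0, w, tile_w):
        strip = [image[y][x] for y in range(h)
                 for x in range(x0, min(x0 + tile_w, w))]
        mean = sum(strip) / len(strip)
        for y in range(h):
            for x in range(x0, min(x0 + tile_w, w)):
                out[y][x] = 1 if image[y][x] > mean else 0
    return out

# Left half: background 100, foreground 140. Right half: background 20,
# foreground 60. The right-half foreground (60) is darker than the
# left-half background (100), so no global threshold works.
image = [[100, 140, 20, 60],
         [100, 140, 20, 60]]
result = tile_mean_threshold(image, tile_w=2)
print(result)  # [[0, 1, 0, 1], [0, 1, 0, 1]]
```

Each tile's threshold adapts to its local brightness, recovering the foreground columns in both halves.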

1.3.3.4 Advanced segmentation


The segmentation techniques described so far can be thought of as low-
level pixel-based methods. While the average intensity of neighboring
pixels is a factor in region-based segmentation, regions are generally
large, and each pixel’s influence is slight. More advanced segmentation
algorithms will use the locations of pixels to better effect, by clustering
groups of neighboring pixels into separate regions. This chapter will not
cover such algorithms in detail; however, those interested are encouraged
to read about watershed segmentation (Vincent and Soille, 1991).

1.3.4 Morphological operations


When an image is segmented into a binary result (such as foreground and
background) without using any prior models of expected shape, it is likely
that the initial results will need further processing to improve the seg-
mentation. For example, due to variations in gray level, a thin foreground
object may have missing sections (see Figure 1.7).
To remove such extraneous noise and holes from images, morphologi-
cal processing can be used. Morphological operators are commonly used
on binary images. Two of the most basic processes are performed with the
erosion operator and the dilation operator, which will be explained here.
Many of the other morphological operations can be thought of in terms of
these two foundational operators.
The erosion operator has the effect of shrinking the foreground pixels
representing an area. That is, the foreground pixel area is eroded away,
leaving less foreground present. This is achieved using a structuring ele-
ment, similar to the kernel used in filtering. A typical morphological ker-
nel is a 3 × 3 square as follows:

1 1 1

1 1 1

1 1 1

This is placed over every pixel in the foreground, with the current
pixel under test being placed in the (0, 0) central position. The neighbor-
ing pixels that fall under the kernel are considered, and if all of them are

Figure 1.7 Example of morphological erosion and dilation. Original image of
plant roots (top left), binary image produced by thresholding, inverted for clarity
(top right), erosion applied to the binary image (bottom left), and dilation applied
to the binary image (bottom right).

foreground pixels, then the pixel under consideration is set to the fore-
ground in the output image; otherwise, it is set to the background.
The opposite of the erode operator, and the second of the two most
basic operators, is the dilate operator. Its use proceeds with the same
kernel and procedure as above, except that the output pixel is set to the
foreground if at least one of the other pixels in the kernel is a foreground.
As its name implies, this has the effect of dilating or growing the fore-
ground segmentation.
By chaining a dilate operation and an erode operation together, we
can fill small holes in the foreground. It is easy to imagine how a dilation
operation can fill holes, by expanding the boundary of foreground shapes
until a gap between them is filled. Clearly this affects the shape and size
of the foreground object. Following the dilation with an erosion operation
using the same structuring element still allows small gaps to be bridged,
but is less destructive to the shape of the original foreground element, as
the boundary is shrunk again after the initial expansion; this combination
is known as morphological closing (see Figure 1.8, right panel). The opening
operator (an erosion followed by a dilation) produces the opposite effect
to closing, removing small areas of foreground, such as speckle noise,
rather than filling small holes.
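A minimal sketch of erosion, dilation, and their combination into closing, using a 3 × 3 square structuring element and treating out-of-bounds pixels as background:

```python
def _window(img, y, x):
    """Yield the 3x3 neighborhood of (y, x); pixels outside the
    image are treated as background (0)."""
    h, w = len(img), len(img[0])
    for j in (-1, 0, 1):
        for i in (-1, 0, 1):
            yy, xx = y + j, x + i
            yield img[yy][xx] if 0 <= yy < h and 0 <= xx < w else 0

def erode(img):
    """Pixel stays foreground only if its whole window is foreground."""
    return [[1 if all(_window(img, y, x)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img):
    """Pixel becomes foreground if any pixel in its window is."""
    return [[1 if any(_window(img, y, x)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def closing(img):
    """Morphological closing: dilation followed by erosion."""
    return erode(dilate(img))

# A foreground ring with a one-pixel hole in the middle.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 0, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
closed = closing(img)
print(closed[2][2])  # the hole has been filled: 1
```

The dilation expands the ring until the hole disappears, and the subsequent erosion shrinks the boundary back, leaving the filled shape roughly its original size.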

1.3.5 Edge detection


Finding features in the image can be thought of as one conceptual level
above finding groups of pixels, which the thresholding-based methods
we have seen so far have produced. A common feature requirement
is to segment the edges from the images. This allows us to find the
boundaries of objects. A crude method is to use morphological oper-
ations on a binary image; dilation followed by subtracting away the
original image will leave only the dilated pixels around the edges of

Figure 1.8 Example of a common problem with fixed-level global thresholding.
Suppose we want to segment the central linear feature in the simulated image
(left); using global thresholding at a cutoff value of 128 correctly categorizes all
the background pixels as black, but misclassifies some of the darker foreground
pixels into the background category (center). To fix this, a morphological dilation
is performed, followed by an erosion. This is also referred to as morphological
closing, and has the effect of filling small holes in the foreground (right).

the foreground areas. Often, though, the images in which we are try-
ing to find an edge are not binary. An edge can represent a separa-
tion between regions of different colors, textures, 3D depths, etc., but is
most simply introduced as a difference in intensity between neighbor-
ing regions in an image.
Figure 1.9 shows the first derivative of the intensity plot across the
edge—note how it gives rise to a sharp peak on the strong edge, which
we can clearly see in the figure. This suggests the first derivative will
make a good basis for an edge detection operator. A kernel convolution
operation can achieve the desired effect. A first derivative approxima-
tion in the x direction can be calculated by passing the following kernel
across the image:

–1 0 +1

Hopefully it is clear to the reader why this will give a large response
when sited over a light/dark vertical edge. One of the most widely known
edge operators is the Sobel operator. It uses the same principles, but has
two kernels each designed to search for edges in different directions:

Figure 1.9 Top: A synthetic image illustrating two regions with a clear edge
between them. Center: Intensity profile across the two regions. Bottom: First
derivative (gradient) of the intensity profile line.

Gx:
–1 0 +1
–2 0 +2
–1 0 +1

Gy:
+1 +2 +1
 0  0  0
–1 –2 –1

The results of the operators can be combined using Pythagoras' theorem
to give a magnitude for the response to the edge detection:

G = √(Gx² + Gy²)

It is also possible to combine the two responses to produce a dominant
direction for the edge. For more details see Sonka et al. (1999).
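The two kernels and the magnitude combination can be sketched as follows, on a synthetic image with a vertical light/dark edge (interior pixels only; a real implementation would pad the borders):

```python
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]]

def sobel_magnitude(img):
    """Gradient magnitude G = sqrt(Gx^2 + Gy^2) at interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(img[y + j][x + i] * GX[j + 1][i + 1]
                     for j in (-1, 0, 1) for i in (-1, 0, 1))
            gy = sum(img[y + j][x + i] * GY[j + 1][i + 1]
                     for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = math.sqrt(gx * gx + gy * gy)
    return out

# Dark region (0) on the left, bright region (9) on the right.
img = [[0, 0, 0, 9, 9] for _ in range(5)]
mag = sobel_magnitude(img)
print(mag[2][1], mag[2][2])  # 0.0 36.0
```

The flat region gives no response, while the light/dark boundary produces a strong one; because the rows are identical, Gy is zero everywhere and the magnitude here comes entirely from the vertical-edge kernel Gx.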

1.4 Conclusion
In this chapter we have presented an introduction to the
structure and representation of images, and an overview of many common low-
level image analysis operations. An understanding of the underlying
representation of image data is crucial if one hopes to design effective
image analysis algorithms, or correctly make use of existing algorithms
and tools.
An overview of the storage mechanisms behind pixel data was pre-
sented, followed by a comparison of some popular image formats used
within plant science research and further afield. We presented a brief
overview of the contrasts between lossless and lossy compression, in
the hope that researchers will consider their image capture require-
ments before embarking on a new project. Finally, we covered many of
the fundamental image analysis algorithms necessary for a researcher
to begin analyzing his or her own images. While these techniques could
be considered only an introduction to image analysis, they can be found
throughout the literature, in many complex image analysis applications.

References
Otsu, N., A threshold selection method from gray-level histograms, IEEE Trans.
Syst. Man. Cybern., SMC-9, 62–66, 1979.
Schindelin, J., Arganda-Carreras, I., Frise, E., et al., Fiji: an open-source platform
for biological-image analysis, Nature Methods, 9, 676–682, 2012.
Sonka, M., Hlavac, V., and Boyle, R., Image processing, analysis, and machine vision,
2nd ed., Brooks/Cole, California, 1999.
Vincent, L., and Soille, P., Watersheds in digital spaces: an efficient algorithm
based on immersion simulations, IEEE Trans. Pattern Anal. Mach. Intell., 13,
583–598, 1991.
chapter two

Image analysis for plants:
Basic procedures and techniques

Yasuomi Ibaraki and S. Dutta Gupta

Contents
2.1 Introduction.............................................................................................. 25
2.2 Procedures of image analysis for biological objects........................... 26
2.2.1 Basic flow...................................................................................... 26
2.2.2 Image acquisition......................................................................... 27
2.2.3 Preprocessing............................................................................... 28
2.2.4 Extraction of objects of interest.................................................. 31
2.3 Color analysis........................................................................................... 31
2.4 Shape analysis.......................................................................................... 32
2.5 Particle analysis........................................................................................ 33
2.6 Growth analysis....................................................................................... 34
2.7 Texture analysis....................................................................................... 35
2.8 Emerging applications and future perspectives................................. 36
References........................................................................................................... 37

2.1 Introduction
Image analysis is a promising tool for nondestructive analysis of biologi-
cal objects, and has been widely used in botanical research and practical
agriculture. The technique is now readily available at low cost and is being
widely applied to objects from the cell level to the plant and canopy levels.
Advances in devices for digital image acquisition and personal computers
have contributed to this progress. Software for image analysis is also now
readily available. The main advantage of image analysis is its potential
for nondestructive and objective analysis. The objectives of the analysis
include measurement (of size, population, growth, etc.), quality evaluation,
classification, and visualization. As a useful research tool, image analysis
has been widely used in microscopy for improving the visual appearance
of an image to a human viewer or for measurement of various features
of organelles, cells, and organs from such images. In addition, it is
possible to quantify elongation or expansion in roots or shoots using
serially acquired images for plant growth analysis (Spalding and Miller, 2013).
Image-based analysis of morphological features may also be an effective
tool for phenomics in plants (Arvidsson et al., 2011; Iyer-Pascuzzi et al., 2010;
Zhong et al., 2009; Keyser et al., 2013; Chapter 9 in this book). Furthermore,
images are used for photosynthetic analysis, via chlorophyll fluorescence
imaging or photochemical reflectance imaging, giving spatial information
on photosynthetic properties within a leaf, plant, or canopy.
Image analysis is also promising for practical use in agriculture.
Evaluation of plant status based on visual inspection is often performed
for management of cultivation. Image analysis has the potential for objec-
tive evaluation of plant status and is expected to help in the management
of cultivation. In particular, nondestructive evaluation of plant status using
images permits monitoring a time course of plant status, yielding valuable
information, including growth rates and developmental stages. In addition,
image analysis is promising for acquiring information about the physio-
logical state of plants, including leaf area index (Liu and Pattey, 2010), chlo-
rophyll content (Yadav et al., 2010; Dutta Gupta et al., 2013), and disease
severity (Corkidi et al., 2006; Wijekoon et al., 2008; Cui et al., 2010). Thermal
imaging of leaves can also provide information about transpiration and can
be used for stress detection, and numerous applications have been pub-
lished to date (Jones et al., 2002). Moreover, it has been recently reported
that light intensity distribution on a canopy surface can be estimated using
images acquired through a specific optical filter (Ibaraki et al., 2012).
Given that readings of reflection and radiation of electromagnetic
waves from a target object or area are acquired as images in remote sens-
ing, image analysis is one of the basic components of remote sensing data
analysis. Numerous applications for plants, mainly for large plant cano-
pies, have been reported in this area of research. The potential uses of
remote sensing for horticultural crops have been reviewed by Usha and
Singh (2013).
In this chapter, procedures and techniques of image analysis along
with its application in higher plants are discussed, mainly focusing on
macroscopic imaging at the level of plant organs such as leaves and roots,
whole plants, or small plant canopies.

2.2 Procedures of image analysis for biological objects
2.2.1 Basic flow
Image analysis of a biological target proceeds as follows: (1) acquisition of
an image of the target object, (2) preprocessing of the image for facilitating
further processing, (3) selection of pixels of interest, and (4) extraction of
characteristic features.
After preprocessing of the acquired digital image, pixels correspond-
ing to the target object in the image are selected. Characteristic features
are then calculated for the selected area (the selected pixels) according
to the purpose of the analysis. For example, pixels are counted to measure
size (length or area), and red, green, and blue (RGB) values may be
extracted for color analysis.
For texture analysis, the flow is somewhat different and does not
require selection of pixels corresponding to the target objects (Ibaraki,
2006). Details of texture analysis of plants are described with application
examples in Section 2.7.
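As a minimal illustration of this flow (not taken from any of the studies cited here), the sketch below fakes step (1) with a synthetic "leaf" image, then smooths, thresholds, and counts pixels; the threshold of 50 is an arbitrary value chosen for the synthetic data.

```python
import numpy as np

def leaf_area_pixels(rgb, green_threshold=50):
    """Steps (2)-(4) of the basic flow: preprocess, select pixels, extract a feature."""
    # (2) preprocessing: a 3x3 moving-average filter on the green channel
    g = rgb[:, :, 1].astype(float)
    pad = np.pad(g, 1, mode="edge")
    h, w = g.shape
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # (3) selection of pixels of interest: green excess over red marks the leaf
    r = rgb[:, :, 0].astype(float)
    mask = (smooth - r) > green_threshold
    # (4) feature extraction: projected leaf area as a pixel count
    return int(mask.sum())

# (1) image acquisition, faked here: dark background with a 20 x 30 green "leaf"
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:60, 10:40, 1] = 200
print(leaf_area_pixels(img))  # slightly above the true 600 pixels (smoothing blurs the edges)
```

In practice the threshold would be set from a histogram of the real image, as described in Section 2.2.4.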

2.2.2 Image acquisition


Digital images of target objects can now be readily acquired with a digital
camera, a digital video camera, or a camera attached to a personal computer
or a cellular phone. Image data acquired with analog imaging devices, in
the form of a printed picture or a video signal, may be converted into dig-
ital image data with special devices such as a scanner, a digitizer, or an
analog–digital converter. An image is an expression of the spatial distri-
bution of light intensity or color. A digital image consists of a number of
small compartments referred to as picture elements or pixels, and a digital
value expressing light intensity is assigned to each pixel. This value is often
referred to as pixel value, pixel digital number, intensity, or gray level.
Properties of the acquired image, including spatial resolution, num-
ber of gray levels, color system, and the file format (compressed or
uncompressed), should be given proper attention. Digital cameras typically
save images in Joint Photographic Experts Group (JPEG) format, which is a
lossy compression format. Part of the image information is lost
in the compression process, particularly when a high compression rate is
applied. Because images in uncompressed formats such as BMP,
uncompressed TIFF, and raw data formats are easy to analyze and preserve
the full spatial and color resolution of the imaging device, they are
desirable for precise image-based measurement.
However, images of these types have greater file sizes and require more
calculation time.
Most digital cameras are provided with automatic image control sys-
tems such as automatic gain control and automatic white balance control.
Automatic gain control changes the sensitivity of the camera to incoming
light, and hence the output (pixel values in the image), while automatic
white balance changes the colors in the image according to lighting
conditions. These automatic control systems change imaging conditions and,
as a result, may make comparison among the images difficult. Moreover,
when light intensity is to be measured from images, the linearity between
input and output should be confirmed.
Another important aspect in imaging target objects for analysis is to
acquire an image in which it is easy to select (extract) the target objects
from the background. Therefore, attention should be paid to the back-
ground in imaging. It is effective to place behind the target object a board
of a different color than the target. Materials used for supporting plants
in cultivation, such as poles and nets, should have different colors than
leaves, stems, or fruits in practical application of image analysis. Lighting
conditions in imaging also influence the success of extraction of the target
object. Special lighting devices are sometimes used for plant imaging to
extract the objects of interest. For example, Keyser et al. (2013) used a black
light to extract leaves.
In specialized imaging such as thermal, hyperspectral, and fluores-
cence imaging, suitable imaging devices are required for each specific
purpose, although image analysis follows the same process as that of the
normal image. In particular, thermal imaging cannot be performed with
normal digital cameras, as the imaging device requires the detection of
infrared rays. However, a general-purpose charge-coupled device (CCD)
camera or digital camera may be used for hyperspectral and fluorescence
imaging in combination with optical filters, including band-pass filters or
long/short-pass filters, which transmit only specific wavelengths of light.
Indices using spectral reflectance such as normalized difference vegeta-
tion index (NDVI) or photochemical reflectance index (PRI) may be esti-
mated by changing the band-pass filters attached to a camera (Ibaraki et
al., 2010). For chlorophyll fluorescence imaging, optical filters are used
for both camera and light source for exciting fluorescence (Omasa and
Takayama, 2003; Ibaraki and Matsumura, 2005). Modulated lighting systems
enable imaging of fluorescence emitted in response to a constant excitation
light intensity (fluorescence quantum yield). Commercially available
chlorophyll fluorescence imaging systems (Fluorcam, PSI; Imaging PAM,
Walz) are equipped with modulated lighting systems and readily permit
imaging of PSII quantum yield (Figure 2.1), although they are still expensive.
Lasers, which are narrow-band light sources, have also been used as light-
ing devices for imaging fluorescence (Novák, 2011). Recently, systems for
imaging green fluorescent protein at the macroscopic level—in a whole
leaf (Stephan et al., 2011) or root (Novák, 2011)—have been developed. In
addition, multispectral fluorescence imaging can provide useful informa-
tion about plants’ state of health (Lenk et al., 2007).

2.2.3 Preprocessing
Preprocessing is a procedure for facilitating subsequent processing of the
image. The purpose of preprocessing includes noise reduction, geometric
correction, modification of spatial resolution and number of gray levels,
and conversion of color mode.

Figure 2.1 Chlorophyll fluorescence images of strawberry leaves acquired
by a commercially available chlorophyll fluorescence imaging system: (a) F
image, (b) Fm image, and (c) ΦPSII image. Only the left leaf had been
irradiated with high-intensity light, showing the reduced ΦPSII.

For noise reduction, a smoothing filter, which is a
matrix used to calculate the average pixel value using several pixels around
the target pixel, including a moving average filter, Gaussian filter, or other
type, is often used. A median filter assigns to a target pixel the median
value of several pixels around it and is also used for noise reduction.
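A minimal sketch of both filters in NumPy, assuming a single-channel image; the 3 × 3 window size and the test image with one noise pixel are illustrative choices.

```python
import numpy as np

def mean_filter3(img):
    """3x3 moving-average (smoothing) filter; edges handled by replication."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    return sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def median_filter3(img):
    """3x3 median filter: robust against isolated noise pixels."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([pad[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255  # a single "salt" noise pixel
print(mean_filter3(img)[2, 2])    # the average filter only spreads the noise out
print(median_filter3(img)[2, 2])  # the median filter removes it entirely (100.0)
```

The comparison illustrates why the median filter is often preferred for impulse-type noise.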
Morphological operations called opening and closing may be effective for
noise reduction in a binary image, removing small isolated specks and
filling small internal holes. It should be noted
that all images acquired with imaging devices, including digital cameras
and CCD cameras, are susceptible to geometric distortion, which is a dis-
crepancy on an image between the actual and the ideal image coordinates,
and is caused mainly by the properties of the lens. As a result, there are
many types of distortion. It is effective to image orthogonal grids such as
a section of paper for identifying the degree and type of distortion caused
by the imaging device. Geometric correction of the image includes affine,
conformal, and projective transformation.

Although an 8-bit or 24-bit color image (8 bits for each color) is nor-
mally used in digital image analysis with a personal computer, some CCD
cameras can output images with a number of gray levels greater than 8
bits. A higher number of gray levels involves more information on light
intensity and affords a more detailed analysis of light intensity, while
requiring more computing time and a greater file size. The number of
gray levels required depends on the purpose of the analysis. A reduction
in the number of gray levels may suppress the effects of noise in imag-
ing. The number of gray levels should be reduced to a level matching the
purpose of the analysis.
A spatial resolution of 640 × 480 pixels, corresponding to the Video
Graphics Array (VGA) standard, was historically used for digital
analysis, but images of more than several thousand pixels in both width
and height can now be acquired. High-resolution imaging has an advantage
in macroscopic imaging, particularly for plant cell culture (Ibaraki,
2006). Imaging a whole culture at high resolution yields information on
cell and cell clusters. However, analysis of high-spatial-resolution images
requires more time, and reduction of the spatial resolution has the merits
of effectively reducing not only the computing time but also noise.
The properties of a camera should also be considered. Given that the
relationship between input and output of a digital camera is generally not
linear, a gamma correction is needed in order to obtain the linear relation-
ship between them. The gamma value mainly depends on the electrical
properties of the camera and should be predetermined for each camera. In
addition, the linearity of the relationship of pixel values to input is often
limited to a certain range of pixel values. Particularly, in the region of low
and high pixel values, linearity may not be observed. Therefore, the con-
ditions in which linearity is observed should be confirmed, particularly
when pixel values are used to estimate light intensity entering the camera
(Ibaraki et al., 2012).
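As a sketch of such a correction, assuming a simple power-law camera response; the gamma of 2.2 is only a typical illustrative value, since the actual value must be measured for each camera.

```python
import numpy as np

def linearize(pixel_values, gamma=2.2, max_value=255.0):
    """Invert a power-law camera response so output is proportional to light input.
    gamma = 2.2 is only a typical value; it must be measured for each camera."""
    v = np.asarray(pixel_values, dtype=float) / max_value
    return v ** gamma  # relative linear intensity in [0, 1]

# A pixel value of 186 (about 73% of full scale) carries ~50% linear intensity
print(round(float(linearize(186)), 2))  # 0.5
```

As noted above, such a correction is only valid within the range of pixel values where the camera response actually follows the assumed curve.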
To enhance the visual appearance of an image, a grayscale image may be
converted into a pseudo-color image using a lookup table that lists the
color corresponding to each pixel value. This method is often applied in
fluorescence imaging-based ion mapping, and a color bar representing the
lookup table should be added to the images.
Histogram stretching and tone curve adjustment are also effective ways to
enhance the visual appearance of the image and can be easily performed
with commercially available software. However, these methods change
the pixel values, and therefore should not be used for analyses based on
the pixel values.
Logarithmic transformation of pixel values is often effective for an
image under transmitted lighting because the relationship between the
optical density and transmitted light intensity is not linear, and it follows
a logarithmic relationship.

2.2.4 Extraction of objects of interest


The pixels corresponding to objects of interest are selected for further
analysis. This process is sometimes referred to as segmentation or thresh-
olding. One of the popular methods for selecting pixels to be analyzed
is thresholding with a fixed gray-level value (threshold value). For deter-
mination of the threshold value, a histogram of pixel values, which is a
frequency distribution of numbers of pixels with the same pixel values,
is often used. In the histogram, an object consisting of pixels with similar
pixel values is expressed as a distribution with a peak, so that pixels of
two different objects can be distinguished by setting the threshold value
at the valley between two peaks. For color images, the threshold value can
be set for each color component individually, or in the image converted to
grayscale based on the values derived from the color components.
Several automatic methods for the determination of the threshold value,
such as the discriminant analysis-based method (Otsu, 1979), have been
developed. The details of automatic segmentation methods are described
in Chapter 1 of this book. Methods using machine learning techniques
such as support vector machine (Yu et al., 2011) or neural networks (Fu
and Chi, 1996) have also been reported. Robust and multipurpose meth-
ods for thresholding, however, have not yet been developed. Tajima and
Kato (2011) compared 16 automatic thresholding algorithms for rice root
images and observed that the accuracy of root length estimation varied
with the algorithms. It should be noted that the effectiveness of each
thresholding method may depend on the chromatic and structural char-
acteristics of the objects (plants) and imaging conditions. Therefore, it is
very important to acquire an image in which target pixels can be distin-
guished from the background.
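A compact sketch of the discriminant-analysis method (Otsu, 1979), which picks the threshold maximizing the between-class variance of the gray-level histogram; the bimodal test image below is synthetic.

```python
import numpy as np

def otsu_threshold(gray):
    """Threshold maximizing between-class variance (discriminant analysis, Otsu 1979)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()  # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal histogram: dark background (~30) and bright object (~200)
rng = np.random.default_rng(0)
gray = np.clip(np.concatenate([rng.normal(30, 5, 500),
                               rng.normal(200, 5, 500)]), 0, 255).astype(np.uint8)
print(30 < otsu_threshold(gray) < 200)  # True: threshold falls in the valley
```

On real plant images the histogram valleys are rarely this clean, which is why, as noted above, no single thresholding method works for all objects and imaging conditions.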

2.3 Color analysis


Color is one of the main characteristics used in image analysis for plants,
given that plant cells have various kinds of pigments, which are the
source of their color. In practical cultivation, leaf color, which is normally
visualized with the naked eye, has been used for evaluating the plant sta-
tus in order to support management practices. For example, some plant
diseases can be detected by discoloration in parts of a leaf, and
fertilization timing can be based on leaf color information for crops such as
rice. Foliar color has always been of great interest and value to resource
managers and scientists as a visual indicator of plant health (Murakami
et al., 2005).
Normally, an RGB color coordinate system is used in digital image
analysis using a personal computer, although the JPEG color images
acquired by commercially available imaging devices such as digital still
and video cameras use a YCbCr color format, which allows lossy chroma
subsampling to reduce the file size.
For color analysis, RGB data are often converted into a color appear-
ance system such as a hue-saturation-intensity (HSI; occasionally referred
to as HLS) color model or hue-saturation-value (HSV; occasionally referred
to as HSB) color model because the color appearance system is more suit-
able for expressing human sense impressions. A formula for converting
RGB into HSI or HSV values has been proposed, and a function for con-
verting an RGB color image into an HSI or HSV image is provided in most
commercially available software. Extraction of leaves from an image is
often performed using the image converted into an HSI image (Bardsley
and Ngugi, 2013; Möller et al., 2007). HSI color components have also
been used for the estimation of pigment production in hairy root culture
(Berzin et al., 1999).
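The RGB-to-HSV conversion can be sketched with Python's standard colorsys module; the sample color is an arbitrary leaf-like green chosen for illustration.

```python
import colorsys

def rgb_to_hsv8(r, g, b):
    """Convert 8-bit RGB values to hue (degrees) and saturation/value in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# An arbitrary leaf-like green: the hue lands in the green band of the hue circle
h, s, v = rgb_to_hsv8(60, 160, 40)
print(round(h), round(s, 2), round(v, 2))  # 110 0.75 0.63
```

Working in hue makes "green" a narrow band of one channel rather than a joint condition on R, G, and B, which is why leaf extraction is often done in HSI or HSV space.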
In recent studies, R, G, and B values have sometimes been used directly,
or after conversion into the component ratios referred to as r, g, and b,
respectively, for color analysis combined with principal component
analysis (PCA) or nonlinear identification methods such as neural
networks (Prasad and Dutta Gupta, 2008; Dutta Gupta et al., 2013). Flower
color could be analyzed using RGB values and the derived values (Keyser
et al., 2013).
Plant leaves contain many types of pigments, among which chloro-
phyll is the richest and most important. Greenness of a leaf depends on
chlorophyll content and provides key information for the diagnosis of
plant physiological status, including nitrogen or water status. Yadav et al.
(2010) estimated the leaf chlorophyll content of micropropagated potato
plantlets using rgb values. Leaf greenness index from g values was cal-
culated for the comparison of leaf color in bedding plants (Parsons et al.,
2009). Wang et al. (2008) used a ratio of R to G for estimation of leaf chlo-
rophyll content.
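A sketch of the component ratios r, g, and b; the two sample colors are hypothetical and simply illustrate that the ratios are insensitive to overall brightness, unlike the raw channel values.

```python
def chromaticity(R, G, B):
    """Component ratios r, g, b: each channel normalized by overall brightness."""
    s = float(R + G + B)
    return R / s, G / s, B / s

# The same hypothetical leaf color under bright and dim illumination gives the
# same g ratio, even though the raw G values differ by a factor of two.
bright = (60, 160, 40)
dim = (30, 80, 20)
print(chromaticity(*bright)[1] == chromaticity(*dim)[1])  # True
print(round(chromaticity(*bright)[1], 2))                 # g ratio, ~0.62
```

This brightness invariance is one reason the rgb ratios, rather than raw RGB values, are used in the greenness-based estimates cited above.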
In color analysis, the most important point is to keep the imaging
conditions constant. The RGB values in an image depend on the spectral
properties of the light source and the imaging device. Therefore, a color
standard or a color chart should be imaged together with the target object
for proper color analysis, particularly under natural sunlight conditions,
in which imaging conditions vary with time.

2.4 Shape analysis


In plants, organs such as leaves, flowers, and fruits have different shapes,
and shape features can be used for identification or selection of the tar-
get organs. Given that development processes involve morphological
changes, shape analysis also provides valuable information for classifi-
cation of developmental stages. Moreover, morphological information of
leaves is used with color information to identify plant species and to select
weeds in the crop canopy (Golzarian and Frick, 2011).
Simple shape analysis is accomplished by extracting geometri-
cal features of the target area. These include length (major and minor
axes, perimeter, etc.), area, centroid, moment, and indices derived from
combinations of these features, including aspect ratio, circularity (4π ×
area/perimeter2), compactness (perimeter2/area), and symmetry. Elliptic
Fourier descriptors along the contour from the centroid are often used
for morphological analysis of biological objects. Features extracted from
the Fourier descriptors have been used for morphological classification of
somatic embryos (Uozumi et al., 1993), for analysis of leaf shape variations
(Iwata et al., 2002; Keyser et al., 2013), and for description of root morphol-
ogy (Lootens et al., 2007). Sets of these geometrical parameters have often
been used as inputs to determine a model describing the morphological
feature of interest by statistical analyses such as regression analysis, PCA,
and discriminant analysis, or by nonlinear identification systems such as
artificial intelligence using support vector machines or neural networks.
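The circularity and compactness indices above can be sketched directly from their formulas; the circle and rectangle dimensions are arbitrary illustrative values.

```python
import math

def circularity(area, perimeter):
    """4*pi*area / perimeter^2: equals 1.0 for a circle, smaller for elongated shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

def compactness(area, perimeter):
    """perimeter^2 / area: minimal (4*pi) for a circle, larger for complex outlines."""
    return perimeter ** 2 / area

# A circle of radius 10 versus a 10 x 40 rectangle of comparable area
circle = circularity(math.pi * 10 ** 2, 2 * math.pi * 10)
rect = circularity(10 * 40, 2 * (10 + 40))
print(round(circle, 2), round(rect, 2))  # 1.0 0.5
```

In practice, area and perimeter would first be measured as pixel counts from the selected region, so the indices inherit the discretization error of the boundary.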
Skeleton analysis is also used for shape analysis. Leaf morphol-
ogy could be analyzed by skeleton analysis (Wilder et al., 2011). Somatic
embryos of carrots were evaluated morphologically using skeleton images
extracted by a thinning process (Kurata et al., 1993). Midlines of root or
stem can be used for shape change analysis (Spalding and Miller, 2013).
Midline length and the distribution of local curvature along the midline
can provide a useful morphological description of a plant root or stem
(Silk, 1984).
Template matching technique is one of the pattern recognition meth-
ods and can be used for shape analysis, particularly for detection of an
object with desirable morphological features. In template matching, a
similarity score, the degree of matching, is evaluated by the sum of
squared differences (SSD), the sum of absolute differences (SAD), or the
normalized correlation coefficient (NCC). Although template matching is a robust way to
select objects with the required properties (local features) related not only
to shape but also to color and texture, it requires more computing time
and is sensitive to the rotation and scale of the template. The
scale-invariant feature transform (SIFT) has been proposed as a way to
extract local features independent of rotation and scale (Lowe, 1999).
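A minimal sketch of template matching with the SAD score; the exhaustive search shown here is the naive approach (practical implementations are heavily optimized), and the image and template are synthetic.

```python
import numpy as np

def match_template_sad(img, tmpl):
    """Exhaustive template matching: top-left position minimizing the SAD score."""
    H, W = img.shape
    h, w = tmpl.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            sad = np.abs(img[y:y + h, x:x + w].astype(int) - tmpl.astype(int)).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

img = np.zeros((20, 20), dtype=np.uint8)
tmpl = np.array([[0, 255], [255, 0]], dtype=np.uint8)
img[7:9, 12:14] = tmpl  # paste the pattern at row 7, column 12
print(match_template_sad(img, tmpl))  # (7, 12)
```

Note that a rotated or resized copy of the pattern would no longer score well, which is exactly the limitation that motivates rotation- and scale-invariant features such as SIFT.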

2.5 Particle analysis


Particle analysis is a procedure that recognizes closed areas as particles
and calculates features related to pixel value (color information) and geo-
metric features for each particle. The analysis yields frequency distribu-
tions for the features. It is effective for images in which target objects are
assembled, including microscopic images of an assembly of organelles
and cells, or macroscopic images of plants or plant canopies in which
multiple leaves, fruits, and flowers are present.
Particle analysis is also used for the detection of lesions in a leaf
infected by plant disease. The size distribution or numbers of lesions are
good indices for evaluating the disease level (severity), normally assessed
by visual inspection.
Success in particle analysis depends on thresholding, particularly for
small particles. The area and shape of a particle consisting of a small
number of pixels are strongly affected by the choice of threshold; in
detection of lesions on a leaf, small lesions are especially sensitive to
the thresholding process (Wijekoon et al., 2008). In addition, small
particles are susceptible to noise.
These problems can be avoided by increasing spatial resolution of the
image by using a camera of high resolution (greater number of pixels) or
limiting the field of view in imaging (close-up imaging).
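Particle analysis can be sketched as connected-component labeling with a flood fill, assuming a binary mask (e.g., from thresholding) as input; the two "lesions" below are synthetic rectangles.

```python
import numpy as np

def particle_sizes(mask):
    """Label 4-connected particles in a binary mask and return their areas."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = [], 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue  # pixel already belongs to a labeled particle
        current += 1
        stack, area = [(sy, sx)], 0
        while stack:  # iterative flood fill over 4-connected neighbors
            y, x = stack.pop()
            if labels[y, x]:
                continue
            labels[y, x] = current
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    stack.append((ny, nx))
        sizes.append(area)
    return sorted(sizes)

mask = np.zeros((10, 10), dtype=bool)
mask[1:3, 1:3] = True  # a 4-pixel "lesion"
mask[6:9, 5:9] = True  # a 12-pixel "lesion"
print(particle_sizes(mask))  # [4, 12]
```

The returned list is the raw material for the size distributions and lesion counts discussed above.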

2.6 Growth analysis


Images can be used to estimate the dimensions (size) of a target object by
measuring line length or projected area as a number of pixels. Differences
between size features of an object, such as shoot or root estimated from
images acquired at different times, are attributable to growth of the object,
including elongation or expansion. Machine vision, in which digital
images are automatically acquired, is applicable to the measurement of
plant growth (Spalding and Miller, 2013). It permits not only morphomet-
rics, which is the study of geometric features in growth, but also kinemat-
ics, which is the study of the internal material processes that create the
geometry (Spalding and Miller, 2013). Time-lapse images of a plant organ
are also used for analysis of growth-dependent oscillation, called circum-
nutation (Iwabuchi and Hirafuji, 2002).
Time-lapse imaging has also been applied for root growth and devel-
opment (French et al., 2009; Lobet et al., 2011). Root growth pattern and
complexity were used for phenotyping (Iyer-Pascuzzi et al., 2010; Zhong
et al., 2009). There have been many reports on root growth patterns using
images. Fractal dimension has often been used for analyzing root com-
plexity (Tatsumi et al., 1989; Walk et al., 2004), and recently alternative
methods have been proposed (Zhong et al., 2009; Iyer-Pascuzzi et al., 2010).
Nondestructive acquisition of size information, such as leaf area, for a
whole plant permits the analysis of growth rate, which normally requires
destructive measurement. Relative growth rate (RGR) is commonly used for growth
analysis and is based on invasive measurement of dry weight. In contrast,
relative leaf growth rate (RLGR) can be estimated nondestructively from
images. RLGR of Arabidopsis thaliana was estimated automatically and
used for phenotyping (Arvidsson et al., 2011). Normally, a projected area
can be measured from an image. If a linear relationship between projected
area and actual leaf area is observed, RLGR can be estimated simply by
image analysis.
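Assuming projected area is proportional to actual leaf area, RLGR can be sketched as the difference of log areas divided by the time interval; the area values below are hypothetical pixel counts.

```python
import math

def rlgr(area_t1, area_t2, dt):
    """Relative leaf growth rate from two projected areas over interval dt."""
    return (math.log(area_t2) - math.log(area_t1)) / dt

# Hypothetical projected leaf areas (pixels) measured from images 7 days apart
print(round(rlgr(1200, 3000, 7), 3))  # 0.131 per day
```

Because only the ratio of the two areas enters the formula, the proportionality constant between pixels and true area cancels, which is why a calibration to physical units is unnecessary for RLGR itself.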
In forestry application, digital hemispherical photography, which
captures an image with a fish-eye lens from below a canopy, is often used
to estimate the leaf area index (LAI). Liu and Pattey (2010) showed the
effectiveness of digital photography for agricultural crops using a recti-
linear lens at the top of the canopy.

2.7 Texture analysis


Texture analysis does not require selection of pixels of a target object in an
image. Instead, it analyzes whole regions of the image and extracts char-
acteristic features related to the texture of the image, i.e., the macroscopic
pattern of light intensity. Texture analysis can characterize objects in
a macroscopic image even when individual objects are not clearly
identified (Shono et al., 1994). Mean gray level, variance, range (the
difference between maximum and minimum values of gray level), and other
statistical features derived from a gray-level histogram, including skew-
ness and kurtosis, are used as simple texture features for classification
and segmentation of images based on texture, although these texture fea-
tures may not involve information on spatial distribution (Ibaraki, 2006).
Tuceryan and Jain (1998) divided texture analysis methods into four
categories: statistical, geometrical, model based, and signal processing. Of
these categories, histogram-derived features, the gray-level run lengths
method (Galloway, 1975), and the spatial gray-level dependence method
(SGDM) are classified as statistical methods, and two-dimensional (2D)
frequency transformation is classified as a signal processing method.
Two-dimensional frequency transformation has been widely used for
image analysis. It can derive the power spectrum image (frequency-
domain image), which expresses periodic features in the image texture. In
the gray-level run lengths method (Galloway, 1975), features are extracted
from a matrix that gives the probability that a run of a particular
length, consisting of pixels with the same gray level, occurs at a given
orientation. It is useful for analysis of banded texture patterns. Texture
features extracted using SGDM, developed by Haralick et al. (1973), have
often been used for texture analysis of biological objects. In SGDM, a co-
occurrence matrix is determined and 14 texture features are calculated
from the matrix. Color co-occurrence matrices derived from image matri-
ces for each color attribute—intensity, hue, and saturation—have also
been used in texture analysis (Shearer and Holmes, 1990).
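A minimal sketch of an SGDM-style co-occurrence matrix and one of the Haralick et al. (1973) features (contrast); the displacement, number of gray levels, and test textures are illustrative choices.

```python
import numpy as np

def cooccurrence(gray, dy=0, dx=1, levels=4):
    """Normalized gray-level co-occurrence matrix for displacement (dy, dx)."""
    m = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[gray[y, x], gray[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """One Haralick-type feature: sum over i, j of p(i, j) * (i - j)^2."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

flat = np.zeros((8, 8), dtype=int)   # a featureless texture
stripes = np.tile([0, 3], (8, 4))    # high-contrast vertical bands
print(contrast(cooccurrence(flat)), contrast(cooccurrence(stripes)))  # 0.0 9.0
```

The full SGDM feature set computes 14 such statistics from the same matrix, typically over several displacements and orientations.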
Geometrical methods consider texture to be composed of texture
primitives, describing the primitives and the rules that govern their spa-
tial organization (Ojala and Pietikäinen, 2003). Model-based methods
hypothesize the underlying texture process, constructing a parametric
generative model that could have created the observed intensity distribu-
tion (Ojala and Pietikäinen, 2003).
In remote sensing, texture analysis has been extensively used for
classification of land use or plant species identification (Ibaraki, 2006). In
proximal remote sensing for plant canopies, applications of texture analy-
sis have been reported. Shearer and Holmes (1990) identified plant species
using color co-occurrence matrices. Shono et al. (1995) compared the effec-
tiveness of several methods for texture analysis, including the gray-level
run lengths, SGDM, and power spectrum methods, on estimation of the
species composition in a pasture. Shono et al. (1995) analyzed leaf orienta-
tion by texture features extracted by the power spectrum method. Murase
et al. (1994) quantified plant growth by analyzing texture features using a
neural network.
Texture features have also been used as an input of PCA for classifi-
cation to separate wheat from weeds (Golzarian and Frick, 2011). Texture
features could also be used as promising markers for identifying calcium-
deficient lettuce plants (Story et al., 2010).

2.8 Emerging applications and future perspectives


Three-dimensional image analysis is expected to be used for analyzing
growth or structure of plants in botanical research, given that plants have
complex structures. Recently, several applications have been reported for
3D image analysis for plants (see Chapter 9 in this book). Use of 3D images
is promising for plant image analysis, although difficulties in image acqui-
sition remain to be resolved.
At present, visualization of invisible objects or phenomena is a key
process in biological research, and image analysis is expected to play an
important role in this process. Use of electromagnetic waves other than
visible, or hyperspectral imaging, is promising for the analysis of physi-
ological and functional properties of plants. Fluorescence imaging has
already become an essential tool for cell biology using microscopy, and
development of a user-friendly system for acquiring fluorescence images
at the macroscopic level is expected to contribute to progress in plant
stress biology.
In recent years, advances in imaging devices have been remark-
able, and high-quality images can now be easily acquired. However,
progress in the development of software for image analysis is limited.
Multipurpose software packages are expensive and must be customized
for the user’s purpose, requiring considerable knowledge. NIH Image© or
Scion Image© is the most popular software for multipurpose image analy-
sis. Several manuals describe image analyses for a special purpose using
this software (e.g., Robinson et al., 2009). Key features to be considered
Another Random Document on
Scribd Without Any Related Topics
can get a new supply of eggs, will try to answer all letters which I can not answer now.

Fannie W. Rogers.

Owing to the severe weather, I have been unable to collect enough arrow-heads to supply all my
correspondents, but I will send them as soon as possible. If those who have offered me coins and
other things in exchange will wait until I can get some more arrow-heads, which will be before
long, I will be very glad.

Isobel L. Jacobs,
Darlington Heights, Prince
Edward County, Va.

I am very much interested in the Post-office Box. I like Young People very much.
I live beside the beautiful Geneva Lake, which is a great summer resort. In warm weather we have
great sport fishing, but now it is all ice-boating and skating.
We raised five Bramah chickens last summer. They were very tame. One went to sleep with its
head on my aunt's shoulder, and they were capital pickpockets. They were in such demand that we
had to part with all but one. She is named Pulleta, and is so tame I can pick her up anywhere.
I would like to exchange postmarks, for foreign stamps, or shells from the Gulf of Mexico or Atlantic
coast.

Hubert C. Scofield,
P. O. Box 207, Geneva,
Wis.

I would like to exchange pieces of bass-wood, red and white oak, bird's-eye and hard and soft
maple, iron-wood, red and yellow birch, elm, ash, and butternut, for specimens of other kinds of
woods. Correspondents will please mark specimens.

George Empey,
Hersey, St. Croix County,
Wis.

I would like to exchange postmarks, for sea-shells. I am nine years old.

Reynolds White,
132 East Forty-fifth Street,
New York City.

I will exchange postmarks, for stamps, with any little boy or girl. I am nine years old.
Percy G. Lapey,
62 Clinton Street, Buffalo,
N. Y.

I would like to exchange postage stamps. I have a Swedish, a Canadian, and a New South Wales
stamp, two Italian, some French, English, and old issues of United States stamps, which I will give
for others.

A Subscriber to "Young
People,"
141 Fifth Avenue, New
York City.

I wish to notify correspondents that I do not wish to exchange for postage stamps any longer, but I
will exchange stamps, curiosities, shells, and minerals, for curiosities, shells, and minerals.

V. L. Kellogg,
P. O. Box 411, Emporia,
Kansas.

I would like to exchange shells and pressed sea-weeds, for other shells, Lake Superior agates, ore,
or other small specimens of minerals. I would like everything sent me to be clearly marked, and I,
in return, will name and classify the shells.

Miss May Hart,
Soquel, Santa Cruz County,
Cal.

I live only eighteen miles from King's Mountain, where a great battle of the Revolutionary war was
fought.
I have a little rat terrier I have named Rip Van Winkle, because he sleeps so much. I would like to
exchange birds' eggs with readers of Harper's Young People. I am twelve years old.

Willie F. Robertson,
Yorkville, S. C.

I have a collection of about fifteen hundred stamps, and I have about five hundred duplicates,
which I would like to exchange for others. Correspondents will please send a list of those they
desire.
Hiram H. Bice,
39 Second Street, Utica,
N. Y.

The following exchanges are also offered by correspondents:

Coins or specimens of woods, for Indian relics, curiosities, fossils, or minerals.

Alfred S. Kellogg,
P. O. Box 103, Westport,
Fairfield County, Conn.

Postage stamps.

J. Clarke Burrell,
307 East Eighty-sixth
Street, New York City.

Postage stamps, for Indian relics, or anything suitable for a museum.

George Lunham,
147 Skillman Street,
Brooklyn, L. I.

Foreign postage stamps.

Lionel W. Crompton,
Care of Mr. Clifton, 104
Sixth St., Hoboken, N. J.

Foreign postage stamps, for old issues of United States postage stamps, or for any Department
stamps.

Frank Bang,
271 Avenue B, New York
City.

Postmarks, for stamps.

Jay Hollis Gibson,
Vassar College,
Poughkeepsie, N. Y.

A silver Japanese coin and a piece of prehistoric pottery, for a genuine Indian bow and arrow.

David M. Gregg,
404 Penn Street, Reading,
Penn.

Ocean curiosities, for a guinea-hen's egg or other eggs; or twenty-five postmarks, for a Chinese
stamp and nine other foreign stamps.

Helen S. Lovejoy,
39 Munjoy Street, Portland,
Maine.

Cotton and rice as they grow, Spanish moss, arrow-heads, Southern insects, or pressed flowers, for
stamps.

John J. Hawkins,
Prosperity, S. C.

An ancient Spanish coin to exchange for some curiosity.

Thomas Ewing,
Osceola, Clark County,
Iowa.

Persian, Japanese, and other stamps, for Turkish or South American stamps or minerals.

Theodore Morrison,
3262 Chestnut Street,
Philadelphia, Penn.

Teasels, which are pretty for bouquets and decorating, for coins, curiosities, or minerals.

J. E. Garbutt,
Garbutt, Monroe County,
N. Y.
A stone from Delaware or Pennsylvania, for one from any other State; or shells, postmarks, or June
beetles, for ore of any kind, or for curiosities.

S. Stinson,
1705 Oxford Street,
Philadelphia, Penn.

Sea-shells or minerals, for minerals.

John D. Brown,
P. O. Box 171, Newton
Centre, Mass.

Pressed sea-weeds from Santa Cruz, on Monterey Bay, for ferns or sea-weeds from other localities.

Nellie Hyde,
162 Third Street, Oakland,
Cal.

Postmarks.

Henry F. Steele,
63 East Fifty-fifth Street,
New York City.

Soil from Illinois, for that of any other State.

Arthur Davenport,
34 Ogden Avenue,
Chicago, Ill.

An Italian stamp, for one of any other foreign country.

Giorgino Chapman,
Everett House, Union
Square, New York City.
A ten-cent United States stamp, War Department stamps, or a Cuban, Spanish, or Netherlands
stamp, for a Brazilian ten-reis.

Fred McGahie,
78 Second Place, Brooklyn,
L. I.

Twenty-five postmarks, for a Japanese, Chinese, or East Indian stamp, or twelve other foreign
stamps.

Annie Dryden,
Care of John Dryden,
Brooklin, Ontario, Canada.

Buttons, or California postmarks, for postage stamps.

Floy Moody,
Care of Charles Moody,
San José, Santa Clara
County, Cal.

Postmarks and postage stamps, for Indian relics and ocean curiosities.

Charles B. Bartlett,
92 Franklin Avenue,
Brooklyn, N. Y.

Postage stamps, minerals, fossils, coins, ocean curiosities, and Indian relics.

S. G. Guerrier,
Emporia, Kansas.

Stamps from Peru, United States official stamps, and others, in exchange for rare stamps.

Allen R. Baker,
P. O. Box 1275, Bay City,
Mich.

Copper ore from the Eli Copper Mines, New Hampshire, specimens of meteoric rock, and stone
from the Hoosac Tunnel, for Indian relics, ocean curiosities, fossils, or minerals.
Fred W. Glasier,
P. O. Box 235, Adams,
Berkshire County, Mass.

Ocean curiosities, for turtles not more than three inches long, newts, or lizards. Correspondents will
please write before sending any of these creatures.

Daniel D. Lee, 14 Myrtle Street,
Jamaica Plains, Suffolk
County, Mass.

Postmarks, for postmarks; or twice the number of postmarks, for any number of postage stamps.

Ralph D. Clearwater,
Care of A. T. Clearwater,
Kingston, N. Y.

Edmund S. H., and R. D. Britton.—The disastrous war between Peru and Chili originated in a dispute about
certain privileges to mine copper and nitrate of soda in the desert region of Atacama, the strip of sea-
coast on the Pacific, belonging to Bolivia, which separates Peru from Chili. In 1875, the nitrate grounds
were ceded by the Bolivian government to a Peruvian business house, which transferred a portion of its
rights to some Chilian merchants. A heavy export duty was immediately laid on the nitrate by Bolivia,
which step was considered by the Chilian government as a direct insult to its merchants, and also to be
in contradiction to earlier concessions made by Bolivia to Chili. The Peruvians, fearing the ruin of their
mining interest, took up the cause of Bolivia, and much secret diplomacy was going on, when suddenly,
on April 6, 1879, Chili made a declaration of war against Peru, and prepared to support its claims by
arms. The naval combat of Iquique took place in May of the same year, in which both Chili and Peru lost
valuable war vessels. For several months Chili maintained the blockade of Iquique, and meanwhile the
Peruvian iron-clad Huascar was harassing Chilian ports, until, in October, 1879, she was captured by two
Chilian men-of-war. The Chilian army and the united forces of Peru and Bolivia met in numerous
engagements, but since the capture of the Huascar the war has been one prolonged success for Chili.
After the battle of Chorillos, on January 14, 1881, in which the Peruvian forces were completely
overthrown, the Chilian armies marched triumphantly into Lima, on the 17th of the same month. An
armistice is now asked for by the diplomatic body at Lima, and it is to be hoped that this foolish
devastation of a beautiful country will soon come to an end.

Grace H.—You will find simple recipes for cream candy in the Post-office Boxes of Young People Nos. 35
and 38.
Willie F. W.—It is impossible to trace the superstition concerning Friday to its source. It exists among
many different peoples, each assigning to it an origin in accordance with the belief of the country. The
Friday superstition is met with even among the Brahmins of India, who hold it unlucky to begin any
enterprise on that day. In ancient times, thirty-two days in the year were considered unlucky by the
astrologers, and warnings were given against the performance of any work of importance on those days
—an advice which was no doubt strictly followed by all lazy people.

Fred L. C.—Mount Everest is the highest mountain of the earth. It is situated in the northern part of
Nepaul, which is an independent state of Hindostan, lying between Thibet and British India. Mount
Everest is a part of the eastern range of the Himalayas, and, according to measurements taken in 1856,
has an altitude of 29,002 feet, and thousands of cattle and sheep and mountain goats are herded on its
broad slopes of pasture-lands.

J. N. H.—If your puzzles are suitable for our columns, they will be accepted.

M. I. S.—The double-page pictures in Harper's Young People are bound by being fastened to a narrow
strip of paper, which is called a "guard." Any good book-binder will understand how this should be done.

A. J.—The line in question appeared in literature, and was often given as a quotation, long before the
ballad which you mention was printed.

Lulu De L.—We can not make room to print your little story.

Karl C. W., and Others.—The answers to all puzzles given in our columns are printed in full three weeks
after the publication of the puzzles.

H. D. F.—The directions for tracing a pattern on Russia crash were given in "Embroidery for Girls, No.
2," in Harper's Young People No. 57, November 30, 1880.

H. H. C.—Egypt and China are both supposed to be the oldest countries in the world, but it is impossible
to tell to which the greatest age may be assigned, as the most learned historians differ upon this point.
The earliest development of civilization was probably in Egypt. Damascus, if not the first city in the
world, was certainly one of the earliest of consequence. The date of its foundation is unknown, but it
was a flourishing place in the time of Abraham, and is mentioned in the book of Genesis.
Mamie Brooke.—If what appears to be sand and dirt will not wash off from your copper ore, we can not
tell you how to clean it, without seeing the specimen. What you consider dirt may be a coating of oxide.
—Your wiggles were received too late for insertion.

Constant Reader.—A very good mucilage, similar to that used on postage stamps, may be made as
follows: acetic acid, one part; gum-dextrine, two parts; water, five parts. Dissolve in a water-bath, which
consists of one vessel within another, like a double glue-pot, so that the mixture may be evenly heated.
When the gum is well dissolved, add one part of alcohol.

Fall River.—Make your camera box of quarter-inch black-walnut; or pine of the same thickness will do
equally well, and will be more easily worked, and cost less, and if neatly stained will make a pretty box.
The expense of your camera, apart from the lenses (see answer to Fred B. and Fred W. in Post-office
Box of No. 67), will be very small; and if you are handy with tools, you will have no trouble in the
construction, if you follow the directions and drawings given in Harper's Young People No. 63.
Perseverance and ingenuity will have a great deal to do with your success.

Charles A. G.—It is not easy to give you advice in a matter which may affect your whole life, but we
venture to suggest the trade of a printer as one by which a boy of your age, if he be industrious, can
earn his living in a very pleasant manner after he has conquered the difficulties which meet a beginner
in whatever branch of apprenticeship he may select.

Willie Lloyd and M. D. Austin.—Send your full address, and we will gladly print your requests for
exchange.

Favors are acknowledged from Addie B. McEwen, Annie H. Rundlett, Mamie E. S., Charlie Hopper,
Charles F. Bailey, Lyman C. S., Ella A., Edward L. Haines, Albert H. F., J. A. M. and A. W. W., Milard B.,
M. B. W., Istalina B., Jamie Craig, Edith M., Eva D. Aldrich, Joseph T. H., Wilfred J. Wood, George W.
Merritt, Howard Coleman, J. D. Pettigrew, E. G. Robinson, Arthur W. French, H. M. Redlein, E. A.
Folsom, Percy T. Warner, Helena Pierce, Minnie L., George C. Williams, Ferdinand Travis, Ollie J. McKay,
Louie Van A., Lena Burrows, Eudora Bishop, Belle Wallace, S. J. Coatsworth, E. B. G., Jacob S. Kinsely,
Josie L. Stone, Frank A. Taylor, Gottfried Steenken.

Correct answers to puzzles have been sent by Willie F. Robertson, Edwin Nesmith, Bessie Comstock,
Cora R. Price, "Lone Star," Dora Neville Taylor, J. M. Haydock, Willie Parkhurst, Willie F. Woolard, Percy L.
McDermott, Nellie Brainard, W. I. Trotter, "Jupiter," M. Lila Baker, "Bolus," Ed I. T., Annie De Pfuhl, James
W. Downing, Benno Myers, Karl C. Wells, Millie C. B., Blanche Jefferson, Frank Lomas, Andrew De Motte,
Fred Wieland, "Starry Flag," Grace A. McE., Jennie and May Ridgway, Charlie Haight, Grace Montgomery,
Fanny B. Squire, Willie M. Hargest.

PUZZLES FROM YOUNG CONTRIBUTORS.

No. 1.

HALF-SQUARE.
A poetic foot. To honor. Snug. To endeavor. A pronoun. A letter.
Starry Flag.

No. 2.

HOUR-GLASS PUZZLE.
A debate. Permanent. A public carriage. Kindness. A home of wild beasts. In February. Cunning. A fruit.
Decoration. An angry speech. Negligent. Centrals—An emblem of peace.
Dame Durden.

No. 3.

ENIGMA.
First in carol, not in song.
Second in justice, not in wrong.
Third in save, not in keep.
Fourth in huddle, not in heap.
Fifth in vain, not in proud.
Sixth in still, not in loud.
Seventh in slave, not in master.
Eighth in slow, not in faster.
Ninth in grieve, not in cry.
An enterprising town am I,
And though my site is drear and cold,
Men seek me for my hidden gold.

E. B.

No. 4.
NUMERICAL CHARADE.
My whole is composed of 10 letters, and has been received by many
readers of Young People.
My 4, 8, 5 is a sin.
My 6, 10, 7 is used by fishermen.
My 2, 1, 9 is the front of an army.
My 3, 9, 7 is an insect.

G. T. W.

No. 5.

ENIGMA.
In smart, not in good.
In hat, not in hood.
In plant, not in tree.
In caged, not in free.
In viola, not in flute.
The whole a Southern fruit.

Bolus.

ANSWERS TO PUZZLES IN No. 65.

No. 1.
Bombshell.

No. 2.
1. Never too late to mend.
2. Kalmia.
3. Fire-place.

No. 3.
1. Crane. 2. Owl. 3. Kite. 4. Heron. 5. Wren. 6. Robin. 7. Snow-bird. 8. Linnet.
HARPER'S YOUNG PEOPLE.
Single Copies, 4 cents; One Subscription, one year, $1.50; Five Subscriptions, one year, $7.00—payable in
advance, postage free.
The Volumes of Harper's Young People commence with the first Number in November of each year.
Subscriptions may begin with any Number. When no time is specified, it will be understood that the
subscriber desires to commence with the Number issued after the receipt of the order.
Remittances should be made by Post-Office. Money-Order or Draft, to avoid risk of loss.
HARPER & BROTHERS,
Franklin Square, N. Y.

Ho! ho! St. Valentine once more
Returns, with all his brilliant store
Of verses sweet and pictures gay;
You pick and choose whate'er you may.
Poor Bobby sees one, bright and fine;
He wants it for his Valentine.
Alas! his pennies all are spent;
For candies and for cakes they went.
What, Bobby! sobs and tears? O fie!
You can do better if you try:
Just write her one, in rhyme and metre,
And she will think it all the sweeter.

Oh! Kitty dear,
See here, see here,
Some one a Valentine has sent
For "Kitty Lee";
That's you or me—
How can we tell which one is meant?

I think 'tis me;
For, don't you see,
By dear young Tommy Dodd 'tis written;
If 'twas for you,
'Tis surely true,
It would have come from Tommy's kitten.
(And Pussy said, "Me-you!")
Here sits a maiden all forlorn,
Without her Valentine;
She's waited there since early morn—
The post brought ne'er a line.

Some love short boys,
Some love tall;
This little maiden
Loves them all.

Whoever passes,
Rain or shine,
She thinks 'tis sure
Her Valentine.
I love my neighbor over the way,
And bless the Saint who makes this day;
In coming years may her love and mine
Date from to-day and my Valentine!
*** END OF THE PROJECT GUTENBERG EBOOK HARPER'S YOUNG
PEOPLE, FEBRUARY 15, 1881 ***

Updated editions will replace the previous one—the old editions will
be renamed.

Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying
copyright royalties. Special rules, set forth in the General Terms of
Use part of this license, apply to copying and distributing Project
Gutenberg™ electronic works to protect the PROJECT GUTENBERG™
concept and trademark. Project Gutenberg is a registered trademark,
and may not be used if you charge for an eBook, except by following
the terms of the trademark license, including paying royalties for use
of the Project Gutenberg trademark. If you do not charge anything
for copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and Redistributing Project Gutenberg™ electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning
of this work.

1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project
Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or
expense to the user, provide a copy, a means of exporting a copy, or
a means of obtaining a copy upon request, of the work in its original
“Plain Vanilla ASCII” or other form. Any alternate format must
include the full Project Gutenberg™ License as specified in
paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation,
the trademark owner, any agent or employee of the Foundation,
anyone providing copies of Project Gutenberg™ electronic works in
accordance with this agreement, and any volunteers associated with
the production, promotion and distribution of Project Gutenberg™
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of the
following which you do or cause to occur: (a) distribution of this or
any Project Gutenberg™ work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg™ work, and (c) any
Defect you cause.

Section 2. Information about the Mission of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to
maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.