INTRODUCTION TO
REMOTE SENSING
Second Edition
9255_C000.fm Page ii Tuesday, February 27, 2007 12:33 PM
Arthur P. Cracknell
Ladson Hayes
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2007 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Preface
Arthur Cracknell
Ladson Hayes
Prof. Ladson Hayes read for a Doctor of Philosophy degree under the supervision of Arthur Cracknell and is now a lecturer in electrical and electronic engineering at the University of Dundee, Scotland.
Table of Contents
Chapter 6 Ground Wave and Sky Wave Radar Techniques .......... 113
6.1 Introduction ................................................................................................ 113
6.2 The Radar Equation .................................................................................. 115
6.3 Ground Wave Systems.............................................................................. 118
6.4 Sky Wave Systems .....................................................................................120
1
An Introduction to Remote Sensing
1.1 Introduction
Remote sensing may be taken to mean the observation of, or gathering of
information about, a target by a device separated from it by some distance.
The expression “remote sensing” was coined by geographers at the U.S.
Office of Naval Research in the 1960s at about the time that the use of “spy”
satellites was beginning to move out of the military sphere and into the
civilian sphere. Remote sensing is often regarded as being synonymous with
the use of artificial satellites and, in this regard, may call to mind glossy
calendars and coffee-table books of images of various parts of the Earth (see,
for example, Sheffield [1981, 1983]; Bullard and Dixon-Gough [1985]; and
Arthus-Bertrand [2002]) or the satellite images that are commonly shown on
television weather forecasts. Although satellites do play an important role
in remote sensing, remote sensing activity not only precedes the expression
but also dates from long before the launch of the first artificial satellite. There
are a number of ways of gathering remotely sensed data that do not involve
satellites and that, indeed, have been in use for very much longer than
satellites. For example, virtually all of astronomy can be regarded as being
built upon the basis of remote sensing data. However, this book is concerned
with terrestrial remote sensing. Photogrammetric techniques, using air photos for mapping purposes, were widely used for several decades before
satellite images became available. The idea of taking photographs of the
surface of the Earth from a platform elevated above the surface of the Earth
was originally put into practice by balloonists in the nineteenth century; the
earliest known photograph from a balloon was taken of the village of Petit
Bicêtre near Paris in 1859. Military reconnaissance aircraft in World War I
and, even more so, in World War II helped to substantially develop aerial
photographic techniques. This technology was later advanced by the invention and development of radar and thermal-infrared systems.
Some of the simpler instruments, principally cameras, that are used in
remote sensing also date from long before the days of artificial satellites. The
principle of the pinhole camera and the camera obscura has been known for
FIGURE 1.1
Sonar image of part of a flooded abandoned limestone mine in the West Midlands of England.
(Cook, 1985.)
the surface of the Earth — that is, we shall confine ourselves to Earth obser-
vation. This is not meant to imply that the gathering of data about other
planets in the solar system or the use of ultrasound for subsurface remote
sensing and communications purposes are unimportant. In dealing with the
observation of the Earth’s surface using remote sensing techniques, this book
will be considering a part of science that not only includes many purely
scientific problems but also has important applications in the everyday lives
of mankind. The observation of the Earth’s surface and events thereon
involves using a wide variety of instruments and platforms for the detection
of radiation at a variety of different wavelengths. The radiation itself may
be either radiation originating from the Sun, radiation emitted at the surface
of the Earth, or radiation generated by the remote sensing instruments themselves and reflected back from the Earth’s surface. A quite detailed treatise
and reference book on the subject is the Manual of Remote Sensing (Colwell,
1983; Henderson and Lewis, 1998; Rencz and Ryerson, 1999; Ustin, 2004;
Ryerson, 2006); many details that would not be proper to include in the
present book can be found in that treatise. In addition, a number of general
textbooks on the principles of Earth observation and its various applications
are available; some of these are listed in the Bibliography.
The original initiative behind the space program lay with the military. The
possibilities of aerial photography certainly began to be appreciated during
World War I, whereas in World War II, aerial photographs obtained by recon-
naissance pilots, often at very considerable risk, were of enormous importance.
The use of infrared photographic film allowed camouflaged materials to be
distinguished from the air. There is little doubt that without the military
impetus, the whole program of satellite-based remote sensing after World War
II would be very much less developed than it is now. This book will not be
concerned with the military aspects of the subject. But as far as technical details
are concerned, it would be a reasonably safe assumption that any instrument
or facility that is available in the civilian satellite program has a corresponding
instrument or facility with similar or better performance in the military program, if there is any potential or actual military need for it. As has already
been indicated, the term “remote sensing” was coined in the early 1960s at
the time that the rocket and space technology that was developed for military
purposes after World War II was beginning to be transferred to the civilian
domain. The history of remote sensing may be conveniently divided into two
periods: the period prior to the space age (up to 1960) and the period thereafter.
The distinctions between these two periods are summarized in Table 1.1.
TABLE 1.1
Comparison of the Two Major Periods in the History of Remote Sensing

Prior to Space Age (1860–1960):
Only one kind and date of photography
Heavy reliance on the human analysis of unenhanced images
Extensive use of photo interpretation keys
Relatively good military/civil relations with respect to remote sensing
Few problems with uninformed opportunists
Minimal applicability of the “multi” concept
Simple and inexpensive equipment, readily operated and maintained by resource-oriented workers
Little concern about the renewability of resources, environmental protection, global resource information systems, and associated problems related to “signature extension,” “complexity of an area’s structure,” and/or the threat imposed by “economic weaponry”
Heavy resistance to “technology acceptance” by potential users of remote sensing-derived information

Since 1960:
Many kinds and dates of remote sensing data
Heavy reliance on the machine analysis and enhancement of images
Minimal use of photo interpretation keys
Relatively poor military/civil relations with respect to remote sensing
Many problems with uninformed opportunists
Extensive applicability of the “multi” concept
Complex and expensive equipment, not readily operated and maintained by resource-oriented workers
Much concern about the renewability of resources, environmental protection, global resource information systems, and associated problems related to “signature extension,” “complexity of an area’s structure,” and/or the threat imposed by “economic weaponry”
Continuing heavy resistance to “technology acceptance” by potential users of remote sensing-derived information

Adapted from Colwell, 1983.
Remote sensing is far from being a new technique. There was, in fact, a
very considerable amount of remote sensing work done prior to 1960,
although the actual term “remote sensing” had not yet been coined. The
activities of the balloonists in the nineteenth century and the activities of the
military in World Wars I and II have already been mentioned. Following
World War II, enormous advances were made on the military front. Spy
planes were developed that were capable of revealing, for example, the
installation of Soviet rocket bases in Cuba in 1962. Military satellites were
also launched; some were used to provide valuable meteorological data for
defense purposes and others were able to locate military installations and follow
the movements of armies. In the peacetime between World Wars I and II,
substantial advances were made in the use of aerial photography for civilian
applications in areas such as agriculture, cartography, forestry, and geology.
Subsequently, archaeologists began to appreciate its potential as well. Remote
sensing, in its earlier stages at least, was simply a new area in photointerpretation. The advent of artificial satellites gave remote sensing a new dimension.
The first photographs of the Earth taken from space were obtained in the
early 1960s. Man had previously only been able to study small portions of
the surface of the Earth at one time and had painstakingly built up maps
from a large number of local observations. The Earth was suddenly seen as
an entity, and its larger surface features were rendered visible in a way that
captivated people’s imaginations. In 1972, the United States launched its first
Earth Resources Technology Satellite (ERTS-1), which was later renamed
Landsat 1. It was then imagined that remote sensing would solve almost
every remaining problem in environmental science. Initially, there was enormous confidence in remote sensing and a considerable degree of overselling
of the new systems. To some extent, this boom was followed by a period of
disillusionment when it became obvious that, although valuable information
could be obtained, there were substantial difficulties to be overcome and
considerable challenges to be met. A more realistic approach now prevails, and people have realized that remote sensing from satellites provides a
tool to be used in conjunction with traditional sources of information, such
as aerial photography and ground observation, to improve the knowledge
and understanding of a whole variety of environmental, scientific, engineering,
and human problems.
An extensive history of the development of remote sensing will be found in the book by Kramer (2002), and Dr. Kramer has produced an even more comprehensive version, which is available at http://directory.eoportal.org/pres_ObservationoftheEarthanditsEnvironment.html.
Before proceeding any further, it is worthwhile commenting on some
points that will be discussed in later sections. First, it is convenient to divide
remotely sensed material according to the wavelength of the electromagnetic
radiation used (optical, near-infrared, thermal-infrared, microwave, and
radio wavelengths). Secondly, it is convenient to distinguish between passive
and active sensing techniques. In a passive system, the remote sensing instrument simply receives whatever radiation happens to arrive and selects the
GMm/r² = mrω²   (1.1)

or

ω² = GM/r³   (1.2)

and the period of revolution, T, of the satellite is then given by:

T = 2π/ω = 2π√(r³/GM)   (1.3)
Since π, G, and M are constants, the period of revolution of the satellite
depends only on the radius of the orbit, provided the satellite is high enough
above the surface of the Earth for the air resistance to be negligible. It is very
common to put a remote sensing satellite into a near-polar orbit at about 800
to 900 km above the surface of the Earth; at that height, it has a period of
about 90 to 100 minutes. If the orbit has a larger radius, the period will be
longer. For the Moon, which has a period of about 28 days, the radius of the
orbit is about 384,400 km. Somewhere in between these two radii is one
value of the radius for which the period is exactly 24 hours, or 1 day. This
radius, which is approximately 42,250 km, corresponds to a height of about
35,900 km above the surface of the Earth. If one chooses an orbit of this
radius in the equatorial plane, rather than a polar orbit, and if the sense of
the movement of the satellite in this orbit is the same as the rotation of the
Earth, then the satellite will remain vertically over the same point on the
surface of the Earth (on the equator). This constitutes what is commonly
known as a geosynchronous or geostationary orbit.
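The relations in Equations 1.1 to 1.3 are easy to check numerically. The following sketch assumes standard values for G and for the mass and radius of the Earth (they are not given in the text):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (assumed standard value)
M = 5.972e24       # mass of the Earth, kg (assumed standard value)
R_EARTH = 6.371e6  # mean radius of the Earth, m (assumed standard value)

def period(r):
    """Period of a circular orbit of radius r (m): T = 2*pi*sqrt(r^3/(G*M)) (Equation 1.3)."""
    return 2 * math.pi * math.sqrt(r**3 / (G * M))

def radius_for_period(T):
    """Invert Equation 1.3 to find the orbital radius (m) giving period T (s)."""
    return (G * M * (T / (2 * math.pi))**2) ** (1.0 / 3.0)

# A typical near-polar remote sensing orbit, about 850 km above the surface:
print(period(R_EARTH + 850e3) / 60)      # roughly 100 minutes

# Radius and height for a 24-hour (geostationary) period:
r_geo = radius_for_period(24 * 3600)
print(r_geo / 1e3)                       # roughly 42,200 km
print((r_geo - R_EARTH) / 1e3)           # roughly 35,900 km above the surface
```

The computed values reproduce the figures quoted above: a period of 90 to 100 minutes at 800 to 900 km altitude, and a geostationary radius of about 42,250 km.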
The last point in this list, which concerns the cost to the user, may seem a little
surprising. Clearly, it is much more expensive to build a satellite platform and
sensor system, to launch it, to control it in its orbit, and to recover the data
than it would be to buy and operate a light aircraft and a good camera or
scanner. In most instances, the cost of a remote sensing satellite system has
TABLE 1.2
Uses of Remote Sensing
Archaeology and anthropology
Cartography
Geology
Surveys
Mineral resources
Land use
Urban land use
Agricultural land use
Soil survey
Health of crops
Soil moisture and evapotranspiration
Yield predictions
Rangelands and wildlife
Forestry - inventory
Forestry, deforestation, acid rain, disease
Civil engineering
Site studies
Water resources
Transport facilities
Water resources
Surface water, supply, pollution
Underground water
Snow and ice mapping
Coastal studies
Erosion, accretion, bathymetry
Sewage, thermal and chemical pollution monitoring
Oceanography
Surface temperature
Geoid
Bottom topography
Winds, waves, and currents
Circulation
Sea ice mapping
Oil pollution monitoring
Meteorology
Weather systems tracking
Weather forecasting
Heat flux and energy balance
Input to general circulation models
Sounding for atmospheric profiles
Cloud classification
Precipitation monitoring
Climatology
Atmospheric minority constituents
Surface albedo
Heat flux and energy balance
Input to climate models
Desertification
been borne by the taxpayers of one country or another. In the early days, the
costs charged to the user of the data covered little more than the cost of the
media on which the data were supplied (photographic film, computer data
storage media of the day [i.e. computer compatible tape], and so forth) plus
the postage. Subsequently, with the launch of SPOT-1 in 1986 and a change of
U.S. government policy with regard to Landsat at about the same time, the
cost of satellite data was substantially increased in order to recover some of
the costs of the ground station operation from the users of the data. To try to
recover the development, construction, and launch costs of a satellite system
from the selling of the data to users would make the cost of the data so
expensive that it would kill most possible applications of Earth observation
satellite data stone dead. What seems to have been evolving is a two-tier
system in which data for teaching or academic research purposes are provided
free or at very low cost, whereas data for commercial uses are rather expensive.
Recently, two satellite remote sensing systems have been developed on a
commercial basis (IKONOS and Quickbird); these are very high resolution
FIGURE 1.2
Causes of differences in scale of aircraft and satellite observations (aircraft altitudes of the order of thousands of feet; satellite altitudes of the order of hundreds of miles).
FIGURE 1.4
Landsat TM ground receiving stations and extent of coverage. Stations shown: Kiruna (Sweden), Prince Albert and Gatineau (Canada), Norman (USA), Fucino (Italy), Maspalomas (Spain), Riyadh (Saudi Arabia), Islamabad (Pakistan), Hyderabad (India), Bangkok (Thailand), Beijing (China), Hatoyama (Japan), Chung-Li (Taiwan), Cotopaxi (Ecuador), Cuiaba (Brazil), Parepare (Indonesia), Johannesburg (South Africa), and Alice Springs (Australia); stations not shown: Argentina, Chile, Kenya, and Mongolia. (http://geo.arc.nasa.gov/sge/1andsat/coverage.html)
The first option may be satisfactory if the amount of data received is relatively
small; however, if the data are substantial and can only be retrieved occasionally, this method may not be very suitable. The second option may be satisfactory over short distances but becomes progressively more difficult over
longer distances. The third option has some attractions and is worth a little
further consideration here. Two satellite-based data collection systems are of
importance. One involves the use of a geostationary satellite, such as Meteosat;
the other, the Argos data collection system, involves the National Oceanic and
Atmospheric Administration (NOAA) polar-orbiting operational environmental satellite (POES) (see Figure 1.5).
Using a satellite has several advantages over using a direct radio trans-
mission from the platform housing the data-collecting instruments to the
user’s own radio receiving station. One of these is simply convenience. It
saves on the cost of reception equipment and of operating staff for a receiving
station of one’s own; it also simplifies problems of frequency allocations.
There may, however, be the more fundamental problem of distance. If the
satellite is orbiting, it can store the messages on board and play them back
later, perhaps on the other side of the Earth. The Argos system accordingly
enables someone in Europe to receive data from buoys drifting in the Pacific
FIGURE 1.5
Overview of Argos data collection and platform location system: a TIROS-N series satellite downlinks to the NOAA telemetry stations at Wallops Island and Gilmore Creek, U.S.A., and the METEO telemetry station at Lannion, France; data pass to the CNES Service Argos data processing centre in France and to NESS, Suitland, U.S.A., and then on to users. (System Argos.)
FIGURE 1.6
Meteosat reception area. (European Space Agency.)
f′ = f0 (c − v cos θ)/c   (1.4)

where
c is the velocity of light,
v is the velocity of the satellite, and
θ is the angle between the line of sight and the velocity of the satellite.
FIGURE 1.7
Diagram to illustrate the principle of the location of platforms with the Argos system.
then be calculated. The position of the satellite is also known from the orbital
parameters so that a field of possible positions of the platform is obtained.
This field takes the form of a cone, with the satellite at its apex and the
velocity vector of the satellite along the axis of symmetry (see Figure 1.7).
A, B, and C denote successive positions of the satellite when transmissions
are received from the given platform. D, E, and F are the corresponding
positions at which messages are received from this platform in the following
orbit, which occurs approximately 100 minutes later. Because the altitude of
the satellite is known, the intersection of several of the cones for one orbit
(each corresponding to a separate measurement) with the altitude sphere
yields the solution for the location of the platform. Actually, this yields two
solutions: points 1 and 1’, which are symmetrically placed relative to the
ground track of the satellite. One of these points is the required solution, the
other is its “image.” This ambiguity cannot be resolved with data from a
single orbit alone, but it can be resolved with data received from two suc-
cessive orbits and a knowledge of the order of magnitude of the drift velocity
of the platform. In Figure 1.7, point 1’ could thus be eliminated. In practice,
because of the considerable redundancy, one does not need to precisely know
f0; it is enough that the transmitter frequency f0 be stable over the period of
observation. The processing of all the measurements made at A, B, C, D, E,
and F then yields the platform position, its average speed over the interval
between the two orbits, and the frequency of the oscillator.
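The geometric core of this location method is the inversion of Equation 1.4 to recover the cone half-angle θ from a measured frequency. A minimal sketch; the transmitter frequency and satellite speed below are illustrative assumptions, not values from the text:

```python
import math

C = 2.998e8  # velocity of light, m/s (assumed standard value)

def received_frequency(f0, v, theta_deg):
    """Doppler-shifted frequency at the satellite, Equation 1.4:
    f' = f0 * (c - v*cos(theta)) / c."""
    return f0 * (C - v * math.cos(math.radians(theta_deg))) / C

def cone_angle(f0, f_received, v):
    """Invert Equation 1.4: half-angle (degrees) of the cone of possible
    platform directions, given the measured frequency."""
    cos_theta = C * (1.0 - f_received / f0) / v
    return math.degrees(math.acos(cos_theta))

# Illustrative (assumed) values: a 401.65 MHz transmitter, satellite speed 7.4 km/s.
f0, v = 401.65e6, 7.4e3
f_meas = received_frequency(f0, v, 60.0)
print(f_meas - f0)                 # Doppler shift of a few kHz
print(cone_angle(f0, f_meas, v))   # recovers the 60-degree cone half-angle
```

Each frequency measurement gives one such cone; intersecting several cones with the altitude sphere yields the two candidate positions described above.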
The Argos platform location and data collection system has been opera-
tional since 1978. It was established under an agreement (Memorandum of
Understanding) between the French Centre National d’Etudes Spatiales
(CNES) and two U.S. organizations, the National Aeronautics and Space
Administration (NASA) and the NOAA. The Argos system’s main mission
is to provide an operational environmental data collection service for the
entire duration of the NOAA POES program and its successors. Argos is
currently operated and managed by Collecte, Localisation, Satellites (CLS),
a CNES subsidiary in Toulouse, France, and Service Argos, Inc., a CLS North
American subsidiary, in Largo, Maryland, near Washington D.C. (web sites:
http://www.cls.fr and http://www.argosinc.com). After several years of opera-
tional service, the efficiency and reliability of the Argos system has been
demonstrated very successfully and by 2003 there were 8000 Argos trans-
mitters operating around the world.
The Argos system consists of three segments:
The set of all users’ platforms (buoys, rafts, fixed or offshore stations,
animals, birds, etc.), each being equipped with a platform transmit-
ter terminal (PTT)
The space segment composed of the onboard data collection system
(DCS) flown on each satellite of the NOAA POES program
The ground segment for the processing and distribution of data.
Because all Argos PTTs work on the same frequency, they are particularly
easy to operate. They are also moderately priced. The transmitters can be
very small; miniaturized models can be as compact as a small matchbox,
weighing as little as 0.5 oz (15 g), with a tiny power consumption. These
features mean that Argos transmitters can be used to track small animals
and birds.
At any given time, the space segment consists of two satellites equipped
with the Argos onboard DCS. These are satellites of the NOAA POES series
that are in near-circular polar orbits with periods of about 100 minutes. Each
orbit is Sun-synchronous; that is, the angle between the orbital plane and the
Sun direction remains constant. The orbital planes of the two satellites are
inclined at 90˚ to one another. Each satellite crosses the equatorial plane at
a fixed (local solar) time each day; these are 1500 hours (ascending node)
and 0300 hours (descending node) for one satellite, and 1930 hours and 0730
hours for the other. These times are approximate as there is, in fact, a slight
precession of the orbits from one day to the next. The PTTs are not interrogated by the DCS on the satellite — they transmit spontaneously. Messages
are transmitted at regular intervals by any given platform. Time-separation
are processed along with Argos locations through the Argos system.
Results are integrated with Argos data and GPS and Argos locations appear
in the same format (a flag indicates whether a location is obtained from
Argos or GPS). Needless to say, the use of a GPS receiver impacts on the
platform’s power requirements and costs.
2
Sensors and Instruments
2.1 Introduction
Remote sensing of the surface of the Earth — whether land, sea, or atmosphere
— is carried out using a variety of different instruments. These instruments,
in turn, use a variety of different wavelengths of electromagnetic radiation. This
radiation may be in the visible, near-infrared (or reflected-infrared), thermal-
infrared, microwave, or radio wave part of the electromagnetic spectrum.
The nature and precision of the information that it is possible to extract from
a remote sensing system depend both on the sensor that is used and on the
platform that carries the sensor. For example, a thermal-infrared scanner that
is flown on an aircraft at an altitude of 500 m may have an instantaneous field
of view (IFOV), or footprint, of about 1 m² or less. If a similar instrument is flown on a satellite at a height of 800 to 900 km, the IFOV is likely to be about 1 km². This chapter is concerned with the general principles of the main sensors
that are used in Earth remote sensing. In most cases, sensors similar to the
ones described in this chapter are available for use in aircraft and on satellites,
and no attempt will be made to draw fine distinctions between sensors
developed for the two different types of platforms. Some of these instruments
have been developed primarily for use on aircraft but are being used on
satellites as well. Other sensors have been developed primarily for use on satellites, although satellite-flown sensors are generally tested with flights on
aircraft before being used on satellites. Satellite data products are popular
because they are relatively cheap and because they often yield a new source
of information that was not previously available. For mapping to high accuracy or for the study of rapidly changing phenomena over relatively small
areas, data from sensors flown on aircraft may be much more useful than
satellite data.
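The aircraft/satellite footprint comparison above follows from the small-angle relation between platform altitude and angular IFOV. A minimal sketch; the 2 mrad IFOV is an assumed illustrative value, not one from the text:

```python
def footprint_m(altitude_m, ifov_mrad):
    """Nadir ground footprint diameter (m) for an angular IFOV given in
    milliradians, using the small-angle approximation d = h * theta."""
    return altitude_m * ifov_mrad * 1e-3

# The same hypothetical 2 mrad instrument on the two platforms:
print(footprint_m(500.0, 2.0))    # 1.0 m from an aircraft at 500 m
print(footprint_m(850e3, 2.0))    # 1700 m (1.7 km) from a satellite at 850 km
```

The footprint therefore scales linearly with altitude, which is why the same class of instrument gives a metre-scale IFOV from an aircraft and a kilometre-scale IFOV from orbit.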
In this chapter we shall give a brief account of some of the relevant aspects
of the physics of electromagnetic radiation (see Section 2.2). Electromagnetic
radiation is the means by which information is carried from the surface of the
Earth to a remote sensing satellite. Sensors operating in the visible and infrared
regions of the electromagnetic spectrum will be considered in Sections 2.3 and
2.4, and sensors operating in the microwave region of the electromagnetic
spectrum will be considered in Section 2.5. The instruments that will be discussed in Sections 2.3 to 2.5 are those commonly used in aircraft or on satellites.
It should be appreciated that other systems that operate with microwaves and
radio waves are available and can be used for gathering Earth remote sensing
data using installations situated on the ground rather than in aircraft or on
satellites; because the physics of these systems is rather different from those
of most of the sensors flown on aircraft or satellites, the discussion of ground-
based systems will be postponed until later (see Chapter 6).
It is important to distinguish between passive and active sensors. A passive
sensor is one that simply responds to the radiation that is incident on the instrument. In an active instrument, the radiation is generated by the instrument,
transmitted downward to the surface of the Earth, and reflected back to the
sensor; the received signal is then processed to extract the required information.
As far as satellite remote sensing is concerned, systems operating in the visible
and infrared parts of the electromagnetic spectrum are very nearly all passive,
whereas microwave instruments are either passive or active; all these instru-
ments can be flown on aircraft as well. Active instruments operating in the
visible and infrared parts of the spectrum, while not commonly being flown on
satellites, are frequently flown on aircraft (see Chapter 5). Active instruments
are essentially based on some aspect of radar principles (see Chapters 5 to 7).
Remote sensing instruments can also be divided into imaging and nonimaging instruments. Downward-looking imaging devices produce two-dimensional
pictures of a part of the surface of the Earth or of clouds in the atmosphere.
Variations in the image field may denote variations in the color, temperature,
or roughness of the area viewed. The spatial resolution may range from
about 1 m, as with some of the latest visible-wavelength scanners or synthetic
aperture radars, to tens of kilometers, as with the passive scanning microwave radiometers. Nonimaging devices give information such as the height
of the satellite above the surface of the Earth (the altimeter) or an average
value of a parameter such as the surface roughness of the sea, the wind
speed, or the wind direction averaged over an area beneath the instantaneous
position of the satellite (see Chapter 7 in particular).
From the point of view of data processing and interpretation, the data
from an imaging device may be richer and easier to interpret visually, but
they usually require more sophisticated (digital) image-processing systems
to handle them and present the results to the user. The quantitative handling
of corrections for atmospheric effects is also likely to be more difficult for
imaging than for nonimaging devices.
FIGURE 2.1
The electromagnetic spectrum. The scales give the energy of the photons corresponding to radiation of different frequencies and wavelengths. (Barrett and Curtis, 1982.)
Although the wavelength of electromagnetic radiation can take any value from zero to infinity, radiation from only part of this range of
wavelengths is useful for remote sensing of the surface of the Earth. First of
all, there needs to be a substantial quantity of radiation of the wavelength in
question. A passive system is restricted to radiation that is emitted with a
reasonable intensity from the surface of the Earth or which is present in
reasonable quantity in the radiation that is emitted by the Sun and then
reflected from the surface of the Earth. An active instrument is restricted to
wavelength ranges in which reasonable intensities of the radiation can be
generated by the remote sensing instrument on the platform on which it is
operating. In addition to an adequate amount of radiation, it is also necessary
that the radiation is not appreciably attenuated in its passage through the
atmosphere between the surface of the Earth and the satellite; in other words,
a suitable atmospheric “window” must be chosen.
In addition to these considerations, it must also be possible to recover the
data generated by the remote sensing instrument. In practice this means that
the amount of data generated on a satellite must be able to be accommodated
both by the radio link by which the data are to be transmitted back to the
Earth and by the ground receiving station used to receive the data. These
various considerations restrict one to the use of the visible, infrared, and
microwave regions of the electromagnetic spectrum. The wavelengths
involved are indicated in Figure 2.2.
The visible part of the spectrum of electromagnetic radiation extends from
blue light with a wavelength of about 0.4 µm to red light with a wavelength
of about 0.75 µm. Visible radiation travels through a clean, dry atmosphere
with very little attenuation. Consequently, the visible part of the electromag-
netic spectrum is a very important region for satellite remote sensing work.
For passive remote sensing work using visible radiation, the radiation is
usually derived from the Sun, being reflected at the surface of the Earth.
FIGURE 2.2
Sketch to illustrate the electromagnetic spectrum.
FIGURE 2.3
Nighttime satellite image of Europe showing aurora and the lights of major cities. (Aerospace
Corporation.)
If haze, mist, fog, or dust clouds are present, the visible radiation will be
substantially attenuated in its passage through the atmosphere. At typical
values of land-surface or sea-surface temperature, the intensity of visible
radiation that is emitted by the land or sea is negligibly small. Satellite
systems operating in the visible part of the electromagnetic spectrum there-
fore usually only gather useful data during daylight hours. Exceptions to
this are provided by aurora, by the lights of major cities, and by the gas
flares associated with oil production and refining activities (see Figure 2.3).
An interesting and important property of visible radiation, by contrast
with infrared and microwave radiation, is that visible radiation, especially
toward the blue end of the spectrum, is capable of penetrating water to a
distance of several meters. Blue light can travel 10 to 20 m through clear
ocean water before becoming significantly attenuated; red light, however,
penetrates very little distance. Thus, with visible radiation, one can probe
the physical and biological properties of the near-surface layers of water
bodies, whereas with infrared and microwave radiation, only the surface
itself can be directly studied with the radiation.
Infrared radiation cannot be detected by the human eye, but it can be
detected photographically or electronically. The infrared region of the spec-
trum is divided into the near-infrared, with wavelengths from about 0.75 µm
to about 1.5 µm, and the thermal-infrared, with wavelengths from about 3 µm
to about 14 µm. The energy distribution of the radiation emitted by a black
body at absolute temperature T is given by the Planck radiation formula:

E(λ) dλ = (8πhc/λ⁵) dλ / [exp(hc/kλT) − 1]    (2.1)
where
h = Planck’s constant,
c = velocity of light, and
k = Boltzmann’s constant.
This formula was first put forward by Max Planck as an empirical relation; it
was only justified in terms of quantum statistical mechanics much later. The
value of the quantity E(λ), in units of 8πhc m⁻⁵, is given in Table 2.1 for five
different wavelengths, when T = 300 K, corresponding roughly to radiation
emitted from the Earth. In this table, values of E(λ)(r/R)² are also given for
the same wavelengths, when T = 6,000 K, where r = radius of the Sun and
R = radius of the Earth's orbit around the Sun. This gives an estimate of the
order of magnitude of the solar radiation reflected at the surface of the Earth,
leaving aside emissivities, atmospheric attenuation, and other factors.
TABLE 2.1
Estimates of Relative Intensities of Reflected Solar Radiation and Emitted Radiation From the Surface of the Earth

Wavelength (λ)              Emitted Intensity   Reflected Intensity
Blue              0.4 µm    7.7 × 10⁻²⁰         6.1 × 10²⁴
Red               0.7 µm    2.4 × 10⁰           5.1 × 10²⁴
Infrared          3.5 µm    1.6 × 10²¹          4.7 × 10²²
Thermal-infrared  12 µm     7.5 × 10²²          4.5 × 10²⁰
Microwave         3 cm      2.6 × 10¹⁰          1.3 × 10⁷

Note: Second column corresponds to E(λ) in units of 8πhc m⁻⁵ for T = 300 K;
third column corresponds to E(λ)(r/R)² in the same units for T = 6000 K.
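Order-of-magnitude estimates of this kind can be reproduced with a short numerical sketch of Equation 2.1. The following Python sketch is illustrative (the function name, the rounded values of the physical constants, and the choice of sample wavelengths are ours):

```python
import math

h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # velocity of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def planck_relative(wavelength, T):
    """E(lambda) of Equation 2.1 in units of 8*pi*h*c m^-5,
    i.e. with the constant factor divided out, as in Table 2.1."""
    return 1.0 / (wavelength ** 5 * (math.exp(h * c / (k * wavelength * T)) - 1.0))

# Dilution factor (r/R)^2 for solar radiation arriving at the Earth's orbit
r_sun = 6.96e8       # radius of the Sun, m
R_orbit = 1.496e11   # radius of the Earth's orbit, m
dilution = (r_sun / R_orbit) ** 2

for name, wl in [("blue", 0.4e-6), ("thermal-IR", 12e-6), ("microwave", 3e-2)]:
    emitted = planck_relative(wl, 300.0)                # Earth at 300 K
    reflected = planck_relative(wl, 6000.0) * dilution  # Sun at 6000 K, diluted
    print(f"{name}: emitted {emitted:.1e}, reflected {reflected:.1e}")
```

The sketch shows the dominance of reflected sunlight at short wavelengths and of emitted radiation at thermal-infrared and microwave wavelengths.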
From Table 2.1 it can be seen that at optical and very near-infrared wave-
lengths the emitted radiation is negligible compared with the reflected radia-
tion. At wavelengths of about 3 or 4 µm, both emitted and reflected radiation
are important, whereas at wavelengths of 11 or 12 µm, the emitted radiation
is dominant and the reflected radiation is relatively unimportant. At micro-
wave wavelengths, the emitted radiation is also dominant over natural
reflected microwave radiation; however, as the use of man-made microwave
radiation for telecommunications increases, the contamination of the signals
from the surface of the land or sea becomes more serious. A strong infrared
radiation absorption band separates the thermal-infrared part of the spectrum
into two regions, or windows, one between roughly 3 µm and 5 µm and the
other between roughly 9.5 µm and 13.5 µm (see Figure 2.1). Assuming the
emitted radiation can be separated from the reflected radiation, satellite remote
sensing data in the thermal-infrared part of the electromagnetic spectrum can
be used to determine the temperature of the surface of the land or sea, provided
the emissivity of the surface is known. The emissivity of water is known; in
fact, it is very close to unity. For land, however, the emissivity varies widely
and its value is not very accurately known. Thus, infrared remotely sensed
data can readily be used for the measurement of sea-surface temperatures, but
their interpretation for land areas is more difficult. Aircraft-flown thermal-
infrared scanners are widely used in surveys to study heat losses from roof
surfaces of buildings as well as in the study of thermal plumes from sewers,
factories, and power stations. Figure 2.4 highlights the discharge of warm
sewage into the River Tay. It can be seen that the sewage dispersion is not
particularly effective in the prevailing conditions. Because this is a thermal
image, the tail-off with distance from the outfall is possibly more a measure
of the rate of cooling than the dispersal of the sewage.
In the study of sea-surface temperatures using the 3 to 5 µm range, it is
necessary to restrict oneself to the use of nighttime data in order to avoid
the considerable amount of reflected thermal-infrared radiation that is
present at these wavelengths during the day. This wavelength range is used
for channel (or band) 3 of the Advanced Very High Resolution Radiometer
(AVHRR) (see Section 3.2.1). This channel of the AVHRR can accordingly be
used to study surface temperatures of the Earth at night only. For the 9.5 to
13.5 µm wavelength range, the reflected solar radiation is much less impor-
tant and so data from this wavelength range can be used throughout the
day. However, even in these two atmospheric windows, the atmosphere is
still not completely transparent and accurate calculations of Earth-surface
temperatures or emissivities from thermal-infrared satellite data must incor-
porate corrections to allow for atmospheric effects. These corrections are
discussed in Chapter 8. Thermal-infrared radiation does not significantly
penetrate clouds, so one should remember that in cloudy weather it is the
temperature and emissivity of the upper surface of the clouds — not of the
land or sea surface of the Earth — that are being studied.
In microwave remote sensing of the Earth, the range of wavelengths used
is from about 1 mm to several tens of centimeters. The shorter wavelength
FIGURE 2.4
A thermal plume in the Tay Estuary, Dundee: (a) thermal-infrared scanner image; (b) enlarged and thermally contoured area from within box in (a). (Wilson and Anderson, 1984.)
corrupt the signal that is transmitted from the satellite, reflected at the surface
of the Earth, and finally received back at the satellite. A third difference is that
the wavelengths of the microwave radiation used are comparable in size to
many of the irregularities of the surface of the land or the sea. Therefore, the
remote sensing instrument may provide data that enables one to obtain infor-
mation about the roughness of the surface that is being observed. This is of
particular importance when studying oceanographic phenomena.
FIGURE 2.5
Classification scheme for sensors covering the visible and thermal-infrared range of the electromagnetic spectrum.
FIGURE 2.6
Landsat MSS scanning system. (National Aeronautics and Space Administration [NASA 1976].)
FIGURE 2.7
The first Meteosat satellite, Meteosat-1.
In a mechanical line-scanning system, the rotating scan mirror views a given area beneath it and concentrates the radiation from that IFOV
onto the detecting system; successive pixels in the scan line are generated
by data from successive positions of the mirror as it rotates and receives
radiation from successive IFOVs. For a polar-orbiting satellite, the advance
to the next scan line is achieved by the motion of the satellite. For a geosta-
tionary satellite, line-scanning is achieved by having the satellite spinning
about an axis parallel to the axis of rotation of the Earth; the advance to the
next scan line is achieved by adjusting the look direction of the optics —
that is, by tilting the mirror. For example, Meteosat-1, which was launched
into geostationary orbit at the Greenwich Meridian, spins at 100 rpm about
an axis almost parallel to the N-S axis of the Earth (see Figure 2.7). Changes
in inclination, spin rate, and longitudinal position are made, when required,
by using a series of thruster motors that are controlled from the ground.
The push-broom scanner is an alternative scanning system that has no mov-
ing parts. It has a one-dimensional array of CCDs that is used in place of a
scanning mirror to achieve cross-track scanning. No mechanical scanning is
involved; a whole scan line is imaged optically onto the CCD array and the
scanning along the line is achieved from the succession of signals from the
responses of the detectors in the array. At a later time, the instrument is moved
forward, the next scan line is imaged on the CCD array, and the responses are
obtained electronically — in other words, the advance from one scan line to
the next is achieved by the motion of the satellite (see Figure 2.8).
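The way a push-broom image is assembled can be sketched in a few lines: each readout of the detector array is one complete scan line, and the platform's forward motion supplies the along-track dimension. The function names and the toy 2048-pixel array below are illustrative, not taken from any particular instrument:

```python
import numpy as np

def acquire_pushbroom(read_line, n_lines):
    """Stack successive 1-D detector-array readouts into a 2-D image:
    each readout is a whole cross-track scan line."""
    return np.stack([read_line(i) for i in range(n_lines)], axis=0)

# Toy stand-in for the CCD line array: 2048 pixels per scan line.
rng = np.random.default_rng(0)
def read_line(i):
    # A real instrument would return the detector responses for line i;
    # here random values stand in for the scene radiance.
    return rng.integers(0, 256, size=2048, dtype=np.uint8)

image = acquire_pushbroom(read_line, 100)
print(image.shape)   # (100, 2048): 100 scan lines of 2048 cross-track pixels
```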
FIGURE 2.8
Sketch of push-broom or along-track scanner. (Sabins, 1986.)
In such a data set, the x direction represents distance along the flight line and the y direction represents distance at right angles to the flight line. The z direction represents the band number or, on
a linear scale if the bands are equally spaced, the wavelength. For any given
value of z, the horizontal sheet of intensities corresponds to the image of the
ground at one particular wavelength.
A great deal of information can be extracted from a monochrome image
obtained from one band of an MSS or hyperspectral scanner. The image can
be handled as a photographic product and subjected to the conventional
techniques of photointerpretation. The image can also be handled on a digital,
interactive image-processing system and various image-enhancement
operations, such as contrast enhancement, edge enhancement, and density
slicing, can be applied to the image. These techniques are discussed in
Chapter 9. However, more information can usually be extracted by using
the data from several bands and thereby exploiting the differences in the
reflectivity, as a function of wavelength, of different objects on the ground.
The data from several bands can be combined visually, for example, by using
three bands and putting the pictures from these bands onto the three guns
of a color television monitor or onto the primary-color emulsions of a color
film. The colors that appear in an image that is produced in this way will
not necessarily bear any simple relationship to the true colors of the original
objects on the ground when they are viewed in white light from the Sun.
Examples of such false color composites abound in many coffee-table books
of satellite-derived remote sensing images (see Figure 2.9, for example).
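Assigning three bands to the red, green, and blue channels of a display can be sketched as follows. The function names and the tiny toy arrays are illustrative; showing the near-infrared band as red is the classic false-colour choice, under which healthy vegetation appears bright red:

```python
import numpy as np

def false_color_composite(band_r, band_g, band_b):
    """Map three scanner bands onto the red, green, and blue display
    channels, stretching each band independently to the 0-255 range."""
    def stretch(band):
        lo, hi = band.min(), band.max()
        return ((band - lo) / (hi - lo) * 255).astype(np.uint8)
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

# Toy 2 x 2 pixel bands (reflectivities between 0 and 1)
nir   = np.array([[0.6, 0.1], [0.5, 0.05]])   # near-infrared -> red channel
red   = np.array([[0.1, 0.3], [0.1, 0.3]])    # red band      -> green channel
green = np.array([[0.1, 0.2], [0.1, 0.2]])    # green band    -> blue channel
rgb = false_color_composite(nir, red, green)
print(rgb.shape)   # (2, 2, 3)
```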
Colored images are widely used in remote sensing work. In many
instances, the use of color enables additional information to be conveyed
visually that could not be conveyed in a black-and-white monochrome
image, although it is not uncommon for color to be added for purely cosmetic
purposes. Combining data from several different bands of an MSS to produce
a false color composite image for visual interpretation and analysis suffers
from the restriction that the digital values of three bands only can be used
as input data for a given pixel in the image. This means that only three bands
can be handled simultaneously; if more bands are used, then combinations
or ratios of bands must be taken before the data are used to produce an
image and, in that case, the information available is not being exploited to
the full. Full use of the information available in all the bands can be made
if the data are analyzed and interpreted with a computer. The numerical
methods that are used for handling multispectral data will be considered in
some detail in Chapter 9. Different surfaces generally have different reflec-
tivities in different parts of the spectrum. Accordingly, an attempt may be
made to identify surfaces from their observed reflectivities. In doing this one
needs to consider not just the fraction of the total intensity of the incident
sunlight that is reflected by the surface but also the distribution of the
reflectivity as a function of wavelength. This reflectivity spectrum can be
regarded as characteristic of the nature of the surface and is sometimes
described as a spectral “signature” by which the nature of the surface may
be identified. However, the data recovered from an MSS do not provide
reflectivity as a continuous function of wavelength; one only obtains a
FIGURE 2.10
Sketch to illustrate the relation between a continuous reflectivity distribution and the band-integrated values (broken line histogram).
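The band-integration of Figure 2.10 can be sketched numerically: a continuous reflectivity spectrum is averaged over each band's wavelength interval, leaving one value per band. The function name and the toy spectrum are ours; the four band limits are modelled on the Landsat MSS bands (0.5–0.6, 0.6–0.7, 0.7–0.8, and 0.8–1.1 µm):

```python
import numpy as np

def band_integrated(wavelengths, reflectivity, bands):
    """Average a continuous reflectivity spectrum over each band's
    wavelength interval, as a multispectral scanner does."""
    out = []
    for lo, hi in bands:
        mask = (wavelengths >= lo) & (wavelengths < hi)
        out.append(reflectivity[mask].mean())
    return np.array(out)

# Toy continuous spectrum sampled every 0.01 um between 0.5 and 1.1 um
wl = np.arange(0.5, 1.1, 0.01)
spectrum = 0.2 + 0.3 * np.sin(2 * np.pi * (wl - 0.5))   # illustrative curve
bands = [(0.5, 0.6), (0.6, 0.7), (0.7, 0.8), (0.8, 1.1)]
print(band_integrated(wl, spectrum, bands))   # one value per band
```

The four numbers printed are the broken-line histogram of Figure 2.10: the scanner records these, not the continuous curve.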
FIGURE 2.11
Schematic view of an airborne infrared scanning system.
The atoms and molecules of any object at a temperature above absolute zero are in constant motion and in continuous collision with each other. These motions and
collisions give rise to the emission of electromagnetic radiation over a broad
range of wavelengths. The temperature of an object affects the quantity of
the continuum radiation it emits and determines the wavelength at which
the radiation is a maximum (λmax). The value of this wavelength, λmax, can
actually be derived from the Planck radiation formula in Equation 2.1 by
considering the curve for a constant value of T and differentiating with
respect to λ to find the maximum of the curve. The result is expressed as
Wien's displacement law:

λmax T = 2.898 × 10⁻³ m K    (2.2)
FIGURE 2.12
Planck distribution function for black body radiation at (a) 293 K and (b) a number of other temperatures; note the change of scale between (a) and (b).
A black body is a perfect emitter, with an emissivity of unity; all other bodies have emissivities less than unity.
Wien’s displacement law describes the broadband emission properties of an
object. As indicated in Section 2.2, Planck’s radiation law gives the energy
distribution within the radiation continuum produced by a black body.
Using the Planck relationship (Equation 2.1), one can draw the shape of
the energy distribution from a black body at a temperature of 293 K (20°C
[68°F]), the typical temperature of an object viewed by an infrared scanner.
The 5 to 20 µm range is also commonly referred to as the thermal-infrared
region, as it is in this region that objects normally encountered by human
beings radiate their heat. From Figure 2.12 it can also be seen that the energy
maximum occurs at 10 µm, which is fortuitous because an atmospheric
transmission window exists around this wavelength. To explain what is
meant by an atmospheric window, it should be realized that the atmosphere
attenuates all wavelengths of electromagnetic radiation differently due to
the absorption spectra of the constituent atmospheric gases. Figure 2.13 shows
the atmospheric absorption for a range of wavelengths, with some indication
of the gases that account for this absorption. It can be seen then from the
lowest curve in Figure 2.13, which applies to the whole atmosphere, that
there is a region of high atmospheric transmittance between 8 and 14 µm and
it is this waveband that is used for temperature studies with airborne and
satellite-flown radiometers. This region of the spectrum is also the region in
which there is maximum radiation for the range of temperatures seen in
terrestrial objects (for example, ground temperatures, buildings, and roads).
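The position of the emission maximum can be checked with a short sketch of Wien's displacement law; the value of the displacement constant, 2.898 × 10⁻³ m K, is assumed:

```python
WIEN_B = 2.898e-3   # Wien displacement constant, m K

def peak_wavelength(T):
    """Wavelength of maximum black-body emission (metres)
    for a body at absolute temperature T (kelvin)."""
    return WIEN_B / T

# A surface at 293 K peaks close to 10 um, inside the 8 to 14 um
# atmospheric window; the Sun, at about 6000 K, peaks in the visible.
print(peak_wavelength(293.0) * 1e6)    # ~9.9 micrometres
print(peak_wavelength(6000.0) * 1e9)   # ~483 nanometres
```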
The total radiation emitted from a body at a temperature T is given by the
well-known Stefan-Boltzmann Law:
E = σT⁴    (2.3)
Accordingly, if the total radiation emitted is measured, the temperature of
the body may then be determined. Equation 2.3 was originally put forward
as an empirical formula, but it can be derived by integrating the Planck
distribution function in Equation 2.1, for a given temperature, over the whole
range of λ, from zero to infinity. This also yields an expression for σ.

FIGURE 2.13
Whole atmosphere transmittance (the panels of the figure show the absorptivity of CH4, N2O, O2 and O3, CO2, H2O, and the whole atmosphere, for wavelengths between 0.1 and 30 µm, together with a detail of the H2O spectrum near 2 µm).
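This integration can be verified numerically. The sketch below integrates the energy density E(λ) of Equation 2.1 over wavelength and converts the result to radiant exitance with the standard factor c/4, recovering σT⁴; the function names, grid choices, and rounded constants are ours:

```python
import math

h = 6.626e-34     # Planck's constant, J s
c = 2.998e8       # velocity of light, m/s
k = 1.381e-23     # Boltzmann's constant, J/K
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_energy_density(wl, T):
    """Spectral energy density E(lambda) of Equation 2.1, J m^-4."""
    x = h * c / (k * wl * T)
    if x > 700:   # exp would overflow; the contribution is negligible
        return 0.0
    return 8 * math.pi * h * c / (wl ** 5 * (math.exp(x) - 1.0))

def total_exitance(T, n=20000):
    """Integrate E(lambda) over wavelength (trapezoidal rule on a
    logarithmic grid) and convert energy density to radiant exitance
    with the factor c/4."""
    lo, hi = 1e-8, 1e-3   # integration limits bracket the emission peak
    step = (math.log(hi) - math.log(lo)) / n
    total = 0.0
    for i in range(n):
        wl_a = lo * math.exp(i * step)
        wl_b = lo * math.exp((i + 1) * step)
        # d(lambda) = lambda d(ln lambda), hence the extra factor wl
        fa = planck_energy_density(wl_a, T) * wl_a
        fb = planck_energy_density(wl_b, T) * wl_b
        total += 0.5 * (fa + fb) * step
    return (c / 4.0) * total

T = 300.0
print(total_exitance(T), SIGMA * T ** 4)   # both close to 459 W m^-2
```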
Airborne infrared surveys are flown along parallel lines at fixed line spacing
and flying height and, because thermal surveys are usually flown in darkness,
a sophisticated navigation system is invariably required. This may take the form
of ground control beacons mounted on vehicles. Predawn surveys are normally
flown because thermal conditions tend to stabilize during the night and temper-
ature differences on the surface are enhanced. During daytime, solar energy heats
the Earth’s surface and may accordingly contaminate the information sought.
The predawn period is also optimal for flying because turbulence that can cause
aircraft instability, and consequently image distortion, is at a minimum. The
results are usually printed like conventional black-and-white photographs,
showing hot surfaces as white and cool surfaces as dark. The term infrared
thermography is commonly applied to the determination of temperatures, using
infrared cameras or scanners, for studying objects at close range or from an
aircraft. This term tends not to be used with thermal-infrared data from satellites.
Passive microwave sensors are also capable of gathering data at
night as well as during the day because they sense emitted radiation rather
than reflected solar radiation. However, the spatial resolution of passive
microwave sensors is very poor compared with that of visible and infrared
scanners. There are two reasons for this. First, the wavelength of microwaves
is much longer than those of visible and infrared radiation and the theoretical
limit to the spatial resolution depends on the ratio of the wavelength of the
radiation to the aperture of the sensing instrument. Secondly, as already
mentioned, the intensity of microwave radiation emitted or reflected from
the surface of the Earth is very low. The nature of the environmental and
geophysical information that can be obtained from a microwave scanner is
complementary to the information that can be obtained from visible and
infrared scanners.
Passive microwave radiometry applied to investigations of the Earth’s
surface involves the detection of thermally generated microwave radiation.
The characteristics of the received radiation, in terms of the variation of
intensity, polarization properties, frequency, and observation angle, depend
on the nature of the surface being observed and on its emissivity. The part
of the electromagnetic spectrum with which passive microwave radiometry
is concerned is from ~1 GHz to ~200 GHz or, in terms of wavelengths, from
~0.15 cm to ~30 cm.
Figure 2.14 shows the principal elements of a microwave radiometer.
Scanning is achieved by movement of the antenna and the motion of the
platform (aircraft or satellite) in the direction of travel. The signal is very
small, and one of the main problems is to reduce the noise level of the receiver
itself to an acceptable level. After detection, the signal is integrated to give
a suitable signal-to-noise value. The signal can then be stored on a tape
recorder on board the platform or, in the case of a satellite, it may then be
transmitted by a telemetry system to a receiving station on Earth.
The spatial resolution of a passive microwave radiometer depends on the
beamwidth of the receiving antenna, the aperture of the antenna, and the
wavelength of the radiation, as represented by the equation:
AG = λ²R² sec²θ / AA    (2.4)

where
AG is the area viewed (resolved normally),
λ is the wavelength,
R is the range,
AA is the area of the receiving aperture, and
θ is the scan angle.
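Equation 2.4 can be evaluated with a short sketch. The numbers below, a range of 850 km and a 1 m² aperture viewed at nadir, are assumptions for illustration only:

```python
import math

def resolved_area(wavelength, rng, aperture_area, scan_angle_deg=0.0):
    """Ground area resolved by a radiometer (Equation 2.4):
    AG = lambda^2 R^2 sec^2(theta) / AA, all lengths in metres."""
    sec = 1.0 / math.cos(math.radians(scan_angle_deg))
    return (wavelength ** 2) * (rng ** 2) * (sec ** 2) / aperture_area

R = 850e3   # assumed range, m
ir = resolved_area(11e-6, R, 1.0)   # thermal-infrared, 11 um
mw = resolved_area(3e-2, R, 1.0)    # microwave, 3 cm
print(ir, mw, mw / ir)   # the microwave cell is millions of times larger in area
```

For a fixed aperture the resolved area scales as λ², so going from 11 µm to 3 cm coarsens the linear resolution by a factor of nearly 3,000.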
The spatial resolution decreases by three or four orders of magnitude for a given
size of antenna from the infrared to the microwave region of the electromagnetic
spectrum. For example, the thermal-infrared channels of the AVHRR flown on
FIGURE 2.14
Scanning multichannel (or multifrequency) microwave radiometer (SMMR).
S/N = F TS⁴λ² / (R²TR⁴)    (2.5)

where
TS is the brightness temperature of the target,
TR is the temperature of the receiver, and
R is the range.
• Time for the emitted pulse of radiation to travel from the satellite to
the ground and back to the satellite
• Doppler shift in the frequency of the radiation as a result of relative
motion of the satellite and the ground
• Polarization of the radiation (although polarization can also be mea-
sured by passive instruments).
The relationships used to determine the sea state and wind speed are essen-
tially empirical. These empirical relationships are based originally on mea-
surements obtained with altimeters flown on aircraft and calibrated with
surface data; subsequent refinements of these relationships have been
achieved using satellite data. Accuracies of ±1.5 ms−1 are claimed for the
derived wind speeds.
The scatterometer is another active microwave instrument that can be used
to study sea state. Unlike the altimeter, which uses a single beam directed
vertically downward from the spacecraft, the scatterometer uses a more
complicated arrangement that involves a number of radar beams that enable
the direction as well as the speed of the wind to be determined. It was
possible to determine the wind direction to within ±20° with the scatterom-
eters on the Seasat, ERS-1, and ERS-2 satellites. Further details of active
microwave systems are presented in Chapter 7.
The important imaging microwave instruments are the passive scanning
multichannel, multispectral, or multifrequency microwave radiometers and
the active SARs. It has already been noted that passive radiometry is limited
by its poor spatial resolution, which depends on the range, the wavelength of
the radiation used, the aperture of the antenna, and the signal/noise ratio. The
signal/noise ratio in turn is influenced by the strength of the signal produced
by the target and by the temperature and sensitivity of the receiver. Ideally, a
device is required that can operate in all weather conditions, that can operate
both during the day and during the night, and that has adequate spatial
resolution for whatever purpose it is required to use the instrument in an Earth
observation program. For many remote-sensing applications, passive micro-
wave radiometers cannot satisfy the third requirement. An active microwave
instrument, that is some kind of radar device, meets the first two of these
conditions, the conditions concerning all-weather and nighttime operation.
When used on an aircraft, conventional imaging radars are able to give very
useful information about a variety of phenomena on the surface of the Earth.
Accordingly, conventional (side-looking airborne) radars are frequently flown
on aircraft for remote sensing work. However, when it comes to carrying an
imaging radar on board a satellite, calculations of the size of antenna that
would be required to achieve adequate spatial resolution show that one would
need an antenna that was enormously larger than one could possibly hope to
mount on board a satellite. SAR has been introduced to overcome this problem.
In a SAR, reflected signals are received from successive positions of the
antenna as the platform moves along its path. In this way, an image is built
up that is similar to the image one would obtain from a real antenna of several
hundreds of meters or even a few kilometers in length. Whereas in the case
of a radiometer or scanner, an image is produced directly and simply from
the data transmitted back to Earth from the platform, in the case of a SAR, the
reconstruction of an image from the transmitted data is much more compli-
cated. It involves processing the Doppler shifts of the received radiation. (This
will be described further in Chapter 7).
FIGURE 2.16
Displacement of a ship relative to its wake in a SAR image; data from Seasat orbit 834 of
August 24, 1978 processed digitally. (RAE Farnborough.)
d = (1/2)vt    (2.6)
FIGURE 2.17
Artist's impression of a side scan sonar transducer beam: A = slant range, B = towfish height above bottom, C = horizontal range. (Klein Associates Inc.)
record, termed a sonograph, in such a way that the time scan can easily
be calibrated in terms of distance across the seabed. The first echo in any
scan is the bottom echo, with subsequent echoes being reflected from
features ranging across the seabed to the outer limit of the scan. A number
of points should be noted. The range scale shown on a sonograph is usually
not the true range across the seabed but the slant range of the sound beam
(A in Figure 2.17) and, as with the echo sounder, distances indicated on a
record depend on an assumption about the velocity of sound in water
because the distance is taken to be equal to 1/2vt. If properly calibrated,
the sonograph will show the correct value for B, the depth of water beneath
the fish, which is presented as a depth profile on a sonograph. Echoes
reflected across the scan, subsequent to the seabed echo (from points X to
Y in Figure 2.17), are subject to slant range distortion: the actual distance
scanned across the seabed is
C = √(A² − B²)    (2.7)
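Equations 2.6 and 2.7 together can be sketched in a few lines. The assumed sound speed of 1,500 m s⁻¹ and the example travel time and towfish height are illustrative only:

```python
import math

SOUND_SPEED = 1500.0   # assumed speed of sound in seawater, m/s

def echo_range(two_way_time):
    """Distance from echo travel time, d = (1/2)vt (Equation 2.6)."""
    return 0.5 * SOUND_SPEED * two_way_time

def horizontal_range(slant_range, fish_height):
    """Correct a sonograph slant range A to the true across-seabed
    distance C = sqrt(A^2 - B^2) (Equation 2.7)."""
    return math.sqrt(slant_range ** 2 - fish_height ** 2)

# A target whose echo returns after 0.2 s, with the towfish 50 m
# above the seabed:
A = echo_range(0.2)                 # 150 m slant range
print(horizontal_range(A, 50.0))    # ~141.4 m across the seabed
```

The difference between the 150 m slant range and the 141 m horizontal range is the slant-range distortion referred to above.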
3
Satellite Systems
3.1 Introduction
In April 1960, only 2.5 years after the first man-made satellite orbited the
Earth, the United States began its environmental satellite program with the
launch of TIROS-1, the first satellite in its TIROS (Television InfraRed
Observation Satellite) series. This achievement clearly demonstrated the
possibility of acquiring images of the Earth’s cloud systems from space, and
TIROS became the first in a long series of satellites launched primarily for
the purpose of meteorological research. A detailed account of the first
30 years of meteorological satellite systems is given by Rao et al. (1990). An
enormous number of other satellites have now been launched for a wide
range of environmental remote sensing work.
In this chapter, a few of the important features of some remote sensing
satellite systems are outlined. Rather than attempt to describe every system
that has ever been launched, this chapter will concentrate on those that are
reasonably widely used by scientists and engineers who actually make use
of remote sensing data collected by satellites. A comprehensive survey of Earth
observation satellite systems is given by Kramer (2002) and in Dr Kramer’s
updated version of his book which is available on the following website:
http://directory.eoportal.org/pres_ObservationoftheEarthanditsEnvironment.html.
A lot of information gathered by the Committee on Earth Observation Sat-
ellites can also be found on the website http://www.eohandbook.com.
Consideration is given in this chapter to the spatial resolution, spectral
resolution, and frequency of coverage of the different systems. Although it is
also important to consider the atmospheric effects on radiation traveling from
the surface of the Earth to a satellite, as they do influence the design of the
remote-sensing systems themselves, an extensive discussion of this topic is
postponed until Chapter 8, as the consideration of atmospheric effects is of
importance primarily at the data processing and interpretation stages.
In the early days, the main players in space programs for Earth resources
monitoring were the United States and the former Union of Soviet Socialist
Republics (USSR). In the meteorological field, the programs were similar
9255_C003.fm Page 50 Friday, February 16, 2007 11:08 PM
Satellite Systems 51
TABLE 3.1
Overview of Polar-Orbiting Meteorological Satellite Series

Satellite Series (Agency) | Launch | Major Instruments | Comments
NOAA-2 to -5 (NOAA) | October 21, 1971, to July 29, 1976 | VHRR | 2580-km swath
TIROS-N (NOAA POES) | October 13, 1978 | AVHRR | >2600-km swath
NOAA-15 and NOAA-L, -M, -N, and -N′ | May 13, 1998, to 2007 | AVHRR/3 | >2600-km swath
DMSP Block 5D-1 (DoD) | September 11, 1976, to July 14, 1980 | OLS | 3000-km swath
DMSP Block 5D-2 (DoD) | December 20, 1982, to April 4, 1997 | OLS, SSM/I | SSMIS replaces SSM/I starting with F-16 (2001)
DMSP Block 5D-3 (DoD) | December 12, 1999 | OLS, SSM/I |
Meteor-3 series (ROSHYDROMET) | October 24, 1985 | MR-2000M | 3100-km swath
Meteor-3M series (ROSHYDROMET) | 2001 (Meteor-3M-1) | MR-900B | 2600-km swath
FY-1A to -1D (Chinese Meteorological Administration) | September 7, 1988, to May 15, 2002 | MVISR | 2800-km swath
MetOp-1 (EUMETSAT) | * | AVHRR/3, MHS, IASI | PM complement to NOAA POES series
NPP (NASA/IPO) | * | VIIRS, CrIS, ATMS | NPOESS Preparatory Project
NPOESS (IPO) | * | VIIRS, CMIS, CrIS | Successor to NOAA POES and DMSP series

(Adapted from Kramer, 2002)
* Not yet launched at time of writing.
The last six spacecraft of the series, the ATN spacecraft, were larger and
had additional solar panels to provide more power. This additional space
and power onboard enabled extra instruments to be carried, such as the
Earth Radiation Budget Experiment (ERBE), the Solar Backscatter Ultraviolet
instrument (SBUV/2), and a search and rescue system. The search and rescue
system uses the same location principles as the Argos system but is separate
from Argos (we follow Kramer [2002] in calling it S&RSAT rather than
SARSAT to avoid confusion with synthetic aperture radar [SAR]). The ERBE
is a National Aeronautics and Space Administration (NASA) research instru-
ment. ERBE data contribute to understanding the total and seasonal plane-
tary albedo and Earth radiation balances, zone by zone. This information is
used for recognizing and interpreting seasonal and annual climate variations
and contributes to long-term climate monitoring, research, and prediction.
The SBUV radiometer is a nonscanning, nadir-viewing instrument designed to
TABLE 3.2
Overview of Geostationary Meteorological Satellites

Spacecraft Series (Agency) | Launch | Major Instrument | Comment
ATS-1 to ATS-6 (NASA) | December 6, 1966, to August 12, 1969 | SSCC (MSSCC on ATS-3) | Technical demonstration
GOES-1 to -7 (NOAA) | October 16, 1975, to February 26, 1987 | VISSR | First generation
GOES-8 to -12 (NOAA) | April 13, 1994, to July 23, 2001 | GOES-Imager, Sounder | Second generation
GMS-1 to -5 (JMA) | July 14, 1977, to March 18, 1995 | VISSR (GOES heritage) | First generation
MTSAT-1 (JMA et al.) | November 15, 1999 (launch failure of H-2 vehicle) | JAMI | Second generation
MTSAT-1R (JMA) | February 26, 2005 | JAMI |
MTSAT-2 | February 18, 2006 | JAMI |
Meteosat-1 to -7 (EUMETSAT) | November 23, 1977, to September 3, 1997 | VISSR | First generation
MSG-1 (EUMETSAT) | August 28, 2002 | SEVIRI, GERB | Second generation
INSAT-1B to -1D (ISRO) | August 30, 1983, to June 12, 1990 | VHRR |
INSAT-2A to -2E (ISRO) | July 9, 1992, to April 3, 1999 | VHRR/2 | Starting with -2E
INSAT-3B, -3C, -3A, -3E (ISRO) | March 22, 2000, to September 28, 2003 | |
MetSat-1 (ISRO) | September 12, 2002 | VHRR/2 | Weather satellite only
GOMS-1 (Russia/Planeta) | October 31, 1994 | STR | First generation
FY-2A, -2B (CMA, China) | June 10, 1997, and July 26, 2000 | S-VISSR |

(Adapted from Kramer, 2002)
measure scene radiance in the ultraviolet spectral region from 160 to 400 nm.
SBUV data are used to determine the vertical distribution of ozone and the
total ozone in the atmosphere as well as solar spectral irradiance. The
S&RSAT is part of an international program to save lives. The S&RSAT
equipment on POES is provided by Canada and France. Similar Russian
equipment, called COSPAS (Space System for the Search of Distressed
Vessels), is carried on the Russian polar-orbiting spacecraft. The S&RSAT
and COSPAS systems relay emergency radio signals from aviators, mariners,
and land travelers in distress to ground stations, where the location of the
distress signal transmitter is determined. Information on the nature and
location of the emergency is then passed to a mission control center that
alerts the rescue coordination center closest to the emergency. Sketches of
spacecraft of the TIROS-NOAA series are shown in Figure 3.1.
FIGURE 3.1
Sketches of (a) TIROS-N spacecraft (Schwalb, 1978) and (b) Advanced TIROS-N spacecraft (ITT).
The first VHRR channel measured reflected visible radiation from cloud tops
or the Earth’s surface in the limited spectral range of 0.6 to 0.7 µm. The
second channel measured thermal-infrared radiation emitted from the Earth,
sea, and cloud tops in the 10.5 to 12.5 µm region. This spectral region
permitted both daytime and nighttime radiance measurements and the
determination of the temperature of the cloud tops and of the sea surface in
cloud-free areas, both during daytime and at night. Improvements were
made through the third and fourth generations and, starting with TIROS-
N, the system has delivered digital scanner data rather than analogue data.
TIROS-N had a new set of data gathering instruments. The instruments
flown on TIROS-N and its successors include the TIROS Operational Ver-
tical Sounder (TOVS), the Advanced Very High Resolution Radiometer
(AVHRR), the Argos data collection system (see Section 1.5.2), and the
Space Environment Monitor (SEM). The TOVS is a three-instrument system
consisting of the High-Resolution Infrared Radiation Sounder (HIRS/2), the
Stratospheric Sounding Unit (SSU), and the Microwave Sounding Unit (MSU).
TABLE 3.3
Spectral Channel Wavelengths of the AVHRR

Channel No. | AVHRR/1: TIROS-N (µm) | AVHRR/1: NOAA-6, -8, -10 (µm) | AVHRR/2: NOAA-7, -9, -11, -12, -14 (µm) | IFOV (mrad) | Principal Use of Channel
1 | 0.550–0.90 | 0.550–0.68 | 0.550–0.68 | 1.39 | Day cloud and surface mapping
2 | 0.725–1.10 | 0.725–1.10 | 0.725–1.10 | 1.41 | Surface water delineation and vegetation mapping
3 | 3.550–3.93 | 3.550–3.93 | 3.550–3.93 | 1.51 | Sea surface temperature and fire detection
4 | 10.50–11.50 | 10.50–11.50 | 10.30–11.30 | 1.41 | Sea surface temperature and nighttime cloud mapping
5 | Repeat of channel 4 | Repeat of channel 4 | 11.50–12.50 | 1.30 | Surface temperature and day/night cloud mapping

(Kramer, 2002)
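The IFOV column of Table 3.3 converts directly into a ground footprint at nadir. A minimal sketch, assuming a nominal orbital altitude of about 850 km for the NOAA polar orbiters (the altitude is not given in the table itself):

```python
# Nadir ground resolution of a scanner channel from its angular IFOV.
# The small-angle approximation (footprint = IFOV x altitude) is adequate
# here; the 850 km altitude is an assumed nominal value, not a figure
# taken from the table.

def nadir_footprint_km(ifov_mrad: float, altitude_km: float) -> float:
    """Ground footprint (km) of an angular IFOV viewed straight down."""
    return ifov_mrad * 1e-3 * altitude_km

# AVHRR channel 1 (IFOV 1.39 mrad from Table 3.3):
print(round(nadir_footprint_km(1.39, 850.0), 2))   # ~1.18 km
```

This is the origin of the "about 1 km" resolution usually quoted for AVHRR imagery; away from nadir the footprint grows considerably as the scan angle increases.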
FIGURE 3.2
Illustration of the AVHRR instrument. (Kramer, 2002.)
have been identified (see Chapter 10); an extensive discussion of the non-
meteorological uses of AVHRR data is given by Cracknell (1997).
The USSR was the other great world power involved in space from the very
early days. “Meteor” is the generic name for the long series of polar-orbiting
weather satellites that were launched by the USSR, and subsequently by Rus-
sia. The agency responsible for them is the Russian Federal Service for
Hydrometeorology and Environmental Monitoring (ROSHYDROMET). Prior
to this series, there was an experimental Cosmos series, of which the first
member with a meteorological objective was Cosmos-44, launched in 1964,
followed by a further nine Cosmos satellites until 1969, when the series was
officially named “Meteor-1.” This was followed by the series Meteor-2 and
Meteor-3.
In parallel with the civilian POES program, the U.S. military services of
the Department of Defense (DOD) built their own polar-orbiting meteoro-
logical satellite series, referred to as the Defense Meteorological Satellite
Program (DMSP), with the objective of collecting and disseminating world-
wide cloud cover data on a daily basis. The first of the DMSP satellites was
launched on January 19, 1965, and a large number of satellites in this series
have been launched since then, with the satellites being progressively more
sophisticated. Like the NOAA series of satellites, the DMSP satellites are in
Sun-synchronous orbits with a period of about 102 minutes; two satellites
are normally in operation at any one time (one with an early morning and
one with a late morning equatorial crossing time). The spacecraft are in orbits with a
nominal altitude of 833 km, giving the instrument a swath width of about
3,000 km. The spacecraft carry the Operational Linescan System, or OLS.
This instrument is somewhat similar to the VHRR, which has already been
described briefly; it is a two-channel across-track scanning radiometer, or
MSS, that was designed to gather daytime and nighttime cloud cover imag-
ery. The wavelength ranges of the two channels are 0.4 to 1.1 µm and 10.0
to 13.4 µm (8 to 13 µm before 1979). The visible channel has a low-light
amplification system that enables intense light sources associated with urban
areas or forest fires to be seen in the nighttime data.
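The quoted period of about 102 minutes is just what Kepler's third law gives for a circular orbit at this altitude. A quick check, using standard values for Earth's gravitational parameter and mean radius:

```python
import math

# Circular-orbit period from Kepler's third law, T = 2*pi*sqrt(a^3/mu).
# Standard values for Earth's gravitational parameter and mean radius
# are assumed; 833 km is the nominal DMSP altitude quoted in the text.
MU_EARTH = 3.986004418e14   # m^3/s^2
R_EARTH = 6.371e6           # m (mean radius)

def orbital_period_minutes(altitude_km: float) -> float:
    a = R_EARTH + altitude_km * 1e3        # semi-major axis, m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

print(round(orbital_period_minutes(833.0), 1))   # ~101.4 minutes
```

With a period near 102 minutes the satellite completes just over 14 orbits per day, which is what makes twice-daily global coverage possible with a two-satellite constellation.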
Many of the later DMSP spacecraft (from 1987 onward) have carried the
Special Sensor Microwave Imager (SSM/I), which is a successor to the
Scanning Multichannel Microwave Radiometer (SMMR) flown on Nimbus-
7 and Seasat, both of which were launched in 1978. The SSM/I is a four-
frequency, seven-channel instrument with frequencies and spatial resolution
similar to those of the SMMR (see Table 3.4). SSM/I is now, in turn, being
succeeded on the latest DMSP spacecraft by the Special Sensor Microwave
Imager Sounder (SSMIS), which incorporates other earlier microwave sound-
ing instruments flown on DMSP spacecraft.
The future U.S. polar-orbiting meteorological satellite system is the National
Polar-Orbiting Operational Environmental Satellite System (NPOESS). This
system represents a merger of the NOAA POES and DMSP programs, with
the objective of providing a single, national remote-sensing capability for
meteorological, oceanographic, climatic, and space environmental data.
TABLE 3.4
Characteristics of the SSM/I

Wavelength (mm) | Frequency (GHz) | Polarization | Resolution (km along track × km across track)
15.5 | 19.35 | Vertical | 68.9 × 44.3
15.5 | 19.35 | Horizontal | 69.7 × 43.7
13.5 | 22.235 | Vertical | 59.7 × 39.6
8.1 | 37.0 | Vertical | 35.4 × 29.2
8.1 | 37.0 | Horizontal | 37.2 × 28.7
3.5 | 85.0 | Vertical | 15.7 × 13.9
3.5 | 85.0 | Horizontal | 15.7 × 13.9

(Adapted from Kramer, 2002)
The convergence of the DoD’s DMSP and the NOAA POES programs is taking
place in two phases:
During the first phase, which began in May 1998, all DMSP satellite
operational command and control functions of Air Force Space
Command (AFSPC) were transferred to a triagency integrated
program office (IPO) established within NOAA. NOAA was given
the sole responsibility of operating both satellite programs, POES
and DMSP (from the National Environmental Satellite, Data, and
Information Service [NESDIS] in Suitland, MD).
During the second phase, the IPO will launch and operate the new
NPOESS satellites that will satisfy the requirements of both the DOD
and the Department of Commerce (of which NOAA is a part) from
about the end of the present decade.
FIGURE 3.3
Coverage of the Earth by the international series of geostationary meteorological
satellites. [World map, latitude against longitude, with separate contours marking
each satellite’s telecom coverage and imaging coverage.]
TABLE 3.5
Features of Commonly Used Multispectral Scanners
[Only fragments of this multi-page table survive here; the entries for the
first-generation GOES VISSR and for SeaWiFS are not recoverable.]
IKONOS-2, Quickbird-2

Channel | Wavelength (µm) | IKONOS IFOV (m) | Quickbird IFOV (m)
1 | 0.45–0.52 | ≤4 | 2.5
2 | 0.52–0.60 | ≤4 | 2.5
3 | 0.63–0.69 | ≤4 | 2.5
4 | 0.76–0.90 | ≤4 | 2.5
Panchromatic mode | 0.45–0.90 | ≤1 | 0.61
The Meteosat Second Generation series, which launched its first satellite on
August 28, 2002, provides considerable improvements, particularly in generating
images more frequently (every 15 minutes instead of every 30 minutes).
The Japan Meteorological Agency (JMA) and Japan’s National Space
Development Agency (NASDA) also have a series of geostationary meteorological
satellites, which have been located at 120°E (GMS-3) and 140°E (GMS-
4, GMS-5). Japan started its geostationary meteorological satellite program
with the launch of Geostationary Meteorological Satellite-1 (GMS-1), referred
to as Himawari-1 in Japan, on July 7, 1977. The newest entry, the
Multifunctional Transport Satellite-1 (MTSAT-1), launched on November 15,
1999, was planned to provide the dual service of an “aeronautical mission”
(providing navigation data to air-traffic control services in the Asia-Pacific
region) and a “meteorological mission”; the satellite was lost, however, in a
launch failure of the H-2 vehicle. In the meteorological role, MTSAT is a
successor program to the GMS series. A replacement satellite, MTSAT-1R, has
been launched, and the prime instrument of the meteorology mission on
MTSAT-1R is the Japanese Advanced Meteorological Imager (JAMI).
China joined the group of nations with geostationary meteorological satel-
lites with the launch of FY-2A (Feng-Yun-2A) on 10 June 1997. The prime
sensor, the Stretched-Visible and Infrared Spin-Scan Radiometer (S-VISSR), is
an optomechanical system, providing observations in three bands (at resolu-
tions of 1.25 km in the visible and 5 km in the infrared and water vapor bands).
According to Kramer (2002), a U.S. commercial geostationary weather
satellite program is being developed by Astro Vision, Inc. (located at NASA’s
Stennis Space Center in Pearl River, MS). The overall objective is to launch
a series of five AVstar satellites to monitor the weather over North and South
America and provide meteorological data products to a customer base. One
goal is to produce quasilive regional imagery with a narrow-field instrument
to permit researchers to monitor quickly the formation of major weather
patterns. Far more detailed information about the various polar-orbiting and
geostationary meteorological satellites than we have space to include here
can be found in Rao et al. (1990) and Kramer (2002).
3.3.1 Landsat
The Landsat program began with the launch by NASA in 1972 of the first
Earth Resources Technology Satellite (ERTS-1), which was subsequently
renamed Landsat-1. Since then, the Landsat program has had a checkered
political history in the United States. The original program was continued as
[Plot of relative response (percent) against wavelength from 450 to 1050 nm.]
FIGURE 3.4
Landsat MSS wavelength bands.
[World map showing the 14 numbered southbound orbits of day 1 and the start of
day 2, with receiving stations at Alaska, NTTF, and Goldstone; the pattern
repeats every 18 days.]
FIGURE 3.5
Landsat-1, -2, and -3 orbits in 1 day. (NASA)
1999. For several years, the Landsat program provided the only source of high-
resolution satellite-derived imagery of the surface of the Earth.
Each of the Landsat satellites was placed in a near-polar Sun-synchronous
orbit at a height of about 918 km above the surface of the Earth. Each satellite
travels in a direction slightly west of south and passes overhead at about 10.00
hours local solar time. In a single day, 14 southbound (daytime) passes occur;
northbound passes occur at night (see Figure 3.5). Because the distance
between successive paths is much greater than the swath width (see
Figure 3.6), not all of the Earth is scanned in any given day. The swath width
is 185 km and, for convenience, the data from each path of the satellite is
divided into frames or scenes corresponding to tracks on the ground of approx-
imately 185 km in length; each of these scenes contains 2,286 scan lines, with
3,200 pixels per scan line. The orbit precesses slowly so that, on each successive
day, all the paths move slightly to the west; on the 18th day, the pattern repeats
itself exactly. Some overlap of orbits occurs, and, in northerly latitudes, this
overlap becomes quite large. After the first three satellites in the series, the
orbital pattern was changed slightly to give a repeat period of 16 days instead
of 18 days. At visible and near-infrared wavelengths, the surface of the Earth
is obscured if clouds are present. Given these factors, the number of useful
Landsat passes per annum over a given area might be fewer than half a dozen.
Nonetheless, data from the MSSs on the Landsat series of satellites have been
used very extensively in a large number of remote sensing programs. As their
name suggests, the Landsat satellites were designed primarily for remote
sensing of the land, but in certain circumstances useful data are also obtained
over the sea and inland water areas.
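The interplay between the 185-km swath, the repeat cycle, and latitude can be made concrete. The sketch below assumes the commonly quoted figure of 251 revolutions per 18-day cycle for Landsat-1 to -3, which is not stated in the text above:

```python
import math

# Sidelap between adjacent Landsat-1/-2/-3 ground tracks as a function of
# latitude. Adjacent tracks of the full repeat cycle are spaced evenly
# around the equator, and that spacing shrinks as cos(latitude), so the
# fixed 185-km swath overlaps its neighbours more and more toward the poles.
# The 251 revolutions per 18-day cycle is an assumed (commonly quoted)
# figure, not one given in the text.
EQUATOR_KM = 40075.0
SWATH_KM = 185.0
REVS_PER_CYCLE = 251

def sidelap_percent(lat_deg: float) -> float:
    spacing = (EQUATOR_KM / REVS_PER_CYCLE) * math.cos(math.radians(lat_deg))
    return 100.0 * (SWATH_KM - spacing) / SWATH_KM

print(round(sidelap_percent(0.0), 1))    # ~13.7 % at the equator
print(round(sidelap_percent(60.0), 1))   # ~56.8 % at 60 degrees latitude
```

At the equator the sidelap is modest, but at 60° latitude adjacent swaths overlap by more than half their width, which is the growing overlap mentioned above.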
[Diagram at 40°N showing the 185-km swaths of orbit N and orbit N + 1 on day M
and of orbit N on day M + 18, with separations of 2100 km and 120 km marked.]
FIGURE 3.6
Landsat-1, -2, and -3 orbits over a certain area on successive days. (NASA)
3.3.2 SPOT
The Système pour l’Observation de la Terre (SPOT) is a program started by
the French Space Agency (Centre National d’Etudes Spatiales, CNES) in
which Sweden and Belgium also now participate. The first spacecraft in the
series, SPOT-1, was launched in 1986 and several later spacecraft in the series
have followed. The primary instrument on the first three spacecraft in the
series is the Haute Resolution Visible (HRV), an along-track, or push-broom,
scanner with a swath width of 60 km. The HRV can operate in two modes,
a multispectral mode with three spectral bands and 20 m × 20 m IFOV or a
one-band panchromatic mode with a 10 m × 10 m IFOV (see Table 3.5).
Because the SPOT instrument is a push-broom type, it has a longer signal
integration time that serves to reduce instrumental noise. However, it also
gives rise to the need to calibrate the individual detectors across each scan
line. An important feature of the SPOT system is that it contains a mirror
that can be tilted so that the HRV instrument is not necessarily looking
vertically downward but can look sideways at an angle of up to 27°. This
serves two useful purposes. By using data from a pair of orbits looking at
the same area on the ground from two different directions, it is possible to
obtain stereoscopic pairs of images; this means that SPOT data can be used
for cartographic work involving height determination. Secondly, it means
that the gathering of data can be programmed so that if some phenomenon
or event of particular interest is occurring, such as flooding, a volcanic
eruption, an earthquake, a tsunami, or an oil spillage, the direction of obser-
vation can be adjusted so that images are collected from that area from a
large number of different orbits while the interest remains live. For a system
such as the Landsat MSS or TM, which does not have such a tilting facility,
the gathering of data from a given area on the ground is totally constrained
by the pattern of orbits. An improved version of the HRV was developed
for SPOT-4, which was launched in 1998. Another instrument, named VEG-
ETATION, was also built for SPOT-4; this is a wide-swath (2,200 km), low-
resolution (about 1 km) scanner with four spectral bands (see Table 3.5). As its
name implies, this instrument is designed for large-scale monitoring of the
Earth’s vegetation.
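The stereoscopic capability described above can be quantified with a simple flat-Earth sketch. The relations below are a simplification, not the full SPOT photogrammetric model:

```python
import math

# Stereo geometry for cross-track pointing: two passes viewing the same
# ground area from opposite sides at tilt angles t1 and t2 give a
# base-to-height ratio B/H = tan(t1) + tan(t2). Terrain height then
# follows from the measured parallax difference dp as h = dp / (B/H).
# Flat-Earth, vertical-camera simplification only.

def base_to_height(t1_deg: float, t2_deg: float) -> float:
    return math.tan(math.radians(t1_deg)) + math.tan(math.radians(t2_deg))

def height_from_parallax(dp_m: float, t1_deg: float, t2_deg: float) -> float:
    return dp_m / base_to_height(t1_deg, t2_deg)

bh = base_to_height(27.0, 27.0)          # maximum tilt on both sides
print(round(bh, 2))                      # ~1.02
print(round(height_from_parallax(10.0, 27.0, 27.0), 1))  # 10 m parallax -> ~9.8 m
```

A base-to-height ratio near 1 means a parallax difference of one pixel corresponds to roughly one pixel's worth of terrain height, which is why the ±27° tilt makes SPOT stereo pairs useful for cartographic height determination.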
3.3.4 IRS
In 1988, the Indian Space Research Organization (ISRO) began launching a
series of Indian Remote Sensing Satellites (IRS). IRS-1A carried two MSSs,
the Linear Imaging Self-Scanning Sensor (LISS-I and LISS-II), the first one
having a spatial resolution of 73 m and the second one having a spatial
resolution of 36.5 m. Each instrument had four spectral bands with wave-
length ranges that were similar to those of the Landsat MSS. IRS-1B, which
was similar to IRS-1A, was launched in 1991. Further spacecraft
in the series, carrying improved instruments, have since been launched. In
the early years, when the satellites had no onboard tape recorder and no
ground stations were authorized to receive direct broadcast transmissions
apart from the Indian ground station at Hyderabad, no data were available
except for data on the Indian subcontinent. More recently, other ground
stations have begun to receive and distribute IRS data for other parts of the
world.
Two other important instruments that were carried on Nimbus-7 should also
be mentioned: the SBUV and the Total Ozone Mapping Spectrometer (TOMS).
Both instruments measured the ozone concentration in the atmosphere. These
measurements have been continued with SBUV/2 instruments on board the
NOAA-9, -11, -14, -16, and -17 satellites, and TOMS instruments on the Russian
Meteor-3, Earth Probe, and Japanese ADEOS satellites. The two groups of
instruments, TOMS and SBUV types, differ principally in two ways. First, the
TOMS instruments are scanning instruments and the SBUV instruments are
nadir-looking only. Secondly, the TOMS instruments measure only the total
ozone content of the atmospheric column, whereas the SBUV instruments mea-
sure both the vertical profile and the total ozone content. These instruments
have played an important role in the study of ozone depletion, both generally
and in particular in the ozone “hole” that appears in the Antarctic spring.
3.3.6 ERS
Apart from the French development of the SPOT program, Europe (in the
form of the ESA) was quite late in entering the satellite remote sensing
arena, although a number of national agencies and institutions developed
their own airborne scanner and SAR systems. The main European contri-
butions to Earth observation have been through the Meteosat program
(see Section 3.2.2) and the two ESA Remote Sensing (ERS) satellite missions.
The ERS program originated in requirements framed in the early 1970s
and is particularly relevant to marine applications of remote sensing. Since
then, the requirements have become more refined, as has the context within
which these needs have been expressed. Early on in the mission definition
the emphasis was on commercial exploitation. But by the time the mission
configuration was finalized in the early 1980s, the emphasis had changed,
with a realization of the importance of global climate and ocean monitor-
ing programs. More recently, the need to establish a commercial return
on the data has reappeared. The main instruments that have been carried
on both the ERS-1 and ERS-2 satellites are a set of active microwave
instruments similar to those that were flown on Seasat. These comprise
the Active Microwave Instrument (AMI) and a radar altimeter. There is,
however, an additional instrument, the Along Track Scanning Radiometer
(ATSR/M), an infrared imaging instrument with some additional micro-
wave channels. The ATSR/M was designed for accurate sea-surface tem-
perature determination.
The AMI is a C-band instrument capable of operating as a SAR and as a
scatterometer; it makes common use of much of the hardware in order to
reduce the payload. However, a consequence of this shared design is that it
is not possible to collect both types of data at the same time.
The radar altimeter is a Ku-band, nadir-pointing instrument that measures
the delay time of the return echoes from ocean and ice surfaces. These data
can provide information about surface elevation, significant wave heights,
and surface wind speeds (see Section 7.1).
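The altimeter's basic measurement is a two-way delay time. The numbers below use an illustrative 800-km range, merely of the right order for this class of orbit, and show why timing, orbit, and atmospheric-propagation corrections must be controlled so tightly:

```python
# Two-way travel time of a nadir radar altimeter pulse, and the height
# error implied by a given timing error. The 800 km range is an
# illustrative round number, not a value from the text.
C = 299_792_458.0   # speed of light in vacuum, m/s

def echo_delay_ms(range_km: float) -> float:
    """Round-trip delay (ms) of a pulse to a surface range_km away."""
    return 2.0 * range_km * 1e3 / C * 1e3

def range_error_cm(timing_error_ns: float) -> float:
    """Height error (cm) caused by a timing error, via dh = c*dt/2."""
    return C * timing_error_ns * 1e-9 / 2.0 * 100.0

print(round(echo_delay_ms(800.0), 2))   # ~5.34 ms round trip
print(round(range_error_cm(1.0), 1))    # ~15.0 cm per nanosecond
```

Since one nanosecond of uncorrected delay maps into about 15 cm of apparent surface height, small changes in the radio-wave velocity through the atmosphere matter, which is why missions such as TOPEX/Poseidon carry a microwave radiometer for atmospheric corrections.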
3.3.7 TOPEX/Poseidon
The demonstration of various oceanographic applications of data generated
by active microwave instruments flown in space was successfully performed
by the proof-of-concept Seasat satellite. However, Seasat failed after about
3 months in orbit, in 1978, and no plans were made for an immediate
successor to be built and flown in space. TOPEX/Poseidon is an altimetry
mission conducted jointly by CNES and NASA. It can be regarded, as far as
satellite radar altimetry is concerned, as the first successor to Seasat. The
mission was launched in 1992 to study the global ocean circulation from
space and was very much a part of the World Ocean Circulation Experiment.
Because TOPEX/Poseidon started life as two separate altimetry missions,
which were later combined into one, it carries two altimeters. To use a radar
altimeter on a satellite to make precise measurements of the geometry of the
surface of the oceans, the orbit of the spacecraft must be known very pre-
cisely; a laser retroreflector is therefore used for accurate positioning. The
Poseidon instrument is an experimental, light-weight, single frequency radar
altimeter operating in the Ku band, whereas the main operational instrument
is a dual-frequency Ku/C-band NASA Radar Altimeter. A microwave radi-
ometer provides atmospheric water content data for the purpose of making
atmospheric corrections to allow for variations in the velocity of the radio
waves in the atmosphere.
3.4 Resolution
In discussing remote sensing systems, three important and related qualities
need to be considered:
Spectral resolution
Spatial resolution (or IFOV)
Frequency of coverage.
TABLE 3.6
Frequency of Coverage versus Spatial Resolution

System | IFOV | Repeat Coverage
SPOT-5 multispectral | 10 m | days, variable*
SPOT-5 panchromatic | 5 m | days, variable*
Landsat MSS | 80 m | several days‡
Landsat TM | 30 m | several days‡
NOAA AVHRR | ~1 km | few hours‡
Geostationary satellites | ~1 to ~2.5 km | 30 minutes/15 minutes

* Pointing capability complicates the situation.
‡ Exact value depends on various circumstances.
has data coverage of the whole Earth, but not complete coverage from each
orbit. A direct readout ground station may have complete coverage from all
orbits passing over it, but its collection is restricted to the area that is scanned
while the satellite is in view and not too low on the horizon. Thus, no facility
is able to gather directly all the full 1-km resolution data from all the complete
orbits of the spacecraft. On the other hand, the degraded lower resolution
GAC AVHRR data from each complete orbit can be recorded on board and
downlinked at one of NOAA’s own ground stations.
4
Data Reception, Archiving,
and Distribution
4.1 Introduction
The philosophy behind the gathering of remote sensing data is rather
different in the case of satellite data than for aircraft data. Aircraft data are
usually gathered in a campaign that is commissioned by, or on behalf of, a
particular user and is carried out in a predetermined area. They are also
usually gathered for a particular purpose, such as making maps or monitoring
some given natural resource. The instruments, wavelengths, and spatial
resolutions used are chosen to suit the purpose for which the data are to be
gathered. The owner of the remotely sensed data may or may not decide to
make the data more generally available.
The philosophy behind the supply and use of satellite remote sensing data,
on the other hand, is rather different, and the data, at least in the early days,
were often gathered on a speculative basis. The organization or agency that
is involved in launching a satellite, controlling the satellite in orbit, and
recovering the data gathered by the satellite is not necessarily the main user
of the data and is unlikely to be operating the satellite system on behalf of
a single user. It has not been the practice to collect data only from
areas on the ground for which a known customer exists. Rather,
data have been collected over enormous areas and archived for subsequent
supply when users later identify their requirements. A satellite system is
usually established and operated by an agency of a single country or by an
agency involving collaboration among the governments of a number of
countries. In addition to actually building the hardware of the satellite sys-
tems and collecting the remotely sensed data, there is the task of archiving
and disseminating the data and, in many cases, of convincing the potential
end-user community of the relevance and importance of the data to their
particular needs.
The approach to the reception, archiving, and distribution of satellite data
has changed very significantly between the launch of the first weather sat-
ellite in 1960 and the present time. These changes have been a result of huge
• AVHRR
• High-Resolution Infrared Radiation Sounder (HIRS/2)
• Stratospheric Sounding Unit (SSU)
• Microwave Sounding Unit (MSU)
• Space Environment Monitor (SEM)
• Argos data collection and platform location system.
[Block diagram: AVHRR data and spacecraft and instrument telemetry are combined
in the Manipulated Information Rate Processor (MIRP); HRPT data leave a switching
unit as a 0.66-Mb/s split-phase, right-hand circularly polarized transmission at
1698.0 or 1707.0 MHz; the SEM (160 bps) and DCS (720 bps) data streams are also
shown; APT analogue data feed the APT transmitter at 137.50/137.62 MHz.]
FIGURE 4.1
TIROS-N instrumentation. (NOAA.)
from the APT is of poorer quality than the full-resolution picture obtained
with the HRPT, the APT transmission can be received with simpler equip-
ment than what is required for the HRPT. (For more information on the APT,
see Summers [1989], Cracknell [1997], and the references cited therein.) The
DSB transmission contains only the data from the low data–rate instruments
and does not even include a degraded form of the AVHRR data.
Although the higher-frequency transmissions contain more data, there is
a price to be paid in the sense that both the data-reception equipment and
the data-handling equipment need to be more complicated and are, there-
fore, more expensive. For example, receiving the S-band HRPT transmission
requires a large and steerable reflector/antenna system instead of just a
simple fixed antenna (i.e., a metal rod or a piece of wire). Typically, the
diameter of the reflector, or “dish”, for a NOAA receiving station is between
1 and 2 m. In addition to having the machinery to move the antenna, one
also needs to have quite accurate information about the orbits of the space-
craft so that the antenna assembly can be pointed in the right direction to
receive transmissions as the satellite comes up over the horizon. Thereafter,
the assembly must be moved so that it continues to point at the satellite as
it passes across the sky. The other important consequence of a high
data rate is that more complicated and more expensive equipment is
needed to accept and store the data while the satellite is passing over.
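The scale of the reception task can be estimated from figures quoted elsewhere in this chapter. The sketch below assumes a nominal 850-km altitude and the 102-minute period, ignores Earth rotation and any minimum-elevation mask, and therefore gives an upper bound for a pass directly overhead:

```python
import math

# Rough upper bound on a direct-readout pass: how long a polar orbiter
# stays above the horizon for a station it passes directly over, and how
# much HRPT data arrives in that time. Spherical Earth assumed; real
# passes are shorter because stations need some minimum elevation angle.
R_EARTH_KM = 6371.0
ALTITUDE_KM = 850.0          # assumed nominal NOAA POES altitude
PERIOD_MIN = 102.0           # orbital period quoted for this class of orbit
HRPT_RATE_BPS = 0.66e6       # HRPT data rate from Figure 4.1

# Earth-central angle from the station out to the satellite's horizon point:
lam = math.acos(R_EARTH_KM / (R_EARTH_KM + ALTITUDE_KM))

# The satellite is visible over an arc of 2*lam out of a full 2*pi orbit:
pass_minutes = PERIOD_MIN * (2 * lam) / (2 * math.pi)
megabytes = HRPT_RATE_BPS * pass_minutes * 60 / 8 / 1e6

print(round(pass_minutes, 1))   # ~15.9 minutes overhead, at most
print(round(megabytes))         # ~79 MB of HRPT data per such pass
```

Tens of megabytes arriving in a quarter of an hour was a serious storage and processing load for the hardware of the 1980s, which is the point being made about the cost of high data-rate reception equipment.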
For the TIROS-N/NOAA series of satellites, the details of the transmission
are published. The formats used for arranging the data in these transmissions
and the calibration procedure for the instruments, as well as the values of
the necessary parameters, are also published (Kidwell, 1998). Anyone is free
to set up the necessary receiving equipment to recover the data and then
use them. Indeed, NOAA has for a long time adopted a policy of positively
encouraging the establishment of local receiving facilities for the data from
this series of satellites. A description of the equipment required to receive
HRPT and to extract and archive the data is given by Baylis (1981, 1983)
based on the experience of the facility established a long time ago at Dundee
University (see Figure 4.2). In addition, one can now buy “off-the-shelf”
systems for the reception of satellite data from various commercial suppliers.
It should be appreciated that one can only receive radio transmissions from
a satellite while that satellite is above the horizon as seen from the position
of the ground reception facility. Thus, for the TIROS-N/NOAA series of
satellites, the area of the surface of the Earth for which AVHRR data can be
received by one typical data reception station, namely that of the French
Meteorological Service at Lannion in Northwest France, is shown in
Figure 4.3. For a geostationary satellite, the corresponding area is very much
larger because the satellite is much farther away from the surface of the Earth
(see Figure 1.6). Thus, although one can set up a receiving station to receive
direct readout data, if one wishes to obtain data from an area beyond the
horizon — or to obtain historical data — one has to adopt another approach.
One may try to obtain the data from a reception facility for which the target
area is within range. Alternatively, one may be able to obtain the data via
[Figure: block diagram — an antenna and mounting under tracking control feed a front end and receiver, then a bit conditioner, frame synchronizer, and decommutator; the data stream passes to a high-density recorder, a computer and image processor (with CCT output), and a video processor producing hard copy.]
FIGURE 4.2
Block diagram of a receiving station for AVHRR data. (Baylis, 1981.)
FIGURE 4.3
Lannion, France, NOAA polar-orbiting satellites data acquisition zone.
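The size of an acquisition zone such as Lannion's follows from simple geometry: the station can receive from a satellite whenever the satellite is above the station's horizon. A sketch, assuming a smooth Earth and nominal satellite altitudes (the altitude figures are illustrative, not from the text):

```python
import math

EARTH_RADIUS_KM = 6371.0

def horizon_radius_km(altitude_km: float) -> float:
    """Great-circle distance from the sub-satellite point to the locus of
    ground stations that see the satellite exactly on the horizon
    (smooth Earth, zero-degree elevation mask)."""
    r = EARTH_RADIUS_KM
    central_angle_rad = math.acos(r / (r + altitude_km))
    return r * central_angle_rad

# NOAA polar orbiters fly at roughly 850 km; a geostationary
# satellite sits at about 35,786 km, hence its much larger zone.
polar_reach_km = horizon_radius_km(850.0)
geo_reach_km = horizon_radius_km(35786.0)
```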
the reception and distribution facilities provided by the body responsible for
the operation of the satellite system in question; this applies both to historical
data and to data from areas beyond the horizon. In the case of the TIROS-N/NOAA series
of satellites, these satellites carry tape recorders on board and so NOAA is
able to acquire imagery from all over the world. In addition to the real-time,
or direct-readout, transmissions that have just been described, some of the
data obtained in each orbit are tape recorded on board the satellite and
played back while the satellite is within sight of one of NOAA’s own ground
stations (either at Wallops Island, VA, or Gilmore Creek, AK). In this way, it
is only possible to recover a small fraction (about 10%) of all the data obtained
in an orbit. The scheduling and playback are controlled from the NOAA
control room (see Needham, 1983). The data are then archived and distrib-
uted in response to requests from users. In a similar way, each Landsat
satellite carries tape recorders that allow global coverage of data; the data
are held by the EROS (Earth Resources Observation and Science) Data Center.
Governmental and intergovernmental space agencies that have launched
remote sensing satellites, such as the National Aeronautics and Space Admin-
istration in the United States and the European Space Agency in Europe and
many others around the world, have also established receiving stations, both
for receiving data from their own satellites and from other satellites.
TABLE 4.1
NOAA NESDIS Earth Observation Products
Atmosphere products
• National Climatic Data Center satellite resources
• Aerosol products
• Precipitation
  • North America Imagery
  • Satellite Precipitation Estimates and Graphics
  • Satellite Services Division (SSD) Precipitation Product Overview
  • Operational Significant Event Imagery (OSEI) Flood Events
• Tropics
  • GOES Imagery (Atlantic; East Pacific)
  • Defense Meteorological Satellite Program (DMSP)
  • SSD Tropical Product Overview
  • DMSP Tropical Cyclone Products
  • NOAA Hurricanes
• Winds
  • High Density Satellite Derived Winds
  • CoastWatch Ocean Surface Winds
Land products
• OSEI Imagery: Dust Storms; Flood Events; Severe Weather Events; Storm Systems Events; Unique Imagery
• Fire
  • OSEI Fire Images Sectors (Northwest; West; Southwest; Southeast)
  • GOES and POES Imagery (Southwestern U.S.; Northwestern U.S.; Florida)
  • Hazard Mapping System Fire and Smoke Product
  • Web Based GIS Fire Analysis
  • Archive of Available Fire Products
  • SSD Fire Product Overview
  • NOAA Fire Weather Information Center
• Geology and Climatology
  • Bathymetry, Topography, and Relief
  • Geomagnetism
  • Ecosystems
  • Interactive Map
  • National Geophysical Data Center (NGDC) Paleoclimatology
  • NGDC Terrestrial Geophysics
• Snow and Ice
  • OSEI Snow Images
  • OSEI Ice Images
  • SSD Snow and Ice Product Overview
  • National Ice Center (Icebergs)
• Volcanic Ash
  • Imagery (Tungurahua; Colima; St. Helens)
  • Washington Volcanic Ash Advisory Center
  • NGDC Volcano Data
  • SSD Volcano Product Overview
• NGDC Natural Hazards Overview
Ocean products
• Laboratory for Satellite Altimetry
  • Sea Floor Topography
with the development of the Internet. The best way to find a data source
for any chosen satellite system is to search the Internet with a suitable
search engine, such as Google (http://www.google.com/), using appropriate
key words. Once the website of the source has been found, the instructions
given there for acquiring the needed data should be followed.
5
Lasers and Airborne Remote
Sensing Systems
5.1 Introduction
As mentioned in Chapter 1, it is convenient to distinguish between active
and passive systems in remote sensing work. This chapter is concerned with
airborne remote sensing systems, most of which are active systems that
involve lasers. In any application of active, optical remote sensing (i.e.,
lasers), one of two principles applies. The first involves the use of the lidar
principle — that is, the radar principle applied in the optical region of the
electromagnetic spectrum. The second involves the study of fluorescence
spectra induced by a laser. These techniques were originally applied in a
marine context, with lidar being used for bathymetric work in rather shallow
waters and fluorosensing being used for hydrocarbon pollution monitoring.
The final section of this chapter is concerned with passive systems that use
gamma rays.
Until recently, no lasers were flown on spacecraft. However, light bounced
off a satellite from lasers situated on the ground was used to carry out
ranging measurements to enable the precise determination of the orbit of a
satellite. The use of lasers mounted on a remote sensing platform above the
surface of the Earth has, until recently, been restricted to aircraft. It is difficult
to use lasers on free-flying satellites because they require large collection
optics and extremely high power.
h = ½ct (5.2)
In the early days, airborne lidars could only be used for differential mea-
surements and these found their application in bathymetric work in shallow
waters — that is, in making charts of the depth of shallow estuarine and
coastal waters.
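The differential measurement amounts to applying the range equation twice, once to the surface return and once to the bottom return, with the speed of light in water used for the underwater leg. A minimal sketch (the refractive index of seawater is taken as a nominal 1.34):

```python
# Nominal constants; illustrative only.
C_VACUUM = 2.998e8          # speed of light in vacuum, m/s
C_WATER = C_VACUUM / 1.34   # approximate speed of light in seawater, m/s

def water_depth_m(t_surface_s: float, t_bottom_s: float) -> float:
    """Depth from the arrival-time difference between the surface and
    bottom lidar returns: d = c_water * (t_bottom - t_surface) / 2."""
    return C_WATER * (t_bottom_s - t_surface_s) / 2.0

# A bottom return arriving 90 ns after the surface return
# corresponds to a depth of about 10 m.
depth = water_depth_m(0.0, 90e-9)
```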
Three early airborne laser systems — developed by the Canada Centre for
Remote Sensing (CCRS), the U.S. Environmental Protection Agency (EPA),
and the National Aeronautics and Space Administration (NASA) — are
described in general terms by O’Neil et al. (1981). The system developed by
the CCRS was primarily intended for the monitoring of oil pollution and was
backed by a considerable amount of work on laboratory studies of the fluo-
rescence spectra of oils (O’Neil et al., 1980; Zwick et al., 1981). After funding
cuts, the system developed by the CCRS was taken over by the Emergencies
Science Division of Environment Canada. In the present generation of the
system, which is known as the Laser Environmental Airborne Fluorosensor
(LEAF), laser-induced fluorescence data in 64 spectral channels are collected
at 100 Hz. The LEAF system is normally operated at altitudes between 100
and 166 m and at ground speeds of 100 to 140 knots (about 51 to 77 m s⁻¹).
The LEAF is a nadir-looking sensor that has a footprint of 0.1 m by 0.3 m at
100 m altitude. At the 100 Hz sampling rate, a new sample is collected approximately
every 60 cm along the flight path. The data are processed on board the aircraft
in real time, and the observed fluorescence spectrum is compared with stan-
dard reference fluorescence spectra for light refined, crude, and heavy refined
classes of oil and a standard water reference spectrum, all of which are stored
in the LEAF data analysis computer. When the value of the correlation
coefficient between the observed spectrum and the spectrum of a class of
petroleum product is above a certain threshold, and is greater than the corre-
lation with the water spectrum, the observed spectrum is identified as being
of that class of petroleum. The next generation laser fluorosensor to follow
LEAF, which is known as the Scanning Laser Environmental Airborne Fluo-
rosensor (SLEAF), will be enhanced in various ways (Brown et al., 1997).
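The LEAF decision rule just described, in which the observed spectrum is correlated with each stored reference and assigned to the best petroleum class only if the correlation exceeds a threshold and beats the water reference, can be sketched as follows. The reference spectra and threshold value here are invented for illustration; the real system uses laboratory spectra for the three oil classes.

```python
import math

def correlation(a, b):
    """Pearson correlation coefficient between two spectra."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def classify_spectrum(observed, oil_references, water_reference, threshold=0.9):
    """Return the best-matching oil class, or 'water' when no class
    correlates both above the threshold and above the water spectrum."""
    r_water = correlation(observed, water_reference)
    best_class, best_r = None, -1.0
    for name, reference in oil_references.items():
        r = correlation(observed, reference)
        if r > best_r:
            best_class, best_r = name, r
    if best_r > threshold and best_r > r_water:
        return best_class
    return "water"
```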
The EPA system was developed primarily for the purpose of water-quality
monitoring involving the study of chlorophyll and dissolved organic carbon
(Bristow and Nielsen, 1981; Bristow et al., 1981). The first version of the
Airborne Oceanographic Laser (AOL) was built in 1977, to allow investiga-
tion of the potential for an airborne laser sensor in the areas of altimetry,
hydrography, and fluorosensing. NASA has operated the AOL since 1977 and,
during this period, the instrument has undergone considerable modifications,
including several major redesigns. It has remained a state-of-the-art airborne
[Figure: sunlight reaching a satellite sensor through the atmosphere, with labelled contributions from wave slopes, white caps, ripples, seaweed and algae, sunlit and shadowed slopes, rock, sandbar, and mud/ooze.]
FIGURE 5.1
Light intensity reaching a satellite. (Bullard, 1983a.)
lines and, therefore, a large amount of data collection (each sounding line
represents a sampling over a very narrow swath). In addition to the con-
straint of time, shallow-water surveying presents the constant danger of
surveying boats running aground.
Attempts have been made to use passive multispectral scanner (MSS) data
from the Landsat series of satellites for bathymetric work in shallow waters
(Cracknell et al., 1982a; Bullard, 1983a, 1983b); however, a number of prob-
lems arise (see, for example, MacPhee et al. [1981]). These problems arise
because there are various contributions to the intensity of the light over a
water surface reaching a scanner flown on a satellite (see Figure 5.1), and
many of these contain no information about the depth of the water. The use
of MSS data for water-depth determination is based on mathematical mod-
elling of the total radiance of all wavelengths received at the scanner minus
the unwanted components, leaving only those attributable to water depth
(see Figure 5.2). By subtracting atmospheric scattering and water-surface
glint, the remaining part of the received radiance is due to what can be called
“water-leaving radiance.” This water-leaving radiance arises from diffuse
reflection at the surface and from radiation that has emerged after traveling
from the surface to the bottom and back again; the contribution of the latter
component depends on the water absorption, the bottom reflectivity, and the
[Figure: absorption of red light below the water surface defines a maximum penetration depth; where the sea bed lies above this depth it is visible and is indicated by grey tones on the image, from light to dark with increasing depth.]
FIGURE 5.2
Depth of water penetration represented by a grey scale. (Bullard, 1983a.)
[Figure: aircraft with position fixing scanning a swath over the water.]
FIGURE 5.3
A configuration for lidar bathymetry operation. (Muirhead and Cracknell, 1986.)
[Figure: a pulse is timed over its two-way path from the water surface to the bottom; depth d = t × c/2, where c is the velocity of light in water.]
FIGURE 5.4
Principles of operation of a lidar bathymeter. (O’Neil et al., 1980.)
receiver because the effective noise level due to this scattering increases along
with the desired signal. A useful parameter for describing the performance of
the sensor is the product of the mean attenuation coefficient and the maximum
recorded depth. Navigational accuracy is important and, in the early days of
lidar bathymetric work, this was a serious problem over open areas of water
that possessed no fixed objects to assist in the identification of position. With
the advent of the GPS, this is no longer a serious problem.
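The attenuation-depth product quoted above gives a quick feel for what a lidar bathymeter can reach: for a fixed system capability, the maximum surveyable depth scales inversely with the water's attenuation coefficient. In the sketch below, the system constant of 3.5 is an assumed illustrative value, not a figure from the text.

```python
def max_surveyable_depth_m(attenuation_per_m: float,
                           system_constant: float = 3.5) -> float:
    """Maximum recordable depth implied by a performance parameter of
    the form n = alpha * z_max, so z_max = n / alpha."""
    return system_constant / attenuation_per_m

# Clearer water (lower attenuation) allows deeper soundings.
clear_water_depth = max_surveyable_depth_m(0.2)   # 17.5 m
turbid_water_depth = max_surveyable_depth_m(1.0)  # 3.5 m
```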
[Figure: aircraft carrying an IMU and GPS receiver scanning across the direction of flight, supported by one GPS ground station and the GPS satellites.]
FIGURE 5.5
Representation of airborne lidar scanning system. (Based on Turton and Jonas, 2003.)
the computation for directly georeferencing the laser scans. The system com-
ponents are shown diagrammatically in Figure 5.5.
Airborne lidar scanning is an active remote sensing technology. Commonly
used in conjunction with an airborne digital camera, these systems emit laser
signals and as such can be operated at any time during the day or night.
Unlike lidar bathymetric systems, a single-wavelength pulse is used, typically
at a near-infrared wavelength of about 1.5 µm, although many systems
operate at the 1064 nm wavelength because this allows the use of highly
efficient and stable Nd:YAG (neodymium-doped yttrium aluminum garnet) lasers.
Information from an airborne lidar system is combined with GPS data from
a ground base station to produce the x and y coordinates (easting and northing)
and the z coordinate (elevation) of the reflecting points. A typical system can
generate these coordinates at a rate of several million per minute; leading-edge
systems can acquire 100,000 points per second, recording the position of up to
four returns for each pulse (therefore a maximum of 400,000 points per second).
Reflections for a given pair of x and y coordinates are then separated auto-
matically into signals reflected from the ground and those reflected from
aboveground features. Aboveground features from which reflections can
occur include high-voltage electricity transmission cables, the upper surface
of the canopy in a forest, and the roofs of buildings (see Figure 5.6). A general
processing scheme is illustrated in Figure 5.7.
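The automatic separation of ground and aboveground returns can be done in many ways; a common first approximation is to grid the points and treat the lowest return in each cell as the local ground level. The sketch below illustrates that idea only and is not the algorithm of any particular commercial system.

```python
from collections import defaultdict

def split_ground_points(points, cell_size_m=1.0, tolerance_m=0.3):
    """Split (x, y, z) lidar returns into (ground, above) lists.

    The lowest z within each grid cell is taken as the local ground
    level; returns within tolerance_m of it are classed as ground."""
    lowest = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        cell = (int(x // cell_size_m), int(y // cell_size_m))
        lowest[cell] = min(lowest[cell], z)

    ground, above = [], []
    for x, y, z in points:
        cell = (int(x // cell_size_m), int(y // cell_size_m))
        if z - lowest[cell] <= tolerance_m:
            ground.append((x, y, z))
        else:
            above.append((x, y, z))
    return ground, above
```

A real classifier must also cope with sloping terrain, vegetation density, and multiple returns per pulse.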
[Figure: lidar profile plotting MSL (meters + 120) against along-track distance (meters), showing the AOL surface return, the AOL waveform bottom, and photo ground truth.]
FIGURE 5.6
Cross sectional lidar profile obtained over an area of forest under winter conditions during
March, 1979. (Krabill et al., 1984.)
[Figure: laser data and INS data are combined in processing, followed by data classification.]
FIGURE 5.7
Block diagram of the processing scheme for an airborne lidar scanning system. (Dr. Franco Coren.)
[Figure: histogram of the cumulative number of points against elevation error, fitted with a Gaussian model centred near zero with width w ≈ 0.10 m.]
FIGURE 5.8
An example of the error distribution of elevation measurements with an airborne laser scanner.
(Dr. Franco Coren.)
FIGURE 5.9
Digital model of (a) the surface and (b) the ground derived from laser scanning and classification,
with ground resolution of 1 m × 1 m. (Istituto Nazionale di Oceanografia e di Geofisica
Sperimentale.)
[Figure: a nitrogen laser (337.1 nm) and a 20.5 cm f/3.1 Cassegrain telescope feed, via a dichroic beamsplitter, a fibre image slicer and concave holographic grating into 16 output fibre-optic channels with proximity-focused intensifiers and channel-plate gain; sample/hold, background-subtraction, and AGC circuits, gating/timing electronics, a lidar altimeter, a laser power meter, and a backscatter-amplitude photodiode pass their outputs to the data processing system.]
FIGURE 5.10
Block diagram of a fluorosensor electro-optical system. (O’Neil et al., 1980.)
TABLE 5.1
Laser Transmitter Characteristics
Laser type Nitrogen gas laser
Wavelength 337 nm
Pulse length 3-nsec FWHM
Pulse energy 1 mJ/pulse
Beam divergence 3 mrad × 1 mrad
Repetition rate 100 Hz
(From O’Neil et al., 1980.)
TABLE 5.2
Laser Fluorosensor Receiver Characteristics
Telescope f/3.1 Dall Kirkham
Clear aperture 0.0232 m²
Field of view 3 mrad × 1 mrad
Intensifier on-gate period 70 nsec
Nominal spectral range 386–690 nm
Nominal spectral bandpass (channels 2–15) 20 nm/channel
Noise equivalent energy* ~4.8 × 10⁻¹⁷ J
Lidar altimeter range 75–750 m
Lidar altimeter resolution 1.5 m
* This is the apparent fluorescence signal (after background subtraction)
collected by the receiver in one wavelength channel for a single laser
pulse that equals the noise in the channel. This figure relates to the
sensor performance at the time of collection of the data presented by
O’Neil et al. (1980). The noise equivalent energy has been improved
significantly.
(From O’Neil et al., 1980.)
from a few points under the flight path. These in situ measurements are
needed for the calibration of the airborne data because the data deal not
with a single chemical substance but rather with a group of chemically
related materials, the relative concentrations of which depend on the specific
mixture of the algal species present. Because the absolute fluorescence con-
version efficiency depends not only on the species present but also on the
recent history of photosynthetic activity of the organisms (due to changes
in water temperature, salinity, and nutrient levels as well as the ambient
irradiance), this calibration is essential if data are to be compared from day
to day or from region to region. The development of a more-advanced laser
fluorosensing system to overcome at least some of the need for simultaneous
in situ data using a short-pulse, pump-and-probe technique is described by
Chekalyuk et al. (2000). The basic concept is to saturate the photochemical
activity within the target with a light flash (or a series of ‘flashlets’) while
measuring a corresponding induction rise in the quantum yield of chloro-
phyll fluorescence (Govindjee, 1995; Kramer and Crofts, 1996).
In common with all optical techniques, the depth to which laser fluorosensor
measurements can be made is limited by the transmission of the excitation
and emission photons through the target and its environment. Any one of
the materials that can be monitored by laser fluorosensing can also be mon-
itored by grab sampling from a ship. While in situ measurements or grab
sample analyses are the accepted standard technique, the spatial coverage
by this technique is so poor that any temporal variations over a large area
are extremely difficult to unravel. For rapid surveys, to monitor changing
conditions, an airborne laser fluorosensor can rapidly cover areas of mod-
erate size and the data can be made available very quickly, with only a few
surface measurements needed for calibration and validation purposes.
One important use of laser fluorosensing from aircraft is oil-spill detection,
characterization, mapping, and thickness contouring. Laboratory studies
have shown that mineral oils fluoresce efficiently enough to be detected by
a laser fluorosensor and that their fluorescence spectra not only allow oil to
be distinguished from a seawater background but also allow classification
of the oil into three groups: light refined (e.g., diesel), crude, and heavy
refined (e.g., bunker fuel). The fluorescence spectra of three oils typical of
these groups are shown in Figure 5.11. When used for oil pollution surveillance,
a laser fluorosensor can perform three distinct operations: detecting an
anomaly, identifying the anomaly as oil rather than some other substance, and
classifying the oil into one of the three broad categories just mentioned.
There has also long been a need to measure oil-slick thickness, both within
the spill-response community and among academics in the field. However,
although a considerable amount of work has been done, no reliable methods
currently exist, either in the laboratory or the field, for accurately measuring
oil-on-water slick thickness. A three-laser system called the Laser Ultrasonic
Remote Sensing of Oil Thickness (LURSOT) sensor, which has one laser
coupled to an optical interferometer, has been used to measure oil thickness
accurately (Brown et al., 1997). In this system, the measurement process
FIGURE 5.11
Laboratory measured fluorescence spectra of Merban crude oil (solid line), La Rosa crude oil
(dash-dot line), and rhodamine WT dye (1% in water) (dashed line). (O’Neil et al., 1980.)
is initiated with a thermal pulse created in the oil layer by the absorption of
a powerful infrared carbon dioxide laser pulse. Rapid thermal expansion of
the oil occurs near the surface where the laser beam was absorbed. This
causes a steplike rise of the sample surface as well as the generation of an
ultrasonic pulse. This ultrasonic pulse travels down through the oil until it
reaches the oil-water interface, where it is partially transmitted and partially
reflected back toward the oil-air interface, where it produces a slight dis-
placement of the oil surface. The time required for the ultrasonic pulse to travel
through the oil and back to the surface again is a function of the thickness and
the ultrasonic velocity in the oil. The displacement of the surface is measured
by a second laser probe beam aimed at the surface. The motion of the surface
produces a phase or frequency shift (Doppler shift) in the reflected probe beam
and this is then demodulated with the interferometer; for further details see
Brown et al. (1997).
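The thickness itself follows from the two-way ultrasonic travel time, d = vt/2, where v is the speed of sound in the oil. A sketch, with a nominal sound speed (an assumed figure, not from the text):

```python
def slick_thickness_mm(round_trip_time_s: float,
                       sound_speed_m_s: float = 1400.0) -> float:
    """Oil-slick thickness from the two-way travel time of the
    laser-generated ultrasonic pulse: d = v * t / 2 (result in mm)."""
    return sound_speed_m_s * round_trip_time_s / 2.0 * 1000.0

# A 2 microsecond round trip with v ~ 1400 m/s implies about
# 1.4 mm of oil on the water.
thickness = slick_thickness_mm(2e-6)
```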
[Figure: navigation, altimeter, pressure, and temperature inputs and the detector package feed an analog-to-digital converter, summing amplifier, and high-voltage supply under computer control.]
FIGURE 5.12
Block diagram of a gamma ray spectrometer. (International Atomic Energy Agency [IAEA], 1991.)
consists of a single crystal of sodium iodide treated with thallium. The sides
of the crystal are coated with magnesium oxide, which is light reflecting. An
incoming gamma ray photon produces fluorescence in the crystal and the
photons that are produced are reflected onto a photomultiplier tube at the
end of the crystal detector. The output from the photomultiplier tube is then
proportional to the energy of the incident gamma ray photon. The pulses
produced by the photomultiplier tube are fed into a pulse height analyzer
which, essentially, produces a histogram of the energies of the incident
gamma rays — that is, it produces a gamma ray spectrum. The system shown
in Figure 5.12 has a bank of detectors, not just a single detector.
A detector takes a finite time to process the output resulting from a given
gamma ray photon; if another photon arrives within that time, it is lost. If
the flux of gamma ray photons is large, then a correction must be applied.
If two pulses arrive at the pulse height analyzer at exactly the same time,
the output is recorded as a single pulse with the sum of the energies of the
two pulses; this also is a problem with large fluxes of gamma ray photons,
and steps have to be taken to overcome it.
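For the first effect, a standard correction is the non-paralyzable dead-time model (the text does not specify which model is used in practice): if the detector is blind for a time τ after each accepted pulse, the true rate n relates to the measured rate m by n = m/(1 − mτ). A sketch:

```python
def dead_time_corrected_rate(measured_cps: float, dead_time_s: float) -> float:
    """True count rate for a non-paralyzable detector:
    n = m / (1 - m * tau), valid only while m * tau < 1."""
    loss_fraction = measured_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("detector saturated: m * tau >= 1")
    return measured_cps / (1.0 - loss_fraction)

# At 10,000 counts/s with a 5 microsecond dead time,
# roughly 5% of incoming photons are lost.
true_rate = dead_time_corrected_rate(10_000.0, 5e-6)
```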
Airborne gamma ray spectroscopy was originally introduced in the 1960s
for the purpose of exploration for uranium ores. It was then extended into
more general geological mapping applications. The main naturally occurring
radioactive elements are an isotope of potassium (40K), uranium (238U), and
thorium (232Th), together with their daughter products. In addition to its
use in studying natural levels of radioactivity for geological mapping,
airborne gamma ray spectroscopy can also be used to study man-made
radioactive contamination of the environment. It is possible to distinguish different radioactive
[Figure: three panels of normalized channel count rate against energy (0–3.0 MeV), with the total-count, potassium, uranium, and thorium windows marked; labelled peaks include 40K at 1.46 MeV; 214Pb at 0.35 MeV and 214Bi at 0.61, 1.12, 1.76, and 2.20 MeV; and 228Ac at 0.91–0.97 MeV with 208Tl at 0.58 and 2.62 MeV.]
FIGURE 5.13
Gamma ray spectra of (a) 40K, (b) 238U, and (c) 232Th. The positions of the three radioactive
elements’ windows are shown. (IAEA, 1991.)
materials because the energy (or frequency) of the gamma rays emitted by
a radioactive nuclide is characteristic of that nuclide. The gamma-ray spectra
of 40K, 238U, and 232Th are shown in Figure 5.13. The spectral lines are broad-
ened as a result of the interaction of the gamma rays with the ground and
the intervening atmosphere between the ground and the aircraft. Back-
ground radiation, including cosmic rays, is also present, and there is also
the effect of radioactive dust washed out of the atmosphere onto the ground
or the aircraft, and of radiation from the radioactive gas radon (222Rn), which
occurs naturally in varying amounts in the atmosphere. Moreover, the
gamma rays are attenuated by their passage through the atmosphere;
roughly speaking, about half of the intensity of the gamma rays is lost for
every 100 m of height. For mapping of natural radioactivity using fixed-wing
aircraft, a flying height of 120 m is most commonly used. To fly lower is
hazardous, unless the terrain is very flat, and the field of view (sampling
area) is smaller; to fly higher means dealing with a smaller signal. Therefore,
for accurate mapping, one must have an accurate value
of the flying height (from a radar altimeter carried on board the aircraft).
More details of the theory and techniques of airborne gamma ray spectros-
copy are given in a report published by the International Atomic Energy
Agency (IAEA, 1991).
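The rule of thumb quoted above, that about half the gamma-ray intensity is lost for every 100 m of height, can be used to normalize survey counts to the nominal flying height. A sketch of that correction:

```python
def relative_intensity(height_m: float, half_height_m: float = 100.0) -> float:
    """Fraction of ground-level gamma-ray intensity surviving to the
    given height, using the 'half per 100 m' rule of thumb."""
    return 0.5 ** (height_m / half_height_m)

def normalize_counts(counts: float, actual_height_m: float,
                     nominal_height_m: float = 120.0) -> float:
    """Scale measured counts to the nominal survey height (120 m is the
    common fixed-wing mapping height quoted in the text)."""
    return counts * relative_intensity(nominal_height_m) / relative_intensity(actual_height_m)
```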
Airborne gamma ray spectrometer systems designed for mapping natural
radioactivity can also be used for environmental monitoring surveys. For
instance, mapping of the fallout in Sweden from the accident at the nuclear
power station in Chernobyl on April 25 and 26, 1986, is described in some
detail in the report on gamma ray spectroscopy by the IAEA (1991). That
report also describes the successful use of airborne surveys to locate three
6
Ground Wave and Sky Wave
Radar Techniques
6.1 Introduction
The original purpose for which radar was developed was the detection of
targets such as airplanes and ships. In remote sensing applications, over-the-
land radars are used to study spatial variations in the surface of the land
and also the rather slow temporal variations on the land surface. Before the
advent of remote sensing techniques, data on sea state and wind speeds at
sea were obtained from ships and buoys and were accordingly only available
for a sparse array of points. Wave heights were often simply estimated by
an observer standing on the deck of a ship. As soon as radar was invented,
scientists found that, at low elevation angles, surrounding objects and terrain
caused large echoes and often obliterated genuine targets; this is the well-
known phenomenon of clutter. Under usual circumstances, of course, the aim
is to reduce this clutter. However, research on the clutter phenomenon
showed that the backscattered echo became larger with increasing wind
speed. This led to the idea of using the clutter, or backscattering, to measure
surface roughness and wind speed remotely. Remote sensing techniques
using aircraft, and more specifically satellites, have the very great advantage
of being able to provide information about enormous areas of the surface of
the Earth simultaneously. However, remote sensing by satellite-flown instru-
ments using radiation from the visible or infrared parts of the electromag-
netic spectrum has the serious disadvantage that the surface of the sea is
often obscured by cloud. Although data on wind speeds at cloud height are
obtainable from a succession of satellite images, these would not necessarily
be representative of wind speeds at ground level.
It is, of course, under adverse weather conditions that one is likely to be
particularly anxious to obtain sea state and marine weather data. Aircraft are
expensive to purchase and maintain and their use is restricted somewhat by
adverse weather conditions; satellite remote sensing techniques can provide
a good deal of relevant information at low cost. Satellites are even more
expensive than aircraft; however, this fact may be overlooked if someone else
[Figure: an aircraft carrying an altimeter and side-looking radar operating along the line of sight, together with shore-based ground-wave and ionospherically refracted sky-wave radars, viewing the sea surface.]
FIGURE 6.1
Ground and sky wave radars for oceanography. (Shearman, 1981.)
has paid the large capital costs involved and the user pays only the marginal
costs of the reception, archiving, and distribution of the data. Satellites, of
course, have the advantage over other remote sensing platforms in that they
provide coverage of large areas. If one is concerned with only a relatively small
area of the surface of the Earth, similar data can be obtained about sea state
and near-surface wind speeds using ground-based or ship-based radar sys-
tems. Figure 6.1 is taken from a review by Shearman (1981) and illustrates
(though not to scale) ground wave and sky wave techniques.
A distinction should be made between imaging and nonimaging active micro-
wave systems. Side-looking airborne radars flown on aircraft and synthetic
aperture radars (SARs) flown on aircraft and spacecraft are imaging devices
and can, for instance, give information about wavelengths and about the direc-
tion of propagation of waves. A substantial computational effort involving
Fourier analyses of the wave patterns is required to achieve this. In the case of
SAR, this computational effort is additional to the already quite massive com-
putational effort involved in generating an image from the raw data (see
Section 7.4). Other active microwave instruments, such as altimeters and scat-
terometers, do not form images but give information about wave heights and
wind speeds. This information is obtained from the shapes of the return pulses
received by the instruments. The altimeter (see Section 7.2) operates with short
pulses traveling vertically between the instrument and the ground and is used
to determine the shape of the geoid and the wave height (rms). A scatterometer
uses beams that are offset from the vertical. Calibration data are used to deter-
mine wave heights and directions and wind speeds and directions.
Three types of ground-based radar systems for sea-state studies are available
(see Figure 6.1):
St(R, θ, ϕ) = tλ(θ) Pt G(θ, ϕ) / (4πR²)   (6.1)
where Pt is the power transmitted by the antenna, G(θ, ϕ) is the gain factor
representing the directional characteristics of the antenna, i.e., PtG(θ, ϕ) is the
power per unit solid angle transmitted in the direction (θ, ϕ), tλ(θ) is the
transmittance and is slightly less than 1, and the factor 1/(4πR²) allows for
the spreading out of the signal over a sphere of radius R, where R is the range.
For a satellite system, tλ(θ) is the transmittance through the whole atmosphere.
Now consider an individual target that is illuminated by a radar beam.
This target may absorb, transmit, or scatter the radiation, but we are only
concerned with the energy that is scattered back toward the radar and we
define the scattering cross section σ as the ratio of the reflected power per
unit solid angle in the direction back to the radar divided by the incident
power density from the radar (per unit area normal to the beam). σ has the
units of area. The scatterer therefore acts as a source of radiation of magni-
tude σSt(R, θ, ϕ) and so the power density arriving back at the radar is:
Sr = tλ(θ)σSt(R, θ, ϕ)/(4πR²) = tλ²(θ)σPtG(θ, ϕ)/((4π)²R⁴) (6.2)
The power, Pr , entering the receiver is therefore Sr Ae(q, j), where Ae(q, j)
is the effective antenna area that is related to the gain by:
Ae(θ, ϕ) = λ²G(θ, ϕ)/(4π) (6.3)
9255_C006.fm Page 116 Friday, February 16, 2007 10:19 PM
and therefore:
Pr = Ae(θ, ϕ)tλ²(θ)σPtG(θ, ϕ)/((4π)²R⁴) = tλ²(θ)PtG²(θ, ϕ)λ²σ/((4π)³R⁴) (6.4)
so that:
and therefore, using the form in Equation 6.4, we can write σ as:
σ = (4π)³R⁴Pr/(λ²tλ²(θ)G²(θ, ϕ)Pt). (6.6)
Note that this process is for what we call the monostatic case — in other
words, when the same antenna is used for the transmitting and receiving of
the radiation. When transmission and reception are performed using different
antennae, which may be in quite different locations as in the case of sky
wave radars, the corresponding equation can be derived in a similar way,
except that it is necessary to distinguish between the different ranges, direc-
tions, gains, and areas of the two antennae.
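As a numerical sanity check on the monostatic radar equation and its inversion (Equations 6.4 and 6.6), the following sketch evaluates both directions; the instrument parameters are invented for illustration and do not describe any particular radar.

```python
import math

def received_power(P_t, G, wavelength, sigma, R, t_lambda=1.0):
    """Monostatic radar equation (Equation 6.4):
    Pr = t^2 * Pt * G^2 * lambda^2 * sigma / ((4 pi)^3 * R^4)."""
    return (t_lambda**2 * P_t * G**2 * wavelength**2 * sigma) / ((4 * math.pi)**3 * R**4)

def cross_section(P_r, P_t, G, wavelength, R, t_lambda=1.0):
    """Inversion for the scattering cross section (Equation 6.6)."""
    return ((4 * math.pi)**3 * R**4 * P_r) / (wavelength**2 * t_lambda**2 * G**2 * P_t)

# Invented example: 1 kW transmitter, gain 1000, 0.1 m wavelength,
# a 10 m^2 target at 100 km range.
P_r = received_power(1e3, 1e3, 0.1, 10.0, 1e5)

# Applying the inversion to the computed power recovers the input sigma,
# confirming that Equations 6.4 and 6.6 are mutually consistent.
sigma = cross_section(P_r, 1e3, 1e3, 0.1, 1e5)
```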
Equations 6.5 and 6.6 are for the power received from one scattering
element at one instant in time. The measured backscatter is the sum of the
backscatter from all the individual small elements of surface in the area that
is viewed by the radar. Equation 6.4, therefore, can be written for an indi-
vidual scatterer labeled by i as:
Pri = tλ²(θ)PtGi²(θ, ϕ)λ²σi/((4π)³Ri⁴) (6.7)
and the total received power is then obtained from the summation over i of
all the individual Pri, so that:
Pr = ∑i Pri (6.8)
In the case of the sea, one must modify the approach because the sea is in
constant motion and therefore the surface is constantly changing. We assume
that there are a sufficiently large number, N, of scatterers, contributing random
phases to the electric field to be able to express the total received power, when
averaged over time and space, as the sum:

Pr = ∑i=1…N Pri (6.9)
If we assume that the sea surface is divided into elements of size ∆Ai, each
containing a scatterer, the normalized radar cross section σ⁰ can be defined as:

σ⁰ = σi/∆Ai (6.10)
The value of σ⁰ depends on the roughness of the surface of the sea and this,
in turn, depends on the near-surface wind speed. However, it should be
fairly clear that one cannot expect to get an explicit expression for the wind
speed in terms of σ⁰; it is a matter of using a model, or models, relating σ⁰
to the wind speed and then fitting the experimental data to the chosen model.
The value of σ⁰ increases with increasing wind speed and decreases with
increasing angle of incidence and depends on the beam azimuth angle rel-
ative to the wind direction. Because of the observed different behavior of σ⁰
in the three different regions of incidence angle ([a] 0 to 20°, [b] 20 to 70°,
and [c] above 70°), there are different models for these three regions. In the
case of ground wave and sky wave radars, it is the intermediate angle of
incidence, where Bragg scattering applies, that is relevant. For the altimeter
(see Section 7.2), it is the low incidence angles (i.e., for q in the range from
0° to about 20°) that apply. In this case, it is assumed that specular reflection
is the dominant factor and so what is done is to use a model where the sea
surface is made up of a large number of small facets oriented at various
angles. Those that are normal, or nearly normal, to the radar beam will give
strong reflections, whereas the other facets will give weak reflections.
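The summations of Equations 6.7 to 6.10 can be sketched in a few lines; the scatterer population below is invented purely for illustration.

```python
import math

def element_power(P_t, G_i, wavelength, sigma_i, R_i, t_lambda=1.0):
    """Power returned by one surface element (Equation 6.7)."""
    return (t_lambda**2 * P_t * G_i**2 * wavelength**2 * sigma_i) / ((4 * math.pi)**3 * R_i**4)

# Invented population of surface elements, each of area dA = 100 m^2,
# with individual cross sections sigma_i (m^2).
P_t, G, lam, R, dA = 1e3, 1e3, 0.1, 1e5, 100.0
sigmas = [0.5, 1.2, 0.8, 2.0, 0.9]

# Total received power is the sum over the elements (Equations 6.8 and 6.9).
P_r = sum(element_power(P_t, G, lam, s, R) for s in sigmas)

# Normalized radar cross section sigma^0 of each element (Equation 6.10);
# it is dimensionless (m^2 of cross section per m^2 of surface).
sigma0 = [s / dA for s in sigmas]
```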
If one is concerned with detecting some object, such as a ship or an
airplane, with a radar system, then one makes use of the fact that the object
produces a massively different return signal from the background and there-
fore the object can be detected relatively easily. However, in remote sensing
of the surface of the Earth, one is not so much concerned with detecting an
object but with studying the variations in the nature or state of the part of
the Earth’s surface that is being observed, whether the land or the sea.
Differences in the nature or state of the surface give rise to differences in σi
of the individual scatterers and therefore, through Equation 6.8 or
Equation 6.9, to differences in the received power of the return signal.
However, inverting Equation 6.8 or Equation 6.9 to use the measured value
of the received power to determine the values of σi, or even the value of the
normalized cross section σ⁰, is not feasible. One is therefore reduced to
constructing models of the surface and comparing the values of the calcu-
lated received power for the various models with the actually measured
value of the received power.
FIGURE 6.2
Features of radar spectra used for sea-state measurement and the oceanographic parameters derived
from them: (a) ratio of two first-order Bragg lines—wind direction; (b) −10 dB width of larger first-
order Bragg line—wind speed; (c) Doppler shift of first-order Bragg lines from expected values—ra-
dial component of surface current; (d) magnitudes of first-order Bragg lines—ocean wave height
spectrum for one wave-frequency and direction; and (e) magnitude of second-order struc-
ture—ocean wave height spectrum for all wave-frequencies and directions (sky wave data for 10.00
UT, 23 August 1978, frequency 15 MHz data-window, Hanning, FFT 1024 points, averages 10, slant
range 1125 km). (Shearman, 1981.)
footprint, low power output, and a 360-degree possible viewing angle that
minimizes siting constraints and maximizes coverage area. The SeaSonde can
be remotely controlled from a central computer in an office or laboratory and
set for scheduled automatic data transfers. It is suitable for fine-scale moni-
toring in ports and small bays, as well as open ocean observation over larger
distances up to 70 km. For extended coverage, a long-range SeaSonde can
observe currents as far as 200 km offshore. The main competitor to SeaSonde
is a German radar called WERA (standing for WEllen RAdar, i.e., wave radar).
This is a phased-array system sold by the German company Helzel Messtechnik
GmbH. The first WERAs operated at 25 to 30 MHz but, with current interest in
lower frequencies to obtain longer ranges, they now operate at 12 to 16 MHz.
Pisces is another
commercially available phased-array system but it is higher specification, and
therefore higher priced, than WERA and has a longer range. Pisces, WERA, and
SeaSonde use frequency-modulated continuous waveform radar technology,
whereas OSCR and the original CODAR were pulsed systems.
Decametric ground wave systems have now been used for over 20 years
to study surface currents over coastal regions. Moreover, these systems have
now developed to the stage that their costs and processing times make it
feasible to provide a near–real time determination of a grid of surface cur-
rents every 20 to 30 minutes. This provides a valuable data set for incorpo-
ration into numerical ocean models (Lewis et al., 1998).
Sky wave radars can be used to study the sea surface at distances between
1000 km and 3000 km from the radar installation. The development and
operation of a sky wave radar system
is a large and expensive undertaking. However, there is considerable military
interest in the imaging aspect of the use of sky wave radars, and it is doubtful
whether any nonmilitary operation would have the resources to construct and
operate a sky wave radar. One example of the military significance is provided
by the case of the stealth bomber, a half-billion dollar batlike superplane
developed for the U.S. military to evade detection by radar systems. Stealth
aircraft are coated with special radar absorbing material to avoid detection by
conventional microwave radar; however, sky wave radar uses high-frequency
radio waves, which have much longer wavelengths than microwaves. A sky
wave radar can detect the turbulence in the wake of a stealth aircraft in much
the same way that a weather radar is used to detect turbulent weather ahead
so that modern airliners can divert and avoid danger and inconvenience to
passengers. In addition to observing the turbulent wake, the aircraft itself is
less invisible to a sky wave radar than it is to a conventional radar. Moreover,
stealth aircraft, such as the U.S. Nighthawk F117A, are designed with sharp
leading edges and a flat belly to minimize reflections back toward conventional
ground-based radars. A sky wave radar bounces down from the ionosphere
onto the upper surfaces that include radar-reflecting protrusions for a cockpit,
engine housings, and other equipment. An additional feature of a sky wave
radar is that it is very difficult to jam because of the way the signal is propa-
gated over the ionosphere.
For the waves on the surface of the sea, only the components of the wave
vector directly toward or away from the radar are involved in the Bragg con-
dition. The relative amplitudes of the positively and negatively Doppler-shifted
lines in the spectrum of the radar echo from a particular area of the sea indicate
the ratio of the energy in approaching and receding wind-driven sea waves.
Should there be only a positively shifted line present, the wind is blowing
directly toward the radar; conversely, should there be only a negatively shifted
line, the wind is blowing directly away from the radar. If the polar diagram of
the wind-driven waves about the mean wind direction is known, the measured
ratio of the positive and negative Doppler shifts enables the direction of the
mean wind to be deduced. This is achieved by rotating the wave-energy polar
diagram relative to the direction of the radar beam until the radar beam’s
direction cuts the polar diagram with the correct ratio (see Figure 6.3). Two wind
directions can satisfy this condition; these directions are symmetrically oriented
on the left and right of the direction of the radar beam. This ambiguity can be
resolved using observations from a sector of radar beam directions and making
use of the continuity conditions for wind circulation (see Figure 6.4).
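The rotation procedure can be imitated numerically. The spreading function below is a generic cos-power polar diagram assumed purely for illustration (it is not the specific model discussed in the text), but it reproduces the left/right ambiguity: a wind direction and its mirror image about the beam give identical Bragg-line ratios.

```python
import math

def wave_energy(direction, wind_dir, s=2):
    """Assumed polar diagram of wind-driven wave energy: a cos^(2s)((d - wind)/2)
    spreading model (an idealization, chosen only for this sketch)."""
    return math.cos((direction - wind_dir) / 2.0) ** (2 * s)

def bragg_ratio(beam_dir, wind_dir):
    """Ratio of energy in waves approaching the radar along the beam to the
    energy in waves receding from it -- the first-order Bragg line ratio."""
    toward = wave_energy(beam_dir, wind_dir)
    away = wave_energy(beam_dir + math.pi, wind_dir)
    return toward / away

beam = 0.0
wind = 0.7                            # wind 0.7 rad to one side of the beam
r = bragg_ratio(beam, wind)
r_mirror = bragg_ratio(beam, -wind)   # mirror-image wind direction
# r == r_mirror: a single beam cannot distinguish the two directions, which
# is why a sector of beam directions is needed to resolve the ambiguity.
```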
In practice, the observed positive and negative Doppler shifts are not quite
equal in magnitude. This occurs because an extra Doppler shift arises from
the bodily movement of the water surface on which the waves travel, this
movement being the surface current. The radar, however, is only capable of
determining the component of the total surface current along the direction
of the radar beam.
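For backscatter, a surface current with radial component vr displaces both Bragg lines by Δf = 2vr/λ0, so the current component follows directly from the measured offset; the numbers below are illustrative.

```python
def radial_current(doppler_offset_hz, radio_wavelength_m):
    """Radial surface-current component from the displacement of the Bragg
    lines: vr = delta_f * lambda0 / 2 (backscatter geometry)."""
    return doppler_offset_hz * radio_wavelength_m / 2.0

# Illustrative case: a 10 MHz ground wave radar (lambda0 = 30 m) sees the
# Bragg lines shifted 0.02 Hz from their expected position.
v_r = radial_current(0.02, 30.0)   # 0.3 m/s along the beam direction
```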
FIGURE 6.3
Typical spectra obtained for different wind orientations relative to the radar boresight.
(Shearman, 1981.)
Figure 6.2 shows a sky wave radar spectrum labeled with the various
oceanographic and meteorological quantities that can be derived from it. In
addition to the quantities that have already been mentioned, other quantities
can be derived from the second-order features. It should be noted that current
measurements from sky wave radars are contaminated by extra Doppler
shifts due to ionospheric layer height changes. If current measurements are
to be attempted, one must calibrate the ionospheric Doppler shift; this may
be done, for instance, by considering the echoes from an island.
There are a number of practical considerations to be taken into account
for sky wave radars. The most obvious of these is that, because of the huge
distances involved, they require very high power transmission and very
sensitive receiving systems (see Section 10.2.4.3 for a further discussion).
We ought perhaps to consider the behavior and properties of the iono-
sphere a little more. The lowest part of the atmosphere, called the tropo-
sphere, extends to a height of about 10 km. The troposphere contains 90%
of the gases in the Earth’s atmosphere and 99% of the water vapor. It is the
behavior of this part of the atmosphere that constitutes our weather. Above
the troposphere is the stratosphere, which reaches to a height of about 50 km
above the Earth’s surface. The boundary between the troposphere and the
stratosphere is called the tropopause. The ozone layer, which is so essential
to protect life forms from the effects of ultraviolet radiation, is situated in
the lower stratosphere. Ozone (O3) is formed by the action of the incoming
FIGURE 6.4
Radar-deduced wind directions (heavy arrows) compared with Meteorological-Office analyzed
winds. The discrepancies in the lower picture are due to the multiple peak structure on this
bearing. (Wyatt, 1983.)
n = √(1 − fp²/f²) (6.11)
where
fp is the plasma frequency given by f p = (1/2π ) (ee2 Ne/ε 0me ) ,
ee is the charge on an electron,
Ne is the density of free electrons,
ε0 is the permittivity of free space, and
me is the mass of an electron.
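These definitions can be evaluated directly; the electron density used below is a typical daytime F-layer value chosen for illustration.

```python
import math

E_CHARGE = 1.602176634e-19   # electron charge, C
EPS0 = 8.8541878128e-12      # permittivity of free space, F/m
M_E = 9.1093837015e-31       # electron mass, kg

def plasma_frequency(n_e):
    """fp = (1/2pi) * sqrt(ee^2 * Ne / (eps0 * me)), with Ne in m^-3."""
    return math.sqrt(E_CHARGE**2 * n_e / (EPS0 * M_E)) / (2.0 * math.pi)

def refractive_index(f, n_e):
    """Ionospheric refractive index (Equation 6.11): n = sqrt(1 - fp^2/f^2).
    Returns None for f at or below fp, where the wave cannot propagate and a
    vertically incident signal is reflected."""
    fp = plasma_frequency(n_e)
    if f <= fp:
        return None
    return math.sqrt(1.0 - (fp / f) ** 2)

# For Ne = 1e12 m^-3 the plasma frequency is about 9 MHz: a 15 MHz sky wave
# signal propagates with n just below 1, while a 5 MHz vertically incident
# signal is turned back by the layer.
fp = plasma_frequency(1e12)
n15 = refractive_index(15e6, 1e12)
n5 = refractive_index(5e6, 1e12)
```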
FIGURE 6.5
(a) Scattering from a sinusoidally corrugated surface with H < λ0. i, r, and s indicate the incident,
specularly reflected, and first-order scattered waves, respectively; (b) The backscatter case, ∆s
= π − ∆i; (c) The case showing general three-dimensional geometry with vector construction for
scattered radio waves.
sinusoidal swell wave of height H (H ≪ λ0) and wavelength λs, traveling with
its velocity in the plane of incidence (see Figure 6.5[a]). There will be three
scattered waves, with grazing angles of reflection of ∆i, ∆s+, and ∆s−, where:

cos ∆s± = cos ∆i ± λ0/λs (6.12)

If one of these scattered waves returns along the direction of the incident
wave, then ∆s− = π − ∆i (see Figure 6.5[b]) so that:

λ0/λs = cos ∆i − cos ∆s− = cos ∆i − cos (π − ∆i) = 2 cos ∆i (6.13)

i.e., λ0 = 2λs cos ∆i
which is just the Bragg condition.
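Solving the Bragg condition for the resonant sea wavelength is a one-liner; the radar frequency below is a typical decametric choice, used only as an example.

```python
import math

def bragg_sea_wavelength(radio_wavelength_m, grazing_angle_rad=0.0):
    """Sea wavelength selected by first-order Bragg scatter: from
    lambda0 = 2 * lambda_s * cos(delta_i) (Equation 6.13), solved for lambda_s."""
    return radio_wavelength_m / (2.0 * math.cos(grazing_angle_rad))

# A 10 MHz ground wave radar (lambda0 = 30 m) at near-zero grazing angle
# is resonant with 15 m sea waves.
lam_s = bragg_sea_wavelength(30.0)
```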
nλ0 = 2λs cos ∆i (6.14)

where n is a small integer. However, it has been shown that this higher-order
scattering is unimportant in most cases in which decametric radio waves are
incident on the surface of the sea; it only becomes important for very short
radio wavelengths and for very high sea states (Barrick, 1972b).
The condition expressed in Equation 6.12 can be regarded as one compo-
nent of a vector equation:
ks = ki ± K (6.15)
where ki, ks, and K are vectors in the horizontal plane and associated with
the incident and reflected radio waves and the swell, respectively.
The above discussion supposes that the incident radio wave, the normal
to the reflecting surface, and the wave vector of the swell are in the same
plane. This can be generalized to cover cases in which swell waves are
traveling in a direction that is not in the plane of incidence of the radio wave
(see Figure 6.5[c]).
The relationship between the Doppler shift observed in the scattered radio
wave and the swell wave is:
fs = fi ± fw (6.16)
where
fs is the frequency of the scattered wave,
fi is the frequency of the incident wave, and
fw is the frequency of the water wave.
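Combining Equation 6.16 with the deep-water dispersion relation ω² = gk fixes where the first-order Bragg lines sit in the Doppler spectrum; the sketch below assumes backscatter at grazing incidence, so the resonant sea wavelength is half the radio wavelength.

```python
import math

G_ACC = 9.81        # gravitational acceleration, m/s^2
C_LIGHT = 2.998e8   # speed of light, m/s

def water_wave_frequency(wavelength_m):
    """Deep-water dispersion relation omega^2 = g*k, returned as f in Hz."""
    k = 2.0 * math.pi / wavelength_m
    return math.sqrt(G_ACC * k) / (2.0 * math.pi)

def bragg_line_frequency(radio_freq_hz):
    """Doppler shift fw of the first-order Bragg lines (Equation 6.16),
    assuming grazing incidence so that lambda_s = lambda0 / 2."""
    lam_s = (C_LIGHT / radio_freq_hz) / 2.0
    return water_wave_frequency(lam_s)

# For a 10 MHz radar the first-order lines appear near +/-0.32 Hz, a scale
# consistent with the Doppler axis of Figure 6.2.
fw = bragg_line_frequency(10e6)
```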
An attempt could be made to determine the wave-directional spectrum by
using first-order returns and by using a range of radar frequencies and radar
look directions. This would involve a complicated hardware system. In
practice, it is likely to be easier to retain a relatively simple hardware system
and to use second-order, or multiple scattering, effects to provide wave
directional spectrum data. It is important to understand and quantify the
second-order effects because of the opportunities they provide for measuring
the wave height, the nondirectional wave-height spectrum, and the directional
wave height spectrum by inversion processes (see, for example, Shearman,
1981). The arguments given above can be extended to multiple scattering. For
successive scattering from two water waves with wave vectors K1 and K2, one
would use the following equations in place of Equations 6.15 and 6.16:
ks = ki ± K1 ± K2 (6.17)
and

fs = fi ± fw1 ± fw2 (6.18)

where fw1 and fw2 are the frequencies of the two waves.
If we impose the additional constraint that the scattered radio wave must
constitute a freely propagating wave of velocity c, then:
This results in scattering from two sea waves traveling at right angles,
analogous to the corner reflector in optics or microwave radar.
The method used by Barrick involves using a Fourier series expansion of
the sea-surface height and a Fourier series expansion of the electromagnetic
field. The electromagnetic fields at the boundary, and hence the coefficients
in the expansion of the electromagnetic fields, are expanded using pertur-
bation theory subject to the following conditions:
• The height of the waves must be very small compared with the radio
wavelength.
• The slopes at the surface must be small compared with unity.
• The impedance of the surface must be small compared with the
impedance of free space.
where σ1(ω) and σ2(ω) are the first-order and second-order scattering cross
sections, respectively, and they are given by:
where
k0 = the radar wave vector;
ω = Doppler frequency;
ωB = √(2gk0) = Bragg resonant frequency;
S(k) = sea-wave directional spectrum
and

σ2(ω) = 2⁶πk0⁴ ∑(m,m′=±1) ∫0…∞ ∫−π…π |Γ|² S(mk)S(m′k′) δ(ω − m√(gk) − m′√(gk′)) k dk dθ (6.22)
where
k, k¢ = wave numbers of two interacting waves where k + k′ = –2k0;
k, θ = polar coordinates of k;
Γ = coupling coefficient = ΓH + ΓEM; and
ΓH = hydrodynamic coupling coefficient:
ΓH = −(i/2)[k + k′ − (kk′ − k·k′)(ω² + ωB²)/(mm′√(kk′)(ω² − ωB²))] (6.23)
and ΓEM = electromagnetic coupling coefficient:
where
∆ = normalized electrical impedance of the sea surface.
The problem then is to invert Equations 6.21 and 6.22 to determine S(k),
the sea-wave directional spectrum, from the measured backscattering cross
section. These equations would enable one to compute the Doppler spectrum
(i.e., the power of the radio wave echo as a function of Doppler frequency)
by both first-order and second-order mechanisms, given the sea-wave height
spectrum in terms of wave vector. However, no simple direct technique is
available to obtain the sea-wave spectrum from the measured radar returns
using inversions of Equations 6.21 and 6.22. One approach is to simplify the
equation and thereby obtain a solution for a restricted set of conditions.
Alternatively, a model can be assumed for S(k) including some parameters,
in order to calculate σ1(ω) and to determine the values of the parameters
by fitting to the measured values of σ2(ω); for further details see Wyatt
(1983). This is one example of the general problem mentioned at the end of
Section 6.2 in relation to the inversion of Equations 6.8 and 6.9.
7
Active Microwave Instruments
7.1 Introduction
Three important active microwave instruments — the altimeter, the scatter-
ometer, and the synthetic aperture radar (SAR) — are considered in this
chapter. Examples of each of these have been flown on aircraft. The first
successful flight of these instruments in space was on board Seasat. Seasat
was a proof-of-concept mission that only lasted for 3 months before the
satellite failed. Although improved versions of these types of instruments
have been flown on several subsequent satellites, including Geosat, Earth
Remote-Sensing Satellite–1 (ERS-1), ERS-2, TOPEX/Poseidon, Jason-1, and
Envisat, the general principles of what is involved and examples of the
information that can be extracted from the data of each of them are very
well illustrated by the Seasat experience.
The principal measurement made by an altimeter is of the time taken for the
round trip of a very short pulse of microwave energy that is transmitted
vertically downward by the satellite, reflected at the surface of the Earth
(by sea or land), and then received back again at the satellite. The distance
of the satellite above the surface of the sea is then given by:
h = ½ct (7.1)

where c is the speed of light and t is the measured round-trip time.
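Equation 7.1 and the quoted ±10 cm design accuracy together show why altimetry demands sub-nanosecond timing; the numbers below are round illustrative values.

```python
C = 2.99792458e8  # speed of light, m/s

def altimeter_height(round_trip_time_s):
    """Equation 7.1: h = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A satellite about 800 km above the sea: the round trip takes ~5.3 ms.
t = 2.0 * 800e3 / C
h = altimeter_height(t)

# A +/-10 cm height accuracy corresponds to resolving the round-trip time
# to dt = 2 * 0.1 / c, i.e. roughly two-thirds of a nanosecond.
dt = 2.0 * 0.1 / C
```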
FIGURE 7.1
Schematic of Seasat data collection, modelling and tracking system.
The Seasat altimeter transmitted short pulses at 13.5 GHz with a duration
of 3.2 µs and a pulse repetition rate of 1020 Hz using a 1-m diameter antenna
looking vertically downward. The altimeter was designed to achieve an
accuracy of ±10 cm in the determination of the geoid (details of the design
of the instrument are given by Townsend, 1980). In order to achieve this
accuracy, one must determine the distance between the surface of the sea
and the satellite very accurately. The altitude h* measured with respect to a
reference ellipsoid (see Figure 7.1) can be expressed as:
h* = h + hsg + hi + ha + hs + hg + ht + h0 + e (7.2)
where h* is the distance from the center of mass of the satellite to the reference
ellipsoid at the subsatellite point; h is the height of the satellite above the
surface of the sea as measured by the altimeter; hsg represents the effects of
spacecraft geometry, including the distance from the altimeter feed to the
center of mass of the satellite and the effect of not pointing vertically; hi is
the total height equivalent of all instrument delays and residual biases; ha is
the total atmospheric correction; hs is the correction due to the surface and
radar pulse interaction and skewness in the surface wave height distribu-
tions; hg is the subsatellite geoid height; ht is the correction for solid earth
and ocean tides; h0 is the ocean-surface topography due to such factors as
FIGURE 7.2
Sea surface height over sea mount-type features.
FIGURE 7.3
Sea surface height over trench-type features.
FIGURE 7.4
Dynamic height over the Gulf Stream.
V(r, θ, λ) = (GM/R) ∑l=0…∞ (R/r)^(l+1) ∑m=0…l Pl^m(sin θ)(Clm cos mλ + Slm sin mλ) (7.3)
where θ is the co-latitude (i.e., the latitude measured from the North Pole),
λ is the longitude, G is the gravitational constant, M is the mass of the Earth,
and R is mean equatorial radius of the Earth.
The gravitational potential is therefore described by the values of the
coefficients Clm and Slm. It is obviously not feasible to determine these
and

∆g(θ, λ) = γ ∑l=2…L (l − 1) ∑m=0…l Pl^m(sin θ)(Clm cos mλ + Slm sin mλ) (7.5)
where γ is a constant.
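To see the flavor of the spherical-harmonic description, the sketch below keeps only the degree-2 zonal term of Equation 7.3 (the familiar J2 oblateness correction). The simplified single-term form, written here in terms of latitude, is an assumption of this sketch, and the J2 value is the standard published one rather than a coefficient quoted in the text.

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EQ = 6.378137e6     # mean equatorial radius, m
J2 = 1.08263e-3       # dominant zonal coefficient (C20 = -J2)

def p2(x):
    """Legendre polynomial P2(x) = (3x^2 - 1) / 2."""
    return 0.5 * (3.0 * x * x - 1.0)

def potential_j2(r, lat_rad):
    """Equation 7.3 truncated after the degree-2 zonal term:
    V = (GM/r) * (1 - J2 * (R/r)^2 * P2(sin(lat)))."""
    return (GM / r) * (1.0 - J2 * (R_EQ / r) ** 2 * p2(math.sin(lat_rad)))

# At the same radius, the oblateness term changes the potential between
# equator and pole by about 1.5 * J2, i.e. roughly one part in 600.
v_eq = potential_j2(R_EQ, 0.0)
v_pole = potential_j2(R_EQ, math.pi / 2.0)
```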
In addition to the various satellite-flown altimeters previously mentioned
that followed the Seasat altimeter, a number of satellite missions (CHAMP,
GRACE, and GOCE) have recently been dedicated to the study of the Earth’s
gravity (i.e., the study of the geoid); for details, see pages 627 to 630 of the
book by Robinson (2004).
As well as using the time of flight of the radar pulse to determine the
height of the altimeter above the surface of the sea, one can study the shape
of the return pulse to obtain information about the conditions at the surface
of the sea, especially the roughness of the surface and, through that, the near-
surface wind speed. For a perfectly flat horizontal sea surface, the leading
edge of the return pulse would be a very sharp square step function corre-
sponding to a time given by Equation 7.1 for radiation that travels vertically;
radiation traveling at an angle inclined to the vertical arrives slightly later
and causes a slight rounding of this leading edge. If large waves are present
on the surface of the sea, some radiation is reflected from the tops of the
waves, corresponding to a slightly smaller value of h and therefore a slightly
smaller value of t; in the same way, an extra delay of the radiation reflected
by the troughs of the waves occurs. Thus, for a rough sea, the leading edge
of the return pulse is considerably less sharp than the leading edge for a
calm sea (see Figure 7.5). Another way to think about this is to consider the
size of the “footprint” of a radar altimeter pulse on the water surface — that
is, the area of the surface of the sea that contributes to the return pulse
received by the altimeter; this depends on the sea state. At a given distance
from the nadir, the probability of a wave having facets that reflect radiation
back to the satellite increases with increasing roughness of the surface of the sea.
The area actually illuminated is the same; it is the area from which reflected
radiation is returned to the satellite that varies with sea state. For a low sea
state, the spot size for the Seasat altimeter was approximately 1.6 km. For a
higher sea state, the spot size increased up to about 12 km.
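The broadening of the leading edge can be estimated crudely from geometry: reflections from wave crests return earlier and from troughs later, spreading the arrivals over roughly twice the wave height divided by c. This is a back-of-envelope figure of our own, not a formula from the text.

```python
C = 2.99792458e8  # speed of light, m/s

def leading_edge_spread(wave_height_m):
    """Rough spread in two-way arrival time between crest and trough
    reflections: dt ~ 2 * H / c."""
    return 2.0 * wave_height_m / C

# For 4 m waves the leading edge smears over roughly 27 ns -- large compared
# with the sub-nanosecond timing precision of the height measurement, which
# is why the pulse shape carries usable sea-state information.
dt = leading_edge_spread(4.0)
```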
FIGURE 7.5
Return pulse shape as a function of significant wave height.
There is no exact theoretical formula that can be used to determine the wind
speed from the shape of the return pulse via the significant wave height (SWH).
The relationship between the change in shape of the return pulse and the value of H1/3 at the
surface was determined empirically beforehand and, in processing the altim-
eter data from the satellite, a look-up table containing this empirical relation-
ship was used. A comparison between the results obtained from the Seasat
altimeter and from buoy measurements is presented in Figure 7.6. Comparisons
between satellite-derived wave heights and measurements from buoys for
FIGURE 7.6
Scatter diagram comparing SWH estimates from the National Oceanic and Atmospheric
Administration (NOAA) buoy network and ocean station PAPA with Seasat altimeter onboard
processor estimates (51 observations).
FIGURE 7.7
Altimeter measurements over Hurricane Fico.
FIGURE 7.8
A scatter plot of Seasat radar altimeter inferred wind speeds as a function of the corresponding
buoy measurements. (Guymer, 1987.)
The determination of the wind speed is also carried out via σ⁰, the nor-
malized scattering cross section, determined from the received signal, where:

σ⁰ = a0 + a1(AGC) + a2(h) + LP + La (7.6)
where
AGC is the automatic gain control attenuation,
h is the measured height,
LP represents off-nadir pointing losses, and
La represents atmospheric attenuation.
For Seasat, the values of a0, a1(AGC), and a2(h) were determined from prelaunch
testing and by comparison with the Geodynamics Experimental Ocean Satel-
lite–3 (GEOS-3) satellite altimeter at points where the orbits of the two satellites
intersected. The calibration curve used to convert σ0 into wind speed was
obtained using the altimeter on the GEOS-3 satellite, which had been calibrated
with in situ data obtained from data buoys equipped to determine wind
speeds. Comparisons between wind speeds derived from the Seasat altimeter
and from in situ measurements using data buoys are shown in Figure 7.8.
As previously mentioned, several satellite-flown altimeters have been used
since the one flown on Seasat, and the accuracy of the derived parameters
has been improved. However, the principles involved in analyzing the data
remain the same. Comparisons between satellite-derived wave heights and
measurements from buoys for a number of more recent systems than Seasat
are illustrated by Robinson (2004).
λs sin θi = ½nλ (7.7)
et al., 1982); it is generally lower for horizontal polarization than for vertical
polarization, and it appears to depend very little on the microwave frequency
in the range 5 to 20 GHz. An empirical formula that is used for the back-
scattering coefficient is:

σ⁰ = a0 + a1 cos ϕ + a2 cos 2ϕ (7.8)

or

σ⁰ = G(ϕ, θi, P) U^H(ϕ, θi, P)

where U is the wind speed, ϕ is the relative wind direction, and P indicates
whether the polarization is vertical or horizontal.
The coefficients a0, a1, and a2, or the functions G(ϕ, θi, P) and H(ϕ, θi, P), are
derived from fitting measured backscattering results with known wind speeds
and directions in calibration experiments. The form of the backscattering
coefficient as a function of wind speed and direction is shown in Figure 7.9.
Originally, these functions were determined with data from scatterometers
flown on aircraft but, after the launch of Seasat, the values of these functions
have been further refined. Assuming that the functions in Equation 7.8 have
been determined, one can then use this equation with measurements of σ 0
for two or more azimuth angles ϕ to determine both wind speed and wind
direction.
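The extraction and its ambiguity can be imitated with a toy model function. The two-harmonic azimuth dependence and its coefficients below are invented for illustration only (the operational model functions were empirical); the point is that two azimuth looks can still admit more than one consistent wind direction.

```python
import math

def sigma0_model(phi, a0=1.0, a1=0.3, a2=0.2):
    """Illustrative azimuth dependence of the backscattering coefficient on
    the relative wind direction phi; coefficients are invented, not calibrated."""
    return a0 + a1 * math.cos(phi) + a2 * math.cos(2.0 * phi)

def matching_wind_directions(meas_fore, meas_aft, sep=math.pi / 2.0,
                             n=3600, tol=1e-3):
    """Scan candidate wind directions for ones whose predicted sigma0 matches
    both beam measurements (beams separated by `sep` in azimuth). More than
    one direction may survive: the ambiguity discussed in the text."""
    hits = []
    for i in range(n):
        wd = 2.0 * math.pi * i / n
        ok = (abs(sigma0_model(-wd) - meas_fore) < tol
              and abs(sigma0_model(sep - wd) - meas_aft) < tol)
        if ok:
            hits.append(wd)
    return hits

# Simulate measurements for a "true" wind direction of 1.0 rad and check
# that the scan recovers it (possibly alongside ambiguous alternatives).
true_wd = 1.0
fore = sigma0_model(-true_wd)
aft = sigma0_model(math.pi / 2.0 - true_wd)
dirs = matching_wind_directions(fore, aft)
```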
FIGURE 7.9
Backscatter cross section σ⁰ against relative wind direction for various wind speeds. Vertical
polarization at 30° incidence angle. (Offiler, 1983.)
FIGURE 7.10
The Seasat scatterometer viewing geometry: section in the plane of beams 1 and 3 (top diagram),
beam illumination pattern and ground swath (bottom diagram). This scatterometer operated
at 14.6 GHz (Ku-band) and a set of Doppler filters defined 15 cells in each antenna beam. Either
horizontal or vertical polarization measurements of backscatter could be made.
The scatterometer on the Seasat satellite used four beams altogether; two
of them pointed forward, at 45° to the direction of flight of the satellite,
and two pointed aft, also at 45° to the direction of flight (see Figure 7.10).
Two looks at a given area on the surface of the sea were obtained from the
forward-pointing and aft-pointing beams on one side of the spacecraft; the
change, as a result of Earth rotation, in the area of the surface actually
viewed is quite small. The half-power beam widths were 0.5° in the hori-
zontal plane and about 25° in the vertical plane. This gave a swath width
of about 500 km on each side, going from 200 km to 700 km away from
the subsatellite track. The return signals were separated to give backscat-
tering data from successive areas, or cells, along the strip of sea surface
being illuminated by the transmitted pulse. The spatial resolution was thus
approximately 50 km. The extraction of the wind speed and direction from
the satellite data involves the following steps:
	•	Identifying the position of each cell on the surface of the Earth and
determining the area of the cell and the slant range
	•	Calculating the ratio of the received power to the transmitted power
	•	Determining the values of the system losses and the antenna gain
in the cell direction from the preflight calibration data
	•	Calculating σ 0 from the radar equation and correcting this calculation
for atmospheric attenuation derived from the Scanning Multichannel
Microwave Radiometer (SMMR) (which was also flown on the Seasat
satellite) as well as for other instrumental biases.
It is then necessary to combine the data from the two views of a given cell
from the fore and aft beams and thence determine the wind speed and
direction using look-up tables for the functions G(ϕ, θi, P) and H(ϕ, θi, P). The
answer, however, is not necessarily unique; there can be as many as four
solutions, each with similar values for wind speed but with quite different
directions (see Figure 7.9). The scatterometer on Seasat was designed to
measure the surface wind velocity with an accuracy of ±2 ms–1 or ±10%
(whichever is the greater) in speed and ±20° in direction, over the range of
4 to 24 ms−1 in wind speed.
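The retrieval just described can be sketched in code. The model function below is an invented stand-in for the real SASS look-up tables G(ϕ, θi, P) and H(ϕ, θi, P) (all coefficients are illustrative, not from the literature), but it reproduces the essential behaviour: σ0 increases with wind speed and varies with the wind direction relative to each beam's look direction, so matching two beam measurements can leave several candidate wind vectors.

```python
import math

def sigma0_db(wind_speed, rel_azimuth_deg):
    """Toy geophysical model function (NOT the real SASS tables; the
    coefficients are invented): backscatter in dB rises with wind speed
    and varies with the wind direction relative to the look direction."""
    phi = math.radians(rel_azimuth_deg)
    return (-24.0 + 16.0 * math.log10(wind_speed)
            + 1.0 * math.cos(phi) + 3.0 * math.cos(2.0 * phi))

def retrieve_wind(measurements_db, look_azimuths_deg, tol_db=0.2):
    """Brute-force search for (speed, direction) pairs whose modeled
    sigma0 matches every beam measurement to within tol_db; several
    directions may fit, which is the ambiguity discussed in the text."""
    solutions = []
    for direction in range(0, 360, 2):
        for tenths in range(20, 301):               # 2.0 to 30.0 m/s
            v = tenths / 10.0
            err = max(abs(sigma0_db(v, direction - az) - m)
                      for az, m in zip(look_azimuths_deg, measurements_db))
            if err < tol_db:
                solutions.append((v, direction))
    return solutions
```

For two beams 90° apart, such a search recovers the true wind vector but may also return aliases at other directions with similar speeds; operationally, the ambiguity is resolved with meteorological background information.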
In spite of the Seasat satellite’s relatively short lifespan, some evaluations of
the derived parameters were obtained by comparing them with in situ mea-
surements over certain test areas. One such exercise was the Gulf of Alaska
Experiment, which involved several oceanographic research vessels and buoys
and an aircraft carrying a scatterometer similar to the one flown on Seasat
(Jones et al., 1979). Comparisons with the results from in situ measurements
showed that the results obtained from the Seasat scatterometer were generally
correct to the level of accuracy specified at the design stage, although system-
atic errors were detected and this information was used to update the algo-
rithms used for processing the satellite data (Schroeder et al., 1982).
A second example is the Joint Air-Sea Interaction (JASIN) project, which
took place in the north Atlantic between Scotland and Iceland during the
period that Seasat was operational. Results from the JASIN project also showed
that the wind vectors derived from the Seasat scatterometer data were accurate
well within the values specified at the design stage. Again, these results were
used to refine the algorithms used to derive the wind vectors from the scat-
terometer data for other areas (Jones et al., 1981; Offiler, 1983). The Satellite
Meteorology Branch of the U.K. Meteorological Office made a thorough inves-
tigation of the Seasat scatterometer wind measurements, using data for the
JASIN project, which covered a period of 2 months. The Institute of Oceano-
graphic Sciences (then at Wormley, U.K.) had collated much of the JASIN data,
but the wind data measured in situ applied to the actual height of the ane-
mometer that provided the measurements, which varied from 2.5 m above
sea level to 23 m above sea level. For comparison with the Seasat data, the
wind data were corrected to a common height of 19.5 m above sea level. Each
Seasat scatterometer value of the wind velocity was then paired, if possible,
with a JASIN observation within 60 km and 30 minutes; a total of 2724 such
pairs were obtained. Because more than one solution for the direction of the
wind derived from the scatterometer data was possible, the value that was
closest in direction to the JASIN value was chosen. Comparisons between the
[Figure 7.11 scatter plots: (a) wind speed (ms−1), N = 2724, r = 0.84, regression SASS = 0.2 + …;
(b) wind direction (°), N = 2724, r = 0.99, regression SASS = 2.7 + 0.99 × JASIN.]
FIGURE 7.11
Scatter diagrams of (a) wind speed and (b) wind direction measurements made by the Seasat
scatterometer against colocated JASIN observations. The design root-mean-square limits of
2 ms−1 and 20° are indicated by the solid parallel lines and the least-squares regression fit by
the dashed line. Key: * = 1 observation pair; 2 = 2 coincident observations; etc.; ‘0’ = 10 and ‘@’ =
more than 10. (Offiler, 1983.)
wind speeds obtained from the Seasat scatterometer and the JASIN surface
data are shown in Figure 7.11(a); similar comparisons for the direction are
given in Figure 7.11(b). Overall, the scatterometer-derived wind velocities
agreed with the surface data to within ±1.7 ms–1 in speed and ±17° in direction.
However, data from one particular Seasat orbit suggest that serious errors in
scatterometer-derived wind speeds may be obtained when thunderstorms are
present (Guymer et al., 1981; Offiler, 1983).
One of the special advantages of satellite-derived data is its high spatial
density. This is illustrated rather well by the example of a cold front shown
in Figure 7.12(a) and Figure 7.13. Figure 7.12(a) shows the synoptic situation
at midnight GMT on August 31, 1978, and Figure 7.12(b) shows the wind
field. These images were both derived from the U.K. Meteorological Office’s
10-level model objective analysis on a 100-km grid. Fronts have been added
manually, by subjective analysis. The low pressure over Iceland had been
moving north-eastward, bringing its associated fronts over the JASIN area
by midnight. On August 31, 1978, Seasat passed just south of Iceland at 0050
GMT, enabling the scatterometer to measure winds in this area (see Figure
7.13, which also shows the observations and analysis at 0100 GMT). The
points M and T indicate two stations that happened to be on either side of
the cold front. At most points, there are four possible solutions indicated,
but the front itself shows clearly in the scatterometer-derived winds as a line
of points at which there are only two, rather than four, solutions. With
experience, synoptic features such as fronts, and especially low pressure
centers, can be positioned accurately, even with the level of ambiguity of
solutions. The subjective analysis of scatterometer-derived wind fields has
been successfully demonstrated by, for example, Wurtele et al. (1982).
As previously mentioned, Seasat lasted only 3 months in operation. Since
Seasat, various scatterometers have been flown in space. The first scatter-
ometer flown after Seasat was on the European Space Agency’s ERS-1
satellite, which was launched in 1991. The next was a U.S. instrument,
NSCAT, which was launched in 1996. After the failure of NSCAT, another
scatterometer, SeaWinds, was launched on the QuikSCAT platform in 1999.
Instead of a small number of fixed antennae, this scatterometer uses a
rotating antenna. It is capable of measuring wind speed to ±2 ms–1 in the
range 3 to 20 ms–1 and to 10% accuracy in the range 20 to 30 ms–1 and wind
direction to within 20°. Various empirical models relating σ 0 to wind
velocity have been developed for these systems (see for example Robinson
[2004]). The accuracy of the retrieved wind speeds has thus been improved
and scatterometers have become accepted as operational instruments, used
by meteorologists as a source of real-time information on global wind
distribution and invaluable for monitoring the evolution of tropical
cyclones and hurricanes. Oceanographers also have come to rely on the
scatterometer record for forcing ocean models. As operational systems are
developed for ocean forecasting, developers will look to scatterometers to
provide near-real time input in, for example, oil spill dispersion models or
wave forecasting models.
FIGURE 7.12
Example of (a) mean sea level pressure and (b) 1000 mbar vector winds for August 31, 1978.
[Figure 7.13 plot: JASIN observations at 01Z plotted over 55–65°N, 5–20°W, with isobars from
1012 to 1024 mbar and a high (H) and low (L) marked. Key: GE = Gardline Endurer; H = Hecla;
M = Meteor; T = Tydeman; W2 = Buoy W2.]
FIGURE 7.13
Cold front example, 0050 GMT, orbit 930 (vertical polarization). (Offiler, 1983.)
[Figure 7.14 block diagram: in real time, a master oscillator, multiplier and modulator, amplifier,
antenna, mixer, receiver, video amplifier, and CRT record the returns from the terrain onto a
signal film; post mission, an optical correlator converts the signal film into an image film for
the user.]
FIGURE 7.14
Imaging radar operations.
have very much poorer spatial resolution (from 27 km to 150 km for the
SMMR on Seasat or Nimbus-7 [see Section 2.5]). Better spatial resolution
can be achieved with an active microwave system but, as mentioned in
Section 2.5, a conventional (or real aperture) radar of the size required
cannot be carried on a satellite. SAR provides a solution to the size
constraints.
The reconstruction of an image from SAR data is not trivial or inexpen-
sive in terms of computer time, and the theories involved in the develop-
ment of the algorithms that have to be programmed are complex. This
means that the use of SAR involves a sophisticated application of radar
system design and signal-processing techniques. Thus, an SAR for remote
sensing work consists of an end-to-end system that contains a conventional
radar transmitter, an antenna, and a receiver together with a processor
capable of making an image out of an uncorrelated Doppler phase history.
A simplified version of an early system is shown in Figure 7.14. As with
any other remote sensing system, the actual design used depends on the
user requirements and on the extent to which it is possible to meet these
requirements with the available technology. The system illustrated is based
on the earliest implementation technique used to produce images from
SAR — that of optical processing. Although optical processing has some
advantages, and was very important for the generation of quicklook images
at the time of Seasat, it has now been replaced by electronic (digital)
processing techniques.
The key to SAR image formation lies in the Doppler effect — in this case,
the shift in frequency of the signal transmitted and received by a moving
radar system. The usual expression for Doppler frequency shift is:
∆f = ±vf/c = ±v/λ   (7.10)
The velocity, v, in this expression is the radial component of the velocity
which, in this case, is the velocity of the platform (aircraft or satellite) that
is carrying the radar. The positive sign corresponds to the case of approach
of source and observer, and the negative sign corresponds to the case of
increasing separation between source and observer. For radar, there is a two-
way transit of the radio waves between the transmitter and receiver giving
a shift of:
∆f = ±2v/λ   (7.11)
For SAR then, the surfaces of iso-Doppler shift are cones with their axes
along the line of flight of the SAR antenna and with their vertices at the
current position of the antenna (see Figure 7.15); the corresponding iso-
Doppler contours on the ground are shown in Figure 7.16.
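A small numerical illustration of Equation 7.11 (the 14.6 GHz frequency is the Seasat scatterometer's; the 7.5 km/s platform speed and the 1° off-broadside angle are assumed values):

```python
import math

C = 2.998e8               # speed of light (m/s)
F_RADAR = 14.6e9          # Seasat scatterometer frequency (Hz)
WAVELENGTH = C / F_RADAR  # about 2.05 cm

def two_way_doppler(radial_velocity):
    """Equation 7.11: delta_f = +/- 2v/lambda for the two-way radar path."""
    return 2.0 * radial_velocity / WAVELENGTH

# Radial velocity component of a scatterer 1 degree off broadside,
# seen from a platform moving at 7.5 km/s:
v_r = 7.5e3 * math.sin(math.radians(1.0))
shift_hz = two_way_doppler(v_r)
```

Even a 1° departure from broadside produces a Doppler shift of several kilohertz, which is what makes the iso-Doppler cells of Figure 7.16 separable.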
A few points from the theory of conventional (real aperture) radar should
be reconsidered for airborne radars. For an isotropic antenna radiating
power P, the energy flux density at distance R is:
P/(4πR²)   (7.12)
FIGURE 7.15
Iso-Doppler cone.
FIGURE 7.16
Iso-Doppler ground contours.
If the power is concentrated into a solid angle Ω instead of being spread out
isotropically, the flux will be:
P/(ΩR²)   (7.13)
in the direction of the beam and 0 in other directions. The half-power beam
width of an aperture can be expressed as:
θ = 1/η   (7.14)

where η is the size of the aperture expressed in wavelengths. θ can be
expressed as:

θ = Kλ/D   (7.15)
where λ is the wavelength, D is the aperture dimensions, and K is a numerical
factor, the value of which depends on the characteristics of the particular
antenna in question. K is of the order of unity and is often taken to be equal
to one for convenience.
For an angular resolution θ, the corresponding linear resolution at range
R will be given by Rθ. If the same antenna is used for both transmission and
reception, the angular resolution is reduced to θ/2 and the linear resolution
becomes Rλ/2D. For a radar system mounted on a moving vehicle, this value
is the along-track resolution. From this formula, one can see that for con-
ventional real aperture radar, resolution is better the closer the target is to
FIGURE 7.17
Angular resolution.
the radar. Therefore, a long antenna and a short wavelength are required for
good resolution.
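The along-track resolution figures can be made concrete with Seasat-SAR-like numbers (assumed here for illustration). The focused-SAR azimuth resolution of about D/2, independent of range, is quoted as the standard result; it is not derived in the passage above.

```python
# Along-track resolution, real aperture vs synthetic aperture, with
# Seasat-SAR-like values (assumed here for illustration only).
wavelength = 0.235    # m, L-band (~1.275 GHz)
antenna_len = 10.7    # m
slant_range = 850e3   # m

# Two-way real-aperture along-track resolution R*lambda/(2D):
real_ap_res = slant_range * wavelength / (2.0 * antenna_len)

# Standard focused-SAR azimuth resolution D/2, independent of range:
sar_res = antenna_len / 2.0

print(f"real aperture: {real_ap_res / 1e3:.1f} km, SAR: {sar_res:.2f} m")
```

With these numbers a real aperture resolves roughly 9 km along track from orbit, while aperture synthesis brings the figure down to a few metres, which is why SAR is indispensable from space.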
Now consider the question of the resolution in a direction perpendicular
to the direction of motion of the moving radar system. A high-resolution
radar on an aircraft is mounted so as to be side-looking, rather than looking
vertically downward; the acronym SLAR (side-looking airborne radar)
follows from this. The reason for looking sideways is to remove the problem
of ambiguity, or coalescence, that would otherwise arise between the two returns
from points equidistant from the sub-aircraft track if a vertical-looking radar were
used. The radiation pattern is illustrated in Figure 7.17 (i.e., a narrow beam
directed at right angles to the direction of flight of the aircraft). A pulse of
radiation is transmitted and an image of a narrow strip of the Earth’s surface
can be generated from the returns (see Figure 7.18). By the time the next
FIGURE 7.18
Pulse ranging.
pulse is transmitted and received, the aircraft has moved forward a little and
another strip of the Earth’s surface is imaged. A complete image of the swath
AB is built up by the addition of the images of successive strips. Each strip
is somewhat analogous to a scan line produced by an optical or infrared
scanner. Suppose that the radar transmits a pulse of length L (L = cτ, where τ
is the duration of the pulse); then if the system is to be able to distinguish
between two objects, the reflected pulses must arrive sequentially and not
overlap. The objects must therefore be separated by a distance along the
ground that is greater than L/(2 cos ψ), where ψ is the angle between the
direction of travel of the pulse and the horizontal. The resolution along
the ground in the direction at right angles to the line of flight of the platform,
or the range resolution as it is called, is thus c/(2βcosψ), where β, the pulse
bandwidth, is equal to 1/τ.
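A short numerical sketch of this range-resolution formula (the pulse duration and geometry are assumed values):

```python
import math

C = 2.998e8  # speed of light (m/s)

def ground_range_resolution(bandwidth_hz, psi_deg):
    """c/(2*beta*cos(psi)): beta is the pulse bandwidth (1/tau for a
    plain pulse of duration tau) and psi the angle between the pulse
    propagation direction and the horizontal."""
    return C / (2.0 * bandwidth_hz * math.cos(math.radians(psi_deg)))

# A plain 0.05 microsecond pulse (beta = 20 MHz) arriving at psi = 70 deg
# (both values assumed for illustration):
res_m = ground_range_resolution(1.0 / 0.05e-6, 70.0)
```

The slant-range figure c/2β is fixed by the bandwidth alone; projecting onto the ground through cos ψ stretches it, so steeper pulse directions (larger ψ) degrade the ground-range resolution.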
It is possible to identify limits on the pulse repetition frequency (PRF) that
can be used in an SAR. The Doppler history of a scatterer as the beam passes
over it is not continuous but is sampled at the PRF. The sampling must be
at a frequency that is at least twice the highest Doppler frequency in the
echo, and this sets a lower limit for the PRF. An upper limit is set by the
need to sample the swath unambiguously in the range direction — in other
words, the echoes must not overlap. The PRF limits prove to be:
2v/D ≤ PRF ≤ c/(2W cos ψ)   (7.16)
where W is the swath width along the ground in the range direction.
These are very real limits for a satellite system and effectively limit the swath
width achievable at a given azimuth resolution.
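Equation 7.16 can be evaluated with Seasat-like numbers (assumed for illustration) to see how much room the PRF window leaves:

```python
import math

C = 2.998e8  # speed of light (m/s)

def prf_limits(v_platform, antenna_len, swath_width, psi_deg):
    """Equation 7.16: 2v/D <= PRF <= c/(2*W*cos(psi))."""
    lower = 2.0 * v_platform / antenna_len
    upper = C / (2.0 * swath_width * math.cos(math.radians(psi_deg)))
    return lower, upper

# Seasat-like values, assumed for illustration: 7.5 km/s platform speed,
# 10.7 m antenna, 100 km swath, pulse direction about 70 deg from horizontal.
lo_prf, hi_prf = prf_limits(7.5e3, 10.7, 100e3, 70.0)
```

A PRF of around 1500 Hz sits comfortably inside this window; widening the swath W lowers the upper limit, which is the trade-off between swath width and azimuth resolution noted in the text.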
It is important to realize that an SAR and a conventional real aperture
radar system achieve the same range resolution; the reason for utilizing
aperture synthesis is to improve along-track resolution (also called the angu-
lar cross-range resolution or azimuth resolution). It should also be noticed
that the range resolution is independent of the distance between the ground
and the vehicle carrying the radar. The term “range resolution” is used to
mean the resolution on the ground and at right angles to the direction of
flight; it is not a distance along the direction of propagation of the pulses.
To increase the range resolution, for a given angle ψ, the pulse duration τ
has to be made as short as possible. However, it is also necessary to transmit
enough power to give rise to a reflected pulse that, on return to the antenna,
will be large enough to be detected by the instrument. In order to transmit
a given power while shortening the duration of the pulse, the amplitude of
the signal must be increased; however, it is difficult to design and build
equipment to transmit very short pulses of very high energy. A method that
is very widely adopted to cope with this problem involves using a “chirp”
instead of a pulse of a pure single frequency. A chirp consists of a long pulse
FIGURE 7.19
Seasat SAR image of the Tay Estuary, Scotland, from orbit 762 on August 19, 1978, processed
digitally. (RAE Farnborough.)
FIGURE 7.20
Optically processed Seasat SAR image of the English Channel from orbit 762 on August 19, 1978.
FIGURE 7.21
Digitally processed data for part of the scene shown in Figure 7.20. (RAE Farnborough.)
FIGURE 7.22
Diagram to illustrate the geometry associated with the phase difference for interferometric SAR.
point P. Then the path difference between the paths of the two return signals
is 2(r2 – r1) and the corresponding phase difference ϕ is given by:
ϕ = (4π/λ)|r2 – r1| (7.17)
The baseline B representing the separation of the positions of the two antennae
can be written as (0, By, Bz), where each of By and Bz may be positive or
negative. Thus r2 = r1 + B and a simple vector calculation shows that:

r2 (= |r2|) = r1 + By sin θ + Bz cos θ   (7.18)

where it is assumed that the baseline B is short compared with r1 and r2, so
that terms of the order of B² can be neglected. Therefore:

ϕ = (4π/λ)(By sin θ + Bz cos θ)   (7.19)
The height z(x, y) of the point P above the chosen datum is given by:
z(x, y) = H – r1cosθ (7.20)
and the angle ξ , the baseline tilt angle, can be brought in by writing:
FIGURE 7.23
A simulated noise-free histogram for a single pyramid on an otherwise flat surface. (Woodhouse,
2006.)
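The chain from Equation 7.17 to Equation 7.20 can be sketched numerically: given an interferometric phase, invert Equation 7.19 for the look angle θ and then apply Equation 7.20. Phase unwrapping, which a real interferometric processor must perform, is ignored here, and the baseline and wavelength values in the example are invented.

```python
import math

def look_angle_from_phase(phase, wavelength, b_y, b_z):
    """Invert Equation 7.19, phase = (4*pi/lambda)*(By*sin(theta) +
    Bz*cos(theta)), for theta by bisection on [0, pi/2]; assumes the
    geometry keeps the relation single-valued there (By >> Bz > 0)."""
    def f(t):
        return (4.0 * math.pi / wavelength) * (
            b_y * math.sin(t) + b_z * math.cos(t)) - phase
    lo, hi = 0.0, math.pi / 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def surface_height(H, r1, theta):
    """Equation 7.20: z(x, y) = H - r1*cos(theta)."""
    return H - r1 * math.cos(theta)
```

The bisection stands in for what is, in practice, a direct trigonometric inversion; the point is that phase fixes θ, and θ together with the slant range r1 and platform altitude H fixes the terrain height.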
8
Atmospheric Corrections to Passive
Satellite Remote Sensing Data
8.1 Introduction
Distinction should be made between two types of situations in which remote
sensing data are used. In the first type, a complete experiment is designed
and carried out by a team of people who are also responsible for the analysis
and interpretation of the data obtained. Such experiments are usually
intended either to gather geophysical data or to demonstrate the feasibility
of an environmental applications project involving remote sensing tech-
niques. The second type of situation is one in which remotely sensed data
are acquired on a speculative basis by the operator of an aircraft or satellite
and then distributed to potential users at their request. In this second situation,
it is necessary to draw the attention of the users to the fact that atmospheric
corrections may be rather important if they propose to use the data for envi-
ronmental scientific or engineering work.
Useful information about the target area of the land, sea, or clouds is
contained in the physical properties of the radiation leaving that target
area. Remote sensing instruments measure the properties of the radiation
that arrives at the instrument. This radiation has traveled some distance
through the atmosphere and accordingly has suffered both attenuation and
augmentation in the course of that journey. The problem that faces the user
of remote sensing data is the difficulty in accurately regenerating the details
of the properties of the radiation that left the target area from the data
generated by the remote sensing instrument. An attempt to set up the
radiative transfer equation to describe all the various processes that corrupt
the signal that leaves the target area on the land, sea, or cloud from first
principles is a nice exercise in theoretical atmospheric physics and, of
course, is a necessary starting point for any soundly based attempt to apply
atmospheric corrections to satellite data. However, in a real situation, the
problem soon arises that suitable values of various atmospheric parameters
have to be inserted into the radiative transfer equation in order to arrive
x, y, and z, and of the time variable, t. Because of the paucity of the data,
it is common to assume a horizontally stratified atmosphere — in other
words, the atmospheric parameters are assumed to be functions of the
height z but not the x and y coordinates in a horizontal plane. The situation
may be simplified further by assuming that the atmospheric parameters
are given by some model atmosphere based only on the geographical
location and the time of year. However, this approach is not realistic
because the actual atmospheric conditions differ quite considerably from
such a model. It is clearly much better to try to use values of the atmo-
spheric parameters that apply at the time that the remotely sensed data
are collected. This can be done by using:
ϕλ(l) = ϕλ(0) exp[−sec θ ∫_{0}^{l} Kλ(z) dz]   (8.1)

ϕκ(l) = ϕκ(0) exp[−sec θ ∫_{0}^{l} Kκ(z) dz]   (8.2)
are being conducted to study the physical properties and motions of a given
layer of the atmosphere, it may be necessary to make allowance for contri-
butions to a remotely sensed signal from other atmospheric layers. The areas
of work in which atmospheric effects have been of greatest concern to the
users of remote sensing data so far have been those in which water bodies,
such as lakes, lochs, rivers, and oceans, have been studied in order to deter-
mine their physical or biological parameters.
In most cases, users of remote sensing data are interested in knowing how
important the various atmospheric effects are on the quality of image data
or on the magnitudes of derived physical or biological parameters; users are
not usually interested in the magnitudes of the corrections to the radiance
values per se. However, to assess the relative importance of the various
atmospheric effects, one must devote some attention to:
There are several different approaches that one can take to applying atmospheric
corrections to satellite remote sensing data for the extraction of geophysical
parameters. We note the following options:
• One can ignore the atmospheric effects completely, which is not quite
as frivolous or irresponsible as it might first seem. In practice, this
approach is perfectly acceptable for some applications.
• One can calibrate the data with the results of some simultaneous in
situ measurements of the geophysical parameter that one is trying
to map from the satellite data. These in situ measurements may be
obtained for a training area or at a number of isolated points in the
scene. However, the measurements must be made simultaneously
with the gathering of the data by the satellite.
Many examples of the use of this approach can be found for data
from both visible and infrared channels of aircraft- and satellite-
flown scanners. Some relevant references are cited in Sections 8.4.2
and 8.5.1. The method involving calibration with simultaneous in
situ data is capable of yielding quite accurate results. It is quite
successful in practice although, of course, the value of remote sens-
ing techniques can be considerably enhanced if the need for simul-
taneous in situ calibration data can be eliminated. One should not
assume, however, that the calibration for a given geographical area
on one day can be taken to apply to the same geographical area on
another day; the atmospheric conditions may be quite different.
In addition to the problems associated with variations in the atmo-
spheric conditions from day to day, there is also the quite serious
problem that significant variations in the atmospheric conditions are
likely even within a given scene at any one time. To accurately account
for all of these variations, one would need to have available in situ
calibration data for a much finer network of closely packed points
than would be feasible. While it is, of course, necessary to have some
in situ data available for initial validation checks and for subsequent
occasional monitoring of results derived from satellite data, to use a
large network of in situ calibration data largely negates the value of
using remote sensing data anyway, because one important objective
of using remote sensing data is to eliminate costly fieldwork.
Given the difficulty in determining values of geophysical param-
eters from satellite data for which results of simultaneous in situ
measurements are not available, many people have adopted meth-
ods that involve trying to eliminate atmospheric effects rather than
trying to calculate atmospheric corrections. For example:
• One can use a model atmosphere, with the details and parameters of
the model adjusted according to the geographical location and the time
of year. This method is more likely to be successful if one is dealing
with an instrument with low spatial resolution that is gathering data
over wide areas for an application that involves taking a global view
of the surface of the Earth. In this situation, the local spatial irregularities
and rapid temporal variations in the atmosphere are likely to cancel out
and fairly reliable results can be obtained. This approach is also likely
to be relatively successful for situations in which the magnitude of the
atmospheric correction is relatively small compared with the signal
from the target area that is being observed. All these conditions are
satisfied for passive microwave radiometry and so this approach is
moderately successful for Scanning Multichannel Microwave
Radiometer (SMMR) or Special Sensor Microwave Imager (SMM/I)
data (see, for example, Alishouse et al. [1990]; Gloersen et al. [1984];
Hollinger et al. [1990]; Njoku and Swanson [1983]; and Thomas [1981]).
• One can also use a model atmosphere but make use of such simul-
taneous meteorological data as may actually be available instead of
using only assumed values based on geographical location and time
of year. This simultaneous meteorological data may be obtained
from one of several possible sources. The satellite may, like the
Television InfraRed Observation Satellite-N (TIROS-N) series of sat-
ellites, carry other instruments, in addition to the scanner, that are
used for carrying out meteorological sounding through the atmo-
sphere below the satellite (see Section 8.4).
• One can attempt to eliminate atmospheric effects in one of various
ways. For instance, one can use a multilook approach in which a
given target area on the surface of the sea is viewed from two
different directions. Alternatively, one can attempt to eliminate
atmospheric effects by exploiting a number of different spectral
channels to try to cancel out the atmospheric effects between these
channels. These methods will be discussed in Section 8.4.
FIGURE 8.1
Contributions to satellite-received radiance for emitted radiation.
atmosphere is 0. Thus, the radiance reaching the detector from the view
angle θ is:

L1(κ) = εB(κ, Ts)τ(κ, θ; p0, 0)   (8.3)

where τ(κ, θ; p, p1) is the atmospheric transmittance for wave number κ
and direction θ between heights in the atmosphere where the pressures are
p and p1.
dL2(κ) = B(κ, T(p)) [dτ(κ, θ; p, p1)/dp] dp   (8.5)
The upwelling emitted radiation received at the satellite can thus be written as:
L2(κ) = ∫_{p0}^{0} B(κ, T(p)) [dτ(κ, θ; p, 0)/dp] dp   (8.6)
where p0 is the atmospheric pressure at the sea surface and T(p) is the
temperature at the height at which the pressure is p.
This expression is based on the assumption of local thermodynamic equilib-
rium and the use of Kirchhoff’s law to relate the emissivity to the absorption
coefficient.
∫_{0}^{p0} B(κ, T(p)) [dτ(κ, θ; p, p0)/dp] dp   (8.7)
L3(κ) = (1 − ε)τ(κ, θ; p0, 0) ∫_{0}^{p0} B(κ, T(p)) [dτ(κ, θ; p, p0)/dp] dp   (8.8)
L*(κ) = L1(κ) + L2(κ) + L3(κ) + L4(κ)   (8.9)

Tb = T1 + T2 + T3 + T4   (8.10)
FIGURE 8.2
Contributions to satellite-received radiance for reflected solar radiation; 1, 2, 3, and 4 denote
L1(κ), L2(κ), L3(κ), and L4(κ), respectively.
FIGURE 8.3
Components of the sensor signal in remote sensing of water. (Sturm, 1981.)
Accordingly, various paths between the Sun and the sensor are considered for
reflected radiation reaching the sensor (see Figure 8.2 and Figure 8.3):
	•	L1(κ): radiation that follows a direct path from the Sun to the target
area and thence to the sensor
	•	L2(κ): radiation from the Sun that is scattered towards the sensor,
either by single or multiple scattering in the atmosphere, without
the radiation ever reaching the target area
	•	L3(κ): radiation that does not come directly from the Sun but, rather,
first undergoes a scattering event before reaching the target area and
then passes to the sensor directly
	•	L4(κ): radiation that is reflected by other target areas of the land, sea,
or clouds and is then scattered by the atmosphere towards the sensor.
FIGURE 8.4
Solar extraterrestrial irradiance (averaged over the year) as a function of wavelength (from four
different sources). (Sturm, 1981.)
discrepancy is explained by the fact that the radiation from the Sun itself varies.
Annual fluctuations in the radiance received at the Earth’s atmosphere asso-
ciated with the variation of the distance from the Sun to the Earth can be taken
into account mathematically. The eccentricity of the ellipse describing the orbit
of the Earth is 0.0167. The minimum and maximum distances from the Sun
to the Earth occur on January 3 and July 2, respectively. The extraterrestrial
solar irradiance for Julian day D is given by the following expression:
E0(D) = E0{1 + 0.0167 cos[(2π/365)(D − 3)]}²   (8.11)
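This correction is easily coded; the sketch below uses the orbital eccentricity e = 0.0167 quoted above, with perihelion taken as January 3.

```python
import math

def eccentricity_factor(day_of_year):
    """Multiplicative Sun-Earth distance correction to the mean
    extraterrestrial irradiance (Equation 8.11): the flux varies as
    1/r^2, hence the squared factor."""
    return (1.0 + 0.0167 * math.cos(
        (2.0 * math.pi / 365.0) * (day_of_year - 3))) ** 2
```

The factor peaks at about 1.034 near perihelion (early January) and falls to about 0.967 near aphelion (early July), an annual swing of roughly 7% in the irradiance reaching the top of the atmosphere.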
where KκM(z) and KκA(z) refer to the molecular and aerosol attenuation coefficients.
Each of these absorption coefficients can be written as the product of NM(z)
or NA(z), the number of particles per unit volume at height z, and a quantity
σλM or σλA, known as the effective cross section:
The quantities

τκM(z) = ∫_{0}^{z} σλM NM(z′) dz′   (8.14)

and

τκA(z) = ∫_{0}^{z} σλA NA(z′) dz′   (8.15)
are called the molecular optical thickness and the aerosol optical thickness,
respectively. It is convenient to separate the molecular optical thickness into
a sum of two components:
σλMs = 8π³(n² − 1)²/(3N²λ⁴)   (8.18)
where
n = refractive index,
N = number of air molecules per unit volume, and
λ = wavelength.
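Equation 8.18 can be evaluated directly; the refractive index and molecular number density below are approximate sea-level values for standard air, assumed for illustration.

```python
import math

def rayleigh_cross_section(wavelength_m, n=1.000293, N=2.547e25):
    """Equation 8.18: sigma = 8*pi^3*(n^2 - 1)^2 / (3*N^2*lambda^4),
    with n the refractive index and N the number of molecules per m^3
    (defaults are approximate sea-level values for standard air)."""
    return (8.0 * math.pi ** 3 * (n ** 2 - 1.0) ** 2) / (
        3.0 * N ** 2 * wavelength_m ** 4)

# The lambda^-4 law: light at 0.45 um scatters (0.65/0.45)^4, about
# 4.35 times, more strongly than light at 0.65 um.
ratio = rayleigh_cross_section(0.45e-6) / rayleigh_cross_section(0.65e-6)
```

The strong λ⁻⁴ dependence is why molecular scattering dominates the atmospheric correction in the blue channels of ocean-colour sensors but is almost negligible in the near-infrared.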
[Figure 8.5 plot: normal optical thickness τλ (logarithmic scale, 0.001 to 1.0) against wavelength
(0.5 to 2.5 µm), with curves for (1) molecular scattering, (2) aerosol scattering, and (3) aerosol
absorption.]
FIGURE 8.5
Normal optical thickness as a function of wavelength. (Sturm, 1981.)
TABLE 8.1
Ozone Optical Thickness for Vertical Path Through the Entire Atmosphere

                                          Atmosphere Type
Wavelength  Ozone abs. coeff.  1             2             3             4             5
λ (µm)      k0λ (cm−1)         V0(∞) = 0.23  V0(∞) = 0.39  V0(∞) = 0.31  V0(∞) = 0.34  V0(∞) = 0.45
0.44        0.001              0.0002        0.0004        0.0003        0.0003        0.0005
0.52        0.055              0.0128        0.0213        0.0173        0.0187        0.0245
0.55        0.092              0.0215        0.0356        0.0289        0.0312        0.0409
0.67        0.036              0.0084        0.0139        0.0113        0.0122        0.0160
0.75        0.014              0.0033        0.0054        0.0044        0.0048        0.0062

V0(∞) is the visibility range parameter, which is related to the optical thickness τ0λ(∞) and the
absorption coefficient k0λ for ozone by τ0λ(∞) = k0λV0(∞).
(Sturm, 1981)
τλA = Aλ^(−B)   (8.19)
dIκ(θ,φ)/ds = −γκ Iκ(θ,φ) + ψκ(θ,φ)   (8.20)

where Iκ(θ,φ) is the intensity of electromagnetic radiation of wave number κ in the direction (θ,φ), s is measured in the direction (θ,φ), and γκ is an extinction coefficient.
The first term on the right-hand side of this equation describes the attenuation of the radiation both by absorption and by scattering out of the direction (θ,φ). The second term describes the augmentation of the radiation, both by emission and by scattering of additional radiation into the direction (θ,φ); this term can be written in the form:

ψκ(θ,φ) = ψκA(θ,φ) + ψκS(θ,φ)   (8.21)
where ψκA(θ,φ) is the contribution corresponding to the emission and can, in turn, be written in the form:

ψκA(θ,φ) = γκA B(κ,T)   (8.22)

where γκA is an absorption coefficient and B(κ,T) is the Planck distribution function for black-body radiation:

B(κ,T) = 2hc²κ³/[exp(hcκ/kT) − 1]   (8.23)
and ψκS(θ,φ) is the contribution corresponding to scattering, where Jκ(θ,φ) is the source function:

ψκS(θ,φ) = γκS Jκ(θ,φ)   (8.24)

Substituting Equations 8.22 and 8.24 into Equation 8.20 and rearranging gives:
−(1/γκ) dIκ(θ,φ)/ds = Iκ(θ,φ) − (γκA/γκ) B(κ,T) − (γκS/γκ) Jκ(θ,φ)   (8.25)
or
dIκ(θ,φ)/dτ = Iκ(θ,φ) − (1 − ω)B(κ,T) − ω Jκ(θ,φ)   (8.26)

where dτ = −γκ ds, τ = optical thickness, γκ = γκA + γκS, and ω = γκS/γκ.
The differential equation is then expressed in terms of optical thickness t
rather than the geometrical path length s.
At microwave frequencies, where hcκ (= hf) ≪ kT (f = frequency), the Rayleigh-Jeans approximation can be made, namely that:

B(κ,T) = 2hc²κ³/[(1 + hcκ/kT) − 1] = 2cκ²kT   (8.27)
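The quality of the Rayleigh-Jeans approximation at microwave frequencies can be checked directly; the sketch below assumes a 19 GHz channel (a typical passive-microwave frequency, not one named in the text):

```python
import math

h = 6.62607e-34   # Planck constant (J s)
c = 2.99792e8     # speed of light (m/s)
k = 1.38065e-23   # Boltzmann constant (J/K)

def planck(kappa, T):
    """Planck function in wavenumber form, Equation 8.23 (kappa in m^-1)."""
    return 2.0 * h * c**2 * kappa**3 / math.expm1(h * c * kappa / (k * T))

def rayleigh_jeans(kappa, T):
    """Rayleigh-Jeans limit, Equation 8.27, valid when h*c*kappa << k*T."""
    return 2.0 * c * kappa**2 * k * T

kappa = 19e9 / c          # wavenumber of a 19 GHz channel
T = 290.0                 # a typical terrestrial brightness temperature
print(rayleigh_jeans(kappa, T) / planck(kappa, T))   # about 1.0016
```

At such frequencies the approximation holds to better than a fifth of a percent, which is why brightness temperature and radiance can be treated as proportional in the microwave region.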
TB(θ,φ,0) = TB(θ,φ,τ) e^(−τ) + ∫₀^τ Teff(θ,φ,τ′) e^(−τ′) dτ′   (8.28)
where Teff(θ,φ,τ′) is the effective (emitting-layer) temperature at optical thickness τ′. The quantity dτκ(z, ∞)/dz may be written as Kκ(z) for convenience. Using Equation 8.30 to give the intensity of radiation Iκ(θ,φ)dκ that is received in a (narrow) spectral band of width dκ, we have:
levels by NOAA and at direct readout stations all around the world and at
a global level by NOAA using tape-recorded data covering the whole Earth.
We shall describe the determination of sea-surface temperatures from data
from the thermal-infrared channels of the AVHRR because this is by far the
most widely used source of thermal-infrared data from satellites. The left-hand
side of Figure 8.6 illustrates the physical processes involved in the passage of
emitted radiation leaving the surface of the sea and traveling up through the
atmosphere to the satellite where it enters the AVHRR and gives rise to signals
in the detectors. The radiation arriving at the satellite is incident on detectors that produce voltages in response. The
voltages produced are then digitized to create the digital numbers that are
transmitted back to Earth. The data generated by a satellite-borne thermal-
infrared scanner are received at a ground receiving station as a stream of digital
numbers — often as 8-bit numbers, but 10-bit numbers in the case of the
AVHRR. The right-hand side of Figure 8.6 illustrates the steps involved in the
procedure applied to the processing of the data, including:
FIGURE 8.6
Diagram to illustrate the determination of sea surface temperature from satellite thermal-infrared data. The left-hand column traces the physics (sea surface temperature → water-leaving infrared radiance → atmospheric effects → satellite-received infrared radiance); the right-hand column traces the processing (calibration → satellite-received intensity → inversion of the Planck distribution → brightness temperature → atmospheric corrections → sea surface temperature). (Cracknell, 1997.)
but tractable (see, for example, Section 3.1 of Cracknell [1997]). Alternatively, one can choose a set of transformation equations relating the geographical coordinates of a pixel to the scan line and column numbers of the pixels in the raw data and determine the coefficients in these equations by a least squares fit to a set of ground control points. Or one can use a combination of both approaches, using the ephemeris data to obtain a first approximation to the geographical coordinates of a pixel and then using a very small number of ground control points to refine these values. It is usual then to resample the data to a standard grid in the chosen geographical projection system. For further details, see Section 3.1 of Cracknell (1997).
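The transformation-equation approach can be sketched with a first-order (affine) model fitted by least squares. The ground control points below are fabricated for illustration (generated from a known affine mapping so the fit can be checked); a real application would use GCPs identified in the imagery:

```python
import numpy as np

# Hypothetical ground control points: (scan line, column) with known (lat, lon).
lines = np.array([10.0, 40.0, 80.0, 120.0, 200.0])
cols = np.array([15.0, 300.0, 90.0, 500.0, 250.0])
lats = 57.0 - 0.01 * lines - 0.001 * cols      # synthetic affine "truth"
lons = -4.0 + 0.002 * lines + 0.008 * cols

# Fit lat = a0 + a1*line + a2*col (and similarly for lon) by least squares.
G = np.column_stack([np.ones_like(lines), lines, cols])
coef_lat, *_ = np.linalg.lstsq(G, lats, rcond=None)
coef_lon, *_ = np.linalg.lstsq(G, lons, rcond=None)

def pixel_to_geo(line, col):
    """Map a raw (line, column) position to (lat, lon) with the fitted coefficients."""
    g = np.array([1.0, line, col])
    return float(g @ coef_lat), float(g @ coef_lon)

print(pixel_to_geo(50.0, 100.0))   # (56.4, -3.1) for this synthetic mapping
```

Higher-order polynomial terms (line², line·col, and so on) can be added to the design matrix G in the same way when the geometry warrants it.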
The next step is to convert the digital numbers (in the range of 0 to 1023 in
the case of the AVHRR) output by the scanner and received on the ground
into the values of the satellite-received radiance, L*(κ), where κ refers to the spectral channel centered around the wave number κ. This involves using
in-flight calibration data to calculate the intensity of the radiation incident on
the instrument. The calibration of the thermal-infrared channels of the AVHRR
is achieved using two calibration sources that are viewed by the scanner
9255_C008.fm Page 181 Saturday, February 17, 2007 12:43 AM
between successive scans of the surface of the Earth; these two sources com-
prise a black-body target of measured temperature on board the spacecraft
and a view of deep space. Taking the scanner data, together with preflight
calibration data supplied by NOAA, the digital data can be converted into
radiances (for details, see Section 2.2 of Cracknell [1997]).
Assuming that the energy distribution of the incident radiation is that
of black-body radiation, one can calculate the temperature corresponding
to that radiation by inverting the Planck radiation formula. This tempera-
ture is known as the brightness temperature, Tb. The accuracy that can be
attained in determining the brightness temperature depends on the internal
consistency and stability of the scanner and on the accuracy with which it
can be calibrated. Brightness temperatures can be determined to an accuracy
in the region of 0.1 K from the AVHRR on the NOAA series of polar-orbiting
satellites.
The inversion of the Planck distribution function to obtain the brightness
temperature is then a standard mathematical operation. The satellite-
received radiance, L*(k, Tb), is given by:
ln L*(κ, Tb) = α + β/Tb   (8.33)
can be generated, where α and β are parameters that depend on the selected
range of Tb and the absolute value of Tb. This formula can then be used
instead of a look-up table to calculate the brightness temperatures very
quickly for the bulk of the scene (Singh and Warren, 1983).
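The exact inversion of the Planck function, and the ln L* = α + β/Tb shortcut of Equation 8.33, can be sketched as follows; the 10.8 µm wavenumber is assumed here as a stand-in for an AVHRR channel-4 value:

```python
import math

h, c, k = 6.62607e-34, 2.99792e8, 1.38065e-23

def planck_radiance(kappa, T):
    """Planck radiance in wavenumber form (cf. Equation 8.23); kappa in m^-1."""
    return 2.0 * h * c**2 * kappa**3 / math.expm1(h * c * kappa / (k * T))

def brightness_temperature(kappa, L):
    """Exact inversion of the Planck function for the brightness temperature Tb."""
    return h * c * kappa / (k * math.log1p(2.0 * h * c**2 * kappa**3 / L))

kappa = 1.0 / 10.8e-6                     # assumed channel wavenumber (~10.8 um)
L = planck_radiance(kappa, 288.0)
print(brightness_temperature(kappa, L))   # recovers 288.0 K

# The Equation 8.33 shortcut: fix alpha and beta from two temperatures in the
# range of interest, then invert the straight line instead of the Planck function.
T1, T2 = 270.0, 310.0
y1, y2 = math.log(planck_radiance(kappa, T1)), math.log(planck_radiance(kappa, T2))
beta = (y1 - y2) / (1.0 / T1 - 1.0 / T2)
alpha = y1 - beta / T1
Tb_fast = beta / (math.log(planck_radiance(kappa, 290.0)) - alpha)
print(Tb_fast)   # within ~0.1 K of 290
```

The linear-in-1/Tb fit is accurate to roughly a tenth of a kelvin over a narrow temperature range, which is why it can replace a look-up table for the bulk of a scene.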
Having calculated the brightness temperature Tb for the whole of the area
of sea surface in the scene, the atmospheric correction must then be calculated
because the objective of using thermal-infrared scanner data is to obtain infor-
mation about the temperature or the emissivity of the surface of the land or
sea. It is probably fair to say that the problem of atmospheric corrections has
been studied fairly extensively in relation to the sea but has received little
attention for data obtained from land surface areas. As seen in Section 8.3, in
addition to surface radiance, upwelling atmospheric radiance, downwelling
atmospheric radiance, and radiation from space must be determined. Moreover,
as indicated in Section 8.2, radiation propagating through the atmosphere is
attenuated. These effects are considered separately here.
Figure 8.7 shows the contributions from sea-surface emission, reflected
solar radiation, and upwelling and downwelling emission for the 3.7 µm
channel of the AVHRR; the units of radiance are T.R.U. (where 1 T.R.U. = 1 mW m⁻² sr⁻¹ cm). At this wavelength, the intensity of the reflected solar radiation
is very significant in relation to the radiation emitted from the surface,
whereas atmospheric emission is very small. Figure 8.8 shows data for the
11 µm channel of the AVHRR. It can be seen that the reflected radiation is
of little importance but that the atmospheric emission, though small, is not
entirely negligible.
The data in Figure 8.7 and Figure 8.8 are given for varying values of the
atmospheric transmittance. However, in order to make quantitative correc-
tions to a given set of thermal-infrared scanner data, one must know the
actual value of the atmospheric transmittance or atmospheric attenuation at
the time that the scanner data were collected. Of the three attenuation mech-
anisms mentioned in Section 8.3.3 — namely, Rayleigh (molecular) scattering, aerosol scattering, and absorption by gases — absorption by gases is
the important mechanism in the thermal-infrared region, where water vapor,
carbon dioxide, and ozone are the principal atmospheric absorbers and emitters
(see Figure 2.13). To calculate the correction that must be applied to the
brightness temperature to give the temperature of the surface of the sea, one
must know the concentrations of these substances in the column of atmo-
sphere between the satellite and the target area on the surface of the sea.
Computer programs, such as LOWTRAN, can do this and are based on
solving the radiative transfer equation for a given atmospheric profile. Exam-
ples of the results for a number of standard atmospheres are given in Table
8.2. These calculations have been performed for the three wavelengths cor-
responding to the center wavelengths of channels 3, 4, and 5 of the AVHRR.
Table 8.2 illustrates the variations that can be expected in the atmospheric
correction that needs to be applied to the calculated brightness temperature
for various different atmospheric conditions. In reality, atmospheric condi-
tions, especially the concentration of water vapor, vary greatly both spatially
FIGURE 8.7
Various components of the satellite-recorded radiance in the 3.7 µm channel of the AVHRR (in T.R.U.), plotted against vertical atmospheric transmittance. (Singh and Warren, 1983.)
and temporally. Considering the temporal variations first, the variation in the
atmospheric conditions with time, at any given place, is implied by the values
given in Table 8.2. It is also illustrated to some extent in Figure 8.9 by the two
lines showing the atmospheric corrections to the AVHRR-derived brightness
temperatures, using radiosonde data 12 hours apart for the weather station
at Lerwick, Scotland. The effect of the spatial variation is also illustrated in
Figure 8.9 by the five lines obtained using simultaneous radiosonde data to
give the atmospheric parameters at five weather stations around the coastline
of the U.K. The calculations were performed using the method of Weinreb
and Hill (1980), which incorporates a version of LOWTRAN. For a sea-
surface temperature of 15°C (288 K), the correction varies from about 0.5 K
at some stations to about 1.5 K at other stations.
Thus, for a reliable determination of the atmospheric correction, one needs
to use the atmospheric profile that applied at the time and place at which
the thermal-infrared data were collected. The use of a model atmosphere,
FIGURE 8.8
Various components of the satellite-recorded radiance in the 11 µm channel of the AVHRR (in T.R.U.), including the contributions from atmospheric emission and reflected atmospheric emission, plotted against vertical atmospheric transmittance. (Singh and Warren, 1983.)
based on geographical location and season of the year, will not give good
results. Consequently, atmospheric corrections need to be carried out on a
quite closely spaced network of points, if not on a pixel-by-pixel basis. The
ideal source of atmospheric data for this purpose is the TOVS, which is flown
on the same NOAA polar-orbiting meteorological satellites as the AVHRR
and which therefore provides data coincident in both space and time with
the thermal-infrared AVHRR data. For a period, TOVS data were used by
NOAA in the production of their atmospherically corrected sea-surface
temperature maps. However, the use of TOVS data requires a very large
amount of computer time and therefore is expensive. First, it is necessary to
invert the TOVS data to generate an atmospheric profile (using software
based on solving the radiative transfer equation). A calculated atmospheric
TABLE 8.2
Atmospheric Attenuation, Ta, for Various Standard Atmospheres

Atmosphere           Channel   H2O Lines   CO2    O3   N2 Cont.   H2O Cont.
Tropical             1(a)      1.31        0.46   0    0.22       0.41
                     2(b)      0.86        0.30   0    0          3.44
                     3(c)      2.80        0.55   0    0          4.89
Midlatitude summer   1         0.95        0.42   0    0.20       0.26
                     2         0.59        0.27   0    0          1.61
                     3         2.05        0.49   0    0          2.39
Midlatitude winter   1         0.38        0.36   0    0.17       0.09
                     2         0.21        0.21   0    0          0.23
                     3         0.83        0.39   0    0          0.35
Subarctic summer     1         0.85        0.42   0    0.20       0.24
                     2         0.53        0.26   0    0          1.23
                     3         1.87        0.47   0    0          1.86
U.S. Standard        1         0.76        0.45   0    0.22       0.19
                     2         0.48        0.29   0    0          0.78
                     3         1.76        0.53   0    0          1.20

(a) 1 refers to the 3.7 µm channel
(b) 2 refers to the 11 µm channel
(c) 3 refers to the 12 µm channel
FIGURE 8.9
Atmospheric attenuation (K) versus sea surface temperature (K) calculated using radiosonde data for five stations around the U.K. (Hemsby, Camborne, Stornoway, Lerwick, and Shanwell at 1100, plus a second Lerwick profile at 2300). (Callison and Cracknell, 1984.)
for a two-channel system, where κ1 and κ2 refer to the two spectral channels, or:
for a three-channel system, where κ1, κ2, and κ3 refer to the three spectral channels
and e0, e1, e2, and e3 are coefficients that must be determined. In three spectral
intervals, or “windows,” the vertical transmittance of the atmosphere may
be as high as 90%; these windows are from 3 to 5 µm, 7 to 8 µm, and 9.5 to
14 µm (see Figure 2.13). Of these three windows, the 3 to 5 µm window is
the most nearly transparent; unfortunately, this window has proven to be of
less practical value than the others because of the large amount of reflected
solar radiation at this wavelength. The most widely used source of thermal-
infrared data from satellites is the AVHRR, which was designed, built, and
flown to obtain operational sea surface temperatures from space. Channel 3
of the AVHRR is in the 3 to 5 µm window and channels 4 and 5 are in the
9.5 to 14 µm window. With the launch of NOAA-7 in 1981, which carried
the first five-channel AVHRR instrument, NOAA implemented the multi-
channel sea surface temperature algorithms that provided access to global
sea surface temperature fields with an estimated accuracy of 1 K or better
(McMillin and Crosby, 1984). Formulas of the type of Equation 8.35 can be
used with nighttime data from the three thermal-infrared channels. However,
the daytime data from channel 3 contains a mixture of emitted infrared
radiation and reflected solar radiation and, of course, these two components
cannot be separated. Therefore, with daytime data, one can only use a two-
channel formula of the type of Equation 8.34. Given that the form of
Equation 8.34 and Equation 8.35 can be justified on theoretical grounds, it is
also possible to derive theoretical expressions for e0, e1, e2, and e3. However,
these expressions inevitably involve some of the parameters that specify the
atmospheric conditions. In practice, therefore, the values of these coefficients
are determined by a multivariate regression analysis fitting to in situ data
from buoys. A tremendous amount of effort has gone into trying to establish
the “best” set of values for the coefficients e0, e1, e2, and e3 (Barton, 1995;
Barton and Cechet, 1989; Emery et al., 1994; Kilpatrick et al., 2001; McClain
et al., 1985; Singh and Warren, 1983; Walton, 1988). It seems that if one relies
on using a universal set of coefficients for all times and places, then the
accuracy that can now be obtained is around 0.6 K. Some more recent work
has been directed toward allowing the values of the coefficients e0, e1, e2, and e3
to vary according to atmospheric conditions and geographical location in an
attempt to improve the results.
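A multichannel (split-window) sea surface temperature of the form of Equation 8.34 can be sketched as below. The coefficient values are purely illustrative assumptions; operational values come from multivariate regression against buoy data, as described in the text:

```python
def split_window_sst(t4, t5, e0=0.0, e1=1.0, e2=2.5):
    """Two-channel SST of the form SST = e0 + e1*T4 + e2*(T4 - T5), an
    equivalent rearrangement of Equation 8.34. T4 and T5 are the channel 4
    and channel 5 brightness temperatures (K); the default coefficients
    here are illustrative only, not operational values."""
    return e0 + e1 * t4 + e2 * (t4 - t5)

# Channel 5 (12 um) is attenuated by water vapor more than channel 4 (11 um),
# so the (T4 - T5) difference measures the moisture and scales the correction.
print(split_window_sst(286.0, 284.8))   # 289.0: a 3 K correction added to T4
```

The same structure extends to the three-channel (nighttime) case by adding a term in the channel 3 brightness temperature.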
We now turn briefly to the multilook method. In this method, one tries to
eliminate the effect of the atmosphere by viewing a given target area on the
surface of the sea from two different directions, instead of in two or three
different wavelength bands, as is done in the multichannel approach just dis-
cussed. Attempts have been made to do this by using data from two different
satellites, one being a geostationary satellite and one being a polar-orbiting
satellite (Chedin et al., 1982; Holyer, 1984). However, for various reasons, this
approach was not very accurate. The multilook method has been developed in
the Along Track Scanning Radiometers ATSR and ATSR-2, which have been
flown on the European satellites ERS-1 and ERS-2. Instead of a simple across-
track scanning mechanism, which only gives one look at each target area on the
ground, this instrument uses a conical scanning mechanism so that in one
rotation it scans a curved strip in the forward direction (forward by about 55°)
and also scans a curved strip vertically below the spacecraft. Thus it obtains two
looks at each target area on the ground that are almost, but not quite, simulta-
neous. It is claimed that this instrument produces results with smaller errors
(~0.3 K) than those obtained with AVHRR data using the multichannel methods
(~0.6 K). However, the AVHRR has the great advantage of being an operational
system, of providing easily available data to direct-readout stations all over the
world, and of having a historical archive of more than 25 years of global data.
TABLE 8.3
Radiance Values per Pixel
Scanner Number of Channels
Landsat MSS 4 (occasionally 5)
Landsat TM 7
AVHRR 5 (sometimes 4)
Meteosat 3
SMMR 10
scanning radiometer data are therefore used quite widely to provide sea surface and ice surface temperatures. The value of the emissivity of seawater at microwave frequencies is, unlike in the thermal-infrared case, very significantly different from 1; there are two values, eH and eV, for horizontally and vertically polarized radiation, respectively. These can actually be determined theoretically from the Fresnel reflection coefficients, ρH and ρV, because eH,V = 1 − ρH,V². The values of ρH and ρV, for a dielectric constant ε and angle of incidence θ, are:

ρH = [cos θ − √(ε − sin²θ)] / [cos θ + √(ε − sin²θ)]   (8.36)

and

ρV = [ε cos θ − √(ε − sin²θ)] / [ε cos θ + √(ε − sin²θ)]   (8.37)
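Equations 8.36 and 8.37 can be evaluated directly with complex arithmetic; the dielectric constant used below is an assumed, order-of-magnitude value for seawater at microwave frequencies, not one taken from the text:

```python
import cmath
import math

def microwave_emissivities(eps, theta_deg):
    """Horizontal and vertical emissivities e = 1 - |rho|^2 from the Fresnel
    reflection coefficients of Equations 8.36 and 8.37.
    eps: (complex) dielectric constant; theta_deg: incidence angle in degrees."""
    theta = math.radians(theta_deg)
    root = cmath.sqrt(eps - math.sin(theta) ** 2)
    rho_h = (math.cos(theta) - root) / (math.cos(theta) + root)
    rho_v = (eps * math.cos(theta) - root) / (eps * math.cos(theta) + root)
    return 1.0 - abs(rho_h) ** 2, 1.0 - abs(rho_v) ** 2

# Assumed dielectric constant for seawater at a microwave frequency
e_h, e_v = microwave_emissivities(complex(25.0, 35.0), 50.0)
print(e_h, e_v)   # e_V exceeds e_H at oblique incidence; both are well below 1
```

The large gap between the two polarizations, and between both and unity, is what makes polarized microwave radiometry sensitive to surface conditions as well as to temperature.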
Conversion of the digital data output from the sensors into satellite-
received radiance (in absolute physical units)
Determination of the Earth-surface-leaving radiance from the satellite-
received radiance by performing the appropriate atmospheric
corrections
Use of appropriate models or algorithms relating the surface-leaving
radiance to the required physical quantity.
in the output data stream. The radiation falls on the scanner, is filtered, and
falls on to detectors; the voltage generated by each detector is digitized to
produce the output digital data, and these data are simply numbers on some
scale (e.g., between 0 and 255 or between 0 and 1023). The problem then is
to convert these digital numbers back into values of the intensity of the
radiation incident on the instrument. The instruments flown in a spacecraft
are, of course, calibrated in a laboratory before they are integrated into a
satellite system prior to the launch of the satellite (see, for instance,
Section 2.2.2 of Cracknell [1997]). Once a spacecraft has been launched, the
calibration may change; this might occur as a result of the violence of the
launch process itself, or it might be caused by the different environment in
space (the absence of any atmosphere, cyclic heating by the Sun and cooling
when on the dark side of the Earth) or to the decline in the sensitivity of the
components with age. Once the satellite is in orbit, the gain of the electronics
(the amplifier and digitizer) can be periodically tested by applying a voltage
ramp (or voltage staircase) to the digitizing electronics. The output is then
transmitted in the data stream. If an instrument has several gain settings,
this test must be done separately for each gain setting. However, the use of
a voltage ramp (or staircase) only checks the digitizing electronics and does
not check the optics and detecting part of the system. End-to-end calibration
is achieved by recording the output signal when the scan mirror is pointed
at a standard source of known radiance. For some satellite-flown scanners,
provision has been made for the scanning optics to view a standard source,
either on board the spacecraft or outside the spacecraft (deep space, the Sun,
or the Moon); however, this provision is not always made. For instance,
although in-orbit (in-flight) calibration of the thermal bands of the AVHRR
is available (see Section 8.4.2), it is not available for band 1 and band 2, the
visible and near-infrared bands, of the AVHRR.
Teillet et al. (1990) identified three broad categories of methods for the
postlaunch calibration of satellite-flown scanners that have no on-board or
in-flight calibration facilities. These are:
Using these methods, it has been possible to successfully carry out post-
launch calibration of the visible and near-infrared bands of the AVHRR (for
more details, see Section 2.2.6 of Cracknell [1997]).
We now turn to the consideration of Coastal Zone Color Scanner (CZCS)
and SeaWiFS visible and near-infrared data, which are widely used for the
determination of water quality parameters. For the visible bands of CZCS,
in-flight calibration was done by making use of a standard lamp on board
the spacecraft. Every so often the scanner viewed the standard lamp and the
corresponding digital output was included in the data stream; however, this
procedure was not without its problems, particularly those associated with
the deterioration of the output of the lamp (see, for example, Evans and
Gordon [1994] and Singh et al. [1985]). SeaWiFS is a second-generation ocean
color instrument and, in its design, use was made of many of the lessons
learned from its predecessor, the CZCS.
The prelaunch calibration of a scanner such as SeaWiFS involves recording
the output of the instrument when it is illuminated by a standard source in a
laboratory before it is flown in space (see Barnes et al. [1994; 2001] and the
references quoted therein). We have already noted that as time passes while a
scanner is in its orbit, the sensitivity of the detecting system can be expected to
decrease; it is the purpose of the in-orbit calibration to study this decreased
sensitivity. Thus the satellite-received radiance for each band of SeaWiFS is
represented by:
where λ is the wavelength of the band (in nm), DN is the SeaWiFS signal (as a digital number), DN0 is the signal in the dark, k2(g) is the prelaunch calibration coefficient (in mW cm⁻² sr⁻¹ µm⁻¹ DN⁻¹), g is the electronic gain, a(t0) is a vicarious correction (dimensionless) to the laboratory calibration on the first day of in-orbit operations (t0), and the coefficients b (dimensionless), l (in day⁻¹), and d (in day⁻²) are used to calculate the change in the sensitivity for the band at a given number of days (t − t0) after the start of operations.
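The calibration described here can be sketched as below. The polynomial time dependence is an assumption inferred only from the stated units of the coefficients (dimensionless, day⁻¹, day⁻²), and every numerical value is a placeholder, not a SeaWiFS calibration-table entry:

```python
def seawifs_band_radiance(dn, dn0, k2, a_t0, b, c, d, days_since_t0):
    """Hedged sketch of the in-orbit calibration described in the text:
    counts above dark (DN - DN0), scaled by the prelaunch coefficient k2(g),
    the vicarious correction a(t0), and a sensitivity-change factor assumed
    here to be the polynomial b + c*dt + d*dt**2 in elapsed days dt."""
    sensitivity = b + c * days_since_t0 + d * days_since_t0 ** 2
    return (dn - dn0) * k2 * a_t0 * sensitivity

# Placeholder values only: with no sensitivity change (c = d = 0, b = 1) the
# radiance reduces to (DN - DN0) * k2 * a(t0).
print(seawifs_band_radiance(600, 100, 0.01, 1.0, 1.0, 0.0, 0.0, 365))   # 5.0
```

In practice the time-dependence coefficients are derived from the lunar measurements shown in Figure 8.10.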
For each scan by SeaWiFS, DN0 is measured as the telescope views the
black interior of the instrument. For SeaWiFS, the start of operations (t0) is
the day of the first Earth image, which was obtained on September 4, 1997.
Values of k2(g) for each band of SeaWiFS were determined from the prelaunch
calibration measurements, and the values of the other coefficients — a(t0),
FIGURE 8.10
Changes in the radiometric sensitivity of SeaWiFS as determined from lunar measurements.
(Barnes et al., 2001.)
The intensity of the radiation LL(λ) leaving the surface of the land is
larger than that leaving the water so that, proportionately, the atmo-
spheric effects are less significant in land-based studies than in
aquatic studies that utilize optical scanner data.
The data used in land-based applications tend to make greater use
of the red and near-infrared bands, where the atmospheric effects
are less important than in the blue end of the spectrum, which is
particularly important in aquatic applications.
effects are proportionately less important than for the oceans. In addition,
one cannot possibly determine the contribution of atmospheric aerosols to the
top-of-the-atmosphere radiance over the land or clouds, as is done in the
calibration of the instrument for ocean measurements (see below). For the ocean,
the water-leaving radiance is small in the near-infrared region, so that most
of the satellite-received near-infrared radiation comes from the atmosphere
and not from the ocean surface. For land measurements, the near-infrared
surface radiance can be very large, contaminating the radiances that could
be used to determine aerosol type and amount. Therefore, SeaWiFS provides
no information on atmospheric aerosols for regions of land or cloud. The
SeaWiFS Project has developed a partial atmospheric correction for land
measurements that calculates the Rayleigh component of the upwelling
radiance, including a surface pressure dependence for each SeaWiFS band.
Along with this correction, the SeaWiFS Project has incorporated algorithms
to provide land surface properties, using the NDVI and the enhanced veg-
etation index. In addition, an algorithm has been developed to produce
surface reflectances. Each of these algorithms uses SeaWiFS top-of-the-atmo-
sphere radiances determined by the direct calibration of the instrument. To
date, there is no vicarious calibration for SeaWiFS measurements of the land
or of clouds.
A very important aquatic application of visible and near-infrared scanner
data from satellites is in the study of ocean color and the determination of
the values of chlorophyll concentrations in lakes, rivers, estuaries, and open
seas and of suspended sediment concentrations in rivers and coastal waters.
CZCS was the first visible-band scanner designed for studying ocean color
from space; SeaWiFS was its successor. What makes atmospheric corrections
so important in aquatic studies is the fact that the signal that reaches the
satellite includes a very large component that is due to the atmosphere and
does not come from the water-leaving radiance. Table 8.4 indicates the scale
of the problem. This table shows that, for visible band data obtained over
water, the total atmospheric contribution to the satellite-received radiance
approaches 80% or 90% of the signal received at the satellite. This is a much
more serious problem than for thermal-infrared radiation, which is used to
measure sea surface temperatures (see Section 8.4.3). In the case of thermal-
infrared radiation, surface temperatures are in the region of 300 K and the
atmospheric effect is equivalent to a few degrees or to perhaps 1% or 2% of
the signal. As shown in Table 8.4, at optical wavelengths, the “noise” or
“error” or “correction” comprises 80% to 90% of the signal. It is thus impor-
tant to make these corrections very carefully.
The various atmospheric corrections to data from optical scanners flown
on satellites have been discussed in general terms in Section 8.3. As men-
tioned, for visible wavelengths, the absorption by molecules involves ozone
only. In this section, the term “visible” is taken to include, by implication,
the near-infrared wavelength as well, up to about 1.1 µm or so in wavelength.
The ozone contribution is relatively small and not too difficult to calculate
to the accuracy required. The most important contributions are Rayleigh and
TABLE 8.4
Typical Contributions to the Signal Received by a Satellite-Flown Visible Wavelength Sensor

In Clear Water
λ (nm)   T·Lw (%)   Lp (%)   T·Lr (%)
440      14.4       84.4     1.2
520      17.5       81.2     1.3
550      14.5       84.2     1.3
670      2.2        96.3     1.5
750      1.1        97.0     1.9

In Turbid Water
λ (nm)   T·Lw (%)   Lp (%)   T·Lr (%)
440      18.1       80.8     1.1
520      32.3       66.6     1.1
550      34.9       64.1     1.0
670      16.4       82.4     1.2
750      1.1        97.4     1.5

Lw = water-leaving radiance
Lp = atmospheric path radiance
Lr = surface-reflected radiance
T = transmission coefficient
(Adapted from Sturm [1981])
aerosol scattering, both of which are large, particularly toward the blue end
of the optical spectrum. Moreover, because reflected radiation is the concern,
light that reaches the satellite by a variety of other paths has to be considered
in addition to the sunlight reflected from the target area (see Section 8.3.2
and Figure 8.3). It is, therefore, not surprising that the formulation of the
radiative transfer equation for radiation at visible wavelengths is rather
different from the approach used in Section 8.4 for microwave and ther-
mal-infrared wavelengths, where the corrections that have to be made to
the satellite-received radiance to produce the Earth-leaving radiance are
of the order of 1% or 2%. It is accordingly clear that the application of
atmospheric corrections to optical scanner data to recover quantitative
values of the Earth-leaving radiance is very much more difficult and
therefore needs to be performed much more carefully than in the thermal-
infrared case.
In aquatic applications, it is the water-leaving radiance that is principally
extracted from the satellite data. From the water-leaving radiance, one can
attempt to determine the distribution and concentrations of chlorophyll and
suspended sediment (see Section 8.5.3). Ideally, the objective is to be able to
do this without any need for simultaneous in situ data for calibration of the
remotely sensed data. The large size of the atmospheric effects (see Table 8.4)
means that the accuracy that can be obtained in the extraction of geophysical
some in situ data were gathered simultaneously with the acquisition of the
satellite data by the scanner. The improvements introduced with SeaWiFS
addressed this problem. With SeaWiFS's increased number of spectral bands,
it became possible to use what is essentially a multichannel technique for
the atmospheric corrections.
For SeaWiFS, a development from the darkest pixel method, using longer
wavelength near-infrared bands and matching with a database of models
using a wide variety of atmospheric aerosols, has been adopted. This
involves using two infrared bands, at 765 nm and 865 nm (i.e., bands 7 and
8), to estimate the aerosol properties of the atmosphere at the time that the
image was generated. These are then extrapolated to the visible bands fol-
lowing Gordon and Wang (1994). The algorithms are expressed in terms of
a reflectance, ρ(λ), defined for any radiance L(λ) at a given wavelength and viewing and solar geometry, as:

ρ(λ) = πL(λ)/(F0 cos θ0)   (8.38)

where F0 is the extraterrestrial solar irradiance and θ0 is the solar zenith angle.
Then the satellite-measured reflectance, ρt(λ), can be written as:

ρt(λ) = ρr(λ) + ρa(λ) + ρra(λ) + T(λ)ρw(λ)

The contributions on the right-hand side of this equation are the same as those in Equation 8.39, except that an extra term, ρra(λ), involving molecule-aerosol interactions has been added. ρr(λ) and ρw(λ) can be calculated, as was done for CZCS data. Then it is assumed that for bands 7 and 8 (i.e., at 765 nm and 865 nm), the water-leaving reflectance ρw(λ) really is 0 (as distinct from 670 nm, where it was assumed to be 0 in dealing with CZCS data). Thus, the last term on the right-hand side of the equation for ρt(λ) vanishes and the equation can be rearranged to give:
ε(λ, 865) = ρas(λ)/ρas(865)    (8.43)
M = A(rij)^B    (8.44)
where M is the value of the marine parameter and rij is the ratio of the water-
leaving radiances L(li) and L(lj) in the bands centered at the two wavelengths
li and lj.
Various workers used relations of this type in regression analyses with
log-transformed data for their own data sets. If a significant linear relation-
ship was found, an algorithm of the form in Equation 8.44 was obtained.
Table 8.5 contains a list of such algorithms proposed by several workers
(Sathyendranath and Morel, 1983). In subsequent work during the next few
years, the CZCS data set was worked on very thoroughly by many workers
and further results were obtained (reviews of the work on CZCS are given,
for example, by Gordon and Morel [1983] and Barale and Schlittenhardt
TABLE 8.5
Some Values of Parameters Given by Different Workers for Algorithms for
Chlorophyll and Suspended Sediment Concentrations from CZCS Data
(columns: rij, A, B, N, r²)
part of this period, attempts were made to develop better algorithms (for a
review, see O’Reilly et al. [1998]). Algorithm development work has taken
two lines. One is to continue to attempt to determine empirical algorithms
of more general applicability than to just one data set. The other is to attempt
to develop some semi-analytical model, which inevitably will contain some
parameters. At present, the empirical models still appear to be more suc-
cessful (further discussion is given by Robinson [2004]).
9
Image Processing
9.1 Introduction
Much of the data used in remote sensing exists and is used in the form
of images, each image containing a very great deal of information. Image
processing involves the manipulation of images and is used to:
• Extract information
• Emphasize or de-emphasize certain aspects of the information
contained in the image
• Perform statistical or other analyses to extract nonimage information.
FIGURE 9.1
Grayscale wedge.
Each number in the array, lying in the range 0 to 255, denotes the intensity, or grayscale value, associated with one ele-
ment (a picture element or pixel) of the image. For a human observer, how-
ever, the digital image needs to be converted to analogue form using a chosen
“gray scale” relating the numerical value of the element in the array to the
density on a photographic material or to the brightness of a spot on a screen.
The digital image, when converted to analogue form, consists of an array of
pixels that are a set of identical plane-filling shapes, almost always rectangles,
in which each pixel is all the same shade of gray and has no structure within
it. To produce a display of the image on a screen or to produce a hard copy
on a photographic medium, the intensity corresponding to a given element
of the array is mapped to the shade of gray assigned to the pixel according
to a grayscale wedge, such as that shown in Figure 9.1. A human observer,
however, cannot distinguish 256 different shades of gray; 16 shades would
be a more likely number (see Figure 9.2). It is possible to produce a “positive”
(a) (b)
FIGURE 9.2
Image showing (a) 16-level gray scale, (b) 256-level gray scale.
9255_C009.fm Page 207 Tuesday, February 27, 2007 12:39 PM
FIGURE 9.3
Sketch of histogram of n(I), the number of pixels with intensity I, against I for an image with
good contrast.
and “negative” image from a given array of digital data, although the
question of which is positive and which is negative is largely a matter of
definition. A picture with good contrast can be obtained if the intensities
associated with the pixels in the image are well distributed over the range
from 0 to 255 — that is, for a histogram such as the one shown in Figure 9.3.
For a color image, it is most convenient to think of three separate digital
arrays, each of the same structure, so that the suffixes or coordinates x and y
used to label a pixel are the same in each of the three arrays (i.e., the arrays
are coregistered). Each array is then assigned to one of the three primary
colors (red, green, or blue) of a television or computer display monitor or
to the three primary colors of a photographic film. These three arrays may
have been generated for the three primary colors in a digital camera, in
which case the image is a true-color image. Or they may have been generated
in three separate channels of a multispectral or hyperspectral scanner, in
which case the image is a false-color composite in which the color of a
pixel in the image is not the same as the color on the ground (see Figure 2.9,
for example). The array assigned to red produces a wedge like the one shown
in Figure 9.1, representing the intensity of red to be assigned to the pixels
in the image. A similar wedge occurs for the arrays assigned to green and
blue. This leads to the possibility of assigning any one of 256³ colors, or more
than 16 million colors, to any given pixel. Although many images handled
in remote sensing work are in color, recalling the underlying structure in
terms of the three primary colors is often quite useful because image pro-
cessing is commonly performed separately on the three arrays assigned to
the three primary colors.
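This per-array structure is easy to make concrete. The sketch below (in Python with numpy, which the book itself does not use; the tiny band arrays and the simple min-max stretch are illustrative only) stretches three coregistered band arrays to the 0-to-255 display range and stacks them as the red, green, and blue planes of a false-color composite:

```python
import numpy as np

def stretch_to_byte(band):
    """Linearly stretch one band's values onto the 0-255 display range."""
    b = band.astype(float)
    lo, hi = b.min(), b.max()
    if hi == lo:
        return np.zeros(b.shape, dtype=np.uint8)
    return np.round(255.0 * (b - lo) / (hi - lo)).astype(np.uint8)

def false_color_composite(r_band, g_band, b_band):
    """Stack three coregistered band arrays as the R, G, B planes."""
    return np.dstack([stretch_to_byte(r_band),
                      stretch_to_byte(g_band),
                      stretch_to_byte(b_band)])

# Illustrative 2 x 2 "bands" (in practice, e.g., the near-infrared, red,
# and green channels of a multispectral scanner).
nir   = np.array([[10, 20], [30, 40]])
red   = np.array([[ 5, 10], [15, 20]])
green = np.array([[ 1,  2], [ 3,  4]])
rgb = false_color_composite(nir, red, green)   # shape (2, 2, 3)
```

Assigning, say, a near-infrared channel to the red plane in this way is how the familiar red-vegetation false-color composites are produced.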
In addition to digital image processing, which is widely practiced these
days, a tradition of analogue image processing also exists. This tradition
dates from the time when photographic techniques were already well
established but computing was still in its infancy.

TABLE 9.1
Common Digital Image Processing Operations
Histogram generation        Multispectral classification
Contrast enhancement        Neighborhood averaging and filtering
Histogram equalization      Destriping
Histogram specification     Edge enhancement
Density slicing             Principal components
Classification              Fourier transforms
Band ratios                 High-pass and low-pass filtering

In 1978, the first space-borne
synthetic aperture radar (SAR) was flown on Seasat and the output generated
by the SAR was first optically processed or “survey processed” to give
“quicklook” images; digital processing of the SAR data, which is especially
time-consuming, was then carried out later only on useful scenes selected
by inspection of the survey-processed quicklook images. Even now it is still
sometimes useful to make comparisons between digital and optical image
processing techniques. For example, optical systems are often simple in
concept, relatively cheap, and easy to set up and they provide very rapid
image processing. Also, from the point of view of introducing image pro-
cessing, optical methods often demonstrate some of the basic principles
involved rather more clearly than could be done digitally. This chapter is
concerned only with image processing relating to remote sensing problems.
In the vast majority of cases, remotely sensed image data are processed
digitally and not optically, although in most cases, the final output products
for the user are presented in analogue form. Image processing, using the
same basic ideas, is widely practiced in many other areas of human activity.
Table 9.1 summarizes some of the common digital image processing
operations that are likely to be performed on data input as an array of
numbers from some computer-compatible medium. Most of these processes
have corresponding operations for the processing of analogue images. A
number of formal texts on image processing are also available (see, for
example, Jensen [1996] and Gonzalez and Woods [2002]).
areas (blue) could be obtained. The scene represented by this image could
then be thought of as having been classified into areas of land or cloud and
areas of water. This example represents a very simple, two-level density slice
and classification scheme. More complicated classification schemes can obvi-
ously be envisaged. For example, water could perhaps be classified into
seawater and fresh water and land could be classified into agricultural land,
urban land, and forest. Further subdivisions, both of the water and of the
land area, can be envisaged. However, the chance of achieving a very detailed
classification on the basis of a single TM band is not very good; a much better
classification could be obtained using several bands (see Section 9.7).
FIGURE 9.4
Sketch of histogram for a midgray image with very poor contrast (intensities bunched between
M1 and M2).
I′(x, y) = T(I(x, y))    (9.1)
where T(I) is the transfer function.
FIGURE 9.5
Transfer function for contrast enhancement of the image with the histogram shown in Figure 9.4.
FIGURE 9.6
Transfer function for a linear contrast stretch for an image with the histogram shown in Figure 9.4.
x and y. This particular function would be suitable for stretching the contrast
of an image for which the histogram was “bunched” between the values
M1 and M2 (see Figure 9.4). It has the effect of stretching the histogram out
much more evenly over the whole range from 0 to 255 to give a histogram
with an appearance such as that of Figure 9.3. The main problem is to decide
on the form to be used for the transfer function T(I). It is very common to
use a simple linear stretch (as shown in Figure 9.6). A function that more
closely resembles the transformation function in Figure 9.5 is shown in
Figure 9.7, where T(I) is made up of a set of straight lines joining the points
FIGURE 9.7
A transfer function consisting of three straight-line segments.
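A transfer function of this three-segment form is simple to implement as a direct mapping of pixel intensities. The following sketch (numpy; the breakpoints M1 and M2 and the output levels chosen are illustrative, not taken from the text) applies gentle slopes below M1 and above M2 and a steep, contrast-stretching slope between them:

```python
import numpy as np

def piecewise_stretch(image, m1, m2, low=30, high=225):
    """Three-segment transfer function T(I) mapping 0..255 onto 0..255:
    0..m1 -> 0..low, m1..m2 -> low..high (steep), m2..255 -> high..255."""
    i = image.astype(float)
    t = np.empty_like(i)
    seg1 = i <= m1
    t[seg1] = i[seg1] * low / m1
    seg2 = (i > m1) & (i <= m2)
    t[seg2] = low + (i[seg2] - m1) * (high - low) / (m2 - m1)
    seg3 = i > m2
    t[seg3] = high + (i[seg3] - m2) * (255 - high) / (255 - m2)
    return np.round(t).astype(np.uint8)

# A small image whose intensities are bunched between 100 and 160,
# as in the histogram of Figure 9.4.
img = np.array([[100, 120], [140, 160]], dtype=np.uint8)
out = piecewise_stretch(img, m1=100, m2=160)   # intensities spread out
```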
j = T(i)    (9.2)

The graphs of pi(i) against i and of pj(j) against j are simply the histograms
of the original image and of the transformed image, respectively. pj(j) and
pi(i) are related to each other by

pj(j) dj = pi(i) di    (9.3)

so that

pj(j) = pi(i) (di/dj)    (9.4)

If the transfer function is chosen to be

j = ∫0^i pi(w) dw    (9.5)

then dj/di = pi(i) and, by Equation 9.4, pj(j) = 1 everywhere; that is, the
transformed histogram is uniform. Comparing Equation 9.5 with

j = T(i)    (9.6)

shows that

T(i) = ∫0^i pi(w) dw    (9.7)
FIGURE 9.8
Schematic histogram defined by Equation 9.8.
That is, Equation 9.7 defines the particular transfer function that will achieve
histogram equalization. This can be illustrated analytically for a simple
example. Consider pi(i) shown in Figure 9.8, where
pi(i) = 4i          0 ≤ i ≤ 1/2
      = 4(1 − i)    1/2 ≤ i ≤ 1    (9.8)

Applying Equation 9.7 then gives the transfer function:

T(i) = 2i²               0 ≤ i ≤ 1/2
     = −1 + 4i − 2i²     1/2 ≤ i ≤ 1    (9.9)
FIGURE 9.9
Transfer function defined in Equation 9.9.
FIGURE 9.10
Transformed histogram obtained from pi(i) in Figure 9.8 using the transfer function shown in
Figure 9.9.
J = T(I) = Σ_{J=0}^{I} PI(J)    (9.10)
where the original histogram is simply a plot of PI(I) versus I in this notation.
The histogram for the transformed gray levels J generated in this way is the
analogue, for the discrete case, of the uniform histogram produced by
Equation 9.7; it will not be quite uniform, however, because discrete rather
than continuous variables are being used. The degree of uniformity does,
however, increase as the number of gray levels is increased.
The last transformation has been concerned with transforming the histo-
gram of the image under consideration to produce a histogram for which
pj( j) was simply a constant. It is, of course, also possible to define a transfer
function to produce a histogram corresponding to some other given function,
such as a Gaussian or Lorentzian function, rather than just a constant.
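For discrete data, Equation 9.10 turns directly into a lookup table built from the cumulative histogram. A minimal sketch (numpy; the poor-contrast test image is synthetic):

```python
import numpy as np

def equalize(image, levels=256):
    """Discrete histogram equalization (Equation 9.10): each gray level I
    is mapped to the cumulative normalized histogram summed from 0 to I."""
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / image.size          # normalized histogram P_I(I)
    cdf = np.cumsum(p)             # sum of P_I(J) for J = 0..I
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[image]

rng = np.random.default_rng(0)
# Intensities bunched between 100 and 139: poor contrast.
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
eq = equalize(img)                 # intensities spread over 0..255
```

As noted in the text, the result is only approximately uniform because the gray levels are discrete; the approximation improves as the number of levels increases.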
↑y … … … … … …
… … … … … … …
… … x − 1, y + 1 x, y + 1 x +1, y + 1 … …
… … x − 1, y x, y x +1, y … …
… … x − 1, y − 1 x, y − 1 x + 1, y − 1 … …
… … … … … … …
… … … … … … x→
FIGURE 9.11
Neighboring pixels of pixel x, y.
and then take appropriate action to enhance the boundary. This process
involves what is known as spatial filtering. Edge enhancement is an example
of high-pass filtering — that is, it emphasizes high-spatial-frequency features,
features that involve intensities I(x,y) that change rapidly with x and y (of
which edges are an example). It reduces or blocks low-spatial-frequency
features — that is, features that involve intensities I(x, y) that change only
slowly with x and y. Edges are located by considering the intensity of a pixel
in relation to the intensities of neighboring pixels and a transformation is
made that enhances or sharpens the appearance of the edge. This filtering
in the spatial domain is much simpler mathematically than filtering in the
frequency domain, involving Fourier transforms, which we shall consider
later in this chapter (see Section 9.9).
A linear spatial filter is a filter in which the intensity I(x, y) of a pixel, located
at x, y in the image, is replaced in the output image by some linear combination,
or weighted average, of the intensities of the pixels located in a particular
spatial pattern around the location x, y (see Figure 9.11). This is sometimes
referred to as two-dimensional convolution filtering. Some examples of the
shapes of masks or templates that can be used are shown in Figure 9.12.
For many purposes, a 3 × 3 mask is quite suitable. A coefficient is then chosen
FIGURE 9.12
Examples of various convolution masks or templates.
for each location in the mask, so that a template is produced; for a 3 × 3 mask,
we therefore have:

           c1 c2 c3
Template = c4 c5 c6    (9.11)
           c7 c8 c9

Therefore, using this template for the pixel (x, y), the intensity I(x, y) would
be replaced by:

I′(x, y) = c1I(x − 1, y + 1) + c2I(x, y + 1) + c3I(x + 1, y + 1)
         + c4I(x − 1, y) + c5I(x, y) + c6I(x + 1, y)
         + c7I(x − 1, y − 1) + c8I(x, y − 1) + c9I(x + 1, y − 1)    (9.12)
This mask can then be moved around the image one step at a time and all
the pixel values can be replaced on a pixel-by-pixel basis. At the boundaries
of the image, one needs to decide how to deal with the pixels in the first
and last row and in the first and last column when the mask, so to speak,
spills over the edge of the image. For these rows and columns, it is common
to simply use the values obtained using the mask for the pixels in the adjacent
row or column.
What effect a transformation of this kind has on the image is determined
by the choice of the coefficients in the template. Edge enhancement is only
one of several enhancements that are possible to achieve by spatial filtering
using mask templates. For instance, random noise can be reduced or
removed from an image using spatial filtering (see Section 9.6.3). In order
to identify an edge, the differences between the intensity of a pixel and those
of adjacent pixels are examined by studying the gradient or rate of change
of intensity with change in position. Near an edge, the value of the gradient
is large; whereas some distance from an edge, the gradient is small. Examples
of masks that enhance or sharpen edges in an image include
−1 −1 −1
Mask A = −1 9 −1 (9.13)
−1 −1 −1
and
1 −2 1
Mask B = −2 5 −2 (9.14)
1 −2 1
If the pixel intensities in the vicinity of x, y are all of much the same value,
then the value of I′(x, y) after applying one of these masks will only be
altered very slightly; in the extreme case where all nine relevant pixels have
exactly the same intensity, these masks will cause no change at all in the
pixel intensity at x, y. However, if x, y is at the edge of a feature in the image,
there will be some large differences between I(x, y) and the intensities of
some of the neighboring pixels and the effects of applying mask A or B will
be to cause I′(x, y) to be considerably different from I(x, y) and the appear-
ance of the edge to be enhanced.
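The template operation of Equation 9.12 with Mask A can be sketched directly (numpy; a deliberately naive double loop rather than an optimized convolution routine, with the copy-the-adjacent-result border convention described above):

```python
import numpy as np

MASK_A = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]])

def apply_mask(image, mask):
    """Replace each interior pixel by the weighted sum of its 3 x 3
    neighborhood (Equation 9.12); border pixels reuse the result from
    the adjacent interior row or column."""
    img = image.astype(int)
    rows, cols = img.shape
    out = np.zeros_like(img)
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            out[y, x] = np.sum(mask * img[y - 1:y + 2, x - 1:x + 2])
    out[0, :], out[-1, :] = out[1, :], out[-2, :]
    out[:, 0], out[:, -1] = out[:, 1], out[:, -2]
    return out

flat = np.full((5, 5), 7)
out_flat = apply_mask(flat, MASK_A)   # a uniform area is unchanged (all 7s)

step = np.zeros((5, 5), dtype=int)
step[:, 2:] = 10                      # a vertical edge in the image
out_step = apply_mask(step, MASK_A)   # values near the edge move away from 0 and 10
```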
These two masks are not sensitive to the direction of the edge; they have
a general effect of enhancing edges. If one is seeking to enhance edges that
are in a particular orientation, then one can be selective. For example, if an
edge is vertical (or north-south [N-S]), then it will be marked by large
differences, in each row, between the intensities of adjacent pixels at the
edge. Thus, at or near the edge, the magnitude of I(x, y) – I(x + 1, y) or
I(x – 1, y) – I(x, y) will be large. Therefore, we can replace I(x, y) by:

I′(x, y) = I(x, y) − I(x + 1, y) + K    (9.15)
The constant K is included to ensure that the intensity remains positive. For
eight-bit data, the value of K is commonly chosen to be 127. If no edge is
present, then I(x, y) and I(x + 1, y) will not be markedly different and the
value of I′(x, y) will be close to the value of K. In the vicinity of an edge, the
value of |I(x, y) – I(x + 1, y)| will be larger so that, at the edge, the values
of I′(x, y) will be farther away from K (both below and above K). A vertical
edge will thus be enhanced, but a horizontal edge will not. As an alternative
to Equation 9.15, one can use a 3 × 3 mask, such as:
−1 0 1
Mask C = −1 0 1 (9.16)
−1 0 1
to detect a vertical edge. In a similar manner, one can construct similar masks
to those in Equation 9.15 and Equation 9.16 to enhance horizontal edges or
edges running E-W, NE-SW, or NW-SE. Other templates can be used to
enhance edges that are in particular orientations (see, for example, Jensen
[1996]).
What is behind Equation 9.15 or Equation 9.16 is essentially to look at the
gradient, or the first derivative, of the pixel intensities perpendicular to an
edge. Another possibility is to use a Laplacian filter, which looks at the
second derivative and which, like Mask A, is insensitive to the direction of
the edge. The following are some examples of Laplacian filters:
 0 −1  0        −1 −1 −1         1 −2  1
−1  4 −1        −1  8 −1   and  −2  4 −2
 0 −1  0        −1 −1 −1         1 −2  1
All of the above edge-enhancement, high-pass filters are linear filters (i.e.,
they involve taking only linear combinations of pixel intensities). However,
some nonlinear edge detectors also exist. One example is the Sobel Edge
Detector, which replaces the intensity I(x, y) of pixel x, y by:
I′(x, y) = √(X² + Y²)    (9.17)
where
X = {I(x − 1, y + 1) + 2I(x, y + 1) + I(x + 1, y + 1)}
  − {I(x − 1, y − 1) + 2I(x, y − 1) + I(x + 1, y − 1)}    (9.18)

and

Y = {I(x − 1, y + 1) + 2I(x − 1, y) + I(x − 1, y − 1)}
  − {I(x + 1, y + 1) + 2I(x + 1, y) + I(x + 1, y − 1)}    (9.19)
This detects vertical, horizontal, and diagonal edges. Numerous other nonlin-
ear edge detectors are available (see, for example, Jensen [1996]). An example
of an image in which edges have been enhanced is shown in Figure 9.13.
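The Sobel detector of Equations 9.17 through 9.19 can be sketched as follows (numpy; interior pixels only, with the one-pixel border left at zero for simplicity; the step image is synthetic):

```python
import numpy as np

def sobel(image):
    """Nonlinear Sobel edge detector (Equations 9.17-9.19), computed for
    the interior pixels of a grayscale image (row index taken as y)."""
    img = image.astype(float)
    rows, cols = img.shape
    out = np.zeros_like(img)
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            X = (img[y + 1, x - 1] + 2 * img[y + 1, x] + img[y + 1, x + 1]) \
              - (img[y - 1, x - 1] + 2 * img[y - 1, x] + img[y - 1, x + 1])
            Y = (img[y + 1, x - 1] + 2 * img[y, x - 1] + img[y - 1, x - 1]) \
              - (img[y + 1, x + 1] + 2 * img[y, x + 1] + img[y - 1, x + 1])
            out[y, x] = np.hypot(X, Y)   # sqrt(X^2 + Y^2)
    return out

step = np.zeros((5, 5))
step[:, 3:] = 10.0                  # a vertical edge
edges = sobel(step)                 # strongest response along the edge
```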
• Spatial filtering
• Frequency filtering using Fourier transforms
• Averaging of multiple images.
               1 1 1
Mask D = (1/9) 1 1 1    (9.20)
               1 1 1
FIGURE 9.13
Illustration of edge enhancement: (a) original image and (b) enhanced image.
become lost in the smoothing and so a less severe mask could be used,
such as:
               1/4  1/2  1/4
Mask E = (1/9) 1/2   1   1/2    (9.21)
               1/4  1/2  1/4
or
                1 1 1
Mask F = (1/10) 1 2 1    (9.22)
                1 1 1
Filters of these types (D, E, and F) can be described as low-pass filters because
they preserve the low-spatial-frequency features but remove the high-spatial-
frequency, or local, features (i.e., the noise).
The previous discussions assume that one is considering random noise in
the image. If the noise is of a periodic type, such as for instance the six-line
striping found in much of the old Landsat multispectral scanner (MSS) data,
then the idea of neighborhood averaging can be applied on a line-by-line,
rather than on a pixel-by-pixel, basis. However, alternatively, using a Fourier-
space filtering technique is particularly appropriate; in this case, the six-line
striping gives rise, in the Fourier transform or spectrum of the image, to a
very strong component at the particular spatial frequency corresponding to
the six lines in the direction of the path of travel of the scanner. By identifying
this component in the Fourier transform spectrum of the image, removing
just this component, and then taking the inverse Fourier transform, one has
a very good method of removing the striping from the image. (Fourier
transforms are discussed in more detail in Section 9.9).
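The Fourier-space destriping idea can be sketched with numpy's FFT routines. The smooth scene, stripe amplitude, and period below are all synthetic, and the position of the offending frequency component is assumed known; real MSS destriping would locate the spike in the transform rather than assume it:

```python
import numpy as np

rows, cols = 60, 60
base = np.full((rows, cols), 100.0)                        # smooth "scene"
stripe = 5.0 * np.cos(2 * np.pi * np.arange(rows) / 6.0)   # 6-line striping
noisy = base + stripe[:, None]

F = np.fft.fft2(noisy)
# The striping is periodic along the scan (row) direction with period
# 6 lines, so it appears at vertical-frequency index rows/6 (and its
# mirror image), at zero horizontal frequency.
k = rows // 6
F[k, 0] = 0.0
F[rows - k, 0] = 0.0
clean = np.fft.ifft2(F).real        # striping removed, scene preserved
```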
The third method, the averaging of multiple images, is only applicable in
certain rather special circumstances. First of all, one must have a number of
images that are assumed to be identical except for the presence of the random
noise. This means that the images must be coregistered very precisely to one
another. Then what is done for any given pixel, specified in position by
coordinates x and y, is to take an average of the intensities of the pixels in
the position x and y in each of the coregistered images. Because noise is
randomly distributed in the various images, this averaging tends to reduce
the effect of the noise and enhance the useful information. In practice, this
kind of multiple-image averaging is most important in SAR images. A SAR
image contains speckle, which is frequently reduced by using subsets of the
raw data to generate a small number of independent images. These images
are automatically coregistered, but the speckle in each image is independent
of that in each of the others; consequently, the averaging of these multiple
images reduces the speckle and enhances the useful information very
considerably.
FIGURE 9.14
Illustration of the variation of the spectra of different land cover classes in MSS data: band 1,
blue; band 2, green; band 3, red; band 4, near-infrared; band 5, thermal-infrared. (Adapted from
Lillesand and Kiefer, 1987.)
FIGURE 9.15
Sketch to illustrate the use of cluster diagrams in a three-dimensional feature space (bands 4,
5, and 6) for three-band image data.
form a cluster that is distinct from, and quite widely separated from, the
clusters corresponding to pixels associated with other types of land cover.
Therefore, provided the pixels in the scene do group themselves into well-
defined clusters that are quite clearly separated from one another, Figure 9.15
can be used as the basis of a classification scheme for the scene. Each cluster
can be identified with a certain land cover, either from a study of a portion
of the scene selected as a training area or from experience. By specifying the
coordinates (i.e., the intensities in the three bands) for each cluster and by
specifying the size and land cover of each cluster, one should be able to
assign any given pixel to the appropriate class. If “training data” are used
from a portion of the scene to be classified, quite good accuracy of classifi-
cation is obtainable. But any attempt to classify a sequence of scenes obtained
from a given area on a variety of different dates, or a set of scenes from
different areas, with the same training data, should only be made with
extreme caution. This is because, as mentioned, the intensity of the radiation
received in a given spectral band, at a remote sensing platform, depends not
only on the reflecting properties of the surface of the land or sea but also on
the illumination of the surface and on the atmospheric conditions. If satellite-
received radiances, or aircraft-received radiances, are used without conver-
sion to surface reflectances or normalized surface-leaving radiances (where
the normalization takes account of the variation in solar illumination of the
surface), the classification using a diagram of the form of Figure 9.15 is not
immediately transferable from one scene to another.
If more than three spectral bands are available, the computer programs
used to implement the classification that is illustrated in Figure 9.15 can
readily be generalized to work in an N-dimensional space where N is the
number of bands to be used.
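As a concrete illustration, a minimum-distance-to-means classifier in an N-dimensional feature space takes only a few lines (numpy; the cluster means, class names, and pixel vectors below are invented for illustration; in practice the means would come from training areas):

```python
import numpy as np

# Invented cluster means in a 3-band feature space (one row per class).
class_means = np.array([
    [10.0, 40.0, 80.0],   # water
    [60.0, 50.0, 30.0],   # urban
    [30.0, 70.0, 90.0],   # forest
])
class_names = ["water", "urban", "forest"]

def classify(pixels, means):
    """Assign each N-band pixel vector to the nearest class mean
    (Euclidean distance in N-dimensional feature space)."""
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

pixels = np.array([[12.0, 38.0, 78.0],    # near the water cluster
                   [28.0, 72.0, 88.0]])   # near the forest cluster
labels = classify(pixels, class_means)    # -> [0, 2]
```

The same function works unchanged for any number of bands N, since the distance is computed over the last axis.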
xi = fi(y1, y2, …, yn)    (9.23)

where i = 1, 2, …, n.
The quantities yi are then called components of the complex depicted by the
tests. Now consider only normally distributed systems of components that
have zero correlations and unit variances; this may be summarized conve-
niently by writing:
E(yiyj) = δij    (9.24)

where δij is the Kronecker delta.
The argument is simplified by supposing that the functions fi are linear
functions of the components so that:
xi = Σ_{j=1}^{n} aij yj    (9.25)
Assuming that the matrix A, with elements aij, is nonsingular, this relationship
can be inverted and the components yk written in terms of the variables xi:
yk = Σ_{i=1}^{n} bki xi    (9.26)
TABLE 9.2
Correlations for Hotelling’s Original Example
i\j      1        2        3        4
1      1.000    0.698    0.264    0.081
2      0.698    1.000   −0.061    0.092
3      0.264   −0.061    1.000    0.594
4      0.081    0.092    0.594    1.000
TABLE 9.3
Principal Components for Hotelling’s Original Example
Y1 Y2 Y3 Y4 Totals
Root 1.846 1.465 0.521 0.167 3.999
% of total variance 46.5 36.5 13 4 100
Reading speed 0.818 –0.438 –0.292 0.240
Reading power 0.695 –0.620 0.288 –0.229
Arithmetic speed 0.608 –0.674 –0.376 –0.193
Arithmetic power 0.578 0.660 0.459 0.143
xi = {Ii(1, 1), Ii(1, 2), … Ii(1, N), … Ii(N, 1), Ii(N, 2), … Ii(N, N)}    (9.28)
All the image data are now contained in this set of n vectors xi, where each
vector xi is of dimension N2.
The covariance matrix of the vectors xi and xj is now defined as

(Cx)ij = E{(xi − x̄i)(xj − x̄j)′}    (9.29)

where x̄i = E(xi), the expectation, or mean, of the vector xi, and the prime is
used to denote the transpose.
From the data, the mean and the variance, (Cx)ii, can also be estimated:

x̄i = (1/N²) Σ_{k=1}^{N²} xi(k)    (9.30)

and

(Cx)ii = (1/N²) Σ_{k=1}^{N²} {xi(k) − x̄i}{xi(k) − x̄i}′
       = (1/N²) Σ_{k=1}^{N²} xi(k)xi(k)′ − x̄i x̄i′    (9.31)
The mean vector will be of dimension n and the covariance matrix will be
of dimension n × n.
The objective of the Hotelling transformation is to diagonalize the covariance
matrix — that is, to transform from a set of bands that are highly correlated with
one another to a set of uncorrelated bands or principal components. In order to
achieve the required diagonalization of the covariance matrix, a transformation
yi = A(xi − x̄i)    (9.33)

so that

Cy = A Cx A′    (9.34)
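Numerically, the matrix A can be obtained from the eigenvectors of the covariance matrix; the transformed covariance Cy = A Cx A′ is then diagonal, i.e., the principal components are uncorrelated. A sketch with synthetic two-band data (numpy; real work would use the n bands of an actual image, each flattened to an N²-element vector):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two strongly correlated "bands", each flattened to a vector of pixels.
band1 = rng.normal(100.0, 10.0, size=2500)
band2 = 0.9 * band1 + rng.normal(0.0, 3.0, size=2500)
x = np.vstack([band1, band2])              # shape (n bands, pixels)

Cx = np.cov(x)                             # n x n band covariance matrix
eigvals, eigvecs = np.linalg.eigh(Cx)      # eigenvalues in ascending order
A = eigvecs[:, ::-1].T                     # rows of A = principal axes
y = A @ (x - x.mean(axis=1, keepdims=True))   # y = A(x - x̄)
Cy = np.cov(y)                             # diagonal: uncorrelated components
```

The first row of y is the first principal component, carrying the largest share of the total variance, exactly as in Hotelling's example of Table 9.3.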
pixel with coordinate x and we suppose that there are M pixels in the row.
The Fourier transform F(u) is then given by:
F(u) = (1/M) Σ_{x=0}^{M−1} f(x) exp(−2πi ux/M)    (9.35)

or

F(k) = (1/M) Σ_{x=0}^{M−1} f(x) exp(−2πi kx)    (9.36)
where k = u/M.
Thus, the function f(x) is composed of, or synthesized from, a set of harmonic
(sine and cosine) waves that are characterized by k. k, which is equal to 1/l
(where l is the wavelength), can be regarded as a spatial frequency; it is also
commonly referred to as the wave number because it is the number of
wavelengths that are found in unit length (i.e., in 1 m). Thus:
k = u/M = 1/λ    (9.37)

or

λ = M/u    (9.38)
A small value of u (also a small value of k) corresponds to a long wavelength;
thus, it characterizes a component that varies slowly in space (i.e., as one
moves along the row in the image). A large value of u (also a large value of k)
corresponds to a small wavelength; thus, it characterizes a component that
varies rapidly in space (i.e., as one moves along the row in the image). Thus,
F(u) represents the contribution of the waves of spatial frequency u to the
function f(x), in this case to the intensities of the pixels along one row of the
image. The Fourier transform for a very simple function of one variable:
f(x) = 0    −∞ < x < −a and a < x < ∞
     = 1    −a < x < a    (9.39)
is shown in Figure 9.16; this will be recognized in optical terms as correspond-
ing to the intensity distribution in the diffraction pattern from a single slit.
An important property of Fourier transforms is that they can be inverted.
Thus, if the Fourier transform F(u) is available, one can reconstruct the
function f(x):
f(x) = Σ_{u=0}^{M−1} F(u) exp(+2πi ux/M)    (9.40)
FIGURE 9.16
Standard diffraction pattern obtained from a single slit aperture; u = spatial frequency, F(u) =
radiance.
where the plus sign is included to emphasize the difference in sign in the
exponent between this equation and Equation 9.35.
An image, of course, is two-dimensional, so one must also consider a
function f(x, y) that denotes the intensity of the pixel with coordinates x and y;
f(x, y) is what we have previously called I(x, y). Then the Fourier transform
F(u, v) of the function f(x, y) is given by:
F(u, v) = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) exp{−2πi(ux/M + vy/N)}    (9.41)
where M is the number of columns and N is the number of rows in the image.
The image can then be reconstructed by taking the inverse Fourier transform:
f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) exp{+2πi(ux/M + vy/N)}    (9.42)
FIGURE 9.17
Example of two-dimensional Fourier transform.
of the Fourier transform shown in Figure 9.16. The Fourier transform F(u, v)
contains the spatial frequency information of the original image.
So far, in Equation 9.40 and Equation 9.42, we have achieved nothing
except the regeneration of an exact copy of the original image. We now turn
to filtering. The range of the allowed values of u is from 0 to M – 1 and of
v is from 0 to N – 1; thus, there are M × N pairs of values of u and v. If,
instead of using all the components F(u, v), one selects only some of them to
include in an equation like Equation 9.42, one will obtain a function that is
recognizable as being related to the original image but has been altered in
some way. Low values of u and v correspond to slowly varying components
(i.e., to low-spatial-frequency components); high values of u and v corre-
spond to rapidly varying components (i.e., to high-spatial-frequency com-
ponents). If, instead of including all the terms F(u, v) in Equation 9.42, one
includes only the low-frequency terms, one will obtain an image that is like
the original image, but where the low frequencies are emphasized and the
high frequencies are de-emphasized or removed completely; in other words,
one can achieve low-frequency filtering. Similarly, if one includes only the
high-frequency terms, one will obtain an image that is like the original image
but where the high frequencies are emphasized and the low frequencies are
de-emphasized or removed completely; in other words, one achieves high-
frequency filtering. We now take the Fourier transform F(u, v) of the original
image f(x, y) and construct a new function G(u, v) where:
G(u, v) = H(u, v)F(u, v)    (9.43)

where H(u, v) is a filter function. The processed image g(x, y) is then obtained
by taking the inverse Fourier transform of G(u, v):

g(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} G(u, v) exp{+2πi(ux/M + vy/N)}    (9.44)
So, to summarize:

f(x, y)    original image
    ↓  Fourier transform
F(u, v)
    ↓  multiply by filter H(u, v)
G(u, v) = H(u, v)F(u, v)
    ↓  inverse Fourier transform
g(x, y)    processed/filtered image.
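This pipeline maps directly onto FFT routines. The sketch below (numpy; the synthetic scene, noise level, and cut-off radius are all illustrative) builds an ideal low-pass H(u, v) from the distance d(u, v) of Equation 9.45 and applies the steps of the summary in order:

```python
import numpy as np

def ideal_lowpass(shape, cutoff):
    """H(u, v) = 1 where d(u, v) <= cutoff, 0 elsewhere (ideal filter)."""
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None] * rows   # integer frequency indices
    v = np.fft.fftfreq(cols)[None, :] * cols
    d = np.sqrt(u**2 + v**2)                   # d(u, v), Equation 9.45
    return (d <= cutoff).astype(float)

rng = np.random.default_rng(2)
f = np.full((64, 64), 50.0) + rng.normal(0, 5, (64, 64))  # scene + noise

F = np.fft.fft2(f)                    # Fourier transform
H = ideal_lowpass(f.shape, cutoff=8)  # filter function
G = H * F                             # G(u, v) = H(u, v)F(u, v)
g = np.fft.ifft2(G).real              # processed/filtered image
```

Swapping in a Butterworth or exponential H(u, v), as in Figure 9.18, changes only the `ideal_lowpass` function; the rest of the pipeline is unchanged.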
FIGURE 9.18
Four filter functions: (a) ideal filter; (b) Butterworth filter; (c) exponential filter; and (d) trape-
zoidal filter.
d(u, v) = √(u² + v²)    (9.45)
These are relatively simple functions; more complicated filters can be used
for particular purposes.
FIGURE 9.19
Fourier transform for a hypothetical noise-free image.
FIGURE 9.20
Fourier transform shown in Figure 9.19 with the addition of some random noise.
FIGURE 9.21
Example of a Landsat MSS image showing the 6-line striping effect, band 5, path 220, row 21,
of October 24, 1976, of the River Tay, Scotland. (Cracknell et al., 1982.)
of the image and then taking the inverse Fourier transform of that transform
to regenerate the image again. However, the basic idea is that it may be
easier to identify spurious or undesirable effects in the Fourier transform
than in the original image. These effects can then be removed. This is a form
of filtering but, unlike the filtering discussed in Section 9.6, this filtering is
not carried out on the image itself (i.e., in the spatial domain), but on its
Fourier transform (i.e., in the frequency domain [by implication, the spatial
frequency domain]). Having filtered the transform to remove imperfections,
the image can then be reconstructed by performing the inverse Fourier
transform. An improvement in the quality of the image is then often obtained
in spite of the optical aberrations or numerical errors or approximations.
In taking a Fourier transform of a two-dimensional object, such as a film
image of some remotely sensed scene, one is analyzing the image into its
component spatial frequencies. This is what a converging lens does when an
object is illuminated with a plane parallel beam of coherent light. The complex
field of amplitude and phase distribution in the back focal plane is the Fourier
transform of the field across the object; in observing the diffraction pattern or
in photographing it, one is observing or recording the intensity data and not
the phase data. Actually, to be precise, the Fourier transform relation is only
exact when the object is situated in the front focal plane; for other object
positions, phase differences are introduced, although these do not affect the
appearance of the diffraction pattern. It will be clear that, because rays of light
are reversible, the object is the inverse Fourier transform of the image. The
inverse transform can thus be produced physically by using a second lens. As
already mentioned, the final image produced would, in principle, be identical
to the original object, although it will actually be degraded as a result of the
aberrations in the optical system. This arrangement has the advantage that,
by inserting a filter in the plane of the transform, the effect of that filter on the
reconstructed image can be seen directly and visually (see Figure 9.22).
FIGURE 9.22
Two optical systems suitable for optical filtering. (Wilson, 1981.)
The effects that different types of filters have when the image is recon-
structed can thus be studied quickly and effectively. This provides an exam-
ple of a situation in which it is possible to investigate and demonstrate effects
and principles much more simply and effectively with optical image pro-
cessing techniques than with digital methods.
The effect of some simple filters can be illustrated with a few examples
that have been obtained by optical methods. A spatial filter is a mask or
transparency that is placed in the plane of the Fourier transform (i.e., at T
in Figure 9.22), and various types of filter can be distinguished:
• A blocking filter (a filter that is simply opaque over part of its area)
• An amplitude filter
• A phase filter
• A real-valued filter (a combination of an amplitude filter and a phase
filter, where the phase change is either 0 or π)
• A complex-valued filter that can change both the amplitude and the
phase.
A blocking filter is, by far, the easiest type of filter to produce. Figure 9.23(a)
shows an image of an electron microscope grid, and Figure 9.23(b) shows
FIGURE 9.23
(a) The optical transform from an electron microscope grid; (b) image of the grid; (c) filtered
transform; (d) image due to (c); (e) image due to zero-order component and surrounding four
orders; and (f) image when zero-order component is removed. (Wilson, 1981.)
FIGURE 9.24
Raster removal. (Wilson, 1981.)
its optical Fourier transform. Figure 9.23(c) shows the transform with all the
nonzero ky components removed. Consequently, when the inverse transform
is taken, no structure remains in the y direction (see Figure 9.23[d]). The
effects of two other blocking filters are shown in Figure 9.23(e) and (f). The
six-line striping present in Landsat MSS images has already been mentioned.
By using a blocking filter to remove the component in the transform corre-
sponding to this striping, one can produce a destriped image. The removal
of a raster from a television picture is similar to this (see Figure 9.24). One
might also be able to remove the result of screening a half-tone image; the
diffraction pattern from a half-tone object contains a two-dimensional
arrangement of discrete maxima, with the transform of the picture centered
on each maximum. A filter that blocks out all except one order can produce
an image without the half-tone. This approach can also be applied to the
smoothing of images that were produced on old-fashioned computer output
devices, such as line printers and teletypes (see Figure 9.25).
FIGURE 9.25
Half-tone removal. (Wilson, 1981.)
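A minimal sketch of destriping with a blocking filter, in the spirit of the Landsat MSS example above; the synthetic striped image and the notch positions are illustrative assumptions:

```python
import numpy as np

# Hypothetical 96 x 96 image: uniform background plus striping with a
# six-line period, mimicking the Landsat MSS striping mentioned above.
rows, cols = 96, 96
stripe = 5.0 * (np.arange(rows) % 6 == 0)        # every sixth line is offset
image = 100.0 + np.tile(stripe[:, None], (1, cols))

F = np.fft.fft2(image)

# Striping with a six-line period concentrates its energy at row frequencies
# that are multiples of rows/6, at zero column frequency. A blocking filter
# simply zeroes those components (keeping the DC term, i.e. the mean level).
for k in range(1, 6):
    F[k * rows // 6, 0] = 0.0

destriped = np.real(np.fft.ifft2(F))
print(destriped.std() < image.std())  # True: the striping has been removed
```

In practice the stripe-related components of a real image are identified by inspecting the transform, exactly as the text describes, rather than computed from a known period.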
FIGURE 9.26
Edge enhancement by high-pass filtering. (Wilson, 1981.)
The more high-spatial-frequency components are present in the Fourier transform,
the more fine detail can be represented in the image. As previously noted (see
Section 9.6.2), a high-pass filter that allows high spatial frequencies to pass
but blocks the low spatial frequencies leads to edge enhancement of the
original image because the high spatial frequencies are responsible for the
sharp edges (see Figure 9.26).
One final point is worth mentioning before we leave Fourier transforms.
There are many other situations apart from the processing of remotely sensed
images in which Fourier transforms are used in order to try to identify a
periodic feature that is not very apparent from an inspection of the original
image or system itself. This is very much what is done in X-ray and electron
diffraction work in which the diffraction pattern is used to identify or quantify
the periodic structure of the material that is being investigated. Similarly,
Fourier transforms of images of wave patterns on the surface of the sea,
obtained from aerial photography or from a SAR on a satellite, are used to
find the wavelengths of the dominant waves present. Because dealing with
a representation of the Fourier transform as a function of two variables using
three dimensions in space is inconvenient, it is more common to represent
the Fourier transform as a grayscale image in which the value of the trans-
form F(u,v) is represented by the intensity at the corresponding point in the
u,v plane. Such representations of the Fourier transform are very familiar to
physicists and the like who encounter them frequently as films of optical,
X-ray, or electron diffraction patterns.
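A common way to produce such a grayscale representation is to shift the zero-frequency term to the center of the u,v plane and apply logarithmic scaling; the log scaling is a standard display trick (the DC term otherwise dominates), not something stated in the text, and the sinusoidal test pattern is an illustrative assumption:

```python
import numpy as np

def spectrum_image(f):
    """Grayscale representation of |F(u, v)|, with the zero-frequency term
    shifted to the center of the u,v plane and log scaling applied so that
    weak components remain visible alongside the strong ones."""
    F = np.fft.fftshift(np.fft.fft2(f))
    magnitude = np.log1p(np.abs(F))
    # Scale to 0..255 for display as an 8-bit grayscale image.
    return (255 * magnitude / magnitude.max()).astype(np.uint8)

# Hypothetical "wave pattern": a single sinusoid with 8 cycles across the
# frame. Its spectrum shows a bright pair of spots whose distance from the
# center gives the dominant wavelength - as with SAR images of sea waves.
x = np.arange(128)
waves = np.sin(2 * np.pi * 8 * x / 128)[None, :] * np.ones((128, 1))
img = spectrum_image(waves)
# The bright spots sit 8 frequency steps either side of the center (64, 64).
```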
10
Applications of Remotely Sensed Data
10.1 Introduction
Remotely sensed data can be used for a great variety of practical applications,
all of which relate, in general, to Earth resources. For convenience, and
because the innumerable applications are so varied and far reaching, in this
chapter these applications are classed into major categories, each coming
under the purview of some recognized professional discipline or specialty.
These categories include applications to the:
• Atmosphere
• Geosphere
• Biosphere
• Hydrosphere
• Cryosphere.
the capability of modern forecast systems and to quantify the associated ben-
efits in the delivery of a real-time nowcast service. The FDP was not just about
providing new and improved systems that could be employed by forecasters;
rather, it demonstrated the benefits to end users by undertaking verification of
nowcasts and impact studies.
[Figure: radar CAPPI composite display, 10:00:00, 25 Jul 2001; task PPIVOL; PRF 500/375 Hz; height 3.0 km; max range 256 km.]
FIGURE 10.3
World radiosonde network providing 4,000 soundings per day. (NASA/JPL/AIRS Science Team,
Chahine, 2005.)
[Figures: global coverage map keyed in hours from January 27, 2003 0000 Z; temperature–pressure sounding axes, 180 to 320 K.]
FIGURE 10.5
Comparison of AIRS retrieval (smooth line) with that of a dedicated radiosonde (detailed line)
obtained for the Chesapeake Platform on Sept 13, 2002. (NASA/JPL/AIRS Science Team.)
obvious that there were serious problems with the vectors. The standard
accuracy specified for surface wind speed data is ±2 ms–1 for wind speeds up
to 20 ms–1, and 10% for wind speeds above that, and the accuracy for wind
direction is ±20°. Gemmill et al. (1994) found that, although the wind speed
retrievals met their specifications, the wind direction selections did not. To
improve accuracy, the vector solutions are ranked according to a probability
fit. A background wind field from a numerical model is used to influence the
initial directional probabilities. The vector solutions are then reranked accord-
ing to probability determined by including the influence of the background
field (Stoffelen and Anderson, 1997). A final procedure (not used by ESA)
may then be carried out on the scatterometer wind swath to ensure that all
the winds present a reasonable and consistent meteorological pattern. This
procedure, the sequential local iterative consistency estimator (SLICE), works
by changing the directional probabilities (and rank) of each valid solution
using wind directions from surrounding cells. The SLICE algorithm was
developed by and is being used by the U.K. Meteorological Office.
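The reranking idea can be sketched as follows. This is a toy illustration of weighting ambiguous direction solutions by their agreement with a background field, not the operational SLICE code; the Gaussian weighting and the sigma value are assumptions:

```python
import math

def rerank_ambiguities(solutions, background_dir, sigma=30.0):
    """Rerank ambiguous wind-direction solutions using a background field.

    solutions: list of (direction_deg, probability) pairs from the retrieval.
    background_dir: numerical-model wind direction in degrees.
    sigma: assumed directional uncertainty of the background (degrees).
    Returns the solutions sorted by probability after including the
    influence of the background field, best solution first.
    """
    def angular_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    reranked = []
    for direction, prob in solutions:
        weight = math.exp(-0.5 * (angular_diff(direction, background_dir) / sigma) ** 2)
        reranked.append((direction, prob * weight))
    return sorted(reranked, key=lambda s: s[1], reverse=True)

# Two near-opposite ambiguities, as is typical for scatterometer retrievals;
# a background field at 100 degrees breaks the tie in favor of 95 degrees.
ranked = rerank_ambiguities([(95.0, 0.48), (275.0, 0.52)], background_dir=100.0)
print(ranked[0][0])  # 95.0
```

Note how the solution that was marginally more probable in isolation (275°) loses once the background direction is taken into account, which is exactly the failure mode Gemmill et al. (1994) identified in direction selection.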
The NASA Scatterometer (NSCAT), which was launched aboard Japan’s
Advanced Earth Observing Satellite (ADEOS; MIDORI in Japan) in August
1996, was the first dual-swath, Ku band scatterometer to fly since Seasat. From
September 1996, when the instrument was first turned on, until premature
termination of the mission due to satellite power loss in June 1997, NSCAT
returned a continuous stream of global sea-surface wind vector measurements.
The NSCAT mission proved so successful that plans for a follow-up mission
were accelerated to minimize the gap in the scatterometer wind database. The
QuikSCAT mission launched the SeaWinds instrument in June 1999, with an
1800-km swath during each orbit, providing approximately 90% coverage of
the Earth’s oceans every day and wind-speed measurements of 3 to 20 ms−1,
with an accuracy of 2 ms−1; a directional accuracy of 20°; and a wind vector
resolution of 25 km (see Figure 10.6). A follow-up SeaWinds instrument is
scheduled for launch on the ADEOS-II platform in August 2006.
Although the altimeters flown on Skylab, Geodynamics Experimental
Ocean Satellite–3, and Seasat were primarily designed to measure the height
of the spacecraft above the Earth’s surface, the surface wind speed, although
not direction, can be inferred from the shape of the returned altimeter pulse.
systems and the feasibility of developing a lower cost system for civil oceano-
graphic work has been examined (Georges and Harlan, 1999).
The Australian Jindalee Operational Radar Network, JORN, has evolved
over a period of 30 years at a cost of over $A1.8 billion. It is primarily an
imaging system which enables Australian military commanders to observe
all air and sea activity, including detecting stealth aircraft, north of Australia
to distances of at least 3000 km (see, for example, http://www.defencetalk.com/
forums/archive/index.php/t-1832.html).
The U.S. Air Force’s over-the-horizon-backscatter (OTH-B) air defense
radar system is probably by far the largest radar system in the world. It was
developed to warn against Soviet bomber attacks when the planes were still
thousands of miles from U.S. air space. Six 1 MW OTH radars see far beyond
the range of conventional microwave radars using 5-28 MHz waves reflected
by the ionosphere. With the end of the Cold War (just months after their
deployment), the three OTH radars on the U.S. West Coast were moth-
balled, but the three radars on the U.S. East Coast were redirected to
counter-narcotics surveillance. In 1991, NOAA recognized their potential
for environmental monitoring and asked the Air Force’s permission to look
at the part of the radar echo that the Air Force throws away—the ocean
clutter. Tropical storms and hurricanes were tracked, and a system was
developed for delivering radar-derived winds to the National Hurricane
Center. The combined coverage of the six OTH-B radars is about 90 million
square kilometres of open ocean where few weather instruments exist. Tests
have also demonstrated the ability of OTH radars to map ocean currents
(Georges and Harlan, 1994a, 1994b, Georges 1995).
Whereas OTH radars map surface wind directions on demand over large,
fixed ocean areas, active and passive microwave instruments on board sev-
eral polar-orbiting satellites measure wind speeds along narrow swaths
determined by their orbital geometries. Satellites, however, do not measure
wind directions very well. Thus, the capabilities of ground-based and satellite-
based ocean-wind sensors complement each other. Figure 10.7 shows 24 hours
of ERS-1 scatterometer coverage over the North Atlantic (color strips). The
wind speed is color coded, with orange indicating highest speeds. The OTH-B
wind directions for the same day are superimposed, filling in the gaps in
the satellite data.
by weather satellites, and much has been learned about the structure and
movements of these small but powerful vortices from the satellite evi-
dence. Satellite-viewed hurricane cloud patterns enable the compilation
of very detailed classifications and the determination of maximum wind
speeds. The use of enhanced infrared imagery in tropical cyclone analysis
adds objectivity and simplicity to the task of determining tropical storm
intensity. Satellite observations of tropical cyclones are used to estimate
their potential for spawning hurricanes. The infrared data not only afford
continuous day and night storm surveillance but also provide quantitative
information about cloud features that relate to storm intensity; thus, cloud-
top temperature measurements and temperature gradients can be used in
place of qualitative classification techniques employing visible wave-
bands. The history of cloud pattern evolution and knowledge of the cur-
rent intensity of tropical storms are very useful for predicting their
developmental trend over a 24-hour period and allow an early warning
capability to be provided for shipping and for areas over land in the paths
of potential hurricanes.
Hurricanes and typhoons exhibit a great variety of cloud patterns, but
most can be described as having a comma configuration. The comma tail is
composed of convective clouds that appear to curve cyclonically into a
center. As the storm develops, the clouds form bands that wrap around the
storm center producing a circular cloud system that usually has a cloud-free,
dark eye in its mature stage. The intensity of hurricanes is quantifiable either
by measuring the amount by which cold clouds encircle the center or by using
surrounding temperature and eye criteria. Large changes in cloud features
are related to changes in intensity: increased encirclement of the cloud
system center by cold clouds is associated with a decrease in pressure and
an increase in wind speed.
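As a toy illustration of such quantitative criteria, one can measure the contrast between a warm eye and the surrounding cold cloud ring in a grid of cloud-top brightness temperatures; the geometry and temperature values below are invented for illustration and are not operational classification thresholds:

```python
import numpy as np

def eye_ring_contrast(bt, eye_center, eye_radius, ring_radius):
    """Contrast between the warm eye and the surrounding cold cloud ring.

    bt: 2-D array of cloud-top brightness temperatures (K).
    A larger contrast (warm eye embedded in very cold encircling cloud)
    indicates a more intense storm. All geometry parameters are illustrative.
    """
    yy, xx = np.indices(bt.shape)
    r = np.hypot(yy - eye_center[0], xx - eye_center[1])
    eye_temp = bt[r <= eye_radius].mean()
    ring_temp = bt[(r > eye_radius) & (r <= ring_radius)].mean()
    return eye_temp - ring_temp

# Synthetic mature storm: a 280 K eye embedded in 200 K convective cloud.
bt = np.full((41, 41), 200.0)
yy, xx = np.indices(bt.shape)
bt[np.hypot(yy - 20, xx - 20) <= 3] = 280.0
print(eye_ring_contrast(bt, (20, 20), 3, 15))  # 80.0
```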
Weather satellites have almost certainly justified their expense through the
assistance they have given in hurricane forecasting alone. The damage
caused by a single hurricane in the United States is often of the order of
billions of dollars, the most notable recent storm having been Hurricane
Katrina that devastated a substantial part of New Orleans in August 2005
(see Figure 10.8). As Hurricane Katrina gained strength in the Gulf of Mexico
on Sunday August 28, 2005, the population of New Orleans was ordered to
evacuate. Up to 80% of New Orleans was flooded after defensive barriers
against the sea were overwhelmed.
The tracking of Hurricane Katrina by satellite allowed the advance
evacuation of New Orleans and undoubtedly saved many lives. In 1983,
Hurricane Alicia caused an estimated $2.5 billion of damage and was
responsible for 1,804 reported deaths and injuries. In November 1970, a
tropical cyclone struck the head of the Bay of Bengal, and the loss of life
caused by the associated wind, rain, and tidal flooding exceeded 300,000.
Indirectly, this disaster was the trigger that led to the establishment
of an independent state of Bangladesh. Clearly, timely information about the
behavior of such significant storms may be almost priceless.
The World Climate Research Programme aims to discover how far it is possible
to predict natural climate variation and man’s influence on the climate. Satel-
lites contribute by providing observations of the atmosphere, the land surface,
the cryosphere and the oceans with the advantages of global coverage, accuracy,
and consistency. Quantitative climate models enable the prediction and detec-
tion of climate change in response to pollution and the “greenhouse” effect. In
addition to the familiar meteorological satellites, novel meteorological missions
have been established to support the Earth Radiation Budget Experiment
(ERBE) and the International Satellite Cloud Climatology Project (ISCCP).
FIGURE 10.9
Deviations of global monthly mean cloud amount from the long-term total period mean,
1983–2005. (ISCCP; http://isccp.giss.nasa.gov/climanal1.html)
in a non-El Niño year. The most significant increases occurred in the eastern
Pacific Ocean.
During the 1982–1983 El Niño event, significant perturbations in a diverse
set of geophysical fields occurred. Of special interest are the planetary-scale
fields that act to modify the outgoing longwave radiation (OLR) field at the
top of the atmosphere. The most important is the effective “cloudiness”;
specifically, perturbations from the climatological means of cloud cover,
TABLE 10.1
Mean Skin Surface Temperature during January 1979

Area             Temperature (°C)
Global           14.14
N. hemisphere    11.94
S. hemisphere    16.35
[Figure: NVAP–NG water vapor, December 2001; scale 0 to 70 mm.]
and vertical distribution, of ozone and of the other gases are necessary. There
is a long-established data set of ground-level measurements of the total
amount of ozone in the atmospheric column and there are also some mea-
surements of ozone concentration profiles obtained using ozonesondes. This
dataset has now been augmented by data from several satellite systems.
These are principally the TOVS on the NOAA polar-orbiting satellites, var-
ious versions of the TOMS (Total Ozone Mapping Spectrometer) and a num-
ber of SBUV (Solar Backscattered UV) instruments.
Channel 9 of the HIRS, one of the TOVS instruments, which is at a wave-
length of 9.7 µm, is particularly well suited for monitoring the atmospheric
ozone concentration; this is a (general) “window” (i.e. transparent) channel,
except for absorption by ozone. The radiation emitted from the Earth’s surface
and received by the HIRS instruments in this channel is attenuated by the
ozone in the atmosphere. The less ozone, the greater the amount of radiation
reaching the satellite. TOVS data have been used to determine atmospheric
ozone concentration from 1978 to the present time and images are now reg-
ularly produced from TOVS data giving hemispherical daily values of total
ozone. An advantage of TOVS over the other systems that use solar UV
radiation is that TOVS data are available at night time and in the polar regions
in winter. The drawbacks are that when the Earth’s surface is too cold (e.g.
in the high Antarctic Plateau), too hot (e.g. the Sahara desert), or too obscured
(e.g. by heavy tropical cirrus clouds) the accuracy of this method declines.
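The relationship "the less ozone, the greater the amount of radiation reaching the satellite" can be sketched with a simple Beer–Lambert attenuation law; the absorption coefficient used here is an invented illustrative value, not the actual HIRS channel 9 calibration:

```python
import math

def transmitted_radiance(surface_radiance, ozone_column_du, k=0.002):
    """Radiance reaching the satellite in an ozone-absorbing channel,
    modeled with a simple Beer-Lambert attenuation law.

    ozone_column_du: total column ozone in Dobson units.
    k: illustrative absorption coefficient per Dobson unit (an assumed
       value, not the real HIRS channel 9 figure).
    """
    return surface_radiance * math.exp(-k * ozone_column_du)

# Less ozone in the column -> more radiation reaches the satellite:
low = transmitted_radiance(100.0, 250.0)   # thin ozone column
high = transmitted_radiance(100.0, 450.0)  # thick ozone column
print(low > high)  # True
```

This also makes the stated drawbacks plausible: if the surface is very cold, the emitted radiance itself is small, so the ozone-induced attenuation becomes hard to measure accurately.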
The other two groups of instruments, the TOMS and SBUV types, differ
principally in two ways. First, the TOMS instruments are scanning instru-
ments and the SBUV instruments are nadir-looking only, and secondly, the
TOMS instruments measure only the total ozone content of the atmospheric
column, while the SBUV instruments measure both the vertical profile and
the total ozone content. At about the same time that the first TOVS was
flown, the Nimbus-7 satellite was launched and this carried, among other
instruments, the first Total Ozone Mapping Spectrometer (TOMS).
The work using instruments which are able to measure the ozone concen-
tration using solar UV radiation began with the Backscatter Ultraviolet
(BUV) instrument flown on Nimbus-4 which was launched in 1970, followed
in 1978 by the Nimbus-7 Solar Backscatter Ultraviolet (SBUV) instrument.
These measurements have been continued from 1994 with SBUV/2 instru-
ments on board the NOAA-9, -11, -14, -16, and -17 satellites, and TOMS
instruments on the Russian Meteor-3, Earth Probe, and Japanese ADEOS
satellites. The European Space Agency’s Global Ozone Monitoring Experi-
ment (GOME) on the ERS-2 satellite, also performing backscattered ultra-
violet measurements, complements the US efforts. The primary measure-
ment objective of GOME is the measurement of total column amounts and
profiles of ozone and of other gases involved in ozone photochemistry. Due to
a failure of the ERS-2 tape recorder only GOME observations made while in
direct contact with ground stations have been available since June 22, 2003. A
GOME instrument will also be flown on the ESA Metop-1 mission scheduled
(at the time of writing) to be launched in October 2006, and the Metop-2
[Figures: monthly total ozone maps, October 1980 through October 1991 (scale 100–500 DU, WFDOAS); GOME total ozone, Northern Hemisphere, March 1996–2005 (scale 100–500 DU).]
10.2.6.5 Summary
The chief advantages of satellite remote sensing systems for climatology are:
• Weather satellite data for the whole globe are far more complete
than conventional data.
• Satellite data are more homogeneous than those collected from a
much larger number of surface observatories.
• Satellite data are often spatially continuous, as opposed to point
recordings from the network of surface stations.
• Satellites can provide more frequent observations of some parame-
ters in certain regions, especially over oceans and high latitudes.
• The data from satellites are collected objectively, unlike some con-
ventional observations (e.g., visibility and cloud cover).
• Satellite data are immediately amenable to computer processing.
with surface traces of faults and fracture zones that control patterns of
topography, drainage, and vegetation that serve as clues to their recognition.
The importance of finding these features is that lineaments often represent
major fracture systems responsible for earthquakes and for transporting and
localizing mineral solutions as ore bodies at some stage in the past.
A major objective in geological mapping is the identification of rock types
and alteration products. In general, most layered rocks cannot be directly
identified in satellite imagery because of limitations in spatial resolution and
the inherent lack of unique or characteristic differences in color and bright-
ness of rocks whose types are normally distinguished by mineral and chem-
ical content and grain sizes. Nor is it possible to determine the stratigraphic
age of recognizable surface units directly from remotely sensed data unless
the units are able to be correlated with those of known age in the scene or
elsewhere. In exceptional circumstances, certain rocks exposed in broad out-
crops can be recognized by their spectral properties and by their distinctive
topographic expressions. However, the presence of covering soil and vege-
tation tends to mask the properties favorable to recognition.
FIGURE 10.16
Visible (left) and thermal-infrared (right) images of a buried stream channel. (WesternGeco.)
evident, but given the knowledge that the infrared image was acquired at
night, certain inferences may be drawn. One is that the horizontal portion
of the stream channel is probably of coarser sand and gravel than the vertical
portion. Its bright signal suggests more readily flowing water, which is
warmer than the night-air-cooled surrounding soil. The very dark signal
adjoining and above the horizontal portion of the stream course, and to both
sides of the vertical portion, probably represents clay, indicating moisture
that cooled down many years ago and remained cold compared with the
surrounding soil because of poor permeability and thermal conductivity.
Such imagery could be invaluable for investigating groundwater and avenues
for pollution movement, for exploring for placers, sand, gravel, and clay, and
for emplacing engineering structures.
Unlike hydrogeological studies or mineral exploration, engineering geo-
logical investigations are often confined to the upper 5 m of the Earth’s
surface. Thermal-infrared line scanning has proven to be very effective
within the near-surface zone, and it is becoming increasingly useful in a
variety of engineering geological problems such as those aimed at assessing
the nature of material for foundations, excavations, construction materials
and drainage purposes.
Density of materials and ground moisture content are the two dominant
factors influencing tonal variations recorded in thermal imagery. Subsurface
solid rock geology may be interpreted from one or more of a number of
“indicators,” including vegetation changes, topographic undulations, soil
variations, moisture concentrations, and mineralogical differences. As far as
particle size or grading is concerned, in unconsolidated surface materials,
lighter tones are caused by coarser gradings. Lighter tones also result from
a greater degree of compaction of surface materials.
[Figure: thermal-infrared stereo pair of Miyake Island, October 5, 1983, 19:00; temperature scale 20–50°C; left: EL = 80°W, right: EL = 80°E.]
than the northward slope in the northern hemisphere and temperature dif-
ferences may be clearly seen even on predawn thermal-infrared data. The cor-
rection for topographic conditions may be as large as 1°C for night-time data.
Differences of emissivity between objects due to different land cover con-
ditions are a further difficulty in the evaluation of thermal-infrared data.
These differences may have to be taken into account when the data are
analyzed. A land cover map may be compiled on the basis of the multispec-
tral characteristics of objects. Surface temperature may be obtained from
nonvegetated areas identified in the land-cover map. Because of vegetation
cover, the ground-surface temperature data available are usually very
sparsely distributed. If one assumes that the overall distribution pattern of
the ground-surface temperature reflects the underground temperature dis-
tribution, which is usually governed by the geological structure of the area,
a trend-surface analysis may be applied to interpolate between the sparse
ground temperature data. This makes it possible to visualize the trend of
the ground-surface temperature, which is otherwise obscured by the high-
frequency variation in the observed ground-surface temperatures. Of course,
by combining thermal-infrared data with other survey data, even more useful
information can be drawn and a better understanding of the survey area
achieved. The refined remote sensing data may be cross-correlated by
computer manipulation of multivariable data sets, including geological,
geophysical, and geochemical information. These remote sensing data,
together with other remote and direct sensing measurements, may then be
used to target drill holes to test the geothermal site for geothermal resources.
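A trend-surface analysis of this kind amounts to fitting a low-order polynomial surface to the sparse temperature samples by least squares; the quadratic form and the sample points below are invented for illustration:

```python
import numpy as np

def fit_trend_surface(x, y, t):
    """Fit a quadratic trend surface t = a + bx + cy + dx^2 + exy + fy^2
    to sparse ground-temperature samples by linear least squares."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coeffs

def evaluate(coeffs, x, y):
    """Evaluate the fitted trend surface at an arbitrary point."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x**2 + e * x * y + f * y**2

# Hypothetical sparse samples drawn from a known quadratic surface,
# standing in for temperatures measured at the nonvegetated pixels.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
y = rng.uniform(0, 10, 40)
t = 15.0 + 0.3 * x - 0.2 * y + 0.05 * x**2   # the "underground" trend

coeffs = fit_trend_surface(x, y, t)
# The fitted surface interpolates the trend at an unsampled point:
print(round(evaluate(coeffs, 5.0, 5.0), 2))  # 16.75
```

The fitted surface captures the broad geological trend while leaving out the high-frequency point-to-point variation, which is precisely the separation the text describes.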
FIGURE 10.19
Typical thermal coal pile fire detector fitted high up on an automatic pan and tilt scanning
mechanism to provide views of an entire stockyard. (Land Instruments International.)
FIGURE 10.20
X-band SAR image of an arid environment in Arizona. (WesternGeco.)
energy away from, rather than back to, the receiver. These “dark” signals
are from the finest material in the area, probably clays and silts. Stringers of
bright signals may also be observed in the Bolson plain. These are probably
caused by coarse sands and gravels and represent the braided courses of the
highest velocity streams entering the area.
Thus radar, in this sense, is useful in the exploration for sand and gravel
construction materials and placers — that is, for emplacing construction on
the good, coarse materials rather than on clays and silts. Radar data are also
particularly effective for the identification of lineaments, depending on the
imaging geometry. Radar has been used for detecting and mapping many
other environmental factors. A major product used for such work is the
precision mosaic, from which geological maps, geomorphological maps, soil
maps, ecological conservation maps, land use potential maps, agricultural
maps, and phytological maps have been generated.
FIGURE 10.21
GGM01 showing geophysical features, July 2003. (University of Texas Center for Space Research
and NASA.)
situations where the latter may be limited in its data acquisition, such as in
the presence of power lines and metal fences, but its operational requirements
are more demanding and costly.
Recent satellite missions equipped with highly precise inter-satellite and
accelerometry instrumentation have been observing the Earth’s gravitational
field and its temporal variability. The CHAllenging Minisatellite Payload
(CHAMP), launched in July 2000, and the Gravity Recovery and Climate
Experiment (GRACE), launched in March 2002, provide gravity fields, and
anomalies, on a routine basis. Figure 10.21 shows the GRACE Gravity
Model 01 (GGM01) that was released on July 21, 2003, based upon a prelim-
inary analysis of 111 days of in-flight data gathered during the commission-
ing phase of the mission. This model is 10 to 50 times more accurate than
all previous Earth gravity models. The ESA Gravity Field and Steady State
Ocean Circulation Explorer mission is scheduled for launch in 2006.
sonar system that collects and processes seafloor depth data. It produces
three-dimensional bathymetric images over a wide swath in near-real time.
Following the magnitude 9.2 earthquake that occurred on December 26, 2004, HMS
Scott deployed to the area and quickly collected a significant amount of
bathymetric data. The data were then used to create three-dimensional
images for evaluation to contribute to further understanding of that partic-
ular earthquake and to assist in the prediction of such events in the future.
Figure 10.22 shows a bathymetric image of the boundary between the Indian
Ocean and Asian tectonic plates. In the left foreground at the base of the
blue is a 100 metre deep channel, termed ‘The Ditch’, which is believed to
have been formed by the earthquake. The deep channel cutting across the
image has formed through erosion as convergence between the plates has
uplifted the seabed causing erosion (Henstock et al., 2006).
In all but the most technologically advanced countries, up-to-date and accu-
rate assessments of total acreage of different crops in production, anticipated
yields, stages of growth, and condition (health and vigor) are often incomplete
or untimely in relation to the information needed by agricultural managers.
These managers are continually faced with decisions on planting, fertilizing,
watering, pest control, disease, harvesting, storage, evaluation of crop quality,
and planning for new cultivation areas. Remotely sensed information is used
to predict marketing factors, evaluate the effects of crop failure, assess damage
from natural disasters, and aid farmers in determining when to plough, water,
spray, or reap. The need for accurate and timely information is particularly
acute in agricultural information systems because of the very rapid changes
in the condition of agricultural crops and the influence of crop yield predictions
on the world market; it is for these reasons that, as remote sensing technology
has developed, the potential for this technology to be used in this field has
received widespread attention. Previously, aircraft surveys were sporadically
used to assist crop and range managers in gathering useful data, but given
the cost and logistics of aircraft campaigns and the advent of multivisit mul-
tispectral satellite sensors designed specifically for the monitoring of vegeta-
tion, attention has shifted to the use of satellite imagery for agricultural
monitoring. Crop identification, yield analysis, and validation and verification
activities rely on a revisit capability throughout the growing season, which is
now available from repetitive multispectral satellite imagery.
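The reliance on multitemporal imagery can be made concrete with a toy classifier. The sketch below assigns a pixel to the crop whose seasonal greenness profile it most closely matches; the profile values and crop names are invented for illustration, and operational systems use many bands, many dates, and far more sophisticated statistics.

```python
import numpy as np

# Hypothetical seasonal greenness profiles (one value per acquisition
# date) for two crops with different planting and maturing calendars.
REFERENCE_PROFILES = {
    "wheat": np.array([0.2, 0.5, 0.8, 0.6, 0.3]),  # peaks mid-season
    "maize": np.array([0.1, 0.2, 0.5, 0.8, 0.7]),  # peaks late season
}

def classify_pixel(profile):
    """Assign the crop whose reference profile is nearest (Euclidean
    distance) to the observed multitemporal profile."""
    profile = np.asarray(profile, dtype=float)
    return min(REFERENCE_PROFILES,
               key=lambda crop: np.linalg.norm(profile - REFERENCE_PROFILES[crop]))

# A pixel whose greenness peaks in mid-season matches the wheat calendar:
print(classify_pixel([0.25, 0.45, 0.75, 0.55, 0.25]))  # wheat
```

The point of the example is the one made in the text: two crops that are indistinguishable on any single date can still be separated by the shape of their profiles across the season.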
Color-infrared film is sensitive to the green, red, and near-infrared (500 to
900 nm) portions of the electromagnetic spectrum and is widely used in
aerial and space photographic surveys for land use and vegetation analysis.
Living vegetation reflects light in the green portion of the visible spectrum
to which the human eye is sensitive. Additionally, it reflects up to 10 times
as much in the near-infrared (700 to 1100 nm) portion of the spectrum, which
is just beyond the range of human vision. When photosynthesis decreases,
either as a result of normal maturation or stress, a corresponding decrease
in near-infrared reflectance occurs. Living or healthy vegetation appears as
various hues of red in color-infrared film. If diseased or stressed, the color
response shifts to browns or yellows due to the decrease in near-infrared
reflectance. Color-infrared film is also effective for haze penetration because
blue light is eliminated by filtration.
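The red/near-infrared contrast described here is the basis of simple vegetation indices, the best known being the normalized difference vegetation index (NDVI). A minimal sketch, with reflectance values invented for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red).
    Healthy vegetation, with its strong near-infrared reflectance, scores
    high; stressed vegetation, whose near-infrared reflectance has
    dropped, scores lower."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-12)  # tiny term guards against 0/0

print(ndvi(0.05, 0.50))  # healthy canopy: NDVI near 0.82
print(ndvi(0.08, 0.20))  # stressed canopy: NDVI near 0.43
```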
Crops are best identified from computer-processed digital data that repre-
sent quantitative measures of radiance. In general, all leafy vegetation has a
similar reflectance spectrum regardless of plant or crop species. The differences
between crops, by which they are separated and identified, depend on the
degree of maturity and percentage of canopy cover, although differences in
soil type and soil moisture may serve to confuse the differentiation. However,
if certain crops are not separable at one particular time of the year, they may
be separable more readily at a different stage of the season due to differences
in planting, maturing, and harvesting dates. The degree of maturity and the
yield for a given crop may also influence the reflectance at any stage of growth.
This maturity and yield can be assessed as the history of the crop is
traced in successive images acquired through the growing season.
10.4.1 Agriculture
The 3-year Large Area Crop Inventory Experiment (LACIE), using Landsat
MSS imagery, first demonstrated that global satellite monitoring of food
and fiber production was possible. Whereas LACIE's mission was to prove the
feasibility of Landsat for yield assessment of one crop, wheat, this activity has
now extended to the monitoring of multiple crops on a global scale. The LACIE
experiment highlighted the potential impact that a credible crop yield assess-
ment could have on world food marketing, administration policy, transporta-
tion, and other related factors. In 1977, during Phase 3 of LACIE, it was decided
to test the accuracy of Soviet wheat crop yield data by using Landsat-3 to
assess the total production from early season to harvesting in the then Union
of Soviet Socialist Republics (USSR). In January 1977, the USSR officially
announced that it expected a total grain crop of 213.3 million metric tons. This
was about 13% higher than the country’s 1971 to 1976 average. Because Soviet
wheat historically accounted for 48% of its total grain production, the antici-
pated wheat yield would have been about 102 million metric tons for the year.
LACIE computations, made after the Soviet harvests, but prior to the USSR
release of figures, estimated Russian wheat production at 91.4 million metric
tons. In late January 1978, the USSR announced that its 1977 wheat production
had been 92 million metric tons. The U.S. Department of Agriculture (USDA)
final estimate was 90 million metric tons. Previous USDA assessments of Soviet
wheat yield had had an accuracy of 65/90, meaning that the USDA’s conven-
tionally collected data could have an accuracy of ±10% only 65% of the time.
The LACIE program was designed to provide a crop-yield assessment accu-
racy of 90/90, or within ±10% in 90% of the years the system was used.
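The 90/90 figure can be read as a simple acceptance test: in at least 90% of years, the estimate must lie within ±10% of the true production. A sketch of that check (the function and parameter names are ours, not LACIE's):

```python
def meets_criterion(estimates, actuals, tolerance=0.10, required_fraction=0.90):
    """True if the relative error is within +/-tolerance in at least
    required_fraction of the years compared."""
    within = [abs(e - a) / a <= tolerance for e, a in zip(estimates, actuals)]
    return sum(within) / len(within) >= required_fraction

# LACIE's 1977 Soviet wheat estimate vs. the later announced figure
# (millions of metric tons):
relative_error = abs(91.4 - 92.0) / 92.0
print(round(relative_error, 4))          # 0.0065 -- well inside the +/-10% band
print(meets_criterion([91.4], [92.0]))   # True
```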
Earth observation satellites are now routinely used for a broad range of
agricultural applications. In developed countries, where producers are often
sophisticated users of agricultural and meteorological information, satellite
data are widely used in many “agri-business” applications. By providing
frequent, site-specific insights into crop conditions throughout the growing
season, derived satellite data products help growers and other agriculture
professionals manage crop production risks efficiently, increasing crop yields
while minimizing environmental impacts.
The USDA’s Foreign Agricultural Service now provides near-real-time
agrometeorology data to the public through its Production Estimates and
Crop Assessment Division. One of the most prominent services has been the
development of a Web-based analytical tool called Crop Explorer that pro-
vides timely and accurate crop condition information on a global scale. The
Crop Explorer website (http://www.pecad.fas.usda.gov/cropexplorer/) features
near-real-time global crop condition information based on satellite imagery
and weather data. Thematic maps of major crop growing regions depict
vegetative vigor, precipitation, temperature, and soil moisture. Time-series
charts show growing season data for specific agrometeorological zones.
Regional crop calendars and crop area maps are also available for selected
regions of major agricultural significance. Every 10 days, more than 2,000
maps and 33,000 charts are updated on the Crop Explorer website, including
maps and charts for temperature, precipitation, crop modelling, soil mois-
ture, snow cover, and vegetation indices. Indicators are further defined by
crop type, crop region, and growing season.
In Europe, the Monitoring Agriculture through Remote Sensing Tech-
niques (MARS) project is a long-term endeavor to monitor weather and crop
conditions during the current growing season and to estimate final crop
yields for Europe by harvest time. The MARS project has developed, tested,
and implemented methods and tools specific to agriculture using remote
sensing to support four main activities:
10.4.2 Forestry
In forestry, multispectral satellite data have proven effective in recognizing
and locating the broadest classes of forest land and timber and in sepa-
rating deciduous, evergreen, and mixed (deciduous-evergreen) commu-
nities. Further possibilities include measurement of the total acreage given
to forests, and changes in these amounts, such as the monitoring of the
deforestation and habitat fragmentation of the tropical rainforest in the
Amazon. The images in Figure 10.23 show the progressive deforestation
of a portion of the state of Rondônia, Brazil. Systematic cutting of the
forest vegetation started along roads and then fanned out to create the
“feather” or “fishbone” pattern shown in the eastern half of the 1986 (b)
and 1992 (c) images.
Approximately 30% (3,562,800 km2) of the world’s tropical forests are in
Brazil. The estimated average deforestation rate from 1978 to 1988 was
15,000 km2 per year. In 2005, the federal government of Brazil indicated that
26,130 km2 of forest were lost in the year up to August 1, 2004. This figure
was produced by the National Institute for Space Research (INPE) in Brazil
on the basis of 103 satellite images covering 93% of the so-called “Defores-
tation Arc,” the area in which most of the trees are being cut down. INPE
has developed a near-real-time monitoring application for deforestation
detection known as the Real Time Deforestation Monitoring System.
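At its simplest, deforestation monitoring of the kind described above reduces to differencing forest/non-forest masks classified from two dates. A sketch under stated assumptions (the 30 m pixel size is assumed for illustration, typical of Landsat TM; real monitoring must also handle cloud, misregistration, and classification error):

```python
import numpy as np

def deforested_area_km2(forest_t1, forest_t2, pixel_size_m=30.0):
    """Area that was forest at time 1 but not at time 2, in km^2.
    pixel_size_m is the ground sampling distance of the classified
    images (an assumption here)."""
    lost = np.logical_and(forest_t1, np.logical_not(forest_t2))
    return float(lost.sum()) * pixel_size_m ** 2 / 1e6

forest_1986 = np.array([[1, 1], [1, 1]], dtype=bool)  # fully forested scene
forest_1992 = np.array([[1, 0], [0, 0]], dtype=bool)  # three pixels cleared
print(deforested_area_km2(forest_1986, forest_1992))  # 3 pixels * 900 m^2 = 0.0027 km^2
```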
Figure 10.24 shows an overview of the Hayman forest fire burning in the
Pike National Forest 35 miles south of Denver, CO. The images were collected
on June 12, 2002, and June 20, 2002, by Space Imaging’s IKONOS satellite.
Each photo is a composite of several IKONOS images that have been reduced
in resolution and combined to better visualize the extent of the fire’s foot-
print. In these enhanced color images, the burned area is purple and the
healthy vegetation is green.
According to the U.S. Forest Service, when the June 12 image was taken,
the fire had consumed 86,000 acres and had become Colorado’s worst fire
ever. The burned area on this image measures approximately 32 km × 17 km
(20 miles × 10.5 miles). This type of imagery is used to assess and measure
damage to forest and other types of land cover, for fire modelling, disaster
[Figure 10.23, panels (a)–(c): images of Rondônia, Brazil (scale bars 6 mi); Figure 10.24: the Hayman Fire.]
of products, from text messages with just the coordinates of active fires to
e-mails with a JPEG attachment showing an image of the area with the active
fire. These e-mails may also contain attribute data, such as the geographic
coordinates for the pixel flagged, the time and date of data acquisition, and
a confidence value.
10.4.3 Spatial Information Systems: Land Use and Land Cover Mapping
In recent years, satellite data have been incorporated into sophisticated infor-
mation systems of a geographical nature, allowing the synthesis of remotely
sensed data with existing information. The growing complexity of society
has increased the demand for timely and accurate information on the spatial
distribution of land and environmental resources, social and economic indi-
cators, land ownership and value, and their various interactions. Land and
geographic information systems attempt to model, in time and space, these
diverse relationships so that, at any location, data on the physical, social,
and administrative environment can be accessed, interrogated, and com-
bined to give valuable information to planners, administrators, resource
scientists, and researchers. Land information systems are traditionally parcel
based and concerned with information on land ownership, tenure, valuation,
and land use and tend to have an administrative bias. Their establishment
depends on a thorough knowledge of the cadastral system and of the hori-
zontal and vertical linkages that occur between and within government
departments that collect, store, and utilize land data. Geographic information
systems have developed from the resource-related needs of society and are
primarily concerned with inventory based on thematic information, partic-
ularly in the context of resource and asset management (see Burrough, 1986;
Rhind and Mounsey, 1990). The term “geospatial information system” is
often used to describe systems that encompass and link both land and
geographic information systems, but the distinctions in terminology are
seldom observed in common use.
By their very nature, geographic information systems rely on data from
many sources, such as field surveys, censuses, records from land title-deeds,
and remote sensing. The volume of data required has inevitably linked the
development of these systems to the development of information technology
with its growing capacity to store, manipulate, and display large amounts of
spatial data in both textual and graphical form. Fundamental to these systems
is an accurate knowledge of the location and reliability of the data; accordingly,
remote sensing is able to provide a significant input to these systems, in terms
of both the initial collection and subsequent updating of the data.
“Land use” refers to the current use of the land surface, whereas “land
cover” refers to the state or cover of the land only. Remotely sensed data of
the Earth’s surface generally provide information about land cover, with
interpretation or additional information being needed to ascertain land use.
Land use planning is concerned with achieving the optimum benefits in the
development and management of land, such as food production, housing,
[Figure, panels (a) and (b): sea surface temperature (°C) map of the western North Atlantic, approximately 32°N–51°N, 76°W–54°W; temperature scale −1 to 23°C.]
Because of the relatively strong currents associated with the main core and
eddies, commercial shipping firms and sailors either exploit or avoid these
currents and so realize savings in fuel and transit time.
A temperature map obtained from SMMR data averaged over 3 days to
provide the sea ice and ocean-surface temperature, spectral gradient ratio,
and brightness temperature over the polar region at 150 km resolution has
already been given in Figure 2.15. Information on the sea-ice concentration,
spectral gradient, sea-surface wind speed, liquid water over oceans, percent
polarization over terrain, and sea-ice multiyear fractions may also be
obtained from the SMMR.
[Figure: chlorophyll concentration (mg/m3) map of the western North Atlantic, approximately 32°N–51°N, 76°W–54°W.]
TABLE 10.2
Oil Slick Detectability by SAR at Different Wind Speeds (ENVISYS)
Wind Speed (m s−1)  Oil Slick Detectability
0     No backscatter from the undisturbed sea surface, hence no signature of
      oil slicks.
0-3   Excellent: the slick stands out against the slightly roughened sea
      surface, and the wind has no impact on the oil slick itself, but there
      is a high probability of false positives due to local wind variations.
3-7   Fewer false positives attributable to local low-wind areas; the oil
      slick is still visible in the data and the background is more
      homogeneous.
7-10  Only thick oil visible. The maximum wind strength for slick detection
      varies with oil type and slick age; thick oil may remain visible at
      winds stronger than 10 m s−1.
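The wind-speed dependence in Table 10.2 amounts to a simple decision rule; a sketch of it as a lookup function (the wording of the returned labels is ours, and the boundaries follow the table's ranges):

```python
def slick_detectability(wind_speed_ms):
    """Qualitative oil-slick detectability in SAR imagery as a function
    of wind speed in m/s, following the ranges of Table 10.2."""
    if wind_speed_ms < 0:
        raise ValueError("wind speed must be non-negative")
    if wind_speed_ms == 0:
        return "no signature: no backscatter from the undisturbed sea surface"
    if wind_speed_ms <= 3:
        return "slick visible, but many false positives from local wind variations"
    if wind_speed_ms <= 7:
        return "slick visible against a more homogeneous background"
    if wind_speed_ms <= 10:
        return "only thick oil visible"
    return "detection unlikely; only thick oil may still be visible"

print(slick_detectability(5))  # moderate winds are the favorable regime
```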
FIGURE 10.28
Oil spill in the eastern Atlantic off northwest Spain in November 2002 from the tanker Prestige. (ESA.)
be viewed in light of the lifetime of the oil slicks. A small slick might disperse
in hours, whereas a larger slick might have a lifetime of several days.
Figure 10.28 shows an Envisat ASAR image as an example of the environ-
mental utility of satellites for detecting and monitoring oil spills. The image
shows an oil spill in the eastern Atlantic Ocean off northwest Spain that
occurred in November 2002 when the tanker Prestige sank with most of its
cargo of 25 million barrels of oil. The tanker was positioned at the head of
the oil slick in the southwest portion of the image. The Prestige started leaking
fuel on November 14, when she encountered a violent storm about 150 miles
off Spain’s Atlantic coast. For several days, the leaking tanker was pulled
away from the shore, but it split in half on November 19. About 1.5 million
barrels of oil escaped, some reaching coastal beaches in the east portion of
the image. The image also shows that when under tow the crippled tanker
was actually carried southward spreading the oil spill into a long “fuel front”
to the west of the coast, exposing almost the entire Atlantic coastline of
Galicia. The towing operation was considered by some to be a mistake
because it did not take into account that the winds in autumn normally blow
from the west, and forecasts indicated westerly (eastward-flowing) winds
over the area for the period.
FIGURE 10.29
SLAR image of an icebreaker and drilling ship. (Canada Centre for Remote Sensing.)
FIGURE 10.30
SAR-derived iceberg analysis of the Southern Ocean for February 23, 2006, in support of the
2005–2006 round-the-world Volvo Ocean Race. (C-CORE.)
FIGURE 10.31
SAR image used to produce Figure 10.30, showing icebergs in the Southern Ocean. (C-CORE.)
Icebergs typically have a stronger radar signal return than the open ocean.
After initial processing to remove “cluttering” effects from ocean waves, the
shape, number of pixels, and intensity of the signal returns were analyzed
to identify icebergs and to differentiate between icebergs and ships, which
can appear similar (see Figure 10.31).
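The processing chain just described — threshold out the bright returns, group them into candidate targets, then examine each target's size and intensity — can be sketched as follows. The data are synthetic and the thresholding is deliberately crude; operational discrimination of icebergs from ships uses far richer shape and intensity statistics.

```python
import numpy as np

def detect_targets(backscatter, threshold):
    """Label 4-connected groups of pixels brighter than the open-ocean
    background; return (pixel count, mean intensity) per candidate.
    Statistics like these are what allow icebergs to be separated from
    ship returns, which can appear similar."""
    bright = backscatter > threshold
    visited = np.zeros_like(bright, dtype=bool)
    rows, cols = bright.shape
    targets = []
    for r in range(rows):
        for c in range(cols):
            if bright[r, c] and not visited[r, c]:
                stack, pixels = [(r, c)], []       # flood-fill one component
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and bright[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                intensities = [backscatter[p] for p in pixels]
                targets.append((len(pixels), float(np.mean(intensities))))
    return targets

sea = np.full((6, 6), 0.1)   # quiet ocean background
sea[1:3, 1:3] = 0.9          # a compact bright target
sea[4, 0:4] = 0.8            # an elongated bright target
print(detect_targets(sea, 0.5))  # two 4-pixel targets, means ~0.9 and ~0.8
```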
Figure 10.32 shows the mean monthly surface emissivity for January 1979
measured at 50.3 GHz as derived from the analysis of HIRS-2/MSU data for
the whole globe. Sea ice extent and snow cover can be determined from this
field. The emissivity of snow-free land is typically 0.9 to 1.0, whereas the
emissivity of a water surface ranges from 0.5 to 0.65, increasing with decreas-
ing surface temperature. Mixed ocean-land areas have intermediate values.
The continents are clearly indicated as well as a number of islands, seas, and
lakes. Snow-covered land has an emissivity of 0.85 or less, with emissivity
decreasing with increasing snow depth. The snow line, clearly visible in
North America and Asia, gives good agreement with that determined from
visible imagery. Newly frozen sea ice has an emissivity of 0.9 or more. Note
for example Hudson Bay, the Sea of Okhotsk, the center of Baffin Bay, and
the Chukchi, Laptev, and East Siberian Seas. Mixed sea ice and open water
has emissivities between 0.69 and 0.90. The onset of significant amounts of
sea ice is indicated by the 0.70 contour. Comparisons of this in Baffin Bay,
the Denmark Strait, and the Greenland Sea show excellent agreement with
the 40% sea ice extent determined from the analysis of SMMR data from the
same period. Multiyear ice, such as found in the Arctic Ocean north of the
Beaufort Sea, is indicated by emissivities less than 0.80.
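The emissivity thresholds quoted above can be collected into a rough classifier. Note that the ranges overlap (snow-free land and newly frozen sea ice both reach 0.9 or more), so a land/ocean mask is needed to disambiguate; this is illustrative only, and real products use additional channels and ancillary data.

```python
def classify_emissivity(e, over_ocean):
    """Rough surface class from 50.3 GHz emissivity, using the
    thresholds quoted in the text."""
    if over_ocean:
        if e >= 0.90:
            return "new sea ice"
        if e >= 0.70:   # the text's 0.70 contour for significant sea ice
            return "mixed sea ice / open water"
        return "open water"
    if e <= 0.85:
        return "snow-covered land"
    return "snow-free land"

print(classify_emissivity(0.60, over_ocean=True))   # open water
print(classify_emissivity(0.95, over_ocean=True))   # new sea ice
print(classify_emissivity(0.80, over_ocean=False))  # snow-covered land
```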
FIGURE 10.33
MODIS image of January 13, 2005, showing sea ice in McMurdo Sound breaking
into pieces and the giant B-15A iceberg, 129 km (80 miles) in length. (NASA/
GSFC/MODIS Rapid Response Team.)
and large-scale homogeneous ice surfaces, but SIRAL’s design was intended
to provide detailed views of irregular sloping edges of land ice as well as
nonhomogenous ocean ice. CryoSat would have monitored precise changes
in the thickness of the polar ice sheets and floating sea ice and should have
provided conclusive evidence of rates at which ice cover may be diminishing.
A CryoSat-2 replacement mission is expected to be launched in March 2009.
10.7 Postscript
Since the launch of Landsat-1 in 1972, a continuous and growing stream of
satellite-derived Earth resources data have become available. It is certain
that tremendous amounts of additional remote sensing data will become
available, but the extent to which the data will actually be analyzed and
interpreted for solving “real-world” problems is somewhat less certain.
There is a shortage of investigations that interpret and utilize the information
to advantage, because investment in the systems that produce the data
continues not to be matched by a similar investment in the use made of it.
Although remotely sensed data have been used extensively in research
References
Bartholomé, E., and Belward, A.S. “GLC2000: A New Approach to Global Land Cover
Mapping from Earth Observation Data,” International Journal of Remote Sensing,
26:1959, 2005.
Barton, I.J. “Satellite-Derived Sea Surface Temperatures: Current Status,” Journal of
Geophysical Research, 100:8777, 1995.
Barton, I.G., and Cechet, R.P. “Comparison and Optimization of AVHRR Sea Surface
Temperature Algorithms,” Journal of Atmospheric and Oceanic Technology, 6:1083,
1989.
Baylis, P.E. “Guide to the Design and Specification of a Primary User Receiving
Station for Meteorological and Oceanographic Satellite Data,” in Remote Sensing
in Meteorology, Oceanography, and Hydrology. Edited by Cracknell, A.P. Chichester,
U.K.: Ellis Horwood, 1981.
Baylis, P.E. “University of Dundee Satellite Data Reception and Archiving Facility,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: D. Reidel, 1983.
Bernstein, R.L. “Sea Surface Temperature Estimation Using the NOAA-6 Satellite
Advanced Very High Resolution Radiometer,” Journal of Geophysical Research,
87C:9455, 1982.
Bowers, D.G., Crook, P.J.E. and Simpson, J.H., “An Evaluation of Sea Surface
Temperature Estimates from the AVHRR,” in Remote Sensing and the Atmosphere:
Proceedings of the Annual Technical Conference of the Remote Sensing Society,
Liverpool, December 1982. Reading: Remote Sensing Society, 1982.
Bristow, M., and Nielsen, D. Remote Monitoring of Organic Carbon in Surface Waters.
Report No. EPA-600/4-81-001, Las Vegas, NV: Environmental Monitoring Sys-
tems Laboratory, U.S. Environmental Protection Agency, 1981.
Bristow, M., Nielsen, D., Bundy, D. and Furtek, R., “Use of Water Raman Emission
to Correct Airborne Laser Fluorosensor Data for Effects of Water Optical
Attenuation,” Applied Optics, 20:2889, 1981.
Brown, C.E., Fingas, M.F., and Mullin, J.V., “Laser-Based Sensors for Oil Spill Remote
Sensing,” in Advances in Laser Remote Sensing for Terrestrial and Oceanographic
Applications. Edited by Narayanan R.M., and Kalshoven, J.E. Proceedings of SPIE,
3059:120, 1997.
Bullard, R.K. “Land into Sea Does Not Go,” in Remote Sensing Applications in
Marine Science and Technology. Edited by Cracknell, A.P. Dordrecht: D. Reidel,
1983a.
Bullard, R.K. “Detection of Marine Contours from Landsat Film and Tape,” in Remote
Sensing Applications in Marine Science and Technology. Edited by Cracknell, A.P.
Dordrecht: D. Reidel, 1983b.
Bullard, R.K., and Dixon-Gough, R.W. Britain from Space: An Atlas of Landsat Images.
London: Taylor & Francis, 1985.
Bunkin, A.F., and Voliak, K.I. Laser Remote Sensing of the Ocean. New York: Wiley, 2001.
Burrough, P.A. Principles of Geographical Information Systems for Land Resources Assess-
ment. Oxford: Oxford University Press, 1986.
Callison, R.D., and Cracknell, A.P. “Atmospheric Correction to AVHRR Brightness
Temperatures for Waters around Great Britain,” International Journal of Remote
Sensing, 5:185, 1984.
Chahine, M., “Measuring Atmospheric Water and Energy Profiles from Space,” 5th
International Scientific Conference on the Global Energy and Water Cycle,
GEWEX, 20-24 June, 2005.
Chappelle, E.W., Wood, F.M., McMurtrey, J.E. and Newcombe, W.W., “Laser-Induced
Fluorescence of Green Plants. 1: A Technique for Remote Detection of Plant
Stress and Species Differentiation,” Applied Optics, 23:134, 1984.
Chedin, A., Scott, N. A. and Berroir, A., “A Single-Channel Double-Viewing Angle
Method for Sea Surface Temperature Determination from Coincident Meteosat
and TIROS-N Radiometric Measurements,” Journal of Applied Meteorology,
21:613, 1982.
Chekalyuk, A.M., Demidov, A.A., Fadeev, V.V., and Gorbunov, M.Yu., “Lidar
Monitoring of Phytoplankton and Dissolved Organic Matter in the Inner Seas
of Europe,” Advances in Remote Sensing, 3:131, 1995.
Chekalyuk, A.M., Hoge, F.E., Wright, C.W. and Swift, R.N., “Short-Pulse Pump-and-
Probe Technique for Airborne Laser Assessment of Photosystem II Photochem-
ical Characteristics,” Photosynthesis Research, 66:33, 2000.
Clark, D.K., Gordon, H.R., Voss, K.J., Ge, Y., Broenkow, W. and Trees, C.,
“Validation of Atmospheric Correction over the Oceans,” Journal of Geophysical
Research, 102:17209, 1997.
Colton, M.C., and Poe, G.A. “Intersensor Calibration of DMSP SSM/I’s: F-8 to
F-14, 1987–1997,” IEEE Transactions on Geoscience and Remote Sensing, 37/1:418,
1999.
Colwell, R.N. Manual of Remote Sensing. Falls Church, VA: American Society of Pho-
togrammetry, 1983.
Cook, A.F. “Investigating Abandoned Limestone Mines in the West Midlands of
England with Scanning Sonar,” International Journal of Remote Sensing, 6:611,
1985.
Cracknell, A.P. Ultrasonics. London: Taylor & Francis, 1980.
Cracknell, A.P. The Advanced Very High Resolution Radiometer. London: Taylor and
Francis, 1997.
Cracknell, A.P., Remote Sensing and Climate Change: The Role of Earth Observation.
Berlin: Springer-Praxis, 2001.
Cracknell, A.P., MacFarlane, N., McMillan, K., Charlton, J.A., McManus, J. and
Ulbricht, K.A., “Remote Sensing in Scotland Using Data Received from
Satellites. A Study of the Tay Estuary Region Using Landsat Multispectral
Scanning Imagery,” International Journal of Remote Sensing, 3:113, 1982.
Cracknell, A.P., and Singh, S.M. “The Determination of Chlorophyll-a and Suspended
Sediment Concentrations for EURASEP Test Site, during North Sea Ocean
Colour Scanner Experiment, from an Analysis of a Landsat Scene of 27th June
1977.” Proceedings of the 14th Congress of the International Society of Photogram-
metry, Hamburg, International Archives of Photogrammetry, 23(B7):225, 1980.
Crombie, D.D. “Doppler Spectrum of Sea Echo at 13.56 Mc/s,” Nature, 175:681, 1955.
Curlander, J., and McDonough, R. Synthetic Aperture Radar: Systems and Signal Pro-
cessing. New York: Wiley, 1991.
Cutrona, L.J., Leith, E. N., Porcello, L. J. and Vivian, W. E., “On the Application of
Coherent Optical Processing Techniques,” Proceedings of the IEEE, 54:1026, 1966.
Emery, W.J., Yu, Y., Wick, G.A., Schlüssel, P. and Reynolds, R.W., “Correcting
Infrared Satellite Estimates of Sea Surface Temperature for Atmospheric Water
Vapour Contamination,” Journal of Geophysical Research, 99:5219, 1994.
Eplee, R.E., et al. “Calibration of SeaWiFS. II. Vicarious Techniques,” Applied Optics,
40:6701, 2001.
Evans, R.H., and Gordon, H.R. “Coastal Zone Color Scanner System Calibration: A
Retrospective Examination,” Journal of Geophysical Research, 99:7293, 1994.
Gordon, H.R., and Morel, A. Remote Assessment of Ocean Color for Interpretation of
Satellite Visible Imagery: A Review. New York: Springer, 1983.
Gordon, H.R., and Wang, M. “Retrieval of Water-Leaving Radiance and Aerosol
Optical Thickness over the Oceans with SeaWiFS: A Preliminary Algorithm,”
Applied Optics, 33:443, 1994.
Govindjee. “Sixty-Three Years since Kautsky: Chlorophyll-a Fluorescence,” Australian
Journal of Plant Physiology, 22:131, 1995.
Graham, L.C. “Synthetic Interferometer Radar for Topographic Mapping,” Proceed-
ings of the IEEE, 62:763, 1974.
Guymer, T.H. “Remote Sensing of Sea-Surface Winds,” in Remote Sensing Applications
in Meteorology and Climatology. Edited by Vaughan, R.A. Dordrecht: D. Reidel,
1987.
Guymer, T.H., Businger, J. A., Jones, W. L. and Stewart, R. H., “Anomalous Wind
Estimates from the Seasat Scatterometer,” Nature, 294:735, 1981.
Hansen, M.C., and Reed, B. “A Comparison of the IGBP DISCover and University
of Maryland 1-km Global Land Cover Products,” International Journal of Remote
Sensing, 21:1365, 2000.
Henderson, F.M. and Lewis, A.J., Manual of Remote Sensing, volume 2, Principles and
Applications of Imaging Radar, New York: Wiley, 1998.
Hoge, F.E. “Oceanic and Terrestrial Lidar Measurements,” in Laser Remote Chemical
Analysis. Edited by Measures, R.M. New York: Wiley, 1988.
Hoge, F.E., and Swift, R.N. “Oil Film Thickness Measurement Using Airborne Laser-
Induced Water Raman Backscatter,” Applied Optics, 19:3269, 1980.
Hoge, F.E., and Swift, R.N. “Absolute Tracer Dye Concentration Using Airborne
Laser-Induced Water Raman Backscatter,” Applied Optics, 20:1191, 1981.
Hoge, F.E., et al. “Water Depth Measurement Using an Airborne Pulsed Neon Laser
System,” Applied Optics, 19:871, 1980.
Hoge, F.E., et al. “Active-Passive Airborne Ocean Color Measurement. 2: Applica-
tions,” Applied Optics, 25:48, 1986.
Hoge, F.E., et al. “Radiance-Ratio Algorithm Wavelengths for Remote Oceanic Chlo-
rophyll Determination,” Applied Optics, 26:2082, 1987.
Hoge, F.E., and Swift, R.N. “Oil Film Thickness Using Airborne Laser-Induced Oil
Fluorescence Backscatter,” Applied Optics, 22:3316, 1983.
Hollinger, J.P., Peirce, J.L. and Poe, G.A, “SSM/I Instrument Evaluation,” IEEE
Transactions on Geoscience and Remote Sensing, 28:781, 1990.
Holyer, R.J. “A Two-Satellite Method for Measurement of Sea Surface Temperature,”
International Journal of Remote Sensing, 5:115, 1984.
Hooker, S.B., and McClain, C.R. “The Calibration and Validation of SeaWiFS Data,”
Progress in Oceanography, 45:427, 2000.
Hooker, S.B., Esaias, W.E., Feldman, G.C., Gregg, W.W. and McClain, C.R., An
Overview of SeaWiFS and Ocean Color, National Aeronautics and Space Administration
(NASA) Tech. Memo. 104566, Vol. 1. Edited by Hooker, S.B. and Firestone, E.R.
Greenbelt, MD: NASA Goddard Space Flight Center, 1992.
Hotelling, H. “Analysis of a Complex of Statistical Variables into Principal Compo-
nents,” Journal of Educational Psychology, 24:417, 1933.
Hutchison, K.D., and Cracknell, A.P., Visible Infrared Imager Radiometer Suite, A New
Operational Cloud Imager, Boca Raton: CRC - Taylor and Francis, 2006.
International Atomic Energy Agency. Airborne Gamma Ray Spectrometer Surveying.
Vienna, International Atomic Energy Agency, 1991.
Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective. Upper
Saddle River, NJ: Prentice Hall, 1996.
Jones, W.L., et al. “Seasat Scatterometer: Results of the Gulf of Alaska Workshop,”
Science, 204:1413, 1979.
Jones, W.L., et al. “Evaluation of the Seasat Wind Scatterometer,” Nature, 294:704, 1981.
Kidwell, K.B. NOAA Polar Orbiter Data User’s Guide (TIROS-N, NOAA-6, NOAA-7,
NOAA-8, NOAA-9, NOAA-10, NOAA-11, NOAA-12, NOAA-13, and NOAA-14),
November 1998 revision. MD: U.S. Department of Commerce, 1998.
Kilpatrick, K.A., et al. “Overview of the NASA/NOAA Advanced Very High Reso-
lution Radiometer Pathfinder Algorithm for Sea Surface Temperature and
Associated Matchup Database,” Journal of Geophysical Research, 106:9179, 2001.
Kim, H.H. “New Algae Mapping Technique by the Use of Airborne Laser Fluorosen-
sor,” Applied Optics, 12:1454, 1973.
Kolawole, M.O. Radar Systems, Peak Detection, and Tracking. Oxford: Newnes, 2002.
Kondratyev, K.Y., and Cracknell, A.P., Observing Global Climate Change, London:
Taylor and Francis, 1998.
Krabill, W.B., Collins, J.G., Link, L.E., Swift, R.N., and Butler, M.L., “Airborne Laser
Topographic Mapping Results,” Photogrammetric Engineering and Remote Sens-
ing, 50:685, 1984.
Krabill, W.B., Thomas, R.H., Martin, L.F., Swift, R.N., and Frederick, E.B., “Accuracy
of Airborne Laser Altimetry over the Greenland Ice Sheet,” International Journal
of Remote Sensing, 16:1211, 1994.
Kramer, D.M., and Crofts, A.R. “Control and Measurement of Photosynthetic Electron
Transport in vivo,” in Photosynthesis and the Environment. Edited by Baker, N.R.
Dordrecht, Kluwer, 1996.
Kramer, H.J. Observation of the Earth and its Environment. Berlin: Springer, 2002. An
updated and even more comprehensive version of this book is available on
the website: http://directory.eoportal.org/pres_ObservationoftheEarthandits
Environment.html.
Labs, D., and Neckel, H. “The Absolute Radiation Intensity of the Centre of the Sun
disc in the Spectral Range 3288-12480 Å,” Zeitschrift für Astrophysik, 65:133, 1967.
Labs, D., and Neckel, H. “The Radiation of the Solar Photosphere,” Zeitschrift für
Astrophysik, 69:1, 1968.
Labs, D., and Neckel, H. “Transformation of the Absolute Solar Radiation Data into
the International Practical Temperature Scale of 1968,” Solar Physics, 15:79, 1970.
Lang, M., Lichtenthaler, H.K., Sowinska, M., Heisel, F. and Miehé, J.A., “Fluorescence
Imaging of Water and Temperature Stress in Plant Leaves,” Journal of Plant
Physiology, 148:613, 1996.
Lang, M., Stober, F. and Lichtenthaler, H.K., “Fluorescence Emission Spectra of Plant
Leaves and Plant Constituents,” Radiation and Environmental Biophysics, 30:333,
1991.
Lauritson, L., Nelson, G. J. and Porto, F. W., Data Extraction and Calibration of TIROS-
N/NOAA Radiometers, NOAA Technical Memorandum NESS 107. Washington,
DC: U.S. Department of Commerce, 1979.
Lewis, J.K., Shulman, I. and Blumberg, A.F., “Assimilation of Doppler Radar Current
Data into Numerical Ocean Models,” Continental Shelf Research, 18:541, 1998.
Maul, G.A., and Sidran, M. “Comment on Anding and Kauth,” Remote Sensing of
Environment, 2:165, 1972.
Muirhead, K., and Cracknell, A.P. “Review Article: Airborne Lidar Bathymetry,”
International Journal of Remote Sensing, 7:597, 1986.
Narayanan, R.M., and Kalshoven, J.E., eds. Proceedings of SPIE: Advances in Laser
Remote Sensing for Terrestrial and Oceanographic Applications, Orlando, FL, April
21–22, 1997, 3059. Bellingham, WA: SPIE-International Society for Optical
Engineering, 1997.
National Aeronautics and Space Administration (NASA). Landsat Data User’s Hand-
book. Document No. 76SDS4258. Greenbelt, MD: NASA, 1976.
Needham, B.H. “NOAA’s Activities in the Field of Marine Remote Sensing,” in Remote
Sensing Applications in Marine Science and Technology. Edited by Cracknell, A.P.
Dordrecht: D. Reidel, 1983.
Njoku, E.G., and Swanson, L. “Global Measurements of Sea Surface Temperature,
Wind Speed, and Atmospheric Water Content from Satellite Microwave Radi-
ometry,” Monthly Weather Review, 111:1977, 1983.
Offiler, D. “Surface Wind Vector Measurements from Satellites,” in Remote Sensing
Applications in Marine Science and Technology. Edited by Cracknell, A.P. Dordrecht:
D. Reidel, 1983.
O’Neil, R.A., Buga-Bijunas, L. and Rayner, D. M., ”Field Performance of a Laser
Fluorosensor for the Detection of Oil Spills,” Applied Optics, 19:863, 1980.
O’Neil, R.A., Hoge, F. E. and Bristow, M. P. F., “The Current Status of Airborne Laser
Fluorosensing,” Proceedings of 15th International Symposium on Remote Sensing
of Environment, Ann Arbor, MI, May 1981.
O’Reilly, J.E., et al. “Ocean Colour Chlorophyll Algorithms for SeaWiFS,” Journal of
Geophysical Research, 103:24937, 1998.
Prabhakara, G., Dalu, G. and Kunde, V. G., “Estimation of Sea Surface Temperature
from Remote Sensing in the 11- to 13-µm Window Region,” Journal of Geophys-
ical Research, 79:5039, 1974.
Rao, P.K., Holmes, S.J., Anderson, R.K., Winston, J.S. and Lehr, P.E., Weather Satellites:
Systems, Data and Environmental Applications. Boston: American Meteorological
Society, 1990.
Rao, P.K., Smith, W. L. and Koffler, R., “Global Sea Surface Temperature Distribution
Determined from an Environmental Satellite,” Monthly Weather Review, 100:10,
1972.
Rencz, A.N., and Ryerson, R.A. Manual of Remote Sensing, Volume 3, Remote Sensing
for the Earth Sciences. New York: Wiley, 1999.
Rhind, D.W., and Mounsey, H. Understanding Geographic Information Systems. London:
Taylor & Francis, 1991.
Rice, S.O. “Reflection of Electromagnetic Waves from Slightly Rough Surfaces,” in
Theory of Electromagnetic Waves. Edited by Kline, M. New York: Interscience,
1951.
Robinson, I.S. Measuring the Oceans from Space: The Principles and Methods of Satellite
Oceanography. Berlin: Springer-Praxis, 2004.
Rogers, A.E.E., and Ingalls, R.P. “Venus: Mapping the Surface Reflectivity by Radar
Interferometry,” Science, 165:797, 1969.
Ryerson, R.A., Manual of Remote Sensing: Remote Sensing of Human Settlements, Falls
Church: ASPRS, 2006.
Sabins, F.F. Remote Sensing: Principles and Interpretation. New York: John Wiley, 1986.
Sathyendranath, S., and Morel, A. “Light Emerging from the Sea: Interpretation and
Uses in Remote Sensing,” in Remote Sensing Applications in Marine Science and
Technology. Edited by Cracknell, A.P. Dordrecht: D. Reidel, 1983.
Saunders, R.W. “Methods for the Detection of Cloudy Pixels,” Remote Sensing and the
Atmosphere: Proceedings of the Annual Technical Conference of the Remote Sensing
Society, Liverpool, December 1982, Reading: Remote Sensing Society, 1982.
Saunders, R.W., and Kriebel, K.T. “An Improved Method for Detecting Clear Sky
Radiances from AVHRR Data,” International Journal of Remote Sensing, 9:123,
1988.
Schneider, S.R., McGinnis, D. F. and Gatlin, J. A., Use of NOAA/AVHRR Visible and
Near-Infrared Data for Land Remote Sensing. NOAA Technical Report NESS 84.
Washington, DC: U.S. Department of Commerce, 1981.
Schroeder, L.C., et al. “The Relationship Between Wind Vector and Normalised Radar
Cross Section Used to Derive Seasat-A Satellite Scatterometer Winds,” Journal
of Geophysical Research, 87:3318, 1982.
Schwalb, A. The TIROS-N/NOAA A-G Satellite Series (NOAA E-J) Advanced TIROS-N
(ATN). NOAA Technical Memorandum NESS 116. Washington, DC: United
States Department of Commerce, 1978.
Shearman, E.D.R. “Remote Sensing of Ocean Waves, Currents, and Surface Winds
by Dekametric Radar,” in Remote Sensing in Meteorology, Oceanography, and
Hydrology. Edited by Cracknell, A.P. Chichester, U.K.: Ellis Horwood, 1981.
Sheffield, C. Earthwatch: A Survey of the Earth from Space. London: Sidgwick and
Jackson, 1981.
Sheffield, C. Man on Earth. London: Sidgwick and Jackson, 1983.
Sidran, M. “Infrared Sensing of Sea Surface Temperature from Space,” Remote Sensing
of Environment, 10:101, 1980.
Singh, S.M., Cracknell, A. P. and Charlton, J. A., “Comparison between CZCS Data
from 10 July 1979 and Simultaneous in situ Measurements for Southeastern
Scottish Waters,” International Journal of Remote Sensing, 4:755, 1983.
Singh, S.M., Cracknell, A.P. and Spitzer, D., “Evaluation of Sensitivity Decay of
Coastal Zone Colour Scanner (CZCS) Detectors by Comparison with in situ
Near-Surface Radiance Measurements,” International Journal of Remote
Sensing, 6:749, 1985.
Singh, S.M., and Warren, D.E. “Sea Surface Temperatures from Infrared Measure-
ments,” in Remote Sensing Applications in Marine Science and Technology. Edited
by Cracknell, A.P. Dordrecht: D. Reidel, 1983.
Smart, P.L., and Laidlaw, I.M.S. “An Evaluation of Some Fluorescent Dyes for Water
Tracing,” Water Resources Research, 13:15, 1977.
Stephens, G.L., et al. “A Comparison of SSM/I and TOVS Column Water Vapor Data
over the Global Oceans,” Meteorology and Atmospheric Physics, 54:183, 1994.
Stoffelen, A., and Anderson, D. “Ambiguity Removal and Assimilation of Scatterometer
Data,” Quarterly Journal of the Royal Meteorological Society, 123:491, 1997.
Sturm, B. “The Atmospheric Correction of Remotely Sensed Data and the Quantita-
tive Determination of Suspended Matter in Marine Water Surface Layers,” in
Remote Sensing in Meteorology, Oceanography, and Hydrology. Edited by Cracknell,
A.P. Chichester, U.K.: Ellis Horwood, 1981.
Sturm, B. “Selected Topics of Coastal Zone Color Scanner (CZCS) Data Evaluation,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: Kluwer, 1983.
Sturm, B. “CZCS Data Processing Algorithms,” in Ocean Colour: Theory and Applica-
tions in a Decade of CZCS Experience. Edited by Barale, V., and Schlittenhardt,
P.M. Dordrecht: Kluwer, 1993.
Summers, R.J. Educator’s Guide for Building and Operating Environmental Satellite Receiving
Stations. NOAA Technical Report NESDIS 44. Washington, DC: United States
Department of Commerce, 1989.
Tapley, B.D., et al. “The Gravity Recovery and Climate Experiment: Mission Overview
and Early Results,” Geophysical Research Letters, 31:L09607, 2004.
Teillet, P.M., Slater, P.N., Ding, Y., Santer, R.P., Jackson, R.D. and Moran, M.S., “Three
Methods for the Absolute Calibration of the NOAA AVHRR Sensors in Flight,”
Remote Sensing of Environment, 31:105, 1990.
Thekaekara, M.P., Kruger, R. and Duncan, C. H., “Solar Irradiance Measurements
from a Research Aircraft,” Applied Optics, 8:1713, 1969.
Thomas, D.P. “Microwave Radiometry and Applications,” in Remote Sensing in
Meteorology, Oceanography, and Hydrology. Edited by Cracknell, A.P. Chichester,
U.K.: Ellis Horwood, 1981.
Tighe, M.L. “Topographic Mapping from Interferometric SAR Data is Becoming an
Accepted Mapping Technology,” in Conference Proceedings of Map Asia 2003,
Putra World Trade Centre, Kuala Lumpur, Malaysia, October 13–15, 2003.
www.gisdevelopment.net/proceedings/mapasia/2003/index.htm.
Townsend, W.F. “An Initial Assessment of the Performance Achieved by the Seasat-1
Radar Altimeter,” IEEE Journal of Oceanic Engineering, OE-5:80, 1980.
Turton, D., and Jonas, D. “Airborne Laser Scanning: Cost-Effective Spatial Data,” in
Conference Proceedings of Map Asia 2003, Putra World Trade Centre, Kuala Lumpur,
Malaysia, October 13-15, 2003. www.gisdevelopment.net/proceedings/mapasia/2003/
index.htm.
Ustin, S. Manual of Remote Sensing, Volume 4, Remote Sensing for Natural Resource
Management and Environmental Monitoring. New York: Wiley, 2004.
Valerio, C. “Airborne Remote Sensing Experiments with a Fluorescent Tracer,” in
Remote Sensing in Meteorology, Oceanography, and Hydrology. Edited by Cracknell,
A.P. Chichester, U.K.: Ellis Horwood, 1981.
Valerio, C. “Airborne Remote Sensing and Experiments with Fluorescent Tracers,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: Kluwer, 1983.
Vermote, E., and El Saleous, N. “Absolute Calibration of AVHRR Channels 1 and 2,”
in Advances in the Use of NOAA AVHRR Data for Land Applications. Edited by
D’Souza, G., Belward, A.S. and Malingreau, J.-P. Dordrecht: Kluwer, 1996.
Vermote, E., and Roger, J.C. “Radiative Transfer Modelling for Calibration and
Atmospheric Correction,” in Advances in the Use of NOAA AVHRR Data for Land
Applications. Edited by D’Souza, G. et al. Dordrecht: Kluwer, 1996.
Voigt, S., et al. “Integrating Satellite Remote Sensing Techniques for Detection and
Analysis of Uncontrolled Coal Seam Fires in North China,” International Journal
of Coal Geology, 59:121, 2004.
Wadhams, P., Tucker, W.B., Krabill, W.B., Swift, R.N., Comiso, J.C. and Davis, N.R.,
“Relationship between Sea Ice Freeboard and Draft in the Arctic Basin, and
Implications for Ice Thickness Monitoring,” Journal of Geophysical Research,
97:20325, 1992.
Walton, C.C. “Nonlinear Multichannel Algorithm for Estimating Sea Surface Tem-
perature with AVHRR Satellite Data,” Journal of Applied Meteorology, 27:115,
1988.
Ward, J.F. “Power Spectra from Ocean Movements Measured Remotely by Ionospheric
Radio Backscatter,” Nature, 223:1325, 1969.
Weinreb, M.P., and Hill, M.L. Calculation of Atmospheric Radiances and Brightness Tem-
peratures in Infrared Window Channels of Satellite Radiometers. NOAA Technical
Report NESS 80. Rockville, MD: U.S. Department of Commerce, 1980.
Werbowetzki, A. Atmospheric Sounding User’s Guide. NOAA Technical Report NESS
83. Washington, DC: U.S. Department of Commerce, 1981.
Wilson, H.R. “Elementary Ideas of Optical Image Processing,” in Remote Sensing in
Meteorology, Oceanography, and Hydrology. Edited by Cracknell, A.P. Chichester,
U.K.: Ellis Horwood, 1981.
Wilson, S.B., and Anderson, J.M. “A Thermal Plume in the Tay Estuary Detected by
Aerial Thermography,” International Journal of Remote Sensing, 5:247, 1984.
Woodhouse, I.H. Introduction to Microwave Remote Sensing. Boca Raton: CRC Press, 2006.
Wu, X., et al. “A Climatology of the Water Vapor Band Brightness Temperatures from
NOAA Operational Satellites,” Journal of Climate, 6:1282, 1993.
Wurtele, M.G., Woiceshyn, P.M., Peteherych, S., Borowski, M. and Appleby, W.S.,
“Wind Direction Alias Removal Studies of Seasat Scatterometer-Derived Wind
Fields,” Journal of Geophysical Research, 87:3365, 1982.
Wyatt, L. “The Measurement of Oceanographic Parameters Using Dekametric Radar,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: Kluwer, 1983.
Zwick, H.H., Neville, R. A. and O’Neil, R. A., “A Recommended Sensor Package
for the Detection and Tracking of Oil Spills,” Proceedings of an EARSeL ESA
Symposium, ESA SP-167, 77, Voss, Norway, May 1981.
Bibliography
The following references are not specifically cited in the text but are general
references that readers may find useful as sources of further information or
discussion.
Allan, T.D. Satellite Microwave Remote Sensing. Chichester, U.K.: Ellis Horwood, 1983.
Carter, D.J. The Remote Sensing Sourcebook: A Guide to Remote Sensing Products, Services,
Facilities, Publications and Other Materials. London: Kogan Page, McCarta, 1986.
Cracknell, A.P., ed. Remote Sensing in Meteorology, Oceanography, and Hydrology. Chichester,
U.K.: Ellis Horwood, 1981.
Cracknell, A.P., ed. Remote Sensing Applications in Marine Science and Technology. Dordrecht:
Kluwer, 1983.
Cracknell, A.P., et al. (eds.). Remote Sensing Yearbook. London: Taylor & Francis, 1990.
Curran, P.J. Principles of Remote Sensing. New York: Longman, 1985.
Drury, S.A. Image Interpretation in Geology. London: George Allen & Unwin, 1987.
D’Souza, G., et al., eds. Advances in the Use of NOAA AVHRR Data for Land Applications.
Dordrecht: Kluwer, 1996.
Griersmith, D.C., and Kingwell, J. Planet Under Scrutiny: An Australian Remote Sensing
Glossary. Canberra: Australian Government Publishing Service, 1988.
Hall, D.K., and Martinec, J. Remote Sensing of Ice and Snow. London: Chapman and
Hall, 1985.
Trevett, J.W. Imaging Radar for Resources Surveys. London: Chapman and Hall, 1986.
Ulaby, F.T., et al. Microwave Remote Sensing: Active and Passive: Volume 1, MRS
Fundamentals and Radiometry. Reading, MA: Addison-Wesley, 1981.
Ulaby, F.T., Moore, R.K. and Fung, A.K., Microwave Remote Sensing: Active and Passive:
Volume 2, Radar Remote Sensing and Surface Scattering and Emission Theory.
Reading, MA: Addison-Wesley, 1982.
Ulaby, F.T., Moore, R.K. and Fung, A.K., Microwave Remote Sensing: Active and Passive:
Volume 3, From Theory to Applications. London: Artech House, 1986.
Verstappen, H.T. Remote Sensing in Geomorphology. Amsterdam: Elsevier, 1977.
Widger, W.K. Meteorological Satellites. New York: Holt, Rinehart & Winston, 1966.
Yates, H.W., and Bandeen, W.R. “Meteorological Applications of Remote Sensing
from Satellites,” Proceedings IEEE, 63:148, 1975.
Appendix
Abbreviations and Acronyms
This list includes many of the abbreviations and acronyms that one is likely
to encounter in the field of remote sensing; it is not limited to those used
in this book. The list has been compiled from a variety of sources.
TM Thematic Mapper
TOGA Tropical Oceans Global Atmosphere
TOMS Total Ozone Mapping Spectrometer
TOPEX/Poseidon NASA/CNES Ocean Topography Experiment
TOS TIROS Operational System
TOVS TIROS Operational Vertical Sounder
TRF Technical Reference File
TRMM Tropical Rainfall Measuring Mission
TRSC Thailand Remote Sensing Center
Index
El Niño, 257–260
Electrically Scanning Microwave Radiometer (ESMR), 188
Electromagnetic radiation; see also Planck’s radiation formula
  geological information from, 268–270
  infrared, 25–26
  microwave, 27–29
  near-infrared, 26
  spectrum, 23, 24
  visible, 24–25
  wavelengths, 22, 24–29
Environmental Protection Agency (EPA), 90
Envisat, 72
EPA, 90
ERBE (Earth Radiation Budget Experiment), 51, 53, 257
ERS (ESA Remote Sensing) satellites, 129, 143, 155, 188, 254
  global ozone and, 264
  limitation of coverage frequency, 75
  overview, 70–71
  pollution monitoring, 294–295
  surface wind shear, 250
ERTS-1, see Landsat
ESA Remote Sensing (ERS) satellites, 70–71, 75
ESMR (Electrically Scanning Microwave Radiometer), 188
EUMETSAT (European organization for the Exploitation of Meteorological Satellites), 50, 58–59, 63
EUMETSAT Polar System (EPS), 58
European Centre for Medium-Range Weather Forecasts (ECMWF), 246
European organization for the Exploitation of Meteorological Satellites (EUMETSAT), 50, 58–59, 63
European Space Agency (ESA), 143, 250

F
FDP (Forecast Demonstration Project), 242–243
Feng-Yun satellites, 59, 64
Forecast Demonstration Project (FDP), 242–243
Forecasting, weather radars in, 243–245
Forestry, satellites and, 281–285
Fourier series, 114, 127
Fourier transforms, 229–239
  filters, 236–239
  inversion, 230–231
  optical analogue, 235–236

G
GAC (global area coverage), 55, 69, 76
Gamma ray spectroscopy, 108–112
Gens, R., 155, 157
Geoid, measurement of, 129–131, 133–134
Geostationary meteorological satellites, 59–64
Geostationary Operational Environmental Satellite (GOES), 52, 59, 61, 162, 284
Geostationary Operational Meteorological Satellite (GOMS), 52, 59, 63
GLI (Global Line Imager), 72
Global area coverage (GAC), 55, 69, 76
Global Line Imager (GLI), 72
Global Ozone Monitoring by Occultation of Stars (GOMOS), 267
Global positioning system, see GPS
Global Telecommunications System, see GTS
GOCE, 134
GOES (Geostationary Operational Environmental Satellite), 52, 59, 61, 284
GOMOS (Global Ozone Monitoring by Occultation of Stars), 267
GOMS (Geostationary Operational Meteorological Satellite), 52, 59, 63
Gordon, H.R., 202
GPS (Global Positioning System), 19–20, 91, 96–99
GRACE (Gravity Recovery and Climate Experiment), 134, 276, 289
Graham, L.C., 155
Gravity Recovery and Climate Experiment (GRACE), 134, 276, 289
Ground wave systems, 118–120
GTS, 19

H
Haute Resolution Visible (HRV), 67–68, 75
High-Resolution Infrared Radiation Sounder (HIRS/2), 54, 177, 260
HIRS/2 (High-Resolution Infrared Radiation Sounder), 54, 177, 260
Hotelling, H., 225–229
HRV (Haute Resolution Visible), 67–68, 75
Hurricane prediction and tracking, 136, 143, 252, 254–256
Hydrology, 287–289

I
IAEA (International Atomic Energy Agency), 111, 112
ICESat, 300
M
Manual of Remote Sensing, 3
Marine Optical Buoy (MOBY), 201–202
Medium-Resolution Imaging Spectrometer (MERIS), 72
MERIS (Medium-Resolution Imaging Spectrometer), 72
Meteorological Operational (MetOp) spacecraft, 51, 58–59
Meteorological remote sensing satellites, 50–64
  geostationary meteorological satellites, 59, 63–64
  polar-orbiting meteorological satellites, 50–59
Meteosat satellites
  atmospheric correction and, 162
  ERS and, 70
  features, 61
  MSG (Meteosat Second Generation), 52, 59
  overview, 63–64
  spatial resolution, 74–75
MetOp (Meteorological Operational) spacecraft, 51, 58–59
Microwave sensors, 38–44
Microwave Sounding Unit (MSU), 54, 177, 260
MOBY (Marine Optical Buoy), 201–202
Moderate-Resolution Imaging Spectroradiometer (MODIS), 72, 284–285
MODIS (Moderate-Resolution Imaging Spectroradiometer), 72, 284–285
Morel, A., 202
MSU (Microwave Sounding Unit), 54, 177, 260
Multifunctional Transport Satellite-1 (MTSAT-1), 52, 64
Multilooking, 153
Multispectral images, 221–224
  contrast enhancement, 222
  overview, 221–222
  visual classification, 223–224

N
NASA
  airborne laser systems, 90
  AOL and, 91
  AQUA satellite, 146, 284, 285, 287
  geostationary meteorological satellites, 52, 59, 64
  ICESat, 300
  Landsat and, 64–65, 66, 67
  MISR, 250
  NSCAT (NASA Scatterometer), 143, 251
  polar-orbiting satellites, 51
  SeaWinds and, 72
  Terra satellite, 272, 284, 285, 287
  TOPEX/Poseidon and, 71
National Environmental Satellite, Data, and Information Service (NESDIS), 84–85, 86–87
National Oceanic and Atmospheric Administration (NOAA), 14, 15, 17–18
  POES program, 50
  Wave Propagation Laboratory, 119
National Polar-Orbiting Operational Environmental Satellite System (NPOESS), 267
National Space Development Agency (NASDA), 64
NDVI (normalized difference vegetation index), 195–196
NdYAG (neodymium yttrium aluminum garnet) lasers, 98
Near-polar orbiting satellites, 6, 13, 14, 15–16
NEMS (Nimbus-E Microwave Spectrometer), 188
Neodymium yttrium aluminum garnet (NdYAG) lasers, 98
NESDIS, 84–85, 86–87, 284
Nimbus-E Microwave Spectrometer (NEMS), 188
NVAP (Water Vapor Project), 262
T
Telemetry stations, 19
Television InfraRed Observation Satellite (TIROS-N) series, 10, 15, 49
Telstar, 12
Temperature changes, determination with satellites, 246–248
Thematic Mapper, 93
Thermal-infrared scanners, 175–188
  airborne, 35, 36–38
  AVHRR, 179–188
  data processing, 179–181
  LOWTRAN, 182, 183
  radiative transfer equation, 175–178

V
Van Genderen, J.L., 155, 157
VEGETATION, 63, 68, 287
Very High Resolution Radiometer (VHRR), 53, 54, 57, 64
VHRR (Very High Resolution Radiometer), 53, 54, 57, 64
Visible and near-infrared sensors, 29–34
  classification scheme for, 29
  multispectral scanners, 30–31, 32–34
  push-broom scanners, 31
Visible Infrared Spin Scan Radiometer (VISSR), 52, 59
Visible wavelength scanners, 191–204