
Unit I

AERIAL PHOTOGRAPHY

Introduction

The term "photography" is derived from two Greek words meaning "light" ( phos ) and
"writing" ( graphien ). Photography means the art, hobby, or profession of taking photographs,
and developing and printing the film or processing the digitized array image. Photography is
production of permanent images by means of the action of light on sensitized surfaces (film or
array inside a camera), which finally giving rise to a new form of visual art.
The word aerial originated in early 17th century. [Formed from Latin aerius , from Greek
aerios , from air.] Aerial Photography means photography from the air.

Definitions:

Aerial photography: Aerial photography is defined as the science of making photographs from
the air, for studying the surface of the earth.
Aerial photograph: Aerial photograph is the image of ground surface taken by the camera
placed in the aircraft. The photograph is taken from certain altitude above Mean Sea Level and
the altitude depends on scale of photograph and focal length of camera.

Aerial photography is one of the most common, versatile and economical forms of
remote sensing. It is a means of fixing time within the framework of space (de Latil, 1961).
Aerial photography was the first method of remote sensing and is still used in the era of
satellites and electronic scanners. Aerial photographs remain the most widely used type
of remote sensing data.

History of Aerial Photography


1038 AD - Al Hazen of Basra is credited with the explanation of the principle of the camera
obscura.
1490 - Leonardo da Vinci was intrigued by the atmosphere and by its effects on the colors and
distinctness of distant objects. Though other artists had already begun to mimic the influences of
the atmosphere, he was the first to make careful measurements and suggest rules for applying
them realistically. He called this subject aerial perspective.
1666 - Sir Isaac Newton, while experimenting with a prism, found that he could disperse light
into a spectrum of red, orange, yellow, green, blue, indigo, and violet. Utilizing a second prism,
he found that he could recombine the colors into white light.
1802 - Thomas Young puts forth basic concepts of the Young-Von Helmholtz Theory of color
vision: Three separate sets of cones in the retina of the eye, one tuned to red, one to blue, and one
to green.
1827 - Niepce takes first picture of nature from a window view of the French countryside using a
camera obscura and an emulsion using bitumen of Judea, a resinous substance, and oil of
lavender (it took 8 hours in bright sunlight to produce the image).
1839 - Daguerre announces the invention of Daguerrotype which consisted of a polished silver
plate, mercury vapors and sodium thiosulfate ("hypo") that was used to fix the image and make it
permanent.
1839 - William Henry Fox Talbot invents a new method of photography, a system of imaging on
silver nitrate of silver chromate treated paper and using a fixative solution of salt.
1830's - The invention of stereoscopes.
1855 - James Clerk Maxwell, a Scottish physicist, describes the additive color theory for
producing color photographs.
1858 - Gaspard-Félix Tournachon ("Nadar") takes the first aerial photograph from a captive
balloon at an altitude of 1,200 feet over Paris.
1889 - Arthur Batut takes the first aerial photograph from a kite over Labruguière, France.
1899 - George Eastman produced a nitrocellulose based film type that retained the clarity of the
glass plates which were in use at the time and introduced the first Kodak camera.
1900 - Max Planck's revelation of 'quanta' and the mathematical description of the 'black body'
lays the foundation for numerous developments in quantum mechanics.
1903 - The Bavarian Pigeon Corps uses pigeons to transmit messages and take aerial photos;
Julius Neubronner patented the breast-mounted pigeon camera.
1906 - Albert Maul, using a rocket propelled by compressed air, took an aerial photograph from
a height of 2,600 feet; the camera was ejected and parachuted back to earth.
1908 - Wilbur Wright was the pilot, and together with L. P. Bonvillain on board, he acquired the
first remotely sensed image from an airplane in France. The next year, the first aerial motion
pictures were recorded in Italy with another photographer on board.
1914 - WWI provided a boost in the use of aerial photography, but after the war, enthusiasm
waned.
1946 - First space photographs from V-2 rockets.
1954 - Westinghouse, under sponsorship from USAF, develops first side-looking airborne radar
(SLAR) system.
1954 - U-2 takes first flight.

Characteristics of Aerial Photography


1. Synoptic viewpoint: Aerial photographs give a bird's-eye view of large areas, enabling us
to see surface features in their spatial context. They enable the detection of small-scale
features and spatial relationships that would not be apparent from the ground.
2. Time-freezing ability: They are virtually permanent records of the existing conditions on
the Earth's surface at one point in time, and can be used as historical documents.
3. Capability to stop action: They provide a stop-action view of dynamic conditions and
are useful in studying dynamic phenomena such as flooding, moving wildlife, traffic, oil
spills and forest fires.
4. Three-dimensional perspective: They provide a stereoscopic view of the Earth's surface
and make it possible to take measurements horizontally and vertically, a characteristic that
is lacking in the majority of remotely sensed data.
5. Spectral and spatial resolution: Aerial photographs are sensitive to radiation in
wavelengths outside the spectral sensitivity of the human eye (0.3 µm to 0.9 µm
versus 0.4 µm to 0.7 µm).
6. Sensitivity: They record objects beyond the spatial resolving power of the human eye.
7. Availability: Aerial photographs are readily available at a range of scales for much of the
world.
8. Economy: They are much cheaper than field surveys and are often cheaper and more
accurate than maps.

Difference between Maps and Aerial Photographs

Aerial Photograph | Map

1. It is a central projection. | It is an orthogonal projection.
2. An aerial photograph is geometrically incorrect; the distortion is minimum at the centre and increases towards the edges of the photograph. | A map is a geometrically correct representation of the part of the earth projected.
3. The scale of the photograph is not uniform. | The scale of the map is uniform throughout the map extent.
4. Enlargement/reduction does not change the contents of the photograph and can easily be carried out. | Enlargement/reduction of a map involves redrawing it afresh.
5. Aerial photography holds good for inaccessible and inhospitable areas. | Mapping of inaccessible and inhospitable areas is very difficult and sometimes becomes impossible.
6. Ground features appear as actual photographic images. | Ground features are portrayed by standard conventional symbols.

TYPES OF AERIAL PHOTOGRAPHS

Aerial photography most commonly used by military personnel may be divided into two major
types, the vertical and the oblique. Each type depends upon the attitude of the camera with
respect to the earth's surface when the photograph is taken.

a. Vertical. A vertical photograph is taken with the camera pointed as straight down as possible
(Figure 1). Allowable tolerance is usually ±3° between the perpendicular (plumb) line and the
camera axis, so that the camera axis is effectively coincident with the vertical.
A vertical photograph has the following characteristics:
(1) The lens axis is perpendicular to the surface of the earth.
(2) It covers a relatively small area.
(3) The shape of the ground area covered on a single vertical photo closely approximates a
square or rectangle.
(4) Being a view from above, it gives an unfamiliar view of the ground.
(5) Distance and directions may approach the accuracy of maps if taken over flat terrain.
(6) Relief is not readily apparent.
Figure 1. Relationship of the vertical aerial photograph with the ground.
b. Low Oblique. This is a photograph taken with the camera inclined about 30° from the
vertical (Figure 2). It is used to study an area before an attack, to substitute for a reconnaissance,
to substitute for a map, or to supplement a map. A low oblique has the following characteristics:
(1) It covers a relatively small area.
(2) The ground area covered is a trapezoid, although the photo is square or rectangular.
(3) The objects have a more familiar view, comparable to viewing from the top of a high hill or
tall building.
(4) No scale is applicable to the entire photograph, and distance cannot be measured. Parallel
lines on the ground are not parallel on this photograph; therefore, direction (azimuth) cannot be
measured.
(5) Relief is discernible but distorted.
(6) It does not show the horizon.
Figure 2. Low oblique photograph.

c. High Oblique. The high oblique is a photograph taken with the camera inclined about 60°
from the vertical (Figure 3). It has limited military application; it is used primarily in the
making of aeronautical charts. However, it may be the only photography available. A high
oblique has the following characteristics:
(1) It covers a very large area (not all usable).
(2) The ground area covered is a trapezoid, but the photograph is square or rectangular.
(3) The view varies from the very familiar to unfamiliar, depending on the height at which the
photograph is taken.
(4) Distances and directions are not measured on this photograph for the same reasons that they
are not measured on the low oblique.
(5) Relief may be quite discernible but distorted as in any oblique view. The relief is not
apparent in a high altitude, high oblique.
(6) The horizon is always visible.

Figure 3. High oblique photograph.

Basic Requirements of Aerial Photography:


In order to be useful for the preparation of base maps and for interpretation in natural
resources surveys, aerial photography should fulfil the following requirements:
1. The photography should provide a faithful image of even the smallest detail perceptible from
the camera station.
2. The definition of the image should be sharp, bright and clear.
3. The photography should be distortion free.
4. It should be continuous, with sufficient overlap of successive photographs.
5. Tilt and crab should be within tolerable limits.
6. Scale variation should be within tolerable limits.

Factors Influencing the Image Quality of Photographs


The major factors which influence the image quality of photographs can broadly be classed
under six categories as given below:
1. Reflectivity of the object: Light intensity and distribution, shade, colour
2. Atmospheric factors: Haze, clouds
3. Aircraft Vibration: steadiness
4. Camera: Rigidity of the lens, shutter and magazine assembly, efficiency of the shutter, light
loss and scatter through filters, optical flatness of the filter, spectral transmission of the filter,
distortion and aberrations of the lens
5. Negative and positive base and emulsions: Speed and sensitivity of the emulsion, flatness of
the base, dimensional stability of the base
6. Processing and printing: Mode of processing of negative, condition of printing equipment,
mode of printing; quality of chemicals

Process of Aerial Photography:


The process of Aerial Photography includes three stages
1. Planning for photography
2. Planning and execution of photographic flights
3. Processing of negatives and production of positive copies

1. PLANNING FOR PHOTOGRAPHY


For obtaining suitable photo coverage of an area, consideration must be given to several factors
while designing the flight specifications. Details in respect of these should be provided and
requirements specified while ordering fresh photography.
A. Area to be photographed: The limits of the area should be indicated on the existing
degree sheets or maps of scale 1:2,50,000. A large area may be divided into blocks A, B, C,
etc.
B. Purpose of photography: This is important as it helps in the designing of the
photographic specifications and planning of the flight mission.
C. Type of photography: Aerial photographs are generally classified according to the
orientation of the optical axis of the camera.
1. A vertical aerial photograph is one taken with the camera pointing vertically downwards. This also
includes photographs whose optical axis is inclined up to three degrees from the vertical (fig-1a).
2. Oblique aerial photographs are those taken when the optical axis is considerably inclined from
the vertical. When the inclination is sufficiently great to permit photography of the horizon, they are
called high oblique photographs (fig-1c); those with less inclination are called low oblique
photographs (fig-1b).
Vertical aerial photography is commonly used in photo interpretation. Other types are rarely
used.
If either colour, infrared or false colour photography is required for special jobs, it should be
specified.
D. Scale of Photography: The scale of an aerial photograph is the ratio of photograph image distance to
ground distance. This ratio is the same as the ratio of camera focal length to flying height (f/H). An
aerial photograph does not have a uniform scale. This variation in scale is due to:
(i) Optical-photographic deficiencies: These are caused by optical distortions due to an
inferior camera lens, faulty shutters, film shrinkage or failure of the film-flattening mechanism in the
camera focal plane.
(ii) Inclination of the optical axis: Such inclination is referred to as tilt and is caused by rotation
of the camera normal to the direction of flight. The images are displaced radially towards the
isocentre on the upper side of a tilted photograph and away from the isocentre on the lower side.
Along the axis of tilt there is no displacement.
(iii) Topographic relief of the terrain photographed: All objects that extend above or
below a specified ground datum have their photographic images displaced. The images of ground
objects with greater elevation are displaced radially outwards from the centre of the
photograph. Conversely, ground points lying below the selected datum plane are displaced radially
inward.
The scales generally used in natural resource surveys vary between 1:5,000 and 1:50,000,
depending upon the purpose for which the photographs are used. Two scales commonly
preferred are 1:15,000 and 1:25,000. For general mapping purposes in the field of geology, the
scales 1:50,000 or 1:60,000 are suitable. These scales have the advantage of corresponding to the
scale of modern toposheets. Selection of scale often depends on relief and other considerations.
The higher the relief of the terrain and the denser the vegetation, the smaller should be the
scale selected. While selecting the scale of photography for geological interpretation, the advantages
and disadvantages of large and small scales should be considered (Table 1).
The required scale should be specified while ordering photography, as it has a bearing
on the selection of camera and aircraft and on fixing the flying height of the aircraft.
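As an illustration of the scale relation described above, the short Python sketch below computes the nominal photo scale (f/H), the flying height needed for a target scale, and the ground coverage of one frame. The focal length, target scale and format size used here are example values, not taken from any particular project.

```python
# Minimal sketch of the photo-scale relations: scale = f / H (flat terrain assumed).
def photo_scale(focal_length_m, flying_height_m):
    """Nominal scale of a truly vertical photograph over flat terrain."""
    return focal_length_m / flying_height_m

def flying_height_for_scale(focal_length_m, scale_denominator):
    """Flying height (above terrain) needed to obtain a 1:scale_denominator photo."""
    return focal_length_m * scale_denominator

def ground_coverage(format_side_m, scale_denominator):
    """Ground side length covered by one frame of the given format."""
    return format_side_m * scale_denominator

f = 0.15          # 15 cm lens (assumed example)
target = 15000    # desired scale 1:15,000
H = flying_height_for_scale(f, target)        # 2250 m above mean terrain
side = ground_coverage(0.23, target)          # 23 cm format -> 3450 m on the ground
print(f"H = {H:.0f} m, scale = 1:{1/photo_scale(f, H):.0f}, frame covers {side:.0f} m")
```

For example, a 15 cm lens flown 2,250 m above the terrain gives a 1:15,000 photograph, and a 23 cm frame then covers about 3.45 km on a side.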

Table 1: Advantages and disadvantages of small-scale and large-scale aerial photos

Small scale - Advantages:
(1) A regional picture is obtained
(2) Association of features is clearly brought out
(3) The number of photos is less, reducing cost and labour
(4) Mosaic making is easier
(5) Topography of a larger area is covered per photo

Small scale - Disadvantages:
(1) Low-lying areas are obscured
(2) Geological details are not clear

Large scale - Advantages:
(1) Details of topography are well brought out
(2) Geological details are clear
(3) Planimetry is accurate

Large scale - Disadvantages:
(1) More photos have to be handled
(2) Distortion due to relief is more

E. Aerial Cameras: There are four main types of aerial cameras in use viz. the FRAME
type, STRIP type, PANORAMIC and MULTISPECTRAL.

1. STRIP TYPE: In the STRIP type camera the film is moved continuously along the focal
plane and a narrow slit shaped aperture kept open constantly. The film speed is adjusted
to the aircraft speed so as to obtain ground coverage with equal scale in the X and Y
directions. Such cameras are often used for path recovery in geophysical and other non-
imaging surveys and for low altitude, high speed reconnaissance work.

2. PANORAMIC TYPE: In the panoramic camera only the portion of the lens on or near
the optical axis is used with the lens scanning through large angles across the direction of
flight and the film is advanced parallel to the direction of scanning at rates compatible
with the vehicle ground speed, to obtain continuity of ground coverage along flight.
Generally, scan angles are over 120°, and the scan direction is called the panoramic
direction.

3. MULTISPECTRAL TYPE: The Multispectral camera is used to simultaneously image


the terrain in different spectral bands, to accentuate different features selectively. These
are actually assemblages of cameras with identical lens systems, but different filters,
imaging either on different parts of the same film roll or on different film rolls. In the
latter case one or more of the cameras can be loaded with colour or colour infrared film.
The most common type of multispectral camera (also known as multi-band camera) has
four lens - systems, imaging on adjacent portions of the same film.

4. FRAME TYPE: The FRAME type camera is the most common; successive
exposures are taken on an entire frame format through a lens fixed relative to the focal
plane, the film being moved between exposures. Distortion-free, high-resolving-power
aerial cameras of the FRAME type, such as the Wild RC-5 (A), RC-8 and RC-10 or the
Zeiss RMK-A, ensure good image quality and metric characteristics.

Based on the angle of coverage, the following aerial cameras are used:
1. Normal or standard angle camera: having a lens with an angle of coverage up to 75° and
a focal length ranging from 200 to 300 mm. The precision of the planimetry is the highest, e.g.
Wild RC-5 (A).
2. Wide-angle camera: having a lens with an angle of coverage between 75° and 100° and a
focal length ranging from 100 to 150 mm. The precision of height measurement is higher, e.g.
Wild RC-5 (A) and Wild RC-8.
3. Super-wide-angle camera: having a lens with an angle of coverage greater than 100° and a
focal length ranging from 45 to 90 mm (f/8). The precision of height measurement is the highest.
Details of these cameras are given in Table 2.

Table – 2: Details of the Cameras used in Aerial Photography

Camera | Focal Length of the Lens | Format Size

Wild RC-5 (A) | 11.5 cm and 21 cm | 18 cm × 18 cm
Wild RC-8 | 11.5 cm | 18 cm × 18 cm
Wild RC-8 (Universal) | 15 cm | 23 cm × 23 cm
Wild RC-10 | 15 cm | 23 cm × 23 cm
Zeiss RMK-A | 15 cm and 30 cm | 23 cm × 23 cm
Eagle IX/F-49 | 6" and 12" | 9" × 9"
Eagle IX/F-52 | 10", 12" and 20" | 9" × 9"
K-20 | 6" | 8.25" × 7"

While the lenses of 11.5 cm and 15 cm focal length are referred to as wide-angle lenses, the lens
of 21 cm focal length is referred to as a normal-angle lens and the lens of 30 cm focal length is
referred to as a narrow-angle lens. The angle of coverage follows from the focal length and the
format size, as sketched below.
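The classification above follows directly from the geometry of the lens cone: the angle of coverage across the format diagonal is roughly 2·atan(d/2f), where d is the diagonal and f the focal length. The small sketch below, using format sizes from Table 2, is only an illustration of that relation.

```python
# Sketch: angular coverage of an aerial camera from focal length and format size.
import math

def angle_of_coverage_deg(focal_length_cm, format_side_cm):
    """Full field angle across the format diagonal, in degrees."""
    half_diagonal = format_side_cm * math.sqrt(2) / 2.0
    return 2.0 * math.degrees(math.atan(half_diagonal / focal_length_cm))

# Example values taken from Table 2 above.
print(angle_of_coverage_deg(11.5, 18))   # ~96 deg -> wide angle
print(angle_of_coverage_deg(15.0, 23))   # ~95 deg -> wide angle
print(angle_of_coverage_deg(30.0, 23))   # ~57 deg -> normal/narrow angle
```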
F. Flight Direction: In aerial photography an E-W direction of flight is generally preferred on
account of the winds. Some other direction may also be decided upon in consideration of
other factors. The direction along the length of the area is commonly chosen to
keep the number of strips to a minimum. For geological interpretation, a flight direction
across the strike of the formations (cross stripping) is preferred in highly folded areas to
ensure sufficient overlap across the strike. It is also preferred in high mountainous areas
where relief displacement is greater.

G. Flying Height of aircraft: As the scale of photography is a function of the focal length of
the camera lens and the flying height, the lower the flying height the greater the scale variations
in rugged country. To keep scale variations within tolerable limits, the flying height
should be kept higher in rugged mountainous areas. The desired scale can in such cases be
maintained by using a camera lens of proportionately longer focal length. The ceiling
heights and minimum speeds of the aircraft commonly used in India and abroad are
given below:

Aircraft Ceiling Height (km) Minimum speed (km/hr)


India
Dakota 5.6 - 6.2 240
Avro 7.8 600
Cessna 9 350
Canberra 14 560
Abroad
U–2 21 798
North American X-15 108 6620

H. Forward and Lateral Overlaps


Forward Overlap: The forward overlap (fig-2a) generally chosen is 60% ± 5%. In no case
should it be less than 53%. In mountainous areas it is safer to have an overlap of 65%.
Lateral Overlap: In general, a lateral overlap (fig-2b) of 20% ± 5% is specified. In areas of high
relief such as the Himalayas a lateral overlap of 35% is specified to cater for relief displacements.
The air base and flight-line spacing that result from these overlaps are sketched below.
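To make the overlap specification concrete, the sketch below converts forward and lateral overlap percentages into the air base (distance between successive exposure stations) and the spacing between adjacent flight lines. The photo scale and format size are example assumptions.

```python
# Sketch: air base and flight-line spacing from the specified overlaps.
def air_base_m(ground_coverage_m, forward_overlap):
    """Distance between successive exposure stations along a flight line."""
    return ground_coverage_m * (1.0 - forward_overlap)

def line_spacing_m(ground_coverage_m, lateral_overlap):
    """Spacing between adjacent flight lines."""
    return ground_coverage_m * (1.0 - lateral_overlap)

# Example: a 23 cm format at 1:15,000 covers 0.23 * 15000 = 3450 m on the ground.
coverage = 0.23 * 15000
print(air_base_m(coverage, 0.60))      # 1380 m between exposures (60% forward overlap)
print(line_spacing_m(coverage, 0.20))  # 2760 m between flight lines (20% lateral overlap)
```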
I. Time of Photography: The time of photography should be so chosen as to avoid
long shadows and haze conditions. Long shadows obscure detail and reduce the
interpretational value of the photographs. Normally the time is confined to the period
when the sun's elevation is between 30° and 60° (8 to 10 AM and 2 to 4 PM are preferred). In
mountainous areas, however, the period around noon is preferred to avoid shadows of the
hills. In the tropics, where atmospheric haze is the main consideration, the time is limited
to 1.5 to 3 hours after sunrise.
J. Season of Photography
Selection of the season for photography depends on various factors such as seasonal
changes in light reflection, seasonal changes in vegetation cover, seasonal changes in
climatological factors. The flying season in India is generally from September to October
and March to April. The purpose of photography however dictates the season to a great
extent. For the photogrammetric, geological and soil surveys the ground should be visible
as clearly as possible. In forested areas such a time will be when the trees shed their
leaves. In higher latitudes and altitudes the melting of snow has to be awaited. The soil
should be without standing crops. Thus, for these purposes early spring to beginning of
summer is most suitable. In forestry surveys, the density of foliage is important. For the
land use surveys, it is preferable to have the photography when the crops are standing.
Therefore, for these purposes the later part of the year from the end of rainy season to the
beginning of winter is suitable.
K. Type and Number of Prints
The type and number of prints required, whether glossy, matt, semi-matt, single weight or
double weight should be decided and stated while requisitioning for Aerial Photography.

PLANNING AND EXECUTION OF PHOTOGRAPHIC FLIGHTS


Aerial photography is a delicate operation and demands painstaking preparation and
professional execution. Many factors must be considered and many problems solved before the
execution of a photographic flight.
The purpose of the project largely determines the scale and other specifications required. The
proper camera, necessary filters, suitable film, photographic plates and other equipment are
selected and the flight mission is assigned to a trained photographic crew.
While photographing, the plane flies a back-and-forth path over the area along predetermined
parallel flight lines (fig-2c). The lines are equally spaced and generally run along the length of
the area. The spacing between the flight lines is predetermined according to the requirement of
lateral overlap.
Within a single flight line, photographs are taken in succession at fixed intervals. The interval
between successive exposures is determined by taking into account the speed of the aircraft and
the forward overlap required on successive photographs. The timing of successive exposures is
regulated by an instrument called an "intervalometer", which is set to trigger the camera at the
proper interval of time.

The intervalometer setting may be readjusted when necessary, or the exposure interval
may be varied or maintained manually. The exposure time, aperture opening and similar other
adjustments are controlled by the photographer. Cross or oblique winds may force the pilot to
head the plane diagonally into the wind to keep it moving along the predetermined flight line.
The plane in such a case points in one direction while it moves along a different line. This is
referred to as crabbing, and when this occurs the camera is turned until its sides are parallel to the
actual flight direction. Due to these wind effects certain coverage errors occur, as
stated below.

Coverage Errors: There are two main causes of unsatisfactory ground coverage: drift and crab.

Drift is the lateral shift of the aircraft from the flightline; this may be caused by pilot
error or the effect of wind on the aircraft.

Crab occurs when the aircraft is not oriented with the flightline; photo edges are not
parallel to the flightline and it usually occurs when the pilot is trying to compensate for a
cross wind and orients the plane into the wind to maintain the flightline.

Numbering of photographs
Job Number: Every photographic task is allotted a job number by the Surveyor General of India for
easy reference and handling. A task carried out by the I.A.F. is given a number suffixed by the letter
'A', while that carried out by the A.S.Co. is suffixed by 'B', and that by the NRSA is suffixed by 'C',
e.g. 346-A, 331-B.
Strip Number: If the strips are flown E-W, numbering of the strips is given from N to S. If they
are flown N-S, the numbering is given from W to E.
Photo Number: If the strip is flown E-W, the photos of the strip are numbered from W to E. If
the strip is flown N-S, the photos are numbered from S to N.

INFORMATION RECORDED ON AERIAL PHOTOGRAPHS


Fiducial marks: Fiducial marks or collimating marks for the determination of the principal
points.
Altimeter reading: Recording of the altimeter reading for knowing the flying height of the aircraft
above mean sea level (msl) at the time of exposure.
Time: Recording of time at the moment of exposure.
Level bubble: To indicate the tilt of the camera axis at the moment of exposure (not very
accurate).
Principal distance: For determining the scale of the photograph.
Number of the photograph: e.g. 342-A/52-13
342 - Job number, (A - Indian Air Force, B - Air Survey Co. C - NRSA)
52 - Strip number
13 - Photo number
Number of camera: Useful for obtaining camera calibration report, if required.
Date of photography: Written later on.

Aerial Cameras:
A camera designed for use in aircraft and containing a mechanism to expose the film in
continuous sequence at a steady rate is called an aerial camera or aerocamera.
A camera used for vertical aerial photography for mapping purposes is called an aerial
survey camera. At present there are two major manufacturers of aerial survey cameras: Leica-
Helava Systems (LH Systems), with the RC 30 camera, and Z/I Imaging, with the RMK-TOP camera. Modern
aerial survey cameras produce negatives measuring 23 cm × 23 cm (9 in × 9 in). Up to 600
photographs may be recorded on a single film roll.
Aerial cameras, by virtue of their application, have certain special characteristics as compared
with normal cameras. The important ones are:
1. The lens remains focused at infinity.
2. Bigger format.
3. Arrangements for operating the camera from a moving platform to neutralise the
effect of aircraft motion and vibration.
4. High-resolution lenses to overcome atmospheric effects to some extent.
5. Arrangement for exposures at predetermined intervals.

Types of aerial cameras: As already described under planning for photography, there are four main
types of aerial cameras in use, viz. the FRAME type, STRIP type, PANORAMIC and MULTISPECTRAL.

Main parts of frame aerial cameras: The three basic components of an aerial camera, as
shown in the generalised cross-section, are:
1. Magazine: The camera magazine houses the reels which hold exposed and unexposed
film, and it also contains the film-advancing and film-flattening mechanisms. Film flattening
is very important in an aerial camera because non-flatness would not only decrease the
image quality (blurring) but also displace points, particularly in the corners. The focal
plane of an aerial camera is the plane in which all light rays through the lens cone come
to a focus; a frame bounds the focal plane.
2. Camera body: The camera body is a one-piece casting which usually houses the drive
mechanism, driving motor, operating handles and levers, electrical connections, switches
and other accessories.
3. Lens cone assembly: The lens cone assembly contains a number of parts and serves
several functions. Contained within this assembly are the lens, filter, shutter and
diaphragm. The aperture opening controls the amount of light entering the camera. The
shutter determines the time period during which the film is exposed to light, and the film
reacts to the incident light to form the latent image.
Various types of films, filters and lenses are used in aerial photography and
photogrammetry.
Films: Photographic film is a sheet of plastic (polyester, nitrocellulose or cellulose acetate)
coated with an emulsion containing light-sensitive silver halide salts (bonded by gelatin) with
variable crystal sizes that determine the sensitivity, contrast and resolution of the film.

Types of films:

a. Panchromatic film. Radiometric sensitivity of the silver halide crystals in the panchromatic
film emulsion encompasses the visible portion, blue through the red (0.4 to 0.7 micron), of the
spectrum. It is usually desirable to use a minus blue (yellow) or bright red filter to reduce the
effects of haze and smog. There is greater latitude in exposure and processing of black-and-white
panchromatic films than there is with color films, which assures a greater chance of success in
every photo mission.
b. Color. Color aerial photography entails the taking of photographs in natural color by means of
a three-layer emulsion sensitive to blue, green, and red visible colors. Both color negative and
color positive film types are available. Color photography requires above-average weather
conditions, meticulous care in exposure and processing, and color-corrected lenses. For these
reasons, color photography and color prints are more expensive than panchromatic.
c. Infrared. Infrared emulsions have greater sensitivity to red and the near-infrared. They record
the longer red light waves, which penetrate haze and smoke. Thus, infrared film can be used on
days that would be unsuitable for ordinary panchromatic films. It is also useful for the
delineation of water and wet areas, and for certain types of vegetation, environmental and
landuse studies. Its chief disadvantage is a greatly increased contrast, which may tend to cause a
loss of image information.
d. Color infrared. Color infrared has many of the same uses as black-and-white infrared; in
addition, the nuances of color help in photo interpretation. Because healthy vegetation (normally
green) is recorded as red on this emulsion, it is often termed "false color film." It is used in the
detection of diseased plants and trees, identification and differentiation of a variety of fresh and
salt water growths for wetland studies, and many water pollution and environmental impact
studies. A color-corrected camera lens is required. The cost of obtaining infrared color is greater
than that for black and white. Because of the cost of making infrared color prints, color
transparencies may be used and viewed on a light table.
Lenses: The camera lens is the most important and most expensive part of an aerial camera.
Lenses used in aerial cameras are highly corrected compound lenses consisting of several
elements. The various lenses used in aerial photogrammetry are described below.

A simple lens consists of a piece of optical glass that has been ground so that it has either two
spherical surfaces or one spherical surface and one flat surface. The lens's primary function is to
gather light rays from object points and bring them to focus at some distance on the opposite side
of the lens.
- A lens accomplishes this function through the principle of refraction.
- Lenses are classified by the curvature of the two optical surfaces.
- A lens is biconvex (or double convex, or just convex) if both surfaces are convex.
- If both surfaces have the same radius of curvature, the lens is equiconvex.
- A lens with two concave surfaces is biconcave (or just concave).
- If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the
curvature of the other surface.
- A lens with one convex and one concave side is convex-concave or meniscus. It is this
type of lens that is most commonly used in corrective lenses.

Filters: A filter is a piece of material (frequently glass) placed between the film and the
reflected light rays coming from the scene, which absorbs unwanted light rays and prevents
them from reaching the film. The filter serves three purposes: (1) it reduces the effect of atmospheric
haze, (2) it helps provide uniform light distribution over the entire format and (3) it protects the
lens from damage and dust.
Filters for aerial cameras are classified by their use (American Society of Photogrammetry,
1968). The following classifications are used here: (1) antivignetting filter, (2) polarizing
filter, (3) haze compensation filter, (4) colour correction filter and (5) narrow band-pass filter.

(1) Antivignetting filters: Antivignetting filters compensate for the unequal light transmission
associated with very-wide-angle lenses on 9 in × 9 in format and larger cameras. At present
it is impossible to design these lenses so that they transmit as much light to the corners
and edges of the film as to the center. Thus, these filters have a slightly darkened central
area that gradually diminishes from the center to the outside edge. This filter should be
matched to the characteristics of a particular lens and should be checked for light
transmission characteristics at all f/stop settings.
(2) Polarizing filters: Polarizing filters are used to penetrate haze and to reduce reflections
from surfaces such as water. Smith (1968) stated that research on and use of polarizing
filters for vertical aerial photography has been neglected. In some of his experiments he
polarized water surfaces to the extent that shallow ocean bottoms appeared as if all the
water had been removed. Polarization is effective only when the angle of reflection is
approximately 35°, and therefore polarization must be well planned in relation to the sun
angle. There is no effect when the sun is directly above the aircraft.
(3) Haze-cutting filters: Atmospheric haze is caused by weather conditions (dust and
moisture particles) or by humans through their creation of smoke and other air pollutants.
Haze is the scattering of blue light by these particles in the atmosphere. The most
commonly used filters in aerial photography are the haze-cutting filters. Haze-cutting
filters remove various amounts of blue light; they range in colour from almost clear
through yellow to red and are used in situations from "light haze" to "heavy haze". It
should be noted that haze also depends on the altitude besides atmospheric conditions:
the higher the altitude of the photography, the greater the effect of haze. The
manufacturers usually tabulate the various haze filters, indicating the wavelengths up to
which absorption takes place. The use of an appropriate filter compensates for the
reduction of ground contrast due to the presence of haze. Some examples of filters used
in black-and-white aerial photography:
Very light haze or low-altitude photography - light yellow filter (500 nm)
Medium haze or medium-altitude photography - deep yellow filter (540 nm)
Very heavy haze or high-altitude photography - red filter (600 nm)
(4) Colour-correction filters: Colour-correction filters are most commonly used in the
colour printing process and sometimes in photos taken with unusual light sources such as
fluorescent lights. They are available in cyan, magenta, yellow, blue, green and red in
various densities ranging from 0.025 to 0.50.
(5) Narrow band pass filters: Narrow band pass filters, sometimes called spectrazonal
filters, are used for colour separation purposes when using multiband colour
enhancement.
PREPARATION OF PHOTO INDEX
The purpose of a photo index is to show the position of any one photo relative to the others and also its
approximate geographical position on a published map. Indexes are of two types: (1) Photographic
Index, and (2) Line Index.
Photographic Index: It is normally prepared for areas where no reliable map coverage exists
and for operations such as reconnaissance. The index is prepared using photos at a smaller
scale than the actual aerial photography.
Line Index: It is a line map showing the photo layout in a mosaic as it covers the terrain. The
layout shows flight lines, photo numbers, etc. It is prepared at 1 inch : 4 miles or 1:2,50,000 scale,
and the longitudes and latitudes are marked at intervals of 15 minutes.

II. PHOTOGRAMMETRY

The word photogrammetry was coined by the geographer KERSTEN in 1855, but was introduced into
the international literature by the German scientist MEYDENBAUER in 1867. However, the French Colonel
Aimé Laussedat is considered the founder of photogrammetry.
Photogrammetry is composed of three Greek words: (i) photos, meaning light, (ii) gramma, meaning
something drawn or written and (iii) metron, meaning to measure. In the literal sense, photogrammetry is the
'art of drawing and measuring by light'. In a broader sense, it is the method of determining the shapes, sizes and
positions of objects using their imaged positions. Thus, it includes:
(i) photographing an object,
(ii) measuring the image of the object on the processed photographs, and
(iii) reducing the measurements to some useful form such as a topographical map or a numerical
cadastre.
The subject of photogrammetry deals with the geometrical aspects of aerial photographs and includes
the science of obtaining reliable measurements by means of quantitative study of the photographs.

Geometry of Aerial Photographs:


Photographs taken by a lens system are of central projection and fall under the category of central perspective,
which is characterised by the fact that all straight lines joining corresponding object and image points pass
through a point called the perspective centre. An aerial photograph is a central perspective picture. For
understanding the geometry of an aerial photograph, it is necessary to consider a low oblique photograph.

Geometry of Vertical Photograph: In the vertical aerial photograph the optical axis coincides with the
vertical dropped from the perspective centre. As a result, the principal point, nadir point and isocentre
coincide. The relationship of the camera lens, positive print and the ground in a vertical aerial photograph is
shown in fig-3b. The point of intersection of the optical axis of the camera with the photo plane and ground
plane are referred to respectively as photo principal point (p) and ground principal point (P). The distance
along the optical axis from the perspective centre to the photo plane is the focal length (f) and the, vertical
distance from the ground to the perspective centre is the flying height (H).

Scale in Vertical Photographs: The photo scale is given by the simple relation: image distance divided by
ground distance. In the case of a truly vertical photograph of flat terrain it can be shown by simple geometry
that the scale is equal to the focal length divided by the flying height, i.e. f/H. From this relation it is evident
that in a vertical photograph the scale varies with relief variations on the ground, f being constant.
Scale in Oblique Photographs: In the case of a tilted photograph the scale is not constant even if the terrain is
flat. The scale is constant only along any particular line on the photo parallel to the axis of tilt. Such lines are
called "plate parallels". Perpendicular to the axis of tilt the scale varies. It can be shown geometrically that in
the case of flat terrain the scales along the plate parallels passing through the principal point, the isocentre and
the nadir point are as follows, where θ is the angle of tilt, f is the focal length of the camera and H is the flying
height:
Scale along the plate parallel passing through the principal point: (f cos θ)/H
Scale along the plate parallel passing through the isocentre: f/H
Scale along the plate parallel passing through the nadir point: f/(H cos θ)

The above relations show that the scale in an oblique photograph of flat terrain is the same as that of a
vertical photograph (f/H) only along the plate parallel passing through the isocentre of the oblique photograph.
This plate parallel is called the "isometric parallel".
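A small numerical check of these relations is sketched below; the focal length, flying height and tilt angle are illustrative values only.

```python
# Sketch: scale along the plate parallels of a tilted photograph over flat terrain.
import math

def plate_parallel_scales(f_m, H_m, tilt_deg):
    """Return scale denominators along the plate parallels through the
    principal point, isocentre and nadir point."""
    t = math.radians(tilt_deg)
    s_principal = f_m * math.cos(t) / H_m   # (f cos t)/H
    s_isocentre = f_m / H_m                 # f/H
    s_nadir = f_m / (H_m * math.cos(t))     # f/(H cos t)
    return [round(1.0 / s) for s in (s_principal, s_isocentre, s_nadir)]

# Example: 15 cm lens, 3000 m flying height, 3 degrees of tilt.
print(plate_parallel_scales(0.15, 3000, 3))   # ~[20027, 20000, 19973]
```

As expected, the scale at the isocentre equals the vertical-photograph scale (1:20,000 here), while it is slightly smaller towards the principal point side and slightly larger towards the nadir side.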

No photograph, however, covers terrain that is ideally flat all over, except in the
case of large water bodies. As such, the scale variations in an oblique photograph are due to the cumulative
effect of tilt and relief.
Image displacement due to Relief
In aerial photographs, the relief variations of the terrain cause shifting of images from their
correct planimetric positions. The image displacement due to relief in a vertical aerial
photograph is diagrammatically shown in fig-5. Due to the height h of an object at B, its
planimetric position on the photograph is displaced by a distance a-b. It can be shown that the
displacement due to relief d = r·h/H, where r is the radial distance from the nadir point to the
displaced image measured on the photo, h is the height of the object and H is the flying height.
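Rearranged as h = d·H/r, the same relation lets the height of an object be estimated from its measured relief displacement. The sketch below uses assumed example measurements, not values from any particular photograph.

```python
# Sketch: relief displacement d = r*h/H, and object height from a measured displacement.
def relief_displacement_mm(r_mm, h_m, H_m):
    """Radial image displacement (mm) of a point of height h above the datum."""
    return r_mm * h_m / H_m

def object_height_m(d_mm, r_mm, H_m):
    """Object height recovered from a measured displacement d at radial distance r."""
    return d_mm * H_m / r_mm

# Example: flying height 3000 m, image 80 mm from the nadir point, displacement 2.1 mm.
print(round(object_height_m(2.1, 80.0, 3000.0), 1))           # ~78.8 m tall object
print(round(relief_displacement_mm(80.0, 78.75, 3000.0), 2))  # ~2.1 mm, consistency check
```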

Image displacement due to tilt: In the case of a tilted photograph of flat terrain, the image
displacement for a point which lies on the principal line (the line through the principal point
and the nadir point) is equal to

d = (ia)² sin θ / f

where ia is the distance from the isocentre to the image on the photo, θ is the angle of tilt and
f the focal length. If the image is not on the principal line and the line joining the isocentre and
the image makes an angle Ø with the principal line, then the tilt displacement is equal to

d = (ia)² sin θ cos Ø / f

STEREOSCOPIC PARALLAX IN A STEREOPAIR OF PHOTOGRAPHS


In the case of a stereo pair of aerial photographs, the shift in the point of observation (the shift in the
camera position between two successive exposures) causes an apparent displacement in the position of
an object. This is called parallax. Considering a pair of aerial photographs of equal focal length, the
stereoscopic parallax of a point is the algebraic difference of the distances of the two images from
their respective photograph nadirs, measured in a horizontal plane and parallel to the air base.
Calculation of Heights using Parallax Difference
The stereoscopic parallax varies with elevation, and the difference in the stereoscopic parallaxes,
or "parallax difference", of two points imaged on a stereo pair of photographs is customarily used in
the determination of the difference in elevation between them. The parallax difference due to an
elevation difference Δh above a reference plane is diagrammatically represented in fig-6. It can
be shown geometrically that the elevation Δh above the reference plane is given by the formula:

Δh = (H × Δp) / (b + Δp)

where H is the flying height above the reference plane, b is the photo air base (the stereoscopic
parallax of points lying on the reference plane) and Δp is the parallax difference.
It must be kept in mind that the above formula gives correct results only when the photographs are truly
vertical.

Measurement of Parallax difference with the Parallax Bar

In the above equation the value of Δp is measured accurately from the photographs by an instrument
called the parallax bar. It consists of two glass plates engraved with measuring marks (dots) connected by a bar
whose length can be changed by a micrometer screw. When seen through the stereoscope, the dots are fused
stereoscopically and appear as a single dot having a fixed position in space. As the separation is changed, the
dot appears to rise or fall. By proper adjustment it can be placed on any feature on the stereo-model surface and
a reading obtained from the micrometer drum. The micrometers are mostly numbered so that the reading
increases as the distance between corresponding points decreases; this means a point with a larger parallax
gives a higher reading, corresponding to a point of higher elevation. Deducting this value from the value
obtained for a point on the reference plane, we get a negative value of Δp for points above the reference plane
and a positive value for points below the reference plane, for substitution in the above formula.
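The height computation described above can be illustrated with a short sketch; it assumes the standard parallax relation given earlier, and the flying height, photo base and parallax difference are made-up example numbers.

```python
# Sketch: elevation difference from a parallax-bar measurement.
def height_above_reference_m(H_m, photo_base_mm, delta_p_mm):
    """dh = H * dp / (b + dp); H above the reference plane, b = parallax of the reference plane."""
    return H_m * delta_p_mm / (photo_base_mm + delta_p_mm)

# Example: flying height 3000 m above the reference plane, photo base 90 mm,
# measured parallax difference of a hilltop relative to the reference plane = 2.5 mm.
print(round(height_above_reference_m(3000.0, 90.0, 2.5), 1))   # ~81.1 m above the reference plane
```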

STEREOSCOPY
A pair of photographs taken from two camera stations covering some common area constitutes a stereo-pair,
which, when viewed in a certain manner, gives the impression that a three-dimensional model of the common
area is being seen. Stereoscopic vision is the capacity of the brain to perceive objects in three
dimensions by physiologically fusing the two different views of the object seen by the right and left eyes.
For obtaining stereovision from a stereo-pair of photographs the following conditions must be
fulfilled:
1. The optical axes of the cameras must be approximately in one plane, though the eyes can accommodate to a
limited degree.
2. The ratio of the distance between the exposure stations to the flying height, the base-height ratio (B/H),
must have an appropriate value. If this value is < 0.2 the depth perception is no stronger than if only one
photograph is used. The ideal value, though not exactly known, is about 0.25.
3. The scale of the two photographs should be approximately the same. Differences up to 15% may be
successfully fused; for continuous observation, however, differences > 5% may be disadvantageous.
4. Each photograph of the pair should be viewed with one eye only.
5. The brightness of the photographs should be similar.
6. While viewing, the photographs should be given the same relative position as they had at the time of
exposure.
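The base-height ratio mentioned in condition 2 follows from the forward overlap, the format size and the focal length, roughly B/H = (1 - overlap) × format/f. The sketch below only illustrates this relation with assumed values.

```python
# Sketch: base-height ratio of vertical photography from overlap, format and focal length.
def base_height_ratio(forward_overlap, format_side_m, focal_length_m):
    """B/H = (1 - overlap) * (format side / focal length), flat terrain assumed."""
    return (1.0 - forward_overlap) * format_side_m / focal_length_m

# Example: 60% forward overlap, 23 cm format, 15 cm wide-angle lens.
print(round(base_height_ratio(0.60, 0.23, 0.15), 2))   # ~0.61
```

With these typical values the B/H ratio is about 0.6, which lies within the 1:3 to 1:0.6 range quoted below and is well above the 0.2 needed for useful depth perception.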
Pseudoscopy
For normal vision the left eye should see only the left hand photo and the right eye should see the right hand
photo. If this condition is reversed the depressions appear as elevations and the elevations as depressions. Such
a condition is known as pseudoscopy.
Distortions in a stereo model
The natural relief in a stereo model is obtained only when the base-height ratio is maintained at the ideal
value of 0.25. The base-height ratios used in aerial photography, however, vary from 1:3 to 1:1 or even 1:0.6.
The stereoscopic image obtained from these photographs is therefore always different and distorted.
There are other factors also which influence the stereo model:
1. As the eye base changes from the separation of the photograph stations to 65 mm (the average eye base),
the scale of the model alters but the view remains similar in all other respects.
2. The photographs are observed at a distance which is not equal to the principal distance (for easy
understanding this term may be taken as a synonym of focal length, though it is not exactly so). The
magnification, and more importantly the ratio of the X, Y scale to the Z (height) scale, changes. We get
a flattened model if this viewing distance is smaller than the principal distance and an exaggerated one if it
is greater than the principal distance.
3. Deformation in the model is produced because the eyes are moved away from the vertical through the
principal points.
4. Lastly, the shape of the object, the shadows and the association of other features influence the depth
perception.
A piece of tracing paper is placed on one of the photos, and the flight line and rays to the top and bottom (say a
and b) points on the given slope are marked. Then the tracing is placed on the second photo so that the flight
line coincides and the point 'a' falls on the ray to 'a'. Then the rays to 'a' and 'b' are drawn. The principal point
of the second photo is marked on the flight line. The distance between the intersections of the rays to 'a' and
'b' gives the horizontal distance d between points 'a' and 'b' at photo scale. The distance between the principal
points of the left and right photos marked on the flight line gives the value of PR. Easy and quick determination
of dips from the photographs can be achieved by converting the apparent dips as visualised in the photographs
to the true values by working out what is called the "personal exaggeration factor" and using a graph for
reading the true dip values. The personal exaggeration factor is determined by first working out the value of
the stereoscopic constant "K", which varies from person to person, by viewing a stereogram provided for this
purpose. The personal exaggeration factor is then calculated by multiplying the value of "K" by the base-height
ratio, which is equal to b/f, where 'b' is the photo base and 'f' is the focal length.
Unit II
REMOTE SENSING
I. BASIC CONCEPTS
Remote sensing may be defined as acquisition of physical data of an object from a distance
without touching or securing an actual contact. The formation of image on the retina of a human
eye is a common example of remote sensing. The eye collects information from only a part of the
electromagnetic radiation (visible light) reflected from the external objects. The information
collected on the retina is transmitted to the mind, which physiologically processes these signals
to form a complete picture.

Though the human eye-mind system can be considered a most advanced remote
sensing system, it gathers information from the outside world only to the extent of the information
brought to it by visible light, which forms a very small part of the electromagnetic spectrum.
If it were able to sense other ranges of the electromagnetic spectrum as well, it would be able to see
much more of the outside world, because information sent by objects over a wide range of the
EM spectrum is always reaching the eye.
In modern remote sensing, the information given out by the objects over a wide range of
the EM Spectrum is captured by what are known as remote sensors and this information is read
and interpreted to know about the objects of interest.

BASIC UNITS AND DEFINITIONS:

Electromagnetic waves can be described in terms of their velocity, wavelength, and frequency. All
electromagnetic waves travel at the same velocity (C) of 299,793 km/sec; for practical purposes, C =
3 × 10⁸ m/sec. This is commonly spoken of as the velocity of light, although light is only one form of
electromagnetic energy. The wavelength, λ, of electromagnetic waves is the distance from any position in a
cycle to the same position in the next cycle, measured in the standard metric system. The different units for
distance in this system are given in Table 1.

Table 1: Units for Distance in the Metric System

Kilometre (km) = 1,000 m
Metre (m) = 1.0 m
Centimetre (cm) = 0.01 m = 10⁻² m
Millimetre (mm) = 0.001 m = 10⁻³ m
Micrometre (µm) = 0.000001 m = 10⁻⁶ m
Nanometre (nm) = 0.000000001 m = 10⁻⁹ m


Frequency, ν, is the number of wave crests passing a given point in a specified unit of time.
Frequency was formerly expressed in cycles per second, but today the hertz (Hz) is the unit for a frequency of
one cycle per second. The terms used to designate frequencies are given in the following table. The relationship
of velocity (C), wavelength (λ) and frequency (ν) is given by the expression:
C = λ × ν
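The relation C = λν can be used to convert between wavelength and frequency; the short sketch below does this for an example wavelength in the visible region.

```python
# Sketch: converting between wavelength and frequency using C = lambda * nu.
C = 3.0e8  # velocity of light, m/sec (practical value used in the text)

def frequency_hz(wavelength_m):
    return C / wavelength_m

def wavelength_m(frequency_hz_value):
    return C / frequency_hz_value

green_light = 0.55e-6                          # 0.55 micrometres, middle of the visible band
print(f"{frequency_hz(green_light):.2e} Hz")   # ~5.45e14 Hz (about 545 THz)
print(f"{wavelength_m(10e9) * 100:.1f} cm")    # a 10 GHz microwave has a ~3 cm wavelength
```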

Terms used to designate frequency:

Hertz (Hz) = 1 cycle per second
Kilohertz (kHz) = 10³ Hz
Megahertz (MHz) = 10⁶ Hz
Gigahertz (GHz) = 10⁹ Hz

Electromagnetic energy or Electromagnetic Spectrum

As a result of the motion of its atoms and molecules, any object at a temperature above absolute zero emits
electromagnetic radiation. Further, when objects receive radiation from natural sources like the Sun or from an
artificial source, a part of it is reflected. Every object in nature has its unique pattern of reflected, emitted and
absorbed radiation. The nature and quantity of EM radiation received from a body, upon detection, become a
valuable source of data for interpreting the important properties of the object.

Electromagnetic energy refers to all energy that moves with the velocity of light in a harmonic wave
pattern. The wave concept explains the propagation of energy, but this energy is detectable only in terms of its
interaction with matter. In this interaction, electromagnetic energy behaves as though it consists of many
individual bodies called 'photons'. The dual concept of waves and particles may be demonstrated for light.

The energy that emanates from a source propagates between the source and the object surface in the form of
waves of energy at the speed of light (3 × 10⁸ m/sec, or 300,000 km/sec). Such energy propagation is called
Electromagnetic Radiation (EMR). The energy waves vary in size and frequency. The plotting of such
variations is known as the Electromagnetic Spectrum (Fig. 7.3). On the basis of wave size and frequency, the
energy waves are grouped into gamma rays, X-rays, ultraviolet rays, visible rays, infrared rays, microwaves
and radio waves. Each of these broad regions of the spectrum is used in different applications. However, the
visible, infrared and microwave regions are the ones used in remote sensing.

Electromagnetic spectral regions


There are different electromagnetic spectral regions depending on the ranges of wavelength within
which they fall. The following table gives the different spectral regions and their wavelength ranges
and their characteristics relevant to remote sensing.
Source of EM radiation
The different sources of EM radiation are given in the following table.
CONCEPT OF SIGNATURES
Any set of observable characteristics which directly or indirectly lead to the identification of an object and/or
its condition is termed a signature. Spectral, spatial, temporal and polarisation variations are four major
characteristics of targets which facilitate discrimination. Spectral variations are the changes in the
reflectance or emittance of objects as a function of wavelength. Spatial arrangements of terrain features
providing attributes such as shape, size and texture of objects, which lead to the identification of objects, are
termed spatial variations. Temporal variations are the changes of reflectivity or emissivity with time. They
can be diurnal and/or seasonal. The variation in reflectivity during the growing cycle of a crop helps
distinguish crops which may have similar spectral reflectances but whose growing cycles are not the same.
Polarisation variations relate to the changes in the polarisation of the radiation reflected or emitted by an
object. The degree of polarisation is a characteristic of the object and hence can help in distinguishing the
object. Such studies have been particularly useful in the microwave region. Signatures are not, however,
completely deterministic. They are statistical in nature, with a certain mean value and some dispersion around
it.

Spectral response of some natural earth surface features


Vegetation
The spectral reflectance of vegetation (Fig. 4) is quite distinct. Plant pigments, leaf structure and total water content are the three important factors which influence the spectrum in the visible, near IR and middle IR wavelength regions respectively. Low reflectance in the blue and red regions corresponds to two chlorophyll absorption bands centred at 0.45 and 0.65 μm respectively. A relative lack of absorption in the green region allows normal vegetation to look green to one's eyes. In the near infrared there is high (~45 per cent) reflectance, transmittance of similar magnitude, and absorptance of only about five per cent. This is essentially controlled by the internal cellular structure of the leaves. As the leaves grow, intercellular air spaces increase and the reflectance increases. As vegetation becomes stressed or senescent, chlorophyll absorption decreases, red reflectance increases, and there is also a decrease in intercellular air spaces, which decreases the reflectance in the near infrared. This is the reason why the ratio of the reflectance in the near infrared to that in the red, or any of the indices derived from these data, are sensitive indicators of vegetation growth/vigour. In the middle infrared region of the spectrum, the spectral response of green vegetation is dominated by strong absorption bands due to water molecules at 1.4, 1.9 and 2.7 μm. In the middle IR, reflectance peaks occur at 1.6 and 2.2 μm. It has been shown that the total incident solar radiation absorbed in this region is directly proportional to the total leaf water content (Tucker, 1980).
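The near-infrared-to-red ratioing mentioned above is most often used through the Normalized Difference Vegetation Index (NDVI). The sketch below is a minimal illustration, assuming the red and near-infrared reflectances are already available as arrays; the reflectance values themselves are hypothetical.

    # Hedged sketch: NDVI = (NIR - red) / (NIR + red)
    import numpy as np

    def ndvi(nir, red):
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-10)  # small constant avoids division by zero

    red = np.array([0.05, 0.12, 0.20])   # healthy -> stressed vegetation
    nir = np.array([0.45, 0.30, 0.22])
    print(ndvi(nir, red))                # approx [0.80, 0.43, 0.05]

Healthy vegetation (low red, high NIR) gives values close to +1, while stressed or senescent vegetation drives the index towards zero, which is why such indices track vegetation vigour.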

Soil
A typical soil reflectance curve shows a generally increasing trend with wavelength in the visible and near infrared regions. Some of the parameters which influence soil reflectance are the moisture content, the amount of organic matter, iron oxide, the relative percentages of clay, silt and sand, and the roughness of the soil surface. As the moisture content of the soil increases, the reflectance decreases, most significantly at the water absorption bands. In a thermal infrared image, moist soils look darker compared to dry soils. In view of the large difference in the dielectric constants of water and soil at microwave frequencies, quantification of soil moisture becomes possible.

Water
Water absorbs most of the radiation in the near infrared and middle infrared regions. This property enables easy delineation of even small water bodies. In the visible region, the reflectance depends upon the reflectance from the water surface, the bottom material and other suspended materials present in the water column. Turbidity in water generally leads to an increase in its reflectance, and the reflectance peak shifts towards longer wavelengths. An increase in the chlorophyll concentration leads to greater absorption in the blue and red regions. Dissolved gases and many inorganic salts do not manifest any changes in the spectral response of water.

Snow and Clouds


Snow has very high reflectance up to 0.8 μm, which then decreases rapidly at longer wavelengths. In the case of clouds, there is non-selective scattering and they appear uniformly bright throughout the range 0.3 to 3 μm. Cloud tops and snow generally have the same temperature, and hence it is not easy to separate them in the thermal infrared region. Hence the two atmospheric windows in the middle infrared wavelength region, 1.55 to 1.75 and 2.11 to 2.35 μm, are important for snow/cloud discrimination.

STAGES IN REMOTE SENSING


The figure below illustrates the processes used in remote sensing data acquisition. The basic processes that help in the collection of information about the properties of objects and phenomena of the earth's surface are as follows:
(a) Source of Energy (sun/self-emission);
(b) Transmission of energy from the source to the surface of the earth;
(c) Interaction of energy with the earth‟s surface;
(d) Propagation of reflected/emitted energy through atmosphere;
(e) Detection of the reflected/emitted energy by the sensor;
(f) Conversion of energy received into photographic/digital form of data;
(g) Extraction of the information contents from the data products; and
(h) Conversion of information into Map/Tabular forms.

Figure 7.2 Stages in Remote Sensing Data Acquisition


a. Source of Energy: The Sun is the most important source of energy used in remote sensing. The energy may also be artificially generated and used to collect information about objects and phenomena, as with flashguns or the energy beams used in radar (radio detection and ranging).
b. Transmission of Energy from the Source to the Surface of the Earth: The energy that emanates from a source propagates between the source and the object surface in the form of waves of energy at the speed of light (3 × 10⁸ m per second). Such energy propagation is called Electromagnetic Radiation (EMR). The energy waves vary in size and frequency. The plotting of such variations is known as the Electromagnetic Spectrum. The electromagnetic (EM) spectrum is the continuous range of electromagnetic radiation, extending from gamma rays (highest frequency and shortest wavelength) to radio waves (lowest frequency and longest wavelength) (Jensen, 2007). However, the visible, infrared and microwave regions of the EM spectrum are used in remote sensing (Fig-----).
c. Interaction of Energy with the Earth's Surface: The propagating energy finally interacts with the objects on the surface of the earth. This leads to absorption, transmission, reflection or emission of energy by the objects. We all know that objects vary in their composition, appearance, form and other properties. Hence, the objects' responses to the energy they receive are also not uniform. Besides, a particular object also responds differently to the energy it receives in different regions of the spectrum (Fig. 7.5). For example, a fresh water body absorbs more energy in the red and infrared regions of the spectrum and appears dark/black in a satellite image, whereas a turbid water body reflects more in the blue and green regions of the spectrum and appears in a light tone (Fig. 7.4).

Figure 7.4 Spectral Signature of Soil, Vegetation and Water

d. Propagation of Reflected/Emitted Energy through the Atmosphere: When energy is reflected from objects on the earth's surface, it re-enters the atmosphere. You may be aware of the fact that the atmosphere comprises gases, water molecules and dust particles. The energy reflected from the objects comes in contact with these atmospheric constituents, and the properties of the original energy get modified. Whereas carbon dioxide (CO2), hydrogen (H) and water molecules absorb energy in the middle infrared region, the dust particles scatter the blue energy. Hence, the energy that is either absorbed or scattered by the atmospheric constituents never reaches the sensor placed onboard a satellite, and the properties of the objects carried by such energy waves are left unrecorded.

e. Detection of Reflected/Emitted Energy by the Sensor: The sensors recording the energy they receive are placed in a near-polar sun-synchronous orbit at an altitude of 700 – 900 km. These satellites are known as remote sensing satellites (e.g. the Indian Remote Sensing series). In contrast, weather monitoring and telecommunication satellites are placed in a geostationary position, in which the satellite revolves around the earth at an altitude of nearly 36,000 km in the same direction, and with the same period, as the earth's rotation about its axis, so that it always remains over the same point on the equator (e.g. the INSAT series of satellites). A comparison between remote sensing and weather monitoring satellites is given in Box 7.1. Figure 7.6 shows the orbits of Sun-synchronous and Geostationary satellites respectively.

Figure 7.6 Orbit of Sun Synchronous (Left) and Geostationary (Right) Satellites

f. Conversion of Energy Received into Photographic/Digital Form of Data: The radiation received by the sensor is electronically converted into a digital image. It comprises digital numbers arranged in rows and columns (a small illustrative sketch follows this list). These numbers may also be converted into an analogue (picture) form of data product. The sensor onboard an earth-orbiting satellite electronically transmits the collected image data to Earth Receiving Stations located in different parts of the world. In India, one such station is located at Shadnagar near Hyderabad.

g. Extraction of Information Contents from Data Products: After the image data is received at the earth station, it is processed to eliminate errors caused during image data collection. Once the image is corrected, information extraction is carried out from digital images using digital image processing techniques, and from analogue data products by applying visual interpretation methods.

h. Conversion of Information into Map/Tabular Forms: The interpreted information is finally delineated and converted into different layers of thematic maps. Besides, quantitative measures are also taken to generate tabular data.
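As a minimal illustration of step (f), the sketch below takes a small hypothetical array of digital numbers (rows × columns) and linearly stretches it to the 0–255 range of an 8-bit display. This is only a toy example, not the actual ground-station processing chain.

    import numpy as np

    # A tiny hypothetical image of raw digital numbers (rows x columns)
    dn = np.array([[120,  340,  560],
                   [ 80,  910, 1023],
                   [ 15,  470,  700]])

    # Simple linear contrast stretch to an 8-bit analogue (picture) form
    stretched = (dn - dn.min()) / (dn.max() - dn.min()) * 255
    print(stretched.astype(np.uint8))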

Remote Sensing - Sensors and Platforms


Sensors
Sensors can be classified as passive or active. Sensors which sense natural radiation, either emitted or reflected from the Earth, are called passive sensors. It is also possible to produce electromagnetic radiation of a specific wavelength or band of wavelengths and illuminate a terrain on the Earth's surface. The interaction of this radiation with the target can then be studied by sensing the scattered radiation from the target. Such sensors, which produce their own electromagnetic radiation, are called active sensors. A photographic camera which uses only sunlight is a passive sensor, whereas one which uses a flash bulb is an active sensor. Again, sensors (active or passive) can be either imaging, like the camera, or non-imaging, like the non-scanning radiometer. Sensors are also classified on the basis of the range of the electromagnetic region in which they operate, such as optical or microwave (Figure 1).

Various types of sensors generally used in Earth observation are described briefly.
1. Photographic and Television Cameras: Photographic cameras are the oldest and probably the most
widely used imaging systems. They have been successfully used from aircraft, balloons, manned and
unmanned spacecraft. A multiband camera enables simultaneous photography of a ground scene in more than
one spectral band. Some of the limitations of photographic cameras are their limited spectral response (only up
to ~ 0.9 μm) and dynamic range, non-amenability to digital processing and problems associated with
reproducibility of the quality of the imagery. Television cameras were the first imaging systems used in space
to get the imagery of the Earth telemetered down as electrical signals.

2. Optical Mechanical Scanners: In an optical mechanical scanner, the radiation emitted (or reflected) from the scene is intercepted by a scan mirror, which diverts the radiation to a collecting telescope (Figure 2). The telescope focuses the radiation on to a detector. The detector receives radiation from an area on the ground (the picture element, or pixel) defined by the detector size and the focal length of the telescope (see the sketch after this list). By rotating the scan mirror, which is normally inclined at 45° to the optical axis, the detector is made to look at adjacent pixels on the ground. Typical instruments using this principle include the LANDSAT MSS and TM, and the Very High Resolution Radiometer onboard INSAT.

3. Linear Imaging Self Scanning Sensors (LISS): In this system, the basic sensor is a linear array of solid-state detectors. The optics focuses a strip of terrain in the cross-track direction on to the sensor array. The image from each detector is stored and shifted out sequentially to produce a video signal. The motion of the platform produces successive scan lines, thereby building a two-dimensional picture (Figure 3). The spatial resolution primarily depends on the number of photo detectors available in the linear array and the required swath (see the sketch after this list). Such sensors are expected to give a resolution of a few tens of metres even from geostationary altitudes.
4. LIDAR: With the advancement of high-power laser technology in the optical and IR regions, active laser remote sensing is a promising new means of obtaining useful information on the Earth and its environment, especially related to atmospheric constituents and phenomena. The laser system used for remote sensing is referred to as LIDAR (an acronym for light detection and ranging, similar to RADAR).
5. Passive Microwave Radiometer: Microwave radiometers are passive sensors used to measure the emitted energy. The emitted energy is collected by a suitable antenna. The signal is represented as an equivalent temperature, that is, the temperature of a black body source which would produce the same amount of signal within the bandwidth of the system.
6. Microwave Active Sensors: Side-looking airborne radar (SLAR) was the first active sensor used to produce imagery of the terrain from backscattered microwave radiation. The antenna, mounted sideways on an aircraft, transmits pulsed microwave energy which illuminates the ground. The return signal is received by the same antenna and is processed either on board or on the ground. The radar returns scattered back from different points in the field of view are separated in phase at the radar receiver. The scattered energy depends on the radar cross-section of the target, the wavelength, the slant range and the radiation pattern. The spatial resolution of a SLAR at a given height and look angle is controlled by two independent system parameters, namely the pulse duration for range resolution and the antenna length for azimuth resolution (a rough numerical sketch is given after this list). SLAR cannot produce fine-resolution radar imagery from satellite altitudes; synthetic aperture radar (SAR) overcomes this problem.
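The ground-pixel geometry described for the optical mechanical scanner (item 2) and the LISS-type pushbroom array (item 3) can be sketched in a few lines of Python. The detector size, focal length, orbit height, swath and detector count below are assumed, illustrative values, not the parameters of any real instrument.

    # Illustrative ground-pixel geometry (all numbers assumed)

    def scanner_ground_pixel(detector_size_m, focal_length_m, height_m):
        # IFOV (detector size / focal length, small-angle) times platform height
        ifov_rad = detector_size_m / focal_length_m
        return ifov_rad * height_m

    def pushbroom_pixel(swath_m, n_detectors):
        # Across-track pixel of a linear array: swath divided by detector count
        return swath_m / n_detectors

    print(scanner_ground_pixel(10e-6, 0.5, 800e3))  # 16 m ground pixel
    print(pushbroom_pixel(140e3, 6000))             # ~23.3 m across-track pixel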
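Similarly, the two SLAR resolutions mentioned in item 6 follow from the standard textbook relations: ground range resolution depends on the pulse duration and the depression angle, while real-aperture azimuth resolution grows with slant range. The numbers below are assumed values used only to show why a real-aperture radar cannot give fine resolution from satellite altitudes.

    import math

    C = 3.0e8  # speed of light, m/s

    def ground_range_resolution(pulse_duration_s, depression_angle_deg):
        # range resolution = c * tau / (2 * cos(depression angle))
        return C * pulse_duration_s / (2 * math.cos(math.radians(depression_angle_deg)))

    def azimuth_resolution(slant_range_m, wavelength_m, antenna_length_m):
        # real-aperture azimuth resolution ~ slant range * wavelength / antenna length
        return slant_range_m * wavelength_m / antenna_length_m

    print(ground_range_resolution(0.1e-6, 45))   # ~21 m for a 0.1 microsecond pulse
    print(azimuth_resolution(10e3, 0.056, 3.0))  # ~190 m at 10 km slant range (aircraft)
    print(azimuth_resolution(800e3, 0.056, 3.0)) # ~15 km at 800 km slant range (satellite)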

PLATFORMS:

The sensor systems need to be placed on suitable observation platforms, which can be stationary or mobile depending upon the needs of observation and its constraints. For an imaging system, in general, the spatial resolution becomes poorer as the platform height increases, but the area coverage increases. Thus a trade-off between resolution and synoptic view is necessary in choosing the platform altitude. Further, the platform's ability to support the sensor, in terms of weight, volume, power, etc., and its stability have to be considered. Though aircraft, balloons, rockets and satellites have all been used as platforms, the most extensively used are aircraft and satellites. Aircraft are mainly useful for surveys of local or limited regional interest. One of their major advantages is the ability to be available at a particular location at a specified time. They can be used from low altitudes (~1 km) up to a few tens of kilometres, depending on the aircraft. Currently there are aircraft fitted with multiple sensors, capable of observations covering the whole range of the electromagnetic spectrum. The major limitation is the high cost of global coverage, and even of regional coverage on a repetitive basis.
Earth observation from a satellite platform provides a synoptic view of a large area, which is very useful for understanding interrelationships between various features; further, observations can be made under a known solar zenith angle, providing similar illumination conditions. Another major advantage of a satellite is its ability to provide repetitive observations of the same area at intervals of a few minutes to a few weeks, depending on the sensor and orbit. This capability is very useful for monitoring dynamic phenomena such as cloud evolution, vegetation cover, snow cover, etc. A spacecraft consists mainly of the mainframe, power, thermal, attitude and orbit control, telemetry, tracking and command systems, etc., besides the payload (sensor).

Orbits
Two types of spacecraft orbits are possible: (i) geostationary and (ii) near-earth orbit (Figure 4). For a satellite orbiting in the equatorial plane of the Earth from west to east at about 36,000 km above the Earth, the period of revolution of the satellite exactly coincides with that of the rotation of the Earth about its own axis. Thus the satellite appears stationary with respect to the Earth. Such an orbit is called geostationary. Geostationary satellites are extensively used for communication and meteorological observations. Due to the large distance from the Earth, high-resolution imaging from geostationary satellites is difficult; a resolution of about a kilometre has been successfully obtained from a number of geostationary satellites, including INSAT-2E.

Near-earth orbit heights vary from a few hundred kilometres to several thousand kilometres. The most useful orbit in this category for remote sensing is the circular, near-polar, sun-synchronous orbit. In a sun-synchronous orbit, all points at a given latitude (say, on a descending pass) are crossed at the same local mean solar time. Further, the ground trace of a sun-synchronous satellite can be made to recur over a scene exactly at intervals of a fixed number of days by maintaining the height of the orbit to a close tolerance, thus ensuring repetitive observations of a scene at the same local time. However, it should be noted that changes in the solar zenith angle due to seasonal variations cannot be eliminated. Polar orbits facilitate global coverage, and circularity ensures that the spatial resolution is maintained.
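The roughly 36,000 km geostationary altitude quoted above can be checked from Kepler's third law, T = 2π√(a³/GM). The short sketch below uses standard values for the Earth's gravitational parameter, radius and sidereal day; these constants are assumed here rather than taken from the text.

    import math

    GM = 3.986e14         # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.378e6     # equatorial radius, m
    T_SIDEREAL = 86164.0  # one sidereal day, s

    # T = 2*pi*sqrt(a^3/GM)  ->  a = (GM * (T/(2*pi))^2)^(1/3)
    a = (GM * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1.0 / 3.0)
    print((a - R_EARTH) / 1000.0)  # ~35,786 km above the surface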
SENSOR PARAMETERS OR SENSOR RESOLUTIONS:
The major sensor parameters which have a bearing on optimum utilisation of the data include:
(i) spatial resolution – the capability of the sensor to discriminate the smallest object on the ground;
(ii) spectral resolution – the spectral bandwidth with which the imagery is taken;
(iii) radiometric sensitivity – the capability to differentiate the spectral reflectance/emittance between various targets; and
(iv) dynamic range – the range from the minimum to the maximum reflectance that can be faithfully measured.
In addition, the sensor should produce imagery with geometric fidelity. Repetitivity, or temporal resolution, i.e. the time interval at which the same area gets imaged by the sensor, is another important parameter of the sensor-orbit system. It is not possible to simultaneously get the best of all parameters; hence a trade-off between the various parameters is required in realising a sensor system.
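One way to make radiometric sensitivity and dynamic range concrete is through digital quantisation: with more bits per pixel, the measured radiance range is divided into more grey levels, so smaller differences between targets can be recorded. The bit depths below are typical examples, not values stated in this text.

    # Number of grey levels available at different quantisation depths
    for bits in (6, 7, 8, 10, 12):
        print(bits, "bits ->", 2 ** bits, "grey levels")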

The following examples illustrate the sensor characteristics of various sensors:

LANDSAT 7 ETM+
  Bands and wavelength (μm): 1 Blue-Green (0.45–0.515), 2 Green (0.525–0.605), 3 Red (0.63–0.69), 4 Near IR (0.75–0.90), 5 SWIR-I (1.55–1.75), 7 SWIR-II (2.08–2.35), 6 TIR (10.40–12.5), 8 Panchromatic (0.52–0.90)
  Spatial resolution at nadir: 30 m (bands 1–5, 7), 60 m (band 6), 15 m (band 8)
  Swath width: 183 × 170 km
  Temporal resolution: 16 days
  Orbit altitude: 705 km
  Date of capture: September 2002

IRS-P6 LISS III
  Bands and wavelength (μm): Green (0.52–0.59), Red (0.62–0.68), NIR (0.76–0.86), SWIR (1.55–1.7)
  Spatial resolution at nadir: 23.5 m
  Swath width: 142 km (bands 1–3), 148 km (band 4)
  Temporal resolution: 24 days
  Orbit altitude: 817 km
  Date of capture: June 2007

SRTM (C-SIR and X-SAR)
  Bands and wavelength: C-band 5.6 cm, X-band 3.1 cm
  Spatial resolution at nadir: 3 arc-second (90 m)
  Swath width: 225 km (C-band), 50 km (X-band)
  Temporal resolution: mission duration of 11 days
  Orbit altitude: 233 km
  Date of capture: February 2000

AERIAL PHOTO/IMAGERY INTERPRETATION or PHOTO INTERPRETATION TECHNIQUES:

Interpretation may be defined as the prediction of what cannot be directly observed. Photo/image interpretation is that branch of science in which photographs/images are used to obtain specific information by analysing and interpreting the images. An interpreter may be interested in extracting information from the photographs or imagery for various purposes.

RECOGNITION ELEMENTS: The most important and commonly employed recognition


elements in photo interpretation are: a) Tone b) Texture c) Colour d) Pattern e) Shape f) Size g)
Association

1. Tone: It is a measure of the relative amount of light reflected by an object and actually
recorded on a photograph/image. In black and white photography colours of different objects are
reduced to various shades of grey depending upon the reflectivity of the objects. Various shades
of grey can be recognised from white at one end to very dark or black at the other. Depending
upon the density of shade the various tones may be defined as very light, light grey, medium
grey, dark grey, dark, very dark, etc.
It must be emphasised here that the tone of an object as depicted on the photograph is not
an absolute value. It is a relative value. Basically, an interpreter is not interested in absolute tonal
values, but in relative values or in tonal contrasts, which enable identification of one object from
the other. The tonal value of an object in a given photograph is a net result of several factors,
most of them subject to variation.
The factors that influence the tonal value are indeed numerous, but important among them are (a) the angle of the sun at the time of photography, (b) atmospheric factors like cloud cover and haze, (c) the type of film used (viz. slow, fast, etc.), (d) the amount of exposure, (e) the processing of the film, and so on. The tone of an outcrop will also be governed by several factors such as (a) weathered or fresh rock, (b) smooth or rough surface, (c) the amount of moisture in the rock, (d) bare rock, or rock with vegetation and/or grass cover, (e) rock with moss or lichen cover, and so on.

2. Texture: This may be defined as the frequency of tonal change within an image produced by a number of unit features too small to be discernible individually on the photograph. The scale of the photo, therefore, has an important bearing on the texture. Texture may be described as coarse, fine, mottled, banded, dotted, smooth, rough, even, uneven, speckled, granular, blocky, rippled, matted, etc.
Texture is, however, rarely used as the lone criterion of identification or correlation. Where two rock units in contact exhibit the same tone, the textural variation between them may help in distinguishing one from the other.
3. Colour: Colour as a criterion for recognition of objects can be successfully employed
while handling colour photographs/imagery. On the basis of colour differences, different
litho units can perhaps be better identified and delineated on colour photographs/imagery
than in black and white photographs/imagery. It may be possible to identify gossans or
leached cappings because of their distinctive colour. However, where two lithounits have
almost the same or similar colour, these can be better appreciated and delineated on black
and white aero-space data since the two are likely to have different reflectivity because of
differences in mineralogical composition. Colour images may be 'True colour' or 'False
colour' depending upon the film type and filters used. In false colour photography using
appropriate filters, certain features can be enhanced for better delineation. The black and
white multispectral or multiband photography can be combined, in additive colour viewer
to obtain 'true' or 'false' colour image.
4. Pattern: It refers to the orderly spatial arrangement of geologic, topographic or vegetation features. Patterns normally develop due to the orderly distribution of landforms, drainage networks, erosion, alignment of vegetation, and linear structural features such as joints or fractures. Pattern is therefore an important element in interpretation.
5. Drainage pattern
Drainage pattern is an important element in the geologic interpretation of aerial photos. In bedrock areas the drainage pattern depends for the most part on the lithologic character of the underlying rock, the attitude of these rock bodies, and the arrangement and spacing of the planes of lithologic and structural weakness encountered by runoff. The different drainage patterns and their significance are briefly given below:
a. Dendritic drainage: This is the most common drainage pattern and is characterised by a tree-like branching system in which tributaries join the gently curving main stream at acute angles. The occurrence of this drainage system indicates homogeneous, uniform soil and rock materials, and it is typical of the landforms of soft sedimentary rocks, volcanic tuff, dissected deposits of thick glacial till and old dissected coastal plains.
b. Trellis drainage: Trellis patterns are modified dendritic forms with parallel tributaries
and short parallel gullies occurring at right angles. This pattern indicates a bedrock
structure rather than a type of bedrock and usually indicates tilted, interbedded,
sedimentary rocks in which the main, parallel channels follow the strike of the beds.
c. Parallel drainage: Parallel drainage systems develop on homogeneous, gentle,
uniformly sloping surfaces whose main collector streams may indicate a fault or fracture.
Tributaries characteristically join the mainstream at approximately the same angle. Such
landforms as young coastal plains and large basalt flows are excellent regional examples
of this drainage pattern.
d. Radial drainage: A circular network of almost parallel channels flowing away from a
central high point characterizes this pattern. A major collector stream is usually found in
a curvilinear alignment around the bottom of the elevated topographic feature.
Volcanoes, isolated hills, and domelike landforms exhibit this type of drainage network.
e. Annular drainage: This type of pattern is developed on topographic forms usually
similar to those associated with radial patterns, but in this case the bedrock joints or
fracturing control the parallel tributaries. Granitic or sedimentary domes may develop this
type of pattern
f. Rectangular drainage: Rectangular patterns are also variations of a dendritic system.
Here the tributaries join the mainstream at right angles and form rectangular shapes
controlled by bedrock jointing, foliations, or fracturing. The stronger or more harsh the
pattern, the thinner the soil cover. These patterns are often formed in slate, schist, and
gneiss, in resistive sandstone in arid climates, or in sandstone in humid climates.

6. Vegetation pattern: Study and analysis of vegetation pattern also often yield clues to the
identification of underlying lithology or structures. For instance a linear vegetation
pattern clearly reveals a joint or a fracture or a fault.
7. Shape: Some geologic features can be recognised on the photographs merely by their
shape. For example volcanic cones or impact craters reveal themselves merely by their
shape. Alluvial fans or sand dunes are other features which can be easily recognised
merely by their characteristic shape. Domal structures and doubly plunging anticlines
reveal themselves through their oval or elongate oval shapes.
8. Size: Size of features observed on photos/images sometimes helps in the interpretation.
Larger than average linear features generally indicate faults. The thickness of beds may at
times be a supporting factor in stratigraphic correlation.
9. Association: This is one of the important recognition elements. Features of glacial terrain, for example, can be distinguished more easily when their association with other glaciated features is observed. Kettles, which are features of glaciated terrain, may be mistaken for similar-looking sinkholes in limestone country unless the association is taken into account. In several other ways, a close examination of associated features helps not only in identifying geological features, but also in keeping the interpretation in the right direction.
