
REMOTE SENSING AND IMAGE INTERPRETATION

Introduction

Remote Sensing, which includes aerial photographs and satellite images, refers to data collection
taken from a significant distance from the subject. This often refers to photographs and video
taken from above at a significant altitude. Remote sensing produces images of a much larger
area of the Earth's surface than a person on the ground can photograph. It also shows the position
and relationship between objects and geographic features within the area in the image.
Combining special sensors with remote imaging can help assess the health of forests, detect the
movement of camouflaged military vehicles, and track changes in geographic features.

Aerial photographs are produced by exposing film to solar energy reflected from Earth.
Photographs and other images of the Earth taken from the air and from space show a great deal
about the planet's landforms, vegetation, and resources. Aerial and satellite images, known as
remotely sensed images, permit accurate mapping of land cover and make landscape features
understandable on regional, continental, and even global scales. Transient phenomena, such as
seasonal vegetation vigor and contaminant discharges, can be studied by comparing images
acquired at different times.

Basic Principles of RS Image Data


Remote sensing image data are more than a picture; they are measurements of electromagnetic
energy (EME). Image data are stored in a regular grid format (rows and columns). A single
image element is called a pixel, a contraction of 'picture element'. For each pixel, the
measurements are stored as Digital Number (DN) values. Typically, for each measured
wavelength range a separate data set is stored, which is called a band or a channel and sometimes
a layer.
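As a minimal sketch of this grid structure (assuming the open-source rasterio library and a
hypothetical multiband image file named scene.tif, neither of which is part of this module), the
following Python snippet reads one band and inspects its DN values:

import rasterio

# Open a multiband image and report its grid dimensions
with rasterio.open("scene.tif") as src:
    print(src.count, "bands,", src.height, "rows,", src.width, "columns")
    band1 = src.read(1)                     # 2D array of DN values for band 1
    print("DN value at row 0, column 0:", band1[0, 0])

Each band is returned as a separate two-dimensional array of DN values, matching the
band/channel/layer organization described above.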

Photogrammetry
The term photogrammetry is derived from three Greek words: phos or phot, meaning light; gramma,
meaning something drawn or written; and metrein, meaning to measure. This definition, over the years,
has been enhanced to include interpretation as well as measurement with photographs. Photogrammetry
is the art, science, and technology of obtaining reliable information about physical objects and the
environment through processes of recording, measuring, and interpreting photographic images.
Originally, photogrammetry was considered the science of analyzing only photographs, but it now also
includes the analysis of other records, such as radiated acoustic energy patterns and magnetic
phenomena. Photogrammetry includes two aspects:
Metric: involves making precise measurements from photos and other information sources
to determine, in general, the relative locations of points. Its most common application is the
preparation of planimetric and topographic maps. Metric photogrammetry is classically divided
into two branches, terrestrial photogrammetry and aerial photogrammetry:
a) Terrestrial Photogrammetry: photographs of terrain in an area are taken from fixed,
and usually known, positions on or near the ground, with the camera axis horizontal or
nearly so.
b) Aerial Photogrammetry: photographs of terrain in an area are taken by a precision
photogrammetric camera mounted in an aircraft flying over the area. This is the type used
for surveying applications.

Interpretative: involves the recognition and identification of objects and judging their
significance through careful and systematic analysis. It includes photographic interpretation,
which is the study of photographic images, as well as the interpretation of other images acquired
in remote sensing.

Classification of Photographs
Depending on the position of the camera at the time of photography, photographs are broadly
classified into two:
 Terrestrial photograph and
 Aerial photograph

Terrestrial Photograph: taken with a ground-based camera, or with a photo-theodolite, whose
position and orientation may be measured directly at the time of exposure. The camera station
is on the ground and the camera axis is horizontal or nearly horizontal.

Figure: Photo Theodolite


Aerial photography: commonly classified as either vertical or oblique. Vertical photos are
taken with the camera axis directed as nearly vertically as possible. The following paragraphs
classify aerial photographs used in different applications on the basis of the alignment of the
optical axis:
a) Truly Vertical: the optical axis of the camera is held in a vertical or nearly vertical position.
b) Oblique: the photograph is taken with the optical axis intentionally inclined to the vertical.
There are two types of oblique photographs:
 High oblique: an oblique photograph that contains the apparent horizon of the Earth.
 Low oblique: the apparent horizon does not appear in the photograph.
c) Trimetrogon: Combination of a truly vertical and two oblique photographs in which the
central photo is vertical and side ones are oblique. Mainly used for reconnaissance.
d) Convergent: a pair of low oblique photographs taken in sequence along a flight line such
that both photographs cover essentially the same area, their axes tilted at a fixed inclination
from the vertical in opposite directions along the flight line, so that the forward exposure
of the first station forms a stereo-pair with the backward exposure of the next station.

Scale Measurement from Aerial Photograph


The scale of a photograph is important for acquiring information from it. The scale of
a given photograph can be determined by:
a) Measuring the distance between two well-defined points on the ground as well as on the
photograph, and computing:

Scale of Photograph = Photo Distance / Ground Distance
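As a worked example (the figures are illustrative, not from this module): if two road
intersections measure 4 cm apart on the photograph and 2 km (200,000 cm) apart on the ground,
the scale is 4 / 200,000, i.e. 1:50,000.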
Applications of Photogrammetry
Photogrammetry has been used in several areas. The following descriptions give an overview of
various application areas of photogrammetry.
 Geology: Structural geology, investigation of water resources, analysis of thermal patterns
on earth's surface, geomorphological studies including investigations of shore features.
 Forestry: Timber inventories, cover maps, acreage studies.
 Agriculture: Soil type, soil conservation, crop planting, crop disease, crop acreage.
 Design and construction: Data needed for site and route studies, specifically for comparing
alternate schemes. Used in the design and construction of dams, bridges, transmission lines,
planning of cities and highways, new highway locations, detailed design of construction
contracts, and planning of civic improvements.
 Cadaster: Cadastral problems such as determination of landlines for assessment of taxes.
Large-scale cadastral maps are prepared for reapportionment of land.
 Environmental Studies: Land-use studies.
 Exploration: To identify and narrow down areas for various exploratory jobs such as oil
or mineral exploration.
 Military intelligence: Reconnaissance for deployment of forces, planning maneuvers,
assessing effects of operations, and studying problems related to topography, terrain
conditions or works.
 Miscellaneous: Crime detection, traffic studies, oceanography, meteorological
observation, architectural and archaeological surveys, contouring beef cattle for animal
husbandry etc.
Identifying Features Using Stereoscopy
A pair of stereoscopic photographs or images can be viewed stereoscopically by looking at the
left image with the left eye and the right image with the right eye. This is called stereoscopy.
Stereoscopy is based on the Porro-Koppe principle: the same light path will be generated
through an optical system if a light source is projected onto the image taken by that optical
system. The principle is realized in a stereo model if a pair of stereoscopic images is
reconstructed using the relative position and tilt at the time the photographs were taken. Such an
adjustment is called relative orientation in photogrammetric terms. The eye base and the photo
base must be parallel in order to view a stereoscopic model.

Usually a stereoscope is used for image interpretation. There are several types of stereoscope,
for example, the portable lens stereoscope, the mirror stereoscope, the zoom transfer scope, etc.
The process of stereoscopy for aerial photography is as follows. First, the center of each aerial
photograph, called the principal point, should be marked. Secondly, the principal point of the
right image should be plotted in its position on the left image, and at the same time the principal
point of the left image should be plotted on the right image. The principal points and the
transferred points should be aligned along a straight line, called the base line, with the
appropriate separation (normally 25-30 cm in the case of a mirror stereoscope) as shown in the
Figure below. By viewing through the binoculars, a stereoscopic model can now be seen.

The advantage of stereoscopy is the ability to extract 3D information, for example, distinguishing
between tall and low trees, terrain features such as the height of terraces, slope gradient,
detailed geomorphology in flood plains, the dip of geological layers, and so on.
The principle of height measurement by stereoscopic vision is based on the use of parallax,
which corresponds to the distance between the image points of the same ground object on the
left and right images. The height difference between two points can be computed if the parallax
difference between them is measured using a parallax bar (see Figure below).

Figure: Principle of stereoscopic height measurement
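As a hedged illustration of this parallax relation (the symbols and figures below are standard in
photogrammetry texts, not taken from this module), the height difference can be approximated by

Δh = (H × Δp) / (pa + Δp)

where H is the flying height above the lower point, pa is the absolute parallax of that point, and
Δp is the parallax difference measured with the parallax bar. For instance, with H = 3,000 m,
pa = 90 mm and Δp = 1.5 mm: Δh = (3,000 × 1.5) / 91.5 ≈ 49 m.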

Image Interpretation
Image interpretation is the extraction of qualitative and quantitative information in the form of a
map, about the shape, location, structure, function, quality, condition, relationship of/between
objects, etc. by using human knowledge or experience. As a narrow definition, photo-
interpretation is sometimes used as a synonym of image interpretation.

Image interpretation in satellite remote sensing can be made using a single scene of a satellite
image, while photo-interpretation usually uses a pair of stereoscopic aerial photographs to
provide stereoscopic vision with, for example, a mirror stereoscope. Such single-image
interpretation is distinguished from stereo photo-interpretation.

Image reading is an elemental form of image interpretation. It corresponds to simple
identification of objects using such elements as shape, size, pattern, tone, texture, color, shadow
and other associated relationships. Image reading is usually implemented with interpretation
keys with respect to each object. Image measurement is the extraction of physical quantities,
such as length, location, height, density, temperature and so on, by using reference data or
calibration data deductively or inductively.

Basic Image Data Characteristics


Remote sensing image data of the earth's surface acquired from either aircraft or spacecraft
platforms are readily available in digital format; spatially the data are composed of discrete
picture elements, or pixels, and radiometrically they are quantized into discrete brightness
levels. Even data that are not recorded in digital form initially can be converted into discrete
data by use of digitizing equipment. The great advantage of having data available digitally is
that they can be processed by computer, either for machine-assisted information extraction or
for enhancement before an image product is formed. The latter assists the role of
photointerpretation.

Remote sensing images are characterized by four types of resolution: spatial, spectral,
radiometric and temporal.
 Spatial resolution: a measure of the smallest object that can be resolved by the sensor, or
the area on the ground represented by each pixel. The finer the resolution, the lower the
number. For instance, a spatial resolution of 79 meters is coarser than a spatial resolution
of 10 meters.
 Spectral resolution: refers to the specific wavelength intervals in the electromagnetic
spectrum that a sensor can record. For example, band 1 of the Landsat TM sensor records
energy between 0.45 and 0.52 μm in the visible part of the spectrum.
 Radiometric resolution: refers to the dynamic range, or number of possible data file
values in each band. This is referred to by the number of bits into which the recorded
energy is divided. For instance, in 8-bit data, the data file values range from 0 to 255 for
each pixel, but in 7-bit data, the data file values for each pixel range from 0 to 127.
 Temporal resolution: refers to how often a sensor obtains imagery of a particular area. For
example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT,
on the other hand, can revisit the same area every three days. Temporal resolution is an
important factor to consider in change detection studies.
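Since radiometric resolution is expressed in bits, the number of available brightness levels
follows directly from the bit depth; a minimal Python illustration:

# Radiometric resolution: number of possible DN levels per bit depth
for bits in (7, 8, 11, 16):
    print(f"{bits}-bit data: {2**bits} levels, DN range 0 to {2**bits - 1}")

Landsat 8 OLI, for instance, records 12-bit data (4,096 levels), typically delivered in 16-bit files.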
Image analysis is the understanding of the relationship between interpreted information and the
actual status or phenomenon, and the evaluation of the situation. Extracted information is finally
represented in map form, called an interpretation map or thematic map. Generally, the accuracy
of image interpretation is not adequate without some ground investigation. Ground
investigations are necessary, first when the interpretation keys are established and then when the
preliminary map is checked.

Elements of Image Interpretation


The following eight elements are most commonly used in image interpretation: size, shape,
shadow, tone, color, texture, pattern, and associated relationship or context.
a) Size: a proper photo-scale should be selected depending on the purpose of interpretation.
The approximate size of an object can be measured by multiplying its length on the image by
the inverse of the photo-scale.

Figure: Satellite view of a part of a city


b) Shape: the specific shape of an object as viewed from above is what appears on a
vertical photograph; therefore, the shape of objects as seen from directly overhead should be
known. For example, the crown of a conifer tree looks like a circle, while that of a deciduous
tree has an irregular shape. Airports, harbors, factories and so on can also be identified by
their shape.
Figure: Image of an area
c) Shadow: shadow is usually a visual obstacle for image interpretation. However, shadow can
also give height information about towers, tall buildings, etc., as well as shape information
from a non-vertical perspective, such as the shape of a bridge.

Shadow is thus both a helpful element in image interpretation and a source of difficulty in
identifying some objects in the image. Knowing the time of photography, we can estimate the
solar elevation/illumination, which helps in estimating the heights of objects. The outline or
shape of a shadow affords an impression of the profile view of objects.

Figure: Shadow of objects used for interpretation


d) Tone/Color: tone is the relative brightness of the grey level on a black-and-white image or
on a color/FCC image. Tone is a measure of the intensity of the radiation reflected or emitted
by objects in the terrain. Objects with lower reflectance appear relatively dark, and objects
with higher reflectance appear bright.

Color is more convenient for the identification of object details. For example, vegetation types
and species can be more easily interpreted by less experienced interpreters using color
information. Sometimes color infrared photographs or false color images will give more specific
information, depending on the emulsion of the film or the filter used and the object being imaged.

In panchromatic photographs, every object reflects its unique tone according to its reflectance.
For example, dry sand appears white, while wet sand appears black. In black-and-white near
infrared photographs, water is black and healthy vegetation white to light gray.

Figure: Satellite image of an area in gray scale (a) and in standard false color composite (b)
e) Texture: texture is a group of repeated small patterns. For example, homogeneous grassland
exhibits a smooth texture, while coniferous forests usually show a coarse texture. However,
this depends on the scale of the photograph or image.

Figure: High resolution image showing different textures.


f) Pattern: pattern is a regular, usually repeated arrangement with respect to an object. For
example, rows of houses or apartments, regularly spaced rice fields, highway interchanges,
orchards, etc., can be identified from their unique patterns.

Figure: High resolution image showing different Pattern


g) Associated relationships or context: the associated combination of elements, geographic
characteristics, configuration of the surroundings, or the context of an object can provide
the user with specific information for image interpretation.

Figure: Image of associated relationship


Practical Session
1. Acquiring RS data/Satellite Image
This exercise uses the USGS Earth Explorer website (http://earthexplorer.usgs.gov/) to search
for and download available satellite data (Landsat 8 OLI/TIRS and ESA's Sentinel-2 images) for
further processing and analysis.

Procedures: Follow the instructions given below to query and download target images (scenes).
Example of search criteria:
Path: 168/169
Row: 55/56
Date: January 2020
Data Set: OLI
Cloud Cover: less than 10%
Data Type Level 1: OLI L1T

Table of Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) Data Set
Additional Criteria

Download the required image


2. Creating Layer Stacked Images
Objective: To create a multispectral image combining the separate channels of a multiband image
in order to display the layer stacked image in various band combinations of RGB colors.

Steps
1. In ERDAS IMAGINE, click on the "Raster" tab, then select "Spectral" from the Resolution
grouping. Finally, click on "Layer Stack".

2. In the Layer Selection and Stacking dialog window, create a layer stacked image (a multi-band
image) composed of Landsat-8 spectral bands (Bands 1-7). Add each of the Input Files, one by
one, as follows (be sure to click on "Add" to add each image individually to the Layer List).
3. Next, name the output file: Multispectral and select the “File type: IMAGINE Image (*.img)”
from the drop-down list. Also, check “Ignore Zero in Stats.” under Output Options. The output
file will be saved as an *.img file, ERDAS IMAGINE’s native format for raster files.

4. Select "OK" in the "Layer Selection and Stacking" dialog window to save the multispectral
image and wait a few minutes until it is processed.

5. Click on "Close" in the Process List dialog window that appears when completed.
6. Display the layer stacked (multispectral.img) image in ERDAS IMAGINE: select "File/
Open/Raster Layer."
7. Select "OK" to display the multispectral image in the 2D Viewer.
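For readers who prefer scripting, the same stacking can be sketched in Python with the rasterio
library (the band file names below are assumptions; substitute the actual Landsat 8 band files
you downloaded):

import rasterio

# Hypothetical single-band GeoTIFFs for Landsat 8 bands 1-7
band_files = [f"LC08_scene_B{i}.tif" for i in range(1, 8)]

# Copy the grid/georeferencing profile from the first band
with rasterio.open(band_files[0]) as src0:
    profile = src0.profile
profile.update(count=len(band_files))

# Write each band into one multiband output file
with rasterio.open("multispectral.tif", "w", **profile) as dst:
    for idx, path in enumerate(band_files, start=1):
        with rasterio.open(path) as src:
            dst.write(src.read(1), idx)

This assumes all input bands share the same dimensions and projection, as the 30 m Landsat 8
OLI bands 1-7 do.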
3. Creating a Subset Image
Objective: To create an image subset of interest from the layer stacked multiband image,
which is useful for easy display and processing such as performing thematic classification.
Steps
1. Open the original multispectral image as a False Color Composite (FCC).
2. Click the Inquire icon from the main menu ribbon and then Inquire Box. Drag the white
inquire box in the 2D View window to encompass the area of interest in the subset image.
Then click on "Apply" and leave the inquire box open.

3. Next, from the main menu bar, select the Raster tab/Subset & Chip/Create Subset Image. In
the Subset dialog window that opens, the Input File should be populated with the multispectral
image of the Landsat scene currently displayed in the 2D View window. In the Subset Definition
section, click on the "From Inquire Box" button. This should automatically load the
coordinates from the inquire box. Name the Output File, check "Ignore Zero in Stats", and
click "OK" to begin the subset operation.
4. Then open the subset image: File/Open/Raster Layer. Next, click on the Raster Options
tab and in the "Layers to Colors" section enter the band combination: 5 (red), 4 (green), 3
(blue). Check the "Fit to Frame" option. Then click on "OK".

The result is a smaller multi-band image that can be processed more quickly than the original
whole-scene image.
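An equivalent windowed read can be sketched with rasterio (the bounding-box coordinates below
are placeholders, not taken from the exercise):

import rasterio
from rasterio.windows import from_bounds

with rasterio.open("multispectral.tif") as src:
    # Clip to a map-coordinate bounding box (left, bottom, right, top)
    window = from_bounds(470000, 980000, 490000, 1000000, src.transform)
    data = src.read(window=window)
    profile = src.profile
    profile.update(height=data.shape[1], width=data.shape[2],
                   transform=src.window_transform(window))

with rasterio.open("subset.tif", "w", **profile) as dst:
    dst.write(data)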
Displaying the subset data
4. Mosaic
Data Set:
1. North-AA.img
2. South-AA.img
Procedure:
1. Open the image North-AA.img and South-AA.img in a 2D View.

2. Click Home tab > Fit to Frame.

3. Click Raster > MosaicPro.

4. The data for both images display in the MosaicPro Image List CellArray, and a graphic of the images
displays in the canvas of the MosaicPro workspace. If the Image List is not automatically displayed at the
bottom of the MosaicPro workspace, click Edit > Show Image Lists and select it.

5. The MosaicPro Image List displays at the bottom of the MosaicPro workspace with the images listed in
the CellArray.

6. Click the Output Image icon to define output image. In the Output Image Options dialog under Define
Output Map Area(s), make sure that Union of All Inputs is selected and click OK.

7. Click Process > Run Mosaic to run the mosaic process.

8. In the Output File Name dialog, enter the name you want to use in the directory of your choice, then
press Enter.
9. Click the Output Options tab.

10. Check the Stats Ignore Value box to activate it.

11. Click OK in the Run Mosaic dialog. The Process List dialog shows the status of the processes.

12. Open a 2D View.


13. Click File button > Recent and select your output file name from the Recent Documents list.
14. Click Home tab > Fit to Frame.
15. View your output file in the 2D View.
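A scripted equivalent of this union mosaic can be sketched with rasterio.merge (the input file
names are taken from the data set list above; writing the result to GeoTIFF is an assumption):

import rasterio
from rasterio.merge import merge

# Open the two overlapping scenes listed in the data set
sources = [rasterio.open(p) for p in ("North-AA.img", "South-AA.img")]

# merge() mosaics to the union of all input extents by default
mosaic, out_transform = merge(sources)

profile = sources[0].profile
profile.update(driver="GTiff", height=mosaic.shape[1],
               width=mosaic.shape[2], transform=out_transform)

with rasterio.open("mosaic-AA.tif", "w", **profile) as dst:
    dst.write(mosaic)

for s in sources:
    s.close()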
Activity: Create Mosaic Image of NE and NW Topographic Map of Addis Ababa using
template of Area of Interest (AOI) provided in the lab.

5. Geo-referencing/Georectification
Rectification is the process of projecting the data onto a plane and making it conform to a map projection
system. Images can be converted to real-world ground coordinates by referencing the image to another source
that is in the desired map projection. Source information may be obtained from another image, vector
coverages, or map coordinates. In order to accomplish this task, ground control points (GCPs) need to be
selected from both the input source and the reference source. GCPs are points that depict the same location
on the Earth's surface in both sources.
Geo-referencing is the process of assigning real-world map coordinates (e.g., a Geographic Coordinate
System or the Universal Transverse Mercator coordinate system) to a geometrically distorted image. This is
accomplished by using locations, such as road intersections, on the geometrically correct reference
topographic map or satellite image that are readily identifiable on the geometrically distorted image; these
are called Ground Control Points. Since all map projection systems are associated with map coordinates,
rectification involves geo-referencing.
In this session, you rectify the scanned map of Addis Ababa City using a geo-referenced panchromatic
image of the same area. The image is rectified to the Geographic Coordinate System.

Required Data sets


 City Map of Addis Ababa-city map of addis ababa.bmp (unknown coordinate
system)
 Panchromatic image of Addis Ababa City-addis.img (known coordinate system)

1. Display Data
First, you display the scanned map to be rectified.
1. ERDAS IMAGINE must be running with one 2D View open in the Workspace. Open city map of addis
ababa.bmp that has not been rectified.
The city map of Addis ababa.bmp displays in 2D View #1.

2. Start Multipoint Geometric Correction Workspace


You start the Multipoint Geometric Correction workspace from 2D View #1—the View displaying the file
to be rectified. In this session, you rectify an image using a polynomial transformation.
2. Click Multispectral tab > Transform & Orthocorrect group > Control Points to start the
Multipoint Geometric Correction workspace. The Set Geometric Model dialog opens.
3. In the Set Geometric Model dialog, select Polynomial and click OK.
Multipoint Geometric Correction and the GCP Tool Reference Setup dialog both open. The input city map
of addis ababa.bmp is loaded in the group of three Input Views of the Multipoint Geometric Correction
workspace.
4. In the GCP Tool Reference Setup dialog, accept the default setting Image Layer (New Viewer) by
clicking OK.
5. The GCP Tool Reference Setup dialog closes. The Reference Image Layer File Selector opens.
Navigate to the image file addis.img and select it. Click OK.
6. The Reference Map Information dialog opens, reporting the map information for addis.img, the
reference image. Click OK.
If you want to change the map information for a reference image, it is possible to modify it using the
Reference Map Information dialog.
7. When you click OK, wait a few moments for the second group of three Views to open in the
Multipoint Geometric Correction workspace. This group is the Reference Views, displaying the
reference image, addis.img.
The Polynomial Model Properties dialog also opens. Click the Minimize button on the Polynomial
Model Properties dialog to minimize it, since you do not select any parameters from this dialog at
this point.
Figure: Multipoint Geometric Correction workspace, showing the input scanned map (city map of
addis ababa.bmp) and the reference image (addis.img), each in Overview, Zoom View and Main View
panes, with the GCP CellArray below.

3. Collect GCPs
Here you will collect Ground Control Points (GCPs) in the Input Scanned Map (the Scanned Map to be
rectified) and the corresponding GCPs in the Reference image.
The Multipoint Geometric Correction workspace is set in Automatic GCP Editing mode by default.
The Toggle Fully Automatic GCP Editing Mode icon is active, indicating that this is the case.
1. In the Main View for city map of addis ababa.bmp, drag the zoom bounding box to one of the areas
shown in the following picture. The circled areas are good locations for GCPs. You should choose
points that are easily identifiable in both images, such as road intersections.
2. Click the Create GCP icon, then click in the zoom bounding box in the Main View to collect the
first GCP for the Input scanned map (city map of addis ababa.bmp).
The point you have selected is marked as GCP #1 in the three View panes. The X and Y coordinates for
GCP #1 are listed in the CellArray as X Input and Y Input.

Good Targets to Collect GCPs

GCP #1: Wingate Roundabout (Near Petros wePaulos)


GCP #2: Kality Roundabout (Near Drivers and Mechanics Training Center)
GCP #3: Adwa Square
GCP #4: Ayer Tena Square
3. In the Zoom View pane, click and move the GCP to the desired position.
4. In order to make GCP #1 easier to see, right-click in the Color column to the right of GCP #1 in the
GCP CellArray and select the color Yellow.
5. Click in the Main View for addis.img, then scroll your mouse wheel a bit to zoom out, until it is
zoomed to approximately the same scale as city map of addis ababa.bmp. Locate the geographic link
box and drag it to the corresponding image area in addis.img where GCP #1 is located in the other
image.
To change the color of the link box, right-click on the link box and click Link Box Color. In the Link
Box Color dialog, click the arrow of the Chooser button to select a different color
6. Now you collect a GCP for the corresponding point in the reference image. Click the Create GCP
icon, then click in the Zoom View for addis.img to collect the corresponding GCP #1 in the reference
image (addis.img). In the GCP CellArray, note that the X and Y coordinates for Reference image GCP #1
are reported as X Ref. and Y Ref. in the same row (1).
7. In the Main View for city map of addis ababa.bmp, click and drag the link box to an area suitable for
collecting the next GCP.
8. Click in the Zoom View pane to collect the next GCP. Right-click in the Color column in the GCP
CellArray and select a contrasting color.
9. Click in the Main View for addis.img, then move the link box to the corresponding area where you
just collected the GCP in the Input image.
10. Collect a GCP for the corresponding point in the reference image. Click the Create GCP icon,
then click in the Zoom View for addis.img to collect the corresponding GCP in the reference image
(addis.img).
The GCPs you collect should be spread out across the image, and should not form a single line.
11. Collect at least two more GCPs by repeating steps 7, 8, 9, and 10.
After you collect the fourth GCP in the Input View, note that the GCP is automatically matched in the
Reference View. This occurs with all subsequent GCPs that you collect.
12. After you finish collecting GCPs in the Views, the GCP CellArray should look similar to the
following example:

GCP CellArray

When you select a Point # row, the Status bar displays the Control Point Error for X coordinates, the
Control Point Error for Y coordinates, and the Total Control Point Error. A total error of less than 1 pixel
would make it a reasonable resampling.
Selecting GCPs
Selecting GCPs is useful for moving GCPs graphically or deleting them. You can select GCPs graphically
(in the View) or in the GCP CellArray.
To select a GCP using the GCP icons in the View, click the button and click the desired GCP icon in the
View. When a GCP is selected, you can drag it to the desired location. This method is best for moving an
individual GCP.
To select a GCP using the CellArray, click in the Point # column of the desired GCP. Note that in this method,
the corresponding GCPs in the Input View and in the Reference View are selected. All of the Views drive to
the selected GCP. You can also right-click in any of the rows in the Point # column to see other options.
Deleting a GCP
To delete a GCP, in the GCP CellArray, right-click in the row in the Point # column of the desired GCP and
select Delete Selection.

4. Calculate Transformation Matrix from GCPs


A transformation matrix is a set of numbers computed from the GCPs that can be plugged into polynomial
equations. These numbers are called the transformation coefficients. The polynomial equations are used to
convert source file coordinates to rectified map coordinates.
The transformation coefficients are reported in a CellArray in the Transformation tab in the Polynomial
Model Properties dialog.
To open the Polynomial Model Properties dialog and the Transformation tab, click the Model Properties
icon in the Multipoint Geometric Correction toolbar.
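As a hedged sketch of what this computation looks like (not the ERDAS implementation itself), a
first-order, i.e. affine, polynomial can be fitted to GCPs by least squares; the GCP coordinates
below are invented for illustration:

import numpy as np

# (x, y) input pixel coordinates and (x, y) reference map coordinates per GCP
inp = np.array([[10, 20], [300, 40], [50, 400], [320, 380]], dtype=float)
ref = np.array([[500100.0, 999800.0], [503000.0, 999600.0],
                [500500.0, 996000.0], [503200.0, 996200.0]])

# Design matrix for a first-order polynomial: x' = a0 + a1*x + a2*y
A = np.column_stack([np.ones(len(inp)), inp[:, 0], inp[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

print("x' = %.2f + %.4f*x + %.4f*y" % tuple(coef_x))
print("y' = %.2f + %.4f*x + %.4f*y" % tuple(coef_y))

Higher-order polynomials simply add terms (x², xy, y², ...) to the design matrix, which is why
more GCPs are required as the polynomial order increases.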

Minimum Number of GCPs

For a polynomial of order t, the minimum number of GCPs is (t + 1)(t + 2)/2: three for a first-order
transformation, six for second order, and ten for third order. For the best rectification results, you should
always collect more than the minimum number of GCPs, and the GCPs should be well-distributed and as
precise as possible.
If the minimum number of points is not satisfied, then an error message dialog opens, and the coefficients
in the Transformation tab, RMS errors, and residuals are blank. At this point, you will not be allowed to
save the transformation or resample the data.

Automatic Transformation Calculation


The Automatic Transformation Calculation function computes the transformation in real time as you edit the
GCPs or change the selection in the CellArray. The Automatic Transformation Calculation function is
enabled by default in the Multipoint Geometric Correction workspace.

5. Digitize Check Points


GCPs may be designated as control points or check points. Control points are used to calculate the
geometric transformation model. Check points are not used in the calculation, but used to independently
evaluate the error in the transformation.
13. In the GCP CellArray, turn all of the GCPs to yellow by right-clicking the Point # column and click
Select All, then right-clicking in each of the two Color columns and selecting Yellow.
14. To deselect the GCPs, right-click in the Point # column and click Select None.
15. In the last row of the CellArray, right-click in each of the two Color columns and select Magenta.
All of the check points you add in the next steps are Magenta, which distinguishes them from the GCPs.
16. Select the last row of the CellArray by clicking in the Point # column next to that row.
17. Select Edit > Set Point Type > Check from the Multipoint Geometric Correction menu bar.
All of the points you add in the next steps are classified as check points.
18. Select Edit > Point Matching from the menu bar.
The GCP Matching dialog opens.
19. In the GCP Matching dialog under Threshold Parameters, change the Correlation Threshold to 0.8,
and then press Enter on your keyboard.
20. Click the Discard Unmatched Point checkbox to activate it.
21. Click Close in the GCP Matching dialog.
22. Now you will create five check points in the Input View and in the Reference View, using the same
method as you did when creating the GCPs.
Start by creating the first check point near to GCP #1, then continue in sequence.
The Point IDs, X and Y Input coordinates, X and Y Reference coordinates, and Match values are reported in
the CellArray.
If the previously input points were not accurate, then the check points you designate may go unmatched and
are automatically discarded.
23. Click the Compute Error icon to compute the error for the check points.
Select the last GCP check point in the CellArray. In the Status bar, the Check Point Error for X coordinates,
the Check Point Error for Y coordinates, and the Total Check Point Error display. A total error of less than
1 pixel would make it a reasonable resampling.
24. To view the polynomial coefficients, click the Model Properties icon.
The Polynomial Model Properties dialog opens.
25. Click the Transformation tab and note the transformation coefficients to be used in the polynomial
equations. Close the Polynomial Model Properties dialog.
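The total error reported per point is simply the root of the summed squared X and Y residuals;
a minimal illustration (the residual values are invented):

import math

# Residuals in pixels for one check point (illustrative values)
err_x, err_y = 0.42, 0.31

# Total error = sqrt(err_x**2 + err_y**2)
total = math.hypot(err_x, err_y)
print(f"Total check point error: {total:.2f} pixels")  # 0.52 pixels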
6. Resample the Image
Resampling is the process of calculating the file values for the rectified image and creating the new file. All
of the raster data layers in the source file are resampled. The output image has as many layers as the input
image.
ERDAS IMAGINE provides these widely-known resampling algorithms: Nearest Neighbor, Bilinear
Interpolation, Cubic Convolution, and Bicubic Spline.
Resampling requires an input file and a transformation matrix by which to create the new pixel grid.
26. Click the Resample icon in the toolbar. The Resample dialog opens.
27. In the Resample dialog under Output File, enter the name city map of addis ababa-georeferenced.tif
for the new resampled data file. This is the output file from rectifying the city map of addis
ababa.bmp file to the coordinate system of the addis.img file.
28. Under Resample Method, click the dropdown list and select Bilinear Interpolation.
29. Click Ignore Zero in Stats, so that pixels with zero file values are excluded when statistics are
calculated for the output file.
30. Click OK in the Resample dialog to start the resampling process.
The Process List dialog opens to let you know when the processes complete.
31. Click Close in the Process List dialog when the job is 100% complete.
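A scripted counterpart to this resampling step can be sketched with rasterio's warp module (the
target coordinate system EPSG:4326 and the file names are assumptions for illustration):

import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

with rasterio.open("input.tif") as src:
    # Compute the output grid for the target coordinate system
    transform, width, height = calculate_default_transform(
        src.crs, "EPSG:4326", src.width, src.height, *src.bounds)
    profile = src.profile
    profile.update(crs="EPSG:4326", transform=transform,
                   width=width, height=height)

    with rasterio.open("resampled.tif", "w", **profile) as dst:
        for i in range(1, src.count + 1):
            reproject(source=rasterio.band(src, i),
                      destination=rasterio.band(dst, i),
                      resampling=Resampling.bilinear)  # as in step 28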

7. Verify the Rectification Process


One way to verify that the input image (city map of addis ababa.bmp) has been correctly rectified to the
reference image (addis.img) is to display the resampled image (city map of addis ababa-georeferenced.tif)
and the reference image and then visually check that they conform to each other.
32. Minimize the Multipoint Geometric Correction workspace.
33. Maximize the IMAGINE ribbon Workspace. The image city map of addis ababa.bmp is displayed in
the 2D View.
34. Click Home tab > Add Views > Create New 2D View.
35. Open city map of addis ababa-georeferenced.tif in the second 2D View.
36. Note that when you move your mouse within the View containing city map of addis
ababa-georeferenced.tif, map coordinates in degrees are reported in the Status Bar, compared to the file
coordinates reported for the ungeoreferenced city map of addis ababa.bmp.

37. Click to make 2D View #1 active, and click [Clear View] in the Quick Access Toolbar.
The image city map of addis ababa.bmp is removed from 2D View #1.
38. Open addis.img in 2D View #1.

39. Click Home tab > Link All Views. 2D View #2 is now linked to 2D View #1.
The Inquire Cursor (a crosshair) is placed in both Views. An Inquire Cursor dialog also opens.
40. Drag the Inquire Cursor around to verify that it is in approximately the same place in both Views.
Notice that, as the Inquire Cursor is moved, the data in the Inquire Cursor dialog are updated.
Figure: Reference image and georeferenced output image
6. Classification

Image classification is the process of creating thematic maps from satellite imagery. A thematic
map is an informational representation of an image that shows the spatial distribution of a
particular theme.
6.1 Unsupervised Classification
Unsupervised image classification does not utilize training data (training areas) as the basis for
classification. Spectral classes are grouped first, based solely on the numerical
information in the data, and are then matched by the analyst to information classes (if possible).
Programs, called clustering algorithms, are used to determine the natural (statistical) groupings
or structures in the data. Usually, the analyst specifies how many groups or clusters are to be
looked for in the data. In addition to specifying the desired number of classes, the analyst may
also specify parameters related to the separation distance among the clusters and the variation
within each cluster. The final result of this iterative clustering process may result in some
clusters that the analyst will want to subsequently combine, or clusters that should be broken
down further - each of these requiring a further application of the clustering algorithm.
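As a minimal sketch of this clustering idea, the snippet below uses k-means from scikit-learn as
the clustering algorithm (the cluster count and file names are assumptions; ERDAS IMAGINE's own
unsupervised tool uses the ISODATA algorithm):

import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("subset.tif") as src:
    img = src.read()                   # shape: (bands, rows, cols)
    profile = src.profile

bands, rows, cols = img.shape
pixels = img.reshape(bands, -1).T      # one row of band values per pixel

# Group pixels into 6 spectral clusters based only on the data
labels = KMeans(n_clusters=6, n_init=10).fit_predict(pixels)

profile.update(count=1, dtype="uint8")
with rasterio.open("unsupervised.tif", "w", **profile) as dst:
    dst.write(labels.reshape(rows, cols).astype("uint8"), 1)

The analyst would then inspect the resulting clusters and match, merge, or split them into
information classes, as described above.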

Learning Objectives: To gain an understanding of the general procedures of the unsupervised
image classification approach.

Procedures
1. Select Raster | Unsupervised (within the Classification category grouping) | Unsupervised
Classification.
2. In the "Unsupervised Classification" dialog window that opens, select the "Help" button to
become more familiar with the available options this tool offers.
3. In the "Unsupervised Classification" dialog, select the following options:
Color Scheme Options
ERDAS IMAGINE provides a default Color Scheme Option that allows the output
unsupervised classification to resemble the original input data (the "Approximate True Color"
option).
Since the original input image was displayed as an FCC, the "Approximate True Color" option
renders the unsupervised classification as an approximation of the false color reflectance values of
the input image (see the following image). This can be of benefit to an image analyst, particularly
one unfamiliar with the area being classified.
6.2 Supervised Classification

In a supervised classification, the analyst identifies in the imagery homogeneous, representative
samples of the different cover types (information classes) of interest. These samples are referred
to as training areas. The selection of appropriate training areas is based on the analyst's
familiarity with the geographical area and knowledge of the actual surface cover types present in
the images. A supervised classification process generally consists of three stages performed by
the analyst: training, classification, and the final output. The training stage is the initial stage
of a supervised classification. In this stage, the analyst inspects the
image to be classified and uses knowledge of the area (collected from field visits, reference maps
or photos, or other higher quality and higher resolution data) to collect training sites within the
imagery that represent the corresponding areas on the ground. The training sites may be collected
in the form of delineated polygons or representative pixels that the software will use to develop
a multiband classification based on spectral information gathered from the sites. Each training
site should represent a homogeneous and contiguous grouping of pixels within an individual
category of interest. The number of training sites collected from the imagery should also capture
the amount of variability contained within the category of interest as identified across the entire
image data. In addition, each training site’s category of interest should be randomly or
systematically distributed throughout the entire image data (Campbell and Wynne, 2011; Nelson
et al., 2020).
The classification process represents the second stage of supervised classification. This process
uses statistical algorithms to analyze the spectral bands of the imagery and assign each pixel to
one of the defined information classes according to its closest spectral reflectance resemblance.
In the final stage, the output is produced as a thematic, or classification, map representing each
category of interest developed originally from the training samples (Nelson et al., 2020).
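To make the three stages concrete, here is a hedged scripted sketch: a random forest from
scikit-learn stands in for the classifier actually used by the software (e.g., maximum
likelihood), and the tiny training arrays are placeholders for pixels sampled from real AOI
polygons:

import numpy as np
import rasterio
from sklearn.ensemble import RandomForestClassifier

# Stage 1 (training): spectral signatures sampled from training sites.
# These two rows are invented placeholders for real sampled pixels.
X_train = np.array([[5000, 4800, 4600, 9000, 3000, 2000, 1500],   # "forest"
                    [9000, 9500, 9800, 9900, 9700, 9500, 9400]])  # "bare soil"
y_train = np.array([1, 2])                                        # class labels

# Stage 2 (classification): assign every pixel to a class
with rasterio.open("subset.tif") as src:   # assumes a 7-band stacked subset
    img = src.read()                        # (bands, rows, cols)
    profile = src.profile
bands, rows, cols = img.shape
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
classes = clf.predict(img.reshape(bands, -1).T).reshape(rows, cols)

# Stage 3 (output): write the thematic map
profile.update(count=1, dtype="uint8")
with rasterio.open("supervised.tif", "w", **profile) as dst:
    dst.write(classes.astype("uint8"), 1)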

Objective: To gain an understanding of the general procedures for image analysis and
classification using a supervised learning technique to classify land use and land cover
categories.

Procedure
This exercise demonstrates a supervised classification approach using a Landsat-8 Operational
Land Imager/Thermal Infrared Sensor (OLI/TIRS) image. To perform a supervised
classification, we first need to create a signature file, a file that stores information on the
characteristic spectral response of each category of interest.

1. Add the Landsat 8 subset image to the 2D View window as a False Color Composite (FCC)
image display (5, 4, 3)
2. Select Raster/ Supervised/ Signature Editor
With the subset image displayed in a viewer, the ERDAS IMAGINE Area of Interest (AOI) tools
will be used to create polygons and delineate homogeneous areas representing the categories
of interest. This is accomplished by the following steps:
3. Select File/ New/ 2D View #1 New Options/ AOI Layer

4. In the main menu with the newly created AOI layer active in the Table of Contents (this
should be highlighted/active by default), click on the “Drawing” tab in the AOI menu
tab grouping.
NOTE: Again, ensure the new AOI layer is the active layer in the Contents list.
5. Use the Polygon tool icon (within the Insert Geometry category grouping) to create polygons
for homogeneous areas.
6. Click once to create each vertex of the polygon and double-click to complete. As you create
each polygon, in the Signature Editor Tool window add the newly captured signature to the
signature editor by clicking the “Create New Signature(s) from AOI” tool. This is
accomplished by the following steps:
7. Select Edit/Add in the Signature Editor, or click on the "Create New Signature from
AOI" button in the Signature Editor.
NOTE: Make sure you also edit the name of each class to match the names of the land cover
categories of interest. Each added signature name must be unique (i.e., Forest1, Forest2,
Forest3, etc.). Also change the color of each class's signature.

8. Once you have completed delineating several representative polygons for each category
represented in the image (more is ALWAYS better than a few), save the signature file by
selecting File/ Save in the Signature Editor Tool window. Also, be sure to save your AOI layer
(Right-Click on the AOI file in the Contents Legend/ Save As)
9. Next, in the Signature Editor window select Classify/Supervised Classification and run the
classifier with the signature file you created. Specify an Output File name in the Supervised
Classification window and then click OK to run the classification. A Process List window
should appear and begin the procedure. Close the window when completed.
NOTE: In case ERDAS IMAGINE experiences an error and closes when running the Supervised
Classification from the Signature Editor window, you can also run the Supervised Classification
from the tabbed ribbon menu (Raster tab/Supervised/Supervised Classification).

NOTE: When naming the output file, it is a good idea to use a naming convention that allows
you to easily recognize the origin of the file and any subsequent operations made to the original
file.

10. Once you have completed the supervised classification, close the Signature Editor tool
window and remove the AOI layer from the Contents Legend (leave the Landsat 8 subset
input image open in the 2D View window). Next, display the newly created supervised
classification output image. This classified image is now known as a "Thematic Image".

Tips for Creating the Supervised Classification


When capturing signatures, it is a good idea to keep your AOIs as small as possible to minimize
(pixel spectral) variability within each polygon. Also try to capture as many polygons as you think
are necessary to fully represent the class category throughout the entire image (for instance, do not
capture all the polygons from the same area, but spread the AOIs out). You may also
use the zoom tools, to zoom in and out, to help you determine the correct class category. You may
also change the band combinations from FCC to True Color Composite (TCC), and back to
FCC in various band combinations, to help you identify the correct class categories.

Finally, keep in mind, classification is not an exact science. Picking pixels out of an image
definitely takes a lot of patience and practice; the more time you spend with the imagery to be
classified, the better the overall classification will be. With practice, as well as understanding the
rules of how light interacts with the features on the surface of the earth, your judgment becomes
increasingly better (Nelson et al., 2020).

You might also like