Final Tvet
Introduction
Remote sensing, which includes aerial photographs and satellite images, refers to data collection
from a significant distance from the subject, most often photographs and video taken from high
altitude. Remote sensing produces images of a much larger area of the Earth's surface than a
person on the ground can photograph, and it shows the position of, and relationships between,
objects and geographic features within the imaged area. Combining special sensors with remote
imaging can help determine the health of forests, track the movement of camouflaged military
vehicles, and reveal changes in geographic features.
Aerial photographs are produced by exposing film to solar energy reflected from Earth.
Photographs and other images of the Earth taken from the air and from space show a great deal
about the planet's landforms, vegetation, and resources. Aerial and satellite images, known as
remotely sensed images, permit accurate mapping of land cover and make landscape features
understandable on regional, continental, and even global scales. Transient phenomena, such as
seasonal vegetation vigor and contaminant discharges, can be studied by comparing images
acquired at different times.
Photogrammetry
The term photogrammetry is derived from three Greek roots: phos or phot, meaning light; gramma,
meaning something drawn or written; and metrein, meaning to measure. Over the years this definition
has been broadened to include interpretation as well as measurement from photographs:
photogrammetry is the art, science, and technology of obtaining reliable information about physical
objects and the environment through processes of recording, measuring, and interpreting
photographic images. Originally, photogrammetry was considered the science of analyzing only
photographs, but it now also includes the analysis of other records, such as radiated acoustic
energy patterns and magnetic phenomena. Photogrammetry includes two aspects:
Metric: involves making precise measurements from photos and other information sources
to determine, in general, the relative locations of points. Its most common application is the
preparation of planimetric and topographic maps. Metric photogrammetry is classically divided
into two branches:
a) Terrestrial photogrammetry: photographs of the terrain in an area are taken from a fixed,
and usually known, position on or near the ground, with the camera axis horizontal or
nearly so.
b) Aerial photogrammetry: photographs of the terrain in an area are taken by a precision
photogrammetric camera mounted in an aircraft flying over the area. This is the type used
for most surveying applications.
Interpretative: deals with recognizing and identifying objects in the images and judging
their significance through systematic analysis.
Classification of Photographs
Depending on the position of the camera at the time of photography, photographs are broadly
classified into two types:
Terrestrial photograph and
Aerial photograph
Terrestrial photograph: a photograph taken with a ground-based camera, whose position and
orientation may be measured directly at the time of exposure, or with a photo-theodolite; the
camera station is on the ground and the camera axis is horizontal or nearly horizontal.
Aerial photograph: a photograph taken from an aircraft or other airborne platform, with the
camera axis typically vertical or nearly vertical.
Usually a stereoscope is used for image interpretation. There are several types of stereoscope,
for example, the portable lens stereoscope, the mirror stereoscope, and the zoom transfer scope.
The process of stereoscopy for aerial photography is as follows. First, the center of each aerial
photograph, called the principal point, should be marked. Second, the principal point of the right
image should be plotted in its corresponding position on the left image, and, at the same time,
the principal point of the left image plotted on the right image. The principal points and the
transferred points should then be aligned along a straight line, called the base line, with the
appropriate separation (normally 25-30 cm in the case of a mirror stereoscope), as shown in the
figure below. By viewing through the binoculars, a stereoscopic model can now be seen.
The advantage of stereoscopy is the ability to extract 3D information: for example, distinguishing
tall trees from low trees, and measuring terrain features such as the height of terraces, slope
gradient, detailed geomorphology in flood plains, the dip of geological layers, and so on.
The principle of height measurement by stereoscopic vision is based on the use of parallax,
which corresponds to the distance between the image points of the same ground object on the
left and right images. The height difference between two points can be computed if their
parallax difference is measured, for example with a parallax bar (see the figure below).
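The parallax relation above can be sketched numerically. A minimal sketch of the standard parallax formula; the flying height, base parallax, and parallax difference below are illustrative values, not taken from the text:

```python
def height_from_parallax(flying_height_m, base_parallax_mm, parallax_diff_mm):
    """Height difference from a measured parallax difference.

    Standard formula: dh = H * dp / (pb + dp), where H is the flying
    height above the lower point, pb is the absolute parallax of the
    lower point, and dp is the parallax difference from a parallax bar.
    """
    return flying_height_m * parallax_diff_mm / (base_parallax_mm + parallax_diff_mm)

# Assumed example values: H = 3000 m, pb = 90 mm, dp = 1.5 mm
dh = height_from_parallax(3000.0, 90.0, 1.5)
print(round(dh, 2))  # about 49.18 m
```

Note that the result scales with flying height, which is why the time and altitude of photography must be known for height estimation.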
Image Interpretation
Image interpretation is the extraction of qualitative and quantitative information in the form of a
map, about the shape, location, structure, function, quality, condition, relationship of/between
objects, etc. by using human knowledge or experience. As a narrow definition, photo-
interpretation is sometimes used as a synonym of image interpretation.
Image interpretation in satellite remote sensing can be carried out on a single scene of a
satellite image, whereas photo-interpretation usually uses a stereoscopic pair of aerial
photographs to provide stereoscopic vision, for example with a mirror stereoscope. Such
single-image interpretation is distinguished from stereo photo-interpretation.
Remote sensing images are characterized by four types of resolution: spatial, spectral,
radiometric, and temporal.
Spatial resolution: a measure of the smallest object that can be resolved by the sensor, or
the area on the ground represented by each pixel. The finer the resolution, the lower the
number. For instance, a spatial resolution of 79 meters is coarser than a spatial resolution
of 10 meters.
Spectral resolution: refers to the specific wavelength intervals in the electromagnetic
spectrum that a sensor can record. For example, band 1 of the Landsat TM sensor records
energy between 0.45 and 0.52 μm in the visible part of the spectrum.
Radiometric resolution: refers to the dynamic range, or number of possible data file
values in each band. This is referred to by the number of bits into which the recorded
energy is divided. For instance, in 8-bit data, the data file values range from 0 to 255 for
each pixel, while in 7-bit data the data file values for each pixel range from 0 to 127.
Temporal resolution: refers to how often a sensor obtains imagery of a particular area. For
example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT,
on the other hand, can revisit the same area every three days. Temporal resolution is an
important factor to consider in change detection studies.
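The radiometric and spatial figures above follow from simple arithmetic. A minimal sketch using the same example numbers quoted in the list (the 30 m pixel size is the common Landsat multispectral case):

```python
# Radiometric resolution: number of grey levels for a given bit depth.
def grey_levels(bits):
    # n-bit data holds integer values 0 .. 2**n - 1
    return 2 ** bits

print(grey_levels(8) - 1)   # 8-bit data: values 0..255
print(grey_levels(7) - 1)   # 7-bit data: values 0..127

# Spatial resolution: ground area represented by one square pixel.
def pixel_area_m2(resolution_m):
    return resolution_m ** 2

print(pixel_area_m2(30))    # a 30 m pixel covers 900 square metres
```

This also shows why a 79 m pixel is coarser than a 10 m pixel: each 79 m pixel averages over roughly sixty times more ground area.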
Image analysis: is the understanding of the relationship between the interpreted information and
the actual status or phenomenon on the ground, and the evaluation of the situation. The extracted
information is finally represented in map form, called an interpretation map or thematic map.
Generally, the accuracy of image interpretation is not adequate without some ground
investigation. Ground investigations are necessary first when the interpretation keys are
established, and again when the preliminary map is checked.
Shadow is a helpful element in image interpretation, although it can also make some objects
harder to identify. Knowing the time of photography, we can estimate the solar elevation and
illumination, which helps in estimating the height of objects. The outline or shape of a
shadow affords an impression of the profile view of an object.
Color is more convenient for the identification of object details. For example, vegetation types
and species can be more easily interpreted by less experienced interpreters using color
information. Sometimes color infrared photographs or false color images will give more specific
information, depending on the emulsion of the film or the filter used and the object being imaged.
In panchromatic photographs, each object shows a characteristic tone according to its
reflectance. For example, dry sand appears white, while wet sand appears black. In black-and-white
near-infrared photographs, water is black and healthy vegetation white to light gray.
Figure: Satellite image of an area (a) in gray scale and (b) in standard false color composite
Texture is a group of repeated small patterns. For example, homogeneous grassland
exhibits a smooth texture, while coniferous forests usually show a coarse texture. However, this
will depend on the scale of the photograph or image.
Procedures: Follow the instructions given below to query and download the target images (scenes)
Example of search criteria:
Path: 168/169
Row: 55/56
Date: January 2020
Data Set: OLI
Cloud Cover: less than 10%
Data Type: Level 1 (OLI L1T)
Table of Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS)
Data Set
Additional Criteria
Steps
1. From the file menu in ERDAS IMAGINE, click on the “Raster” tab. Then select “Spectral”
from the Resolution grouping. Finally, click on “Layer Stack”
2. In the Layer Selection and Stacking dialog window, create a layer stacked image (a multi-band
image), composed of LandSat-8 spectral bands (Band 1-7). Add each of the Input Files, one by
one, as follows (be sure to click on “Add” to add each image individually to the Layer List).
3. Next, name the output file: Multispectral and select the “File type: IMAGINE Image (*.img)”
from the drop-down list. Also, check “Ignore Zero in Stats.” under Output Options. The output
file will be saved as an *.img file, ERDAS IMAGINE’s native format for raster files.
4. Select “OK” in the “Layer Selection and Stacking” dialog window to save the multispectral
image and wait a few minutes until it is processed.
5. Click “Close” on the Process List dialog window that appears when the process is completed.
6. Display the layer-stacked (multispectral.img) image in ERDAS IMAGINE: select “File/
Open/Raster Layer.”
7. Select “OK” to display the multispectral image in the 2D Viewer.
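Conceptually, the Layer Stack operation piles co-registered single-band rasters into one multi-band array. A minimal numpy sketch using tiny synthetic arrays in place of the real Landsat-8 band files (actual work would read the files with a raster library or the GUI steps above):

```python
import numpy as np

# Hypothetical 3x3 single-band rasters standing in for Landsat-8
# bands 1-7 read from separate files.
band_files = {f"band{i}": np.full((3, 3), i, dtype=np.uint16)
              for i in range(1, 8)}

# "Layer Stack": pile the co-registered single-band arrays into one
# multi-band array, with the band index first.
multispectral = np.stack([band_files[f"band{i}"] for i in range(1, 8)],
                         axis=0)

print(multispectral.shape)  # (7, 3, 3): 7 bands of 3x3 pixels
```

The stacked array is the in-memory analogue of the multispectral *.img file produced by the dialog.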
3. Creating a Subset Image
Objective: To create an image subset of interest from the layer stacked multiband image,
which is useful for easy display and processing such as performing thematic classification.
Steps
1. Open the original multispectral image as a False Color Composite (FCC).
2. Click the Inquire icon on the main menu ribbon and then Inquire Box. Drag the white
inquire box in the 2D View window to encompass the area of interest for the subset image.
Then click “Apply” and leave the inquire box open.
3. Next, from the main menu bar, select the Raster tab/Subset & Chip/Create Subset Image. In
the Subset dialog window that opens, the Input File should be populated with the multispectral
image of the Landsat scene currently displayed in the 2D View window. In the Subset Definition
section, click the “From Inquire Box” button; this should automatically load the
coordinates from the inquire box. Name the Output File, check “Ignore Zero in Stats.”, and
click “OK” to begin the subset operation.
4. Then open the subset image: File/Open/Raster Layer. Next, click on the Raster Options
tab and in the “Layers to Colors” section enter the band combination 5 (red), 4 (green), 3
(blue). Check the “Fit to Frame” option, then click “OK”.
The result is a smaller multi-band image that can be processed more quickly compared to the
whole scene original image.
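Behind the GUI, creating a subset amounts to clipping every band to the same pixel window. A hedged numpy sketch with a synthetic scene and assumed window bounds (a real inquire box would supply the row/column limits in map coordinates):

```python
import numpy as np

# A hypothetical 7-band, 100x100-pixel layer-stacked image.
scene = np.arange(7 * 100 * 100).reshape(7, 100, 100)

# The inquire box corresponds to row/column bounds; subsetting is
# simply slicing every band to that window.
row0, row1, col0, col1 = 20, 60, 30, 80
subset = scene[:, row0:row1, col0:col1]

print(subset.shape)  # (7, 40, 50): smaller, hence faster to process
```

The shrinking pixel count is exactly why the subset classifies faster than the full scene.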
Displaying the subset data
4. Mosaic
Data Set:
1. North-AA.img
2. South-AA.img
Procedure:
1. Open the images North-AA.img and South-AA.img in a 2D View.
2. Select Raster > MosaicPro.
3. The data for both images display in the MosaicPro Image List CellArray, and a graphic of the images
displays in the canvas of the MosaicPro workspace. If the Image List is not automatically displayed at the
bottom of the MosaicPro workspace, click Edit > Show Image Lists and select it.
4. The MosaicPro Image List displays at the bottom of the MosaicPro workspace with the images listed in the
CellArray.
5. Click the Output Image icon to define the output image. In the Output Image Options dialog, under Define
Output Map Area(s), make sure that Union of All Inputs is selected and click OK.
6. In the Output File Name dialog, enter the name you want to use in the directory of your choice, then
press Enter.
7. Click the Output Options tab.
8. Click OK in the Run Mosaic dialog. The Process List dialog shows the status of the processes.
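With "Union of All Inputs", the output extent spans both input images. A toy numpy sketch with two synthetic, vertically adjacent tiles; real mosaicking also handles georeferencing, overlap, and color balancing, which this deliberately omits:

```python
import numpy as np

# Hypothetical single-band tiles covering the north and south halves
# of a city: same width, vertically adjacent extents.
north = np.full((50, 100), 100, dtype=np.uint8)
south = np.full((50, 100), 200, dtype=np.uint8)

# "Union of All Inputs": the output extent covers both tiles, so the
# mosaic places the two arrays one above the other.
mosaic = np.vstack([north, south])

print(mosaic.shape)  # (100, 100): one seamless output grid
```

In MosaicPro the placement is driven by each image's map coordinates rather than by array order.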
5. Geo-referencing/Georectification
Rectification is the process of projecting the data onto a plane and making it conform to a map projection
system. Images can be converted to real-world ground coordinates by referencing the image to another source
that is in the desired map projection. Source information may be obtained from another image, vector
coverages, or map coordinates. In order to accomplish this task, ground control points (GCPs) need to be
selected from both the input source and the reference source. GCPs are points that depict the
same location on the Earth's surface in both sources.
Geo-referencing is the process of assigning real-world map coordinates (for example, Geographic or
Universal Transverse Mercator coordinate systems) to a geometrically distorted image. This is
accomplished by using locations, such as road intersections, that are readily identifiable both on
the geometrically correct reference topographic map or satellite image and on the distorted image;
these locations are called Ground Control Points. Since all map projection systems are associated
with map coordinates, rectification involves geo-referencing.
In this session, you rectify the scanned map of Addis Ababa City, using a geo-referenced panchromatic
image of the same area. The image is rectified to the Geographic Coordinate Systems.
1. Display Data
First, you display the scanned map to be rectified.
1. ERDAS IMAGINE must be running with one 2D View open in the Workspace. Open city map of addis
ababa.bmp, which has not been rectified.
The file city map of addis ababa.bmp displays in 2D View #1.
Input scanned map: city map of addis ababa.bmp
Reference image: addis.img
3. Collect GCPs
Here you will collect Ground Control Points (GCPs) in the Input Scanned Map (the Scanned Map to be
rectified) and the corresponding GCPs in the Reference image.
The Multipoint Geometric Correction workspace is set in Automatic GCP Editing mode by default.
The Toggle Fully Automatic GCP Editing Mode icon is active, indicating that this is the case.
1. In the Main View for city map of addis ababa.bmp, drag the zoom bounding box to one of the areas
shown in the following picture. The circled areas are good locations for GCPs. You should choose
points that are easily identifiable in both images, such as road intersections.
2. Click the Create GCP icon, then click in the zoom bounding box in the Main View to collect the
first GCP for the input scanned map (city map of addis ababa.bmp).
The point you have selected is marked as GCP #1 in the three View panes. The X and Y coordinates for
GCP #1 are listed in the CellArray as X Input and Y Input.
GCP CellArray
When you select a Point # row, the Status bar displays the Control Point Error for the X
coordinate, the Control Point Error for the Y coordinate, and the Total Control Point Error. A
total error of less than 1 pixel indicates a reasonable resampling.
Selecting GCPs
Selecting GCPs is useful for moving GCPs graphically or deleting them. You can select GCPs graphically
(in the View) or in the GCP CellArray.
To select a GCP using the GCP icons in the View, click the button and click the desired GCP icon in the
View. When a GCP is selected, you can drag it to the desired location. This method is best for moving an
individual GCP.
To select a GCP using the CellArray, click in the Point # column of the desired GCP. Note that in this method,
the corresponding GCPs in the Input View and in the Reference View are selected. All of the Views drive to
the selected GCP. You can also right-click in any of the rows in the Point # column to see other options.
Deleting a GCP
To delete a GCP, in the GCP CellArray, right-click in the row in the Point # column of the desired GCP and
select Delete Selection.
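The control point error reported in the CellArray comes from fitting a transformation to the GCP pairs and measuring the residuals. A numpy sketch of a first-order (affine) fit; all the GCP coordinates below are invented for illustration:

```python
import numpy as np

# Hypothetical GCPs: (x, y) pixel positions in the scanned map and
# the matching (E, N) coordinates read from the reference image.
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 175.0]])
ref = np.array([[470100.0, 990050.0], [470480.0, 990056.0],
                [470460.4, 990386.0], [470120.0, 990376.0]])

# First-order rectification: solve E = a*x + b*y + c and
# N = d*x + e*y + f for all GCPs at once by least squares.
A = np.column_stack([src, np.ones(len(src))])
coeffs, *_ = np.linalg.lstsq(A, ref, rcond=None)

# Control point error: residual at each GCP, then the total RMS error.
pred = A @ coeffs
residuals = np.linalg.norm(pred - ref, axis=1)
rms = np.sqrt(np.mean(residuals ** 2))
# Rule of thumb from the text: a total error under 1 pixel is acceptable.
print(round(rms, 3))
```

An affine (first-order) model needs at least three GCPs; extra points overdetermine the fit, which is what makes the error estimate meaningful.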
37. Click to make 2D View #1 active, and click [Clear View] in the Quick Access Toolbar.
The image city map of addis ababa.bmp is removed from 2D View #1.
38. Open addis.img in 2D View #1.
39. Click Home tab > Link All Views. 2D View #2 is now linked to 2D View #1.
The Inquire Cursor (a crosshair) is placed in both Views. An Inquire Cursor dialog also opens.
41. Drag the Inquire Cursor around to verify that it is in approximately the same place in both Views.
Notice that, as the Inquire Cursor is moved, the data in the Inquire Cursor dialog are updated.
Reference Image and Georeferenced Output Image
6. Classification
Image classification is the process of creating thematic maps from satellite imagery. A thematic
map is an informational representation of an image that shows the spatial distribution of a
particular theme.
6.1 Unsupervised Classification
Unsupervised image classification does not use training data (areas) as the basis for
classification. Instead, spectral classes are grouped first, based solely on the numerical
information in the data, and are then matched by the analyst to information classes (where possible).
Programs, called clustering algorithms, are used to determine the natural (statistical) groupings
or structures in the data. Usually, the analyst specifies how many groups or clusters are to be
looked for in the data. In addition to specifying the desired number of classes, the analyst may
also specify parameters related to the separation distance among the clusters and the variation
within each cluster. The final result of this iterative clustering process may result in some
clusters that the analyst will want to subsequently combine, or clusters that should be broken
down further - each of these requiring a further application of the clustering algorithm.
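The iterative clustering described above can be sketched with a minimal k-means routine in numpy (ERDAS's unsupervised tool uses ISODATA, a closely related iterative algorithm); the two-band pixel values below are synthetic:

```python
import numpy as np

def kmeans(pixels, n_clusters, n_iter=20):
    """Minimal k-means clustering of pixel spectra: the analyst
    specifies only how many clusters to look for."""
    # Deterministic initial cluster means spread between the extremes.
    centers = np.linspace(pixels.min(axis=0), pixels.max(axis=0), n_clusters)
    for _ in range(n_iter):
        # Assign each pixel to the spectrally nearest cluster mean.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each cluster mean from its current members.
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels, centers

# Two obvious spectral groups in a fake 2-band image (e.g. dark water
# versus bright vegetation); the analyst asks for 2 clusters.
pixels = np.array([[10.0, 12.0], [11, 13], [9, 11],
                   [80, 90], [82, 88], [79, 91]])
labels, centers = kmeans(pixels, 2)
print(labels)  # the two spectral groups receive different labels
```

After clustering, the analyst still has to attach information-class names (water, forest, ...) to each numeric cluster, and may merge or split clusters as the text notes.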
Procedures
1. Select Raster | Unsupervised within the Classification category grouping | Unsupervised
Classification.
2. In the “Unsupervised Classification” dialog window that opens, select the “Help” button to
become more familiar with the available options this tool offers.
3. In the “Unsupervised Classification” dialog, select the following options:
Color Scheme Options
ERDAS IMAGINE provides a default Color Scheme Option that allows the output
unsupervised classification to resemble the original input data (the “Approximate True Color”
option).
Since the original input image was displayed as an FCC, the “Approximate True Color” option
renders the unsupervised classification as an approximation of the false color reflectance values
of the input image (see the following image). This can be of benefit to an image analyst,
particularly one unfamiliar with the area being classified.
6.2 Supervised Classification
Objective: To gain understanding of the general procedures for image analysis and
classification using a supervised learning technique to classify land use and land cover
categories.
Procedure
This exercise demonstrates a supervised classification approach using a Landsat-8 Operational
Land Imager/Thermal Infrared Sensor (OLI/TIRS) image. To perform a supervised
classification, we first need to create a signature file, a file that stores information on the
characteristic spectral response of each category of interest.
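The role of the signature file can be sketched with a minimum-distance-to-means decision rule, one of the decision rules ERDAS offers; the class means and band values below are invented for illustration:

```python
import numpy as np

# Hypothetical training signatures: the mean spectral response of
# each category of interest (2 bands for brevity), as collected from
# AOI polygons in the Signature Editor.
signatures = {"water": np.array([10.0, 5.0]),
              "vegetation": np.array([40.0, 90.0]),
              "urban": np.array([70.0, 60.0])}

def classify(pixel, signatures):
    """Minimum-distance-to-means rule: label the pixel with the class
    whose signature mean is spectrally closest."""
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

print(classify(np.array([12.0, 7.0]), signatures))   # water
print(classify(np.array([42.0, 85.0]), signatures))  # vegetation
```

Applying the rule to every pixel of the subset image yields the thematic image that the steps below produce through the GUI.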
1. Add the Landsat 8 subset image to the 2D View window as a False Color Composite (FCC)
image display (5, 4, 3)
2. Select Raster/ Supervised/ Signature Editor
With the subset image displayed in a viewer, the ERDAS IMAGINE’s Area of Interests (AOI) tools
will be used to create polygons and delineate homogeneous areas representing the categories
of interest. This is accomplished by the following steps:
3. Select File/ New/ 2D View #1 New Options/ AOI Layer
4. In the main menu with the newly created AOI layer active in the Table of Contents (this
should be highlighted/active by default), click on the “Drawing” tab in the AOI menu
tab grouping.
NOTE: Again, ensure the new AOI layer is the active layer in Contents list
5. Use the Polygon tool icon (within the Insert Geometry category) grouping to create polygons
for homogeneous areas.
6. Click once to create each vertex of the polygon and double-click to complete. As you create
each polygon, in the Signature Editor Tool window add the newly captured signature to the
signature editor by clicking the “Create New Signature(s) from AOI” tool. This is
accomplished by the following steps:
7. Select Edit/ Add in the Signature Editor, or click on the “Create New Signature from
AOI” button in the Signature Editor.
NOTE: Make sure you also edit the name of each class to match the names of the land cover
categories of interest. Each added signature name must be unique (e.g., Forest1, Forest2,
Forest3), and change the color of each class's signature.
8. Once you have completed delineating several representative polygons for each category
represented in the image (more is ALWAYS better than a few), save the signature file by
selecting File/ Save in the Signature Editor Tool window. Also, be sure to save your AOI layer
(Right-Click on the AOI file in the Contents Legend/ Save As)
9. Next, in the Signature Editor window select Classify/ Supervised Classification and run the
classifier with the signature file you created. Specify an Output File name in the Supervised
Classification window and then click OK to run the classification. A Process List window
should appear and begin the procedure. Close the window when completed.
NOTE: In case ERDAS IMAGINE experiences an error and closes when running the Supervised
Classification from the Signature Editor window, you can also run the Supervised Classification
from the tabbed ribbon menu (Raster tab/ Supervised/ Supervised Classification).
NOTE: When naming the output file, it is a good idea to use a naming convention that allows
you to easily recognize the origin of the file and any subsequent operations made to the
original file.
10. Once you have completed the supervised classification, close the Signature Editor tool
window and remove the AOI layer from the Contents Legend (leave the Landsat 8 subset
input image open in the 2D View window). Next, display the newly created supervised
classification output image. This classified image is now known as a “Thematic Image”.
NOTE: Capturing a sufficient number of well-distributed training polygons is necessary to
fully represent each class category throughout the entire image (for instance, do not
capture all the polygons from the same area, but be sure to spread the AOIs out). You may also
use the zoom tools, to zoom in and out, to help you determine the correct class category. Also,
you may change the band combination from FCC, to True Color Composite (TCC), and back to
FCC of various band combinations to help you identify the correct class categories.
Finally, keep in mind, classification is not an exact science. Picking pixels out of an image
definitely takes a lot of patience and practice; the more time you spend with the imagery to be
classified, the better the overall classification will be. With practice, as well as understanding the
rules of how light interacts with the features on the surface of the earth, your judgment becomes
increasingly better (Nelson et al., 2020).