
Chapter 1.

Introduction

Huan Van Nguyen, Ph.D.
([email protected])
Dept. of Information & Communication Engineering
Inha University
What is Pattern Recognition?
 “The act of taking in raw data and taking an action based on the category of the data”

 Comparing with:
◼ Machine learning
◼ Data mining
◼ Image processing
◼ AI

 The study of how machines can observe the environment,
◼ learn to distinguish patterns of interest from their background, and
◼ make sound and reasonable decisions about the categories of the patterns
Copyright © 2012 by Computer Vision Lab., Inha University
What is Pattern Recognition?
 To classify data (patterns) based on either a priori knowledge or on information extracted from the patterns
◼ Pattern: a description of an object of interest
◼ Recognition: classification of a pattern to a category (or class)

 Example of “patterns”
◼ Fingerprint images
◼ Human face
◼ Speech signal
◼ Human shape
◼ Satellite image
◼ Ultrasound image
◼ Etc
Applications of PR
 Computer Vision and Image Understanding
 Speech Recognition and Understanding
 Character Recognition
 Radar Analysis
 Underground Resource Exploration
 Biometrics
 Medical Diagnosis
 Remote Sensing
 Factory Automation
 Defense

Major Approaches of PR
 Statistical PR
◼ Recognizes patterns by their statistical properties, generally expressed in various probability functions.
 Syntactic (Structural) PR
◼ Recognizes patterns by their rules and grammars.
 Neural Network PR
◼ Recognizes patterns using artificial NNs modeling the properties of biological NNs.

Typical PR Process
 “TYPICAL” Pattern Recognition System

1. Modeling: need a mathematical description for each class
   - Hypothesize a class of models
   - Process the sensed data
   - Choose the best corresponding model

2. Preprocessing: the input signal (image) is preprocessed to simplify subsequent operations without losing relevant information

3. Segmentation: regions of interest (ROIs) are isolated from one another and from the background

4. Feature extraction: reduce the data by measuring certain “features” or “properties”

5. Classification: features are passed to a classifier that evaluates the evidence presented and makes a decision
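The five stages above can be sketched end to end in Python. Every concrete choice below (amplitude thresholding for segmentation, mean/std as the two features, a nearest-mean classifier with given class models) is an illustrative assumption, not the course's prescribed method:

```python
import numpy as np

def preprocess(signal):
    # Step 2: simplify the raw signal, e.g. remove its mean (DC) component
    signal = np.asarray(signal, float)
    return signal - signal.mean()

def segment(signal, threshold=0.5):
    # Step 3: isolate the region of interest from the background
    return signal[np.abs(signal) > threshold]

def extract_features(roi):
    # Step 4: reduce the data to a few measured "features"
    return np.array([roi.mean(), roi.std()])

def classify(features, class_means):
    # Steps 1 and 5: choose the best corresponding class model
    dists = {c: np.linalg.norm(features - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)
```

Chaining the stages, `classify(extract_features(segment(preprocess(raw))), models)`, mirrors the numbered list exactly.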

Example
 Satellite image
◼ Data: hyperspectral image
◼ Features: spectral pattern of each material
◼ Final goal: classify the surface / detect some types of material

Example
 Spectral library

Example
 Spectral matching methods (x: reference mean vector, y: input vector):

◼ Euclidean distance:

  ED(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_2 = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}

◼ Spectral Angle Mapper:

  SAM(\mathbf{x}, \mathbf{y}) = \arccos\left( \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{\|\mathbf{x}\|_2 \, \|\mathbf{y}\|_2} \right)

◼ Normalized SAM, scaled by the ratio of the spectral norms:

  NSAM(\mathbf{x}, \mathbf{y}) = \arccos\left( \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{\|\mathbf{x}\|_2 \, \|\mathbf{y}\|_2} \right) \cdot RN,
  \qquad RN = \begin{cases} \|\mathbf{x}\|_2 / \|\mathbf{y}\|_2, & \|\mathbf{x}\|_2 \le \|\mathbf{y}\|_2 \\ \|\mathbf{y}\|_2 / \|\mathbf{x}\|_2, & \|\mathbf{y}\|_2 \le \|\mathbf{x}\|_2 \end{cases}

 Classifier:

  M(n) = \max(NSAM_1, \ldots, NSAM_n), \qquad C(\mathbf{x}) = \mathrm{index}(M(n)) = n
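These measures translate directly into NumPy. Note that the exact form of RN (ratio of the smaller to the larger spectral norm) is reconstructed from a damaged slide, so treat that detail as an assumption:

```python
import numpy as np

def euclidean_distance(x, y):
    """ED(x, y): Euclidean distance between reference spectrum x and input y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.sum((x - y) ** 2))

def sam(x, y):
    """Spectral Angle Mapper: angle (radians) between spectra x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cos_angle = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def nsam(x, y):
    """Normalized SAM: SAM scaled by the ratio of the smaller to larger norm
    (assumed reconstruction of the RN factor on the slide)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    rn = min(nx, ny) / max(nx, ny)
    return sam(x, y) * rn
```

SAM is invariant to overall brightness (scaling a spectrum leaves the angle unchanged), while NSAM reintroduces a penalty when the two spectra differ in magnitude.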

Unsupervised clustering

March, 2012 Copyright © 2012 by Computer Vision Lab., Inha University


Material selection results
(Figures: classified material maps for Water, Road, and Tree)
Pattern Recognition Systems
1. Sensing
 Transducers such as a camera
 Design of sensors is beyond the scope of this course.

2. Segmentation and Grouping
 To distinguish individual objects and separate them from the background

Pattern Recognition Systems
3. Feature Extraction
◼ Goal:
 To characterize an object to be recognized by measurements whose values are very similar for objects in the same category, and very different for objects in different categories.
◼ Problem- and domain-dependent
◼ Requires domain knowledge
◼ Invariant features
 Feature values are invariant to irrelevant transformations of the input
 TRS-invariant: invariant to translation, rotation, and scaling
◼ Factors influencing the feature extractor
 Occlusion
 Projective distortion
 (Sampling) rate
 Deformation
 Illumination
◼ Feature Selection: to select the most valuable features from a larger set of candidate features.
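To make "invariant features" concrete, here is a minimal sketch that normalizes a 2-D point set so the result is translation- and scale-invariant; rotation invariance would need an extra step (e.g. aligning the principal axis), which is omitted. The function name and normalization choice are illustrative, not from the course:

```python
import numpy as np

def normalize_shape(points):
    """Translation/scale-invariant representation of a 2-D point set.

    Subtracting the centroid removes translation; dividing by the RMS
    radius removes scale. Two shapes differing only by a shift and a
    uniform scaling map to the same array.
    """
    pts = np.asarray(points, float)
    pts = pts - pts.mean(axis=0)                    # translation invariance
    rms = np.sqrt((pts ** 2).sum(axis=1).mean())    # RMS distance from centroid
    return pts / rms                                # scale invariance
```

Features computed on the normalized points (rather than the raw coordinates) are then unaffected by where the object sits in the image or how large it appears.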
Pattern Recognition Systems
4. Classification
◼ To assign the object (or pattern) to a category based on the feature vector provided by the feature extractor.
◼ In practice, perfect classification is impossible ➔ To determine the probability for each of the possible categories.

◼ Noise
 Causes the variability of feature values for patterns in the same category.

◼ What is the best way to design a classifier to cope with this variability? ➔ This course mainly focuses on the design of classifiers.
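One way to make the "probability for each possible category" idea concrete is a soft nearest-mean classifier. The softmax-over-negative-distances weighting and the given class means are illustrative assumptions, not the course's prescribed design:

```python
import numpy as np

def class_probabilities(x, class_means):
    """Return a probability for each category instead of a hard decision.

    Distances to each class mean are converted to weights exp(-d) and
    normalized to sum to 1; noisier (more distant) matches get lower
    probability. Class means would normally be learned from training data.
    """
    x = np.asarray(x, float)
    names = list(class_means)
    d = np.array([np.linalg.norm(x - class_means[c]) for c in names])
    w = np.exp(-d)
    return dict(zip(names, w / w.sum()))
```

A hard decision, when needed, is just the category with the largest probability; keeping the full distribution lets later stages (post-processing, fusion) weigh the evidence.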

Pattern Recognition Systems
5. Post Processing
◼ The post-processor uses the output of the classifier to decide on the recommended action.

◼ Error Rate
 Simplest measure of classifier performance
 Can we predict the minimum or the maximum error rate?
◼ Risk
 Total expected cost
 Can we estimate the risk?
◼ Context
 Input-dependent information other than from the target pattern itself
 Ex: “T/-\E C/-\T”: context lets us read the same ambiguous character as H in THE and as A in CAT
◼ Multiple Classifiers
 Each classifier operates on different aspects of the input.
 How to draw a conclusion when they disagree? ➔ Fusion of the evidence

Design Cycle
1. Data Collection
◼ The more, the better; but cost…
2. Feature Choice
◼ Simple to extract, TRS-invariant, insensitive to noise, distinguishing
◼ Requires prior knowledge
3. Model Choice
◼ Probability models; classifier models
4. Training
◼ Validating the model and designing the classifier
5. Evaluation
◼ Overfitting: perfect on the training samples, but poor on new samples
6. Computational Complexity
◼ #Feature dims, #Patterns, #Categories

Learning and Adaptation
 We use learning because all practical or interesting PR problems are so hard that we cannot guess the classification decision ahead of time
 Any method that incorporates information from training samples in the design of a classifier employs learning
 Approach:
◼ Assume some general form of model
 Parametric models
 Nonparametric models
◼ Use training patterns to learn or estimate the unknown parameters.
 Bayesian estimation, maximum-likelihood estimation
 Parzen-window estimation, k-NN estimation
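For the parametric route, the maximum-likelihood estimates of a 1-D Gaussian's parameters have a well-known closed form, which a short sketch can show (the function name is ours; the formulas are standard):

```python
import numpy as np

def gaussian_ml_estimate(samples):
    """Maximum-likelihood estimates for a 1-D Gaussian model.

    mu_hat     = sample mean
    sigma2_hat = mean squared deviation from mu_hat
    (the ML variance divides by n, not the unbiased n - 1)
    """
    x = np.asarray(samples, float)
    mu = x.mean()
    sigma2 = ((x - mu) ** 2).mean()
    return mu, sigma2
```

With a class-conditional density estimated this way for each category, Bayesian decision theory (the subject of the following chapters) turns the densities into a classifier.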

Learning and Adaptation
1. Supervised Learning
◼ A teacher provides a label for each pattern in a training set
◼ Objective: reduce the sum of the costs for these patterns
◼ Issues: how to make sure the learning algorithm
 Can learn the solution
 Will be stable to parameter variations
 Will converge in finite time
 Will scale with the number of training patterns and input features
 Favors “simple” solutions

Learning and Adaptation
2. Unsupervised Learning (Clustering)
◼ There is no explicit teacher.
◼ The system forms clusters or “natural groupings” of the input patterns.
◼ Often the user will set the hypothesized number of clusters ahead of time.
 Ex: k-Means algorithm, Fuzzy c-means algorithm
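A minimal sketch of the k-Means algorithm mentioned above, assuming Euclidean distance and random initialization from the data points; a production implementation would add restarts and better seeding:

```python
import numpy as np

def k_means(data, k, n_iter=100, seed=0):
    """Cluster data into k groups with no teacher: alternate an assignment
    step and a mean-update step until the centers stop moving.
    The user fixes the hypothesized number of clusters k ahead of time."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(n_iter):
        # assignment step: each point joins its nearest center
        labels = np.argmin(
            np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
        # update step: each center moves to the mean of its assigned points
        new = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

The returned labels are arbitrary cluster indices, not category names; without a teacher, only the grouping itself is learned.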

Learning and Adaptation
3. Reinforcement Learning (learning with a critic)
◼ No desired category is given. Instead, the only teaching feedback is whether the tentative category is right or wrong; the critic does not say specifically how it is wrong.
◼ Contrast with the typical (supervised) way to train a classifier:
 Present an input
 Compute its tentative label
 Use the known target category label to improve the classifier

◼ How can the system learn from such nonspecific feedback?

Summary of Ch 1.
 Introduction of PR and various applications
 Terms and definitions in PR
 Processes in typical PR systems

Reading Assignment #1
 Appendix
◼ A.2 Linear Algebra
◼ A.3 Lagrange Optimization
◼ A.4 Probability Theory
◼ A.5 Gaussian Derivatives and Integrals
