
Autonomous Mobile Robots

[Figure: general control scheme for mobile robot localization. Perception of the real world
yields the environment model / local map; localization against the global map yields the
"position" belief; cognition plans the path; motion control acts on the environment.]

Probabilistic Map-Based Localization:
Kalman Filter Localization
Section 5.6.8, pages 322-342 in the textbook

Zürich Autonomous Systems Lab
© R. Siegwart, D. Scaramuzza, ETH Zurich - ASL

2 Kalman Filter Localization

3 Kalman filter localization


4 Action and perception updates


▪ In robot localization, we distinguish two update steps:

1. Action update:
• The robot moves and estimates its position through its proprioceptive sensors.
  During this step, the robot's uncertainty grows.

  [Figure: robot belief before the observation]

2. Perception update:
• The robot makes an observation using its exteroceptive sensors and corrects its position
  by appropriately combining its belief before the observation with the probability of
  making exactly that observation. During this step, the robot's uncertainty shrinks.

  [Figure: the robot belief before the observation is fused with the probability of making
  this observation, giving the updated belief]


6 Markov versus Kalman localization


Two approaches exist to represent the probability distribution and to compute the
convolution and Bayes rule during the Action and Perception phases:

Markov
• The configuration space is divided into many cells. The configuration space of a robot
  moving on a plane is 3-dimensional (x, y, θ). Each cell contains the probability that the
  robot is in that cell.
• The probability distribution of the sensor model is also discrete.
• During Action and Perception, all the cells are updated. Therefore the computational
  cost is very high.
• Localization can start from any unknown position and recover from ambiguous situations.

Kalman
• The probability distribution of both the robot configuration and the sensor model is
  assumed to be continuous and Gaussian!
• Since a Gaussian distribution is described only by its mean value μ and its covariance Σ
  (variance σ² in one dimension), we only need to update μ and Σ. Therefore the
  computational cost is very low!
• Localization is tracked from a known position; recovery from ambiguous situations and
  after collisions is not possible.


7 Introduction to Kalman Filter (1)


▪ Two measurements
▪ Weighted least squares
▪ Finding the minimum error
▪ After some calculation and rearrangement (written out below)
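
The equation images are not reproduced in this extract; as a reconstruction of the standard
weighted least-squares result these bullets refer to, for two measurements q_1 and q_2 of the
same quantity with variances σ_1² and σ_2²:

```latex
S = \sum_{i=1}^{2} w_i\,(\hat{q} - q_i)^2, \qquad w_i = \frac{1}{\sigma_i^2}
\qquad\qquad
\frac{\partial S}{\partial \hat{q}} = 0
\;\Rightarrow\;
\hat{q} = \frac{w_1 q_1 + w_2 q_2}{w_1 + w_2}
        = q_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,(q_2 - q_1),
\qquad
\hat{\sigma}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}
```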


8 Introduction to Kalman Filter (2)


▪ In Kalman Filter notation
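
Written out (a reconstruction of the standard 1-D form, treating the first measurement as the
running estimate x̂_k with variance σ_k² and the second as the new measurement z_{k+1} with
variance σ_{z,k+1}²):

```latex
\hat{x}_{k+1} = \hat{x}_k + K_{k+1}\,(z_{k+1} - \hat{x}_k), \qquad
K_{k+1} = \frac{\sigma_k^2}{\sigma_k^2 + \sigma_{z,k+1}^2}, \qquad
\sigma_{k+1}^2 = \sigma_k^2 - K_{k+1}\,\sigma_k^2
```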


9 Introduction to Kalman Filter (3)


▪ Dynamic prediction (robot moving)
  u = velocity
  w = noise
▪ Motion
▪ Combining fusion and dynamic prediction (a 1-D sketch follows below)
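
A minimal 1-D Python sketch of these two steps, assuming constant-velocity motion with process
noise w followed by fusion with a measurement z (all names and numbers are illustrative, not
from the lecture):

```python
def predict_1d(x, var, u, dt, var_w):
    """Dynamic prediction: the robot moves with velocity u for time dt,
    and the process noise w inflates the uncertainty."""
    x_pred = x + u * dt
    var_pred = var + var_w
    return x_pred, var_pred

def fuse_1d(x_pred, var_pred, z, var_z):
    """Perception update: fuse the predicted state with a measurement z."""
    K = var_pred / (var_pred + var_z)     # Kalman gain
    x = x_pred + K * (z - x_pred)         # weighted correction
    var = var_pred - K * var_pred         # uncertainty shrinks
    return x, var

# Example: predict one second of motion at 1 m/s, then fuse a position measurement.
x, var = predict_1d(x=0.0, var=0.01, u=1.0, dt=1.0, var_w=0.05)
x, var = fuse_1d(x, var, z=1.1, var_z=0.04)
print(x, var)
```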


10 The Five Steps for Map-Based Localization

[Flowchart: the encoder (odometry) feeds the position prediction; the map data base provides
the predicted feature observations (measurement prediction); the on-board sensors provide the
observation as raw sensor data or extracted features (perception); matching pairs predictions
with observations (matched predictions and observations -> YES); the estimation (fusion)
yields the updated position estimate, which feeds back into the next prediction.]

1. Prediction based on the previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on the prediction and the map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
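
A structural Python sketch of this five-step loop; the callables passed in (predict,
extract_features, predict_observations, match, estimate) are illustrative placeholders for the
steps detailed on the following slides, not functions defined in the lecture:

```python
def localize_step(x, P, odometry, scan, world_map,
                  predict, extract_features, predict_observations, match, estimate):
    """One cycle of map-based EKF localization, wiring the five steps together."""
    # 1. Prediction based on previous estimate and odometry
    x_pred, P_pred = predict(x, P, odometry)
    # 2. Observation with on-board sensors
    observations = extract_features(scan)
    # 3. Measurement prediction based on prediction and map
    predictions = predict_observations(x_pred, world_map)
    # 4. Matching of observation and map (validation gate on the innovation)
    matches = match(observations, predictions, x_pred, P_pred)
    # 5. Estimation -> position update (a posteriori position)
    return estimate(x_pred, P_pred, matches)
```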

11 Kalman Filter for Mobile Robot Localization: Robot Position Prediction


▪ In a first step, the robot's position at time step t is predicted based on its old location
  (time step t-1) and its movement due to the control input u_t (written out below):

  f: odometry function
  F_x, F_u: Jacobians of f
  P_{t-1}: covariance of the previous robot state
  Q_t: covariance of the noise associated with the motion
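
Written out, and consistent with the prediction update on the next slide, the prediction step is:

```latex
\hat{x}_t = f(x_{t-1}, u_t), \qquad
\hat{P}_t = F_{x_{t-1}}\, P_{t-1}\, F_{x_{t-1}}^{T} + F_{u_t}\, Q_t\, F_{u_t}^{T}
```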


12 Kalman Filter Localization: Prediction update

\hat{x}_t = f(x_{t-1}, u_t) =
\begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{bmatrix}
+
\begin{bmatrix}
\dfrac{\Delta s_r + \Delta s_l}{2}\,\cos\!\Bigl(\theta_{t-1} + \dfrac{\Delta s_r - \Delta s_l}{2b}\Bigr) \\[6pt]
\dfrac{\Delta s_r + \Delta s_l}{2}\,\sin\!\Bigl(\theta_{t-1} + \dfrac{\Delta s_r - \Delta s_l}{2b}\Bigr) \\[6pt]
\dfrac{\Delta s_r - \Delta s_l}{b}
\end{bmatrix},
\qquad
Q_t = \begin{bmatrix} k_r\,|\Delta s_r| & 0 \\ 0 & k_l\,|\Delta s_l| \end{bmatrix}

Odometry (error propagation):

\hat{P}_t = F_{x_{t-1}}\, P_{t-1}\, F_{x_{t-1}}^{T} + F_{u_t}\, Q_t\, F_{u_t}^{T}
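
A minimal NumPy sketch of this prediction step for a differential-drive robot, assuming the
odometry model and noise covariance above (the function and variable names are illustrative):

```python
import numpy as np

def predict(x, P, ds_r, ds_l, b, k_r, k_l):
    """EKF prediction step for a differential-drive robot.

    x          : previous state estimate [x, y, theta]
    P          : 3x3 covariance of the previous robot state
    ds_r, ds_l : travelled distances of the right/left wheel (odometry)
    b          : wheelbase; k_r, k_l: odometry error-growth constants
    """
    x = np.asarray(x, dtype=float)
    ds = 0.5 * (ds_r + ds_l)              # mean travelled distance
    dtheta = (ds_r - ds_l) / b            # change of heading
    a = x[2] + 0.5 * dtheta               # heading argument used in the model above

    # Odometry motion model f(x_{t-1}, u_t)
    x_pred = x + np.array([ds * np.cos(a), ds * np.sin(a), dtheta])

    # Jacobian of f with respect to the state x_{t-1}
    F_x = np.array([[1.0, 0.0, -ds * np.sin(a)],
                    [0.0, 1.0,  ds * np.cos(a)],
                    [0.0, 0.0,  1.0]])

    # Jacobian of f with respect to the control u_t = (ds_r, ds_l)
    F_u = np.array([[0.5 * np.cos(a) - ds / (2 * b) * np.sin(a),
                     0.5 * np.cos(a) + ds / (2 * b) * np.sin(a)],
                    [0.5 * np.sin(a) + ds / (2 * b) * np.cos(a),
                     0.5 * np.sin(a) - ds / (2 * b) * np.cos(a)],
                    [1.0 / b, -1.0 / b]])

    # Covariance of the motion noise, Q_t = diag(k_r|ds_r|, k_l|ds_l|)
    Q = np.diag([k_r * abs(ds_r), k_l * abs(ds_l)])

    # Error propagation: P_hat_t = F_x P F_x^T + F_u Q F_u^T
    P_pred = F_x @ P @ F_x.T + F_u @ Q @ F_u.T
    return x_pred, P_pred
```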


13 Kalman Filter for Mobile Robot Localization: Observation

▪ The second step is to obtain the observation Z_t (measurements) from the robot's sensors
  at the new location.
▪ The observation usually consists of a set of n_0 single observations z^j extracted from
  the different sensor signals. It represents features like lines, doors or any kind of
  landmarks.
▪ The parameters of the targets are usually observed in the sensor/robot frame {R}.
▪ Therefore either the observations have to be transformed to the world frame {W}, or
▪ the measurement predictions have to be transformed to the sensor frame {R}.
▪ This transformation is specified in the function h^j (see later).


14 Observation

[Figure: raw data of the laser scanner, extracted lines, and extracted lines in model space;
each extracted line j is described by its parameters (α_j, r_j).]

Observation of line i, expressed in the sensor (robot) frame {R}:

z_t^i = \begin{bmatrix} \alpha_t^i \\ r_t^i \end{bmatrix},
\qquad
R_t^i = \begin{bmatrix} \sigma_{\alpha\alpha} & \sigma_{\alpha r} \\ \sigma_{r\alpha} & \sigma_{rr} \end{bmatrix}

15 Measurement Prediction

▪ In the next step we use the predicted robot position x̂_t and the features m^j in the
  map M to generate multiple predicted observations ẑ_t^j.
▪ They have to be transformed into the sensor frame:

  \hat{z}_t^{\,j} = h^j(\hat{x}_t, m^j)

▪ We can now define the measurement prediction as the set containing all n_j predicted
  observations:

  \hat{Z}_t = \{\, \hat{z}_t^{\,j} \mid 1 \le j \le n_j \,\}

▪ The function h^j is mainly the coordinate transformation between the world frame {W}
  and the sensor/robot frame {R}.


16 Measurement Prediction
▪ For prediction, only the walls that are in the field of view of the robot are selected.
▪ This is done by linking the individual lines to the nodes of the path.


17 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example


▪ The generated measurement predictions have to be transformed to the robot frame {R}.
▪ According to the figure on the previous slide, the transformation h^j and its Jacobian
  H^j are given below.
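
A reconstruction of the standard line-based model, assuming each map feature m^j is a line
stored as (α_W^j, r_W^j) in the world frame {W} and the robot pose is x̂_t = (x̂_t, ŷ_t, θ̂_t):

```latex
\hat{z}_t^{\,j} =
\begin{bmatrix} \alpha_t^{\,j} \\ r_t^{\,j} \end{bmatrix}
= h^j(\hat{x}_t, m^j)
= \begin{bmatrix}
    \alpha_W^{\,j} - \hat{\theta}_t \\[3pt]
    r_W^{\,j} - \bigl(\hat{x}_t \cos\alpha_W^{\,j} + \hat{y}_t \sin\alpha_W^{\,j}\bigr)
  \end{bmatrix},
\qquad
H^j = \frac{\partial h^j}{\partial \hat{x}_t}
= \begin{bmatrix}
    0 & 0 & -1 \\
    -\cos\alpha_W^{\,j} & -\sin\alpha_W^{\,j} & 0
  \end{bmatrix}
```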


18 Kalman Filter for Mobile Robot Localization: Matching

▪ Assignment of the observations z_t^i (gained by the sensors) to the targets, i.e. the map
  features m^j (stored in the map).
▪ For each measurement prediction for which a corresponding observation is found, we
  calculate the innovation
▪ and its innovation covariance, found by applying the error propagation law
  (H^j: Jacobian of h^j).
▪ The validity of the correspondence between measurement and prediction can, e.g., be
  evaluated through the Mahalanobis distance (these quantities are written out below).
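
Written out in the notation of the previous slides (a reconstruction of the standard form,
with validation gate threshold g):

```latex
\nu_t^{\,ij} = z_t^i - \hat{z}_t^{\,j} = z_t^i - h^j(\hat{x}_t, m^j),
\qquad
\Sigma_{IN,t}^{\,ij} = H^j\, \hat{P}_t\, (H^j)^{T} + R_t^i,
\qquad
(\nu_t^{\,ij})^{T}\, (\Sigma_{IN,t}^{\,ij})^{-1}\, \nu_t^{\,ij} \le g^2
```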


19 Matching


20 Kalman Filter for Mobile Robot Localization: Estimation: Applying the Kalman Filter
▪ Kalman filter gain
▪ Update of the robot's position estimate
▪ The associated variance (all three are written out below)
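
Written out (a reconstruction of the standard EKF update, with all matched innovations stacked
into ν_t and the corresponding Jacobians and measurement covariances stacked into H and R):

```latex
\Sigma_{IN,t} = H\,\hat{P}_t\,H^{T} + R, \qquad
K_t = \hat{P}_t\, H^{T}\, \Sigma_{IN,t}^{-1}, \qquad
x_t = \hat{x}_t + K_t\,\nu_t, \qquad
P_t = \hat{P}_t - K_t\, \Sigma_{IN,t}\, K_t^{T}
```

The same step as a short NumPy sketch (names are illustrative):

```python
import numpy as np

def estimate(x_pred, P_pred, nu, H, R):
    """EKF estimation step: fuse the predicted pose with the matched observations."""
    S = H @ P_pred @ H.T + R              # innovation covariance Sigma_IN
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman filter gain
    x = x_pred + K @ nu                   # updated position estimate
    P = P_pred - K @ S @ K.T              # associated covariance
    return x, P
```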


22 Estimation

▪ Kalman filter estimation of the new robot position x_t:
▪ By fusing the prediction of the robot position (magenta) with the innovation gained by
  the measurements (green), we get the updated estimate of the robot position (red).
