
ENSTA Bretagne, Robotics Group

UE 2.2 - Sensor Actuator Loop

Benoit ZERR
FISE 2022 - S2 - 2020

Contents

1 Sensor Actuator Loop - Problem Statement
1.1 Automatic System vs. Autonomous System
1.2 The Sensor Actuator Loop
1.3 Sensor Actuator Loop in Python
1.4 Dynamic modeling of Rob1A
1.4.1 Brief description of Rob1A design
1.4.2 Controlling Rob1A using sensors and actuators

2 Lab work and assessment
2.1 Getting marks from waypoints
2.2 Assessment
2.3 Defining your team
2.4 Installing and testing the simulator
2.5 Qualify1 Task
2.6 Qualify2 Task
2.7 Lite Challenge
2.8 Advanced Challenge

3 Programming Rob1A in Python
3.1 Creating and controlling the robot
3.2 Wheel motors
3.3 Odometers
3.4 Sonars
3.5 Controlling the robot
3.6 Filtering the sensors
3.7 Tips & tricks

4 Control
4.1 In-place turn
4.1.1 Measuring odometers
4.1.2 Measuring heading with the compass
4.2 Performing in-place turn
4.2.1 In-place turn with odometers
4.2.2 In-place turn with compass
4.2.3 Python functions
4.3 Measuring distances with sonars
4.4 Performing a linear motion
4.4.1 Python functions
4.5 Design a controller for wall following
4.5.1 "Bang-bang" controller
4.5.2 Proportional (P) controller
4.5.3 Proportional-derivative (PD) controller

5 Filtering the sensors' measurements
5.1 Estimating the bias
5.2 Filtering the noise
5.2.1 Moving average filter
5.2.2 First Order Recursive IIR Filter
5.2.3 First low-pass filter
5.3 Median filter - the anti-spike filter

Appendices
Installing the simulator
Installing the simulator on your own computer

1 Sensor Actuator Loop - Problem Statement


1.1 Automatic System vs. Autonomous System
The current trend in engineering shows a growing number of applications relying on auto-
nomous systems while the twentieth century saw the development of automatic systems. Al-
though they essentially perform the same kind of tasks, autonomous systems and automatic
systems differ mainly in the way they interact with the environment. For example, the OrlyVAL
vehicle connecting the Orly airport to the RER can be considered as an automatic system while
the Waymo Google car can be considered as an autonomous system. The OrlyVal operates in a
fairly well-controlled environment. On the contrary, Google’s car must interact with a relatively
uncontrolled environment, where it has to deal with pedestrians and other vehicles, adapt to
road changes, etc.
The main difference between autonomous systems and automatic systems is the high level
control of these systems. Autonomous systems require intelligent, adaptive mission planning
and efficient sensor processing to extract relevant information from a complex environment.
This high-level control, involving planning, navigation, mapping and reasoning (artificial intelligence), far exceeds the scope of this tutorial; these topics are studied in the second and third
years of the robotics specialty. However, before designing and building autonomous system
prototypes, we need to fully understand the low-level control loop of these systems. This low
level loop can be called "Sensor Actuator Loop". The second benefit of studying this low-level
"sensor actuator loop" is that it is similar for automatic and autonomous systems. In the future,
you will be more likely to deal with autonomous systems rather than automatic systems.

Figure 1.1: The "Sensor Actuator Loop" (block diagram: the set-point is compared to the filtered sensor measurements, the resulting command drives the actuators, and the actuators act on the environment perceived by the sensors).

1.2 The Sensor Actuator Loop


The purpose of this tutorial is to study and practice the low level control loop or "sensor actu-
ator loop". The "sensor actuator loop" (see Fig. 1.1) controls how automatic or autonomous
systems interact with their environment. The set-point (or reference) indicates to the system
the objective to be achieved. The system perceives its environment using its sensors. The mea-
surements of these sensors are corrupted by errors and must be filtered before being used by
the system. The filtered measurements are compared to the set point. From this comparison, a
control error can be defined and a command is computed and applied to the actuators to try to
reduce this error. Then, with its actuators, the system interacts with its environment to provide
new values to the sensors. This sequence is repeated until the goal is reached.

1.3 Sensor Actuator Loop in Python


In Python, the implementation of the "sensor actuator loop" requires precise management of
the execution time. Each iteration of the loop must have the same duration. For this, we use
the time module of Python. The flowchart of the loop is shown in the figure (Fig. 1.2) and the
generic code of the loop is presented in (Listing 1).

Listing 1: Python Sensor Actuator Loop.


import time

setPoint = someValueWeWant  # define the set point (reference value)
# define the execution time for one loop iteration
loopIterationTime = 0.100   # e.g. 100 ms, i.e. commands at 10 Hz
while True:
    t0 = time.time()  # record the loop start time
    # acquire and filter the sensor measurement used for
    # controlling the robot
    valRaw = acquireSomeSensor()       # raw data from the sensor
    valFilt = doSomeFiltering(valRaw)  # filtered data

    # check if the loop must end;
    # this is not necessarily a single condition,
    # e.g. here we have two possible exits of the loop:
    # 1) exit on data from another sensor
    # 2) exit on the control error

    # checking the end of the loop with data from another sensor
    # (e.g. obstacle detection, low battery level, ...)
    otherValRaw = acquireSomeOtherSensor()
    otherValFilt = doSomeFiltering(otherValRaw)
    endCondition = checkIfLoopEndSensors(otherValFilt)
    if endCondition:
        break  # leave the loop

    # compute the control error
    controlError = setPoint - valFilt

    # checking the end of the loop using the control error
    # (e.g. stop the robot after travelling a given distance)
    endCondition = checkIfLoopEndErrors(controlError)
    if endCondition:
        break  # leave the loop

    # define the new command based on the error value
    cmd = computeCommand(controlError)
    # apply the command to the actuators
    applyCommand(cmd)

    # wait for a clean end of the loop iteration
    execTime = time.time() - t0  # measure the execution time
    # compute how long to wait until the end of this iteration
    deltaTime = loopIterationTime - execTime
    # wait only if deltaTime is positive
    if deltaTime > 0:
        time.sleep(deltaTime)
    # if deltaTime < 0, your computation takes too much time:
    # either simplify it or increase loopIterationTime

As you can see in the code and the flowchart, there are two possible outputs (exits) of the con-
trol loop, one using the measurements of another sensor and the other using the control error
between the set-point and the measurement. This can be very useful in certain situations, such
as following a wall on one side and at the same time, stopping when an obstacle arises in front
of the vehicle. In this example, the exit of the loop can occur at the end of the wall (exit on control error) or when the front sonar detects an obstacle (exit on another sensor measurement).

At the end of the loop code, it is important to wait for the exact duration of an iteration of
the loop. In a control loop, measurements and actions should take place at perfectly repeated
times. Therefore, at the end of the loop code, we measure the computation time ("execTime")
and subtract it from the "loopIterationTime" in order to set the "deltaTime" duration until the
precise end of the iteration. A negative "deltaTime" means that the calculation takes too much
time. Two solutions can solve this problem: (1) simplify the calculation or (2) increase "loo-
pIterationTime". Solution (1) should be preferred because solution (2) makes the system less
responsive by increasing the time between two commands.

1.4 Dynamic modeling of Rob1A


1.4.1 Brief description of Rob1A design

As existing robots (NAO humanoids and DART 4WD) are too complex to perform their low
level control in 8 hours, Rob1A has been specifically designed for this course. Rob1A has been
designed to be easy to control. However, Rob1A will not perform real missions this year be-
cause its construction (mainly 3D printing) has not started yet. It was designed in Blender, but
FreeCad, Solidworks or even Catia could have been used. When designing a robot, it is important to be able to export it to a dynamic modeling software. The dynamic modeling software
must solve the differential equations of the robot's motion in real time and detect all possible
collisions between the robot and its environment. Game engines, which are perfect for these tasks, are generally used.
Figure 1.2: The flowchart of the "Sensor Actuator Loop" (it mirrors Listing 1: record t0, acquire and filter the sensors, test the two exit conditions, compute and apply the command, then sleep for the remaining deltaTime).

Figure 1.3: Rob1A.
In this tutorial course, we will use V-REP for dynamic modeling, but we could
have used WEBOTS, GAZEBO, EUREKA, MORSE ... The design of the robot is imported into
V-REP in Collada format. Real-time simulation of the dynamics of the robot requires a lot of
calculation to render the scene, to solve the differential equations of the movement, to check
the collisions, to simulate the sensors, etc. Hence, to determine the dynamic response of the
robot in real time, the exact design is replaced by a simplified form defined with primitive ob-
jects such as cylinders, cubes, and so on. This simplified form is hidden so as not to disturb the
display. The figure (Fig. 1.3) shows the Rob1A design.

1.4.2 Controlling Rob1A using sensors and actuators

A communication program (in Lua) allows your Python program to control the movement of
Rob1A and get the measurements of its sensors. The robot uses sensors to obtain its own state
and acquire knowledge of its environment. Proprioceptive sensors measure internal values
of the system, while exteroceptive sensors acquire information from the robot's environment.
Rob1A has only two actuators: the motors of its two main wheels.
Rob1A is a very basic robot; it is equipped with:

• 2 main wheels to control the movement using a differential drive,


• a passive "castor" wheel to prevent the back of the robot from rubbing on the ground,
• 2 odometers (proprioceptive sensors), one for each main wheel,
• 4 ultrasonic distance sensors (exteroceptive sensors) to detect obstacles in the 4 cardinal
directions; these sensors are also called sonars,
• a magnetic compass (exteroceptive sensor) which gives the heading of the robot,
• a battery whose charge varies from 100% to 70%. The charge level changes with each new
simulation but remains constant during the simulation,
• a front camera (light grey box).

The robot’s reference point is the yellow square on top of the chassis.

Figure 2.1: The mark on a waypoint is maximum if the robot enters the green disk. The mark is
null when the robot is outside of the red circle. Between the green disk and the red
circle, the mark decreases.

2 Lab work and assessment


The lab work consists of moving the robot from the start (green square) to the finish (white
square with chequered flag). You will have to complete 3 or 4 tasks of increasing complexity. If
you choose to complete only 3 tasks, your maximum mark will be 12, but you will not have to
explain your code and you will not have an oral examination. This section describes the work you
will have to complete and how this work will be evaluated.

2.1 Getting marks from waypoints


Between the start and the finish, the robot will have to reach several waypoints. To get the
highest mark, the robot has to pass over the center of every waypoint. At the start, the
waypoints are colored in red. As the robot gets closer to the center of a waypoint, the color
progressively changes to green, according to the distance to the center. The maximum mark is
obtained if the robot enters the circle of half radius at the center of the waypoint. The principle
is shown in figure (Fig. 2.1).

2.2 Assessment

The assessment of the work is made all along the lab in six steps:

1. Mark F (0 or 1) - Fill in the Moodle Quiz to define your team. The team is generally a pair,
but a team of 3 students may be allowed so that no student is left alone.
2. Task 1: mark Q1 (2 pts) - Qualify 1
3. Task 2: mark Q2 (2 pts) - Qualify 2
4. Task 3: mark L (8 pts) - Lite challenge
5. Task 4: mark A (6 pts) - Advanced challenge
6. Mark C (-2 to 2 pts) - Analysis of the Python code of the advanced challenge and oral
exam.

The final mark is M = F*(Q1+Q2+L+A+C). As the advanced challenge is quite complex to achieve,
you can decide to stop after the lite challenge. If you do, your Python code will not be analyzed
and you will not have to attend an oral session to explain your code.

2.3 Defining your team


The first thing to do is to define your team on MOODLE using the following link:

https://moodle.ensta-bretagne.fr/mod/feedback/view.php?id=43432

It is extremely important to fill in this quiz before the 24th of February. If this is not done, your
mark will be null.

2.4 Installing and testing the simulator


Before using the robot, we need to install the simulation tools. If you are using an ENSTA Bretagne computer, you have to start in Linux CentOS and follow the instructions in Appendix A.
If you use your own computer, the simulation works well on Linux and, with small changes,
it can operate successfully on Windows and Mac OS.
For the Windows configuration, you will find the explanation on MOODLE.

https://moodle.ensta-bretagne.fr/course/view.php?id=1439#section-4

In the folder /scenes, a file called path_log.lua has to be modified with the path of the folder
where the log file will be stored (the log file is a text file recording what the robot has done during the mission).

2.5 Qualify1 Task


Qualify1 track is very simple (see Fig. 2.2). The scene file to load in V-REP is basic.ttt. There is
no noise on the sensor (compass and sonar) measurements. You will have to:

1. find the direction to go using the 4 sonars; the right direction is the direction free of obstacles (walls)
2. rotate the robot to place it in the right direction
3. move linearly to the finish
4. stop the robot on the finish waypoint using the distance to the front wall measured by
the front sonar

To get the mark, your work will be automatically evaluated. You will have to upload 2 files on
MOODLE, qualify1.py and control.py, at the following link:

https://moodle.ensta-bretagne.fr/mod/assign/view.php?id=43629

Figure 2.2: Scene for qualify1.

2.6 Qualify2 Task


Qualify2 is a simplified version of the lite track with only one intermediate waypoint. The turn
is randomly defined to the left or to the right. The length of the straight lines is random. The sensors
are corrupted by a linear noise. As the turn direction at the first waypoint is randomly chosen,
there are two scene files to test your code in V-REP: filter1.ttt and filter2.ttt. To complete this
task, you will have to:

1. find the bias of the compass (if you use the compass for changing the orientation of the
robot)
2. define the filters for the sonars and the compass (if you use it)
3. find the direction to go using the 4 sonars; the right direction is the direction free of obstacles (walls)
4. rotate the robot to place it in the right direction
5. move linearly to the first waypoint
6. stop the robot on the first waypoint using the distance to the front wall measured by the front sonar
7. find the direction to turn, left or right, using the left and the right sonars
8. rotate the robot to place it in the direction you have found
9. move linearly to the finish
10. stop the robot on the finish waypoint using the distance to the front wall measured by
the front sonar

Your code will be in 3 Python files: qualify2.py, control.py and filt.py. To get the mark, your
work will be automatically evaluated. You will have to upload the 3 Python files on MOODLE
at the following link:

https://moodle.ensta-bretagne.fr/mod/assign/view.php?id=43630

Figure 2.3: Scene for qualify2.

2.7 Lite Challenge


The lite challenge is the same as the Qualify2 task, but there are 4 waypoints instead of one. As
the turn directions and the track segment lengths are random, you have 3 tracks for testing your
control program: lite1.ttt, lite2.ttt and lite3.ttt. To get your mark, you will have to upload 3
Python files on MOODLE (lite.py, control.py and filt.py) at this link:

https://moodle.ensta-bretagne.fr/mod/assign/view.php?id=43648

Your mark will be obtained by executing your code on another track, lite4.ttt, which is unknown
but similar to the 3 tracks you have used. The changes are the random turn directions and the
random lengths of the linear segments.
After completing the lite challenge, you can decide to stop there; you will not have an oral exam
to explain your code.

Figure 2.4: Scene for lite challenge.

2.8 Advanced Challenge


The advanced challenge is quite complex. You will have to follow a wall on the left at a con-
stant distance. The noise is more complex, with both linear and non-linear contributions. The
non-linear noise consists of "spikes" or "outliers". You will have to use both linear and non-linear
filters, and you will have to explain your code during an oral session. As the shape of the track
is defined by random parameters, you have 3 tracks for testing your control program:
advanced1.ttt, advanced2.ttt and advanced3.ttt. To get the automatic part of the mark, you will
have to upload 3 files on MOODLE (advance.py, control.py and filt.py):

https://moodle.ensta-bretagne.fr/mod/assign/view.php?id=43649

Your mark will be obtained by executing your code on another track, advanced4.ttt. This track
is unknown but similar to the 3 tracks you have used.

Figure 2.5: Scene for advanced challenge.

3 Programming Rob1A in Python
3.1 Creating and controlling the robot
To create a Rob1A robot in your Python programs, you first need to import the "rob1a_v02"
module in your code:

import rob1a_v02 as rob1a

Rob1A is a class simulating the robot. To use it, you just have to create an instance of this class,
for example:

rb = rob1a.Rob1A()

Then you will access the functions of the robot through "rb.". For example, stopping the robot
is done with the "stop()" function like this:

rb.stop()


3.2 Wheel motors


The command of the main wheels' motors is a percentage of the maximum speed. The percentage
can go from -100% (full backwards) to +100% (full forwards). To set the left wheel speed at 60%
(forwards) and the right wheel at -40% (backwards), considering the robot object is named
"rb", the command is simply:

rb.set_speed(60,-40)

To stop the robot, you can use:

rb.set_speed(0,0)

or:

rb.stop()

Note: At every new simulation, the battery level of the robot changes. As Rob1A is a low-cost
robot, there is no sensor to measure the battery level. Therefore, to move the robot you cannot
simply set the speed at a given value for a given time, as the covered distance will change with
the battery level.

3.3 Odometers
The angular motion of the main wheels is measured with odometers. An odometer makes
200 ticks when the wheel completes a full revolution. The values of the left and right odometers are
given by:

odoLeft,odoRight = rb.get_odometers()

Note: the measurements from the odometers are considered perfect; there is no need to filter them.
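
As an illustration, the tick count can be converted into a traveled distance using the wheel radius (3 cm, see section 4.1) and the 200 ticks per revolution. A minimal sketch; the ticks_to_distance helper is illustrative, not part of the provided code:

import math

WHEEL_RADIUS = 0.03   # m, radius of a main wheel (section 4.1)
TICKS_PER_REV = 200   # odometer ticks per full wheel revolution

def ticks_to_distance(ticks):
    # distance = (fraction of a revolution) * wheel circumference
    return ticks / TICKS_PER_REV * 2.0 * math.pi * WHEEL_RADIUS

odoLeft, odoRight = rb.get_odometers()
print(ticks_to_distance(odoLeft))  # e.g. 200 ticks -> about 0.188 m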

3.4 Sonars
To get the measurement of a given sonar, you have to use the rb.get_sonar(name) function;
name can be "front", "back", "left" or "right". For example, for the front sonar, the command is:

distFront = rb.get_sonar("front")

distFront will contain the last measurement of the sonar.

You can also use rb.get_multiple_sonars(names) to acquire several sonars simultaneously. For
example, if you need both the front and left sonars, the command is:

names = ["front","left"]
distFront,distLeft = rb.get_multiple_sonars(names)

Note: A new sonar measurement is performed every 100 ms. If you acquire the sonar every 25 ms,
you will get the same value 4 times.

3.5 Controlling the robot


The functions that control the robot are in the "control.py" file. To use it, you need to import it
in your program:

import control
import rob1a_v02 as rob1a

and to create an instance of the robot controller before using its functions:

ctrl = control.RobotControl() # create a robot controller

The control can be tested using the test_move() function, here performing an in-place rotation to the
left for a duration of 10 seconds:

rb = rob1a.Rob1A() # create a robot


spd_left = -50 # define speed of left wheel
spd_right = 50 # define speed of right wheel
duration = 10 # move for 10 seconds
ctrl.test_move(rb,spd_left,spd_right,duration)

3.6 Filtering the sensors



The filtering functions are coded in the "filt.py" file. To use them, you need to import "filt.py" in
your program:

import filt

and to create an instance of the filter before using its functions:

flt = filt.Filter() # create a filter

then you can modify the parameters. For example, if you use an MA filter (§ 5.2.1), you can set
its order to 4 with:

flt.set_ma_order(4)

When using multiple sonars, you need one filter per sonar. For example, if you need to
use the front and left sonars, you will have to set up both filters:

fltFront = filt.Filter() # create a filter for the front sonar
fltLeft = filt.Filter() # create a filter for the left sonar
fltFront.set_iir_a(0.8) # use the IIR filter with a1=0.8 for the front sonar
fltLeft.set_ma_order(2) # use the MA filter with order 2 for the left sonar

Taking again the left sonar MA filter example, if rawVal is the measured value, the filtered value,
filtVal, is simply given by:

filtVal = fltLeft.ma_filter(rawVal)

Sonars do not always detect something. When the nearest obstacle in the sonar cone is more
than 1.5 meters away, the sonar gives a zero distance. It is not a good idea to filter this data! The
best thing to do is to remove these null values. However, after a certain movement of the robot,
a new non-zero distance may occur. The filter then contains in its memory old values that may be
very different from the new measurement, and it can do strange things. If this is the case,
you can reset the filter using a reset function. For the left sonar filter defined above, this is done
by:

fltLeft.ma_reset()

3.7 Tips & tricks


Useful information

When the robot stops for more than 5 seconds, the scoring restarts from zero (the points already
acquired are lost and the waypoints turn red again). When using the Python function
time.sleep(duration), be careful not to pass negative values for duration: this stops the automatic
evaluation program that computes your mark by executing the code submitted on MOODLE.
Before starting the simulation, you can change the position and the orientation of the robot to
place it anywhere on the track. Visual explanations are given at the end of the installation
video (https://moodle.ensta-bretagne.fr/mod/resource/view.php?id=40622).

4 Control
In this section, we will describe the main functions to control the autonomous motion of the
robot.

4.1 In-place turn


The easiest way for the Rob1A robot to change its direction is to rotate without changing place.
The robot has three wheels: two main wheels in front, with motors, and a third wheel (castor
wheel) at the back. The only purpose of the castor wheel is to stabilize the robot horizontally.
This wheel is passive and therefore has no actuator. The movement is fully controlled by the two
main wheels. The two Rob1A actuators are the two motors that control the rotation of the
left and right main wheels. The radius of the main wheels is 3 cm and the distance between the
main wheels is 12 cm. The rotation can be controlled using either the odometers or the compass.

4.1.1 Measuring odometers

An odometer is a sensor measuring the distance traveled by a vehicle. For this, Rob1A uses
rotary incremental encoders. Placed on the shaft of the wheel, the encoder produces pulses
as the wheel rotates. The detection and analysis of these pulses give a
number of "ticks" proportional to the angular movement of the wheel. The sensor can also
indicate the direction of rotation of the wheel (clockwise or anti-clockwise). We use a sensor
that delivers 200 ticks per complete wheel revolution. Pulse processing is considered perfect.
At the beginning of the simulation, the number of "ticks" is 0 for the left and right main wheels.
When the robot moves, the number of "ticks" is updated according to the rotation of the wheels.
The number of "ticks" increases as the wheel moves forward and decreases as the wheel moves
backward.

4.1.2 Measuring heading with the compass

The heading of the robot is an angle varying from 0 to 360 degrees. The angle is 0 degrees when
the robot is oriented to the North. The orientations to the East, South and West are respectively
90, 180 and 270 degrees. The measurements of the compass are biased and corrupted by noise.
Filtering the data will be useful to reduce the noise and to estimate the bias.

4.2 Performing in-place turn


To perform an in-place turn, the wheels must rotate at the same speed but in opposite directions. To control the angle of rotation, you have the choice of sensor: you can use either the
odometers or the compass.

4.2.1 In-place turn with odometers

If the left wheel turns forward by a given number of "ticks" and, at the same time, the right
wheel turns backward by the same number of "ticks", the robot will orient itself to the right
without changing position (rotation in place).
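
During such a turn of angle θ, each wheel travels an arc of length (L/2)·θ, where L is the distance between the wheels. A minimal sketch, assuming the rb functions of section 3; turn_ticks and turn_right_odo are illustrative helper names, and the geometry (wheel radius 3 cm, wheel spacing 12 cm) comes from section 4.1:

import math
import time

WHEEL_RADIUS = 0.03   # m (section 4.1)
WHEEL_BASE = 0.12     # m, distance between the two main wheels
TICKS_PER_REV = 200

def turn_ticks(angle_deg):
    # arc traveled by each wheel for an in-place turn of angle_deg
    arc = (WHEEL_BASE / 2.0) * math.radians(angle_deg)
    # convert the arc length into odometer ticks
    return arc / (2.0 * math.pi * WHEEL_RADIUS) * TICKS_PER_REV

def turn_right_odo(rb, angle_deg, spd=30):
    target = turn_ticks(angle_deg)   # about 100 ticks for 90 degrees
    odoLeft0, odoRight0 = rb.get_odometers()
    rb.set_speed(spd, -spd)          # left forward, right backward
    while True:
        odoLeft, odoRight = rb.get_odometers()
        if abs(odoLeft - odoLeft0) >= target:
            break
        time.sleep(0.05)
    rb.stop()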

4.2.2 In-place turn with compass

The robot must rotate until the measured heading is close to the desired heading. The compass
measurements are noisy and will have to be filtered before being used (except for the Qualify1
task).
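
A minimal sketch of a compass-based turn, assuming rb.get_heading() and a filter object as in section 3; heading_error and turn_to_heading are illustrative helpers, and the sign convention (heading increasing clockwise, following section 4.1.2) is an assumption:

import time

def heading_error(target, current):
    # signed smallest angle from current to target, in [-180, 180)
    # (handles the 0/360 degrees wrap-around)
    return (target - current + 180.0) % 360.0 - 180.0

def turn_to_heading(rb, flt, target, spd=25, tol=2.0):
    while True:
        hdg = flt.ma_filter(rb.get_heading())  # filtered heading
        err = heading_error(target, hdg)
        if abs(err) < tol:   # close enough to the desired heading
            break
        if err > 0:
            rb.set_speed(spd, -spd)   # turn right (heading increases)
        else:
            rb.set_speed(-spd, spd)   # turn left
        time.sleep(0.05)
    rb.stop()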

4.2.3 Python functions

To perform the in-place turn, the main Python functions are:

1. The left and right wheel motors are controlled by the rb.set_speed() function.
2. The values of the odometers are acquired with the rb.get_odometers() function.
3. The value of the heading is acquired with the rb.get_heading() function.

Section 3 explains how to use these functions.

4.3 Measuring distances with sonars


Sonars (or ultrasonic range sensors) are sensors designed to measure the distance between
the robot and its environment. The sonar transmitter sends a short ultrasonic pulse (typically 40
kHz) and waits until an echo returns to its receiver. If there is nothing in front of the sensor, no
echo is returned and the returned value is 0. If there is something, an echo is returned and the
sonar measures the travel time t_e between transmission and reception. Knowing, with some
precision, the speed of sound in air c_air, the distance is simply d_e = c_air t_e / 2. The use of
infrared light instead of ultrasound is more accurate and faster, but such sensors, called lidars,
are still expensive and we continue to use sonars on low-cost robots. The sonar can detect an
object up to a given distance called the maximum range. In our case, the maximum range is 1.5
m.
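
As a quick numeric check of the formula (the constant and helper name are illustrative):

C_AIR = 340.0  # m/s, approximate speed of sound in air

def echo_to_distance(t_e):
    # d_e = c_air * t_e / 2 (the pulse travels to the obstacle and back)
    return C_AIR * t_e / 2.0

print(echo_to_distance(0.005))  # a 5 ms echo -> 0.85 m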

4.4 Performing a linear motion


On a real robot, if the left and right wheel motors receive the same command, they will not
rotate at exactly the same speed. Odometer measurements can be used to speed up or slow
down motors to run at the same speed. In this tutorial, to simplify your work, this effect is
not simulated and both motors run perfectly at the same speed when they receive the same
command. For the tutorial to be feasible in 8 hours, another effect called "dead zone" was
removed from the simulation. This effect can be noticed when a motor receives a command
that is too low. The motor does not start and emits an audible "bzzzzz" because of the too
low control current. The "dead zone" of a motor is the command interval for which this effect
occurs.

The linear motion is therefore quite simple to perform: you only need to set the same speed
and the same direction on the two main wheels.
You have several ways to terminate the motion:

1. stop after a duration: this is very easy, but the distance travelled will depend on the battery level, so it is not a good idea (you can still use it for testing purposes),
2. stop after a distance measured by the odometers,
3. stop at a given distance from a wall using the front sonar (see the sketch below).
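
A minimal sketch of option 3, assuming the rb and filter objects of section 3; move_until_wall is an illustrative helper:

import time

def move_until_wall(rb, flt, stopDist, spd=40):
    # drive straight and stop at stopDist meters from the front wall
    rb.set_speed(spd, spd)
    while True:
        d = flt.ma_filter(rb.get_sonar("front"))
        # the sonar returns 0 when nothing is within 1.5 m (section 3.4):
        # ignore these null values
        if d > 0 and d <= stopDist:
            break
        time.sleep(0.1)  # a new sonar value arrives every 100 ms
    rb.stop()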

4.4.1 Python functions

To perform the linear motion, the main Python functions are:

1. The left and right wheel motors are controlled by the rb.set_speed() function.
2. The values of the odometers are acquired with the rb.get_odometers() function.
3. The value of the front sonar is acquired with the rb.get_sonar("front") function.

Section 3 explains how to use these functions.

4.5 Design a controller for wall following

In the AdvTrk challenge, Rob1A has to follow the right wall at a constant distance of 0.5 m.
This distance is from the center of the wall to the center of the robot. The areas where you can
score points in wall following are the 4 red rectangular areas. You get the maximum of points if
your robot moves close to the longer median of these rectangles.
Consider that the set point (or reference) is a distance of 0.4 m to the right wall; the control
error is the difference between the set point and the filtered distance measured by the right
sonar. If Rob1A is too close to the wall, the error is positive and Rob1A will have to turn to the left.
On the opposite, if Rob1A is too far from the wall, the error will be negative and Rob1A will have to
turn to the right. This error will be used to control the robot.

4.5.1 "Bang-bang" controller

The simplest controller is the "bang-bang" controller. It applies a constant correction to the
left and right speeds; only the sign of the correction changes. The pseudo code looks like this:

define setPoint
define nominalSpeed
while True:
    measure distWall
    controlError = setPoint - distWall
    if controlError > 0:
        set_speed(nominalSpeed - deltaSpeed, nominalSpeed + deltaSpeed)
    else:
        set_speed(nominalSpeed + deltaSpeed, nominalSpeed - deltaSpeed)
    wait for end of loop iteration

This code does not end. Another test must be added to stop the code when an obstacle occurs
or after a given covered distance. If "deltaSpeed" is too large, the risk is that the robot turns so
much that the right sonar becomes unable to detect the wall. Wall following may work with
a "bang-bang" controller, but it is very tricky to set up.

4.5.2 Proportional (P) controller

Instead of a constant correction, the proportional (or P) controller applies a correction proportional to the control error. The pseudo code of the P controller is:

define setPoint
define nominalSpeed
while True:
    measure distWall
    controlError = setPoint - distWall
    deltaSpeed = kp * controlError
    set_speed(nominalSpeed + deltaSpeed, nominalSpeed - deltaSpeed)
    wait for end of loop iteration

The key issue is how to find the value of kp. We will use an empirical trial and error method.

Note: kp must also take into account the change of units between the error (in meters) and the
speed command (in %).
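
Turning the pseudo code above into Python gives the following minimal sketch, assuming the rb and filter objects of section 3; follow_wall_p and the loop timing are illustrative, the exit conditions are omitted, and kp (including its sign) has to be tuned by trial and error:

import time

def follow_wall_p(rb, flt, setPoint, nominalSpeed, kp):
    loopIterationTime = 0.1  # one command every 100 ms
    while True:
        t0 = time.time()
        distWall = flt.ma_filter(rb.get_sonar("right"))
        controlError = setPoint - distWall
        # a real program would also test exit conditions here
        # (front obstacle, covered distance, ...) and break out
        deltaSpeed = kp * controlError  # kp converts meters into %
        rb.set_speed(nominalSpeed + deltaSpeed,
                     nominalSpeed - deltaSpeed)
        deltaTime = loopIterationTime - (time.time() - t0)
        if deltaTime > 0:
            time.sleep(deltaTime)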

4.5.3 Proportional-derivative (PD) controller

In wall following, the time derivative of the control error can be extremely helpful. When
the robot is parallel to the wall, whatever the distance, the derivative of the error will be very small.
When Rob1A gets closer to the wall, the derivative is negative, and when it gets farther, the derivative is positive. A correction that combines a proportion of the derivative of the error and a
proportion of the error itself is a PD (proportional-derivative) controller.
The pseudo code of the PD controller is:

define setPoint
define nominalSpeed
lastError = 0
derivOk = False
while True:
    measure distWall
    controlError = setPoint - distWall
    if derivOk:
        derivError = controlError - lastError
        deltaSpeed = kp * controlError + kd * derivError
    else:
        deltaSpeed = kp * controlError
    set_speed(nominalSpeed + deltaSpeed, nominalSpeed - deltaSpeed)
    lastError = controlError
    derivOk = True
    wait for end of loop iteration

The key issue is how to find the values of kp and kd. We will use an empirical trial and error
method. The derivOk boolean prevents using an undefined lastError on the first pass through the loop.

If Rob1A turns too much, the right sonar will stop detecting the wall. To avoid this problem, you
can limit the turn rate of the robot by clamping deltaSpeed to a given threshold deltaSpeedMax.
So, before setting the speed, you can add:

if deltaSpeed > deltaSpeedMax:
    deltaSpeed = deltaSpeedMax
if deltaSpeed < -deltaSpeedMax:
    deltaSpeed = -deltaSpeedMax
set_speed(nominalSpeed + deltaSpeed, nominalSpeed - deltaSpeed)

5 Filtering the sensors' measurements

The sonar and compass measurements are limited by the accuracy of these sensors.
The measurements are obviously not perfect because each sensor has a given accuracy. The
accuracy is affected by two types of errors: a systematic error (called a bias) and random errors
(called noise). We use different approaches to deal with these errors.
The bias will be estimated over a large number of measurements and then used to correct them.
The noise will be filtered in real time by a low-pass filter.

5.1 Estimating the bias


The bias will simply be estimated by taking 100 measurements, computing their average and
determining the difference between the mean value and the theoretically expected value.
The sonars do not have a bias, but the compass has a bias of several degrees.
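
A minimal sketch of this estimation for the compass, assuming the robot is still and its true heading is known; estimate_compass_bias and expectedHeading are illustrative names:

import time

def estimate_compass_bias(rb, expectedHeading, n=100):
    total = 0.0
    for _ in range(n):
        total += rb.get_heading()  # raw, biased and noisy heading
        time.sleep(0.01)           # give the sensor time to refresh
    mean = total / n               # averaging reduces the noise
    return mean - expectedHeading  # bias = mean - expected value

# a corrected measurement is then: rb.get_heading() - bias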

5.2 Filtering the noise


The measurements of the sensors are affected by two types of disturbances: Gaussian
noise and noise peaks (or spikes). Gaussian noise is added to each measurement taken
by the sonars and the compass. Noise peaks occur at certain times and correspond to a measured value very different from the measurements made just before. A spike-corrupted
measurement is also called an "outlier". Gaussian noise can be attenuated by applying a linear low-pass
filter to the measurements, while the elimination of noise peaks requires a nonlinear
filter. The noise peaks only affect the advanced challenge. Therefore, for the simpler lite challenge, linear filtering will be enough. The following paragraphs (§5.2.1 and §5.2.2) describe two
types of linear low-pass filters. As it is only necessary for the advanced challenge, the nonlinear
filter is described later, in paragraph §5.3.

5.2.1 Moving average filter

We consider x[n] the measurement and y[n] the filter output at discrete time nT, with n an integer,
T = 1/Fs the sampling interval and Fs the sampling frequency. The generic formulation of finite
impulse response (FIR) filters is:

y[n] = \sum_{k=0}^{N} b_k \, x[n-k] \qquad (5.1)

The moving average (MA) filter is a particular case of FIR filter with all b_k coefficients set to the
same value b_k = 1/(N+1):

y[n] = \frac{1}{N+1} \sum_{k=0}^{N} x[n-k] \qquad (5.2)

N is the order of the filter. For example, a second order MA filter will take the average of the last
3 measurements.

Although very simple to implement, this filter will do the job in many robotic applications. The
only parameter to define is the order of the filter. This can be done empirically using a trial and
error method. It is also possible to precisely define the frequency response of a FIR filter:

H(\omega) = \sum_{k=0}^{N} b_k \exp(-j \omega k) \qquad (5.3)

with ω = 2πf. In the MA case, all b_k = 1/(N+1). Figure (Fig. 5.1) shows the frequency response
for MA filters of order 2 and order 10. The order 10 is a better low-pass filter, but it requires more
calculations. Another problem is that the MA filter introduces a delay between the unfiltered
output and the filtered output, which increases with the order of the filter.

Figure 5.1: Frequency response of the MA filter at a sampling frequency of 10 Hz (100 ms sampling
period). The cutoff frequency at half-power attenuation is 1.55 Hz for order 2 and 0.4
Hz for order 10. Order 10 is therefore a better low-pass filter.

Note: In some applications, the MA filter may not work. Butterworth, Chebyshev or other filter
designs can be used, and for these filters a synthesis tool is used to define the coefficients b_k (which
are no longer equal to each other as in the MA case). In Python, the SciPy library (scipy.signal)
offers tools for this synthesis.
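
As an illustration, here is a possible implementation of equation (5.2). The real filter has to go into the lab's filter file (filt.py / sonar_filter.py), so this MovingAverage class is only a sketch:

from collections import deque

class MovingAverage:
    def __init__(self, order):
        # keep the last N+1 samples for an order-N filter
        self.buf = deque(maxlen=order + 1)

    def ma_filter(self, x):
        self.buf.append(x)
        # during warm-up fewer than N+1 samples are averaged
        return sum(self.buf) / len(self.buf)

    def ma_reset(self):
        # forget the old samples (e.g. after sonar null values)
        self.buf.clear()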

5.2.2 First Order Recursive IIR Filter

While the MA filter uses only the measurements, the recursive filter uses both the last measurements
and the last outputs of the filter. As the filter is recursive, the effect of a measurement extends
infinitely far into its output. That is why it is called an infinite impulse response (IIR) filter.
The generic formulation of IIR filters is as follows:

y[n] = \sum_{l=1}^{M} a_l \, y[n-l] + \sum_{k=0}^{N} b_k \, x[n-k] \qquad (5.4)

The simplest recursive filter is the first order one, with N = 0 and M = 1:

y[n] = a_1 y[n-1] + b_0 x[n] \qquad (5.5)

To prevent the filter from changing the measure, its gain must be equal to one. To achieve this,
we need:

b_0 \in ]0, 1] \qquad (5.6)
a_1 = 1 - b_0 \qquad (5.7)

The only parameter is b_0. It can be defined empirically using a trial and error method. As
for the MA filter, we can be more rigorous and use the frequency response to control the level of
filtering:

H(\omega) = \frac{b_0}{1 - a_1 \exp(-j \omega)} \qquad (5.8)

with ω = 2πf. Figure (Fig. 5.2) shows the frequency response for b_0 equal to 0.6 and 0.22. The
cutoff frequencies are similar to those of MA filters of order 2 and order 10, respectively.

Figure 5.2: Frequency response of the recursive IIR filter at a sampling frequency of 10 Hz (100 ms
sampling period). The cutoff frequency is 1.55 Hz for b_0 = 0.6 and 0.4 Hz for b_0 =
0.22. The lower b_0, the better the low-pass filtering.
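
As an illustration, a possible implementation of equation (5.5) with a_1 = 1 - b_0; the FirstOrderIIR class is only a sketch, the real code belongs in the lab's filter file:

class FirstOrderIIR:
    def __init__(self, b0):
        # b0 in ]0, 1]; a1 = 1 - b0 gives a unit gain (eq. 5.6, 5.7)
        self.b0 = b0
        self.y = None

    def iir_filter(self, x):
        if self.y is None:
            self.y = x  # initialize the state on the first sample
        self.y = (1.0 - self.b0) * self.y + self.b0 * x
        return self.y

    def iir_reset(self):
        self.y = None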

5.2.3 First low-pass filter

Both the lite and advanced challenges require a linear low-pass filter. Therefore, you will need to
implement at least one of the two filters studied previously: MA or IIR.

Exercise 1 - Linear low-pass filter


We will use the "linear_filter.py" program to develop and test the low-pass filtering of the front
sonar measurements. The only thing to do in "linear_filter.py" is to choose the kind of filter
you will implement (modify line 30 or 31). You can also change the parameters of the filter by
modifying line 14 or 15.

The code of the filter must be placed in the "sonar_filter.py" file. The ma_filter() and iir_filter()
functions in "sonar_filter.py" do nothing: the filtered value is just a copy of the measured value.
You will have to modify one of these two functions to implement your filter.

Note: If you do not know which filter to use, you can implement both and choose afterwards. However,
it is better to choose one and spend time improving its control parameters when doing the
challenge.

5.3 Median filter - the anti-spike filter

The median filter is commonly used to remove spike noise. It can replace the MA or IIR filter, or
it can be used before them. The median of a set of (2M+1) values is the value such that M values
are greater than or equal to it, while the other M values are smaller than or equal to it. For
example, consider this set of 5 measured distances with a spike:

{0.87, 0.91, 0.89, 2.51, 0.92}

The median of this set is 0.91. It is obtained by ranking the values in ascending (or descending)
order:

{0.87, 0.89, 0.91, 0.92, 2.51}

and taking the value in the middle. The median filter takes the median value of the last 2M+1
measurements. The median filter belongs to the family of rank order filters.

A - Instructions to install the simulator


Installing the simulator
The easiest way to run the simulator is to use an ENSTA Bretagne computer running Linux
CentOS.
The simulator can be downloaded using the links (Office365 or FileSender) indicated on
MOODLE:

https://moodle.ensta-bretagne.fr/course/view.php?id=1439#section-4

To install the simulator, you have to follow these steps:

• start the computer in Linux CentOS
• download the archive file "lab-code-ue22sal-20190214.tgz"; in principle it will be in the
"Telechargement" directory
• move the archive file to your working directory
• open a terminal and use the cd command to go to your working directory
• decompress the archive file by typing:

tar xfz lab-code-ue22sal-20200206.tgz

• go to the simulator directory by typing:

cd ue22sal/V-REP_PLAYER_V3_5_0_Linux

• start the simulator by typing:

./vrep.sh

• when the simulator has started, load the scene by clicking on the "Open scene ..." command
in the "File" menu. Then click the green vertical arrow to move to the parent directory,
click on the "scenes" folder, and double-click on the scene file "basic.ttt".
• the simulation is started by clicking on the "Start simulation" command in the "Start" menu.
• open a second terminal and use the cd command to go to your working directory
• go to the Python files directory by typing:

cd ue22sal/lab/test

then start the test program:

python3.7 test.py

If you use another Linux distribution, the python3 command may differ. For example, on
Ubuntu, the command is:

python3 test.py

• go back to the simulator: if all is OK, the robot performs a sequence of in-place
turns and linear paths.
• on the simulator window, to give more space to the scene, you can close the two panels
"Scene hierarchy" and "Model browser".

Instead of running your Python programs from a terminal, you can also use PyCharm
or Spyder3.

Note: you will find videos on MOODLE explaining how to install the simulator and how to use
either Spyder3 or PyCharm.

Installing the simulator on your own computer

For a Linux computer with Python3 and numpy installed, you can use the above procedure.

For Windows computers, using a Linux virtual machine is not recommended as the simulation
can be too slow. The solution is to install the Windows version of the V-REP player and then add
the Python files and the scene files taken from MOODLE.
