Chapter 2
PATH PLANNING IN
UNKNOWN ENVIRONMENTS
INTRODUCTION
Routing entered the realm of robotics science a long time ago. Well before the advent of robots, attention had for many years been directed towards two key challenges, known as the "Traveling Salesman" and "Piano Mover's" problems, and solutions to them have been suggested in a variety of ways. Routing was first examined for robotic manipulators and then for other platforms such as wheeled and flying robots. Since the emergence of humanoid robots, a number of approaches have been proposed to route them, each with its strengths and weaknesses. The breadth of routing applications is such that routing is still considered an open problem.
In this chapter, two terms currently used in robot navigation should be
mentioned at the outset:
Guidance: In robotics, this term denotes the movement of a robot from one point to another along an obstacle-free path. In the guidance process, the robot can either be equipped with a predefined path to follow or deliver itself to the target point based on sequential identification of the surrounding environment. In the latter case, guidance becomes highly dependent on sensor data, with the current position and direction being updated in most cases.
Path planning: Path planning entails gathering data and evaluating candidate paths along which one or more conditions are fulfilled (e.g., bypassing obstacles, producing the shortest path length, imposing minimum rotation on the robot, and ensuring the greatest safety). If a fixed path is used for guidance, this route usually has to be pre-specified.
Robot guidance techniques towards a target depend on the answers to two key questions:
1. Are the starting point and final position of the robot pre-defined?
If the final position (target) is determined beforehand, its coordinates can be given to the robot. Otherwise, the robot needs to set off on a quest to detect its target point. As the robot traverses the path, its position can be identified either by the robot on its own or by using external instruments such as a GPS receiver.
2. Are the obstacles pre-specified?
Either the obstacles are made known by providing the robot with their coordinates, or the robot detects them with the aid of its sensors.
In order for the robot to successfully perform its missions, its path should be designed beforehand. The pre-planned motion of the robot applies in various situations, such as movement over short distances, mapping (drawing) and cleaning the environment (tasks in which the environment must be covered completely by the robot), and warfare and espionage purposes of increasing concern for national security.
Due to the lack or insufficiency of information in unknown environments, the robot needs to be capable of designing the route on its own and pursuing this path towards the target. On the basis of the foregoing, this chapter deals with path planning approaches for robots in unknown environments and puts forward some recent routing methods along with conventional ones. The second section covers the Bug algorithms family, particularly Bug 1, Bug 2, ALG 1 & 2, DistBug, and TangBug, and will present a couple of other members of this family [1]. Likewise, different conventional methods, such as the D*, Com, Class, and Rio algorithms, will be propounded.
The final part of this chapter provides a full description of two novel routing techniques named "Vector Field Histogram" [2] and "Multi-Strategic" [3]. Vector Field Histogram is a long-established approach for both unknown and known environments, and it works by gridding the environment and drawing a histogram graph. By the same token, the multi-strategic method resolves the routing issue in unknown environments by simultaneously using multiple strategies.
BUG ALGORITHMS FAMILY
Robot guidance in a two-dimensional environment is defined by a starting point and a target point. The robot seeks an obstacle-free path as it proceeds from the starting point towards the final position (target). Figure 6 depicts three examples of two-dimensional environments. In each of them, the robot needs to start from the box-shaped starting position and move, along a path free of uncertainty or difficulty, until it reaches the circle-shaped target point.
Lumelsky was a trailblazer in path planning, conducting early works in this field [4-18]. Prior to this debut, navigation of a robot in unknown environments followed a tortuous route produced by algorithms such as Pledge [19] and Tarry [20]. Unfortunately, the Pledge algorithm proved incapable of taking a robot to a specified position. Aside from that, applying these two algorithms made the robot's traversed path extremely lengthy. Nevertheless, some unprecedented creative approaches, which require access to information about their environmental settings, are available.
Bug algorithms are the primary path planning strategies that guarantee delivering the robot to the target point. These algorithms are highly convenient for robots with real-time (prompt and immediate) performance requirements.
The prime objective of Bug algorithms is guiding the robot in environments with a scarcity of information from a starting point S towards a target point T. In the case that no path can be identified for reaching the target point, the algorithm reports "unattainability of the target point" as its result.
Bug algorithms can be executed on all kinds of robots equipped with tactile or range-estimation sensors, provided they have a method for local position determination (for instance, an odometer-like distance counter or a GPS system). By implementing such an algorithm, the robot is able to find its route towards the target point automatically. Lumelsky has employed this algorithm on robot manipulators whose end-effectors are required to reach a specific position [14, 21, 22]. The movements of robot manipulators are virtually the same as the motions of mobile robots in an unknown environment; the sole distinction between them is that manipulator arms have a rigid base.
Figure 6. Some examples of 2D environments for robot guidance.
A further potential application of this algorithm is scrutinizing and searching an environment within a particular range to end at the target point [23]. Once the robot can locate the target's position, it approaches the target so that it is possible to inspect this point more precisely. One model of such robots is used in nuclear reactors to identify radioactive substances. If the robot detects a remote suspicious object, it needs to draw near to it and make a full examination from the immediate vicinity. This entails specifying a strategy for the robot to proceed towards the suspected object in environments containing a significant number of objects.
Sensors were first brought into Bug algorithms by Lumelsky and Skewis [12]. The use of sensors led to the Bug 2 algorithm, which shortened the path length through the creation of shortcuts. Subsequently, a serious contention arose over the best point at which a robot should bypass and leave obstacles; Kamon therefore established the "DistBug" algorithm to this end. "TangentBug" is another algorithm with which the robot can employ sensors to recognize the surrounding environment [24].
The key commonalities and discrepancies among the Bug algorithms show up in the approaches they adopt to ensure the accomplishment of their missions. SenseBug is a newly developed algorithm designed to be matched with sensors [25]. In this algorithm, three performance criteria are laid out, viz., the frequency of data collection, the path length, and the number of search efforts. The strategy of this algorithm is based on reducing the frequency of gathering the available information in the environment and the amount of search effort spent aggregating data at any given time. These economies do not hinder the mission from being achieved and accomplished.
The surrounding environment can constantly undergo changes, and only a limited amount of information about it may be available at each moment. Accordingly, the strategy for the robot's guidance should be formulated so that it can steer the robot towards the target point with a minimum quantity of information from the environment (ideally just the target position and the robot's current position).
A few notable path design techniques, in particular A* [26-28], Dijkstra [29], Distance Transform [30-32], Potential Field [33-37], Sampling-based [38, 39], and Piano Mover's problem approaches [40-42], require more data than the two aforementioned strategies, and these approaches may occasionally necessitate a full map. These shortcomings underline that point-to-point guidance is inevitable in unknown environments.
What is more, path planning with Bug algorithms has been used for robot guidance on grassy areas [43], in crop harvesting [44], and in mining operations [45]. An algorithm titled "WedgeBug" has been implemented on mobile robots for surface exploration of the planet Mars by Laubach and Burdick [46].
Langer, Coelho and Oliveira [47] came to the conclusion that path planning in unknown environments could be applied in various major industries such as health care (e.g., a remotely controlled surgical operation), construction, transportation, manufacturing, the military, aerospace (e.g., space research), and entertainment (e.g., video games). In order to meet these burgeoning demands, they persisted in applying path planning to these different domains [41-45, 47, 48]. The "K-Bug" algorithm was simulated in an office room much like a real external environment, and it was shown that the paths planned via this algorithm could keep pace with the paths generated by the A* algorithm.
In short, Bug algorithms provide considerable advantages, among which their simplicity and the ease with which they can be understood and then implemented stand out. At a minimum, these algorithms theoretically guarantee reaching their target.
Notation and Terminology of Bug Algorithms
It is of critical importance to put forward the notation and terminology of the Bug algorithms family prior to explaining these algorithms. The salient symbols are represented as:
S: Starting point.
T: Target point; also stands for the end point.
x: Current position of the robot.
H_i: The i-th collision (hit) point, i.e., the i-th time at which the robot strikes an obstacle boundary.
L_i: The i-th separation (leave) point, i.e., the i-th time at which the robot separates from the obstacle border and approaches the target.
d(a, b): The geometric distance between arbitrary points a and b.
d_path(a, b): The length of the robot's traveled path between two arbitrary points a and b.
R: Maximum range of the position sensors.
R(θ): The free space along the θ direction, between the robot and the first obstacle that intersects this direction.
F: Free space towards the target. If the target lies along the θ direction, F and R(θ) are exactly the same thing.
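To make this notation concrete, the two distance quantities d(a, b) and d_path(a, b) can be sketched in a few lines of Python; the helper names below are illustrative assumptions rather than part of any of the algorithms themselves:

import math
from typing import List, Tuple

Point = Tuple[float, float]

def d(a: Point, b: Point) -> float:
    """Geometric (Euclidean) distance between two points a and b."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def d_path(waypoints: List[Point]) -> float:
    """Length of the robot's traveled path along a sequence of waypoints."""
    return sum(d(p, q) for p, q in zip(waypoints, waypoints[1:]))

# Example: straight-line distance versus actual traveled length.
S, T = (0.0, 0.0), (4.0, 0.0)
detour = [S, (2.0, 0.0), (2.0, 1.5), (4.0, 1.5), T]
print(d(S, T))         # 4.0
print(d_path(detour))  # 7.0, since the robot skirts a hypothetical obstacle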
In addition, the performance of all the aforementioned Bug algorithms has been evaluated in the same theoretical environment in order to contrast them (Figure 7). In this figure, the starting point and the target position are labeled with the letters S and T, respectively. As will be shown shortly, the various algorithms that convey the robot from S to T produce markedly dissimilar and in some cases suboptimal routes.
Figure 7. The theoretical environment to draw distinction between different Bug
algorithms’ performances.
Bug 1 Algorithm
Bug 1, introduced by Lumelsky and Stepanov, is the first algorithm of the Bug family to consider [16-18]. According to the strategy of this algorithm, the robot keeps moving towards the target point until it encounters an obstacle. After coming across an obstacle, the robot circumnavigates it and selects the location nearest to the target. In this algorithm, it is taken for granted that the target position is given to the robot in advance, even though the robot is not able to observe it directly.
The underlying principles of the Bug 1 algorithm are articulated as follows, and their demonstration on the example mission is illustrated in Figure 8.
Step 0. Set i = 0 as an initial value.
Step 1. Increase i by one unit; the robot heads towards the target point until one of the outlined scenarios plays out:
Step 1.1. The target is reached; the algorithm exits successfully.
Step 1.2. An obstacle is encountered; the point at which the impact occurred is labeled "H_i", then go to Step 2.
Step 2. Skirt the obstacle on the left side until the point H_i is met again, and while doing so search for a point (or points) L_i complying with two conditions, viz., "the shortest distance to the target point" and "the possibility of making a beeline for the target."
Step 3. Examine the traveling path to the target point:
Step 3.1. If there exists a point L_i, move to L_i and return to Step 1.
Step 3.2. If there exists no point L_i, the target is "unachievable"; terminate the algorithm.
While in motion, the robot records the geometric length of the path traversed between the i-th hit point H_i and its current position x as d_path(x, H_i). Only points at which d has a minimum value, and from which the robot can proceed towards the target, are taken into consideration. Subsequently, the robot marks one of these minimum points as L_i and, while coming back to H_i, examines whether the target is attainable from there. If the target is inaccessible for the robot, the mission is aborted and failure is reported as "the target is not attainable." All the same, if the robot is able to reach the target, the direction for navigating the obstacle boundary is chosen in such a way that d_path(x, H_i) is minimized. Consequently, the maneuver by which L_i is attained can be carried out and Step 1 is performed again.
The established pseudo-code corresponding to the strategy of this algorithm is encapsulated as follows:
WHILE forever
  REPEAT
    Move towards T
    LET x = robot's current location
  UNTIL ((x == T) OR (an obstacle is encountered))
  IF (x == T) THEN quit//target reached
  LET H = L = x//contact location
  REPEAT
    Follow the obstacle boundary
    LET x = robot's current location
    IF ((dist (x,T) < dist (L,T)) AND (moving towards T from x is possible))
      THEN LET L = x
  UNTIL (x == H)//the obstacle boundary has been fully circumnavigated
  Move to L along the shorter side of the obstacle boundary
  IF (an obstacle is encountered at L in the direction of T)
    THEN quit//target not reachable
ENDWHILE
Following the Bug 1 algorithm, the maximum path traversed by the robot is quantified by the following formula:
d_max = d(S, T) + 1.5 [perimeter(obs_1) + perimeter(obs_2) + ... + perimeter(obs_n)] (1)
In this formula, S and T specify the starting and the end points respectively, and perimeter(obs_i) is the perimeter of the i-th obstacle encountered. A
coefficient of 1.5 arises because the robot first circles each obstacle completely in order to locate the best possible position at which to leave the obstacle border; after storing enough information, it selects that best place and returns to it within at most half a further round.
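As a quick numerical check of equation (1), the bound can be evaluated for a hypothetical scenario in a couple of lines of Python (the obstacle perimeters below are invented for illustration):

def bug1_upper_bound(d_start_target, perimeters):
    """Upper bound of equation (1): straight-line distance plus 1.5 times
    the perimeter of every obstacle the robot may have to circumnavigate."""
    return d_start_target + 1.5 * sum(perimeters)

# Hypothetical scenario: S and T are 10 m apart and two obstacles with
# perimeters of 6 m and 9 m lie between them.
print(bug1_upper_bound(10.0, [6.0, 9.0]))  # 10 + 1.5 * 15 = 32.5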
Figure 8. The general principle of the Bug 1 routing method, for two kinds of target: accessible and inaccessible.
In the Bug 1 algorithm, if the target is attainable, the robot is always guaranteed to reach it. Besides, leaving the obstacle and moving towards the target is done at the best possible place by virtue of searching the entire obstacle border in a complete round. As an illustration, the application of the Bug 1 algorithm in a theoretical environment is depicted in Figure 9.
Bug 2 Algorithm
The Bug 2 algorithm was also devised by Lumelsky and Stepanov [16-18]. Not only is Bug 2 inherently less conservative than Bug 1, it also leaves the boundaries of encountered obstacles more readily. Whereas in Bug 1 the robot inspects all the points of a barrier so that the point nearest to the target can be found, in Bug 2 the complete round around the obstacle is avoided. The strategy is drawn up so that the robot moves along the straight line connecting the start point to the target position, and whenever it comes across an obstacle, it bypasses the obstacle from the left side.
In accordance with the hypothetical mission illustrated in Figure 10, the working principles of the Bug 2 algorithm are listed as follows:
Step 0. First, draw an imaginary line M connecting the starting point to the target point, and then initialize i with the value zero.
Step 1. Add one unit to i and move along the line M towards the target point until one of the following scenarios happens:
Step 1.1. The target is reached; the algorithm terminates successfully.
Step 1.2. An obstacle is encountered; the collision point is marked as "H_i", then run Step 2.
Step 2. Go around the obstacle boundary on the left side until one of the following two scenarios is fulfilled:
Step 2.1. The target is reached; exit from the algorithm.
Figure 9. An example of the employment of Bug 1 algorithm in a
hypothetical environment.
Figure 10. An example of Bug 2 algorithm’s operation in a hypothetical environment.
Step 2.2. A point y on the line M should be found such that d(y, T) < d(H_i, T). If movement towards the target is feasible from it, this point is labeled "L_i" and the algorithm returns to Step 1.
Step 2.3. If the point H_i is encountered again, stop the algorithm; the target point is unattainable.
The adopted pseudo-code for the evaluation of this algorithm is written
as follows:
WHILE forever
  LET Len = the line from S to T
  REPEAT
    Move towards T along Len
    LET x = robot's current location
  UNTIL ((x == T) OR (an obstacle is encountered))
  IF (x == T) THEN quit//target reached
  LET H = x//contact location
  REPEAT
    Follow the obstacle boundary
    LET x = robot's current location
    LET L = intersection of x and Len
  UNTIL (((L is not null) AND (dist (L,T) < dist (H,T)) AND
    (moving towards T is possible)) OR (x == T) OR (x == H))
  IF (x == T) THEN quit//target reached
  IF (x == H) THEN quit//target not reachable
ENDWHILE
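As a companion to the pseudo-code above, the Step 2.2 leave test (a point on the S-T line that is closer to T than the hit point, with a clear way towards the target) can be sketched in Python; the helper names and the tolerance are assumptions made only for illustration:

import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def on_m_line(p, S, T, tol=1e-3):
    """True if p lies (within tol) on the segment connecting S and T:
    the two partial distances then add up to the full segment length."""
    return abs(dist(S, p) + dist(p, T) - dist(S, T)) < tol

def bug2_can_leave(p, H, S, T, path_to_target_clear):
    """Step 2.2: leave the obstacle at p if p is on the S-T line, closer to
    the target than the hit point H, and the way towards T is free."""
    return on_m_line(p, S, T) and dist(p, T) < dist(H, T) and path_to_target_clear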
Figure 11. An example of Bug 2 algorithm’s implementation in a
hypothetical environment.
In a similar vein, if the target point is attainable, this algorithm, analogous to Bug 1, is guaranteed to bring the robot to it. In order for the robot to identify the best place for leaving the obstacle's barrier, it carries out an extensive search along the boundary. The implementation of this algorithm in the chosen environmental settings is exemplified in Figure 11.
In the Bug 2 algorithm, the maximal mileage for the robot (length of the traversed path) is computed via the following equation:
d_max = d(S, T) + 1/2 [n_1 · perimeter(obs_1) + n_2 · perimeter(obs_2) + ... + n_k · perimeter(obs_k)] (2)
where n_i is the number of times the straight line connecting S and T crosses the boundary of the i-th obstacle.
Since the direction in which the obstacle is bypassed (from the left or the right side) is predetermined and constant along the route, the distance traversed by the robot for given target and starting points may be either optimal or suboptimal. As is evident in Figure 12, under the worst-case scenario (in which the robot takes a suboptimal path), the length of the traveled route is approximately equal to this maximum possible distance. Another traversed path resulting from a suboptimal selection is illustrated as an additional case in Figure 13, in which the robot bypasses the obstacle from the left to reach the target and therefore moves across a rather long route; had it bypassed from the right, it could have traversed a considerably shorter path.
It is obvious that the Bug 2 algorithm performs more rapidly than Bug 1. It is worth mentioning that, in both of these algorithms, the robot is assumed to be able to recognize only one obstacle in its close proximity. When the robot is equipped with obstacle-detection sensors covering a 360-degree angular range, Bug 1 and Bug 2 can very likely be improved.
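For completeness, the Bug 2 bound of equation (2) can be evaluated in the same way as was done for Bug 1; the crossing counts n_i below are hypothetical values chosen only to illustrate the formula:

def bug2_upper_bound(d_start_target, perimeters, crossings):
    """Upper bound of equation (2): straight-line distance plus half of
    n_i * perimeter(obs_i) summed over the obstacles, where n_i is the
    number of times the S-T line crosses the i-th obstacle."""
    return d_start_target + 0.5 * sum(n * p for n, p in zip(crossings, perimeters))

# Same hypothetical scenario as before, each obstacle crossed twice by the S-T line.
print(bug2_upper_bound(10.0, [6.0, 9.0], [2, 2]))  # 10 + 0.5 * (12 + 18) = 25.0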
Figure 12. Traversing a roughly maximum distance due to selecting a suboptimal
path by the Bug 2 algorithm.
Figure 13. Turning to the left side to circumvent the obstacle (another suboptimal
choice by Bug 2).
ALG1 Algorithm
The extended version of the Bug 2 algorithm devised by Sankaranarayanan and Vidyasagar is widely known as the ALG1 algorithm [49]. One potential shortcoming of Bug 2 is that the robot may travel over parts of its path once or even several times, resulting in longer generated routes. With a focus on minimizing this weakness, the ALG1 algorithm records the collision and leaving points corresponding to the obstacles, keeps them in its memory, and uses them when needed to produce shorter subsequent paths.
The underpinning principles of the ALG1 algorithm are outlined as follows:
Step 0. First, draw an imaginary line M linking the starting point to the target position and assume an initial value of 0 for i.
Step 1. Add one unit to i; the robot continues moving along the line M towards the target point until one of the subsequent scenarios takes place:
Step 1.1. The target is reached; the algorithm quits satisfactorily.
Step 1.2. If an obstacle is encountered, the collision point is labeled "H_i", then go on to Step 2.
Step 2. Bypass the obstacle border on the left side until one of the following scenarios shows up:
Step 2.1. The target is reached; the algorithm finishes successfully.
Step 2.2. A point y with the following features is found:
· The point belongs to the line M.
· For every previously recorded point q (hit or leave point): d(y, T) < d(q, T).
· When the robot stands at point y, it is able to go towards the target point.
Step 2.2.1. The resulting point is tagged "L_i", and Step 1 is iterated.
Step 2.3. Travel in the proximity of the point H_i and then come back to this point. If H_i is reached, walk along the obstacle's boundary and bypass the barrier wall from the left side.
Step 2.4. If the robot returns to the point H_i again, the target is unattainable; the algorithm is over.
An illustration of the execution of this algorithm in a hypothetical environment is given in Figure 14.
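The distinguishing ingredient of ALG1 is its memory of previously recorded points; a minimal sketch of that bookkeeping, under the assumption that hit and leave points are simply accumulated in a list (the class and method names are hypothetical), might read:

import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

class Alg1Memory:
    """Stores the hit and leave points recorded so far, so that a candidate
    leave point is accepted only if it improves on all of them."""
    def __init__(self):
        self.recorded = []

    def record(self, p):
        self.recorded.append(p)

    def acceptable_leave_point(self, y, target):
        # Step 2.2: y must be strictly closer to the target than every
        # previously recorded point (this discourages re-travelling old segments).
        return all(dist(y, target) < dist(q, target) for q in self.recorded)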
ALG2 Algorithm
The revised and extended version of the ALG1 algorithm proposed by Sankaranarayanan and Vidyasagar is named the ALG2 algorithm [49]. Contrary to the algorithms mentioned earlier, the line M (through which the starting point and the target point are connected) is no longer used in this algorithm, and different conditions are set up for the robot to leave an obstacle. The operating principle of the ALG2 algorithm is explained as follows:
Figure 14. An example of the execution of ALG1 algorithm in a
hypothetical environment.
Step 0. Two initial settings are i = 0 and Q = d(S, T); Q refers to the smallest distance between the robot and the target recorded so far.
Step 1. Add one unit to i, proceed towards the target point, and simultaneously update the value of Q with d(x, T) whenever d(x, T) < Q. Continue this process until one of the two following scenarios takes place:
Step 1.1. If the target point is reached, quit the algorithm.
Step 1.2. If an obstacle is detected, the point is labeled "H_i", and then Step 2 is run.
Step 2. Constantly keep the obstacle on the right side (meaning that bypassing the obstacle is started from the left side) and, simultaneously, keep updating Q with d(x, T) whenever d(x, T) < Q. This process keeps running until one of the following scenarios becomes true:
Step 2.1. The target is reached; successful exit from the algorithm.
Step 2.2. A point y is met that satisfies the two following constraints:
· d(y, T) < Q
· The robot is able to proceed towards the target from this point.
Figure 15. ALG2 algorithm's performance in an arbitrary environment.
Step 2.2.1. The found point is labeled "L_i", and the algorithm returns to Step 1.
Step 2.3. Turn around in the proximity of the point H_i and then come back to it. If H_i is reached, bypass the obstacle from the left side.
Step 2.4. If the robot returns to the point H_i again, the target is unattainable and the algorithm is stopped.
As a case in point, the performance of this algorithm in an arbitrary environment is exhibited in Figure 15. It should be emphasized that the ALG2 algorithm has demonstrated enhanced efficiency in comparison with the previously mentioned algorithms, given that its criterion for leaving a barrier does not need the line M in order to be fulfilled.
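Since ALG2 reduces its bookkeeping to a single scalar, the smallest distance to the target seen so far, the update and leave test can be sketched very compactly (hypothetical names, assuming a point robot):

import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

class Alg2Tracker:
    """Keeps Q, the closest the robot has ever been to the target."""
    def __init__(self, start, target):
        self.target = target
        self.q = dist(start, target)   # Step 0 initialisation

    def update(self, x):
        """Called at every step (Steps 1 and 2): shrink Q whenever the
        robot is currently closer to the target than ever before."""
        self.q = min(self.q, dist(x, self.target))

    def can_leave(self, y, path_to_target_clear):
        """Step 2.2: leave the boundary at y if y beats Q and the robot
        can actually head towards the target from y."""
        return dist(y, self.target) < self.q and path_to_target_clear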
DistBug Algorithm
This algorithm was established by Kamon and Rivlin [50]. It applies a range-finder sensor not only for detecting the free space F but also for defining the conditions of leaving an obstacle. The DistBug algorithm performs the following steps:
Step 0. Initialize i with the value 0 and traverse a distance equivalent to the wall thickness (the minimum thickness of a barrier in the environment is considered for this displacement, and its value must be given to the program by the user; this requirement is regarded as a weakness of the DistBug algorithm).
Step 1. Add one unit to i and advance towards the target point until one of the two conditions below is satisfied:
Step 1.1. The target is reached; successful exit from the algorithm.
Step 1.2. An obstacle is encountered; this point is labeled "H_i" and Step 2 is run.
Step 2. Circumvent the obstacle from the left side and continuously update the minimum value of d(x, T), recording it as d_min(T). This process proceeds until one of the following conditions is met:
Step 2.1. The target is observable, meaning that the condition d(x, T) ≤ F holds. This point is labeled "L_i" and Step 1 is run.
Step 2.2. If d(x, T) - F ≤ d_min(T) - Step, where Step is the fixed displacement defined in Step 0, this point should be labeled "L_i" and Step 1 is run.
Step 2.3. If the robot completes a loop and returns to H_i, the target is unattainable and the algorithm is stopped.
An example showing how this algorithm performs in a hypothetical environment is presented in Figure 16.
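The two DistBug leaving conditions of Steps 2.1 and 2.2 translate almost directly into code; the sketch below assumes a range sensor that reports the free space F towards the target, and all names are illustrative:

import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def distbug_can_leave(x, target, free_space_to_target, d_min, step):
    """DistBug leave test while following an obstacle boundary.
    x                     -- current robot position
    free_space_to_target  -- F, the sensed free distance towards the target
    d_min                 -- minimum d(x, T) recorded so far on this boundary
    step                  -- the fixed wall-thickness displacement of Step 0
    """
    d_xt = dist(x, target)
    # Step 2.1: the target itself lies within the sensed free space.
    target_visible = d_xt <= free_space_to_target
    # Step 2.2: guaranteed progress, d(x, T) - F <= d_min(T) - Step.
    guaranteed_progress = (d_xt - free_space_to_target) <= (d_min - step)
    return target_visible or guaranteed_progress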
The TangBug Algorithm
The fundamentals of the TangBug (TangentBug) algorithm were designed by Kamon, Rimon, and Rivlin [24]. The algorithm is built on incorporating distance-finding sensors into the process of generating a map of the local environment for the robot, and TangBug strives to shorten the route towards the target point as much as possible. The ideal case would be when the robot's sensors can determine the distance to the surrounding obstacles with respect to all of their corners and edges.
Figure 16. Illustration of the performance of the algorithm Distbug in an
arbitrary environment.
Figure 17. Obstacle recognition with the aid of sensors.
However, it is practically impossible for the robot's sensors to scan all the angles of the obstacles, so the distance between the robot and the barriers cannot be determined continuously; it is determined discretely instead.
To give an example, the distance between the robot and an obstacle may be calculated after each rotation of 5 degrees. However, there could be several obstacles between these angle intervals that remain outside the robot's vision under this strategy. In addition, the robot's sensors can only cover a specified range and cannot recognize barriers beyond it, so some obstacles cannot be detected.
Figure 18. Obstacle recognition with the help of limited-range sensors.
Figure 19. Robot’s straight movement towards the target before sensing the obstacle.
The working principles of TangBug are formulated akin to the Bug 2 algorithm:
Step 0. Approach the target point until the sensors detect an object interrupting the direct path towards the target.
Step 1. Proceed towards one of the discontinuity points e_L or e_R (Figure 20). The principle by which the robot chooses between these two points is the selection of the shortest path to the target. As made apparent in Figure 21, the robot selects its way according to this algorithm in such a manner that the quantity min{d(x, e_L) + d(e_L, T), d(x, e_R) + d(e_R, T)} is minimized.
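Step 1's choice between the two discontinuity points is just a comparison of two candidate path lengths; a one-function Python sketch (names assumed for illustration) is:

import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def choose_discontinuity(x, e_left, e_right, target):
    """Pick the discontinuity point that minimises d(x, e) + d(e, T),
    as prescribed in Step 1."""
    cost_left = dist(x, e_left) + dist(e_left, target)
    cost_right = dist(x, e_right) + dist(e_right, target)
    return e_left if cost_left < cost_right else e_right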
Surround Tangential Chart
To begin with, consider an environment similar to Figure 22a, composed of some obstacles with different shapes, a starting point, and a target. In the next stage, according to Figure 22b, the convex vertices of each obstacle should first be determined. The specified vertices, the starting point, and the target position are then connected by straight lines, subject to the condition that these lines do not intersect the barriers. The resulting map, shown in Figure 22c, is known as the surround tangential graph. It can be substantiated that this graph contains the optimal route from the starting point to the position of the target. The optimal path for the environment under consideration is depicted in Figure 22d.
Figure 20. The start point, target point and two discontinuity points.
Figure 21. Robot’s motion towards discontinuity points.
Figure 22. The stages of drawing the tangential diagram and selecting the optimal path
in an environment comprising obstacles, starting points, and targets.
Local Tangent Graph
If the robot is incapable of extracting global environmental information from its surroundings, it aims instead to draw a local map; this approach is referred to as the Local Tangent Graph (LTG). Figure 23 serves as an illustration of this graph. To generate the local tangent graph, the collected information is applied in two functions, r(θ) and F. The function r(θ) computes the distance to the first visible obstacle along the direction θ, and by the same token the free space in front of the robot towards the target is quantified by the function F. Utilizing this information, calculation of the local tangent graph nodes and of the optimal route is performed as follows:
A. Nodes resulting from examining the function F:
· If d(x, T) ≤ F, the target is visible and a node named T is
designated on it.
· If F ≥ R, no obstacle is observed in the direction of the
target point; accordingly, the node T is designated along
the target direction. A case in which these nodes have been created is
demonstrated in Figure 23.
B. Nodes resulting from examining the function r(θ):
· Whenever a discontinuity in the function r(θ)
appears, a node is made along θ. Examples of these nodes,
numbered 1, 2, 3 and 4, are exhibited in Figure 23.
· If r(θ) = R (implying that the distance of the obstacle is
equivalent to the maximal range of the sensor) and this is followed by a
reduction of r(θ), a node will be added along θ. In Figure 23,
node 5 is an instance of such nodes.
· If r(θ) < R, and thereafter r(θ) becomes equal to R, a node is
assigned along θ.
C. Determining the optimal path:
· For each node, the distance function d(n_i, T) is determined, in
which n_i and T stand for the i-th node and the target node,
respectively.
· The node that holds the lowest value of the distance function
d(n_i, T) is selected as the optimal node.
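Under these rules, building the node set of the local tangent graph from a sweep of range readings r(θ) can be sketched as follows; the uniform angular sampling, the jump threshold and all names are assumptions made purely for illustration:

import math

def ltg_nodes(r, R, jump_threshold):
    """Return the sample indices at which LTG nodes are created, given
    range readings r[k] (capped at the maximum sensor range R)."""
    nodes = []
    for k in range(1, len(r)):
        prev, curr = r[k - 1], r[k]
        if prev < R and curr < R and abs(curr - prev) > jump_threshold:
            nodes.append(k)        # rule B1: discontinuity in r(theta)
        elif prev >= R and curr < R:
            nodes.append(k)        # rule B2: reading drops below the maximum range
        elif prev < R and curr >= R:
            nodes.append(k - 1)    # rule B3: reading climbs back to the maximum range
    return nodes

def optimal_node(node_points, target):
    """Rule C: among the candidate node positions, pick the one closest to the target."""
    return min(node_points,
               key=lambda n: math.hypot(target[0] - n[0], target[1] - n[1]))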
Local Minimum
Figure 24 vividly demonstrates that, although the straight motion of the robot towards the target yields a shorter distance, the presence of an obstacle between the robot and the target can, in certain circumstances, completely separate them. To put it differently, the robot is situated in a local minimum position: being stuck against the obstacle keeps it away from the target even though the target is genuinely close. In such situations, broader aspects of the environment should be considered in order to make the robot proceed towards the target point.
Figure 23. An example of localized tangent diagrams.
Figure 24. Robot getting away from the target due to local minimum position.
If the robot is trapped in a local minimum, the TangentBug algorithm adopts the method of moving along the obstacle's boundary. In this situation, the robot picks one direction in which to circumvent the barrier by means of the local tangent graph and afterwards continuously updates two corresponding variables, defined as below:
d_followed(T): The minimum distance to the target observed so far along the followed obstacle boundary.
d_reach(T): The minimum distance from the point v to the target point.
At every moment, the TangentBug algorithm monitors the space available to the robot and procures the point v at which the distance function d(v, T) has its minimum value. Into the bargain, the value of d(v, T) supersedes the current value of d_reach(T). The process of following the barrier's boundary continues until one of the following two conditions is met:
· If d_reach(T) < d_followed(T): the robot leaves the boundary and resumes moving towards the target.
· If the robot entirely bypasses the obstacle and would have to perform this task again: the target is unattainable, and the mission aborts.
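The bookkeeping just described amounts to maintaining two scalars at every step; a simplified sketch (hypothetical names, and a simplification of the full TangentBug logic) is:

import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

class BoundaryFollowState:
    """Tracks d_followed(T) and d_reach(T) while skirting an obstacle."""
    def __init__(self, target):
        self.target = target
        self.d_followed = float("inf")  # best distance met along the boundary
        self.d_reach = float("inf")     # best distance reachable in the sensed free space

    def update(self, boundary_point, sensed_free_points):
        self.d_followed = min(self.d_followed, dist(boundary_point, self.target))
        for v in sensed_free_points:
            self.d_reach = min(self.d_reach, dist(v, self.target))

    def should_leave_boundary(self):
        """Leave the boundary as soon as the sensed free space offers a point
        strictly closer to the target than anything met along the boundary."""
        return self.d_reach < self.d_followed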
The pseudo-code of the TangentBug algorithm for the environment displayed in Figure 25 is expounded as follows:
Figure 25. Routing strategy in TangentBug algorithm in an environment with
multiple obstacles.
WHILE forever
  LET w = T
  REPEAT
    x' = x//robot's previous location
    Update x by moving towards w
    IF (no obstacle is detected in direction w) THEN w = T
    ELSE
      LET eL and eR be the discontinuity points
      IF ((dist (x,eL) + dist (eL,T)) < (dist (x,eR) + dist (eR,T)))
        THEN w = eL ELSE w = eR
  UNTIL ((x == T) OR (dist (x',T) <= dist (x,T)))//target reached or local minimum
  IF (x == T) THEN quit//target reached
  LET H = L = x//contact location
  LET dir = direction of the chosen discontinuity point (left or right)
  REPEAT
    LET w = the discontinuity point in direction dir
    IF (dist (x,T) < dist (L,T)) THEN L = x
    Update x by moving towards w
  UNTIL ((dist (x,T) < dist (L,T)) OR (x == T) OR (x == H))
  IF (x == T) THEN quit//target reached
  IF (x == H) THEN quit//target not reachable
ENDWHILE
The resultant trace of implementing the TangentBug algorithm in an
arbitrary environment is sketched in Figure 26.
Figure 26. Implementation of the algorithm TangentBug in an arbitrary environment.
Other Bug Algorithms
Besides the preceding algorithms, there exist several alternative algorithms that operate on the basis of the Bug approach, each of which strives to optimize a particular aspect of it by means of auxiliary apparatus (e.g., sensors, GPS, camera, map, etc.). These algorithms have not attracted ubiquitous attention; on this ground, it suffices to introduce them briefly in this section.
HD-I Algorithm
The "HD-I" algorithm is analogous to the Rio1 algorithm already mentioned in Section 2-2-4. These two algorithms differ in the orientation in which they follow the obstacle wall. The HD-I algorithm is founded on essential characteristics, namely the distance from the robot to the target, the orientation, and the previously followed directions. This method contributes substantially to minimizing the path.
Ave Algorithm
The "Ave" algorithm, which is, to be exact, an enhanced version of the HD-I algorithm, was presented by Noborio et al. [51]. Its pivotal refinement is in how it specifies the direction along which the robot moves along the barriers' walls. In this algorithm, the estimate of the distance to the target is based not only on the current node's information but also on the information of the nodes previously passed by the robot.
VisBug 21 and 22 Algorithms
The VisBug 21 algorithm, presented by Lumelsky and Skewis [12], enables the robot to follow the path established by the Bug 2 algorithm while using sensors that have a definite range limit. With the aid of these sensors, shortcuts can be found, which in turn shortens the path. The VisBug 22 algorithm has more risk-taking behavior in identifying shortcuts in contradistinction to VisBug 21. This algorithm is effective in decreasing the length of the path traversed by the robot, with the caveat that the route would be lengthened if the shortcuts were no longer usable.
WedgeBug Algorithm
The WedgeBug algorithm was put forward by Laubach and Burdick [46]. Wedges are analyzed in this algorithm, and the first wedge indicates the direction that conveys the robot to the desired target point. If no encounter between an obstacle and the robot occurs, the robot maintains its path towards the target; if an obstacle does arise, the robot proceeds to follow the hypothetical barrier's border. Meanwhile, most of the wedges are evaluated by means of the robot's limited-range sensors. The strength of this algorithm is that it obviates the need for generating either a local or a global tangent graph. It should be borne in mind that the creation of regional (local) or surround tangent graphs is required in algorithms such as TangBug. This algorithm is designed under the following assumptions, which should always be checked:
· No information is available in advance;
· The plan should be produced based on the data collected from
both sensors and cameras;
· The design should be robust, meaning it should provide a proper
mechanism for reacting to changes in the environment;
· The design should be comprehensive and accurate.
CautiousBug Algorithm
The logic of the CautiousBug algorithm was proposed by Magid and Rivlin [52]. This algorithm comprises spiral motions, which lead the robot to successively switch the direction in which it follows the obstacle.
Three-Dimensional Bug (3DBug) Algorithm
The 3DBug algorithm was offered by Kamon et al. [53]. It is, in fact, a refined version of the TangBug algorithm that operates in three dimensions rather than two. Multiple concerns, which are not handled as easily as following the border of the obstacle, arise in this context because the robot scans a large-scale environment.
Axis-Angle Algorithm
Lumelsky and Tiwari framed the general concept of the Axis-angle algorithm [13]. This algorithm was motivated by the central principles of the Pledge algorithm [19] and formulates two variables for leaving the barrier. The first is the angle that the line connecting point x (the current position of the robot) to the target point makes with the line from the starting point to the end point. The second variable is the angle between the line connecting the starting point to the target point and the current velocity vector of the robot. One outstanding advantage of this algorithm, in contrast to the rest of the Bug algorithms, is that range-measurement tools (including sensors) are not involved; for this reason, a merely simple direction indicator can be used to execute this algorithm.
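The two angles tracked by the Axis-angle algorithm require nothing more than atan2; a small Python sketch with assumed names:

import math

def line_angle(a, b):
    """Orientation of the line from a to b."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def wrap(angle):
    """Wrap an angle difference into (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def axis_angle_variables(x, velocity_heading, start, target):
    """First variable: angle between the x-to-target line and the start-to-target line.
    Second variable: angle between the start-to-target line and the current velocity heading."""
    base = line_angle(start, target)
    a1 = wrap(line_angle(x, target) - base)
    a2 = wrap(velocity_heading - base)
    return a1, a2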
Optim-Bug Algorithm
The foundation of the Optim-Bug algorithm was laid by Kriechbaum [54]. Contrary to other algorithms of the Bug family, this algorithm employs an infrared sensor for the acquisition of environmental information, and it can simultaneously use this information to draw a map of the surrounding environment. When the Optim-Bug algorithm has complete knowledge of the environmental situation, it can disregard the step in which the robot follows the barrier's border. When the robot is equipped with this sensor in addition to a map, the algorithm is capable of selecting the shortest path, leading to a sharp decline in the time the robot takes to reach the target without encountering obstacles.
Unknown Bug Algorithm
The Unknown Bug algorithm [54] appears to be the same as the Optim-Bug algorithm, with one exception: some sources of uncertainty are considered in the robot's path. This algorithm computes the optimum route at each phase of the robot's motion and, in order to do so, exploits the available information received from GPS, camera, maps, etc. as well. Another motivation of this algorithm is to minimize the maximum error of the final position of the robot with regard to the target point.
SenseBug Algorithm
The motive for developing the SenseBug algorithm [25] is the need to incorporate robots into building spaces in which the obstacles can be regarded as simple objects of fixed length. Leaving obstacles in this algorithm is comparable with that of the COM algorithm, illuminated in the next section. It is acknowledged that obstacles can change their position in building environments. Based on this premise, it is confirmed that SenseBug can succeed in guiding the robot towards the target. Using a wireless network is imperative in this algorithm to navigate the robot in a given direction so that the maximal path length during the pursuit of the barriers' borders can be reduced.
D* Algorithm
The D* algorithm was proposed by Stentz [55], and its explicit policies differentiate it from the Bug algorithms, the most important of which is the employment of maps (mapping is not permitted in Bug algorithms). Consequently, the D* algorithm demonstrates well-formed and unique features. The motion direction in this algorithm is specified through gridding the map and scoring the resulting grid cells, as illustrated in Figure 28.
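A highly simplified sketch of the grid-scoring idea, a single wavefront of costs spreading out from the target cell, is given below; it is not the full incremental D* machinery, and it assumes the same step costs (1 for horizontal/vertical moves, 2 for diagonal moves) used in the example that follows:

import heapq

def score_grid(rows, cols, target, blocked=frozenset()):
    """Assign every free cell the cheapest cost of reaching the target,
    moving horizontally/vertically (cost 1) or diagonally (cost 2)."""
    INF = float("inf")
    cost = {(i, j): INF for i in range(rows) for j in range(cols)}
    cost[target] = 0
    heap = [(0, target)]
    while heap:
        c, cell = heapq.heappop(heap)
        if c > cost[cell]:
            continue                      # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                nb = (cell[0] + di, cell[1] + dj)
                if nb not in cost or nb in blocked:
                    continue
                step = 1 if di == 0 or dj == 0 else 2
                if c + step < cost[nb]:
                    cost[nb] = c + step
                    heapq.heappush(heap, (c + step, nb))
    return cost

# 5 x 5 field with the target in cell (3, 5) of the book's 1-based numbering,
# i.e., (2, 4) with 0-based indices, and no obstacles.
scores = score_grid(5, 5, (2, 4))
print(scores[(2, 0)])  # cost from the robot's start cell (3, 1): 4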
D* Algorithm Performance without Barriers
In order to gain a better understanding of the D* algorithm, suppose a field with a 5 × 5 grid, as in Figure 28. In the initial state, the robot is placed in the cell (3, 1), and the target is set in the cell (3, 5). It is assumed that the step sizes for moving along the horizontal and vertical directions are the same and weighted with the number 1, and diagonal stepping is weighted with the number 2. Under this assumption, the D* algorithm constructs Table 3 for the cells adjacent to the target cell (T). Firstly, in this table, determining the value of
REFERENCES
[1] Ng JS. An analysis of mobile robot navigation algorithms in
unknown environments. 2010.
[2] Borenstein J, Koren Y. The vector field histogram-fast obstacle
avoidance for mobile robots. IEEE transactions on robotics and
automation. 1991;7:278-88.
[3] Bianco G, Cassinis R. Multi-strategic approach for robot path
planning. IEEE. p. 108-15.
[4] Lumelsky V, Skewis T. A paradigm for incorporating vision in the
robot navigation function. IEEE. p. 734-9.
[5] Lumelsky VJ. On the connection between maze-searching and robot
motion planning algorithms. IEEE. p. 2270-5.
[6] Lumelsky V, Sun K. Gross motion planning for a simple 3D
articulated robot arm moving amidst unknown arbitrarily shaped
obstacles. IEEE. p. 1929-34.
[7] Sun K, Lumelsky V. Computer simulation of sensor-based robot
collision avoidance in an unknown environment. Robotica.
1987;5:291-302.
[8] Lumelsky VJ. Effect of robot kinematics on motion planning in
unknown environment. IEEE. p. 338-43.
[9] Lumelsky VJ. A comparative study on the path length performance
of maze-searching and robot motion planning algorithms. IEEE
transactions on robotics and automation. 1991;7:57-66.
[10] Skewis T, Lumelsky V. Experiments with a mobile robot operating
in a cluttered unknown environment. Journal of Robotic Systems.
1994;11:281-300.
[11] Lucas C, Lumelsky V, Stepanov A. Comments on "Dynamic path planning for a mobile automaton with limited information on the environment" [with reply]. IEEE Transactions on Automatic Control. 1988;33:511-2.
[12] Lumelsky VJ, Skewis T. Incorporating range sensing in the robot
navigation function. IEEE Transactions on Systems, Man, and
Cybernetics. 1990;20:1058-69.
[13] Lumelsky V, Tiwari S. An algorithm for maze searching with
azimuth input. IEEE. p. 111-6.
[14] Lumelsky VJ. Dynamic path planning for a planar articulated robot
arm moving amidst unknown obstacles. Automatica. 1987;23:551-
70.
[15] Lumelsky VJ. Continuous robot motion planning in unknown
environment. Adaptive and Learning Systems: Springer; 1986. p.
339-58.
[16] Lumelsky V, Stepanov A. Dynamic path planning for a mobile
automaton with limited information on the environment. IEEE
Transactions on Automatic Control. 1986;31:1058-63.
[17] Lumelsky VJ, Stepanov AA. Effect of uncertainty on continuous
path planning for an autonomous vehicle. IEEE. p. 1616-21.
[18] Lumelsky VJ, Stepanov AA. Path-planning strategies for a point
mobile automaton moving amidst unknown obstacles of arbitrary
shape. Algorithmica. 1987;2:403-30.
[19] Abelson H, DiSessa AA. Turtle geometry: The computer as a
medium for exploring mathematics: MIT press; 1986.
[20] Berge C. The theory of graphs: Courier Corporation; 2001.
[21] Lumelsky V. Algorithmic issues of sensor-based robot motion
planning. IEEE. p. 1796-801.
[22] Lumelsky V. Continuous motion planning in unknown environment
for a 3D cartesian robot arm. IEEE. p. 1050-5.
[23] Miskon MF, Russell AR. Close Range Inspection Using Novelty
Detection Results. Springer. p. 947-56.
[24] Kamon I, Rimon E, Rivlin E. Tangentbug: A range-sensor-based
navigation algorithm. The International Journal of Robotics
Research. 1998;17:934-53.
[25] Kim S-K, Russell JS, Koo K-J. Construction robot path-planning for
earthwork operations. Journal of computing in civil engineering.
2003;17:97-104.
[26] Pearl J. Heuristics: intelligent search strategies for computer
problem solving. 1984.
[27] Nilsson NJ. A mobile automaton: An application of artificial
intelligence techniques. SRI International Menlo Park CA Artificial
Intelligence Center; 1969.
[28] Ko WS, Senevieatne LD, Earles SWE. Space representation and map
building-A triangulation model to path planning with obstacle
avoidance. IEEE. p. 2222-7.
[29] Foux G, Heymann M, Bruckstein A. Two-dimensional robot
navigation among unknown stationary polygonal obstacles. IEEE
transactions on robotics and automation. 1993;9:96-102.
[30] Choset HM, Hutchinson S, Lynch KM, Kantor G, Burgard W,
Kavraki LE, et al. Principles of robot motion: theory, algorithms,
and implementation: MIT press; 2005.
[31] LaValle SM. Planning algorithms: Cambridge University Press;
2006.
[32] Jarvis RA. Collision-free trajectory planning using distance
transforms. Mechanical Engineering Transactions of the IE
Australia. 1985;10:187.
[33] Latombe J-C. Robot motion planning: Springer Science & Business
Media; 2012.
[34] Khatib O. Real-time obstacle avoidance for manipulators and mobile
robots. Autonomous robot vehicles: Springer; 1986. p. 396-404.
[35] Trevisan M, Idiart MAP, Prestes E, Engel PM. Exploratory
navigation based on dynamical boundary value problems. Journal of
Intelligent and Robotic Systems. 2006;45:101-14.
[36] Masoud SA, Masoud AA. Motion planning in the presence of
directional and regional avoidance constraints using nonlinear,
anisotropic, harmonic potential fields: a physical metaphor. IEEE
Transactions on Systems, Man, and Cybernetics-Part A: Systems and
Humans. 2002;32:705-23.
[37] Bräunl T. Embedded robotics: mobile robot design and applications
with embedded systems: Springer Science & Business Media; 2008.
[38] Kavraki L, Svestka P, Overmars MH. Probabilistic roadmaps for
path planning in high-dimensional configuration spaces: Unknown
Publisher; 1994.
[39] Geraerts R, Overmars MH. A comparative study of probabilistic
roadmap planners. Algorithmic Foundations of Robotics V: Springer;
2004. p. 43-57.
[40] Schwartz JT, Sharir M. Algorithmic motion planning in robotics.
Algorithms and Complexity: Elsevier; 1990. p. 391-430.
[41] Schwartz JT, Sharir M. On the “piano movers'” problem I. The case
of a two‐dimensional rigid polygonal body moving amidst polygonal
barriers. Communications on pure and applied mathematics. 1983;
36:345-98.
[42] Perez-Lozano T. Spatial planning: A configuration space approach.
IEEE Trans Computers. 1983;32.
[43] Huang Y, Cao Z, Hall E. Region filling operations for mobile robot
using computer graphics. IEEE. p. 1607-14.
[44] Ollis M, Stentz A. First results in vision-based crop line tracking.
IEEE. p. 951-6.
[45] Land S, Choset H. Coverage path planning for landmine location.
[46] Laubach SL, Burdick JW. An autonomous sensor-based path-planner
for planetary microrovers. IEEE. p. 347-54.
[47] Langer RA, Coelho LS, Oliveira GHC. K-Bug, a new bug approach
for mobile robot's path planning. IEEE. p. 403-8.
[48] Sciavicco L, Siciliano B. Modelling and control of robot
manipulators: Springer Science & Business Media; 2012.
[49] Sankaranarayanan A, Vidyasagar M. A new path planning algorithm
for moving a point object amidst unknown obstacles in a plane.
IEEE. p. 1930-6.
[50] Kamon I, Rivlin E. Sensory-based motion planning with global
proofs. IEEE transactions on robotics and automation. 1997;13:814-
22.
[51] Noborio H, Nogami R, Hirao S. A new sensor-based path-planning
algorithm whose path length is shorter on the average. IEEE. p.
2832-9.
[52] Magid E, Rivlin E. CAUTIOUSBUG: A competitive algorithm for
sensory-based robot navigation. IEEE. p. 2757-62.
[53] Kamon I, Rimon E, Rivlin E. Range-sensor based navigation in three
dimensions. IEEE. p. 163-9.
[54] Kriechbaum KL. Tools and algorithms for mobile robot navigation
with uncertain localization. 2006.
[55] Stentz A. Optimal and efficient path planning for partially known
environments. Intelligent Unmanned Ground Vehicles: Springer;
1997. p. 203-20.
[56] Horiuchi Y, Noborio H. Evaluation of path length made in sensor-
based path-planning with the alternative following. IEEE. p. 1728-
35.
[57] Lozano-Pérez T, Wesley MA. An algorithm for planning collision-
free paths among polyhedral obstacles. Communications of the ACM.
1979;22:560-70.
[58] Spandl H. Das HALMOR System. Maschinelles Lernen [Das
HALMOR system. Machine learning]: Springer; 1992. p. 63-97.
[59] Lengyel J, Reichert M, Donald BR, Greenberg DP. Real-time robot
motion planning using rasterizing computer graphics hardware:
ACM; 1990.
[60] Lozano-Perez T. A simple motion-planning algorithm for general
robot manipulators. IEEE Journal on Robotics and Automation.
1987;3:224-38.
[61] Udupa SM. Collision detection and avoidance in computer
controlled manipulators. 1977.
[62] Donald BR. A search algorithm for motion planning with six degrees
of freedom. Artificial Intelligence. 1987;31:295-353.
[63] Dorst L, Trovato K. Optimal path planning by cost wave propagation
in metric configuration space. International Society for Optics and
Photonics. p. 186-97.
[64] Khatib O, Le Maitre JF. Dynamic control of manipulators operating
in a complex environment.
[65] Barraquand J, Latombe J-C. Robot motion planning: A distributed
representation approach. The International Journal of Robotics
Research. 1991;10:628-49.
[66] Koditschek D. Exact robot navigation by means of potential
functions: Some topological considerations. IEEE. p. 1-6.
[67] Pavlidis T. Contour filling in raster graphics: ACM; 1981.
[68] Yufka A, Parlaktuna O. Performance comparison of the BUG’s
algorithms for mobile robots. p. 416-21.
[69] Brandão AS, Sarcinelli-Filho M, Carelli R. An analytical approach to
avoid obstacles in mobile robot navigation. International Journal of
Advanced Robotic Systems. 2013;10:278.
[70] Secchi H, Carelli R, Mut V. Discrete stable control of mobile robots
with obstacles avoidance. p. 405-11.
[71] Vidyasagar M. Nonlinear systems analysis: Siam; 2002.
[72] Fakoor M, Kosari A, Jafarzadeh M. Revision on fuzzy artificial
potential field for humanoid robot path planning in unknown
environment. International Journal of Advanced Mechatronic
Systems. 2015;6:174-83.
[73] Mobayen S, Javadi S. Disturbance observer and finite-time tracker
design of disturbed third-order nonholonomic systems using terminal
sliding mode. Journal of Vibration and Control. 2017;23:181-9.
[74] Bellman R. A Markovian decision process. Journal of mathematics
and mechanics. 1957:679-84.
[75] Kolobov A. Planning with Markov decision processes: An AI
perspective. Synthesis Lectures on Artificial Intelligence and
Machine Learning. 2012;6:1-210.
[76] Fakoor M, Kosari A, Jafarzadeh M. Humanoid robot path planning
with fuzzy Markov decision processes. Journal of applied research
and technology. 2016;14:300-10.
[77] Sheskin TJ. Markov chains and decision processes for engineers and
managers: CRC Press; 2016.
[78] Puterman ML. Markov Decision Processes: Discrete Stochastic
Dynamic Programming: John Wiley & Sons; 2014.
[79] Hu Q, Yue W. Markov decision processes with their applications:
Springer Science & Business Media; 2007.