
Robotics

Path Planning

Path finding vs. trajectory optimization, local


vs. global, Dijkstra, Probabilistic Roadmaps,
Rapidly Exploring Random Trees,
non-holonomic systems, car system equation,
path-finding for non-holonomic systems,
control-based sampling, Dubins curves

Marc Toussaint
University of Stuttgart
Winter 2014/15
Path finding examples

Alpha-Puzzle, solved with James Kuffner’s RRTs

2/61
Path finding examples

J. Kuffner, K. Nishiwaki, S. Kagami, M. Inaba, and H. Inoue. Footstep Planning Among Obstacles for Biped Robots. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2001.

3/61
Path finding examples

T. Bretl. Motion Planning of Multi-Limbed Robots Subject to Equilibrium Constraints: The Free-Climbing Robot Problem. International Journal of Robotics Research, 25(4):317-342, Apr 2006.

4/61
Path finding examples

S. M. LaValle and J. J. Kuffner. Randomized Kinodynamic Planning. International Journal of Robotics Research, 20(5):378–400, May 2001.
5/61
Feedback control, path finding, trajectory optim.

(figure: feedback control, path finding, and trajectory optimization, each illustrated between start and goal)

• Feedback Control: E.g., qt+1 = qt + J♯(y* − φ(qt))
• Trajectory Optimization: argmin_{q0:T} f(q0:T)
• Path Finding: Find some q0:T with only valid configurations
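A minimal Python sketch of one such feedback-control step, assuming the task map φ and its Jacobian are given as functions and choosing a damped pseudoinverse for J♯ (names and the damping value are illustrative):

import numpy as np

def ik_step(q, phi, jacobian, y_target, damping=1e-2):
    """One feedback-control step q_{t+1} = q_t + J#(y* - phi(q_t)),
    with a damped pseudoinverse as the (illustrative) choice of J#."""
    y = phi(q)          # current task-space value
    J = jacobian(q)     # task Jacobian dphi/dq
    # damped pseudoinverse J# = J^T (J J^T + damping*I)^-1
    J_sharp = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))
    return q + J_sharp @ (y_target - y)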

6/61
Outline
• Really briefly: Heuristics & Discretization (slides from Howie Choset's CMU lectures)
– Bugs algorithm
– Potentials to guide feedback control
– Dijkstra

• Sample-based Path Finding


– Probabilistic Roadmaps
– Rapidly Exploring Random Trees

• Non-holonomic systems

7/61
A better bug?
“Bug 2” Algorithm
m-line

1) head toward goal on the m-line


2) if an obstacle is in the way,
follow it until you encounter the
m-line again.
3) Leave the obstacle and continue
toward the goal

OK ?

8/61
A better bug?
“Bug 2” Algorithm

1) head toward goal on the m-line


2) if an obstacle is in the way,
Start
follow it until you encounter the
m-line again.
3) Leave the obstacle and continue
toward the goal

Goal Better or worse than Bug1?

9/61
A better bug?
“Bug 2” Algorithm

1) head toward goal on the m-line


2) if an obstacle is in the way,
Start
follow it until you encounter the
m-line again.
3) Leave the obstacle and continue
toward the goal

Goal
NO! How do we fix this?

10/61
A better bug?
“Bug 2” Algorithm

1) head toward goal on the m-line


2) if an obstacle is in the way,
Start
follow it until you encounter the
m-line again closer to the goal.
3) Leave the obstacle and continue
toward the goal

Goal Better or worse than Bug1?

11/61
BUG algorithms – conclusions
• Other variants: TangentBug, VisBug, RoverBug, WedgeBug, . . .
• only 2D! (TangentBug has extension to 3D)
• Guaranteed convergence
• Still active research:
K. Taylor and S.M. LaValle: I-Bug: An Intensity-Based Bug Algorithm

⇒ Useful for minimalistic, robust 2D goal reaching


– not useful for finding paths in joint space

12/61
Start-Goal Algorithm:
Potential Functions

13/61
Repulsive Potential

14/61
Total Potential Function
U(q) = Uatt(q) + Urep(q)

F(q) = −∇U(q)
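A minimal Python sketch of such a potential field for a 2D point robot, with a quadratic attractive term and a standard repulsive term (gains and the influence distance d0 are illustrative):

import numpy as np

def total_potential_grad(q, q_goal, obstacles, k_att=1.0, k_rep=1.0, d0=1.0):
    """Gradient of U(q) = Uatt(q) + Urep(q) for a 2D point robot.
    Uatt = 0.5*k_att*|q - q_goal|^2; Urep is active within distance d0 of an obstacle."""
    grad = k_att * (q - q_goal)                           # attractive part
    for o in obstacles:                                   # obstacles modeled as points
        d = np.linalg.norm(q - o)
        if 0 < d < d0:
            grad += k_rep * (1/d0 - 1/d) * (q - o) / d**3 # repulsive part
    return grad

# follow F(q) = -grad U(q) with small steps (may get stuck in local minima)
q, q_goal = np.array([0., 0.]), np.array([5., 5.])
obstacles = [np.array([2.5, 2.4])]
for _ in range(200):
    q = q - 0.05 * total_potential_grad(q, q_goal, obstacles)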

15/61
Local Minimum Problem with the Charge Analogy

16/61
Potential fields – conclusions
• Very simple, therefore popular
• In our framework: combining a goal (endeffector) task variable with a
constraint (collision avoidance) task variable and then using inverse kinematics
is exactly the same as “Potential Fields”

⇒ Does not solve locality problem of feedback control.

17/61
The Wavefront in Action (Part 1)
• Starting with the goal, set all adjacent cells with
“0” to the current cell + 1
– 4-Point Connectivity or 8-Point Connectivity?
– Your Choice. We’ll use 8-Point Connectivity in our example

19/61
The Wavefront in Action (Part 2)
• Now repeat with the modified cells
– This will be repeated until no 0’s are adjacent to cells
with values >= 2
• 0’s will only remain when regions are unreachable

20/61
The Wavefront in Action (Part 3)
• Repeat again...

21/61
The Wavefront in Action (Part 4)
• And again...

22/61
The Wavefront in Action (Part 5)
• And again until...

23/61
The Wavefront in Action (Done)
• You’re done
– Remember, 0’s should only remain if unreachable
regions exist

24/61
The Wavefront, Now What?
• To find the shortest path, according to your metric, simply always
move toward a cell with a lower number
– The numbers generated by the Wavefront planner are roughly proportional to their
distance from the goal

(figure: two possible shortest paths shown)
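A minimal Python sketch of the wavefront expansion on an occupancy grid (1 = obstacle, 0 = free), using 8-point connectivity as in the example; the grid representation is an assumption for illustration:

from collections import deque

def wavefront(grid, goal):
    """Fill free cells with distance-to-goal labels, starting at 2 at the goal."""
    rows, cols = len(grid), len(grid[0])
    grid[goal[0]][goal[1]] = 2
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0:
                    grid[rr][cc] = grid[r][c] + 1   # the wave expands by one
                    frontier.append((rr, cc))
    return grid   # remaining 0's mark unreachable cells

# to extract a path: from the start cell, repeatedly step to a neighbor with a lower label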

25/61
Dijkstra Algorithm
• Is efficient in discrete domains
– Given start and goal node in an arbitrary graph
– Incrementally label nodes with their distance-from-start

• Produces optimal (shortest) paths

• Applying this to continuous domains requires discretization


– Grid-like discretization in high-dimensions is daunting! (curse of
dimensionality)
– What are other ways to “discretize” space more efficiently?
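A minimal Python sketch of Dijkstra on an explicit graph (e.g. a discretized grid or a roadmap); the adjacency-list format is an assumption for illustration:

import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Labels nodes with their
    distance-from-start and returns a shortest path from start to goal."""
    dist, parent = {start: 0.0}, {start: None}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist[node]:
            continue                       # stale queue entry
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor], parent[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    if goal not in parent:
        return None                        # goal unreachable
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]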

26/61
Sample-based Path Finding

27/61
Probabilistic Road Maps

[Kavraki, Švestka, Latombe, Overmars, 95]

q ∈ Rn describes configuration
Qfree is the set of configurations without collision
28/61
Probabilistic Road Maps

[Kavraki, Švestka, Latombe, Overmars, 95]


Probabilistic Road Map
• generates a graph G = (V, E) of configurations
– such that configurations along each edge are ∈ Qfree

29/61
Probabilistic Road Maps

Given the graph, use (e.g.) Dijkstra to find a path from qstart to qgoal.

30/61
Probabilistic Road Maps – generation
Input: number n of samples, number k of nearest neighbors
Output: PRM G = (V, E)
1: initialize V = ∅, E = ∅
2: while |V | < n do // find n collision free points qi
3: q ← random sample from Q
4: if q ∈ Qfree then V ← V ∪ {q}
5: end while
6: for all q ∈ V do // check if near points can be connected
7: Nq ← k nearest neighbors of q in V
8: for all q′ ∈ Nq do
9: if path(q, q′) ∈ Qfree then E ← E ∪ {(q, q′)}
10: end for
11: end for

where path(q, q′) is a local planner (easiest: straight line)
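A minimal Python sketch of the construction above, assuming the caller provides sample_free() (rejection sampling of Qfree, returning numpy arrays) and collision_free_path(q, q′) (the straight-line local planner checked at some resolution δ):

import numpy as np

def build_prm(sample_free, collision_free_path, n=500, k=10):
    """PRM construction following the pseudocode above."""
    V = [sample_free() for _ in range(n)]           # n collision-free samples
    E = []
    for i, q in enumerate(V):
        dists = [np.linalg.norm(q - q2) for q2 in V]
        neighbors = np.argsort(dists)[1:k + 1]      # k nearest neighbors (index 0 is q itself)
        for j in neighbors:
            if collision_free_path(q, V[j]):
                E.append((i, int(j)))
    return V, E                                     # answer queries with Dijkstra on (V, E)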

31/61
Local Planner

tests collisions up to a specified resolution δ

32/61
Problem: Narrow Passages

The smaller the gap (clearance ϱ), the more unlikely it is to sample such points.
33/61
PRM theory
(for uniform sampling in d-dim space)
• Let a, b ∈ Qfree and γ a path in Qfree connecting a and b.

Then the probability that P RM found the path after n samples is

P(PRM-success | n) ≥ 1 − (2|γ|/ϱ) exp(−σ ϱ^d n)

σ = |B1| / (2^d |Qfree|)
ϱ = clearance of γ (distance to obstacles)
(roughly: the terms in the exponent are “volume ratios”)

• This result is called probabilistic completeness (one can achieve any
success probability with a high enough n)
• For a given success probability, n needs to be exponential in d
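A quick numeric evaluation of the bound as reconstructed above; all numbers are purely illustrative:

import numpy as np

def prm_success_bound(n, gamma_len, clearance, d, vol_Qfree, vol_unit_ball):
    """Lower bound 1 - (2|gamma|/rho) exp(-sigma rho^d n), sigma = |B1| / (2^d |Qfree|)."""
    sigma = vol_unit_ball / (2**d * vol_Qfree)
    return 1 - (2 * gamma_len / clearance) * np.exp(-sigma * clearance**d * n)

# the bound improves with n but degrades exponentially with the dimension d
print(prm_success_bound(n=10000, gamma_len=5.0, clearance=0.1, d=3,
                        vol_Qfree=1.0, vol_unit_ball=4.19))   # |B1| in 3D is 4*pi/3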

34/61
Other PRM sampling strategies

illustration from O. Brock’s lecture

Gaussian: q1 ∼ U; q2 ∼ N(q1, σ); if q1 ∈ Qfree and q2 ∉ Qfree, add q1 (or vice versa).
Bridge: q1 ∼ U; q2 ∼ N(q1, σ); q3 = (q1 + q2)/2; if q1, q2 ∉ Qfree and q3 ∈ Qfree, add q3.
(A sketch of both strategies follows after this slide.)

• Sampling strategies can be made more intelligent: “utility-based sampling”
• Connection sampling
(once earlier sampling has produced connected components)

35/61
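A minimal Python sketch of both strategies, assuming sample_uniform() returns a uniform sample of Q as a numpy array and in_Qfree(q) is a collision check:

import numpy as np

def gaussian_sample(sample_uniform, in_Qfree, sigma=0.1):
    """Keep a free sample only if a nearby sample is in collision:
    concentrates samples near obstacle boundaries."""
    while True:
        q1 = sample_uniform()
        q2 = q1 + sigma * np.random.randn(len(q1))
        if in_Qfree(q1) and not in_Qfree(q2):
            return q1
        if in_Qfree(q2) and not in_Qfree(q1):
            return q2

def bridge_sample(sample_uniform, in_Qfree, sigma=0.1):
    """Keep the midpoint of two colliding samples if it is free:
    targets narrow passages."""
    while True:
        q1 = sample_uniform()
        q2 = q1 + sigma * np.random.randn(len(q1))
        q3 = (q1 + q2) / 2
        if not in_Qfree(q1) and not in_Qfree(q2) and in_Qfree(q3):
            return q3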
Probabilistic Roadmaps – conclusions
• Pros:
– Algorithmically very simple
– Highly explorative
– Allows probabilistic performance guarantees
– Good to answer many queries in an unchanged environment

• Cons:
– Precomputation of exhaustive roadmap takes a long time
(but not necessary for “Lazy PRMs”)

36/61
Rapidly Exploring Random Trees
2 motivations:

• Single Query path finding: Focus computational efforts on paths for


specific (qstart , qgoal )

• Use actually controllable DoFs to incrementally explore the search


space: control-based path finding.

(Ensures that RRTs can be extended to handle differential constraints.)

37/61
(figures: RRT growth after n = 1, 100, 300, 600, 1000, 2000 samples)

38/61
Rapidly Exploring Random Trees

Simplest RRT with straight line local planner and step size α

Input: qstart , number n of nodes, stepsize α


Output: tree T = (V, E)
1: initialize V = {qstart }, E = ∅
2: for i = 0 : n do
3: qtarget ← random sample from Q
4: qnear ← nearest neighbor of qtarget in V
5: qnew ← qnear + α · (qtarget − qnear)/|qtarget − qnear|
6: if qnew ∈ Qfree then V ← V ∪ {qnew }, E ← E ∪ {(qnear , qnew )}
7: end for
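A minimal Python sketch of the algorithm above, assuming sample_Q() draws uniformly from Q (as a numpy array) and in_Qfree(q) is a collision check:

import numpy as np

def rrt(q_start, sample_Q, in_Qfree, n=1000, alpha=0.1):
    """Simplest RRT with straight-line extension of step size alpha."""
    V = [np.asarray(q_start, dtype=float)]
    E = []                                           # edges as index pairs (parent, child)
    for _ in range(n):
        q_target = sample_Q()
        i_near = min(range(len(V)), key=lambda i: np.linalg.norm(q_target - V[i]))
        q_near = V[i_near]
        direction = q_target - q_near
        if np.linalg.norm(direction) < 1e-12:
            continue                                 # degenerate sample, skip
        q_new = q_near + alpha * direction / np.linalg.norm(direction)
        if in_Qfree(q_new):
            V.append(q_new)
            E.append((i_near, len(V) - 1))
    return V, E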

39/61
Rapidly Exploring Random Trees

RRT growing directedly towards the goal

Input: qstart, qgoal, number n of nodes, stepsize α, goal bias β


Output: tree T = (V, E)
1: initialize V = {qstart }, E = ∅
2: for i = 0 : n do
3: if rand(0, 1) < β then qtarget ← qgoal
4: else qtarget ← random sample from Q
5: qnear ← nearest neighbor of qtarget in V
6: qnew ← qnear + α · (qtarget − qnear)/|qtarget − qnear|
7: if qnew ∈ Qfree then V ← V ∪ {qnew }, E ← E ∪ {(qnear , qnew )}
8: end for
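The only change relative to the sketch above is the target sampling; a minimal sketch with illustrative names:

import numpy as np

def sample_with_goal_bias(sample_Q, q_goal, beta=0.1):
    """With probability beta, the random target is replaced by q_goal."""
    return np.asarray(q_goal, dtype=float) if np.random.rand() < beta else sample_Q()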

40/61
(figures: goal-directed RRT growth after n = 1, 100, 200, 300, 400, 500 samples)

41/61
Bi-directional search
• grow two trees starting from qstart and qgoal

let one tree grow towards the other


(e.g., “choose qnew of T1 as qtarget of T2 ”)

42/61
Summary: RRTs
• Pros (shared with PRMs):
– Algorithmically very simple
– Highly explorative
– Allows probabilistic performance guarantees

• Pros (beyond PRMs):


– Focus computation on single query (qstart , qgoal ) problem
– Trees from multiple queries can be merged to a roadmap
– Can be extended to differential constraints (nonholonomic systems)

• To keep in mind (shared with PRMs):


– The metric (for nearest neighbor selection) is sometimes critical
– The local planner may be non-trivial

43/61
RRT*
Sertac Karaman & Emilio Frazzoli: Incremental sampling-based
algorithms for optimal motion planning, arXiv 1005.0416 (2010).

44/61
References
Steven M. LaValle: Planning Algorithms,
http://planning.cs.uiuc.edu/.

Choset et al.: Principles of Robot Motion, MIT Press.

Latombe’s “motion planning” lecture,
http://robotics.stanford.edu/~latombe/cs326/2007/schedule.htm

45/61
Non-holonomic systems

46/61
Non-holonomic systems
• We define a nonholonomic system as one with differential
constraints:
dim(ut ) < dim(xt )
⇒ Not all degrees of freedom are directly controllable

• Dynamic systems are an example!

• General definition of a differential constraint:


For any given state x, let Ux be the tangent space that is generated by
controls:
Ux = {ẋ : ẋ = f (x, u), u ∈ U }
(non-holonomic ⇐⇒ dim(Ux ) < dim(x))

The elements of Ux are elements of Tx subject to additional differential


constraints.
47/61
Car example

ẋ = v cos θ
ẏ = v sin θ
θ̇ = (v/L) tan ϕ
|ϕ| < Φ

 
State q = (x, y, θ)        Controls u = (v, ϕ)

System equation:
(ẋ, ẏ, θ̇) = (v cos θ, v sin θ, (v/L) tan ϕ)

48/61
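A minimal Python sketch of one Euler integration step of this system equation (the step size and car length L are illustrative):

import numpy as np

def car_step(q, u, L=1.0, dt=0.01):
    """Euler step of the car: q = (x, y, theta), u = (v, phi)."""
    x, y, theta = q
    v, phi = u
    return np.array([x + dt * v * np.cos(theta),
                     y + dt * v * np.sin(theta),
                     theta + dt * (v / L) * np.tan(phi)])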
Car example
• The car is a non-holonomic system: not all DoFs are controlled,
dim(u) < dim(q)
We have the differential constraint q̇:

ẋ sin θ − ẏ cos θ = 0

“A car cannot move directly lateral.”

• Analogy to dynamic systems: Just like a car cannot instantly move sidewards,
a dynamic system cannot instantly change its position q: the current change in
position is constrained by the current velocity q̇.

49/61
Path finding with a non-holonomic system
Could a car follow this trajectory?

This is generated with a PRM in the state space q = (x, y, θ), ignoring the
differential constraint.
50/61
Path finding with a non-holonomic system
This is a solution we would like to have:

The path respects differential constraints.


Each step in the path corresponds to setting certain controls.

51/61
Control-based sampling to grow a tree

• Control-based sampling: fulfils none of the nice exploration properties


of RRTs, but fulfils the differential constraints:

1) Select a q ∈ T from tree of current configurations

2) Pick control vector u at random

3) Integrate equation of motion over short duration (picked at random


or not)

4) If the motion is collision-free, add the endpoint to the tree

52/61
Control-based sampling for the car

1) Select a q ∈ T
2) Pick controls v, ϕ and a duration τ
3) Integrate motion from q
4) Add result if collision-free
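A minimal Python sketch of one such control-based tree extension for the car, assuming in_Qfree(q) is a collision check; control ranges and step sizes are illustrative:

import numpy as np

def control_based_extend(tree, in_Qfree, L=1.0, Phi=0.5, dt=0.01):
    """Pick a tree node and a random control (v, phi), integrate for a random
    duration tau, and add the endpoint if the whole motion is collision-free."""
    q = tree[np.random.randint(len(tree))]          # 1) select a node from the tree
    v = np.random.choice([-1.0, 1.0])               # 2) random control: forward/backward,
    phi = np.random.uniform(-Phi, Phi)              #    steering angle with |phi| < Phi
    tau = np.random.uniform(0.1, 1.0)               # 3) random duration
    for _ in range(int(tau / dt)):                  #    integrate the system equation
        x, y, theta = q
        q = np.array([x + dt * v * np.cos(theta),
                      y + dt * v * np.sin(theta),
                      theta + dt * (v / L) * np.tan(phi)])
        if not in_Qfree(q):
            return tree                             # 4) discard motions in collision
    tree.append(q)                                  # 4) add the collision-free endpoint
    return tree

# usage: tree = [np.array([0., 0., 0.])]; call control_based_extend(tree, in_Qfree) repeatedly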

53/61
J. Barraquand and J.C. Latombe. Nonholonomic Multibody Robots:
Controllability and Motion Planning in the Presence of Obstacles. Algorithmica,
10:121-155, 1993.

car parking
54/61
car parking
55/61
parking with only left-steering

56/61
with a trailer
57/61
Better control-based exploration: RRTs revisited
• RRTs with differential constraints:
Input: qstart , number k of nodes, time interval τ
Output: tree T = (V, E)
1: initialize V = {qstart }, E = ∅
2: for i = 0 : k do
3: qtarget ← random sample from Q
4: qnear ← nearest neighbor of qtarget in V
5: use local planner to compute controls u that steer qnear towards qtarget
6: qnew ← qnear + ∫₀^τ q̇(q, u) dt
7: if qnew ∈ Qfree then V ← V ∪ {qnew }, E ← E ∪ {(qnear , qnew )}
8: end for

• Crucial questions:
• How to measure “near” in nonholonomic systems?
• How to find controls u that steer towards the target?

58/61
Metrics
Standard/Naive metrics:
• Comparing two 2D rotations/orientations θ1 , θ2 ∈ SO(2):
a) Euclidean metric between eiθ1 and eiθ2
b) d(θ1 , θ2 ) = min{|θ1 − θ2 |, 2π − |θ1 − θ2 |}
• Comparing two configurations (x, y, θ)1,2 in R2 :
Euclidean metric on (x, y, eiθ)
• Comparing two 3D rotations/orientations r1 , r2 ∈ SO(3):
Represent both orientations as unit-length quaternions r1 , r2 ∈ R4 :
d(r1, r2) = min{|r1 − r2|, |r1 + r2|}
where | · | is the Euclidean metric.
(Recall that r1 and −r1 represent exactly the same rotation. A sketch of these metrics follows after this slide.)

• Ideal metric:
Optimal cost-to-go between two states x1 and x2 :
• Use optimal trajectory cost as metric
• This is as hard to compute as the original problem, of course!!
→ Approximate, e.g., by neglecting obstacles.

59/61
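A minimal Python sketch of the naive metrics listed above (the configuration metric uses (cos θ, sin θ) in place of e^{iθ}):

import numpy as np

def dist_SO2(theta1, theta2):
    """d = min(|t1 - t2|, 2*pi - |t1 - t2|)."""
    d = abs(theta1 - theta2) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def dist_SO3(r1, r2):
    """d = min(|r1 - r2|, |r1 + r2|) for unit quaternions (r and -r are the same rotation)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    return min(np.linalg.norm(r1 - r2), np.linalg.norm(r1 + r2))

def dist_config_2d(q1, q2):
    """Euclidean metric on (x, y, cos theta, sin theta) for configurations (x, y, theta)."""
    e1 = np.array([q1[0], q1[1], np.cos(q1[2]), np.sin(q1[2])])
    e2 = np.array([q2[0], q2[1], np.cos(q2[2]), np.sin(q2[2])])
    return np.linalg.norm(e1 - e2)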
Side story: Dubins curves
• Dubins car: constant velocity, and steer ϕ ∈ [−Φ, Φ]

• Neglecting obstacles, there are only six types of trajectories that


connect any configuration ∈ R2 × S1 :
{LRL, RLR, LSL, LSR, RSL, RSR}

• annotating durations of each phase:


{Lα Rβ Lγ , Rα Lβ Rγ , Lα Sd Lγ , Lα Sd Rγ , Rα Sd Lγ , Rα Sd Rγ }
with α ∈ [0, 2π), β ∈ (π, 2π), d ≥ 0
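Finding the optimal word and its segment parameters requires the closed-form Dubins solution, which is not reproduced here; the sketch below only rolls out a given word with given segment parameters by forward-simulating the Dubins car at unit speed and unit turning radius (so arc length equals turned angle for L/R segments):

import numpy as np

def rollout_dubins_word(q0, word, params, dt=0.001):
    """Roll out a Dubins word like 'LSR' with segment parameters (alpha, d, gamma)."""
    turn = {'L': 1.0, 'S': 0.0, 'R': -1.0}          # turning rate per segment type
    q = np.asarray(q0, dtype=float)                 # q = (x, y, theta)
    path = [q.copy()]
    for segment, length in zip(word, params):
        for _ in range(int(length / dt)):
            x, y, theta = q
            q = np.array([x + dt * np.cos(theta),
                          y + dt * np.sin(theta),
                          theta + dt * turn[segment]])
            path.append(q.copy())
    return np.array(path)

# e.g. an L_alpha S_d R_gamma candidate:
path = rollout_dubins_word([0, 0, 0], 'LSR', (np.pi / 2, 2.0, np.pi / 4))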

60/61
Side story: Dubins curves

→ By testing all six types of trajectories for (q1 , q2 ) we can define a


Dubins metric for the RRT – and use the Dubins curves as controls!

• Reeds-Shepp curves are an extension for cars that can also drive backwards.
(includes 46 types of trajectories, good metric for use in RRTs for cars)

61/61
