0% found this document useful (0 votes)
948 views

Robotics and Control by Mittal

Uploaded by

sujay nayak
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
948 views

Robotics and Control by Mittal

Uploaded by

sujay nayak
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 186

122 Robotics and Contr

of
Solution Let the known position and orientation of the endpoi arm be
by
given

T= (4.7)
0 00 1
where each, has a numeric value
To obtain the solutions for joint variables (6,. 6,. B,), In Eq. (4.7)Tis equ
to overall transformation matrix for the 3-DOF articulated arm "T, derived
equated
d in
Example 3.3, that is
GC-CS3 -S G^(L,C2» + L2C;)| 2 is a
S,C-S,Ss C S(L,C2 t L2C2)| 2122 23 24
C 0 LSy + LSS 12 i (4.8)
S3 C
0
Equation (4.8) gives 11 nontrivial equations for the three unknown ioint
variables, 6. 6, and 0 appearing on the left-hand side. The determination of
solution for these three joint variables for known rj is the inverse kinematie
problem and is worked out as follows.
Step 1 Applying guideline (a), an inspection of elements of the matrices on
both the sides of Eq. (4.8) gives that 6, can be obtained from element 3 of row 1.
The element (1,3) of left-hand side matrix has a term (-S,) in only one variable 0,
and a constant ri3 on right-hand side and, hence, it can give angle 6, from
-sin 6,=r3. However, according to guideline (c) this is not preferred as correct
quadrant of the angle can not be found. Alternatively, applying guideline (b), 6,
can be isolated by dividing element (2, ) by (1, 1) or (2, 2) by (1,2) or (l. 3) by
(2.3) or (2, 4) by (1, 4). Out of these, the last one is preferred as per guideline (d)
Thus, equating element (1,4) and (2, 4), on both sides of the matrix two equations
are obtained as
CLC23+ L,C2) = r14 (4.9)
S(L,C23+ L,C;) = r24 4.10)

Dividing Eq. (4.10) by Fq. (4.9) gives


(4.11)
C 14
Therefore, according to guidline (c),
6, = Atan2(4 '1a) (4.12)
Step 2 The other two unknowns, 6, and 0, cannot be obtained directly. 0
solution for 6, and 6,, inverse transform approach, To
guideline (e), is uscu
isolate 3, both sides of Eq. (4.8) are
postmultiplied by (°T,)".
This will g
T,'T=TTT (4.13)
The Inverse Kinematics
123
From Eq. (3.21). T, is

T SC 0
00 0 (4.14)
0 0 0
The inverse of *7, is obtained using Eq. (2.53) as

C S 0 -L1
R*D,S, C, 0 (0
LO 0 0 (4.15)

Substituting "T, and 'T, from Eqs. (3.19) and (3.20),


Ea. (4.7) and [°T3l° from Eq. (4.15). in Eq. (4.13) Example 3.3, T from
gives
GC -CGS S LCC1
S,C-S,S -G L5C C Sy - S 23 -lai1ha
Ssh2
C2 0 LS
0 0
C-S2 S -S2 2 -L +u
0 0

(4.16)
Note that the left-hand side of Eq. (4.16) has
only of6, both sides
right-hand side has only 6, terms. A close examination 6, reveals that and terms and the

equations obtained from elements (1,4), (2,4). and (3, 4) are only function o 6,
and 6. Thus, with use of some algebra and
trigonometric identities, 6, can be
eliminated and solution for 6, is obtained. Equating the elements (1, 4),
(2, 4).
and (3, 4) of the two matrices, three equations obtained are

LCC2=La'1t ri4 (4.17)


LSC2=L321 + 24
(4.18)
LS=-L31+ r34 (4.19)
By squaring Eqs. (4.17) and (4.18), and adding gives,
LCC}+Sf) = (-Lh1 + 4) +(-Lh t
From this 6, is eliminated because C+ Sf = 1, thus

L,C t/(-Lsi tha) +(-Lg


= t
a) (4.20)
Dividing Eq. (4.19) by Eq. (4.20). gives
S -Lz t4 (4.21)
C2 ty-Li +ia +(-LV tha)
Hence
6, Atan2 -L ta). t V-Li t+ja) +(-L thu (4.22)
124 Robotics and Control
tirst solving for (9,
Step 3 The solution for 0, is
obtaincd

by element
by
(3,2) gives +0,).Dv
clement (3,) of Eq. (4.8)

S
C 142
8,+0 Atan2(.
424
Thus.
,
, =Atan2 ( ) - (4.25
and (4.25) givethe completesolution the 3-DM
for.

Equations (4.12).(4.22).
for thejoint displacements 6,. 0, and.
articulated arm as expressions Note that the ahoue lerm
and orientation.
arm end-point position
ofknown Alternate expressions for A n
one of the possible sets of expressions. the chosen elements of th ánd 4
obtained if instead of equating
would be
used, or instead of isolating 6,,
6, 1s isolated by prem matrices
other elements are

It is also possible to
find solution withot ing
se of the
both sides with T)
instead of using algebra and trigonqn
inverse matrix approach and ometry.
can be obtained from elements(a
instance. after solving for 8,. (¬, 6)
+
,l)and
(3. 2) and through trigonometric
manipulation, 6, and 6, are obtained

Inverse kinematics of
RPY wrist
Example 4.2
kinematic model was obtained in Examnle 2
For the 3-DOF RPY wrist
Eq. (3.27). as
CC 0
[-C5%C, +S,S, C,S,S, S,C
+

-S,S,C-Cs S,S,S, -C,C S,C 0


(4.26
CC -C293 S 0
0 0 0
Determine the solution for the three joint variables for a given end-effector
orientation matrix Tg.
0
y Oy a 0
(4.2
n 0 a. 0
0 0 0 I
Solution The overall transformation matrix T and end-effector man
represent the same transformations. Thus, equating Eqs. (4.26) and(4.2ng

" 4, -CS,C +5,5, C5,S +S,C, GC


n O, a, 0 -S,S,C -CS S,5,5, -C,C SC (428
0
n 0 a. 0 C,C -CS3 S
0 0 1
0
The Inverse Kinematics 125

The eleents of the matrix on left hand side of matrix cquation arc known
(given). while. the clements of matrix on rigt-hand side have three unknown
ioint variables &,. 8, and 9. To get the solution for these joint variables, the more
consistent analytical approach (guidcline (e)) is used herc.
Guideline (e) suggests premultiplying thc matrix equation. Eq. (4.28) by
inverse of transtormation matrix "T, involving the unknown , and from the
clements of the resultant matrix equation determine the unknown. Recall that the
ight-hand side of Eq. (4.28) is the product of three transformation matrices "T
T. and T, cach involving one unknown . 9, and . respectively.
This process i8 continued successively. that is, moving one unknown (by its
inverse transform) from right-hand side of the matrix cquation to the lett-hand
side of the matrix equation and solving it, then moving the next unknown to the
lett-hand side, until all unknown are solved.
To solve for &,. both sides of Eq. (4.28) are premultiplied by "T". From
Eqs. (3.24) - (3.26)

CS0 , o, a, 0-S 0 C 0TC -S 0 0


0 01 0 |, o, a 0 C2 0 S o S C 0 0

S-C 0 0n 0 0 01 0o 00 1 0
0 0 0 10 o 0 0
0 0 0
00 1
Cn, +S Co, +S0, Ca, +Sha, 01 -S,C3 S,S C 01
n 0 C,CCs S 0
Sn,-Cn, So, -Go, Sha,-Ca, 0 S C3 0 0
0 0
(4.29)
The left-hand side of Eq. (4.29) has one unknown (6) and the right-hand side
has two unknown (6, and 9,). Scanning the elements of both the matrices in
Eq. (4.29), the equation in one unknown (G) is obtained by equating elements
(3, 3). That is,
Sa, -

Cja, = 0 (4.30)
ay
tan =
C
which gives
e, = Atan2(a,, a,) (4.31)
The process of further premultiplication is not necessary because the solutions
for the remaining two unknowns (0, and 6,) can be obtained from Eq. (4.29).
Equating (1, 3) and (2. 3) elements on both sides in Eq. i4.29) gives
C=Cja, +S,a,
S = a.
(4.32)
126 Robotics and Control

solution for 6, IS obtained as

From these two equations the


Atan2(a.. Cja,
+ Su,)
0, =

2) of Eq. (4.29) gives


4.33
Bquating elements (3.1 ) and (3,
S =S,,C",

C =S,- Cjo, (4.34)


Which lead to the solution
for 6, as

6 Atan2(S,n, - C",, So,- Cio,) (4.35


unknown to the left-ho
is to m o v e one
The inverse transform technique th
it is also possible to achieve this
at a time and
solve it. Therefore,
the inverse transform
of premultiplying by
postmultiplying instead here.
This is illustrated
involving an unknown. left-hand side, postmultiplyino h
To solve for 6,, that is, to the
m o v e it to both
gives
sides of matrix equation Eq. (4.28) by T3
0 07 C 0 S 0T-S2 0 C 0
a, 0T C S
, ,
o - S C3 0 0 S 0 -C, oC2 0 s, 0
o-S3 01 0o o0 0 0
a 00 0 10
0 0 0 o 0 0 0 0
o 0 0 1L
or

C3n,-S30, S3n, +C3o ax 01 -C,S SS CC


C,C 0
01
Cny-S30, S3n, +Cz0 ay -S,S-C S,C2 0
0
(4.36
C3 So, Sgn, +C30 4; 0 C S2
0 0 0
0
matrices both sides, the elements (3, 2) gives
Comparing elements of the
on

Szn +Cy0, =0
and. thus, the solution for 63 is (437

0=Atan2 (-o, n)
the elements (3,1) and (3, 3), 6, is obtained as
Similarly, from (4
0 = Atan2 (a,, Cn,- So,)
and from the elements (1, 2) and (2, 2), 6, is obtained as
(430

6, = Atan2(S, n , + C3o,-S3n, - C30,)

solution of Siml
gives
In this example, premulitplying
or postmultiplying
complexity but this may not be always the case. The decision to pre
ply

postmultiply is left to the discretion of the reader.


Example 4.3 SCARA manipulator inverse kinematicS the 4-DOFSCAR

Analytically solve the inverse kinematic problen1 for the 4* oniti


s s the cont

contiguration man1pulator given in Fig. 3.22, Example 3.6. Discuss


for existence and multiplicity of solutions.
The Inverse Kinematics 127
Solution For the SCARA manipulator of Example 36, equating "T, from
Fa. (3.40) with T in Eq. (4.7) gives

C4 S24
S24 C24 0
(4 40)
0 td-|

The solution for joint displacement d, is directly ohtained hy equat1ng the


ements (3, 4)
on hoth sides of Eq 4 40).

d u l4-L (441
Next. to solve for 6, elements (1, 4) and (2, 4) are compared. This gives

(4 42)

Squaring Eqs.
LS+LS=
(4.42) and (4.43). adding and simplityng using the
4 43)

trigonometric identity cos (a + ) = cos a cos B- sin a sin B. gives

L+L+ 214,L,C, =rit (4 44)

G t-L- (445)
2141
Since S, =t1-c (446)
the solution for 8, is obtained from Eqs. (4.45) and (4.46) as
6 = Atan2S2. C;) (447)
Now that 6, being known, Eqs. (4.42) and (4.43) can be used to compute &
These equations are written as
(4.48)
LCC-S,S,)+ L41CG 4 =

L(S,C +C,S,)+ L1S =4 (449)

or
(L +LC C -(L,S,)S, i4 =
(4.50)
(4.51)
(L +LC2 S +(L,5,CG =

Lei and (L2S,) rsin (4.52)


(LtLC,) rcos o
=
=

With r= yL + L,C,) +(LS, 4.53)

and 0= Atan2
LS l4 t
L,C; | (4.54)

Eqs. (4.50) and (4.51) reduce to


r c o s( 6 , +0) = i (4.55)
rsin(6, t 0)= (4.56)
128 Robotics and Control
as
0, is obtaincd
From Eqs. (4.55) and (4.56).
"" 4.57)
, - Atan ,
LS tL
Atan2 ,"Alan2
4.58
one
variable, , nknown. From
is unk
determined. only
and d,
With . 8, are
elements (1.) and (2.1), the cquations
C124 'n (4.59)
S1242
(4.60)
Equations (4.59) and (4.60)
give 0, as
6+6-0, =Atan2(r2.' )
0,+ 6 -Atan2 (r21,
P) (4.61)
= displacements 6j, 6, d, and g
The complete closed
thejoint
form solution for
(4.41) and (4.61
(4.58), (4.41),
of SCARA manipulator, is given by Eqs. s tool position andd
functions of the manipulator
respectively. as explicit
onentation.
Existence of Solutions
arm exist, that is, the given
kinematic problem of a given
Solutions to the inverse
orientation of the tool
is within the manipulator's
Cartesian position and
condition is satisfied:
workspace if the following the range -1,1], right-hand side of
functions take values in
"Sincesin and cos
[-1,1]."
the Eq. (4.45) must lie in range the
rotation for the revolute joints
for full 360 degrees of
Again, these solutions are mechanical constraints.
the prismatic joint. The
and limitless translation for
however, will permit only such
solutions for which the joint variables
value take a
allowed.
that lies in the range of motions
Multiplicity of Solutions
are two solutions for 6
Due to the presence of the square root in Eq. (4.46), there
for a given position and orientation of the tool with respect to
the base. From
Eqs. (4.57) and (4.61), observe that there is one set of solution for &, and d
corresponding to each value of G,. Thus, the number of solutions to the invers
kinematics problem of the given SCARA arm is two. Note that multiple solune
exist due to the fact that revolute joint axes l and 2 are parallel.
Example 4.4 Numerical solutions for a 3-DOF manipulator
lon
For the 3-DOF (RRP) configuration manipulator, shown in Fig. 4.7. the poa
and orientation of point Pim Cartesian space is given by
The Inverse
Kinematics 129

0.354 0.106
0.354 O866
0612 0.500 0612 0184 (4.62)

0.707 0.707 0.212

all solutions to the inverse


variables, that is,
Detemine all values of all joint
limits) for three joints
kinematic pnoblem
The joint displacementsallowed (joint
ldentify the
are 100 100°. 30°< 0, 70° and 005md,<0.5 m.
feasible solutions
structure
another common
Solution A manipulator with this configuration is
handling and
robots, it effective in material
widely used in industrial as is very
are
The first two joints
and gives a spherical workspace.
other applications, sweep
revolute joints and provide motion in two perpendicular
planes. Their
the reach to
radius
sphere. The third prismatic joint provides
gencrates constant
a axes intersect
at a point
where the wrist is attached. The three joint
the arn pnt

P
d3

L2

arm
Fig. 4.7 A 3-DOF spherical configuration
forward kinematie
forward kinematic model is obtained first. For the
The
carried out first. While
frame assignment for the home position is
model. the can be eliminated from
frames it is observed that the link dimension L,
assigning coincide w ith origin
the origin of frame {0} to
the kinematic model by choosing can be made
2 (see Example 3.3). The link dimension L
of frame {1} at joint as shown in Fig. 4.8
such that the axis of
zero by modifying
the design slightly
prismatic link passes through
the origin frame {1 }.
of frame {0}.
frane assignment with the origin of three frames,
The final
shown in Fig. 4.9. This minimizes
and frame {2) at the same point is
frame {1 } condition for
parameters as well as satisties the necessary
the number of non-zero

solutions.
existence of closed form
are labulated in Table 4.
Thejoint-link parameters
130 Robotics and Control

da

Fig. 4.8 3-DOF spherical arm in h o m e pOsifion: 6, = 0, = 0 and d, - a

0.05
X3
d3

Same origin
Lo

Xo

Fig 4.9 Frame assignment for spherical arm with horizontal home position

Table 4.1 Joint-link parameters for spherical arm

Link 4 CoS6 CaSo


90 S 1
-90 0 S2
0 d 0 d 0

The three link


transformation matrices T, 'T2 T; and the overall arm
transformation matrix T, are obtained as
C 0 S 0

T,6,)=S 0-C 0 (4.63)


01 0 0|
Lo 0 0 1
C 0 S, 0
S 0
C 0
T,0,)o -1 0 (4.64

L0 0 0
The Inverse Kinematics 131

Td) (4.65)
0 d

and. thus,

GC S, GS
S,C S,S d,S,,
7- °7, T, T, =| (4.6
S 0 C

and then
Fist. generalizcd solution is worked out, as in previous examples
a
Let the
he
the Nalues from Eq. (4.62) will be substituted to get specific solutions.
am point position and orientation be specified as in Eq. (4.7).
and
The kinematic model equations arethus, obtained by cquating Eq.(4.66)
Eq.(4.7)giving
GC-S -CS -d,CS1 2 3
S,C C -S,S -d,S,S|_| T24 467

S 0 C2 dC
0 0 0 00

The solutions for joint displacements are obtained by comparing


preferred
in Eq. (4.67). The resulting equations are
elements (1, 4). (2, 4) and (3, 4)
-d^C,S2 = 14 (4.68)
-d3S,S = r24 (4.69)
4170)

the solution for 8, is


Dividing Eq. (4.69) by Eq. (4.68)
6, = Atan2-24 -1a) (4.71)

Squaring and adding Eq. (4.68) and Eq. (4.69) gives


d s$c+S)=i t
t 4.7)
Of dS, = t Vri
solution of O, as
Dividing Eq. (4.72) by Eq. (4.70) gives
(473)
0, =Atan2(t Vri t i u
is obtained by squaring and adding
he joint displacement d, for joint 3
cannot be negative, only
T4s. (4.68), (4.69) and (4.70). Since the displacementd,
positive sign is used. Thus,
(4.74)
d, =t yiat t iu
132 Robotics and Control

displacements
are obtained i substituting the
by
erical values for joint Eq.
The and
oricntation matrix, (
4.62) in to
alues from given arm point position solutions are:
(4.74). The specilic
Eqs.(4.71). (4.73) and
106)
=

=Atan2 (0.184. 0.
(ty0.106)+-0.184),
0.212)=t 45 (4.75)
Atan2
=0.30
+ (-0.184) + (0.212)
8, = Atan2 (0.106)
tabulated in Table 4.2.
solutions are
The two pssible
tle specified arm point
Table 4.2 Tv possible solutions for

Solution No -45 .30


60°
45 0.30
-60

The joint range specified for joint 2 is: -30° <0,< 70°. The solution 1 es
range constraint and, hence, solution
angle 8, as 6,= 45°. This violates the joint
1 is not feasible.

of 5-DOF Manipulator
Example 4.5 Inverse Kinematics
For the 5-DOF industrial manipulator discussed in Example 3.7, obtain the
analytical solutions ofjoint variables.
Solution Let the end-effector tool point transformation matrix be given by

n, 0, ay d
TE n, o az d,|
(4.76)
0 00 1J
From the kinematic model obtained in Example 3.7, the overall transformation
matrix for the end-effector tool point is (Eq. (3.48))

CS4C +S, -Cs% +,C GC4 GLC +LC3 +L,u


OTSC234Cs -CSs -S,S2s4S% - CCs S,C214 S(LC, +LC +L,C)
T,=|-C234C C234ds -S34 L-L,S, -L,S23-L,Shs4
0 0
(4.77)
A closeexamination of Eqs.(4.76) and (4.77)
solution can be found for any of the
clearly shows that no direc
approach of guideline (e) is used here.
joint-variables. Hence, the inverse mans
In order to solve for first
joint variable , both matrices, Eq. (4.76) a
Eq. (4.77) are premultiplied by inverse of
side matrix of "T, (that is This given let-na
equation as
The Inverse Kinematics 133

[ C, + S, Co, +So, Ca, +Sa, d, + Sd,


-d, + l

-Sn, +C,So, +Co, -S1a, + C4, -Sd, +Cid,|


0 0
(4.78)

and the ht-hand side of matrix equation is


S214Cs S234S, C234 LsC234 +,C2 + l,C2
Cz14Cs C24Ss S214 L,S214 t+ LS2 + lS,
T = T7,7'7, Ss -Cs 0 0

0 0 0
(4.79)

or

Cn, +S/1, Ca, +S,a, Cd, +S,d,


Co,+S10
-11 -0z -0 -d +Li
-S, +Cly -S1o, + C10, -S,a, +Ca, -S,d, Cd,
+

0 0 0

S234Cs S234Ss C234 L,C234 +LC23 +L2C2]


-C234Cs Cz34S5 Sz34 LgS234 + L3S23 + L2S (4.80)
0
-Ss -Cs
0 0 1
elements of Eq. (4.80), the single variable equations in 8, are
Comparing
obtained from elements (3, 3) and (3, 4)
as:

-Sa, +C,a, = 0
(4.81)
-S,d, +Cd, = 0

is
Using the later, the solution for 6,
6, Atan2(d,. d,) (4.82)
is obtained as
From ratio of elements (3, 1) and (3, 2), solution for 6,
-S, =-S,n, + C1y (4.83)
-Cs -So, + C1o,
(4.84)
6, =
Atan2(-Sn, +
C,,-S10, +Co,)
is obtained by first
The solution for remaining three variables B, 6, and 6,
3) and (2, 3) gives
SOlVing for ,+ 0, + 04. Equating elements (1,
C214 =Ca, +S1a, (4.85)
S234-
134 Robotics and Control

which leads to
5,a,
+0, =Atan2(-a,. Ga,
+

,+0,
A

sides of Eq. (4.80) by ( 7,)". gives


Premultiplying both

(T, '7, = 7,'7,'7,


C,(C0, +S0,)-S,
C,Cn,+S,n,)-S,n. +
So,)-Cz
)-C,n, -S,(C,
S,(Cn, +S,n,)-
S, + C,
-S,, + C
0

CCa, +S,a,)- S,a. C,(Cd, +S,d,) -S,(d, -L,)-L,1


-S,(C,d, +S,d,)- C,(d, L)
+
-S, (Ca, +Sya,)-Ca,
-Sd, +C,d,
-S,a, +C,

SuCs SuS, Cu LsC4 +LC]


-C4C Ca4S's Sy4 LzS34 LzC3
+

-Ss-C 0 (4.88)
0 0

From elements (1, 1) and (1, 2), two equations are obtained:

G(C, +Sn,)-Sn, S,G =


(4.89)
C C o +So,)-S2o, = -SyaSs (4.90

Dividing Eq. (4.90) by Eq. (4.89) and rearranging gives


-Co,+S,o,)C2 +S%0, = lan 65|{Cn, +Sn, )C2 -Ssn (4.91)

O
Stan6,(C", +Sn,)+Go, +So,)
n, tan 6, t 0
C2
or e, =Atan2an6,(Cn, +Sn,) + Co, +Syo,, n, tan 6, + o. (4.92
Similarly from elements (1, 4) and (2, 4) of Eq. (4.88), two equations are
obtained after rearrangements as
LC34 + LC, = CC,d, +S, C2d, - S,(d, - L,)- L
(4.93
LsS4 + L S, = - C,S,d, + S, Sd, -C, (d, + L)

Substituting for C34 and S4 from elements (1, 3) and (2, 3), respectuvelyTo
the left-hand matrix of Eq. (4.88) and
simplifying gives
L,C = CCd, +S,Cd, S, (d, - L,)-L - L,(C,Cya, +S,C%a, , 4 .
LyS-CS,d, -S,5,d, -C,(d. + L) + L,(C,S,4, + S,S,d, + C,a,
(4.94
The Inverse Kinematics 135

Eq. (4.94) 6, is obtained as


Form a
(LS-CS,d, S,S,d, C,(d I,)1(6,S.,
S,S,
1,0,C0 ,
e.A LG -C,C,d, +S,C,d, S,td, 1, 495)

joint variable 0, iseomputed from Fq


(4 86
The last unknown
e, - u 9, 0, (49%)

Inverse kinematics for 6-DOF Stanford manipulator


Fxample 4.6
the joint displacements as cxplicit functions of the pos1tion and
Obtain all of Example 38 by
for the 6 DOF man1pulator
omentation of the end-effector of solutions
discuss the existence and muluplicity
analvtical method. Also, 4. 5
wist whose joint axes
Solution given6-DOF- manipulator arm has a cond1tion for existence
The
of
at a point. This satisfies the necessary
and 6 intersect in
problem closed
closed fomm solutions, Hence,
it is possible to solve the inverse

fom respect
to the

Let the given position


and orientation of theend-effector with
be
base in Cartesian space
n, O, a, d, |
y O, a, d, (497
T=
| n, 0, a, d
0 0 0 1J
matrix (the forward
From Example 3.8, manipulator transformation
the
(3.58) that is
kinematic model) for the given manipulator is given by Eq.

GCC,CC C,CCCS6 GCCS1,


-S,5,CC +S1SCS, GCCS
-S,S,S,l
-CGS,S,C, +CS,S,S% Sy4'$+CS,C,l%
-C,C,5,5% -CCS,C. +CS,Cs +CSnd-S,L
-S,CS% -S,C,C%
S,CC.CC, -SCC,CS SC.CS,
-CS,C S,C,CS%
+C,S,C,C +CS,S,4% (4.98)
T-SSS,.
-SCSS%
+S,S,S,5,
-SCS,C
+C,S4S,
+S,S,C,
+S,SCl
+SSd + C,L

+CC +CC,C
S,CC.C% S,C,CS, S,CS -S,C%
C,SC +CS,S% +C,C +C,Cl+Cd
+S,S% +SS,o
i Roboties and Control
Observe that in Fq (4 98) neither single variableterms are present nor.

algebra (like div ision of two elements) will give single


variable isolation ple
to find a solution two altermatives are: () matrix prenultiplicati

postmultiplication to isolate variable at a time, as Was done in Exarmni.


one
In this cxample i
and (m) uIse more involved algebra and trigonometry. latter
sod
Bquating (1, 4). (2. 3). (2, 4), (3, 3)
elements (1, 3).
and (3, 4) in e
Eqs (4497
and (4.98). six cquations are obtained.

CCCS S,S,S, + C,S,C, =


a, (4.99
G C C , S , - S,S,S, + C,S,C%))+ C,S,d, -S,L = d, 141
4.100
S,C.C,S,+ C,S,S% +S,S,C, =a, 4.101
LS,,C,S, +C,S,S, + S,S,C,) + S,S,d, + C,L = d, 14.102,
(4.102
-S,CSs + C,C; = a.
(4.103
L-S,CS, + C2C;) + Czd,
=
d. (4.104
Substituting Eq. (4.99)into Eq. (4.100) gives
Lya, + CS,d; - S,L2 = d,

or CS,d S,L d, L,a,


-
= -

(4.105
Similarly, substituting Eq. (4.101) into Eq. (4.102) gives
Lga, +S1S,d, + CL2 = d,

S S , d +CL, = d, - Lga,
of (4.106
Squaring Eqs. (4.105) and (4.106), adding and simplifying gives
S d + L = ld, - L,a, +ld, - Lga,)¥

sdi = d, - L,a, +d, L,a, -L (4.107)


Also, combining Eqs. (4.103) and (4.104), rearranging and squaring gives

Cydy d -L (4.108
Cd =(d - L,a, (4.109

Fquations (4.107) and (4.109) are solved for d

d,
=tyd, La, + ld, Lsd,) + ld -L,a¥- the
The displacement of prismatie joint d, is always positive: therelore.
negative solution is not valid. Thus

(4.110)
d, =yd, L,4,+ d, ,4,+ td, - La,- 4.
The Inverse Kinematics 137

hus, solution for the joint variable d, which gives the joint displacement for

the prismatic joint, is found. Next, solve for joint displacement 6. From
Eq.(4.107).

(4.111)
S,d=tyd, -I,a, (d, -L,d,) -
Dividing Eq. (4.111) by Eq. (4.108), solution of joint variable , is obtained

e. = Atan2tyd, - L,a, +d, - Lyd, - 12.(d, - L,a,)) (4.112)

Next. to solve for G. Eqs. (4.105) and (4.106) can be used as they have only
, as unknown. First, let the constants K, and K, be defined as

K =
S»d3 and K,= L
Substituting these constants in Eqs. (4.105) and (4.106):
K,C - K2S, = d, - Lga (4.113)
K,S + K^G =d, - L,4, (4.114)
The equations of this form can be solved by making trigonometric substitutions
R = r sin o and K = r cos (4.115)
where
r=+K +K3
= Atan2 (4.116)
Substituting K, and K, from Eq. (4.115) in Eqs. (4.113) and (4.114)

sin(o -

0,) =
Ls, (4.117)

Lga,
cos( 0,) "* =
(4.118)
From Eqs. (4.117) and (4.118).
s ,64,)
-6, =Atan2 r
(4.119)

Suhstituting for o. r, K, and K, and solving for 6, gives

S,d L
6, Atan2
VSd+l Vsd +
d,-Ld, , - L , (4.120)

Thus, Eqs. (4.1 10), (4.1 12) and (4.120) give solutions for the lirst three joint
displacements 6. 6, and d, respectively.
138 Robotics and Control
To obtain the solutions for the remaining three joint displacements 9.n
.
to view the descrinti
and
oanother approach will
be used. It is possible of
too
arm endpoint fram
ame,
Trame, frame {6}. with respect to frame (3).
the
two along
different paths.
016

2T3 3Ta 4Ts T6 P


12
4 6
2
Path 2
Path 1

graph for the manipulator


Fig. 4.10 The transform

The manipulator transformation


matrix "7, is equal to the product of sir 1.
link
transformation matrices
(4.121)
Fig. 4.10 with nodes
as shown in
This can be represented graphically
transform. From the graph in
frames and edges representing the
representing traverse from frame (31 to
are two paths to
Fig. 4.10 observe that there
frame {6}. one is via frame {4)
and {5} in the forward direction and other is via
direction. The use of these two paths to get the
frame {0}. the base, in the reverse
solutions is discussed below.
Path I frame (3} > frame (4) frame (5) frame {6}
be obtained as
path the transformation T
can
Along this
(4.122)
In Example 3.8, the transformation matrices T4, "T5, and °T, were obtained
these matrices,
in terms 65, and 6, respectively. On multiplying
of 6,.

CCC6S,S% -C4CgS% -S,C, -C4Ss -L,CS


S,C,C6 +CS% -S,C,S% - C,C% -S,Ss -LSSs
(4.123)
T, -SsS% C5 LC +L4
S,C6
0 0
Path 2 frame {3}> frame {2)> frame {1} frame {0} > frame {6}
to the base and
This is a path via the base. The tool frame is defined with respect
hence, it can be reached from frame {3} traversing the links of the arm. Thus,

It is known that T = (T-)andT, = T hence


(4.124)
T,=(7)(*7,) (7)T erse
Since 6,, 6,, and d, are known, the matrices *T. 'T, andT, andtheir
can
6
can be computed. Substituting these values and multiplying the matnicCC>
be computed. Let it be denoted as
The Inverse Kinematics 139

T (4.125)

|0 0 0
Since matrices in Eq (4.123) and (4.125) represent the same point, the arm
point.0. B. b, can be tound by equating the corresponding clements of matrices.
Eirst, for 0. the elements in row 3 are equated to give three equations as:

(4.126)
-

SS% '32=
(4.127)
C r33 (4.128)
Squaring Eqs. (4.126) and (4.127), adding and dividing the result by
Eg. (4.128) gives solution for 8, as

, Atan2 ty +),s) (4.129)


Next. joint angle 6, is obtained by equating the elements (2, 4) and (1,4) as
-L,SSs = r4 (4.130)
-LSSs = r14
(4.131)
Solving for 0, from Eqs. (4.130) and (4.131) gives

64 Atan 2 (4.132)
Ss
Note that since Sg is a variable and may have more than one value, it is going
to influence solution of , . Finally, to solve for 6, Eqs. (4.127) is divided by
Eq. (4.126) to give

6, =Atan2-2 (4.133)
Thus, expressions for all the six joint displacements of 6-DOF manipulator are
obtained as explicit functions of the desired position and orientation of end-
effector. The values of joint displacements can be determined from these functions
for the desired end-effector location data. Note that the inverse kinematics
solutions could have been obtained by following alternate approaches. The issues
of existence and multiplicity of solutions are discussed next.

Existence of Solutions
A close examination of the expressions of the joint variables reveals that the
oulions to the inverse kinematics problem of the given 6-DOF arm existonly if
the following conditions are satisfied:
() The term underthe square root in Eq. (4.110) is non-negative, thatis,
Td, La, ) + (d, - L,a,' + (d, - Lya,)- LJ2 0 (4.134)
140 Robotics and Control
(4.11), is non
(i) Similarly, the
term under the square
root in Eq.
nnegative
that is.
+ (d, nd, ) - L 1 2 0 (4.135
ld, - a, on the dimc
conditions are clearly geometric
constraints
nensions of
The above
the links of the manipulator. One. for given link dimensions ( .
look at thcse.
Thereare two ways to
the desired position (d,, d "
able to attainn
the end-cffector will not be satisticd, and scco
are not
the above conditions
o n e n t a t i o n (a,. a,, if
a,)
Ly and L, for the do
conditions can he used
to design link
dimensions
sired
workspace 18 automatically satic.
satisficd, condition (1) ied
Note that. if condition (i) is
of rotation for all revolute ijoints
that here it is assumed that full 360°
Aiso, note
are possible. But, due
and limitless translations of the prismatic joint to
motions are restricted only if
and solutions exist in
mechanical constraints. joint
conditions given above,
each of the joint variables take
addition to satistying the
a value that lies within the range of
motion allowed for thatjoint

Multiplicity of Solutions

Equation (4.112) gives two solutions of &, due to the presence ofthesquare root
Since 6, depends on 6, (Eq. (4.120)). there is one value of 6, for each value of4.
Similarly. from Eq. (4.129). there are two solutions for 6, and hence, it givesone
set of solutions for 9, and 6, for each value of 65.
Thus. the number of solutions to the inverse kinematics problem of the given
6-DOF-manipulator arm is four.
Example 4.7 Determination of joint variables for a 4-DOF RPPR
manipulator
For a 4-DOF. RPPR manipulator, the joint-link transformation matrices, with
joint variables 6,. d,. d and 6, are
C-S 0 0
S C 00
T (,)0 0 0
(4.136)
0 0 0 1

T,(6,) -10 d (4.137)

0 0 51
01 0 0
T,(d)=0 01 d (4.138)

0 00 I
The Inverse Kinematics 141

C S 0 01
- S C 0 o
T0)=| 0
(4.139)

It the tool contiguration matrix at a given instant is as given below, obtain the
magnitude of cach joint variable

-0.250 0.433 -0.866 -89.101


0433-0.750 -0.500 -45.67
TE -0.866 0.500 0.000 50.00
(4.140)

Solution The kinematic model of the manipulator will be


7'7 7, T, = T: (4.141)
Substituting from Eqs. (4.136) - (4.140) gives

-S4 0 07
C -S 0
0Ti 0 0 0 Ti 0 0 5TC
SS C0 0|0 0 1 0 0 00 Sa C4 I0 TE
00o0-10
LO
d|0
0 0 10 0 0 1 o
0
0 0
I
ds0
10
0
0 0 1J
(4.142)

or

GC-CS4 -S -d3S, +5CG -0.250 0.433 -0.866-89.10


0.433 -0.500 -45.67
S,C-S,S4 C dC +5S =
-0.750

-S4-Ca 0 d -0.866 -0.500 0.000 50.00


0 0 0 0 0
(4.143)
The solutions for joint-variables are found by using the direet approach.
guideline (a)-(d). The solutionfor first joint variable 6, is obtained by comparing
elements (1,3) and (2, 3)
sin 6, -0.866 (4.144)
-S =- =

6, -0.5
C = cos =

or 6= Atan2(0.866, -0.5) = - 60° (that is: 6, = Atan2 (-a,. a,)) (4.145)


The solution for second variable d, is obtained from element (3, 4) as:

d, = 50 (that is : d, =

d) (4.146)

the third joint variable d, is obtained from elenments (1, 4) and


Similarly,
clements (2, 4) by squaring, adding and simplitying

d,=tyd, +d-25 (4.147)


142 Robotics and Control

d = 100
(4.148)
The tourth joul
vViariable
6 IS C O m p u .
uted from
he negative.
Note thatd, can not
clements (3. ) and (3. 2) as

Alan2(0 806. 05) = 60" (that


is: G, =Alan2(-n.
-0,

(4.149)
= oricntation. 7, Will be ed hu
achieved by settin .

and
cffector position
The given end
\anable vCctor to
the jont
S0 100 60"| (4.150
=0

EXERCISES

first link istu


in Fig. 4.3(a) the as
twice
link planar manipulator
.1 Forthe two 2L,). Sketch
the reachable workspaco
ace of
second link (L
=

as long as the
limits are
the manipulator if the joint
range

0<0 <170°,
(4.151)
-90°<0 <110°.

reachable workspace of the tip of a two-link


42 Sketch the approximate
For this arm the first link is thrice as lone
planar arm with revolute joints. limits are
30<0 < 180*
asthe second link, that is,L 3L2 andthe joint
=

and -100° < 6, < 160°.

4.3 Sketch the approximate reachable workspace


and the dexterous workspace
4.5.
of the 3-DOF planar manipulator shown Fig.
in
4.4 Show that for a 3R planar manipulator having link lengths asL, L, andL
with (L+L)>Lz, the RWS is acircle with radius rgws =(L+L+L)
and DWS is a circle with radius rpws = (L + L- Lz).
4.5 Explain why closed form analytical solutions are preferred over numerical
iterative solutions.
4.6 Discuss the existence of multiple solutions for Example 4.1.
For a three-link planar manipulator (2-DOF for position and 1-DOF for
orientation) two solutions are possible for a given position and orientation.
as discussed in Section 4.3.2. If one more degree of freedom is added such
that the manipulator is still planar, how many solutions will be possible
for a given position and orientation when the added joint is
(a) revolute.
(b) prismatic.
4.8 How many solutions are possible, assuming no joint range limits for the
following3-DOF arm configuration?
(a) PPP manipulator shown in Fig. E4.8.
(b) RPP nanipulator shown in Fig. 3.13.
(c) RRP manipulator shown in Fig. 4.7.
(d) RRRmanipulator shown in Fig. 3.15.
The Inverse Kinematics 143

Fig. E4.8 A three degree of freedom PPP configuration

4.9 For the two degree of freedom planar RP configuration arm discussed in
Example 3.1, how many solutions can be found for a given position and
orientation. What will be the number of solutions if following alterations
are made in the manipulator configuration?
(a) Add one revolute joint after the prismatic joint.
(b) Add one revolute joint before the prismatic joint.
(c) Add one degree of freedom (IR) wrist.
In each case, the manipulator remains planar.
4.10 For the 2-DOF manipulator shown in Fig. 3.11 determine the solution for
all joint displacements q for a given tool point position and orientation.
4.11 Obtain the inverse kinematics solution for the 3-DOF planar manipulator
shown in Fig. 4.5.
4.12 Workout Example 4.1 without using inverse transform approach.
4.13 Obtain the closed form solutions for the joint displacements of the

cylindrical configuration arm described in Exercise 3.2.


in Exercise 3.6.
A.14 For the manipulator arm consisting of 3-DOF described
obtain the inverse kinematics solutions.
3-DOF articulated
4.15 Obtain the inverse kinematics model for the
arm

discussed in Example 3.3.


Fig. 3.18, obtain the conditions of
416 For the 3-DOF-RPY wrist shown in
singularities.
the forward kinematic
4.17 For the 3-DOF arm shown in Fig. E4.17, determine
solution for inverse dinematics.
model and, there from, obtain the general

L2
L1

degree offreedom manipulator


arm

Fig. E4.17 A three


Robotics and Control
144 determine
shown in Fig. E4.18 ne the joint
manipulator
4-DOF position and orientation
4.18 For the
displacenments required
tool point
forthe matrix. The nsions are
dimensio
giver
shown in
transformation the
the following

figure. -84
-0.866 0
0.5
0.866-0.5 0-48.5
T -1 105
(4.152)
20
60

40

500
Tool point

Note: Not to scale and all dimension in mm

Fig. E4.18 A 4-DOF manipulator

4.19 For a 5-DOF, RRR-RR articulated configuration manipulator shown in

Fig. E3.14 obtain the inverse kinematics model.


4.20 Consider the 6-DOF manipulator in Exercise 3.18; find solutions for alu
joint variables in terms of end-effector position and orientation y
analytical method and inverse transform method.
4.21 For the SCARA robot discussed in Example 3.6. end-ettect
configuration (orientation and position) has been calculated in Exerci
13. Taking this as the desired end-effector
configuration compu
joint displacement vector.
4.22 Consider the two-link end
planar manipulator (see Fig. 3.11) with is the
effector located at (4, 0). If the
length of each link is 2 units, detern
values of the joint variables (8,. ces
0,) using transformation n
Fig. E3.11, is it possible to tinu inver
4.23 For the 3-DOF
manipulator in
kinematics solutions using
analytical approach? Justify you we answer

Determine the solutions for all joint variables.


4.24 Find the general inverse 3 - D O F Euler wr
kinematies solutions for the 3-DOF
see Exercise 3.5.
The Inverse Kinematics
145
25 The joint -link
parameters of a 6 DOF freedom
Table E4.17. Find the inverse kinematics manipulator
are described
in
solutions for all the joint
anglese , e, , , 0, 0,1 Assume that the pocition and
onentationof the end effector
with respect to the hase coordinates "T is
Anown and is given by

, , d,
T d
d 4.153)

Table E4.17 ont link parameters for DOF


a 6
man1pulator

90 d
0 0 ,
0
90° 0
90° d
0
d
426 Describe the workspace of a manipulator. Make a list of factors on which
the workspace. the dexterous and reachable
workspace, of a given
manipulator depends.
4.27 Solutions to inverse kinematics problem are generally difficult. Explain
why
4.28 Explain the factors on which the number of solutions to given inverse
kinematics model depend.
4,29 How are the feasible solutions determined? What parameters have control
on the number of feasible solutions to the given inverse kinematies
problem?
s)Is it always possible to find analytical solutions tothe inverse kinematies
problem? Give a situation when the analytical solution to inverse
nematics problem cannot be found.
What are closed form solutions to inverse kinematies problem' Explain
r methods for
obtaining closed form solutio
2Why closed form solutions are prefered over nunerical, iterative orother
Torms of solutions to the inverse kinenmatics problem

SELECTED BIBLIOGRAPHY

D Baker and C. Wanpler, "On the lavese Kinenatics ot Relundnt


Manipulators," The tntenational Journmal of Roboties Reveurh. 7(2),
1988
Dynamic Modeling

accclcrate. move at

inust

cycle a
manipulator
nd ofientation se
the work tinne varying
position
uring This
and decclerate. Time varying torques a r e
speed. behaviour.

manipulator is termed as its


dynamic
balance out
and externaled
the internal forces
to
and acceleration) of link
actuators)

the joints (by the joint


motion (velocity
at
causcd by of the internal force,
T
The internal
forves are
are some he
frictional forces
Conolis, and environment. These includee the
Inertial. the
the forces
exerted by
are have to withstandnd
cxternal forves result, links and joints
forces. As a
"load and gravitational balance a c r o s s
these.
s t r e s s e s caused by
force/torque
model for the dynamic
behaviour of the
mathematical
Inchapter. the
this
mathematical equations, often referreda
The
is developed.
(EOM) that describe the
ofequations ofmotiontorques.
manipulator

manipulator
dvnamics. are a set
to input actuator
dynamic response of the manipulator
is useful for computation torque and of
model of a manipulator
The dynamic information
work cycle, which is vital
for execution of a typical
forces required
drives, and actuators.
The dynamic behaviour ofthe
for the design oflinks.joints,
actuator torques and motion of
manipulator provides relationship between joint
sumulation and design of
control algorithms. The manipulator control
for
links
obtain the desired
of the manipulator to
the dynamic response
maintains
dynamic model and
pertormance, which directly depends
on the accuracy of the
The control problem requires specitying the
efficiency of the control algorithms. Simulations
desired response and performance.
control strategies to achieve the and
moiuon permits tlesting of control strategies, planning.motion
of manipulator
pertormance studies without a physical prototype of the manipulator.
The serial link manipulator represents a complex dynamic system, when
be modeled by systematically using known physical laws of Lagrang
mechanics or Newtonian mechanics. Approaches such as Lagrange-tuler
which is "energy-based", and Newton-Ealer (NE)) based on "force-balae
Dynamic Modeling 191
can
he stematical ally
applicd to develop the
no backlash, no trin tion and manipulator EOM Assuming rigid
n

the resulting EOM are neglecting effects of control


set of
sec ond order. ipled. component
a
namic

al cquations,
d i f l e r e n t t a l

consIsting inertia oading


Another method,
ot nonl1near
and
h C e n Oints inints the compling reax ton forces
an cquin alent" dynamic model
gecneralized d Alembert iple.
princ provides
Newton Euler andT ag1ange Tuler
alosed forn soluton. which formulations of the dynamc model
pn
tOl bascd on such
are computationally
ntensive making real
a dynamie model, ine
antNC the computatonal fficient, if
specd, recursiVC methods and not impossible
T

don simplitying assumplions havc been apprximate models


clhowever. result in suhoptimal dynamic developed The se approximate
navCment to low specds. Thc LE and NE models. performance and restrict armn
Ide a svmbolie solution presented in this chapter
t manipulator dynamics and
ontrol problem. give insight into the

.1 LAGRANGIAN MECHANICS
Aalar function called Lagrunge
function or
lagrangian LI defined the
as
difference between the total kinetic energy Kand the total potential
mechanical system.
energy
Pot a

L = K- P

The Lagrange-Euler dynamic formulation is based on a set general1zed of

coordinates todescribe the system variables. In the


general1zed coord1nates.
generalized displacement 'q' is used as a joint variable. which deseribes a linear
displacementdfor a prismatic joint and angular displacement & tor rotary joint a
and 9 describes linear velocity d(=v) and angular velocity 8=w) tor prismatie
and
rotary joints, respectively, as diseussed in earlier chapters.
Similarly.
generalized torque 'T required at the joint to produce
Tepresents the force f for a prismatic joint and torque r tor a revolute joint. as
desired dy nuimics
een in
Eq. (5.78). Because kinetic and potential energes are funtion ot and
= . n ) , so is the
h edynamice
Lagrangian L.
model based on Lagrange-Euler formulativn is obtaned tron the
agrangian, as a set of equations,
d
rtor
i - 1, ..... H .
dt dy,) dy
C lett-hand side o f dynamic equations can be nterpieted as sum ot the

torquorces due to kinetie and potential energy present in the sysiem. The nght-
hand side is t he joint torque for joint i that is provaded by the itttuatori. If r, =
t,
means that JOnti
jont does not move and it t # 0, the anpul.ator novement is

Odified by the actuator at


jOnt
192 Robotics and Control

FREEDOM
TWO DEGREE OF
6.2 MANIPULATOR-DYNAMIC MODEL
oul to illustrate the
The dynamies of a simple manipulator is
worked
involved in the dynam
mic modeli
grang
clarify the problems
Euler formulation and
to
joints, as shown in Fiy
with both rotary
A planar. 2DOF manipulator obtaincd using
direct gcometric an.

dvnamic nmodel is
considered and its coordinatefr
formulation. Forthe manipulator,
before diseussing the general linklengths L,and L,, mass of links
and
e, and e.
0) and {1}.joint variablesshown in the figure. The
nmass
ofcach link is asSumed
are
m, and m. respectively. of mass of cach link and links are assumo
to be a point mass
located at the center
velocities are v, v,, 6, and
to be slender
members. The linear and angular
respectivecly.
Link 2 (L2) P

m2
*2 y2)
Link 1 (L1) (1) 2

m (X1. Y1)

Fig. 6.1 A 2-DOF planar articulated (RR) arm

of the manipulator. The


TheLagrangian requires kinetic and potential energies
kinetic energy of a rigid body (a link), can be expressed
as:

K=mv +Io (6.3)


2
the
where is the linear velocity , @is the angular velocity,m is the mass, and/is
v
moment of inertia of the rigid body at its center of mass.

Thus, the kinetic energy for the link I with the linear velocity v =, 4:

angular velocity , = 6, moment ofinertia I, = m , l j , and massm, 1

(6.4)

2 8 24 6
and its potential energy is
(6.5)

m,gl, sin ,
-aXIS
where g is the magnitude of acceleration due to gravity in the negative
direction.
Dynamic Modeling 193
ar
For the second link, link 2. the
Cartesian position coordinates (,. V2) ot the
of link are:
center of
mass

= L cos 0, +
L, cos (0, +0,)
(6.6)
= L sin 6, +L sin (0, +0,)
Ditferentiating Eq. (6.0) gives the components of velocity of link 2 as

i, =
L sin 6, 6, l
-

sin (0, +6,)(0,


-

+
0,)
(6.7)
i, =
L cos &, 0, +L cos
(0, +0,)6, +0,
From these components, the square of the magnitude of velocity of the end of
link 2 is

=
List®} +s0, +0,) L,la5,50 +0,6,) +

+Lictoj + Lici6 +è,)* + L,LiC\C,} +è,6,)


Simplifying

= Lio} +0, +0+L,44C,6} +8,6,) (6.8)


where ,, S, sin 6,, C2
C, = cos = =
cos
(6, + 0,) and S12 =
sin (, +
6,)
Thus, the kinetic energy of link 2 with 0, = 8, + 8, and 1, = mL is

Km,v +/0
2

m,1Ljo} +L6, +0,} +L,L2C,6; +8,0,))


+m,L(6,
24
+é,) (6.9)

m,L0; +m,L (0 +0; +20,0.)+ 2m,L,L,C,(07 + 8,d,)


6
Thepotential energy oflink 2. from Eq. (6.6), is
+ kl,S}2 (6.10)
P =
m>gl,S,
The Lagrar
CLagrangian L = K,+A
K- P='A -P-",is obtainedtrom Eqs. (6.4).
(6.5). (6.9), and (6.10). Rearranging and simplitying, the Lagrangian
10.9),
is
Robotics and Control
194
20,0,)
m,1,0, +0,
L=m, +m,0 6 (6.11
m t m, Jel,S,
m.gl.
m,glS12
+8,0,)-1
m,l, 1,C,(e; Tq
(6.2). gives thetor
torque t at
formulation
for link .
Lagrange-Euler
The
jont as
d de 6.12

differentiatcd
wrt 6, and 6, to give
in Eq. (6.1) is
The Lagrangian
m,gl,C12 (6.13)
m, tm, Je
0 ,+m,1216,+,
+
t
m:
and
O
(6..14
maL,LC, (26, +0)
time
Differentiating Eq. (6.14) wrt

d m, +m, J+m,L/ +m,L,L,C2|6,

m,14L,5,6 (6.15
m L , 4 L C :|0, -m,l,laS0,0, -
into Eq. (6.12), the torque at
Substituting the
results of Eqs. (6.13) and (6. 15)
joint I is obtained as

m+m, Lm,/ +m,4,L2Ca


m
,4C, -m,1,45,4,0, -m,L,LS.e
(6.16)
my
+
m8 LC, +m,g L2C2
2
of Lagrangian, Eq. (6.11), for joint
are
Similarly. the derivatives
(6.17)

m,L,L5,(0; +0,0,)- m,gl,C:

dL (6.18)
and m,t 6, +0,)+m,L lC,0,
and

m,t m, m,1,1,C, 0,+m,10, m,l4l,S00*


dt d6 (o 19)
Dynamic Modeling 195

Again
from Eq (6.12),

t-m l,,m,1.0
+m,Ll,S,0, m, gl,Ci (6.20)

Eyuations (6. 16) and (6.20) are the EOM (dynamic model) of the 2-1ink planar
manipulator Because both the joints are revolute, the generalized torques t and
T.represent the actual joint torques.
Torqueequation, Eqs. (6.16) and (6.20) can be written in the generalized form.
which will be deseribed in Section 6.4, as
T = M 6 t M0, + H +G
(6.21)
T M6, + Ma26, + H, + G,
where

M+ m,)+mt +m,b,lC|
M2 M m4+LL
Mapmt
H=maLL,S,0,0, - maLLS,03

H=mLlaS,®}

G + mJL4G,+m,LC
G m,L,G28
2
These coefficients are defined as

M effective inertia,
=

M=effectivecoupling inertia,
acceleration forces
H,=centrifugal and Coriolis cunbersome when a
This direct formulation approach becomes quite
In the following sections, the
anipulator with than 2-DOF is analyzed.
more
based on homogeneous cowrdinate
derivation of EOM for an-DOF manipulator,
ransformation matrices is presented.

.5 LAGRANGE-EULER FORMULATION
procedure for obtaining the
grange-Euler formulation is a systematic
The n-DOF open kinematie chain
a m i c model of an n-DOF manipulator.
Robotics and Control
196
or aisplacement variak
has n joint posilion
e s t a b l i s h e s the relation
hes,
serial

9=l41..
link manipulator
q,1'. The LE
formulation |Eq.
(6.2)|
and the generalized torques
between
torques ana
applied
accelerations,
nonconservative tor
velocities,
positions, are the
thejoint The generalized torques ques
to the manipulator. forces, and induced joint
torques?
friction The
contributed by joint actuators, joint the joints due to contact orintcract of
the torques at
induced joint torques are

environment. In the present


discussion
onlyjoint actn
lator
the
the end-etfector with
torques are considered.
formulation is carried
out in the followi
The derivation of EOM using LE wing
transformation
matrICes 7, which are obtained
subsections. It makes use of link
First, the link velocity:is
modeling discussed in Chapter .
from the kinematic used to compute
tensor is obtained. These are
and next the link inertia
computed calculated and next, the Lagrangian is
is
Ainetic energy. Then potential energy
in (6.2) to get the dynamic model.
formed. which is substituted Eq.
THE MANIPULATOR
6.3.1 VELOCITY OF A POINT ON

of an n-DOF manipulator, link velocity


For eomputing the kinetic energy of a link
extended to compute the velocity of
is required. The concepts of Chapter 5 are
each link.
as shown in
Consider a point P on link i of an n-link (n-DOF) manipulator,
frame {i-1}, and frame {i} arechosen
Fig. 6.2. The coordinate frames, frame {0},
describes the point P on the link with
as per convention. The position vector 'r

respect to frame {i). In homogeneous coordinate notation,

' = (6.22)
The position of point P with respect to base coordinate is

= ( ° 7 T..)' (6.23)
Yi-1

i-1) Xi-1 Linki

Z-1
P
'r-1
Lo

(0) dm
Yo Z

o
Fig. 6.2 Velocity of a point on the link
Dynamic Mod ng 197
where
-IT. is given by Eq. (3.3) with q, 0, for
prisn
ioint. The velocity of point P with a
rotary joint or
q, =
d, for a
). is obtained from Eq. (5.46), as respecct to base coordinates, frame

EV
(6.24)
here the fact that 'r = 0 is used

Thetranstormation a i x T, involves
4ial derivative with complex trigonometric terms and its
artial
respect to
q. required in Eq. (6.24), involves
ation. The following steps complex
simplify the computation of
ofthe homogenec transformation matrix. partial derivative
Consider the transtormation matrix
that is.
"T, for link j
given by q. (3.4) for
rotary joint. a

C6,-Se,ca, Se,sa, a,c8,


T =| 0 Se
C,Ca, -ce,Sa
Sa
a,Se
6.25)
Ca
0 0
d
where a.d,, a. and 6, have the usual
Eg. (6.25) with respect to 6, is
meaning. The partial derivative of

-se, -c0,Ca Ce,Sa -a,S6,


8T CO -se,ca se,sa a,Co
0 0 (6.26)
0 0 0 0
Comparison of Eqs. (6.25) and (6.26) gives a pattern. It is observed that
E. 6.26)can be obtained from
Eq. (6.25) by
interchanging row I with row 2,
changing the sign of row 1, and
making row 3 and row 4 zero.
Hence, partial derivative ofT, with respect to 6, can be obtained using the
eps and without actually differentiating the terms. The same result can be
aned using matrix operations, which is more convenient in performing the
raions using a computer. Mathematically these steps can be carried out using
a4x4 matrix
Q, defined as
O-I0 0
I0 0 0|
(for revolute joint) (6.27)
0 0 0 0|

dpremultiplying/l7, with Q, that is,


198 Robotics and Control

se,Ca, seSa, a,Ce,


I 0 0ose, 0,Ca, CO,Sa, aSO
C
0 Sa

Se
-Ce,Ca, CO,Sa, -4,Se,
Se,Ca, Se,Sa, a, Ce,
T (6.28)

Hence,
The result is same as Eq. (6.26).
T -'T, (6.29)
also applies for a prismatic joint
The partial derivative equation, Eq. (6.29)
with Q defined as

o 0 0 07
00 00
(for prismatic joint) 6.30)
,0 0 0 1
The reader may verify Eq. (6.29) with Q, in Eq. (6.30) for a prismatic joint.
Since T= "7 '7, T. its partial derivative "T, with respect to q,is

oT)-°T, 'T aT)


dg
dq
Using Eq. (6.29). this simplifies to
T)- T '7,. 0,T)T,..-T =°7, , T (6.3
dq
This result is valid only for j s i. Hence, for i = 1, 2, . ,

d)T- for jSi (6..32)


d 0 for j> i
Itis worth noting that the partial derivative of homogeneous transformaton
matrix T, with respect to
q, represents the effect of motion of joint j onln
The link velocity v, in Ey. (6.24) Is, thus, simplificd using Eq. (6.32) as

,2"T,0,74,'
j=
(6.33)
Dynamic Modeling 199
The Inertia Tensor
6.3.2

of the link contributcs incrtia forces


Themass all the inertial loads during
motion
of the link. The mass
lect
ertics, which retle
with respect to rotations about the
1 g n o f frame
of interest, are
represented by a mnoment of nertia tensor. It is a
matrix, which characterizes the
ymmetric matrix,
distribution of mass of a rigd
l i(thhe
n ink k i).i). The
The mmomment of
inertia tensor is defined as
dy

dm A,V, dm,, dm,, dm,


| , , dm, Jy,dm, , dm,,
dm, (6.34)
Adm,y,dm,:, dm, dm,
, dm, J, dm, , dm, dm
whercd1, is themass of the element on link i located at'r =[, Y, 1.
hown in Fig. 6.2. Using the moment of inertia, , . cross product of inertia.
as

/ a n d first moments of body, the inertia tensor in Eq. (6.34) I, is expressed


C for details).
as (see Appendix

-t1yy +1) m, X
- +I) m,y
Iyt +1y -I m,z
m,X myi m; mS
(6.35)

1]' is its of The


where m, is the mass of the link i and 7 [ï, , Z; = center mass.

the distribution of the link


moment of inertia tensor, for linki depends on
mass

and not on its position or rate of change of position.

6.3.3 The Kinetic Energyy


on linki. fori 1.2.. n, located
The kinetic energy of the differentialmassdm,
=

to the base frame {0) is


atrand moving with velocity "; (=P) with respect
(6.36)
dx dn,0,*
to obtain (»,F as,
The trace operator| Tr(A) 2a, |is used
=

i=1

(6.37)
=,v, =
"i. =Tr"; ")=Tr (,,) the
(6.37) and the result in Eq. (6.36),
Substituting v, from Eq. (6.33) Eq.
in
obtained as
AInetic energy of the differential mass is

d
k=l
=
Robotics and Control

(",0 '1)9
T 2 2°7.0 '7)dm,
(6 18

linki is then
The total kinctic energy of

(6 39

Eq (6.39)isthe
m n e n t of nertia tensor
The integral term| 'r r'dm, in

given by Eq (6.35). Therefore. A,is

T I°7.0"7,),(°7 ,Q "T)4,4 (6.40)

is
Thus. forn-DOF manipulator,. total kinetic energyof manipulator

2T , "7)U,("T"7)4,4
j = k=l
l

Exchanging the trace and sum operations

IE
ET|(°70,7),
=l k=l
(°7-0T) |G,4 (6.41)

scalar and is a function of


Note that the kinetic energy{of the manipulator is a
joint position and velocity (q. g).

6.3.4 The Potential Energy

The potential energy P of link i in a gravity field g is

(6.42)
= -m, g7)= -m,g °77
link i with respect to
where vectors 'rand represent the center of mass of
frame (0 and with respect to frame {i), respectively, and the acceleration due to
base
gravity &=*,*,K i s the 4 x I gravity vector with respect to
frame (0). The negative sign indicates that work is done on the system to raise
link i against gravity. The total potential energy ofthe manipulator is sum of the

potential energy of the inks, that is,

(6.43)

Because r is a funcuon of q, the potential cnergy of a manipulator


describe by a sealar formula as a function of joint displacements q.
Dynamic Modeling 201
6.3.5 Equations of Motion
The Lagrangian,L = X-T. from Eqs. (6.41) and (6.43). is given by

i=1j=lk=1 i=

(6.44)
ACcording to the Lagrange-Euler dynamic formulation. the generalized
uce T, of the actuator at jointi, to drive link i of the manipulator, is given by
torqu
Fq. (6.2)

d ( 8L
38838
,d&q
By substitutingL and carrying out the differentiation, the generalized torque t,
anplied to link i of n-DOF manipulator is obtained. The detailed derivations are
aried out in Appendix D. The final EOM (dynamic model) is

2M (qä, + 22h 4,4 +G, for i = 1,2,.. n (6.45)


j=l j=lk=1

where

M Td, 4}] p=max(i.j)


(6.46)

hyj (6.47)
p=max(i.j,k) L
G, =2m, gdpi "p (6.48)
p=
d=-e,T forj Si
and
forj > i
(6.49)

[°T T-0T fori2k2J


and odi =
T- T,-0,7 fori j k (6.50)
fori< jor i < k
Fquation (6.45) is the dynamic model of the manipulator and gives a set of
nonlinear, coupled, second order ordinary differential equations for n-links of
-DOF manipulator. These equations are the equations of motion or thee
ynamic equations of motion for the manipulator. Note that the mass of the
Payload would also contribute to the inertia and its
position is continuously
changEung during the work cycle. This inertia is not included in the above EOM.
C physical meaning of various terms in Eqs. (6.45) to (6.49) is described as
follows.
202 Robotics and Control

these cqiations represcnt ina.a


1. The cocfficicnts of the q, t c m s in ia It is
acccleration of j o i n t i cause.
a torque a
inertia when
known as effective when
acceleration at joint jcauses
ineitia
joint i, and coupling
Is relatcd to accelera"lle a
cefficient M,, of the
joint i In other words, the actuator In
and represent inertia loading of the ary
joint
driving torque
nertia at ontiwhere the
.

effetive
acts
M
-

oupling inertia
betwecn joint i and jointj is the
It
the
is
reacttion
M - acceleration at joint:
torquc M, . al oint indced by i
Revere
M,4, as torque at joint
applies cqually with torque due to
acleration of oint
Since Tn4) TA ) it can he shown that M, -

M,.
induced reaction trar
Thc fhoient h, represents the velocidy
andK are relalcd to velocities of
ointi. the first indc The indices
and oint A. whose dvnamic interplay inducesa reaction torquc at iuns
particular. a term of the fomn h , Is thc centrilugal lorce actingat to
nt
duc to velocity at joint and a term of the form hy 4,4 1s known as th
Conolis force acting at jointi due to velocities at jointj and t 1
In
particular
o r n o l i s force coefficient generaled by the velocities ofiount .
and oint kand "felt" at joint i. Coriolis force acting at joint i due to
clocitics at joint j and joint k is a combination term of h, 4,4,

= Centrifugal foree coetficient at joint i generated due to angular


Neloxity at jointj. The centntugal force acting atjointi dueto velocity
jat jointj is given by h4.
h = ha, because the Coriolis force acts at joint i due to velocities of
joints and k and suffix order does not matter.
h 0 . Coriofis force at joint i is not due to joint velocity itself
The terms involving gravity
represent the gravity generated moment at
g
jointi.Dhe coefficient G, is the gravity loading force at joint i due to the
linkse to n. The gravity term is a function
ofthe current position.
4 = generattzed force applied at jointi due to motion of links.
S. q = joint displacement for joint i
6. = velocity of joint i
7. = acceleration of joint i
8. The termsd, determine the rate of change of points'r, on linki, that is, the
position and orientation of frame {i} relative to the base coordinate frame

as changes.
In dynamic model
equations |Eq. (6.45)|, inertial and gravity ters are
significant in manipulator control as
they affect positioning accuracy and ser
Dynami Modeling 203
which in turn determine the
a
bility.
nd centritugal torces are repeatability of the
manipulator. The
C o r o l i s

significant for high speed motion of the


m a n p u l a t o r

The Lagrange-Euler formulation discussed above has the following characteristics:
1. It is systematic and describes motion in real physical terms.
2. The equations of motion obtained are analytical and compact. The matrix-vector form of the equations (inertia matrix, centrifugal and Coriolis force vector, and gravitational force vector) is appealing for calculations and control system design.
3. The control problem can be simplified by designing the structure of the manipulator with minimal joint couplings, that is, so that the coefficients M_ij and h_ijk may be reduced or eliminated.
4. The model is computationally intensive and is not amenable to online control.
An algorithm to obtain the dynamic model of any n-DOF manipulator using the LE formulation is described in the next section.

6.3.6 The LE Dynamic Model Algorithm


The Lagrange-Euler formulation to derive the closed-form equations of motion (dynamic model) of a manipulator can be summarized in the form of an algorithm.
Algorithm 6.1 LE Formulation of Dynamic Equations
This algorithm carries out the complete dynamic formulation of an n-DOF manipulator that satisfies the condition for existence of closed-form geometric solutions. The various steps are:
Step 1 Assign frames {0}, ..., {n} using DH notation (Algorithm 3.1) such that frame {i} is oriented (aligned) with the principal axis of link i.
Step 2 Obtain the link transformation matrix for each link and from these compute the product matrices ^0T_2, ^0T_3, and so on, which are required for computing the coefficients d_ij and their derivatives, using

$$ {}^{j}T_k = \prod_{p=j+1}^{k} {}^{p-1}T_p \qquad \text{for } j = 0, 1, 2, \ldots, n-2;\; k = j+2, \ldots, n \quad (6.51) $$
Step 3 Define Q_i for each link using Eq. (6.27) or Eq. (6.30), depending on whether the joint is revolute or prismatic.
Step 4 For each link i determine the inertia tensor I_i with respect to the frame {i} (see Appendix C).
Step 5 Compute d_ij for i, j = 1, 2, ..., n using Eq. (6.49).
Step 6 Compute the inertia coefficients M_ij for i, j = 1, 2, ..., n using Eq. (6.46).
Step 7 Compute the velocity coupling coefficients h_ijk for i, j, k = 1, 2, ..., n using Eqs. (6.47) and (6.50).
Step 8 Compute the gravity loading terms G_i for each link, i = 1, 2, ..., n, using Eq. (6.48).
Step 9 Substitute all the coefficients computed above into Eq. (6.45) to formulate the n torque equations, which give the dynamic model of the manipulator.
An example is worked out next, using Algorithm 6.1, to obtain the dynamic model of a two-link planar manipulator.
Example 6.1 EOM for 2-DOF planar manipulator using LE formulation
Determine the equations of motion (dynamic model) for the 2-DOF RR-planar manipulator arm using the Lagrange-Euler formulation. Assume both links have equal length (L1 = L2 = L) and equal mass (m1 = m2 = m). Assume further that the links are slender members with uniform mass distribution, that is, the center of mass of each link is located at the midpoint of the link.
Solution The dynamic model of the two-link, 2-DOF planar manipulator was developed in Section 6.3 using a direct approach. The manipulator is shown in Fig. 6.1.
[Figure: two-link planar RR manipulator with frames {0}, {1}, {2}; link masses m1 and m2 located at the link midpoints, L/2 from each joint]
Fig. 6.3 Frame assignments for two-link, planar RR manipulator

The LE formulation begins with the determination of the kinematic model. Applying step 1 of Algorithm 6.1, the frame assignment is carried out as shown in Fig. 6.3 and the joint-link parameters are tabulated in Table 6.1.

Table 6.1 Joint-link parameters for RR planar manipulator

Link i   a_i   alpha_i   d_i   theta_i
1        L     0         0     theta_1
2        L     0         0     theta_2

The link-transformation matrices and the required products of transformation matrices are obtained as per step 2. These are given by Eqs. (6.52) and (6.53):

$$ {}^{0}T_1 = \begin{bmatrix} C_1 & -S_1 & 0 & LC_1 \\ S_1 & C_1 & 0 & LS_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad {}^{1}T_2 = \begin{bmatrix} C_2 & -S_2 & 0 & LC_2 \\ S_2 & C_2 & 0 & LS_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6.52) $$

$$ {}^{0}T_2 = \begin{bmatrix} C_{12} & -S_{12} & 0 & L(C_{12}+C_1) \\ S_{12} & C_{12} & 0 & L(S_{12}+S_1) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6.53) $$
where C_{12} = cos(θ1 + θ2) and S_{12} = sin(θ1 + θ2). Applying step 3, the Q matrices for joints 1 and 2, from Eq. (6.27), are

$$ Q_1 = Q_2 = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad (6.54) $$

The next step (step 4) requires computation of the inertia tensors. The inertia tensors I1 and I2 for the two slender links of length L with mass m1 = m2 = m at the centroid of the link and L1 = L2 = L, with respect to frame {i}, i = 1, 2, are computed using Table C.1 in Appendix C as:

$$ I_1 = I_2 = \begin{bmatrix} \tfrac{1}{3}mL^2 & 0 & 0 & -\tfrac{1}{2}mL \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -\tfrac{1}{2}mL & 0 & 0 & m \end{bmatrix} \quad (6.55) $$
Step 5 of Algorithm 6.1 requires the computation of the matrices d_ij, which are required to compute all other coefficients in Eqs. (6.46) to (6.48). According to Eq. (6.49),

$$ d_{ij} = \begin{cases} {}^{0}T_{j-1}\, Q_j\, {}^{j-1}T_i & \text{for } j \le i \\ 0 & \text{for } j > i \end{cases} $$

For i, j = 1, 2, the four d_ij matrices are d11, d12, d21, and d22. These are computed now. For i = j = 1, d11 is computed as:

$$ d_{11} = {}^{0}T_0\, Q_1\, {}^{0}T_1 = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} C_1 & -S_1 & 0 & LC_1 \\ S_1 & C_1 & 0 & LS_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

Note that ^0T_0 is the identity matrix. Thus,

$$ d_{11} = \begin{bmatrix} -S_1 & -C_1 & 0 & -LS_1 \\ C_1 & -S_1 & 0 & LC_1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad (6.56) $$

Similarly, d12 = 0 (because j > i).   (6.57)
$$ d_{21} = Q_1\, {}^{0}T_2 = \begin{bmatrix} -S_{12} & -C_{12} & 0 & -L(S_{12}+S_1) \\ C_{12} & -S_{12} & 0 & L(C_{12}+C_1) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad (6.58) $$

Finally,

$$ d_{22} = {}^{0}T_1\, Q_2\, {}^{1}T_2 = \begin{bmatrix} -S_{12} & -C_{12} & 0 & -LS_{12} \\ C_{12} & -S_{12} & 0 & LC_{12} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad (6.59) $$
Step 6 is applied to compute the elements of the inertia matrix M using Eq. (6.46), that is,

$$ M_{ij} = \sum_{p=\max(i,j)}^{n} \mathrm{Tr}\left( d_{pj}\, I_p\, d_{pi}^{T} \right) $$

Using the d_ij and I_i from Eqs. (6.55) to (6.59), the effective inertia coefficients M11 and M22 are computed. First,

$$ M_{11} = \mathrm{Tr}\left( d_{11} I_1 d_{11}^{T} \right) + \mathrm{Tr}\left( d_{21} I_2 d_{21}^{T} \right) \quad (6.60) $$

Now,

$$ d_{11} I_1 d_{11}^{T} = \frac{mL^2}{3} \begin{bmatrix} S_1^2 & -S_1 C_1 & 0 & 0 \\ -S_1 C_1 & C_1^2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad (6.61) $$

Thus,

$$ \mathrm{Tr}\left( d_{11} I_1 d_{11}^{T} \right) = \frac{mL^2}{3}\left( S_1^2 + C_1^2 \right) = \frac{mL^2}{3} \quad (6.62) $$

and
$$ d_{21} I_2 d_{21}^{T} = mL^2 \begin{bmatrix} \tfrac{1}{3}S_{12}^2 + S_1 S_{12} + S_1^2 & * & 0 & 0 \\ * & \tfrac{1}{3}C_{12}^2 + C_1 C_{12} + C_1^2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \quad (6.63) $$

where only the diagonal entries, which contribute to the trace, are written out. Thus,

$$ \mathrm{Tr}\left( d_{21} I_2 d_{21}^{T} \right) = mL^2\left( \tfrac{1}{3} + C_1 C_{12} + S_1 S_{12} + 1 \right) = \tfrac{4}{3} mL^2 + mL^2 C_2 \quad (6.64) $$


Substituting Eq. (6.62) and Eq. (6.64) in Eq. (6.60) and simplifying gives

$$ M_{11} = \tfrac{5}{3} mL^2 + mL^2 C_2 \quad (6.65) $$

Similarly,

$$ M_{22} = \mathrm{Tr}\left( d_{22} I_2 d_{22}^{T} \right) = \tfrac{1}{3} mL^2 \quad (6.66) $$

The coupling inertia coefficients are computed as

$$ M_{12} = M_{21} = \mathrm{Tr}\left( d_{22} I_2 d_{21}^{T} \right) = \tfrac{1}{3} mL^2 + \tfrac{1}{2} mL^2 C_2 \quad (6.67) $$

From the inertia coefficients obtained above, the inertia matrix for the manipulator is

$$ M(\theta) = mL^2 \begin{bmatrix} \tfrac{5}{3} + C_2 & \tfrac{1}{3} + \tfrac{1}{2} C_2 \\ \tfrac{1}{3} + \tfrac{1}{2} C_2 & \tfrac{1}{3} \end{bmatrix} \quad (6.68) $$
In step 7, the Coriolis and centrifugal force coefficients (or velocity coupling coefficients) h_ijk for i, j, k = 1, 2 are obtained using Eq. (6.47), that is,

$$ h_{ijk} = \sum_{p=\max(i,j,k)}^{n} \mathrm{Tr}\left( \frac{\partial d_{pj}}{\partial q_k}\, I_p\, d_{pi}^{T} \right) $$

From Eqs. (6.50) and (6.55), the centrifugal acceleration coefficients are

$$ h_{111} = 0, \qquad h_{122} = \mathrm{Tr}\!\left( \frac{\partial d_{22}}{\partial \theta_2}\, I_2\, d_{21}^{T} \right) = -\tfrac{1}{2} mL^2 S_2, \qquad h_{211} = \mathrm{Tr}\!\left( \frac{\partial d_{21}}{\partial \theta_1}\, I_2\, d_{22}^{T} \right) = \tfrac{1}{2} mL^2 S_2, \qquad h_{222} = 0 \quad (6.69) $$
and the Coriolis acceleration coefficients, evaluated from Eq. (6.47) in the same way, are

$$ h_{112} = h_{121} = -\tfrac{1}{2} mL^2 S_2, \qquad h_{212} = h_{221} = 0 \quad (6.70) $$
The Coriolis and centrifugal terms H_i are computed using the series summation

$$ H_i = \sum_{j=1}^{2} \sum_{k=1}^{2} h_{ijk}\, \dot{\theta}_j \dot{\theta}_k $$

For i = 1,

$$ H_1 = h_{111}\dot{\theta}_1^2 + h_{112}\dot{\theta}_1\dot{\theta}_2 + h_{121}\dot{\theta}_2\dot{\theta}_1 + h_{122}\dot{\theta}_2^2 = -mL^2 S_2\, \dot{\theta}_1 \dot{\theta}_2 - \tfrac{1}{2} mL^2 S_2\, \dot{\theta}_2^2 \quad (6.71) $$

and for i = 2,

$$ H_2 = h_{211}\dot{\theta}_1^2 + h_{212}\dot{\theta}_1\dot{\theta}_2 + h_{221}\dot{\theta}_2\dot{\theta}_1 + h_{222}\dot{\theta}_2^2 = \tfrac{1}{2} mL^2 S_2\, \dot{\theta}_1^2 \quad (6.72) $$

Thus, the Coriolis and centrifugal coefficient vector H is

$$ H(\theta, \dot{\theta}) = \begin{bmatrix} -mL^2 S_2\, \dot{\theta}_1 \dot{\theta}_2 - \tfrac{1}{2} mL^2 S_2\, \dot{\theta}_2^2 \\ \tfrac{1}{2} mL^2 S_2\, \dot{\theta}_1^2 \end{bmatrix} \quad (6.73) $$
Step 8 is applied to compute the gravity loading at the two joints. From Eqs. (6.48) and (6.56) to (6.59),

$$ G_1 = -\left( m\, g\, d_{11}\, {}^{1}\bar{r}_1 + m\, g\, d_{21}\, {}^{2}\bar{r}_2 \right) \quad (6.74) $$

The mass of each link has been assumed to be concentrated at the centroid of the link. The centroid is located from the origins of frame {1} and frame {2} at

$$ {}^{1}\bar{r}_1 = {}^{2}\bar{r}_2 = \begin{bmatrix} -\tfrac{L}{2} & 0 & 0 & 1 \end{bmatrix}^{T} \quad (6.75) $$

and the gravity acts in the negative y-direction, giving g = [0  -g  0  0]. Hence,

$$ m\, g\, d_{11}\, {}^{1}\bar{r}_1 = m \begin{bmatrix} 0 & -g & 0 & 0 \end{bmatrix} \begin{bmatrix} -S_1 & -C_1 & 0 & -LS_1 \\ C_1 & -S_1 & 0 & LC_1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} -\tfrac{L}{2} \\ 0 \\ 0 \\ 1 \end{bmatrix} = -\tfrac{1}{2} m g L C_1 \quad (6.76) $$

$$ m\, g\, d_{21}\, {}^{2}\bar{r}_2 = m \begin{bmatrix} 0 & -g & 0 & 0 \end{bmatrix} \begin{bmatrix} -S_{12} & -C_{12} & 0 & -L(S_{12}+S_1) \\ C_{12} & -S_{12} & 0 & L(C_{12}+C_1) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} -\tfrac{L}{2} \\ 0 \\ 0 \\ 1 \end{bmatrix} = -\tfrac{1}{2} m g L C_{12} - m g L C_1 \quad (6.77) $$

Finally,

$$ G_1 = \tfrac{3}{2} m g L C_1 + \tfrac{1}{2} m g L C_{12} \quad (6.78) $$

Similarly,

$$ G_2 = -m\, g\, d_{22}\, {}^{2}\bar{r}_2 = \tfrac{1}{2} m g L C_{12} \quad (6.79) $$

The complete dynamic model is obtained in the final step, step 9, by substituting the above results, Eqs. (6.68), (6.73), (6.78), and (6.79), into Eq. (6.45). The equations of motion in vector-matrix form are

$$ \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} = mL^2 \begin{bmatrix} \tfrac{5}{3} + C_2 & \tfrac{1}{3} + \tfrac{1}{2} C_2 \\ \tfrac{1}{3} + \tfrac{1}{2} C_2 & \tfrac{1}{3} \end{bmatrix} \begin{bmatrix} \ddot{\theta}_1 \\ \ddot{\theta}_2 \end{bmatrix} + \begin{bmatrix} -mL^2 S_2\, \dot{\theta}_1 \dot{\theta}_2 - \tfrac{1}{2} mL^2 S_2\, \dot{\theta}_2^2 \\ \tfrac{1}{2} mL^2 S_2\, \dot{\theta}_1^2 \end{bmatrix} + \begin{bmatrix} \tfrac{3}{2} m g L C_1 + \tfrac{1}{2} m g L C_{12} \\ \tfrac{1}{2} m g L C_{12} \end{bmatrix} \quad (6.80) $$

The reader should verify that the dynamic model in Eq. (6.21) is identical to the dynamic model in Eq. (6.80) for m1 = m2 = m and L1 = L2 = L. Equations (6.80) are rather complex for such a simple manipulator. Obviously, the dynamic equations for a 6-DOF manipulator would be far more complex.
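As a cross-check on Eq. (6.80), the coefficients of Example 6.1 can be evaluated numerically. The following sketch (Python with NumPy; not part of the original text, and the function name and default values are illustrative) evaluates M(θ), H(θ, θ̇), and G(θ) from the closed-form expressions above for a given joint state.

import numpy as np

def dynamics_2dof(theta, theta_dot, m=1.0, L=1.0, g=9.81):
    """Closed-form M, H, G of the two-link planar RR arm of Example 6.1 (Eq. 6.80)."""
    t1, t2 = theta
    td1, td2 = theta_dot
    C2, S2 = np.cos(t2), np.sin(t2)
    C1, C12 = np.cos(t1), np.cos(t1 + t2)

    M = m * L**2 * np.array([[5.0/3.0 + C2, 1.0/3.0 + C2/2.0],
                             [1.0/3.0 + C2/2.0, 1.0/3.0]])
    H = np.array([-m * L**2 * S2 * td1 * td2 - 0.5 * m * L**2 * S2 * td2**2,
                   0.5 * m * L**2 * S2 * td1**2])
    G = np.array([1.5 * m * g * L * C1 + 0.5 * m * g * L * C12,
                  0.5 * m * g * L * C12])
    return M, H, G

# Torque needed for a desired acceleration at a given state:
M, H, G = dynamics_2dof(theta=[0.3, 0.6], theta_dot=[0.1, -0.2])
tau = M @ np.array([0.5, 0.2]) + H + G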

6.4 NEWTON-EULER FORMULATION

The second approach to dynamic modeling of robotic manipulators is discussed now. The Newton-Euler (NE) formulation is based on Newton's second law and d'Alembert's principle. The balance of all forces acting on a link of the manipulator leads to a set of equations whose structure allows a recursive solution. A forward recursion, which describes the kinematic relations of a moving coordinate frame, is performed for propagating velocities and accelerations, followed by a backward recursion for propagating forces and moments. Initially, it is assumed that the position, velocity, and acceleration of each joint, (q, q̇, q̈), are known. The joint torques required to cause these time-dependent motions to realize a trajectory are computed using the recursive NE equations of motion. To understand the Newton-Euler formulation, some basic concepts of kinetics are reviewed first.
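The two-pass structure of the recursive NE computation can be sketched as follows (Python; an illustrative outline only, where the helper functions propagate_motion and propagate_force and the link attributes are hypothetical placeholders standing in for the link-level kinematic and force-balance relations of the formulation).

def newton_euler_torques(q, qd, qdd, links, propagate_motion, propagate_force):
    """Outline of the recursive NE scheme: forward pass for motion, backward pass for forces."""
    n = len(links)
    motion = [None] * (n + 1)            # velocity/acceleration state of each link frame
    motion[0] = links[0].base_motion()   # base link: typically zero velocity, gravity as acceleration

    # Forward recursion: propagate velocities and accelerations from base to tip.
    for i in range(1, n + 1):
        motion[i] = propagate_motion(motion[i - 1], links[i - 1], q[i - 1], qd[i - 1], qdd[i - 1])

    # Backward recursion: propagate forces/moments from tip to base and project on joint axes.
    tau = [0.0] * n
    force = links[-1].tip_load()         # external force/moment at the end-effector, if any
    for i in range(n, 0, -1):
        force, tau[i - 1] = propagate_force(force, motion[i], links[i - 1])
    return tau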
9 Robotic Sensors and Vision

A robot with a closed-loop control system is able to carry out the specified work cycle with the help of information about the joints and end-effector obtained from sensors. The joint and end-effector position, velocity, and possibly acceleration are fed back by means of optical encoders, tachometers, or other sensors, placed on the manipulator joints or joint actuators, to ensure that the manipulator moves in the desired manner. Such a robot can move accurately but is unable to sense and respond to any change in its work environment.
For example, in a pick-n-place operation the robot is required to pick parts from a specific location, say the "pick point", on a moving conveyor and place them into different bins based on some specified criterion, as shown in Fig. 9.1. The manipulator will move the gripper to the pick point and the gripper will close to grip the object at the pick point irrespective of the presence or orientation of the part, and

[Figure: a manipulator beside a conveyor belt, picking parts at the pick point and placing them into bins 1, 2, and 3]
Fig. 9.1 A robot performing a simple task of pick-n-place: picking parts from a moving conveyor and placing them into bins 1, 2, or 3
will move to the next position even if no part is picked up. That is, it will execute the work cycle it is programmed for without any concern for the "correctness" of the operation. Similarly, the manipulator will continue to move through the specified path even if some obstacle comes in its path. It will collide with the obstacle rather than intelligently move around the obstacle and avoid it.
Thus, for a robotic manipulator to operate effectively and intelligently, and to enable it to work in unstructured environments, it must be equipped with sensors, which give information about itself and its environment. The computer controller operating the robot must receive information from the sensors and use it in real time for decision making. Robots capable of such decisions are called intelligent robots, the environment sensing is called intelligent sensing, and the sensors used for such operations are called intelligent sensors.
The intelligent sensors need associated hardware and software to process the signals from the robot's data-acquisition system. The performance, capability, and flexibility of a robot are greatly dependent on its data-acquisition system. A robot without intelligent sensors is handicapped and can only do very specific tasks, while a robot that is able to "see" and "feel" is easier to train to perform complex tasks.

9.1 THE MEANING OF SENSING

First, what does it mean to sense something? Can you name the five human senses? What does each sense tell us about the surroundings? Senses often seem to overlap. In other words, two different senses can sometimes detect the same things. For example, to find out if you are near a wall, you can open your eyes and look, stick out your hand to feel, or even yell to hear whether your voice is reflected. Human sensing is considered the best sensing, and man-made sensors are still far inferior to human and other natural sensors in many respects.

9.1.1 The Human Sensing


What you learn from your senses depends on what your brain does with the information it receives from your sense organs, such as your eyes, ears, and skin. For example, a big piece of "steel" may look heavy to you, but that is because you already know that it is steel and steel is heavy. You would have to pick it up (or try to) and feel it to really know, or if you cannot guess the material it is made of. Your brain is trained to know what information from each sense organ means and how to interpret it.
The brain also knows how to interpret the different kinds of signals one organ sends to it. For example, a sudden sensation on your finger might be interpreted as "It is really sharp!" or "That's hot!" or even "This is really heavy!" How is that possible from just the tip of your finger?
First, the nerves in your finger are extremely sensitive (and there are many of them), so they pick up very small details. Second, your brain does a lot of work processing and organizing the information that the nerves in your finger send to it.
9.1.2 The Problem of Robot Sensing


Much of what your senses "tell" you depends on what you already know about the kinds of things you might be sensing. What a particular sensation makes you think, conclude, or do is a result of a complex interaction between your sense organs and your brain. Robots "know" nothing, which is why sensing with physical sensors, and solving these problems with sensor outputs and a computer, is very difficult.
What seems like a very simple sensing task becomes quite complex when applied to the robot. For example, for most people it is pretty easy to tell if there is an apple on the table in front of them. You just open your eyes and look. Immediately you know, "Yes, there it is", or "No, there are no apples here." You can also do it with your eyes closed by just reaching out and feeling or smelling for one. For a robot, it is not that simple.


Assume you have a robot with a light sensor that resembles a colour video camera. You would like to "teach" (program) your robot so that it can determine whether there is an apple in front of it. How will it be done? You may try like this: put something in front of the robot, and "tell" it, this is not an apple, or this is an apple. How many things do you need to show? What about the apple size and shape variations?
Well, a good first step would be to let the robot understand what an apple is. How would you explain the apple to your robot?
The problem of robot sensing is not only difficult as illustrated above, it is
greatly limited by the sensor capabilities. It is extremely difficult to build sensor
systems that are as sensitive as human senses.
Properly designing or choosing and using sensors is an essential part of
robotics, and sensors of all types are used in many different kinds of robots for
different tasks. For example, some robots use only simple position sensors, while
some may have a combination of special cameras and advanced software to gather
information about their surroundings, and orient themselves based on the
behaviour of other robots.

9.2 SENSORS IN ROBOTICS
Sensors are used in robotic systems for a variety of functions. The task planning and control algorithms, discussed in previous chapters, require on-line measurements of the parameters characterizing both the internal state of the manipulator and its work environment. The most common and minimal use of sensors is to provide information about the status of the links and joints of the manipulator and about the working environment of the robot. In addition, sensors may be deployed to provide data for inspection and quality control, safety monitoring, and to detect and resolve interlocks in the workcell.
The major functions of sensors in robots can be grouped into five basic categories:
1. Status sensors,
2. Environment sensors,
3. Quality control sensors,
4. Safety sensors, and
5. Workcell control sensors.
The role of the sensors under each of these categories is described next.

9.2.1 Status Sensors
The primary use of sensors on a robot is to sense position, velocity, acceleration, and torque/force at each joint of the manipulator for position and motion control. These sensors form an essential part of the basic or internal closed-loop control systems and are called internal sensors or state (or status) sensors. Internal sensors give feedback on the status of the manipulator itself. The degree of accuracy that can be achieved by a manipulator depends on the resolution and accuracy of the internal sensors. Internal sensors must also be cost effective because they are required for each axis of the manipulator.
9.2.2 Environment Sensors
The second major function for sensors is to extract features of the objects in the workcell or surrounding environment of the robot. This knowledge is utilized by the computer controller to modify or adapt to a given situation. For example, if the robot has to process several types of different parts, each requiring a different sequence of actions by the robot, such as adjusting the gripper orientation or applying the exact gripping force, it must determine the required parameters for each part. Sensors for these functions are generally placed in the environment of the robot or are external to the manipulator and are called external or environment sensors. There are exceptions where an external sensor may be mounted on the manipulator, such as a wrist force-torque sensor. Another example is a camera mounted on the wrist to "see" the part in the workcell before gripping it.
External sensors would be used to perform one or more of the following functions:
- detect the presence of a workpiece,
- determine the position and/or orientation of the workpiece and of objects to be articulated, or of other objects present in the workcell,
- workpiece identification,
- determine workpiece properties such as size, shape, and so on,
- detect and identify obstacles in the environment, and provide information about their size, shape, location, speed, and so on,
- provide information about the manipulator-environment interaction forces and torques,
- provide information about environmental variables such as temperature, humidity, and so on,
- determine the position and orientation of the end-effector, joints, and links of the manipulator.
Sometimes, the accuracy requirements in a given application are more stringent than the inherent accuracy and repeatability of the manipulator; for example, the task of assembling two parts that have a very small allowance may require an alignment finer than the resolution of the manipulator. In such situations, the feedback from external sensors, such as vision, can be used to improve the accuracy of positioning of the robot.
All the information from the external sensors has to be processed by the computer in real time to guide the manipulator in the execution of its programmed work cycle. An important sensing method is the vision system, which might be employed to determine such characteristics as part location and orientation, size and shape, and many other properties. Robotic vision is discussed in detail in the later part of this chapter.
9.2.3 Quality Control Sensors

Inspection and quality control is an important function for robotic sensors.


Traditionally, quality control is performed on a statistical sampling basis, using manual inspection techniques. Because sensors can be used to determine a variety of part quality characteristics, the use of sensors permits 100 percent inspection.
The external sensors used for environmental feedback can also be used for
detecting faults and failures in the finished product. The inspection process can be
made a part of the programmed work cycle and the
sequence of operations
performed by the manipulator may be linked to the result of inspection process.
Computer vision, ultrasonic, sonic, and other sensors can be used for inspection.
Vision system can provide a large variety of information about the work piece in
addition to its position and orientation.

9.2.4 Safety Sensors

An important function of the sensors in robotics is


safety and hazard monitoring.
The safety of the workers and other equipment in the work environment, and that
of the manipulator itself, are important concerns. For
example, if there is a power failure, all the links of the manipulator may fall to the zero-gravity position instantaneously. This may injure the human beings in the vicinity or may damage the equipment or the manipulator itself. A solution is to sense the power failure and apply brakes to prevent the uncontrolled falling of the links due to gravity.
9.2.5 Workcell Control Sensors

Another major use of sensor technology in robotics is to implement interlocks in the workcell. An interlock in the work cycle is a situation that requires sequencing of tasks such that completion of a task must be ensured before proceeding to the next task. For example, a part must arrive on the conveyor before the gripper of the manipulator can pick it. This kind of verification of interlocked tasks can be done
by incorporating the signals from a variety of sensors in the robot program. Sensors may also be used to detect and resolve the interlocks in the workcell.
All the categories of sensor functions described above require the sensors to be a part of the robot control system to accomplish a specific control function. An intelligent robot requires an intelligent data-acquisition system and a multilevel closed-loop control system. Figure 9.2 shows the multilevel architecture of a computer-based control of an intelligent robotic manipulator.

[Figure: block diagram showing tasks (work cycle), programming (teaching), models, and the intelligence, motion, and control algorithms in the computer; control signals driving the actuators, transmission, and mechanical system (links and joints); internal (state) sensors and external sensors providing feedback from the manipulator, its interaction (force/motion), and the work environment]
Fig. 9.2 Architecture of a computer-based intelligent robotic manipulator

Fusion of the available sensory data with task planning characterizes the robot as an intelligent robot, which is capable of perception and action. The control algorithms can guarantee the coordinated motion of the mechanical structure in correspondence with the task planning.
The signals from all the sensors on the manipulator and in the environment are fed to the computer-controller, which analyses them as per the system models, task programs, control algorithms, motion algorithms, and intelligence algorithms, and generates the control signals. These control signals are applied to the actuators to command and control the manipulator. The interaction of the manipulator with the environment is sensed by the sensors and again fed back, and the cycle goes on.
A broad classification of robotic sensors is discussed briefly in the next section.
9.2.6 Classification of Robotic Sensors
Robotic sensors can be classified into a number of broad groups in view of their applicability and use in robotics, such as property sensors, functional sensors, acoustic sensors, optic sensors, and so on.
The schematic in Fig. 9.3 gives a possible classification hierarchy of the robotic sensors. The tasks and specific sensors in each group are also mentioned therein. These groupings are to some extent arbitrary and not unique. For example, external sensors could be subdivided into touch or tactile sensors as well as into contact or noncontact sensors.

[Figure: classification tree of robotic sensors into status sensors (to detect position, velocity, acceleration, torque, force, etc.: potentiometer, tachometer, optical encoder, microswitch, etc.) and environment sensors (to detect presence, motion, force, torque, touch/tactile, etc.), the latter split into contact sensors (pressure, force, slip, torque, surface finish, temperature, pH, etc.) and noncontact sensors (vision, optical, acoustic, infrared, proximity, range, temperature, chemical, etc.)]
Fig. 9.3 A possible classification hierarchy of robotic sensors

Contact or touch sensors are among the most common sensors in robotics. These are generally used to detect a change in position, velocity, acceleration, force, or torque at the manipulator joints and/or the end-effector. There are two main types, bumper and tactile. Bumper-type sensors detect whether they are touching anything; the information is either "yes" or "no". They cannot give information about how hard the contact is or what they are touching. Tactile sensors are more complex and provide information on how hard the sensor is touched, or what the direction and rate of relative movement is.
The technologies used for sensors in this group are electric, electromagnetic, electronic, and optic. Some common robotic contact-type sensors are potentiometers, tachometers, optical encoders, accelerometers, linear variable differential transformers (LVDT), and force-torque sensors.
The second important group is that of the noncontact sensors, which are used to obtain parametric information about the environment, the task, objects in the workspace, and
to detect the presence, distance, or features of the workpiece. The sensors in this group include proximity sensors, range sensors, temperature sensors, vision systems, and so on. Various technologies such as optic, acoustic, infrared, magnetic, and so on are used for the sensors in this group. Proximity sensors are used in robotics to detect the presence of an object within a specified distance or to find the accurate distance of the object. These are useful in object grasping or collision avoidance.
The vision system is probably the most powerful sensor, as it can detect various features and other attributes of the workpiece such as size, shape, geometry, distance, colour, texture, and finish. It can also detect characteristics of the environment such as the position and orientation of parts and changes in these with time, the presence of smoke, humidity, temperature gradient, and so on.
Some common kinds of sensors used in robotics are discussed here. A complete description of the various sensor technologies and sensors and their detailed analysis can be found elsewhere.

9.3 KINDS OF SENSORS USED IN ROBOTICS
Many types of sensors are used in robotics. In this section, some common sensors suitable for robotic applications are discussed. These include touch and tactile sensors in the contact type, and proximity, range, and physical property sensors in the noncontact type.

Position sensing is the most fundamental need for robots to control the position and orientation or motion of the joints and links of the robot. The measure of the position and rotation of the manipulator, that is, of its end-effector or its joints, is needed to get an accurate assessment of the manipulator motions and positions. In many applications, there is a need to accurately know the position of the end-effector (tool or gripper), or to find the coordinate locations of objects in the workspace, or to track motion to provide real-time adjustments. For these, rotation sensors are very important in robotics to keep track of joint speeds and motions of end-effectors. There are very few devices to directly give absolute position for linear motions. Often linear motion is converted to rotary motion using a rack and pinion drive and coupled to rotary sensors such as encoders to give absolute linear position.
In the industrial robotic manipulators currently manufactured, the revolute joints outnumber the prismatic joints. Thus, sensing of rotation or angular position is a fundamental requirement, as stated above. Reliable angular position sensors are implemented in robots in the form of potentiometers, tachometers, encoders, resolvers, and accelerometers.
Proximity and touch sensors are generally used in intelligent grasping of workpieces, while force-torque sensors provide feedback for object manipulation and control of its interaction with the environment. For example, these sensors are useful to prevent slipping or damage of the object once grasped, or to apply the right torque to tighten a nut. In contrast, the gross guidance information for the manipulator is provided by range and vision sensing. Vision sensing can provide useful information in addition to gross guidance. The vision sensing fundamentals and other aspects are discussed in the later part of this chapter.

9.3.1 Acoustic Sensors
Acoustic or sonic sensors are commonly used for a wide variety of noncontact presence, proximity, distance measuring, or navigation applications. Sonic sensors can be simple or very complex. They typically transmit a short burst of ultrasonic sound towards the target, which reflects the sound back to the sensor. The system then counts the time for the echo to return to the sensor and calculates the distance of the target using the speed of sound in the medium. The wide variety of sensors currently available differ from one another acoustically; they operate at different frequencies and have different radiation patterns.
Ultrasonic sound consists of vibrations at a frequency above the range of human hearing, usually above 20 kHz. Most ultrasonic sensors use a single transducer to both transmit the sound pulse and receive the reflected echo, typically operating at frequencies between 40 kHz and 250 kHz.
Silicon-based ultrasonic sensors and acoustic wave sensors are relatively new microelectromechanical system (MEMS) sensors that are extremely versatile and are just beginning to realize their potential in robotics. They are competitively priced, inherently rugged, very sensitive, intrinsically reliable, and consistent in performance. These can be used for a large variety of applications, such as presence, proximity, velocity, acceleration, force, torque, humidity, temperature, and so on.
Acoustic wave sensors send out acoustic waves; as the acoustic wave propagates through or on the surface of the material, changes in the characteristics of the propagation path affect the velocity and/or amplitude of the wave. These changes can be monitored by measuring the frequency or phase characteristics of the reflected wave and can then be correlated to the corresponding physical quantity being measured. Arrays of ultrasonic sensors or acoustic wave sensors are useful for imaging and scene analysis. For a limited class of applications, acoustical imaging may be complementary or even competitive to optical imaging.
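As an illustration of the time-of-flight ranging principle described at the start of this subsection (not part of the original text; the speed of sound used is a nominal room-temperature value for air), the range can be computed from the measured echo delay as follows.

def ultrasonic_range(echo_time_s, speed_of_sound=343.0):
    """Distance to the target from the round-trip echo time of an ultrasonic burst.

    echo_time_s    : time between transmitting the burst and receiving the echo (s)
    speed_of_sound : propagation speed in the medium (m/s); about 343 m/s in air at 20 C
    """
    # The pulse travels to the target and back, so the one-way distance is half the path.
    return 0.5 * speed_of_sound * echo_time_s

# Example: an echo received 2.9 ms after transmission corresponds to roughly 0.5 m.
print(ultrasonic_range(2.9e-3))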

9.3.2 Optic Sensors


Optic or light-based sensors are noncontact sensors, and a wide range of optic sensors based on different techniques is available for use in robotics. Optic sensors other than the camera, which is used in vision systems, are discussed here.
Simple light sensors work with visible light and operate on the principle of detecting the change in the intensity of light sent and received. The signal can be binary, 0 (black or no light) or 1 (white or bright), or a magnitude of light intensity, say between 0 and 100.
Optic sensors in nonvisible light include infrared, ultraviolet, and laser-based sensors. An infrared-sensor-based tracking system allows full 3-D tracking within a space of a few meters, with a resolution of a fraction of a millimeter. Another, eye-safe, laser-based scanning sensor is used for navigation on automated guided vehicles, robots, and other material handling equipment.

9.3.3 Pneumatic Sensors
Various kinds of pneumatic sensors are among the oldest sensors used on robots and are still convenient in some cases. Pneumatic sensors may be contact-less or contact sensors. They can be used as proximity devices, touch sensors, or pressure or force sensors.
Pneumatic proximity sensors are useful in workpiece grasping as they can provide the accurate distance between the fingers and the workpiece, finger closure signals, orientation of the workpiece, and so on.
Pressure- or vacuum-based sensors are very sensitive and are suitable for handling delicate and fragile objects. Contour- or pressure-based sensors are used in identifying the features of the workpiece, such as edges, holes, shape, etc., and in detecting movement.
Some other conventional sensors that are used on robots are micro-switches, synchros, resolvers, differential transformers, proximity sensors, hall-effect sensors, optical interrupters, tachometers, eddy current sensors, and accelerometers.
9.3.4 Force/Torque Sensors
The most significant problem with a force/torque sensor (FTS) is mounting it on the joint. For a force/torque sensor to be sensitive, it must be flexible (not rigid). This makes the overall joint much more flexible, contrary to the rigidity required for precision and accuracy, and causes a loss of controllability of the joint.
It is possible to estimate the joint torques and inertias without the use of force/torque sensors. Estimation algorithms based on actuator power input and inertia estimation are available. These methods are complex and very sensitive to propagated errors.
Apart from the joint forces/torques, the interaction between the robot (its end-effector) and the environment generates forces/torques that must be controlled to preserve the integrity of the task being performed. For measuring these interaction forces, sensors are mounted between the wrist and the end-effector and are known as Wrist Force-Torque Sensors (WFTS).
A strain-gauge-based WFTS is shown in Fig. 9.4. The sensor has two annular rings, an outer ring and an inner ring, which are connected to the wrist end and the end-effector end, respectively. These rings are connected to each other by a cross and, on the arms of this cross, eight pairs of strain gauges are mounted as shown. By suitably selecting and connecting the gauges in Wheatstone bridges, the sensor detects the three force components along the three principal axes and the three moments about the three axes.
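In practice, the raw bridge outputs of such a six-component sensor are mapped to the force and moment components through a calibration (decoupling) matrix obtained when the sensor is characterized. The sketch below (Python; not part of the original text, and the 6x8 matrix shown is an empty placeholder, not real calibration data) illustrates the idea.

import numpy as np

# Hypothetical calibration matrix: each of the 6 wrench components (Fx, Fy, Fz, Mx, My, Mz)
# is taken as a linear combination of the 8 bridge output voltages of the cross-beam gauges.
C = np.zeros((6, 8))          # filled with the sensor's calibration data in a real system

def wrench_from_bridges(bridge_voltages, calibration=C):
    """Convert the 8 strain-gauge bridge voltages into the 6-component wrench [F; M]."""
    w = calibration @ np.asarray(bridge_voltages)
    forces, moments = w[:3], w[3:]
    return forces, moments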
[Figure: hub (connects to the arm end), four deflection bars with four strain gauges on each bar, and rim (connects to the wrist)]
Fig. 9.4 A six-component strain-gauge based wrist force-torque sensor

Many other alternate designs are available for WFTS, based not only on strain gauges but also on sonic and optic sensors. It is important to note that the WFTS should not affect the positioning accuracy of the manipulator and, therefore, it must have high stiffness. Some other requirements of a WFTS are: it must be small and light in weight, compact in design, sensitive, linear, and with low internal friction.
9.3.5 Optical Encoders

For position sensing in robotics, optical encoders are frequently used sensors because of their simple construction, low cost, ease of application, and versatility. An optical encoder converts linear or angular displacement into digital code or pulse signals. Optical encoders are of two types:
Absolute encoders: They provide the actual position relative to a fixed reference (zero) position. Their output is a digitally coded signal with a distinct digital code indicative of each particular least significant increment of resolution.
Incremental encoders: These sense the position relative to the previous position. Their output is a pulse for each increment of resolution, but they make no distinction between increments.
The optical encoder is made up of three basic components: a light source, a rotary (or translatory) disc, and a sensor, as illustrated in Fig. 9.5. The light source can be an incandescent lamp or an infrared light-emitting diode (LED). The disc has alternate opaque and transparent sectors, which are etched by means of a photographic process on a plastic disc, or slots are cut on a metal disc. The sensor is a photodiode or a phototransistor, which senses the light (or no light) passing through the sectors or slots of the disc. Finally, a conditioning electronics detects the light and dark signals and converts them into a usable form of pulses or digital code.
The two types of optical encoders, incremental and absolute, are described in detail in the following paragraphs.
[Figure: encoder disc mounted on the motor shaft between an LED and a photo-detector feeding the conditioning electronics]
Fig. 9.5 Components of an optical shaft encoder


Incremental Optical Encoder The incremental optical encoder disc is connected to the motor shaft and rotates with the motor. The rotation of the disc permits the light to reach the sensor whenever a transparent slot comes in front of the LED, thereby generating a series of pulses as the output of the sensor. These pulses are fed to a counter, which counts the number of pulses, the count being a measure of the angle (or distance) through which the shaft has moved. By sampling the counter at regular intervals by means of clock pulses, the speed of rotation of the shaft can be obtained. The resolution of an incremental encoder is given by:
Basic resolution = 360°/n, where n = number of sectors on the disc, each sector being half transparent and half opaque. A typical incremental encoder having a disc with 2500 sectors (or lines) gives a basic resolution of 0.144°. Using another disc as a mask, using two or four optoelectronic channels, or using multiplier circuits increases the accuracy and resolution of the incremental optical encoder. For example, if the pulses are increased by a factor of 4 by use of any of these approaches, the above incremental encoder will give a resolution of 0.036°.
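The relations just quoted are easy to mechanize. The short sketch below (Python; not part of the original text) computes the basic resolution, the improved resolution with a pulse-multiplication factor, and the shaft angle and speed recovered from pulse counts sampled over a fixed clock interval.

def encoder_resolution(lines, multiplier=1):
    """Angular resolution in degrees of an incremental encoder with `lines` sectors."""
    return 360.0 / (lines * multiplier)

def shaft_angle_deg(pulse_count, lines, multiplier=1):
    """Shaft rotation (degrees) corresponding to an accumulated pulse count."""
    return pulse_count * encoder_resolution(lines, multiplier)

def shaft_speed_rpm(pulses_in_interval, interval_s, lines, multiplier=1):
    """Average shaft speed (rev/min) from pulses counted over one sampling interval."""
    revolutions = pulses_in_interval / float(lines * multiplier)
    return revolutions / interval_s * 60.0

# 2500-line disc: 0.144 deg basic resolution, 0.036 deg with 4x pulse multiplication.
print(encoder_resolution(2500), encoder_resolution(2500, multiplier=4))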

Absolute Encoder An absolute optical encoder consists of a multiple-track light source, a multi-track receiver, and a multi-track rotary disc. The basic construction of an absolute encoder is shown in Fig. 9.6(a) for the measurement of angular displacement. The absolute encoder disc has concentric circles of slots, also known as tracks or channels, and radial sectors to generate the pulses. As many LED-photodiode pairs as the number of tracks are needed. These may be suitably spread around the tracks to avoid signal interference.
The transparent and opaque slots in the tracks are arranged in such a way that the sequential output from the encoder is a number in binary code. Transparency is regarded as '1' and opaqueness as '0'. The tracks and slots of a 3-bit absolute encoder disc are illustrated in Fig. 9.6(b).
The number of bits in the binary code will be equal to the number of tracks on the disc. Thus, for an absolute encoder with a disc of 10 tracks, there will be 10 bits, and the number of positions that can be detected is 2^10 or 1024. This encoder will have a resolution of 360°/1024, approximately 0.35°. An absolute encoder is a digital
transducer in the true sense, as its output does not require any conditioning but can be directly read to get the absolute position information.

[Figure: (a) arrangement of 10 light sources and detectors reading a 10-bit absolute position output from the disc on the encoder shaft; (b) a 3-bit, 3-track absolute encoder disc with sector codes 000 to 111]
Fig. 9.6 Absolute optical encoder and its disc
The chief disadvantage of optical encoders is that the output is volatile when power is off. There are ways to overcome this.
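A reading from an n-track absolute encoder is just an n-bit number, so converting it to an angle is a one-line computation. The sketch below (Python; not part of the original text) assumes the track pattern is plain binary, as described above; many practical discs use Gray code instead, which would need an extra conversion step.

def absolute_angle_deg(code, n_tracks):
    """Angle (degrees) represented by an n-track absolute encoder reading in plain binary.

    code     : integer value read from the n photodetectors (0 .. 2**n_tracks - 1)
    n_tracks : number of tracks (bits) on the disc
    """
    positions = 2 ** n_tracks
    return (code % positions) * 360.0 / positions

# A 10-track disc resolves 1024 positions, i.e. about 0.35 degrees per step.
print(absolute_angle_deg(512, 10))   # -> 180.0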

9.3.6 Choosing the Right Sensor

To decide what type of sensor or which sensor would be best for a particular situation, the following guidelines may be used:
- Carefully analyze the nature of the information needed.
- Find what information will be provided by the sensor.
- Determine how the sensor signals will be used.
- For the variable to be sensed, determine its nominal value, range of values, accuracy required, required reliability, speed of measurement, etc.
- Estimate the environmental variations under which the sensing is to be done.
- Examine the nature of the task dependent on the sensed signals; for example, critical tasks require reliable sensing.
Based on the above guidelines, all possible sensors that can do the specific task are identified, taking into account such other factors as sensitivity, linearity in the desired range, maintainability, life, power consumption, ruggedness, availability, and cost. Remember that there is always more than one way to sense the desired quantity and more than one type of sensor can do the job; hence, out of the several sensors available to get the desired information, the one that is "optimum" should be deployed.
9.4 ROBOTIC VISION
The most powerful sensor, which can equip a robot with a large variety of sensory information, is robotic vision. Robotic, or synonymously, "computer" or "machine" vision systems are among the most complex sensory systems in use.
Robotic vision may be defined as the process of acquiring and extracting information from images of the 3-D world.
Robotic vision is primarily targeted at manipulation and interpretation of the image and use of this information in robot operation control. Robotic vision requires two aspects to be addressed: one, provision for visual input, and two, the processing required to productively utilize the visual information in a computer-based system. The architecture of vision systems, the components for visual input, image representation, and image storage are discussed first. The processing of the acquired image to extract the information from it is discussed later.

9.5 INDUSTRIAL APPLICATIONS OF VISION-CONTROLLED ROBOTIC SYSTEMS
Vision systems can provide information about the position, orientation, identity, and condition of each part in the surroundings. This valuable knowledge can be used to automate the manipulation of objects, plan robot motions to avoid collision with obstacles, or decide how to grasp an object. Although the volume of data produced by a vision system is large, the amount of information present in an image cannot be matched by any other sensory system. The effective use of a robotic vision system makes assembly, quality control, parts handling, and classification tasks more robust.
Using a single camera, it is possible to track multiple objects in visually cluttered environments. A vision-controlled industrial robot can be deployed for a number of different applications.
9.5.1 Presence
The simplest application is to find the presence or absence of a part at a location, say on a conveyor belt or in a bin. Many other specific simple sensors, such as proximity or touch sensors, can be used for this, but the visual detection of presence, combined with the other applications discussed next, gives much more accurate and versatile information.

9.5.2 Object Location
The parts or obstacles in the visual field can be located accurately and their position and orientation can be determined with precision. Accurate coordinate assessment is used for tracking the motion of the manipulator, end-effector, objects, or obstacles. Sometimes, special identification markings are made on the parts for this purpose.
or

9.5.3 Pick and Place

The manipulator can be guided to pick parts from a specific location after their presence has been detected, or from any imaged location in the workcell, and place them at the desired location. The gripper can be oriented according to the part's orientation to hold the part properly. To pick a part from a moving conveyor, or to put one on it, is inherently easier than to lift it from a storage bin or place it there.

9.5.4 Object ldentification


The objects captured in an image can be identified and distinguished from each
other. This may be a part of sorting process, involving determination of location
and pick-and-place tasks to effect physical movement of parts.

9.5.5 Visual Inspection

This is a very powerful and potential application area for automated quality
control using robots, while they are performing other tasks. Visual inspection is
based on extracting specific quantitative measurements of desired parameters
from an image.
The robot involved in visual inspection can do sorting of 'good' and 'bad'
parts or control the manufacturing process based on measured values of
parameters.

9.5.6 Visual Guidance


The image of the scene can be used for accurate specification of relative positions
of the manipulator and the part in the scene as well as their relative movements.
An example of application of visual guidance is the assembly operation, which
requires accurate position and orientation control of two parts to be fitted
together. To assemble the parts, one part is held stationary, while the other is
maneuvered by the robot, whose movements are controlled by the visual
feedback. Another example can be of guiding the motion of the manipulator
through stationary or mobile obstacles in the environment of the robot.
Potential industrial applications for vision-controlled robots are numerous and
with the help of vision systems robots can be made to perform difficult and
complex tasks. From the above applications, it should be clear that the task that
can be performed depends on what information is obtained from the image.
The processing hardware and the concepts involved in robotic vision are
considerably complex as compared to other sensory systems. The subject of
robotic vision is subjective in nature and generalized analytic solutions are yet to
be found. Even after years of research, robotic vision remains a challenge. It can
be said that robotic vision is still in the early stages of development but holds the future as the most powerful robot sensory technology.
9.6 PROCESS OF IMAGING

The basic process of imaging, that is, getting an image and converting it into an algebraic array for computer processing, is depicted by the schematic in Fig. 9.7. The light source illuminates the object and the camera captures the reflected light. The image formed in the camera is converted into an analog signal (voltage) with the help of suitable transducers. Finally, the analog voltages are digitized into an algebraic array. This array is the "image" to be processed and interpreted by the computer according to predefined algorithms.
[Figure: light source illuminating an object, lens and sensor producing analog image data, and an analog-to-digital converter producing the array of pixel intensity values]
Fig. 9.7 Capturing an image and its digitization for further computer processing: a schematic
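To illustrate the digitization step shown in Fig. 9.7 (not part of the original text), the sketch below spatially samples a continuous image-intensity function on a small grid and quantizes each sample to an 8-bit gray level; the intensity function used in the example is arbitrary.

import numpy as np

def digitize_image(intensity, rows, cols, levels=256):
    """Sample a continuous intensity function f(x, y) in [0, 1] on a rows x cols grid
    and quantize each sample to an integer gray level (0 .. levels-1)."""
    ys, xs = np.linspace(0.0, 1.0, rows), np.linspace(0.0, 1.0, cols)
    samples = np.array([[intensity(x, y) for x in xs] for y in ys])
    return np.clip(np.round(samples * (levels - 1)), 0, levels - 1).astype(np.uint8)

# Arbitrary example scene: brightness increasing smoothly from left to right.
image = digitize_image(lambda x, y: x, rows=4, cols=4)
print(image)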

9.7 ARCHITECTURE OF ROBOTIC VISION SYSTEMS


The robot vision system is divided into subsystems based on the way vision systems are generally implemented. A robotic vision system consists of one or more cameras acting as sensors. The vision interface unit connects the cameras to the video display and to the hardware that converts the image into a suitable digital form. This hardware includes a video-buffering unit called a frame grabber and an image preprocessor.
A computer is utilized to process the image, using the image-processing software, for extraction of information about the scene, interpretation of the image, recognition of the objects, and so on for the desired application.
A schematic diagram of a typical robotic vision system architecture is shown in Fig. 9.8. On the basis of the processing output, the computer generates command signals, which are sent to the manipulator through the robot controller for manipulator operation control. Also, video signals are sent to the video display for the operator.
[Figure: cameras 1, 2, and 3 viewing the worktable, connected through the vision interface (video mixer, frame grabber, image preprocessor) to the display and the computer running the image-processing software, which in turn drives the robot controller]
Fig. 9.8 A robotic vision system with multiple cameras for manipulator control

9.7.1 Stationary and Moving Camera


An elementary robotic vision system consists of only one stationary camera
mounted over-seeing the workspace. (say, camera 2 in Fig. 9.8). This produces a
single perspective 3-D view of the scene of activities. Multiple stationary cameras
give muliple perspectives of the scene. A camera can also be mounted on the
manipulator, for example, camera 3 in Fig. 9.8. Such a mobile camera generates
an unlimited number of
perspectives. This camera can be moved by the
manipulator around the object to get specific views of the object and much better
visual information. This is called active
sensing as the
positions the sensor on the basis of previous experience or manipulator
itselt
closest to the task
being performed. The disadvantages of mobile camera use are, one, the additional
mass of the camera is undesirable on more than one account and the cannera
two,
i subjected to continuous motion and
rough handling. The disadvantages of using
a
stationary camera are (1) stationary camera requires a complete preknowledge
of the 3-1D scene, and (iu) some parts
of the scene may be hidden from the camera
even with use of nuliple stationary cameras.
Stationary camera and mob1le camera, both, have computational and controt
problems. The fundamentals ofl robotic vision are introduced here by restricting to
the case of a
single overhead stationary camera for getting a 2-D image wiln
constrained lighting. The setup tor this idealized system is shown in Fig. 9.9.
[Figure: single overhead stationary camera and light source above the objects, connected to the monitor, frame grabber, and computer, which drives the manipulator through its controller]
Fig. 9.9 Vision system setup with single overhead stationary camera

9.8 IMAGE ACQUISITION
The first link in the vision chain is the camera. It plays the role of the robotic "eye" or the sensor. This is the imaging component, or the noncontact or remote sensor. The visual information is converted into electrical signals in the camera and, when sampled spatially and quantized, these signals give a digital image in real time, by a process called digitizing.
The robotic vision cameras are essentially optoelectronic transducers, which convert an optical input signal to an electrical output signal. They fall in the domain of TV cameras. There is a variety of camera technologies available for imaging. Some of these are: black-and-white vidicon tube; solid-state cameras based on charge-coupled devices (CCD), charge injection devices (CID), and silicon bipolar sensor cameras. Two of these technologies are described here.

9.8.1 Vidicon Tube


The basic structure of the vidicon camera tube is shown in Fig. 9.10. The optical image is formed on the glass faceplate coated with a thin photosensitive layer composed of a large number of tiny photoresistive elements. The resistance of the element decreases with increasing illumination. Once the image forms on the faceplate, a charge is accumulated, which is a function of the intensity of the impinging light over a specified time, from which an electrical video signal is derived.
The charge built up is "read" by scanning the photosensitive layer with a focused electron beam produced by the electron gun at the rear of the tube. The scanning is controlled by a deflection coil mounted along the length of the tube. The electron beam is made to scan the entire surface, typically 30 times per second, line by line, consisting of over 500 scan lines for the whole image, as shown in Fig. 9.11. Each complete scan is called a frame.
[Figure: vidicon tube with lens, glass faceplate and photosensitive layer, mesh, deflection and focusing coils, electron beam, and electron gun]
Fig. 9.10 Schematic of a vidicon tube

[Figure: raster path of the electron beam from start to end over one frame]
Fig. 9.11 Electron beam scanning pattern for one frame of image

Scanning at higher line rates per frame, scanning even and odd lines
simultaneously at 60 times per second (shown with dashed and continuous lines
in Fig. 9.11) are some of the techniques utilized to reduce flicker, etc.

9.8.2 Charge-Coupled Device (CCD)


The charge-coupled device falls in the category of solid-state semiconductor devices. A monolithic array of closely spaced metal oxide semiconductor elements forms the photosensitive layer. A simplified construction of a CCD is illustrated in Fig. 9.12.
The light is absorbed on the photoconductive substrate and charge accumulates
around the isolated "wells" under the control of electrodes, as shown in Fig. 9.12. Each isolated well represents a pixel. Charges are accumulated for the time it
takes to complete a single image scan. The
charge built up is proportional to the
intensity of image. Once the charge is accumulated, it is transferred by the
electrodes, line by line, to the registers.
[Figure: CCD sensor structure with SiO2 / nitride oxide layers, p-islands, and n-type silicon substrate]
Fig. 9.12 Charge-coupled device (CCD) sensor structure

9.9 DESCRIPTION OF OTHER COMPONENTS OF VISION SYSTEM
A complete vision system consists of hardware and software for performing the functions of sensing and processing the image (the scene) and utilizing the results obtained to command the robot. A block diagram of the components of a vision system, arranged schematically, is shown in Fig. 9.13, and their functions are explained below.

[Figure: lighting and camera feeding an ADC and frame grabber, then the computer with its algorithms, memory, monitor and keyboard, and an interface to the robot controller]
Fig. 9.13 Schematic showing components of a vision system

9.9.1 Illumination
The image presented to the camera is light reflected from the environment. This varies in wavelength and intensity throughout the image and is directly dependent on the illumination of the scene or lighting, that is, the type, luminous power, and placement of the light sources. Poor lighting produces low-contrast images, shadows, and noise. In a 2-D vision system, the desired contrast can often be accomplished by using a controlled lighting system. A 3-D vision system may require a more sophisticated lighting system. In a single-camera system, triangulation is used to detect shape and depth. Some systems use two cameras to obtain 2-D images and achieve a stereoscopic view of the scene.

9.9.2 Analog-to-Digital Conversion and Frame Grabber


Analog-to-digital (A/D) conversion is required to convert the analog picture signal from the camera into a digital form that is suitable for computer processing. The
