
ECE5590: Model Predictive Control

Mathematical Fundamentals

Vector Spaces

• The simplest example of a vector space is the Euclidean $n$-dimensional space, $\mathbb{R}^n$, e.g.,

$u = (u_1, u_2, \ldots, u_n)$ with $u_i \in \mathbb{R}$ for $i = 1, 2, \ldots, n$

• $\mathbb{R}^n$ exhibits some special properties that lead to a general definition of a vector space:

  – $\mathbb{R}^n$ is an additive abelian group
  – Any $n$-tuple $ku$ is also in $\mathbb{R}^n$
  – Given any $k \in \mathbb{R}$ and $u \in \mathbb{R}^n$ we can obtain, by scalar multiplication, the unique vector $ku$

In abstract algebra, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written (the axiom of commutativity). Abelian groups generalize the arithmetic of addition of integers. They are named after Niels Henrik Abel. [Jacobson, Nathan, Basic Algebra I, 2nd ed., 2009]

• Some further results:


  – $k(u + v) = ku + kv$ (i.e., scalar multiplication is distributive over vector addition)
  – $(k + l)u = ku + lu$
  – $(kl)u = k(lu)$
  – $1 \cdot u = u$
• To define a vector space in general we need the following four conditions:
  1. A set $V$ which forms an additive abelian group
  2. A set $F$ which forms a field
  3. A scalar multiplication of $k \in F$ with $u \in V$ that generates a unique $ku \in V$
  4. The four scalar multiplication axioms listed above

Then we say that $V$ is a vector space over $F$.

Examples:

• The set of all ordered $n$-tuples $u = (u_1, u_2, \ldots, u_n)$ with $u_i \in \mathbb{C}$ forms a vector space $\mathbb{C}^n$ over $\mathbb{C}$
• The set of all $n \times m$ real matrices, $\mathbb{R}^{n \times m}$, is a vector space over $\mathbb{R}$

Linear Spans, Spanning Sets and Bases

• Let $S$ be the set $S = \{u_1, u_2, \ldots, u_m\} \subset V$, where $V$ is a vector space over some field $F$
• Define $L(S) = \{v \in V : v = a_1 u_1 + a_2 u_2 + \cdots + a_m u_m \text{ with } a_i \in F\}$
• Thus $L(S)$ is the set of all linear combinations of the $u_i$
• Then,

  – $L(S)$ is a subspace of $V$ (note that it is closed under vector addition and scalar multiplication)
  – $L(S)$ is the smallest subspace containing $S$

• We say that $L(S)$ is the linear span of $S$, or that $L(S)$ is spanned by $S$
• Conversely, $S$ is called a spanning set for $L(S)$

IMPORTANT: A spanning set $v_1, v_2, \ldots, v_m$ of $V$ is a basis if the $v_i$ are linearly independent.

NOTE: If $\dim(V) = m$, then any set $v_1, v_2, \ldots, v_n \in V$ with $n > m$ must be linearly dependent.

Linear Transformations, Matrix Representation & Matrix Algebra

Linear Mappings

• Let $X, Y$ be vector spaces over $F$ and let $T$ be a mapping of $X$ into $Y$,

$T : X \to Y$

• Then $T \in L(X, Y)$, the set of linear mappings (or transformations) of $X$ into $Y$, if

$T(u + v) = T(u) + T(v)$

and

$T(ku) = k \cdot T(u)$, for all $u, v \in X$ and $k \in F$

$\Rightarrow$ This defines a linear transformation.

Matrix Representation of Linear Mappings

• Linear mappings can be made concrete and represented by matrix multiplications...

  – once $x \in X$ and $y \in Y$ are both expressed in terms of their coordinates relative to appropriate basis sets

• We can then show that $A$ is a matrix representation of the linear transformation $T$:

  – $y = T(x)$ is equivalent to $y = Ax$

• Note that when dealing with actual vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$, an all-time favorite basis set is the standard basis set, which consists of vectors $e_i$ that are zero everywhere except for a "1" appearing in the $i$th position
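A quick numerical illustration of this last point (a MATLAB sketch only; the matrix below is an arbitrary illustrative choice, not one from the notes): multiplying $A$ by the $j$th standard basis vector simply picks out the $j$th column of $A$.

    % Sketch: the j-th column of A is the image of the j-th standard basis vector e_j.
    A = [1 2 0;
         3 0 1];          % illustrative matrix of a map T : R^3 -> R^2 in the standard bases
    e2 = [0; 1; 0];       % second standard basis vector of R^3
    A * e2                % returns A(:,2) = [2; 0]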
Change of Basis
• Let $e'_1, e'_2, \ldots, e'_n$ be a new basis for $X$
• Then each $e'_i$ is in $X$ and can be written as a linear combination of the $e_i$:

$e'_i = \sum_{j=1}^{n} p_{ij} e_j$

or, in matrix form,

$E' = EP$

where $P$ is a square, invertible matrix with elements $p_{ji}$
• Thus we can write,

$E = E' P^{-1}$

meaning that $P^{-1}$ transforms the old coordinates into the new coordinates.
• We define a similarity transformation as

$T' = P^{-1} T P$
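As a numerical sanity check of these change-of-basis formulas (a MATLAB sketch with arbitrary illustrative matrices $A$ and $P$, not taken from the notes):

    % Sketch: a similarity transformation P^-1*A*P expresses the same map in new coordinates.
    A = [2 1; 0 3];          % matrix of T in the old basis
    P = [1 1; 0 1];          % columns of P express the new basis vectors in the old basis
    T_new = P \ (A * P);     % P^-1 * A * P : matrix of T in the new basis

    x_old = [1; 2];          % coordinates of a vector in the old basis
    x_new = P \ x_old;       % P^-1 transforms old coordinates into new coordinates

    % Applying T in either coordinate system gives the same vector:
    norm(A * x_old - P * (T_new * x_new))   % ~ 0 up to round-off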
Image, Kernel, Rank & Nullity

Isomorphism. A linear transformation is an isomorphism if the mapping is "one-to-one".

Image. The image of a linear transformation $T$ is,

$\mathrm{Image}(T) = \{y \in Y : T(x) = y,\; x \in X\}$

Kernel. The kernel of a linear transformation $T$ is,

$\ker(T) = \{x \in X : T(x) = 0\}$

• $\mathrm{Image}(T)$ is a subspace of $Y$ (the co-domain) and $\ker(T)$ is a subspace of $X$ (the domain)

Example 1.1

$A : \mathbb{R}^3 \to \mathbb{R}^2$ with $A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$
Rank of a Matrix

• The rank of a matrix is defined as the number of linearly independent columns (or equivalently the number of linearly independent rows) contained in the matrix
• Our definition of the rank of $T$ is consistent with this because, in the case of $T \in L(X, Y)$ defined by $x \mapsto Ax$, the dimension of $\mathrm{Image}(T)$ is nothing other than the dimension of the linear span of the columns of $A$, which is obviously equal to the number of linearly independent column vectors
• Note that an alternative but equivalent, and perhaps more convenient, way to define the rank of a square matrix is by way of minors:

  – If an $n \times n$ matrix $A$ has a non-zero $r \times r$ minor, while every minor of order higher than $r$ is zero, then $A$ has rank $r$.
Example 1.2

$A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & 1 & 0 \\ 3 & 2 & 1 \end{bmatrix}$

• Since the dimension of $A$ is $4 \times 3$, its rank can be no larger than 3
• But since column 2 plus column 3 equals column 1, the rank reduces to 2 (verified numerically in the sketch below)
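A minimal MATLAB check of Example 1.2, using the built-in rank function (which internally relies on the SVD discussed later in this section):

    % Example 1.2: column 2 + column 3 equals column 1, so the rank drops from 3 to 2.
    A = [1  1 0;
         0 -1 1;
         1  1 0;
         3  2 1];
    rank(A)            % returns 2
    A(:,2) + A(:,3)    % equals A(:,1)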
Corollary: A square $n \times n$ matrix is full rank (non-singular) if its determinant is non-zero.
• Some helpful properties of determinants:

Let $A, B \in \mathbb{R}^{n \times n}$

1. $\det(A) = 0$ if and only if the columns (rows) of $A$ are linearly dependent
2. If a column (row) of $A$ is zero, then $\det(A) = 0$
3. $\det(A) = \det(A^T)$
4. $\det(AB) = \det(A) \cdot \det(B)$
5. Swapping two rows (columns) changes the sign of $\det(A)$.
   (a) Scaling a row (column) by $k$ scales the determinant by $k$.
   (b) Adding a multiple of one row (column) onto another does not affect the determinant.
6. If $A = \mathrm{diag}\{a_{ii}\}$, or $A$ is lower or upper triangular with diagonal elements $a_{ii}$, then $\det(A) = a_{11} a_{22} \cdots a_{nn}$

Define the adjoint (adjugate) of a square matrix $A$ as follows:



• Let $\alpha_{i,j} = (-1)^{i+j} \det(M_{i,j})$, where $M_{i,j}$ is the matrix $A$ with the $i$th row and $j$th column removed.
• Then,

$\mathrm{adj}(A) = \left[\alpha_{i,j}\right]^T$

• Then the matrix inverse is defined as:

$A^{-1} = \frac{1}{\det(A)} \, \mathrm{adj}(A)$
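A brute-force MATLAB sketch of this cofactor construction on an arbitrary illustrative 3 x 3 matrix (in practice one would simply use inv or the backslash operator; the loop only mirrors the definition above):

    % Sketch: inverse via the adjoint, A^-1 = adj(A)/det(A)  (illustrative 3x3 matrix).
    A = [2 0 1; 1 3 0; 0 1 4];
    n = size(A,1);
    C = zeros(n);                        % matrix of cofactors alpha_ij
    for i = 1:n
        for j = 1:n
            M = A; M(i,:) = []; M(:,j) = [];     % minor: delete row i and column j
            C(i,j) = (-1)^(i+j) * det(M);
        end
    end
    adjA = C.';                          % adjoint = transpose of the cofactor matrix
    Ainv = adjA / det(A);
    norm(Ainv - inv(A))                  % ~ 0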
Eigenvalue & Singular Value Decompositions

• Often in engineering we have to deal with scalar functions of a matrix
• For example,

$G(s) = C(sI - A)^{-1}B$

where $A$ is a square real matrix, represents the transfer function matrix of a dynamic system with many inputs and many outputs.
• Equally,

$y = e^{At} x_0$

is the form of the solution of $n$ first-order ordinary differential equations.
• We know how to handle terms like $\frac{1}{s - a}$, whose inverse Laplace transform is just $e^{at}$.
• But how do we handle $(sI - A)^{-1}$? And how do we evaluate $e^{At}$?

  – Remember that an $n \times n$ matrix $A$ can be thought of as a matrix representation of a linear transformation on $\mathbb{R}^n$, relative to some basis (e.g., the standard basis).


  – Yet we know that a change of basis, described in matrix form by $E' = EP$, will transform $A$ to $P^{-1}AP$.

• Suppose then that it were possible to choose our new basis such that $P^{-1}AP$ were a diagonal matrix $D$.
• This transforms general matrix problems into "diagonal" problems – they now consist of a collection of scalar problems, $1/(s - d_i)$ and $e^{d_i t}$, which we can easily solve.
• Such a basis exists for almost all $A$'s and is defined by the set of eigenvectors of $A$.
• The corresponding diagonal elements $d_i$ of $D$ turn out to be the eigenvalues of $A$.
Eigenvalue Decomposition

• A linear transformation $T$ maps $x \to y = Ax$, where in general $x$ and $y$ have different directions
• There exist directions $w$, however, which are invariant under the mapping:

$Aw = \lambda w$ where $\lambda$ is a scalar.

• Such $A$-invariant directions are called eigenvectors of $A$, and the corresponding $\lambda$'s are called eigenvalues of $A$; "eigen" in German means "self".
• The above equation can be re-arranged as:

$(\lambda I - A)w = 0$

and thus implies that $w$ lies in the kernel of $(\lambda I - A)$.


• A non-trivial solution for $w$ can be obtained if and only if $\ker(\lambda I - A) \neq \{0\}$, which is only possible if $(\lambda I - A)$ is singular, i.e.,

$\det(\lambda I - A) = 0$

• This equation is called the characteristic equation of $A$, and defines all possible values for the eigenvalues $\lambda$
• $\lambda$ may assume any of the $n$ roots $\lambda_i$ of the characteristic equation; these roots are thus called the characteristic roots, or simply the eigenvalues, of $A$.
• Interesting and important properties of the eigenvalues of a matrix $A$:

  – $\mathrm{Trace}(A) \equiv$ sum of the diagonal elements of $A$ $= \sum_{i=1}^{n} \lambda_i =$ sum of the eigenvalues
  – $\det(A) \equiv$ determinant of $A$ $= \prod_{i=1}^{n} \lambda_i =$ product of the eigenvalues

• Once the values of $\lambda$ have been determined, the corresponding eigenvectors may be calculated by solving the set of homogeneous equations:

$(\lambda I - A) \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} = 0$, where $w_i$ denotes the $i$th element of $w$

NOTE: If $Aw = \lambda w$, then $A(kw) = kAw = k\lambda w = \lambda(kw)$, so that if $w$ is an eigenvector of $A$, then so also will be any scalar multiple of $w$ – the important property of $w$ is its direction, not its scaling or length.
Example:

$A = \begin{bmatrix} 1 & -3 & 5 \\ 0 & 1 & 2 \\ 0 & -1 & 4 \end{bmatrix}$

so that,

$\det(\lambda I - A) = \lambda^3 - 6\lambda^2 + 11\lambda - 6 = 0$

IMPORTANT EIGENVECTOR/EIGENVALUE PROPERTIES:

1. The eigenvalues of a diagonal or triangular matrix are equal to the diagonal elements of the matrix.
2. The eigenvectors of a diagonal matrix (with distinct diagonal elements) are the standard basis vectors.
3. The eigenvalues of the identity matrix are all "1" and the eigenvectors are arbitrary.
4. The eigenvalues of a scalar matrix $kI$ are all $k$ and the eigenvectors are arbitrary.
5. Multiplying a matrix by a scalar $k$ has the effect of multiplying all its eigenvalues by $k$, but leaves the eigenvectors unaltered.
6. (Shift Theorem) Adding a scalar matrix $kI$ onto a matrix has the effect of adding $k$ to all its eigenvalues, but leaves the eigenvectors unaltered.
7. The eigenvalues of a matrix and its transpose are the same.
8. The eigenvalues/eigenvectors of a real matrix are real or appear in complex conjugate pairs.

Characteristic Decomposition

• By convention, we collect all the eigenvectors of an $n \times n$ square matrix $A$ as the column vectors of the eigenvector matrix, $W$:

$W = [w_1, w_2, \cdots, w_n]$

• Upon inversion we can obtain the dual set of left eigenvectors as the row vectors of the dual eigenvector matrix, $V = W^{-1}$:

$V = W^{-1} = \begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_n^T \end{bmatrix}$

• Combining the eigenvector and dual eigenvector matrices together we get the eigenvalue/eigenvector decomposition (or characteristic decomposition):

$AW = W\Lambda$

where $\Lambda$ is a diagonal matrix containing the eigenvalues, $\lambda_i$, of $A$. Post-multiplying the above by $W^{-1}$ we obtain:

$A = W \Lambda W^{-1} = W \Lambda V$

$\Rightarrow$ This is also known as the spectral decomposition of $A$.
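A MATLAB sketch of the characteristic decomposition, reusing the example matrix from the previous page:

    % Sketch: A = W*Lambda*W^-1 = W*Lambda*V, with V = W^-1 holding the left eigenvectors.
    A = [1 -3 5; 0 1 2; 0 -1 4];
    [W, Lambda] = eig(A);          % right eigenvectors (columns) and eigenvalues (diagonal)
    V = inv(W);                    % rows of V are the dual (left) eigenvectors
    norm(A*W - W*Lambda)           % ~ 0
    norm(A - W*Lambda*V)           % ~ 0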
• The eigen-decomposition can be ill-conditioned with respect to its computation, as shown in the following example.

Example

$A = \begin{bmatrix} 1 & \epsilon \\ 0 & 1.001 \end{bmatrix}$

where, by inspection, $\lambda_1 = 1$ and $\lambda_2 = 1.001$.
• It is easy to see that,

$\begin{bmatrix} 1 & \epsilon \\ 0 & 1.001 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}$

• Hence the first eigenvector is $w_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.
• From,

$\begin{bmatrix} 1 & \epsilon \\ 0 & 1.001 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a + \epsilon b \\ 1.001\, b \end{bmatrix} = (1.001) \begin{bmatrix} a \\ b \end{bmatrix}$

we get,

$\epsilon b = 0.001\, a$
$1.001\, b = 1.001\, b$

which gives,

$a = 1000\, \epsilon b$

• Letting $b = 1$, the second eigenvector is $w_2 = \begin{bmatrix} 1000\epsilon \\ 1 \end{bmatrix}$.
• The $w_2$ eigenvector has a component which varies 1000 times faster than $\epsilon$ in the $A$ matrix.
• In general, eigenvectors can be extremely sensitive to small changes in matrix elements when the eigenvalues are clustered closely together.
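This sensitivity is easy to reproduce numerically (a sketch; the value of $\epsilon$ below is an arbitrary small number):

    % Sketch: eigenvector sensitivity when the eigenvalues 1 and 1.001 are clustered.
    epsilon = 1e-6;
    A = [1 epsilon; 0 1.001];
    [W, Lambda] = eig(A)
    % The second eigenvector is proportional to [1000*epsilon; 1], so a change of
    % 1e-6 in the off-diagonal element moves its direction by roughly 1e-3.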
Some Special Matrices

REAL SYMMETRIC MATRIX: $S^T = S$

• Here $S$ has real eigenvalues $\lambda_i$, and hence real eigenvectors $w_i$, where the eigenvectors are orthonormal (i.e., $w_j^T w_i = 0$ for all $i \neq j$). Therefore its eigenvalue/eigenvector decomposition becomes:

$S = R \Lambda R^T,$

where $R^T R = R R^T = I$.

Corollary: Any quadratic form $\sum a_{ij} x_i x_j$ can be written as $x^T S x$ and is positive definite (i.e., is positive for $x \neq 0$) if and only if $\lambda_i(S) > 0$ for all $i$.
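A short MATLAB check of these statements on an arbitrary illustrative symmetric matrix:

    % Sketch: a real symmetric matrix has orthonormal eigenvectors, S = R*Lambda*R',
    % and is positive definite iff all eigenvalues are positive.
    S = [4 1 0; 1 3 1; 0 1 2];
    [R, Lambda] = eig(S);
    norm(R'*R - eye(3))            % ~ 0 : eigenvectors are orthonormal
    norm(S - R*Lambda*R')          % ~ 0 : the decomposition S = R*Lambda*R'
    all(diag(Lambda) > 0)          % true, so x'*S*x > 0 for every x ~= 0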
NORMAL MATRIX: $N^* N = N N^*$

If $(\lambda_i, w_i)$ is an eigen-pair of $N$, then $(\bar{\lambda}_i, w_i)$ is an eigen-pair of $N^*$. The eigenvectors of $N$ are orthogonal.

HERMITIAN MATRIX: $H^* = H$

$H$ has real eigenvalues and orthogonal eigenvectors.

UNITARY MATRIX: $U^* U = U U^* = I$

The eigenvalues of $U$ have unit modulus and the eigenvectors are orthogonal.

Jordan Form

• If the $n \times n$ matrix $A$ cannot be diagonalized, then it can always be brought into Jordan form via a similarity transformation,

$T^{-1} A T = J = \begin{bmatrix} J_1 & & 0 \\ & \ddots & \\ 0 & & J_q \end{bmatrix}$

where

$J_i = \begin{bmatrix} \lambda_i & 1 & & 0 \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_i \end{bmatrix} \in \mathbb{R}^{n_i \times n_i}$

is called a Jordan block of size $n_i$ with eigenvalue $\lambda_i$. Note that $J$ is block-diagonal and upper bi-diagonal.
Singular Value Decomposition

• Despite its extreme usefulness, the eigenvalue/eigenvector decomposition suffers from two main drawbacks:
  1. It can only handle square matrices
  2. Its computation may be sensitive to even small errors in the elements of a matrix
• To overcome both of these we can instead use the Singular Value Decomposition (when appropriate).
• Let $M$ be any matrix of dimension $p \times m$. It can then be factorized as:

$M = U \Sigma V^*$

where $U U^* = I_p$ and $V V^* = I_m$. Here, $\Sigma$ is a rectangular matrix of dimension $p \times m$ which has non-zero entries only on its leading diagonal:

$\Sigma = \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N & 0 & \cdots & 0 \end{bmatrix}$

and $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_N \geq 0$.

• This factorization is called the Singular Value Decomposition (SVD).

• The $\sigma$'s are called the singular values of $M$.
• The number of positive (non-zero) singular values is equal to the rank of the matrix $M$.
• The SVD can be computed very reliably, even for very large matrices, although it is computationally heavy.

  – In MATLAB it can be obtained by the function svd: [U,S,V] = svd(M).

• The largest singular value of $M$, $\sigma_1$ (also denoted $\bar{\sigma}$), is an induced norm of the matrix $M$:

$\sigma_1 = \sup_{x \neq 0} \frac{\|Mx\|}{\|x\|}$

• It is important to note that,

$M M^* = U \Sigma^2 U^*$

and

$M^* M = V \Sigma^2 V^*$

• These are eigen-decompositions, since $U^* U = I$ and $V^* V = I$ with $\Sigma$ diagonal.

SOME PROPERTIES

• $\sigma_i = \sqrt{\lambda_i(M^* M)} = \sqrt{\lambda_i(M M^*)}$ = $i$th singular value of $M$
• $v_i = w_i(M^* M)$ = $i$th input principal direction of $M$
• $u_i = w_i(M M^*)$ = $i$th output principal direction of $M$

Example (Golub & Van Loan):


" #
0:96 1:72
AD D U †V T
2:28 0:96
" #" #" #T
0:6 0:8 3 0 0:8 %0:6
D
0:8 %0:6 0 1 0:6 0:8
! Any real matrix, viewed geometrically, maps a unit-radius
(hyper)sphere into a (hyper)ellipsoid.
! The singular values %i give the lengths of the major semi-axes of the
ellipsoid.
! The output principal directions ui give the mutually orthogonal
directions of these major axes.
! The input principal directions vi are mapped into the ui vectors with
gain %i such that, Avi D %i ui .

[Figure: the input principal directions $v_1$, $v_2$ on the unit circle (domain of the matrix, input) are mapped to the semi-axes $3 u_1$ and $u_2$ of an ellipse (image of the matrix, output).]

• Numerically, the SVD can be computed much more accurately than the eigenvectors, since only orthogonal transformations are used in the computation.
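The Golub & Van Loan example can be reproduced directly with the svd function mentioned above:

    % SVD of the Golub & Van Loan example; A*v_i = sigma_i*u_i, and rank(A) equals
    % the number of non-zero singular values.
    A = [0.96 1.72;
         2.28 0.96];
    [U, S, V] = svd(A)               % singular values 3 and 1
    norm(A*V(:,1) - S(1,1)*U(:,1))   % ~ 0 : first input direction maps onto first output direction
    rank(A)                          % 2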
Vector & Matrix Norms

Vector Norms

A vector norm on $\mathbb{C}^n$ is a function $f : \mathbb{C}^n \to \mathbb{R}$ with the following properties:

• $f(x) \geq 0 \quad \forall x \in \mathbb{C}^n$
• $f(x) = 0 \iff x = 0$
• $f(x + y) \leq f(x) + f(y) \quad \forall x, y \in \mathbb{C}^n$
• $f(\alpha x) = |\alpha| f(x) \quad \forall \alpha \in \mathbb{C},\; x \in \mathbb{C}^n$

The p-norms are defined by

$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$

Three norms of particular interest to control theory are:

• 1-NORM

$\|x\|_1 = \sum_{i=1}^{n} |x_i|$

• 2-NORM

$\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2} = \sqrt{x^* x}$

• $\infty$-NORM

$\|x\|_\infty = \max_i |x_i|$

The following relationships hold for the 1, 2, and $\infty$ norms of vectors:

• $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\, \|x\|_2$

• $\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\, \|x\|_\infty$
• $\|x\|_\infty \leq \|x\|_1 \leq n\, \|x\|_\infty$
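A small MATLAB illustration of the three vector norms and the bounding relations above (the vector is an arbitrary choice):

    % Vector norms of an illustrative vector and the inequalities relating them.
    x = [3; -4; 12];
    n1   = norm(x, 1);       % 19
    n2   = norm(x, 2);       % 13
    ninf = norm(x, inf);     % 12
    n = length(x);
    [n2 <= n1, n1 <= sqrt(n)*n2, ninf <= n2, n2 <= sqrt(n)*ninf, ninf <= n1, n1 <= n*ninf]   % all true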

Matrix Norms

Matrix norms satisfy the same properties outlined above for vector norms. The matrix norms normally used in control theory are also the 1-norm, 2-norm and $\infty$-norm.

• Another useful norm is the Frobenius norm, defined as follows:

$\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2} \quad \forall A \in \mathbb{C}^{m \times n}$

Here $a_{ij}$ denotes the element of $A$ in the $i$th row, $j$th column.

• The p-norms for matrices are defined in terms of induced norms, that is, they are induced by the p-norms on vectors.
• One way to think of the norm $\|A\|_p$ is as the maximum gain of the matrix $A$, as measured by the p-norms of vectors acting as inputs to the matrix $A$:

$\|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p} \quad \forall A \in \mathbb{C}^{m \times n}$

• The matrix norms are computed by the following expressions:

$\|A\|_1 = \max_j \sum_{i=1}^{m} |a_{ij}|$


$\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$

$\|A\|_2 = \bar{\sigma}(A)$

SOME USEFUL RELATIONSHIPS:

• $\|AB\|_p \leq \|A\|_p \|B\|_p$
• $\|A\|_2 \leq \|A\|_F \leq \sqrt{n}\, \|A\|_2$
• $\max_{i,j} |a_{ij}| \leq \|A\|_2 \leq \sqrt{mn}\, \max_{i,j} |a_{ij}|$
• $\|A\|_2 \leq \sqrt{\|A\|_1 \|A\|_\infty}$
• $\frac{1}{\sqrt{n}} \|A\|_\infty \leq \|A\|_2 \leq \sqrt{m}\, \|A\|_\infty$
• $\frac{1}{\sqrt{m}} \|A\|_1 \leq \|A\|_2 \leq \sqrt{n}\, \|A\|_1$
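In MATLAB the four matrix norms used above are available directly through norm (a sketch on an arbitrary illustrative matrix):

    % Induced 1-, 2- and inf-norms plus the Frobenius norm of an illustrative matrix.
    A = [1 -2; 3 4];
    norm(A, 1)                  % max absolute column sum = 6
    norm(A, inf)                % max absolute row sum    = 7
    norm(A, 2)                  % largest singular value, sigma_bar(A)
    norm(A, 'fro')              % sqrt(sum of squared entries) = sqrt(30)
    max(svd(A)) - norm(A, 2)    % ~ 0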
Toeplitz & Hankel Matrix Forms

Model-based Predictive Control formulations make extensive use of Toeplitz and Hankel matrices to simplify much of the algebra.

Consider the polynomial $n(z)$:

$n(z) = n_0 + n_1 z^{-1} + \cdots + n_m z^{-m}$

Then we define the Toeplitz matrices $\Gamma_n$, $C_n$ for $n(z)$ from the following matrix form:

$\Gamma_n = \begin{bmatrix} C_n \\ M_n \end{bmatrix}$

where,

$C_n = \begin{bmatrix} n_0 & 0 & 0 & \cdots & 0 \\ n_1 & n_0 & 0 & \cdots & 0 \\ n_2 & n_1 & n_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ n_m & n_{m-1} & n_{m-2} & \cdots & \ddots \end{bmatrix}$

and

$M_n = \begin{bmatrix} 0 & n_m & n_{m-1} & \cdots & \\ \vdots & \ddots & \ddots & \ddots & \\ 0 & 0 & 0 & \cdots & n_0 \end{bmatrix}$

Define the Hankel matrix $H_n$ as

$H_n = \begin{bmatrix} n_1 & n_2 & n_3 & \cdots & n_{m-1} & n_m \\ n_2 & n_3 & n_4 & \cdots & n_m & 0 \\ n_3 & n_4 & n_5 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ n_{m-1} & n_m & 0 & \cdots & 0 & 0 \\ n_m & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \end{bmatrix}$

Note: The dimensions of the matrices $\Gamma_n$, $C_n$, $H_n$ are not defined here, as they are implicit in the context in which they are used.

Toeplitz matrices defined in this manner are used for changing polynomial convolution into a matrix-vector multiplication. As might be guessed, the relationship between matrix/vector multiplication and polynomial operations is very useful for the analysis of predictions.
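A minimal MATLAB sketch of this idea (the coefficients below are arbitrary; the banded lower-triangular block here plays the role of $C_n$, and its product with a coefficient vector reproduces the leading coefficients of the polynomial product given by conv):

    % Sketch: polynomial multiplication as a Toeplitz matrix-vector product.
    % n(z) = n0 + n1*z^-1 + n2*z^-2 and u(z) = u0 + u1*z^-1 + u2*z^-2 (illustrative values).
    n = [1 0.5 0.25];
    u = [2 -1 3];
    conv(n, u)                     % all coefficients of the product n(z)*u(z)

    Cn = toeplitz(n, [n(1) 0 0]);  % banded lower-triangular Toeplitz block built from n(z)
    Cn * u.'                       % the first three coefficients of the product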
Analytic Functions of Matrices

Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and let $p(x)$ be an infinite series in a scalar variable $x$,

$p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k + \cdots$

Such an infinite series may converge or diverge depending on the value of $x$.

EXAMPLES

$1 + x + x^2 + x^3 + \cdots + x^k + \cdots$

• This geometric series converges for all $|x| < 1$.

$1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^k}{k!} + \cdots$

• This series converges for all values of $x$ and is, in fact, $e^x$.

These results have analogies for matrices.

Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_i$. If the infinite series $p(x)$ defined above is convergent for each of the $n$ values $\lambda_i$, then the corresponding matrix infinite series

$p(A) = a_0 I + a_1 A + a_2 A^2 + \cdots + a_k A^k + \cdots = \sum_{k=0}^{\infty} a_k A^k$

converges.
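As a concrete illustration of the first example above (a sketch, under the assumption that every eigenvalue of $A$ has magnitude less than one): the matrix geometric series then converges, and in fact sums to $(I - A)^{-1}$. The matrix below is an arbitrary choice satisfying that condition.

    % Sketch: matrix analogue of the geometric series, sum_k A^k -> (I - A)^-1
    % when all eigenvalues of A lie strictly inside the unit circle.
    A = [0.5 0.2; 0.1 0.3];        % eigenvalues ~ 0.57 and 0.23
    S = zeros(2); Ak = eye(2);
    for k = 0:100                  % truncate the series (illustrative length)
        S = S + Ak;
        Ak = Ak * A;
    end
    norm(S - inv(eye(2) - A))      % ~ 0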

DEFINITION: A single-valued function $f(z)$, with $z$ a complex scalar, is analytic at a point $z_0$ if and only if its derivative exists at every point in some neighborhood of $z_0$. Points at which the function is not analytic are called singular points.


RESULT: If a function $f(z)$ is analytic at every point in some circle $\Gamma$ in the complex plane, then $f(z)$ can be represented as a convergent power series (Taylor series) at every point $z$ inside $\Gamma$.

IMPORTANT RESULT: If $f(z)$ is any function which is analytic within a circle in the complex plane which contains all the eigenvalues $\lambda_i$ of $A$, then a corresponding matrix function $f(A)$ can be defined by a convergent power series.
EXAMPLE:

The function $e^{\alpha x}$ is analytic for all values of $x$. Therefore it has the convergent series representation:

$e^{\alpha x} = 1 + \alpha x + \frac{\alpha^2 x^2}{2!} + \frac{\alpha^3 x^3}{3!} + \cdots + \frac{\alpha^k x^k}{k!} + \cdots$

The corresponding matrix function is

$e^{\alpha A} = I + \alpha A + \frac{\alpha^2 A^2}{2!} + \frac{\alpha^3 A^3}{3!} + \cdots + \frac{\alpha^k A^k}{k!} + \cdots$
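A truncated-series MATLAB sketch of this expansion, compared against the built-in expm (the matrix, the scalar $\alpha$ and the truncation length are arbitrary choices):

    % Sketch: e^(alpha*A) from its power series versus MATLAB's expm.
    A = [0 1; -2 -3];
    alpha = 0.5;
    E = eye(2); term = eye(2);
    for k = 1:30
        term = term * (alpha * A) / k;   % (alpha*A)^k / k!
        E = E + term;
    end
    norm(E - expm(alpha * A))            % ~ 0 up to truncation and round-off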

EXAMPLE:

$e^{At} \cdot e^{Bt} = e^{(A+B)t}$

if and only if $AB = BA$.

OTHER USEFUL REPRESENTATIONS:

$\sin A = A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots$

$\cos A = I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots$

$\sinh A = A + \frac{A^3}{3!} + \frac{A^5}{5!} + \cdots$

$\cosh A = I + \frac{A^2}{2!} + \frac{A^4}{4!} + \cdots$

$\sin^2 A + \cos^2 A = I$

$\sin A = \frac{e^{jA} - e^{-jA}}{2j} \quad \text{and} \quad \cos A = \frac{e^{jA} + e^{-jA}}{2}$
Cayley-Hamilton Theorem

If the characteristic polynomial of a matrix $A$ is

$|A - \lambda I| = (-\lambda)^n + c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_1 \lambda + c_0 = \Delta(\lambda)$

then the corresponding matrix polynomial is

$\Delta(A) = (-1)^n A^n + c_{n-1} A^{n-1} + c_{n-2} A^{n-2} + \cdots + c_1 A + c_0 I$

CAYLEY-HAMILTON THEOREM: Every matrix satisfies its own characteristic equation:

$\Delta(A) = 0$

The proof of this follows by applying a diagonalizing similarity transformation to $A$ and substituting into $\Delta(A)$.
EXAMPLE:

$A = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}$

$|A - \lambda I| = (3 - \lambda)(2 - \lambda) - 1 = \lambda^2 - 5\lambda + 5$

$\Delta(A) = A^2 - 5A + 5I = \begin{bmatrix} 10 & 5 \\ 5 & 5 \end{bmatrix} - 5 \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix} + 5 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
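The same check in MATLAB, using poly for the characteristic coefficients and polyvalm to evaluate the matrix polynomial:

    % Cayley-Hamilton check: A satisfies its own characteristic equation.
    A = [3 1; 1 2];
    c = poly(A);             % [1 -5 5]  ->  lambda^2 - 5*lambda + 5
    polyvalm(c, A)           % the 2x2 zero matrix (up to round-off)
    A^2 - 5*A + 5*eye(2)     % the same computation written out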


Lecture notes prepared by M. Scott Trimboli. Copyright © 2010–2015, M. Scott Trimboli.
