
Unknown Input Observer and Robust Control

Zhu Fanglai
1. Unknown Input Observer (UIO)
Consider a linear system with unknown inputs

    ẋ = A x + B u + D η
    y = C x                                                     (1)
where x  R n is the state vector, u  R m is the control input vector, y  R p


is the measured output vector and   R q stands for unknown Inputs
consisting of external disturbance, the unknown mode parameters or
variations of some mode parameters and actuator faults and so on.
A natural assumption is the unknown input  is bounded:   d 0
The main purpose of unknown input observer design is to construct a
system that estimates the system states while avoiding the influence of
the unknown input, or that even reconstructs the unknown inputs
themselves.
The history of unknown input observer design:
Luenberger observer (1960s) → unknown input observer (1967-1980: linear
systems, state estimation without reconstruction of the unknown inputs)
→ (1980-1995: linear and nonlinear systems, state estimation with or
without reconstruction of the unknown inputs) → (1995-2010: dealing with
the observer matching condition) → (2000-present: simultaneous
estimation of the state, the unknown inputs and the measurement noise)


The existence of the unknown input observer
Now, consider the following system:

    E ẋ = A x + B u + D η
    y = C x + G u + F η                                         (2)

where E ∈ R^{r×n} is a known constant matrix. When r ≠ n, system (2) is
called a rectangular descriptor system. When r = n but E is singular,
system (2) is called a singular system or descriptor system. When r = n
and E is nonsingular, system (2) reduces to a general linear system with
unknown inputs.
Definition 1: A complex number s is called an invariant zero of system
(2), or of the system (E, A, C, D, F), with respect to the unknown input
η, if it satisfies

    rank [sE − A, D; C, F] < n + q                              (3)
Definition 2: System (2), or the system (E, A, C, D, F), is called a
minimum phase system if all of its invariant zeros have negative real
parts.
An equivalent statement of Definition 2 is:
Definition 2#: System (2), or the system (E, A, C, D, F), is called a
minimum phase system if

    rank [sE − A, D; C, F] = n + q

holds for all complex s with Re(s) ≥ 0.


Condition 1: The condition

    rank [E, D; C, F] = n + q                                   (4)

is called the (unknown input) observer matching condition for system (2).


Now consider again the general linear system (1) with unknown input. The
minimum phase definition for system (1) becomes:
Definition 3: System (1), or the system (A, C, D), is called a minimum
phase system if

    rank [sI_n − A, D; C, 0] = n + q                            (5)

holds for all complex s with Re(s) ≥ 0.


Also, the observer matching condition (Condition 1) for system (1)
becomes:
Condition 2: The condition

    rank [I_n, D; C, 0] = n + q

is called the (unknown input) observer matching condition for system (1).


Lemma 1: The observer matching condition (Condition 2) is equivalent to

    rank(CD) = rank(D) = q                                      (6)

Proof: Multiplication by nonsingular matrices does not change rank, so

    n + q = rank [I_n, D; C, 0]
          = rank ( [I_n, 0; −C, I_p] [I_n, D; C, 0] )
          = rank [I_n, D; 0, −CD]
          = rank ( [I_n, D; 0, −CD] [I_n, −D; 0, I_q] )
          = rank [I_n, 0; 0, −CD]
          = n + rank(CD).

Thus we have rank(CD) = q = rank(D).

Theorem 1: For the general unknown input linear system (1), there exists
an unknown input observer (UIO) that simultaneously produces estimates of
the system states and of the unknown inputs if and only if system (1) is
a minimum phase system and the observer matching condition (6) holds.
Definition 4: The pair (A, C) is called detectable if

    rank [sI_n − A; C] = n

holds for all complex s with Re(s) ≥ 0.
Theorem 2: The minimum phase condition (5) and the observer matching
condition (6) hold if and only if, for any symmetric positive definite
matrix Q ∈ R^{n×n}, the matrix equations

    (A − LC)ᵀ P + P (A − LC) = −Q
    Dᵀ P = H C                                                  (7)

have solutions: a symmetric positive definite matrix P ∈ R^{n×n} and
matrices L ∈ R^{n×p}, H ∈ R^{q×p}.
The above results come from:
M. Corless, J. Tu, State and input estimation for a class of uncertain
systems, Automatica 34 (6) (1998) 757-764.
Reduced-Order Observer Design
Consider the unknown input system (1) and suppose that the minimum phase
condition (5) and the observer matching condition (6) hold for it. Then,
by Theorem 2, the equations (7) have solutions.
Decompose the state vector x and the matrices A, B, D, P and Q into
blocks as follows:

    x = [x₁; x₂],  A = [A₁₁, A₁₂; A₂₁, A₂₂],  B = [B₁; B₂],
    D = [D₁; D₂],  P = [P₁, P₂; P₂ᵀ, P₃],  Q = [Q₁, Q₂; Q₂ᵀ, Q₃],

where x₁ ∈ R^p, B₁ ∈ R^{p×m}, D₁ ∈ R^{p×q}, and A₁₁, P₁, Q₁ ∈ R^{p×p}.
Without loss of generality, we assume that C = [I_p, 0].
Denote K  P31 P2T  R ( n  p ) p , and the second equation of (7) is just
 P1 P2   D1   I p  T
 PT     H
 2 P3   D2   0 

which implies that  P2T P3  D  0 . Multiplying P31 from left side of previous
equation yields  K I n  p  D  0 . Now by taking a state equivalent
transformation  z1   I p 0   x1 
z 
 z2   K I n  p   x2 
System (1) becomes
 z1   A11  A12 K  z1  A12 z2  B1u  D1
 (8)
 z2   A22  KA12  z2   K  A11  A12 K   A21  A22 K  z1   K I n  p  Bu

Theorem 3: Under the assumption that system (1) is a minimum phase
system and the observer matching condition (6) holds, the following
system

    d ẑ₂/dt = (A₂₂ + K A₁₂) ẑ₂ + [K (A₁₁ − A₁₂ K) + A₂₁ − A₂₂ K] y
              + [K, I_{n−p}] B u
    x̂ = [y; ẑ₂ − K y]                                           (9)

is a reduced-order observer of system (1) with dimension n − p, where K
serves as the reduced-order observer gain matrix.
Proof: Notice that z₁ = x₁ = y. Define the error z̃₂ = z₂ − ẑ₂ and
subtract the first equation of (9) from the second equation of (8); this
gives the observer error dynamics

    d z̃₂/dt = (A₂₂ + K A₁₂) z̃₂                                 (10)

Since the output matrix has the special form C = [I_p, 0], the product
LC = [L, 0] only affects the first p columns of A − LC; using also
P₂ᵀ = P₃ K, the (2,2) block of the first equation of (7) reads

    [*, *; *, (A₂₂ + K A₁₂)ᵀ P₃ + P₃ (A₂₂ + K A₁₂)]
        = −[Q₁, Q₂; Q₂ᵀ, Q₃],

and this gives

    (A₂₂ + K A₁₂)ᵀ P₃ + P₃ (A₂₂ + K A₁₂) = −Q₃                  (11)

Equation (11) is a Lyapunov equation with P₃ > 0 and Q₃ > 0, which
indicates that A₂₂ + K A₁₂ is a Hurwitz matrix. Therefore the observer
error dynamics (10) are asymptotically stable, i.e. we can conclude that
lim_{t→∞} z̃₂(t) = 0, and this ends the proof of Theorem 3.
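The reduced-order observer of Theorem 3 can be sketched numerically. The following Python example uses an illustrative second-order system (n = 2, p = q = 1; all matrices are my own choices, not from the text) in which the gain K can be read off directly from K D₁ + D₂ = 0, and then simulates plant and observer with a simple Euler scheme:

```python
import numpy as np

# Reduced-order UIO sketch following Theorem 3, with C = [I_p 0].  The
# second-order system below (n = 2, p = q = 1) is an illustrative choice
# of my own; K is read off directly from K*D1 + D2 = 0.
A = np.array([[-1.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[1.0], [-1.0]])
A11, A12, A21, A22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
B1, B2 = B[0, 0], B[1, 0]
D1, D2 = D[0, 0], D[1, 0]

K = -D2 / D1                      # so that [K, 1] D = 0 (here K = 1)
assert A22 + K * A12 < 0          # error dynamics (10) must be Hurwitz

dt, T = 1e-3, 10.0
x = np.array([1.0, 1.0])          # true state; y = x1
z2_hat = 0.0                      # observer state of dimension n - p = 1
for k in range(int(T / dt)):
    t = k * dt
    u = 0.5
    eta = np.sin(3.0 * t)         # bounded unknown input, never given to the observer
    y = x[0]
    # observer (9)
    z2_hat += dt * ((A22 + K * A12) * z2_hat
                    + (K * (A11 - A12 * K) + A21 - A22 * K) * y
                    + (K * B1 + B2) * u)
    # plant (1)
    x = x + dt * (A @ x + B[:, 0] * u + D[:, 0] * eta)

x2_hat = z2_hat - K * x[0]        # recover x2 via z2 = K x1 + x2
print(abs(x2_hat - x[1]))         # tiny: the estimate tracks x2 despite eta
```

Because [K, I_{n−p}] D = 0, the unknown input η never enters the ẑ₂ recursion, so the estimation error obeys (10) and decays at the rate set by A₂₂ + K A₁₂ (here −2).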


2. Sliding Mode Observer Design
Theorem 4: Under the assumption that system (1) is a minimum phase
system and the observer matching condition (6) holds, the following
system

    dx̂/dt = A x̂ + B u + L (y − C x̂) + ν(t, x̂, y)               (12)

is a sliding mode observer of system (1), where

    ν(t, x̂, y) = ρ D H (y − C x̂) / ‖H (y − C x̂)‖

is the sliding mode term of the observer, and ρ is a positive scalar.


Proof: The observer error dynamics are obtained by subtracting (12) from
(1):

    ė = (A − LC) e − ν + D η                                    (13)

where e = x − x̂. Consider the Lyapunov function candidate
V(t) = eᵀ(t) P e(t); its derivative along the error system (13) is

    V̇(t) = 2 eᵀ P ė
          = eᵀ [(A − LC)ᵀ P + P (A − LC)] e − 2 eᵀ P ν + 2 eᵀ P D η.

Since Dᵀ P = H C, we have H (y − C x̂) = H C e = Dᵀ P e, so

    2 eᵀ P ν = 2 ρ eᵀ P D H C e / ‖H C e‖
             = 2 ρ eᵀ P D Dᵀ P e / ‖Dᵀ P e‖
             = 2 ρ ‖Dᵀ P e‖² / ‖Dᵀ P e‖
             = 2 ρ ‖Dᵀ P e‖,

and

    2 eᵀ P D η ≤ 2 ‖Dᵀ P e‖ ‖η‖ ≤ 2 d₀ ‖Dᵀ P e‖.

Therefore, we have

    V̇(t) ≤ −eᵀ Q e − 2 ρ ‖Dᵀ P e‖ + 2 d₀ ‖Dᵀ P e‖
          = −eᵀ Q e − 2 (ρ − d₀) ‖Dᵀ P e‖.

If we choose ρ large enough that ρ ≥ d₀, then we have

    V̇(t) ≤ −eᵀ Q e < 0   for e ≠ 0,

and this implies that the error dynamics (13) are asymptotically stable,
i.e. lim_{t→∞} e(t) = 0, which ends the proof of Theorem 4.


Remark 1: In practice we usually compute the sliding mode term as

    ν(t, x̂, y) = ρ D H (y − C x̂) / ‖H (y − C x̂)‖   if ‖y − C x̂‖ ≥ ε,
    ν(t, x̂, y) = 0                                 if ‖y − C x̂‖ < ε,

where ε is a small positive scalar (the boundary-layer width).
Sliding Mode Control
• In the formulation of any practical control problem, there will always
be a discrepancy between the actual plant and the mathematical model
used for control design.
• These discrepancies (or mismatches) arise from unknown external
disturbances, uncertain plant parameters, and parasitic/unmodeled
dynamics.
• Designing control laws that provide the desired closed-loop
performance in the presence of these disturbances and uncertainties is a
very challenging task.
• One approach is the so-called sliding mode control.
Consider the one-dimensional motion of a unit mass:

    ẋ₁ = x₂
    ẋ₂ = u + f(x₁, x₂, t)                                       (14)

where u is the control input and the disturbance term f(x₁, x₂, t)
satisfies |f(x₁, x₂, t)| ≤ L. The problem is to design a feedback
control law such that the closed-loop system is asymptotically stable.


Let us introduce a new variable, the sliding variable

    σ = σ(x₁, x₂) = x₂ + c x₁,   c > 0                          (15)

In order to achieve lim_{t→∞} xᵢ(t) = 0 (i = 1, 2) in the presence of
the bounded disturbance f(x₁, x₂, t), we only need to drive the variable
σ to zero by means of the control u.
Consider the Lyapunov function candidate

    V(t) = σ²/2                                                 (16)

In order to provide the asymptotic stability of

    σ̇ = ẋ₂ + c ẋ₁ = c x₂ + u + f(x₁, x₂, t),   σ(0) = σ₀        (17)

the following conditions must be satisfied:
(a) V̇(t) < 0 for σ ≠ 0;
(b) lim_{|σ|→∞} V = ∞.

In order to achieve finite-time convergence, condition (a) can be
strengthened to

    V̇ ≤ −α V^{1/2},   α > 0                                     (18)

Based on (18), we have

    dV / V^{1/2} ≤ −α dt,   i.e.   d(V^{1/2}) ≤ −(α/2) dt.

Now, integrating the above inequality from 0 to t gives

    0 ≤ V^{1/2}(t) ≤ −(α/2) t + V^{1/2}(0).

Consequently, V(t) reaches zero in a finite time

    t_r ≤ 2 V^{1/2}(0) / α                                      (19)

Therefore, a control u that is computed to satisfy (18) will drive V(t),
and hence σ, to zero in a finite time t_r and will keep them at zero
thereafter.
The derivative of V is computed as

    V̇ = σ σ̇ = σ (c x₂ + f(x₁, x₂, t) + u)                       (20)

Taking u = −c x₂ + v and substituting it into (20), we obtain

    V̇ = σ (f(x₁, x₂, t) + v) ≤ |σ| L + σ v                      (21)
If we design v = −ρ sign(σ), where ρ > 0 and

    sign(x) = 1 if x > 0,   sign(x) = −1 if x < 0,

then we have

    V̇ ≤ |σ| L − ρ |σ| = −(ρ − L) |σ|.
On the other hand, in order to meet (18), note from (16) that
|σ| = √2 V^{1/2}; we therefore only need

    V̇ ≤ −(ρ − L) |σ| = −√2 (ρ − L) V^{1/2} ≤ −α V^{1/2},

and this gives ρ ≥ L + α/√2. Finally, the control law u that drives σ to
zero in a finite time determined by (19) is

    u = −c x₂ − ρ sign(σ)                                       (22)
Theorem 5: The sliding mode controller (22) drives the trajectories of
the original system states x₁(t) and x₂(t) onto the sliding surface

    σ(x₁, x₂) = 0                                               (23)

where σ is defined by (15), in a finite time t_r bounded by (19). Thus,
under the controller (22), the trajectories of the original system
states x₁(t) and x₂(t) converge to zero asymptotically, even in the
presence of the disturbance f(x₁, x₂, t).
Proof: Since x₂ = ẋ₁, (23) implies ẋ₁ + c x₁ = 0 with c > 0, whose
solution from the reaching time t_r onward is

    x₁(t) = x₁(t_r) exp(−c (t − t_r)),
    x₂(t) = −c x₁(t_r) exp(−c (t − t_r)).
3. Linear Matrix Inequality (LMI)
Definition 3.1: A matrix function inequality of the form

    f(A₁, …, A_l; X₁, …, X_m) < 0                               (3.1)

is called a matrix inequality, where Aᵢ (i = 1, …, l) are known constant
matrices and Xᵢ (i = 1, …, m) are matrix variables. The notation "< 0"
means that the value of the function f is a symmetric negative definite
matrix. If all the terms in (3.1) are linear with respect to the
variables Xᵢ (i = 1, …, m), then (3.1) is called a linear matrix
inequality (LMI).
For example, the Lyapunov inequality Aᵀ P + P A + Q < 0 is an LMI, where
A ∈ R^{n×n} is a known constant matrix, Q ∈ R^{n×n} is a known constant
symmetric matrix, and P ∈ R^{n×n} is a matrix variable which should be
symmetric positive definite. The Riccati inequality

    Aᵀ P + P A + P B R⁻¹ Bᵀ P + Q < 0                           (3.2)

is a matrix inequality but not an LMI, because of the quadratic term in
P; here R and Q are two symmetric positive definite matrices.
Many nonlinear matrix inequalities, including (3.2), can be transformed
equivalently into LMIs by the following famous Schur complement lemma:
Lemma 3.1 (Schur complement lemma): For a given symmetric block matrix

    S = [S₁, S₂; S₂ᵀ, S₃] ∈ R^{n×n}

where S₁ ∈ R^{r×r} (r < n) and S₃ ∈ R^{(n−r)×(n−r)} are two symmetric
matrices and S₂ ∈ R^{r×(n−r)} is a matrix, the following three
conditions are equivalent:
(i) S < 0;
(ii) S₁ < 0 and S₃ − S₂ᵀ S₁⁻¹ S₂ < 0;
(iii) S₃ < 0 and S₁ − S₂ S₃⁻¹ S₂ᵀ < 0.
Proof: (i)  (ii) Since S is a symmetric matrix, then S1  S1T , S2  S2T . Besides, S
< 0 implies that S1 is a symmetric positive or negative definite matrix,
therefore, S1 is a nonsingular matrix. Because
 Ir 0   S1 S2   I r 0   S1 0 
  S T S 1 
 2 1 I n  r   S2T S3   0 1 
S1 S 2   0 S3  S 2T S11S 2 

Thus, S < 0 if and only if S1  0, S3  S2T S1S2  0 .
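The equivalence (i) ⇔ (ii) can be spot-checked numerically on a randomly generated negative definite matrix (an illustrative sanity check, not a proof):

```python
import numpy as np

# Numerical spot-check of Lemma 3.1, conditions (i) and (ii), on a
# randomly generated symmetric negative definite matrix.
rng = np.random.default_rng(0)
n, r = 5, 2
M = rng.standard_normal((n, n))
S = -(M @ M.T) - 0.1 * np.eye(n)     # negative definite by construction

S1, S2, S3 = S[:r, :r], S[:r, r:], S[r:, r:]

def neg_def(X):
    """True if the symmetric matrix X is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

# (i) S < 0  should match  (ii) S1 < 0 and S3 - S2^T S1^{-1} S2 < 0
cond_i = neg_def(S)
cond_ii = neg_def(S1) and neg_def(S3 - S2.T @ np.linalg.solve(S1, S2))
print(cond_i, cond_ii)  # True True
```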


Obviously, the Riccati inequality (3.2) is not an LMI, but by Lemma 3.1
we can transform it into one. In fact, because

    Aᵀ P + P A + P B R⁻¹ Bᵀ P + Q
        = Aᵀ P + P A + Q − (P B)(−R)⁻¹(Bᵀ P),

we can apply Lemma 3.1 (condition (iii), with S₃ = −R < 0) and conclude
that the matrix inequality (3.2) is equivalent to

    [Aᵀ P + P A + Q, P B; Bᵀ P, −R] < 0,

which is an LMI in the variable P.
H∞ Stability
Consider the linear system with unknown input

    ẋ = A x + B u + D d
    y = C x                                                     (3.3)

where x ∈ R^n and y ∈ R^p are the state and output vectors,
respectively, and d ∈ R^q is the unknown input vector; A is a constant
matrix.
Suppose f(t) ∈ R^m is a time-varying function with ∫₀^∞ ‖f(t)‖² dt < ∞,
where ‖f(t)‖² = fᵀ(t) f(t). Then

    ‖f‖₂ = ( ∫₀^∞ ‖f(t)‖² dt )^{1/2}

is called the L₂-norm of the function f(t).
In the time domain, the H∞ stability of system (3.3) from the unknown
input d to the output y is equivalent to: (i) all the eigenvalues of A
have negative real parts; and (ii) ‖y‖₂ ≤ γ ‖d‖₂, where γ > 0 is called
the H∞ performance index.
Lemma 3.2: System (3.3) is H∞ stable from the unknown input d to the
output y with a performance index γ > 0 if there exists a Lyapunov
function V(t) such that

    V̇(t) + yᵀ(t) y(t) − γ² dᵀ(t) d(t) < 0                       (3.4)

where V̇(t) is the derivative of V(t) with respect to time t along the
trajectories of the dynamic system (3.3).
Proof: From (3.4), yᵀ(t) y(t) − γ² dᵀ(t) d(t) < −V̇(t). Integrating from
0 to t gives

    ∫₀ᵗ ‖y(s)‖² ds − γ² ∫₀ᵗ ‖d(s)‖² ds ≤ V(0) − V(t)            (3.5)

Letting t → ∞, and noting that V(t) ≥ 0 with V(0) = 0 under zero initial
conditions, we obtain

    ∫₀^∞ ‖y(s)‖² ds − γ² ∫₀^∞ ‖d(s)‖² ds ≤ 0,

which implies ‖y‖₂ ≤ γ ‖d‖₂.
Now, for system (3.3), we design a state observer

    dx̂/dt = A x̂ + B u + L (y − C x̂)                             (3.6)

The observer error dynamics are obtained by subtracting (3.6) from
(3.3):

    ė = (A − LC) e + D d                                        (3.7)

In what follows, we plan to determine the observer gain L such that the
error dynamics (3.7) are stable in the H∞ sense, satisfying
‖e‖₂ ≤ γ ‖d‖₂.
For this purpose, consider the Lyapunov function candidate
V(t) = eᵀ P e; its derivative along the trajectories of (3.7) is

    V̇ = 2 eᵀ P (A − LC) e + 2 eᵀ P D d
       = eᵀ [P (A − LC) + (A − LC)ᵀ P] e + 2 eᵀ P D d.

Thus, we have

    V̇ + eᵀ e − γ² dᵀ d
       = eᵀ [P (A − LC) + (A − LC)ᵀ P + I_n] e + 2 eᵀ P D d − γ² dᵀ d
       = [eᵀ, dᵀ] Ω [e; d],

where

    Ω = [P A + Aᵀ P − X C − Cᵀ Xᵀ + I_n, P D; *, −γ² I_q],   X = P L.

An H∞ stable observer is therefore obtained by solving the following LMI
problem in the variables P = Pᵀ > 0, X and β:

    min β
    s.t.  β > 0,
          [P A + Aᵀ P − X C − Cᵀ Xᵀ + I_n, P D; *, −β I_q] < 0  (3.8)

where β = γ². Then, if the LMI problem (3.8) is feasible, we choose
L = P⁻¹ X.
