Unknown Input Observer
and Robust Control
Zhu Fanglai
Unknown Input Observer (UIO)
Consider a linear system with unknown inputs
\[
\begin{aligned}
\dot{x} &= Ax + Bu + D\eta \\
y &= Cx
\end{aligned} \tag{1}
\]
where $x \in \mathbb{R}^n$ is the state vector, $u \in \mathbb{R}^m$ is the control input vector, $y \in \mathbb{R}^p$ is the measured output vector, and $\eta \in \mathbb{R}^q$ stands for the unknown inputs, which may consist of external disturbances, unknown or varying model parameters, actuator faults, and so on.
A natural assumption is that the unknown input is bounded: $\|\eta(t)\| \le d_0$.
The main purpose of unknown input observer design is to construct a dynamical system that estimates the system states while rejecting the influence of the unknown input, or, going further, that also reconstructs the unknown inputs.
The history of unknown input observer design
- Luenberger observer (1960s)
- Unknown input observer, 1967-1980: linear systems, state estimation without reconstructing the unknown inputs
- 1980-1995: linear and nonlinear systems, state estimation with or without reconstruction of the unknown inputs
- 1995-2010: dealing with the observer matching condition
- 2000-present: simultaneous estimation of the states, unknown inputs and measurement noise
The existence of the unknown input observer
Now, consider the following system:
\[
\begin{aligned}
E\dot{x} &= Ax + Bu + D\eta \\
y &= Cx + Gu + F\eta
\end{aligned} \tag{2}
\]
where $E \in \mathbb{R}^{r \times n}$ is a known constant matrix. When $r \ne n$, system (2) is called a descriptor system. When $r = n$ but $E$ is singular, system (2) is called a singular system or a descriptor system. When $r = n$ and $E$ is nonsingular, system (2) turns out to be a general linear system with unknown inputs.
Definition 1: For any complex number $s$, if it satisfies
\[
\operatorname{rank}\begin{bmatrix} sE - A & D \\ C & F \end{bmatrix} < n + q, \tag{3}
\]
then $s$ is called an invariant zero of system (2) with respect to the unknown input, or of the system $(E, A, C, D, F)$.
Definition 2: System (2), or the system $(E, A, C, D, F)$, is called a minimum phase system if all of its invariant zeros have negative real parts.
An equivalent statement of Definition 2 is:
Definition 2#: System (2), or the system $(E, A, C, D, F)$, is called a minimum phase system if
\[
\operatorname{rank}\begin{bmatrix} sE - A & D \\ C & F \end{bmatrix} = n + q
\]
holds for all complex $s$ with $\operatorname{Re}(s) \ge 0$.
Condition 1: The condition
\[
\operatorname{rank}\begin{bmatrix} E & D \\ C & F \end{bmatrix} = n + q \tag{4}
\]
is called the (unknown input) observer matching condition of system (2).
Now, again consider the general linear system (1) with unknown input.
The minimum phase system definition for system (1) becomes:
Definition 3: System (1), or the system $(A, C, D)$, is called a minimum phase system if
\[
\operatorname{rank}\begin{bmatrix} sI_n - A & D \\ C & 0_{p \times q} \end{bmatrix} = n + q \tag{5}
\]
holds for all complex $s$ with $\operatorname{Re}(s) \ge 0$.
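Condition (5) can be checked numerically. Below is a minimal sketch, assuming a square Rosenbrock pencil (i.e. $p = q$) and hypothetical example matrices chosen only for illustration: the invariant zeros of $(A, C, D)$ are the finite generalized eigenvalues of the pencil below, and the system is minimum phase when they all have negative real parts.

```python
import numpy as np
from scipy.linalg import eigvals

# Hypothetical example matrices (not from the text), with n = 3, p = q = 1.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[1.0], [0.0], [1.0]])
n, p, q = 3, 1, 1

# The matrix in (5) loses rank exactly at the invariant zeros.  For p = q the
# pencil is square, so the zeros are the finite generalized eigenvalues of
# (M, N) with  M = [[A, -D], [-C, 0]]  and  N = [[I_n, 0], [0, 0]].
M = np.block([[A, -D], [-C, np.zeros((p, q))]])
N = np.block([[np.eye(n), np.zeros((n, q))],
              [np.zeros((p, n)), np.zeros((p, q))]])
zeros = eigvals(M, N)
finite_zeros = zeros[np.isfinite(zeros)]
print("invariant zeros:", finite_zeros)
print("minimum phase (5):", bool(np.all(finite_zeros.real < 0)))
```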
Also, the observer matching condition (Condition 1) for system (1) becomes:
Condition 2: The condition
\[
\operatorname{rank}\begin{bmatrix} I_n & D \\ C & 0_{p \times q} \end{bmatrix} = n + q
\]
is called the (unknown input) observer matching condition for system (1).
Lemma 1: The observer matching condition (Condition 2) is equivalent to
\[
\operatorname{rank}(CD) = \operatorname{rank}(D) = q. \tag{6}
\]
Proof:
\[
n + q = \operatorname{rank}\begin{bmatrix} I_n & D \\ C & 0_{p \times q} \end{bmatrix}
= \operatorname{rank}\left(\begin{bmatrix} I_n & 0 \\ -C & I_p \end{bmatrix}\begin{bmatrix} I_n & D \\ C & 0_{p \times q} \end{bmatrix}\right)
= \operatorname{rank}\begin{bmatrix} I_n & D \\ 0 & -CD \end{bmatrix}
\]
\[
= \operatorname{rank}\left(\begin{bmatrix} I_n & D \\ 0 & -CD \end{bmatrix}\begin{bmatrix} I_n & -D \\ 0 & I_q \end{bmatrix}\right)
= \operatorname{rank}\begin{bmatrix} I_n & 0 \\ 0 & -CD \end{bmatrix}
= n + \operatorname{rank}(CD).
\]
Thus, we have $\operatorname{rank}(CD) = q = \operatorname{rank}(D)$.
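The rank test (6) is straightforward to verify numerically. A minimal sketch with numpy, reusing the hypothetical example matrices from the previous sketch:

```python
import numpy as np

# Hypothetical example matrices (same as above, chosen only for illustration).
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[1.0], [0.0], [1.0]])

q = D.shape[1]
matching = (np.linalg.matrix_rank(C @ D) == np.linalg.matrix_rank(D) == q)
print("observer matching condition (6) holds:", matching)
```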
Theorem 1: For the general unknown input linear system (1), there exists an unknown input observer (UIO) that can simultaneously produce estimates of the system states and the unknown inputs if and only if system (1) is a minimum phase system and the observer matching condition (6) holds.
Definition 4: The pair $(A, C)$ is called detectable if
\[
\operatorname{rank}\begin{bmatrix} sI_n - A \\ C \end{bmatrix} = n
\]
holds for any complex $s$ with $\operatorname{Re}(s) \ge 0$.
Theorem 2: The minimum phase condition (5) and the observer matching condition (6) hold if and only if, for any symmetric positive definite matrix $Q \in \mathbb{R}^{n \times n}$, the matrix equations
\[
\begin{aligned}
(A - LC)^T P + P(A - LC) &= -Q \\
D^T P &= HC
\end{aligned} \tag{7}
\]
have solutions for a symmetric positive definite matrix $P \in \mathbb{R}^{n \times n}$ and matrices $L \in \mathbb{R}^{n \times p}$ and $H \in \mathbb{R}^{q \times p}$.
The above results come from
M. Corless, J. Tu, State and input estimation for a class of uncertain systems,
Automatica 34 (6) (1998) 757–764.
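After the substitution $Y = PL$, the equations (7) become linear in $(P, Y, H)$, so they can be solved numerically as a semidefinite feasibility problem. The sketch below uses CVXPY with $Q = I_n$ and the same hypothetical example matrices; the solver choice, the small positive-definiteness margin, and the example data are assumptions, not part of the text. $L$ is recovered afterwards as $L = P^{-1}Y$.

```python
import numpy as np
import cvxpy as cp

# Hypothetical example matrices (same as in the previous sketches).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[1.0], [0.0], [1.0]])
n, p, q = 3, 1, 1
Q = np.eye(n)                                  # any symmetric positive definite Q

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))                        # Y = P L, so the constraints are linear
H = cp.Variable((q, p))
constraints = [
    P >> 1e-6 * np.eye(n),                              # P > 0
    A.T @ P + P @ A - C.T @ Y.T - Y @ C == -Q,          # (A - LC)^T P + P(A - LC) = -Q
    D.T @ P == H @ C,                                   # D^T P = H C
]
cp.Problem(cp.Minimize(0), constraints).solve()

L = np.linalg.solve(P.value, Y.value)          # recover L = P^{-1} Y
print("eig(A - LC):", np.linalg.eigvals(A - L @ C))
print("residual of D^T P - H C:", np.abs(D.T @ P.value - H.value @ C).max())
```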
Reduced-Order Observer Design
Consider the unknown input system (1) and suppose that the minimum phase condition (5) and the observer matching condition (6) hold for it. Then, by Theorem 2, we know that (7) has solutions.
Decompose the state vector $x$ and the matrices $A$, $B$, $D$, $P$ and $Q$ into blocks as follows:
\[
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},\quad
A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},\quad
B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix},\quad
D = \begin{bmatrix} D_1 \\ D_2 \end{bmatrix},\quad
P = \begin{bmatrix} P_1 & P_2 \\ P_2^T & P_3 \end{bmatrix},\quad
Q = \begin{bmatrix} Q_1 & Q_2 \\ Q_2^T & Q_3 \end{bmatrix}
\]
where $x_1 \in \mathbb{R}^p$, $B_1 \in \mathbb{R}^{p \times m}$, $D_1 \in \mathbb{R}^{p \times q}$, and $A_{11}, P_1, Q_1 \in \mathbb{R}^{p \times p}$. Also, without loss of generality, we assume that $C = \begin{bmatrix} I_p & 0 \end{bmatrix}$.
Denote $K = P_3^{-1} P_2^T \in \mathbb{R}^{(n-p) \times p}$. The second equation of (7), written as $PD = C^T H^T$, is just
\[
\begin{bmatrix} P_1 & P_2 \\ P_2^T & P_3 \end{bmatrix}\begin{bmatrix} D_1 \\ D_2 \end{bmatrix} = \begin{bmatrix} I_p \\ 0 \end{bmatrix} H^T,
\]
which implies that $P_2^T D_1 + P_3 D_2 = 0$. Multiplying the previous equation by $P_3^{-1}$ from the left side yields $\begin{bmatrix} K & I_{n-p} \end{bmatrix} D = 0$, i.e. $K D_1 + D_2 = 0$. Now, by taking the state equivalence transformation
\[
z = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} I_p & 0 \\ K & I_{n-p} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
\]
system (1) becomes
\[
\begin{aligned}
\dot{z}_1 &= (A_{11} - A_{12}K)\, z_1 + A_{12}\, z_2 + B_1 u + D_1 \eta \\
\dot{z}_2 &= (A_{22} + K A_{12})\, z_2 + \big[K(A_{11} - A_{12}K) + A_{21} - A_{22}K\big]\, z_1 + \begin{bmatrix} K & I_{n-p} \end{bmatrix} B u.
\end{aligned} \tag{8}
\]
Theorem 3: Under the assumption that system (1) is a minimum phase system and the observer matching condition (6) holds, the following system
\[
\begin{aligned}
\dot{\hat{z}}_2 &= (A_{22} + K A_{12})\, \hat{z}_2 + \big[K(A_{11} - A_{12}K) + A_{21} - A_{22}K\big]\, y + \begin{bmatrix} K & I_{n-p} \end{bmatrix} B u \\
\hat{x} &= \begin{bmatrix} y \\ \hat{z}_2 - K y \end{bmatrix}
\end{aligned} \tag{9}
\]
is a reduced-order observer of system (1) with dimension $n - p$, where $K$ serves as the reduced-order observer gain matrix.
Proof: Noticing that $z_1 = x_1 = y$ and subtracting the first equation of (9) from the second equation of (8), we get the observer error dynamics
\[
\dot{\tilde{z}}_2 = (A_{22} + K A_{12})\, \tilde{z}_2, \qquad \tilde{z}_2 = z_2 - \hat{z}_2. \tag{10}
\]
If we notice that the output matrix has the special form $C = \begin{bmatrix} I_p & 0 \end{bmatrix}$, we will easily find that the block form of the first equation of (7) is
\[
\begin{bmatrix} * & * \\ * & (A_{22} + K A_{12})^T P_3 + P_3 (A_{22} + K A_{12}) \end{bmatrix}
= -\begin{bmatrix} Q_1 & Q_2 \\ Q_2^T & Q_3 \end{bmatrix},
\]
and this gives
\[
(A_{22} + K A_{12})^T P_3 + P_3 (A_{22} + K A_{12}) + Q_3 = 0. \tag{11}
\]
Equation (11) is just a Lyapunov equation, which indicates that $A_{22} + K A_{12}$ is a Hurwitz (asymptotically stable) matrix. Therefore, the observer error dynamics (10) are asymptotically stable, i.e., we can conclude that $\lim_{t \to \infty} \tilde{z}_2(t) = 0$, and this ends the proof of Theorem 3.
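A minimal construction sketch for the reduced-order observer (9): it re-solves (7) as in the sketch above to obtain $P$, partitions it, forms $K = P_3^{-1} P_2^T$, and checks that $\begin{bmatrix} K & I_{n-p} \end{bmatrix} D = 0$ and that $A_{22} + K A_{12}$ is Hurwitz. The example matrices and solver settings are illustrative assumptions, not part of the text.

```python
import numpy as np
import cvxpy as cp

# Hypothetical example system (same as above); C = [I_p  0] with p = 1.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[1.0], [0.0], [1.0]])
n, p = 3, 1

# Solve (7) as in the previous sketch to obtain P.
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))
H = cp.Variable((1, p))
cp.Problem(cp.Minimize(0),
           [P >> 1e-6 * np.eye(n),
            A.T @ P + P @ A - C.T @ Y.T - Y @ C == -np.eye(n),
            D.T @ P == H @ C]).solve()
Pv = P.value

# Partition P and A according to x1 in R^p, x2 in R^(n-p); form K = P3^{-1} P2^T.
P2, P3 = Pv[:p, p:], Pv[p:, p:]
A11, A12, A21, A22 = A[:p, :p], A[:p, p:], A[p:, :p], A[p:, p:]
K = np.linalg.solve(P3, P2.T)

# Sanity checks: [K  I_{n-p}] D = 0 and A22 + K A12 is Hurwitz.
print("[K  I]D =", (K @ D[:p] + D[p:]).ravel())
print("eig(A22 + K A12):", np.linalg.eigvals(A22 + K @ A12))

# Matrices of the reduced-order observer (9).
Az = A22 + K @ A12                                  # error-dynamics matrix
Ay = K @ (A11 - A12 @ K) + A21 - A22 @ K            # coefficient of y
Bz = np.hstack([K, np.eye(n - p)]) @ B              # coefficient of u
print("Az =\n", Az, "\nAy =\n", Ay, "\nBz =\n", Bz)
```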
Sliding Mode Observer Design
Theorem 4: Under the assumption that system (1) is a minimum phase system and the observer matching condition (6) holds, the following system
\[
\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x}) + \nu(t, \hat{x}, y) \tag{12}
\]
is a sliding mode observer of system (1), where
\[
\nu(t, \hat{x}, y) = \rho\,\frac{D H (y - C\hat{x})}{\|H(y - C\hat{x})\|}
\]
is the sliding mode term of the observer and $\rho$ is a positive scalar.
Proof: The observer error dynamics can be obtained by subtracting (12) from (1):
\[
\dot{e} = (A - LC)e + D\eta - \nu(t, \hat{x}, y), \tag{13}
\]
where $e = x - \hat{x}$. Consider the Lyapunov function candidate $V(t) = e^T(t) P e(t)$; its derivative along the error system (13) is
\[
\dot{V}(t) = 2 e^T P \dot{e} = e^T\big[(A - LC)^T P + P(A - LC)\big] e + 2 e^T P D \eta - 2 e^T P \nu.
\]
Since
\[
2 e^T P \nu
= 2\rho\,\frac{e^T P D\, H (y - C\hat{x})}{\|H(y - C\hat{x})\|}
= 2\rho\,\frac{e^T P D\, HCe}{\|HCe\|}
= 2\rho\,\frac{e^T P D\, D^T P e}{\|D^T P e\|}
= 2\rho\,\frac{\|D^T P e\|^2}{\|D^T P e\|}
= 2\rho\,\|D^T P e\|
\]
(using $HC = D^T P$ from the second equation of (7)), and
\[
2 e^T P D \eta \le 2\,\|e^T P D\|\,\|\eta\| \le 2 d_0\,\|D^T P e\|,
\]
we therefore have
\[
\dot{V}(t) = -e^T Q e + 2 e^T P D \eta - 2 e^T P \nu \le -e^T Q e + 2 d_0 \|D^T P e\| - 2\rho \|D^T P e\|.
\]
If we choose $\rho$ large enough such that $\rho \ge d_0$, then we have
\[
\dot{V}(t) \le -e^T Q e < 0 \quad \text{for } e \ne 0,
\]
and this implies that the error dynamics (13) are asymptotically stable, i.e.,
\[
\lim_{t \to \infty} e(t) = 0,
\]
and this ends the proof of Theorem 4.
Remark 1: In practice we usually compute the sliding mode term by
\[
\nu(t, \hat{x}, y) =
\begin{cases}
\rho\,\dfrac{D H (y - C\hat{x})}{\|H(y - C\hat{x})\|}, & \text{if } \|H(y - C\hat{x})\| \ge \varepsilon \\[2mm]
0, & \text{if } \|H(y - C\hat{x})\| < \varepsilon
\end{cases}
\]
where $\varepsilon$ is a positive scalar which is small enough.
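A minimal simulation sketch of the sliding mode observer (12) with the regularized term of Remark 1. The gains $P$, $L$, $H$ are obtained by solving (7) as in the earlier sketch; the unknown-input signal, the gain $\rho$, the regularization $\varepsilon$, and the integrator settings are all illustrative assumptions.

```python
import numpy as np
import cvxpy as cp
from scipy.integrate import solve_ivp

# Hypothetical example system (same as above).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[1.0], [0.0], [1.0]])
n, p, q = 3, 1, 1

# Solve (7) for P, L, H as in the earlier sketch.
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))
H = cp.Variable((q, p))
cp.Problem(cp.Minimize(0),
           [P >> 1e-6 * np.eye(n),
            A.T @ P + P @ A - C.T @ Y.T - Y @ C == -np.eye(n),
            D.T @ P == H @ C]).solve()
L = np.linalg.solve(P.value, Y.value)
Hv = H.value

rho, eps = 2.0, 1e-3                    # sliding gain (>= d0) and regularization
eta = lambda t: np.sin(3.0 * t)         # unknown input, |eta| <= d0 = 1
u = lambda t: 0.0                       # control input (zero for this test)

def dynamics(t, w):
    x, xh = w[:n], w[n:]
    y = C @ x
    r = Hv @ (y - C @ xh)                                    # H(y - C x_hat)
    nr = np.linalg.norm(r)
    nu = rho * (D @ r) / nr if nr >= eps else np.zeros(n)    # regularized sliding term
    dx = A @ x + B.ravel() * u(t) + D.ravel() * eta(t)       # plant (1)
    dxh = A @ xh + B.ravel() * u(t) + L @ (y - C @ xh) + nu  # observer (12)
    return np.concatenate([dx, dxh])

w0 = np.concatenate([[1.0, -1.0, 0.5], np.zeros(n)])
sol = solve_ivp(dynamics, (0.0, 10.0), w0, max_step=1e-3)
print("final estimation error:", sol.y[:n, -1] - sol.y[n:, -1])
```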
Sliding Mode Control
In the formulation of any practical control problem, there will always be a discrepancy between the actual plant and the mathematical model used for control design.
These discrepancies (or mismatches) arise from unknown external disturbances, uncertain plant parameters, and parasitic/unmodeled dynamics.
Designing control laws that provide the desired performance for the closed-loop system in the presence of these disturbances or uncertainties is a very challenging task.
One approach is the so-called sliding mode control.
Consider the one-dimensional motion of a unit mass
\[
\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= u + f(x_1, x_2, t)
\end{aligned} \tag{14}
\]
where $u$ is the control input and the disturbance term $f(x_1, x_2, t)$ satisfies $|f(x_1, x_2, t)| \le L$. The problem is to design a feedback control law such that the closed-loop system is asymptotically stable.
Let us introduce a new variable
\[
\sigma = \sigma(x_1, x_2) = x_2 + c x_1, \qquad c > 0. \tag{15}
\]
In order to achieve $\lim_{t \to \infty} x_i(t) = 0$ $(i = 1, 2)$ in the presence of the bounded disturbance $f(x_1, x_2, t)$, we only need to drive the variable $\sigma$ to zero by means of the control $u$.
Consider the Lyapunov function candidate
\[
V(t) = \tfrac{1}{2}\sigma^2. \tag{16}
\]
In order to provide asymptotic stability of
\[
\dot{\sigma} = \dot{x}_2 + c\dot{x}_1 = c x_2 + u + f(x_1, x_2, t), \qquad \sigma(0) = \sigma_0, \tag{17}
\]
about the equilibrium point $\sigma = 0$, the following conditions must be satisfied:
(a) $\dot{V}(t) < 0$ for $\sigma \ne 0$;
(b) $\lim_{|\sigma| \to \infty} V = \infty$.
In order to achieve finite-time convergence, condition (a) can be modified to
\[
\dot{V} \le -\alpha V^{1/2}, \qquad \alpha > 0. \tag{18}
\]
Based on (18), we have
\[
dV \le -\alpha V^{1/2}\,dt, \qquad V^{-1/2}\,dV \le -\alpha\,dt, \qquad d\big(V^{1/2}\big) \le -\tfrac{\alpha}{2}\,dt.
\]
Now, integrating the above inequality from 0 to $t$ gives
\[
0 \le V^{1/2}(t) \le -\tfrac{\alpha}{2}\,t + V^{1/2}(0).
\]
Consequently, $V(t)$ reaches zero in a finite time
\[
t_r \le \frac{2 V^{1/2}(0)}{\alpha}. \tag{19}
\]
Therefore, a control $u$ that is computed to satisfy (18) will drive $V(t)$, and hence $\sigma$, to zero in a finite time $t_r$ and will keep them at zero thereafter.
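A tiny numeric illustration of the reaching-time bound (19), with arbitrary values of $\sigma(0)$ and $\alpha$ (not taken from the text):

```python
import numpy as np

sigma0, alpha = 2.0, 1.0               # arbitrary initial value and gain
V0 = 0.5 * sigma0**2                   # V(0) = sigma(0)^2 / 2, per (16)
t_r = 2.0 * np.sqrt(V0) / alpha        # reaching-time bound (19)
print(f"t_r <= {t_r:.3f}  (= sqrt(2)*|sigma(0)|/alpha)")
```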
The derivative of $V$ is computed as
\[
\dot{V} = \sigma\dot{\sigma} = \sigma\big(c x_2 + f(x_1, x_2, t) + u\big). \tag{20}
\]
Assuming $u = -c x_2 + v$ and substituting it into (20), we obtain
\[
\dot{V} = \sigma\big(f(x_1, x_2, t) + v\big) \le |\sigma|\, L + \sigma v. \tag{21}
\]
If we design $v = -\rho\,\operatorname{sign}(\sigma)$, where $\rho > 0$ and
\[
\operatorname{sign}(x) =
\begin{cases}
1, & \text{if } x > 0 \\
-1, & \text{if } x < 0,
\end{cases}
\]
then we have
\[
\dot{V} \le |\sigma|\, L - \rho\, |\sigma| = -(\rho - L)\,|\sigma|.
\]
On the other hand, in order to meet (18), noting from (16) that $|\sigma| = \sqrt{2}\,V^{1/2}$, we only need
\[
\dot{V} \le -(\rho - L)\,|\sigma| = -\sqrt{2}\,(\rho - L)\,V^{1/2} \le -\alpha V^{1/2},
\]
and this gives $\rho = L + \tfrac{\alpha}{\sqrt{2}}$. Finally, the control law $u$ that drives $\sigma$ to zero in the finite time (19) is
\[
u = -c x_2 - \rho\,\operatorname{sign}(\sigma). \tag{22}
\]
Theorem 5: The sliding mode controller (22) drives the trajectories of the original system states $x_1(t)$ and $x_2(t)$ onto the sliding surface
\[
\sigma(x_1, x_2) = 0, \tag{23}
\]
where $\sigma$ is determined by (15), in a finite time $t_r$ bounded by (19). Thus, under the controller (22), the trajectories of the original system states $x_1(t)$ and $x_2(t)$ reach the sliding surface in the finite time $t_r$ and then converge to zero asymptotically, even in the presence of the disturbance $f(x_1, x_2, t)$.
Proof: Equation (23) implies $\dot{x}_1 + c x_1 = 0$ with $c > 0$, since $x_2 = \dot{x}_1$. The general solution of this first-order equation (valid once the sliding surface is reached) is
\[
x_1(t) = x_1(0)\, e^{-ct}, \qquad x_2(t) = -c\, x_1(0)\, e^{-ct},
\]
so both $x_1(t)$ and $x_2(t)$ converge to zero exponentially, and this completes the proof.
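A minimal closed-loop simulation sketch of the unit mass (14) under the sliding mode controller (22); the disturbance signal, the gains $c$, $L$, $\alpha$, and the initial conditions below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 1.5                                   # sliding-variable gain, c > 0
L_bound = 1.0                             # disturbance bound, |f| <= L
alpha = 1.0
rho = L_bound + alpha / np.sqrt(2.0)      # controller gain from the derivation above

f = lambda x1, x2, t: np.sin(2.0 * t)     # bounded disturbance, |f| <= 1

def closed_loop(t, x):
    x1, x2 = x
    sigma = x2 + c * x1                   # sliding variable (15)
    u = -c * x2 - rho * np.sign(sigma)    # sliding mode control law (22)
    return [x2, u + f(x1, x2, t)]         # unit-mass dynamics (14)

sol = solve_ivp(closed_loop, (0.0, 10.0), [1.0, -2.0], max_step=1e-3)
x1, x2 = sol.y
print("final state:", x1[-1], x2[-1])
print("final sliding variable:", x2[-1] + c * x1[-1])
```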