4CM40_Physical_and_data-driven_modelling Notes Chapter6
6.1 Introduction
The focus will be on model order reduction of linear time-invariant systems. Before discussing new
methods, some extensions to the results of Chapter 4 will be given. Suppose that a (high order)
continuous-time strictly proper time-invariant state space model (A, B, C) is to be approximated by a
system in the same class but of lower order. The given system has zero initial state:
d/dt x(t) = A x(t) + B u(t),   x(t0) = 0,   x(t) ∈ R^n
y(t) = C x(t)      (6.1)
and its transfer function matrix is
G(s) = C(sIn − A)^{−1} B      (6.2)
CHAPTER 6. MODEL APPROXIMATION FOR LINEAR SYSTEMS
Cr := Dr^{−1} C
Bc := B Dc      (6.5)
Then the transfer function matrix G(s) (6.2) is brought to the diagonally multiplied form

Grc(s) = Dr^{−1} G(s) Dc = Cr (sIn − A)^{−1} Bc      (6.6)

Under the assumptions to be made in Section 6.2.2, Grc(0) = −Cr A^{−1} Bc represents the steady-state gain of the system. One might consider Grc(0) as an important indicator for the magnitudes of the entries of Grc(s).
In Section 6.3, the following properties regarding the system (6.1) will be assumed to hold.
The next section will first consider some extensions to the method of modal approximation, discussed
previously in Chapter 4.
where λi ≠ 0 is assumed by virtue of the first assumption in 6.2.2. The contribution of each mode to the input-output behaviour of the system can be evaluated in various ways, leading to different approaches to arrive at a truncated model.
6.3. EXTENSIONS TO MODAL APPROXIMATION
where k·k denotes a matrix norm, then the contribution of mode i to the input-output behaviour can
be considered as very small, relative to other contributions. Because mode i is stable, its contribution
can be deleted in the summation (6.8) and thus in the model (Λ, B∗, C∗). This amounts to deleting column i in the matrices Λ and C∗, and deleting row i in the matrices Λ and B∗. This approach to
arrive at a reduced model has been mentioned before in Chapter 4.
then the contribution of mode i to the steady state gain matrix of the system can be considered as
very small. Because mode i is stable, it can be deleted in the summation (6.8) and thus in the model (Λ, B∗, C∗). This amounts to deleting column i in the matrices Λ and C∗, and deleting row i in the matrices Λ and B∗.
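As an illustration, the mode-deletion step can be sketched numerically. The Python fragment below is a minimal sketch (NumPy assumed; the example matrices and the helper name modal_truncate are illustrative, not taken from these notes): A is diagonalized and a weakly contributing mode is deleted.

```python
import numpy as np

# Sketch of modal truncation (assumes A diagonalizable with stable,
# nonzero eigenvalues; matrices and names here are illustrative).
def modal_truncate(A, B, C, keep):
    """Diagonalize A and retain only the modes indexed by `keep`."""
    lam, V = np.linalg.eig(A)        # A V = V diag(lam)
    Bs = np.linalg.solve(V, B)       # B* = V^{-1} B
    Cs = C @ V                       # C* = C V
    keep = list(keep)
    # Deleting mode i removes row and column i of Lambda,
    # row i of B* and column i of C*
    return np.diag(lam[keep]), Bs[keep, :], Cs[:, keep]

A = np.array([[-1.0, 0.0], [0.0, -100.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.01]])
slow = [int(np.argmin(np.abs(np.linalg.eigvals(A))))]
Ar, Br, Cr = modal_truncate(A, B, C, keep=slow)  # drop the fast, weak mode
```

Here the retained first-order model keeps the slow mode at −1; the deleted mode at −100 contributes little to the steady-state gain.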
[Figure: block diagrams. Top: the original system G(s) as the parallel connection of Goc(s) and its near-uncontrollable, near-unobservable complement. Bottom: the approximation Gk(s) = Goc(s), driven by u(s), and the resulting approximation error.]
G(s) is decomposed into the sum of its strongly controllable and observable part Goc(s) and its near-uncontrollable, near-unobservable part Ḡoc(s). A good candidate approximation of lower order k than the original system order n is Gk(s) = Goc(s). The transfer function G(s) − Gk(s) is the approximation error and in this case equals Ḡoc(s).
In the results to be discussed in this section, the Lyapunov matrix equation plays a prominent role.
• Z > 0 if and only if (F, G) is controllable, where G G^T := Q, i.e., G is any factor of Q
The result provides a physical interpretation for the relationship between the controllability of (F, Q)
and the nonsingularity of Z : the white noise input with intensity Q will only contribute to the state
variance over the complete state space (i.e., Z > 0) if (F, Q) is controllable. The system (6.1) under
the assumptions 6.2.2 allows the definition of the controllability Gramian matrix S = S^T, defined as

S = ∫₀^∞ e^{At} B B^T e^{A^T t} dt      (6.17)
The asymptotic stability of A guarantees the integral expressions to exist, and the solutions S and P are
unique. The Gramian matrices S and P can be given an interpretation by considering the system (6.1)
over the time interval t ∈ (−∞, ∞) with state x(0), and considering the energy of the future output
given x(0), and the minimal energy needed in the past to arrive at x(0) from the zero state at t = −∞.
Note that the energy of the future output given x(0) is ∫₀^∞ y^T(t) y(t) dt with u(t) = 0, t ≥ 0. The result
shows that if S^{−1} is large, which is the case if S is nearly singular, then there will be some states that can only be reached if a large input energy is used, i.e., these states are close to uncontrollability. If the
system is released from x0 at t = 0 with u(t) = 0, t ≥ 0 then if the observability Gramian matrix P is
nearly singular, a part of the initial condition will have little contribution to the output energy.
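These energy interpretations can be checked numerically. The fragment below is a sketch (Python with NumPy/SciPy; the second-order example system is illustrative): it computes the Gramians via the Lyapunov equations discussed later in this section and verifies that the output energy released from x0 with u = 0 equals x0^T P x0.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Numerical check of the output-energy interpretation of the observability
# Gramian (illustrative 2nd-order system, not taken from these notes).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, -1.0]])

# A S + S A^T + B B^T = 0   and   A^T P + P A + C^T C = 0
S = solve_continuous_lyapunov(A, -B @ B.T)
P = solve_continuous_lyapunov(A.T, -C.T @ C)

# Release the system from x0 at t = 0 with u = 0; the output energy
# int_0^inf y^T y dt should equal x0^T P x0
x0 = np.array([1.0, -0.5])
ts = np.linspace(0.0, 40.0, 20001)
y = np.array([(C @ expm(A * t) @ x0).item() for t in ts])
energy = float(np.sum(0.5 * (y[:-1] ** 2 + y[1:] ** 2) * np.diff(ts)))
```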
Adding this constraint to the expression to be minimised (6.21) by means of a Lagrange multiplier
vector λ leads to the Lagrangian
L(u(t), λ) = ∫_{−∞}^0 u^T(t) u(t) dt + λ^T [x0 − ∫_{−∞}^0 e^{−At} B u(t) dt]      (6.23)
which is a function of u(t) and λ. For a stationary point, small perturbations δu(t) and δλ are required to induce a zero perturbation in the Lagrangian. Thus
δL(u(t), λ) = 2 ∫_{−∞}^0 u^T(t) δu(t) dt − λ^T ∫_{−∞}^0 e^{−At} B δu(t) dt + δλ^T [x0 − ∫_{−∞}^0 e^{−At} B u(t) dt]      (6.24)
must be zero for any perturbations δu(t) and δλ. From this equation it follows that the necessary
conditions for a stationary point are equation (6.22) and
u^T(t) = (1/2) λ^T e^{−At} B      (6.25)
Using (6.25) in (6.22) gives
x0 = (1/2) ∫_{−∞}^0 e^{−At} B B^T e^{−A^T t} dt λ = (1/2) S λ      (6.26)
λ = 2 S^{−1} x0      (6.27)
which establishes the result. Although the Gramian matrices S and P are defined as solutions to (6.17),
(6.18), respectively, they can be computed more efficiently on the basis of purely algebraic relations, as
the following result shows.
• S ≥ 0 and P ≥ 0
6.4. APPROXIMATION BY GRAMIAN-BASED BALANCE/TRUNCATE
• These matrices are the solution to the linear matrix equations (Lyapunov equations)
A S + S A^T + B B^T = 0
A^T P + P A + C^T C = 0      (6.30)
• (A, B) is controllable if and only if S > 0, and (A, C) is observable if and only if P > 0.
The result can be shown as follows.
which shows the second assertion. To prove the third assertion, observe that the linear equations (6.30)
in fact define a linear mapping between the n × n entries of −C T C and the n × n entries of P. By
bringing the matrix A to Jordan form, it can be shown that the eigenvalues of this map are given by λi(A) + λj(A), i, j = 1, . . . , n. The asymptotic stability of A guarantees that no eigenvalue of this map actually is zero, i.e., the mapping is invertible and thus the solution P is unique, and this shows the third assertion. Finally,
suppose that x ∈ Rn is nonzero and satisfies P x = 0. Then
x^T P x = ∫₀^∞ x^T e^{A^T t} C^T C e^{At} x dt = 0      (6.34)
and thus
C e^{At} x = 0   ∀ t ≥ 0      (6.35)
Expanding the left side of this equation in a Taylor series about t = 0, this equation requires that
C x = 0
C A x = 0
C A² x = 0
⋮
C A^{n−1} x = 0      (6.36)
or equivalently

[ C
  C A
  C A²
  ⋮
  C A^{n−1} ] x = 0      (6.37)
which contradicts observability. Thus such a nonzero vector x does not exist, i.e., P is nonsingular
and consequently, observability implies P > 0. Taking these steps in reverse order shows that P > 0
implies observability. Repeating the dual arguments for S and for controllability shows the complete
final assertion in the result. The eigenvalues of S and P close to zero indicate whether the system is
close to uncontrollability and unobservability, respectively. Again, these eigenvalues are not invariant
under a similarity transformation. Suppose that the system (A, B, C) is brought under system similarity
to (T −1 AT, T −1 B, C T ) for some nonsingular transformation matrix T . Then the matrices S and P
transform to
Ŝ = T^{−1} S T^{−T}
P̂ = T^T P T      (6.38)
respectively. If T is chosen large then the eigenvalues of P̂ increase and those of Ŝ decrease. However,
the eigenvalues of the product S P are invariant under a similarity transformation, because
Ŝ P̂ = T^{−1} S P T      (6.39)
where by convention
The Hankel norm ||G(s)||_H of the transfer function matrix G(s) is defined as

||G(s)||_H := {λmax(S P)}^{1/2}      (6.42)
The Hankel singular values and Hankel norm as defined here are invariants of the input-output
relationship, i.e., their values are independent of the internal state space realization used to compute
them. The eigenvalues of S can be balanced against those of P by selecting T such that the eigenvalues
of T −1 ST −T and those of T T P T are equal and equal the Hankel singular values.
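The invariance of the Hankel singular values under state transformation can be verified numerically. The fragment below is a sketch (Python with NumPy/SciPy; the example system and the transformation T are illustrative): the Gramians change according to (6.38), but the eigenvalues of S P do not.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# The Hankel singular values, the square roots of the eigenvalues of S P,
# are invariant under state transformations (illustrative example system).
A = np.array([[-1.0, 0.2], [0.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 1.0]])

S = solve_continuous_lyapunov(A, -B @ B.T)
P = solve_continuous_lyapunov(A.T, -C.T @ C)
hsv = np.sort(np.sqrt(np.linalg.eigvals(S @ P).real))[::-1]
hankel_norm = hsv[0]          # ||G||_H = sqrt(lambda_max(S P)), cf. (6.42)

# Transform the state by an arbitrary nonsingular T: S and P change
# according to (6.38), but the eigenvalues of S P do not
T = np.array([[5.0, 1.0], [0.0, 0.2]])
Ti = np.linalg.inv(T)
S2 = solve_continuous_lyapunov(Ti @ A @ T, -(Ti @ B) @ (Ti @ B).T)
P2 = solve_continuous_lyapunov((Ti @ A @ T).T, -(C @ T).T @ (C @ T))
hsv2 = np.sort(np.sqrt(np.linalg.eigvals(S2 @ P2).real))[::-1]
```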
is said to be internally balanced, or shortly balanced, if the nonsingular transformation matrix T brings both T^{−1}ST^{−T} and T^T P T to diagonal form, such that both diagonal forms are equal and have the Hankel singular values on their diagonal. In the development to follow, a transformation matrix T that
realizes this requirement will be constructed. Define the eigenvalue decompositions for the symmetric
matrices S and P that are solutions of the Lyapunov equations (6.30) :
S =: Uc Σc² Uc^T
P =: Uo Σo² Uo^T      (6.44)
where
In order to derive the required balancing state transformation, define the matrix
H := Σo Uo^T Uc Σc      (6.46)

with singular value decomposition

H =: U_H Σ_H V_H^T      (6.47)

The balancing transformation is

T = Uc Σc V_H Σ_H^{−1/2}      (6.48)

with inverse

T^{−1} = Σ_H^{−1/2} U_H^T Σo Uo^T      (6.49)
Ŝ = T^{−1} S T^{−T}
  = Σ_H^{−1/2} U_H^T Σo Uo^T Uc Σc² Uc^T Uo Σo U_H Σ_H^{−1/2}
  = Σ_H^{−1/2} U_H^T H H^T U_H Σ_H^{−1/2}
  = Σ_H^{−1/2} U_H^T U_H Σ_H V_H^T V_H Σ_H U_H^T U_H Σ_H^{−1/2}
  = Σ_H      (6.50)

and similarly

P̂ = Σ_H      (6.51)
As Σ_H is a diagonal matrix, it follows that the eigenvalues of Ŝ P̂ are the diagonal entries of Σ_H². As P > 0 and S > 0, it follows that Σ_H > 0. Suppose that the diagonal entries of Σ_H = diag(σ_H1, σ_H2, . . . , σ_Hn), which actually are the Hankel singular values of the transfer function matrix of the system, have been ordered in accordance with (6.41) as

σ_H1 ≥ σ_H2 ≥ · · · ≥ σ_Hn > 0      (6.52)

Partition Σ_H as

Σ_H = [Σ_H1  0;  0  Σ_H2]      (6.53)

and suppose that the diagonal entries in Σ_H2 are much smaller than the diagonal entries in Σ_H1. This
means that the balanced system contains states that are almost uncontrollable/unobservable. After the similarity transformation (6.43) by T (6.48), the system (6.1) becomes

d/dt [x̂1(t); x̂2(t)] = [Â11  Â12; Â21  Â22] [x̂1(t); x̂2(t)] + [B̂1; B̂2] u(t)
y(t) = [Ĉ1  Ĉ2] [x̂1(t); x̂2(t)]      (6.54)

and the Lyapunov equations (6.30) take the partitioned form

[Â11  Â12; Â21  Â22] [Σ_H1  0; 0  Σ_H2] + [Σ_H1  0; 0  Σ_H2] [Â11^T  Â21^T; Â12^T  Â22^T] + [B̂1; B̂2] [B̂1^T  B̂2^T] = 0      (6.55)

[Â11^T  Â21^T; Â12^T  Â22^T] [Σ_H1  0; 0  Σ_H2] + [Σ_H1  0; 0  Σ_H2] [Â11  Â12; Â21  Â22] + [Ĉ1^T; Ĉ2^T] [Ĉ1  Ĉ2] = 0      (6.56)
The partitioning of the matrices, vectors and equations in (6.54), (6.55) and (6.56) is assumed to be
consistent with the partitioning of 6 H (6.53). The model reduction step is formulated in the following
result.
d/dt x̃1(t) = Â11 x̃1(t) + B̂1 u(t)
ỹ(t) = Ĉ1 x̃1(t)      (6.57)
is a reduced order approximation of the system (6.1) of order k. The approximation error satisfies the
frequency domain error bounds:
where
and the ∞-norm of a transfer function matrix is defined as the supremum over all frequencies of the maximum singular value of the transfer function matrix:

||G(s)||_∞ := sup_ω σ̄(G(jω))

where σ̄ denotes the maximum singular value of the matrix argument. The method of balancing and
truncation as described in Section 6.4.8 can be applied to real-world problems as easily as modal
truncation. The numerical techniques presently available for the solution of Lyapunov equations are
reliable and can handle high order systems of state dimension one hundred or higher.
1. Equations (6.55) and (6.56) have the following equations as their upper left partition:

Â11 Σ_H1 + Σ_H1 Â11^T + B̂1 B̂1^T = 0
Â11^T Σ_H1 + Σ_H1 Â11 + Ĉ1^T Ĉ1 = 0

As Σ_H1 > 0 it follows from result 6.4.1 that Â11 is asymptotically stable, and that the system (6.57) is controllable and observable.
3. The eigenvalues of Â11 in the truncated system (6.57) in general are not a subset of the eigenvalues
of the matrix A in the original system (6.1).
4. The original system need not necessarily satisfy all of the assumptions 6.2.2. If the original high
order system is not asymptotically stable, then the unstable eigenvalues of the matrix A should be
retained in the reduced order model. Thus one can bring the system to Jordan form, decompose
the system into a parallel representation of an antistable system and an asymptotically stable
system, balance and truncate the asymptotically stable system and consider the parallel sum of
antistable system and truncated stable system as the final result of the model reduction procedure.
5. For balancing and truncation, it is not relevant whether the system has diagonal Jordan form or
not.
6. If the original system is not controllable or not observable, then the uncontrollable part and
the unobservable part of the system have to be removed before the procedure for balancing
and truncation can be applied. As the uncontrollable part and the unobservable part have no
contribution to the input-output behaviour, their removal can be viewed as an initial truncation
step for which no balancing is required.
The error bounds (6.58) only show a single property of the error behaviour of the approximation by
balancing and truncation. It may be worthwhile to analyse the error G(jω) − G k (jω) in the frequency
domain. If the error behaviour is not satisfactory, for instance because relatively large errors occur in a
frequency region that is thought to be of importance, then frequency weighting can be applied. In this
sense, an important frequency region for models to be used in feedback control system design might be
the cross-over region of the feedback loop.
where W (s), V (s) are asymptotically stable filters. Here, V (s) is the input weighting filter, W (s) is
the output weighting filter. As we consider multivariable systems, the transfer function matrices of the
system and of an input or output weighting filter in general will not commute, so that both input and
output weighting must be considered in order to cover all possible cases. The objective then is to make the frequency-weighted error

W(s) [G(s) − G_k(s)] V(s)

small in some sense. The operation of balancing and truncation, applied to a realisation of W(s)G(s)V(s), will in general not yield a result of the form W(s)G_k(s)V(s). A result, which turns out to work very well in
practice but for which no error bounds can be given, is the following. Let (Aw , Bw , Cw , Dw ) and
(Av , Bv , Cv , Dv ) be (minimal) realisations of the stable filter W (s) and V (s), respectively. Then a
realisation of W (s)G(s)V (s) is
d/dt [xw(t); x(t); xv(t)] = [Aw  BwC  0;  0  A  BCv;  0  0  Av] [xw(t); x(t); xv(t)] + [0; BDv; Bv] uv(t)

yw(t) = [Cw  DwC  0] [xw(t); x(t); xv(t)]      (6.65)
Let the Gramians (6.17), (6.18) be computed by solving the Lyapunov equations (6.30) for the system
(6.65):
S_WGV = [·  ·  ·;  ·  S  ·;  ·  ·  ·]      P_WGV = [·  ·  ·;  ·  P  ·;  ·  ·  ·]      (6.66)
The block partitioning is conformal to the partitioning of the state in the system (6.65). The blocks
indicated by (·) are of no importance to the further results. Now consider the symmetric blocks S
and P having the dimension n × n. Balancing (A, B, C) such that S = P = 6 H and truncating
(A, B, C) provides a result in which the filters have contributed in forming S and P. The part of
the original system that is almost uncontrollable/unobservable when considered in conjunction with
the input and output weighting filters has been deleted. The selection of the filters has an influence
on the frequency distribution of the error. One needs an interactive sequence of filter adjustments, combined with visual inspection, to arrive at the best results. The Matlab Weighted Order Reduction Toolbox of Wortelboer [30] provides precisely these steps. One possible choice for weighting
functions W (s) or V (s) can be determined from the requirement that the system model G(s) is to
be used in a feedback system in closed loop with a controller K (s). The controller K (s) might be
the result of a control design using the high order system. The arrangement is the standard feedback
system of Fig. 6.2. The closed-loop transfer function is G(s)[I + K (s)G(s)]−1 K (s). Thus the choice
V (s) = [I + K (s)G(s)]−1 K (s) for the input weighting and unit output weighting leads to a reduction
where the closed-loop relevant dynamics tend to be dominantly retained in the reduced order truncated
model.
Note that this approach aims at making G k (s)V (s) resemble G(s)V (s). This is only a step in the right
direction, as the real objective is to make the error
small. An iterative procedure where G(s) in the expression for V (s) is replaced by the most recent
reduced order model can lead to a better solution. However, convergence of such a procedure can not
be guaranteed.
Observe that a relation replacing (6.61) does not exist for the general frequency-weighted case. Con-
sequently, the stability of a reduced-order model resulting from frequency-weighted balancing and
truncation can not be guaranteed. On the other hand, the closed-loop reduction method can be applied
to unstable plants G(s) that are stabilized by the controller K (s).
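The frequency-weighted Gramian computation (6.65)-(6.66) can be sketched as follows. The fragment is illustrative (Python with NumPy/SciPy; the plant and the first-order weighting filters are invented for the example): it builds the augmented realisation of W(s)G(s)V(s), solves its Lyapunov equations, and extracts the middle n × n blocks as weighted Gramians.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch of the frequency-weighted Gramians (6.65)-(6.66); all numerical
# values are illustrative, not from the notes.
A = np.array([[-1.0, 1.0], [0.0, -5.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# First-order input weight V(s) and output weight W(s)
Av, Bv, Cv, Dv = np.array([[-2.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]])
Aw, Bw, Cw, Dw = np.array([[-10.0]]), np.array([[1.0]]), np.array([[10.0]]), np.array([[0.0]])
nw, nv = Aw.shape[0], Av.shape[0]

# Augmented realisation (6.65) of W(s) G(s) V(s), state [xw; x; xv]
Aa = np.block([
    [Aw, Bw @ C, np.zeros((nw, nv))],
    [np.zeros((n, nw)), A, B @ Cv],
    [np.zeros((nv, nw)), np.zeros((nv, n)), Av],
])
Ba = np.vstack([np.zeros((nw, 1)), B @ Dv, Bv])
Ca = np.hstack([Cw, Dw @ C, np.zeros((1, nv))])

Swgv = solve_continuous_lyapunov(Aa, -Ba @ Ba.T)
Pwgv = solve_continuous_lyapunov(Aa.T, -Ca.T @ Ca)

# Weighted Gramians of G itself: the middle n x n blocks of (6.66)
S_w = Swgv[nw:nw + n, nw:nw + n]
P_w = Pwgv[nw:nw + n, nw:nw + n]
```

Balancing (A, B, C) with S_w and P_w in place of S and P then deletes the part of the system that is weakly controllable/observable through the filters.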
[Figure: Hankel singular values of the example system, log10(σ_k) against k = 0, . . . , 12, dropping from about 10^{−1} to below 10^{−9}.]

[Figure: step responses, output response against normalized time t ∈ [0, 1.5], for the original and reduced order models.]

[Figure: Bode diagram, gain (dB) and phase (degrees) against normalized frequency (rad/s) from 10^{−1} to 10^{3}.]
and the linear discrete-time system with zero initial state

z_{k+1} = F z_k + G u_k,   z_0 = 0
y_k = H z_k      (6.68)
(sIn − A)(s^{−1} In + s^{−2} A + s^{−3} A² + · · ·) = In      (6.70)

and consequently

C(sIn − A)^{−1} B = s^{−1} C B + s^{−2} C A B + s^{−3} C A² B + · · · = Σ_{i=1}^{∞} Di s^{−i}

H(zIn − F)^{−1} G = z^{−1} H G + z^{−2} H F G + z^{−3} H F² G + · · · = Σ_{i=1}^{∞} Di z^{−i}      (6.71)
where
are the Markov parameters of the system. As the equations for the continuous time case and the discrete
time case are similar, the theory of realization of state space models on the basis of input-output data
will be elaborated for the discrete time case only. The relationship between input and output variables is
yk = (z −1 H G + z −2 H F G + z −3 H F 2 G + · · · )u k (6.73)
In the time domain, the operator z −1 represents a delay of one time interval. For a unit impulse input
signal, u k = 1 for k = 0 and u k = 0 for k = 1, 2, . . . . Then (6.73) states that the impulse response
values at successive time instants
y1 = H G, y2 = H F G, y3 = H F 2 G, ...
are given by the Markov parameters (6.72) of the system. Thus the Markov parameters determine both
the formal power series development of the transfer function matrix (6.71), and the impulse response
sequence. The matrix triple (F, G, H ) uniquely determines the sequence of Markov parameters. Let
( F̃, G̃, H̃ ) be related to (F, G, H ) through
F̃ = T F T^{−1}
G̃ = T G      (6.74)
H̃ = H T^{−1}

Then

Di = H F^{i−1} G      (6.75)
   = H T^{−1} T F^{i−1} T^{−1} T G
   = H̃ F̃^{i−1} G̃      (6.76)
3. Only the controllable and observable part of (F, G, H ) contribute in forming the Markov
parameters
4. A matrix triple (F, G, H ) will be called a minimal realization of a sequence of Markov parame-
ters Di , i = 1, 2, . . . if (6.75) is satisfied and (F, G, H ) is controllable and observable.
In general, for a given sequence of Markov parameters it is only useful to determine a minimal
realization, as any other realization modulo similarity can be determined from this minimal realization.
Block matrices in which the block entry i, j only depends on i + j are called block Hankel matrices. A
block Hankel matrix has identical blocks in skew diagonal direction. Let a transfer function matrix
G(z) be given as a formal power series,
G(z) = z −1 D1 + z −2 D2 + z −3 D3 + · · · (6.77)
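The construction of the block Hankel matrix from Markov parameters can be sketched as follows (Python with NumPy; the example system is illustrative). For a minimal (F, G, H) of order n, the rank of the block Hankel matrix equals n.

```python
import numpy as np

# Markov parameters D_i = H F^{i-1} G and the block Hankel matrix with
# block entry (i, j) = D_{i+j-1} (illustrative discrete-time system).
def markov(F, G, H, N):
    D, X = [], G.copy()
    for _ in range(N):
        D.append(H @ X)
        X = F @ X
    return D

def block_hankel(D, k1, k2):
    """Identical blocks along the skew diagonals: block (i, j) is D_{i+j-1}."""
    return np.block([[D[i + j] for j in range(k2)] for i in range(k1)])

F = np.array([[0.0, 1.0], [-0.5, -1.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
D = markov(F, G, H, 8)
HE = block_hankel(D, 4, 4)
# For this minimal 2nd-order system, rank HE = 2
```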
6.5. INPUT-OUTPUT MODELS
directly yields y(z) = G(z)u(z), which shows the first part of the result. To prove the second part,
suppose that (F, G, H ) is a minimal realization of the sequence of Markov parameters D1 , D2 , . . . ,
i.e.,
Di = H F i−1 G i = 1, 2, . . . (6.82)
Define
Then
[Im  0  0  0  · · · ;  0  G  FG  F²G  · · ·] [u(z); x1(z); x2(z); x3(z); ⋮] = [u(z); x(z)]      (6.85)

As

[0  H;  G  F − zIn] [u(z); x(z)] = [y(z); G u(z) + (F − zIn) x(z)] = [y(z); 0]      (6.86)

it follows that

[0  H;  G  F − zIn] [Im  0  0  0  · · · ;  0  G  FG  F²G  · · ·] [u(z); x1(z); x2(z); x3(z); ⋮] = [y(z); 0]      (6.87)

Premultiplication of (6.87) by

[Il  0;  0  H;  0  HF;  0  HF²;  ⋮  ⋮]      (6.88)
and the equation (6.82) yields (6.80), which shows the second part of the result. For the following result,
define the rank of the infinite matrix HE as the maximum amongst the ranks of any finite submatrix
of HE .
by deleting all columns and rows in (6.89) except the rows 1, 2, . . . , l, l + i 1 , l + i 2 , . . . , l + i n and
except the columns 1, 2, . . . , m, m + j1 , m + j2 , . . . , m + jn . Then a minimal realization for
G(z) = z −1 D1 + z −2 D2 + z −3 D3 + · · ·
is given by
F = Ê^{−1} Â
G = Ê^{−1} B̂
H = Ĉ      (6.91)
and thus the observability matrix and controllability matrix both are of rank n. The realization (6.91)
is one element of a class of equivalent minimal realizations. Any other minimal realization of the
sequence of Markov parameters Di , i = 1, 2, . . . is related to (6.91) by a similarity transformation
(6.74).
D1 , D2 , . . . = 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, . . .
Due to the periodic nature of the sequence of Markov parameters, the infinite matrix HE repeats itself
every 4 rows or columns. As the first 4 rows or columns are linearly independent, rank HE = 4 =: n.
The matrix P̂(z) (6.90) thus can be formed from the first five rows and columns of (6.89):

P̂(z) = [0 1 2 3 0
        1 2 3 0 1
        2 3 0 1 2
        3 0 1 2 3
        0 1 2 3 0]  −  z [0 0 0 0 0
                          0 1 2 3 0
                          0 2 3 0 1
                          0 3 0 1 2
                          0 0 1 2 3]

     = [0   Ĉ
        B̂   Â − zÊ]
Using
Ê^{−1} = [ 0.0417   0.0417   0.2917  −0.2083
           0.0417   0.2917  −0.2083   0.0417
           0.2917  −0.2083   0.0417   0.0417
          −0.2083   0.0417   0.0417   0.2917]
a realization follows as

F = Ê^{−1} Â = [0 0 0 1
               1 0 0 0
               0 1 0 0
               0 0 1 0],      G = Ê^{−1} B̂ = [1
                                              0
                                              0
                                              0]

H = Ĉ = [1 2 3 0]
Note that one could equally well bring the matrix

[B̂ Â] = [1 2 3 0 1
         2 3 0 1 2
         3 0 1 2 3
         0 1 2 3 0]

by elementary row operations to the pair [G F], so that

[HG  HFG  HF²G  HF³G  HF⁴G  · · ·] = [1  2  3  0  1  · · ·]
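The example above can be replayed numerically (Python with NumPy): the script forms Ê, Â, B̂, Ĉ from the periodic sequence and verifies that the realization reproduces the Markov parameters.

```python
import numpy as np

# Replay of the example: periodic Markov sequence 1, 2, 3, 0, 1, 2, 3, 0, ...
seq = [1.0, 2.0, 3.0, 0.0] * 3                                        # D_1..D_12
E = np.array([[seq[i + j] for j in range(4)] for i in range(4)])      # E-hat
As = np.array([[seq[i + j + 1] for j in range(4)] for i in range(4)]) # A-hat
Bh = np.array(seq[:4]).reshape(4, 1)                  # B-hat: first column
Ch = np.array(seq[:4]).reshape(1, 4)                  # C-hat: first row

F = np.linalg.solve(E, As)    # F = E^{-1} A-hat, cf. (6.91)
G = np.linalg.solve(E, Bh)    # G = E^{-1} B-hat
H = Ch

# The realization reproduces the Markov parameters
D = []
X = G
for _ in range(8):
    D.append((H @ X).item())
    X = F @ X
```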
When we investigate the row and column dependencies in the infinite block Hankel matrix HE , we
see that the rows {1, 2, 4, 6} and the columns {1, 2, 3, 4} are linearly independent, and all other rows
and columns can be formed as linear combinations of the indicated ones. Thus rank HE = 4 =: n.
The matrix P̂(z) (6.90) can be formed from the rows {1, 2, 3, 4, 6, 8} and columns {1, 2, 3, 4, 5, 6} of (6.89):

P̂(z) = [0 0 0 0 0 1
        0 0 0 1 0 1
        0 0 0 1 0 1
        0 1 0 1 1 1
        0 1 1 1 1 1
        1 1 1 1 1 1]  −  z [0 0 0 0 0 0
                            0 0 0 0 0 0
                            0 0 0 0 0 1
                            0 0 0 1 0 1
                            0 0 0 1 1 1
                            0 0 1 1 1 1]

     = [0   Ĉ
        B̂   Â − zÊ]
Now the matrix [B̂ Â] is brought by elementary row operations to a form where the first four columns form a unit matrix, to obtain the resulting matrix pair [G F]:

[G F] = [1 0  0 0 0 0
         0 1  0 0 1 0
         0 0  1 0 0 0
         0 0  0 1 0 1],      H = Ĉ = [0 0 0 1
                                      0 1 0 1]
so that

[HG  HFG  HF²G  HF³G  HF⁴G  · · ·] = [0 0  0 1  0 1  1 1  1 1  · · ·
                                      0 1  0 1  1 1  1 1  1 1  · · ·]
H = U [Σ  0;  0  0] V^T      (6.94)
1. The diagonal entries σ1 , σ2 , . . . , σr are called the singular values of the matrix H .
4. The rank of H is r
5. The columns of U and V are called the left singular vectors and the right singular vectors,
respectively
6. The Frobenius norm ||H||_F of the matrix H equals (σ1² + σ2² + · · · + σr²)^{1/2}
7. The 2-norm kH k2 satisfies kH k2 = σ1 , i.e., the 2-norm of a matrix equals the maximum singular
value (which often is denoted as σ (H )).
The way in which the concept of singular values allows the approximation of matrices is shown in the
following result.
H_ρ = U [Σ_ρ  0;  0  0] V^T      (6.97)

where

Σ_ρ = diag(σ1, σ2, . . . , σ_ρ),   ρ ≤ r      (6.98)
Then
The result states that if a matrix is to be approximated optimally in the sense of the 2-norm or in the
sense of the Frobenius norm by a matrix of lower rank, then the singular value decomposition of the
matrix provides the optimal solution. The optimal solution is found by discarding the smallest singular
values in the singular value decomposition, until the rank constraint is met.
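A minimal numerical illustration of this result (Python with NumPy; the matrix is random and purely illustrative): the truncated SVD attains the stated 2-norm and Frobenius-norm errors.

```python
import numpy as np

# Optimal rank-rho approximation by truncated SVD: discard the smallest
# singular values until the rank constraint is met (random test matrix).
rng = np.random.default_rng(0)
Hm = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(Hm, full_matrices=False)

rho = 2
H_rho = U[:, :rho] @ np.diag(s[:rho]) @ Vt[:rho, :]

err2 = np.linalg.norm(Hm - H_rho, 2)        # equals sigma_{rho+1}
errF = np.linalg.norm(Hm - H_rho, 'fro')    # equals sqrt(sum_{i>rho} sigma_i^2)
```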
G(z) = z −1 D1 + z −2 D2 + z −3 D3 + · · · (6.101)
Following the earlier development for the exact theory, a (modified) system matrix analogous to (6.89)
can be formulated, which now is finite:
[0     H_C
 H_B   H_A − z H_E]      (6.103)
6.6. NUMERICAL APPROACH TO REALIZATION
Instead of determining the rank n of HE and selecting n linearly independent rows and columns in HE ,
a more gradual approach will be followed, based upon the singular value decomposition of HE . This
takes the following form:
H_E = [U1  U2] [Σ1  0;  0  Σ2] [V1^T; V2^T]      (6.104)

Σ2 = [Σs  0;  0  0]      (6.105)
where Σs = diag(σ_{ρ+1}, σ_{ρ+2}, . . .). The value of ρ determines the partitioning of the set of singular values. Pre- and postmultiplication of (6.103) by
" # " #
Iρ 0 Iρ 0
−1
, − 21
, (6.106)
0 61 2 U1T 0 V1 61
−1
0 HC V1 61 2
1 (6.107)
−2 T −1 −1
61 U 1 H B 61 2 U1T H A V1 61 2 − z Iρ
−1 − 12
F = 61 2 U1T H A V1 61 (6.108)
− 12
G = 61 U1T H B
− 12
H = HC V1 61
The approach allows one to select a value of ρ such that only those singular values that are beyond a certain noise level are taken into consideration as contributing to the order of the approximate realization. If the order of approximation is chosen too large, then the noise in the Markov parameters is modelled as part of the realization.
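The approximate realization algorithm (6.104)-(6.108) can be sketched as follows (Python with NumPy; the data are exact Markov parameters of an illustrative 2nd-order system, so the numerical rank recovers the true order).

```python
import numpy as np

# Approximate realization (6.104)-(6.108): SVD of the finite Hankel matrix,
# truncation at order rho, then (F, G, H). Example system is illustrative.
F0 = np.array([[0.9, 0.3], [0.0, 0.5]])
G0 = np.array([[1.0], [1.0]])
H0 = np.array([[1.0, -1.0]])
D = []
X = G0
for _ in range(12):
    D.append((H0 @ X).item())
    X = F0 @ X

k1 = k2 = 5
HE = np.array([[D[i + j] for j in range(k2)] for i in range(k1)])
HA = np.array([[D[i + j + 1] for j in range(k2)] for i in range(k1)])
HB = np.array(D[:k1]).reshape(k1, 1)
HC = np.array(D[:k2]).reshape(1, k2)

U, s, Vt = np.linalg.svd(HE)
rho = int((s > 1e-8 * s[0]).sum())        # singular values above noise level
U1, S1h, V1 = U[:, :rho], np.diag(s[:rho] ** -0.5), Vt[:rho, :].T

F = S1h @ U1.T @ HA @ V1 @ S1h            # (6.108)
G = S1h @ U1.T @ HB
H = HC @ V1 @ S1h

# The identified triple reproduces the Markov parameters
Drec = []
X = G
for _ in range(8):
    Drec.append((H @ X).item())
    X = F @ X
```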
S_k = Σ_{i=1}^{k} D_i,   k = 1, 2, . . .      (6.109)
where Di are the coefficients of the impulse response matrix, i.e., the Markov parameters. If step
response matrix coefficients are available from an experiment, one possible redefinition of the approxi-
mate realization algorithm of Section 6.6.3 is as follows. Define the finite block Hankel matrices
H_E = [S1        S2        S3        · · ·   S_{k2}
       S2 − S1   S3 − S1   S4 − S1   · · ·   S_{k2+1} − S1
       S3 − S2   S4 − S2   S5 − S2   · · ·   S_{k2+2} − S2
       ⋮                                      ⋮
       S_{k1} − S_{k1−1}   S_{k1+1} − S_{k1−1}   · · ·   S_{k1+k2−1} − S_{k1−1}]

H_A = [S2 − S1   S3 − S1   S4 − S1   · · ·   S_{k2+1} − S1
       S3 − S2   S4 − S2   S5 − S2   · · ·   S_{k2+2} − S2
       S4 − S3   S5 − S3   · · ·             S_{k2+3} − S3
       ⋮                                      ⋮
       S_{k1+1} − S_{k1}   S_{k1+2} − S_{k1}   · · ·   S_{k1+k2} − S_{k1}]

H_C = [S1   S2   S3   S4   · · ·   S_{k2}]

H_B = [S1
       S2 − S1
       S3 − S2
       S4 − S3
       ⋮
       S_{k1} − S_{k1−1}]      (6.110)
and proceed with the approximate realization algorithm exactly as in Section 6.6.3. The matrices HE ,
H A and HC in (6.110) follow from the matrices in (6.102) by column operations under strict system
equivalence, defined by postmultiplication by the matrix
[I  I  I  · · ·  I
 0  I  I  · · ·  I
 0  0  I  · · ·  I
 ⋮   ⋮   ⋮   ⋱   ⋮
 0  0  0  · · ·  I]      (6.111)
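The step-response construction (6.110) can be sketched as follows (Python with NumPy; the data are illustrative). The script also verifies that these matrices equal the impulse-response versions postmultiplied by the triangular matrix of ones (6.111).

```python
import numpy as np

# Hankel matrices (6.110) built from step-response samples S_k = D_1+...+D_k
# (illustrative discrete-time SISO system).
F0 = np.array([[0.8, 0.4], [0.0, 0.3]])
G0 = np.array([[1.0], [0.5]])
H0 = np.array([[1.0, 1.0]])
D = []
X = G0
for _ in range(12):
    D.append((H0 @ X).item())
    X = F0 @ X
S = np.cumsum(D)           # step response: S[k-1] holds S_k

k1 = k2 = 5
HE = np.array([[S[j] if i == 0 else S[i + j] - S[i - 1] for j in range(k2)]
               for i in range(k1)])
HA = np.array([[S[i + j + 1] - S[i] for j in range(k2)] for i in range(k1)])
HC = S[:k2].reshape(1, k2)
HB = np.concatenate(([S[0]], np.diff(S[:k1]))).reshape(k1, 1)

# They equal the impulse-response versions postmultiplied by the unit
# upper triangular matrix of ones, i.e. the column operations (6.111)
Tones = np.triu(np.ones((k2, k2)))
HE_D = np.array([[D[i + j] for j in range(k2)] for i in range(k1)])
HA_D = np.array([[D[i + j + 1] for j in range(k2)] for i in range(k1)])
HC_D = np.array(D[:k2]).reshape(1, k2)
```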
The asymptotic stability of A implies that the time moments of the system are well defined, and that
the matrix A is invertible. Then
and consequently
The coefficients in this series development are representative for the time moments of the system, as
discussed in Chapters 2 and 4. If a model is asymptotically stable and it fits the first k coefficients of
(6.115) in its series development about s = 0, then it has its first k time moments in common with
the system (A, B, C). Suppose that the coefficients in this formal series expansion are considered as
Markov parameters:
D_j = −C A^{−j} B,   j = 1, 2, . . .      (6.116)
and suppose further that a state space model realization (F, G, H ) is determined for the sequence
D1 , D2 , . . . using the approach discussed in the previous sections. Then
Di = H F i−1 G i = 1, 2, . . . (6.117)
A = F^{−1}
B = G
C = −H F^{−1}      (6.118)
The result provides a Padé or time moment matching approximation algorithm in state space form
using the numerical realization algorithms of the previous sections.
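The time-moment matching procedure can be sketched as follows (Python with NumPy; the 2nd-order example system is illustrative): the moments −C A^{−j} B are treated as Markov parameters, realized by the SVD-based algorithm above, and mapped back via (6.118). Since the realized order equals n here, the resulting model matches the original transfer function exactly.

```python
import numpy as np

# Time moments as Markov parameters: D_j = -C A^{-j} B, realize, then map
# back via (6.118) (illustrative 2nd-order system, not from the notes).
A = np.array([[-1.0, 0.5], [0.0, -4.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])

Ai = np.linalg.inv(A)
D = [(-C @ np.linalg.matrix_power(Ai, j) @ B).item() for j in range(1, 9)]

k = 4
HE = np.array([[D[i + j] for j in range(k)] for i in range(k)])
HA = np.array([[D[i + j + 1] for j in range(k)] for i in range(k)])
U, s, Vt = np.linalg.svd(HE)
rho = int((s > 1e-10 * s[0]).sum())
U1, S1h, V1 = U[:, :rho], np.diag(s[:rho] ** -0.5), Vt[:rho, :].T
F = S1h @ U1.T @ HA @ V1 @ S1h
G = S1h @ U1.T @ np.array(D[:k]).reshape(k, 1)
H = np.array(D[:k]).reshape(1, k) @ V1 @ S1h

# Map the moment realization back to a state-space model (6.118)
Ar = np.linalg.inv(F)
Br = G
Cr = -H @ np.linalg.inv(F)
```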
References
[1] S. Barnett. Matrices in Control Theory. Van Nostrand Reinhold Company, London, UK, 1971.
[2] S. Barnett and C. Storey. Matrix Methods in Stability Theory. Nelson, London, UK, 1970.
[3] M. Bettayeb. New interpretation of balancing state space representations as an input-output energy
minimization. International Journal of Systems Science, 22:325–331, 1991.
[4] D. Bonvin and D. A. Mellichamp. A unified derivation and critical review of modal approaches to
model reduction. International Journal of Control, 35:829–848, 1982.
[5] O. H. Bosgra and A. J. J. van der Weiden. Input-output invariants for linear multivariable systems.
IEEE Transactions on Automatic Control, 25:20–36, 1980.
[6] R. W. Brockett. Finite Dimensional Linear Systems. John Wiley and Sons, Inc., New York, NY,
1970.
[7] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika,
1:211–218, 1936.
[8] D. Enns. Model Reduction for Control System Design. Doctoral Dissertation, Stanford University, Dept. Aero. Astr., Stanford, CA, 1984.
[9] D. F. Enns. Model reduction with balanced realizations: an error bound and a frequency weighted
generalization. Proceedings IEEE Conference on Decision and Control, pages 127–132, 1984.
[10] B. A. Francis. A Course in H∞ Control Theory. Lecture Notes in Control and Information Science,
vol. 88. Springer Verlag, Berlin, 1987.
[11] J. S. Freudenberg. Plant directionality, coupling and multivariable loop-shaping. International
Journal of Control, 51:365–390, 1990.
[12] K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their L∞ error bounds. International Journal of Control, 39:1115–1193, 1984.
[13] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, 1983.
[14] S. J. Hammarling. Numerical solution of the stable, non-negative definite Lyapunov equation.
IMA Journal of Numerical Analysis, 2:303–323, 1982.
[15] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge,
England, 1991.
[16] E. Jonckheere and Chingwo Ma. Combined sequence of Markov parameters and moments in
linear systems. IEEE Transactions on Automatic Control, 34:379–382, 1989.
[17] E. Jonckheere and Chingwo Ma. Recursive partial realization from the combined sequence of
Markov parameters and moments. Linear Algebra and its Applications, 122/123/124:565–590, 1989.
[18] R. E. Kalman. On minimal partial realizations of a linear input/output map. In R. E. Kalman
and N. DeClaris, editors, Aspects of Network and System Theory, pages 385–407. Holt, Rinehart and Winston, Inc., New York, NY, 1971.
[19] R. E. Kalman. Realization theory of linear dynamical systems. In Control Theory and Topics in
Functional Analysis, vol II, Proc. Int. Seminar Course, Trieste, Italy, Sept./Nov. 1974., pages 235–256.
International Atomic Energy Agency, Vienna, 1976.
[20] R. E. Kalman. On partial realizations, transfer functions, and canonical forms. Acta Polytechnica Scandinavica. Math. Comp. Sci. Ser., MA-31:9–32, 1979.
[21] B. C. Moore. Singular value analysis of linear systems. Proceedings IEEE Conference on
Decision and Control, pages 66–73, 1978.
[22] B. C. Moore. Principal component analysis in linear systems: controllability, observability, and
model reduction. IEEE Transactions on Automatic Control, 26:17–32, 1981.
[23] M. Morari and E. Zafiriou. Robust Process Control. Prentice Hall International Ltd, London, UK,
1989.
[24] R. J. Ober. Balanced realizations: canonical form, parametrizations, model reduction. Interna-
tional Journal of Control, 46:643–670, 1987.
[25] L. Pernebo and L. M. Silverman. Model reduction via balanced state space representations. IEEE
Transactions on Automatic Control, 27:382–387, 1982.
[26] V. L. Shrikhande, H. Singh, and L. M. Ray. On minimal realization of transfer function matrices
using Markov parameters and moments. Proceedings of the IEEE, 65:1717–1719, 1977.
[27] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York, NY, 1973.
[28] J. B. Waller and K. V. Waller. Defining directionality: Use of directionality measures with respect
to scaling. Industrial and Engineering Chemistry Research, 34:1244–1252, 1995.
[29] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, Oxford, UK, 1965.
[30] P. M. R. Wortelboer. Frequency-weighted balanced reduction of closed-loop mechanical servo-
systems: Theory and Tools. Doctoral Dissertation. Delft University of Technology, Delft, 1994.
[31] P. M. R. Wortelboer and O. H. Bosgra. Generalized frequency weighted balanced reduction. In Proceedings of the 31st IEEE Conference on Decision and Control, Tucson, Arizona, USA, 16–18 Dec. 1992, pages 2848–2849. IEEE, New York, NY, 1992.
[32] P. M. R. Wortelboer and O. H. Bosgra. Frequency weighted closed-loop order reduction in the control design configuration. In Proceedings of the 33rd IEEE Conference on Decision and Control, Lake Buena Vista, Florida, USA, December 14–16, 1994, pages 2714–2719. IEEE, New York, NY, 1994.
[33] K. Zhou. Error bounds for frequency weighted balanced truncation and relative error model reduction. In Proceedings of the 32nd IEEE Conference on Decision and Control, San Antonio, Texas, USA, December 15–17, 1993, pages 3347–3352. IEEE, New York, NY, 1993.
[34] K. Zhou, C. D’Souza, and J. R. Cloutier. Structurally balanced controller order reduction with
guaranteed closed loop performance. Systems and Control Letters, 24(4):235–242, 1995.
[35] Kemin Zhou. Frequency-weighted model reduction with L∞ error bounds. Systems and Control Letters, 21:115–125, 1993.