

Recursive least squares parameter identification algorithms for systems with colored noise using the filtering technique and the auxiliary model ✩

Feng Ding a,b,*, Yanjiao Wang a, Jie Ding c

a Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China
b Control Science and Engineering Research Center, Jiangnan University, Wuxi 214122, PR China
c School of Automation, Nanjing University of Posts and Telecommunications, Nanjing 210032, PR China

Article history: Available online xxxx

Keywords: Filtering technique; Parameter estimation; Recursive identification; Least squares; Box–Jenkins system

Abstract

This paper focuses on the parameter estimation problems of output error autoregressive systems and output error autoregressive moving average systems (i.e., the Box–Jenkins systems). Two recursive least squares parameter estimation algorithms are proposed by using the data filtering technique and the auxiliary model identification idea. The key is to use a linear filter to filter the input–output data. The proposed algorithms can identify the parameters of the system models and the noise models interactively and can generate more accurate parameter estimates than the auxiliary model based recursive least squares algorithms. Two examples are given to test the proposed algorithms.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

The development of information and communication technology has had a tremendous impact on our lives, e.g., through information filtering, optimization and estimation techniques [1–4]. In the areas of signal processing and system identification, the observed output signals always contain disturbances from the process environment [5–8]. The disturbances take different forms (white noise or colored noise). It is well known that the conventional recursive least squares (RLS) method generates biased parameter estimates in the presence of correlated or colored noise [9]. Thus the identification of output error models with colored noise has attracted much research interest [10]. The bias correction methods are considered very effective for dealing with output error models with colored noise [11,12]. However, the bias correction methods ignore the estimation of the noise models [13]. In this paper, we propose new identification methods for estimating the parameters of both the system model and the noise model.

Since the noise in real life can be fitted by autoregressive (AR) models, moving average (MA) models [14,15] or autoregressive moving average (ARMA) models [16], this paper considers the output error (OE) model with AR noise shown in Fig. 1 (the OEAR model for short), which can be expressed as

$$y(t) = \frac{B(z)}{A(z)}u(t) + \frac{1}{C(z)}v(t), \tag{1}$$

where $\{u(t)\}$ and $\{y(t)\}$ are the system input and output sequences, respectively, $\{v(t)\}$ is a white noise sequence with zero mean and variance $\sigma^2$, and $A(z)$, $B(z)$ and $C(z)$ are polynomials in the unit backward shift operator $z^{-1}$ [$z^{-1}y(t) = y(t-1)$]:

$$A(z) := 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a},$$
$$B(z) := b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{n_b} z^{-n_b},$$
$$C(z) := 1 + c_1 z^{-1} + c_2 z^{-2} + \cdots + c_{n_c} z^{-n_c}.$$

Assume that the orders $n_a$, $n_b$ and $n_c$ are known, and that $u(t) = 0$, $y(t) = 0$ and $v(t) = 0$ for $t \leqslant 0$. The coefficients $a_i$, $b_i$ and $c_i$ are the parameters to be estimated from the input–output data $\{u(t), y(t)\}$.

The model in (1) can be transformed into a new controlled autoregressive moving average (CARMA) form,

$$A(z)C(z)y(t) = B(z)C(z)u(t) + A(z)v(t),$$

or

$$A'(z)y(t) = B'(z)u(t) + D(z)v(t), \qquad A'(z) := A(z)C(z), \quad B'(z) := B(z)C(z), \quad D(z) := A(z).$$

✩ This work was supported by the National Natural Science Foundation of China (Nos. 61273194, 61203028) and the PAPD of Jiangsu Higher Education Institutions.
* Corresponding author at: Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, PR China.
E-mail addresses: [email protected] (F. Ding), [email protected] (Y. Wang), [email protected] (J. Ding).

Fig. 1. The output error autoregressive system.

This CARMA model can be identified using the recursive extended least squares algorithm [9,17]. However, the model in (1) contains only $(n_a + n_b + n_c)$ unknown parameters, while this new model contains $(2n_a + n_b + 2n_c)$ parameters, which increases the computational load of the identification algorithms. Moreover, some extra computation is required to recover the estimates of the parameters $b_i$ and $c_i$.
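As an aside (not part of the original derivation), this transformation is easy to check numerically: multiplying polynomials in $z^{-1}$ amounts to convolving their coefficient vectors. A minimal NumPy sketch, using for concreteness the polynomials of Example 1 in Section 6:

```python
import numpy as np

# Coefficients in ascending powers of z^{-1}, taken from Example 1 (Section 6).
A = np.array([1.0, 0.35, -0.28])   # A(z) = 1 + 0.35 z^-1 - 0.28 z^-2
B = np.array([0.0, 0.50, -0.60])   # B(z) = 0.50 z^-1 - 0.60 z^-2
C = np.array([1.0, 0.82])          # C(z) = 1 + 0.82 z^-1

# Polynomial products become coefficient convolutions.
A_prime = np.convolve(A, C)        # A'(z) = A(z)C(z): n_a + n_c = 3 parameters
B_prime = np.convolve(B, C)        # B'(z) = B(z)C(z): n_b + n_c = 3 parameters
D = A                              # D(z) := A(z):     n_a       = 2 parameters

# 3 + 3 + 2 = 8 = 2*n_a + n_b + 2*n_c parameters, versus n_a + n_b + n_c = 5
# in the original OEAR parameterization.
print(A_prime, B_prime, D)
```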
 
In practical industries, there exist many unmeasurable variables in systems, such as the state variables [18] and the inner variables or the noise-free outputs. In general, one can use the outputs of an appropriate auxiliary model to replace the unmeasurable variables for identification. The auxiliary model identification idea can be applied to linear systems containing unknown variables in the information vectors [19,20], nonlinear systems, dual-rate/multirate systems [21,22], and missing-data systems or systems with scarce measurements [23,24]. Recently, Chen et al. presented a data filtering based least squares iterative algorithm for parameter identification of output error autoregressive systems [25].

On the basis of the work in [25–27], this paper investigates the recursive identification problems of the OEAR models and the Box–Jenkins models using the filtering technique. Two-stage recursive least squares algorithms are proposed through filtering the input–output data. Since the OEAR models and the Box–Jenkins models involve both system models (the OE part) and noise models (the AR or ARMA part), the proposed algorithms can generate the parameter estimates of the system models and the noise models.

The rest of this paper is organized as follows. Section 2 gives the auxiliary model identification algorithm for OEAR systems. Section 3 analyzes the convergence of the auxiliary model based recursive generalized least squares algorithm. Section 4 derives a parameter estimation algorithm based on the data filtering technique. Section 5 briefly presents a filtering based identification algorithm for Box–Jenkins systems. Section 6 provides two examples to show the effectiveness of the proposed algorithms. Finally, some concluding remarks are given in Section 7.

2. The auxiliary model based recursive generalized least squares algorithm

Define the noise-free output $x(t)$ and the noise term $w(t)$ as

$$x(t) := \frac{B(z)}{A(z)}u(t), \qquad w(t) := \frac{1}{C(z)}v(t), \tag{2}$$

and the parameter vector $\vartheta$ and the information vector $\phi(t)$ as

$$\vartheta := \begin{bmatrix} \theta \\ c \end{bmatrix} \in \mathbb{R}^n, \quad n := n_a + n_b + n_c,$$
$$\theta := [a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b}]^T \in \mathbb{R}^{n_a+n_b},$$
$$c := [c_1, c_2, \ldots, c_{n_c}]^T \in \mathbb{R}^{n_c},$$
$$\phi(t) := \begin{bmatrix} \varphi(t) \\ \psi(t) \end{bmatrix} \in \mathbb{R}^n,$$
$$\varphi(t) := [-x(t-1), -x(t-2), \ldots, -x(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T \in \mathbb{R}^{n_a+n_b},$$
$$\psi(t) := [-w(t-1), -w(t-2), \ldots, -w(t-n_c)]^T \in \mathbb{R}^{n_c}.$$

Fig. 2. The output error autoregressive system with the auxiliary model.

Eqs. (2) and (1) can be written as

$$x(t) = [1 - A(z)]x(t) + B(z)u(t) = \varphi^T(t)\theta, \tag{3}$$
$$w(t) = [1 - C(z)]w(t) + v(t) = \psi^T(t)c + v(t), \tag{4}$$
$$y(t) = x(t) + w(t) = \phi^T(t)\vartheta + v(t). \tag{5}$$

A difficulty of identification is that $\phi(t)$ contains the unknown inner terms $x(t-i)$ and the unmeasurable noise terms $w(t-i)$. An effective method of estimating the parameter vector $\vartheta$ is to employ the auxiliary model identification idea in [19,28] as shown in Fig. 2, where $x_a(t) := \frac{B_a(z)}{A_a(z)}u(t)$ is the output of the auxiliary model. The unknown term $x(t-i)$ is replaced with the output $x_a(t-i)$ of the auxiliary model and the unknown noise term $w(t-i)$ is replaced with its estimate $\hat{w}(t-i)$ for parameter estimation. Define

$$\hat{\phi}(t) := \begin{bmatrix} \varphi_a(t) \\ \hat{\psi}(t) \end{bmatrix} \in \mathbb{R}^n,$$
$$\varphi_a(t) := [-x_a(t-1), -x_a(t-2), \ldots, -x_a(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T \in \mathbb{R}^{n_a+n_b},$$
$$\hat{\psi}(t) := [-\hat{w}(t-1), -\hat{w}(t-2), \ldots, -\hat{w}(t-n_c)]^T \in \mathbb{R}^{n_c}.$$

Referring to the methods in [19,28], we can obtain the auxiliary model based recursive generalized least squares (AM-RGLS) algorithm for generating the estimate $\hat{\vartheta}(t)$ of $\vartheta$:

$$\hat{\vartheta}(t) = \hat{\vartheta}(t-1) + L(t)[y(t) - \hat{\phi}^T(t)\hat{\vartheta}(t-1)], \tag{6}$$
$$L(t) = P(t)\hat{\phi}(t) = P(t-1)\hat{\phi}(t)[1 + \hat{\phi}^T(t)P(t-1)\hat{\phi}(t)]^{-1}, \tag{7}$$
$$P(t) = P(t-1) - L(t)L^T(t)[1 + \hat{\phi}^T(t)P(t-1)\hat{\phi}(t)], \quad P(0) = p_0 I, \tag{8}$$
$$\hat{\phi}(t) = \begin{bmatrix} \varphi_a(t) \\ \hat{\psi}(t) \end{bmatrix}, \tag{9}$$
$$\varphi_a(t) = [-x_a(t-1), -x_a(t-2), \ldots, -x_a(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T, \tag{10}$$
$$\hat{\psi}(t) = [-\hat{w}(t-1), -\hat{w}(t-2), \ldots, -\hat{w}(t-n_c)]^T, \tag{11}$$
$$x_a(t) = \varphi_a^T(t)\hat{\theta}(t), \tag{12}$$
$$\hat{w}(t) = y(t) - x_a(t), \tag{13}$$
$$\hat{\vartheta}(t) = [\hat{\theta}^T(t), \hat{c}_1(t), \hat{c}_2(t), \ldots, \hat{c}_{n_c}(t)]^T, \tag{14}$$
$$\hat{\theta}(t) = [\hat{a}_1(t), \hat{a}_2(t), \ldots, \hat{a}_{n_a}(t), \hat{b}_1(t), \hat{b}_2(t), \ldots, \hat{b}_{n_b}(t)]^T. \tag{15}$$

To initialize the AM-RGLS algorithm, we take $\hat{\vartheta}(0)$ to be a small real vector, e.g., $\hat{\vartheta}(0) = \mathbf{1}_n/p_0$ with $\mathbf{1}_n$ being an $n$-dimensional column vector whose elements are all 1, $p_0$ a large positive number, e.g., $p_0 = 10^6$, and $I$ an identity matrix of appropriate dimensions.
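For concreteness, here is a minimal NumPy sketch of one AM-RGLS recursion (6)–(15). This is our illustration, not the authors' code; the function name, argument layout and caller-maintained history buffers are all assumptions.

```python
import numpy as np

def am_rgls_step(y_t, u_hist, xa_hist, w_hist, vartheta_hat, P, na, nb, nc):
    """One AM-RGLS recursion (6)-(15); a sketch under an assumed data layout.

    u_hist[k-1] = u(t-k), xa_hist[k-1] = x_a(t-k), w_hist[k-1] = w_hat(t-k).
    """
    # Information vector (9)-(11): auxiliary-model outputs stand in for the
    # unknown x(t-i), past noise estimates stand in for w(t-i).
    phi_a = np.concatenate([-np.asarray(xa_hist[:na]), np.asarray(u_hist[:nb])])
    psi_hat = -np.asarray(w_hist[:nc])
    phi_hat = np.concatenate([phi_a, psi_hat])

    # Gain (7) and covariance (8).
    denom = 1.0 + phi_hat @ P @ phi_hat
    L = P @ phi_hat / denom
    P = P - np.outer(L, L) * denom

    # Parameter update (6).
    vartheta_hat = vartheta_hat + L * (y_t - phi_hat @ vartheta_hat)

    # Auxiliary-model output (12) and noise estimate (13), fed back next step.
    theta_hat = vartheta_hat[:na + nb]
    xa_t = phi_a @ theta_hat
    w_hat_t = y_t - xa_t
    return vartheta_hat, P, xa_t, w_hat_t
```

Initialization follows the text: `vartheta_hat = np.ones(n) / p0` and `P = p0 * np.eye(n)` with `p0 = 1e6`.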
3. The convergence analysis of the AM-RGLS algorithm

The martingale convergence theorem is one of the main tools for studying the convergence of recursive identification algorithms [17,30–32]. The basic idea is to establish a recursive equation for the parameter estimation error $\tilde{\vartheta}(t) := \hat{\vartheta}(t) - \vartheta$, to formulate a Lyapunov function in the estimation error $\tilde{\vartheta}(t)$, and to prove the convergence of the algorithm by using the martingale convergence theorem.

Assume that $\{v(t), \mathcal{F}_t\}$ is a martingale difference sequence defined on the probability space $\{\Omega, \mathcal{F}, P\}$, where $\{\mathcal{F}_t\}$ is the $\sigma$-algebra sequence generated by the observations up to and including time $t$. The noise sequence $\{v(t)\}$ satisfies the following assumptions [17]:

(A1) $\mathrm{E}[v(t) \mid \mathcal{F}_{t-1}] = 0$, a.s.,
(A2) $\mathrm{E}[v^2(t) \mid \mathcal{F}_{t-1}] = \sigma^2$, a.s.

Theorem 1. For the system in (5) and the AM-RGLS algorithm in (6)–(15), assume that (A1)–(A2) hold and that there exist positive constants $\alpha$, $\beta$, $\gamma$ and $t_0$ such that the following generalized persistent excitation condition (unbounded condition number) holds [28]:

(A3) $\alpha I \leqslant \frac{1}{t} \sum_{j=1}^{t} \hat{\phi}(j)\hat{\phi}^T(j) \leqslant \beta t^{\gamma} I$, a.s., $t \geqslant t_0$.

Then for any $c > 1$, we have

$$\|\hat{\vartheta}(t) - \vartheta\|^2 = O\!\left(\frac{[\ln t]^c}{t}\right) \to 0, \quad \text{a.s.}$$

This means that the parameter estimation error $\hat{\vartheta}(t) - \vartheta$ converges to zero as $t$ increases.

Proof. Define the parameter estimation error vector

$$\tilde{\vartheta}(t) := \hat{\vartheta}(t) - \vartheta.$$

Note that $P(t)$ is a symmetric matrix: $P^T(t) = P(t)$. Using (6) and (5) gives

$$\begin{aligned}
\tilde{\vartheta}(t) &= \hat{\vartheta}(t-1) + P(t)\hat{\phi}(t)[y(t) - \hat{\phi}^T(t)\hat{\vartheta}(t-1)] - \vartheta \\
&= \tilde{\vartheta}(t-1) + P(t)\hat{\phi}(t)[\phi^T(t)\vartheta + v(t) - \hat{\phi}^T(t)\hat{\vartheta}(t-1)] \\
&= \tilde{\vartheta}(t-1) + P(t)\hat{\phi}(t)\{-\hat{\phi}^T(t)\tilde{\vartheta}(t-1) + [\phi(t) - \hat{\phi}(t)]^T\vartheta + v(t)\} \\
&= \tilde{\vartheta}(t-1) + P(t)\hat{\phi}(t)[-\tilde{y}(t) + \Delta(t) + v(t)],
\end{aligned} \tag{16}$$

where

$$\tilde{y}(t) := \hat{\phi}^T(t)\tilde{\vartheta}(t-1) \in \mathbb{R}, \tag{17}$$
$$\Delta(t) := [\phi(t) - \hat{\phi}(t)]^T\vartheta \in \mathbb{R}. \tag{18}$$

Applying the matrix inversion formula

$$(A + BC)^{-1} = A^{-1} - A^{-1}B(I + CA^{-1}B)^{-1}CA^{-1}$$

to (8) gives

$$P^{-1}(t) = P^{-1}(t-1) + \hat{\phi}(t)\hat{\phi}^T(t). \tag{19}$$
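As a quick numerical sanity check on (19) (ours, not part of the original proof), one can confirm that the covariance update (8) and the information-matrix recursion (19) agree, which is exactly the content of the matrix inversion formula above:

```python
import numpy as np

# Numerical check of (19): the covariance update (8) is the
# matrix-inversion-lemma form of P^{-1}(t) = P^{-1}(t-1) + phi phi^T.
rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
P_prev = np.linalg.inv(M @ M.T + n * np.eye(n))   # symmetric positive definite
phi = rng.normal(size=n)

denom = 1.0 + phi @ P_prev @ phi
L = P_prev @ phi / denom
P = P_prev - np.outer(L, L) * denom               # update (8)

assert np.allclose(np.linalg.inv(P), np.linalg.inv(P_prev) + np.outer(phi, phi))
```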
Define a Lyapunov function

$$W(t) := \tilde{\vartheta}^T(t) P^{-1}(t) \tilde{\vartheta}(t).$$

Note that $\tilde{y}(t) = \hat{\phi}^T(t)\tilde{\vartheta}(t-1) = \tilde{\vartheta}^T(t-1)\hat{\phi}(t)$ is scalar valued. Using (16) and (19), we have

$$\begin{aligned}
W(t) &= \{\tilde{\vartheta}(t-1) + P(t)\hat{\phi}(t)[-\tilde{y}(t) + \Delta(t) + v(t)]\}^T P^{-1}(t)\, \{\tilde{\vartheta}(t-1) + P(t)\hat{\phi}(t)[-\tilde{y}(t) + \Delta(t) + v(t)]\} \\
&= W(t-1) + \tilde{y}^2(t) - 2\tilde{y}^2(t) + 2\tilde{y}(t)[\Delta(t) + v(t)] \\
&\quad + \hat{\phi}^T(t)P(t)\hat{\phi}(t)\{\tilde{y}^2(t) + v^2(t) + \Delta^2(t) - 2\tilde{y}(t)[\Delta(t) + v(t)] + 2\Delta(t)v(t)\} \\
&= W(t-1) - [1 - \hat{\phi}^T(t)P(t)\hat{\phi}(t)]\tilde{y}^2(t) + 2[1 - \hat{\phi}^T(t)P(t)\hat{\phi}(t)]\tilde{y}(t)[\Delta(t) + v(t)] \\
&\quad + \hat{\phi}^T(t)P(t)\hat{\phi}(t)[v^2(t) + \Delta^2(t) + 2\Delta(t)v(t)].
\end{aligned} \tag{20}$$

Referring to the proof of Lemma 3 in [28], we have

$$1 - \hat{\phi}^T(t)P(t)\hat{\phi}(t) = [1 + \hat{\phi}^T(t)P(t-1)\hat{\phi}(t)]^{-1} \geqslant 0.$$

Refer to the method in [30,31] and assume that $\Delta(t)$ is bounded with $\Delta^2(t) \leqslant \varepsilon$. Since $v(t)$ is a white noise with zero mean and variance $\sigma^2$, and $\tilde{y}(t)$, $\hat{\phi}^T(t)P(t)\hat{\phi}(t)$ and $\Delta(t)$ are uncorrelated with $v(t)$, taking the conditional expectation of both sides of (20) with respect to $\mathcal{F}_{t-1}$ and using (A1) and (A2) gives

$$\begin{aligned}
\mathrm{E}[W(t) \mid \mathcal{F}_{t-1}] &= W(t-1) - [1 - \hat{\phi}^T(t)P(t)\hat{\phi}(t)]\tilde{y}^2(t) + \hat{\phi}^T(t)P(t)\hat{\phi}(t)[\sigma^2 + \Delta^2(t)] \\
&\leqslant W(t-1) + \hat{\phi}^T(t)P(t)\hat{\phi}(t)(\sigma^2 + \varepsilon), \quad \text{a.s.}
\end{aligned}$$

Let $r(t) := \operatorname{tr}[P^{-1}(t)]$. Referring to the proof of Theorem 1 in [28] and applying the martingale convergence theorem (Lemma D.5.3 in [17]) to the above inequality, we can conclude that

$$\|\hat{\vartheta}(t) - \vartheta\|^2 = O\!\left(\frac{[\ln r(t)]^c}{\lambda_{\min}[P^{-1}(t)]}\right), \quad \text{a.s.}, \quad c > 1.$$

Furthermore, using (A3), we can conclude that the parameter estimation error $\hat{\vartheta}(t) - \vartheta$ converges to zero as $t$ goes to infinity. □

4. The data filtering based recursive least squares algorithm

By introducing a linear filter, say $C(z)$, to filter the system input and output data, the OEAR system model in (1) can be transformed into an OE model with white noise. Define the filtered input $u_f(t)$ and output $y_f(t)$ as

$$u_f(t) := C(z)u(t) = u(t) + c_1 u(t-1) + c_2 u(t-2) + \cdots + c_{n_c} u(t-n_c), \tag{21}$$
$$y_f(t) := C(z)y(t) = y(t) + c_1 y(t-1) + c_2 y(t-2) + \cdots + c_{n_c} y(t-n_c). \tag{22}$$

It is easy to see that $y_f(t) = 0$ and $u_f(t) = 0$ for $t \leqslant 0$, since $y(t) = 0$ and $u(t) = 0$ for $t \leqslant 0$.

Multiplying both sides of (1) by $C(z)$ yields

$$C(z)y(t) = \frac{B(z)}{A(z)}C(z)u(t) + v(t).$$

Then we have the following filtered OE model,

$$y_f(t) = \frac{B(z)}{A(z)}u_f(t) + v(t) =: x_f(t) + v(t), \tag{23}$$

where

$$x_f(t) := \frac{B(z)}{A(z)}u_f(t).$$

Define

$$\varphi_f(t) := [-x_f(t-1), -x_f(t-2), \ldots, -x_f(t-n_a), u_f(t-1), u_f(t-2), \ldots, u_f(t-n_b)]^T \in \mathbb{R}^{n_a+n_b}. \tag{24}$$

Then we have

$$x_f(t) = [1 - A(z)]x_f(t) + B(z)u_f(t) = -a_1 x_f(t-1) - \cdots - a_{n_a} x_f(t-n_a) + b_1 u_f(t-1) + \cdots + b_{n_b} u_f(t-n_b) = \varphi_f^T(t)\theta, \tag{25}$$
$$y_f(t) = x_f(t) + v(t) = \varphi_f^T(t)\theta + v(t). \tag{26}$$

Based on the identification model in (26), we have the following recursive least squares algorithm for computing the estimate $\hat{\theta}(t)$ of $\theta$:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + P_f(t)\varphi_f(t)[y_f(t) - \varphi_f^T(t)\hat{\theta}(t-1)], \tag{27}$$
$$P_f(t) = P_f(t-1) - \frac{P_f(t-1)\varphi_f(t)\varphi_f^T(t)P_f(t-1)}{1 + \varphi_f^T(t)P_f(t-1)\varphi_f(t)}. \tag{28}$$

Here, we can see that the algorithm in (27)–(28) cannot be applied to estimate $\hat{\theta}(t)$ directly, because the filtered input $u_f(t)$, the filtered output $y_f(t)$, and the inner variables $x_f(t-i)$ in the information vector $\varphi_f(t)$ are unknown. To overcome this problem, the inner variables $x_f(t-i)$ in $\varphi_f(t)$ are replaced with the outputs $\hat{x}_f(t-i)$ of an auxiliary model according to the auxiliary model identification idea. The auxiliary model can be taken to be

$$\hat{x}_f(t) = \hat{\varphi}_f^T(t)\hat{\theta}(t), \tag{29}$$
$$\hat{\varphi}_f(t) := [-\hat{x}_f(t-1), -\hat{x}_f(t-2), \ldots, -\hat{x}_f(t-n_a), \hat{u}_f(t-1), \hat{u}_f(t-2), \ldots, \hat{u}_f(t-n_b)]^T \in \mathbb{R}^{n_a+n_b}, \tag{30}$$

where $\hat{u}_f(t-i)$ is the estimate of $u_f(t-i)$, and $\hat{\varphi}_f(t)$ is obtained by replacing the unknown filtered input $u_f(t-i)$ and filtered output $y_f(t-i)$ in $\varphi_f(t)$ with their estimates $\hat{u}_f(t-i)$ and $\hat{y}_f(t-i)$. From (21) and (22), we can see that the estimates of the filtered input $u_f(t-i)$ and filtered output $y_f(t-i)$ rely on the estimates of the noise part parameters $c_i$. The following discusses the estimation of the noise model.

Let $\hat{c}(t)$ be the estimate of $c$ at time $t$. From the identification model in (4), we can obtain the estimation algorithm for computing $\hat{c}(t)$:

$$\hat{c}(t) = \hat{c}(t-1) + P_n(t)\psi(t)[w(t) - \psi^T(t)\hat{c}(t-1)], \tag{31}$$
$$P_n(t) = P_n(t-1) - \frac{P_n(t-1)\psi(t)\psi^T(t)P_n(t-1)}{1 + \psi^T(t)P_n(t-1)\psi(t)}. \tag{32}$$

Notice that $w(t-i)$ in the above algorithm is unmeasurable. From (1) to (3), we have

$$w(t) = y(t) - x(t) = y(t) - \varphi^T(t)\theta.$$

Replacing $\varphi(t)$ and $\theta$ with $\varphi_a(t)$ and $\hat{\theta}(t-1)$, respectively, yields the estimate of $w(t)$:

$$\hat{w}(t) = y(t) - \varphi_a^T(t)\hat{\theta}(t-1). \tag{33}$$

Replacing the unmeasurable noise terms $w(t-i)$ in $\psi(t)$ with their estimates $\hat{w}(t-i)$, define

$$\hat{\psi}(t) := [-\hat{w}(t-1), -\hat{w}(t-2), \ldots, -\hat{w}(t-n_c)]^T \in \mathbb{R}^{n_c}. \tag{34}$$

Thus, replacing $w(t)$ and $\psi(t)$ in (31)–(32) with $\hat{w}(t)$ and $\hat{\psi}(t)$ gives

$$\hat{c}(t) = \hat{c}(t-1) + P_n(t)\hat{\psi}(t)[\hat{w}(t) - \hat{\psi}^T(t)\hat{c}(t-1)], \tag{35}$$
$$P_n(t) = P_n(t-1) - \frac{P_n(t-1)\hat{\psi}(t)\hat{\psi}^T(t)P_n(t-1)}{1 + \hat{\psi}^T(t)P_n(t-1)\hat{\psi}(t)}. \tag{36}$$

Use the estimate $\hat{c}(t) := [\hat{c}_1(t), \hat{c}_2(t), \ldots, \hat{c}_{n_c}(t)]^T \in \mathbb{R}^{n_c}$ to form the estimate of $C(z)$ as follows:

$$\hat{C}(t, z) := 1 + \hat{c}_1(t)z^{-1} + \hat{c}_2(t)z^{-2} + \cdots + \hat{c}_{n_c}(t)z^{-n_c}.$$

From (21) and (22), the estimates of the filtered input $u_f(t)$ and the filtered output $y_f(t)$ can be computed through

$$\hat{u}_f(t) = \hat{C}(t, z)u(t) = u(t) + \hat{c}_1(t)u(t-1) + \hat{c}_2(t)u(t-2) + \cdots + \hat{c}_{n_c}(t)u(t-n_c),$$
$$\hat{y}_f(t) = \hat{C}(t, z)y(t) = y(t) + \hat{c}_1(t)y(t-1) + \hat{c}_2(t)y(t-2) + \cdots + \hat{c}_{n_c}(t)y(t-n_c).$$

Replacing $\varphi_f(t)$ in (27)–(28) with $\hat{\varphi}_f(t)$, and $y_f(t)$ with $\hat{y}_f(t)$, we obtain the recursive least squares estimation algorithm for the OE part parameters,

$$\hat{\theta}(t) = \hat{\theta}(t-1) + P_f(t)\hat{\varphi}_f(t)[\hat{y}_f(t) - \hat{\varphi}_f^T(t)\hat{\theta}(t-1)], \tag{37}$$
$$P_f(t) = P_f(t-1) - \frac{P_f(t-1)\hat{\varphi}_f(t)\hat{\varphi}_f^T(t)P_f(t-1)}{1 + \hat{\varphi}_f^T(t)P_f(t-1)\hat{\varphi}_f(t)}. \tag{38}$$

Define the gain vectors $L_f(t) := P_f(t)\hat{\varphi}_f(t) \in \mathbb{R}^{n_a+n_b}$ and $L_n(t) := P_n(t)\hat{\psi}(t) \in \mathbb{R}^{n_c}$. From (33)–(38), we can derive the filtering based recursive least squares (F-RLS) algorithm for the OEAR model [26]:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + L_f(t)[\hat{y}_f(t) - \hat{\varphi}_f^T(t)\hat{\theta}(t-1)], \tag{39}$$
$$L_f(t) = P_f(t-1)\hat{\varphi}_f(t)[1 + \hat{\varphi}_f^T(t)P_f(t-1)\hat{\varphi}_f(t)]^{-1}, \tag{40}$$
$$P_f(t) = [I - L_f(t)\hat{\varphi}_f^T(t)]P_f(t-1), \tag{41}$$
$$\hat{x}_f(t) = \hat{\varphi}_f^T(t)\hat{\theta}(t), \tag{42}$$
$$\hat{\varphi}_f(t) = [-\hat{x}_f(t-1), -\hat{x}_f(t-2), \ldots, -\hat{x}_f(t-n_a), \hat{u}_f(t-1), \hat{u}_f(t-2), \ldots, \hat{u}_f(t-n_b)]^T, \tag{43}$$
$$\hat{u}_f(t) = u(t) + \hat{c}_1(t)u(t-1) + \hat{c}_2(t)u(t-2) + \cdots + \hat{c}_{n_c}(t)u(t-n_c), \tag{44}$$
$$\hat{y}_f(t) = y(t) + \hat{c}_1(t)y(t-1) + \hat{c}_2(t)y(t-2) + \cdots + \hat{c}_{n_c}(t)y(t-n_c), \tag{45}$$
$$\hat{c}(t) = \hat{c}(t-1) + L_n(t)[\hat{w}(t) - \hat{\psi}^T(t)\hat{c}(t-1)], \tag{46}$$
$$L_n(t) = P_n(t-1)\hat{\psi}(t)[1 + \hat{\psi}^T(t)P_n(t-1)\hat{\psi}(t)]^{-1}, \tag{47}$$
$$P_n(t) = [I - L_n(t)\hat{\psi}^T(t)]P_n(t-1), \tag{48}$$
$$\hat{w}(t) = y(t) - \varphi_a^T(t)\hat{\theta}(t-1), \tag{49}$$
$$x_a(t) = \varphi_a^T(t)\hat{\theta}(t), \tag{50}$$


$$\varphi_a(t) = [-x_a(t-1), -x_a(t-2), \ldots, -x_a(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T, \tag{51}$$
$$\hat{\psi}(t) = [-\hat{w}(t-1), -\hat{w}(t-2), \ldots, -\hat{w}(t-n_c)]^T, \tag{52}$$
$$\hat{c}(t) = [\hat{c}_1(t), \hat{c}_2(t), \ldots, \hat{c}_{n_c}(t)]^T, \tag{53}$$
$$\hat{\theta}(t) = [\hat{a}_1(t), \hat{a}_2(t), \ldots, \hat{a}_{n_a}(t), \hat{b}_1(t), \hat{b}_2(t), \ldots, \hat{b}_{n_b}(t)]^T. \tag{54}$$

The F-RLS estimation algorithm involves two stages: the parameter identification of the system model (see (39)–(45)) and the parameter identification of the noise model (see (46)–(52)). The two estimation stages are coupled: the identification of the noise model relies on the identification of the system model, and vice versa – see (39) and (49). The procedure of the proposed F-RLS algorithm is as follows; a code sketch is given after the list.

1. Initialization: set $u(t) = 0$, $y(t) = 0$, $\hat{y}_f(t) = 0$, $\hat{u}_f(t) = 0$, $\hat{w}(t) = 0$, $x_a(t) = 1/p_0$ and $\hat{x}_f(t) = 1/p_0$ for $t \leqslant 0$, and $p_0 = 10^6$.
2. Let $t = 1$, $\hat{\theta}(0) = \mathbf{1}_{n_a+n_b}/p_0$, $\hat{c}(0) = \mathbf{1}_{n_c}/p_0$, $P_f(0) = p_0 I_{n_a+n_b}$, $P_n(0) = p_0 I_{n_c}$.
3. Collect the input–output data $u(t)$ and $y(t)$, and form $\varphi_a(t)$ using (51) and $\hat{\psi}(t)$ using (52).
4. Compute $\hat{w}(t)$ using (49), the gain vector $L_n(t)$ using (47), and the covariance matrix $P_n(t)$ using (48).
5. Update the parameter estimation vector $\hat{c}(t)$ using (46).
6. Compute $\hat{u}_f(t)$ using (44) and $\hat{y}_f(t)$ using (45), and form $\hat{\varphi}_f(t)$ using (43).
7. Compute the gain vector $L_f(t)$ using (40) and the covariance matrix $P_f(t)$ using (41).
8. Update the parameter estimation vector $\hat{\theta}(t)$ using (39), and compute $\hat{x}_f(t)$ using (42) and $x_a(t)$ using (50).
9. Increase $t$ by 1 and go to Step 3.
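The procedure above translates almost line for line into code. The following NumPy sketch of the F-RLS loop (39)–(54) is our reconstruction under the stated initialization; the helper `past` and all variable names are assumptions, not the authors' code.

```python
import numpy as np

def f_rls(u, y, na, nb, nc, p0=1e6):
    """F-RLS algorithm (39)-(54) for the OEAR model; a sketch."""
    N = len(u)
    theta = np.ones(na + nb) / p0                    # step 2
    c = np.ones(nc) / p0
    Pf = p0 * np.eye(na + nb)
    Pn = p0 * np.eye(nc)
    xa = np.zeros(N); xf = np.zeros(N)               # signal buffers
    w = np.zeros(N); uf = np.zeros(N); yf = np.zeros(N)

    def past(sig, t, k, fill=0.0):                   # [sig(t-1), ..., sig(t-k)]
        return np.array([sig[t - i] if t - i >= 0 else fill
                         for i in range(1, k + 1)])

    for t in range(N):
        # step 3: phi_a (51) and psi_hat (52); x_a(t) = 1/p0 for t <= 0 (step 1)
        phi_a = np.concatenate([-past(xa, t, na, fill=1.0 / p0), past(u, t, nb)])
        psi = -past(w, t, nc)
        # step 4: w_hat(t) (49), gain L_n (47), covariance P_n (48)
        w[t] = y[t] - phi_a @ theta
        Ln = Pn @ psi / (1.0 + psi @ Pn @ psi)
        Pn = (np.eye(nc) - np.outer(Ln, psi)) @ Pn
        # step 5: noise-model update (46)
        c = c + Ln * (w[t] - psi @ c)
        # step 6: filtered data (44)-(45) and phi_f_hat (43)
        uf[t] = u[t] + c @ past(u, t, nc)
        yf[t] = y[t] + c @ past(y, t, nc)
        phi_f = np.concatenate([-past(xf, t, na, fill=1.0 / p0), past(uf, t, nb)])
        # step 7: gain L_f (40) and covariance P_f (41)
        Lf = Pf @ phi_f / (1.0 + phi_f @ Pf @ phi_f)
        Pf = (np.eye(na + nb) - np.outer(Lf, phi_f)) @ Pf
        # step 8: system-model update (39), then x_f_hat (42) and x_a (50)
        theta = theta + Lf * (yf[t] - phi_f @ theta)
        xf[t] = phi_f @ theta
        xa[t] = phi_a @ theta
    return theta, c
```

The coupling described in the text is visible directly: steps 4–5 use $\hat{w}(t)$ built from the current system-model estimate, while steps 6–8 use data filtered by the current $\hat{C}(t, z)$.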

Theorem 2. For the identification models in (26) and (4) and the F-RLS algorithm in (39)–(54), assume that (A1)–(A2) hold and that there exist positive constants $\alpha$, $\beta$, $\gamma$ and $t_0$ such that the following generalized persistent excitation conditions (unbounded condition number) hold [28]:

(A4) $\alpha I \leqslant \frac{1}{t} \sum_{j=1}^{t} \hat{\varphi}_f(j)\hat{\varphi}_f^T(j) \leqslant \beta t^{\gamma} I$, a.s., $t \geqslant t_0$,
(A5) $\alpha I \leqslant \frac{1}{t} \sum_{j=1}^{t} \hat{\psi}(j)\hat{\psi}^T(j) \leqslant \beta t^{\gamma} I$, a.s., $t \geqslant t_0$.

Then for any $c > 1$, the parameter estimation error vectors converge to zero:

$$\|\hat{\theta}(t) - \theta\|^2 = O\!\left(\frac{[\ln t]^c}{t}\right) \to 0, \quad \text{a.s.},$$
$$\|\hat{c}(t) - c\|^2 = O\!\left(\frac{[\ln t]^c}{t}\right) \to 0, \quad \text{a.s.}$$

The proof can be carried out in a way similar to that of the previous section.

5. The extension to the Box–Jenkins systems

More generally, consider the following Box–Jenkins system,

$$y(t) = x(t) + w(t) = \frac{B(z)}{A(z)}u(t) + \frac{D(z)}{C(z)}v(t), \tag{55}$$

where

$$D(z) = 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{n_d} z^{-n_d}.$$

Note that the noise $w(t) := \frac{1}{C(z)}v(t)$ in (1) is an autoregressive process, while the noise $w(t) := \frac{D(z)}{C(z)}v(t)$ in (55) is an autoregressive moving average process.

Define

$$\theta_s := [a_1, a_2, \ldots, a_{n_a}, b_1, b_2, \ldots, b_{n_b}]^T \in \mathbb{R}^{n_a+n_b},$$
$$\theta_n := [c_1, c_2, \ldots, c_{n_c}, d_1, d_2, \ldots, d_{n_d}]^T \in \mathbb{R}^{n_c+n_d},$$
$$\varphi_n(t) := [-w(t-1), -w(t-2), \ldots, -w(t-n_c), v(t-1), v(t-2), \ldots, v(t-n_d)]^T \in \mathbb{R}^{n_c+n_d}.$$

Employing the data filtering technique, we can derive the data filtering based recursive least squares estimation algorithm for the Box–Jenkins systems as follows:

$$\hat{\theta}_s(t) = \hat{\theta}_s(t-1) + L_f(t)[\hat{y}_f(t) - \hat{\varphi}_f^T(t)\hat{\theta}_s(t-1)], \quad \hat{\theta}_s(0) = \mathbf{1}_{n_a+n_b}/p_0, \tag{56}$$
$$L_f(t) = P_f(t-1)\hat{\varphi}_f(t)[1 + \hat{\varphi}_f^T(t)P_f(t-1)\hat{\varphi}_f(t)]^{-1}, \tag{57}$$
$$P_f(t) = [I - L_f(t)\hat{\varphi}_f^T(t)]P_f(t-1), \quad P_f(0) = p_0 I_{n_a+n_b}, \tag{58}$$
$$\hat{x}_f(t) = \hat{\varphi}_f^T(t)\hat{\theta}_s(t), \tag{59}$$
$$\hat{\varphi}_f(t) = [-\hat{x}_f(t-1), -\hat{x}_f(t-2), \ldots, -\hat{x}_f(t-n_a), \hat{u}_f(t-1), \hat{u}_f(t-2), \ldots, \hat{u}_f(t-n_b)]^T, \tag{60}$$
$$\hat{u}_f(t) = -\hat{d}_1(t)\hat{u}_f(t-1) - \hat{d}_2(t)\hat{u}_f(t-2) - \cdots - \hat{d}_{n_d}(t)\hat{u}_f(t-n_d) + u(t) + \hat{c}_1(t)u(t-1) + \hat{c}_2(t)u(t-2) + \cdots + \hat{c}_{n_c}(t)u(t-n_c), \tag{61}$$
$$\hat{y}_f(t) = -\hat{d}_1(t)\hat{y}_f(t-1) - \hat{d}_2(t)\hat{y}_f(t-2) - \cdots - \hat{d}_{n_d}(t)\hat{y}_f(t-n_d) + y(t) + \hat{c}_1(t)y(t-1) + \hat{c}_2(t)y(t-2) + \cdots + \hat{c}_{n_c}(t)y(t-n_c), \tag{62}$$
$$\hat{\theta}_n(t) = \hat{\theta}_n(t-1) + L_n(t)[\hat{w}(t) - \hat{\varphi}_n^T(t)\hat{\theta}_n(t-1)], \quad \hat{\theta}_n(0) = \mathbf{1}_{n_c+n_d}/p_0, \tag{63}$$
$$L_n(t) = P_n(t-1)\hat{\varphi}_n(t)[1 + \hat{\varphi}_n^T(t)P_n(t-1)\hat{\varphi}_n(t)]^{-1}, \tag{64}$$
$$P_n(t) = [I - L_n(t)\hat{\varphi}_n^T(t)]P_n(t-1), \quad P_n(0) = p_0 I_{n_c+n_d}, \tag{65}$$
$$\hat{w}(t) = y(t) - \varphi_a^T(t)\hat{\theta}_s(t-1), \tag{66}$$
$$x_a(t) = \varphi_a^T(t)\hat{\theta}_s(t), \tag{67}$$
$$\varphi_a(t) = [-x_a(t-1), -x_a(t-2), \ldots, -x_a(t-n_a), u(t-1), u(t-2), \ldots, u(t-n_b)]^T, \tag{68}$$
$$\hat{v}(t) = \hat{w}(t) - \hat{\varphi}_n^T(t)\hat{\theta}_n(t), \tag{69}$$
$$\hat{\varphi}_n(t) = [-\hat{w}(t-1), -\hat{w}(t-2), \ldots, -\hat{w}(t-n_c), \hat{v}(t-1), \hat{v}(t-2), \ldots, \hat{v}(t-n_d)]^T, \tag{70}$$
$$\hat{\theta}_s(t) = [\hat{a}_1(t), \hat{a}_2(t), \ldots, \hat{a}_{n_a}(t), \hat{b}_1(t), \hat{b}_2(t), \ldots, \hat{b}_{n_b}(t)]^T, \tag{71}$$
$$\hat{\theta}_n(t) = [\hat{c}_1(t), \hat{c}_2(t), \ldots, \hat{c}_{n_c}(t), \hat{d}_1(t), \hat{d}_2(t), \ldots, \hat{d}_{n_d}(t)]^T. \tag{72}$$

The initialization of the above algorithm is similar to that of the F-RLS algorithm.
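Relative to the OEAR case, the only structural change in the filtered signals is that (61)–(62) filter by $\hat{C}(t,z)/\hat{D}(t,z)$ instead of $\hat{C}(t,z)$, which turns (44)–(45) into short recursions. A sketch of just these two updates (ours; the buffers and estimate vectors are assumed to be maintained by the surrounding loop):

```python
import numpy as np

def filter_signals_bj(u, y, uf, yf, c_hat, d_hat, t):
    """Filtered input/output recursions (61)-(62) for the Box-Jenkins case.

    uf, yf are mutable arrays holding past filtered values; c_hat, d_hat are
    the current noise-model estimates c_i_hat(t), d_i_hat(t). A sketch only.
    """
    nc, nd = len(c_hat), len(d_hat)

    def past(sig, k):
        return np.array([sig[t - i] if t - i >= 0 else 0.0
                         for i in range(1, k + 1)])

    # (61): u_f(t) = -sum d_i u_f(t-i) + u(t) + sum c_i u(t-i)
    uf[t] = -d_hat @ past(uf, nd) + u[t] + c_hat @ past(u, nc)
    # (62): the same recursion driven by y
    yf[t] = -d_hat @ past(yf, nd) + y[t] + c_hat @ past(y, nc)
    return uf[t], yf[t]
```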

Table 1
The AM-RGLS estimates and their errors.

t a1 a2 b1 b2 c1 δ (%)
100 0.20911 −0.26431 0.44331 −0.54242 0.81011 13.42219
200 0.24539 −0.21626 0.45352 −0.57057 0.85537 11.40221
1000 0.34465 −0.22798 0.49216 −0.56873 0.79783 5.36216
2000 0.34491 −0.24689 0.49027 −0.58291 0.80543 3.40654
3000 0.33559 −0.26480 0.49535 −0.59624 0.81264 1.88778
4000 0.34301 −0.26102 0.49629 −0.59525 0.81223 1.84643
5000 0.34808 −0.26202 0.49584 −0.59254 0.81464 1.70038

True values 0.35000 −0.28000 0.50000 −0.60000 0.82000

Table 2
The F-RLS estimates and their errors.

t a1 a2 b1 b2 c1 δ (%)
100 0.29384 −0.20177 0.45265 −0.58269 0.71472 12.42560
200 0.28878 −0.18218 0.45644 −0.58659 0.76554 11.12515
1000 0.34926 −0.26475 0.49587 −0.59382 0.79564 2.43769
2000 0.34598 −0.26973 0.49109 −0.59399 0.79601 2.34089
3000 0.34241 −0.28796 0.49536 −0.60369 0.80933 1.34909
4000 0.34834 −0.28169 0.49617 −0.60133 0.81075 0.85151
5000 0.34997 −0.27825 0.49600 −0.59818 0.81095 0.83842

True values 0.35000 −0.28000 0.50000 −0.60000 0.82000

Table 3
The parameter estimates and variances based on 15 Monte Carlo simulations.

Algorithms t a1 a2 b1 b2 c1
AM-RGLS 100 0.40018 ± 0.19407 −0.23820 ± 0.11821 0.47754 ± 0.08673 −0.56886 ± 0.11555 0.79325 ± 0.16076
200 0.37586 ± 0.13047 −0.26431 ± 0.06962 0.48459 ± 0.05867 −0.57452 ± 0.05455 0.80794 ± 0.09694
1000 0.35156 ± 0.03744 −0.28338 ± 0.05540 0.50001 ± 0.02624 −0.59228 ± 0.02965 0.80874 ± 0.03218
2000 0.34932 ± 0.01887 −0.28353 ± 0.03664 0.50027 ± 0.02062 −0.59713 ± 0.01422 0.81215 ± 0.02607
3000 0.34974 ± 0.01503 −0.28397 ± 0.02374 0.50152 ± 0.01245 −0.59736 ± 0.01203 0.81223 ± 0.01935
4000 0.34858 ± 0.01477 −0.28288 ± 0.02840 0.50139 ± 0.01113 −0.59876 ± 0.01022 0.81370 ± 0.02232
5000 0.34785 ± 0.01775 −0.28372 ± 0.03070 0.50057 ± 0.00821 −0.59979 ± 0.00725 0.81493 ± 0.01885

F-RLS 100 0.21792 ± 0.15878 −0.16267 ± 0.11846 0.40396 ± 0.08370 −0.54538 ± 0.07634 0.70304 ± 0.12254
200 0.27139 ± 0.14077 −0.21205 ± 0.08179 0.43211 ± 0.07923 −0.55413 ± 0.06604 0.74738 ± 0.07298
1000 0.34540 ± 0.03026 −0.27969 ± 0.03451 0.48887 ± 0.02691 −0.59218 ± 0.02192 0.81398 ± 0.02419
2000 0.34997 ± 0.02481 −0.28061 ± 0.02659 0.49418 ± 0.02131 −0.59609 ± 0.01672 0.80707 ± 0.02374
3000 0.35191 ± 0.01568 −0.28142 ± 0.02384 0.49677 ± 0.01229 −0.59677 ± 0.01469 0.80957 ± 0.01503
4000 0.35222 ± 0.01234 −0.28016 ± 0.01883 0.49785 ± 0.01100 −0.59743 ± 0.01314 0.81183 ± 0.01171
5000 0.35112 ± 0.01428 −0.28172 ± 0.01252 0.49772 ± 0.00740 −0.59800 ± 0.00711 0.81284 ± 0.00989

True values 0.35000 −0.28000 0.50000 −0.60000 0.82000

6. Examples

Example 1. Consider the following OEAR model:

$$y(t) = \frac{B(z)}{A(z)}u(t) + \frac{1}{C(z)}v(t),$$
$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 + 0.35z^{-1} - 0.28z^{-2},$$
$$B(z) = b_1 z^{-1} + b_2 z^{-2} = 0.50z^{-1} - 0.60z^{-2},$$
$$C(z) = 1 + c_1 z^{-1} = 1 + 0.82z^{-1},$$
$$\theta = [a_1, a_2, b_1, b_2, c_1]^T.$$

The input $\{u(t)\}$ is taken as a pseudo-random binary sequence with zero mean and unit variance, and $\{v(t)\}$ as a white noise sequence with zero mean and variance $\sigma^2 = 0.30^2$; the corresponding noise-to-signal ratio is $\delta_{ns} = 46.30\%$. The noise-to-signal ratio $\delta_{ns}$ of a system is defined as the square root of the ratio of the variance $\sigma_w^2$ of the disturbance $w(t)$ to the variance $\sigma_x^2$ of the noise-free output $x(t)$ (namely, the output $y(t)$ when $v(t) \equiv 0$) [29,30] – see Fig. 1:

$$\delta_{ns} = \sqrt{\frac{\operatorname{var}[w(t)]}{\operatorname{var}[x(t)]}} \times 100\% = \frac{\sigma_w}{\sigma_x} \times 100\%.$$

Fig. 3. The estimation errors δ versus t.

By applying the AM-RGLS and the F-RLS algorithms to estimate the parameters of this example system, the parameter estimates and their estimation errors are shown in Tables 1–2, and the parameter estimation errors $\delta := \|\hat{\theta}(t) - \theta\|/\|\theta\|$ versus $t$ are shown in Fig. 3.

Furthermore, using Monte Carlo simulations with 15 sets of noise realizations, the parameter estimates and their estimation biases for the two algorithms are shown in Table 3 with $\sigma^2 = 0.30^2$.
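The stated simulation conditions are straightforward to reproduce; the sketch below is our reconstruction (the exact pseudo-random binary sequence generator used by the authors is not specified, so a random ±1 sequence stands in for it, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2014)                  # arbitrary seed (ours)
N = 5000
a1, a2 = 0.35, -0.28
b1, b2 = 0.50, -0.60
c1 = 0.82

u = rng.choice([-1.0, 1.0], size=N)                # stand-in for the PRBS input
v = 0.30 * rng.standard_normal(N)                  # white noise, sigma = 0.30

x = np.zeros(N)                                    # noise-free output x(t) = B/A u(t)
w = np.zeros(N)                                    # AR noise w(t) = v(t)/C(z)
for t in range(N):
    xm1 = x[t - 1] if t >= 1 else 0.0
    xm2 = x[t - 2] if t >= 2 else 0.0
    um1 = u[t - 1] if t >= 1 else 0.0
    um2 = u[t - 2] if t >= 2 else 0.0
    # x(t) = -a1 x(t-1) - a2 x(t-2) + b1 u(t-1) + b2 u(t-2)
    x[t] = -a1 * xm1 - a2 * xm2 + b1 * um1 + b2 * um2
    # w(t) = -c1 w(t-1) + v(t)
    w[t] = (-c1 * w[t - 1] if t >= 1 else 0.0) + v[t]
y = x + w

delta_ns = np.sqrt(w.var() / x.var()) * 100        # should come out near 46.30%
```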

Table 4
The F-RLS estimates and their errors for Example 2.

σ2 t a1 a2 b1 b2 c1 d1 δ (%)
1.002 100 0.71017 0.12723 0.22952 0.53891 0.41197 0.07943 41.61326
200 0.82461 0.28844 0.29551 0.47133 0.36517 0.01409 28.45013
1000 0.95901 0.41883 0.36766 0.58098 0.28639 −0.07812 12.24376
2000 0.96895 0.39771 0.38382 0.59988 0.29154 −0.09982 10.94661
3000 0.96225 0.42325 0.36493 0.62941 0.24167 −0.13777 6.71909
4000 0.96364 0.39993 0.35814 0.62308 0.23398 −0.15695 5.02215
5000 0.98619 0.39456 0.36512 0.64190 0.22704 −0.16039 4.29876

0.802 100 0.88215 0.31677 0.25373 0.59735 0.45398 0.11057 33.11441


200 0.98114 0.43385 0.31294 0.53923 0.29746 −0.07281 14.16945
1000 0.97399 0.42858 0.36519 0.59350 0.20072 −0.18350 3.93052
2000 0.97571 0.40304 0.37760 0.60563 0.23266 −0.16942 4.52039
3000 0.97043 0.42027 0.36231 0.62895 0.19837 −0.18735 3.10064
4000 0.97073 0.40095 0.35678 0.62344 0.19632 −0.19798 2.31104
5000 0.98830 0.39622 0.36231 0.63817 0.19739 −0.19267 1.99224

True values 1.00000 0.40000 0.35000 0.62000 0.20000 −0.20000

Table 5
The parameter estimates based on 15 Monte Carlo runs for Example 2 (σ 2 = 0.802 ).

t a1 a2 b1 b2 c1 d1
100 0.44612 ± 0.80785 0.27325 ± 0.26451 0.34355 ± 0.10022 0.31356 ± 0.37019 0.25004 ± 0.49069 −0.11674 ± 0.43794
200 0.65922 ± 0.71150 0.34665 ± 0.22310 0.35498 ± 0.05986 0.40685 ± 0.23816 0.25088 ± 0.30398 −0.13955 ± 0.31010
1000 0.95508 ± 0.14880 0.40156 ± 0.11442 0.35418 ± 0.02806 0.57999 ± 0.07920 0.23323 ± 0.16074 −0.16295 ± 0.12941
2000 0.97248 ± 0.05046 0.40451 ± 0.08110 0.35259 ± 0.02501 0.60009 ± 0.03997 0.22688 ± 0.16101 −0.17045 ± 0.12061
3000 0.97845 ± 0.03862 0.39763 ± 0.04194 0.35230 ± 0.02217 0.60345 ± 0.03067 0.21309 ± 0.07148 −0.17884 ± 0.07171
4000 0.99520 ± 0.02867 0.40301 ± 0.02500 0.35071 ± 0.01707 0.61304 ± 0.01645 0.20514 ± 0.03931 −0.19032 ± 0.03595
5000 0.99914 ± 0.03438 0.40544 ± 0.03087 0.35285 ± 0.01453 0.61687 ± 0.02130 0.19836 ± 0.02885 −0.19694 ± 0.02660

True values 1.00000 0.40000 0.35000 0.62000 0.20000 −0.20000

Table 6
The parameter estimates based on 15 Monte Carlo runs for Example 2 (σ 2 = 1.002 ).

t a1 a2 b1 b2 c1 d1
100 0.35683 ± 0.84052 0.22795 ± 0.29872 0.34171 ± 0.11911 0.27456 ± 0.43844 0.25955 ± 0.59768 −0.09336 ± 0.44704
200 0.53760 ± 0.76585 0.31317 ± 0.19499 0.35581 ± 0.07811 0.35309 ± 0.29352 0.26618 ± 0.33906 −0.10993 ± 0.29229
1000 0.92890 ± 0.16560 0.38910 ± 0.13871 0.35479 ± 0.03324 0.56603 ± 0.08795 0.25408 ± 0.15694 −0.13379 ± 0.15324
2000 0.95972 ± 0.08148 0.39672 ± 0.09831 0.35301 ± 0.03081 0.59337 ± 0.05063 0.24187 ± 0.17144 −0.15094 ± 0.13436
3000 0.97028 ± 0.04875 0.39079 ± 0.04872 0.35265 ± 0.02765 0.59848 ± 0.04140 0.22738 ± 0.08092 −0.16149 ± 0.08391
4000 0.99219 ± 0.03517 0.39885 ± 0.03132 0.35072 ± 0.02113 0.61072 ± 0.02274 0.21730 ± 0.05989 −0.17634 ± 0.05548
5000 0.99766 ± 0.04287 0.40292 ± 0.03668 0.35340 ± 0.01792 0.61571 ± 0.02619 0.20865 ± 0.06502 −0.18553 ± 0.05695

True values 1.00000 0.40000 0.35000 0.62000 0.20000 −0.20000

From Tables 1–3 and Fig. 3, we can draw the following conclusions.

• The parameter estimation errors of the AM-RGLS and F-RLS algorithms become generally smaller as the data length t increases – see the estimation errors in the last columns of Tables 1–2 and the estimation error curves in Fig. 3.
• For the same data length, the parameter estimation accuracy of the F-RLS algorithm is higher than that of the AM-RGLS algorithm – see the estimation error curves in Fig. 3.
• The average values of the parameter estimates are very close to the true parameters and the variances are small for large t – see Table 3.

Example 2. Consider the following Box–Jenkins model,

$$y(t) = \frac{B(z)}{A(z)}u(t) + \frac{D(z)}{C(z)}v(t),$$

where

$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 + 1.00z^{-1} + 0.40z^{-2},$$
$$B(z) = b_1 z^{-1} + b_2 z^{-2} = 0.35z^{-1} + 0.62z^{-2},$$
$$C(z) = 1 + c_1 z^{-1} = 1 + 0.20z^{-1},$$
$$D(z) = 1 + d_1 z^{-1} = 1 - 0.20z^{-1},$$
$$\theta = [a_1, a_2, b_1, b_2, c_1, d_1]^T.$$

The simulation conditions are similar to those of Example 1, but with the noise variance $\sigma^2 = 0.80^2$ and $\sigma^2 = 1.00^2$, respectively; the corresponding noise-to-signal ratios are $\delta_{ns} = 124.91\%$ and $\delta_{ns} = 156.13\%$. Applying the F-RLS algorithm to estimate the parameters of this example system, the parameter estimates and errors are shown in Table 4 and Fig. 4. For 15 sets of noise realizations, the Monte Carlo simulation results are shown in Tables 5–6.

From Tables 4–6 and Fig. 4, we can see that the proposed F-RLS algorithm is effective for estimating the parameters of the Box–Jenkins system. As the noise-to-signal ratio decreases, the convergence rate of the parameter estimates becomes faster – see the estimation error curves in Fig. 4. As the data length t increases, the estimation errors become generally smaller – see the estimation errors in the last column of Table 4. The Monte Carlo simulation results show the effectiveness of the proposed algorithms.

Fig. 4. The F-RLS estimation errors δ versus t for Example 2.

7. Conclusions

This paper investigates parameter identification methods for output error autoregressive models and Box–Jenkins (i.e., OEARMA) models. Based on the data filtering technique, two recursive least squares algorithms are derived by filtering the input–output data. The proposed algorithms have the following properties.

• The parameter estimation errors given by the proposed algorithms become generally smaller as the data length increases.
• The proposed algorithms interactively estimate the parameters of the system models and the noise models.
• The proposed filtering based estimation algorithm requires a lower computational load and achieves higher estimation accuracy than the auxiliary model based recursive generalized least squares algorithm.
• The proposed algorithms can effectively estimate the parameters of the OEAR and Box–Jenkins systems.
• The proposed methods can be extended to study identification problems of other linear (time-varying) systems [33–35], linear-in-parameters systems [36,37] and nonlinear systems with colored noise [38–42].

References

[1] P. Stoica, P. Babu, Parameter estimation of exponential signals: a system identification approach, Digit. Signal Process. 23 (5) (2013) 1565–1577.
[2] C.H.H. Ribas, J.C.M. Bermudez, N.J. Bershad, Identification of sparse impulse responses – design and implementation using the partial Haar block wavelet transform, Digit. Signal Process. 22 (6) (2012) 1073–1084.
[3] A.Y. Carmi, Compressive system identification: sequential methods and entropy bounds, Digit. Signal Process. 23 (3) (2013) 751–770.
[4] W. Yin, A. Saadat Mehr, Identification of LPTV systems in the frequency domain, Digit. Signal Process. 21 (1) (2011) 25–35.
[5] J. Huang, Y. Shi, H.N. Huang, Z. Li, l-2–l-infinity filtering for multirate nonlinear sampled-data systems using T–S fuzzy models, Digit. Signal Process. 23 (1) (2013) 418–426.
[6] Y. Shi, T. Chen, Optimal design of multi-channel transmultiplexers with stopband energy and passband magnitude constraints, IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process. 50 (9) (2003) 659–662.
[7] Y. Shi, H. Fang, Kalman filter based identification for systems with randomly missing measurements in a network environment, Int. J. Control 83 (3) (2010) 538–551.
[8] H. Li, Y. Shi, Robust H-infinity filtering for nonlinear stochastic systems with uncertainties and random delays modeled by Markov chains, Automatica 48 (1) (2012) 159–166.
[9] L. Ljung, System Identification: Theory for the User, 2nd edn., Prentice-Hall, Englewood Cliffs, NJ, 1999.
[10] Y. Zhu, H. Telkamp, J. Wang, Q. Fu, System identification using slow and irregular output samples, J. Process Control 19 (1) (2009) 58–67.
[11] M. Gilson, P. Van den Hof, On the relation between a bias-eliminated least squares (BELS) and an IV estimator in closed-loop identification, Automatica 37 (10) (2001) 1593–1600.
[12] Y. Zhang, Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods, Math. Comput. Model. 53 (9–10) (2011) 1810–1819.
[13] Y. Zhang, G.M. Cui, Bias compensation methods for stochastic systems with colored noise, Appl. Math. Model. 35 (4) (2011) 1709–1716.
[14] Y.B. Hu, B.L. Liu, Q. Zhou, C. Yang, Recursive extended least squares parameter estimation for Wiener nonlinear systems with moving average noises, Circuits Syst. Signal Process. 33 (2) (2014) 655–664.
[15] Y.B. Hu, Iterative and recursive least squares estimation algorithms for moving average systems, Simul. Model. Pract. Theory 34 (2013) 12–19.
[16] J.H. Li, Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration, Appl. Math. Lett. 26 (1) (2013) 91–96.
[17] G.C. Goodwin, K.S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[18] F. Ding, Combined state and least squares parameter estimation algorithms for dynamic systems, Appl. Math. Model. 38 (1) (2014) 403–412.
[19] F. Ding, J. Ding, Least squares parameter estimation with irregularly missing data, Int. J. Adapt. Control Signal Process. 24 (7) (2010) 540–553.
[20] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Appl. Math. Comput. 215 (4) (2009) 1477–1483.
[21] J. Ding, C.X. Fan, J.X. Lin, Auxiliary model based parameter estimation for dual-rate output error systems with colored noise, Appl. Math. Model. 37 (6) (2013) 4051–4058.
[22] J. Ding, J.X. Lin, Modified subspace identification for periodically non-uniformly sampled systems by using the lifting technique, Circuits Syst. Signal Process. 33 (5) (2014) 1439–1449.
[23] F. Ding, State filtering and parameter estimation for state space systems with scarce measurements, Signal Process. 104 (2014) 369–380.
[24] F. Ding, Hierarchical estimation algorithms for multivariable systems using measurement information, Inf. Sci. 277 (2014) 396–405.
[25] H.B. Chen, W.G. Zhang, F. Ding, Data filtering based least squares iterative algorithm for parameter identification of output error autoregressive systems, Inf. Process. Lett. 104 (10) (2014) 573–578.
[26] D.Q. Wang, F. Ding, Input–output data filtering based recursive least squares parameter estimation for CARARMA systems, Digit. Signal Process. 20 (4) (2010) 991–999.
[27] D.Q. Wang, Least squares-based recursive and iterative estimation for output error moving average systems using data filtering, IET Control Theory Appl. 5 (14) (2011) 1648–1657.
[28] F. Ding, T. Chen, Combined parameter and output estimation of dual-rate systems using an auxiliary model, Automatica 40 (10) (2004) 1739–1748.
[29] F. Ding, System Identification – New Theory and Methods, Science Press, Beijing, 2013.
[30] F. Ding, System Identification – Performances Analysis for Identification Methods, Science Press, Beijing, 2014.
[31] F. Ding, Y. Gu, Performance analysis of the auxiliary model based least squares identification algorithm for one-step state delay systems, Int. J. Comput. Math. 89 (15) (2012) 2019–2028.
[32] Y.J. Liu, F. Ding, Y. Shi, An efficient hierarchical identification method for general dual-rate sampled-data systems, Automatica 50 (3) (2014) 962–970.
[33] G. Li, C. Wen, W.X. Zheng, Y. Chen, Identification of a class of nonlinear autoregressive models with exogenous inputs based on kernel machines, IEEE Trans. Signal Process. 59 (5) (2014) 2146–2159.
[34] G. Li, C. Wen, Convergence of normalized iterative identification of Hammerstein systems, Syst. Control Lett. 60 (11) (2011) 929–935.
[35] F. Ding, T. Chen, Performance bounds of the forgetting factor least squares algorithm for time-varying systems with finite measurement data, IEEE Trans. Circuits Syst. I, Regul. Pap. 52 (3) (2005) 555–566.
[36] C. Wang, T. Tang, Recursive least squares estimation algorithm applied to a class of linear-in-parameters output error moving average systems, Appl. Math. Lett. 29 (2014) 36–41.
[37] C. Wang, T. Tang, Several gradient-based iterative estimation algorithms for a class of nonlinear systems using the filtering technique, Nonlinear Dyn. 77 (3) (2014) 769–780.
[38] J. Vörös, Parameter identification of discontinuous Hammerstein systems, Automatica 33 (6) (1997) 1141–1146.
[39] J. Vörös, Iterative algorithm for parameter identification of Hammerstein systems with two-segment nonlinearities, IEEE Trans. Autom. Control 44 (11) (1999) 2145–2149.
[40] J. Vörös, Modeling and parameter identification of systems with multi-segment piecewise-linear characteristics, IEEE Trans. Autom. Control 47 (1) (2002) 184–188.
[41] J. Vörös, Recursive identification of Hammerstein systems with discontinuous nonlinearities containing dead-zones, IEEE Trans. Autom. Control 48 (12) (2003) 2203–2206.

[42] Y.B. Hu, B.L. Liu, Q. Zhou, A multi-innovation generalized extended stochastic gradient algorithm for output nonlinear autoregressive moving average systems, Appl. Math. Comput. 247 (2014) 218–224.

Feng Ding was born in Guangshui, Hubei Province, China. He received the B.Sc. degree from the Hubei University of Technology (Wuhan, China) in 1984, and the M.Sc. and Ph.D. degrees in automatic control, both from the Department of Automation, Tsinghua University, Beijing, in 1991 and 1994, respectively.

From 1984 to 1988, he was an Electrical Engineer at the Hubei Pharmaceutical Factory, Xiangfan, China. From 1994 to 2002, he was with the Department of Automation, Tsinghua University, Beijing, China, and he was a Research Associate at the University of Alberta, Edmonton, Canada, from 2002 to 2005. He was a Visiting Professor in the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, from May to December 2008, and a Research Associate in the Department of Aerospace Engineering, Ryerson University, Toronto, Canada, from January to October 2009.

He has been a Professor in the School of Internet of Things Engineering, Jiangnan University, Wuxi, China, since 2004. He is a Colleges and Universities "Blue Project" Middle-Aged Academic Leader (Jiangsu, China). His current research interests include model identification and adaptive control.

Yanjiao Wang was born in Donghai, Jiangsu Province, China. She received the B.Sc. degree from Southeast University (Nanjing, China) in 2013, and is now an M.Sc. student in the School of Internet of Things Engineering, Jiangnan University, Wuxi, China. Her interests include system modeling, system identification and process control.

Jie Ding received her Bachelor's degree and Ph.D. degree in 2006 and 2011, respectively, from Jiangnan University, Wuxi, China. From 2008 to 2009, she was a visiting Ph.D. student at the University of Saskatchewan, Saskatoon, Canada. She has been a faculty member of the Nanjing University of Posts and Telecommunications since 2011. She is a visiting scholar at the University of Virginia, USA, from September to November 2014. Her research interests include multirate system identification and process control.