Identification of Nonlinear Dynamical Systems Using Recurrent Neural Networks
Laxmidhar Behera, Swagat Kumar and Subhas Chandra Das
Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India
[email protected],
[email protected],
[email protected]
Abstract -- This paper discusses three learning algorithms to train recurrent neural networks for identification of nonlinear dynamical systems. We select the Memory Neural Network (MNN) topology for the recurrent network in our work. MNNs are themselves dynamical systems that have internal memory, obtained by adding trainable temporal elements to feedforward networks. Three learning procedures, namely Back Propagation Through Time (BPTT), Real Time Recurrent Learning (RTRL) and Extended Kalman Filtering (EKF), are used to adjust the weights in the MNN so that the network identifies the plant. The relative effectiveness of the different learning algorithms is discussed by comparing their mean square errors and the corresponding computational requirements. The simulation results show that the RTRL algorithm is efficient for training MNNs to model nonlinear dynamical systems when both computational complexity and modelling accuracy are considered. Even though the accuracy of system identification is best with EKF, it has the drawback of being computationally intensive.
1. INTRODUCTION
A recurrent network model with internal memory is best suited for identification of systems for which incomplete or no knowledge of the dynamics exists. In this sense, Memory Neuron Networks (MNN) [3] offer truly dynamic models for identification of nonlinear dynamic systems. The special feature of these networks is that they have internal trainable memory and can hence directly model dynamical systems without having to be explicitly fed with past inputs and outputs. Thus, they can identify systems whose order is unknown or systems with unknown delay. Here each network neuron has, associated with it, a memory neuron whose single scalar output summarizes the history of past activations of that unit. The weights of connections into the memory neurons involve feedback loops, so the overall network is a recurrent one. The primary aim of this paper is to analyse the different learning algorithms on the basis of modelling accuracy and computational intensity. The weight coefficients of the MNN are adjusted using a BPTT update algorithm.

Figure 1. Structure of Memory Neuron Model

To increase the modelling accuracy, two other algorithms, namely RTRL and EKF, have been proposed. It is concluded that RTRL identifies the system more efficiently when modelling accuracy as well as computational intensity are taken into account. The rest of the paper is organized as follows. The following section describes the architecture and dynamics of MNNs. In Section 3, the different learning algorithms, namely BPTT, RTRL [1] and EKF [5], are discussed in detail. The simulations carried out and the results are presented in Section 4. The final conclusions are given in Section 5.
2. MEMORY NEURAL NETWORK

In this section, the structure of the network is described. The network used is similar to the one described in [3]. The architecture of an MNN is shown in Figure 1. The memory neuron takes its input from the corresponding network neuron and also has a self-feedback; this leads to storage of past values of the network neuron in the memory neuron. In the output layer, each network neuron can have a cascade of memory neurons, and each of them sends its output to that network neuron in the output layer.

Dynamics of the network
The following notations are used to describe the functioning of the network.
Control Systems and Applications / 1121

L is the number of layers of the network, with layer 1 the input layer and layer L the output layer. N_l is the number of network neurons in layer l. z_j^l(k) is the net input to the jth network neuron of layer l at time k. s_j^l(k) is the output of the jth network neuron of layer l at time k. v_j^l(k) is the output of the memory neuron of the jth network neuron of layer l at time k, l < L. w_{ij}^l(k) is the connecting weight from the ith network neuron of layer l to the jth network neuron of layer l+1 at time k. f_{ij}^l(k) is the connecting weight from the memory neuron of the ith network neuron of layer l to the jth network neuron of layer l+1 at time k. alpha_j^l(k) is the connecting weight from the jth network neuron to its corresponding memory neuron. alpha_{jm}^L(k) is the connecting weight from the (m-1)th memory neuron to the mth memory neuron of the jth network neuron in the output layer at time k. v_{jm}^L(k) is the output of the mth memory neuron of the jth network neuron in the output layer at time k. beta_{jm}^L(k) is the connecting weight from the mth memory neuron of the jth network neuron in the output layer to that network neuron at time k. M_j is the number of memory neurons associated with the jth network neuron of the output layer. g(.) is the activation function of the network neurons.

The net input to the jth network neuron of layer l+1, 1 <= l < L, at time k is given by

  z_j^{l+1}(k) = \sum_{i=0}^{N_l} w_{ij}^l(k) s_i^l(k) + \sum_{i=1}^{N_l} f_{ij}^l(k) v_i^l(k)    (1)

In the above equation we assume that s_0^l(k) = 1 for all l, so that w_{0j}^l is the bias for the jth network neuron in layer l+1. The output of the network neuron is given by

  s_j^l(k) = g(z_j^l(k)),  1 < l <= L    (2)

where, by notation, v_{j0}^L = s_j^L. To ensure stability of the network dynamics, we impose the conditions 0 <= alpha_j^l, alpha_{jm}^L <= 1.
3. LEARNING ALGORITHMS

The different learning algorithms to be used for the MNN are described here. At each instant an input is supplied to the system and the output of the net is calculated using the dynamics of the MNN. A teaching signal is then obtained and is used to calculate the error at the output layer and to update all the weights in the network. The usual squared error is used:

  E(k) = (1/2) \sum_{j=1}^{N_L} (s_j^L(k) - y_j(k))^2    (8)

where y_j(k) is the teaching signal for the jth output node at time k.
Back Propagation Through Time Algorithm

The training algorithm using back propagation [4] in time for a recurrent net is based on the observation that the performance of such a net, for a fixed number of time steps N, is identical to the results obtained from a feedforward net with 2N layers of adjustable weights. The final equations for updating the weights are given below:

  w_{ij}^l(k+1) = w_{ij}^l(k) - eta e_j^{l+1}(k) s_i^l(k),  1 <= l < L    (9)

where eta is the step size and

  e_j^L(k) = (s_j^L(k) - y_j(k)) g'(z_j^L(k))    (10)

The activation function used for all hidden nodes is g1, and g2 for the output nodes:

  g1(z) = c1 (1 - e^{-k1 z}) / (1 + e^{-k1 z})    (3)
  g2(z) = c2 (1 - e^{-k2 z}) / (1 + e^{-k2 z})    (4)

Here c1, c2, k1 and k2 are the parameters of the activation functions. For the units in the output layer, the net input is given by

  z_j^L(k) = \sum_{i=0}^{N_{L-1}} w_{ij}^{L-1}(k) s_i^{L-1}(k) + \sum_{i=1}^{N_{L-1}} f_{ij}^{L-1}(k) v_i^{L-1}(k) + \sum_{m=1}^{M_j} beta_{jm}^L(k) v_{jm}^L(k)    (5)

The memory weights are updated as

  f_{ij}^l(k+1) = f_{ij}^l(k) - eta e_j^{l+1}(k) v_i^l(k),  1 <= l < L    (11)
The output of each memory neuron, except for those in the output layer, is given by

  v_j^l(k) = alpha_j^l(k) s_j^l(k-1) + (1 - alpha_j^l(k)) v_j^l(k-1)    (6)

For the memory neurons in the output layer,

  v_{jm}^L(k) = alpha_{jm}^L(k) v_{j,m-1}^L(k-1) + (1 - alpha_{jm}^L(k)) v_{jm}^L(k-1)    (7)

The weight updates above are the standard back propagation of error without considering the memory neurons. The updating of f is the same as that of w, except that the output of the corresponding memory neuron is used rather than that of the network neuron. The various memory coefficients are updated as given below:
  alpha_j^l(k+1) = alpha_j^l(k) - eta' (dE(k)/dv_j^l(k)) (dv_j^l(k)/dalpha_j^l)    (13)

  beta_{jm}^L(k+1) = beta_{jm}^L(k) - eta' e_j^L(k) v_{jm}^L(k)    (14)

  alpha_{jm}^L(k+1) = alpha_{jm}^L(k) - eta' (dE(k)/dv_{jm}^L(k)) (dv_{jm}^L(k)/dalpha_{jm}^L)    (15)

TENCON 2003 / 1122

where

  dv_j^l(k)/dalpha_j^l = s_j^l(k-1) - v_j^l(k-1)    (16)

  dE(k)/dv_{jm}^L(k) = beta_{jm}^L(k) e_j^L(k)    (17)

  dv_{jm}^L(k)/dalpha_{jm}^L = v_{j,m-1}^L(k-1) - v_{jm}^L(k-1)    (18)
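As an illustration of the coefficient updates, the sketch below applies one gradient step to a vector of memory coefficients and then projects the result back into (0, 1), as the stability condition requires. The margin `eps` is our own choice; the paper only states the open interval.

```python
import numpy as np

def update_memory_coeffs(alpha, grad, eta_p=0.1, eps=1e-6):
    """One step alpha <- alpha - eta' * dE/dalpha, then projection onto (0, 1)."""
    alpha = alpha - eta_p * grad
    # Project back into the open interval so the memory dynamics stay stable.
    return np.clip(alpha, eps, 1.0 - eps)
```

With eta' = 0.1 as in the simulations, a gradient step large enough to push a coefficient out of the interval is simply clipped back to the boundary.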
Two step-size parameters are used in the above equations: eta' for the memory coefficients and eta for the remaining weights. To ensure the stability of the network, we project the memory coefficients back into the interval (0, 1) if, after the above updating, they fall outside it.

Real Time Recurrent Learning Algorithm
This algorithm can be run on line, learning while sequences are being presented rather than after they are complete. It can thus deal with sequences of arbitrary length; there is no requirement to allocate memory proportional to the maximum sequence length. The notations used in this algorithm are as follows:
p_{ij}(k) = ds_j^L(k)/dw_{ij} denotes the sensitivity of the jth network output with respect to the weight being updated. For the output-layer weights it satisfies

  p_{ij}(k+1) = ds_j^L(k+1)/dw_{ij} = g'(z_j^L(k+1)) s_i^{L-1}(k+1)    (22)
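For the memory coefficients, the same idea applies: the sensitivity is carried forward in time alongside the state itself. For a single memory neuron v(k) = alpha s(k-1) + (1 - alpha) v(k-1), differentiating the recursion with respect to alpha gives r(k) = s(k-1) - v(k-1) + (1 - alpha) r(k-1). A minimal sketch in our own notation:

```python
def rtrl_alpha_sensitivity(s_seq, alpha):
    """Propagate dv(k)/dalpha online for a single memory neuron.

    v(k) = alpha*s(k-1) + (1-alpha)*v(k-1); keeping the recursive term gives
    r(k) = s(k-1) - v(k-1) + (1-alpha)*r(k-1).
    Returns the list of (v(k), r(k)) pairs.
    """
    v, r = 0.0, 0.0
    out = []
    for k in range(1, len(s_seq) + 1):
        s_prev = s_seq[k - 1]
        # Update the sensitivity first, using v(k-1) and r(k-1):
        r = s_prev - v + (1.0 - alpha) * r
        # Then advance the memory-neuron state:
        v = alpha * s_prev + (1.0 - alpha) * v
        out.append((v, r))
    return out
```

The exact derivative is available at every step, which is what makes the learning real-time: no backward pass over the stored sequence is needed.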
The update of f is the same as that of w, except that the output of the corresponding memory neuron is used rather than that of the network neuron. The various memory coefficients are updated as in the previous algorithm, with the difference that the learning is done in real time. The learning rule for this algorithm is derived as follows:

  w_{ij}(k+1) = w_{ij}(k) - eta dE(k+1)/dw_{ij}    (23)

where

  dE(k+1)/dw_{ij} = \sum_{p=1}^{N_L} (s_p^L(k+1) - y_p(k+1)) ds_p^L(k+1)/dw_{ij}    (24)

Here w_{ij} denotes the connection weight being updated. It may be noted that the sensitivity values at time k = 0 are initialised to zero. Also, depending on the plant equation, these values can be reinitialised to zero after a particular number of time steps. The memory weights are updated as

  f_{ij}^l(k+1) = f_{ij}^l(k) - eta e_j(k+1) p~_{ij}^l(k+1)    (25)

where the sensitivities p~ with respect to the memory weights obey a recursion of the form

  p~_{ij}^l(k+1) = g'(z_j^{l+1}(k+1)) [ v_i^l(k+1) + \sum_p w_{pj}^l(k+1) p~_{ip}^{l-1}(k+1) + \sum_p f_{pj}^l(k+1) q_{ip}^{l-1}(k+1) + \sum_m beta_{jm}(k+1) r_{im}(k+1) ]    (26)

in which q and r denote the corresponding sensitivities of the memory-neuron outputs, propagated through the memory-neuron dynamics (6) and (7).

Extended Kalman Filter Algorithm

The Extended Kalman Filter uses second-order training that processes and uses information about the shape of the training problem's underlying error surface. Williams [6] provides a detailed analytical treatment of EKF training of recurrent networks, and reports a four- to six-fold decrease, relative to RTRL, in the number of presentations of the training data for some simple finite-state machine problems. The EKF is a method of estimating the state vector; here the weight vector a(t) is considered as the state vector to be estimated. The MNN can be expressed by the following nonlinear system equations as a function of the input:

  a(t) = a(t-1)    (27)
  y_d = h[a(t)] + eps(t)    (28)

Here y_d is the desired output, y^(t) is the estimated output vector at time t, and eps(t) is assumed to be a white-noise vector with covariance matrix R(t). The covariance matrix is unknown a priori and has to be estimated; for this purpose, R(t) is assumed to be a diagonal matrix lambda I. The initial state a(0) is assumed to be a random vector. The following real-time learning algorithm [5] is used to update the weights:

  a^_i(t) = a^_i(t-1) + K_i(t) [y_d - y^(t)]    (29)

where K_i, the Kalman filter gain, is given by

  K_i(t) = P_i(t-1) H_i^T(t) [lambda(t) I + H_i(t) P_i(t-1) H_i^T(t)]^{-1}    (30)

  lambda(t) = lambda(t-1) + (1/T_win) [ (y_d - y^(t))^T (y_d - y^(t)) - lambda(t-1) ]    (31)

  P_i(t) = P_i(t-1) - K_i(t) H_i(t) P_i(t-1)    (32)

  H_i(t) = dh(a)/da_i evaluated at a = a^(t-1)    (33)

Note that all the P_i coefficients for the corresponding weights are initialised to unity.
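A hedged sketch of one EKF weight update for a scalar-output model follows, implementing Eqs. (29), (30) and (32) in a global form with a fixed noise parameter lambda (the running estimate of Eq. (31) is omitted, and all names are our own):

```python
import numpy as np

def ekf_update(a, P, H, y_d, y_hat, lam=1.0):
    """One EKF step for weight vector a (n,), covariance P (n, n).

    H is the measurement Jacobian dy/da (length n); y_d and y_hat are
    the desired and estimated scalar outputs.
    """
    H = H.reshape(1, -1)
    S = lam + (H @ P @ H.T).item()        # innovation variance, bracket of Eq. (30)
    K = (P @ H.T) / S                     # Kalman gain, Eq. (30)
    a = a + (K * (y_d - y_hat)).ravel()   # state (weight) update, Eq. (29)
    P = P - K @ H @ P                     # covariance update, Eq. (32)
    return a, P
```

For a linear measurement y = a * x the Jacobian is simply H = x, and repeated updates drive the weight estimate toward the true value while shrinking P, which is the second-order behaviour the paper attributes to EKF training.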
4. MNN FOR MODELLING OF DYNAMICAL SYSTEMS

A series-parallel model is obtained (for an SISO plant) by having a network with two input nodes, to which we feed u(k) and y_p(k-1). The single output of the net is y^_p(k). This identification system is shown in Figure 2.

  y^_p(k) = F(u(k), u(k-1), ..., y_p(k-1), y_p(k-2), ...)    (34)
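The series-parallel arrangement of Figure 2 can be sketched as a data-generation loop. The toy plant below is a hypothetical stand-in (the paper's example plant equations are not recoverable from this excerpt); any bounded SISO nonlinear map serves the illustration:

```python
import math

def plant(y_prev, u):
    # Hypothetical stand-in plant, NOT the paper's example plant.
    return 0.5 * y_prev / (1.0 + y_prev ** 2) + math.sin(u)

def series_parallel_data(u_seq):
    """Series-parallel identification: the model is fed u(k) and the
    *plant's* past output y_p(k-1), and is trained toward y_p(k)."""
    y = 0.0
    samples = []
    for u in u_seq:
        x = (u, y)            # network input: [u(k), y_p(k-1)]
        y = plant(y, u)       # plant output y_p(k) = teaching signal
        samples.append((x, y))
    return samples
```

Because the network receives the plant's own past output rather than its own prediction, training remains a one-step-ahead regression problem, which is what makes the series-parallel scheme convenient.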
For the first two-thirds of the remaining training time, the input is an independent and identically distributed (iid) sequence, uniform over [-2, 2], and for the rest of the training time the input is a single sinusoid. After the training, the output of the network is compared with that of the plant on a test signal for 1000 time steps. For the test phase, the following input is used:

  u(k) = sin(pi k / 25),                                         0 <= k < 250
       = 1.0,                                                    250 <= k < 500
       = -1.0,                                                   500 <= k < 750
       = 0.3 sin(pi k / 25) + 0.1 sin(pi k / 32) + 0.6 sin(pi k / 10),  750 <= k < 1000    (35)

Example 1: This indicates the ability of the MNN to learn a plant of unknown order.
Example 2: This is an MIMO plant with two inputs and two outputs.

Figure 2. System identification model
To model an m-input, p-output plant, a network with m+p inputs and p outputs is used. This is the case irrespective of the order of the system. The actual outputs of the plant at each instant are used as teaching signals.
Simulation
Two examples of nonlinear plants are identified by the MNN. The series-parallel model (Figure 2) is used for identification. Networks with only one hidden layer are used, so the notation n1:n denotes a network that has n1 hidden network neurons and n memory neurons per node in the output layer. The SISO plant has two inputs (u(k) and y_p(k-1)) and one output (y^_p(k)). The number of inputs to the identification model does not depend on the order of the plant.

Network parameters: The network size used for all examples and algorithms is a 6:1 network. The same learning rates are used for all problems, with eta = 0.2 and eta' = 0.1. The same activation functions, g1 for hidden nodes and g2 for output nodes, are used with c1 = c2 = k1 = k2 = 1. An attenuation constant is used on the plant output so that the teaching signal for the network is always in [-1, 1].

Training the network: 77000 time steps are used for training the network. The network is first trained for 2000 iterations on zero input, followed by the training inputs described above.
Figure 3. Plant and network output with BPTT algorithm
Figure 4. Plant and network output with RTRL algorithm
Figure 5. Plant and network output with EKF algorithm
The examples described have been simulated using all the algorithms discussed. Figures 3, 4 and 5 show the outputs of the plant and of the model network for Example 1. The mean square errors of all the algorithms for both examples are shown in Table 1.

Table 1. Mean square error

  Example        BPTT       RTRL       EKF
  Ex. 1          0.013293   0.006088   0.005753
  Ex. 2, o/p 1   0.006642   0.002595   0.001414
  Ex. 2, o/p 2   0.008460   0.001563   0.001258

5. CONCLUSIONS

Memory Neuron Networks offer truly dynamical models. The memory coefficients are modified online during the learning process. Here the network has a near-to-feedforward structure, which is useful for having an incremental learning algorithm that is fairly robust. We can consider the MNN to be a locally recurrent and globally feedforward architecture, intermediate between feedforward and general recurrent networks. The Back Propagation Through Time algorithm is not an online training process, but the Real Time Recurrent Learning algorithm is an online training algorithm with very good identification properties. The Extended Kalman Filter is a fast algorithm and shows comparable identification capabilities. It can be concluded from the graphs and the errors obtained in the previous section that the EKF algorithm is the best suited for modelling, while the approximate gradient descent is the least favourable. The complexity of computation increases from BPTT to RTRL to EKF. By introducing dynamics directly into the feedforward network structure, the MNN represents a unique class of dynamic model for identifying a generalized plant equation. From the extensive simulations of the different algorithms and the results obtained, we conclude that EKF is one of the best learning algorithms for this model; however, the complexity of the calculations involved increases as the error decreases, so future research in this field will hopefully proceed in that direction.

6. ACKNOWLEDGEMENTS

This work was funded by MHRD under the project MHRD-EER&D-2003-0042, titled "Adaptive nonlinear control: A foundational framework using classical and quantum algorithms".

REFERENCES

[1] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks", IEEE Trans. on Neural Networks, Vol. 1, No. 1, pp 4-27, 1990.
[2] K. S. Narendra and K. Parthasarathy, "Gradient methods for the optimization of dynamical systems containing neural networks", IEEE Trans. on Neural Networks, Vol. 2, No. 2, pp 252-262, 1991.
[3] P. S. Sastry, G. Santharam and K. P. Unnikrishnan, "Memory neuron networks for identification and control of dynamical systems", IEEE Trans. on Neural Networks, Vol. 5, No. 2, pp 306-319, 1994.
[4] R. J. Williams and D. Zipser, "Gradient-based learning algorithms for recurrent connectionist networks", Technical Report NU-CCS-90, Boston: Northeastern University, College of Computer Science, 1990.
[5] Youji Iiguni, Hideaki Sakai and Hidekatsu Tokumaru, "A real time learning algorithm for a multilayered neural network based on the Extended Kalman Filter", IEEE Trans. on Signal Processing, Vol. 40, No. 4, pp 959-966, 1992.
[6] R. J. Williams, "Some observations on the use of the Extended Kalman Filter as a recurrent network learning algorithm", Technical Report NU-CCS-92-1, Boston: Northeastern University, College of Computer Science, 1992.
[7] R. J. Williams, "Training recurrent networks using the Extended Kalman Filter", in International Joint Conference on Neural Networks, Baltimore, 1992, Vol. IV, pp 241-246.