Identification of Nonlinear Dynamical Systems Using Recurrent Neural Networks
Laxmidhar Behera, Swagat Kumar and Subhas Chandra Das
Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, 208016, INDIA
Phone: 0512-259-7198
[email protected], [email protected], [email protected]

Abstract - This paper discusses three learning algorithms to train Recurrent Neural Networks for identification of non-linear dynamical systems. We select the Memory Neuron Network (MNN) topology for the recurrent network in our work. MNNs are themselves dynamical systems that have internal memory, obtained by adding trainable temporal elements to feed-forward networks. Three learning procedures, namely Back-Propagation Through Time (BPTT), Real Time Recurrent Learning (RTRL) and Extended Kalman Filtering (EKF), are used for adjusting the weights in the MNN to train such networks to identify the plant. The relative effectiveness of the different learning algorithms is discussed by comparing the mean square error associated with each and the corresponding computational requirements. The simulation results show that the RTRL algorithm is efficient for training MNNs to model nonlinear dynamical systems when both computational complexity and modelling accuracy are considered. Even though the accuracy of system identification is best with EKF, it has the drawback of being computationally intensive.

1. INTRODUCTION

A recurrent network model with internal memory is best suited for identification of systems for which incomplete or no knowledge of the dynamics exists. In this sense, Memory Neuron Networks (MNN) [3] offer truly dynamic models for identification of nonlinear dynamic systems. The special feature of these networks is that they have internal trainable memory and can hence directly model dynamical systems without having to be explicitly fed with past inputs and outputs. Thus, they can identify systems whose order is unknown or systems with unknown delay. Here each network neuron has, associated with it, a memory neuron whose single scalar output summarizes the history of past activations of that unit. The weights of connections into the memory neurons involve feedback loops; the overall network is therefore a recurrent one. The primary aim of this paper is to analyse the different learning algorithms on the basis of modeling accuracy and computational intensity. The weight coefficients of the MNN are adjusted using a BPTT update


Figure 1. Structure of Memory Neuron Model

algorithm. To increase the modeling accuracy, two other algorithms, namely RTRL and EKF, have been proposed. It is concluded that RTRL identifies the system more efficiently when modeling accuracy as well as computational intensity are taken into account. The rest of the paper is organized as follows. The following section describes the architecture and dynamics of MNNs. In Section 3, the different learning algorithms, namely BPTT, RTRL [1] and EKF [5], are discussed in detail. The simulations carried out and the results obtained are presented in Section 4. The final conclusions are given in Section 5.

2. MEMORY NEURAL NETWORK

In this section, the structure of the network is described. The network used is similar to the one described in [3]. The architecture of an MNN is shown in Figure 1. Each memory neuron takes its input from the corresponding network neuron and also has a self feedback, which leads to storage of past values of the network neuron in the memory neuron. In the output layer, each network neuron can have a cascade of memory neurons, and each of them sends its output to that network neuron in the output layer.

Dynamics of the network

The following notations are used to describe the functioning of the network.

$L$ is the number of layers of the network, with layer 1 as the input layer and layer $L$ as the output layer.
$N_l$ is the number of network neurons in layer $l$.
$z_j^l(k)$ is the net input to the $j$th network neuron of layer $l$ at time $k$.
$s_j^l(k)$ is the output of the $j$th network neuron of layer $l$ at time $k$.
$v_j^l(k)$ is the output of the memory neuron of the $j$th network neuron of layer $l$ at time $k$, $l < L$.
$w_{ij}^l(k)$ is the connecting weight from the $i$th network neuron of layer $l$ to the $j$th network neuron of layer $l+1$ at time $k$.
$f_{ij}^l(k)$ is the connecting weight from the memory neuron of the $i$th network neuron of layer $l$ to the $j$th network neuron of layer $l+1$ at time $k$.
$\alpha_j^l(k)$ is the connecting weight from the $j$th network neuron of layer $l$ to its corresponding memory neuron.
$\alpha_{ij}^L(k)$ is the connecting weight from the $(j-1)$th memory neuron to the $j$th memory neuron of the $i$th network neuron in the output layer at time $k$.
$u_{ij}(k)$ is the output of the $j$th memory neuron of the $i$th network neuron in the output layer at time $k$.
$\beta_{ij}(k)$ is the connecting weight from the $j$th memory neuron of the $i$th network neuron in the output layer at time $k$.
$M_j$ is the number of memory neurons associated with the $j$th network neuron of the output layer.
$g(\cdot)$ is the activation function of the network neurons.

The net input to the $j$th network neuron of layer $l+1$, $1 \le l < L$, at time $k$ is given by

$$z_j^{l+1}(k) = \sum_{i=0}^{N_l} w_{ij}^l(k)\, s_i^l(k) + \sum_{i=1}^{N_l} f_{ij}^l(k)\, v_i^l(k) \qquad (1)$$

In the above equation we assume that $s_0^l(k) = 1$ for all $l$, so that $w_{0j}^l$ is the bias for the $j$th network neuron in layer $l+1$. The output of a network neuron is given by

$$s_j^l(k) = g\big(z_j^l(k)\big), \quad 1 < l \le L \qquad (2)$$

The activation function used for all hidden nodes is $g_1$, and $g_2$ is used for the output nodes:

$$g_1(x) = c_1\,\frac{1 - e^{-k_1 x}}{1 + e^{-k_1 x}} \qquad (3)$$

$$g_2(x) = c_2\,\frac{1 - e^{-k_2 x}}{1 + e^{-k_2 x}} \qquad (4)$$

Here $c_1$, $c_2$, $k_1$ and $k_2$ are the parameters of the activation functions. For the units in the output layer, the net input also includes the cascade of memory neurons:

$$z_j^L(k) = \sum_{i=0}^{N_{L-1}} w_{ij}^{L-1}(k)\, s_i^{L-1}(k) + \sum_{i=1}^{N_{L-1}} f_{ij}^{L-1}(k)\, v_i^{L-1}(k) + \sum_{p=1}^{M_j} \beta_{jp}(k)\, u_{jp}(k) \qquad (5)$$

The output of all the memory neurons, except those in the output layer, is given by

$$v_j^l(k) = \alpha_j^l(k)\, s_j^l(k) + \big(1 - \alpha_j^l(k)\big)\, v_j^l(k-1) \qquad (6)$$

For the memory neurons in the output layer,

$$u_{ij}(k) = \alpha_{ij}^L(k)\, u_{i,j-1}(k-1) + \big(1 - \alpha_{ij}^L(k)\big)\, u_{ij}(k-1) \qquad (7)$$

where, by notation, we have $u_{i0} = s_i^L$. To ensure stability of the network dynamics, we impose the conditions $0 \le \alpha_j^l, \alpha_{ij}^L \le 1$.
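To make the dynamics above concrete, the following sketch (a hypothetical NumPy implementation written for this exposition, not the authors' code) propagates one time step through a single-hidden-layer MNN. The class name, the weight initialisation and the one-step convention used for the output-layer memory cascade are assumptions.

```python
import numpy as np

def g(x, c=1.0, k=1.0):
    # Bipolar sigmoid of equations (3)-(4); c = k = 1 as in the simulations
    return c * (1.0 - np.exp(-k * x)) / (1.0 + np.exp(-k * x))

class MNN:
    """Single-hidden-layer memory neuron network (illustrative sketch)."""

    def __init__(self, n_in, n_hid, n_out, M=1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.uniform(-0.5, 0.5, (n_in + 1, n_hid))   # row 0 holds the bias w_0j
        self.f1 = rng.uniform(-0.5, 0.5, (n_in, n_hid))       # input memory -> hidden
        self.w2 = rng.uniform(-0.5, 0.5, (n_hid + 1, n_out))  # hidden -> output
        self.f2 = rng.uniform(-0.5, 0.5, (n_hid, n_out))      # hidden memory -> output
        self.beta = rng.uniform(-0.5, 0.5, (n_out, M))        # output memory cascade weights
        self.a1 = np.full(n_in, 0.5)        # memory coefficients alpha, kept in (0, 1)
        self.a2 = np.full(n_hid, 0.5)
        self.aL = np.full((n_out, M), 0.5)
        self.v1 = np.zeros(n_in)            # memory neuron states v
        self.v2 = np.zeros(n_hid)
        self.u = np.zeros((n_out, M))       # output-layer memory cascade u

    def step(self, x):
        # Memory neurons of the lower layers, equation (6)
        self.v1 = self.a1 * x + (1.0 - self.a1) * self.v1
        z1 = np.r_[1.0, x] @ self.w1 + self.v1 @ self.f1       # equation (1)
        s1 = g(z1)                                             # equation (2)
        self.v2 = self.a2 * s1 + (1.0 - self.a2) * self.v2
        # Output-layer net input includes the memory cascade, equation (5)
        z2 = np.r_[1.0, s1] @ self.w2 + self.v2 @ self.f2 + (self.beta * self.u).sum(axis=1)
        s2 = g(z2)
        # Cascade of output memory neurons, equation (7), with u_{i0} = s_i^L
        # (here the current output feeds u_{i0}; the paper uses the previous step)
        self.u = self.aL * np.c_[s2, self.u[:, :-1]] + (1.0 - self.aL) * self.u
        return s2
```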

3. LEARNING ALGORITHMS

The different learning algorithms used for the MNN are described here. At each instant an input is supplied to the system and the output of the net is calculated using the dynamics of the MNN. A teaching signal is then obtained and is used to calculate the error at the output layer and to update all the weights in the network. The usual squared error is used and is given by

$$E(k) = \frac{1}{2} \sum_{j=1}^{N_L} \big(y_j(k) - s_j^L(k)\big)^2 \qquad (8)$$

where $y_j(k)$ is the teaching signal for the $j$th output node at time $k$.

Back Propagation Through Time Algorithm

The training algorithm using back propagation [4] in time for a recurrent net is based on the observation that the performance of such a net for a fixed number of time steps $N$ is identical to the results obtained from a feed-forward net with $2N$ layers of adjustable weights. The final equations for updating the weights are given below:

$$w_{ij}^l(k+1) = w_{ij}^l(k) - \eta\, e_j^{l+1}(k)\, s_i^l(k), \quad 1 \le l < L \qquad (9)$$

where $\eta$ is the step size and

$$e_j^L(k) = \big(s_j^L(k) - y_j(k)\big)\, g'\big(z_j^L(k)\big) \qquad (10)$$

For the hidden layers, the error is back-propagated in the usual manner:

$$e_j^l(k) = g'\big(z_j^l(k)\big) \sum_{i=1}^{N_{l+1}} w_{ji}^l(k)\, e_i^{l+1}(k), \quad 1 < l < L \qquad (11)$$

The above is the standard back propagation of error without considering the memory neurons. The updating of $f$ is the same as that of $w$, except that the output of the corresponding memory neuron is used rather than that of the network neuron:

$$f_{ij}^l(k+1) = f_{ij}^l(k) - \eta\, e_j^{l+1}(k)\, v_i^l(k), \quad 1 \le l < L \qquad (12)$$

The various memory coefficients are updated as given below:

a f ; ( k + 1)

P:(k where

= a:('.)

-~

+ 1) = /3:(k)

ae

all&.

{ ~ ( -?-(k) k ) antj act:. -

q'ef(B:)v ; . ( k )

(14)

(15)

TENCON 2003 / 1122 -8"; (k)

=s;.(k-I) -u;.(k-l)

an;

-.ae (k)

(").

= @ & ( k )e f ( k )

au,L, aoL.

=~ & - ~ ( k1) - u&.(k - 1)

-?-(k)

anf;

(19)

Two step-size parameters are used in the above equations: $\eta'$ for the memory coefficients and $\eta$ for the remaining weights. To ensure the stability of the network, we project the memory coefficients back into the interval $(0, 1)$ if, after the above updating, they fall outside it.
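As an illustration of how equations (9)-(19) translate into an update step, the sketch below (hypothetical, written against the MNN class sketched in Section 2) applies one gradient step to the output-layer weights and memory coefficients only, including the projection of the memory coefficients back into $(0,1)$. Here `u_prev` stands for the previous-step inputs to the memory cascade, which the caller is assumed to cache.

```python
def g_prime(s, c=1.0, k=1.0):
    # Derivative of the bipolar sigmoid, written in terms of its output s = g(z)
    return (k / (2.0 * c)) * (c**2 - s**2)

def output_layer_update(net, s1, s2, u_prev, y, eta=0.2, eta_m=0.1):
    """One output-layer update in the spirit of eqs (9)-(19) (sketch, not the authors' code)."""
    eL = (s2 - y) * g_prime(s2)                    # output error term, equation (10)
    net.w2 -= eta * np.outer(np.r_[1.0, s1], eL)   # weight update, equation (9)
    net.f2 -= eta * np.outer(net.v2, eL)           # f updated like w, with memory outputs, eq. (12)
    net.beta -= eta_m * eL[:, None] * net.u        # equation (14)
    # Equations (15), (18), (19): dE/dalpha = beta * e * (u_{j-1}(k-1) - u_j(k-1))
    net.aL -= eta_m * net.beta * eL[:, None] * (u_prev - net.u)
    # Project the memory coefficients back into (0, 1) to preserve stability
    net.aL = np.clip(net.aL, 1e-6, 1.0 - 1e-6)
    return eL
```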

Real Time Recurrent Learning Algorithm

This algorithm can be run on-line, learning while sequences are being presented rather than after they are complete. It can thus deal with sequences of arbitrary length; there is no requirement to allocate memory proportional to the maximum sequence length. Instead of unfolding the network in time, the sensitivities of the network outputs with respect to each weight are propagated forward. For a weight $w_{ij}$ feeding the $j$th neuron of layer $l$, the sensitivity is

$$p_{ij}^l(k+1) = \frac{\partial s_j^l(k+1)}{\partial w_{ij}} = g'\big(z_j^l(k+1)\big)\, s_i^{l-1}(k+1)$$

The learning rule for this algorithm is derived as follows:

$$w_{ij}(k+1) = w_{ij}(k) - \eta\, \frac{\partial E}{\partial w_{ij}}(k+1) \qquad (23)$$

$$\frac{\partial E}{\partial w_{ij}}(k+1) = \sum_{p=1}^{N_L} \big(s_p^L(k+1) - y_p(k+1)\big)\, p_{ij}^p(k+1) \qquad (24)$$

$$f_{ij}(k+1) = f_{ij}(k) - \eta\, e(k+1)\, q_{ij}(k+1) \qquad (25)$$

where $p_{ij}^p = \partial s_p^L / \partial w_{ij}$ and $q_{ij} = \partial s_p^L / \partial f_{ij}$ are the output sensitivities with respect to the network weights and the memory weights respectively. The output-layer sensitivities are propagated through the recursion

$$p_{ij}^p(k+1) = g'\big(z_p^L(k+1)\big)\Big[\, s_i^{L-1}(k+1) + \sum_{q=1}^{N_{L-1}} \Big( w_{qp}^{L-1}(k+1)\, P_{ij}^q(k+1) + f_{qp}^{L-1}(k+1)\, Q_{ij}^q(k+1) \Big) + \beta_p(k+1)\, R_{ij}(k+1) \Big] \qquad (26)$$

where $P$, $Q$ and $R$ denote the sensitivities of the hidden network neurons, of the hidden memory neurons and of the output-layer memory neurons respectively, each obtained from a similar forward recursion. It may be noted that all these sensitivity values are initialised to zero at time $k = 0$; depending on the plant equation, they can also be re-initialised to zero after a particular number of time steps. The update of $f$ is the same as that of $w$, except that the output of the corresponding memory neuron is used rather than that of the network neuron. The various memory coefficients are updated as in the previous algorithm, with the difference that the learning is in real time.
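The essential structure of RTRL, independent of the particular network, is the pairing of a forward pass with a forward sensitivity recursion and an immediate weight update. The sketch below (hypothetical helper names, not the authors' code) shows that loop for a scalar-output network whose user-supplied `sensitivity_fn` implements a recursion such as equation (26).

```python
def rtrl_train(step_fn, sensitivity_fn, w, p0, inputs, targets, eta=0.2):
    """Generic online RTRL loop (sketch).

    step_fn(w, x)           -> scalar network output s(k), as in eqs (1)-(7)
    sensitivity_fn(w, x, p) -> updated sensitivity vector p = ds/dw, as in eq. (26)
    """
    p = p0                              # sensitivities initialised to zero at k = 0
    for x, y in zip(inputs, targets):
        s = step_fn(w, x)               # forward pass
        p = sensitivity_fn(w, x, p)     # forward propagation of ds/dw
        w = w - eta * (s - y) * p       # immediate online update, eqs (23)-(24)
    return w
```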

Extended Kalman Filter Algorithm

The Extended Kalman Filter is a second-order training method that uses information about the shape of the training problem's underlying error surface. Williams [6] provides a detailed analytical treatment of EKF training of recurrent networks, and reports a four- to six-fold decrease, relative to RTRL, in the number of presentations of the training data for some simple finite state machine problems. The EKF is a method of estimating the state vector; here the weight vector $a(t)$ is considered as the state vector to be estimated. The MNN can be expressed by the following nonlinear system equations as a function of the input:

$$a(t) = a(t-1) \qquad (27)$$

$$y_d(t) = h\big[a(t)\big] + \varepsilon(t) \qquad (28)$$

Here $y_d$ is the desired output, $\hat{y}(t)$ is the estimated output vector based on the weight estimate at time $t-1$, and $\varepsilon(t)$ is assumed to be a white noise vector with covariance matrix $R(t)$. The covariance matrix is unknown a priori and has to be estimated; for this purpose, $R(t)$ is assumed to be a diagonal matrix $\lambda I$. The initial state $a(0)$ is assumed to be a random vector. The following real-time learning algorithm [5] is used to update the weights:

$$\hat{a}_i(t) = \hat{a}_i(t-1) + K_i(t)\,\big[y_d(t) - \hat{y}(t)\big] \qquad (29)$$

where $K_i(t)$, the Kalman filter gain, is given by

$$K_i(t) = \frac{1}{\lambda(t)}\, P_i(t-1)\, H_i^T(t)\, \Big[ I + \frac{1}{\lambda(t)}\, H_i(t)\, P_i(t-1)\, H_i^T(t) \Big]^{-1} \qquad (30)$$

$$\lambda(t) = \lambda(t-1) + \frac{1}{t}\Big[ \frac{\big(y_d(t) - \hat{y}(t)\big)^T \big(y_d(t) - \hat{y}(t)\big)}{N_L} - \lambda(t-1) \Big] \qquad (31)$$

$$P_i(t) = P_i(t-1) - K_i(t)\, H_i(t)\, P_i(t-1) \qquad (32)$$

where

$$H_i(t) = \frac{\partial h}{\partial a_i}\bigg|_{a = \hat{a}(t-1)} \qquad (33)$$

Note that all the $P_i$ matrices for the corresponding weights are initialised to unity.
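A minimal sketch of the weight update of equations (29)-(33) is given below; it assumes the Jacobian $H_i$ of the network output with respect to the weight group is supplied by the caller (for instance from the RTRL sensitivities), and the function and variable names are illustrative.

```python
def ekf_update(a, P, lam, H, y_d, y_hat, t):
    """One EKF weight-group update following eqs (29)-(33) (sketch, not the authors' code).

    a: weight estimate, P: its error covariance (initialised to the identity),
    lam: running noise-level estimate, H: Jacobian of the output w.r.t. a.
    """
    err = y_d - y_hat
    lam = lam + (err @ err / len(err) - lam) / t   # noise-level estimate, eq. (31)
    S = np.eye(len(err)) + H @ P @ H.T / lam       # innovation covariance term
    K = (P @ H.T / lam) @ np.linalg.inv(S)         # Kalman gain, eq. (30)
    a = a + K @ err                                # state (weight) update, eq. (29)
    P = P - K @ H @ P                              # covariance update, eq. (32)
    return a, P, lam
```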

4. MNN FOR MODELLING OF DYNAMICAL SYSTEMS

A series-parallel model is obtained (for an SISO plant) by having a network with two input nodes, to which we feed $u(k)$ and $y_p(k-1)$. The single output of the net is $\hat{y}_p(k)$. This identification system is shown in Figure 2.

$$\hat{y}_p(k) = F\big(u(k), u(k-1), \ldots, y_p(k-1), y_p(k-2), \ldots\big) \qquad (34)$$

During training, the network is first presented with zero input; then, for two-thirds of the remaining training time, the input is an independent and identically distributed (iid) sequence uniform over $[-2, 2]$, and for the rest of the training time the input is a single sinusoid. After the training, the output of the network is compared with that of the plant on a test signal for 1000 time steps. For the test phase, the following input is used:

$$u(k) = \begin{cases} \sin(2\pi k/250), & 0 \le k < 250 \\ 1.0, & 250 \le k < 500 \\ -1.0, & 500 \le k < 750 \\ 0.3\sin(2\pi k/25) + 0.1\sin(2\pi k/32) + 0.6\sin(2\pi k/10), & 750 \le k < 1000 \end{cases} \qquad (35)$$


Example 1: This example demonstrates the ability of the MNN to learn a plant of unknown order.


Example 2: This is a MIMO plant with two inputs and two outputs.

Figure 2. System identification model


To model an m-input, p-output plant, a network with m+p inputs and p outputs will be used. This is the case irrespective of the order of the system. The actual outputs of the plant at each instant are used as teaching signals, as in the sketch below.
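Concretely, the series-parallel scheme of Figure 2 amounts to the loop sketched below (hypothetical; `net.step` is the one-step forward pass sketched in Section 2, `plant` is a user-supplied plant simulator, and `train_fn` applies one of the weight updates of Section 3): the network receives $u(k)$ and the delayed plant output $y_p(k-1)$, and the actual plant output serves as the teaching signal.

```python
def identify(net, plant, u_seq, train_fn):
    """Series-parallel identification loop for an SISO plant (sketch)."""
    y_prev = 0.0
    sq_errors = []
    for k, u in enumerate(u_seq):
        y_hat = net.step(np.array([u, y_prev]))      # network sees u(k) and y_p(k-1)
        y_true = plant(u, k)                         # actual plant output = teaching signal
        train_fn(net, y_hat, np.atleast_1d(y_true))  # update weights with chosen algorithm
        sq_errors.append(float(((y_hat - y_true) ** 2).mean()))
        y_prev = y_true                              # series-parallel: feed back plant output
    return float(np.mean(sq_errors))                 # mean square error over the run
```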

Simulation


Two examples of nonlinear plants are identified by the MNN. The series-parallel model (Figure 2) is used for identification. Networks with only one hidden layer are used, so the notation m:n denotes a network that has m hidden network neurons and n memory neurons per node in the output layer. The SISO plant has two inputs ($u(k)$ and $y_p(k-1)$) and one output ($\hat{y}_p(k)$). The number of inputs to the identification model does not depend on the order of the plant.

Network parameters: The network size used for all examples and algorithms is a 6:1 network. The same learning rates are used for all problems, with $\eta = 0.2$ and $\eta' = 0.1$. The same activation functions, $g_1$ for hidden nodes and $g_2$ for output nodes, are used with $c_1 = c_2 = k_1 = k_2 = 1$. An attenuation constant is used on the plant output so that the teaching signal for the network is always in $[-1, 1]$.

Training the network: 77,000 time steps are used for training the network, beginning with 2000 iterations on zero input; the remainder of the training sequence and the test signal are as described at the start of this section.

Figure 3. Plant and network output with BPTT algorithm


Figure 4. Plant and network output with RTRL algorithm



Figure 5. Plant and network output with EKF algorithm

The examples described have been simulated using all the algorithms discussed. Figure 3, Figure 4 and Figure 5 show the outputs of the plant and of the model network for Example 1. The mean square error of each algorithm, for both examples, is given in Table 1.

Table 1. Mean square error

Example           RTRL        EKF         BPTT
Ex. No. 1         0.006088    0.001414    0.001258
Ex. No. 2, o/p 1  0.006642    0.002595    0.001563
Ex. No. 2, o/p 2  0.005753    0.008460    0.013293

5. CONCLUSIONS

Memory Neuron Networks offer truly dynamical models. The memory coefficients are modified online during the learning process. The network has a near feed-forward structure, which is useful for having an incremental learning algorithm that is fairly robust. We can consider the MNN to be a locally recurrent and globally feed-forward architecture, intermediate between feed-forward and general recurrent networks. The Back Propagation Through Time algorithm is not an online training process, but the Real Time Recurrent Learning algorithm is an online training algorithm with very good identification properties. The Extended Kalman Filter is a fast algorithm and shows comparable identification capabilities. It can be concluded from the graphs and the errors obtained in the previous section that the EKF algorithm is the best suited for modeling, while the approximate gradient descent is the least favourable. The complexity of computation increases from BPTT to RTRL to EKF. By introducing dynamics directly into the feed-forward network structure, the MNN represents a unique class of dynamic model for identifying a generalized plant equation. From the extensive simulations of the different algorithms and the results obtained, we conclude that EKF is one of the best learning algorithms for this model. However, the complexity of the calculations involved increases as the error decreases, and future research in this field will hopefully address this trade-off.

6. ACKNOWLEDGEMENTS

This work was funded by MHRD under the project MHRD-EE-R&D-20030042, titled "Adaptive non-linear control - a foundational framework using classical and quantum algorithms".

REFERENCES

[1] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks", IEEE Trans. on Neural Networks, Vol. 1, No. 1, pp. 4-27, 1990.

[2] K. S. Narendra and K. Parthasarathy, "Gradient methods for the optimization of dynamical systems containing neural networks", IEEE Trans. on Neural Networks, Vol. 2, No. 2, pp. 252-262, 1991.

[3] P. S. Sastry, G. Santharam and K. P. Unnikrishnan, "Memory neuron networks for identification and control of dynamical systems", IEEE Trans. on Neural Networks, Vol. 5, No. 2, pp. 306-319, 1994.

[4] R. J. Williams and D. Zipser, "Gradient-based learning algorithms for recurrent connectionist networks", Technical Report NU-CCS-90, Boston: Northeastern University, College of Computer Science, 1990.

[5] Y. Iiguni, H. Sakai and H. Tokumaru, "A real-time learning algorithm for a multilayered neural network based on the Extended Kalman Filter", IEEE Trans. on Signal Processing, Vol. 40, No. 4, pp. 959-966, 1992.

[6] R. J. Williams, "Some observations on the use of the Extended Kalman Filter as a recurrent network learning algorithm", Technical Report NU-CCS-92-1, Boston: Northeastern University, College of Computer Science, 1992.

[7] R. J. Williams, "Training recurrent networks using the Extended Kalman Filter", in International Joint Conference on Neural Networks, Baltimore, 1992, Vol. II, pp. 241-246.
