2009 International Workshop on Nonlinear Circuits and Signal Processing NCSP'09, Waikiki, Hawaii, March 1-3, 2009

ARTIFICIAL NEURAL NETWORK BASED MODEL & STANDARD PARTICLE SWARM OPTIMIZATION FOR INDOOR POSITIONING SYSTEMS

Syahrulanuar NGAH, Kui-Ting CHEN, Yasunori DAIDO, Takaaki BABA
Graduate School of Information, Production and System, Waseda University, Japan
Phone: +81-93-692-5017  Fax: +81-93-692-5021
Email: [email protected], [email protected], daido [email protected], [email protected]

Abstract

Indoor positioning based on radio signals has attracted growing interest in recent years. Artificial Neural Networks (ANN) and Particle Swarm Optimization (PSO) are two of the many algorithms that have been proposed for position estimation under various environments. In this paper, an ANN based model and the standard PSO algorithm are used to convert received radio signals into position information. Besides detection accuracy, the convergence time of the two algorithms is also compared. Both algorithms show the same result patterns: the NN based model produces more accurate results than PSO, but it takes longer because of the training process.

1. Introduction

Figure 1: Positioning Under Noisy Environment

Research on positioning technologies is gaining popularity. The Global Positioning System (GPS) is one of the best known and most widely used systems, but it does not work well indoors. Many algorithms have therefore been proposed for indoor positioning based on radio signals. In this paper, a Neural Network based model and standard Particle Swarm Optimization are used to convert the radio signals received from the locators into the target position in an indoor environment. Under ideal conditions, the target location can easily be obtained in three dimensions with four locators based on Time of Arrival (TOA) measurements. In real-world applications, however, the measured distance error is unavoidable; it cannot be removed and is usually treated as noise. In addition, the target position can change over time, the variation may occur in one or more dimensions [1], and the ambient noise can change at any time. The remainder of this paper is organized as follows. Section 2 summarizes the background of NN and PSO. Section 3 discusses the process of detecting and measuring the target. Simulation results are presented in section 4, and section 5 concludes the paper.
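To illustrate the ideal case mentioned above, the following minimal sketch recovers a 3D position from noise-free TOA distances to four locators by linearizing the sphere equations. The locator layout and target position are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Ideal (noise-free) TOA example: four locators at known positions and the
# exact distances to a target uniquely determine the 3-D position.
locators = np.array([[0.0, 0.0, 0.0],
                     [10.0, 0.0, 0.0],
                     [0.0, 10.0, 0.0],
                     [0.0, 0.0, 10.0]])
target = np.array([3.0, 7.0, 2.0])
d = np.linalg.norm(locators - target, axis=1)   # ideal TOA distances

# Subtracting the first sphere equation from the others gives a linear system:
# 2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 + d_0^2 - d_i^2
A = 2.0 * (locators[1:] - locators[0])
b = (np.sum(locators[1:] ** 2, axis=1) - np.sum(locators[0] ** 2)
     + d[0] ** 2 - d[1:] ** 2)
x = np.linalg.solve(A, b)
print(x)   # recovers [3., 7., 2.] up to floating-point error
```

With noisy distances (the realistic case treated in the rest of the paper) this linear system no longer has an exact solution, which is why learning-based and search-based estimators are considered.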


Figure 2: Three-layer Neural Network

Figure 3: Average positioning error with number of locators = 3

Figure 4: Average positioning error with Ed = 1

2. Background

2.1. Neural Network

The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts [2]. An ANN consists of several layers: an input layer, a number of hidden layers and an output layer. The input layer and the hidden layer are connected by synaptic links called weights, and the hidden layer and the output layer likewise have connection weights. When more than one hidden layer exists, weights also exist between those layers. Neural networks use a "learning" rule by which the connection weights are determined so as to minimize the error between the network output and the desired output.

2.2. Particle Swarm Optimization

PSO is an evolutionary computation technique based on a swarm of particles, introduced by Eberhart and Kennedy [3]. It has been used to solve many optimization problems since it was proposed. The PSO algorithm is easy to implement and has only a few parameters. It starts with a random population, uses a fitness value to evaluate and update the population, and searches for the optimum with a random technique, which is similar to other population based optimization methods such as the Genetic Algorithm (GA) [4]. Particles can be considered as agents flying through the problem space looking for the solution.

3. Position Estimation

Both algorithms detect the target in three dimensions (3D) within a 10m x 10m x 10m space. In this case, at least three locators are needed to provide the target position. The setup of the NN based model and standard PSO is almost the same as in [5]. The differences between this paper and [5] are that the number of particles is increased from 5 to 15 and both algorithms measure the target in 3D, whereas [5] works only in 2D. The setup of both algorithms for detecting the target is as follows.

3.1. NN based model

A three-layer network architecture is used in this simulation. The number of neurons in the input layer is equal to the number of locators used to detect the target. To ensure that the network has the ability to learn, the hidden layer consists of twenty-four (24) neurons. Since the output of this simulation is in 3D, the number of neurons in the output layer is three (3). However, the network is useless until it has been trained to solve the particular problem. A gradient based algorithm, namely back propagation, is chosen to train the network. During training, the network learns the input-output relationship: the error at the output is propagated back towards the input layer to determine the correct connection weights between the input and hidden layers and between the hidden and output layers. The network is trained with 500 data samples to ensure that it can detect the target properly. During training, the connection weights keep changing until the desired output according to the training data is achieved. Once the connections are determined, the network is ready to be used. Any change to the network configuration, including a change in the number of locators that act as inputs to the network, requires the network to be trained again to identify the best connection weights.
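The sketch below trains such a three-layer network (inputs = number of locators, 24 hidden neurons, 3 outputs) with back propagation on 500 noisy distance measurements. The locator coordinates, the tanh hidden activation, the input scaling, the learning rate and the number of epochs are assumptions made for the sketch; the paper does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical locator layout; the paper does not give locator coordinates.
locators = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 10.0]])

def measure(targets, ed=0.2):
    """Noisy distance from every locator to every target (cf. Eq. 4)."""
    d = np.linalg.norm(targets[:, None, :] - locators[None, :, :], axis=2)
    return d + ed * rng.random(d.shape)

# 500 random training targets inside the 10m x 10m x 10m space (Section 3.1).
train_targets = rng.random((500, 3)) * 10.0
train_inputs = measure(train_targets) / 10.0   # scale distances for stable training

# Three-layer network: one input per locator, 24 hidden neurons, 3 outputs.
n_in, n_hid, n_out = locators.shape[0], 24, 3
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
b2 = np.zeros(n_out)
lr = 0.005

for epoch in range(8000):
    h = np.tanh(train_inputs @ W1 + b1)        # hidden layer
    y = h @ W2 + b2                            # linear output layer
    err = y - train_targets
    # Back propagation: push the output error back through both weight layers.
    g_y = 2.0 * err / len(train_inputs)
    g_W2, g_b2 = h.T @ g_y, g_y.sum(axis=0)
    g_h = (g_y @ W2.T) * (1.0 - h ** 2)
    g_W1, g_b1 = train_inputs.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * g_W1
    b1 -= lr * g_b1
    W2 -= lr * g_W2
    b2 -= lr * g_b2

# A trained network maps new distance measurements straight to a 3-D position.
test_target = rng.random((1, 3)) * 10.0
estimate = np.tanh((measure(test_target) / 10.0) @ W1 + b1) @ W2 + b2
print("positioning error (m):", np.linalg.norm(estimate - test_target))
```

Because the input dimension equals the number of locators, adding or removing a locator changes the network shape and the training loop above would have to be run again, as noted in the text.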


3.2. Standard PSO

The general equations for updating the velocity and position of a particle in PSO can be written as:

v_i^{t+1} = ω ∗ v_i^t + c1 ∗ r1 ∗ (P_lb − x_i^t) + c2 ∗ r2 ∗ (P_gb − x_i^t)   (1)

x_i^{t+1} = v_i^{t+1} + x_i^t   (2)

ω = ωmax − t ∗ (ωmax − ωmin)/T   (3)

where:

• T is the number of iterations = 50.
• The number of particles = 15.
• c1 and c2 are acceleration constants = 2.
• r1 and r2 are random numbers between 0 and 1.
• ωmax = 0.5 and ωmin = 0.45.
• t is the current iteration.
• P_lb is the particle local best.
• P_gb is the particle global best.

The position of a target deployed in the space can be measured using the TOA technique, which can be written mathematically as:

T_p = sqrt((p_x − x)^2 + (p_y − y)^2 + (p_z − z)^2) + Ed ∗ rand   (4)

where:

• x, y and z are the target coordinates.
• p_x, p_y and p_z are the locator coordinates.
• Ed is the noise level.
• rand is a random number between 0 and 1.
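A minimal sketch of this estimator, using the parameters listed above, is shown next. The paper does not state the fitness function explicitly; here it is assumed to be the squared mismatch between the measured distances of Eq. (4) and the distances from a candidate position to the locators. The locator coordinates and the target are also assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 locators in the 10m x 10m x 10m space and one target.
locators = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 10.0]])
target = np.array([3.0, 7.0, 2.0])
Ed = 0.2  # noise level in metres

# Eq. (4): noisy TOA distance from each locator to the target.
measured = np.linalg.norm(locators - target, axis=1) + Ed * rng.random(len(locators))

def fitness(pos):
    """Assumed fitness: squared mismatch between measured and candidate distances."""
    d = np.linalg.norm(locators - pos, axis=1)
    return np.sum((d - measured) ** 2)

# Standard PSO with the parameters of Eqs. (1)-(3).
n_particles, T = 15, 50
c1 = c2 = 2.0
w_max, w_min = 0.5, 0.45

x = rng.random((n_particles, 3)) * 10.0           # particle positions
v = np.zeros((n_particles, 3))                    # particle velocities
p_lb = x.copy()                                   # local best positions
lb_fit = np.array([fitness(p) for p in x])
p_gb = p_lb[np.argmin(lb_fit)].copy()             # global best position

for t in range(T):
    w = w_max - t * (w_max - w_min) / T           # Eq. (3): decreasing inertia
    r1 = rng.random((n_particles, 1))
    r2 = rng.random((n_particles, 1))
    v = w * v + c1 * r1 * (p_lb - x) + c2 * r2 * (p_gb - x)   # Eq. (1)
    x = x + v                                                  # Eq. (2)
    fit = np.array([fitness(p) for p in x])
    better = fit < lb_fit
    p_lb[better], lb_fit[better] = x[better], fit[better]
    p_gb = p_lb[np.argmin(lb_fit)].copy()

print("estimated target:", p_gb, " true target:", target)
```

Running this loop repeatedly with freshly drawn targets and averaging the final error gives the kind of statistic reported in Section 4.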

Table 1: Average positioning error

Number of    Noise      Average positioning error (m)
locators     Ed (m)     NN        PSO
3            0.2        0.432     0.490
3            0.4        0.621     0.712
3            0.6        0.783     0.850
3            0.8        0.932     0.982
3            1.0        1.091     1.112
4            0.2        0.282     0.292
4            0.4        0.415     0.469
4            0.6        0.555     0.622
4            0.8        0.667     0.762
4            1.0        0.744     0.890
5            0.2        0.253     0.281
5            0.4        0.398     0.454
5            0.6        0.516     0.603
5            0.8        0.656     0.742
5            1.0        0.707     0.876

Table 2: Average convergence times (seconds, over 1000 simulations)

Number of    NN training time     NN        PSO
locators     (500 samples)        total
3            21.56                22.031    13.32
4            21.39                21.980    13.15
5            21.43                21.930    13.47

4. Simulation Results

Unlike the NN, the PSO algorithm does not need to be trained; once all the parameters are set up, the algorithm is ready to be used. Each algorithm is simulated 1000 times, and the tag position is randomly placed for each simulation. The accuracy and convergence times of the two algorithms are then compared. The average positioning error used to measure accuracy is

E_Average = Σ_{i=1}^{1000} sqrt((X − x)^2 + (Y − y)^2 + (Z − z)^2) / 1000   (5)

where:

• X, Y and Z are the positions calculated by the algorithms in the X, Y and Z directions.
• x, y and z are the exact target positions in the X, Y and Z directions.
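As an illustration, the short sketch below evaluates Eq. (5) over repeated random target placements. The estimator here is a hypothetical placeholder standing in for either the NN based model or standard PSO.

```python
import numpy as np

rng = np.random.default_rng(2)

def placeholder_estimator(true_pos):
    """Hypothetical stand-in for the NN or PSO estimator: returns the true
    position corrupted by a small random offset."""
    return true_pos + 0.5 * (rng.random(3) - 0.5)

N = 1000
errors = np.empty(N)
for i in range(N):
    true_pos = rng.random(3) * 10.0             # random tag position in the 10m cube
    est = placeholder_estimator(true_pos)
    errors[i] = np.linalg.norm(est - true_pos)  # sqrt((X-x)^2 + (Y-y)^2 + (Z-z)^2)

print("average positioning error, Eq. (5):", errors.mean())
```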

Figure 3 shows the average positioning error achieved by both algorithms when the number of locators is equal to three. The two algorithms produce almost identical patterns, in which the performance decreases as Ed increases from 0.2m to 1m, but the accuracy of the NN based model is around 2% to 9% better than that of standard PSO. Figure 4 shows a similar pattern for both algorithms, with the average positioning error decreasing as the number of locators increases from 3 to 5. Once again the NN based model produces better results, by almost 17% when the number of locators is equal to 5. Overall, both figures clearly show that the NN based model outperforms standard PSO in terms of accuracy.

Table 1 summarizes the average positioning error for the NN based model and standard PSO. The accuracy of both algorithms increases when the number of locators increases and decreases when the noise value increases.


Again, for all numbers of locators and noise values used in these simulations, the NN based model always achieves better accuracy than standard PSO.

Table 2 shows the average time taken by both algorithms to converge. The number of locators does not affect the convergence time of either algorithm. If the time taken to train the network is not taken into account, the NN based model converges about 28 times faster than standard PSO. For the whole process, however, standard PSO consumes less time than the NN based model: the time taken by standard PSO is 59% to 61% of that taken by the NN based model.

5. Conclusions

In this paper, each algorithm shows its own advantages. In terms of accuracy, the NN based model produces 2% to 17% more accurate results than standard PSO as the number of locators is increased from 3 to 5 and the noise value is increased from 0.2m to 1m while detecting a target in an indoor environment. The advantage of standard PSO is that it needs less time to complete the whole process: it takes almost 41% less time than the NN based model from the beginning until convergence to the best positions.

References

[1] R. C. Eberhart and Y. Shi, "Tracking and optimizing dynamic systems with particle swarms," in Proc. IEEE Congr. Evolutionary Computation 2001, Seoul, Korea, pp. 94-97, 2001.
[2] I. Aleksander and H. Morton, "An Introduction to Neural Computing," Chapman and Hall, 1990.
[3] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Networks, pp. 1942-1948, 1995.
[4] R. C. Eberhart and Y. Shi, "Comparing Inertia Weights and Constriction Factors in Particle Swarm Optimization," in Proc. Congress on Evolutionary Computation, vol. 1, pp. 84-88, 2000.
[5] Hui Zhu, Bo Huang, Yuji Tanabe and Takaaki Baba, "An Artificial Neural Network Based Model for Indoor Positioning Systems," in NCSP'08, Australia, pp. 255-258, March 6-8, 2008.
