Proceedings of the International Conference on Complex Systems and Applications. Copyright © 2006 Watam Press

Particle Swarm Optimization: An Efficient Method for Tracing Periodic Orbits and Controlling Chaos

Fei Gao
Department of Mathematics, School of Science, Wuhan University of Technology, Wuhan, Hubei 430070, China
[email protected]

Hengqing Tong
Department of Mathematics, School of Science, Wuhan University of Technology, Wuhan, Hubei 430070, China
[email protected]

Abstract— Chaos control is of vital importance in the fields of chaos application. To avoid too many artificial control factors, by transforming problems related to chaotic systems into optimizations of different functions, a novel application of particle swarm optimization, simulating swarm intelligence, is proposed. It unifies the processes of tracing unstable periodic orbits, directing, and obtaining multi-solutions for controlling Hénon chaos. The details of applying the proposed method are given, and the experiments performed show that the proposed strategy is effective and robust.

I. Introduction

Chaos theories and applications have been hot topics over the past 20 years, and the key step in applying chaos theory is controlling chaos. Fractals, chaos, complexity, and nonlinear science have established close contact with each other along with scientific developments, and the fields of society, economy, nature, engineering, and technology exhibit more and more obviously intrinsic, not fabricated, chaotic phenomena and fractal characters [1, 2, 3].

Most local chaos control methods are valid only when the chaotic orbit approximates the fixed point well enough. The multi-model solution proposed by A. Duchateau et al. is a further extension of the OGY method. To increase the zones of effective control (ZECs) of the attractor, a series of local linear models and control laws was positioned along an uncontrolled convergent trajectory that could direct the chaotic process towards its fixed point. Thus, controllers with both variable structure and variable parameters were required. The fixed point is also required, and the methods for solving it differ across chaotic systems. These requirements increase the artificial control factors and the difficulties in chaos control [4, 5].

Evolutionary algorithm (EA) is an umbrella term used to describe computer-based problem-solving systems that use known mechanisms of evolution as key elements. Although structurally simple, they are sufficiently complex to provide robust and powerful adaptive search mechanisms [6, 7, 8]. Particle Swarm Optimization (PSO) is a relatively new computational intelligence tool, related to artificial neural nets, fuzzy logic, and EA, developed by Eberhart and Kennedy in 1995 [6] and inspired by the social behavior of bird flocking and fish schooling. In the past several years, PSO has

been successfully used across a wide range of application fields, as well as in applications focused on a specific requirement, for the two following reasons. First, it has been demonstrated that PSO gets better results in a faster, cheaper way compared with other methods. Second, there are few parameters to adjust across a wide variety of applications [6, 9].

In this paper, an application of PSO to tracing unstable periodic orbits and directing chaos is proposed. It combines the following processes as a whole: tracing unstable periodic orbits; directing the system into an unstable fixed point from any initial point through a chaotic orbit converging to the fixed point, solved adaptively by PSO; and controlling Hénon chaos with multi-solutions obtained by PSO.

The rest of the paper is organized as follows. In Section II, the transformation of problems in chaotic systems into optimization problems is proposed: tracing unstable periodic orbits, directing the chaotic system into a fixed point from any initial point, and stabilizing it there. Section III gives the main processes of PSO and some techniques to improve PSO. Experimental results obtained with the Hénon system are reported and analyzed in Section IV. The paper concludes with Section V.

II. Transformation of Problems in Chaotic Systems into Those of Optimization

We now propose a transformation of problems in chaotic systems into optimizations of functions, with three aspects treated as a whole. First, it traces the unstable orbits of the chaotic system. Second, it directs the system into its unstable fixed point from any initial point by the global control factors {U_k}. Third, it stabilizes the chaotic system in its unstable chaotic periodic orbits.

A. Tracing the Unstable Orbits

Let

Φ = (Φ_1, Φ_2, ..., Φ_n)^T : R^n → R^n    (1)

where Φ_i : R^n → R, i = 1, 2, ..., n, is a nonlinear system. We define a new function, Eq. (2) below, to obtain its different unstable periodic orbits:

F(X) = ‖Φ^(p)(X) − X‖_2    (2)
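As an illustration, the objective of Eq. (2) can be sketched in Python for the Hénon map introduced below in Eq. (3), with the parameter values a = 1.4, b = 0.3 used in Section IV; the function names here are ours:

```python
import math

def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map variant used in this paper, Eq. (3)."""
    return a + b * y - x * x, x

def F(x, y, p, step=henon):
    """Eq. (2): Euclidean distance between X and its p-th iterate."""
    xp, yp = x, y
    for _ in range(p):
        xp, yp = step(xp, yp)
    return math.hypot(xp - x, yp - y)

# F vanishes exactly at a p-period point and is positive elsewhere.
xf = 0.883896267925588            # fixed point of the Henon map (p = 1)
print(F(xf, xf, 1))               # ~0
print(F(0.0, 0.0, 1))             # clearly nonzero away from the orbit
```

Minimizing F over X then yields period-p points of Φ directly, which is the search PSO performs below.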

Then X* s.t. F(X) = 0 is one of Φ's p-period points. When X* is achieved, {X, Φ^(1)(X), Φ^(2)(X), ..., Φ^(p)(X)} is one of Φ's p-period orbits, and we can judge the stability of X* by the algorithm in [14].

B. Directing the Chaotic System from Any Initial Point

Having the unstable period points above, we can now direct the chaotic system (1) into these points from any initial point. With the concept of global control [15], we can find a chaotic orbit converging to an unstable period point and force system (1) from any initial point to this period point self-adaptively. To make this second process easily understood, we take the Hénon system as an example:

x_{n+1} = a + b y_n − x_n^2
y_{n+1} = x_n    (3)

Let any initial point X_0 = (x_1, y_1) enter U(X_m, ε_1), X_m = (x_{m+1}, y_{m+1})^T, under the control factors u_1, u_2, ..., u_m through the following system (4):

x_{j+1} = a + b y_j − x_j^2 + u_j
y_{j+1} = x_j    (4)

Then the problem of directing the chaotic system, in terms of searching for U = (u_1, u_2, ..., u_m), becomes minimizing the objective function (5):

f(U) = ‖X_m − X*‖_2    (5)

where X_m is the control result of X_0 under the control factors u_1, u_2, ..., u_m through Eq. (4) above and X* is the fixed point of the chaotic system.

C. Stabilizing the Chaotic System

When the system is in U(x_f, ε_1), a multi-model solution for chaos control with double plasticity of parameter and structure is proposed to stabilize the system on its unstable fixed point within U(X*, ε_2). For the parameters a, b in system (3), hold b = b* fixed and let a vary in U(a*, δ), where δ is the largest perturbation value; the control factors u_n are subject to |u_n| < δ. The objective function is g:

g(u_k) = ‖X_k − X*‖    (6)

where X_k is the control result of X_{k−1} under the control factor u_k through Eq. (4) above and X* is the fixed point of the chaotic system. When u_k is obtained, let X_{k+1} := X_k, k := k + 1, and find another u_k from Eq. (6), until k ≤ N. In this way, the problem of multi-model solutions for chaos control is translated into the optimization of function (6).

III. The Main Concept of Particle Swarm Optimization

Particle Swarm Optimization (PSO) belongs to the category of swarm intelligence methods, closely related to the methods of evolutionary computation, which consist of algorithms motivated by biological genetics and natural selection. A common characteristic of all these algorithms is the exploitation of a population of search points that probe the search space simultaneously [6, 10].

PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations [11]. However, unlike GA, PSO has no evolution operators such as crossover and mutation. The dynamics of the population in PSO resembles the collective behavior and self-organization of socially intelligent organisms. The individuals of the population (called particles) exchange information and benefit from their own discoveries, as well as the discoveries of their companions, while exploring promising areas of the search space [12].

At step k, each particle X_i(k) = (x_{i,1}(k), ..., x_{i,D}(k)) keeps track of its coordinates in the problem space associated with the best solution (fitness) it has achieved so far. (The fitness value is also stored.) This value is called pbest, P_i(k) = (p_{i,1}(k), ..., p_{i,D}(k)). Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle, called lbest, L_i(k) = (l_{i,1}(k), ..., l_{i,D}(k)). When a particle takes the whole population as its topological neighbors, the best value is a global best, called gbest, Q_g(k) = (q_{g,1}(k), ..., q_{g,D}(k)).

The particle swarm optimization concept consists of, at each time step, changing the velocity

V_i(k) = (v_{i,1}(k), ..., v_{i,D}(k))    (7)

of each particle toward its pbest and gbest locations (PSO without neighborhood model). Acceleration is weighted by a random term, with separate random numbers generated for acceleration toward the pbest and gbest locations [11]. That is,

A_{i,d}(k) = rand(0, c_1) · [p_{i,d}(k) − x_{i,d}(k)]
B_{i,d}(k) = rand(0, c_2) · [q_{g,d}(k) − x_{i,d}(k)]
v_{i,d}(k+1) = w · v_{i,d}(k) + A_{i,d}(k) + B_{i,d}(k)    (8)
x_{i,d}(k+1) = x_{i,d}(k) + v_{i,d}(k+1)

where w is called the inertia weight, c_1 the cognitive acceleration constant, and c_2 the social acceleration constant. The cognitive parameter c_1 determines the effect on a particle's velocity of the distance between its current position and its best previous position P_i. The social parameter c_2 plays a similar role but concerns the best previous position attained by any particle in the neighborhood. rand(a, b) denotes a uniform random number in [a, b]; in this way randomness is introduced into PSO. V_i(k) is limited by a maximum velocity V_max as below:

v_{i,j} := v_{i,j},   if |v_{i,j}| ≤ V_max
       := −V_max,  if v_{i,j} < −V_max    (9)
       := V_max,   if v_{i,j} > V_max

Though PSO without neighborhood model converges fast, it sometimes falls into local optima easily. So an improved variant of PSO with a circular neighborhood model is also used to improve convergence by maintaining more attractors.

Let N_i = {X_{i−r}, ..., X_{i−1}, X_i, X_{i+1}, ..., X_{i+r}} be a neighborhood of radius r of the i-th particle X_i (local variant). Then lbest L_i(k) is defined as the best particle in the neighborhood of X_i, i.e.,

f(L_i(k)) ≤ f(X_j(k)),  j = i − r, ..., i + r.    (10)

The neighborhood's topology is usually cyclic, i.e., the first particle X_1 is assumed to follow the last particle X_N. The updating mechanism is now given by

C_{i,d}(k) = rand(0, c_3) · [l_{i,d}(k) − x_{i,d}(k)]
v_{i,d}(k+1) = w · v_{i,d}(k) + A_{i,d}(k) + B_{i,d}(k) + C_{i,d}(k)    (11)
x_{i,d}(k+1) = x_{i,d}(k) + v_{i,d}(k+1)

where c_3 is called the neighborhood acceleration constant and the other parameters are the same as those in Eq. (8). Sometimes V_i(k) is modified [13] as

v_{i,d}(k+1) = χ [w · v_{i,d}(k) + A_{i,d}(k) + B_{i,d}(k) + C_{i,d}(k)]    (12)

where χ is the constriction factor, normally χ = 0.9.

It is well known that no single algorithm fits all problems. When the objective function f(x) is full of local optima and more than one minimizer is needed, established techniques such as deflection and stretching are combined with PSO to guarantee the detection of different minimizers. Given the objective function f(x), we use the deflection and stretching technique [14] below to generate the new objective function F(x):

F(x) = f(x) · ∏_{i=1}^{k} [tanh(λ_i ‖x − x_i*‖)]^{−1}    (13)

G(x) = f(x) + β_1 ‖x − x_i*‖ [1 + sgn(f(x) − f(x_i*))]    (14)

H(x) = G(x) + β_2 · [1 + sgn(f(x) − f(x_i*))] / tanh[δ (G(x) − G(x_i*))]    (15)

where x_i* (i = 1, 2, ..., k) are the k minimizers found so far, λ_i ∈ (0, 1), and β_1, β_2, δ > 0.

IV. Simulations

To exhibit the performance of PSO, we choose the 2-D Hénon map as the object to be controlled, and for the chaotic system discussed we let PSO run 100 times independently. A strange attractor with an unstable fixed point inside appears in the Hénon map when a* = 1.4, b* = 0.3 are selected in Eq. (3). The optimization of function Eq. (2) is difficult, so we choose PSO without neighborhood model to seek system (1)'s unstable period points. The termination condition is that the evolution generation reaches T = 5000 or the objective is less than 10^{−10}; the population size is M = 40; the acceleration constants are c_1 = c_2 = 2; the constriction factor is χ = 0.9; the inertia weight values at the beginning and the end are w_start = 0.95 and w_end = 0.2, with w varying as in Eq. (16):

w := w − k · (w_start − w_end) / T    (16)

When ‖x − Φ^(p)(x)‖ ≤ 10^{−10}, we consider PSO successful. If two period points x_1, x_2 satisfy ‖x_1 − x_2‖ ≤ 10^{−10}, we consider them the same. The fixed point found through PSO is shown in Table I and Fig. 1.

Fig. 1. Unstable orbits obtained by PSO.
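The update rules of Eqs. (8) and (9), with the linearly decreasing inertia weight of Eq. (16), can be sketched as a minimal gbest PSO. The toy objective, swarm size, bounds, and seed below are illustrative, not the paper's experimental setup:

```python
import random

def pso(f, dim=2, m=20, iters=300, c1=2.0, c2=2.0,
        w_start=0.95, w_end=0.2, vmax=1.0, lo=-2.0, hi=2.0, seed=1):
    """Gbest PSO: velocity update of Eq. (8), clamp of Eq. (9),
    inertia weight decreasing linearly as in Eq. (16)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(m)]
    V = [[0.0] * dim for _ in range(m)]
    P = [x[:] for x in X]                      # pbest positions
    pf = [f(x) for x in P]                     # pbest fitness values
    g = min(range(m), key=lambda i: pf[i])     # index of gbest
    w = w_start
    for _ in range(iters):
        for i in range(m):
            for d in range(dim):
                A = rng.uniform(0, c1) * (P[i][d] - X[i][d])   # Eq. (8)
                B = rng.uniform(0, c2) * (P[g][d] - X[i][d])
                v = w * V[i][d] + A + B
                V[i][d] = max(-vmax, min(vmax, v))             # Eq. (9)
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:                     # update pbest and gbest
                pf[i], P[i] = fx, X[i][:]
                if fx < pf[g]:
                    g = i
        w -= (w_start - w_end) / iters         # Eq. (16)
    return P[g], pf[g]

# Toy objective with minimum 0 at (1, -1), purely illustrative.
best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 1) ** 2)
print(best, val)
```

The same loop, with F of Eq. (2) as the objective, is what the fixed-point search below performs.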

TABLE I
Unstable periodic points obtained by PSO

Orbit            p    Period point X_0                          Prob.
Fix point        1    (0.883896267925588, 0.88389626792595)     100%
7-order orbit    7    (0.971789444783738, 0.198034791783514)    100%
9-order orbit    9    (0.462072909269982, −0.746521674835037)    94%
11-order orbit   11   (0.4502575927245, 0.952974167371092)       80%
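Locating the several distinct orbits of Table I requires the search to avoid minimizers already found; the deflection transform of Eq. (13) does this by inflating the objective near each known x_i*. A minimal sketch, with an illustrative λ value and a toy one-dimensional objective of our own:

```python
import math

def deflect(f, found, lam=0.5):
    """Eq. (13): F(x) = f(x) * prod_i [tanh(lam * ||x - x_i*||)]^(-1).

    'found' holds minimizers x_i* located so far; tanh(...) -> 0 near
    each of them, so F(x) is inflated there and the swarm is repelled."""
    def F(x):
        val = f(x)
        for xs in found:
            r = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xs)))
            val /= math.tanh(lam * r) if r > 0 else 1e-12   # pole at x_i*
        return val
    return F

# Toy objective with minima at x = +1 and x = -1; deflect the one at +1.
f = lambda x: (x[0] ** 2 - 1) ** 2
F = deflect(f, found=[[1.0]])
x = [0.999]                        # very close to the found minimizer
print(f(x), F(x))                  # F is much larger than f near x_i*
```

Re-running the minimizer on F rather than f then steers the swarm toward the minimizers not yet found.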

Now we can direct the chaotic system from any initial point through the global control strategy obtained by PSO in terms of the control factors U = (u_1, u_2, ..., u_m). In this way, PSO finds a chaotic orbit converging into the unstable fixed point X*'s neighborhood U(X*, ε_1); that is to say, it minimizes function (5) by PSO. We choose system (3)'s unstable fixed point obtained in Table I, that is, with q = 0.5, its unstable fixed point X* = (0.883896267925588, 0.88389626792595)′. The control objective is to make system (3), from any initial point x_1, become X_m = x_{m+1} ∈ U(x_f, ε_1) under the control factors U = (u_1, u_2, ..., u_m) after m iterations through Eq. (3). We choose PSO to realize this process, for it is only a problem of m-dimensional function optimization, which can be resolved by PSO. Let m = 8, the evolution generation of PSO in each contraction T = 5000, and the population size M = 40, initialized randomly in [−2, 2]^m; the termination


condition is f ≤ ε_1 or the number of iterations exceeding 5000. The process of directing chaos is then translated into minimizing function (5).

PSO drives the Hénon system into U(X*, 10^{−4}) with probability 100% from the initial point X_0 = (0, 0)′ after m = 6 applications of the control factors U through Eq. (4). The best of these is U = (0.675418512021958, 0.807233627380606, 0.73062097189958, 1.04958930311715, 0.840539432189091, 0.543289327414714), which directs X_0 = (0, 0)′ into U(X*, 10^{−10}).

PSO drives the Hénon system into U(X*, 10^{−10}) with probability 95% from the initial point X_0 = (0, 0)′ after m = 4 applications of U through Eq. (4). The best of these is U = (0.536321436576675, 0.746148687981672, 1.47322448276479, 0.746126485689516), which directs X_0 = (0, 0)′ into U(X*, 10^{−10}).

PSO drives the Hénon system into U(X*, 10^{−10}) with probability 95% from the initial point X_0 = (0.8, 0.2)′ after m = 4 applications of U through Eq. (4). The best of these is U = (0.30955173262064, 1.06984781639016, 1.201274036508, −0.165019329509446), which directs X_0 = (0.8, 0.2)′ into U(X*, 10^{−10}).

When the system is in U(x*, ε_1), a multi-model solution for chaos control generated by PSO, with double plasticity of parameter and structure, is used to stabilize the system on its unstable fixed point within U(x*, ε_2). The objective function is Eq. (6). We set T = 1000 and the population size M = 40, initialized randomly in [−δ, δ]; the termination condition is g ≤ 10^{−10} or the number of iterations exceeding 1000. We choose one of the successful directing processes from the initial point (0.8, 0.2)′ with m = 8, which brings the initial point into U(X*, 10^{−4}) through U = (0.340055700849363, 0.718776619565217, 0.698136745384317, 0.67428759484247, 0.0244717746492404, 0.483883728244462, −0.103559915164163, 0.554208206244036), obtained from Eq. (4) with parameters a, b held fixed. Then we stabilize X_k in U(X*, 10^{−10}) through Eq. (6), with b held fixed and a varying randomly in U(a*, 0.5) in Eq. (4). When a u_k is obtained, let X_{k+1} := X_k, k := k + 1, and find another u_k from Eq. (6), with the control factors u_n subject to |u_n| < δ, until k ≤ 100. In this way, the problem of multi-model solutions for chaos control is translated into finding the series {u_k}.

Fig. 2 shows how the initial point (0.8, 0.2)′ is directed into U(X*, 10^{−4}) through U by PSO globally. Fig. 3 and Fig. 4 show how PSO stabilizes X_n in U(X*, 10^{−10}) after the global directing, where U_n are the control factors, error = |X_n − x_f|, and n is the iteration count starting from n = 9. From the simulations above, we conclude that PSO is efficient and robust for the chaotic system (3) in two aspects: tracing unstable period orbits and directing the chaotic system from any initial point.

Fig. 2. Directing chaos by PSO.

Fig. 3. Control factors {U_n}.

Fig. 4. Error of PSO directing and stabilizing chaos.
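The two control stages just reported can be sketched together, assuming the Hénon form of Eq. (4): a directing objective per Eq. (5) over a control sequence U, and a stabilization loop per Eq. (6) in which each scalar u_k is chosen by an inner search (a simple random search stands in here for the PSO inner search):

```python
import math, random

A, B = 1.4, 0.3                                  # a*, b* from Section IV
XS = (0.883896267925588, 0.883896267925588)      # fixed point X* (x* = y*)

def step(x, y, u=0.0, a=A, b=B):
    """One iteration of the controlled Henon map, Eq. (4)."""
    return a + b * y - x * x + u, x

def f_direct(U, x0=(0.0, 0.0)):
    """Eq. (5): distance to X* after applying the control sequence U."""
    x, y = x0
    for u in U:
        x, y = step(x, y, u)
    return math.hypot(x - XS[0], y - XS[1])

def g_stab(u, state):
    """Eq. (6): distance of the controlled next state to X*."""
    x, y = step(state[0], state[1], u)
    return math.hypot(x - XS[0], y - XS[1])

def stabilize(state, steps=100, delta=0.5, trials=400, seed=0):
    """Pick each u_k in [-delta, delta] minimizing Eq. (6), then advance."""
    rng = random.Random(seed)
    for _ in range(steps):
        u = min((rng.uniform(-delta, delta) for _ in range(trials)),
                key=lambda v: g_stab(v, state))
        state = step(state[0], state[1], u)
    return state

# Sanity checks: X* is invariant under Eq. (4) with u = 0, and the
# stabilization loop keeps a nearby state close to X*.
print(f_direct([0.0], x0=XS))     # ~0: the fixed point stays put
x, y = stabilize((0.88, 0.88))
print(math.hypot(x - XS[0], y - XS[1]))
```

In the paper the inner search over u_k is performed by PSO itself, which finds each control factor to the 10^{−10} tolerance reported above.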

V. Conclusions

An application to chaotic systems through PSO, simulating swarm intelligence, is proposed. First, it finds the chaotic system's unstable orbits self-adaptively. Second, it directs and stabilizes the system onto its unstable orbits from any initial point through the control factors and multi-solutions obtained by PSO. The experiments show that the proposed strategy is effective and robust with respect to the selection of initial points and control factors. Though the PSO experiments here are performed on the chaotic system (3), the method can easily be carried over to other chaotic systems [1, 15].

Acknowledgment

We thank the anonymous reviewers for their constructive remarks and comments. The work is partially supported by the Chinese NSF Grant No. 30570611 and the Science Foundation Grant No. 02C26214200218 for Technology Creative Research from the Ministry of Science and Technology of China to H. Q. Tong, the Foundation Grant No. XJJ2004113 (Project of Educational Research), and the UIRT Project Grants No. A156 and No. A157 granted by Wuhan University of Technology in China.

References

[1] R. J. P. de Figueiredo and G. Chen, Nonlinear Feedback Control Systems: An Operator Theory Approach, New York: Academic Press, 1993.
[2] R. Caponetto, L. Fortuna, S. Fazzino, et al., "Chaotic sequences to improve the performance of evolutionary algorithms," IEEE Trans. Evolutionary Computation, Vol. 7, No. 3, pp. 289-304, 2003.
[3] J. Q. Fang, Control Chaos and Develop High Technique, Beijing: Atomic Energy Press (in Chinese), 2002.
[4] E. Ott, C. Grebogi, and J. A. Yorke, "Controlling chaos," Physical Review Letters, Vol. 64, No. 11, pp. 1196-1199, 1990.
[5] A. Duchateau, N. P. Bradshaw, and H. Bersini, "A multi-model solution for the control of chaos," Int. J. Control, Vol. 72, No. 7/8, pp. 727-739, 1999.
[6] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," Proceedings of the 2000 Congress on Evolutionary Computation, IEEE Service Center, Piscataway, NJ, pp. 84-88, 2000.
[7] D. Whitley, "An overview of evolutionary algorithms: Practical issues and common pitfalls," Information and Software Technology, Vol. 43, No. 14, pp. 817-831, 2001.
[8] G. Gao and H. Q. Tong, "Computing Two Linchpins of Topological Degree by a Novel Differential Evolution Algorithm," International Journal of Computational Intelligence and Applications, Vol. 5, No. 3, pp. 1-16, 2005.
[9] P. Paul, "An Introduction to Particle Swarm Optimization," http://www.adaptiveview.com/articles/ipsop1.html, 2003.
[10] J. F. Schutte, J. A. Reinbolt, B. J. Fregly, et al., "Parallel global optimization with the particle swarm algorithm," Int. J. Numer. Meth. Engng., Vol. 61, pp. 2296-2315, 2004.
[11] X. H. Hu, "Particle Swarm Optimization," http://www.swarmintelligence.org/, 2002.
[12] Ch. Skokos et al., "Particle Swarm Optimization: An efficient method for tracing periodic orbits in 3D galactic potentials," Mon. Not. Roy. Astron. Soc., Vol. 359, pp. 251-260, 2005.
[13] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Inf. Process. Lett., Vol. 85, No. 6, pp. 317-325, 2003.
[14] K. Parsopoulos and M. Vrahatis, "Computing periodic orbits of nonlinear mappings through particle swarm optimization," Proc. of the 4th GRACM Congress on Computational Mechanics, Patras, Greece, 2002.
[15] G. Chen and J. Lü, Dynamics of the Lorenz System Family: Analysis, Control and Synchronization, Beijing: Science Press, 2003.

Accepted March 2006. email: [email protected], http://monotone.uwaterloo.ca/~journal/
