Proceedings of 13th International Conference on Computer and Information Technology (ICCIT 2010) 23-25 December, 2010, Dhaka, Bangladesh

Quantum Evolutionary Algorithm Based on Particle Swarm Theory in Multiobjective Problems

Md. Kowsar Hossain, Md. Amjad Hossain, M.M.A. Hashem, Md. Mohsin Ali
Dept. of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, Bangladesh
[email protected], [email protected], [email protected], [email protected]

978-1-4244-8494-2/10/$26.00 ©2010 IEEE

Abstract: Quantum Evolutionary Algorithm (QEA) is an optimization algorithm based on the concepts of quantum computing, and Particle Swarm Optimization (PSO) is a population-based intelligent search technique. Both techniques perform well on optimization problems. PSEQEA combines PSO with QEA to improve the performance of QEA, and it solves single-objective optimization problems efficiently and effectively. In this paper, PSEQEA is studied as a solver for multiobjective optimization (MO) problems. Some well-known non-trivial functions are used to observe how well PSEQEA detects the Pareto optimal points and the shape of the Pareto front, using both the Fixed Weighted Aggregation method and the Adaptive Weighted Aggregation method. Moreover, Vector Evaluated PSEQEA (VEPSEQEA), which borrows its concept from Schaffer's Vector Evaluated Genetic Algorithm (VEGA), can also cope with MO problems. Simulation results show that PSEQEA and VEPSEQEA perform better than PSO and VEPSO at discovering the Pareto frontier.

Keywords: Multiobjective Optimization, Particle Swarm Optimization, Quantum Evolutionary Algorithm, Weighted Aggregation method.

I. INTRODUCTION

In most real-world optimization scenarios, multiple, often conflicting objectives are involved. Complex hardware/software systems design [1], determination of a protein's atomic structure [2], and production scheduling [3] are scenarios of this type. A single-objective optimization problem has only one solution, but in a multiobjective optimization problem a whole set of possible solutions of equal quality is found when all objectives are considered simultaneously. Traditional gradient-based techniques and a number of stochastic techniques such as tabu search and simulated annealing can be used to detect Pareto optimal solutions. In the first kind of technique, the objective functions are combined into a single objective function. This has limitations: 1) only one solution is found per optimization run, and 2) the outcome is influenced by the shape of the Pareto front. The solutions obtained from the second kind of technique tend to get stuck at a very good approximation, and they do not guarantee identification of optimal trade-offs [24]. These techniques are therefore no longer adequate for MO problems, and more efficient techniques need to be developed.

Evolutionary Algorithms (EAs) seem well suited to MO problems. An EA maintains a population of candidate solutions; reproduction combines existing solutions to produce new ones, and natural selection produces the new population. An EA can therefore produce a whole set of potential Pareto optimal solutions and performs a better global search of the search space [2]. QEA is a probabilistic algorithm like an EA. It is characterized by concepts and principles of quantum computing such as qubits and the superposition of states [4, 5, 6, 7]. QEA can keep the balance between exploration and exploitation more easily than an EA, but it is not well suited to complex optimization problems because it becomes trapped in local optima when solving multi-peak optimization problems [8, 9]. PSO is an evolutionary computing algorithm based on swarm intelligence [10, 11]. QEA and PSO have many similarities; however, PSO does not use evolution operators as QEA does. The optimum solution is found by swarms following the best particle. PSO is easy to implement, has fewer parameters to adjust, and converges faster [10, 12]. PSEQEA was developed by combining PSO with QEA to improve the performance of QEA [13]. In PSEQEA, the evolutionary equation of PSO is embedded in the evolutionary operator of QEA, which reduces the structural complexity and the number of parameters of QEA. A Knapsack-problem simulation and a series of test functions show that this hybrid algorithm has a better ability to find the global optimum than either QEA or PSO [13]. In this paper, some well-known test functions are used to study the performance of PSEQEA on MO problems.

This paper is organized as follows. Section II describes the basic concepts and terminology of MO. Section III presents an overview of some recent evolutionary techniques: QEA, PSO, and PSEQEA. Section IV describes the weighted aggregation approach. Section V exhibits the performance of PSEQEA using the weighted aggregation techniques on MO problems. In Section VI a modified version of PSEQEA is used to solve MO problems in the manner of the VEGA system. Finally, concluding remarks follow in Section VII.

II. MULTI-OBJECTIVE OPTIMIZATION

Consider a multiobjective optimization problem (without loss of generality, only the minimization case is considered) with k objectives (fi, i = 1, 2, ..., k) and n decision variables (xi, i = 1, 2, ..., n):

f(x) = (f1(x), f2(x), ..., fk(x))   (1)

subject to the following inequality constraints:

gi(x) <= 0, i = 1, 2, ..., m   (2)

The objective functions may conflict with each other, so in most cases it is difficult to obtain the global minimum of every objective at the same time. Therefore, a set of Pareto optimal solutions needs to be found. The related concepts of Pareto dominance, Pareto optimality, Pareto optimal set, and Pareto front are defined as follows [14]:

Pareto dominance: let u = (u1, u2, ..., uk) and v = (v1, v2, ..., vk) be two vectors. Then u dominates v if and only if ui <= vi for i = 1, 2, ..., k and ui < vi for at least one element.

Pareto optimality: a solution x is said to be Pareto optimal if and only if there does not exist another solution y such that f(x) is dominated by f(y). The set of all Pareto optimal solutions of a given multiobjective optimization problem is called the Pareto optimal set (P*).

Pareto front: for a given multiobjective optimization problem and its Pareto optimal set P*, the Pareto front (PF*) is defined as:

PF* = { f(x) = (f1(x), f2(x), ..., fk(x)) | x ∈ P* }   (3)

A Pareto front can be convex, concave, partially convex and/or concave, or discontinuous. A Pareto front PF* is said to be convex if and only if

∀u, v ∈ PF*, ∀λ ∈ (0,1), ∃w ∈ PF* : λ||u|| + (1 - λ)||v|| >= ||w||,

and, on the contrary, PF* is said to be concave if and only if

∀u, v ∈ PF*, ∀λ ∈ (0,1), ∃w ∈ PF* : λ||u|| + (1 - λ)||v|| <= ||w||.

III. RECENT EVOLUTIONARY TECHNIQUES

A. Quantum Evolutionary Algorithm
QEA uses qubit chromosomes to represent solutions. An m-qubit chromosome can be defined as:

[ α1 α2 ... αm ]
[ β1 β2 ... βm ]   (4)

where [αi βi]^T is one qubit and αi^2 + βi^2 = 1, i = 1, 2, ..., m. αi^2 gives the probability of finding the qubit in the "0" state and βi^2 gives the probability of finding it in the "1" state. QEA has good diversity characteristics because the qubits represent a superposition of states. When αi^2 and βi^2 approach 0 and 1, the qubit chromosome converges to a single state and diversity disappears. So the qubit representation possesses the characteristics of exploration and exploitation simultaneously [13]. The Q-gate is the variation operator of QEA; it updates the probability amplitudes of a qubit, which must still satisfy the normalization condition α'^2 + β'^2 = 1 after the Q-gate is applied. The following rotation gate is used as the Q-gate:

U(Δθi) = [ cos(Δθi)  -sin(Δθi) ]
         [ sin(Δθi)   cos(Δθi) ]   (5)

where Δθi, i = 1, 2, ..., m, is the rotation angle of a qubit towards the "0" state or the "1" state, depending on its sign [7].

B. Particle Swarm Optimization
Particle Swarm Optimization (PSO) was first introduced by Kennedy and Eberhart in 1995 [10]. It was inspired by the behaviour of bird swarms looking for food. In PSO, self experience is combined with social experience. Each candidate solution is represented as a particle in the swarm, and the swarm moves forward to obtain a global solution. Each particle uses its own flying experience and its companions' experience to update its position. The position and velocity of the i-th particle in the D-dimensional search space are denoted Xi = (Xi1, Xi2, ..., XiD) and Vi = (Vi1, Vi2, ..., ViD), respectively. The best position of the i-th particle is Pi = (Pi1, Pi2, ..., PiD), and the best position discovered by the whole population so far is Pg = (Pg1, Pg2, ..., PgD). The movement of the particles can be expressed by the following formulas:

Vid = Vid + C1 r1 (Pid - Xid) + C2 r2 (Pgd - Xid)   (6)
Xid = Xid + Vid   (7)

where C1 and C2 are learning-factor coefficients, and r1 and r2 are two separately generated random numbers in [0,1].

C. PSO embedded in QEA (PSEQEA)
The evolutionary operation plays an important role in finding optimum solutions. In QEA, the Q-gate performs the evolutionary operation and uses only the information of the best performance to move forward, so it gets trapped in local optima when solving complex optimization problems. Moreover, it is complicated to update the Q-gate through a lookup table. PSO uses not only the best performance but also suboptimal performance through the communication of the individuals of the swarm, and it uses a simple evolutionary equation. As a result it is simple, easy, and has better search capability than QEA. So in PSEQEA, PSO is combined with QEA to improve the performance of QEA. The structure of PSEQEA is just like that of QEA; only the rotation angle is updated using the evolutionary equation of PSO rather than a lookup table. The rotation angle can thus be expressed as:

θ = C1 (Pid - Xid) + C2 (Pgd - Xid)   (8)

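As a concrete illustration, the PSO-style rotation-angle update of Eq. (8) and the rotation gate of Eq. (5) can be sketched in Python. This is a minimal sketch, not the authors' exact implementation; the default C1 = C2 = 0.05π follows the experimental settings reported in Section V.

```python
import math

def rotation_angle(p_id, p_gd, x_id, c1=0.05 * math.pi, c2=0.05 * math.pi):
    # Eq. (8): the PSO evolutionary equation replaces QEA's lookup table.
    return c1 * (p_id - x_id) + c2 * (p_gd - x_id)

def rotate_qubit(alpha, beta, theta):
    # Rotation gate of Eq. (5) applied to one qubit [alpha, beta]^T;
    # a rotation preserves the normalization alpha^2 + beta^2 = 1.
    return (math.cos(theta) * alpha - math.sin(theta) * beta,
            math.sin(theta) * alpha + math.cos(theta) * beta)
```

Starting from the uniform superposition alpha = beta = 1/sqrt(2), repeated rotations keep the amplitudes normalized, so the observation probabilities alpha^2 and beta^2 remain valid.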
where C1, C2, Pid, and Pgd are the same as in PSO. The qubits [αid βid]^T are updated by the Q-gate as follows:

[ α'id ]   [ cos θ  -sin θ ] [ αid ]
[ β'id ] = [ sin θ   cos θ ] [ βid ]   (9)

The procedure of PSEQEA is described in [13]. In this algorithm, Qt represents the qubit chromosomes and Pt represents the binary solutions. At first, Qt and the parameters are initialized. Pt is created by observing the states of Qt, and then Pt is evaluated. After that, Qt is updated using (8) and (9). Pit and Pgt are stored at each iteration; Pit represents the previous best solution of the individual itself, and Pgt represents the population's global best solution.

IV. THE WEIGHTED AGGREGATION APPROACH

Weighted aggregation is a straightforward and efficient approach in multiobjective optimization. The different objectives are summed up into a weighted combination, expressed as follows:

F = Σ_{i=1}^{k} wi fi(x)   (10)

where fi is the i-th objective, k is the total number of objectives to be optimized, and wi >= 0 is the weight assigned to objective fi such that Σ_{i=1}^{k} wi = 1. The weighted aggregation approach can be of the following types.

A. Conventional Weighted Aggregation (CWA)
In this approach the weights are fixed. It has the following disadvantages: 1) only one Pareto optimal solution can be obtained per optimization run, and a priori knowledge is needed to specify the weights [15]; 2) it requires more time and is computationally expensive, since the search has to be repeated several times to find the desired number of Pareto optimal solutions; 3) it is unable to find Pareto solutions located in concave regions of the Pareto front [16].

B. Bang-Bang Weighted Aggregation (BWA)
In this approach the weights are changed abruptly. For a two-objective optimization problem, the change of weights can be realized according to the following equations:

w1(t) = sign(sin(2πt / F)),  w2(t) = 1.0 - w1(t)   (11)

where t represents the generation index and F represents the weights' change frequency. The sign() function is used to change the weights abruptly.

C. Dynamic Weighted Aggregation (DWA)
In this approach the weights are changed gradually according to the following equations:

w1(t) = |sin(2πt / F)|,  w2(t) = 1.0 - w1(t)   (12)

where t is the generation number; the weights change from 0 to 1 periodically, depending on the value of F. If the Pareto front is convex, the slow change of the weights forces the optimizer to move over the Pareto front more efficiently than BWA does. If it is concave, the performance of DWA and BWA is similar [15]. These approaches have been applied in experiments using PSEQEA and PSO, with F = 100 for BWA and F = 200 for DWA.

V. PERFORMANCE EVALUATION AND RESULT ANALYSIS

To test the performance of PSEQEA and PSO, the following benchmark problems, defined in [17, 18], are used as test functions:

F1 (convex, uniform Pareto front):

f1 = (1/n) Σ_{i=1}^{n} xi^2,  f2 = (1/n) Σ_{i=1}^{n} (xi - 2)^2   (13)

F2 (convex, non-uniform Pareto front):

f1 = x1,  g = 1 + (9/(n-1)) Σ_{i=2}^{n} xi,  f2 = g (1 - sqrt(f1/g))   (14)

F3 (concave Pareto front):

f1 = x1,  g = 1 + (9/(n-1)) Σ_{i=2}^{n} xi,  f2 = g (1 - (f1/g)^2)   (15)

F4 (neither purely convex nor purely concave Pareto front):

f1 = x1,  g = 1 + (9/(n-1)) Σ_{i=2}^{n} xi,  f2 = g (1 - (f1/g)^(1/4) - (f1/g)^4)   (16)

F5 (Pareto front consists of separated convex parts):

f1 = x1,  g = 1 + (9/(n-1)) Σ_{i=2}^{n} xi,  f2 = g (1 - sqrt(f1/g) - (f1/g) sin(10π f1))   (17)

Various methods are available for comparing the performance of different algorithms. When the Pareto optimal solutions are known, the Error Ratio and the Generational Distance are used as performance indicators [19]. The method of coverage metrics was proposed by Zitzler and Thiele for comparing the performance of different multiobjective evolutionary algorithms [20]. We used graphical presentation [21] to compare the performance of PSEQEA and PSO, as well as of VEPSEQEA and VEPSO. For the performance comparison, the five functions (13)-(17) were solved using PSEQEA and PSO [23]. 150 generations were used with x ∈ [0, 1].

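As a concrete reference, the five benchmark functions (13)-(17) can be written compactly in Python. This is a sketch assuming n >= 2; the variable bounds and the optimizer itself are omitted, and the function and helper names are illustrative, not from the original paper.

```python
import math

def f_one(x):
    # F1, Eq. (13): convex, uniform Pareto front
    n = len(x)
    return (sum(v * v for v in x) / n,
            sum((v - 2.0) ** 2 for v in x) / n)

def _g(x):
    # Shared auxiliary g used by F2-F5, Eqs. (14)-(17); requires n >= 2
    return 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)

def f_zdt(x, shape):
    # F2-F5: f1 = x1 and f2 = g * h(f1, g), where h selects the front shape
    f1, g = x[0], _g(x)
    r = f1 / g
    if shape == "F2":    # convex, non-uniform
        h = 1.0 - math.sqrt(r)
    elif shape == "F3":  # concave
        h = 1.0 - r ** 2
    elif shape == "F4":  # neither purely convex nor purely concave
        h = 1.0 - r ** 0.25 - r ** 4
    else:                # "F5": separated convex parts
        h = 1.0 - math.sqrt(r) - r * math.sin(10.0 * math.pi * f1)
    return f1, g * h
```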
The values of C1 and C2 were set to 0.05π. The dimension n was set to 2, and 14 qubits were used per variable. To build and maintain the archive of Pareto solutions, the same pseudocode provided in [15] was used.

In CWA, the desired number of Pareto optimal points was 20, so the algorithm was run 20 times with varying weights. The obtained Pareto fronts of F1-F4 are given in Fig. 1 and Fig. 2. As F1 and F2 have convex Pareto fronts, the CWA approach finds them easily. It cannot find the Pareto front of F3 because of its concavity (only the two ends are found), and in F4 it finds only the convex part. The results obtained from PSEQEA are slightly better than those from PSO, except in Fig. 3 where both are the same. The computational cost is also very low, with a fast convergence rate: obtaining the Pareto solutions required less than one minute for each function.

The performance of DWA and BWA is shown in Fig. 3-Fig. 6. In each case, both the convex and the concave Pareto fronts are obtained. For a convex Pareto front DWA is better than BWA, but when the Pareto front is concave the performance of the two approaches is quite similar, as shown in Fig. 4. For PSEQEA the swarm size was 10 for each function, whereas PSO needed a swarm size of 20 to reach nearly the same results; that is, PSEQEA found promising results with a smaller population than PSO. Fig. 3-Fig. 6 show that PSEQEA performs better than PSO.

[Figures 1-6, each plotting f2 against f1 for PSEQEA and PSO:]
Fig. 1: CWA for F1 (left) and F2 (right)
Fig. 2: CWA for F3 (left) and F4 (right)
Fig. 3: Results on F1: DWA (left) and BWA (right)
Fig. 4: Results on F3: DWA (left) and BWA (right)
Fig. 5: Results on F4: DWA (left) and BWA (right)
Fig. 6: Results on F5: DWA (left) and BWA (right)

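The BWA and DWA weight schedules of Eqs. (11) and (12), with F = 100 and F = 200 as used in the experiments, can be sketched as follows. Treating sign(0) as +1 is an assumption of this sketch, and the function names are illustrative.

```python
import math

def bwa_weights(t, F=100):
    # Eq. (11): abrupt "bang-bang" switching via the sign of sin(2*pi*t/F);
    # sign(0) is taken as +1 here (an assumption of this sketch).
    s = math.sin(2.0 * math.pi * t / F)
    w1 = 1.0 if s >= 0.0 else -1.0
    return w1, 1.0 - w1

def dwa_weights(t, F=200):
    # Eq. (12): |sin| changes w1 gradually and periodically between 0 and 1
    w1 = abs(math.sin(2.0 * math.pi * t / F))
    return w1, 1.0 - w1
```

By construction w1 + w2 = 1 at every generation; DWA sweeps the weights smoothly, which is what lets the optimizer traverse a convex Pareto front, while BWA jumps between extreme weightings.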
VI. A POPULATION-BASED NON-PARETO APPROACH

The VEGA approach was first proposed by Schaffer in the mid-1980s. In VEGA, only the selection mechanism of the GA is modified: at each generation, a number of sub-populations are generated according to each objective function in turn, and all these sub-populations are shuffled together. After that, the crossover and mutation operators are applied to generate the new population in the usual way [22].

To develop VEPSEQEA, two swarms are run concurrently, each evaluated according to one of the objectives of a two-objective optimization problem. At the end of each generation, the global best particle of each swarm is exchanged with that of the other. So the rotation angle of each swarm can be expressed using the following equations:

θt(1) = C1 (Pid(1) - Xid(1)) + C2 (Pgd(2) - Xid(1))   (18)
θt(2) = C1 (Pid(2) - Xid(2)) + C2 (Pgd(1) - Xid(2))   (19)

Here t represents the generation number, and the superscripts (1) and (2) denote the swarms. C1, C2, Pid, and Pgd are the same as in PSO and PSEQEA.

For both VEPSEQEA and VEPSO, two swarms of size 20 and 150 generations were used. The performance of VEPSEQEA and VEPSO is shown in Fig. 7-Fig. 9; VEPSEQEA performs meaningfully better than VEPSO on all functions.

[Figures 7-9, each plotting f2 against f1 for VEPSEQEA and VEPSO:]
Fig. 7: Results for F1 (left) and F2 (right)
Fig. 8: Results for F3 (left) and F4 (right)
Fig. 9: Results for F5

VII. CONCLUSION

In this paper, an algorithm named PSEQEA is used to solve MO problems. PSEQEA was found to solve well-known test functions (both concave and convex) very efficiently with a small swarm size. Since two objectives are sufficient to reflect the essential aspects of MO [18], a low dimension was used. Finally, a modified version of PSEQEA, known as VEPSEQEA, which adopts the idea of VEGA, was also developed to solve the same problems, and it outperforms VEPSO.

REFERENCES

[1] E. Zitzler, "Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications," Ph.D. thesis, Swiss Federal Institute of Technology, Zurich, 1999.
[2] T. S. Bush, C. R. A. Catlow, and P. D. Battle, "Evolutionary Programming Techniques for Predicting Inorganic Crystal Structures," J. Materials Chemistry, 5(8):1269-1272, 1995.
[3] K. Swinehart, M. Yasin, and E. Guimaraes, "Applying an Analytical Approach to Shop-Floor Scheduling: A Case Study," Int. J. Materials & Product Technology, 11(1-2):98-107, 1996.
[4] A. Narayanan and M. Moore, "Quantum-inspired genetic algorithms," Proc. of IEEE Int. Conference on Evolutionary Computation, Nagoya, IEEE Press, pp. 61-66, 1996.
[5] K. H. Han and J. H. Kim, "Genetic quantum algorithm and its application to combinatorial optimization problems," Proc. of IEEE Congress on Evolutionary Computation, vol. 7, pp. 1354-1360, 2000.
[6] K. H. Han and J. H. Kim, "Quantum-Inspired Evolutionary Algorithms with a New Termination Criterion, Hε Gate, and Two-Phase Scheme," IEEE Transactions on Evolutionary Computation, vol. 8, no. 2, pp. 156-169, 2004.
[7] K. H. Han and J. H. Kim, "Quantum-inspired evolutionary algorithm for a class of combinatorial optimization," IEEE Transactions on Evolutionary Computation, vol. 6, no. 6, pp. 580-593, 2002.
[8] Gexiang Zhang, Weidong Jin, and Laizhao Hu, "Quantum Evolutionary Algorithm for Multiobjective Optimization Problems," in Proceedings of IEEE Int. Symposium on Intelligent Control, Houston, Texas, pp. 703-708, 2003.
[9] Ying Li, Yanning Zhang, Rongchun Zhao, et al., "The immune quantum-inspired evolutionary algorithm," in Proceedings of IEEE Int. Conference on Systems, Man, and Cybernetics, pp. 3301-3305, 2004.
[10] J. Kennedy and R. C. Eberhart, "Particle Swarm Optimization," in Proceedings of IEEE Int. Conference on Neural Networks, Perth, Western Australia, pp. 1942-1948, 1995.
[11] J. Kennedy and R. C. Eberhart, "A Discrete Binary Version of the Particle Swarm Algorithm," in Proceedings of IEEE Int. Conference on Systems, Man, and Cybernetics, vol. 5, pp. 4104-4108, 1997.
[12] M. Clerc, "The swarm and the queen: Toward a deterministic and adaptive particle swarm optimization," Proc. IEEE Int. Congr. Evolutionary Computation, vol. 3, p. 1957, 1999.
[13] Yang Yu, Yafei Tian, and Zhifeng Yin, "Hybrid Quantum Evolutionary Algorithms Based on Particle Swarm Theory," in Proceedings of IEEE Conference on Industrial Electronics and Applications, 2006.
[14] D. A. Van Veldhuizen and G. B. Lamont, "Multiobjective evolutionary algorithms: Analyzing the state-of-the-art," Evolutionary Computation, 8(2):125-147, 2000.
[15] Y. Jin, M. Olhofer, and B. Sendhoff, "Dynamic Weighted Aggregation for Evolutionary Multi-Objective Optimization: Why Does It Work and How?," in Proc. GECCO 2001 Conf., 2001.
[16] P. J. Fleming, "Computer aided control systems design using a multi-objective optimization approach," in Proc. IEE Control '85 Conference, pp. 174-179, Cambridge, U.K., 1985.
[17] J. D. Knowles and D. W. Corne, "Approximating the Nondominated Front Using the Pareto Archived Evolution Strategy," Evolutionary Computation, 8(2):149-172, 2000.
[18] E. Zitzler, K. Deb, and L. Thiele, "Comparison of Multiobjective Evolutionary Algorithms: Empirical Results," Evolutionary Computation, 8(2):173-195, 2000.
[19] D. Van Veldhuizen and G. Lamont, "Multiobjective Evolutionary Algorithm Test Suites," in J. Carroll, H. Haddad, D. Oppenheim, B. Bryant, and G. Lamont, editors, Proceedings of the 1999 ACM Symposium on Applied Computing, San Antonio, Texas, pp. 351-357, 1999, ACM.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257-271, 1999.
[21] R. Sarker, K. Liang, and C. Newton, "A New Multiobjective Evolutionary Algorithm," European Journal of Operational Research, Elsevier Science, 140(1), pp. 12-23.
[22] J. D. Schaffer, "Multiple Objective Optimization with Vector Evaluated Genetic Algorithms," in Genetic Algorithms and their Applications: Proc. First Int. Conf. on Genetic Algorithms, pp. 93-100, 1985.
[23] K. E. Parsopoulos and M. N. Vrahatis, "Particle swarm optimization method in multiobjective problems," Proc. of the 2002 ACM Symposium on Applied Computing, March 11-14, 2002, Madrid, Spain.
[24] Ajith Abraham, Lakhmi Jain, and Robert Goldberg, "Evolutionary Multiobjective Optimization: Theoretical Advances and Applications," Springer-Verlag, London, ISBN 1852337877, 2005.
