
Letters

Synthesizing a Predatory Search Strategy for VLSI Layouts

Alexandre Linhares

Abstract—When searching for prey, many predator species exhibit a remarkable behavior: after prey capture, the predators promptly engage in "area-restricted search," probing for consecutive captures nearby. Biologists have been surprised by the efficiency and adaptability of this search strategy across a great number of habitats and prey distributions. We propose to synthesize a similar search strategy for the massively multimodal problems of combinatorial optimization. The predatory search strategy restricts the search to a small area after each new improving solution is found. Subsequent improvements are often found during area-restricted search. Results of this approach for gate matrix layout, an important problem arising in very large scale integrated (VLSI) architectures, are presented. Compared to established methods over a set of benchmark circuits, predatory search is able to either match or outperform the best-known layouts. Additional remarks address the relation of predatory search to the "big-valley" hypothesis and to the field of artificial life.

Index Terms—Adaptive behavior, "big-valley" hypothesis, combinatorial optimization, predatory search, very large scale integrated (VLSI) layout.

Manuscript received May 14, 1998; revised February 5, 1999. This work was supported by FAPESP Foundation process #97/12785-8. The author is with the National Space Research Institute of the Brazilian Ministry of Science and Technology, LAC-INPE, S.J. Campos, SP 12201-970, Brazil. Publisher Item Identifier S 1089-778X(99)04554-3.

I. INTRODUCTION

Convergent evolution is one of the most impressive concepts of Darwinian thought. As stated in the literature, "It is all the more striking a testimony to the power of natural selection . . . that numerous examples can be found in real nature, in which independent lines of evolution appear to have converged, from very different starting points, on what looks very like the same endpoint" [1, p. 94]. Eyesight is a good example of a remarkable biological tool that has appeared independently many times. For instance, the octopus' eye has evolved from a line independent of our lineage, and there are records of some 40 such "parallel" lines of evolution leading to the development of eyes [2]. The reason for the emergence of remarkably similar phenotypes in distinct (and distant) genotypes is simple: a high cost/benefit ratio for owners that live in the same ecological niche.

As has been argued [3], however, the process of survival of the fittest is not restricted to physiological and anatomical adaptations; behavioral adaptations also converge. Consider, for instance, the Griffiths Triangle [4] shown in Fig. 1. The triangle positions predatory species according to the costs associated with each of the three phases of predation: 1) search and locate, 2) pursuit and attack, and 3) handling and digestion. If we select the set of search-intensive predators, that is, the species that devote most of their time and energy to locating prey, then we find some convergent search strategies, regardless of the lineage of the species in this ecological niche. A case in point is the focus of this study: the area-restricted search behavior, in which predators initially search for prey at a chosen

Fig. 1. The Griffiths Triangle. The ecological literature distinguishes three discrete steps of predation: 1) search for prey, 2) attack it, and 3) digest it. Predatory species are classified by Griffiths according to their associated costs at each step. Lions, for instance, do not carry a great cost in searching, since their prey is big and easily spotted. On the other hand, snakes spend significant amounts of time searching, which, for species like Bitis caudalis, usually means a sit-and-wait search, in which the predator virtually waits for the environment to pass through its attack area.

pace and direction. When confronted with prey (or good evidence of it), however, they abruptly change their movement patterns, slowing down and making turns, intensifying the search in the vicinity of the prey confrontation. After some time without success, predators display a "give-up" on area-restricted searching and go on to scan other areas. This particular behavioral sequence has appeared many times and is documented in birds [5], lizards [6], insect predators¹ [7]–[9], and other search-intensive predators (see, for instance, the classic monograph [10]). This strategy is effective for so many species because (besides being simple and general) it is able to strike a good balance between the exploitation (intense search in a defined area) and the exploration (extensive search through many areas) of the search space. Ethological studies have ascribed intentions, in the sense of [11], to this behavior, concluding that the creatures probe for consecutive captures [5], [9], [10]. There is a high chance of such consecutive captures in densely populated areas, and also, since the give-up times on area-restricted search are usually short, not much is spent in terms of energy and of predation risk over less promising areas [7], [10]. The adaptability of this behavioral sequence has been demonstrated in a landmark study [5]; furthermore, ethologists have found this strategy particularly aggressive and demonstrated that there must be a great evolutionary pressure toward the spacing out of the prey [12].

A. Synthetic Predatory Search

Since this strategy has performed so well over many distinct search spaces (habitats), it naturally suggests the idea of a similar, synthetic predatory search strategy that could be of use for the massively multimodal search problems of computer science [13]. That is, how would a search strategy of intensification by area restriction perform, for instance, in the complex topology of combinatorial optimization problems? This is our main concern here.

The results are promising for the traveling salesman problem (TSP). In a companion paper [14], this strategy is implemented by applying a classical move scheme under "regular" search, but

¹And also in host-seeking parasites, which are similar to insect predators in many ways [5].



IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 3, NO. 2, JULY 1999

Fig. 2. Gate Matrix Layout. The problem consists of permuting the columns of M and filling in with 1's the consecutive positions between the leftmost 1 and rightmost 1 in each row (to account for the connection) such that the maximum column sum, which is equal to the number of required tracks, is minimized. For instance, given (a) an original net-gate matrix M, a random permutation of the gates (b) requires ten tracks, while an optimum gate (column) permutation (c) holds the maximum column sum equal to four, folding into (d) the four required tracks. In (b)–(d), the gate number is at the top of each gate, and the number of connections through it is at the bottom. The corresponding track number and cost of each layout are also shown.

considering only a small "valley" of the search space whenever an improving solution is found. This small valley, corresponding to a restricted area, is intensively probed for some iterations and then gradually augmented, until a give-up moment, when the algorithm changes back to regular search. This strategy compares favorably with some previous approaches and has been able to find optimum solutions for TSP's of up to 400 cities [14]. The vast majority of improvements were found during area-restricted search.

In this paper, the same model is used in a different setting: the gate matrix layout problem [15], an NP-hard problem arising in the context of very large scale integrated (VLSI) physical layouts. In this context, predatory search also seems to be effective: optimal solutions are found for many benchmark circuits, and, for a small set of circuits, the resulting layouts are actually better than the best previously available in the literature.

II. GATE MATRIX LAYOUT

There are J gates and I nets in a gate matrix layout circuit. In a symbolic representation, gates are vertical wires holding transistors at specific positions. Nets interconnect all the distinct gates that share transistors at the same position. Fig. 2 illustrates the problem. Mathematically, we are given a binary I × J matrix M. The columns of M represent the J gates of the circuit, and the rows represent the I nets, such that the positions with m_ij = 1 represent transistors to implement interconnections to any other gate x with m_ix = 1 (see Fig. 2). Thus, in each net i, all gates with m_ij = 1 will be interconnected. This interconnection, in matrix terms, means that in each row there will be no positions of value 0 between two positions of value 1, a mathematical property known as the consecutive-ones property.
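To make the construction concrete, the following is a minimal Python sketch of the consecutive-ones fill and the resulting track count (the paper's own implementation is in PASCAL; this is only an illustration, and the small matrix used below is hypothetical, not a benchmark circuit):

```python
def fill_consecutive_ones(M, perm):
    """Permute the columns (gates) of the binary net-gate matrix M by
    perm, then set to 1 every position between the leftmost and the
    rightmost 1 of each row (each net's connection interval)."""
    filled = []
    for row in M:
        p = [row[g] for g in perm]                  # apply gate ordering
        ones = [j for j, v in enumerate(p) if v]
        if ones:
            p = [1 if ones[0] <= j <= ones[-1] else 0
                 for j in range(len(p))]
        filled.append(p)
    return filled

def tracks(M, perm):
    """Number of tracks required by a gate ordering: the maximum
    column sum of the consecutive-ones filled matrix."""
    return max(sum(col) for col in zip(*fill_consecutive_ones(M, perm)))
```

For instance, with the hypothetical matrix M = [[1, 0, 1], [0, 1, 0]], the identity ordering needs two tracks (the first net's interval covers the middle gate), while the ordering that swaps the last two gates needs only one.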

The objective of the problem is to minimize the layout area. Since there is a constant number of gates, the objective then becomes to minimize the area needed to implement the nets, given by the number of tracks, which is a function of the gate ordering, as explained below. In gate matrix layout, it is possible to change the ordering of the gates with no alteration of the underlying circuit logic. Therefore, the nets are actually implemented by a permutation π of the gates (columns) and by an interval that connects, on each row, the leftmost transistor to the rightmost one, leading us to a new filled matrix M′ = [m′_ij], defined by
\[
m'_{ij} = \begin{cases}
1, & \text{if } \exists x\,\exists y \mid \pi(x) \le j \le \pi(y) \wedge m_{ix} = m_{iy} = 1\\
0, & \text{otherwise.}
\end{cases}
\]
The consecutive-ones property connects, on each row, the leftmost and rightmost transistors of a gate ordering π. If the intervals of two distinct nets do not overlap, then both nets can be assigned to the same track, in a process known as folding. The gate matrix layout problem consists of finding a permutation π such that
\[
\mathrm{TRACKS}(M') = \max_{j \in \{1,\ldots,J\}} \sum_{i=1}^{I} m'_{ij} \tag{1}
\]
is minimized. That is, we want to minimize the maximum column sum of the permuted (and consecutive-ones filled) matrix, for this is the number of required tracks of the physical layout [15], [16]. By permuting the columns of the matrix, one can obtain a layout with fewer tracks and a smaller area. This is important for many assembly considerations, but mostly for the cost and performance of the circuit. A minor goal is to minimize the wiring length, given two layouts with an equal number of tracks. Thus, for the purposes of this paper,


we use a slightly modified cost function
\[
\mathrm{COST}(M') = I \cdot J \cdot \max_{j \in \{1,\ldots,J\}} \sum_{i=1}^{I} m'_{ij} + \sum_{j=1}^{J} \sum_{i=1}^{I} m'_{ij}. \tag{2}
\]

This is done because (1) admits many solutions of equal cost, rendering a great number of null moves. It is easy to see that, for any distinct layouts X and Y, TRACKS(X) < TRACKS(Y) implies COST(X) < COST(Y), such that by minimizing COST(X) one also minimizes TRACKS(X). Also, the form in (2) is a heuristic in itself, in that it minimizes the track number with high priority and the total wiring with low priority. The minimization of the total wiring also affects the performance of the circuit, but is a minor goal compared to the minimization of the circuit area. Also, circuits with smaller wiring tend to demand fewer tracks.

Besides gate matrix layout, this combinatorial problem arises in other VLSI architectures, such as Weinberger arrays and PLA folding [16], [17]. The same combinatorial optimization problem also appears in other industrial settings. For instance, it has been formulated in an independent study as the problem of sequencing cutting patterns (in the wood-cutting industry) to minimize the number of open stacks, and also of scheduling the production runs of a flexible machine to minimize the number of open client orders [18].

A. Complexity Results

Gate matrix layout has long been known to be NP-hard, and, in terms of graph theory, it can be interpreted as an application of interval graph augmentation: given a graph G = (V, E), find an augmentation F, corresponding to a set of edges, such that the resulting graph G′ = (V, E ∪ F) is an interval graph of minimum clique size [15], [16], [19]. Another complexity result rules out the possibility of an absolute approximation algorithm (unless P = NP, obviously) [20]. This problem is especially important to the theory of NP-completeness because of the surprising nonconstructive results obtained recently [21]–[23]. For instance, it has been proved that there exists a decision algorithm that verifies, in polynomial time, the existence of a k-track layout, for any positive integer k.
This existence proof is nonconstructive, however, such that, although it is known that the algorithm must in fact exist, the algorithm itself is not known, nor is it known whether such an algorithm would be of any help in constructing a k-track layout [21]. These results are simply intriguing. Important advances in NP-completeness theory should be expected from this line of inquiry.

III. A MODEL FOR AREA-RESTRICTED SEARCH

A. Model

A combinatorial optimization problem is defined as the dual (Ω, Z), where Ω is a discrete and finite set of solutions and Z : Ω → ℝ measures the cost of each solution. In the gate matrix layout problem, each solution s ∈ Ω is associated with a permutation of the gates, and Z is defined by (2). By exchanging two distinct gates, a new layout is obtained. This operation defines a move. The set N(s), called the neighborhood of s, is the set of all solutions obtained by a single move from s. Predatory search considers, at each step, a randomly chosen subset N′(s) ⊆ N(s), called the sampled neighborhood of s, with constant cardinality |N′(s)| = NumProposals. If, for any two states s_0 and s_n and an R ∈ ℝ, there is a sequence s_0, s_1, s_2, . . . , s_{n−1}, s_n such that s_{i+1} ∈ N(s_i) and Z(s_{i+1}) ≤ R for all 0 ≤ i < n, then we say that s_n ∈ A(s_0, R), and that there is a path from s_0 to s_n that respects restriction R. The set A(s, R), given a solution s and a restriction R, defines a restricted search area. Thus, area-restricted


search is triggered by imposing "low" values on R and, conversely, is released by setting "high" values on R. A truly adaptive value for restriction R must, however, vary as a function of the search space topology, for any (low or high) value specified a priori will sometimes be inadequate. Thus, we define a list of NumLevel + 1 restriction levels, Restriction[0, 1, . . . , NumLevel], computed from Z(b) and a sampling of NumLevel solutions in N(b), where the parameter NumLevel = J is the number of columns of M and b is the best solution found by the algorithm. This list is then ordered, such that Restriction[1] holds the very best cost sampled, Restriction[2] holds the second-best cost sampled, up to Restriction[NumLevel], which holds the very worst cost sampled. The variable Restriction[0] is set to Z(b). This list is recomputed whenever a new improving solution is found. The strategy is adaptive in the sense that low means very close (in terms of cost) to the best solution found and high means the opposite, regardless of the topology of the problem. Therefore, Restriction[Level] is correlated with the size of the search area at each step, |A(b, Restriction[Level])|, where Level ∈ {0, 1, . . . , NumLevel}.

The basis of predatory search is therefore a local search procedure in which a move from a solution s to a new solution s′ is executed whenever s′ ∈ N′(s) and Z(s′) ≤ Restriction[Level]. Under area-restricted search, the variable Level is set to low values, imposing an intense search on a small area. It starts with Level = 0, looking for a solution better than the best found, and is gradually increased until Level = Lthreshold, when a give-up on area-restricted search is implemented, by setting a high restriction value Level = LhighThreshold. The search then continues until Level reaches its highest value, NumLevel. A pseudocode of predatory search is presented in [14], where the following operations are carried out while (Level < NumLevel).
1) Propose Move: Create the set N′(current) by sampling NumProposals solutions in N(current); set proposal = {x | x ∈ N′(current) and Z(x) ≤ Z(y) for all y ∈ N′(current)}.
2) Check if proposal respects restriction level: if Z(proposal) ≤ Restriction[Level] then
   a) Accept move: Set current = proposal;
   b) Check if move is an improvement: if Z(current) < Z(b) then {Set b = current; recompute the Restriction list; Set Level = 0}.
3) Increment Iteration Counter: Set Counter = Counter + 1; if Counter > (Cthreshold × NumLevel) then {Set Counter = 0; Set Level = Level + 1; if Level = Lthreshold then Set Level = LhighThreshold}.

B. Discussion

Note that Steps 1) and 2a) suggest a move and implement it if the proposed solution respects the restriction level. Step 2b) triggers an area-restricted search when an improving solution is found, by setting Level = 0, and Step 3) gradually augments the search space (by incrementing the value of Level), until a certain point is reached, when area-restricted search is released, by setting Level = LhighThreshold. As has been pointed out, predators intensify the search in small areas by applying two basic mechanisms: 1) turning back and 2) slowing down the search. The strategy analogous to "turning back" should be clear in this model, for solutions that are outside the restricted search boundaries are discarded in favor of solutions contained within it. In addition, a mechanism analogous to "slowing down" can be

included in the model, by augmenting the subset of the sampled neighborhood at each step, simply by setting a higher value for NumProposals. Finally, the natural predatory strategy is concerned with the number of prey found, rather than with the quality of the prey. In an optimization setting, the number of good solutions found is usually irrelevant, but not the quality of those solutions. To account for this fundamental discrepancy, predatory search is set to find as many improving solutions as possible, by triggering area-restricted search only after an improving solution is found. Thus predatory search effectively tries to maximize the number of improving solutions, a strategy that should in the limit lead to the optimum solution (since we are considering finite and discrete search spaces).

C. Parameter Settings

Some parameter settings, following the proposal in [14], are considered here. Cthreshold, the counter threshold to increment the variable Level, is set constant at three; Lthreshold, the last level at which area-restricted search is executed, is also set constant at four, while LhighThreshold equals Max(NumLevel − Lthreshold, Lthreshold + 1). The number of proposed moves at each step, NumProposals, equals 4% of the whole neighborhood. These parameters should be small values, as discussed previously [14]; in this work they have been obtained empirically.

IV. NUMERICAL RESULTS

TABLE I. COMPUTATIONAL COMPARISONS TO GM_PLAN AND GM_LEARN

Predatory search was implemented in PASCAL and executed on a Pentium II 266 MHz processor (as usual, running times do not include the problem load time). The algorithm was run on the 25 benchmark circuits collected from [15], [17], [24]–[31]. Numerical results obtained on the best of five runs are reported

in Table I and are compared to those obtained by the artificial intelligence procedures developed by Chen and Hu [24], [25]. Their first procedure, GM_Plan, formulates gate matrix layout as an artificial intelligence planning problem. Their second procedure, GM_Learn, acts on top of GM_Plan, but includes a learning technique which enables the algorithm to improve its own performance as it executes. The first column of Table I displays the 25 circuits and the original reference that introduced each circuit. Columns 2–4 display the number of gates, the number of nets, and a lower bound on the track number, obtained by selecting the maximum column sum of the original matrix M [15], [24], [25]. Column 5 holds the track number obtained in the very first reference to the circuit. Columns 6 and 7 contain the track numbers obtained by GM_Plan [24] and by GM_Learn [25], followed by the track number obtained by predatory search, in column 8. Next, the overall cost value obtained by predatory search, as defined in (2), is included, followed by the time in seconds taken by the algorithm to stop. Note that (<1) means less than one second of running time. Finally, the number of state changes (Step 2a) of the algorithm) is presented in column 11.

It can be seen from Table I that the computation times for predatory search are small. Although direct time comparisons are generally regarded as misleading, in a direct comparison predatory search is slightly faster than GM_Plan on nine circuits and slower on the largest circuit (w4, for which the GM_Plan solution is much worse than the one presented here). The running times for GM_Learn, which usually finds higher quality solutions than GM_Plan, are much higher. For instance, running times of 73, 12, and 102 s on a Vax 750 machine are reported for problems v4470, w1, and x0, respectively (times for large circuits such as w4 are not reported for GM_Learn).
The same problems took 7, 1, and 3 s under predatory search, and 9, 1, and 11 s under GM_Plan. Since their results were obtained on a machine that may be considerably slower than ours, it is possible that GM_Plan, or perhaps even GM_Learn, if executed on the same machine, could finish in shorter computation times than predatory search. It is difficult to determine just how much has been gained from a smaller computational effort and how much from more advanced hardware. For future comparisons, the number of moves executed by predatory search should provide a better standard.

The quality of the layouts obtained by predatory search appears impressive. For instance, on the circuits v4000, v4470, w4, and x0, predatory search found better layouts than all previously reported in the literature. Also, on all other test problems, the layouts match those obtained by either the planning or the learning procedures of Chen and Hu [24], [25]. Finally, 12 of the layouts obtained, v1, v4000, v4050, vc1, vw1, w1, w2, wan, wli, x4, x5, and x6, have LB(X) = TRACKS(X) and are, thus, provably optimal.
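Pulling the description of Section III and the parameter settings of Section III-C together, the whole loop can be sketched in Python as follows (the published implementation is in PASCAL; here the cost function Z, the neighborhood generator, and the iteration cap are generic placeholders, and details such as tie breaking follow one reading of the pseudocode, not the original code):

```python
import random

def predatory_search(Z, neighbors, s0, num_level, num_proposals,
                     c_threshold=3, l_threshold=4, max_iterations=5000):
    """Sketch of the predatory search loop. Z is the cost function,
    neighbors(s) returns the full neighborhood N(s), and num_level
    plays the role of NumLevel (= J for gate matrix layout).
    Parameter defaults follow Section III-C."""
    l_high = max(num_level - l_threshold, l_threshold + 1)

    def restrictions(b):
        # Restriction[0] = Z(b); Restriction[1..NumLevel] holds the
        # sorted costs of NumLevel sampled neighbors of b.
        # (Assumes |N(b)| >= num_level.)
        sample = random.sample(neighbors(b), num_level)
        return [Z(b)] + sorted(Z(s) for s in sample)

    best = current = s0
    restriction = restrictions(best)
    level = counter = 0
    for _ in range(max_iterations):
        if level >= num_level:
            break
        # Step 1: propose the cheapest of a random neighborhood sample.
        sample = random.sample(neighbors(current), num_proposals)
        proposal = min(sample, key=Z)
        # Step 2: accept the move if it respects the restriction level.
        if Z(proposal) <= restriction[level]:
            current = proposal
            # Step 2b: an improvement triggers area-restricted search.
            if Z(current) < Z(best):
                best = current
                restriction = restrictions(best)
                level = counter = 0
        # Step 3: after enough iterations, loosen the restriction.
        counter += 1
        if counter > c_threshold * num_level:
            counter = 0
            level += 1
            if level == l_threshold:   # give up area-restricted search
                level = l_high
    return best
```

On a toy permutation problem with the gate-exchange neighborhood, the sketch never returns a solution costlier than the starting one, since the best solution is only ever replaced by improvements.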

V. CONCLUSION

A predatory search strategy for the solution of an NP-hard combinatorial problem in the context of VLSI physical layouts has been suggested. The strategy is based on an adaptive behavioral sequence widely observed across the animal kingdom: that of restricting the search to a small area around good points at certain times. The implementation of this strategy is capable of finding optimal and near-optimal solutions to the gate matrix layout problem, and in some cases it even finds layouts better than the best previously published in the literature.

We must stress that a remarkable new result seems pertinent to area-restricted search in combinatorial optimization: a correlation between the cost of local-minimum solutions and the distance, in terms of moves, between these local-minimum solutions has been established recently. In this important study, solutions that are many moves away from good ones were found to have high cost, and solutions of small cost were found to be few moves away from each other (and also few moves away from the best one) [32]. This suggests that good solutions are in fact clustered (few moves from each other), and also that, as the distance between a solution and the global optimum decreases, the quality of the solution improves, posing the so-called "big-valley" hypothesis. Results along these lines have appeared for the traveling salesman and graph bisection problems [32], and a similar result seems to hold for flowshop sequencing [33]. Should this be a general fact of combinatorial optimization search spaces, then it would provide a reasonable theoretical basis for area-restricted search. This hypothesis also suggests an alternative model of predatory search in which the restricted areas are bounded not by solution cost, but by the minimum number of moves needed to reach solution b. We are currently pursuing this direction.
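The alternative, distance-bounded model mentioned above would require measuring how many moves separate two solutions. For the gate-exchange move used here this distance has a closed form: the minimum number of exchanges turning one permutation into another is n minus the number of cycles of the permutation relating them. A small illustrative sketch, not part of the published algorithm:

```python
def move_distance(p, q):
    """Minimum number of pairwise gate exchanges (moves) needed to
    turn permutation p into permutation q: n minus the number of
    cycles of the permutation sigma with p[sigma[i]] = q[i]."""
    pos = {gate: j for j, gate in enumerate(p)}
    sigma = [pos[g] for g in q]          # relating permutation
    seen, cycles = [False] * len(p), 0
    for i in range(len(sigma)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = sigma[j]
    return len(p) - cycles
```

For example, a single swap gives distance one, while a three-gate rotation gives distance two.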
The idea of applying a synthetic behavioral sequence to optimization is not new, for a similar approach, in which a digital ant colony gradually builds solutions to combinatorial problems (replicating even the emergent autocatalytic process of ant foraging [34], [35]; see also [36]), is by now well known. These are synthetic approaches to behavior-based artificial intelligence, as in [37] and [38], in the sense that, instead of replicating the causes of the emergence of the behavior, these methods investigate the effects of such behavior, suitably defined for combinatorial search. The analogous problem of creating a computational environment with the right dynamics for the emergence of such behaviors constitutes a major challenge to the related field of artificial life [39], [40].


ACKNOWLEDGMENT

The author thanks Dr. S.-J. Chen of National Taiwan University for the circuits used in this paper and for [25]. He also thanks Dr. H. H. Yanasse of the National Space Research Institute of Brazil for many insightful remarks about the relation of gate matrix layout to problems of an operational research nature.

REFERENCES

[1] R. Dawkins, The Blind Watchmaker. New York: Norton, 1987.
[2] L. F. Land, "Optics and vision in invertebrates," in Handbook of Sensory Physiology, vol. VII, H. Autrum, Ed. Berlin: Springer-Verlag, 1980, pp. 471–592.
[3] R. Dawkins, The Extended Phenotype. Oxford, U.K.: Oxford Univ. Press, 1982.
[4] D. Griffiths, "Foraging costs and relative prey size," Amer. Naturalist, vol. 116, pp. 743–752, 1980.
[5] J. N. M. Smith, "The food searching behavior of two European thrushes. II. The adaptiveness of the search patterns," Behavior, vol. 59, pp. 1–61, 1974.
[6] R. B. Huey and E. R. Pianka, "Ecological consequences of foraging mode," Ecology, vol. 62, pp. 991–999, 1981.
[7] K. Nakamuta, "Mechanism of the switchover from extensive to area-concentrated search behavior of the ladybird beetle, Coccinella septempunctata bruckii," J. Insect Physiol., vol. 31, pp. 849–856, 1985.
[8] W. J. Bell, "Searching behavior patterns in insects," Annu. Rev. Entomology, vol. 35, pp. 447–467, 1990.
[9] P. Kareiva and G. Odell, "Swarms of predators exhibit 'preytaxis' if individual predators use area-restricted search," Amer. Naturalist, vol. 130, pp. 233–270, 1987.
[10] E. Curio, The Ethology of Predation. Berlin, Germany: Springer-Verlag, 1976.
[11] D. C. Dennett, The Intentional Stance. Cambridge, MA: MIT Press, 1989.
[12] N. Tinbergen, M. Impekoven, and D. Frank, "An experiment on spacing-out against predation," Behavior, vol. 28, pp. 307–321, 1967.
[13] A. Linhares, "Preying on optima: A predatory search strategy for combinatorial problems," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 1998, pp. 2974–2978.
[14] ——, "State-space search strategies gleaned from animal behavior: A traveling salesman experiment," Biol. Cybern., vol. 78, pp. 167–173, 1998.
[15] O. Wing, S. Huang, and R. Wang, "Gate matrix layout," IEEE Trans. Computer-Aided Design, vol. CAD-4, pp. 220–231, 1985.
[16] R. Möhring, "Graph problems related to gate matrix layout and PLA folding," Computing, vol. 7, pp. 17–51, 1990.
[17] T. Ohtsuki, H. Mori, E. S. Kuh, T. Kashiwabara, and T. Fujisawa, "One-dimensional logic gate assignment and interval graphs," IEEE Trans. Circuits Syst., vol. 26, pp. 675–683, 1979.
[18] H. H. Yanasse, "On a pattern sequencing problem to minimize the maximum number of open stacks," Eur. J. Oper. Res., vol. 100, pp. 454–463, 1997.
[19] T. Kashiwabara and T. Fujisawa, "NP-completeness of the problem of finding a minimum-clique-number interval graph containing a given graph as subgraph," in Proc. 1979 Int. Symp. Circuits and Systems, 1979, pp. 657–660.
[20] N. Deo, M. S. Krishnamoorthy, and M. A. Langston, "Exact and approximate solutions for the gate matrix layout problem," IEEE Trans. Computer-Aided Design, vol. CAD-6, pp. 79–84, 1987.
[21] M. R. Fellows and M. A. Langston, "Nonconstructive advances in polynomial-time complexity," Inform. Processing Lett., vol. 26, pp. 157–162, 1987.
[22] ——, "Nonconstructive tools for proving polynomial-time decidability," J. ACM, vol. 35, pp. 727–739, 1988.
[23] N. G. Kinnersley and M. A. Langston, "Obstruction set isolation for the gate matrix layout problem," Discrete Applied Math., vol. 54, pp. 169–213, 1994.
[24] Y. H. Hu and S. J. Chen, "GM_Plan: A gate matrix layout algorithm based on artificial intelligence planning techniques," IEEE Trans. Computer-Aided Design, vol. 9, pp. 836–845, 1990.
[25] S. J. Chen and Y. H. Hu, "GM_Learn: An iterative learning algorithm for CMOS gate matrix layout," Proc. Inst. Elec. Eng., vol. 137, pt. E, pp. 301–309, 1990.
[26] H. W. Leong, "A new algorithm for gate matrix layout," in Proc. Int. Conf. Computer-Aided Design, 1986, pp. 316–319.

[27] D. V. Heinbuch, CMOS3 Cell Library. Reading, MA: Addison-Wesley, 1988.
[28] Y. C. Chang, S. C. Chang, and L. H. Hsu, "Automated layout generation using gate matrix approach," in Proc. 24th Design Automation Conf., 1987, pp. 552–558.
[29] O. Wing, "Automated gate matrix layout," in Proc. Int. Symp. Circuits and Systems, 1982, pp. 681–685.
[30] A. Hashimoto and J. Stevens, "Wire routing by optimizing channel assignment within large apertures," in Proc. 8th Design Automation Workshop, 1971, pp. 155–169.
[31] K. Nakatani, T. Fujii, T. Kikuno, and N. Yoshida, "A heuristic algorithm for gate matrix layout," in Proc. Int. Conf. Computer-Aided Design, 1986, pp. 324–327.
[32] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Oper. Res. Lett., vol. 16, pp. 101–113, 1994.
[33] C. R. Reeves, "Landscapes, operators, and heuristic search," Ann. Oper. Res., submitted for publication.
[34] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 1, pp. 53–66, 1997.
[35] M. Dorigo, V. Maniezzo, and A. Colorni, "The ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst., Man, Cybern. B, vol. 26, pp. 29–41, 1996.
[36] R. Schoonderwoerd, O. E. Holland, J. L. Bruten, and L. J. M. Rothkrantz, "Ant-based load balancing in telecommunication networks," Adaptive Behavior, vol. 5, pp. 169–207, 1996.
[37] L. Steels, "The artificial life roots of artificial intelligence," Artif. Life, vol. 1, pp. 75–110, 1993.
[38] R. A. Brooks, "Intelligence without representation," Artif. Intell., vol. 47, pp. 139–160, 1991.
[39] M. Dyer, "Toward synthesizing artificial neural networks that exhibit cooperative intelligent behavior: Some open issues in artificial life," Artif. Life, vol. 1, pp. 111–131, 1993.
[40] M. A. Boden, Ed., The Philosophy of Artificial Life. Oxford, U.K.: Oxford Univ. Press, 1996.