IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 29, NO. 3, MARCH 2007

On the Impact of Dissimilarity Measure in k-Modes Clustering Algorithm

Michael K. Ng, Mark Junjie Li, Joshua Zhexue Huang, and Zengyou He

Abstract—This correspondence describes extensions to the k-modes algorithm for clustering categorical data. By modifying a simple matching dissimilarity measure for categorical objects, a heuristic approach was developed in [4], [12] which allows the use of the k-modes paradigm to obtain a cluster with strong intrasimilarity and to efficiently cluster large categorical data sets. The main aim of this paper is to rigorously derive the updating formula of the k-modes clustering algorithm with the new dissimilarity measure, and the convergence of the algorithm under the optimization framework.

Index Terms—Data mining, clustering, k-modes algorithm, categorical data.

1 INTRODUCTION

Since first published in 1997, the k-modes algorithm [5], [6] has become a popular technique for solving categorical data clustering problems in different application domains (e.g., [1], [11]). The k-modes algorithm extends the k-means algorithm [9] by using a simple matching dissimilarity measure for categorical objects, modes instead of means for clusters, and a frequency-based method to update modes in the clustering process to minimize the clustering cost function. These extensions remove the numeric-only limitation of the k-means algorithm and enable the k-means clustering process to be used to efficiently cluster large categorical data sets from real-world databases. An equivalent nonparametric approach to deriving clusters from categorical data is presented in [2]. A note in [8] discusses the equivalence of the two independently developed k-modes approaches.

The distance between two objects computed with the simple matching dissimilarity measure is either 0 or 1. This often results in clusters with weak intrasimilarity. Recently, He et al. [4] and San et al. [12] independently introduced a new dissimilarity measure into the k-modes clustering process to improve the accuracy of the clustering results. Their main idea is to use the relative attribute frequencies of the cluster modes in the similarity measure in the k-modes objective function. This modification allows the algorithm to recognize a cluster with weak intrasimilarity and, therefore, assign less similar objects to such a cluster, so that the generated clusters have strong intrasimilarities. Experimental results in [4] and [12] have shown that the modified k-modes algorithm is very effective.

The aim of this paper is to give a rigorous proof that the object cluster membership assignment method and the mode updating formulae under the new dissimilarity measure indeed minimize the objective function. We also prove that, using the new dissimilarity measure, the convergence of the clustering process is guaranteed. In [4] and [12], the new dissimilarity measure was introduced heuristically. With the formal proofs, we assure that the modified k-modes algorithm can be used safely.

The outline of this paper is as follows: In Section 2, we review the k-modes algorithm. In Section 3, we study and analyze the k-modes algorithm with the new dissimilarity measure. In Section 4, examples are given to illustrate the effectiveness of the k-modes algorithm with the new dissimilarity measure. Finally, concluding remarks are given in Section 5.

2 THE k-MODES ALGORITHM

We assume the set of objects to be clustered is stored in a database table T defined by a set of attributes A_1, A_2, ..., A_m. Each attribute A_j describes a domain of values, denoted by DOM(A_j), associated with a defined semantic and a data type. In this paper, we only consider two general data types, numeric and categorical, and assume other types used in database systems can be mapped to one of these two types. The domains of attributes associated with these two types are called numeric and categorical, respectively. A numeric domain consists of real numbers. A domain DOM(A_j) is defined as categorical if it is finite and unordered, i.e., for any a, b ∈ DOM(A_j), either a = b or a ≠ b; see, for instance, [3].

An object X in T can be logically represented as a conjunction of attribute-value pairs [A_1 = x_1] ∧ [A_2 = x_2] ∧ ... ∧ [A_m = x_m], where x_j ∈ DOM(A_j) for 1 ≤ j ≤ m. Without ambiguity, we represent X as a vector [x_1, x_2, ..., x_m]. X is called a categorical object if it has only categorical values. We consider that every object has exactly m attribute values. If the value of an attribute A_j is missing, then we denote the attribute value of A_j by ε.

Let X = {X_1, X_2, ..., X_n} be a set of n objects. Object X_i is represented as [x_{i,1}, x_{i,2}, ..., x_{i,m}]. We write X_i = X_k if x_{i,j} = x_{k,j} for 1 ≤ j ≤ m. The relation X_i = X_k does not mean that X_i and X_k are the same object in the real-world database, but rather that the two objects have equal values in attributes A_1, A_2, ..., A_m.

The k-modes algorithm, introduced and developed in [5], [6], made the following modifications to the k-means algorithm: 1) using a simple matching dissimilarity measure for categorical objects, 2) replacing the means of clusters with the modes, and 3) using a frequency-based method to find the modes. These modifications remove the numeric-only limitation of the k-means algorithm while maintaining its efficiency in clustering large categorical data sets [6].
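As a toy illustration (ours, not code from the paper), categorical objects can be held as fixed-length tuples of attribute values, with equality meaning attribute-wise equality rather than object identity:

```python
# Toy illustration (ours): a categorical object is a tuple of m attribute
# values; a missing value for an attribute is marked with a sentinel
# standing in for the epsilon placeholder in the text.
MISSING = None

X1 = tuple(["1", "2", "3"])      # [A1 = 1] ∧ [A2 = 2] ∧ [A3 = 3]
X2 = tuple(["1", "2", "3"])
X3 = tuple(["1", MISSING, "3"])  # value of A2 is missing

# X1 = X2 in the paper's sense: equal values on every attribute,
# not necessarily the same database record.
assert X1 == X2
assert X1 != X3
```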
________________________________________
M.K. Ng and M.J. Li are with the Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong. E-mail: {mng, jjli}@math.hkbu.edu.hk.
J.Z. Huang is with the E-Business Technology Institute, The University of Hong Kong, Pokfulam Road, Hong Kong. E-mail: [email protected].
Z. He is with the Department of Computer Science and Engineering, Harbin Institute of Technology, 92 West Dazhi Street, PO Box 315, Harbin 150001, China. E-mail: [email protected].
Manuscript received 7 Jan. 2006; revised 13 June 2006; accepted 31 July 2006; published online 15 Jan. 2007. Recommended for acceptance by M. Figueiredo. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TPAMI-0010-0106.
0162-8828/07/$25.00 © 2007 IEEE. Published by the IEEE Computer Society.
________________________________________

Let X and Y be two categorical objects represented by [x_1, x_2, ..., x_m] and [y_1, y_2, ..., y_m], respectively. The simple matching dissimilarity measure between X and Y is defined as follows:

    d(X, Y) = \sum_{j=1}^{m} \delta(x_j, y_j),    (1)

where

    \delta(x_j, y_j) = \begin{cases} 0, & x_j = y_j, \\ 1, & x_j \ne y_j. \end{cases}
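For concreteness, the simple matching measure of (1) can be sketched in a few lines of Python (an illustration of ours, not code from the paper):

```python
def simple_matching(x, y):
    """d(X, Y) of (1): the number of attributes on which X and Y differ."""
    if len(x) != len(y):
        raise ValueError("objects must have the same number of attributes")
    return sum(1 for xj, yj in zip(x, y) if xj != yj)

# With m = 3 categorical attributes, the distance is the mismatch count:
print(simple_matching(["1", "1", "5"], ["1", "2", "5"]))  # prints 1
```

Because the measure only counts mismatches, every nonidentical pair of objects differs by whole units, which is why the per-attribute contribution is always 0 or 1.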

It is easy to verify that the function d defines a metric space on the set of categorical objects. Traditionally, the simple matching approach is often used for binary variables converted from categorical variables [10, pp. 28-29]. We note that d is also a kind of generalized Hamming distance. The k-modes algorithm uses the k-means paradigm to cluster categorical data. The objective of clustering a set of n categorical objects into k clusters is to find W and Z that minimize

    F(W, Z) = \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} \, d(Z_l, X_i)    (2)

subject to

    w_{li} \in \{0, 1\}, \quad 1 \le l \le k, \ 1 \le i \le n,    (3)

    \sum_{l=1}^{k} w_{li} = 1, \quad 1 \le i \le n,    (4)

and

    0 < \sum_{i=1}^{n} w_{li} < n, \quad 1 \le l \le k,    (5)

where k (≤ n) is a known number of clusters, W = [w_{li}] is a k-by-n {0, 1} matrix, Z = [Z_1, Z_2, ..., Z_k], and Z_l is the lth cluster center with the categorical attributes A_1, A_2, ..., A_m.

Minimization of F in (2) with the constraints in (3), (4), and (5) forms a class of constrained nonlinear optimization problems whose solutions are unknown. The usual method towards optimization of F in (2) is to use partial optimization for Z and W. In this method, we first fix Z and find necessary conditions on W to minimize F. Then, we fix W and minimize F with respect to Z. This process is formalized in the k-modes algorithm as follows:

Algorithm (The k-modes algorithm)
1. Choose an initial point Z^(1) ∈ R^{m×k}. Determine W^(1) such that F(W, Z^(1)) is minimized. Set t = 1.
2. Determine Z^(t+1) such that F(W^(t), Z^(t+1)) is minimized. If F(W^(t), Z^(t+1)) = F(W^(t), Z^(t)), then stop; otherwise, go to Step 3.
3. Determine W^(t+1) such that F(W^(t+1), Z^(t+1)) is minimized. If F(W^(t+1), Z^(t+1)) = F(W^(t), Z^(t+1)), then stop; otherwise, set t = t + 1 and go to Step 2.

The matrices W and Z are calculated according to the following two theorems:

Theorem 1. Let Ẑ be fixed and consider the problem:

    \min_{W} F(W, \hat{Z}) \quad \text{subject to (3), (4), and (5)}.

The minimizer Ŵ is given by

    \hat{w}_{li} = \begin{cases} 1, & \text{if } d(\hat{Z}_l, X_i) \le d(\hat{Z}_h, X_i), \ 1 \le h \le k, \\ 0, & \text{otherwise}. \end{cases}

Theorem 2. Let X be a set of categorical objects described by categorical attributes A_1, A_2, ..., A_m, and DOM(A_j) = {a_j^(1), a_j^(2), ..., a_j^(n_j)}, where n_j is the number of categories of attribute A_j for 1 ≤ j ≤ m. Let the cluster centers Z_l be represented by [z_{l,1}, z_{l,2}, ..., z_{l,m}] for 1 ≤ l ≤ k. Then, the quantity \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} d(Z_l, X_i) is minimized iff z_{l,j} = a_j^(r) ∈ DOM(A_j), where

    |\{w_{li} \mid x_{i,j} = a_j^{(r)}, \ w_{li} = 1\}| \ge |\{w_{li} \mid x_{i,j} = a_j^{(t)}, \ w_{li} = 1\}|, \quad 1 \le t \le n_j,

for 1 ≤ j ≤ m. Here, |X| denotes the number of elements in the set X.

We remark that the minimum solution Ŵ is not unique, so w_{li} = 1 may arbitrarily be assigned to the first minimizing index l, and the remaining entries of this column are put to zero. This problem occurs frequently when clusters have weak intrasimilarities, i.e., the attribute modes do not have high frequencies. Let us consider the following example to demonstrate the problem using the simple matching dissimilarity. The data set is described by three categorical attributes A_1 (two categories: 1 or 2), A_2 (two categories: 1 or 2), and A_3 (five categories: 1, 2, 3, 4, or 5), and there are two clusters with their modes and their three objects:

[Table: clusters C1 and C2 with their modes and their member objects.]

The above example shows that the similarity measure does not represent the real semantic distance between the objects and the cluster mode. For example, if an object X = [1 1 5] is assigned to one of the clusters, then we find that d(C1, X) = 1 = d(C2, X). Therefore, we cannot determine the assignment of X properly.

3 THE NEW DISSIMILARITY MEASURE

He et al. [4] and San et al. [12] independently introduced a new dissimilarity measure into the k-modes objective function. More precisely, they minimize

    F_n(W, Z) = \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} \, d_n(Z_l, X_i)    (6)

subject to the same conditions as in (3), (4), and (5). The dissimilarity measure d_n(Z_l, X_i) is defined as follows:

    d_n(Z_l, X_i) = \sum_{j=1}^{m} \phi(z_{l,j}, x_{i,j}),    (7)

where

    \phi(z_{l,j}, x_{i,j}) = \begin{cases} 1, & \text{if } z_{l,j} \ne x_{i,j}, \\ 1 - \dfrac{|c_{l,j,r}|}{|c_l|}, & \text{otherwise}, \end{cases}

where |c_l| is the number of objects in the lth cluster, given by

    |c_l| = |\{i \mid w_{li} = 1\}|,

and |c_{l,j,r}| is the number of objects with category a_j^(r) of the jth attribute in the lth cluster, given by

    |c_{l,j,r}| = |\{w_{ls} \mid z_{l,j} = x_{s,j} = a_j^{(r)}, \ w_{ls} = 1\}|.

According to the definition of \phi(·, ·), the dominance level of the mode category is considered in the calculation of the dissimilarity measure. When the mode category is 100 percent dominant, we have |c_l| = |c_{l,j,r}|, and therefore the corresponding function value is the same as in (1) in the original k-modes algorithm.
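The frequency-based measure d_n of (7) can be sketched as follows; the cluster is passed as its list of member objects, and all names and the sample data are ours, not the paper's:

```python
def dn(mode, x, members):
    """d_n(Z_l, X) of (7) for a nonempty cluster with the given mode."""
    cl = len(members)                        # |c_l|
    total = 0.0
    for j, (zj, xj) in enumerate(zip(mode, x)):
        if zj != xj:
            total += 1.0                     # phi = 1 on a mismatch
        else:
            # |c_{l,j,r}|: members whose jth value equals the mode category
            cljr = sum(1 for obj in members if obj[j] == zj)
            total += 1.0 - cljr / cl         # phi = 1 - |c_{l,j,r}|/|c_l|
    return total

# Hypothetical cluster of three objects with mode ["1", "1", "1"]:
members = [["1", "1", "1"], ["1", "1", "2"], ["1", "2", "3"]]
print(dn(["1", "1", "1"], ["1", "1", "5"], members))  # 4/3: 0 + 1/3 + 1
```

A matching attribute whose mode category is fully dominant contributes 0, exactly as in (1); a match against a weakly dominant mode category contributes a positive penalty, which is what lets d_n separate clusters with weak intrasimilarity.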


Let us consider the example in Section 2 again; the computed parameters |c_{l,j,r}| are given as follows:

[Table: computed values of |c_{l,j,r}| for the example.]

Now, if an object X = [1 1 5] is assigned to one of the clusters, the new dissimilarity measure can represent the real semantic distance: we have d_n(C1, X) = 1 and d_n(C2, X) = 5/3. The object X is properly assigned to the first cluster.

Now, the key issue is to derive rigorously the updating formula of the k-modes clustering algorithm with the new dissimilarity measure, similar to Theorem 2. In [4], [12], the authors presented the updating formula only heuristically, using the k-modes framework. We remark that the matrix W can be calculated according to Theorem 1. Theorem 3 below rigorously establishes the updating formula of Z in the k-modes clustering algorithm with the new dissimilarity measure.

Theorem 3. Let X be a set of categorical objects described by categorical attributes A_1, A_2, ..., A_m, and DOM(A_j) = {a_j^(1), a_j^(2), ..., a_j^(n_j)}, where n_j is the number of categories of attribute A_j for 1 ≤ j ≤ m. Let the cluster centers Z_l be represented by [z_{l,1}, z_{l,2}, ..., z_{l,m}] for 1 ≤ l ≤ k. Then, the quantity \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} d_n(Z_l, X_i) is minimized iff z_{l,j} = a_j^(r) ∈ DOM(A_j), where

    |\{w_{li} \mid x_{i,j} = a_j^{(r)}, \ w_{li} = 1\}| \ge |\{w_{li} \mid x_{i,j} = a_j^{(t)}, \ w_{li} = 1\}|, \quad 1 \le t \le n_j,    (8)

for 1 ≤ j ≤ m.

Proof. For a given W, all the inner sums of the quantity

    \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} \, d_n(Z_l, X_i) = \sum_{l=1}^{k} \sum_{i=1}^{n} \sum_{j=1}^{m} w_{li} \, \phi(z_{l,j}, x_{i,j})

are nonnegative and independent. Minimizing the quantity is equivalent to minimizing each inner sum. We write the (l, j)th inner sum (1 ≤ l ≤ k and 1 ≤ j ≤ m) as

    \psi_{l,j} = \sum_{i=1}^{n} w_{li} \, \phi(z_{l,j}, x_{i,j}).

When z_{l,j} = a_j^(t), we have

    \psi_{l,j} = \sum_{i=1, \, x_{i,j} = a_j^{(t)}}^{n} w_{li} \left(1 - \frac{c_{l,j,t}}{c_l}\right) + \sum_{i=1, \, x_{i,j} \ne a_j^{(t)}}^{n} w_{li}
               = c_{l,j,t} \left(1 - \frac{c_{l,j,t}}{c_l}\right) + (c_l - c_{l,j,t})
               = c_l - \frac{c_{l,j,t}^2}{c_l}.

It is clear that \psi_{l,j} is minimized iff c_{l,j,t} is maximal for 1 ≤ t ≤ n_j. Thus, the term

    |\{w_{li} \mid x_{i,j} = z_{l,j}, \ w_{li} = 1\}|

must be maximal. The result follows. □

According to (8), the category of attribute A_j of the cluster mode Z_l is determined by the mode of the categories of attribute A_j in the set of objects belonging to cluster l. By comparing the results in Theorems 2 and 3, the cluster centers Z are updated in the same manner even though we use different distance functions in (1) and (7), respectively. This implies that the same k-modes algorithm can be used. The only difference is that we need to count and store |c_{l,j,r}| and |c_l| in each iteration for the distance function evaluation. Combining Theorems 1 and 3 with the algorithm forms the k-modes algorithm with the new dissimilarity measure, in which the modes of clusters in each iteration are updated according to Theorem 3 and the partition matrix is computed according to Theorem 1. We remark that the updating formulae of W and Z in Theorems 1 and 3, respectively, are determined by solving two minimization subproblems of (6):

    \min_{W} F_n(W, Z) = \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} \, d_n(Z_l, X_i) \quad \text{for a given } Z

and

    \min_{Z} F_n(W, Z) = \sum_{l=1}^{k} \sum_{i=1}^{n} w_{li} \, d_n(Z_l, X_i) \quad \text{for a given } W.

The convergence of the k-modes algorithm with the new dissimilarity measure is obtained in Theorem 4 below.

Theorem 4. The k-modes algorithm with the new dissimilarity measure converges in a finite number of iterations.

Proof. We first note that there are only a finite number (N = \prod_{j=1}^{m} n_j) of possible cluster centers (modes). We then show that each possible center appears at most once during the iterations of the k-modes algorithm. Assume that Z^(t_1) = Z^(t_2), where t_1 ≠ t_2. According to the k-modes algorithm, we can compute the minimizers W^(t_1) and W^(t_2) for Z = Z^(t_1) and Z = Z^(t_2), respectively. Therefore, we have

    F_n(W^{(t_1)}, Z^{(t_1)}) = F_n(W^{(t_1)}, Z^{(t_2)}) = F_n(W^{(t_2)}, Z^{(t_2)}).

However, the sequence F_n(·, ·) generated by the k-modes algorithm with the new dissimilarity measure is strictly decreasing. Hence, the result follows. □

The result of Theorem 4 guarantees the decrease of the objective function values with respect to the iterations of the k-modes algorithm with the new dissimilarity measure.
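Putting Theorems 1, 3, and 4 together, the modified k-modes loop can be sketched roughly as below. This is our own simplified illustration (the data, names, initialization, and tie-breaking are ours): each iteration updates the modes by per-attribute majority (Theorem 3) and then reassigns objects to the nearest mode under d_n (Theorem 1); Theorem 4 says the exact procedure stops after finitely many iterations, which the sketch guards with a loop cap.

```python
import random
from collections import Counter

def dn(mode, x, members):
    # d_n of (7): 1 on a mismatch, else 1 - |c_{l,j,r}|/|c_l|.
    cl = len(members)
    total = 0.0
    for j, (zj, xj) in enumerate(zip(mode, x)):
        if zj != xj or cl == 0:
            total += 1.0
        else:
            cljr = sum(1 for obj in members if obj[j] == zj)
            total += 1.0 - cljr / cl
    return total

def k_modes_new(objects, k, seed=0, max_iter=100):
    rng = random.Random(seed)
    modes = [list(m) for m in rng.sample(objects, k)]
    # Initial partition: nearest mode under simple matching.
    labels = [min(range(k),
                  key=lambda l: sum(zj != xj for zj, xj in zip(modes[l], x)))
              for x in objects]
    for _ in range(max_iter):
        members = [[x for x, lab in zip(objects, labels) if lab == l]
                   for l in range(k)]
        # Theorem 3: each mode attribute becomes the most frequent
        # category among the cluster's members.
        for l in range(k):
            if members[l]:
                for j in range(len(modes[l])):
                    modes[l][j] = Counter(x[j] for x in members[l]).most_common(1)[0][0]
        # Theorem 1: reassign each object to a cluster minimizing d_n,
        # ties broken by the first minimizing index.
        new_labels = [min(range(k), key=lambda l: dn(modes[l], x, members[l]))
                      for x in objects]
        if new_labels == labels:  # partition is stable: stop
            break
        labels = new_labels
    return labels, modes

labels, modes = k_modes_new([("a", "x"), ("a", "x"), ("a", "y"),
                             ("b", "z"), ("b", "z"), ("b", "w")], k=2)
print(labels)  # a partition of the six objects into two clusters
```

Note that, as the text observes, the only extra bookkeeping relative to the original k-modes algorithm is maintaining the counts |c_l| and |c_{l,j,r}| for the distance evaluation.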

4 EXPERIMENTAL RESULTS

In [4], [12], experimental results are given to illustrate that the k-modes algorithm with the new dissimilarity measure performs better in clustering accuracy than the original k-modes algorithm. The main aim of this section is to illustrate the convergence result and to evaluate the clustering performance and efficiency of the k-modes algorithm with the new dissimilarity measure. We use the soybean data set from the UCI Machine Learning Repository [13] to generate several examples to test the algorithm. The soybean data set includes 47 records, each of which is described by 35 attributes; it originates from an experimental comparison of two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. Each record is labeled as one of four diseases: D1, D2, D3, and D4. Except for D4, which has 17 instances, each of the other diseases has 10 instances.


Fig. 1. The objective function values against the iterations with different initial guesses.

We selected only 21 attributes in these experiments because the other attributes have only one category. We carried out 100 runs of the k-modes algorithm with the new dissimilarity measure and of the original k-modes algorithm on the data set. In each run, the same initial cluster centers were used in both algorithms. In Fig. 1, we show the 100 curves, where each curve gives the objective function values over the iterations of the k-modes algorithm using the new dissimilarity measure. It is clear from the figure that the objective function values decrease along each curve, in agreement with our results in Theorem 3 for the new dissimilarity measure. We also see in Fig. 1 that the algorithm stops after a finite number of iterations, i.e., the objective function values do not decrease any more. This is exactly the result we showed in Theorem 4. The k-modes algorithm with the new dissimilarity measure can therefore be used safely.

To evaluate the performance of clustering algorithms, we consider three measures: 1) accuracy (AC), 2) precision (PE), and 3) recall (RE). Objects in the lth cluster are assumed to be classified either correctly or incorrectly with respect to a given class of objects. Let the number of correctly classified objects be a_l, let the number of incorrectly classified objects be b_l, and let the number of objects in a given class but not in the cluster be c_l. The clustering accuracy, precision, and recall are defined as

    AC = \frac{1}{n} \sum_{l=1}^{k} a_l, \quad PE = \frac{1}{k} \sum_{l=1}^{k} \frac{a_l}{a_l + b_l}, \quad \text{and} \quad RE = \frac{1}{k} \sum_{l=1}^{k} \frac{a_l}{a_l + c_l},

respectively. Table 1 shows the summary results for both algorithms. According to Table 1, the k-modes algorithm with the new dissimilarity measure outperforms the original k-modes algorithm in AC, PE, and RE.

Next, we test the scalability of the k-modes algorithm with the new dissimilarity measure. Synthetic categorical data sets are generated by the method in [7] to evaluate the algorithm. The numbers of clusters, attributes, and categories of the synthetic data range from 3 to 24. The number of objects ranges between 10,000

TABLE 1 The Summary Results for 100 Runs of Two Algorithms on the Soybean Data Set

Fig. 2. (a) Computational times for different numbers of clusters. (b) Computational times for different numbers of categories. (c) Computational times for different numbers of attributes. (d) Computational times for different numbers of objects.


and 80,000. The experiments were performed on an Apple iBook G4 with 1GB of RAM. The computational times of both algorithms are plotted with respect to the number of clusters, attributes, categories, and objects, while the other corresponding parameters are fixed. All of the experiments are repeated five times, and the average computational times are reported. Fig. 2a shows the computational times against the number of clusters, where the numbers of categories and attributes are 12 and the number of objects is 80,000. Fig. 2b shows the computational times against the number of categories, where the number of clusters is 3, the number of attributes is 12, and the number of objects is 80,000. Fig. 2c shows the computational times against the number of attributes, where the number of clusters is 3, the number of categories is 12, and the number of objects is 80,000. Fig. 2d shows the computational times against the number of objects, where the numbers of attributes and categories are 12 and the number of clusters is 3. According to the figures, both algorithms are scalable, i.e., the computational times increase linearly with respect to the number of attributes, categories, clusters, or objects. The k-modes algorithm with the new dissimilarity measure requires more computational time than the original k-modes algorithm. This is an expected outcome, since the calculation of the new dissimilarity measure requires some additional arithmetic operations. However, according to the tests, the k-modes algorithm with the new dissimilarity measure is still scalable, i.e., it can cluster categorical objects efficiently.
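The evaluation measures AC, PE, and RE defined earlier can be computed directly from the per-cluster counts; the following sketch with hypothetical counts is ours:

```python
def clustering_scores(a, b, c, n):
    """AC, PE, RE from per-cluster counts: a_l correct, b_l incorrect,
    and c_l in the class but outside the cluster; n objects in total."""
    k = len(a)
    ac = sum(a) / n
    pe = sum(al / (al + bl) for al, bl in zip(a, b)) / k
    re = sum(al / (al + cl) for al, cl in zip(a, c)) / k
    return ac, pe, re

# Hypothetical outcome for k = 2 clusters over n = 20 objects:
ac, pe, re = clustering_scores(a=[8, 9], b=[2, 1], c=[1, 2], n=20)
print(round(ac, 2), round(pe, 2), round(re, 2))  # 0.85 0.85 0.85
```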

5 CONCLUSION

In this paper, we rigorously derived the updating formula of the k-modes clustering algorithm with the new dissimilarity measure and proved the convergence of the algorithm under the optimization framework. Experimental results show that the k-modes algorithm with the new dissimilarity measure is efficient and effective in clustering categorical data sets.

ACKNOWLEDGMENTS

This research was supported in part by RGC grant nos. 7046/03P, 7035/04P, and 7035/05P and by HKBU FRGs. Joshua Zhexue Huang's research was supported in part by NSFC grants 60473091 and 60475026.

REFERENCES

[1] B. Andreopoulos, A. An, and X. Wang, "Clustering the Internet Topology at Multiple Layers," WSEAS Trans. Information Science and Applications, vol. 2, no. 10, pp. 1625-1634, 2005.
[2] A. Chaturvedi, P. Green, and J. Carroll, "K-Modes Clustering," J. Classification, vol. 18, pp. 35-55, 2001.
[3] K.C. Gowda and E. Diday, "Symbolic Clustering Using a New Dissimilarity Measure," Pattern Recognition, vol. 24, no. 6, pp. 567-578, 1991.
[4] Z. He, S. Deng, and X. Xu, "Improving k-Modes Algorithm Considering Frequencies of Attribute Values in Mode," Proc. Int'l Conf. Computational Intelligence and Security, pp. 157-162, 2005.
[5] Z. Huang, "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining," Proc. SIGMOD Workshop Research Issues on Data Mining and Knowledge Discovery, pp. 1-8, 1997.
[6] Z. Huang, "Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values," Data Mining and Knowledge Discovery, vol. 2, no. 3, pp. 283-304, 1998.
[7] Z. Huang and M. Ng, "A Fuzzy k-Modes Algorithm for Clustering Categorical Data," IEEE Trans. Fuzzy Systems, vol. 7, no. 4, 1999.
[8] Z. Huang and M. Ng, "A Note on k-Modes Clustering," J. Classification, vol. 20, pp. 257-261, 2003.
[9] A.K. Jain and R.C. Dubes, Algorithms for Clustering Data. Prentice Hall, 1988.
[10] L. Kaufman and P.J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, 1990.
[11] V. Manganaro, S. Paratore, E. Alessi, S. Coffa, and S. Cavallaro, "Adding Semantics to Gene Expression Profiles: New Tools for Drug Discovery," Current Medicinal Chemistry, vol. 12, pp. 1149-1160, 2005.
[12] O. San, V. Huynh, and Y. Nakamori, "An Alternative Extension of the k-Means Algorithm for Clustering Categorical Data," Int'l J. Applied Math. and Computer Science, vol. 14, no. 2, pp. 241-247, 2004.
[13] UCI Machine Learning Repository, http://www.ics.uci.edu/mlearn/MLRepository.html, 2006.
