
Data Clustering* Yanchang Zhao†, Longbing Cao, Huaifeng Zhang, Chengqi Zhang University of Technology, Sydney, Australia PO Box 123, Broadway, NSW 2007, Australia {yczhao, lbcao, hfzhang, chengqi}@it.uts.edu.au

INTRODUCTION
Clustering is one of the most important techniques in data mining. This chapter presents a survey of popular approaches to data clustering, including well-known clustering techniques, such as partitioning clustering, hierarchical clustering, density-based clustering and grid-based clustering, and recent advances in clustering, such as subspace clustering, text clustering and data stream clustering. The major challenges and future trends of data clustering will also be discussed. The remainder of this chapter is organized as follows. The background of data clustering will be introduced in Section 2, including the definition of clustering, categories of clustering techniques, features of good clustering algorithms, and the validation of clustering. Section 3 will present the main approaches to clustering, which range from the classic partitioning and hierarchical clustering to recent approaches such as bi-clustering and semi-supervised clustering. Challenges and future trends will be discussed in Section 4, followed by the conclusions in the last section.

* This work was supported by the Australian Research Council (ARC) Linkage Project LP0775041 and Discovery Projects DP0449535, DP0667060 & DP0773412, and by the Early Career Researcher Grant from University of Technology, Sydney, Australia.
† Corresponding author.

BACKGROUND
Data clustering has its roots in pattern recognition (Theodoridis & Koutroumbas, 2006), machine learning (Alpaydin, 2004), statistics (Hill & Lewicki, 2007) and database technology (Date, 2003). Data clustering is to partition data into groups, where the data in the same group are similar to one another and the data from different groups are dissimilar (Han & Kamber, 2000). More specifically, it is to segment data into clusters so that the intra-cluster similarity is maximized and the inter-cluster similarity is minimized. The groups obtained form a partition of the data, which can be used for customer segmentation, document categorization, etc.
Clustering techniques can themselves be “clustered” into groups in multiple ways. In terms of the membership of objects, there are two kinds of clustering: fuzzy clustering and hard clustering. Fuzzy clustering is also known as soft clustering, where an object can be in more than one cluster, with different membership degrees. In contrast, an object in hard clustering can belong to one cluster only. Unless stated otherwise, clustering implicitly refers to hard clustering. In terms of approaches, data clustering techniques can be classified into the following groups: partitioning clustering, hierarchical clustering, density-based clustering, grid-based clustering and subspace clustering. In terms of the type of data, there are spatial data clustering, text clustering, multimedia clustering, time series clustering, data stream clustering and graph clustering.
A good clustering algorithm is supposed to have the following features: 1) the ability to detect clusters with various shapes and different distributions; 2) the capability of finding clusters with considerably different sizes; 3) the ability to work when outliers are present; 4) no or few parameters needed as input; and 5) scalability to both the size and the dimensionality of data.

How to evaluate the results is another important problem for clustering. For the validation of clustering results, there are many different measures, such as Compactness (Zait & Messatfa, 1997), Conditional Entropy (CE) and Normalized Mutual Information (NMI) (Strehl & Ghosh, 2002; Fern & Brodley, 2003). The validation measures can be classified into three categories: 1) internal validation, such as Compactness, Dunn’s validation index, the Silhouette index and Hubert’s correlation with the distance matrix, which is based on properties of the resulting clusters; 2) relative validation, such as Figure of Merit and Stability, which is based on comparisons of partitions; and 3) external validation, such as CE, NMI, Hubert’s correlation, Rand statistics, the Jaccard coefficient, and the Folkes and Mallows index, which is based on comparison with a known true partition of the data (Halkidi et al., 2001; Brun et al., 2007).
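As a rough illustration of external validation, the following sketch (assuming scikit-learn is available) compares a clustering result against known true labels; the label values themselves need not match, only the grouping structure:

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

true_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred_labels = [1, 1, 1, 0, 0, 2, 2, 2, 2]  # one object misplaced

# Both measures compare the predicted partition with the true one;
# 1.0 means a perfect match (up to relabeling of the clusters).
print(normalized_mutual_info_score(true_labels, pred_labels))
print(adjusted_rand_score(true_labels, pred_labels))
```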

DATA CLUSTERING TECHNIQUES
The popular clustering techniques will be briefly presented in this section. A more detailed introduction and comparison of various clustering techniques can be found in books on data mining and survey papers on clustering (Berkhin, 2002; Grabmeier & Rudolph, 2002; Han & Kamber, 2000; Jain, Murty, & Flynn, 1999; Kolatch, 2001; Xu & Wunsch, 2005; Zait & Messatfa, 1997).

Partitioning Clustering
The idea of partitioning clustering is to partition the data into k groups first and then try to improve the quality of clustering by moving objects from one group to another. A typical partitioning method is k-means (Alsabti, Ranka, & Singh, 1998; Macqueen, 1967), which randomly selects k objects as cluster centers, assigns the other objects to the nearest centers, and then improves the clustering by iteratively updating the cluster centers and reassigning the objects to the new centers. k-medoids is a variation of k-means in which the medoid (i.e., the object closest to the center of a cluster), instead of the centroid, is used to represent a cluster, and k-modes (Huang, 1998) extends k-means to categorical data. Some other partitioning methods are PAM and CLARA, proposed by Kaufman & Rousseeuw (1990), and CLARANS by Ng and Han (1994). The disadvantage of partitioning clustering is that the result depends on the selection of the initial cluster centers and may be a local optimum instead of a global one. A simple way to improve the chance of obtaining the global optimum is to run k-means multiple times with different initial centers and then choose the best clustering result as output. Another disadvantage of k-means is that it tends to produce sphere-shaped clusters of similar sizes. Moreover, how to choose a value for k remains a non-trivial question.
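To make the iterative procedure concrete, below is a minimal k-means sketch in plain NumPy (an illustrative implementation, not taken from the cited papers); running it several times with different seeds and keeping the result with the smallest within-cluster sum of squares implements the multiple-restart strategy mentioned above:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: random initial centers, then alternating
    assignment and center-update steps until the centers stabilize."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign every object to its nearest center.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each center to the mean of its members (keep it if empty).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```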

Hierarchical Clustering
With the hierarchical clustering approach, a hierarchical decomposition of data is built in either a bottom-up (agglomerative) or a top-down (divisive) way (see Figure 1). Generally a dendrogram is generated, and a user may choose to cut it at a certain level to get the clusters. With agglomerative clustering, every single object is taken as a cluster, and then the two nearest clusters are iteratively merged to build bigger clusters until the expected number of clusters is obtained or only one cluster is left. AGNES is a typical agglomerative clustering algorithm (Kaufman & Rousseeuw, 1990). Divisive clustering works in the opposite way: it puts all objects in a single cluster and then divides the cluster into smaller and smaller ones. An example of divisive clustering is DIANA (Kaufman & Rousseeuw, 1990). Some other popular hierarchical clustering algorithms are BIRCH (Zhang, Ramakrishnan, & Livny, 1996), CURE (Guha, Rastogi, & Shim, 1998), ROCK (Guha, Rastogi, & Shim, 1999) and Chameleon (Karypis, Han, & Kumar, 1999).
In hierarchical clustering, there are four different methods to measure the distance between clusters: centroid distance, average distance, single-link distance and complete-link distance. Centroid distance is the distance between the centroids of two clusters. Average distance is the average of the distances between every pair of objects from two clusters. Single-link distance, also known as minimum distance, is the distance between the two nearest objects from two clusters. Complete-link distance, also referred to as maximum distance, is the distance between the two objects, one from each cluster, that are farthest from each other.
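The four inter-cluster distances map directly onto the linkage methods in SciPy's hierarchical clustering module; a brief sketch (assuming SciPy is installed):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).random((20, 2))  # toy data

# "centroid", "average", "single" and "complete" correspond to the
# centroid, average, single-link and complete-link distances above.
for method in ("centroid", "average", "single", "complete"):
    Z = linkage(X, method=method)                    # bottom-up merge tree
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut dendrogram into 3 clusters
    print(method, labels)
```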

Figure 1. Hierarchical Clustering

Density-Based Clustering
The rationale of density-based clustering is that a cluster is composed of well-connected dense regions. DBSCAN is a typical density-based clustering algorithm, which works by expanding clusters to their dense neighborhoods (Ester, Kriegel, Sander, & Xu, 1996).

Although a user is not required to guess the number of clusters before clustering, he or she has to provide two other parameters to run DBSCAN: the radius of the neighborhood and the density threshold. AGRID (Zhao & Song, 2003) is an efficient density-based algorithm that uses a grid to reduce the complexity of distance computation and cluster merging. By partitioning the data space into cells, only neighboring cells are taken into account when computing density and merging clusters. Some other density-based clustering techniques are OPTICS (Ankerst, Breunig, Kriegel, & Sander, 1999) and DENCLUE (Hinneburg & Keim, 1998). The advantage of density-based clustering is that it can filter out noise and find clusters of arbitrary shapes (as long as they are composed of connected dense regions). However, most density-based approaches rely on indexing techniques for efficient neighborhood queries, such as the R-tree and R*-tree, which do not scale well to high-dimensional spaces.
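A short sketch of DBSCAN's two parameters in practice (assuming scikit-learn), on a dataset with non-convex clusters that centroid-based methods handle poorly:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaving half-moons: connected dense regions of arbitrary shape.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps is the neighborhood radius; min_samples is the density threshold.
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))  # the label -1 marks objects filtered out as noise
```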

Grid-Based Clustering
Grid-based clustering works by partitioning the data space into cells with a grid and then merging the cells to build clusters. Some grid-based methods, such as STING, WaveCluster and CLIQUE, use regular grids to partition the data space, while others employ adaptive or irregular grids, such as adaptive grids (Goil, Nagesh, & Choudhary, 1999) and optimal grids (Hinneburg & Keim, 1999).
STING (Wang, Yang, & Muntz, 1997) is a grid-based multi-resolution clustering technique in which the spatial area is divided into rectangular cells organized into a statistical information cell hierarchy. Statistical information, such as the count, mean, minimum, maximum, standard deviation and distribution, is stored for each cell. Thus, the statistical information of cells is captured, and clustering can be performed without recourse to the individual objects. WaveCluster (Sheikholeslami, Chatterjee, & Zhang, 1998) looks at the multi-dimensional data space from a signal processing perspective. The objects are taken as a d-dimensional signal; the high-frequency parts of the signal correspond to the boundaries of clusters, where the distribution of objects changes rapidly, and the low-frequency parts with high amplitude correspond to clusters, where data are concentrated. CLIQUE (Agrawal, Gehrke, Gunopulos, & Raghavan, 1998) works like APRIORI (Agrawal & Srikant, 1994), a famous algorithm for association mining. It partitions each dimension into intervals and computes the dense units in all dimensions; the dense units are then combined to generate dense units in higher dimensions.
The advantage of grid-based clustering is that the processing time is very fast, because it is independent of the number of objects and depends only on the number of cells. Its disadvantage is that the quality of clustering depends on the granularity of the cells and that the number of cells increases exponentially with the dimensionality of the data. Adaptive grids and optimal grids are designed to tackle these problems. MAFIA (Goil et al., 1999) uses adaptive grids to partition a dimension depending on the distribution of data in that dimension, instead of partitioning every dimension evenly. OptiGrid (Hinneburg & Keim, 1999) uses contracting projections of the data to determine the optimal cutting hyper-planes for data partitioning, where the grid is “arbitrary”, as compared with equidistant, axis-parallel grids.
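To illustrate why the cost of grid-based methods depends on the number of cells rather than the number of objects, here is a minimal sketch (illustrative only, not any of the cited algorithms) that maps objects to cells of a regular grid and keeps only the dense cells, which would then be merged with their dense neighbors to form clusters:

```python
import numpy as np
from collections import Counter

def cell_counts(X, n_bins=10):
    """Map each object to a grid cell and count objects per cell."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Bin index along every dimension, clipped into [0, n_bins - 1].
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    idx = np.clip(idx, 0, n_bins - 1)
    return Counter(map(tuple, idx))

X = np.random.default_rng(0).normal(size=(1000, 2))
counts = cell_counts(X)
dense = {c for c, n in counts.items() if n >= 20}  # density threshold
print(len(dense), "dense cells out of", len(counts))
```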

Model-Based Clustering
Model-based clustering assumes that the data are generated by a mixture of probability distributions, and it attempts to learn statistical probability models from the data, with each model representing one particular cluster (Zhong & Ghosh, 2003). The model type is often set to Gaussian or hidden Markov models (HMMs); the model structure is then determined by model selection techniques, and the parameters are estimated with maximum likelihood algorithms. Some examples of model-based clustering methods are DMBC (Distributed Model-Based Clustering) (Kriegel et al., 2005) and model-based clustering based on data summarization (bEMADS and gEMADS) (Jin et al., 2005). Another kind of model-based clustering comprises the neural network approaches, such as SOM (self-organizing feature maps) (Kohonen, 1988).
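A brief sketch of model selection for a Gaussian mixture (assuming scikit-learn): mixtures of increasing size are fitted, and the structure with the lowest Bayesian Information Criterion (BIC) is kept:

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

# Try mixture sizes 1..6 and keep the model with the lowest BIC;
# each fitted Gaussian component represents one cluster.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)),
    key=lambda m: m.bic(X),
)
print(best.n_components, round(best.bic(X), 1))
```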

Fuzzy Clustering
Generally speaking, an object is either a member of a cluster or not. Fuzzy clustering is an extension that allows an object to be a member of more than one cluster. For example, an object may be a member of three clusters with memberships of 0.5, 0.3 and 0.2, respectively. Fuzzy clustering is also known as soft clustering, as opposed to hard clustering, where the membership can only be 1 or 0. Two examples of fuzzy clustering are fuzzy k-means (Karayiannis, 1995) and the EM (Expectation Maximization) algorithm (Dempster, Laird, & Rubin, 1977). The EM algorithm generates a fuzzy clustering in two steps. The expectation (E) step calculates, for each object, the expected membership probability in each cluster, and then the maximization (M) step computes the distribution parameters of the clusters, i.e., maximizing the likelihood of the distributions given the data. The two steps are repeated to improve the clustering until the improvement (say, the increase in log-likelihood) becomes negligible. The EM algorithm may end in a local maximum instead of the global maximum. As with k-means, the chance of reaching the global maximum can be improved by running EM multiple times with different initial guesses and then choosing the best clustering.
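A compact fuzzy k-means (fuzzy c-means) sketch in NumPy, illustrative rather than the exact algorithm of the cited paper; the fuzzifier m > 1 controls how soft the memberships are:

```python
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, n_iter=100, seed=0):
    """Alternate between updating fractional memberships U and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)  # memberships sum to 1 per object
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return U, centers

X = np.vstack([np.random.default_rng(1).normal(c, 0.3, size=(50, 2)) for c in (0, 2, 4)])
U, centers = fuzzy_cmeans(X, k=3)
print(U[0])  # fractional memberships of the first object (they sum to 1)
```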

Subspace Clustering
In some real-world applications, the dimensionality of data is in the hundreds or even thousands, and more often than not, no meaningful clusters can be found in the full-dimensional space. Subspace clustering is therefore proposed for high-dimensional data, where two clusters can lie in two different subspaces and the subspaces can be of different dimensionalities. Well-known subspace clustering algorithms are CLIQUE (Agrawal et al., 1998), MAFIA (Goil et al., 1999), Random Projection (Fern & Brodley, 2003), Projected clustering (Aggarwal, Wolf, Yu, Procopiuc, & Park, 1999), Monte Carlo (Procopiuc, Jones, Agarwal, & Murali, 2002), and Projective clustering (E. K. K. Ng, Fu, & Wong, 2005). With most techniques for subspace clustering, a cluster is defined as an axis-parallel hyper-rectangle in a subspace.
The technique proposed by Fern and Brodley (2003) finds subspace clusters using random projections and cluster ensembles. The dataset is first projected into random subspaces, and then the EM algorithm is used to discover clusters in each projected dataset. The algorithm generates several groups of clusters in this way and then combines them into a similarity matrix, from which the final clusters are discovered with an agglomerative clustering method. MAFIA (Goil et al., 1999) is an efficient algorithm for subspace clustering using a density- and grid-based approach. It uses adaptive grids to partition a dimension depending on the distribution of data in that dimension. The bins and cells with a low density of data are pruned to reduce computation. The boundaries of the bins are not rigid, which improves the quality of clustering.
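A simplified sketch of the random-projection ensemble idea (assuming a recent scikit-learn); unlike Fern and Brodley's algorithm, which aggregates soft memberships, this version uses hard cluster labels to build the co-association matrix:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # toy high-dimensional data
n_runs, k, d_proj = 10, 3, 5

# Cluster several random projections and accumulate how often each
# pair of objects lands in the same cluster (a similarity matrix).
S = np.zeros((len(X), len(X)))
for _ in range(n_runs):
    P = rng.normal(size=(X.shape[1], d_proj))  # random projection matrix
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(X @ P)
    S += labels[:, None] == labels[None, :]
S /= n_runs

# Final clusters from the similarity matrix via agglomerative clustering;
# "precomputed" expects distances, hence 1 - S (named affinity= before sklearn 1.2).
final = AgglomerativeClustering(n_clusters=k, metric="precomputed",
                                linkage="average").fit_predict(1 - S)
```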

Bi-Clustering
Bi-clustering, also known as co-clustering or two-way clustering, is to group objects over a subset of attributes by performing simultaneous clustering of both rows and columns (Cheng & Church, 2000; Madeira & Oliveira, 2004). It is a kind of subspace clustering. Bi-clustering is widely used for clustering microarray data to analyze the activities of genes under many different conditions. Microarray data can be viewed as a matrix, where each row represents a gene, each column stands for a condition, and each entry gives the expression level of a gene under a condition. From microarray data, there are four major types of biclusters to discover: 1) biclusters with constant values, 2) biclusters with constant values on rows or columns, 3) biclusters with coherent values, and 4) biclusters with coherent evolutions (Madeira & Oliveira, 2004).
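As a rough illustration (assuming scikit-learn; this uses spectral co-clustering, one particular bi-clustering method, rather than the Cheng and Church algorithm cited above), the sketch below recovers planted constant-valued blocks in a matrix:

```python
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters

# A matrix with three planted blocks, e.g. genes (rows) x conditions (columns).
data, rows, cols = make_biclusters((60, 40), n_clusters=3, noise=5, random_state=0)

model = SpectralCoclustering(n_clusters=3, random_state=0).fit(data)
# A bicluster is defined jointly by a row label and a column label.
print(model.row_labels_[:10])
print(model.column_labels_[:10])
```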

Text Clustering
Text clustering, also referred to as document clustering, is to group documents based on the similarity of the terms used, and it is widely used for document categorization, information retrieval and web search engines (Beil, Ester, & Xu, 2002; Zhong & Ghosh, 2005). Most algorithms for text clustering are based on a vector space model, where each document is represented by a vector of term frequencies. At first, a bag of words is collected for each document by filtering tags, stemming and pruning. Then the similarity of two documents is measured by how many words they share in common, and the documents can be clustered into groups with one of the clustering algorithms introduced above. Some examples of text clustering algorithms are Suffix Tree Clustering (Zamir & Etzioni, 1998), FTC (Frequent Term-based Clustering) and HFTC (Hierarchical FTC) (Beil, Ester, & Xu, 2002).
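A minimal vector-space-model sketch (assuming scikit-learn): documents become TF-IDF weighted term vectors, which any of the earlier algorithms, here k-means, can then cluster:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "clustering groups similar documents together",
    "k-means partitions data into k clusters",
    "stock prices moved higher in early trading",
    "markets rallied as trading volumes rose",
]

# Vector space model: one weighted term-frequency vector per document.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two data-mining documents and the two finance ones should pair up
```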

Data Stream Clustering
While traditional clustering deals with static data, data stream clustering is to cluster data streams, where new data arrive continuously, such as click-streams, retail transactions and stock prices (Aggarwal, Han, Wang, & Yu, 2003; Guha, Meyerson, Mishra, Motwani, & O'Callaghan, 2003). In data stream clustering, the clusters are adjusted dynamically according to new data, and emerging clusters are also detected. New data can come in two ways: either as new records, such as transaction data, or as new dimensions, such as stock price data and other time series data. Either way, new data can change the clusters as time elapses. Aggarwal et al. (2003) proposed the concepts of a pyramidal time frame and micro-clustering to cluster evolving data streams. The statistical information of the data is stored as micro-clusters, and the micro-clusters are stored at snapshots in time following a pyramidal pattern. These snapshots are then used in an offline process to explore stream clustering over different time horizons.
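A far simpler one-pass sketch than the cited micro-clustering framework, but it shows the core constraints of stream clustering, constant memory and incremental updates; each arriving point nudges its nearest center:

```python
import numpy as np

class OnlineKMeans:
    """Sequential k-means: centers are maintained as running means,
    so clusters adjust as the stream evolves, in one pass."""
    def __init__(self, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=(k, dim))
        self.counts = np.zeros(k)

    def update(self, x):
        j = int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))
        self.counts[j] += 1
        # Running-mean update: the center moves 1/count toward the new point.
        self.centers[j] += (x - self.centers[j]) / self.counts[j]
        return j

model = OnlineKMeans(k=3, dim=2)
for x in np.random.default_rng(1).normal(size=(10000, 2)):  # the "stream"
    model.update(x)
print(model.counts)
```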

Semi-Supervised Clustering
Generally speaking, clustering is unsupervised learning. Sometimes, however, a small amount of knowledge is available to guide the clustering, and this kind of clustering is referred to as semi-supervised clustering. The available knowledge is normally not enough for supervised learning to classify the data. The knowledge can be either pairwise constraints, such as must-link and cannot-link constraints, or class labels for some objects. Some examples of semi-supervised clustering techniques are COP-COBWEB (Constraint-Partitioning COBWEB) (Wagstaff & Cardie, 2000), CCL (Constrained Complete-Link) (Klein et al., 2002), MPC-KMeans (Metric Pairwise Constrained KMeans) (Basu et al., 2003), semi-supervised clustering with user feedback (Cohn et al., 2003), and a probabilistic model for semi-supervised clustering (Basu et al., 2004).
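A small illustrative helper (hypothetical, in the spirit of constrained k-means variants rather than any specific algorithm cited above) showing how pairwise constraints can be enforced during cluster assignment:

```python
def violates(i, cluster, assignment, must_link, cannot_link):
    """Would assigning object i to `cluster` break a pairwise constraint,
    given the cluster assignments made so far?"""
    for a, b in must_link:
        if i in (a, b):
            other = b if i == a else a
            if other in assignment and assignment[other] != cluster:
                return True  # must-link partner is in another cluster
    for a, b in cannot_link:
        if i in (a, b):
            other = b if i == a else a
            if other in assignment and assignment[other] == cluster:
                return True  # cannot-link partner is in the same cluster
    return False

# Objects 1 and 2 already sit in cluster 0; object 0 must link with 1
# but cannot link with 2, so cluster 0 is forbidden for it.
assignment = {1: 0, 2: 0}
print(violates(0, 0, assignment, must_link=[(0, 1)], cannot_link=[(0, 2)]))  # True
```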

FUTURE TRENDS
Data mining is confronted with larger volumes of data, higher dimensionality, more complex data and new types of applications, and these are also challenges for clustering. More scalable algorithms are needed to cluster data of gigabytes or even terabytes and of hundreds or even thousands of dimensions. In addition to scalability, other problems introduced by high dimensionality are the meaningfulness of similarity, the definition of clusters and the meaning of clustering. Another challenge comes from new and more complex types of data, such as multimedia data, semi-structured/unstructured data and stream data. The visualization of clusters and the change/trend analysis of clusters are also directions of future research. More challenges will be brought by new applications of clustering, such as bioinformatics, astronomy and meteorology.

CONCLUSION
We have presented a survey of popular data clustering approaches, including both classic methods and recent advanced algorithms. The basic ideas of the approaches have been introduced and their characteristics analyzed. The techniques are designed for different applications and for different types of data, such as numerical data, categorical data, spatial data, text data and microarray data. The definitions of clusters in the algorithms are not always the same, and most of them favor certain types of clusters, such as sphere-shaped clusters, convex clusters and axis-parallel clusters. New definitions of clusters and novel techniques for clustering keep emerging as data mining is applied to new applications and new fields.

BIBLIOGRAPHY

Aggarwal, C. C., Han, J., Wang, J., & Yu, P. S. (2003). A framework for clustering evolving data streams. Paper presented at the 29th International Conference on Very Large Data Bases (VLDB), Berlin, Germany.
Aggarwal, C. C., Wolf, J. L., Yu, P. S., Procopiuc, C., & Park, J. S. (1999). Fast algorithms for projected clustering. Paper presented at SIGMOD '99: the 1999 ACM SIGMOD International Conference on Management of Data, New York, NY, USA.
Agrawal, R., Gehrke, J., Gunopulos, D., & Raghavan, P. (1998). Automatic subspace clustering of high dimensional data for data mining applications. Paper presented at SIGMOD '98: the 1998 ACM SIGMOD International Conference on Management of Data, New York, NY, USA.
Agrawal, R., & Srikant, R. (1994, September). Fast algorithms for mining association rules in large databases. Paper presented at the 20th International Conference on Very Large Data Bases (VLDB), Santiago, Chile.
Alpaydin, E. (2004). Introduction to Machine Learning. The MIT Press.
Alsabti, K., Ranka, S., & Singh, V. (1998). An efficient k-means clustering algorithm. Paper presented at the First Workshop on High Performance Data Mining, Orlando, Florida.
Ankerst, M., Breunig, M. M., Kriegel, H.-P., & Sander, J. (1999). OPTICS: Ordering points to identify the clustering structure. Paper presented at SIGMOD '99: the 1999 ACM SIGMOD International Conference on Management of Data, New York, NY, USA.

Basu, S., Bilenko, M., & Mooney, R. (2003). Comparing and unifying search-based and similarity-based approaches to semi-supervised clustering. Paper presented at ICML '03: the Twentieth International Conference on Machine Learning.
Basu, S., Bilenko, M., & Mooney, R. J. (2004). A probabilistic framework for semi-supervised clustering. Paper presented at KDD '04: the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 59-68.
Beil, F., Ester, M., & Xu, X. (2002). Frequent term-based text clustering. Paper presented at the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Berkhin, P. (2002). Survey of clustering data mining techniques. Accrue Software, San Jose, CA, USA. URL: http://citeseer.ist.psu.edu/berkhin02survey.html.
Brun, M., Sima, C., Hua, J., Lowey, J., Carroll, B., Suh, E., & Dougherty, E. R. (2007). Model-based evaluation of clustering validation measures. Pattern Recognition, 40, 807-824.
Cheng, Y., & Church, G. M. (2000). Biclustering of expression data. Paper presented at the Eighth International Conference on Intelligent Systems for Molecular Biology.
Cohn, D., Caruana, R., & McCallum, A. (2003). Semi-supervised clustering with user feedback. Technical Report TR2003-1892, Cornell University, USA.
Date, C. J. (2003). An Introduction to Database Systems. Addison-Wesley Longman Publishing Co., Inc.
Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1), 1-38.

Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. Paper presented at the Second International Conference on Knowledge Discovery and Data Mining (KDD'96).
Fern, X., & Brodley, C. (2003). Random projection for high dimensional data clustering: A cluster ensemble approach. Paper presented at the Twentieth International Conference on Machine Learning (ICML'03).
Goil, S., Nagesh, H., & Choudhary, A. (1999). MAFIA: Efficient and scalable subspace clustering for very large data sets. Technical Report CPDC-TR-9906-010, Northwestern University.
Grabmeier, J., & Rudolph, A. (2002). Techniques of cluster algorithms in data mining. Data Mining and Knowledge Discovery, 6(4), 303-360.
Guha, S., Meyerson, A., Mishra, N., Motwani, R., & O'Callaghan, L. (2003). Clustering data streams: Theory and practice. IEEE Transactions on Knowledge and Data Engineering, 15(3), 515-528.
Guha, S., Rastogi, R., & Shim, K. (1998). CURE: An efficient clustering algorithm for large databases. Paper presented at SIGMOD '98: the 1998 ACM SIGMOD International Conference on Management of Data, New York, NY, USA.
Guha, S., Rastogi, R., & Shim, K. (1999). ROCK: A robust clustering algorithm for categorical attributes. Paper presented at the 15th International Conference on Data Engineering, March 23-26, 1999, Sydney, Australia.
Halkidi, M., Batistakis, Y., & Vazirgiannis, M. (2001). On clustering validation techniques. Journal of Intelligent Information Systems, 17, 107-145.

Han, J., & Kamber, M. (2000). Data Mining: Concepts and Techniques. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Hill, T., & Lewicki, P. (2007). Statistics: Methods and Applications. Tulsa, OK: StatSoft.
Hinneburg, A., & Keim, D. A. (1998). An efficient approach to clustering in large multimedia databases with noise. Paper presented at the Fourth International Conference on Knowledge Discovery and Data Mining (KDD'98).
Hinneburg, A., & Keim, D. A. (1999). Optimal grid-clustering: Towards breaking the curse of dimensionality in high-dimensional clustering. Paper presented at VLDB '99: the 25th International Conference on Very Large Data Bases, September 7-10, 1999, Edinburgh, Scotland, UK.
Huang, Z. (1998). Extensions to the k-means algorithm for clustering large data sets with categorical values. Data Mining and Knowledge Discovery, 2(3), 283-304.
Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: A review. ACM Computing Surveys, 31(3), 264-323.
Jin, H., Wong, M., & Leung, K. (2005). Scalable model-based clustering for large databases based on data summarization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 1710-1719.
Karayiannis, N. B. (1995). Generalized fuzzy k-means algorithms and their application in image compression. Paper presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference.
Karypis, G., Han, E.-H., & Kumar, V. (1999). Chameleon: Hierarchical clustering using dynamic modeling. Computer, 32(8), 68-75.

Kaufman, L., & Rousseeuw, P. J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. Wiley Series in Probability and Mathematical Statistics. New York: Wiley.
Klein, D., Kamvar, S. D., & Manning, C. D. (2002). From instance-level constraints to space-level constraints: Making the most of prior knowledge in data clustering. Paper presented at ICML '02: the Nineteenth International Conference on Machine Learning, 307-314.
Kohonen, T. (1988). Self-organized formation of topologically correct feature maps. In Neurocomputing: Foundations of Research, MIT Press, 509-521.
Kolatch, E. (2001). Clustering algorithms for spatial databases: A survey. Unpublished manuscript, Department of Computer Science, University of Maryland, College Park. URL: http://citeseer.ist.psu.edu/kolatch01clustering.html.
Kriegel, H.-P., Kroger, P., Pryakhin, A., & Schubert, M. (2005). Effective and efficient distributed model-based clustering. Paper presented at ICDM '05: the Fifth IEEE International Conference on Data Mining, 258-265.
Macqueen, J. B. (1967). Some methods of classification and analysis of multivariate observations. Paper presented at the Fifth Berkeley Symposium on Mathematical Statistics and Probability.
Madeira, S. C., & Oliveira, A. L. (2004). Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 1(1), 24-45.

Ng, E. K. K., Fu, A. W.-C., & Wong, R. C.-W. (2005). Projective clustering by histograms. IEEE Transactions on Knowledge and Data Engineering, 17(3), 369-383.
Ng, R. T., & Han, J. (1994). Efficient and effective clustering methods for spatial data mining. Paper presented at VLDB '94: the 20th International Conference on Very Large Data Bases, San Francisco, CA, USA.
Procopiuc, C. M., Jones, M., Agarwal, P. K., & Murali, T. M. (2002). A Monte Carlo algorithm for fast projective clustering. Paper presented at SIGMOD '02: the 2002 ACM SIGMOD International Conference on Management of Data, New York, NY, USA.
Sheikholeslami, G., Chatterjee, S., & Zhang, A. (1998). WaveCluster: A multi-resolution clustering approach for very large spatial databases. Paper presented at VLDB '98: the 24th International Conference on Very Large Data Bases, San Francisco, CA, USA.
Strehl, A., & Ghosh, J. (2002). Cluster ensembles - a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3, 583-617.
Theodoridis, S., & Koutroumbas, K. (2006). Pattern Recognition (Third Edition). Academic Press, Inc.
Wagstaff, K., & Cardie, C. (2000). Clustering with instance-level constraints. Paper presented at ICML '00: the Seventeenth International Conference on Machine Learning, 1103-1110.
Wang, W., Yang, J., & Muntz, R. R. (1997). STING: A statistical information grid approach to spatial data mining. Paper presented at VLDB '97: the 23rd International Conference on Very Large Data Bases, August 25-29, 1997, Athens, Greece.
Xu, R., & Wunsch, D., II. (2005). Survey of clustering algorithms. IEEE Transactions on Neural Networks, 16(3), 645-678.
Zait, M., & Messatfa, H. (1997). A comparative study of clustering methods. Future Generation Computer Systems, 13(2-3), 149-159.
Zamir, O., & Etzioni, O. (1998). Web document clustering: A feasibility demonstration. Paper presented at SIGIR '98: the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 46-54.
Zhang, T., Ramakrishnan, R., & Livny, M. (1996). BIRCH: An efficient data clustering method for very large databases. Paper presented at SIGMOD '96: the 1996 ACM SIGMOD International Conference on Management of Data, New York, NY, USA.
Zhao, Y., & Song, J. (2003). AGRID: An efficient algorithm for clustering large high-dimensional datasets. Paper presented at PAKDD '03: the 7th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Seoul, Korea.
Zhong, S., & Ghosh, J. (2003). A unified framework for model-based clustering. Journal of Machine Learning Research, 4, 1001-1037.
Zhong, S., & Ghosh, J. (2005). Generative model-based document clustering: A comparative study. Knowledge and Information Systems, 8(3), 374-384.

KEY TERMS

Bi-clustering – Also known as co-clustering, it is to group objects for a subset of attributes by performing simultaneous clustering of both rows and columns.

Data Clustering – Data clustering is to partition data into groups, where the data in the same group are similar to one another and the data from different groups are different from one another.

Data Stream Clustering – It is to group continuously arriving data, instead of static data, into clusters based on similarity.

Density-Based Clustering – Density-based clustering takes densely populated regions as clusters, while objects in sparse areas are removed as noise.

Fuzzy Clustering – Also known as Soft Clustering. For fuzzy clustering, an object can be classified with fractional membership into multiple groups, in contrast to Hard Clustering where an object can be classified into one group only.

Grid-Based Clustering – It is to partition the whole data space into cells with a grid and then merge the cells to build clusters.

Hierarchical Clustering – It is to build a hierarchical decomposition of data in either bottom-up or top-down way. Generally a dendrogram is generated and a user may select to cut it at a certain level to get the clusters.

Model-Based clustering – Model-based clustering assumes that the data are generated by a mixture of probability distributions, and attempts to learn statistical probability models from data, with each model representing one particular cluster.

Partitioning Clustering – It is a clustering approach which uses centers to represent clusters and then improves the partitioning by moving objects from group to group.

Semi-Supervised Clustering – Semi-supervised clustering is a partly supervised clustering which is guided with a small amount of knowledge, such as pairwise constraints and class labels for some objects.

Subspace Clustering – It is to find clusters in subspaces, where two clusters may exist in two different subspaces and the subspaces may also have different dimensionalities.

Text Clustering – It is to group documents based on the similarity in their topics and text.
