Dictionary Learning Based on Laplacian Score in Sparse Coding Jin Xu and Hong Man Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030 USA {jxu4,Hong.Man}@stevens.edu

Abstract. Sparse coding, which produces a vector representation based on a sparse linear combination of dictionary atoms, has been widely applied in signal processing, data mining and neuroscience. Constructing a proper dictionary for sparse coding is a common challenging problem. In this paper, we treat dictionary learning as an unsupervised learning process, and propose a Laplacian score dictionary (LSD). This new learning method uses local geometry information to select atoms for the dictionary. Comparisons with alternative clustering based dictionary learning methods are conducted. We also compare LSD with the full-training-data-dictionary and other classic methods in the experiments. The classification performances on binary-class and multi-class datasets from the UCI repository demonstrate the effectiveness and efficiency of our method.

Keywords: Sparse coding, Unsupervised learning, Clustering, Dictionary learning, Laplacian score.

1 Introduction

Sparse coding (sparse representation) has been studied extensively in recent years. In sparse coding, a signal vector y is represented by a linear combination of a minimum set of atoms from a pre-defined dictionary D. There are many applications of sparse coding, such as classification [1], image denoising [2] and online learning [3]. In sparse coding, computational complexity is a challenging problem, and there are three ways to reduce the computation. (i) A proper dictionary [4]. The dictionary represents the key information about the data, and the size of the dictionary can affect the computation significantly. How to construct an ideal dictionary is the focus of this paper. (ii) Dimension reduction [5]. This approach attempts to remove the redundant features of the data before sparse coding. (iii) Algorithm optimization [6]. This approach uses different optimization methods to speed up the computation of sparse representations. Dictionary learning for sparse coding is an active research topic. At first, pre-constructed dictionaries were chosen, such as steerable wavelets, curvelets, the DCT matrix, and more. Then tunable selections of candidate atoms for the dictionary were applied, such as wavelet packets and bandelets.


Recently, more research has focused on learning the dictionary atoms directly from data examples. Unsupervised learning methods [7] have also shown success in dictionary learning for sparse coding. The Laplacian score (LS) [8] was proposed for unsupervised feature selection: it evaluates the locality preserving ability of each feature and ranks the features accordingly. LS has been successfully applied in face recognition [8] and semi-supervised learning [9]. In this work, we propose to use LS to evaluate the locality preserving ability of candidate atoms drawn from the data examples, and the higher ranked candidates are selected as new atoms of the dictionary for sparse coding. This method is compared with classic clustering methods such as the self-organizing map (SOM) and neural gas (NGAS) in dictionary learning to show its effectiveness. The contributions of this work are as follows:
– An unsupervised learning method based on LS for dictionary learning in sparse representation is presented. While most LS applications have concentrated on feature selection, we are the first to propose atom selection based on the LS criterion.
– The proposed dictionaries are applied to both binary-class and multi-class UCI datasets, and both reconstruction and classification results are reported.
– Competitive experimental results on UCI datasets [10] demonstrate the capabilities of the proposed dictionary learning method.
The rest of the paper is organized as follows: Section 2 presents related work and sparse representation based classification. Section 3 presents the Laplacian score (LS) criterion for dictionary learning. Section 4 presents the experimental results on the UCI datasets. Finally, Section 5 concludes the paper and discusses future work.

2 Related Work

Recently, sparse representation based classification (SRC) [11] has been proposed and successfully applied to image classification. SRC utilizes category-based reconstruction error to classify test data, so the performance of SRC can be used to evaluate the effectiveness of a dictionary in sparse coding. In sparse coding, a dictionary containing a set of n atoms (data vectors) is defined as D = [a_1^1, \cdots, a_{n_1}^1, \cdots, a_1^c, \cdots, a_{n_c}^c], where D \in \mathbb{R}^{m \times n}, c is the number of classes, and n_i is the number of atoms associated with class i. The intention of sparse representation is to code a new test vector y in the form

y = Dx \in \mathbb{R}^m    (1)

where x = [0, \cdots, 0, \alpha_{i,1}, \alpha_{i,2}, \cdots, \alpha_{i,n_i}, 0, \cdots, 0]^T \in \mathbb{R}^n. x should be sparse, with the least number of nonzero elements. Normally, the \ell_1-regularized least squares method [12] is used to solve this problem:

\hat{x} = \arg\min_x \{\|y - Dx\|_2^2 + \lambda\|x\|_1\}    (2)


SRC utilizes the sparse representation residuals to determine the target class [11]. For each class i, a function \delta_i : \mathbb{R}^n \rightarrow \mathbb{R}^n is defined, which keeps only the coefficients associated with the i-th class. The classification process is then

label(y) = \arg\min_i r_i(y), \quad r_i(y) = \|y - D\delta_i(\hat{x})\|_2    (3)

Algorithm 1. Sparse Representation based Classification
1: Input: a dictionary D \in \mathbb{R}^{m \times n} with c classes, a test vector y \in \mathbb{R}^m
2: Solve the \ell_1-regularized least squares problem: \hat{x} = \arg\min_x \{\|y - Dx\|_2^2 + \lambda\|x\|_1\}
3: Compute the residuals: r_i(y) = \|y - D\delta_i(\hat{x})\|_2 for i = 1, \cdots, c
4: Output: label(y) = \arg\min_i r_i(y)
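As a concrete illustration of Algorithm 1, the following Python sketch solves the \ell_1-regularized problem of Eq. (2) with scikit-learn's Lasso solver and then computes the class-wise residuals of Eq. (3). The function name, the atom-label array, and the rescaling of \lambda to Lasso's alpha are assumptions of this sketch, not part of the paper (which uses the interior-point solver of [12]).

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, atom_labels, y, lam=0.01):
    """Sparse representation based classification (Algorithm 1).

    D           : (m, n) dictionary, one column per atom
    atom_labels : (n,) class label of each atom
    y           : (m,) test vector
    lam         : sparsity weight (lambda in Eq. (2))
    """
    atom_labels = np.asarray(atom_labels)
    m = D.shape[0]
    # Step 2: solve min_x ||y - Dx||_2^2 + lambda*||x||_1.
    # sklearn's Lasso minimizes (1/(2m))||y - Dx||^2 + alpha*||x||_1,
    # so alpha is rescaled accordingly (an implementation detail, not from the paper).
    lasso = Lasso(alpha=lam / (2 * m), fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)
    x_hat = lasso.coef_

    # Steps 3-4: class-wise residuals r_i(y) = ||y - D delta_i(x_hat)||_2.
    residuals = {}
    for c in np.unique(atom_labels):
        x_c = np.where(atom_labels == c, x_hat, 0.0)   # delta_i keeps class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get), residuals

# Tiny usage example with random data (shapes only, not real UCI data).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 30))
labels = np.repeat(np.arange(3), 10)          # 3 classes, 10 atoms each
y = D[:, 5] + 0.01 * rng.standard_normal(20)  # close to a class-0 atom
print(src_classify(D, labels, y)[0])
```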

The SRC process is summarized in Algorithm 1. A case study of SRC is shown in Fig. 1, using the “Libras Movement” dataset from the UCI repository. A dictionary with 65 atoms from 13 classes is trained using the Laplacian score criterion. The dictionary is then applied to a test vector to obtain its sparse coefficient vector over the atoms. The left panel of the figure shows the sparse coding coefficients, and the right panel shows the reconstruction residual for each class. The residual for class 6 is clearly the smallest, so the SRC algorithm classifies the test vector into class 6. Note that the “Libras Movement” dataset has 15 classes in total, but only data from 13 classes were chosen for this case study. How to leverage data from all categories in the dictionary is a challenging problem in dictionary learning.


Fig. 1. Classification result for the “Libras Movement” data: sparse coding coefficients over the dictionary atoms (left) and reconstruction residuals per class (right)


In dictionary learning, a specific criterion is optimized together with sparse coding. Dictionary updating and sparse coding are performed iteratively until some stopping threshold is reached. Although this has proven effective in many applications, the iterative process has a heavy computational cost. The following methods have been introduced to address this issue. The method of optimal directions (MOD) is a frame design algorithm [13]. In MOD, a dictionary is first initialized, then the sparse coefficients of a signal are calculated from the dictionary, and the coefficients and the original signal are used to update the dictionary. The stopping criterion is based on the least squares error. K-SVD [4] differs from MOD in that the atoms in the dictionary are updated sequentially. It is related to the k-means method, as it updates each atom based on its associated examples: first the examples assigned to an atom are found, then the residuals for the chosen examples are calculated, and finally a singular value decomposition of the residuals is used to update the dictionary atom. The efficient sparse coding algorithm [14] is based on iteratively solving two least squares optimization problems, one \ell_1-norm regularized and one \ell_2-norm constrained. The problem can be written as

\min_{\alpha, D} \frac{1}{2\sigma^2}\|D\alpha - X\|^2 + \lambda\|\alpha\|_1 \quad \text{subject to} \quad \sum_i D_{i,j}^2 \le c, \;\; \forall j    (4)

In the learning process, this method optimizes the dictionary D or the sparse coefficients \alpha while the other is fixed. It is a kind of acceleration of the sparse coding process, which can be applied to large databases.
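A simplified Python sketch of this alternating optimization is given below: with D fixed, the coefficients \alpha are obtained by an \ell_1-regularized (Lasso) step, and with \alpha fixed, D is updated by least squares and its columns renormalized. This is only an illustration of the alternation, under assumed parameter names; it does not reproduce the feature-sign search and Lagrange dual techniques of [14].

```python
import numpy as np
from sklearn.linear_model import Lasso

def alternating_dictionary_learning(X, n_atoms, lam=0.1, n_iter=20, seed=0):
    """Toy alternating minimization for min ||D A - X||^2 + lam*||A||_1
    with (approximately) unit-norm dictionary columns.
    X is (m, N): one column per training example."""
    rng = np.random.default_rng(seed)
    m, N = X.shape
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)

    for _ in range(n_iter):
        # Sparse coding step: D fixed, solve for the coefficients A column by column.
        lasso = Lasso(alpha=lam / (2 * m), fit_intercept=False, max_iter=5000)
        A = np.column_stack([lasso.fit(D, X[:, j]).coef_ for j in range(N)])

        # Dictionary update step: A fixed, least-squares update of D, then renormalize
        # columns as a simple stand-in for the l2 constraint in Eq. (4).
        D = X @ np.linalg.pinv(A)
        norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        D /= norms
    return D, A

# Usage on synthetic data.
X = np.random.default_rng(1).standard_normal((20, 100))
D, A = alternating_dictionary_learning(X, n_atoms=30)
print(D.shape, A.shape)  # (20, 30) (30, 100)
```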

Supervised dictionary learning [15] tries to incorporate category information into the sparse coding process. The formulation can be expressed as

\min_{\alpha, \theta, D} \left( \sum_i C\big(y_i f(x_i, \alpha_i, \theta)\big) + \lambda_0\|D\alpha - X\|^2 + \lambda_1\|\alpha\|_1 \right) + \lambda_2\|\theta\|_2^2    (5)

where C is a loss function similar to that of an SVM, \theta denotes the classifier parameters, and \lambda_0, \lambda_1, \lambda_2 are regularization parameters. The loss function utilizes the label information in the optimization process.

3 Laplacian Score for Dictionary Learning

The Laplacian score evaluates local geometrical structures without using label information. The method is based on Laplacian Eigenmaps and Locality Preserving Projections. Given a data set X = [x_1, x_2, \cdots, x_n], where X \in \mathbb{R}^{m \times n}, the feature vectors of the data set are F = \{f_1, f_2, \cdots, f_m\}. Let S_r denote the Laplacian score of the r-th sample x_r, r = 1, \cdots, n. The Laplacian score of each sample is computed as follows:
1. A nearest-neighbor graph G is constructed over the feature vectors f_i, i = 1, \cdots, m. More specifically, if a feature vector f_i is among the k nearest neighbors of f_j, then f_i and f_j are connected in G.


2. The weight matrix S of the graph G is defined as

S_{ij} = e^{-\frac{\|f_i - f_j\|^2}{t}}

when f_i and f_j are connected, and S_{ij} = 0 otherwise, where t is a suitable constant.
3. Then S_r for each sample can be calculated as

S_r = \frac{\tilde{x}_r^T L \tilde{x}_r}{\tilde{x}_r^T D \tilde{x}_r}    (6)

where D = \mathrm{diag}(S\mathbf{1}), \mathbf{1} = [1, \cdots, 1]^T, L = D - S, and \tilde{x}_r is calculated via

\tilde{x}_r = x_r - \frac{x_r^T D \mathbf{1}}{\mathbf{1}^T D \mathbf{1}} \mathbf{1}    (7)

The resulting scores S_r are used to choose atoms for the dictionary, which aims to effectively utilize the graph structure information. SOM and NGAS are classic clustering methods in unsupervised learning. Both can preserve the topological properties of the training data, which is the inspiration of our work. The centroids trained by SOM and NGAS are used as dictionary atoms for sparse coding; related work [16] has shown applications of NGAS in sparse coding. It is important to mention that class labels are needed for the atoms. In this work, the atoms trained by SOM and NGAS are labeled by 5-nearest-neighbor voting over the training data.
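The following Python sketch illustrates this atom selection. It builds the k-nearest-neighbor graph over the feature vectors, computes the per-sample scores of Eqs. (6)-(7), and keeps the best-ranked training samples (together with their labels) as dictionary atoms. Parameter names are illustrative, and sorting by ascending score follows the feature-selection convention of [8], since the paper only states that higher-ranked candidates are kept.

```python
import numpy as np

def laplacian_scores(X, k=5, t=1.0):
    """Per-sample Laplacian scores following Eqs. (6)-(7).

    X : (m, n) data matrix, one sample per column, so each of the m rows
        is a feature vector f_i across the n samples.
    Returns an array of n scores, one per sample/column.
    """
    m, n = X.shape
    F = X                                              # feature vectors are the rows of X
    # Step 1: k-nearest-neighbor graph over the m feature vectors.
    sq = np.square(F).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * F @ F.T     # (m, m) squared distances
    np.maximum(d2, 0.0, out=d2)                        # guard against tiny negatives
    S = np.zeros((m, m))
    for i in range(m):
        nn = np.argsort(d2[i])[1:k + 1]                # skip self
        S[i, nn] = np.exp(-d2[i, nn] / t)              # heat-kernel weights S_ij
    S = np.maximum(S, S.T)                             # symmetrize connectivity
    D = np.diag(S.sum(axis=1))                         # D = diag(S 1)
    L = D - S
    one = np.ones(m)

    scores = np.empty(n)
    for r in range(n):
        x = X[:, r]
        x_t = x - (x @ D @ one) / (one @ D @ one) * one          # Eq. (7)
        scores[r] = (x_t @ L @ x_t) / (x_t @ D @ x_t + 1e-12)    # Eq. (6)
    return scores

def build_lsd(X, labels, n_atoms, k=5, t=1.0):
    """Select n_atoms training samples (and their labels) as dictionary atoms.
    Ascending order (smallest scores first) follows the convention of [8]."""
    order = np.argsort(laplacian_scores(X, k=k, t=t))
    keep = order[:n_atoms]
    return X[:, keep], np.asarray(labels)[keep]

# Usage with random stand-in data (not the UCI sets used in the paper).
X = np.random.default_rng(0).standard_normal((10, 200))
y = np.random.default_rng(1).integers(0, 3, size=200)
D_atoms, atom_labels = build_lsd(X, y, n_atoms=40)
```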

4 Experiments

In this section, we present our experiments on the classification of different UCI datasets, especially multi-category datasets, which are more challenging in applications. The experiments are designed to show the effectiveness of the proposed dictionary method. The results are reported as SRC classification accuracy and data reconstruction errors.

4.1 Experiment Datasets

Six UCI datasets are chosen for the experiments; the details are shown in Table 1. The “Car Evaluation” and “Tic-Tac-Toe” data are binary datasets, the “Contraceptive Method Choice” data has three classes, and the rest are typical multi-class datasets frequently used [17] in data mining research. The number of features and the data size are also shown in Table 1.

4.2 Experiment Setup

In the experiments, 5-fold cross-validation is applied to each dataset for the comparison of the different learning models. The training data samples together with the LS, SOM and NGAS methods are used to construct the dictionaries. We select different dictionary sizes to show the performance; the sizes range from 10% to 50% of the training data size. The SRC classifier, according to Algorithm 1, is then applied to the testing data to evaluate the performance of the different dictionaries.


Table 1. UCI Experiment Data Sets

Name                          Feature number   Total size   Class number
Car Evaluation                       6             1728            2
Tic-Tac-Toe                          9              958            2
Contraceptive Method Choice          9             1473            3
Glass                               10              214            7
Image Segmentation                  19             2310            7
Libras Movement                     90              360           15

In order to get comprehensive results, three references are introduced: LibSVM, a k-nearest neighbors classifier and the full-training-data-dictionary. It is important to point out that all three references are based on the entire training data, so the proposed dictionary learning method is not expected to reach the same performance as these references. For simplicity, the dictionaries trained with LS, SOM and NGAS are abbreviated as LSD, SOMD and NGASD. The LibSVM classifier is denoted as SVM. The k-nearest neighbors classifier is denoted as KNN5, as it is evaluated with 5 neighbors in our experiments. The full-training-data-dictionary is abbreviated as ALLIND. In sparse coding, the reconstruction error for each test vector is expressed as

error_y = \sqrt{\|y - D\hat{x}\|^2}    (8)

and it indicates the quality of the dictionary. In our experiments, the average reconstruction errors for LSD, SOMD, NGASD and ALLIND are reported for different dictionary sizes. The sparse coding tool is the Stanford implementation [12]. The LibSVM classifier is from the Support Vector Machines Library [18]; when a dataset is multi-category, the “1-v-r” method [19] is utilized with LibSVM to transform the multi-class problem into binary problems. The SOM and NGAS training methods are from the SOM Toolbox [20].
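As an illustration of how a dictionary is evaluated by Eq. (8), the sketch below codes each test vector with the Lasso solver used earlier and averages the residual norms, sweeping dictionary sizes from 10% to 50% as described above. It assumes the build_lsd helper from the Section 3 sketch is in scope; function and parameter names are illustrative, and this is not the interior-point solver [12] used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def avg_reconstruction_error(D, Y_test, lam=0.01):
    """Average of error_y = ||y - D x_hat||_2 (Eq. (8)) over the columns of Y_test."""
    m = D.shape[0]
    lasso = Lasso(alpha=lam / (2 * m), fit_intercept=False, max_iter=10000)
    errs = [np.linalg.norm(Y_test[:, j] - D @ lasso.fit(D, Y_test[:, j]).coef_)
            for j in range(Y_test.shape[1])]
    return float(np.mean(errs))

def sweep_dictionary_sizes(X_train, y_train, X_test, rates=(0.1, 0.2, 0.3, 0.4, 0.5)):
    """Average reconstruction error of the LSD dictionary at each size rate."""
    results = {}
    for rate in rates:
        n_atoms = max(1, int(rate * X_train.shape[1]))
        D, _ = build_lsd(X_train, y_train, n_atoms)   # LSD construction from the earlier sketch
        results[rate] = avg_reconstruction_error(D, X_test)
    return results
```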

4.3 Experiment Results

The comparisons of the different models on the different datasets are presented in this section. Fig. 2 shows the results on the “Car Evaluation” data. From the classification results, we can observe that LSD has higher classification accuracy than SOMD and NGASD. The accuracy of LSD increases with the dictionary size: LSD reaches the same level as SVM when the dictionary size is 20% of the training data, and it has almost the same accuracy as ALLIND when the dictionary size reaches 50% of the training data. In the right figure, the reconstruction error of LSD decreases as more atoms are added to the dictionary. The performance on the “Tic-Tac-Toe” data is shown in Fig. 3. In the left figure, LSD performs better than SVM, and it surpasses the ALLIND performance when the dictionary size is larger than 40% of the training data.


Fig. 2. Left: Classification comparison with data “Car Evaluation”. Right: Reconstruction error with data “Car Evaluation”


Fig. 3. Left: Classification comparison with data “Tic-Tac-Toe”. Right: Reconstruction error with data “Tic-Tac-Toe”


Fig. 4. Left: Classification comparison with data “Contraceptive Method Choice”. Right: Reconstruction error with data “Contraceptive Method Choice”


Fig. 5. Left: Classification comparison with data “Glass”. Right: Reconstruction error with data “Glass”


The SOMD performance is competitive with SVM. In the reconstruction performance, NGASD has a lower error than ALLIND, which is an interesting finding. Fig. 4 shows the performance on the “Contraceptive Method Choice” data, which has 3 classes. This dataset is generally difficult for classification, and the best accuracy, around 52%, is reached by SVM. LSD ranks second among all the models; it surpasses KNN5 and ALLIND when the dictionary size is larger than 30% of the training data. Meanwhile, LSD has a smaller reconstruction error than SOMD and NGASD. Fig. 5 shows the results on the “Glass” data. In the left figure, ALLIND has the highest accuracy, and LSD performs better than KNN5 and SVM. The performance of SOMD is also competitive with KNN5. It seems that the SRC method has an advantage over the classic classifiers on this dataset. In reconstruction error, LSD reaches a stable level when the dictionary size is larger than 20% of the training data. The results for the “Image Segmentation” data are shown in Fig. 6. The performance of KNN5 is almost the same as ALLIND. With increasing dictionary size, the performance of LSD improves, and LSD reaches the level of SVM when the dictionary size is 40% of the training data. In the right figure, the reconstruction error of LSD continuously decreases as more atoms are added to the dictionary. Fig. 7 shows the performance on the “Libras Movement” data. ALLIND has the best performance, and LSD is comparable with SVM when the dictionary size is 40% of the training data. In reconstruction error, LSD shows a decreasing trend, but NGASD has a lower error.
From the results of the different figures, we observe that the LSD classification performance tends to converge when the dictionary size is in the range of 40%-50% of the training data. We show the average accuracy results over the different datasets in Table 2. The average accuracies of SVM, KNN5 and ALLIND are 72.63%, 77.31% and 78.78%, respectively. It is interesting to see that the accuracy of LSD with 40% of the training data can be better than that of SVM with 100% of the training data. Overall, from the classification results, LSD is relatively better than SOMD and NGASD. The reason may be the atom labels in SOMD and NGASD, which are assigned after training, whereas in LSD the atom labels come directly from the selected training samples and therefore carry unbiased information. In the reconstruction error results, LSD has a competitively smaller error than SOMD and NGASD. It seems that unsupervised selection is more robust and meaningful than unsupervised transformation for dictionary learning.

Table 2. Average accuracy [%] at different dictionary sizes

Dictionary size (rate)   10%     20%     30%     40%     50%
SOMD                     44.75   51.07   52.59   52.69   53.98
NGASD                    43.28   42.65   39.55   38.40   44.89
LSD                      56.78   68.10   70.30   73.85   75.85



Fig. 6. Left: Classification comparison with data “Image Segmentation”. Right: Reconstruction error with data “Image Segmentation”


Fig. 7. Left: Classification comparison with data “Libras Movement”. Right: Reconstruction error with data “Libras Movement”

5 Conclusion

We have presented a novel dictionary learning method which incorporates local geometry information. It is an unsupervised learning method that searches for the best atoms within the training data. The experiments on the UCI datasets provide a comprehensive comparison of different unsupervised dictionary learning models. The proposed LSD has shown effectiveness in both SRC classification and reconstruction. LSD based on partial training data shows competitive performance compared with SVM based on the full training data. More experiments and theoretical analysis are needed to assess the proposed dictionary in real applications. Beyond this, there are some interesting directions for future investigation: (i) exploration of the connection between the category data distribution and the data geometry information in dictionary learning; (ii) exploration of the relation between unsupervised selection and unsupervised transformation in dictionary learning; (iii) exploration of the optimal dictionary size for dictionary learning.

References
1. Yang, J., Yu, K., Gong, Y., Huang, T.: Linear spatial pyramid matching using sparse coding for image classification. In: IEEE Conference on CVPR, pp. 1794–1801 (2009)
2. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 15(12), 3736–3745 (2006)
3. Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online dictionary learning for sparse coding. In: International Conference on Machine Learning (2009)
4. Aharon, M., Elad, M., Bruckstein, A.M.: K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing 54(11), 4311–4322 (2006)
5. Zhang, L., Yang, M., Feng, Z., Zhang, D.: On the dimensionality reduction for sparse representation based face recognition. In: 20th International Conference on Pattern Recognition (ICPR), pp. 1237–1240 (2010)
6. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: International Conference on Machine Learning, Haifa, Israel, pp. 399–406 (2010)
7. Sprechmann, P., Sapiro, G.: Dictionary learning and sparse coding for unsupervised clustering. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2010)
8. He, X., Cai, D., Niyogi, P.: Laplacian score for feature selection. In: Neural Information Processing Systems (2005)
9. Zhao, J., Lu, K., He, X.: Locality sensitive semi-supervised feature selection. Neurocomputing 71, 1842–1849 (2008)
10. Frank, A., Asuncion, A.: UCI machine learning repository (2010), http://archive.ics.uci.edu/ml
11. Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2), 210–227 (2009)


12. Kim, S., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D.: An interior-point method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing 1(4), 606–617 (2007)
13. Engan, K., Aase, S.O., Husøy, J.H.: Multi-frame compression: theory and design. Signal Processing 80, 2121–2140 (2000)
14. Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. Advances in Neural Information Processing Systems 19, 801–808 (2007)
15. Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A.: Supervised dictionary learning. In: Advances in Neural Information Processing Systems (NIPS) (2008)
16. Labusch, K., Barth, E., Martinetz, T.: Sparse coding neural gas: Learning of overcomplete data representations. Neurocomputing 72, 1547–1555 (2009)
17. Athitsos, V., Sclaroff, S.: Boosting nearest neighbor classifiers for multiclass recognition. In: IEEE CVPR Workshops (2005)
18. Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines, Software (2001), http://www.csie.ntu.edu.tw/~cjlin/libsvm
19. Lee, Y., Lin, Y., Wahba, G.: Multicategory support vector machines: theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99, 67–81 (2004)
20. Kohonen, T.: SOM Toolbox, Software (2005), http://www.cis.hut.fi/projects/somtoolbox/
