Discriminative Dictionary Learning with Low-Rank Regularization for Face Recognition

Liangyue Li, Sheng Li, and Yun Fu
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
{li.liangy, li.sheng1}@husky.neu.edu, [email protected]

Abstract— We consider learning a discriminative dictionary for sparse representation, focusing on the face recognition application. This paper presents an algorithm that learns a discriminative dictionary under a low-rank regularization on the dictionary. To make the dictionary discriminative, we apply the Fisher discriminant function to the coding coefficients so that they have a small ratio of within-class scatter to between-class scatter. However, noise in the training samples undermines the discriminative power of the dictionary. To handle this problem, we build on low-rank matrix recovery theory and impose a low-rank regularization on each sub-dictionary. The proposed discriminative dictionary learning with low-rank regularization (D2L2R2) algorithm is evaluated on several face image datasets against representative dictionary learning and classification algorithms, and the experimental results demonstrate its superiority.

I. INTRODUCTION

Face recognition has been studied extensively due to its real-world challenges and its great value in numerous applications [1][2][3][4][5]. Wright et al. [6] addressed the face recognition problem from the perspective of sparse signal representation and achieved promising results. Given a test signal and an over-complete dictionary whose atoms are prototype signals, sparse representation seeks the sparsest representation of the test signal among all linear combinations of the dictionary atoms. This sparsity is supported by research on the human visual system, which found that nerve cells in the connecting pathway react only to a limited set of stimuli [7]. Given an over-complete dictionary D and a query sample y, the essence of sparse representation is to solve the following minimization problem:

\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad y = D\alpha

where α is the coding coefficient vector whose non-zero elements should correspond to the category that y belongs to. Here the dictionary D can either be pre-specified or adapted to fit the given training samples. Wright et al. [6] pre-specified the dictionary as the original training samples. A problem with this strategy is that the original training images may not faithfully represent the test samples because of the noise and uncertainty they contain. Moreover, the discriminative information residing in the training set may be ignored in this way. As a result, we prefer to learn a dictionary adapted to the specific training set.

Fig. 1. Illustration of the D2L2R2 model. The training matrix is approximated by the product of the dictionary and the coefficient matrix. Each sub-dictionary learned is of low rank, which reduces the negative effect of noise contained in the training samples. The coding coefficients conform to the Fisher discrimination criterion, making the dictionary discriminative for the training samples.

In order to obtain a dictionary well adapted to discriminative representation of test samples, considerable progress has been made on dictionary learning. Generalizing the K-means clustering process, Aharon et al. [8] presented the K-SVD algorithm, which learns an over-complete dictionary by iteratively updating the dictionary atoms and the sparse representations. More recently, a discriminative K-SVD method that accounts for the classification error while learning the dictionary was proposed [9]. Jiang et al. [10] associated label information with each dictionary atom to enforce discriminability. To reduce computational complexity, [11][12] emphasized specific discriminative criteria for learning an over-complete dictionary. In [13], the authors introduced the Fisher criterion to learn a structured dictionary. Studer et al. [14] investigated dictionary learning from sparsely corrupted signals. However, the methods above work well only on clean training samples or under small noise and sparse corruption. If the training samples are corrupted by large noise, then in order to represent them the dictionary atoms will also become corrupted. Given a matrix M of low rank, matrix completion aims to recover it from noisy observations of a small random portion of its entries; it has been proved that, under certain assumptions, the problem can be solved exactly, and several methods have been proposed [15][16][17]. In our setting of dictionary learning for face recognition, training samples from the same class are linearly correlated and lie in a low-dimensional manifold. Therefore, a sub-dictionary for representing samples from one class should reasonably be of low rank. Ma et al. [5] integrated rank minimization into sparse representation and achieved impressive results, especially when corruption was present.

Inspired by this previous work, we aim to learn a discriminative dictionary for face recognition that can handle training samples corrupted by large noise. We propose a discriminative dictionary learning with low-rank regularization (D2L2R2) algorithm, illustrated in Fig. 1. In the figure, the training data matrix is approximately recovered by the product of the dictionary and the coding coefficient matrix. Each sub-dictionary is of low rank (it can be written as the product of two matrices of smaller size), which reduces the negative effect of noise contained in the training samples, and the coding coefficients conform to the Fisher discrimination criterion. Benefiting from this design, our algorithm has the following advantages. First, the Fisher discriminant function yields a small ratio of within-class scatter to between-class scatter on the coefficients, so the learned dictionary has strong discriminative power. Second, the low-rank regularization returns a compact and pure dictionary that can reconstruct denoised images even when the training samples are contaminated. Different from FDDL [13], our D2L2R2 algorithm copes well with training samples with large noise and still achieves impressive performance thanks to the low-rank regularization on the sub-dictionaries. Although DLRD_SR [5] was also claimed to handle noisy samples, it may suffer from some information loss because of its low-rank regularization; our D2L2R2 algorithm compensates for this by enforcing the Fisher criterion on the coding coefficients of the training set.

The rest of this paper is organized as follows. Section II reviews the sparse representation based classification method. Section III introduces our discriminative dictionary learning with low-rank regularization (D2L2R2). Section IV presents the optimization of our model. Section V describes the classification scheme. Section VI presents experimental results, and Section VII concludes the paper.

II. SPARSE REPRESENTATION BASED CLASSIFICATION (SRC) REVISITED

We briefly review robust face recognition using sparse representation as proposed in [6]. Define the matrix Y as the entire training set consisting of n training samples from c different classes: Y = [Y_1, Y_2, \ldots, Y_c], where Y_i \in R^{d \times n_i} contains all the training samples from the i-th class, d is the dimension of the samples, and n_i is the number of samples in the i-th class. To classify a test sample y, we go through two phases: coding and classification.

(a) Coding phase: we obtain the coding coefficient of y by solving the l1 minimization problem

\alpha = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \|y - Y\alpha\|_2 \le \varepsilon \qquad (1)

(b) Classification phase: y is assigned to the class with the smallest residual:

\min_i \ r_i(y) = \|y - Y \delta_i(\alpha)\|_2 \qquad (2)

where \delta_i(\alpha) is the function that selects the coefficients corresponding to the i-th class.
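For concreteness, the following is a minimal Python sketch of the SRC pipeline (not the implementation used in this paper): a simple iterative soft-thresholding (ISTA) solver stands in for the l1-constrained coder of Eq. (1), and classification follows the class-wise residual rule of Eq. (2). The regularization weight lam, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def ista_l1(Y, y, lam=0.05, n_iter=200):
    """Approximate min_a 0.5*||y - Y a||_2^2 + lam*||a||_1 (a Lasso surrogate for Eq. (1))."""
    L = np.linalg.norm(Y, 2) ** 2          # Lipschitz constant of the quadratic term's gradient
    a = np.zeros(Y.shape[1])
    for _ in range(n_iter):
        g = Y.T @ (Y @ a - y)              # gradient of the quadratic term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return a

def src_classify(Y, labels, y, lam=0.05):
    """Classify y by the minimal class-wise reconstruction residual (Eq. (2))."""
    a = ista_l1(Y, y, lam)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        a_c = np.where(labels == c, a, 0.0)    # delta_i(alpha): keep class-c coefficients only
        residuals.append(np.linalg.norm(y - Y @ a_c))
    return classes[int(np.argmin(residuals))]

# toy usage: 2 classes, 5 samples each, 20-dimensional features
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 10)); Y /= np.linalg.norm(Y, axis=0)
labels = np.array([0] * 5 + [1] * 5)
print(src_classify(Y, labels, Y[:, 3] + 0.01 * rng.normal(size=20)))
```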

III. DISCRIMINATIVE DICTIONARY LEARNING WITH LOW-RANK REGULARIZATION (D2L2R2)

To learn a discriminative dictionary even when large noise exists in the training samples, we propose a discriminative dictionary learning algorithm with low-rank regularization.

A. Notations and model overview

Define the matrix Y as the entire training set consisting of n training samples from c different classes: Y = [Y_1, Y_2, \ldots, Y_c], where Y_i \in R^{d \times n_i} contains all the training samples from the i-th class, d is the dimensionality of each sample vector, and n_i is the sample size of the i-th class. From Y, we want to learn a discriminative dictionary for the subsequent classification task. Rather than learning the dictionary as a whole from all the training samples, we learn a sub-dictionary D_i for the i-th class separately. With all the sub-dictionaries learned, the whole dictionary is D = [D_1, D_2, \ldots, D_c], where c is the number of classes and D_i, of size d \times m_i, is the sub-dictionary for the i-th class; d is the dimension of each dictionary atom, which equals the feature dimension of each training sample, and m_i is the number of atoms in the i-th sub-dictionary. We represent the entire training set Y over the whole dictionary D and denote by X the resulting sparse coefficient matrix, so that Y \approx DX. X can be written as X = [X_1, X_2, \ldots, X_c], where X_i is the sub-matrix of coefficients representing Y_i over D. In this paper, we propose the following D2L2R2 model:

J_{D,X} = \arg\min_{D,X} \Big\{ R(D, X) + \lambda_1 \|X\|_1 + \lambda_2 F(X) + \alpha \sum_i \|D_i\|_* \Big\}

where R(D, X) is the reconstruction error term expressing the discrimination power of D, \|X\|_1 is the l1 regularization on the coding coefficient matrix, F(X) is the Fisher discriminant function of the coefficients X, and \|D_i\|_* is the nuclear norm of each sub-dictionary D_i, which is the convex envelope of its matrix rank. We break the model down in the following subsections.

B. Discriminative reconstruction error

Sub-dictionary D_i should be able to represent the samples from the i-th class well. To state this mathematically, we rewrite X_i, the coding coefficient matrix of Y_i over D, as X_i = [X_i^1; X_i^2; \ldots; X_i^c], where X_i^j \in R^{m_j \times n_i} is the coding coefficient matrix of Y_i over D_j. We therefore minimize \|Y_i - D_i X_i^i\|_F^2. On the other hand, D_i should not be able to represent samples from other classes; that is, \sum_{j=1, j \ne i}^{c} \|D_i X_j^i\|_F^2 should be as small as possible, where each X_j^i has nearly zero elements. Lastly, the whole dictionary D should represent samples from any class Y_i well, so we also require the minimization of \|Y_i - D X_i\|_F^2.

Denote by

R(D_i, X_i) = \|Y_i - D_i X_i^i\|_F^2 + \sum_{j=1, j \ne i}^{c} \|D_i X_j^i\|_F^2 + \|Y_i - D X_i\|_F^2

the discriminative reconstruction error term for sub-dictionary D_i; we want to minimize the value of R(D_i, X_i).

C. Fisher discriminant for sparse codings

In addition to the discriminative reconstruction term, we want the coding coefficient matrix X itself to be discriminative, so that D acquires discriminative power for the training samples Y. We apply the Fisher discrimination criterion [18] to the coding coefficient matrix X so that the ratio of within-class scatter to between-class scatter is minimized and samples from different classes are separated. Denote by S_W(X) the within-class scatter of X and by S_B(X) the between-class scatter of X, defined as

S_W(X) = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \bar{x}_i)(x_k - \bar{x}_i)^T

S_B(X) = \sum_{i=1}^{c} n_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T

where \bar{x}_i is the mean sample of X_i, \bar{x} is the mean sample of X, and n_i is the number of samples in the i-th class. As mentioned, we introduce the Fisher criterion on X through the term F(X), defined as

F(X) = \mathrm{tr}(S_W(X)) - \mathrm{tr}(S_B(X)) + \eta \|X\|_F^2 \qquad (3)

Note that minimizing \mathrm{tr}(S_W(X)) - \mathrm{tr}(S_B(X)) is equivalent to minimizing the ratio of within-class scatter to between-class scatter. The last term \eta \|X\|_F^2 makes the function convex and stable, with \eta set to \eta = 1 as in [13].

D. Low-rank regularization on the dictionary

In face recognition, training samples from the same class are linearly correlated and reside in a low-dimensional subspace. Therefore, a sub-dictionary representing samples from one class should reasonably be of low rank. Moreover, requiring the sub-dictionaries to be of low rank separates out the noisy information and makes the dictionary purer and more compact. Among all possible sub-dictionaries D_i that can represent the samples of the i-th class, we want the one with the most compact bases, that is, the one minimizing \|D_i\|_*.
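The nuclear-norm terms above are typically handled through the proximal operator of \|\cdot\|_*, i.e., singular value thresholding, which is also the operation behind the J-update in Algorithm 1 below. The following sketch is illustrative rather than the implementation used here; the threshold tau plays the role of α/µ.

```python
import numpy as np

def singular_value_threshold(A, tau):
    """Proximal operator of tau*||.||_*: shrink the singular values of A by tau.
    Solves argmin_J tau*||J||_* + 0.5*||J - A||_F^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return (U * s_shrunk) @ Vt                   # rebuild J with the shrunken spectrum

# toy check: thresholding reduces the numerical rank of a noisy low-rank matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 8)) @ rng.normal(size=(8, 20)) + 0.05 * rng.normal(size=(32, 20))
J = singular_value_threshold(A, tau=1.0)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(J, tol=1e-6))
```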

Algorithm 1 Inexact ALM algorithm for Eq. (7)

Input: initial dictionary D_i, data matrix Y_i, parameters α, β, λ
Output: D_i, E_i, X_i^i
Initialize: J = 0, E_i = 0, T_1 = 0, T_2 = 0, T_3 = 0, µ = 10^{-6}, µ_max = 10^{30}, ε = 10^{-8}, ρ = 1.1
while not converged do
  1. Fix the other variables and update Z by
     Z = \arg\min_Z \frac{1}{\mu}\|Z\|_1 + \frac{1}{2}\|Z - (X_i^i + \frac{T_3}{\mu})\|_F^2
  2. Fix the other variables and update X_i^i by
     X_i^i = (D_i^T D_i + I)^{-1} \big( D_i^T (Y_i - E_i) + Z + \frac{D_i^T T_1 - T_3}{\mu} \big)
  3. Fix the other variables and update J by
     J = \arg\min_J \frac{\alpha}{\mu}\|J\|_* + \frac{1}{2}\|J - (D_i + \frac{T_2}{\mu})\|_F^2
     and length-normalize each column of J;
  4. Fix the other variables and update D_i by
     D_i = \big[ \frac{2\lambda}{\mu}\big( Y_i X_i^{iT} - (\textstyle\sum_{j=1, j\ne i}^{c} D_j X_i^j) X_i^{iT} \big) + Y_i X_i^{iT} - E_i X_i^{iT} + J + \frac{T_1 X_i^{iT} - T_2}{\mu} \big] \big( (\frac{2\lambda}{\mu} + 1) X_i^i X_i^{iT} + I \big)^{-1}
     and length-normalize each atom of D_i;
  5. Fix the other variables and update E_i by
     E_i = \arg\min_{E_i} \frac{\beta}{\mu}\|E_i\|_{2,1} + \frac{1}{2}\|E_i - (Y_i - D_i X_i^i + \frac{T_1}{\mu})\|_F^2
  6. Update T_1, T_2 and T_3 by
     T_1 = T_1 + \mu(Y_i - D_i X_i^i - E_i)
     T_2 = T_2 + \mu(D_i - J)
     T_3 = T_3 + \mu(X_i^i - Z)
  7. Update µ by µ = \min(\rho\mu, \mu_{max})
  8. Check the stopping criterion:
     \|D_i - J\|_\infty < \varepsilon, \ \|Y_i - D_i X_i^i - E_i\|_\infty < \varepsilon \ \text{and} \ \|X_i^i - Z\|_\infty < \varepsilon
end while
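Steps 1 and 5 of Algorithm 1 are proximal operators with closed-form solutions: entry-wise soft-thresholding for the l1 term and column-wise shrinkage for the l2,1 term. The sketch below assumes the l2,1 norm sums the l2 norms of the columns of E_i (one column per sample); it is illustrative, not the authors' code.

```python
import numpy as np

def prox_l1(A, tau):
    """argmin_Z tau*||Z||_1 + 0.5*||Z - A||_F^2  (Algorithm 1, step 1 with tau = 1/mu)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def prox_l21(A, tau):
    """argmin_E tau*||E||_{2,1} + 0.5*||E - A||_F^2  (step 5 with tau = beta/mu).
    Each column is scaled toward zero; columns with small norm are zeroed entirely."""
    norms = np.linalg.norm(A, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return A * scale                       # broadcast the per-column scale factor

# quick sanity check on random data
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
print(prox_l1(A, 0.5).round(2))
print(prox_l21(A, 2.0).round(2))           # columns with l2 norm below 2 become exactly zero
```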

E. The D2L2R2 model

Considering the discriminative reconstruction error, the Fisher discrimination criterion on the coding coefficients, and the low-rank regularization on the dictionary together, we obtain the following D2L2R2 model:

J_{(D,X)} = \arg\min_{D,X} \Big\{ \sum_{i=1}^{c} R(D_i, X_i) + \lambda_1 \|X\|_1 + \lambda_2 F(X) + \alpha \sum_{i=1}^{c} \|D_i\|_* \Big\} \qquad (4)

In the next section, we solve this model by alternately optimizing D and X.
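To make the terms of Eq. (4) concrete, the sketch below evaluates the objective for given per-class blocks. It reflects our reading of the model rather than the original implementation; the block layout (lists of Y_i, D_i and X_i) and the weights passed in are assumptions.

```python
import numpy as np

def d2l2r2_objective(Ys, Ds, Xs, lam1, lam2, alpha, eta=1.0):
    """Evaluate Eq. (4): sum_i R(D_i, X_i) + lam1*||X||_1 + lam2*F(X) + alpha*sum_i ||D_i||_*.
    Ys[i]: d x n_i samples of class i; Ds[i]: d x m_i sub-dictionary;
    Xs[i]: m x n_i coefficients of class i over the whole dictionary D = [D_0 ... D_{c-1}]."""
    D = np.hstack(Ds)
    sizes = [Di.shape[1] for Di in Ds]
    offsets = np.cumsum([0] + sizes)                     # row range of each D_j inside D
    total = 0.0
    # discriminative reconstruction error R(D_i, X_i)
    for i, (Yi, Xi) in enumerate(zip(Ys, Xs)):
        Xii = Xi[offsets[i]:offsets[i + 1], :]           # coefficients of Y_i over its own D_i
        total += np.linalg.norm(Yi - Ds[i] @ Xii) ** 2
        total += np.linalg.norm(Yi - D @ Xi) ** 2
        for j in range(len(Ys)):                         # D_i should not code other classes well
            if j != i:
                Xji = Xs[j][offsets[i]:offsets[i + 1], :]
                total += np.linalg.norm(Ds[i] @ Xji) ** 2
    X = np.hstack(Xs)
    total += lam1 * np.abs(X).sum()                      # l1 sparsity
    # Fisher term F(X) = tr(S_W) - tr(S_B) + eta*||X||_F^2
    x_bar = X.mean(axis=1, keepdims=True)
    fisher = eta * np.linalg.norm(X) ** 2
    for Xi in Xs:
        xi_bar = Xi.mean(axis=1, keepdims=True)
        fisher += np.linalg.norm(Xi - xi_bar) ** 2                     # within-class scatter trace
        fisher -= Xi.shape[1] * np.linalg.norm(xi_bar - x_bar) ** 2    # between-class scatter trace
    total += lam2 * fisher
    total += alpha * sum(np.linalg.norm(Di, ord='nuc') for Di in Ds)   # low-rank surrogate
    return total
```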

IV. OPTIMIZATION OF D2L2R2

We solve the proposed objective function in Eq. (4) by dividing it into two sub-problems. First, we update X_i (i = 1, 2, \ldots, c) with the dictionary D and all X_j (j \ne i) fixed; the coding coefficient matrix X is then obtained by putting all the X_i (i = 1, 2, \ldots, c) together. Second, we update D_i with X and all D_j (j \ne i) fixed. However, when D_i is updated, the coding coefficients of Y_i over D_i, namely X_i^i, must also be updated to reflect this change. Once D has been initialized, we iterate this process until a stopping criterion is reached.

A. Updating coding coefficient matrix X

First, assume that D is fixed; the original objective function Eq. (4) then reduces to a sparse coding problem. We update each X_i in turn while keeping all X_j (j \ne i) fixed, by solving

J(X_i) = \arg\min_{X_i} \Big\{ \|Y_i - D_i X_i^i\|_F^2 + \|Y_i - D X_i\|_F^2 + \sum_{j=1, j \ne i}^{c} \|D_j X_i^j\|_F^2 + \lambda_1 \|X_i\|_1 + \lambda_2 F_i(X_i) \Big\} \qquad (5)

where F_i(X_i) = \|X_i - \bar{X}_i\|_F^2 - \sum_{k=1}^{c} \|\bar{X}_k - \bar{X}\|_F^2 + \eta\|X_i\|_F^2, and \bar{X}_k and \bar{X} are matrices whose columns are the mean vectors of the k-th class and of all classes, respectively. This reduced objective can be solved with the Iterative Projection Method of [19] by rewriting it as

J(X_i) = \arg\min_{X_i} \{ Q(X_i) + 2\tau \|X_i\|_1 \}

TABLE I
ALGORITHM FOR THE D2L2R2 MODEL

1. Initialize dictionary D: the columns of each D_i are random vectors with unit length.
2. Update the coefficient matrix X: with D fixed, obtain each X_i in turn by solving Eq. (5) with the Iterative Projection Method.
3. Update the dictionary D: with X fixed, solve for each D_i from Eq. (6) with ALM (Algorithm 1).
4. Output: compare the values of J(D,X) in two consecutive iterations; if their difference is small enough or the maximum number of iterations is reached, output X and D; otherwise return to step 2.
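The outer loop of Table I simply alternates the two updates until the objective stops improving. The skeleton below is schematic: solve_eq5 and solve_eq6 are hypothetical placeholders for the Eq. (5) and Eq. (6) solvers described in this section, and the tolerance and iteration cap are illustrative.

```python
import numpy as np

def learn_d2l2r2(Ys, Ds, solve_eq5, solve_eq6, objective, max_iter=20, tol=1e-3):
    """Alternating-optimization skeleton for Table I.
    solve_eq5(Ys, Ds, Xs, i) -> new X_i              (sparse coding step, Eq. (5));
    solve_eq6(Ys, Ds, Xs, i) -> (new D_i, new X_i^i) (dictionary update step, Eq. (6));
    objective(Ys, Ds, Xs)    -> scalar value of Eq. (4)."""
    c = len(Ys)
    offsets = np.cumsum([0] + [Di.shape[1] for Di in Ds])
    m = offsets[-1]
    Xs = [np.zeros((m, Yi.shape[1])) for Yi in Ys]     # coefficients of each class over D
    prev = np.inf
    for _ in range(max_iter):
        for i in range(c):                             # step 2: update coefficients class by class
            Xs[i] = solve_eq5(Ys, Ds, Xs, i)
        for i in range(c):                             # step 3: update each sub-dictionary
            Ds[i], Xii = solve_eq6(Ys, Ds, Xs, i)
            Xs[i][offsets[i]:offsets[i + 1], :] = Xii  # keep X_i^i consistent with the new D_i
        cur = objective(Ys, Ds, Xs)
        if abs(prev - cur) < tol:                      # step 4: stop when the objective stalls
            break
        prev = cur
    return Ds, Xs
```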

B. Updating sub-dictionaries D_i

Next, with X fixed, we update each D_i while keeping all the other D_j (j \ne i) fixed. Note that the coding coefficients of Y_i over D_i, namely X_i^i, must be updated at the same time. The objective function then reduces to

J(D_i) = \arg\min_{D_i, X_i^i} \Big\{ \|Y_i - D_i X_i^i - \sum_{j=1, j \ne i}^{c} D_j X_i^j\|_F^2 + \sum_{j=1, j \ne i}^{c} \|D_j X_i^j\|_F^2 + \|Y_i - D_i X_i^i\|_F^2 + \alpha \|D_i\|_* \Big\}

Denoting r(D_i) = \|Y_i - D_i X_i^i - \sum_{j=1, j \ne i}^{c} D_j X_i^j\|_F^2 + \sum_{j=1, j \ne i}^{c} \|D_j X_i^j\|_F^2, the objective above can be converted to

\min_{D_i, E_i, X_i^i} \|X_i^i\|_1 + \alpha\|D_i\|_* + \beta\|E_i\|_{2,1} + \lambda r(D_i) \quad \text{s.t.} \quad Y_i = D_i X_i^i + E_i \qquad (6)

which can be solved as the following, essentially equivalent, problem:

\min_{D_i, E_i, X_i^i} \|Z\|_1 + \alpha\|J\|_* + \beta\|E_i\|_{2,1} + \lambda r(D_i) \quad \text{s.t.} \quad Y_i = D_i X_i^i + E_i, \ D_i = J, \ X_i^i = Z

This problem is then handled with the Augmented Lagrange Multiplier (ALM) method [20]:

\min_{D_i, E_i, X_i^i} \|Z\|_1 + \alpha\|J\|_* + \beta\|E_i\|_{2,1} + \lambda r(D_i) + \mathrm{tr}[T_1^T(Y_i - D_i X_i^i - E_i)] + \mathrm{tr}[T_2^T(D_i - J)] + \mathrm{tr}[T_3^T(X_i^i - Z)] + \frac{\mu}{2}\big( \|Y_i - D_i X_i^i - E_i\|_F^2 + \|D_i - J\|_F^2 + \|X_i^i - Z\|_F^2 \big) \qquad (7)

where T_1, T_2 and T_3 are Lagrange multipliers and \mu > 0 is a balance parameter. The details of the solution are given in Algorithm 1. Each atom of the dictionary is normalized to a unit vector. A proof of the convergence of this type of inexact ALM scheme can be found in the work of Lin et al. [21]. We summarize the overall algorithm for D2L2R2 in Table I.

V. CLASSIFICATION BASED ON OUR METHOD

We code a query sample y over the learned dictionary D and obtain the coding coefficient by solving

x = \arg\min_{x} \{ \|y - Dx\|_2^2 + \gamma\|x\|_1 \} \qquad (8)

Write x = [x_1; x_2; \ldots; x_c], where x_i is the coefficient sub-vector associated with sub-dictionary D_i. The residual associated with the i-th class is computed as

e_i = \|y - D_i x_i\|_2^2 + w\|x - \bar{x}_i\|_2^2 \qquad (9)

where \bar{x}_i is the learned mean coefficient vector of class i and w is a preset weight parameter. The identity of the test sample y is then determined by

\mathrm{identity}(y) = \arg\min_i \{ e_i \} \qquad (10)

More details can be found in [13].
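The classification rule of Eqs. (8)-(10) combines a global sparse coding step with class-wise residuals that also penalize the distance from each class's mean coefficient vector. A minimal sketch follows, assuming a sparse coder (e.g., the ISTA routine sketched in Section II) is supplied, that class_ranges gives the coefficient rows owned by each sub-dictionary, and that the weight w shown is illustrative.

```python
import numpy as np

def classify_d2l2r2(D, y, class_ranges, mean_codes, sparse_coder, w=0.5):
    """Eqs. (8)-(10): code y over the whole dictionary, then pick the class with the
    smallest residual e_i = ||y - D_i x_i||^2 + w*||x - xbar_i||^2.
    class_ranges[i] = (start, stop) rows of D and x belonging to sub-dictionary D_i;
    mean_codes[i]   = learned mean coefficient vector of class i (same length as x)."""
    x = sparse_coder(D, y)                                   # Eq. (8), e.g. an l1 solver
    residuals = []
    for (start, stop), x_bar in zip(class_ranges, mean_codes):
        D_i, x_i = D[:, start:stop], x[start:stop]
        e_i = np.linalg.norm(y - D_i @ x_i) ** 2 + w * np.linalg.norm(x - x_bar) ** 2
        residuals.append(e_i)                                # Eq. (9)
    return int(np.argmin(residuals))                         # Eq. (10)
```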

VI. EXPERIMENTAL RESULTS

We apply our algorithm to face recognition on the ORL [22], Extended Yale B [23] and CMU PIE [24] databases to verify its performance, testing the robustness of D2L2R2 to illumination changes, pixel corruptions, block corruptions, and noise. The experimental results are presented below together with some analysis.

A. Parameter selection

We set the number of dictionary columns for each class equal to the training size of that class. There are five parameters in our algorithm: λ_1 and λ_2 in Eq. (5), and α, β and λ in Eq. (6). In our experiments we found that changing α and λ did not noticeably affect the results, so we set both to 1. The parameters of the comparison algorithms are chosen by 5-fold cross-validation. For ORL, λ_1 = 0.005, λ_2 = 0.05, β = 0.1; for Extended Yale B, λ_1 = 0.005, λ_2 = 0.005, β = 0.01; for PIE, λ_1 = 0.005, λ_2 = 0.05, β = 0.1.

B. Face recognition

We compare D2L2R2 with DLRD_SR [5], FDDL [13], LDA [1], SRC [6] and LRC [25]. DLRD_SR applies low-rank regularization to the dictionary but uses no Fisher criterion on the coefficients; FDDL introduces the Fisher criterion but imposes no low-rank requirement on the dictionary; LDA uses the Fisher criterion alone, with no discriminative reconstruction error minimization. LRC is a recently proposed classifier in the category of nearest-subspace classification.

TABLE II
AVERAGE RECOGNITION RATE (%) OF DIFFERENT ALGORITHMS ON THE ORL DATABASE WITH VARIOUS CORRUPTION PERCENTAGES (AVERAGED OVER FIVE RANDOM SPLITS)

Occlusion (%)      0     10    20    30    40    50
D2L2R2 [ours]     94.3  91.1  82.6  77.2  68.9  59.8
DLRD_SR [5]       93.2  90.4  81.3  76.5  67.9  57.3
FDDL [13]         96.8  86.8  74.5  61.9  49.2  36.9
LRC [25]          91.2  83.6  70.3  62.6  47.2  40.1
SRC [6]           90.3  79.8  63.0  53.9  38.2  26.7
LDA [1]           91.7  72.2  54.8  39.4  26.1  19.7

ORL Face Database. The ORL database contains 400 images in total, ten images for each of 40 subjects. The background of the images is uniform and dark, and the subjects are in a frontal, upright posture; the images were taken under different lighting conditions and with varying facial expressions and details [22]. For each class we randomly select half of the images as training samples and use the rest for testing, repeating the experiment on five random splits. The images are normalized to 32 × 32 and manually corrupted by an unrelated block image placed at a random location; Fig. 2 shows examples with 20% block corruption. Table II lists the recognition accuracies under different levels of occlusion. Our algorithm consistently performs best under all non-zero levels of corruption. FDDL achieves the best result when there is no corruption, but as the percentage of occlusion increases, the performance of FDDL, LRC, SRC and LDA drops rapidly, whereas D2L2R2 and DLRD_SR still obtain much better recognition rates. This demonstrates the effectiveness of low-rank regularization when noise exists. Comparing D2L2R2 with DLRD_SR, D2L2R2 performs better owing to the Fisher criterion on the coefficients.

Fig. 2. Examples of ORL images with 20% block corruption. Top: original images. Bottom: corresponding corrupted images.

CMU PIE. The PIE database consists of 41,368 images of 68 subjects, each captured under 13 different poses, 43 different illumination conditions, and with 4 different expressions [24]. We use the first 15 subjects and select their frontal images with varying expression and illumination, giving 50 images per subject; we randomly select 10 images per subject for training and use the rest for testing, repeating the experiment on five random splits. The images are normalized to 32 × 32, and a certain percentage of randomly selected pixels of each image is replaced with the value 255. Fig. 3 shows examples of random pixel corruption on both training and testing samples, and Fig. 4 reports the recognition accuracy under various corruption percentages. D2L2R2 performs best most of the time; only when there is no corruption, or the corruption percentage is very small, does D2L2R2 fail to beat FDDL. In that regime the low-rank regularization does not help much and can even degrade the performance. However, compared with DLRD_SR, D2L2R2 still obtains better accuracy thanks to the Fisher criterion, which is also consistent with LDA's good performance in this setting.

Fig. 3. Examples of 30% random pixel corruption on training (first row) and testing (second row) samples from the PIE database.

Fig. 4. Average recognition accuracy on the PIE database for different percentages of corrupted pixels (0% to 40%), averaged over five random splits. (The original figure plots recognition rate against the percentage of corrupted pixels for D2L2R2, DLRD_SR, FDDL, LDA, SRC and LRC.)

Extended Yale B. The Extended Yale B dataset contains 2,414 frontal-face images of 38 subjects captured under various laboratory-controlled lighting conditions [23]. We use the first 15 subjects, each with around 60 images, randomly take half of the images for training and the rest for testing, and repeat the experiment five times. The images are normalized to 32 × 32, and a certain percentage of randomly selected pixels of each image is replaced with the value 255. The recognition accuracies are listed in Table III. Both D2L2R2 and DLRD_SR perform well when noise is present, whereas the recognition rates of LRC, SRC and LDA drop quickly with increasing noise, which demonstrates the superiority of low-rank regularization in handling noise.

TABLE III
AVERAGE RECOGNITION RATE (%) OF DIFFERENT ALGORITHMS ON THE EXTENDED YALE B DATABASE WITH VARIOUS CORRUPTION PERCENTAGES (AVERAGED OVER FIVE RANDOM SPLITS)

Corruption (%)     0      10     20     30     40
D2L2R2 [ours]     95.52  94.05  84.32  69.38  46.66
DLRD_SR [5]       97.37  92.59  81.38  64.48  34.35
FDDL [13]         97.24  69.91  54.83  44.65  33.06
LRC [25]          96.17  80.13  61.03  49.70  36.51
SRC [6]           96.08  66.55  49.48  38.54  28.62
LDA [1]           93.97  55.45  39.33  30.73  23.47

C. Results discussion

On all three datasets, both D2L2R2 and DLRD_SR perform better under noisy conditions, which clearly demonstrates the advantage of low-rank regularization in dealing with noise. Comparing D2L2R2 with DLRD_SR, the former achieves better results almost all of the time; this is due to the Fisher discriminant function on the coefficient matrix, which makes the learned dictionary more discriminative. However, this function is defined on the coefficients of the training set, so when the training set is small relative to the testing set, D2L2R2 and FDDL may not perform as well.

VII. CONCLUSION

This paper presents an algorithm to learn a discriminative dictionary with low-rank regularization (D2L2R2) for face recognition. D2L2R2 learns a structured dictionary in which each sub-dictionary is of low rank, which separates out the noisy information in the training samples. The discrimination power comes from two sources: first, the correlation between each sub-dictionary and the samples from other classes is minimized; second, the Fisher discrimination criterion is applied to the coding coefficient matrix. We verify our algorithm on face recognition, and the experimental results clearly demonstrate its superiority over many other state-of-the-art methods, especially when noise or corruption is present.

ACKNOWLEDGMENTS

This research is supported in part by NSF awards CNS 1135660 and 1314484, Office of Naval Research award N00014-12-1-0125, Air Force Office of Scientific Research award FA9550-12-1-0201, and IC Postdoctoral Research Fellowship award 2011-11071400006.

REFERENCES

[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, pp. 711–720, July 1997.
[2] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld, "Face recognition: A literature survey," ACM Computing Surveys (CSUR), vol. 35, no. 4, pp. 399–458, 2003.

[3] X. He, S. Yan, Y. Hu, P. Niyogi, and H. Zhang, "Face recognition using laplacianfaces," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 3, pp. 328–340, 2005.
[4] Z. Lei, D. Yi, and S. Li, "Discriminant image filter learning for face recognition with local binary pattern like representation," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2512–2517, IEEE, 2012.
[5] L. Ma, C. Wang, B. Xiao, and W. Zhou, "Sparse representation for face recognition based on discriminative low-rank dictionary learning," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2586–2593, IEEE, 2012.
[6] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 31, no. 2, pp. 210–227, 2009.
[7] T. Poggio, T. Serre, et al., Learning a dictionary of shape-components in visual cortex: comparison with neurons, humans and machines. PhD thesis, Massachusetts Institute of Technology, 2006.
[8] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," Signal Processing, IEEE Transactions on, vol. 54, no. 11, pp. 4311–4322, 2006.
[9] Q. Zhang and B. Li, "Discriminative K-SVD for dictionary learning in face recognition," in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2691–2698, IEEE, 2010.
[10] Z. Jiang, Z. Lin, and L. Davis, "Learning a discriminative dictionary for sparse coding via label consistent K-SVD," in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1697–1704, IEEE, 2011.
[11] H. Lee, A. Battle, R. Raina, and A. Y. Ng, "Efficient sparse coding algorithms," in Advances in Neural Information Processing Systems 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 801–808, Cambridge, MA: MIT Press, 2007.
[12] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, "Locality-constrained linear coding for image classification," in IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[13] M. Yang, L. Zhang, X. Feng, and D. Zhang, "Fisher discrimination dictionary learning for sparse representation," in Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 543–550, IEEE, 2011.
[14] C. Studer and R. Baraniuk, "Dictionary learning from sparsely corrupted or compressed signals," in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 3341–3344, IEEE, 2012.
[15] R. Keshavan, A. Montanari, and S. Oh, "Matrix completion from noisy entries," The Journal of Machine Learning Research, vol. 11, pp. 2057–2078, 2010.
[16] E. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?," arXiv preprint arXiv:0912.3599, 2009.
[17] E. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
[18] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification. New York: Wiley, 2nd ed., 2001.
[19] L. Rosasco, S. Mosci, S. Santoro, A. Verri, and S. Villa, "Iterative projection methods for structured sparsity regularization," Technical Report MIT-CSAIL-TR-2009-050, MIT, 2009.
[20] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic Press, 1982.
[21] Z. Lin, M. Chen, and Y. Ma, "The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices," arXiv preprint arXiv:1009.5055, 2010.
[22] F. Samaria and A. Harter, "Parameterisation of a stochastic model for human face identification," in Applications of Computer Vision, 1994, Proceedings of the Second IEEE Workshop on, pp. 138–142, Dec. 1994.
[23] K.-C. Lee, J. Ho, and D. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, pp. 684–698, May 2005.
[24] T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression (PIE) database," in Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, May 2002.
[25] I. Naseem, R. Togneri, and M. Bennamoun, "Linear regression for face recognition," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 32, pp. 2106–2112, Nov. 2010.
