EXEMPLAR-BASED SPARSE REPRESENTATION PHONE IDENTIFICATION FEATURES

Tara N. Sainath1, David Nahamoo1, Bhuvana Ramabhadran1, Dimitri Kanevsky1, Vaibhava Goel1 and Parikshit M. Shah2

1IBM T. J. Watson Research Center, Yorktown Heights, NY 10598
2MIT Laboratory for Information and Decision Systems, Cambridge, MA 02139
{tsainath, nahamoo, bhuvana, kanevsky, vgoel}@us.ibm.com, [email protected]

ABSTRACT

Exemplar-based techniques, such as k-nearest neighbors (kNNs) and sparse representations (SRs), can be used to model a test sample from a few training points in a dictionary set. In past work, we have shown that using an SR approach for phonetic classification allows for higher accuracy than other classification techniques; phones are the basic units of speech to be recognized. Motivated by this result, we create a new dictionary which is a function of the phonetic labels of the original dictionary. The SR method now selects relevant samples from this new dictionary to create a new feature representation of the test sample, one that is better linked to the actual units to be recognized. We refer to these new features as Spif. We present results using these new Spif features in a Hidden Markov Model (HMM) framework for speech recognition. We find that the Spif features allow for a 2.9% relative reduction in Phonetic Error Rate (PER) on the TIMIT phonetic recognition task. Furthermore, the Spif features allow for a 4.8% relative improvement in Word Error Rate (WER) on a large vocabulary 50 hour Broadcast News task.

Index Terms— Sparse representations, speech recognition

1. INTRODUCTION

The concept of "neighborhood" modeling has been explored in speech recognition in a variety of contexts. For example, in linear discriminant analysis (LDA), a frame is represented by information about itself and the temporal context of neighboring frames. Furthermore, in feature space maximum mutual information (fMMI), a Gaussian is represented using information about itself and neighboring Gaussians. Recently, SRs and kNNs have been used to represent a test frame by its closest neighbors in training [1].

SRs are attractive for a variety of reasons. First, both SRs and kNNs can be thought of as exemplar-based techniques, meaning that information about individual training examples is used to make a classification decision. This is in contrast to Gaussian Mixture Models (GMMs), which pool information about training examples together to estimate means and variances. Both SRs and kNNs have been shown to offer improvements over non-exemplar-based approaches such as GMMs for classification tasks. Furthermore, unlike kNNs, which characterize a test point by selecting a small fixed number of k neighbors from the training set, SRs do not fix the number of neighborhood points chosen from training, and have even shown improvements over kNNs for classification [1].

Motivated by the benefits of SRs, in this paper we explore using SRs to create a new set of sparse representation phone identification features (Spif). Specifically, we create a new set of posterior-based features from the SR classification decision rule, first introduced in [1].

One drawback of exemplar-based methods is that a small neighborhood of points is selected from all of the training data in order to make a classification decision. This implies that only a few classes are given non-zero posterior values, something we will refer to as feature sharpness. When classification errors are made, feature sharpness results in wrong classes having their probabilities over-emphasized. This is particularly a problem in recognition tasks where class boundaries are determined via a dynamic programming approach (e.g., HMMs), which requires class probabilities to be compared across frames, something exemplar-based methods cannot easily do. To address this issue, we explore three techniques to smooth the sharp Spif features and better utilize them for recognition.

We investigate the Spif features for both small and large vocabulary tasks. On the TIMIT corpus [2], we show that applying the SR features on top of our best context-dependent (CD) HMM system allows for a 0.7% absolute reduction in phonetic error rate (PER). Furthermore, on a 50 hour Broadcast News task [3], we achieve a reduction in word error rate (WER) of 0.9% absolute with the SR features on top of our best discriminatively trained HMM system.

The rest of this paper is organized as follows. Section 2 discusses the creation of Spif features, while Section 3 discusses various techniques explored to smooth sharp Spif features. Sections 4 and 5 present the experiments and results, respectively. Finally, Section 6 concludes the paper and discusses future work.

2. SR PHONE IDENTIFICATION FEATURES

In this section, we review the use of SRs for classification [1], and use this framework to create our Spif features.

2.1. Classification Using Sparse Representations

To use SRs for classification, we first take all n_i training examples from class i and concatenate them into a matrix H_i as columns; in other words, H_i = [x_{i,1}, x_{i,2}, ..., x_{i,n_i}] ∈ ℜ^{m×n_i}, where x_{i,j} ∈ ℜ^m is a feature vector from training belonging to class i with dimension m. Furthermore, define the matrix H to hold the training examples from all w classes, in other words H = [H_1, H_2, ..., H_w] = [x_{1,1}, x_{1,2}, ..., x_{w,n_w}] ∈ ℜ^{m×N}. H is an over-complete dictionary, where m << N and N is the total number of training examples from all classes. We can represent a test vector y as a linear combination of training examples, in other words y = Hβ. After solving y = Hβ, we must assign y to a specific class. Ideally, the optimal β should be sparse and non-zero only for the elements of H which belong to the same class as y.
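Before turning to the decision rule, the dictionary setup and sparse solve can be made concrete. The paper solves y = Hβ with the ABCS method of [1]; since ABCS has no off-the-shelf implementation, this sketch substitutes scikit-learn's lasso, which also yields a sparse β. All names here are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def build_dictionary(examples_by_class):
    """Stack per-class training examples into H = [H_1, ..., H_w].

    examples_by_class: list of (m x n_i) arrays, one per class i.
    Returns H (m x N) and labels (length N): the class of each column.
    """
    H = np.hstack(examples_by_class)
    labels = np.concatenate([np.full(Hi.shape[1], i)
                             for i, Hi in enumerate(examples_by_class)])
    return H, labels

def solve_sparse(H, y, alpha=0.01):
    """Solve y ~= H beta for a sparse beta (lasso stand-in for ABCS)."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(H, y)   # the columns of H act as the regression features
    return model.coef_
```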

Let us define a selector δ_i(β) ∈ ℜ^N as a vector whose entries are zero except for the entries of β corresponding to class i. We then compute the l2 norm of β for class i as ∥δ_i(β)∥_2. As discussed in [1], the best class for y will be the class whose β entries have the largest l2 norm:

i* = arg max_i ∥δ_i(β)∥_2        (1)
2.2. Sparse Representation Phone Identification Features

In this section, we discuss how we use β to create a set of Spif vectors. First, define the matrix H_phnid = [p_{1,1}, p_{1,2}, ..., p_{w,n_w}] ∈ ℜ^{r×N}, which has the same number of columns N as the original H, but a different number of rows r. Recall that each x_{i,j} ∈ H has a corresponding class label i. We define each p_{i,j} ∈ H_phnid corresponding to feature vector x_{i,j} ∈ H to be a vector with zeros everywhere except at the index i corresponding to the class of x_{i,j}. Figure 1 shows the H_phnid corresponding to H, where each p_{i,j} becomes a phone identification vector with a value of 1 at the class of x_{i,j}. Here r, the dimension of each p_{i,j}, equals the total number of classes.
Fig. 1. H_phnid corresponding to H (figure: the one-hot phone identification vectors p_{i,j} aligned with the columns of H; matrix contents omitted)

Once β is found by solving y = Hβ, we use this same β to select important classes within the new dictionary H_phnid. Specifically, we define a new feature vector Spif = H_phnid β², where each element of β is squared, i.e., β² = {β_i²}. We use β² because this is similar to the ∥δ_i(β)∥_2 classification rule given by Equation 1: each row i of the Spif vector roughly represents the l2 norm of the β entries for class i.

A speech signal is defined by a series of feature vectors, Y = {y^1, y^2, ..., y^n}, for example Mel-scale frequency cepstral coefficients (MFCCs). For every test sample y^t ∈ Y, we solve y^t = H^t β^t to compute a β^t, and from this β^t a corresponding Spif^t vector is formed. Since β^t at each sample represents a weighting of the entries in H^t that best represent the test vector y^t, it is difficult to compare β^t values, and hence Spif^t vectors, across frames. To ensure that values can be compared across samples, the Spif^t vectors are normalized at each sample: the new S̄pif^t at sample t is computed as S̄pif^t = Spif^t / ∥Spif^t∥_1. A series of vectors {S̄pif^1, S̄pif^2, ..., S̄pif^n} is created and used for recognition.
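Concretely, multiplying by H_phnid just sums β_j² over the columns belonging to each class, followed by the l1 normalization. A minimal sketch, continuing the illustrative names used earlier:

```python
import numpy as np

def spif_features(beta, labels, num_classes):
    """Map a sparse beta to a normalized Spif vector (Section 2.2).

    Equivalent to H_phnid @ (beta ** 2): row i of H_phnid picks out the
    columns of H labeled with class i, so row i of Spif sums beta_j^2
    over that class, roughly the squared l2 norm of delta_i(beta).
    """
    spif = np.zeros(num_classes)
    np.add.at(spif, labels, beta ** 2)   # accumulate beta_j^2 per class
    total = spif.sum()                   # l1 norm; entries are nonnegative
    return spif / total if total > 0 else spif
```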

2.3. Construction of Dictionary H

The success of SRs depends on a good choice of H. In [4], various methods for seeding H from a large sample set were explored. Below we summarize the main techniques used in this work to select H.

2.3.1. Seeding H from Nearest Neighbors

For each y, we find a neighborhood of closest points to y from all examples in the training set. These k neighbors become the entries of H (a short sketch of this procedure appears at the end of this section). While this approach works well on small-vocabulary tasks, it is computationally expensive for large data sets.

2.3.2. Using a Language Model

In speech recognition, when an utterance is scored using a set of HMMs (whose output distributions are given by Gaussians), evaluating only a small subset of these Gaussians at a given frame allows for a large improvement in speed without a reduction in accuracy [5]. Using this fact, we use training data belonging to a small subset of Gaussians to seed H. To determine these Gaussians at each frame, we decode the data using a language model (LM) and find the best aligned Gaussian at each frame. For each such Gaussian, we compute the 4 other closest Gaussians. After we find the top 5 Gaussians at a specific frame, we seed H with the training data aligned to these top 5 Gaussians. We explore using both trigram and unigram LMs to obtain the top Gaussians.

2.3.3. Using a Lattice

Seeding H as in Section 2.3.2 amounts to finding the best H at the frame level. However, the goal of speech recognition is to recognize words, and therefore we also explore seeding H using information related to competing word hypotheses. Specifically, we create a lattice of competing word hypotheses and obtain the top Gaussians at each frame from the Gaussian alignments of the lattice.
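Of these options, the nearest-neighbor seeding of Section 2.3.1 is the easiest to make concrete. A sketch under the same illustrative names as before (the value of k is arbitrary here; the paper does not pin it down):

```python
import numpy as np

def seed_dictionary_knn(train_X, train_labels, y, k=200):
    """Seed H with the k training frames closest to test frame y (Sec. 2.3.1).

    train_X: (m x N) matrix with all training frames as columns.
    Returns the (m x k) dictionary H and the class of each of its columns.
    """
    dists = np.linalg.norm(train_X - y[:, None], axis=0)  # distance per column
    nearest = np.argsort(dists)[:k]
    return train_X[:, nearest], train_labels[nearest]
```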

3. REDUCING SHARPNESS ESTIMATION ERROR

As described in Section 2.3, for computational efficiency, Spif features are created by first pre-selecting a small amount of data for the dictionary H. This implies that only a few classes are present in H, and hence only a few Spif posteriors are non-zero, something we define as feature sharpness. Feature sharpness by itself is advantageous: for example, if we were able to correctly predict the right class at each frame and capture this in Spif, the WER would be close to zero. However, because we are limited in the amount of data that can be used to seed H, incorrect classes may have their probabilities boosted over correct classes, something we will refer to as sharpness estimation error. In this section, we explore various techniques to smooth out the sharp Spif features and reduce estimation error.

3.1. Choice of Class Identification

The Spif vectors are defined based on the class labels in H. We explore two choices of class labels in this paper. First, we explore using monophone class labels. Second, we investigate labeling classes in H by a set of context-independent (CI) triphones. While using triphones increases the dimension of the Spif vector, the elements in the vector are less sharp, since the β values for a specific monophone are more likely to be distributed across the 3 different triphones of this monophone.

3.2. Posterior Combination

Another technique to reduce feature sharpness is to combine Spif posteriors with posteriors coming from an HMM system, a technique often explored when posteriors are created using neural nets [3]. Specifically, let us define h_j(y_t) as the output distribution for observation y_t and state j of an HMM system. In addition, define Spif^j(y_t) as the Spif posterior corresponding to state j. Note that the number of Spif posteriors can be less than the number of HMM states, so the same Spif posterior may map to multiple HMM states. For example, the Spif posterior corresponding to phone "aa" could map to HMM states "aa-b-0", "aa-m-0", etc. Given the HMM and Spif posteriors, the final output distribution b_j(y_t) is given by Equation 2, where λ is a weight on the Spif posterior stream, selected on a held-out set.

b_j(y_t) = h_j(y_t) + λ Spif^j(y_t)        (2)
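Equation 2 is a frame-level interpolation of the two score streams, with a shared Spif row for all HMM states of the same phone. A minimal sketch, assuming a precomputed integer map from HMM states to Spif rows (all names illustrative):

```python
import numpy as np

def combine_posteriors(hmm_scores, spif, state_to_phone, lam=0.1):
    """Blend HMM output scores with Spif posteriors per state (Eq. 2).

    hmm_scores: h_j(y_t) for every HMM state j at frame t.
    spif: normalized Spif vector at frame t.
    state_to_phone: for each HMM state, the index of its Spif row; several
        states (e.g. "aa-b-0", "aa-m-0") share the row for phone "aa".
    lam: weight on the Spif stream, tuned on a held-out set.
    """
    return hmm_scores + lam * spif[np.asarray(state_to_phone)]
```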

3.3. Spif Feature Combination

As we will show in Section 5.2, Spif features created using different methodologies to select H offer complementary information. For example, Spif features created when H is seeded with a lattice have higher frame accuracy and incorporate more sequence information than when H is seeded using a unigram or trigram LM. However, Spif features created from lattice information are much sharper than features created with a unigram or trigram LM. Thus, we explore combining different Spif features. Denoting by Spif^tri, Spif^uni and Spif^lat the features created from the three different H selection methodologies, we combine them to produce a new Spif^comb feature as given by Equation 3. The weights {α, β, γ} are chosen on a held-out set with the constraint that α + β + γ = 1.

Spif^comb = α Spif^tri + β Spif^uni + γ Spif^lat        (3)
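Since each stream is a probability-like vector over the same classes, Equation 3 is a convex combination. A short sketch (the weight values shown are placeholders, not tuned values):

```python
def combine_spif(spif_tri, spif_uni, spif_lat, a=0.5, b=0.3, g=0.2):
    """Convex combination of three Spif streams (Eq. 3), with a + b + g = 1."""
    assert abs(a + b + g - 1.0) < 1e-9, "weights must sum to 1"
    return a * spif_tri + b * spif_uni + g * spif_lat
```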

4. EXPERIMENTS

The small vocabulary recognition experiments are conducted on TIMIT [2]. Similar to [4], acoustic models are trained on the training set, and results are reported on the core test set. The initial acoustic features are 13-dimensional MFCC features. The large vocabulary experiments are conducted on an English Broadcast News transcription task [3]. The acoustic model is trained on 50 hours of data from the 1996 and 1997 English Broadcast News Speech Corpora. Results are reported on 3 hours of the EARS Dev-04f set. The initial acoustic features are 19-dimensional PLP features.

Both corpora utilize the following recipe for training. First, a set of CI HMMs are trained, either using information from the phonetic transcription (TIMIT) or from flat-start (Broadcast News). The CI models are then used to bootstrap the training of a set of CD triphone models. In this step, given an initial set of MFCC or PLP features, a set of LDA features are created. After the features are speaker-adapted, a set of discriminatively trained features and models are created using the boosted Maximum Mutual Information (BMMI) criterion. Finally, the models are adapted via MLLR.

On TIMIT, we explore creating Spif features from both LDA and fBMMI features, while for Broadcast News we only create Spif features after the fBMMI stage. The initial LDA/fBMMI features are used for both y and H to solve y = Hβ and create Spif features at each frame. In this work, we use the Approximate Bayesian Compressive Sensing (ABCS) [1] SR method. Once the series of Spif vectors is created, an HMM is built on the training features.

5. RESULTS

5.1. TIMIT

5.1.1. Frame Accuracy

The success of Spif relies first on the per-frame classification accuracy, computed using Equation 1, being high. Table 1 shows the classification accuracy of the GMM and SR methods for both the LDA and fBMMI feature spaces. (We have not included the accuracy of the HMM, since it takes into account sequence information which both the GMM and SR methods do not.) Notice that the SR technique offers significant improvements over the GMM method.
Classifier   Frame Acc. (LDA)   Frame Acc. (fBMMI)
GMM          61.5               70.4
SR           64.0               71.7

Table 1. Frame Accuracy on TIMIT Core Test Set

5.1.2. Recognition Results - Class Identification

First, Table 2 shows the phonetic error rate (PER) at the CD level for different class identification choices. Since only a kNN is used to seed H on TIMIT, we call the feature Spif^knn. We have also listed results for other CD ML-trained systems reported in the literature on TIMIT. Notice that smoothing out the sharpness error of the Spif features by using triphones rather than monophones results in a decrease in error rate. The Spif^knn-triphone features outperform the LDA features and also offer the best result of all methods on TIMIT at the CD level for ML-trained systems.
System                                           PER (%)
Spif^knn - monophones, IBM CD HMM (this paper)   25.1
Monophone HTMs [6]                               24.8
Baseline LDA Features, IBM CD HMM                24.5
Heterogeneous Measurements [7]                   24.4
Spif^knn - triphones, IBM CD HMM (this paper)    23.8

Table 2. PER on TIMIT Core Test Set - CD ML Trained Systems
Second, we explore Spif features created after the fBMMI stage. Table 3 shows that performance is now worse than the fBMMI system. Because the fBMMI features are already discriminative in nature and offer good class separability, Spif features created in this space are too sharp, explaining the increase in PER.

Features                   PER
Baseline fBMMI Features    19.4
Spif^knn - triphones       20.7

Table 3. PER on TIMIT Core Test Set - fBMMI Level
5.1.3. Recognition Results - Posterior Combination

We explore reducing feature sharpness by combining Spif posteriors with HMM posteriors, as shown in Table 4. We observe that on TIMIT, combining posteriors from the two feature streams has virtually no impact on recognition accuracy compared to the baseline fBMMI system, indicating that there is little complementarity between the two systems. Because gains were not observed with posterior combination, further Spif feature combination was not explored.

Features                           PER
Baseline fBMMI Features            19.4
Spif^knn, Posterior Combination    19.4

Table 4. PER on TIMIT Core Test Set - Posterior Combination
5.2. Broadcast News

In this section, we explore the Spif features on Broadcast News.

5.2.1. Recognition Results - Choice of H and Class Identity

Table 5 shows the frame accuracy and WER on Broadcast News for different choices of H and class identity. We also quantify the sharpness estimation error of the different Spif methods.
We define the "sharpness" of a Spif vector by calculating the entropy of the nonzero probabilities of the feature: the sharper the Spif feature, the lower the entropy. A very sharp Spif feature that emphasizes the incorrect class for a frame will lead to a classification error. Therefore, we measure sharpness error by the average entropy over all misclassified Spif frames (a small sketch of this measure follows Table 5). Please note that sharpness is only measured for the monophone Spif features. Using triphone Spif smooths out class probabilities since the feature dimension is increased; however, it is difficult to quantifiably compare feature sharpness between the monophone and triphone Spif features, since the phone labels and dimensions of the two features differ.

First, notice the trend between frame accuracy and entropy in Table 5. Spif^uni features have a low frame accuracy and hence a high WER. While Spif^lat features have a very high frame accuracy, they have a much lower entropy on misclassified frames (i.e., they are sharper) compared to Spif^tri and Spif^uni, and hence also have a high WER. The Spif^tri features created from a trigram LM offer the best tradeoff between feature sharpness and accuracy, and achieve a WER close to the baseline. However, if feature sharpness is reduced by using triphone Spif^tri features, we now see on a word recognition task that the WER increases slightly.
Features                      Frame Acc.   Spif Entropy, Error Frames   WER
Baseline fBMMI, ML Training   -            -                            19.4
Spif^tri - monophones         70.3         2.27                         19.5
Spif^uni - monophones         68.3         2.23                         29.0
Spif^lat - monophones         77.2         0.86                         21.6
Spif^tri - triphones          -            -                            19.8

Table 5. WER on Broadcast News, Class Identification
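The sharpness measure reported in Table 5 can be computed directly from the normalized Spif vectors: take the entropy of each vector's nonzero entries and average over the misclassified frames. A sketch assuming base-2 logs (the paper does not state the log base) and illustrative names:

```python
import numpy as np

def sharpness_error(spif_frames, correct):
    """Average entropy of Spif vectors over misclassified frames (Sec. 5.2.1).

    spif_frames: iterable of l1-normalized Spif vectors, one per frame.
    correct: per-frame flag, True when the top Spif class was right.
    """
    def entropy(p):
        nz = p[p > 0]                     # entropy of the nonzero entries only
        return float(-(nz * np.log2(nz)).sum())
    errs = [entropy(p) for p, ok in zip(spif_frames, correct) if not ok]
    return sum(errs) / len(errs) if errs else 0.0
```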

5.2.2. Oracle Results of Reducing Estimation Error

We motivate the need for reducing sharpness error with the following oracle experiment. Given the Spif^tri-monophone features, x% of the misclassified frames are corrected to have a probability of 1 at the correct phone index and 0 elsewhere (a sketch of this correction follows Table 6). Table 6 shows the results when 1%, 3%, and 5% of the misclassified Spif features are corrected. Notice that just by correcting a small percentage of misclassified features, the WER reduces significantly. This motivates us to explore different techniques to reduce Spif sharpness in the next section.

Features                  Frame Accuracy   WER
Spif^tri - 0% Cheating    70.3             19.5
Spif^tri - 1% Cheating    71.4             19.4
Spif^tri - 3% Cheating    73.7             18.8
Spif^tri - 5% Cheating    76.1             17.6

Table 6. WER on Broadcast News, Oracle Results
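The cheating experiment is straightforward to express in code. The paper does not say how the corrected frames are selected, so this sketch picks them uniformly at random (names illustrative):

```python
import numpy as np

def oracle_correct(spif_frames, correct, true_class, frac, seed=0):
    """Cheating experiment of Sec. 5.2.2: make a fraction of the
    misclassified Spif frames one-hot at the true phone index.

    spif_frames: (T x num_classes) array of Spif vectors.
    """
    rng = np.random.default_rng(seed)
    wrong = np.flatnonzero(~np.asarray(correct))          # misclassified frames
    picked = rng.choice(wrong, size=int(frac * len(wrong)), replace=False)
    fixed = spif_frames.copy()
    for t in picked:
        fixed[t] = 0.0                                    # zero the whole row
        fixed[t, true_class[t]] = 1.0                     # prob. 1 at true phone
    return fixed
```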

5.2.3. Recognition Results - Posterior and Spif Combination

In this section, we explore reducing sharpness through posterior and Spif combination. Table 7 shows the baseline results for the fBMMI and Spif^tri-monophone features, at 18.7% and 19.5% respectively. The frame accuracies and entropies of misclassified frames for the various Spif combination features are also listed. Note that the frame accuracy is only reported on the Spif feature and does not include frame accuracy after posterior combination.

First, notice that through posterior combination, we reduce the WER by 0.5% absolute, from 18.7% to 18.2%, showing the complementarity between the fBMMI and Spif feature spaces. Second, by doing additional Spif feature combination, we are able to increase the frame accuracy from 70.3% to 76.3% without a reduction in Spif entropy, as it increases slightly from 2.27 to 2.29. This results in a further decrease in WER of 0.4% absolute, from 18.2% to 17.8%, indicating the importance of reducing feature sharpness, particularly for misclassified Spif frames.

Features                                            Frame Acc.   Spif Ent.   WER
Baseline fBMMI Features, BMMI Training+MLLR         -            -           18.7
Spif^tri - monophones                               70.3         2.27        19.5
Spif^tri, Posterior Combination                     70.3         2.27        18.2
α Spif^tri + β Spif^uni + γ Spif^lat, Post. Comb.   76.3         2.29        17.8

Table 7. WER on Broadcast News, Posterior and Spif Comb.

6. CONCLUSIONS

In this paper, we derived a novel set of Spif features which take advantage of the exemplar-based classification benefits of SRs. We also explored various feature sharpness reduction techniques to allow these features to be used successfully for recognition purposes. On TIMIT, we found that these features achieve a PER of 23.8%, the best result on TIMIT reported to date when HMM parameters are trained using the maximum likelihood principle. Furthermore, we found that these features allow for a 0.9% absolute improvement in WER on a large vocabulary 50 hour Broadcast News task.
7. ACKNOWLEDGEMENTS

The authors would like to thank Hagen Soltau, George Saon, Brian Kingsbury, Stanley Chen and Abhinav Sethy for their contributions to the IBM toolkit and recognizer utilized in this paper.

8. REFERENCES

[1] T. N. Sainath, A. Carmi, D. Kanevsky, and B. Ramabhadran, "Bayesian Compressive Sensing for Phonetic Classification," in Proc. ICASSP, 2010.
[2] L. Lamel, R. Kassel, and S. Seneff, "Speech Database Development: Design and Analysis of the Acoustic-Phonetic Corpus," in Proc. of the DARPA Speech Recognition Workshop, 1986.
[3] B. Kingsbury, "Lattice-Based Optimization of Sequence Classification Criteria for Neural-Network Acoustic Modeling," in Proc. ICASSP, 2009.
[4] T. N. Sainath, B. Ramabhadran, D. Nahamoo, D. Kanevsky, and A. Sethy, "Sparse Representation Features for Speech Recognition," in Proc. Interspeech, 2010.
[5] G. Saon, G. Zweig, B. Kingsbury, L. Mangu, and U. Chaudhari, "An Architecture for Rapid Decoding of Large Vocabulary Conversational Speech," in Proc. Eurospeech, 2003.
[6] L. Deng and D. Yu, "Use of Differential Cepstra as Acoustic Features in Hidden Trajectory Modeling for Phonetic Recognition," in Proc. ICASSP, 2007.
[7] A. Halberstadt and J. Glass, "Heterogeneous Measurements and Multiple Classifiers for Speech Recognition," in Proc. ICSLP, 1998.
