Speaker Recognition using Kernel-PCA and Intersession Variability Modeling
Hagai Aronowitz
IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
[email protected]

Abstract
This paper presents a new method for text-independent speaker recognition. We embed both training and test sessions into a session space. The session space is a direct sum of a common-speaker subspace and a speaker-unique subspace. The common-speaker subspace is Euclidean and is spanned by a set of reference sessions. Kernel-PCA is used to explicitly embed sessions into the common-speaker subspace. The common-speaker subspace typically captures attributes that are common to many speakers. The speaker-unique subspace is the orthogonal complement of the common-speaker subspace and typically captures attributes that are speaker-unique. We model intersession variability in the common-speaker subspace, and combine it with the information that exists in the speaker-unique subspace. Our suggested framework leads to a 43.5% reduction in error rate compared to a Gaussian Mixture Model (GMM) baseline.
Index Terms: speaker recognition, speaker variability modeling, intersession variability modeling, kernel PCA

1. Introduction
An appealing proposal named anchor modeling [1] has been made in recent years: modeling a speaker relative to a set of reference speakers. Though significant progress has been achieved under the anchor modeling framework [2], it still has some weaknesses, which we discuss in Section 2. To overcome these weaknesses, we introduce an approach based on kernel-PCA [3]. Given a set of reference speakers, we define a session space which is a direct sum of two subspaces. The first subspace, named the common-speaker subspace, is spanned by the reference speakers. The second subspace, named the speaker-unique subspace, is the orthogonal complement of the common-speaker subspace. The speaker-unique subspace captures information that does not intersect with the span of the reference models. Using kernel-PCA we derive a distance-preserving projection from the common-speaker subspace to a Euclidean space, where intersession variability can be modeled. For classification, the information extracted from the common-speaker subspace is integrated with the information extracted from the speaker-unique subspace.
The remainder of this paper is organized as follows: the anchor modeling framework is described and analyzed in Section 2. Section 3 introduces the kernel-PCA based approach. In Section 4 we describe the experimental setup and results on NIST 2004-SRE. Finally, in Section 5 we discuss the relationship between the kernel-PCA based approach and other speaker recognition approaches [2], [4] and conclude.

2. Anchor modeling
In this section we give an overview of the anchor modeling framework and discuss its weaknesses.

2.1. Anchor modeling overview
Anchor modeling is based on projecting a spoken session into a space of reference spoken sessions. A spoken session is therefore represented not in an absolute way but relative to a set of spoken sessions. According to the anchor modeling approach, spoken sessions are embedded into an anchor space where ad-hoc distances may be defined as in [1] and [2], probabilistic modeling may be carried out as in [2] and [5], or discriminative classification can be done as in [5]. Anchor modeling was originally developed for speaker recognition [1] but can be used more generally to embed spoken sessions for other tasks such as language identification [5]. A spoken session is embedded into the anchor space by representing it with a vector of the normalized likelihood ratios between the spoken session and the anchor models. This vector is named the speaker characterization vector (SCV) and is defined as:

    \mathrm{SCV}(X) = [\hat{s}(X|\lambda_1), \ldots, \hat{s}(X|\lambda_E)]^T.    (1)

In (1), X is a sequence of F acoustic feature vectors, and \hat{s}(X|\lambda_i) is the average log-likelihood ratio of X for the GMM model representing reference speaker \lambda_i, relative to a universal background model (UBM) \lambda_{UBM}:

    \hat{s}(X|\lambda_i) = \frac{1}{F} \log \frac{\Pr(X|\lambda_i)}{\Pr(X|\lambda_{UBM})}.    (2)
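As an illustration, the embedding of equations (1)-(2) can be computed directly from average per-frame log-likelihoods. The sketch below is a minimal example assuming fitted sklearn GaussianMixture models; note that the GMMs in this paper are MAP-adapted from a UBM, which sklearn does not provide out of the box, so the models here are illustrative stand-ins.

    # Minimal sketch of the SCV embedding of equations (1)-(2).
    # Assumes fitted sklearn GaussianMixture models (illustrative stand-ins
    # for the paper's MAP-adapted GMMs).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def scv(X, anchor_gmms, ubm):
        """X: (F, d) array of acoustic feature vectors.
        anchor_gmms: list of E fitted anchor models; ubm: fitted UBM.
        Returns the E-dimensional speaker characterization vector."""
        # GaussianMixture.score returns the mean per-frame log-likelihood,
        # so the difference equals (1/F) log [Pr(X|lambda_i)/Pr(X|UBM)].
        ubm_ll = ubm.score(X)
        return np.array([g.score(X) - ubm_ll for g in anchor_gmms])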

2.2. Weaknesses of the anchor modeling framework

2.2.1. Theoretical basis
The anchor modeling framework embeds a spoken session into the anchor space using log-likelihood ratios. To the best of our knowledge, no theoretical basis for the embedding defined by (2) has been published. Other embeddings may be defined and would lead to different results. The lack of a theoretical basis has led to a variety of ad-hoc metrics in the embedded space [1], [2].

2.2.2. Accuracy
The concept of session projection into anchor space is advantageous for speaker retrieval. However, important information may be lost. The amount of information lost is a function of the number of anchor models and the way they are chosen. In Section 4 we present empirical evidence showing that significant information is lost even when hundreds of anchor speakers are used.

2.2.3. Efficiency
The anchor modeling framework is a relatively efficient framework for indexing audio archives for speaker retrieval. However, it still requires scoring a test session against hundreds of anchor models, which can be time consuming.

3. Kernel-PCA based speaker recognition
In this section we present our method for speaker recognition based on kernel-PCA. We first introduce a method for embedding sessions into a space named session space, and then show how to model speaker variability and how to classify in that space.

3.1. Kernel-PCA
Kernel-PCA [3] is a kernelized version of the principal component analysis (PCA) algorithm. A function K(x,y) is a kernel if there exists a dot product space F (named feature space) and a mapping f: V \to F from observation space V (named input space) for which:

    \forall x, y \in V: \quad K(x, y) = \langle f(x), f(y) \rangle.    (3)

Given a set of reference vectors A_1, \ldots, A_n in V, the kernel matrix K is defined as K_{i,j} = K(A_i, A_j). The goal of kernel-PCA is to find an orthogonal basis for the subspace spanned by the set of mapped reference vectors f(A_1), \ldots, f(A_n). The outline of the kernel-PCA algorithm is as follows:

1. Compute a centralized kernel matrix \tilde{K}:

    \tilde{K} = K - 1_n K - K 1_n + 1_n K 1_n    (4)

   where 1_n is an n \times n matrix with all values set to 1/n.

2. Compute eigenvalues \lambda_1, \ldots, \lambda_n and corresponding eigenvectors v_1, \ldots, v_n of matrix \tilde{K}.

3. Normalize each eigenvector by the square root of its corresponding eigenvalue (only for the non-zero eigenvalues \lambda_1, \ldots, \lambda_m):

    \tilde{v}_i = v_i / \sqrt{\lambda_i}, \quad i = 1, \ldots, m    (5)

The i-th eigenvector in feature space, denoted f_i, is:

    f_i = (f(A_1), \ldots, f(A_n)) \tilde{v}_i.    (6)

The set of eigenvectors \{f_1, \ldots, f_m\} is an orthonormal basis for the subspace spanned by \{f(A_1), \ldots, f(A_n)\}. Let x be a vector in input space V whose projection into feature space is denoted by f(x). f(x) can be uniquely expressed as a linear combination of the basis vectors \{f_i\} with coefficients \{\alpha_i^x\}, plus a vector u_x in the orthogonal complement of span\{f_1, \ldots, f_m\} in F:

    f(x) = \sum_{i=1}^m \alpha_i^x f_i + u_x    (7)

Note that \alpha_i^x = \langle f(x), f_i \rangle. Using equations (3) and (6), \alpha_i^x can be expressed as:

    \alpha_i^x = (K(x, A_1), \ldots, K(x, A_n)) \tilde{v}_i.    (8)

We define a projection T: V \to R^m as:

    T(x) = (\tilde{v}_1, \ldots, \tilde{v}_m)^T (K(x, A_1), \ldots, K(x, A_n))^T.    (9)

The following property holds for projection T: if f(x) = \sum_{i=1}^m \alpha_i^x f_i + u_x and f(y) = \sum_{i=1}^m \alpha_i^y f_i + u_y, then:

    \|f(x) - f(y)\|^2 = \|T(x) - T(y)\|^2 + \|u_x - u_y\|^2    (10)

Equation (10) implies that projection T preserves distances in the feature subspace spanned by \{f(A_1), \ldots, f(A_n)\}.
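To make the embedding concrete, the following is a minimal numpy sketch of equations (4), (5) and (9), assuming a precomputed kernel matrix over the reference sessions; all names are illustrative. As in equation (9) as printed, the kernel vector of a new session is used without centering.

    # Minimal numpy sketch of the kernel-PCA embedding, equations (4)-(9).
    import numpy as np

    def fit_kpca_basis(K):
        """K: (n, n) kernel matrix over reference sessions A_1..A_n.
        Returns the (n, m) matrix of normalized eigenvectors defining T."""
        n = K.shape[0]
        one_n = np.full((n, n), 1.0 / n)
        # Equation (4): centralize the kernel matrix.
        K_tilde = K - one_n @ K - K @ one_n + one_n @ K @ one_n
        lam, v = np.linalg.eigh(K_tilde)   # eigenvalues in ascending order
        keep = lam > 1e-10                 # keep non-zero eigenvalues only
        # Equation (5): divide each eigenvector by sqrt(its eigenvalue).
        return v[:, keep] / np.sqrt(lam[keep])

    def project_T(V_tilde, k_x):
        """Equation (9): embed a session into R^m given its kernel vector
        k_x = (K(x, A_1), ..., K(x, A_n))."""
        return V_tilde.T @ k_x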

3.2. Kernel-PCA for speaker recognition
Given training sessions for a speaker and a test session, it is desirable to project both the training sessions and the test session into a space which can be naturally modeled while still preserving relevant information. Relevant information is defined in this paper as distances in the feature space F defined by a kernel function. Equation (9) suggests such a projection. Using projection T has the advantage of having R^m as a natural target space for modeling. Equation (10) quantifies how much distances are distorted by projection T. In order to capture some of the information lost by projection T, we define a second projection:

    U(x) = u_x.    (11)

Although we cannot apply projection U explicitly, we can easily calculate the distance between two projected vectors x and y given their distance in feature space F and their distance after projection with T:

    \|U(x) - U(y)\|^2 = \|f(x) - f(y)\|^2 - \|T(x) - T(y)\|^2    (12)

Using both projections T and U enables us to capture the relevant information. The subspace spanned by \{f(A_1), \ldots, f(A_n)\} is named the common-speaker subspace, as attributes that are common to several speakers will typically be mapped into that subspace. The complementary subspace is named the speaker-unique subspace, as attributes that are unique to a single speaker will typically be projected to that subspace. Note that our approach is a generalization of the approach taken in [4] and later in [6] and [7], where both input and feature spaces are Euclidean.
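Note that the feature-space distance in (12) is itself available from the kernel alone, since \|f(x) - f(y)\|^2 = K(x,x) - 2K(x,y) + K(y,y). The following minimal sketch combines this identity with equation (12); it reuses the illustrative helpers from the previous sketch.

    # Minimal sketch of equation (12): squared distance in the
    # speaker-unique subspace, computed without an explicit U.
    import numpy as np

    def sus_sq_distance(kernel, V_tilde, refs, x, y):
        # Kernel trick: ||f(x) - f(y)||^2 = K(x,x) - 2K(x,y) + K(y,y).
        d_feat = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
        k_x = np.array([kernel(x, a) for a in refs])
        k_y = np.array([kernel(y, a) for a in refs])
        d_css = np.sum((V_tilde.T @ k_x - V_tilde.T @ k_y) ** 2)
        return d_feat - d_css          # equation (12)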

3.3. Modeling in common-speaker subspace (CSS)
The purpose of projecting the common-speaker subspace into R^m using the distance-preserving projection T is to enable modeling of intersession speaker variability. Intersession variability modeling has proven to be extremely successful for speaker recognition [4], [6-10]. As in [4], a target speaker s is modeled by a multivariate normal distribution N(\mu_s, \Sigma_s). \mu_s is estimated from the training data of speaker s using maximum likelihood estimation. Given n(s) training sessions for speaker s, let T(x_{s,1}), \ldots, T(x_{s,n(s)}) denote the projected spoken sessions. \mu_s is estimated as:

    \mu_s = \frac{1}{n(s)} \sum_{i=1}^{n(s)} T(x_{s,i}).    (13)

In order to be able to estimate the covariance matrix given limited training data for target speakers, the covariance matrix is shared among all speakers of the same gender G (\Sigma_s = \Sigma_G). The covariance matrix is an m \times m matrix and is estimated using a separate development dataset. Given a development dataset of k speakers of gender G, we first estimate \Sigma_G using a maximum likelihood criterion:

    \Sigma_G = \frac{1}{\sum_s n(s)} \sum_s \sum_{i=1}^{n(s)} (T(x_{s,i}) - \mu_s)(T(x_{s,i}) - \mu_s)^T.    (14)

We then regularize \Sigma_G by adding a positive noise component \eta to the elements of its diagonal:

    \tilde{\Sigma}_G = \Sigma_G + \eta I.    (15)

The resulting covariance matrix is guaranteed to have eigenvalues greater than \eta, and is therefore invertible. Given an estimated speaker model (\mu_s, \tilde{\Sigma}_G) and a projected test session T(y), the likelihood of the projected test session given speaker s is:

    \Pr(T(y)|s) = \frac{1}{(2\pi)^{m/2} |\tilde{\Sigma}_G|^{1/2}} \exp\left\{ -\frac{1}{2} (T(y) - \mu_s)^T \tilde{\Sigma}_G^{-1} (T(y) - \mu_s) \right\}    (16)

For the sake of efficiency (when multiple speakers are to be scored), we diagonalize \tilde{\Sigma}_G by computing its eigenvectors \{e_i\} and eigenvalues \{\beta_i\}. Defining E as (e_1^T, \ldots, e_m^T)^T, equation (16) reduces to:

    \Pr(T(y)|s) = \frac{1}{(2\pi)^{m/2} \prod_{i=1}^m \sqrt{\beta_i}} \exp\left\{ -\sum_{i=1}^m \frac{[\tilde{T}(y) - \tilde{\mu}_s]_i^2}{2\beta_i} \right\}    (17)

where \tilde{T}(y) = E \cdot T(y), \tilde{\mu}_s = E \mu_s, and [x]_i is the i-th coefficient of x.
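The CSS model of equations (13)-(17) amounts to a shared-covariance Gaussian classifier in R^m. A minimal numpy sketch follows, with illustrative names; it returns log-likelihoods, i.e. the logarithm of equation (17).

    # Minimal sketch of CSS modeling and scoring, equations (13)-(17).
    import numpy as np

    def fit_css_model(sessions_by_speaker, eta):
        """sessions_by_speaker: list of (n_s, m) arrays of T(x_{s,i})."""
        mus = [s.mean(axis=0) for s in sessions_by_speaker]     # eq (13)
        centered = np.vstack([s - mu for s, mu in
                              zip(sessions_by_speaker, mus)])
        sigma = centered.T @ centered / len(centered)           # eq (14)
        sigma += eta * np.eye(sigma.shape[0])                   # eq (15)
        beta, E = np.linalg.eigh(sigma)   # eigen-decomposition of (15)
        return mus, beta, E

    def css_log_likelihood(T_y, mu_s, beta, E):
        """Log of equation (17) for one target speaker."""
        d = E.T @ (T_y - mu_s)            # rotate into the eigenbasis
        return (-0.5 * np.sum(d ** 2 / beta)
                - 0.5 * np.sum(np.log(2.0 * np.pi * beta)))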

3.4. Modeling in speaker-unique subspace (SUS)
Let \Delta_u^2(y,s) denote the average squared distance between a test session y projected into the speaker-unique subspace and the corresponding projected training sessions for speaker s. We assume

    \Pr(U(y)|s) = \frac{1}{\sqrt{2\pi}\,\sigma_u} \exp\left\{ -\frac{\Delta_u^2(y,s)}{2\sigma_u^2} \right\}    (18)

and estimate \sigma_u from the development data. Equation (18) is based on the assumption that intra-speaker variability in the speaker-unique subspace is normally distributed.

3.5. Modeling in feature space
Multiplying equations (17) and (18) results in (19):

    \Pr(f(y)|s) = \frac{1}{(2\pi)^{(m+1)/2} \sigma_u \prod_{i=1}^m \sqrt{\beta_i}} \exp\left\{ -\sum_{i=1}^m \frac{[\tilde{T}(y) - \tilde{\mu}_s]_i^2}{2\beta_i} - \frac{\Delta_u^2(y,s)}{2\sigma_u^2} \right\}.    (19)

4. Experiments

4.1. Datasets and Protocol
We use the NIST-2005 SRE, the NIST-2006 SRE and a subset of Switchboard-2 phase-II as a development dataset. In this paper we restrict our experiments to male-only data. We use the NIST-2006 SRE and Switchboard-2 phase-II for training a male UBM, for anchor/reference modeling (727 distinct speakers), and for common-speaker subspace and speaker-unique subspace modeling (295 speakers with 7-8 sessions each). We use 245 distinct speakers from the NIST-2005 SRE for TZ-normalization [4] (the same sessions are used for both T-norm and Z-norm). We use the core subset of the NIST-2004 SRE (which consists of a speaker set disjoint from the development dataset) for testing our techniques. In order to increase the number of trials, every target model was tested against every test session.

4.2. Systems description
The GMM baseline system was inspired by the GMM-UBM system described in [11]. The front-end is based on Mel-frequency cepstrum coefficients (MFCC). An energy-based voice activity detector is used to locate and remove non-speech frames, and the cepstral mean of the speech frames is removed. The final feature set consists of 13 cepstral coefficients augmented by 13 delta cepstral coefficients, extracted every 10 ms using a 25 ms window. Feature warping [12] is applied with a 300-frame window. GMMs of order 1024 with fixed diagonal covariance matrices are adapted from a UBM.

The kernel-PCA based system uses the same front-end as the GMM baseline. We examined 3 kernels based on GMMs trained for both training sessions and test sessions: K_1 denotes a linear kernel on the weighted-normalized GMM means inspired by [5], K_2 denotes a kernel on the GMM weights inspired by [13], and K_3 is a combination of both.

    K_1(x, y) = \sum_{g=1}^G w_g^{UBM} \sum_{d=1}^D \frac{\mu_{g,d}^x \mu_{g,d}^y}{2\sigma_{g,d}^2}    (20)

    K_2(x, y) = \sum_{g=1}^G \frac{w_g^x w_g^y}{w_g^{UBM}}    (21)

    K_3(x, y) = K_1(x, y) + \alpha K_2(x, y)    (22)

In (20-22), x and y denote a pair of sessions; w_g^x, w_g^y and \mu_{g,d}^x, \mu_{g,d}^y denote the weight and the d-th mean coordinate of the g-th Gaussian of the GMM trained for sessions x and y respectively, and \sigma_{g,d} denotes the d-th coordinate of the standard deviation of the g-th Gaussian. An additional baseline system, named the 'weights baseline', was developed using GMMs for parameterization of both training and testing sessions. Its scoring function is based on kernel K_2 and is:

    \Pr(x|y) = -\sum_{g=1}^G \frac{(w_g^x - w_g^y)^2}{w_g^{UBM}}    (23)
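For concreteness, the three kernels can be computed directly from the per-session GMM parameters. The following is a minimal sketch, with illustrative names, implementing (20)-(22) as printed.

    # Minimal sketch of the session kernels, equations (20)-(22).
    import numpy as np

    def k1(mu_x, mu_y, w_ubm, sigma):
        """Eq (20). mu_x, mu_y: (G, D) adapted means; w_ubm: (G,) UBM
        weights; sigma: (G, D) UBM standard deviations."""
        per_gauss = np.sum(mu_x * mu_y / (2.0 * sigma ** 2), axis=1)
        return float(w_ubm @ per_gauss)

    def k2(w_x, w_y, w_ubm):
        """Eq (21): kernel on the GMM weights."""
        return float(np.sum(w_x * w_y / w_ubm))

    def k3(mu_x, mu_y, w_x, w_y, w_ubm, sigma, alpha):
        """Eq (22): weighted combination of K1 and K2."""
        return k1(mu_x, mu_y, w_ubm, sigma) + alpha * k2(w_x, w_y, w_ubm)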

4.3. Results
In Table 1 we present the equal error rate (EER) and minimal DCF [14] for the baseline GMM, the weights baseline, and the kernel-PCA based systems. In Figure 1 we present a DET curve [14] for selected experiments.

Table 1. Results for the kernel-PCA based systems compared to the GMM baseline and weights baseline.

System                              EER (%)   minDCF
GMM baseline                         10.98    0.0425
K-PCA (K1) + Euclid. distance        15.76    0.0589
K-PCA (K1) + CSS modeling             9.33    0.0402
K-PCA (K1) + SUS modeling             6.30    0.0268
K-PCA (K1) + CSS+SUS modeling         6.21    0.0268
weights baseline                     13.79    0.0509
K-PCA (K2) + Euclid. distance        13.88    0.0532
K-PCA (K2) + CSS modeling             9.19    0.0364
K-PCA (K2) + SUS modeling            13.79    0.0469
K-PCA (K2) + CSS+SUS modeling         9.01    0.0355
K-PCA (K3) + CSS+SUS modeling         6.20    0.0261

Figure 1: Selected kernel-PCA based systems compared to the GMM baseline and the weights baseline.

5. Discussion
The kernel-PCA framework is a generalization of both the anchor modeling framework and GMM-means supervector modeling with a multivariate normal distribution [4]. Using kernel K_1 with common-speaker subspace modeling is closely related to anchor modeling with probabilistic modeling in anchor space [2] (proof in [5]), resulting in an EER of 9.33%. GMM supervector modeling with a multivariate normal distribution [4] is a special case of the kernel-PCA framework in which the reference sessions are chosen as all the development-data sessions used to model intersession variability, and a linear kernel on the normalized GMM means (kernel K_1) is used. As opposed to the aforementioned speaker recognition techniques, the kernel-PCA based framework is capable of explicitly integrating various sources of information encoded in the kernel function. Contrary to anchor modeling, the speaker-unique subspace can be modeled. These properties lead to an EER of 6.20%. Furthermore, the time complexity is significantly reduced compared to anchor modeling. Examining Table 1, we deduce that more reference speakers should probably be used for K_1, and that more development data is needed for robust CSS modeling using K_1, in contrast to the (lower-dimensional) K_2. We note that reasonable accuracy can be achieved using only GMM weights (EER = 9.01%), though such a system is highly correlated with a GMM-means based system. We are currently investigating methods for improved modeling in the common-speaker subspace and extensions of the kernel-PCA framework to related applications.

6. References
[1] D. E. Sturim, D. A. Reynolds, E. Singer and J. P. Campbell, "Speaker indexing in large audio databases using anchor models," in Proc. ICASSP, 2001.
[2] M. Collet, Y. Mami, D. Charlet and F. Bimbot, "Probabilistic anchor models approach for speaker verification," in Proc. Interspeech, 2005, pp. 2005-2008.
[3] B. Schölkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, MA, 2002.
[4] H. Aronowitz, D. Irony and D. Burshtein, "Modeling intra-speaker variability for speaker recognition," in Proc. Interspeech, 2005.
[5] E. Noor and H. Aronowitz, "Efficient language identification using anchor models and support vector machines," in Proc. ISCA Odyssey Workshop, 2006.
[6] S. S. Kajarekar, "Four weightings and a fusion: a cepstral-SVM system for speaker recognition," in Proc. ASRU, 2005.
[7] A. O. Hatch, S. Kajarekar and A. Stolcke, "Within-class covariance normalization for SVM-based speaker recognition," in Proc. Interspeech, 2006.
[8] R. Vogt and S. Sridharan, "Experiments in session variability modeling for speaker verification," in Proc. ICASSP, 2006.
[9] P. Kenny, G. Boulianne, P. Ouellet and P. Dumouchel, "Joint factor analysis versus eigenchannels in speaker recognition," to appear in IEEE Trans. Audio, Speech and Language Processing.
[10] W. M. Campbell, D. E. Sturim, D. A. Reynolds and A. Solomonoff, "SVM based speaker verification using a GMM supervector kernel and NAP variability compensation," in Proc. ICASSP, 2006, pp. 97-100.
[11] D. A. Reynolds, T. F. Quatieri and R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1-3, pp. 19-41, 2000.
[12] J. Pelecanos and S. Sridharan, "Feature warping for robust speaker verification," in Proc. ISCA Odyssey Workshop, 2001, pp. 213-218.
[13] W. M. Campbell, J. P. Campbell, D. A. Reynolds, D. A. Jones and T. R. Leek, "High-level speaker verification with support vector machines," in Proc. ICASSP, 2004.
[14] The NIST Year 2004 Speaker Recognition Evaluation Plan, http://www.nist.gov/speech/tests/spk/2004/.
