MUSIC IDENTIFICATION WITH WEIGHTED FINITE-STATE TRANSDUCERS

Eugene Weinstein 1,2 and Pedro Moreno 1

1 Google Inc., 76 Ninth Avenue, New York, NY 10011
2 Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012

ABSTRACT

Music identification is the process of matching an audio stream to a particular song. Previous work has relied on hashing, where an exact or almost-exact match between local features of the test and reference recordings is required. In this work we present a new approach to music identification based on finite-state transducers and Gaussian mixture models. We apply an unsupervised training process to learn an inventory of music phone units similar to phonemes in speech. We also learn a unique sequence of music units characterizing each song. We further propose a novel application of transducers for recognition of music phone sequences. Preliminary experiments demonstrate an identification accuracy of 99.5% on a database of over 15,000 songs, running faster than real time.

Index Terms— Music identification, acoustic modeling, finite-state transducers.

1. INTRODUCTION

Automatic identification of music has recently been of substantial interest [4, 1, 3]. This technology can be applied to a number of usage scenarios. End users can identify the title, album, and recording artist(s) of a song from just a short audio snippet. Content distribution networks can identify copyrighted audio within their systems. Finally, recording labels can monitor radio broadcasts to ensure correct accounting. Music identification is challenging because the recording might be noisy, of poor quality, or marred by channel effects. In addition, the test recording might consist of only a few seconds of audio, which must be aligned to the reference recording to make a match.

Previous work in music identification can be classified into hashing and search approaches. Hashing approaches (e.g., [4]) compute local fingerprints for an audio snippet, retrieve songs with matching fingerprints from a hash table, and pick amongst these candidates using some metric of match accuracy. Some hashing approaches [3] learn the features discriminatively. In a search-based approach [1], cepstral features computed over the audio stream are decoded directly into a sequence of audio events, as in speech recognition. Both the decoding and the mapping of sound sequences to songs are driven by a hidden Markov model (HMM) representation. However, that system looks only for atomic sound sequences of a particular length, presumably to control search complexity.

In this work we present a search-based approach using finite-state transducers (FSTs). We learn an inventory of music sound units via clustering, and train a Gaussian mixture model for each unit. The use of Gaussian mixtures to model the acoustics of music phones allows us to leverage techniques commonly used in speech recognition. Our acoustic modeling approach tolerates small variations in acoustic conditions more naturally than hashing, which is based on exact or almost-exact matching of fingerprints. In our approach, songs are represented by unique sequences of music phones, and we construct a compact recognition transducer in which each path corresponds to a unique song. This representation offers several advantages. For instance, generic automata operations such as minimization allow us to remove redundancies in the song graph, making efficient search possible. Effectively, we propose to transform the problem of music identification into a string matching operation, with FSTs as a natural formalism for constraining the search.

2. ACOUSTIC MODELING

Our acoustic modeling approach consists of jointly learning an inventory of music phones and the sequence of units best representing each song. We represent each song as a sequence of mel-frequency cepstral coefficient (MFCC) vectors. Cepstra have been used extensively in speech processing, and have also recently been shown to be effective in the analysis of music [1, 9]. We use the first twelve coefficients, the energy, and their first and second derivatives to produce a 39-dimensional feature vector.

Each song is initially broken into pseudo-stationary segments. We use 100 ms windows over the feature stream. A single diagonal-covariance Gaussian model is fitted to each window, and the Kullback-Leibler (KL) divergence is measured between adjacent windows. We hypothesize segment boundaries where the KL divergence exceeds an experimentally determined threshold. We next apply clustering to the segment collection. For each segment we train a single initial diagonal-covariance Gaussian model. As in [5], we use maximum likelihood as the objective distance function rather than the more common Euclidean distance; see [5] for further details.
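The segmentation step described above can be sketched as follows, assuming the 39-dimensional MFCC features have already been computed. The window length, variance floor, and boundary threshold below are illustrative values, not those used in the paper.

```python
import numpy as np

def kl_diag_gauss(mu_p, var_p, mu_q, var_q):
    """KL divergence D(p || q) between two diagonal-covariance Gaussians."""
    return 0.5 * np.sum(np.log(var_q / var_p)
                        + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def segment_boundaries(features, win=10, threshold=25.0):
    """Hypothesize segment boundaries from a (num_frames x 39) feature matrix.

    A diagonal Gaussian is fitted to each window of `win` consecutive frames
    (e.g., 10 frames of 10 ms each would give 100 ms windows), and a boundary
    is hypothesized wherever the KL divergence between adjacent windows
    exceeds `threshold`.
    """
    boundaries = []
    prev = None
    for start in range(0, len(features) - win + 1, win):
        chunk = features[start:start + win]
        mu, var = chunk.mean(axis=0), chunk.var(axis=0) + 1e-6  # floor variances
        if prev is not None and kl_diag_gauss(prev[0], prev[1], mu, var) > threshold:
            boundaries.append(start)
        prev = (mu, var)
    return boundaries
```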

Fig. 1. Average edit distance per song vs. training iteration, shown for music phone inventories of 1,024, 512, and 256 units.

Since song data labeled with music phone transcriptions is not available, we cannot use the standard EM training algorithm. Instead we use an approach similar to that of [1]. Music phones are represented with Gaussian mixture acoustic models (GMMs). We alternate between finding the best music phone transcription per song given the current models and refining the GMMs given the current transcriptions. Because the reference transcription changes at each iteration of GMM learning, it is not possible to use training data likelihood as a measure of convergence. We instead use the edit distance, a string similarity metric defined for our purposes as the minimal number of insertions, substitutions, and deletions of music phones required to transform one transcription into another. For a song set S, let t_i(s) be the transcription given to song s at iteration i and ED(a, b) the edit distance of sequences a and b. At each iteration i, we compute the total edit distance C_i = Σ_{s ∈ S} ED(t_i(s), t_{i-1}(s)) as our convergence measure. We have found C_i to converge after around twenty iterations. Figure 1 illustrates how this quantity changes during training for three phone inventory sizes.
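For concreteness, a minimal sketch of this convergence computation is shown below. The representation of transcriptions as Python dicts mapping a song identifier to a list of phone labels is an assumption for illustration only.

```python
def edit_distance(a, b):
    """Minimal number of insertions, deletions, and substitutions
    turning phone sequence a into phone sequence b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def total_edit_distance(prev_transcripts, cur_transcripts):
    """C_i = sum over songs of ED(t_i(s), t_{i-1}(s)); training is
    considered converged once this total stops decreasing."""
    return sum(edit_distance(cur_transcripts[s], prev_transcripts[s])
               for s in cur_transcripts)
```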

3. RECOGNITION TRANSDUCER

Our music identification system is based on weighted finite-state transducers and Viterbi decoding, as is common in speech recognition [8]. The decoding is based on the acoustic model described in the previous section and on a compact transducer that maps music phone sequences to the corresponding song identifiers. This section describes the algorithms for constructing such a transducer. Given a finite set of songs S, the music identification task consists of determining the songs in S that contain a query song snippet x. Since a song snippet may be an arbitrary sample extracted from a song, the recognition transducer must map any sequence of music phones appearing in a song to the corresponding song identifiers.

3.1. Factor automata

More formally, let ∆ denote the set of music phones. The song set S = {x_1, ..., x_m} is a set of sequences in ∆*. A factor, or substring, of a sequence x ∈ ∆* is a sequence of consecutive phones appearing in x. Thus, y is a factor of x iff there exist u, v ∈ ∆* such that x = uyv. The set of factors of x is denoted by Fact(x) and, more generally, the set of factors of all songs in S is denoted by Fact(S). A correct transcription of an in-set song snippet is thus an element of Fact(S). In what follows we assume that only one song identifier s_x is associated with each factor x; this turns out to be empirically the case even for relatively short song snippets. The recognition transducer T must thus represent the following mapping:

[[T]] : Fact(S) → N,    x ↦ [[T]](x) = s_x .        (1)
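To make the scale of this mapping concrete before turning to automata, the sketch below realizes (1) naively as a hash table from every factor to its song identifier. The `songs` dict (song identifier to phone-label list) is a hypothetical data structure used only for illustration.

```python
def naive_factor_index(songs):
    """Naive realization of mapping (1): enumerate every factor of every
    song and record its song identifier. For the 15,000-song set discussed
    below this table would have tens of billions of entries, which is what
    motivates the compact factor-automaton construction that follows."""
    index = {}
    for sid, phones in songs.items():
        for i in range(len(phones)):
            for j in range(i + 1, len(phones) + 1):
                index[tuple(phones[i:j])] = sid
    return index
```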

The size of Fact(S) may be quite large, and thus it is not immediately clear that it is possible to construct a compact transducer with this property. In our experiments, |S| is about 15,000 and the average length of a song is more than 1,700 music phones. Thus, in the worst case, |Fact(S)| can be as large as 15,000 × 1,700² ≈ 43 × 10⁹. The size of a naive trie-based representation would thus be prohibitive. However, several automata-theoretic results suggest that a compact representation may be possible [2]. In fact, the size of the minimal deterministic automaton representing a single sequence x is linear in |x|. More precisely [2]:

|Q| ≤ 2|x| − 2,    |E| ≤ 3|x| − 4,        (2)

where Q and E are the sets of states and transitions of the factor automaton of x. For the minimal deterministic factor automaton representing S, we have found that the number of transitions is about twice that of the original automaton. In particular, the minimal automaton representing all 15,000 songs has about 27.5M transitions and the corresponding minimal factor automaton about 59.5M transitions.

3.2. Factor Transducer Construction

In the following, we describe the construction of the factor automaton of S from a finite automaton accepting the sequences of S and, more generally, the construction of a compact recognition transducer T. We construct a deterministic and minimal automaton representing the sequences in S and, more generally, a deterministic (subsequential) and minimal finite-state transducer mapping each song to its identifier, using transducer determinization and minimization algorithms [6, 7]. Figure 2 shows the transducer T0 mapping each song to its identifier before application of determinization and minimization, for the case where S is reduced to three short songs.

Let A0 be the acceptor obtained by omitting the output labels of T0. The factor automaton A of S is constructed from A0 by creating an ε-transition from the initial state of A0 to all other states of A0 and making all states of A0 final. This automaton clearly accepts any factor of a sequence accepted by A0. Applying determinization and minimization, we create the minimal deterministic automaton A1 representing Fact(S). Figure 3(a) shows the result of this construction when applied to the acceptor A0 obtained by omitting the output labels of the transducer of Figure 2.

Fig. 2. Finite-state transducer T0 mapping each song to its identifier, shown for a three-song set (Beatles--Let_It_Be, Madonna--Ray_Of_Light, Van_Halen--Right_Now); arcs carry music phone input labels (e.g., mp_72) and song-identifier or ε output labels.
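The paper performs this construction with transducer determinization and minimization. As a self-contained illustration of the same idea, the sketch below builds a deterministic factor automaton of a song set directly by subset construction over phone positions (minimization is omitted), with songs given as a dict from song identifier to a list of phone labels. This is an illustrative reimplementation, not the system's actual FST code.

```python
from collections import defaultdict

def factor_dfa(songs):
    """Determinize the factor NFA of a song set by subset construction.

    The NFA has one state per position (song_id, i); epsilon-transitions
    from the start to every position and marking every state final make it
    accept exactly Fact(S). Each DFA state is the set of positions reachable
    after reading a phone sequence, so the song identifiers appearing in it
    are the candidate songs for that factor.
    """
    start = frozenset((sid, i) for sid, seq in songs.items()
                      for i in range(len(seq) + 1))
    transitions = {}
    stack = [start]
    while stack:
        state = stack.pop()
        if state in transitions:
            continue
        out = defaultdict(set)
        for sid, i in state:
            seq = songs[sid]
            if i < len(seq):
                out[seq[i]].add((sid, i + 1))
        transitions[state] = {a: frozenset(nxt) for a, nxt in out.items()}
        stack.extend(transitions[state].values())
    return transitions, start

def lookup(transitions, start, snippet_phones):
    """Return the set of song identifiers whose songs contain the snippet
    as a factor (empty set if the snippet is not in Fact(S))."""
    state = start
    for phone in snippet_phones:
        if phone not in transitions[state]:
            return set()
        state = transitions[state][phone]
    return {sid for sid, _ in state}
```

For a large song set this naive subset construction is itself impractical; the paper relies on the determinization and minimization algorithms of [6, 7] and on the weighted representation described next.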


Fig. 3. (a) Deterministic and minimal unweighted factor acceptor A1 for two songs. (b) Deterministic and minimal weighted factor acceptor A2 for the same two songs.

This leads to a compact automaton representing Fact(S), but it does not allow us to output the song identifier associated with each factor. To accomplish this, one could augment all accepting paths of the automaton A1, i.e., all factors, with output labels corresponding to the song identifiers. However, since output labels cannot be shared among factors along the same path, this would require splitting states to separate paths and would yield a transducer exponentially larger than A1. We instead create a compact weighted acceptor over the tropical semiring that accepts Fact(S) and associates the total weight s_x with each x ∈ Fact(S). A crucial advantage of this representation is that it permits weighted determinization and minimization [6], during which the song identifier is treated as a weight that can be distributed along a path. The property that the sum of the weights along the path labeled with x is s_x is preserved by these operations.

A2 is constructed by adding weights to the factor automaton A described above. The weight assigned to an ε-transition reaching state q is the integer song identifier of the unique accepting path in A0 going through q; all other weights are zero. This guarantees that the weight of each factor x is exactly s_x. The final automaton A2 is produced by applying weighted determinization and minimization over the tropical semiring. The weighted acceptor A2 is then transformed into a transducer T by simply treating each output weight integer as a regular output symbol. Figure 3(b) shows the weighted automaton A2 corresponding to the unweighted automaton A1 of Figure 3(a). Note that the number of states and transitions has not increased from A1 to A2. Remarkably, even in the case of 15,000 songs, the total number of transitions of the weighted acceptor A2 is about 59.6M, only about 0.17% more than that of A1. This confirms empirically the benefits of our representation. Thus, the construction just described leads to a recognition transducer T whose size is less than 2.2 times that of the minimal deterministic transducer representing all songs.

3.3. Weighted Factor Transducers

The finite-state transducer T just described is unweighted; that is, all song snippets are assigned the same log-likelihood. However, the distribution of the weights along a given path can be changed, without affecting the total weight, to substantially improve recognition speed when using a Viterbi decoder with beam pruning. This can be done by applying the weight pushing algorithm in the log semiring [7] to the transducer T. As a result, each prefix x of a path in T is weighted according to the frequency of snippets starting with x, so that more frequent factors are preferred during decoding. Our experiments show that this can result in substantially higher accuracy at a given speed.
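To illustrate the effect of weight pushing, the sketch below implements log-semiring pushing for an acyclic automaton given as arc lists, a simplified stand-in for T (whose weights are all zero once song identifiers have become output labels). It is a schematic reimplementation, not the optimized algorithm of [7] used in the paper, and it assumes every state lies on a path to a final state.

```python
import math
from collections import defaultdict

def log_add(a, b):
    """⊕ in the log semiring: -log(exp(-a) + exp(-b))."""
    if a == math.inf:
        return b
    if b == math.inf:
        return a
    m = min(a, b)
    return m - math.log1p(math.exp(-abs(a - b)))

def topological_order(arcs, start):
    """Topological order of states reachable from start (automaton assumed acyclic)."""
    out = defaultdict(list)
    for src, _, _, dst in arcs:
        out[src].append(dst)
    order, seen = [], set()
    def visit(q):
        if q in seen:
            return
        seen.add(q)
        for n in out[q]:
            visit(n)
        order.append(q)
    visit(start)
    return list(reversed(order))

def push_weights(arcs, finals, start):
    """Log-semiring weight pushing for an acyclic weighted automaton.

    arcs: list of (src, label, weight, dst); finals: dict state -> final weight.
    After pushing, the weight accumulated along any prefix reflects how much
    probability mass (here, how many snippets) shares that prefix, which is
    what allows beam pruning to keep the most promising hypotheses.
    """
    # d[q] = ⊕-sum of the weights of all paths from q to a final state.
    d = defaultdict(lambda: math.inf)
    for q, w in finals.items():
        d[q] = w
    out = defaultdict(list)
    for src, label, w, dst in arcs:
        out[src].append((label, w, dst))
    for q in reversed(topological_order(arcs, start)):
        for _, w, dst in out[q]:
            d[q] = log_add(d[q], w + d[dst])
    new_arcs = [(src, label, -d[src] + w + d[dst], dst)
                for src, label, w, dst in arcs]
    new_finals = {q: -d[q] + w for q, w in finals.items()}
    return new_arcs, new_finals, d[start]  # d[start] is the total path weight
```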

4. EXPERIMENTAL RESULTS

Our data set consisted of 15,455 songs in 128 kbps MP3 format. The average song duration was 3.9 minutes, for a total of over 1,000 hours of training audio. For the main set of experiments, we trained and tested on "clean" data, i.e., audio without any added noise or distortions. However, a preliminary experiment on 1,000 song snippets mixed with additive white noise at an SNR of 25 dB resulted in an identification accuracy of 95.9%, suggesting that our acoustic models are robust to noise.

Figure 4 gives accuracy rates for three experiments. All figures are for in-set song classification only; the rejection performance of the classifiers is tested separately below. We selected 15,455 randomly collected segments (one per song) from our song collection. FS is the full-song identification task, in which we use the full song transducer T0 and full song snippets; this experiment tests the performance of the system under ideal circumstances. PS is the partial-song task, in which identification is performed on ten seconds of song audio with the unweighted factor transducer T. PS-PUSH is the same task after weight pushing has been used to add weights to T.

Fig. 4. In-set accuracy for the three experiments (FS, PS, and PS-PUSH), plotted against the search beam size.

We have also tested the ability of our system to reject out-of-set songs. We constructed a Gaussian mixture background acoustic model by clustering mixture components across all music phones. We then constructed a three-dimensional feature vector [L_r, L_b, (L_r − L_b)] for each song snippet, where L_r and L_b are the log-likelihoods of the best-path and background acoustic models, respectively. We applied a support vector machine (SVM) to 15,455 positive and 13,811 negative (out-of-set) song snippets. With ten-fold cross-validation and a radial basis function kernel, we achieve an accuracy of 96.4%.
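The rejection classifier can be sketched as follows, assuming the per-snippet log-likelihoods L_r and L_b have already been computed. The use of scikit-learn and the added feature scaling step are illustrative choices, not details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rejection_accuracy(L_r, L_b, in_set):
    """Ten-fold cross-validated accuracy of the in-set/out-of-set classifier.

    L_r, L_b: arrays of best-path and background-model log-likelihoods, one
    entry per snippet; in_set: boolean labels (True = in-set song). The
    feature vector is [L_r, L_b, L_r - L_b] as described in Section 4.
    """
    X = np.column_stack([L_r, L_b, L_r - L_b])
    y = np.asarray(in_set, dtype=int)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=10)
    return scores.mean()
```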

Music identification using the whole song audio works almost flawlessly (99.7% in-set identification accuracy). Ten-second snippets are recognized with the unweighted factor transducer at degraded accuracy (low 90s). However, weight pushing brings the accuracy back up to 99.5%. All experiments run faster than real time: for instance, with a search beam width of 12, the runtime is 0.48 times real time.

5. CONCLUSION

In this paper we describe a novel application of weighted finite-state transducers to music identification. In contrast to previous approaches, our system does not rely on obtaining a near-exact match between test and reference feature values. In addition, our system matches music phone sequences of arbitrary length. We have performed an initial set of identification experiments that demonstrate the accuracy and potential of our system.

We believe that the combination of GMMs for acoustic modeling and FSTs for song structure representation and manipulation offers some intriguing advantages over more traditional hashing schemes. GMMs naturally tolerate small changes in the music signal. Simple signal processing techniques such as cepstral mean and variance normalization can also be leveraged to handle small noise and channel distortion effects. Our approach also learns a symbolic representation for each song, i.e., how to transcribe each song as a sequence of music units. While we have not explored this aspect deeply, preliminary experiments suggest that this representation may yield interesting insights into structural features of music, such as repeated chorus elements and tune similarity, or plagiarism, across songs. In future work, we will characterize the performance of the system in the presence of various types of noise and distortions. We also plan to explore the use of discriminative learning techniques to further improve the performance of the system under noise and signal distortion conditions.

Acknowledgements

The authors would like to sincerely thank Mehryar Mohri for his substantial contribution to this project. In addition, we would like to acknowledge all the members of the Google speech team, especially Michiel Bacchiani, Johan Schalkwyk, and Michael Riley, for their help and advice. This work was partially funded by NYSTAR and by the Dept. of the Army Award Number W23RYX-3275-N605. The U.S. Army Medical Research Acquisition Activity is the awarding and administering acquisition office. The content of this material does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

6. REFERENCES

[1] E. Batlle, J. Masip, and E. Guaus. Automatic song identification in noisy broadcast audio. In IASTED International Conference on Signal and Image Processing, Kauai, Hawaii, 2002.

[2] M. Crochemore. Transducers and repetitions. Theoretical Computer Science, 45(1):63–86, 1986.

[3] M. Fink, M. Covell, and S. Baluja. Social and interactive television applications based on real-time ambient audio identification. In EuroITV, May 2006.

[4] J. Haitsma, T. Kalker, and J. Oostveen. Robust audio hashing for content identification. In Content-Based Multimedia Indexing (CBMI), Brescia, Italy, September 2001.

[5] M. Bacchiani and M. Ostendorf. Joint lexicon, acoustic unit inventory and model design. Speech Communication, 29:99–114, November 1999.

[6] M. Mohri. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311, 1997.

[7] M. Mohri. Statistical natural language processing. In M. Lothaire, editor, Applied Combinatorics on Words. Cambridge University Press, 2005.

[8] M. Mohri, F. C. N. Pereira, and M. Riley. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88, 2002.

[9] D. Pye. Content-based methods for the management of digital music. In ICASSP, pages 2437–2440, Istanbul, Turkey, June 2000.
