Structured Inference for Recurrent Hidden Semi-Markov Model

Haoli Bai
Sch. of Comp. Sci. & Eng., The Chinese Univ. of Hong Kong
[email protected]

Hao Liu
SMILE Lab, Yingcai Honors College, Univ. of Elec. Sci. & Tech. of China
[email protected]

Lirong He, Zenglin Xu
SMILE Lab, Sch. of Comp. Sci. & Eng., Univ. of Elec. Sci. & Tech. of China
[email protected], [email protected]

Abstract

Recent advances in generative sequential modeling have suggested combining recurrent neural networks with state space models. This combination can model not only the long-term dependencies in sequential data but also the uncertainty in the hidden states. Inheriting these advantages, we present a structured and stochastic sequential neural network, which models the uncertainty of the segmentation and labels via discrete random variables. We also present an efficient bi-directional inference network, which reparameterizes the categorical segmentation with the Gumbel-Softmax approximation and resorts to Stochastic Gradient Variational Bayes. Experimental results demonstrate significant improvements of our model over state-of-the-art methods on a number of tasks.

1 Introduction

Models for sequential data analysis such as recurrent neural networks (RNNs) [1] and hidden Markov models (HMMs) [2] are widely used. Recent literature has investigated approaches that combine probabilistic generative models and recurrent neural networks for the sake of their complementary strengths in nonlinear representation learning and effective parameter estimation [3, 4, 5]. However, most existing models are designed primarily for continuous settings and do not extend to discrete latent variables [3, 6, 7, 8]. Although the work by [4] utilizes discrete variables for predicting the labels of segments, its inference approach does not explicitly exploit structured bi-directional temporal information, which may lead to suboptimal performance. To address these issues, we present the Stochastic Sequential Neural Network (SSNN), which is composed of a generative network and an inference network. The generative network is a Recurrent HSMM [4], which includes a continuous sequence (i.e., hidden states in the RNN) as well as two discrete sequences (i.e., segmentation variables and labels in the SSM). The inference network takes advantage of bi-directional temporal information through augmented variables, and efficiently approximates the categorical variables for segmentation and segment labels via the Gumbel-Softmax approximation [9, 10]. Thus, SSNN can not only model complex and long-range dependencies in sequential data, but also maintain the structure learning ability of SSMs with efficient inference. Experimental results in terms of both model fitting and labeling of learned segments demonstrate the promising performance of the proposed model in a number of tasks.

2 Model

First, we introduce the notation. $x_{1:T} = [x_1, x_2, \ldots, x_T]$ denotes a temporal sequence of vectors, which depends on the deterministic variables $h_{1:T} = [h_1, h_2, \ldots, h_T]$ of the RNN, the hidden state variables $z_{1:T} = [z_1, z_2, \ldots, z_T]$ and the duration variables $d_{1:T} = [d_1, d_2, \ldots, d_T]$ of the HSMM. Here $x_t \in \mathbb{R}^m$, $h_t \in \mathbb{R}^h$, $z_t \in \{1, 2, \ldots, K\}$ and $d_t \in \{1, 2, \ldots, M\}$. We denote by $s_{1:L} = [s_1, s_2, \ldots, s_L]$ the starting time steps of the $L$ segments. An illustration is given in Figure 1(a). For simplicity of exposition, we present our model on a single sequence.


Figure 1: (a) A visualization of the sequence $x_{1:T}$ with the corresponding time segments, hidden states $z_{1:T}$ and duration variables $d_{1:T}$; (b, c) the generative network and the inference network of SSNN, respectively.

2.1 Generative Model

In order to model long-range temporal dependencies and the uncertainty in the segmentation and labeling of time series, we aim to take advantage of both RNNs and HSMMs, and to learn categorical information and representation information from the observed data recurrently. As illustrated in Figure 1(b), we design a generative network with one sequence of continuous latent variables modeling the recurrent hidden states, and two sequences of discrete variables denoting the segment durations and labels, respectively. The joint probability can be factorized as
\[
p_\theta(x_{1:T}, z_{1:T}, d_{1:T}) = p_\theta(x_{1:T} \mid z_{1:T}, d_{1:T})\, p_\theta(z_1)\, p_\theta(d_1 \mid z_1) \prod_{t=2}^{T} p_\theta(z_t \mid z_{t-1}, d_{t-1})\, p_\theta(d_t \mid z_t, d_{t-1}). \tag{1}
\]

To learn more interpretable latent labels, we follow the design of the HSMM and set $z_t$ and $d_t$ to be categorical random variables. Their distributions are
\[
p_\theta(z_t \mid z_{t-1}, d_{t-1}) =
\begin{cases}
\mathbb{I}(z_t = z_{t-1}) & \text{if } d_{t-1} > 1 \\
p_\theta(z_t \mid z_{t-1}) & \text{otherwise}
\end{cases},
\qquad
p_\theta(d_t \mid z_t, d_{t-1}) =
\begin{cases}
\mathbb{I}(d_t = d_{t-1} - 1) & \text{if } d_{t-1} > 1 \\
p_\theta(d_t \mid z_t) & \text{otherwise}
\end{cases},
\]
where $\mathbb{I}(x)$ is the indicator function (whose value equals 1 if $x$ is true, and 0 otherwise). The transition probabilities $p_\theta(z_t \mid z_{t-1})$ and $p_\theta(d_t \mid z_t)$ are obtained by learning transition matrices. The joint emission probability $p_\theta(x_{1:T} \mid z_{1:T}, d_{1:T})$ can be further factorized over segments. Specifically, for the $i$-th segment $x_{s_i : s_i + d_{s_i} - 1}$ starting at $s_i$, the corresponding generative distribution is
\[
p_\theta(x_{s_i : s_i + d_{s_i} - 1} \mid z_{s_i}, d_{s_i}) = \prod_{t=s_i}^{s_i + d_{s_i} - 1} p_\theta(x_t \mid x_{s_i : t-1}, z_{s_i}) = \prod_{t=s_i}^{s_i + d_{s_i} - 1} p_\theta(x_t \mid h_t, z_{s_i}), \tag{2}
\]

where $h_t$ is the deterministic latent variable of the RNN. It can better model the complex dependencies among segments, and captures past information from the observation $x_{t-1}$ as well as the previous state $h_{t-1}$. We design $h_t = \sigma(W_x^{(z_{s_i})} x_{t-1} + W_h^{(z_{s_i})} h_{t-1} + b_h^{(z_{s_i})})$, where $\sigma(\cdot)$ is the tanh activation function, $W_x \in \mathbb{R}^{K \times h \times m}$ and $W_h \in \mathbb{R}^{K \times h \times h}$ are weight parameters, and $b_h \in \mathbb{R}^{K \times h}$ is the bias term. $W_x^{(z_{s_i})} \in \mathbb{R}^{h \times m}$ is the $z_{s_i}$-th slice of $W_x$, and similarly for $W_h^{(z_{s_i})}$ and $b_h^{(z_{s_i})}$. Finally, the distribution of $x_t$ given $h_t$ and $z_{s_i}$ is a Normal distribution,
\[
p_\theta(x_t \mid h_t, z_{s_i}) = \mathcal{N}(x_t; \mu, \sigma^2), \tag{3}
\]
where the mean is $\mu = W_\mu^{(z_{s_i})} h_t + b_\mu^{(z_{s_i})}$, and the covariance is a diagonal matrix with log diagonal elements $\log \sigma^2 = W_\sigma^{(z_{s_i})} h_t + b_\sigma^{(z_{s_i})}$. $\theta$ denotes all parameters of the generative model.
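To make the generative process of Eqs. (1)-(3) concrete, the following minimal Python sketch rolls the model forward by one time step: it applies the copy-or-resample transition for $(z_t, d_t)$, updates the state-dependent deterministic RNN state, and emits $x_t$ from the diagonal Gaussian. The transition matrix `A`, duration table `D`, and the dictionary `params` of per-state weight slices are illustrative placeholders, not notation from the paper.

```python
import numpy as np

def generative_step(x_prev, h_prev, z_prev, d_prev, A, D, params, rng):
    """One step of the generative model (sketch of Eqs. (1)-(3)).

    A: (K, K) matrix, A[i, j] = p(z_t = j | z_{t-1} = i)
    D: (K, M) matrix, D[k, m] = p(d_t = m + 1 | z_t = k)
    params[name][k] is the k-th (state-specific) slice of a weight tensor.
    """
    # Transition: copy the label and decrement the counter inside a segment,
    # otherwise start a new segment with a fresh label and duration.
    if d_prev > 1:
        z_t, d_t = z_prev, d_prev - 1
    else:
        z_t = rng.choice(A.shape[1], p=A[z_prev])
        d_t = rng.choice(D.shape[1], p=D[z_t]) + 1

    # State-dependent deterministic recurrence h_t with tanh activation.
    Wx, Wh, bh = params["Wx"][z_t], params["Wh"][z_t], params["bh"][z_t]
    h_t = np.tanh(Wx @ x_prev + Wh @ h_prev + bh)

    # Diagonal Gaussian emission p(x_t | h_t, z_t) as in Eq. (3).
    mu = params["Wmu"][z_t] @ h_t + params["bmu"][z_t]
    log_var = params["Wsig"][z_t] @ h_t + params["bsig"][z_t]
    x_t = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    return x_t, h_t, z_t, d_t
```

Because the segment label is copied while $d_t > 1$, indexing the weight slices by $z_t$ inside a segment is equivalent to indexing by the segment label $z_{s_i}$.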

2.2 Structured Inference

The marginal log-likelihood $\log p_\theta(x)$ is usually intractable, so we focus on maximizing the evidence lower bound (ELBO):
\[
\log p_\theta(x) \ge \mathcal{L}(x_{1:T}; \theta, \phi) = \mathbb{E}_{q_\phi(z_{1:T}, d_{1:T} \mid x_{1:T})}\big[\log p_\theta(x_{1:T}, z_{1:T}, d_{1:T}) - \log q_\phi(z_{1:T}, d_{1:T} \mid x_{1:T})\big],
\]
where $q_\phi(\cdot)$ denotes the approximate posterior distribution, and $\theta$ and $\phi$ denote the parameters of their corresponding distributions, respectively. We devise a bi-directional inference scheme and resort to the Stochastic Gradient Variational Bayes (SGVB) method, since it can efficiently learn the approximation with relatively low variance [11].
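In practice the expectation above is approximated with Monte Carlo samples from the inference network. A minimal sketch of such an estimate, with placeholder callables rather than the paper's actual implementation, is:

```python
def elbo_estimate(x, sample_q, log_p_joint, log_q, n_samples=1):
    """Monte Carlo estimate of the ELBO (sketch).

    sample_q(x)          -> (z, d) sampled from q_phi(z, d | x)
    log_p_joint(x, z, d) -> log p_theta(x, z, d)
    log_q(z, d, x)       -> log q_phi(z, d | x)
    """
    total = 0.0
    for _ in range(n_samples):
        z, d = sample_q(x)
        total += log_p_joint(x, z, d) - log_q(z, d, x)
    return total / n_samples
```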

2.2.1 Bi-directional Inference

In order to find a more informative approximation to the posterior, we augment both random variables $d_t$ and $z_t$ with bi-directional information in the inference network. Similar attempts have been explored in previous work [8, 12, 6]; however, they mainly focus on continuous variables, and little attention has been paid to discrete variables. Specifically, we first learn a bi-directional deterministic variable $\hat{h}_t = \mathrm{BiRNN}(x_{1:t}, x_{t:T})$, where BiRNN is a bi-directional RNN with each unit implemented as an LSTM [13]. Similar to [5], we further use a backward recurrent function $I_t = g_{\phi_I}(I_{t+1}, [x_t, \hat{h}_t])$ to explicitly capture forward and backward information in the sequence via $\hat{h}_t$, where $[x_t, \hat{h}_t]$ is the concatenation of $x_t$ and $\hat{h}_t$. The posterior approximation can be factorized as
\[
q_\phi(z_{1:T}, d_{1:T} \mid x_{1:T}) = q_\phi(z_1 \mid I_1)\, q_\phi(d_1 \mid z_1, I_1) \prod_{t=2}^{T} q_\phi(z_t \mid d_{t-1}, I_t)\, q_\phi(d_t \mid d_{t-1}, z_t, I_t), \tag{4}
\]

and the graphical model of the inference network is shown in Figure 1(c). We use $\phi$ to denote all parameters of the inference network. Furthermore, we design the posterior distributions of $d_t$ and $z_t$ to be categorical distributions:
\[
q(z_t \mid d_{t-1}, I_t; \phi) = \mathrm{Cat}(\mathrm{softmax}(W_z^T I_t)), \qquad q(d_t \mid d_{t-1}, z_t, I_t; \phi) = \mathrm{Cat}(\mathrm{softmax}(W_d^T I_t)), \tag{5}
\]
where Cat denotes the categorical distribution. The posterior distributions of $z_t$ and $d_t$ depend on both the forward sequences (i.e., $h_{t:T}$ and $x_{t:T}$) and the backward sequences (i.e., $h_{1:t-1}$ and $x_{1:t-1}$), leading to a more informative approximation. However, the reparameterization trick and its extensions [14] are not directly applicable because of the discrete random variables $d_t$ and $z_t$ in our model. We therefore turn to the recently proposed Gumbel-Softmax reparameterization trick [9, 10], described in the following.
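Before turning to that trick, the following sketch makes the structured inference pass concrete: it runs the backward recurrence producing $I_t$ and computes the categorical posterior parameters of Eq. (5). The single-layer tanh stand-in for $g_{\phi_I}$, the zero initial state, and all parameter names are our own simplifications; $\hat{h}_{1:T}$ is assumed to be precomputed by the BiRNN.

```python
import numpy as np

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

def structured_inference(x, h_hat, params):
    """Backward pass I_T, ..., I_1 and the posteriors of Eq. (5) (sketch).

    x:      (T, m) observations
    h_hat:  (T, r) BiRNN outputs
    params: Wg (H, H+m+r), bg (H,), Wz (H, K), Wd (H, M)
    """
    T = x.shape[0]
    H = params["Wg"].shape[0]
    I = np.zeros((T + 1, H))                           # I[T] acts as a zero initial state
    pi_z = np.zeros((T, params["Wz"].shape[1]))
    pi_d = np.zeros((T, params["Wd"].shape[1]))
    for t in reversed(range(T)):
        inp = np.concatenate([I[t + 1], x[t], h_hat[t]])
        I[t] = np.tanh(params["Wg"] @ inp + params["bg"])   # I_t = g(I_{t+1}, [x_t, h_hat_t])
        pi_z[t] = softmax(params["Wz"].T @ I[t])            # parameters of q(z_t | ., I_t)
        pi_d[t] = softmax(params["Wd"].T @ I[t])            # parameters of q(d_t | ., I_t)
    return pi_z, pi_d
```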

2.2.2 Gumbel-Softmax Reparameterization

The Gumbel-Softmax reparameterization provides an alternative for back-propagating through discrete random variables via the Gumbel-Softmax distribution, and circumvents the non-differentiable sampling from a categorical distribution. To use it, we first map the discrete pair $(z_t, d_t)$ to an $N$-dimensional vector $\gamma(t)$ with $\gamma(t) \sim \mathrm{Cat}(\pi(t))$, where $\pi(t)$ is an $N$-dimensional vector on the simplex and $N = K \times M$. Then we use $y(t) \in \mathbb{R}^N$ to represent the Gumbel-Softmax distributed variable:
\[
y_i(t) = \frac{\exp\big((\log \pi_i(t) + g_i)/\tau\big)}{\sum_{j=1}^{N} \exp\big((\log \pi_j(t) + g_j)/\tau\big)} \quad \text{for } i = 1, \ldots, N, \tag{6}
\]
where $g_i \sim \mathrm{Gumbel}(0, 1)$, and $\tau$ is the temperature, which will be elaborated in the experiments. We write $y(t) \sim \mathrm{Concrete}(\pi(t), \tau)$ following [10]. Now we can sample $y(t)$ from the Gumbel-Softmax posterior in place of the categorically distributed $\gamma(t)$. For brevity, we denote $F(z, d) = \log p_\theta(x_{1:T}, z_{1:T}, d_{1:T}) - \log q_\phi(z_{1:T}, d_{1:T} \mid x_{1:T})$, and let $\tilde{F}(y, g)$ be the corresponding approximation of $F(z, d)$ after the Gumbel-Softmax trick. The Gumbel-Softmax approximation of $\mathcal{L}(x_{1:T}; \theta, \phi)$ is
\[
\mathcal{L}(x_{1:T}; \theta, \phi) \approx \mathbb{E}_{y \sim \mathrm{Concrete}(\pi(t), \tau)}[\tilde{F}(y, g)] = \mathbb{E}_{g \sim \prod_{i=1}^{N} \mathrm{Gumbel}(0, 1)}[\tilde{F}(y, g)]. \tag{7}
\]

Then we learn the parameters $\theta$ and $\phi$ with back-propagation and the ADAM [15] optimizer.
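A minimal sketch of the sampling step in Eq. (6) is given below; the function and variable names are ours, and a real implementation would express it in an automatic-differentiation framework so that gradients flow through $y$.

```python
import numpy as np

def gumbel_softmax_sample(log_pi, tau, rng):
    """Draw one relaxed sample y ~ Concrete(pi, tau) as in Eq. (6).

    log_pi: (N,) log class probabilities (possibly unnormalized)
    tau:    temperature; as tau -> 0 the sample approaches a one-hot vector
    """
    g = -np.log(-np.log(rng.uniform(size=log_pi.shape)))  # Gumbel(0, 1) noise
    logits = (log_pi + g) / tau
    logits -= logits.max()                                # numerical stability
    y = np.exp(logits)
    return y / y.sum()

# Example with N = K * M = 6 classes and temperature tau = 0.5.
rng = np.random.default_rng(0)
y = gumbel_softmax_sample(np.log(np.full(6, 1.0 / 6.0)), tau=0.5, rng=rng)
```

Because the sample is a deterministic, differentiable function of $\log \pi$ given the Gumbel noise, the gradient of the relaxed objective in Eq. (7) can be back-propagated to $\phi$ and $\theta$.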

3 Experiment

We evaluate SSNN on several datasets across multiple scenarios. Specifically, we first evaluate its performance at finding complex structure and estimating data likelihood on a synthetic dataset and two speech datasets (TIMIT & Blizzard). Then we test SSNN on learning segmentations and latent labels on the Human activity [16], Drosophila [17] and PhysioNet Challenge [18] datasets, and compare the results with HSMM and its variants. Finally, we provide an additional challenging test on the multi-object recognition problem using a generated multi-MNIST dataset. All models in the experiments use the Adam [15] optimizer. Temperatures of the Gumbel-Softmax are fixed throughout training unless stated otherwise. We implement the proposed model with Theano [19] and Blocks & Fuel [20]. Due to limited space, more experiments and details can be found in Appendix 5.

Speech Modeling  We test SSNN on modeling speech data, i.e., the Blizzard and TIMIT datasets [21]. Blizzard contains 300 hours of English speech by a female speaker. TIMIT is a dataset of 6300 English sentences read by 630 speakers. We report the average log-likelihood over half-second sequences on Blizzard, and the average log-likelihood per sequence on the TIMIT test set. We compare our method with a number of baselines, which are introduced in Appendix 5.1. From Table 1 it can be observed that on both datasets SSNN outperforms the state-of-the-art methods by a large margin, indicating its superior ability in speech modeling.

Models            Blizzard     TIMIT
VRNN-GMM          ≥ 9107       ≥ 28982
VRNN-Gauss        ≥ 9223       ≥ 28805
VRNN-I-Gauss      ≥ 9223       ≥ 28805
SRNN (s+Res_q)    ≥ 11991      ≥ 60550
SRNN (s)          ≥ 10991      ≥ 59269
SRNN (filt)       ≥ 10846      50524
RNN-GMM           7413         26643
RNN-Gauss         3539         -1900
SSNN              ≥ 13123      ≥ 64017

Table 1: Average log-likelihood per sequence on the test sets.

Models       Drosophila      Human activity    PhysioNet
HSMM         47.37 ± 0.27    41.59 ± 8.58      45.04 ± 1.87
subHSMM      39.70 ± 2.21    22.18 ± 4.45      43.01 ± 2.35
HDP-HSMM     43.59 ± 1.58    35.46 ± 6.19      42.58 ± 1.54
CRF-AE       57.62 ± 0.22    49.26 ± 10.63     45.73 ± 0.66
rHSMM-dp     36.21 ± 1.37    16.38 ± 5.06      31.95 ± 4.12
SSNN         34.77 ± 3.70    14.70 ± 5.45      29.29 ± 5.34

Table 2: Mean and standard deviation of the error rate (%).

Segmentation and Labeling of Time Series  To show the advantages of SSNN over HSMM and its variants when learning segmentations and latent labels from sequences, we conduct experiments on the Human activity [16], Drosophila [17] and PhysioNet Challenge [18] datasets. The Human activity dataset consists of time series signals from sensors mounted on volunteers. The Drosophila dataset records the time series movements of fruit flies' legs. Both the Human activity and Drosophila datasets are used for segmentation prediction. The PhysioNet Challenge dataset records observations labeled with one of four hidden states, i.e., Diastole, S1, Systole and S2. We compare the predicted segments or latent labels with the ground truth, and report the mean and standard deviation of the error rate for all methods. Details of the hyper-parameters and settings are given in Appendix 5.2. We report comparisons with subHSMM [22], HDP-HSMM [23], CRF-AE [24] and rHSMM-dp [4]. Experimental results are shown in Table 2. It can be observed that SSNN achieves the lowest mean error rate, indicating the effectiveness of combining an RNN with an HSMM to jointly learn the segmentation and the latent states.

4 Conclusion

In order to learn the structure (e.g., the segmentation and labeling) of high-dimensional time series in an unsupervised way, we have proposed the Stochastic Sequential Neural Network (SSNN) with structured inference. For better model interpretability, we restrict the labels and the segmentation to be two sequences of discrete variables. To exploit forward and backward temporal information, we carefully design a structured inference method. To overcome the difficulty of inferring discrete latent variables in deep neural networks, we resort to the recently proposed Gumbel-Softmax reparameterization. The advantages of the proposed inference method in SSNN have been demonstrated on both synthetic and real-world sequential benchmarks.

References

[1] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.
[2] Lawrence R Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[3] Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pages 2946–2954, 2016.
[4] Hanjun Dai, Bo Dai, Yan-Ming Zhang, Shuang Li, and Le Song. Recurrent hidden semi-Markov model. In ICLR, 2017.
[5] Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems, pages 2199–2207, 2016.
[6] Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. arXiv preprint arXiv:1511.05121, 2015.
[7] Evan Archer, Il Memming Park, Lars Buesing, John Cunningham, and Liam Paninski. Black box variational inference for state space models. arXiv preprint arXiv:1511.07367, 2015.
[8] Rahul G Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. arXiv preprint arXiv:1609.09869, 2016.
[9] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. stat, 1050:1, 2017.
[10] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[11] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[12] Mohammad Emtiyaz Khan and Wu Lin. Conjugate-computation variational inference: Converting variational inference in non-conjugate models to inferences in conjugate models. arXiv preprint arXiv:1703.04265, 2017.
[13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[14] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
[15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] Jorge-L Reyes-Ortiz, Luca Oneto, Albert Sama, Xavier Parra, and Davide Anguita. Transition-aware human activity recognition using smartphones. Neurocomputing, 171:754–767, 2016.
[17] Jamey Kain, Chris Stokes, Quentin Gaudry, Xiangzhi Song, James Foley, Rachel Wilson, and Benjamin De Bivort. Leg-tracking and automated behavioural classification in Drosophila. Nature Communications, 4:1910, 2013.
[18] David B Springer, Lionel Tarassenko, and Gari D Clifford. Logistic regression-HSMM-based heart sound segmentation. IEEE Transactions on Biomedical Engineering, 63(4):822–832, 2016.
[19] Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
[20] Bart Van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and Fuel: Frameworks for deep learning. arXiv preprint arXiv:1506.00619, 2015.
[21] Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla, P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The Blizzard Challenge 2013 – Indian language task. In Blizzard Challenge Workshop, volume 2013, 2013.
[22] Matthew Johnson and Alan Willsky. Stochastic variational inference for Bayesian time series models. In International Conference on Machine Learning, pages 1854–1862, 2014.
[23] Matthew J Johnson and Alan S Willsky. Bayesian nonparametric hidden semi-Markov models. Journal of Machine Learning Research, 14(Feb):673–701, 2013.
[24] Waleed Ammar, Chris Dyer, and Noah A Smith. Conditional random field autoencoders for unsupervised structured prediction. In Advances in Neural Information Processing Systems, pages 3311–3319, 2014.
[25] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015.
[26] Otto Fabius and Joost R van Amersfoort. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581, 2014.
[27] Shixiang Gu, Zoubin Ghahramani, and Richard E Turner. Neural adaptive sequential Monte Carlo. In Advances in Neural Information Processing Systems, pages 2629–2637, 2015.
[28] Zhe Gan, Chunyuan Li, Ricardo Henao, David E Carlson, and Lawrence Carin. Deep temporal sigmoid belief networks for sequence modeling. In Advances in Neural Information Processing Systems, pages 2467–2475, 2015.
[29] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[30] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[31] Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. arXiv preprint arXiv:1605.06432, 2016.

5 Appendix

5.1 Speech Modeling

Speech modeling on these two datasets has been shown to be challenging, since there is no good representation of the latent states [25, 26, 27, 28, 29]. For the TIMIT and Blizzard datasets, the sampling frequency is 16 kHz, and the raw audio signal is normalized using the global mean and standard deviation of the training set. The data preprocessing and the performance measures are identical to those reported in [25, 5], i.e., we report the average log-likelihood for half-second sequences on Blizzard, and the average log-likelihood per sequence on the TIMIT test set. For the raw audio datasets, we use a fully factorized Gaussian output distribution. In the experiments, we split the raw audio signals into chunks of 2 seconds. The waveforms are divided into non-overlapping vectors of size 200. For Blizzard we split the data using 90% for training, 5% for validation and 5% for testing. For testing we report the average log-likelihood for each sequence with segment length 0.5s. For TIMIT we use the predefined test set for testing and split the rest of the data into 95% for training and 5% for validation. During training we use back-propagation through time (BPTT) over 1 second. For the first second we initialize the hidden units with zeros, and for the subsequent 3 chunks we use the previous hidden states as initialization. The temperature τ starts from a large value of 0.1 and is gradually annealed to 0.01. We compare our method with the following baselines. For the RNN and VRNN baselines [25], VRNN is tested with two different output distributions: a Gaussian distribution (VRNN-Gauss) and a Gaussian mixture model (VRNN-GMM); we also compare with VRNN-I, in which the latent variables are constrained to be independent across time steps. For SRNN [5], we compare with the smoothing and filtering variants, denoted SRNN (smooth), SRNN (filt) and SRNN (smooth+Res_q), respectively. The results of VRNN-GMM, VRNN-Gauss and VRNN-I-Gauss are taken from [25], and those of SRNN (smooth+Res_q), SRNN (smooth) and SRNN (filt) are taken from [5].
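For illustration, a possible implementation of the preprocessing described above (normalize with training-set statistics, split into 2-second chunks, and divide each chunk into non-overlapping 200-sample frames) could look as follows; function and variable names are our own.

```python
import numpy as np

def preprocess_audio(wave, train_mean, train_std, rate=16000, chunk_sec=2, frame=200):
    """Normalize a raw 16 kHz waveform and reshape it into chunks of frames (sketch)."""
    wave = (wave - train_mean) / train_std
    samples_per_chunk = chunk_sec * rate            # 32,000 samples = 2 seconds
    n_chunks = len(wave) // samples_per_chunk
    wave = wave[: n_chunks * samples_per_chunk]
    # Each 2-second chunk becomes a sequence of 160 non-overlapping vectors of size 200.
    return wave.reshape(n_chunks, samples_per_chunk // frame, frame)
```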

5.2 Segmentation and Labeling of Time Series

We first introduce the datasets in more detail. The Human activity dataset, collected by [16], consists of signals from waist-mounted smartphones with accelerometers and gyroscopes. Each volunteer is asked to perform 12 activities. There are 61 recorded sequences, and the maximum number of time steps is T ≈ 3,000; each x_t is a 6-dimensional vector. For Drosophila [17], at each time step t, x_t is a 45-dimensional vector consisting of raw and some higher-order features, and the maximum number of time steps is T ≈ 10,000. In these experiments, we fix τ at the small value 0.0001. The PhysioNet Challenge dataset [18] records observations labeled with one of four hidden states, i.e., Diastole, S1, Systole and S2. This experiment examines SSNN on learning and predicting the labels. Here we find that annealing the temperature τ is important: we start from τ = 0.15 and anneal it gradually to 0.0001. We compare the predicted segments or latent labels with the ground truth, and report the mean and standard deviation of the error rate for all methods. We use the leave-one-sequence-out protocol to evaluate the methods, i.e., each time one sequence is held out for testing and the remaining sequences are used for training. We set the truncation of the maximum possible duration M to 400 for all tasks, and set the number of hidden states K to be the same as the ground truth. We report comparisons with subHSMM [22], HDP-HSMM [23], CRF-AE [24] and rHSMM-dp [4]. For subHSMM [22] and HDP-HSMM [23], the observed sequences x_{1:T} are generated by standard multivariate Gaussian distributions, the duration variable d_t follows a Poisson distribution, and we need to tune the concentration parameters α and γ; the remaining hyper-parameters can be learned automatically. For subHSMM, we also tune the truncation threshold of the infinite HMM in the second level. For CRF-AE, we extend the original model to handle continuous data, using a mixture of Gaussians for the emission probability. rHSMM-dp is a version of rHSMM [4] with exact MAP estimation via dynamic programming.
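The leave-one-sequence-out protocol can be summarized by the following sketch, where `train_fn` and `error_fn` are placeholders for training a model and computing its labeling error rate on one held-out sequence.

```python
import numpy as np

def leave_one_sequence_out(sequences, labels, train_fn, error_fn):
    """Hold out each sequence in turn, train on the rest, report mean/std error (%)."""
    errors = []
    for i in range(len(sequences)):
        train_x = [s for j, s in enumerate(sequences) if j != i]
        train_y = [y for j, y in enumerate(labels) if j != i]
        model = train_fn(train_x, train_y)
        errors.append(error_fn(model, sequences[i], labels[i]))
    return np.mean(errors), np.std(errors)
```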

Figure 2: Visualization and comparison of SSNN and DRAW on the multi-object recognition problem.

5.3 Sequential Multi-object Recognition

To further verify the ability to model complex spatial dependencies, we test SSNN on the multi-object recognition problem. This problem is interesting but hard, since it requires the model to capture the dependencies among pixels and to recognize the objects in images. We construct a small image dataset of 3000 images, named multi-MNIST. We begin with a dataset of 50×50 multi-MNIST images; each image contains three non-overlapping random MNIST digits chosen with equal probability. Our goal is to sequentially recognize each digit in the image. In the experiment, we train our model on 2500 images and test on the remaining 500 images. We compare the proposed model with DRAW [30] and visualize the learned latent representations in Figure 2. It can be observed that our model identifies the number and locations of the digits correctly, while DRAW sometimes misses modes of the data. The results show that our method accurately captures not only the number of objects but also their locations. During training, we fix the maximum number of time steps to T = 3 and feed the same image as input to SSNN at every step. We interpret the latent variable d_t as intensity and z_t as the location variable in the training images. We then train SSNN with randomly initialized parameters on 60,000 multi-MNIST images from scratch, i.e., without a curriculum or any form of supervision. All experiments are performed with a batch size of 64. The learning rate of our model is 1 × 10^{-5}, and the baselines are trained with a higher learning rate of 1 × 10^{-3}. The LSTMs in the inference network have 256 cell units.

Table 3: Log-likelihood and goodness-of-fit (R²) of three methods for predicting the ground-truth latent variables of the pendulum dynamics. For both measures, higher is better.

Measured ground-truth    DVBF-LL                   DKF                       Our method (SSNN)
variable                 log-likelihood    R²      log-likelihood    R²      log-likelihood    R²
sin φ                    3990.8            0.961   1737.6            0.929   4424.6            0.975
cos φ                    7231.1            0.982   6614.2            0.979   8125.3            0.997
dφ/dt                    -11139            0.916   -20289            0.035   -9620             0.941

5.4 Synthetic Experiment

To validate that our method is able to model high-dimensional data with complex dependencies, we simulate a torque-controlled pendulum, governed by the differential equation
\[
m l^2 \frac{d^2 \phi(t)}{dt^2} = -\mu \frac{d\phi(t)}{dt} + m g l \sin \phi(t) + u(t),
\]
to generate non-Markovian observations from a dynamical system. For a fair comparison with [31], we set m = l = 1, μ = 0.5, and g = 9.81. We convert the generated ground-truth angles to image observations. The system can be fully described by the angle and the angular velocity. We compare our method with Deep Variational Bayes Filters (DVBF-LL) [31] and Deep Kalman Filters (DKF) [6]. The ordinary least squares regression results are shown in Table 3. Our method is clearly better than DVBF-LL and DKF at predicting sin φ, cos φ and dφ/dt, and SSNN achieves a higher goodness-of-fit than the other methods. The results indicate that the generative model and inference network in SSNN are capable of capturing complex sequence dependencies.
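For reference, the ground-truth angles could be generated with a simple Euler integration of the pendulum ODE as sketched below; the step size, initial conditions and random torque are our own choices and are not specified in the paper.

```python
import numpy as np

def simulate_pendulum(T, dt=0.05, m=1.0, l=1.0, mu=0.5, g=9.81, rng=None):
    """Euler simulation of m*l^2 * phi'' = -mu * phi' + m*g*l*sin(phi) + u(t) (sketch)."""
    rng = rng or np.random.default_rng(0)
    phi = rng.uniform(-np.pi, np.pi)   # initial angle
    omega = 0.0                        # initial angular velocity
    angles = np.empty(T)
    for t in range(T):
        u_t = rng.normal(scale=0.1)    # random control torque
        acc = (-mu * omega + m * g * l * np.sin(phi) + u_t) / (m * l ** 2)
        omega += dt * acc
        phi += dt * omega
        angles[t] = phi
    return angles  # these angles would then be rendered into image observations
```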

