IEEE SIGNAL PROCESSING LETTERS


Data-driven and feedback based spectro-temporal features for speech recognition

G.S.V.S. Sivaram*, Student Member, IEEE, Sridhar Krishna Nemala, Student Member, IEEE, Nima Mesgarani, and Hynek Hermansky, Fellow, IEEE

Abstract—This paper proposes novel data-driven and feedback-based discriminative spectro-temporal filters for feature extraction in automatic speech recognition (ASR). Initially, a first set of spectro-temporal filters is designed to separate each phoneme from the rest of the phonemes. A hybrid Hidden Markov Model/Multilayer Perceptron (HMM/MLP) phoneme recognition system is trained on the features derived using these filters. As feedback to the feature extraction stage, the top confusions of this system are identified, and a second set of filters is designed specifically to address these confusions. Phoneme recognition experiments on TIMIT show that the features derived from the combined set of discriminative filters outperform conventional speech recognition features and also contain significant complementary information.

Index Terms—confusion analysis, discriminative spectro-temporal filters, features, speech recognition.

I. INTRODUCTION

It is well known that information about speech sounds, such as phonemes, is encoded in the spectro-temporal dynamics of speech. Conventional automatic speech recognition (ASR) features encode either the spectral or the temporal variations of the spectro-temporal pattern. These features, typically extracted over a time scale on the order of a few hundred milliseconds, are transformed to posterior probability estimates of various phone classes in multilayer perceptron (MLP) based acoustic modeling. Though the MLP effectively receives a context of several hundred milliseconds at its input layer, there has recently been an increased research effort in deriving features that explicitly capture the joint spectro-temporal dynamics of speech. Such an approach is primarily motivated by the spectro-temporal receptive field (STRF) model for predicting the response of a cortical neuron to input speech, where the STRF describes the two-dimensional spectro-temporal pattern to which the neuron is most responsive [1].

Most works so far have used parametric two-dimensional Gabor filters for extracting features. The parameters of the Gabor functions are either selected using the data [2], [3] or preselected to form various streams of information [4]. Even though multiple spectro-temporal feature streams were formed and combined using MLPs in [4], it is difficult to interpret what each feature stream is trying to achieve.

Previously, we proposed feature extraction using a set of two-dimensional filters designed to discriminate each phoneme from the rest of the phonemes [5]. In other words, the two-dimensional filters used to extract features are learned from the data in a discriminative way. A hybrid Hidden Markov Model/Multilayer Perceptron (HMM/MLP) phoneme recognition system is trained on the features derived using these filters. In this paper, we introduce feedback to the feature extraction stage by analyzing the confusion matrix of the recognition system and designing an additional set of filters that are optimized to discriminate the most confused phoneme pairs. Phoneme recognition experiments on the TIMIT database show significant improvement when the features derived using this additional set of filters are combined at the posterior level (output of the MLP) with those of the earlier system [5] using the Dempster-Shafer (DS) theory of evidence [6]. We also show that the proposed discriminative spectro-temporal features capture significant complementary information to both the spectral (PLP, [7]) and the temporal (MRASTA, [8]) features.

All authors are affiliated with the ECE Dept. and Center for Language and Speech Processing, Johns Hopkins University, Baltimore, USA (phone: +1-410-516-7031; fax: +1-410-516-5566; email: {sivaram, nemala, nmesgara1, hynek}@jhu.edu). G.S.V.S. Sivaram and H. Hermansky are also affiliated with the Human Language Technology Center of Excellence, Johns Hopkins University, USA.

II. FEATURE EXTRACTION

Speech is represented in the spectro-temporal domain (log critical-band energies) both for learning the two-dimensional filter shapes and for extracting the features. This representation is obtained by first performing a Short-Time Fourier Transform (STFT) on the speech signal with an analysis window of length 25 ms and a frame shift of 10 ms. The magnitude-squared values of the STFT output are then projected onto a set of frequency weights equally spaced on the Bark frequency scale to obtain the spectral energies in the various critical bands. Finally, the spectro-temporal representation is obtained by applying the logarithm to these critical-band energies. The block schematic of the proposed feature extraction is shown in Fig. 1 and described below.

A. Design of MLDA one-vs-rest 2-D filters

Labels for the TIMIT training data are obtained by mapping 61 hand-labeled symbols to a standard set of 39 phonemes [9]. A set of spectro-temporal patterns corresponding to each phoneme is obtained from the spectro-temporal representation of the training utterances by taking a context of 2N + 1 frames centered on every frame that belongs to the phoneme.
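The front end described above (STFT, Bark-spaced critical-band weighting, logarithm) can be sketched as follows. The triangular shape of the Bark weights and the arcsinh Bark approximation are assumptions for illustration; the paper does not specify the exact weighting functions.

```python
import numpy as np

def log_critical_band_energies(signal, fs=16000, n_bands=19,
                               win_ms=25, shift_ms=10, n_fft=512):
    """Sketch of the Section II front end: STFT with a 25 ms window and
    10 ms shift, projection of magnitude-squared spectra onto Bark-spaced
    weights, then the logarithm. Triangular weights are an assumption."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * shift_ms / 1000)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hamming(win)
        frames.append(np.abs(np.fft.rfft(frame, n_fft)) ** 2)
    power = np.array(frames)                      # (n_frames, n_fft//2 + 1)

    # Bark scale approximation z(f) = 6 * arcsinh(f / 600); place
    # triangular weights at equal spacing on this axis.
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    bark = 6.0 * np.arcsinh(freqs / 600.0)
    centers = np.linspace(bark[1], bark[-1], n_bands + 2)
    weights = np.zeros((n_bands, len(freqs)))
    for b in range(n_bands):
        lo, c, hi = centers[b], centers[b + 1], centers[b + 2]
        rising = (bark - lo) / (c - lo)
        falling = (hi - bark) / (hi - c)
        weights[b] = np.clip(np.minimum(rising, falling), 0.0, None)

    return np.log(power @ weights.T + 1e-10)      # (n_frames, n_bands)
```

One second of 16 kHz speech yields 98 frames of 19 log critical-band energies, matching the K = 19 bands used later in Section II-A.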

Fig. 1. Block schematic of the feature extraction. Blocks along the dotted lines indicate the steps involved in designing the second set of filters. The hierarchical MLP is depicted in Fig. 2.

In our experiments, each spectro-temporal pattern of any particular phoneme spans 19 critical bands (K) and 21 frames (2N + 1). In order to derive the 2-D filter shapes, we ask: along which directions (patterns) are the spectro-temporal patterns of a phoneme well separated from those of the rest of the phonemes? Fisher Linear Discriminant Analysis (FLDA) gives only one optimal discriminating pattern for each phoneme (a two-class problem) due to the rank limitation of its between-class scatter matrix. The resultant low-dimensional feature space (projections of a spectro-temporal pattern onto these discriminating patterns) may hinder classification performance. Modified Linear Discriminant Analysis (MLDA) is a generalization of FLDA that overcomes this limitation by modifying the between-class scatter matrix [10]. It redefines the between-class scatter matrix as the weighted sum of average sample-to-sample scatter. This modification yields multiple solutions (discriminating patterns) to the generalized eigenvector problem that arises in conventional FLDA.

A set of discriminating patterns is obtained using the MLDA technique to discriminate all the spectro-temporal patterns of a phoneme from one hundred thousand randomly chosen spectro-temporal patterns corresponding to the rest of the phonemes in the training data. These discriminating patterns are used as 2-D filters for extracting features. We use a set of 13 discriminating 2-D filters per phoneme in our experiments, which yields a feature vector of length 13 × 39 = 507.
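A simplified sketch of the one-vs-rest filter design is given below. Following the description of [10], the between-class scatter is replaced by an average of cross-class sample-to-sample scatters, which is full rank and therefore yields multiple discriminant directions; the pair subsampling and the ridge regularization of the within-class scatter are implementation choices of this sketch, not details from the paper.

```python
import numpy as np

def mlda_filters(pos, neg, n_filters=13, n_pairs=2000, ridge=1e-3, seed=0):
    """Simplified MLDA one-vs-rest filter design. pos/neg hold flattened
    spectro-temporal patterns, shape (n_samples, K*(2N+1)). Returns
    n_filters discriminant directions, one per row."""
    rng = np.random.default_rng(seed)
    d = pos.shape[1]
    # Pooled within-class scatter, with a small ridge for invertibility.
    sw = np.cov(pos.T) * (len(pos) - 1) + np.cov(neg.T) * (len(neg) - 1)
    sw += ridge * np.trace(sw) / d * np.eye(d)
    # Modified between-class scatter: average sample-to-sample scatter
    # over randomly drawn cross-class pairs (full rank, unlike FLDA's).
    ia = rng.integers(0, len(pos), n_pairs)
    ib = rng.integers(0, len(neg), n_pairs)
    diff = pos[ia] - neg[ib]
    sb = diff.T @ diff / n_pairs
    # Generalized eigenproblem Sb v = lambda Sw v, solved by whitening.
    w, U = np.linalg.eigh(sw)
    isqrt = U @ np.diag(1.0 / np.sqrt(w)) @ U.T
    vals, vecs = np.linalg.eigh(isqrt @ sb @ isqrt)
    return (isqrt @ vecs[:, ::-1][:, :n_filters]).T   # (n_filters, d)
```

Each returned row, reshaped to (2N + 1) × K, is one 2-D filter; for the dimensions in the paper this gives 13 filters of size 21 × 19 per phoneme.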

B. Feature extraction using a 2-D filter

If S(n, k) denotes the spectro-temporal representation of the speech and h(n, k) characterizes the 2-D filter, then the corresponding feature f(n) at a particular time n is extracted using (1):

f(n) = Σ_{i=−N}^{N} Σ_{k=1}^{K} h(i, k) S(n + i, k)        (1)

where i and k denote the discrete time index (due to the 10 ms shift) and the critical band index, respectively. K represents the total number of critical bands, while the temporal extent (context) of the 2-D filter is given by 2N + 1.

It is to be noted that by defining h̃(n, k) = h(−n, k), f(n) of (1) can be interpreted as the response of a cortical neuron, characterized by the STRF h̃(n, k), to the input speech representation S(n, k) [1].

C. Hierarchical estimation of posterior probabilities

Hierarchical estimation of posterior probabilities has been shown to be useful for speech recognition [11], [12]. It consists of two MLPs in series, as shown in Fig. 2; we refer to this architecture as the hierarchical MLP. The first MLP estimates the posterior probabilities of phonemes conditioned on the input acoustic feature vector by minimizing the cross entropy between the outputs and the corresponding phoneme target classes [13]. These estimates are refined by the second MLP, which operates on a longer temporal context of 230 ms (23 frames) of posterior probabilities estimated by the first MLP. In all our experiments, both MLPs have a single hidden layer with a sigmoid nonlinearity and an output layer with a softmax nonlinearity. The numbers of input, hidden, and output nodes of each MLP are set to the cardinality of its input feature vector, 1000, and the number of output classes^1, respectively. We used the Quicknet package for training the MLPs [14].

^1 39 phoneme classes along with an additional garbage class (frames with no labels).

Fig. 2. Hierarchical estimation of posterior probabilities of phonemes (acoustic features → MLP-1 → 230 ms / 23-frame temporal context → MLP-2). MLP-1 is trained to estimate phoneme posterior probabilities from acoustic features. MLP-2 is trained on the phoneme posterior probabilities estimated by the first MLP, but with a longer temporal context.
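The 2-D filtering of (1) amounts to correlating the filter with a patch of the spectro-temporal representation centered at frame n. A minimal sketch, assuming a frame-major (time × bands) array layout:

```python
import numpy as np

def extract_feature(S, h, n):
    """Feature f(n) from (1): inner product of the 2-D filter h, of shape
    (2N+1, K), with the context of 2N+1 frames of S (shape (n_frames, K))
    centered on frame n. Sums over i = -N..N and k = 1..K."""
    N = (h.shape[0] - 1) // 2
    patch = S[n - N:n + N + 1, :]    # context of 2N+1 frames around n
    return float(np.sum(h * patch))  # correlation at a single position
```

With 13 such filters per phoneme and 39 phonemes, evaluating every filter at every frame yields the 507-dimensional feature vector of Section II-A.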

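Assembling the 23-frame posterior context for MLP-2 (Section II-C) can be sketched as below. Edge padding at utterance boundaries is an assumption; the paper does not state the boundary policy.

```python
import numpy as np

def stack_context(posteriors, context=23):
    """Build MLP-2 inputs from MLP-1 outputs: each frame's input is the
    concatenation of a 23-frame (230 ms) window of posterior vectors,
    edge-padded at the utterance boundaries."""
    half = context // 2
    padded = np.pad(posteriors, ((half, half), (0, 0)), mode='edge')
    n, p = posteriors.shape
    return np.stack([padded[t:t + context].ravel() for t in range(n)])
```

For 40 posterior outputs per frame (39 phonemes plus the garbage class), MLP-2 thus receives a 23 × 40 = 920-dimensional input per frame.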

D. Design of MLDA one-vs-one 2-D filters

Initially, a hierarchical MLP is trained using the MLDA one-vs-rest features (Section II-A), as shown in Fig. 1. The posterior probabilities of the cross-validation (CV) data, estimated by this hierarchical MLP, are decoded, and a confusion matrix is obtained by comparing the decoded phoneme sequence with the reference transcription. The top 15 most confused phoneme pairs of this system are: (/ih/,/ah/), (/iy/,/ih/), (/eh/,/ih/), (/s/,/z/), (/r/,/er/), (/d/,/t/), (/m/,/n/), (/eh/,/ae/), (/ih/,/er/), (/ao/,/ah/), (/uw/,/ih/), (/eh/,/ah/), (/ey/,/ih/), (/d/,/g/), (/d/,/dx/). For each of these confused pairs, a new set of 2-D filters is designed using the MLDA technique to discriminate the spectro-temporal patterns of the two phonemes from each other. With 13 discriminating 2-D filters per phoneme pair, the 15 most confused pairs yield a feature vector of length 13 × 15 = 195. Another hierarchical MLP is subsequently trained on the features derived from the output of these filters to estimate another set of posterior probabilities, as shown in Fig. 1.

E. Combining posteriors of discriminative spectro-temporal features

The posterior streams obtained from the individual discriminative spectro-temporal feature streams (MLDA one-vs-one, MLDA one-vs-rest) are combined^2 using a combination rule based on the Dempster-Shafer (DS) theory of evidence [6], as shown in Fig. 1. This particular combination scheme has been shown to be related to the way humans process multiple feature streams. The DS theory is a generalization of the Bayesian probability framework that allows the characterization of ignorance through basic probability assignments (BPAs). The individual posterior streams are converted to BPAs, which are then combined using the DS orthogonal sum.
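The orthogonal-sum combination of two posterior streams can be sketched as follows. The BPA construction used here, scaling each posterior by a reliability factor alpha and assigning the remainder to the ignorance set Θ, is one simple choice for illustration; [6] derives a more elaborate posterior-to-BPA mapping.

```python
import numpy as np

def ds_combine(p, q, alpha=0.9):
    """Dempster's orthogonal sum of two posterior streams (Section II-E).
    Masses live on the singleton phoneme classes plus the ignorance set
    Theta; conflicting singleton pairs are discounted by normalization."""
    m1 = alpha * np.asarray(p, float)
    m2 = alpha * np.asarray(q, float)
    theta1, theta2 = 1.0 - m1.sum(), 1.0 - m2.sum()
    # Intersections contributing to each singleton class c:
    # {c}&{c}, {c}&Theta, Theta&{c}.
    comb = m1 * m2 + m1 * theta2 + theta1 * m2
    # Conflict = total mass on disjoint singleton pairs {c}&{c'}, c != c'.
    conflict = m1.sum() * m2.sum() - (m1 * m2).sum()
    norm = 1.0 - conflict
    return comb / norm, theta1 * theta2 / norm
```

When the two streams agree on a class, the combined mass sharpens toward it; when they disagree, the normalization redistributes the conflicting mass.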

III. SYSTEM DESCRIPTION

Speaker-independent phoneme recognition experiments are conducted on the TIMIT database (excluding the 'sa' dialect sentences) using the hybrid Hidden Markov Model/Multilayer Perceptron (HMM/MLP) approach [15]. The training, test, and cross-validation (CV) sets of TIMIT consist of 3000, 1344, and 696 utterances from 375, 168, and 87 speakers, respectively. All utterances are sampled at 16 kHz. The posterior probabilities of each acoustic feature stream, estimated in a hierarchical fashion (Section II-C), are converted to scaled likelihoods by dividing them by the corresponding prior probabilities of the phonemes. A 3-state HMM, with equal self-loop and transition probabilities associated with each state, is used to model each phoneme. The emission likelihood of each of its states is set to the scaled likelihood. Finally, the Viterbi algorithm is applied (with a unigram language model) to decode the phoneme sequence, which


is compared against the reference sequence to obtain the phoneme recognition accuracy. When evaluating accuracies on the test set, the phoneme insertion penalty is chosen as the one that maximizes the phoneme recognition accuracy on the CV data. The various acoustic feature streams are combined at the posterior level using the Dempster-Shafer theory of evidence (described in Section II-E). Note that the hierarchical MLP is used for estimating the posterior probabilities of each acoustic feature stream. The experimental results are described in the next section.

IV. RESULTS

The phoneme recognition accuracy of the various features is listed in Table I. The proposed features (Section II-E) are compared with two conventional feature sets, namely PLP and MRASTA. The PLP feature vector is obtained by stacking a set of 9 frames of standard 13 PLP cepstral coefficients along with their delta and delta-delta features (dimensionality 13 × 3 × 9 = 351). In the case of MRASTA, features are obtained by applying a multi-resolution filter bank consisting of 14 temporal filters to each of the 19 critical bands, and by filtering the outputs of each filter along frequency using a 3-tap filter {−1, 0, 1}. Thus the MRASTA feature vector consists of 14 × 19 + 14 × 17 = 504 features. Table I also lists the phoneme recognition accuracy of single-frame 39-dimensional PLP features when a Gaussian Mixture Model (GMM) is used to model the emission likelihoods of the HMM states [16]. Consistent with the hybrid HMM/MLP setup, a simple unigram language model on the phoneme set is used. It is to be noted that our baseline hybrid HMM/MLP gives better performance than the standard HMM/GMM system.

TABLE I
COMPARISON OF PHONEME RECOGNITION ACCURACY (IN %) ON CLEAN TIMIT FOR VARIOUS FEATURES.

Features                                  Accuracy
PLP, HMM/GMM ([16])                       68.3
PLP 9 frames                              70.6
MRASTA                                    68.4
MLDA one-vs-rest                          71.5
MLDA one-vs-one                           71.3
MLDA one-vs-rest + MLDA one-vs-one        73.1
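The posterior-to-scaled-likelihood conversion used by the hybrid decoder in Section III follows from Bayes' rule: p(x|q) ∝ p(q|x)/p(q), with the per-frame factor p(x) constant across states. A minimal sketch (the probability floor is an implementation choice of this sketch):

```python
import numpy as np

def scaled_log_likelihoods(posteriors, priors, floor=1e-12):
    """Convert MLP posteriors p(q|x) to log scaled likelihoods
    log(p(q|x)/p(q)), used as HMM state emission scores. The floor
    guards against log(0) for classes the MLP assigns zero mass."""
    post = np.clip(posteriors, floor, None)
    return np.log(post) - np.log(np.asarray(priors, float))
```

These scores can be plugged directly into Viterbi decoding, since the dropped p(x) factor shifts every path score by the same amount per frame.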

The proposed MLDA one-vs-rest and MLDA one-vs-one features (after feedback from confusion analysis) are described in Sections II-A and II-D, respectively, and their posterior combination is described in Section II-E. It can be seen from Table I that an absolute improvement of 2.5% in phoneme recognition accuracy is obtained by using the proposed posterior combination of the discriminative spectro-temporal features compared to the PLP features. Furthermore, the number of confusions among the top 15 confused pairs is reduced by 5.54% on the CV data after the combination.

A. Complementary information

^2 Phoneme recognition experiments show that the resultant combined posterior stream outperforms either of the posterior streams from the MLDA one-vs-one and MLDA one-vs-rest features (see Table I).

In the next set of experiments, we demonstrate that the proposed discriminative spectro-temporal features capture significant


TABLE II
PHONEME RECOGNITION ACCURACY (IN %) ON CLEAN TIMIT FOR VARIOUS FEATURE COMBINATIONS.

Features                                                        Accuracy
PLP 9 frames + MRASTA                                           73.0
PLP 9 frames + MRASTA + (MLDA one-vs-rest + MLDA one-vs-one)    74.6

complementary information to the conventional features. Table II summarizes the results of various feature stream combinations at the posterior level. It has already been shown that combining the posterior probabilities obtained from the PLP (spectral) and MRASTA (temporal) feature streams improves recognition accuracy [6], [17]. This is also evident in our experiments, as the combined posteriors of the PLP and MRASTA feature streams yield an absolute improvement of 2.4% over either of the individual posteriors (as can be observed from Tables I and II). Furthermore, when the posteriors of the proposed features are combined with the posteriors derived from the conventional spectral and temporal features, an additional improvement of 1.6% is observed. This complementary nature can be attributed to the fact that the proposed features, unlike the conventional features, explicitly capture phoneme discriminability (including that of the most confused phoneme pairs) in the spectro-temporal domain.

V. DISCUSSION

The procedure of deriving an additional set of features to address the top confused phoneme pairs at the output of the first hierarchical MLP closely resembles the ideas exploited in the AdaBoost algorithm [18]. The MLDA one-vs-rest 2-D filters can be seen as an initial set of weak classifiers (discriminating patterns) for the multi-class (phoneme) classification problem. Instead of determining a threshold for each discriminative spectro-temporal filter for classifying phonemes, a hierarchical MLP is discriminatively trained on the projections of a spectro-temporal pattern onto these filters. Wrongly classified examples (the most confused pairs) are emphasized, albeit not exponentially, in the second iteration when learning the next set of weak classifiers (the MLDA one-vs-one 2-D filters). However, repeating this procedure does not help beyond the first iteration, as the resulting top confusion pairs do not change significantly.
After the first iteration, however, the number of confusions among the top confused pairs is reduced, yielding better performance.

VI. CONCLUSIONS

We proposed the application of data-driven discriminative spectro-temporal (2-D) filters to the speech (phoneme) recognition task. Initially, a set of discriminative spectro-temporal filters is designed to separate each phoneme from the rest of the phonemes. A hierarchical MLP is subsequently trained to estimate the posterior probabilities of the phonemes. Based on confusion analysis of the phoneme posteriors, a second set of discriminative spectro-temporal filters is designed specifically to address the top confusions. The feature

streams derived from the two sets of discriminative filters are combined at the evidence (posterior) level. An absolute improvement of 2.5% in phoneme recognition accuracy on TIMIT is obtained using the proposed features over the PLP features. Further, the proposed features capture significant complementary information to the conventional features. This is evident as the combination of the proposed features with the combined PLP and MRASTA features gave an absolute improvement of 1.6% over the latter combination alone on the TIMIT phoneme recognition task.

VII. ACKNOWLEDGMENTS

The authors would like to thank Prof. Mounya Elhilali for providing feedback on this work.

REFERENCES

[1] D.A. Depireux, J.Z. Simon, D.J. Klein, and S.A. Shamma, "Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex," Journal of Neurophysiology, vol. 85, no. 3, pp. 1220–1234, 2001.
[2] M. Kleinschmidt and D. Gelbart, "Improving word accuracy with Gabor feature extraction," in Proc. of ICSLP, USA, 2002.
[3] B. Meyer and B. Kollmeier, "Optimization and evaluation of Gabor feature sets for ASR," in INTERSPEECH, Brisbane, Australia, 2008.
[4] S. Zhao and N. Morgan, "Multi-stream spectro-temporal features for robust speech recognition," in INTERSPEECH, Brisbane, Australia, 2008.
[5] N. Mesgarani, G.S.V.S. Sivaram, S.K. Nemala, M. Elhilali, and H. Hermansky, "Discriminant spectrotemporal features for phoneme recognition," in INTERSPEECH, Brighton, 2009.
[6] F. Valente, "Multi-stream speech recognition based on Dempster-Shafer combination rule," Speech Communication, vol. 52, no. 3, pp. 213–222, 2010.
[7] H. Hermansky, "Perceptual linear predictive (PLP) analysis of speech," The Journal of the Acoustical Society of America, vol. 87, pp. 1738–1752, 1990.
[8] H. Hermansky and P. Fousek, "Multi-resolution RASTA filtering for TANDEM-based ASR," in INTERSPEECH, 2005.
[9] K.F. Lee and H.W. Hon, "Speaker-independent phone recognition using hidden Markov models," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 11, pp. 1641–1648, 1989.
[10] S. Chen and D. Li, "Modified linear discriminant analysis," Pattern Recognition, vol. 38, no. 3, pp. 441–443, 2005.
[11] J. Pinto, G.S.V.S. Sivaram, M. Magimai-Doss, H. Hermansky, and H. Bourlard, "Analyzing MLP based hierarchical phoneme posterior probability estimator," IEEE Transactions on Audio, Speech, and Language Processing, 2010.
[12] J. Pinto, B. Yegnanarayana, H. Hermansky, and M. Magimai-Doss, "Exploiting contextual information for improved phoneme recognition," in Proc. of ICASSP, 2008.
[13] M.D. Richard and R.P. Lippmann, "Neural network classifiers estimate Bayesian a posteriori probabilities," Neural Computation, vol. 3, no. 4, pp. 461–483, 1991.
[14] "The ICSI QuickNet software package," available: http://www.icsi.berkeley.edu/Speech/qn.html.
[15] H. Bourlard and N. Morgan, Connectionist Speech Recognition: A Hybrid Approach, Kluwer Academic Publishers, 1994.
[16] S. Thomas, S. Ganapathy, and H. Hermansky, "Tandem representations of spectral envelope and modulation frequency features for ASR," in INTERSPEECH, 2009.
[17] F. Valente, "A novel criterion for classifiers combination in multistream speech recognition," IEEE Signal Processing Letters, vol. 16, no. 7, pp. 561–564, July 2009.
[18] R.E. Schapire and Y. Singer, "Improved boosting algorithms using confidence-rated predictions," Machine Learning, vol. 37, no. 3, pp. 297–336, 1999.
