Evaluation of Vision-based Real-Time Measures for Emotions Discrimination under Uncontrolled Conditions David Antonio Gómez Jáuregui and Jean-Claude Martin LIMSI-CNRS, B.P. 133, 91403 Orsay, France

{gomez-jau, martin}@limsi.fr

ABSTRACT
Several vision-based systems for automatic recognition of emotion have been proposed in the literature. However, most of these systems are evaluated only under controlled laboratory conditions, which poorly represent the constraints faced in real-world, ecological situations. In this paper, two studies are described. In the first study, we evaluate whether two robust vision-based measures (approach-avoidance detection and quantity of motion) can be used to discriminate between different emotions in a dataset containing acted facial expressions recorded under uncontrolled conditions. In the second study, we evaluate on the same dataset the accuracy of commercially available software used for automatic emotion recognition under controlled conditions. Results showed that the evaluated measures are able to discriminate between different emotions in uncontrolled conditions. In addition, the accuracy of the commercial software is reported.

Categories and Subject Descriptors
I.5.2 [Pattern Recognition]: Design Methodology – Feature evaluation and selection; I.4.8 [Image Processing and Computer Vision]: Scene analysis – Motion, Tracking

Keywords: Emotion recognition, facial expression, approach-avoidance detection, quantity of motion, real-world conditions.

1. INTRODUCTION
Automatic emotion recognition is a relevant research topic for human-computer interaction. Computer systems able to recognize emotion automatically in real-world, uncontrolled conditions could interact with users far more naturally. In recent years, emotion recognition techniques have been implemented by detecting the facial expressions of subjects in front of a camera using computer vision algorithms. However, most of these techniques are employed in controlled laboratory conditions using exaggerated facial expressions [11], [5]. Most of these systems fail in real-world conditions due to various limitations such as poor illumination, users not facing the camera, occlusions, complex backgrounds, etc. These limitations prevent such systems from being integrated into everyday life. Only a few systems and studies consider the collection and recognition of affective facial expressions in real-life conditions [15], including pedagogical situations [10].

These difficulties can be overcome by evaluating and training the automated recognition system on a database recorded in real-world conditions. However, collecting a database in uncontrolled conditions can be a tedious task. To address this problem, the Acted Facial Expressions in the Wild (AFEW) database [9] consists of video sequences of facial expressions extracted from actors in movies. This dataset provides video sequences that are more challenging to recognize than the portrayed expressions available in acted datasets, since movies present environments similar to real-world conditions. In traditional systems for automatic emotion recognition from facial expressions, several features are commonly used to discriminate between different emotions. However, most of these features are not robust to real-world conditions [5], [9]. A main issue is therefore to find a set of cues that remains robust under uncontrolled conditions. We propose to evaluate whether two such cues can be used to discriminate between emotions. The first cue that we evaluated is the quantity of motion [7]. This cue has been commonly used under real-world conditions [7] and has proved to be relevant for recognizing emotions from facial [23] and full-body expressions [18]. To the best of the authors' knowledge, no study has examined whether the quantity of motion can be used to discriminate facial expressions of emotion under uncontrolled conditions. As a second cue, we propose a measure of approach and avoidance behavior based on the inter-ocular distance [3]. Studies in psychology have demonstrated that emotional expressions can forecast approach and avoidance behavior [1]. We believe that this cue can also be robust to real-world conditions, since eye detection has been demonstrated to work under variable lighting conditions [26]. As a third study, we evaluate the accuracy of a state-of-the-art, commercially available facial expression recognition system (Noldus' FaceReader) [8] on the AFEW dataset [9], in order to get a clear picture of the limitations of recognition systems commonly used in controlled conditions. Both evaluated measures and the face recognition system are able to run in real time on a standard PC.

The aim of this study is to evaluate whether simple measures that can be computed in real-world conditions are able to discriminate between different emotions from facial expressions under uncontrolled conditions. The rest of the paper is organized as follows. Section 2 reviews related work on vision-based facial features used for facial emotion recognition. Section 3 describes the method implemented to detect approach and avoidance behavior and the results of its evaluation on the dataset. Section 4 describes the method implemented to obtain the quantity of motion from facial expressions and the corresponding evaluation results. Section 5 describes the commercial software evaluated and the results obtained. The results are discussed in Section 6. Finally, we conclude in Section 7.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. EmotiW ‘13, December 9, 2013, Sydney, Australia. Copyright 2013 ACM 978-1-4503-2564-6/13/12…$15.00. http://dx.doi.org/10.1145/2531923.2531925

2. RELATED WORK
Many works published in recent years focus on various facial representation measures, or features, and their usefulness for recognizing facial expressions and emotions from video sequences. Vision-based facial features can be classified into geometric-based and appearance-based features [25].

Geometric-based features consist of the shapes and corners of facial components such as the mouth, eyes, nose, etc. These features can be extracted either with a model-based approach or with a facial corner detector. Sebe et al. [20] use a model-based approach in which an explicit 3D wireframe model of the face is constructed and fitted to a 2D image of the user's face. Once the 3D model is fitted, local deformations of facial features such as the eyebrows, eyelids and mouth can be tracked. Pantic et al. [17] employed a facial corner multi-detector on a segmented face image in order to spatially sample the contours of the facial components (eyebrows, eyes, nose, mouth and chin).

Appearance-based features describe the visual texture of face regions, without explicitly recognizing the location and shape of facial components. Guo and Dyer [13] used Gabor filtering to extract features from selected points in face images. Anderson and McOwan [2] proposed a spatial ratio face template in order to obtain a rough map of face feature locations. Motion features have also been used by several authors. Valstar et al. [23] defined facial regions in which the presence of motion characterizes the activation of certain Action Units; motion was extracted using temporal templates (Motion History Images). Wu et al. [24] explored Gabor Motion Energy filters (GME) as a basic representation for early temporal integration in facial expression classification, and found that low-level motion information, as captured by GMEs, may be particularly critical for classifying low-intensity expressions. Tariq et al. [21] proposed the combination of three different features extracted from the facial region: Scale Invariant Feature Transform (SIFT) features at selected key points, Hierarchical Gaussianization (HG) features and motion (optical flow) features.

Hybrid approaches that combine geometric-based and appearance-based features have also been proposed. Cootes et al. [6] proposed the Active Appearance Model (AAM), which contains a statistical model describing the shape and grey-level appearance of the face. Lucey et al. [14] explored a number of representations of the face, derived from AAMs, for the purpose of spontaneous facial action recognition. Tian et al. [22] combined permanent features (eyes, brows and mouth) and transient features (crow's-feet wrinkles, wrinkles at the nasal root and nasolabial furrows) in order to recognize facial expressions and subtle changes in facial features.

Although several vision-based facial features have shown accurate results, the development of features that are robust in fully unconstrained environments is still limited. Existing visual face detection and tracking techniques can only reliably handle near-frontal or profile views of faces with good resolution under controlled lighting conditions [25]. In a realistic environment, variable lighting conditions, unpredictable movements of subjects, low resolution and occlusions can cause these techniques to fail.

3. APPROACH AND AVOIDANCE
3.1 Method
For each video sequence in the dataset, approach and avoidance behaviors are measured by computing the inter-ocular distance between the eyes. First, the face and the eyes are detected using the AdaBoost algorithm and Haar-like features [16]. False eye detections are avoided using the following heuristics, which we iteratively defined and tested on spontaneous expressions collected with teenagers [12]:

1. If two or more eyes are detected, we compute a pair-wise comparison between the areas of the detected eye regions. We then retain the two eyes (e1, e2) that have the most similar areas.

2. The difference between the vertical positions y_e1 and y_e2 of the first and second eye must be less than 10% of the height h_f of the currently detected face:

   |y_e1 - y_e2| < h_f / 10                              (1)

3. Overlapping condition: the distance D between both eyes must be equal to or greater than the sum of the radii R_e1 and R_e2 of the eye circles:

   D >= R_e1 + R_e2                                      (2)

The inter-ocular distance is obtained by calculating the distance, in pixel units, between the centers of the two detected eyes. The approach and avoidance measure for each video frame is then obtained with the following equation:

   A_t = d_t - d_1                                       (3)

where d_t is the inter-ocular distance obtained for the current video frame t, d_1 is the inter-ocular distance in the first frame of the video sequence, and A_t is the approach and avoidance measure. Thus, a positive value of A_t indicates an approach behavior, while a negative value indicates an avoidance behavior.

Figure 1: Approach and avoidance measure based on the inter-ocular distance (red line). The right image shows an avoidance behavior relative to the position of the actor in the left image, from a video of the AFEW database [9].
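As an illustration of the heuristics and of equation (3), the following Python/OpenCV sketch shows one possible implementation. The cascade files, the detection parameters and the helper names are our own assumptions and simplifications, not the exact code used for the study.

```python
# Hypothetical sketch of the eye-pair heuristics and of equation (3);
# it approximates the method described above, not the authors' exact code.
import itertools
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_pair(gray_frame):
    """Return the centers and the inter-ocular distance of the retained eye pair, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(faces) == 0:
        return None
    fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    roi = gray_frame[fy:fy + fh, fx:fx + fw]
    eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    if len(eyes) < 2:
        return None
    # Heuristic 1: keep the pair of detections whose areas are most similar.
    e1, e2 = min(itertools.combinations(eyes, 2),
                 key=lambda p: abs(p[0][2] * p[0][3] - p[1][2] * p[1][3]))
    # Heuristic 2 (eq. 1): vertical offset below 10% of the face height.
    if abs(e1[1] - e2[1]) >= 0.1 * fh:
        return None
    # Heuristic 3 (eq. 2): the two eye circles must not overlap.
    c1 = (e1[0] + e1[2] / 2.0, e1[1] + e1[3] / 2.0)
    c2 = (e2[0] + e2[2] / 2.0, e2[1] + e2[3] / 2.0)
    r1, r2 = e1[2] / 2.0, e2[2] / 2.0
    dist = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5
    if dist < r1 + r2:
        return None
    return c1, c2, dist   # dist is the inter-ocular distance d_t in pixels

def approach_avoidance(d_t, d_1):
    """Equation (3): positive value = approach, negative value = avoidance."""
    return d_t - d_1
```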

3.2 Results

We analyzed the mean differences of the approach and avoidance measure between the different emotions in the AFEW dataset [9]. A repeated-measures ANOVA design was used for the analysis. The independent variable was the Emotion factor, with seven levels: Angry, Disgust, Fear, Happy, Neutral, Sad and Surprise. The dependent variable was the approach and avoidance measure. The hypotheses of this study were:

H1.1: The measure of approach and avoidance behavior shows significant differences across the evaluated emotions.

H1.2: Positive emotions (Happy) present an approach behavior.

H1.3: Negative emotions (Angry, Disgust, Fear and Sad) present an avoidance behavior.

Results showed a significant main effect of the Emotion factor (F(6,7377) = 25.86, p < 0.0001). This confirms H1.1. Post-hoc comparisons (Tukey's method) revealed significant differences between Angry and Happy (T=5.61, p < 0.0001), Angry and Neutral (T=9.34, p < 0.0001), Angry and Sad (T=4.24, p < 0.001), Angry and Surprise (T=7.07, p < 0.0001), Disgust and Happy (T=4.31, p < 0.001), Disgust and Neutral (T=8.39, p < 0.0001), Disgust and Surprise (T=5.95, p < 0.0001), Fear and Neutral (T=8.27, p < 0.0001), Fear and Surprise (T=6.22, p < 0.0001), Happy and Neutral (T=3.67, p < 0.01), Neutral and Sad (T=-5.35, p < 0.0001), and Sad and Surprise (T=3.39, p < 0.05).

In Figure 2, we observe higher means of the approach and avoidance measure for positive emotions (Happy and Surprise) and lower means for negative emotions (Angry, Disgust and Fear). This confirms H1.2 and H1.3.

Figure 2: Main effect plot for the approach and avoidance behavior measure with respect to the emotions analyzed in the dataset.
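The reported analysis can be approximated with standard Python tooling. The sketch below assumes one observation per row in a pandas DataFrame with columns "emotion" and "measure" (our own naming) and uses a one-way ANOVA followed by Tukey's HSD; it is not the statistical package actually used by the authors.

```python
# Hypothetical sketch of the emotion-effect analysis (one-way ANOVA + Tukey HSD);
# column names and data layout are assumptions, not the authors' actual pipeline.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze(df: pd.DataFrame):
    """df: one row per observation, with columns 'emotion' and 'measure'."""
    groups = [g["measure"].values for _, g in df.groupby("emotion")]
    f_value, p_value = stats.f_oneway(*groups)               # main effect of Emotion
    print(f"F = {f_value:.2f}, p = {p_value:.4g}")
    tukey = pairwise_tukeyhsd(df["measure"], df["emotion"])  # post-hoc pairwise tests
    print(tukey.summary())
```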

4. QUANTITY OF MOTION
4.1 Method
For each video sequence in the AFEW dataset, we measure the quantity of motion of the facial expression. First, for each video frame, the face is detected using the AdaBoost algorithm [16]. Then, the quantity of motion is computed inside the detected face region. This quantity of motion is obtained using motion history images (MHI) [7], which provide an image template showing the recency of motion in a sequence. Finally, the quantity of motion is normalized with respect to the detected face region using the following equation:

   Q = M / A_f                                            (4)

where Q is the amount of motion, which varies between 0 and 1, M is the number of pixels where motion has been detected, and A_f corresponds to the area of the detected face region.

Figure 3: Quantity of motion obtained from a facial expression extracted from a video of the AFEW dataset [9]. The left image shows a video frame of the dataset. The right image shows the quantity of motion that we computed.
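A simplified sketch of the quantity-of-motion measure is given below. It approximates the MHI-based computation with plain frame differencing inside the detected face box and then applies equation (4); the threshold value and the use of frame differencing instead of a full motion-history image are our own simplifications.

```python
# Hypothetical sketch of the quantity-of-motion measure (eq. 4), using simple
# frame differencing inside the detected face box as a stand-in for a full MHI.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def quantity_of_motion(prev_gray, curr_gray, motion_threshold=25):
    """Return Q in [0, 1]: moving pixels inside the face region / face area."""
    faces = face_cascade.detectMultiScale(curr_gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])           # largest face
    diff = cv2.absdiff(curr_gray[y:y + h, x:x + w], prev_gray[y:y + h, x:x + w])
    moving = np.count_nonzero(diff > motion_threshold)           # M: motion pixels
    return moving / float(w * h)                                 # A_f: face area

def sequence_motion(gray_frames):
    """Mean quantity of motion over a whole video sequence."""
    values = [quantity_of_motion(p, c) for p, c in zip(gray_frames, gray_frames[1:])]
    values = [v for v in values if v is not None]
    return sum(values) / len(values) if values else None
```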

4.2 Analysis and results
We analyzed the mean differences of the quantity of motion between the different emotions in the AFEW dataset. The data for each video sequence were entered into a one-factor (Emotion) within-subjects ANOVA. The independent variable was the Emotion factor, with seven levels: Angry, Disgust, Fear, Happy, Neutral, Sad and Surprise. The dependent variable was the quantity of motion. The hypotheses of this study were:

H2.1: The measure of quantity of motion shows significant differences across the evaluated emotions.

H2.2: The emotions Angry, Surprise, Happy, Fear and Disgust present higher mean quantities of motion than Neutral and Sad, since these latter two emotions have lower activation values according to [19].

H2.3: The emotions Neutral and Sad present lower mean quantities of motion than the other emotions, since these two emotions have a lower arousal value.

Results showed a significant main effect of the Emotion factor (F(6,7607) = 144.68, p < 0.0001). This confirms H2.1. Post-hoc comparisons (Tukey's method) revealed significant differences between Angry and Disgust (T=-14.71, p < 0.0001), Angry and Fear (T=-8.96, p < 0.0001), Angry and Happy (T=13.21, p < 0.0001), Angry and Neutral (T=-25.83, p < 0.0001), Angry and Sad (T=-17.40, p < 0.0001), Angry and Surprise (T=3.85, p < 0.01), Disgust and Fear (T=4.46, p < 0.001), Disgust and Neutral (T=-11.14, p < 0.0001), Fear and Happy (T=-3.19, p < 0.05), Fear and Neutral (T=-14.40, p < 0.0001), Fear and Sad (T=-7.06, p < 0.0001), Fear and Surprise (T=4.98, p < 0.0001), Happy and Neutral (T=-12.34, p < 0.0001), Happy and Sad (T=4.25, p < 0.001), Happy and Surprise (T=8.67, p < 0.0001), Neutral and Sad (T=7.92, p < 0.0001), Neutral and Surprise (T=20.40, p < 0.0001), and Sad and Surprise (T=12.64, p < 0.0001).

In Figure 4, we observe higher means of the quantity of motion for the emotions Angry, Disgust, Fear, Happy and Surprise, and lower means for the emotions Neutral and Sad. This confirms H2.2 and H2.3.

Figure 4: Main effect plot for the quantity of motion with respect to the emotions analyzed in the dataset.

5. FACE READER
FaceReader is facial analysis software developed by Noldus Information Technology [8]. It aims at detecting emotional expressions in the face, identifying six basic emotions (happy, sad, angry, surprise, fear and disgust) as well as the neutral state. It provides a score for each emotion category, thus estimating the detection of blends of several basic emotions. FaceReader has been evaluated and used in several applications, mainly under controlled conditions [4]. The main advantage of this software is that it can be used in real-time scenarios [8].

5.1 Method
FaceReader was executed on each video sequence in the dataset (Train and Val data). For each video frame, a measure named "expression intensity" is obtained for each emotion. This expression intensity describes the emotion as a value between 0 and 1. The dominant emotion is selected by choosing the expression intensity with the highest value that is above a threshold. FaceReader also provides, for each video sequence, a value named "valence", which indicates whether the emotional status is positive or negative. The valence is calculated as the intensity of Happy minus the intensity of the negative emotion with the highest intensity. A positive valence indicates a positive emotion, while a negative valence indicates a negative emotion.
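The per-frame post-processing described above can be expressed as in the sketch below. The input format (a dict of intensities per frame) and the threshold value of 0.5 are illustrative assumptions on our part, not part of FaceReader's actual output or API.

```python
# Hypothetical sketch of the per-frame post-processing of expression intensities;
# the input format and the threshold are assumptions, not FaceReader's actual API.
NEGATIVE = ("Sad", "Angry", "Fear", "Disgust")

def dominant_emotion(intensities, threshold=0.5):
    """intensities: dict emotion -> score in [0, 1] for one video frame."""
    emotion, score = max(intensities.items(), key=lambda kv: kv[1])
    return emotion if score >= threshold else None   # dominant only above threshold

def valence(intensities):
    """Happy intensity minus the strongest negative intensity."""
    return intensities["Happy"] - max(intensities[e] for e in NEGATIVE)

# Example frame with illustrative scores:
frame = {"Neutral": 0.33, "Happy": 0.06, "Sad": 0.09, "Angry": 0.05,
         "Surprise": 0.05, "Fear": 0.00, "Disgust": 0.01}
print(dominant_emotion(frame), valence(frame))   # -> None  -0.03
```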

5.2 Results
The results for all video sequences were grouped according to the emotion label of the dataset. For each emotion we computed the mean intensity value, the mean valence and the accuracy. Table 1 shows the mean valence computed from the expression intensities and the accuracy obtained by FaceReader. The accuracy is calculated as the number of frames correctly classified divided by the total number of frames for that particular emotion in the dataset. Table 2 shows the expression intensity values for each emotion in the dataset; the highest intensity value for each evaluated emotion is marked with an asterisk.

Table 1: Valence and accuracy obtained by FaceReader for each emotion in the dataset.

Dataset emotion   Valence   Accuracy
Neutral            -0.06     56.2%
Happy               0.29     48.2%
Sad                -0.04     16.3%
Angry              -0.08      6.4%
Surprise           -0.02     19.7%
Fear               -0.07      3.9%
Disgust            -0.12      3.3%

Table 2: Expression intensity values obtained by FaceReader for each emotion. For space reasons, the emotions in the columns are represented by letters: Neutral (N), Happy (H), Sad (S), Angry (A), Surprise (Sr), Fear (F) and Disgust (D).

Dataset emotion   N      H      S      A      Sr     F      D
Neutral           0.33*  0.06   0.09   0.05   0.05   0.00   0.01
Happy             0.21   0.38*  0.06   0.00   0.08   0.04   0.01
Sad               0.27*  0.11   0.10   0.06   0.06   0.01   0.03
Angry             0.40*  0.07   0.10   0.05   0.11   0.01   0.04
Surprise          0.26*  0.08   0.06   0.03   0.11   0.02   0.01
Fear              0.27*  0.05   0.10   0.03   0.10   0.03   0.01
Disgust           0.25*  0.06   0.13   0.06   0.05   0.02   0.02
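The per-emotion statistics reported in Table 1 can be reproduced from per-frame outputs with an aggregation along the following lines; the DataFrame layout and column names are our own assumptions.

```python
# Hypothetical aggregation of per-frame results into per-emotion accuracy and
# mean valence (as in Table 1); column names are illustrative assumptions.
import pandas as pd

def per_emotion_summary(frames: pd.DataFrame) -> pd.DataFrame:
    """frames: one row per video frame with columns
    'label' (dataset emotion), 'predicted' (dominant emotion or None), 'valence'."""
    frames = frames.copy()
    frames["correct"] = frames["predicted"] == frames["label"]
    summary = frames.groupby("label").agg(
        valence=("valence", "mean"),        # mean valence per dataset emotion
        accuracy=("correct", "mean"),       # correctly classified frames / all frames
    )
    summary["accuracy"] = (summary["accuracy"] * 100).round(1)  # percentage
    return summary
```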

6. DISCUSSION
Regarding the first study, a direct relation between the approach and avoidance behavior and the facial expression was found. Positive facial expressions are related to approach behaviors, while negative facial expressions are related to avoidance behaviors. Thus, we can confirm that this measure can effectively be used to discriminate between positive and negative emotions in real-world conditions. In the second study, the emotion expressed is directly related to the quantity of motion present in the facial expression. We observed that Angry presented the highest quantity of motion, while Neutral presented the lowest. These results can be used to discriminate facial expressions in real-world scenarios: for example, if the quantity of motion is very low, we can assume that the user is in a neutral state. In this way, by combining the information from these features, an automated emotion recognition system can discriminate in real-world scenarios whether the user is expressing a high-intensity/arousal positive emotion (e.g. Surprise), a high-intensity/arousal negative emotion (e.g. Angry) or a low-intensity/arousal negative emotion (e.g. Sad).

Regarding the third study, FaceReader presented a low accuracy except for the emotions Neutral and Happy (Table 1 and Table 2). The other emotions are still far from being correctly recognized in real-world conditions.
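The combination of cues suggested above can be illustrated with a simple rule-based sketch. The thresholds are placeholders that would have to be calibrated on data, and the rules are only one possible reading of the discussion, not a method proposed in the paper.

```python
# Hypothetical rule-based combination of the two cues discussed above;
# thresholds are illustrative placeholders, not values from the paper.
def coarse_emotion_class(quantity_of_motion, approach_avoidance,
                         low_motion=0.05, approach_margin=0.0):
    """Very coarse discrimination based on arousal (motion) and a valence proxy."""
    if quantity_of_motion < low_motion:
        return "low-arousal state (e.g. Neutral or Sad)"
    if approach_avoidance > approach_margin:
        return "high-arousal positive emotion (e.g. Happy or Surprise)"
    return "high-arousal negative emotion (e.g. Angry, Disgust or Fear)"
```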

7. CONCLUSIONS
In this paper, we evaluated two robust measures on the AFEW dataset, and we also evaluated commercially available emotion recognition software mainly used in laboratory conditions. The results showed that the evaluated measures are able to discriminate emotions under uncontrolled, real-world conditions. In addition, we discussed how the measures can be combined in order to infer the emotion of the user more precisely.

For future work, we plan to use these measures in a more elaborate emotion recognition system. In this case, the approach and avoidance measure and the motion features will be used to design a classifier providing robust emotion detection in real-world conditions. In addition, the evaluated measures will be combined with traditional geometric-based and appearance-based features in order to increase the accuracy of the emotion detector. Finally, we plan to use the evaluated measures to inform a virtual agent capable of providing affective feedback to the user with respect to the detected emotion.

8. REFERENCES

[1] Adams, R. G., Ambady, N., Macrae, C. N., and Kleck, R. E. 2012. Emotional expressions forecast approach-avoidance behavior. Motivation and Emotion. 30, 2, 177-186.

[2] Anderson, K. and McOwan, P. W. 2006. A real-time automated system for recognition of human facial expressions. IEEE Trans. Systems, Man, and Cybernetics, Part B. 36, 1, 96-105.

[3] Asteriadis, S., Karpouzis, K., and Kollias, S. 2009. Feature extraction and selection for inferring user engagement in an HCI environment. HCI International (July 2009), 19-24.

[4] Benta, K.-I., Van Kuilenburg, H., Eligio, U. X., Den Uyl, M., Cremene, M., Hoszu, A., and Cret, O. 2009. Evaluation of a system for real-time valence assessment of spontaneous facial expressions. Distributed Environments Adaptability, Semantics and Security Issues, International Romanian-French Workshop.

[5] Bettadapura, V. 2012. Face Expression Recognition and Analysis: The State of the Art. Technical Report. College of Computing, Georgia Institute of Technology.

[6] Cootes, T. F., Edwards, G. J., and Taylor, C. J. 2001. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence. 23, 6, 681-685.

[7] Davis, J. and Bobick, A. F. 1997. The representation and recognition of human movements using temporal templates. In Proceedings of the IEEE CVPR, 928-934.

[8] Den Uyl, M. J. and Van Kuilenburg, H. 2005. The FaceReader: Online facial expression recognition. In Proceedings of Measuring Behavior, 589-590.

[9] Dhall, A., Goecke, R., Joshi, J., Wagner, M., and Gedeon, T. 2013. Emotion Recognition in the Wild Challenge and Workshop 2013. ACM ICMI 2013.

[10] D'Mello, S. K. and Graesser, A. C. 2012. AutoTutor and Affective AutoTutor: Learning by talking with cognitively and emotionally intelligent computers that talk back. ACM Transactions on Interactive Intelligent Systems. 2, 4, 23:2-23:39.

[11] Fasel, B. and Luettin, J. 2003. Automatic facial expression analysis: a survey. Pattern Recognition. 36, 1, 259-275.

[12] Gómez Jáuregui, D. A., Philip, L., Clavel, C., Padovani, S., Bailly, M., and Martin, J.-C. 2013. Video analysis of approach-avoidance behaviors of teenagers speaking with virtual agents. In Proceedings of the 15th International Conference on Multimodal Interaction (ICMI 2013). To appear.

[13] Guo, G. and Dyer, C. R. 2005. Learning from examples in the small sample case: face expression recognition. IEEE Trans. Systems, Man and Cybernetics, Part B. 35, 3, 477-488.

[14] Lucey, S., Ashraf, A. B., and Cohn, J. F. 2007. Investigating spontaneous facial action recognition through AAM representations of the face. In Face Recognition Book, K. Kurihara, Ed. Pro Literatur Verlag, 275-286.

[15] McDuff, D., Kaliouby, R., Senechal, T., Amr, M., Cohn, J., and Picard, R. W. 2013. Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and spontaneous facial expressions collected in-the-wild. In 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (June 2013).

[16] Morency, L.-P., Rahimi, A., and Darrell, T. 2003. Adaptive view-based appearance models. In Computer Vision and Pattern Recognition. 1, 803-812.

[17] Pantic, M. and Rothkrantz, L. 2004. Facial action recognition for facial expression analysis from static face images. IEEE Trans. Syst., Man, Cybern. B, Cybern. 34, 3, 1449-1461.

[18] Piana, S., Stagliano, A., Camurri, A., and Odone, F. 2013. A set of full-body movement features for emotion recognition to help children affected by autism spectrum condition. In IDGEI International Workshop.

[19] Russell, J. A. and Mehrabian, A. 1977. Evidence for a three-factor theory of emotions. Journal of Research in Personality. 11, 273-294.

[20] Sebe, N., Lew, M. S., Cohen, I., Sun, Y., Gevers, T., and Huang, T. S. 2004. Authentic facial expression analysis. In International Conference on Automatic Face and Gesture Recognition, 517-522.

[21] Tariq, U., Lin, K.-H., Li, Z., Zhou, X., Wang, Z., Le, V., Huang, T. S., Lv, X., and Han, T. X. 2011. Emotion recognition from an ensemble of features. In Ninth IEEE International Conference on Automatic Face and Gesture Recognition (FG 2011), 872-877.

[22] Tian, Y. L., Kanade, T., and Cohn, J. F. 2005. Facial expression analysis. In Handbook of Face Recognition, S. Z. Li and A. K. Jain, Eds. Springer, 247-276.

[23] Valstar, M., Pantic, M., and Patras, I. 2004. Motion history for facial action detection from face video. In Int. Conf. Systems, Man and Cybernetics. 1, 635-640.

[24] Wu, T., Bartlett, M. S., and Movellan, J. R. 2010. Facial expression recognition using Gabor motion energy filters. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 42-47.

[25] Zeng, Z., Pantic, M., Roisman, G. I., and Huang, T. S. 2009. A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence. 31, 1, 39-58.

[26] Zhu, Z. and Ji, Q. 2005. Robust real-time eye detection and tracking under variable lighting conditions and various face orientations. Computer Vision and Image Understanding. 98, 1, 124-154.
