Animating a Conversational Agent with User Expressivity

M.K. Rajagopal1, P. Horain1, and C. Pelachaud2

1 Institut Telecom, Telecom SudParis, 9 rue Charles Fourier, 91011 Évry Cedex, France
2 CNRS Telecom ParisTech, 37-39 rue Dareau, 75014 Paris, France
{Manoj_kumar.Rajagopal,Patrick.Horain}@Telecom-sudparis.eu, [email protected]

Abstract. Our objective is to animate an embodied conversational agent (ECA) with communicative gestures rendered with the expressivity of the real human user it represents. We describe an approach to estimate a subset of the expressivity parameters defined in the literature (namely spatial and temporal extent) from captured motion trajectories. We first validate this estimation against synthesized motion and then show results with real human motion. The estimated expressivity is then sent to the animation engine of an ECA, which becomes a personalized autonomous representative of that user.

1 Introduction

In day-to-day life, people express gestures with their own natural variations. These variations are consistent for a given individual across gestures, so we call them "expressivity". We consider the expressivity parameters defined by Hartmann et al. [1], which are based on wrist movement in 3D space rather than on joint-angle (shoulder, elbow, etc.) information. In this work, we estimate the spatial extent (SPC) and temporal extent (TMP) of Hartmann et al.'s [1] expressivity parameters from captured user motion and then animate an ECA to render the captured expressivity.

2 Estimating Expressivity Parameters and Their Validation

A gesture is formed from a set of key poses of wrist positions p; with both SPC and TMP equal to 0.0 it is called the "basic gesture". From the wrist positions of human communicative gestures, we compute the 3D distance between the wrists and the sacroiliac vertebra. This distance is mapped to SPC in the range -1 to +1. The wrist positions p' of a person's captured communicative gestures are scaled by a factor of (1 + SPC) relative to the basic gesture wrist positions p, i.e. p' = (1 + SPC) p. For SPC = 0, the wrist position p' is the same as p. When p is known, SPC is determined by substituting p into the above equation.

TMP is measured from wrist speed. The instant speed is the distance covered between consecutive poses divided by the time between them; the speed over the whole motion is derived from these instant speeds. For the example trajectories we considered, sorting the instant speeds in descending order and taking the value at the 5 to 5.5% upper quantile gives a 99% correlation with the ground-truth TMP values. This upper-quantile value is mapped to TMP in the range -1.0 to +1.0 through linear regression.

We tested the estimation method against synthesized motion with known SPC and TMP values to validate our process. The absolute mean error of the estimated SPC with respect to the ground-truth SPC is 0.13. Similarly, the absolute mean error of the estimated TMP with respect to the actual TMP is 0.15. These error values show that our method works well for estimating the expressivity parameters.

H. Högni Vilhjálmsson et al. (Eds.): IVA 2011, LNAI 6895, pp. 464–465, 2011. © Springer-Verlag Berlin Heidelberg 2011
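The estimation described above can be sketched in a few lines of Python. This is our own illustration under the stated model (p' = (1 + SPC) p, and an upper-quantile speed feature mapped to TMP by linear regression); all function names and the NumPy-based implementation are assumptions, not the authors' code, and the regression coefficients must be fitted on motions with known TMP (e.g. the synthesized validation set):

```python
import numpy as np

def estimate_spc(observed, basic):
    """Least-squares estimate of SPC under the model p' = (1 + SPC) * p.

    `observed` and `basic` are matched arrays of wrist positions (or of
    wrist-to-sacroiliac distances) for the captured and basic gestures.
    """
    o = np.asarray(observed, dtype=float).ravel()
    b = np.asarray(basic, dtype=float).ravel()
    scale = b.dot(o) / b.dot(b)   # best-fit scale factor (1 + SPC)
    return scale - 1.0

def speed_feature(positions, dt, q=0.95):
    """Upper-quantile instant wrist speed used as the TMP feature.

    Instant speed = distance between consecutive poses / time step;
    q = 0.95 corresponds to the ~5% upper quantile of the sorted speeds.
    """
    p = np.asarray(positions, dtype=float)
    speeds = np.linalg.norm(np.diff(p, axis=0), axis=1) / dt
    return np.quantile(speeds, q)

def fit_tmp_mapping(features, tmp_values):
    """Linear regression mapping the speed feature to TMP in [-1, 1]."""
    slope, intercept = np.polyfit(features, tmp_values, deg=1)
    return slope, intercept
```

Given the fitted slope and intercept, a new motion's TMP estimate is simply `slope * speed_feature(positions, dt) + intercept`, clipped to [-1, 1].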

3 Experiments

We used motion data captured by computer vision from two video lectures (hereafter named V1 and V2) using the software developed by Gómez Jáuregui et al. [2], which outputs the upper-body joint angles. 3D wrist positions are obtained from these joint angles using forward kinematics [3]. The experiment yields an SPC of 0.8 for V1 and 0.6 for V2, and a TMP of -1.0 for V1 and -0.7 for V2. These estimated values are given as input to Greta [4], and the conversational agent is animated.
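The forward-kinematics step can be illustrated with a toy two-segment arm. The joint parameterization (shoulder yaw and pitch, elbow flexion) and segment lengths below are assumptions for illustration only, not the kinematic model of [2] or [3]:

```python
import numpy as np

def rot_y(a):
    """Rotation matrix about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    """Rotation matrix about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def wrist_position(shoulder, upper_len, fore_len, yaw, pitch, elbow):
    """Chain the joint rotations shoulder -> elbow -> wrist.

    The upper arm extends along the local x axis; each segment is
    rotated by the accumulated orientation of the joints above it.
    """
    r_shoulder = rot_z(yaw) @ rot_y(pitch)            # shoulder orientation
    elbow_pos = shoulder + r_shoulder @ np.array([upper_len, 0.0, 0.0])
    r_elbow = r_shoulder @ rot_y(elbow)               # elbow flexion in local frame
    return elbow_pos + r_elbow @ np.array([fore_len, 0.0, 0.0])
```

Running this over the per-frame joint angles output by the motion-capture software yields the wrist trajectories from which SPC and TMP are estimated.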

4 Conclusion

By estimating SPC and TMP from the motion of a real user, we can animate a conversational agent to render the expressivity captured from that user. This animation can be played when the user is not available to control his avatar. Rendering the expressivity parameters allows generating personalized animations, so that the viewer has the feeling of interacting with an expressive virtual human.

References

1. Hartmann, B., Mancini, M., Pelachaud, C.: Implementing Expressive Gesture Synthesis for Embodied Conversational Agents. In: Gibet, S., Courty, N., Kamp, J.-F. (eds.) GW 2005. LNCS (LNAI), vol. 3881, pp. 188–199. Springer, Heidelberg (2006)
2. Gómez Jáuregui, D., Horain, P., Rajagopal, M., Karri, S.: Real-Time Particle Filtering with Heuristics for 3D Motion Capture by Monocular Vision. In: Multimedia Signal Processing, Saint-Malo, France, pp. 139–144 (2010)
3. Craig, J.: Forward Kinematics. In: Introduction to Robotics: Mechanics and Control, 3rd edn. Prentice-Hall, Englewood Cliffs (1986)
4. Pelachaud, C.: GRETA, http://perso.telecom-paristech.fr/~pelachau/Greta/
