This article is part of the series Facial Image Processing.

Open Access Research Article

Multispace Behavioral Model for Face-Based Affective Social Agents

Ali Arya1* and Steve DiPaola2

Author Affiliations

1 Carleton School of Information Technology, Carleton University, Ottawa, ON K1S 5B6, Canada

2 School of Interactive Arts & Technology, Simon Fraser University, Surrey, BC V3T 0A3, Canada

EURASIP Journal on Image and Video Processing 2007, 2007:048757, doi:10.1155/2007/48757


The electronic version of this article is the complete one and can be found online at: http://jivp.eurasipjournals.com/content/2007/1/048757


Received: 26 April 2006
Revisions received: 4 October 2006
Accepted: 22 December 2006
Published: 7 March 2007

© 2007 Arya and DiPaola

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial-feature level. The personality and mood spaces draw on findings from behavioral psychology to relate perceived personality types and emotional states to facial actions and expressions, using two-dimensional models of personality and emotion. The knowledge space encapsulates the tasks to be performed and the decision-making process through a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces offer flexible means of designing complex personality types, facial expressions, and dynamic interactive scenarios.
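
To make the layering concrete, the following is a minimal sketch of how two-dimensional personality and mood spaces could drive facial-feature-level parameters in a geometry space. All class names, axis names (dominance/affiliation, valence/arousal), parameter names, and weights here are illustrative assumptions for exposition; they are not the paper's actual model, mappings, or API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of higher-level parameter spaces driving a
# lower-level geometry space. Names and weights are assumptions, not the
# paper's published model.

@dataclass
class Personality:
    # Two-dimensional personality model (assumed dominance/affiliation axes).
    dominance: float = 0.0    # -1.0 .. 1.0
    affiliation: float = 0.0  # -1.0 .. 1.0

@dataclass
class Mood:
    # Two-dimensional emotion model (assumed valence/arousal axes).
    valence: float = 0.0      # -1.0 .. 1.0
    arousal: float = 0.0      # -1.0 .. 1.0

@dataclass
class GeometrySpace:
    # Facial-feature-level parameters, conceptually MPEG-4 FAP-like weights.
    params: dict = field(default_factory=dict)

def apply_behavior(personality: Personality, mood: Mood) -> GeometrySpace:
    """Map the higher-level spaces onto facial-feature parameters.

    The weightings below are placeholders; the paper derives such links from
    behavioral-psychology findings relating perceived traits and emotions to
    specific facial actions.
    """
    geo = GeometrySpace()
    # Example: affiliative personalities and positive moods both raise the
    # lip corners; arousal widens the eyes; dominance lowers the brows.
    geo.params["lip_corner_raise"] = 0.5 * personality.affiliation + 0.5 * mood.valence
    geo.params["eye_opening"] = 0.3 * mood.arousal
    geo.params["brow_lower"] = 0.4 * personality.dominance - 0.2 * mood.valence
    return geo

if __name__ == "__main__":
    # A warm, calm character in a mildly positive mood.
    g = apply_behavior(Personality(dominance=-0.2, affiliation=0.8),
                       Mood(valence=0.4, arousal=0.1))
    print(g.params)
```

In the paper's architecture, the knowledge space would sit above such a mapping, selecting tasks and decisions (scripted in the XML-based language) that modulate mood and, through it, the geometry-level output.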
