Open Access Research Article

Multispace Behavioral Model for Face-Based Affective Social Agents

Ali Arya1* and Steve DiPaola2

Author Affiliations

1 Carleton School of Information Technology, Carleton University, Ottawa, ON K1S 5B6, Canada

2 School of Interactive Arts & Technology, Simon Fraser University, Surrey, BC V3T 0A3, Canada

EURASIP Journal on Image and Video Processing 2007, 2007:048757, doi:10.1155/2007/48757

Published: 7 March 2007

Abstract

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. The personality and mood spaces draw on findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models of personality and emotion. The knowledge space encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide a flexible means of designing complex personality types, facial expressions, and dynamic interactive scenarios.
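
To make the layered control concrete, the following is a minimal sketch of how three high-level spaces might jointly modulate low-level facial feature parameters. The class names, the two-dimensional axes, and the weighting scheme are illustrative assumptions for exposition only, not the paper's actual parameter spaces, XML language, or MPEG-4 parameter set.

# Illustrative sketch only: names, axes, and weights are assumptions,
# not the authors' implementation.
from dataclasses import dataclass
from typing import Dict

# Low-level geometry parameters at the facial-feature level, in the spirit of
# MPEG-4 facial animation parameters; the keys here are placeholders.
GeometryParams = Dict[str, float]

@dataclass
class Mood:
    valence: float = 0.0   # assumed two-dimensional emotion axes
    arousal: float = 0.0

@dataclass
class Personality:
    dominance: float = 0.0     # assumed two-dimensional personality axes
    affiliation: float = 0.0

@dataclass
class Knowledge:
    requested_expression: str = "neutral"  # task/decision layer, greatly simplified
    intensity: float = 0.0

def combine(knowledge: Knowledge, personality: Personality, mood: Mood) -> GeometryParams:
    """Map the three high-level spaces onto low-level feature parameters."""
    params: GeometryParams = {"eyebrow_raise": 0.0, "lip_corner_pull": 0.0}
    # Knowledge supplies the base action requested by the current task...
    if knowledge.requested_expression == "smile":
        params["lip_corner_pull"] += knowledge.intensity
    # ...mood biases it toward the agent's current emotional state...
    params["lip_corner_pull"] += 0.3 * mood.valence
    params["eyebrow_raise"] += 0.3 * mood.arousal
    # ...and personality scales the overall expressiveness.
    expressiveness = 0.5 + 0.5 * personality.affiliation
    return {k: max(0.0, min(1.0, v * expressiveness)) for k, v in params.items()}

if __name__ == "__main__":
    print(combine(Knowledge("smile", 0.8), Personality(0.2, 0.6), Mood(0.5, 0.1)))

The point of the sketch is the separation of concerns: the knowledge layer decides what to do, while personality and mood independently shape how it is rendered into the shared geometry space.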