The Multimodal Nature of Embodied Conversational Agents

Abstract

Embodied conversational agents (ECAs) have become ubiquitous in human-computer interaction applications. Implementing humanlike multimodal behavior in these agents is difficult, because so little is known about the alignment of facial expression, eye gaze, gesture, speech and dialogue act. The current study used data from an extensive study of human face-to-face multimodal communication to develop a multimodal ECA, and tested to what extent multimodal behavior influenced the human-computer interaction. Results from a persona assessment questionnaire showed that the presence of facial expressions, gesture and intonation had a positive effect on five assessment scales. Eye-tracking results showed that facial expressions played a primarily pragmatic role, whereas intonation played a primarily semantic role. Gestures played a pragmatic or semantic role, depending on their level of specificity. These findings shed light on multimodal behavior within and between human and digital dialogue partners.
