Predicting when to laugh with structured classification

Bilal Piot (1, 2), Olivier Pietquin (3, 4), Matthieu Geist (2)
1 SEQUEL - Sequential Learning (LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal)
Abstract: Today, Embodied Conversational Agents (ECAs) are emerging as natural media for interacting with machines. Applications are numerous, and ECAs can reduce the technological gap between people by providing user-friendly interfaces. Yet, ECAs are still unable to produce social signals appropriately during their interactions with humans, which tends to make the interaction less natural. In particular, very little attention has been paid to the use of laughter in human-avatar interactions, despite the crucial role laughter plays in human-human interaction. In this paper, a method for predicting the most appropriate moment for an ECA to laugh is proposed. Imitation learning via a structured classification algorithm is used for this purpose and is shown to produce behavior similar to humans' on a practical application: the yes/no game.
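To make the structured-classification framing of the abstract concrete, the sketch below shows one common way such imitation learning can be set up; it is not the paper's implementation. Expert demonstrations of laugh/no-laugh decisions are imitated by learning a linear score over joint state-action features with a structured perceptron, and the agent then laughs whenever that action scores highest. The feature vectors, the toy demonstrations, and the helpers joint_features, train_structured_perceptron and should_laugh are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's algorithm): imitation learning for
# the laugh / stay-silent decision cast as structured multiclass classification.
import numpy as np

ACTIONS = [0, 1]  # 0 = stay silent, 1 = laugh

def joint_features(state, action, n_actions=len(ACTIONS)):
    """Joint state-action feature map phi(s, a): the state features are copied
    into the block corresponding to the chosen action."""
    phi = np.zeros(len(state) * n_actions)
    phi[action * len(state):(action + 1) * len(state)] = state
    return phi

def train_structured_perceptron(demos, n_features, epochs=20):
    """Learn a linear score w . phi(s, a) from expert pairs (state, expert_action)
    so that the expert's action obtains the highest score in each state."""
    w = np.zeros(n_features * len(ACTIONS))
    for _ in range(epochs):
        for state, expert_action in demos:
            predicted = max(ACTIONS, key=lambda a: w @ joint_features(state, a))
            if predicted != expert_action:
                # Structured-perceptron update toward the expert's action.
                w += joint_features(state, expert_action) - joint_features(state, predicted)
    return w

def should_laugh(w, state):
    """Greedy policy: laugh if the 'laugh' action has the highest learned score."""
    return max(ACTIONS, key=lambda a: w @ joint_features(state, a)) == 1

# Hypothetical usage: each state is a feature vector describing the dialogue
# context (e.g. partner laughing, pause length, a game event).
demos = [
    (np.array([1.0, 0.2, 0.0]), 1),  # partner laughing, short pause -> expert laughed
    (np.array([0.0, 0.9, 1.0]), 0),  # no laughter cue, long pause -> expert stayed silent
]
w = train_structured_perceptron(demos, n_features=3)
print(should_laugh(w, np.array([1.0, 0.1, 0.0])))
```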
Document type: Conference papers

Cited literature: 22 references

https://hal-supelec.archives-ouvertes.fr/hal-01104739
Contributor: Sébastien van Luchene
Submitted on: Monday, January 19, 2015 - 10:39:34 AM
Last modification on: Wednesday, July 31, 2019 - 4:18:02 PM
Long-term archiving on: Monday, April 20, 2015 - 10:31:28 AM

File

supelec887.pdf (files produced by the author(s))

Licence


Distributed under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International License

Identifiers

  • HAL Id: hal-01104739, version 1

Citation

Bilal Piot, Olivier Pietquin, Matthieu Geist. Predicting when to laugh with structured classification. InterSpeech 2014, Sep 2014, Singapore, Singapore. pp.1786-1790. ⟨hal-01104739⟩

Metrics

Record views: 2361
File downloads: 188