Talk by Jules Françoise: Gesture–Sound Mapping by Demonstration for the Design of Sonic Interactions

Jules Françoise from IRCAM – Centre Pompidou will give a lecture entitled “Gesture–Sound Mapping by Demonstration for the Design of Sonic Interactions”.

Thursday 14th November, at 12:00pm
Goldsmiths College, Richard Hoggart Building, Room 137a

Abstract

This work focuses on the computational modeling of gesture–sound mapping in interactive systems for sound and music performance. The range of applications of such systems is widening, extending beyond music performance to gaming, sonic interaction design, and rehabilitation.

The design of the mapping between gesture and sound is a crucial element of such systems, as it shapes the interaction possibilities. We propose an approach, called mapping by demonstration, that aims to learn the mapping from examples provided interactively by the user, for example gestures performed while listening to sound examples.
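As a rough illustration of this workflow (not the system presented in the talk), the sketch below records paired gesture and sound-parameter frames during a demonstration phase and then fits a simple regression that drives synthesis parameters at performance time. The feature dimensions, the random placeholder data, and the choice of a k-nearest-neighbours regressor are all assumptions made for the example.

```python
# Hypothetical sketch of the mapping-by-demonstration workflow:
# demonstrate (record paired frames), train, then perform.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstration phase: gesture features captured while the user listens to a
# sound example, paired frame-by-frame with that sound's synthesis parameters.
# Shapes and feature choices are placeholders.
gesture_frames = np.random.randn(1000, 6)   # e.g. accelerometer / position features
sound_params = np.random.randn(1000, 3)     # e.g. synthesis control parameters

# Training phase: learn the gesture-to-sound mapping from the demonstrations.
mapping = KNeighborsRegressor(n_neighbors=5).fit(gesture_frames, sound_params)

# Performance phase: an incoming gesture frame is mapped to sound parameters
# in real time and would be sent on to a synthesizer (not shown here).
new_frame = np.random.randn(1, 6)
predicted_params = mapping.predict(new_frame)
```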

We propose a modeling framework that provides a multimodal, multilevel representation of gesture–sound mapping, and we detail two contributions addressing orthogonal aspects of this general framework. First, we introduce an extension of continuous temporal mapping methods to complex time structures through a hierarchical model. Second, we propose a multimodal probabilistic model able to jointly model gesture and sound sequences, thereby capturing both the temporal dynamics of the mapping and its expressive variations. Finally, we discuss ongoing experiments on the evaluation of these systems through a set of applications in sonic interaction design.
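The models discussed in the talk are temporal and hierarchical; as a much simpler stand-in for the idea of a joint probabilistic model queried conditionally, the following sketch fits a Gaussian mixture over concatenated gesture and sound features and infers sound features from a gesture frame by conditional expectation (Gaussian Mixture Regression). The function names, dimensions, and the use of a static GMM rather than a sequence model are assumptions for illustration only.

```python
# Illustrative joint model over [gesture | sound] features, queried by
# conditional expectation (Gaussian Mixture Regression).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_model(gestures, sounds, n_components=4):
    """Fit a GMM on joint [gesture | sound] vectors from demonstration data."""
    joint = np.hstack([gestures, sounds])
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(joint)

def predict_sound(gmm, gesture, dim_g):
    """Conditional expectation E[sound | gesture] under the joint GMM."""
    n_sound = gmm.means_.shape[1] - dim_g
    weights = np.empty(gmm.n_components)
    cond_means = np.empty((gmm.n_components, n_sound))
    for k in range(gmm.n_components):
        mu_g, mu_s = gmm.means_[k, :dim_g], gmm.means_[k, dim_g:]
        cov = gmm.covariances_[k]
        cov_gg, cov_sg = cov[:dim_g, :dim_g], cov[dim_g:, :dim_g]
        # Responsibility of component k for the observed gesture frame.
        weights[k] = gmm.weights_[k] * multivariate_normal.pdf(gesture, mu_g, cov_gg)
        # Conditional mean of the sound features given the gesture frame.
        cond_means[k] = mu_s + cov_sg @ np.linalg.solve(cov_gg, gesture - mu_g)
    weights /= weights.sum()
    return weights @ cond_means

# Usage with placeholder data: gesture features (6-D) paired with sound
# descriptors (3-D), then a new gesture frame mapped to sound features.
gestures, sounds = np.random.randn(500, 6), np.random.randn(500, 3)
model = fit_joint_model(gestures, sounds)
sound_features = predict_sound(model, np.random.randn(6), dim_g=6)
```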

Bio


Jules Françoise is a PhD candidate in the “Interactions Sound Music Movement” team at Ircam, under the supervision of Frédéric Bevilacqua. After a Master’s degree in acoustics, he moved towards computer science through the ATIAM Master’s programme at Ircam, studying gestural interaction with sound in interactive musical systems. He currently focuses on modeling gesture, sound, and their mapping for the expressive control of sound synthesis, with a particular interest in the articulation between machine learning and human–computer interaction.