EAVI is a research group focused on embodied interaction with sound and image. We address issues of whole-body interaction, haptic feedback, and sound-image relationships, all in live, real-time applications. We are a small group of academics, researchers, and PhD students carrying out cutting-edge research across a diverse range of topics, including motion capture, eye tracking, brain-computer interfaces, physiological bio-interfaces, machine learning, and auditory culture.

Recent News

Rebecca Fiebrink seminar at IRCAM

Interactive Machine Learning as a Musical Design Tool. Supervised learning algorithms can be understood not only as a set of techniques for building accurate models of data, but also as design tools that enable rapid prototyping, iterative refinement, and embodied engagement: all activities that are...

“Making Data Sing: Embodied Approaches to Sonification” – new publication by Adam Parkinson and Atau Tanaka

Atau and I have a chapter in Science, Music and Motion, a new book published by Springer as part of their Lecture Notes in Computer Science series. Find it here: http://link.springer.com/chapter/10.1007/978-3-319-12976- The chapter is entitled “Making Data Sing”, and reports on two projects we...

Expressive Interfaces for Differently Abled at CHI 2015 by Simon Katan

Our contribution to CHI 2015 was a Note on how interactive machine learning can be used to create expressive interfaces for differently abled people. The team was Rebecca Fiebrink, Mick Grierson, and myself, and the work was the culmination of six months of research on...

Expressivity, Muscle Sensing and Intelligent Machines at CHI 2015

We just got back from the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) in Seoul, Korea. CHI is one of the largest conferences in the field, drawing over 3,000 attendees this year. The CHI experience is as overwhelming as it is exciting. With 15 parallel tracks, there’s always something...