EAVI is a research group focused on embodied interaction with sound and image. We broach issues of whole-body interaction, haptic feedback, and sound-image relationships, all in live, real-time applications. We are a small group of academics, researchers, and PhD students carrying out cutting-edge research across a diverse range of topics, including motion capture, eye tracking, brain-computer interfaces, physiological bio-interfaces, machine learning, and auditory culture.
We are organizing Gen.AV, a 2-day hackathon on the topic of interactive computer-generated audiovisuals. During the hackathon, we will develop functioning prototypes of software tools for audiovisual performance. In particular, we would like to explore issues such as interaction design, reconfigurability, ease of use, audience…
Interactive Machine Learning as Musical Design Tool
Supervised learning algorithms can be understood not only as a set of techniques for building accurate models of data, but also as design tools that can enable rapid prototyping, iterative refinement, and embodied engagement, all activities that are…
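To illustrate the workflow behind this idea, here is a minimal sketch of an interactive machine learning loop for musical mapping: the performer demonstrates a few (input, sound-parameter) example pairs, and a simple nearest-neighbour model then maps new controller positions to synthesis parameters. The class name, parameters, and nearest-neighbour choice are illustrative assumptions for this sketch, not the API of any EAVI tool.

```python
import math

class GestureMapper:
    """Maps controller input vectors to synthesis parameters
    using 1-nearest-neighbour over recorded demonstrations."""

    def __init__(self):
        self.examples = []  # list of (input_vector, param_vector) pairs

    def record(self, inputs, params):
        # The performer demonstrates: "when I move here, sound like this."
        self.examples.append((list(inputs), list(params)))

    def map(self, inputs):
        # Return the parameters of the closest recorded example;
        # called in the control loop during performance.
        return min(self.examples, key=lambda ex: math.dist(ex[0], inputs))[1]

# Demonstration phase: two examples (2-D position -> [pitch_hz, amplitude])
mapper = GestureMapper()
mapper.record([0.0, 0.0], [200.0, 0.1])  # low position -> low pitch, quiet
mapper.record([1.0, 1.0], [800.0, 0.9])  # high position -> high pitch, loud

# Performance phase: a new position is mapped to the nearest demonstration
print(mapper.map([0.1, 0.1]))  # -> [200.0, 0.1]
```

The appeal of this loop as a design tool is that refining the mapping means recording another example and re-evaluating by ear, rather than editing code.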
Atau and I have a chapter in a new Springer book, Science, Music and Motion, part of their Lecture Notes in Computer Science series. Find it here: http://link.springer.com/chapter/10.1007/978-3-319-12976- The chapter is entitled “Making Data Sing” and reports on two projects we…
Our contribution to CHI 2015 was a note about how interactive machine learning can be used to create expressive interfaces for differently abled people. The team was Rebecca Fiebrink, Mick Grierson, and myself, and the work was the culmination of six months of research on…
Rapid-Mix is an EU-funded project bringing together research labs and creative companies, with the aim of delivering innovations in interactive technologies to users. We humans are highly expressive beings. Beyond explicit verbal communication, the human body is a major outlet for both conscious and…