EAVI is a research group focused on embodied interaction with sound and image. We address issues of whole-body interaction, haptic feedback, and sound-image relationships, all in live, real-time applications. We are a small group of academics, researchers, and PhD students carrying out cutting-edge research across a diverse range of topics, including motion capture, eye tracking, brain-computer interfaces, physiological bio-interfaces, machine learning, and auditory culture.
Interactive Machine Learning as Musical Design Tool: Supervised learning algorithms can be understood not only as a set of techniques for building accurate models of data, but also as design tools that can enable rapid prototyping, iterative refinement, and embodied engagement, all activities that are... Read More
Atau and I have a chapter in Science, Music and Motion, a new book published by Springer as part of their Lecture Notes in Computer Science series. Find it here: http://link.springer.com/chapter/10.1007/978-3-319-12976- The chapter is entitled "Making Data Sing", and reports on two projects we... Read More
Our contribution to CHI 2015 was a note on how Interactive Machine Learning can be used to create expressive interfaces for differently abled people. The team was Rebecca Fiebrink, Mick Grierson, and myself, and the work was the culmination of six months of research on... Read More
We just got back from the ACM CHI Conference on Human Factors in Computing Systems in Seoul, Korea. CHI is one of the largest conferences in the field, with over 3,000 attendees this year. The CHI experience is as overwhelming as it is exciting. With 15 parallel tracks, there's always something... Read More
Rapid-Mix is an EU-funded project bringing together research labs and creative companies with the aim of delivering innovations in interactive technology to users. We humans are highly expressive beings. Beyond explicit verbal communication, the human body is a major outlet for both conscious and... Read More