Expressive Interfaces for Differently Abled at CHI 2015 by Simon Katan

Our contribution to CHI 2015 was a note about how Interactive Machine Learning can be used to create expressive interfaces for differently abled people. The team was Rebecca Fiebrink, Mick Grierson, and myself, and the work was the culmination of six months of research on Sound Lab, a NESTA-funded R&D project with community group Heart and Soul.


It fell to me to give the presentation, and I was a little nervous: this was my first presentation at CHI, and I had to squeeze the talk into seven minutes. We were in the HMDs & Wearables to Overcome Disabilities session, and it was heartening to observe that some renegade had vandalised the session sign by crossing out “Disabilities” and replacing it with “Challenges” – this was my sort of session.


The paper was well received and the questions were very supportive. The other work in the session was also impressive, in particular Mayank Goel’s work on using wireless signals for facial gesture detection. What excited me most, though, were the new ideas that emerged from subsequent conversations about our work. We now have a host of ideas for how to take this research forward, as well as some potential working partners.