|AVATAR Lecture Series|
|Interactive Machine Learning in Music Composition and Performance|
|Margaret Schedel, Stony Brook University; Rebecca Fiebrink, Princeton University|
|Johnston Hall 338|
March 02, 2011 - 10:30 am
Supervised learning offers a useful set of computational tools for many problems in computer music composition and performance. Through the use of training examples, these algorithms give composers and instrument builders a means to implicitly specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the synthesis or structural parameters of dynamically-generated digital audio or visuals). However, existing software tools have not adequately enabled musicians to employ supervised learning in their work. Rebecca Fiebrink's recent dissertation research has focused on building better software for these users by supporting more appropriate and comprehensive end-user interactions with the supervised learning process. Margaret Schedel is a composer, cellist, and one of Fiebrink's collaborators who has used this software in her work with a sensor bow. In this talk, we will provide a brief introduction to interactive computer music and the use of supervised learning in this field. We will then show a live musical demo of the software that Fiebrink has created for interactively applying standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training examples by real-time demonstration and the evaluation of trained models through application to real-time inputs. In addition to demonstrating how the Wekinator can be used to create new digital musical instruments, we will demonstrate how we used it to train a set of models to label Schedel's cello bowing gestures in live performance.
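The core idea described above, training on example (control signal, computer response) pairs and then mapping new inputs to outputs in real time, can be illustrated with a minimal sketch. This is not the Wekinator's implementation (it wraps standard algorithms from the Weka toolkit); the example below is a hypothetical k-nearest-neighbour regressor with toy data, showing how demonstrated sensor readings might be mapped to synthesis parameters.

```python
import math

def train(examples):
    # examples: list of (feature_vector, parameter_vector) pairs, e.g.
    # sensor-bow readings paired with desired synthesis parameters.
    # k-NN is a "lazy" learner: training just stores the demonstrations.
    return list(examples)

def predict(model, query, k=3):
    # Find the k stored examples nearest the query point and average
    # their parameter vectors (k-nearest-neighbour regression).
    nearest = sorted(model, key=lambda ex: math.dist(ex[0], query))[:k]
    n_params = len(nearest[0][1])
    return [sum(ex[1][i] for ex in nearest) / k for i in range(n_params)]

# Toy training set: 2-D "sensor" readings -> one synthesis parameter.
model = train([
    ((0.0, 0.0), (0.0,)),
    ((0.0, 1.0), (0.2,)),
    ((1.0, 0.0), (0.8,)),
    ((1.0, 1.0), (1.0,)),
])
print(predict(model, (0.9, 0.9), k=1))  # -> [1.0]
```

In an interactive setting, `train` would be re-run each time the performer demonstrates new examples, and `predict` would be called on every incoming sensor frame, which is the workflow the Wekinator supports with real-time demonstration and evaluation.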
In the remainder of the talk, Fiebrink will discuss a selection of the findings of her research with students and composers applying the Wekinator to real-world problems. She will highlight a few of the challenges and opportunities presented by interactive systems for end-user machine learning and discuss how supervised learning can function as a tool for supporting creativity and an embodied approach to design. Schedel will share a composer's perspective on the use of interactive machine learning in music, including how she is using machine learning in her current work.
An Assistant Professor of Music at Stony Brook University, Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She is working towards a certificate in Deep Listening with Pauline Oliveros and serves as the musical director for Kinesthetech Sense. She sits on the boards of 60x60 Dance, the BEAM Foundation, the EMF Institute, ICMA, NWEAMO, and Organised Sound. Rebecca Fiebrink is a newly appointed Assistant Professor of Computer Science (also Music) at Princeton University. She studies machine learning from an HCI perspective in order to build interactive machine learning systems that are more effective and efficient, and that allow human users to apply machine learning successfully to new problems. She is especially interested in applying machine learning to real-time, interactive, and creative domains. Within music, she studies human-computer interaction in both composition and live performance, and she develops new compositional and performance technologies in collaboration with other composers and performers.