Special Guest Lectures
Applications in Signal Processing and Music Informatics
Alexander Lerch, Assistant Professor
School of Music, Georgia Institute of Technology
Digital Media Center Theatre
October 25, 2013 - 10:00 am
Modern audio software offers many ways of processing and modifying an audio signal, including the ability to mash up two or more audio sequences by changing their tempo and pitch. Doing this automatically has two prerequisites: an accurate analysis of the timing of, and similarity between, the sequences, and a dynamic "time-stretch" algorithm for matching tempo and pitch. While this functionality is already available, it often lacks the necessary quality in both the analysis results and the artifact-free processing of the audio. This presentation will outline the approaches, typical problems, and recent developments in this area.
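The second prerequisite named above, time-stretching without pitch change, is commonly illustrated with a phase vocoder: analyze the signal with a short-time Fourier transform, resample the frames in time while keeping per-bin phase increments consistent, and resynthesize. The sketch below is a minimal, illustrative NumPy implementation of that textbook idea, not the production algorithms discussed in the talk; all function names and parameter choices are the author's own assumptions.

```python
import numpy as np

N_FFT, HOP = 1024, 256  # illustrative analysis parameters

def stft(x):
    """Hann-windowed short-time Fourier transform (one row per frame)."""
    w = np.hanning(N_FFT)
    return np.array([np.fft.rfft(w * x[i:i + N_FFT])
                     for i in range(0, len(x) - N_FFT, HOP)])

def istft(S):
    """Overlap-add resynthesis of an STFT frame matrix."""
    w = np.hanning(N_FFT)
    out = np.zeros(HOP * (len(S) - 1) + N_FFT)
    for i, spec in enumerate(S):
        out[i * HOP:i * HOP + N_FFT] += w * np.fft.irfft(spec)
    return out

def phase_vocoder(S, rate):
    """Resample frames in time by `rate` (<1 slows down) with phase continuity."""
    n_bins = S.shape[1]
    # Phase advance expected per hop for each bin if it held a steady sinusoid.
    expected = 2 * np.pi * HOP * np.arange(n_bins) / N_FFT
    phase = np.angle(S[0]).copy()
    out = []
    for t in np.arange(0, len(S) - 1, rate):
        i = int(t)
        frac = t - i
        # Linearly interpolate magnitudes between neighboring frames.
        mag = (1 - frac) * np.abs(S[i]) + frac * np.abs(S[i + 1])
        out.append(mag * np.exp(1j * phase))
        # Accumulate the wrapped phase deviation from the expected advance.
        dphase = np.angle(S[i + 1]) - np.angle(S[i]) - expected
        dphase -= 2 * np.pi * np.round(dphase / (2 * np.pi))
        phase += expected + dphase
    return np.array(out)

# Usage: stretch a 1-second 440 Hz test tone to half speed, same pitch.
sr = 22050
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
y = istft(phase_vocoder(stft(x), rate=0.5))  # roughly twice as long as x
```

Real systems layer much more on top of this core (transient preservation, formant handling, multichannel phase coherence), which is precisely where the quality differences mentioned in the abstract arise.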
Alexander Lerch is an Assistant Professor at the School of Music at Georgia Tech, where he works on the design of new digital signal processing algorithms that extract information from music. He studied Electrical Engineering at the Technical University Berlin and Tonmeister (Sound Engineering) at the University of the Arts Berlin, and received his PhD on algorithmic music performance analysis from the Technical University Berlin. In 2001, he co-founded zplane, an industry-leading, research-driven technology provider for the music industry. At zplane, Lerch has worked on the design and implementation of algorithms for music processing and music information retrieval that have been licensed to companies such as Ableton, Native Instruments, Sony, and Steinberg. In addition to his work at zplane, Dr. Lerch lectures in the audio communications department of the Technical University Berlin. His book "An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics" was published in 2012 by Wiley-IEEE Press.