FTM & Co. introduction in New York City

Diemo Schwarz

The FTM&Co extensions for Max/MSP make it possible to prototype applications that process sound, music, or gesture data in both the time and frequency domains with more flexibility than standard Max/MSP tools allow. This workshop provides an introduction to the free Ircam libraries FTM, MnM, Gabor, and CataRT for interactive real-time musical and artistic applications.

The basic idea of FTM is to extend the data types exchanged between the objects in a Max/MSP patch with rich data structures such as matrices, sequences, dictionaries, break-point functions, and others that are helpful for processing music, sound, and motion-capture data. FTM also comprises visualization and editor components, operators (expressions and externals) on these data structures, and import/export operators for SDIF, audio, MIDI, and text files.
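To give a concrete feel for this, here is a minimal patch sketch, assuming the ftm.object, ftm.mess, and ftm.print externals and the fmat (float matrix) class; the exact argument order and method names reflect my reading of the FTM documentation and should be checked against the current release:

    [ftm.object fmat scores 4 4]     <- declare a shared, named 4x4 float matrix
              |
    [ftm.mess ($scores fill 0.)]     <- FTM expression: fill the matrix with zeros
              |
    [ftm.print]                      <- print the resulting matrix to the Max window

Because $scores refers to the matrix by name, several objects in the patch can operate on the same data structure without copying it through ordinary Max messages.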

Through example applications in sound analysis, transformation, and synthesis; gesture processing, recognition, and following; and manipulation of musical scores, we will look at the parts and packages of FTM that provide arbitrary-rate signal processing (Gabor), matrix operations, statistics, and machine learning (MnM), corpus-based concatenative synthesis (CataRT), sound description data exchange (SDIF), and Jitter support. Participants will then try out the presented concepts in programming exercises on real-time musical applications and in free experimentation.
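As one illustration of arbitrary-rate processing in Gabor, the classic analysis/resynthesis chain cuts an audio signal into overlapping windowed frames that travel through the patch as FTM matrices rather than as a fixed-rate signal. The sketch below uses the object names gbr.slice~, gbr.wind=, gbr.fft, gbr.ifft, and gbr.ola~ as I recall them from the Gabor documentation; the frame and hop sizes are arbitrary example values:

    [gbr.slice~ 1024 512]    <- slice the signal into 1024-sample frames every 512 samples
              |
    [gbr.wind= hann]         <- apply a Hann window to each frame in place
              |
    [gbr.fft]                <- frame-wise FFT; spectral processing can be patched in here
              |
    [gbr.ifft]               <- inverse FFT back to windowed time-domain frames
              |
    [gbr.ola~ 1024]          <- overlap-add resynthesis to a continuous signal

Since the frames are ordinary FTM matrices, the same frame stream could instead be routed to MnM operators for statistics or recognition tasks.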

FTM&Co is developed by Norbert Schnell and the Real-Time Music Interaction Team (IMTR, http://imtr.ircam.fr/) at Ircam and is freely available at http://ftm.ircam.fr.

Prerequisites: A working knowledge of Max/MSP is required. Knowledge of a programming or scripting language is a big plus for getting the most out of FTM&Co., and familiarity with object-oriented programming is even better. Users of Matlab will feel right at home with MnM.