This workshop (held in English) on advanced sound and data processing with FTM&Co provides an introduction to the FTM&Co extensions for Max/MSP, covering both the basics and the advanced use of the free Ircam libraries FTM, MnM, Gabor and CataRT for interactive real-time musical and artistic applications [http://ftm.ircam.fr].
These tools make it possible to work with sound, music or gesture data, in both the time and frequency domains, in a more complex and flexible way than other audio tools allow.
The basic idea of FTM is to extend the data types exchanged between the objects in a Max/MSP patch with complex data structures such as matrices, sequences, dictionaries, break-point functions, and others that are helpful for processing music, sound and motion capture data. FTM also comprises visualization and editor components, operators (expressions and externals) on these data structures, and import/export operators for SDIF, audio, MIDI, and text files.
As examples of applications in the areas of sound analysis, transformation and synthesis, gesture following, and manipulation of musical scores, we will look at the parts and packages of FTM that provide arbitrary-rate signal processing (Gabor), matrix operations, statistics and machine learning (MnM), corpus-based concatenative synthesis (CataRT), sound description data exchange (SDIF), and Jitter support. Participants will put the presented concepts into practice through programming exercises on real-time musical applications and through free experimentation.
Prerequisites: A working knowledge of Max/MSP is required. Knowledge of a programming or scripting language is a big plus for getting the most out of FTM&Co, and notions of object-oriented programming are even better. Matlab users will feel right at home with MnM.
Preview: a further workshop, in Edinburgh, is planned for April.