After ten years in development at IRCAM, we're pleased to welcome the MuBu project to the Max Package Manager. The depth of this set of tools is impressive, and can be intimidating, but I strongly encourage you to install it and check it out. The gesture recognition tools, powered by interactive machine learning, are definitely worthy of your attention. Despite the range and complexity of the work here, the team has done a heroic job of documenting and introducing the objects with help files and examples. For a quick introduction, don't miss the included Launch (MuBu-Overview) patcher, which shows off an auto-sliced sampling engine.
In their own words, here is what the ISMM team at IRCAM has to say about MuBu for Max:
MuBu was initiated by Norbert Schnell almost ten years ago to provide Max with a powerful multi-buffer: a container with multiple synchronized tracks such as audio, sound descriptors, segmentation markers, tags, music scores, and sensor data (see the ICMC article "MuBu & Friends"). MuBu provides generic time-based data structures in Max, with optimised access to shared memory and lock-free real-time signal processing objects that can operate on them. Since its beginning, a powerful editing and visualization tool has also been part of the MuBu library: the imubu object, developed by Riccardo Borghesi.

Informed by several previous projects, such as FTM, the MuBu library has been expanded by IRCAM's ISMM team, which has constantly added new functionality and improved the documentation. Beyond the original features made available to the Max environment, MuBu brings new ways of thinking about musical interaction. MuBu is the engine behind all the patches of the Modular Musical Objects, winner of the 2011 Guthman Prize.

MuBu is, in fact, our Max playground, where our recent research on movement-sound interaction is implemented. For example, MuBu includes not only the gesture follower (gf) but also XMM, our latest set of interactive machine learning objects by Jules Françoise, a powerful series of externals for static or dynamic gesture recognition. Descriptor-based sound synthesis (granular or concatenative) is also included: it is now easy to implement CataRT, from Diemo Schwarz and colleagues, using just three MuBu externals…your turn to augment it with your ideas. We have also improved our data-processing externals, called PiPo, to perform either real-time or offline processing (see the latest publication at ICMC): audio descriptors (mfcc, yin, …), wavelets, filters, and more. This works for both audio and sensor data.

We should soon add more examples designed for Max for Live, so stay tuned... —Frederic Bevilacqua, ISMM team leader, IRCAM
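To make the "multi-buffer" idea concrete, here is a minimal conceptual sketch in Python, not MuBu's actual API or implementation: several named tracks (descriptors, markers, sensor data) share one time axis, and any point in time can be queried across all tracks at once. All class and method names here are illustrative inventions.

```python
import bisect

class Track:
    """One time-tagged data stream (e.g. a sound descriptor or segmentation markers)."""
    def __init__(self, name):
        self.name = name
        self.times = []   # ascending time tags, in seconds
        self.values = []

    def append(self, time, value):
        # frames are assumed to arrive with monotonically increasing time tags
        self.times.append(time)
        self.values.append(value)

    def at(self, time):
        """Return the most recent value at or before `time`, or None before the first frame."""
        i = bisect.bisect_right(self.times, time) - 1
        return self.values[i] if i >= 0 else None

class MultiBuffer:
    """Container of multiple synchronized tracks sharing one time axis."""
    def __init__(self):
        self.tracks = {}

    def add_track(self, name):
        self.tracks[name] = Track(name)
        return self.tracks[name]

    def snapshot(self, time):
        """Values of every track at one point on the shared time axis."""
        return {name: t.at(time) for name, t in self.tracks.items()}

# usage: two tracks, one query time, one synchronized answer
buf = MultiBuffer()
pitch = buf.add_track("pitch")
markers = buf.add_track("markers")
pitch.append(0.0, 220.0)
pitch.append(0.5, 440.0)
markers.append(0.4, "attack")
print(buf.snapshot(0.6))  # {'pitch': 440.0, 'markers': 'attack'}
```

The point of the sketch is the shared time axis: tracks can have different rates and data types, but a single time query returns a coherent cross-section of all of them, which is what makes synchronized audio, descriptor, and sensor data practical to work with.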
MuBu for Max is free for Max users and can be found in the Max 7 Package Manager, located in the File menu.