Content You Need: FluCoMa


    In addition to being artists, Max users are toolmakers; from the first moment you realize that some bit of Max patching you've done is something you'd like to use again and again or use elsewhere, a very specific part of your patching life begins. Some Max users take a further step: gathering the helpful tools they've made and sharing them with the community of Max users at large, often for no more return than the accrual of really Good Karma. One of the places where that generosity is made visible is the Max Package Manager (that little icon at the very top of your Max patcher window's left toolbar), where you can explore and learn about those gifts to you from other Max users. From time to time, we take the opportunity to point out something really interesting that shows up in the Package Manager. This is most definitely one of those times.
    Meet FluCoMa (the interesting name is short for Fluid Corpus Manipulation, by the way). James Bradbury, one of the FluCoMa team members at the University of Huddersfield, succinctly describes what it is (in understated terms):
    FluCoMa introduces a suite of tools to Max that can help you work with sounds and collections of them in new and flexible ways. Possibilities include slicing, decomposition, and hybridization of sounds, audio-descriptor analysis, and machine learning-driven musicking. The package links out to a body of educational content, tutorials, and inspirational examples from the world of FluCoMa and Max users.
    For my money, this is one of the most extensive Package Manager offerings ever, and it's particularly interesting and exciting for the way that it situates the work (whose tools are also available to Pure Data and SuperCollider users, by the way) and guides you through kinds of patching and compositional approaches that might otherwise seem like dim, out-of-reach "someday aspirations." And even though FluCoMa is available in a variety of environments, a strength of the Package Manager toolset is that it feels like Max: you can use a subset of the package's tools and go a long way, while still connecting to all the other elements of Max via the buffers and dictionaries you already work with.
    One of the things that makes it of interest to people exploring the tools for the first time is the wealth and range of information and examples from the FluCoMa user community, in addition to a well-documented Max software package.
    It all starts here....
    The most interesting part of FluCoMa for me is the various compositional and improvisatory threads it brings together: the ability to take a huge collection of sounds and create a way to display, organize, and explore them as a kind of visual terrain (if you've ever experimented with Diemo Schwarz's CataRT, you're probably already starting to get excited....). The entry-level tutorial starts by building a simple 2D browser: it segments sound into slices, then uses machine listening algorithms to analyze the loudness and spectral centroid of each slice, mapping those two features onto a navigable X/Y space. The tutorial walks you through most of FluCoMa's basic syntax along the way, as well:

    Building a 2D Corpus Explorer (Part 1)
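If you'd like a feel for what that loudness-and-centroid mapping is doing before you open Max, here is a minimal sketch in plain Python. To be clear, FluCoMa does all of this with its own Max objects; the synthesized sine-tone "slices" and every function name below are my own invention for illustration. The idea is simply: compute a loudness descriptor and a spectral-centroid descriptor per slice, then normalize each feature to the range [0, 1] so the pair can drive an X/Y position in a browser.

```python
import math

SR = 4410  # deliberately small sample rate so the naive DFT stays fast

def make_slice(freq, amp, n=256):
    """Synthesize a short sine-tone 'slice' standing in for a segmented sample."""
    return [amp * math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def loudness(slice_):
    """RMS level in dB: a rough stand-in for a loudness descriptor."""
    rms = math.sqrt(sum(s * s for s in slice_) / len(slice_))
    return 20 * math.log10(max(rms, 1e-12))

def centroid(slice_):
    """Spectral centroid in Hz via a naive DFT (magnitude-weighted mean frequency)."""
    n = len(slice_)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(slice_))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(slice_))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    return sum(k * SR / n * m for k, m in enumerate(mags)) / total

def normalize(vals):
    """Squash a feature list into [0, 1] so it can drive one axis of the browser."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in vals]

# Three fake slices: quiet/low, loud/high, medium/middle.
slices = [make_slice(220, 0.2), make_slice(1760, 0.8), make_slice(880, 0.5)]
xs = normalize([loudness(s) for s in slices])  # X axis: loudness
ys = normalize([centroid(s) for s in slices])  # Y axis: brightness
points = list(zip(xs, ys))                     # one (x, y) per slice
```

Each slice ends up as a point whose horizontal position reflects how loud it is and whose vertical position reflects how bright it is, which is the same basic move the tutorial's 2D corpus explorer makes with real analysis data.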

    Pretty amazing, but where FluCoMa goes next is where it really gets interesting for me: what can you do with the corpus you've created? Since FluCoMa is about finding patterns in these collections of sound (corpora), one likely next step is applying machine learning to that collection. The FluCoMa project is strongly influenced by the work of Rebecca Fiebrink (whose Wekinator you may know), with its emphasis on ideas like artist-curated states and the aphorism that "small data is beautiful data," so there's a tutorial designed to ease your gradual transition into working with machine learning. What's that? You want to split your giant collection of sounds to make them more flexible in your sound design? Your new FluCoMa best friends have implemented ways of doing that for classic paradigms (voice/noise, transient/continuous, harmonic/percussive) in real-time and non-real-time Max tools, and have also explored some more exotic approaches such as source separation (de-mixing) and hybridization via non-negative matrix factorisation (NMF). You can find an overview of FluCoMa's decomposition tools here.
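The NMF trick at the heart of those decomposition tools can be sketched in a few lines. This toy is not FluCoMa's implementation: it's the classic Lee-Seung multiplicative-update algorithm applied to a made-up "spectrogram", with all names invented for illustration. A non-negative matrix V is factored into spectral templates W and their time activations H, which is the basic move behind separating a sound into layered components.

```python
import random

def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, k, iters=500, eps=1e-9):
    """Factor a non-negative matrix V (bins x frames) into W (spectral
    templates) and H (activations) with Lee-Seung multiplicative updates."""
    rng = random.Random(0)
    f, t = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(f)]
    H = [[rng.random() for _ in range(t)] for _ in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(t)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(f)]
    return W, H

# Toy "spectrogram": two spectral shapes alternating over four time frames.
V = [[1.0, 0.0, 1.0, 0.0],
     [1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]
W, H = nmf(V, k=2)
WH = matmul(W, H)  # reconstruction: should closely approximate V
```

Because every update is a multiplication of non-negative quantities, W and H stay non-negative throughout, which is what makes the learned templates read as additive "parts" of the sound rather than arbitrary signed components.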
    The FluCoMa package is already in use by artists. Resources from the FluCoMa team, in the form of articles and podcasts with artists who use the toolset, will give you some ideas about how you might integrate FluCoMa into your own process (complete with example code).
    Any more questions before you download and install? Oh yeah - the FluCoMaians are working to build a community of fellow explorers by way of their own Discourse site, as well as providing pedagogical support for educators, sharing workshop materials, sample plans, and lessons.
    Okay - what are you waiting for?