Composition Techniques (Procedural/Conditional)
So despite using Max extensively for about 3 years now, I have never made any actual music with it, and a major goal of mine is to dive into procedural/algorithmic composition. My overarching idea is that I can lay out a set of rules and conditionals to compose, instead of inputting everything by hand in a DAW. I don’t think of this as generative as such, because I don’t want to be driving everything off of random number generators; instead I’d rather use logic to dictate the flow of the composition in a predetermined way.
This is where I get stuck: I am so used to timeline-based composition in DAWs (and even traditional notation!) that Max’s open-ended timing tools have always freaked me out, as it seems that composing in the traditional sense is not an option here.
I have been doing research to see what others have done in this regard, and one of the most interesting reads was an interview with Autechre, where they mention, seemingly in passing, that they have (at least in the past) programmed individual motifs and sequences using a lot of conditionals, and then combined them into a larger patch to create the music. Other than that I can’t find many examples that demonstrate that sort of procedural workflow (one exception is a YouTube channel called fendoap).
This brings me here: does anyone have experience composing in Max in a predetermined/procedural way?
I hope I’ve been clear in how I’ve described my goals haha I’d really appreciate anyone’s insight into this!
Have you considered firing up your Package Manager and downloading the recently released Upshot package that Benjamin Van Esser so graciously gifted us? Based on some serious fun this week, I think it would be a good place to start organizing your thinking.
I use a recursive rewrite grammar to generate multileveled temporal form, meter, and rhythm. Alongside that: a model of Lou Harrison’s notion of melodicles, or centonization, for melodic shapes, classically transformed (prime, inversion, retrograde, retrograde inversion); melodic shapes and contours abstracted away from scales/modes for harmonic flexibility; self-analysis of generated melodic material to map melodic information to playing styles; and grouping and following behaviors driven by various deep-structure events of the generated temporal hierarchy. Minimize the use of randomness; constrain it to have fractal features. Connect everything to something else to create coherence. Start with precomposed music (generated or MIDI) and apply rules to transform, vary, meta-create. Take a spectral approach and extract musical gesture from the analysis of real or synthetic sound. Use the harmonic series and its transpositions as a harmonic map. Etc. Etc. Etc.
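To make the transformation vocabulary concrete outside of Max, here is a minimal Python sketch of the four classical melodicle transformations. The function names and pitch representation are my own illustration (pitches as semitone offsets from a reference note), not anyone's actual system:

```python
# Hypothetical helpers: the four classical transformations of a motif.
# Pitches are semitone offsets from a reference note.

def prime(motif):
    """The motif as given."""
    return list(motif)

def retrograde(motif):
    """The motif played backwards."""
    return list(reversed(motif))

def invert(motif, axis=None):
    """Mirror every pitch around an axis (default: the first note)."""
    if axis is None:
        axis = motif[0]
    return [2 * axis - p for p in motif]

def retrograde_inversion(motif):
    """The inversion played backwards."""
    return retrograde(invert(motif))

motif = [0, 2, 4, 7]                 # a rising shape
print(prime(motif))                  # [0, 2, 4, 7]
print(retrograde(motif))             # [7, 4, 2, 0]
print(invert(motif))                 # [0, -2, -4, -7]
print(retrograde_inversion(motif))   # [-7, -4, -2, 0]
```

Abstracting the shape away from a scale, as described above, would then amount to treating these numbers as scale-degree steps rather than semitones and mapping them through whatever mode is currently active.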
@Max Garderner those are really useful patches thanks for the heads up! I love building patches like those, but I never end up using them! Maybe this can prompt me to figuratively dust off some old patches.
@Kenneth Newby multileveled temporal form, meter, and rhythm sound similar to the Bach library / OpenMusic style of multilevel lists to represent time; is it a similar concept? Form and timing are the main areas that I struggle to translate into Max. The other processes you mentioned are really interesting and I will definitely experiment with them, but I find that without form to structure them I'll forever end up with aimless generative doodles! Might you be able to shed any light on how you approach form and structure?
I've always wondered whether those who use Max to compose a full piece tend to work "one-piece-per-patch" (a patch that creates one single composition) or build a single patch that is used more like sequencer software, capable of creating many different compositions?
From my limited time spent with both the Bach library and OpenMusic I think, yes, it's probably a similar idea. It's also like a Lindenmayer system (L-system) spread out in time. My own approach, based on a personal theory of musical coherence, begins with the generation of metric hierarchies at five levels of depth. From those structures, events can be triggered by mapping them from some level of the hierarchy, which is a powerful way of integrating events into the whole. It's a useful approach to the problem of prolongation and the composition of larger formal structures in generative music because, as you intimate, it's easy to generate local detail but much harder to create larger coherent forms. A recursive generative grammar for temporal structures is one solution in that it produces forms of arbitrary length with related micro-structures at smaller levels of temporal detail. It's a model of form based on the fractal nature of much music. Nørgård's infinity series is a kind of in-between approach in that it definitely noodles along but integrates where it is at any given moment with where it's been and where it will go—fractal form again.
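For anyone wanting to prototype these two ideas before patching them in Max, here is a rough Python sketch (my own illustration, not the poster's actual system): a recursive rewrite grammar that subdivides duration symbols L-system style, and Nørgård's infinity series from its standard recurrence a(0)=0, a(2n)=−a(n), a(2n+1)=a(n)+1:

```python
def rewrite(rules, axiom, depth):
    """Recursively expand each symbol with its rule, L-system style.
    Symbols without a rule are carried through unchanged."""
    seq = list(axiom)
    for _ in range(depth):
        seq = [s for sym in seq for s in rules.get(sym, [sym])]
    return seq

# Hypothetical duration grammar: whole -> two halves -> two quarters each.
rules = {"w": ["h", "h"], "h": ["q", "q"]}
print(rewrite(rules, ["w"], 2))   # ['q', 'q', 'q', 'q']

def infinity_series(n_terms):
    """Nørgård's infinity series: a(0)=0, a(2n)=-a(n), a(2n+1)=a(n)+1."""
    a = [0]
    for i in range(1, n_terms):
        a.append(-a[i // 2] if i % 2 == 0 else a[i // 2] + 1)
    return a

print(infinity_series(8))         # [0, 1, -1, 2, 1, 0, -2, 3]
```

The grammar's output at each depth shares structure with the depths above it, which is the "related micro-structures at smaller levels of temporal detail" property; the infinity series embeds inverted and transposed copies of itself in its even- and odd-indexed subsequences, which is the self-integration mentioned above.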
To your last question, my CAC system can produce and perform pretty complex stretches of music. Those outputs get recorded as MIDI files. I use those files as data for a set of meta-compositional processes that I use to further refine and re-compose the generated material. It's a meta-meta approach in a manner of speaking!
"This where I get stuck - as I am so used to timeline based composition in DAWs (and even traditional notation!), Max’s open ended timeline tools have always freaked me out as it seems that composing in the traditional sense is not an option here."
the question is why you would want to do that. if you have made new, custom tools with a new approach to composing, it would be a waste of time IMO to then create pieces of music with them that are similar to your previous (DAW or pencil) compositions.
and if you really miss linearity, a global timeline, global settings, then just add this functionality to your current algorithm collection - as one algorithm among many.
adding a global timeline thing with marker events to control my realtime algo stuff has always been on my to-do list, but it's not yet done (after 14 years). maybe that means you don't really need it, who knows.
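For what it's worth, a global timeline treated as "one algorithm among many" can stay quite small. A hypothetical Python sketch (all names and the beat-based interface are my own invention): markers sit at beat positions, and advancing the transport fires the callbacks of any markers crossed, which could in turn reconfigure the realtime generators:

```python
import bisect

class Timeline:
    """A sorted list of (beat, callback) markers; advancing the
    transport fires every marker crossed, in beat order."""

    def __init__(self):
        self.markers = []   # kept sorted as (beat, insertion_order, callback)
        self._count = 0     # tie-breaker so callbacks are never compared

    def add_marker(self, beat, callback):
        bisect.insort(self.markers, (beat, self._count, callback))
        self._count += 1

    def advance(self, from_beat, to_beat):
        """Fire every marker with from_beat < beat <= to_beat."""
        for beat, _, cb in self.markers:
            if from_beat < beat <= to_beat:
                cb(beat)

log = []
tl = Timeline()
tl.add_marker(0, lambda b: log.append((b, "section A: sparse texture")))
tl.add_marker(16, lambda b: log.append((b, "section B: dense texture")))
tl.advance(-1, 8)    # crosses only section A's marker
tl.advance(8, 32)    # crosses into section B
print(log)           # [(0, 'section A: sparse texture'), (16, 'section B: dense texture')]
```

In Max the equivalent would more likely hang off the transport object, with the callbacks replaced by messages into your existing algorithm patches.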
Think of generative/algorithmic approaches to composition as music design-space exploration tools. They let you explore an area of music very efficiently, so you can quickly dispense with the ideas that don't work and focus on those that do. You can always import the best results into your DAW for further elaboration. It's not an either/or proposition but a both/and one.
The idea of using it alongside a DAW was off-putting at first, but now I see the great benefits, especially as I’m so used to my DAW of choice (Ableton Live). Which is extra good, because I can try to implement some of these patch ideas as M4L devices running off the Live transport. I originally wanted to avoid this because every M4L device I have made has been seriously broken in some way, but it seems like the sensible solution is to spend time learning how to build devices that work and don’t break Ableton Live!