Algorithmic Music Generator: Beep Bleep ZOMG!
Techniques Used in Max/MSP to Create Beep Bleep ZOMG!
When I decided to use Max/MSP to compose my piece, Into the Metaverse on some cosmological Bubble, I was daunted by the question of what to do. All I had was a vague idea that I wanted to use samples and have Max trigger them in some fashion.
As I began working on the patch, I decided that Max would not only trigger the samples but also adjust their playback speed and direction, as well as control the panning of the audio in the stereo field.
I started by building a system to store the audio samples and the values used to manipulate them, based on the poly~ and groove~ objects. In each instance of the poly~ object, random processes determined which sample was loaded and what its playback rate would be. I then divided the playback objects into eight voices: two were given strict parameters so that they would only play back long drones, while the other six had less restrictive parameters and could play back the samples at both high and low speeds.
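The random selection logic can be sketched outside Max. Here is a minimal Python analogue of the eight-voice split; the sample names and rate ranges are my own assumptions, not values from the actual patch:

```python
import random

# Hypothetical sample pool; the real piece used its own audio files.
SAMPLES = ["drone_a.wav", "drone_b.wav", "hit_1.wav",
           "hit_2.wav", "texture_1.wav", "texture_2.wav"]

def configure_voice(is_drone):
    """Randomly pick a sample and a playback rate for one voice.

    Drone voices get a narrow, slow rate range; the other voices
    may play fast, slow, or reversed (ranges are illustrative).
    """
    sample = random.choice(SAMPLES)
    if is_drone:
        rate = random.uniform(0.25, 0.5)   # long, slow drones only
    else:
        rate = random.uniform(0.1, 4.0)    # wide range of speeds
        if random.random() < 0.5:
            rate = -rate                   # reversed playback direction
    return {"sample": sample, "rate": rate}

# Two strictly constrained drone voices, six freer voices.
voices = [configure_voice(is_drone=(i < 2)) for i in range(8)]
```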
I then encapsulated each of these eight voices and gave each one three simple inputs:
- One: triggers the currently loaded sample when given a bang message.
- Two: triggers new playback values, the choice of loaded sample, and the panning values.
- Three: toggles the on/off status of the buffer and, in some cases, the loop status as well.
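In text form, the three-inlet voice abstraction behaves roughly like the sketch below. This is not Max code, and all names and default values are invented for illustration:

```python
import random

class Voice:
    """Sketch of one encapsulated voice with three inputs."""

    def __init__(self, samples):
        self.samples = samples
        self.sample = samples[0]
        self.rate = 1.0
        self.pan = 0.5
        self.running = False
        self.looping = False

    def trigger(self):
        """Input 1: play the currently loaded sample on a bang."""
        if self.running:
            return (self.sample, self.rate, self.pan)
        return None  # a stopped voice ignores the bang

    def randomize(self):
        """Input 2: pick a new sample, playback rate, and pan value."""
        self.sample = random.choice(self.samples)
        self.rate = random.uniform(0.25, 4.0)
        self.pan = random.random()

    def toggle(self, loop=False):
        """Input 3: toggle on/off status, optionally the loop flag too."""
        self.running = not self.running
        if loop:
            self.looping = not self.looping
```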
The output of each voice patcher is a plain audio stream that is sent into a panning algorithm. From there the left and right audio signals are sent to a gain~ object and then to the dac~ object. In total I had sixteen channels of audio running into the dac~ object.
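A common choice for such a panning stage is an equal-power pan; whether the piece used this exact curve is an assumption on my part, but it shows how one audio stream becomes a left/right pair:

```python
import math

def pan_stereo(sample, pan):
    """Equal-power pan: pan 0.0 = hard left, 1.0 = hard right."""
    angle = pan * math.pi / 2
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# A centred signal keeps equal power in both channels.
l, r = pan_stereo(1.0, 0.5)   # both channels ≈ 0.707
```

The cosine/sine curves keep the perceived loudness roughly constant as a sound sweeps across the stereo field, unlike a simple linear crossfade, which dips in the middle.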
Once I had the internals of the audio-producing system working, I needed to devise a way to control the playback of the voices. I started by using the transport object to time the piece and track its progress temporally. I then used the sel object to trigger events at the points I wanted along the timeline.
From the sel object I routed the incoming bang message to one of three processes, each with control over one of the three inputs on the voices. The bang message triggers a random number generator, which opens a gate that in turn triggers one of six tables. I used the tables as a means of sequencing, each one holding a different pattern. Driven by the counter object, a table outputs a message to another sel object, which then routes a bang message to one of the triggers on the voices patcher.
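The table-plus-counter sequencing can be approximated in Python. The six step patterns below are invented placeholders for the piece's actual tables; the non-zero values stand for the voice triggers that the second sel object routes to:

```python
import itertools
import random

# Six hypothetical step patterns; each value selects a voice
# trigger to bang, and 0 is a rest (values are illustrative).
TABLES = [
    [1, 0, 2, 0, 3, 0, 2, 0],
    [1, 1, 0, 0, 2, 2, 0, 0],
    [3, 0, 0, 1, 0, 0, 2, 0],
    [1, 2, 3, 1, 2, 3, 1, 2],
    [0, 0, 1, 0, 0, 2, 0, 0],
    [2, 0, 2, 0, 1, 0, 3, 0],
]

def sequence(table, steps):
    """Step through a table with a wrapping counter, like Max's
    counter object, collecting the voice trigger for each step
    and skipping rests, like sel routing only matching values."""
    counter = itertools.cycle(range(len(table)))
    events = []
    for _ in range(steps):
        value = table[next(counter)]
        if value:
            events.append(value)
    return events

# An incoming bang picks one of the six tables at random,
# standing in for the random-generator-and-gate stage.
chosen = random.choice(TABLES)
pattern = sequence(chosen, 16)
```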
Once the transport object is started, the tables are triggered at predetermined points in time, and this process plays out for the duration of the piece.
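This timing layer amounts to a lookup from elapsed transport time to scheduled events. A sketch, with entirely invented timepoints and table indices standing in for the piece's actual schedule:

```python
# Hypothetical timeline positions (in beats) mapped to the index of
# the table that fires there, standing in for transport → sel routing.
SCHEDULE = {0: 0, 32: 1, 64: 3, 96: 2, 128: 5, 160: 4}

def events_until(beat):
    """Return the tables that should have fired by a given beat,
    in timeline order."""
    return [table for t, table in sorted(SCHEDULE.items()) if t <= beat]

# By beat 64, the first three scheduled tables have been triggered.
fired = events_until(64)   # [0, 1, 3]
```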
Max was used for everything in the piece. All the audio was processed through Max, and everything you hear in the song is generated purely by manipulating the samples under Max's control.