Crayonada’s Hat


My most recent composition, written for Max/MSP and Ableton Live.

The audio samples I used were the individual tracks from a previous composition of mine, Crayonada (hence the title). To add an initial extra bit of aural flavor, though, I applied a separate chain of effects to each track (convolving, filtering, and otherwise transforming each sample), morphing them into something that, while still relatively similar to the original composition, sounds very different.
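For anyone curious what that kind of per-track processing looks like outside of Max and Ableton, here is a minimal offline sketch of just the convolution step in Python. The actual processing happened inside my effects chains; the function name, wet/dry blend, and level matching below are illustrative assumptions, not my patch.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_track(sample: np.ndarray, impulse_response: np.ndarray,
                   wet: float = 0.5) -> np.ndarray:
    """Blend a mono sample with its convolution against an impulse
    response -- a rough stand-in for one of the transformations
    applied to each track."""
    wet_sig = fftconvolve(sample, impulse_response)[:len(sample)]
    peak = np.max(np.abs(wet_sig))
    if peak > 0:
        # Match the wet signal's level to the dry signal before blending.
        wet_sig = wet_sig / peak * np.max(np.abs(sample))
    return (1.0 - wet) * sample + wet * wet_sig
```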

My instrument of choice was eMotion Technologies' Twist sensor suite (and, of course, my hat). The Twist offers a myriad of data streams that I could use as CC messages, but I was also able to remap and reshape those same streams into triggers, which made for a far more interesting performance and musical result. Several data streams were mapped to effects-processing parameters, panning, and volume. I then triggered a specific sequence of events that controlled which track(s) were heard; whenever a track was triggered, the panning controls also switched to that track, to make it more apparent which one had just been turned on. Following the sequenced triggering, I randomly set the state of each track to on, off, or partly on.
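As a rough illustration of that last step, here is a sketch of the random state assignment in Python. In the piece itself this logic lives in my Max patches; the track names, the number of tracks, and the "partly on" level below are assumptions made for the example.

```python
import random

# Hypothetical track names and states; "partly on" is modeled here as a
# reduced volume level, which is an assumption about the piece.
LEVELS = {"on": 1.0, "partly_on": 0.4, "off": 0.0}
TRACKS = ["track_1", "track_2", "track_3", "track_4"]

def randomize_states(tracks=TRACKS):
    """Randomly assign each track a state of on, off, or partly on,
    as in the random-trigger section of the performance."""
    states = {t: random.choice(list(LEVELS)) for t in tracks}
    for track, state in states.items():
        print(f"{track}: {state} -> volume {LEVELS[state]:.1f}")
    return states

if __name__ == "__main__":
    randomize_states()
```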

This is one of my most complicated pieces to date, both in terms of the Max/MSP programming and the data mapping and sound processing within Ableton.

Best heard whilst wearing headphones.

Apologies for the periodic choppiness of the video…my computer was doing quite a bit of processing :)

How did this project use Max?

Max was used in just about every facet of this project. eMotion Technologies' Twist sensor suite runs entirely in Max (the software the Twist comes with was written in Max). I customized the Twist's native software, then wrote my own series of Max patches to process, shape, and map the data so it would interact properly with my sounds in Ableton Live.

I used the X and Y axes of the Twist's accelerometer, as well as two distinct threshold detections on two other data streams. The X-axis stream was mapped to the dry/wet parameter of an audio-trashing plugin within Ableton Live, which I customized to my liking; the Y-axis stream was mapped to the dry/wet parameter of a granulation plugin that I also customized. To add another method of articulation, the threshold detection on the two remaining streams gave me two more gestures in my palette. These gestures also let me send triggers, in the form of note-on/off messages, alongside the continuous control messages already coming from the first two streams.

The up/down gesture triggered a predetermined sequence that determined which tracks were heard, and when. At the onset of each sequenced trigger, a custom-designed sound played to highlight the trigger, and the panning control shifted to the newest track to make it more apparent which track had just been turned on. After the sequenced triggering, the left/right gesture randomly set each track's state to on, off, or partly on, with the panning controls shifting to the master track. If you have any other questions about my data mapping and use of Max, please feel free to contact me!
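Since people sometimes ask what this kind of data shaping actually involves, here is a small Python sketch of the two core ideas described above: scaling a continuous accelerometer stream into a 0-127 CC value (roughly what a Max [scale] object does), and turning a stream into one-shot note-on/off triggers via threshold detection. The real versions are Max patches, and the input range and threshold levels shown are assumptions, not the values from my patches.

```python
def to_cc(value: float, lo: float = -1.0, hi: float = 1.0) -> int:
    """Clamp and scale a raw sensor reading into a MIDI CC value (0-127)."""
    value = max(lo, min(hi, value))
    return round((value - lo) / (hi - lo) * 127)

class ThresholdTrigger:
    """Emit a single note-on when the stream rises past on_level, and a
    note-off once it falls back below off_level. Using two separate
    levels (hysteresis) keeps the trigger from chattering while the
    signal hovers near a single threshold."""

    def __init__(self, on_level: float = 0.8, off_level: float = 0.5):
        self.on_level = on_level
        self.off_level = off_level
        self.armed = True  # ready to fire a note-on

    def update(self, value: float):
        if self.armed and value >= self.on_level:
            self.armed = False
            return "note_on"   # fires once per upward crossing
        if not self.armed and value <= self.off_level:
            self.armed = True
            return "note_off"
        return None  # no event this frame

# Example: one accelerometer axis driving a dry/wet CC while another
# stream is watched for a trigger gesture.
trigger = ThresholdTrigger()
for x_axis, gesture in [(-1.0, 0.1), (0.0, 0.9), (0.5, 0.7), (1.0, 0.3)]:
    print("CC:", to_cc(x_axis), "| trigger:", trigger.update(gesture))
```

The hysteresis in the trigger class is the important design choice: with a single threshold, sensor noise near the cutoff would fire a burst of spurious note-on/off pairs instead of one clean gesture.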