
Between the Gesture and the Thought: Music for Disklaviers and Their Friends

In a recent newsletter piece, I recommended a few recordings that demonstrated Max/MSP in a variety of musical contexts. I'm always glad to foist recordings on my friends (doing a weekly radio program on contemporary music is really nothing more than an efficient means for this kind of geek-out in broadcast form), and I'm interested in their responses, too. In the case of that particular article, my email pals were particularly taken with Mark Fell's Intra - a recording that took the interest in generative algorithms we normally associate with his work and assigned the output to human performers.

I was surprised at the extent to which that particular work caught my interlocutors' attention. Was it the novelty of using Max as something that generates material for acoustic performers? Was it the sound of "real" percussion that made it interesting? Did that mapping of generative structure to physical performance somehow change the way we hear the work? For my purposes, I'm going to assume that there's something going on at the intersection of algorithms and acoustic musical practice. If you'd like to think about that intersection in more detail as a Max/MSP person (and to hear some examples of those ideas realized), I can point you in no better direction than a trio of articles by Joseph Branciforte that touch upon algorithms and acoustic music, realtime networked notation, and approaches to harmony and orchestration. It's my go-to resource when dealing with questions from excited beginners, and there's plenty there to stimulate the imagination of the intermediate user as well. It's a motherlode.

It's not surprising that algorithms and acoustic performances often produce interesting results - as you patch and develop algorithms, you get to leverage the virtuosity of your collaborators on their acoustic instruments of choice, as well as their responses in improvisational situations. When that's done well, the results can be breathtaking (I'd direct you to George Lewis' longstanding body of work with the Voyager program, which has made its way from an Apple II to its current laptop MSP instantiation and continues to make astounding music. It remains the most venerable bit of software in this tradition, and it still packs a fearsome wallop).

I guess you could say that we're really talking about questions of interface - is the musical output being modified and remapped generated by a person, or by a process? Where in the input-to-output chain does the processing occur? Since the data we're modifying can be analyzed and mapped to nearly anything, what does the evolutionary tree for this practice look like? I thought I'd follow up on that last post by pointing you to some interesting works and sources of inspiration.

For starters, the humblest and most common form is probably something you do yourself every time you connect a MIDI keyboard to a softsynth and start playing away. That efficiency tends to obscure something more subtle: using your keyboard as a MIDI input device isn't just a way to bang notes into your DAW - it's a way to leverage your sense of timing and touch and your facility with the input device.
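If you're curious just how thin that layer is, here's a minimal sketch in Python using the mido library (not the only way to do this by any means, and the port names below are placeholders for whatever your own system reports) - everything about your timing and touch passes straight through to the synth:

```python
# A minimal sketch, assuming the mido library (with the python-rtmidi
# backend) is installed. Port names are placeholders - list your own.
import mido

print(mido.get_input_names())    # find your keyboard here...
print(mido.get_output_names())   # ...and your softsynth here

with mido.open_input('MIDI Keyboard') as keys, \
        mido.open_output('SoftSynth') as synth:
    for msg in keys:                      # yields each incoming message
        if msg.type in ('note_on', 'note_off'):
            synth.send(msg)               # timing and touch, untouched
```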

Prior to the arrival of real-time audio processing, MIDI was the only way to do this, of course. From the moment of its arrival, MIDI promised more than mere amanuensis - it was ready, too, to provide ways of producing music in a tradition that began with the visionary player piano music of Conlon Nancarrow and continues to this day in his wonderfully feral descendants - the tribes of Black MIDI.
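Since a MIDI file is just timestamped events, nothing obliges it to stay within human limits. As a playful sketch of the Nancarrow idea (the melody and the 4:3 tempo ratio here are my own arbitrary choices, written out with mido), here's a two-voice tempo canon:

```python
# A sketch of a Nancarrow-style tempo canon: the same melody in two
# tracks, one running at 4/3 the speed of the other. Melody and ratio
# are arbitrary choices for illustration.
import mido

melody = [60, 64, 67, 71, 72, 71, 67, 64]   # an arbitrary line
quarter = 480                                # ticks per quarter note

mid = mido.MidiFile(ticks_per_beat=quarter)
for rate in (1.0, 4 / 3):                    # the two tempo layers
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for note in melody * 4:
        dur = int(quarter / rate)
        track.append(mido.Message('note_on', note=note, velocity=80, time=0))
        track.append(mido.Message('note_off', note=note, velocity=0, time=dur))

mid.save('tempo_canon.mid')                  # ready for a player piano
```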

While the Black MIDI link above will take you to all kinds of MIDI-cable-choking fun, you might find this piece on Ranjit Bhatnagar's work A Short Ride on a Fast Chihuahua (along with a set of notes by Ben Houge, one of the contributing composers) an interesting take on a variant of the genre.

More recently, Kyle Gann's astounding Hyperchromatica takes Conlon Nancarrow's approach and multiplies it, using three Disklaviers, each tuned to a different scale that shares some pitches with the others, to create the equivalent of a single player piano with 243 distinct pitches.
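One way to imagine the plumbing behind a setup like that: treat the three pianos as a single instrument, and route each step of one big pitch gamut to a particular piano and key. The interleaved mapping below is invented for illustration - it is not Gann's actual tuning - but it shows the shape of the idea:

```python
# A loose sketch of addressing three pianos as one large gamut. The
# interleaved mapping is invented for illustration, not Gann's tuning.

def route(step):
    """Map step n of the combined gamut to (piano index, MIDI key)."""
    return step % 3, 21 + step // 3          # spread steps across pianos

for step in (0, 1, 2, 3, 120, 242):
    piano, key = route(step)
    print(f"gamut step {step:3d} -> piano {piano}, key {key}")
```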

There are other approaches whose traditions take a different path - once you've got the data, it can be mapped to make something new, and that's where things take myriad forms and - to me - start to become more interesting. The results run the gamut (Lyle Mays' piano-as-MIDI-control-source outing Solo is a great example of this simple practice in a musical context that may be less familiar to some Max/MSP users).

From that point, it's a short step to building something between that MIDI input and the output device - something that generates material and begins a dialog between the live performer and the software. And when it comes to ways of converting MIDI information into physical instrumental practice, it's no surprise that the Disklavier shows up with some regularity. While MIDI pianos and the Moog PianoBar (back in the day) did a great job of capturing input, the Disklavier rules the roost in its ability to make noise. You'll find it in wide use in many of the pieces I'll be mentioning here, for good reason.
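That "something between" can start very small. Here's a hedged sketch of the idea in Python with mido (the port names are invented - point them at whatever you have): every note a performer plays is answered a moment later by its transposition on the software-controlled piano:

```python
# A small sketch of a human/software dialog: each incoming note is
# echoed back transposed after a short delay. Port names are invented;
# substitute your own. Requires mido with the python-rtmidi backend.
import threading
import mido

def reply_later(port, msg, delay=0.5, interval=7):
    """Answer msg with its transposition, `delay` seconds later."""
    answer = msg.copy(note=min(msg.note + interval, 127))
    threading.Timer(delay, port.send, args=(answer,)).start()

with mido.open_input('Piano In') as keys, \
        mido.open_output('Disklavier Out') as piano:
    for msg in keys:
        if msg.type in ('note_on', 'note_off'):   # echo releases, too
            reply_later(piano, msg)
```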

As someone who enjoys broadcasting work like this, I find it interesting that - as we move into this territory - full recordings become harder to find than YouTube fragments intended to communicate something of the live experience. My first encounter with this was local and live, toward the end of the last century - and it's probably that experience that set me to considering a body of work like this: composer Joe Koykkar and pianist Todd Welbourne collaborated on a performance piece called Interfacing, which combined a Disklavier and a sampler to facilitate a kind of live dialogue between the piano under human control and the piano under software control (here are the listening links to movement I, movement II, and movement III).

Starting around the turn of the century, your standard computer music conference (the ICMC or SEAMUS, for example) seemed to always contain a new piece that situated activity between a live player and a piano. The approaches have been as varied as the composers themselves, but it's often the case that the primary experience we have of a piece is what we hear. While the following examples are by no means exhaustive, they ought to suggest some futures for you based on someone else's pasts. I'm sure that some of you reading this will have recommendations of your own.

As a Max/MSP person, one's second contemplation (the first being the piece itself) is almost always "How's that done?" Some quality time googling for things like this will bring up an interesting set of responses. If you want to see a patch whose innards you can probably intuit but whose construction will suggest how dialog-based work might be built, Bob Gluck's Many Hands software modules are an interesting place to start - marvel at the old-school Max patches, and stay for the myriad MP3 examples.

Sometimes a little encounter with the mysterious is the proper catalyst - in that case, Edwin Kenzo Huet's Prism is a good place to start. You can compare a rehearsal video of the piece with one of the source pieces from which it's constructed. (What does a Markov model sound like? One answer sounds like this.)
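If that question piques your curiosity, the machinery behind a first-order Markov model is surprisingly modest: count which note follows which in some source material, then walk those probabilities to spin out new material. A minimal sketch:

```python
# A minimal first-order Markov sketch: learn note-to-note transitions
# from a source line, then generate a new line by walking them.
import random
from collections import defaultdict

source = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]   # any note sequence

# Transition table: note -> list of the notes observed after it.
transitions = defaultdict(list)
for a, b in zip(source, source[1:]):
    transitions[a].append(b)

# Walk the chain; frequent transitions are proportionally more likely.
note = random.choice(source)
line = [note]
for _ in range(15):
    note = random.choice(transitions[note] or source)  # fall back at dead ends
    line.append(note)

print(line)
```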

Cat Hope's Chunk is an interesting hybrid approach - the Disklavier transduces a graphic score while the pianist plays from the same one. In some ways, I think that imagining what the software's doing with the graphic score material is a more interesting exercise than merely hearing the piece by clicking on the title link.
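One way to imagine what that software might be up to: scan the score image left to right, treat vertical position as pitch, and turn marks into note events. This toy sketch (my own invention, not Hope's method) uses a little text grid standing in for the image:

```python
# A toy sketch of transducing a graphic score (not Cat Hope's actual
# method): columns are time, rows are pitch, '#' marks become notes.
score = [
    "....##....",
    "..#....#..",
    "#........#",
]

low_note = 48                         # pitch assigned to the bottom row
for col in range(len(score[0])):      # scan left to right through time
    for row, line in enumerate(score):
        if line[col] == '#':
            pitch = low_note + (len(score) - 1 - row)
            print(f"time step {col}: play MIDI note {pitch}")
```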

Here's a performance of Chunk synced to the score as some food for thought:

Imagining the workings of a transducing mechanism such as a Disklavier (or a piano under some other sort of external control) brings an entirely different set of ideas into the mix - Hans Tammen's Music for Choking Disklavier isolates and foregrounds the physical system of the Disklavier to great effect, while the clickbait-friendly "talking piano" combines careful signal processing and narrow filters to drive a piano mechanism:
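The core trick behind a talking piano is spectral: analyze a short frame of speech, find the strongest frequencies, and snap each to the nearest piano key. Here's a rough sketch of just that analysis step with numpy - a skeleton of the idea, nowhere near the careful filtering the real thing involves:

```python
# A rough sketch of the "talking piano" analysis step: find the loudest
# frequency peaks in an audio frame and quantize them to piano keys.
import numpy as np

def frame_to_keys(frame, sr=44100, n_peaks=8):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    peaks = freqs[np.argsort(spectrum)[-n_peaks:]]        # strongest bins
    keys = 69 + 12 * np.log2(np.maximum(peaks, 1) / 440)  # Hz -> MIDI
    return sorted({int(round(k)) for k in keys if 21 <= k <= 108})

# A synthetic test: a 220 Hz tone lands on keys around MIDI note 57 (A3).
t = np.arange(2048) / 44100
print(frame_to_keys(np.sin(2 * np.pi * 220 * t)))
```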

Danny DeGraan has also taken a run at this same approach, with the added benefit that you've got a Max/MSP patch to check out!

Whew! I feel like I've just tried to describe the constituent parts of a playground. There's a lot more here to investigate (I'm sure you'll have some recommendations of your own - add 'em in the comments), but I hope this gives you some stuff to see, hear and think about. Enjoy your contemplations of the summer!

by Gregory Taylor on July 31, 2018

Daniel Maruniak: This is vast and amazing. Thank you!

Mériol Lehmann: Very interesting article. Canadian artist Jocelyn Robert's work is also noteworthy: http://jocelynrobert.com/?page_id=43