Interview: A Quick Conversation with Kenneth Kirschner and Joseph Branciforte

For many of us, digital approaches to music-making are most closely associated with the use of electronically generated sound. But it's also the case that computer-based composition opens up a huge space of possibilities beyond the sounds/timbres we generate — possibilities that can involve fundamental investigations of basic musical structure as we conceive of it.
That possibility space is obviously attractive to nearly any composer engaged in the practice of not only generating ideas, but also organizing and presenting them to traditional musicians. I'm generally quite curious about the forms that this particular avenue of exploration takes.
When I got wind of a collaboration between Kenneth Kirschner and Joseph Branciforte that situated itself in precisely that terrain, I was intrigued. I've been a fan of Kenneth's enormous body of freely available recordings (each of which is named for the date on which work on it commenced) for a very long time, and I've also followed Joseph's output with some interest: his 2018 release with vocalist Theo Bleckmann, LP1, left me curious to hear more.
That collaboration is now out there. From the Machine: Vol. 1 features two side-long pieces (Kirschner’s April 20, 2015, for piano and two cellos, and Branciforte’s 0123, for low string quartet) composed non-linearly using software, translated to traditional notation, and performed by members of the Flux Quartet, the International Contemporary Ensemble, and the Talea Ensemble.
I got in touch with them both, and had the chance to ask a couple of quick questions about the project, the software they were using, and the nature of their collaboration. Looks like there's more to come, too.
Rather than using Max itself as a sound source or signal processor, you've chosen to translate Max data into traditional musical notation for instrumental performers. Could you walk us through the motivation for this approach, as well as the technical process of translation?
Joseph Branciforte (JB): The motivation for me was to combine the vast computational/compositional powers of Max with the tactility and expressiveness of human performance.
After encountering Max for the first time back in 2003, I quickly became fascinated by its application to questions of music theory – an interest of mine that preceded computer music. While learning Max, I took countless detours into its synthesis and DSP capabilities, but I kept returning to these more formal questions: harmony, voice leading, rhythm, musical form, etc.
It seemed to me that the computer was uniquely suited to address these types of musical questions — calculating, say, all the possible voicings of a 5-note chord and constructing voice leading rules to move among them. This is something that could never be done on a modular synthesizer, for instance.
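To give a flavor of the kind of calculation Branciforte is describing, here is a minimal Python sketch (purely illustrative, not his Max patch) that enumerates the voicings of a five-note chord within a register and chooses the next voicing by a simple smoothest-voice-leading rule. The chord, register, and cost function are all assumptions made for the example.

    from itertools import product

    # Pitch classes of an example five-note chord (C, E, G, Bb, D)
    CHORD = [0, 4, 7, 10, 2]
    LOW, HIGH = 36, 84  # illustrative MIDI register for the search

    def all_voicings(pitch_classes, low=LOW, high=HIGH):
        # Every voicing that places each pitch class exactly once
        # somewhere in the allowed register.
        options = [[p for p in range(low, high + 1) if p % 12 == pc]
                   for pc in pitch_classes]
        return [tuple(sorted(combo)) for combo in product(*options)]

    def voice_leading_cost(a, b):
        # Total semitone motion between two equal-sized voicings:
        # a crude "smoothness" measure for a voice-leading rule.
        return sum(abs(x - y) for x, y in zip(a, b))

    def smoothest_next(current, candidates):
        # Choose the candidate voicing that moves the voices the least.
        return min(candidates, key=lambda v: voice_leading_cost(current, v))

    voicings = all_voicings(CHORD)
    start = voicings[0]
    print(len(voicings), "voicings; next:", smoothest_next(start, voicings[1:]))

In practice the smoothness rule could be swapped for anything (register constraints, probabilistic weighting, and so on), and making that kind of rule-building interactive is exactly what an environment like Max is good at.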
Not on my modular, that's for sure. How did you go about implementing those ideas in Max?
JB: In 2016, I discovered the Bach library, a group of Max external objects that extend the music notation capabilities of Max. I realized that these experimental composition patches I had been building for years could be translated to real-time performance, retaining all the interactive elements (live data input, probability, etc.) that I'd come to love. I designed a performance system of networked laptops that allowed an ensemble of musicians to read from a scrolling real-time score. During performance, I could input new musical data, change the orchestration, loop sections – basically "perform" the patch as usual, but with musicians reading down the score output in real time.
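For readers curious about what a networked note-distribution layer can look like in code: the sketch below is not Branciforte's Max/bach system, just a rough Python illustration of pushing note events to score machines over OSC. The IP addresses, port, and "/score/note" address are all hypothetical.

    from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

    # Hypothetical addresses of the laptops rendering each player's part
    PLAYER_LAPTOPS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
    clients = [SimpleUDPClient(ip, 9000) for ip in PLAYER_LAPTOPS]

    def send_note(part, midi_pitch, duration_ms):
        # Send one note event to the machine rendering this part; its
        # notation renderer would append the note to the scrolling score.
        clients[part].send_message("/score/note", [midi_pitch, duration_ms])

    # e.g. push a middle C lasting one second to the first player's screen
    send_note(0, 60, 1000)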
The string quartet I wrote for this album, 0123, was workshopped using this networked notation system. Each time it was performed, it was essentially a new piece: the underlying harmonic rules were the same, but I could radically alter the piece's overall shape in performance.
When it came time to record a studio version, I decided to create a fixed paper version of the piece — recording the MIDI output of the patch and engraving it in Sibelius. This allowed multiple takes of the same material to be recorded in the studio, which made editing much more flexible. The fixed version that appears on the album would never have been possible, however, without the insights gleaned during those improvised networked performances.
The non-linear and improvisational elements were crucial to the compositional process, and were unlocked by using Max — I could never have conceived this piece in a traditional left-to-right fashion.
Kenneth, for those of us who know your recorded output over a long period of time, I’m wondering why you would be interested in working in a Max-focused collaboration like this.
Kenneth Kirschner (KK): Well, in the interests of full disclosure, let me just say that… my Max skills are terrible! (laughs) So collaboration is really the only option for me.
But let me give you some of the backstory on why this is helpful for where I’m currently at...
That'd be great. I'd be interested to see how you describe the overall trajectory of your work across the years.
KK: Well, after working predominantly in MIDI since the 80s, my work from around the early 2000s to the mid 2010s focused more on directly manipulating audio — speeding things up, slowing things down, slicing them up, chopping and granulating and looping and stretching — all the things you can do to twist and mangle audio files. That’s a certain way of thinking, a certain set of tools you can use — and a certain set of results. That’s also the time period when that sort of thing actually became possible on a desktop computer, and MIDI kind of faded into the background for me.
But then, fairly recently, I found myself coming back to MIDI because I was focusing more intensively on questions of counterpoint, harmonic structure and linear development — stuff that really needs MIDI if you want to dive into it and are working electronically.
How does what you've done here diverge from the compositional approaches and techniques that appear in your earlier work?
KK: Getting back into MIDI was where some of the Max-based tools started becoming more important for me. Again, I’m no Max programmer, but I use a lot of Max for Live MIDI devices in Ableton, including some custom-designed by people like Joe to fit into my often fairly strange workflow.
All this has really helped me bring out, I think, a new level of compositional detail in my work in recent years. That said, you aren’t yet hearing much of this on the new album — my piece on this record is derived from an audio-based composition reverse engineered by Joe into a playable score. But this is only the first release in what we hope will be an ongoing series — so stay tuned!
I'm curious about the use of the term "reverse engineering" applied to the piece that occupies the first side of the LP. Was it a case of taking the material you'd initially created and porting it over for live display, or did you have the same kinds of affordances that Joe describes?
KK: Here’s the story in a little more detail. The original electronic April 20, 2015 started with me creating a series of hockets — interwoven melodies going between piano and violin that were built using MIDI and samples. I then recorded these into audio files and started layering and manipulating the audio, mostly through looping and pitch/time stretching (straight-up “sampler” style, or varispeed as some people call it).
Importantly, though, this time stretching wasn’t locked to semitones – it was continuous in pitch, and so the original piece was quite microtonal (something we didn’t even try to translate into the acoustic adaptation).
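The arithmetic behind that microtonality is simple to show: tape-style varispeed shifts pitch by 12 times the base-2 logarithm of the speed ratio, so any ratio that isn't an exact power of 2^(1/12) lands between equal-tempered semitones. A small Python illustration (the ratios are invented, not taken from the piece):

    import math

    def varispeed_shift(speed_ratio):
        # Pitch shift in semitones when audio plays back at `speed_ratio`
        # times its original speed (pitch and duration change together).
        return 12 * math.log2(speed_ratio)

    for r in (1.5, 0.8, 1.13):  # arbitrary, unquantized speed ratios
        semis = varispeed_shift(r)
        cents = round((semis - round(semis)) * 100)
        print(f"x{r}: {semis:+.2f} semitones ({cents:+d} cents off the nearest semitone)")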
Yikes. The longer I try to think about this, the scarier it becomes.
KK: And even more scarily, the hockets were recorded completely freely in terms of rhythm — constantly speeding up and slowing down with no fixed meter whatsoever. The final piece ended up using two audio layers, and so you had two parallel piano-string hockets weaving in and out simultaneously, with both moving completely independently and fluidly in time.
When we decided to try to adapt the piece for acoustic musicians, the first thing we did was split out those audio layers and run an audio-to-MIDI translation process on them — which, well, sort of worked, sometimes; it at least gave us a starting point. The pitch relations were ultimately pretty easy to figure out (given that we’d set aside the microtonality), but the rhythms were a whole other story.
I can imagine. It’s so much easier to have a machine handle those kinds of rhythms. In the audio world, we keep telling ourselves that whole-number ratios translate well to polyrhythms, but it’s another thing entirely to notate those rhythms and hand them to an actual musician….
KK: I haven't written notated music for performers since I was in college back when dinosaurs roamed the earth, and the rhythmic complexity was way, way beyond my very limited notational skills. But Joe dove right in bravely and somehow got through it.
The challenge is that with these two parallel, non-metric, freely flowing rhythmic lines, you’ve got to build a temporal grid to communicate the timing information between the different players, who ended up being two cellists (because of all the pitch stretching of the audio, violin didn’t have the range) and a pianist (who had to carry on two simultaneous hocketed conversations, one with each cellist). The final score is an amazing feat of metric chaos, but with a click track and a lot of patience, the players got through it. And it’s an amazingly accurate translation of the crazy aleatoric rhythms of my original piece.
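As a toy illustration of the underlying problem (and only that: the real engraving involved changing meters, tuplets, and a great deal of judgment rather than a simple nearest-grid snap), here is what fitting freely timed onsets onto a click-track grid might look like in Python. The tempo, subdivision, and onset list are invented for the example.

    def quantize_onsets(onsets_sec, bpm=60, subdivisions_per_beat=4):
        # Snap freely timed onsets (in seconds) to the nearest subdivision
        # of a click track so they can be written into a metered score.
        grid = (60.0 / bpm) / subdivisions_per_beat  # here: sixteenth notes
        return [round(t / grid) * grid for t in onsets_sec]

    free_onsets = [0.00, 0.37, 0.81, 1.42, 2.05]   # "aleatoric" timings
    print(quantize_onsets(free_onsets))             # [0.0, 0.25, 0.75, 1.5, 2.0]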
I'm curious about what you did – or, perhaps, what kinds of composerly "interventions" you had in mind as you realized the piece, and what you'll be thinking about going forward. Tweaking the live performance? Modifying the underlying algorithms themselves?
KK: As we honed the score and moved toward the recording session, the “composerly interventions” that ended up taking place were not so much at a technological level, interestingly — and this part of the process was a real education for me.
Again — I’m an electronics person, and have been pretty much forever; I’m used to figuring out what I want and getting precisely, exactly that. And I’ve developed a set of skills for clearly communicating my own musical intent in electronically constructed recordings of my own creation. But this was a whole different world and process.
What was most interesting was just how unclear my emotional intentions often were from the score alone. The musicians would just play right through moments in the piece that were absolutely critical to me, since they had no way of knowing what I “meant” at a level beyond just the notes.
That definitely sounds like the kind of place where you’d want to give the players some direction.
KK: I found, as we worked through the rehearsals, that I really needed to go through and annotate the score with lots of (sometimes fairly wacky) notes about how the performers should be feeling about various musical moments. I remember the players in particular objecting that they had no idea how to tell the difference between “contemplative” and “reflective” — which seemed totally clear to me! So again, this was a real education for me in the nature of musical communication and the different ways of getting your intentions across in electronic versus acoustic music.
Joseph, since you'd been developing this sort of algorithmic composition software primarily for your own output, what kinds of changes were required to adapt it to a collaborative environment?
JB: I think this collaboration has worked so well because Ken and I are pursuing very similar questions in our work: what possibilities are afforded by composing algorithmically or digitally that are not available using traditional composition techniques, and what do human performers bring to a musical realization or recording that is not available in the digital domain? Our answers, and our means of arriving at them, are often quite different, so rather than the collaboration being about some shared process or specific piece of technology, it’s really more of a philosophical dialogue.
Technically speaking, my work is the more Max-centric of the two, since that’s a language I’m very comfortable speaking and one that lends itself to the process-based approach to composition I’ve come to love. Ken’s work is made much more intuitively: created in a DAW using complex audio and MIDI manipulation and pieced together by hand.
However, after being so intimately involved in the recording and production of one another’s music, I think some cross-pollination has definitely begun to happen! Ken has imagined a handful of Max for Live device ideas that I’ve been able to build for him, which let him manipulate compositional data within his usual Ableton workflow. I’ve begun to integrate rhythmic and orchestrational insights gleaned from translating Ken’s music back into my own patching. This is part of the reason that this LP is only Volume 1 of the series: we still feel like we’re just scratching the surface of where this type of musical conversation might ultimately lead.
by Gregory Taylor on June 8, 2021