An Interview with George Lewis and Damon Holzborn, Part 2

    George Lewis (left) and Damon Holzborn (right - photo: Betsy Nagler)
    One of the most ambitious of all interactive compositions I have seen is the ongoing Voyager project of George Lewis. The amount of information which Lewis can extract from the simple stream of interval and amplitude data from a pitch-to-midi converter is unrivaled. Stretching over more than five years and three different hardware systems, it has evolved to the point where a rich polyphonic composition process is capable of recognizing and adapting itself to the interval set, long term dynamics, density, articulation and most impressive of all, the tempo and time sense of a live performer. Without trying to impose a single composition on the 'guest soloist', Lewis' interactive composer constantly surprises both performer and audience with its varieties of style and its strategies for accompaniment. The advantages of spending 25 years as an improvising musician show in the richness of this rare gem of musical artificial intelligence. -Joel Ryan, Some Remarks on Musical Instrument Design at STEIM
    This is the second half of an amiable and free-ranging chat with George Lewis and Damon Holzborn [you can find the first half of that interview here] about the Voyager project - its origins, the philosophy and approaches that inform its creation and operation, its transition from the FORTH language to a Max/MSP patch, and the way that the project fits into the artistic practices of its designers and implementers. It was amazing to be able to talk about a project like the Voyager as both an historical object and a current living and breathing Max program. In this part of the interview, Joel Ryan briefly joins the conversation (via email) to provide some helpful historical detail, and I'm grateful for his help.
    Gregory: Can we step back for just a minute? I didn't ask about what you were using for output for the pre-Virtual Concerto version of Voyager. What kind of hardware did you use for that back in the day?
    George: Well, for the early hardware, we're talking 1987-1991. This might be of quirky historical interest. The premiere was Voyager (1987), for improvising soloist and interactive virtual orchestra at the Massachusetts College of Art in October of 1987. I was there for a short residency, where I served on Christian Marclay’s MFA committee - that’s how we met - and did the very first Voyager concert. The conductor Ilan Volkov found this very early performance on the Web - I didn’t know about it.
    It's November of 1987. This performance might have been the second Voyager concert, in the old Roulette, aka Jim Staley’s loft in Tribeca. I thought this was a pretty good performance by that early machine. The audio sounds like I was using Yamaha FB-01 FM synthesizers—there were two of them, for sixteen voices, controlled by an Atari ST (16-bit 68000 processor, like the Macintosh but much cheaper). I still have one of them here at home.
    What was inside the FB-01?
    George: Georgina Born recalls the following:
    A representative of the Yamaha corporation, in 1984 the leaders in commercial technologies, came to visit IRCAM to demonstrate their latest CX synthesizer. The senior Japanese executive took the machine through its paces. The breakthrough with this machine was size: the extraordinary miniaturization of a digital FM unit. Bemused and admiring of this tiny, powerful toy, the researchers gazed at its black casing. Finally, the American composer PL (who alone worked seriously with small commercial systems), defying the implicit etiquette of the occasion, challenged the man to tell how it worked: what was in the box? A pause, and the representative replied, "Ah... Japanese air!" The room broke into polite, ironic laughter, and mystery was maintained. — Georgina Born, Rationalizing Culture, page 184.
    I'm honestly curious about some of your early hardware and software. Part of that is that I got a glimpse of some of it in a class on FORTH that Joel Ryan taught at the Institute of Sonology back when I was a student.
    George: You were in the FORTH class in the Institute of Sonology?
    Yeah, Fall of ’89. You had just departed…. I was really excited the first time I saw you perform because I’d seen a little bit of the FORTH code and had some of the ideas mediated to me by way of Joel Ryan. And later, I had these recordings of your work. But I didn’t have a way to fit them together. So my first encounter with the Voyager in a live performance setting was kind of like the light going on.
    George: I lived with Joel for one or two years. It was like being in graduate school, you know – he was so smart. And he would cook every day. He said, "Well, I have to feel as though - no matter how bad my coding day is - there’s something that I'm definitely going to achieve that's going to work." (laughs). And so there would be Indian meals, but he’d say, “Well, I'm Irish. I got to have my potatoes” (laughter).
    It was a very fun time and we built incredible stuff together. And then he started getting to this database stuff, which is quite brilliant, in his own way of doing things.
    What Joel describes in this article on musical instrument design at STEIM [Ryan, Joel, Contemporary Music Review, 1/1/1991, ISSN: 0749-4467, Volume 6, Issue 1, p. 3] is a lot more complicated than I remember.
    Interviewer’s note: At the time we finished this interview, George and Joel Ryan were corresponding about their work together. With their permission, we're including these comments here.
    While George Lewis was at STEIM working on The Empty Chair, [a 1986 theater piece for several live musicians, interactive composing system and interactive video, premiered at ICMC 1986 at the Royal Conservatory in the Hague, Netherlands], I attempted real time timbre recognition using a simple lattice filter chip designed for speech. The desire was to enable an interactive 'player' to respond to textural elements in the sound of a live performer. This would be especially useful information at those times when a complex sound made simple pitch data meaningless. This required two microprocessors: one using the filter chip to analyze the acoustical signal into 'frames' of data representing a measure of the timbre of the sound, the other comparing these frames with previously measured ones which were arranged according to our scheme of timbre classes. This scheme was arrived at by simply 'teaching' the program examples of our ad hoc classification. The final result is just a token sent to the player program estimating the best guess as to timbre at that time.
    Joel Ryan: George, you discovered that General Instruments voice synthesis/analysis board from the Circuit Cellar (I showed the original Byte article for a class at Sonology). LPC timbre detection turned out to be a cuckoo's egg of programming challenges. The chip implemented realtime LPC (linear predictive coding) for audio-rate signals.
    George: As I recall, we wanted to see if we could use the LPC voice recognition to do detection of specific sounds--say, a bass clarinet multiphonic--or differentiate a trombone from a flute....
    Joel Ryan: We had to deal with a math problem which can be seen in relation to what we now call Kalman filters - a very sophisticated predictive method with roots in Norbert Wiener’s Brownian noise model of control from the 1930s and 40s.
    Kalman filters are used to optimally estimate the variables of interest when they can't be measured directly, but an indirect measurement is available. They are also used to find the best estimate of states by combining measurements from various sensors in the presence of noise. This stuff was kind of way ahead of its time, or maybe it showed how ambitious amateurs and hackers were at that time.
    Luckily, this was not as scary then as it would probably be now. We were trying to make interesting music - not to hype investors or make art science!
    The work was distributed over a realtime network of nine machines - three different kinds of microcomputers:
    • An Apple II hosted the LPC hardware that generated nonlinear estimates of a marker which we took as a measure of the color or timbre of audio signals.
    • A first edition 512 Macintosh did the training and pattern recognition, feeding its results via MIDI.
    • Eight Yamaha MSX 8-bit microcomputers (CX-5) running the high-level RT Voyager composition/improv code, each using embedded microtonal Yamaha FM synthesis hardware.
    We were trying to find ways to recognize different sonic timbres of arbitrary duration. Software was devised to parse the flow of LPC data - estimate vectors, one every 20ms - gathered into event frames of arbitrary length (N vectors bracketed by noise).
    These irregular time-length frames of LPC "estimates" underwent dynamic time scaling – that is, they were resampled to standard frames of 12 vectors.
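The dynamic time scaling step Joel describes can be sketched in a few lines. This is a hypothetical reconstruction in Python (the original ran in FORTH on 1980s microcomputers); the function name, the choice of linear interpolation, and the two-coefficient example vectors are all illustrative assumptions - only the 12-vector standard frame comes from Joel's description.

```python
FRAME_SIZE = 12  # the standard frame length mentioned above

def resample_frame(frame, size=FRAME_SIZE):
    """Resample a list of equal-length coefficient vectors to `size` vectors."""
    n = len(frame)
    if n == 1:
        return [list(frame[0]) for _ in range(size)]
    out = []
    for i in range(size):
        # Map output index i onto the input's [0, n-1] range.
        pos = i * (n - 1) / (size - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        t = pos - lo
        # Linearly interpolate between the two neighboring input vectors.
        out.append([(1 - t) * a + t * b for a, b in zip(frame[lo], frame[hi])])
    return out

# A 5-vector event (each vector holds 2 coefficients) stretched to 12 vectors,
# so events of different durations become comparable:
event = [[0.0, 1.0], [0.2, 0.8], [0.5, 0.5], [0.8, 0.2], [1.0, 0.0]]
standard = resample_frame(event)
```

The same routine shrinks long events and stretches short ones, which is what lets a timbre be recognized whether it appears in a brief or a sustained gesture.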
    Recognition and data collection required some underpinning, which we borrowed from voice recognition:
    • Realtime adaptive noise floors for event-driven framing of musical gestures (i.e. quiet events will emerge from a different noise background than loud ones)
    • Dynamic time stretching to normalize the time aspect of events for comparison (for the same timbre in longer or shorter appearances)
    • Training software, to populate our feature space
    • Vector pattern recognition in a reduced dimension feature marker space
    On the Mac, we ran at least 4 parallel threads via our own multitasking kernel to manage the interweave of data collection, normalizing, recognition, and communication with the eight Yamaha players - all in FORTH, on all machines, without the support of “packages” or example code.
    And that was just the tech. It was crazy and fun.
    Wow. That's complex all right.
    George: When I look back on it now, I realize that we were nuts to try to put all these things into one concert—LPC timbre detection, video processing, virtual orchestra. But reading what Joel wrote, you can see now what I was saying about what living and working with him was like. Also, he always sees the larger picture about the importance of the social in the work.
    George, what about the first Voyager performances I got to see?
    George: For the period from 1994 to 2003, I used the hardware in these photos.
    kickin' it old school - a vintage Mac, a MIDI patchbay, and a Garratin AG-10
    The hardware was used on a live performance from 1994, released on Endless Shout on the Tzadik label.

    George E. Lewis -- Voyager Duo 4

    Do you ever fire up the old gear and perform with it in situations where you want the old-school orchestra?
    George: Nope. It’s too complicated to set up, and I don’t perform much now anyway. I see that they still have MIDI patch bays — but the four Audio Gallery players? I have no idea where those are—probably in my basement in San Diego, which means that they became waterlogged a while back.
    The last thing like that we tried was in San Francisco in 2014 where Damon and I tried making a 16-channel “chamber orchestra” from VST instruments that could do lots of extended techniques using key switches, unlike the usual Garritan stuff. The machine also played the Disklavier, and we had Dana Reason (piano) and Kyle Bruckmann (oboe) as soloists.
    We got through it - Damon did the near-impossible in terms of our ridiculously short timeline - but the orchestra would have needed a lot more work to get anywhere near what the old Voyager could do. In particular, we did not have time to recreate Voyager’s complicated orchestration and grouping schemes. These are absent from the piano version.
    Also, the advantage of the old Sound Gallery machines was that they could be instantly switched with a few MIDI program change commands. You could have 64 different instruments - or 64 of the same - all microtonally tuned. The orchestration scheme was based on that.
    Could a modern laptop run 64 VST channels? If not, I’d be in the same position as in 1987—the Yamaha CX-5 could run its own FM synthesizer and my program, but not both at once. I have thought about building a “Voyager hardware box” that could operate like the old Sound Canvas, but with the flexibility of 64 channels of VST instruments.
    Okay. On to the software now. The Voyager software was originally written in FORTH, and Damon ported it to run in Max. How close is the Voyager code that you run today to its ancestor?
    George: Whatever change there was didn’t happen as a result of the translation process. Damon translated even the stuff that didn’t work and the stuff that wasn't being used (laughs). And we found out that some of that stuff really did work. I just didn't know how to use it.
    I would say that certain things were made to work better, and there’s some of that stuff that you added, Damon, that I still don't quite understand, like the multi-node processor that looks at certain power relationships across the keyboard or different kinds of things that got added to it. But in terms of the essentials, it's very much the same. What do you think, Damon?
    Damon: I guess it's hard to say ‘cause I'm trying to remember - there are certain things we added, like the transitions. Now were the transitions one of those things that we brought back that you didn't implement before?
    George: There was the stability detector. The transition patching was new with the first Max version.
    Damon: And what about the daemons? Were those new?
    George: The daemons were there.
    Damon: So the transition stuff was new. And then we just brought in the stability thing recently. George, you explain the stability stuff - it's your thing.
    George: Well, yeah. It took a long time to figure it out. My sort of theory was that when you… well…
    I don't want to forget this one part about what it means to have stability. It's all based on this thing that Joel Ryan called the leaky integrator - he was the one who made up that idea.
    The leaky integrator is still with us - it was a system of smoothing - lowpass-filtering inputs so that the system doesn't respond note-by-note. It takes its material from these lowpass curves that kind of go up and down - like biorhythms, in a way. There’s one for pitches, one for the durations, and there are others. So they work like the transition systems - the input sets the goal and it starts moving toward that goal. And, let's say, if you play a high note, it starts moving toward the higher notes. And then if you play a low note after that, it starts moving back down - it never stays in a stable place, but it always gives you a sense of more or less where you are.
    It's a little bit like a mediator of some kind. So I described it to Joel and he said it was like a capacitor. And so that's the thing that I think came out of this thing. I don't know if Joel used that in his own systems - he probably did. But that's been an integral part of Voyager since the very beginning. We call those the “smoothers” now.
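For readers curious what a "smoother" might look like in code, here is a minimal sketch of the leaky-integrator idea George describes: each new input sets the goal, and the smoothed value drifts toward that goal a little on every step, so the system never responds note-by-note. The class name, the leak rate, and the MIDI-pitch framing are illustrative assumptions, not Voyager's actual implementation.

```python
class Smoother:
    """A leaky integrator: a one-pole lowpass on incoming values."""

    def __init__(self, start=60.0, leak=0.2):
        self.value = start   # current smoothed estimate (e.g. a MIDI pitch)
        self.goal = start    # where the curve is currently headed
        self.leak = leak     # how fast we move toward the goal (0..1)

    def step(self, new_input=None):
        if new_input is not None:
            self.goal = new_input          # a new note resets the goal
        # Leak a fraction of the remaining distance toward the goal.
        self.value += self.leak * (self.goal - self.value)
        return self.value

s = Smoother(start=60.0)
s.step(84.0)   # a high note: the estimate starts climbing toward 84
s.step()       # ...and keeps drifting upward with no new input
s.step(48.0)   # a low note pulls the goal back down before 84 is reached
```

The curve never settles in one place, but it always gives a sense of more or less where the player is - one such smoother per parameter (pitch, duration, and so on).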
    smoother (implemented in Max/MSP by Damon Holzborn)
    George: So the leaky integrator was there and the daemons were there, but the stability checker was dependent upon the smoothers for its operation. The idea was that I hear improvised music, quite a bit of the time, as alternating between periods of consensus and dissensus. It might be very quick, but it's almost very regular.
    I played with a wonderful group called Meltable Snaps It with George Cartwright and David Moss and Michael Lytle. And in listening to them play, it was sort of amazing - they would get into this groove or this consensus, and after a while, someone would get bored, and then there'd be this moment of furious disconnect where people are trying to create a new consensus… “Got to get rid of this. What are we gonna do next?” So there's an ongoing audible discussion that you can follow about where people are going to go next. Everybody's privy to it. And then suddenly everybody settles in, and that's the next section, and the final thing that happens when they do that at the very end of the piece - when they decide, “We want to stop the piece.” That's another kind of thing that they do. Sometimes people don't get it, sometimes things get missed, and so on.
    If I listen to Derek and Evan Parker play, it might be two minutes, but with Meltable that might be 30 seconds or 20 seconds between moments of consensus and dissensus. The stability checker was designed to look at several parameters of the input, and to see when things were not changing very much and when things were changing a lot. That was the theory. And when things were changing a lot, it would say, “Oh, it's not very stable.” But when things weren't changing, it would detect that. It was pretty good - even the raw FORTH code that Damon translated was good at detecting it. We've made it better. But then we had to figure out what the hell to do with it (laughter).
    And this is where it's very important to have someone working with you on something like this who is a conceptual thinker - Damon – and an improviser who can really sort of say, “Well, here's what I would do….” Or, “Here's what I think people do….” from having listened to a lot of music, having taken part in thousands of improvisations. And so we were able to figure out together what Voyager should do in order to respond to this: What do we do when we find stability? And it came down to a very simple thing – I hope I'm saying it right, Damon. Whatever you're doing now, just keep doing it! (laughter)
    Damon: Yeah – I think that’s really what it comes down to! (laughter)
    George: Don’t interrupt. And when you detect a change, then you can change, too.
    That sounds so simple....
    George: It works very well with people like Roscoe Mitchell, who might go for three or four minutes on this one groove that was fairly stable. So the system should be able to do that, too. And we know that the first time we tried it was in Berlin in 2018. It worked great. Roscoe was much happier because the system wasn't interrupting him all the time just for nothing, just like, “Oh. The numbers come up. It's time to change.”
    “Oh, no, no, no. We've got this thing going on. Let's go with it - let's keep going with it.” And not even, “Let's see how long it's gonna go.” We're not making predictions. We're just saying we're not going to change. Now that we have a groove, we're gonna keep it until you make the change. And the change could be something like, “Oh, you stopped playing now.” That's a change. Or “You changed to a different kind of behavior.” That's another change. Either of those will read as instability.
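That "whatever you're doing, keep doing it" logic can be sketched as a toy stability checker: watch the recent history of one (smoothed) input parameter and report stability whenever its spread stays under a threshold. The window length, threshold, and class name are illustrative assumptions; the real system watches several parameters at once.

```python
from collections import deque

class StabilityChecker:
    def __init__(self, window=8, threshold=3.0):
        self.history = deque(maxlen=window)  # most recent parameter values
        self.threshold = threshold           # max spread that counts as stable

    def update(self, value):
        """Feed one parameter value; return True if the input looks stable."""
        self.history.append(value)
        if len(self.history) < self.history.maxlen:
            return False                     # not enough data yet
        spread = max(self.history) - min(self.history)
        return spread < self.threshold

checker = StabilityChecker()
for pitch in [60, 61, 60, 62, 61, 60, 61, 60, 72]:
    if checker.update(pitch):
        pass                      # stable: whatever you're doing, keep doing it
    else:
        pass                      # change detected: the system may change too
```

Eight near-identical pitches read as a groove; the jump to 72 at the end reads as instability, which is the system's cue that it is free to change as well.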
    I guess this is a good time to get slightly tweaky. Over time, you're really talking about the development of the system - this idea about performing, improvising, and composing that's instantiated in code. I'm a little bit curious about Damon's experience porting the pseudocode that you got to Max. I'm just a little hesitant to ask the question: Was it an easy thing to do, or did you find translating stuff to Max to be a real challenge? For example, I notice that the patches are full of pvar objects. Did you use pvar objects before you did the port?
    Damon: No, and I probably haven’t used them since, either. There are a few things in the patch that are a little un-Maxlike or at the least un-Damon-Maxlike. But - for the most part - it's not that bad. The things that were challenging with Voyager and Max are mostly just things that are just challenging in all programming contexts or Max in general - like managing large projects and dealing with thousands of little objects that you have to wrangle and keep your head wrapped around. And some things are easier in a more traditional programming context, like global searches, diffs and stuff like that. But other than that, I was able to just go through the code and just plug it in. And, for the most part, the challenge was just that it was a big program and it took time to do.
    You really didn't have a lot of scheduling problems and stuff like that?
    Damon: I didn't have a lot of scheduling problems. There was one horrible bug… We had about two or three months and then the last week or ten days I was here in New York – it was before I lived here – I was sitting in my hotel room coding 18 hours a day or more to try to finish it up. There was just one major problem and I literally wore out a mouse by madly clicking to get the bug to surface and had to go out and buy a new one. It was just crashing every once in a while when you pushed a button to change settings. It came down to the solution being “Turn on overdrive” or “Turn off overdrive,”—I can’t remember which one it was. But overdrive was it and then everything was fine. That was the only major glitch that we had as far as the programming. It was dealing with MIDI, not audio, so even back then it wasn’t too taxing. That was like 2004 or 2003 for the piece’s premiere.
    George: In 2004 the piece premiered.
    Damon: Also, I think we did have a way to turn off all the displays so you wouldn't see the numbers changing. Now, I don't think that matters, but when you were running it back then, you could do that. At the time, the UI would've been too taxing on the computer of that era.
    So for you, I guess what you'd wind up thinking of as your contributions to Voyager were things that happened after you hooked things together – you got everything working, and then you said, “Oh, we could do this….”
    Damon: You know, a lot of the Voyager work is, you do some stuff and then you listen a lot. And then when it does stuff that seems kind of crappy, you've got to say, “Okay, what state is it in and why is it crappy?” And then you’ve got to puzzle it out. So a lot of the work comes from George and me - either individually or together - trying to puzzle these things out.
    And then you have to go into the patch and that gets complex. I think one thing that was the hardest was dealing with the input - just the raw input and getting that done in a way that's smooth. And that's probably undergone more change than anything else - over the years, we’ve repeatedly tweaked it. It originally was fiddle~. Then, at some point, we switched over to sigmund~ as our main processor, and we have to do a lot to get the data we want out of these.
    And then you have the problem that there might be four live mikes right next to a giant grand piano. How do you avoid those problems? So the challenge is not just the technical challenge - sometimes it’s just purely the acoustic challenges of all this stuff. The biggest challenge and the biggest pain in the butt was to get it to just do the job. Not what you do with the data, but just getting good input data that we like.
    George: I wanted to point out something you figured out about this input business, Damon - because one of the biggest problems that I've had for years and years was to try to find a way to get the system not to listen to its own audio output. In other words, if a piano is playing very loud, then it'll hear the piano and you. And then suddenly you're at sea, because it's hearing itself and responding to itself, and responding to the player as well. I've tried all kinds of things--close-mic’ing the player, standing at a little distance away, all these kinds of things. And Damon, you worked this out. I don't know how you did it. Maybe you could tell me how you did it. We had these compressors we built. But somehow it stopped listening to itself.
    Damon: I don't know if we're using an actual audio compressor now or if we're just working with the data - this is something we've gone over so much over the years, it's hard to keep track of what we did when. Lemme see – do we have a compressor in there? Yeah, we do.
    Yeah. Joshua Kit Clayton’s credited in the patch there….
    Damon: It's Joshua. Yeah.
    George: Oh really? Is that the famous Kit Clayton?
    Yep, sure is.
    George: OK. All right (laughs).
    Damon: So we did add that compressor in there as an option. We’re using a microphone whose job is usually to take some sound and transmit it to some speakers, whereas we're not trying to do that. We don't care what it sounds like. So the idea is that we want that input to be turned into data, which is just like a string of bits that doesn’t have anything to do with the sound but hopefully corresponds to the sound in an intelligent way.
    And so it was about just trying to figure out ways to deal with the data. And I think we built in some sort of delays for the onsets. I think that that was one of the last things that we did, George. Isn't there some sort of delay now where, like, if the person is quiet, you’ve really got to make some noise for it to act? There's got to be something like that.
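One plausible shape for the onset gating Damon alludes to is an adaptive noise floor: input is ignored until its level clearly exceeds a slowly drifting estimate of the background, so quiet room noise (or the system's own output bleeding into the microphone) doesn't trigger responses. Every name and constant here is an assumption for illustration, not Voyager's actual code.

```python
class OnsetGate:
    def __init__(self, floor=0.01, adapt=0.01, margin=4.0):
        self.floor = floor    # running estimate of the background level
        self.adapt = adapt    # how quickly the floor tracks quiet input
        self.margin = margin  # input must exceed floor * margin to count

    def process(self, level):
        """Return True if this amplitude should count as a real event."""
        is_event = level > self.floor * self.margin
        if not is_event:
            # Only quiet moments update the noise floor, so the gate
            # adapts to the room without chasing the performer.
            self.floor += self.adapt * (level - self.floor)
        return is_event

gate = OnsetGate()
quiet = [gate.process(l) for l in [0.01, 0.012, 0.011]]  # background noise
loud = gate.process(0.2)                                  # a real note
```

Because the floor adapts, a loud room simply raises the bar for what counts as an onset - which matches Joel's earlier point that quiet events emerge from a different noise background than loud ones.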
    George: All I know is we worked on it for a long time. We were very frustrated. And then you figured something out to where we don't have to worry about it anymore. And this made a big difference. So that's all I can tell you.
    But those are some of the issues. I mean, there's a lot of stuff to be said about this thing because of how people thought of it back in the day, when I was doing it at the beginning. There was this whole thing around the time of the 80s. If you look at Georgina Born's book on IRCAM, there was this whole thing about the small systems and the big systems, and of course, mine is a small one.
    And then at first, people thought of it always as some sort of small thing. But if you print out the FORTH code, it's like 120 pages. It's not really that small. It's like a book (laughs). It's pretty intricate stuff. And the same goes for the code that Damon created for it - it's levels and levels, layers and layers. If you print that out, it’s probably even bigger now. So it's not a particularly small system, but it's meant to have a small footprint and not be hard to set up.
    I played Voyager in very diverse environments where you don't get hours of setup time like IRCAM or armies of technicians that you can call on to get things to work. So I've had to be my own technician. I'm used to going out there and sort of saying, “Well, if it's not working, I'm going to try something and then I can look here and see what it is.” Because since I built the thing - even with Damon's version which uses aspects of Max that I'm not very adept at - I can sort of figure out how not to fuck up his code so that when I eventually get back to him, he can fix what I screwed up on an emergency basis so that he can see, I had an emergency, and here's what I had to do to get it to work.
    When you were talking earlier about the whole pvar thing, it’s there because Voyager works on the basis of many, many, many tuned white noise decisions. Rainbow Family had 1/f algorithms, and Voyager doesn't have those. Everything is based on “50 percent of this, 40 percent of that, 30 percent of that, 25 percent of that,” you know. Always binaries - binaries where 30 percent of the time it's going to do this. And then all those many, many decisions sort of emerge in the same way as Damon’s Alternator modular code. And so that's what produces the output. There are no special algorithms. I think drunk might be used a couple of times, but a lot of it is just basic setting of percentages.
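The "tuned white noise" style George describes amounts to many independent weighted coin flips. A toy sketch (the function, the seed, and all the decision names and percentages are illustrative):

```python
import random

def chance(percent, rng=random):
    """Return True `percent` percent of the time - one tuned binary decision."""
    return rng.random() * 100 < percent

rng = random.Random(1987)  # seeded only so the sketch is repeatable

def choose_behavior(rng):
    # Each decision is a simple tuned probability, not a special algorithm;
    # structure emerges from many such decisions layered together.
    return {
        "new_interval_set": chance(30, rng),
        "change_register":  chance(25, rng),
        "go_silent":        chance(10, rng),
        "change_tempo":     chance(40, rng),
    }

decision = choose_behavior(rng)
```

The point is the architecture: no single clever algorithm, just many small tuned probabilities whose interactions produce the emergent output.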
    Damon, in addition to being the guy who worked on the port of the Voyager, you've got an entire body of work of your own. I'm a little bit curious about the sense in which your own practice reflects some of the ideas we’ve been talking about. I presume that you've been at it long enough that some of those are ideas you brought to the table the first minute you sat down and tried to port the pseudocode to Max. But how do these ideas interact with the kinds of decisions that you make as a composer, player or improviser?
    Damon: I think on one level it's hard to answer, partly because I met George as an undergrad so my studies with him started about 12 or 13 years before we ported Voyager. So George’s thinking was long a part of my thinking, because he introduced me to so many of these concepts during my development as a musician. And the other part is that most of my practice - although this is currently changing a little bit - has been about building instruments for real-time performance, but in a very traditional sense where I move a thing and then something changes. Not systems that generate that much.
    Although I have definitely done some of that, by far the largest part of my practice involves creating instruments or controllers that interact with instruments so that the interface allows me to create the sound directly. There's a certain amount of what we’ve talked about that doesn't fit directly into what I'm doing, but I think you could see similarities in the philosophy that I developed over the years, too. I couldn't help but have been influenced by my work on Voyager.
    If you look at my dissertation, which came after I had worked on Voyager, it was basically on simplicity. At the time of my dissertation, my piece was about a software instrument. I was using an iPad as my instrument. So my instrument had boiled down from Max with controllers and this and that to just an iPad making sounds with RTCMix. My practice had really narrowed. In my dissertation, I talk a lot about stripping things down. And I think that work couldn't help but have been influenced by my work on Voyager--although calling it “simple” seems like a very strange thing to do, because actually it's incredibly complex - but it’s the interaction of many simple things that creates something greater than the sum of its parts. That's a huge influence on my thinking in general. I am engaging with that in the Alternator recording, as I said, in the sense that it's a bunch of simple things interacting. Alternator is a lot less generalizable than Voyager, but on a small scale, you could see a lot of similarities.
    As I move forward, I am starting to build more algorithmic tools. I'm working on a pseudo-live coding language called Park, and I have projects that I'm doing for the Monome Crow - things that at least have some algorithmic bits. I think that as those things start to emerge, you'll definitely see this thinking more directly. This idea of simple things combined in a way that’s basically bottom up rather than top down - that’s another way to put it - is definitely how I think now.
    To me, it's about how you come up with a system that creates something that you couldn't have come up with otherwise. I can't play everything that I would like, but I can create systems that could do something I could never play. So how do you do that? I think that the influence from what I learned from Voyager is definitely there, even though the kinds of things I do will be very different from what George has done with Voyager, of course. I think if you looked at them both under the hood, you'd see a philosophical connection. I learned a lot from Voyager that I'm going to be able to apply to that work in particular.
    Given your interest - shared with the Voyager project - in what can emerge from the interaction of smaller processes, and your interest in simplicity and constraint, as your dissertation lays it out, it seems like your 0-Coast project extends that into the realm of social or shared behavior.
    Damon: Toward the beginning of the lockdown period, my friend John O’Brien – a musician with whom I’ve played in the past – posted a track that he’d made on his Make Noise 0-Coast synthesizer that gave me an idea, since I have one too. I suggested that we could do something along the lines of Exquisite Corpse where we could trade things back and forth. Together we worked out a rule set where there were constraints for how you produce a patch and how you’d produce the recording, and it was the perfect way to collaborate in a time where we couldn’t actually meet in person.
    Each week, we’d each record what we called the Prompt and then we’d notate that patch -- the way that the cables were set up and the way that the knobs were set up -- and send that to the other person. Then the other person would create what we called the Response. So we each produced two pieces – each of us created a Prompt and each of us created a Response -- so there were four pieces each week. So the idea is that you make a patch, and then you make a recording, and then you send the patch off. You don’t share the recording until later. When you record your Response to the other person’s Prompt, all you know is the ending state of their patch on the 0-Coast. After several rounds of this we thought it was worth sharing so I created a web app so anyone can easily document and share a patch. I also created a playlist for Exquisite Coast tracks people share on SoundCloud.
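    As a hypothetical sketch - not Damon's actual web app or notation format, just an illustration of the rule set described above - the weekly Prompt/Response exchange might be modeled like this in Python. All class and field names here are my own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    """Notation for a 0-Coast setup: knob positions and cable routings."""
    knobs: dict    # knob name -> normalized position, e.g. 0.0-1.0
    cables: list   # (source jack, destination jack) pairs

@dataclass
class Round:
    """One week of the exchange: two Prompts, two Responses = four pieces."""
    prompts: dict = field(default_factory=dict)    # player -> Patch
    responses: dict = field(default_factory=dict)  # player -> recording title

    def send_prompt(self, player, patch):
        # Only the patch notation travels; the recording is withheld for now.
        self.prompts[player] = patch

    def respond(self, player, other, title):
        # A Response can only be made from the *other* player's Prompt patch.
        assert other in self.prompts, "no Prompt to respond to yet"
        self.responses[player] = title

week = Round()
week.send_prompt("John", Patch(knobs={"overtone": 0.6},
                               cables=[("LFO", "tempo in")]))
week.respond("Damon", "John", "Response No. 1")
print(len(week.prompts) + len(week.responses))  # prints 2; 4 per week when complete
```

    The key constraint the sketch encodes is that a Response depends only on the other player's patch state, never on their recording.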
    If you listen to some of the Responses we’ve done, sometimes they’re very similar, but there’s a lot of freedom. You do whatever you want within the constraints of the patch, so sometimes they’re very different.
    And the rule is that the only thing you’re using is the 0-Coast itself?
    Damon: Right. Basically, you’re allowed the 0-Coast itself, patch cables, and stackable cables or splitters - no other modules, modulation sources, or sound sources. And no control sources, such as a keyboard, either. The only things you have to control it with are the knobs and the buttons. The 0-Coast doesn’t have a keyboard, unlike - say - the Mother-32 from Moog, which has a little mini keyboard. So this really focuses you on just the 0-Coast, and on just the one configuration of parameters the Prompt sets up.
    The Response is really fun. In fact, I just recorded a Response right before this. The patch that John sent was crazy. Since it wasn’t something that I came up with - it just landed on me - when I started listening to it and tweaking, I got results I just was not expecting at all. I’ve had the 0-Coast since it came out - I think I pre-ordered it. Despite having had it for some time and having spent a fair amount of time with it, it’s such a deep instrument that it continues to be a source of rich exploration.
    George was talking about whether or not you’re improvising when you write the code – how does that kind of distinction (or lack of one) look with respect to this?
    Damon: These days the line between composing and improvising gets blurred, and - since it’s been a while since I’ve had a gig - my practice lately has been totally in the studio, where working has become more of a compositional exercise. I create a situation, and then I improvise within that situation, and then I record a track. Ultimately, the length of time that it takes me to record the final piece is the length of the piece. But there is that pre-compositional part, which involves setting up the instrument. There really are elements of both, for sure, but it generally feels more like composition now, in terms of the way that I deal with modular synthesis. My current studio practice is that I set up patches as a way to explore and to create sounds, and then I pull out all the cables when I’m done, and the next time I start again. Each act of working with the synthesizer is a kind of exploration from scratch - which is easier to do in the studio than on stage, and it’s perfect for studio practice.
    It allows me to focus on a different part of the instrument - maybe an oscillator or a modulation source - that I really want to focus on each time. This takes us back to the Exquisite Coast project, because when you focus on that one thing, it forces you to dive deep and really discover what a module - or, in this case, the 0-Coast - can do. Well-designed instruments have a lot of hidden depth that will reward you if you really spend time with them. With this collaboration, every week I’m just dealing with the 0-Coast for some length of time and discovering new ways to patch it up, new ways to control it, and new sounds that I hadn’t heard before.
    George, you've been a composer for a very long time, but it seems to me that the body of your work over the years since I first encountered you has moved more toward what I would consider to be more traditional compositional opportunities. How do you see the roles of improvisation and composition in the sweep of your life's work?
    George: “The sweep of your life's work”--hmmm…that’s quite a long time (laughs).
    That sounds more like a valediction than a question about where you are now. How different is composition from what you think about doing with the Voyager, from a sort of phenomenological point of view?
    George: Well, it’s a very odd thing, Gregory. You know, when you go on stage or someone goes on stage with Voyager, they're improvising. But in order to make the code, you’re composing - you have to compose the code.
    So the difference between composing a Voyager and composing another kind of piece… I do have a class of pieces which are called situational-form pieces - things like Artificial Life 2007, which a lot of people play, or this new one, P. Multitudinis, where people can decide to use the materials as a kind of tool box. Another example would be Creative Construction Set. These are the kinds of pieces where people don't have to be card-carrying or self-acknowledged improvisers, because we're all improvising every moment of our lives.
    You're asking this question about composition versus improvisation. Maybe I'm oversimplifying that, but when I'm composing notated pieces for ensembles, I'm not necessarily thinking about what the input behavior is going to be, because there isn't really going to be any input behavior. I'm just thinking about the outputs. But when I'm composing situational-form pieces - and we could even call Voyager a kind of situational-form work, because it's definitely looking at and characterizing situations and trying to act on them, which is what the other pieces I do are doing - I do have to think hard about what's going to happen. Now, the odd thing about one of my pieces, Artificial Life 2007, is that it is an abstraction of the Voyager thought process (laughs).
    It starts with a set of possible inputs. What should I listen to? Should I listen to the pitch? Should I listen to the volume? Should I listen to the timbre? I list about eight to ten different things they could be listening to, and then the player decides what to focus on. The second thing they decide is what standpoint to take toward those parameters. It's the same set of standpoints as in Voyager, you know - follow, reverse, ignore (laughs). And then the third thing is: what do we do? And then it says act. So that's what happens. If people are doing that - and it's designed for groups of 30 or 40 people - then they're acting a lot like Voyager.
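    The decision cycle George describes - pick what to listen to, pick a standpoint (follow, reverse, ignore), then act - could be sketched like this in Python. The parameter list and the numeric "act" mapping are my own illustrative assumptions, not the score's:

```python
import random

# Illustrative listening parameters; the piece lists "about eight to ten".
PARAMETERS = ["pitch", "volume", "timbre", "density", "duration",
              "articulation", "register", "silence"]
STANDPOINTS = ["follow", "reverse", "ignore"]

def decide(heard):
    """One pass of the cycle: choose what to listen to, choose a standpoint
    toward it, then act on the heard (normalized 0-1) value accordingly."""
    parameter = random.choice(PARAMETERS)    # 1. what should I listen to?
    standpoint = random.choice(STANDPOINTS)  # 2. what stance do I take?
    value = heard.get(parameter, 0.5)        # default if nothing was heard
    if standpoint == "follow":
        action = value                       # move with the other player
    elif standpoint == "reverse":
        action = 1.0 - value                 # move against them
    else:
        action = random.random()             # ignore: act independently
    return parameter, standpoint, action     # 3. act

p, s, a = decide({"pitch": 0.8, "volume": 0.3})
print(p, s, round(a, 2))
```

    Run in parallel by 30 or 40 players, each independently cycling through listen, stance, and act, this is the sense in which the group "acts a lot like Voyager".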
    In fact, Voyager and the work on improvising computer programs have led me to rethink what improvisation is, in a way that's pretty salient for me now. A lot of the old ideas about what things had to be part of improvisation - I don't look at them as being that salient now, or even that descriptive. It's boiled down for me now to basically four things - maybe four or five.
    The first one is the basic condition of the world: indeterminacy. Now we're getting a little philosophical, okay? Indeterminacy - that's our basic condition of the world.
    Then there's agency. When I started building these ideas around 2007, it was all about agency and indeterminacy in a kind of double-star relationship - they're intimately bound up. There was always a player or a person. But I'm not talking about music anymore - just any improvised act, any improvised act whatever. It comes from thinking about machine improvisation, in which we're not assuming certain things, such as John Cage's concern that improvisation is bound up with will and taste and psychology. I don't have a problem with those things. That's a part of sonic ecology. Intentionality - I'm kind of interested in that. Most philosophers are. So that's where it starts.
    So you’ve got indeterminacy and agency. In a way, agency is against indeterminacy. You try to carve out some bright line in the midst of all this, some sort of sense of meaning in the midst of the indeterminacy.
    The third thing is analysis and judgment. In other words, looking at the conditions around you - just looking - and that process is also improvised. Listening becomes improvised. Attending, trying to find out where you are, what your situation is.
    And the fourth thing is choice. So choice is the oddest thing because choice is like Schrödinger’s wave function collapse. There's no truth value. You could say, “I'm going to do this because….” You can't prove that. Or “I did this because….” You can't prove that either. So basically it’s just a choice. So now you have indeterminacy, agency, judgment, and choice. That's it.
    So that takes out time - you don't need time. That takes out will and psychology - you don't need these things. So what that means is that once you have that, any act by anything or anyone that fulfills those conditions, more or less, I consider to be an improvisation. It could be a machine, could be a slime mold, could be anything like that. It could be a composition.
    So let's say you're playing December 1952 by Earle Brown. That's my example here.
    When we played that with Music for Merce, Christian Wolff was showing us how it was working. There were younger people - so-called young people like me, or even younger people like Zeena Parkins or Quinta or Ikue Mori and Phil Selway. And then there were Joan La Barbara and David Behrman.
    So I was saying, “Well, why do we have to have a stopwatch?” You know - just like a Merce Cunningham Event - they set a stopwatch for 75 minutes. You play for 75 minutes, then you're out. That's it. You don't improvise your way toward the end.
    Now I see what these guys are doing. In my pieces, you have to sort of create your own sense - you have to improvise your way toward the end, which means you have to pay attention to local conditions. You’re still dealing with agency and you're dealing with indeterminacy, but you're not dealing with judgment or analysis. You don't have to worry about that.
    So that's why I'm saying the piece Vespers by Alvin Lucier is not an improvised piece. It's not because people are doing cool things with Sondols - those hand-held echolocation devices - you know (laughs). It's because you still have agency and you still have indeterminacy, but what he's trying to do is reduce agency by saying "Accept and perform this task of echolocation." He's making it a task. Echolocation - what does that mean? You're looking for conditions, but you're not assigning a value or meaning to those conditions. The only meaning is that you're getting an echo. What do you do? It's kind of unspecified. Either that's the most pure kind of improvisation or it's not improvisation at all.
    But with that generation, I've often felt that they were sort of in awe of - and also a little bit held back by - the notion of, you know… at the same time they were doing their thing, you had bebop, and bebop was like, "Wow, that's real improvising," you know? And then you had the era when the Indian classical musicians came in and Ravi Shankar became very prominent, and that was called real improvisation, right?
    So they started saying, “We're not doing real improvisation.” Well yes, you are. I mean, even their friends said they are doing it, like Frank O'Hara saying, “Well, Morton Feldman’s work sounds like the purest kind of jazz” (laughs) –uhhh, I don't think so. Or I'm imagining someone coming up to Christian and saying, “Well, this stuff you guys are doing, is this a weird kind of jazz?”
    I can see now that when you're playing For One, Two or Three People, you're not doing it in the same way as you play Voyager. You have to think about situation and think about conditions. But the sense of judgment is very different. And you've still got choice. So it's all very complicated. It's going to take me a while to really be consistent with it. But once I was freed from things like taste, judgment and time, I could look at rice farming in Sierra Leone, as Paul Richards describes it, as a kind of improvisation over a very long time frame - months or years - rather than being in the moment or in "real time." Whose real time are we talking about? I feel like I learned that from working with computers. I don't think it would've come out otherwise.

    Bonus Video/Audio - The Voyager Goes to Johannesburg

    Unyazi of the Bushveld

    Voyager and percussion (Thebe Lipere)

    Synergetics No. 10

    Voyager and Didgeridoo (Thebe Lipere)

    Synergetics No. 8

    For Further Listening

    In addition to the many links to performances salted throughout the two-part interview, here's a brief selection of recordings by George, Damon, and our special guest Joel Ryan:
    Voyager recordings:
    Here are some examples of George's ideas applied in a compositional/ensemble framework:
    Damon Holzborn:
    Joel Ryan:

    For Further Reading

    by Gregory Taylor on
    Jul 21, 2020 5:20 PM

    • Gregory Taylor
      Jul 25 2020 | 9:02 pm
      I was really pleased to be able to include Joel Ryan's comments in this interview. I had the album of his reworkings of Evan Parker on this afternoon while I was patching, and ran across this in his liner notes. It really struck a chord with me, and I hope it will inspire you, too:
      I’ve always seen programming work as kin to instrument making rather than crafting language to describe music. The making of music, musical intelligence, arises within the process of listening and playing. Instruments are not neutral channels of control, just links correlating some idea with its expression in sound. They are intrusive inventions that incite new phenomena. They amplify and modulate the vibration of air but they also carve experience that has no other access. The writing for these instruments focuses on the construction of formal detail, content arises in the moment, remaining ambiguous until the musical meeting begins. The instruments act as levers to dislocate intention. At the fulcrum is the surprise of hearing the result of well-known techniques radically reorganized. It is a way of getting fresh ears without sacrificing fluency.