Articles

An Interview With Luke DuBois

Luke DuBois is a teacher at Columbia University in New York City, and a member of the famous Freight Elevator Quartet, whose "Fix It In Post" CD is making waves as the first release on the C74 record label. In this conversation with Gregory Taylor, Luke shares stories of synthesizer part scrounging, the early days of the Freight Elevator Quartet, and some of his most inspiring students' projects.

Early Experience and Freight Elevators

History. How'd you get here?

I did my undergraduate degree at Columbia. My first year here I dated a woman who was a senior and was taking an electronic music class up at the Columbia Electronic Music Center, as it was still called then. It was taught by a composer named Art Kreiger, and it was a very old-school class. I sat in on the classes with her, and our midterm assignment was to realize a two-part Bach invention using just a test oscillator and tape. So you would have to cut the tape up and splice it all together and realize that, okay, one inch is a quarter note. That sort of thing.

My sophomore year I took a class called MIDI Music Production Techniques as a follow-up. I really hit it off with the instructor, who was a guy named Thanassis Rikakis, who had just finished his doctorate. It was my first exposure to using computers to make music, and I thought it was very interesting. The Columbia studios were still all analog except for this one, so I got in on the ground floor of the gradual transformation of Columbia's Electronic Music Center into the Computer Music Center it is today.

Good heavens, what year was this?

1994.

If you subtracted thirty years from that I would have been less surprised!

Yeah, sure! My first real fetishizing experience was when I discovered all the analog modular synthesizers down here. I thought they were really great; we had a ton of Buchla modules and a couple of Serge modular synths that I became pretty good at playing and improvising on. All the computer music work up here until my second year in graduate school was on NeXT machines and Silicon Graphics computers, and we were very much a tape music shop at the time, which is a medium I still really enjoy.

You were doing more traditional software synthesis?

I was writing my own code on the SGI and making interfaces using RTcmix and X/Motif, and just sequencing things. Thanassis and I were teaching a class called Basic Electroacoustics, and in the fall of 1998 we decided to take the plunge and try to teach something about interactive music, which is a genre that had never been taught here before in any depth. We'd heard about MSP from Curtis Bahn, who was Brad Garton's sabbatical replacement up here the year before, and I went to a festival in Japan and there were a lot of people talking about it, so Than and I figured we should teach it. We'd never touched Max before to any degree - neither of us had a clue about it - so over Christmas break that year we sat down with the MSP tutorial and gave ourselves a completely insane crash course. I sort of got hooked on it and started re-writing all my SGI apps in Max/MSP. The main folder of Max applications on my hard drive is still called, ironically, MSP Ports.

You didn't just sort of become a musician. You were doing other kinds of music at the time.

I became interested in music in high school, which I guess is a bit on the late side for most people. I'd played in lots and lots of rock bands, but I sort of learned composition in a very piecemeal fashion. I never really studied any composition as an undergraduate; I was always winging it, and no one ever seemed to take issue with that until I became a graduate student. I still haven't taken much of a plunge into acoustic writing, though. I'm just not particularly interested in it. I've just been writing electronic music the whole time. I'm in a group called The Freight Elevator Quartet, and we do what one could consider electronic improvisation, so that's where a lot of my focus is right now.

When The Freight Elevator Quartet started you were doing mostly analog electronics, right?

That's right. When we started I was still doing a lot of things with Buchla synthesizers, which I was restoring at the time. We had a big surplus of modules, and I built my synth out of spares that had been left around. My senior year in college we were running these parties with the visual art students up at Prentis, the building where the Computer Music Center is based... every month we would throw one, and it was called Knuckles. The 'official' purpose of the parties, which were funded somewhat inexplicably by the Dean of the School of the Arts, was to promote inter-disciplinary collaboration between students in different divisions of the school. We would have people from the film division working with people from the undergraduate dance department who would be working with musicians and theater people and sculptors. It was a great idea in theory except that no one took into account the fact that when you throw a party people tend to go a little crazy...

One of the things we did was bring everybody up in the freight elevator. We'd let people up in groups of thirty or so crammed into the elevator. The bar was also in the elevator, so if you wanted another drink you had to go down and up again. By the end of the night the elevator was hopping; everybody was packed into it. Mark McNamara, FEQ's video artist and my main solo collaborator, was the graduate student theoretically in charge of the parties. Mark asked me and my friend Stephen Krieger, with whom I'd been doing music for years, to put together a group to play in the elevator because he thought there should be some incidental music in there. So we got on the top of the freight elevator, tapped the fluorescent light to get a plug running off of it and set up a PA in there.

Stephen and I got together with a friend of ours named Paul Feuer with whom I'd played before - he was a didgeridoo player and played drums - and Rachael Finn, who came to the parties and played the cello. It turned out to be a lot of fun, so we started doing it at every party. Then Mark got us these gallery gigs downtown and it became a regular thing. One nice thing we did from day one was audio-tape everything. We had a portable DAT at every show and so we had dozens of hours of rehearsals and performances... so come the summertime we decided to cut an album out of all the live material, and that's what we did. I did everything on the analog synth, and our sets were all improvised; we never really wrote any songs or anything.

Over the following year we wrote and recorded a studio album of tracks which we called the 'Jungle Album'; it was a very self-conscious attempt to work in a specific genre to see what popped out. Paul and I did a lot of SGI programming to do the DSP on that album. When we were finished one of the people to get a copy was Paul D. Miller, DJ Spooky, who liked it enough to want to work with us.

Idiosyncratic Interactivity

So after you were done learning enough about Max to teach it to your students what'd you start... I mean, the difference between you and the computer music students presumably is that your background working with stuff like The Freight Elevator Quartet predisposes you toward interactivity already, right?

That's certainly true. The first thing I did, which is not necessarily a totally unique idea, was that I was sick and tired of carrying around a Serge modular in a road case, so I just wrote myself a Serge in Max/MSP. What I wanted to do was sort of replicate the experience of just closing your eyes and twiddling all the knobs on your synthesizer, so I wrote these sequence-generating algorithms in Max that just generate little synth sequences, and this all syncs to MIDI clock coming off Stephen's MPC sampler. They're performable sequencers, which is nice. I play with a Wacom tablet live, because the mouse is a drag, and I tap pressure and tilt and map them to waveshaper parameters. The idea is for the patch to generate little sixteen-note patterns that you can edit live - they're all tables of different things: pitches, filter Q's, etc.

Most of the sequences are generated according to rules I came up with based on some music theory research I did for Fred Lerdahl's music cognition seminar at Columbia. So the riffs aren't random, but sometimes they don't work quite right because my algorithms aren't entirely bulletproof. So on stage sometimes it's a bit dicey because I might accidentally spit out a bum riff and have to fix it quickly.
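To make the idea concrete, here's a minimal Python sketch (rather than Max) of a rule-based sixteen-step sequence generator along these lines. The scale, leap limit, and filter-Q range are invented for illustration; the real rules come from Luke's music-cognition research, not from this code.

```python
import random

# Hypothetical mode and constraints, purely illustrative.
SCALE = [0, 2, 3, 5, 7, 8, 10]

def generate_pitch_table(root=48, length=16, max_leap=2):
    """Random walk over scale degrees; leaps are capped so riffs stay coherent."""
    degree, table = 0, []
    for _ in range(length):
        degree = max(0, min(len(SCALE) - 1,
                            degree + random.randint(-max_leap, max_leap)))
        table.append(root + SCALE[degree])
    return table

def generate_q_table(length=16, lo=0.5, hi=12.0):
    """One filter-Q value per step, uniformly random within a range."""
    return [round(random.uniform(lo, hi), 2) for _ in range(length)]

pitches = generate_pitch_table()
qs = generate_q_table()

# In performance the step index would be driven by incoming MIDI clock and
# the tables would stay editable live; here we just walk them once.
for step, (p, q) in enumerate(zip(pitches, qs)):
    print(f"step {step:2d}: pitch {p}  filter Q {q}")
```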

So you use them for live performance?

Sure do. I started to use a Wacom tablet because the trackpad craps out in sweaty clubs. Now I use Richard's wacom object to get at all the nifty pressure and tilt data; it's not mapped to anything incredibly exciting, so people think I'm doing something more intense than I really am. I had this totally stoned woman come up to me after a show we played at the Cooler a couple of years ago, and she asked me if I was sketching everyone in the audience. So I lied and told her I was. It's actually really funny the way it used to work after gigs when we played with the analog gear, because I'd get all the tech-heads who say things like 'so which model of Clarity control voltage unit is that?' That's actually the other reason that I moved to the laptop, so that I could avoid having to get into an electrical engineering discussion after a gig. Fortunately, not that many people are going to be able to look at my Max patches and understand them.

[Luke and Gregory play with the computer]

I'm not sure whether what I'm looking at is a set of idiosyncratic tools that are really goal oriented, or something that you've made that pleases you that you've put to some other purpose.

Everything I write is really idiosyncratic. I do that on purpose. I never pretend that I'm making a generic, user-configurable synthesizer. I've actually talked about that a fair bit with Elliot Sharp, when I do MSP programming for him. We were talking about the kind of thing we were going to make, and I was like, I write things that are not for everybody. I can make it with lots of bells and whistles so that you can load in your own banks and whatever, but if you want just a sampler you should buy Unity. It's a better program for that sort of thing; it will work better. The great thing about him is that he's very receptive to the idea of programming as composition. The point is that the technical underpinnings of some of this work are sophisticated enough that really they could stand as composition in their own right. This would be no different than me writing a string quartet using these weird algorithmic rules for spitting out the melodies. I didn't bother writing out a string quartet; I've got something that does it for me.

What's been your impression of looking at the way that traditionally trained composition people go into interactive music? Do you think they find it a salutary experience?

I think a lot of it depends on first impressions more than anything else. I think people who stumble upon this thing are very much influenced by the first one or two people they meet that do it. My first head-on run-in with interactive music was with Mari Kimura. She's very virtuosic in using the machine as an extension of her violin, and it comes very naturally to her.

Natural in the sense that the interface between her and the technology looks kind of effortless?

Yes, it's very transparent. There's no handicap apparent in the way she uses the computer on stage, which is very cool to watch. The best interactive performers are those who can deal with the patch responding in unexpected ways, and just keep going with it. That's why natural improvisers make the best interactive musicians, in my opinion.

I'm a big advocate of simplicity. I never use score following based around specific pitch-based performance tracking; I think you're asking for trouble, especially if you're like me and play in loud clubs and the nicest mic you'll ever drag onstage is an SM-57. I think trusting the computer to pay attention to an unpredictable data stream is a bad idea. I try to make it as transparent as possible for the user to interact with the computer, but also I don't want the computer making decisions that the user isn't completely informed about. When I have the robot turned on in my patches, all the lights that the user would normally be switching switch, and all the dials they would be turning turn, so you can see what's going on. The automatic processes aren't working on some hidden data file that you can't see.

My patches for Elliott always have some huge ass number boxes hooked up to the pitch trackers, so he knows if something funny is going on and he can run with it. If I'd made it a black box experience and something started going wrong we'd be in deep shit. While I really admire Max/FTS programmers who can sift through all the subpatches from the NeXT in the back of the room and see what's going on, it's just too complicated for the average performer to deal with. I think there's some truth to the saying that people can only pay attention to so much at once, that people have a limited bandwidth, especially in stressful situations like performance. The last thing anybody wants to happen at a gig is to have something schiz out on them and not know about it. That's why clarity of interface is so incredibly important.

You were talking about the idea that your experience is mediated through the first people that you meet using the tools. Tell me about your experience working with other musicians, doing patch work for other people. How does that work for them?

Elliott and Toni Dove, a New York-based installation artist, are the two main people I do commission work for. I tend to slip into an educator role pretty quickly when I work for people, so they have at least a good grasp of what's going on inside the system. Some people who do commission work don't necessarily feel this responsibility to make sure that the person they're working for understands what's going on, but I feel differently. I always comment my commission patches, so the people I write them for can fix things if they want to. Anyone can program this shit as well as I can if they want to. It's not hard.

I actually really enjoy Max programming for other people, and I like coding Max externals in C for people to play with. Dan Trueman and I are very proud of PeRColate, not necessarily because the objects themselves are so great, but because we leave everything open source and so people can learn from it. Max is a great program in that you can get involved with it on any level, from working with interface design to coding at the lowest level in C. There really isn't anything else out there that's that extensible.

Teaching

What do you use when you teach students Max?

When I taught Max with Thanassis we created a big library of tutorial patches which we'd use in class and leave on the machines for the students to hack with. The way we split it was that Thanassis would teach the Max part and I would teach the MSP part. There were 13 classes in the term; we'd do six or seven patches in class, and we made these patches link to the help files for all the new objects we covered. The patches were pretty simple. They ran the gamut from simple notein/noteout things to score following and pitch tracking. The interfaces were all pretty simple because we just wanted to show everybody how to do it. When I teach Max/MSP at NYU I tend to make patches on the fly in class and then save them to the hard drive for the students to use as starting points.

Teaching Max isn't really like teaching a conventional programming language, because the set of 'primitives' from which you build algorithms isn't small; it's more like learning a foreign language, and so once you teach the basic programming interface you can show groups of objects as a vocabulary to be learned. The tutorials for Max and MSP are great starting points, but it's very important to get a sense of what your students find most interesting.

Do you have a sense of what your students want to do with it?

It's a really big range. I've taught graduate composition students at Columbia who have very specific ideas of what they want to do in terms of human-machine interaction, and I have students at both Columbia and NYU who are more interested in Max's capabilities in terms of automatic music generation and signal processing. The main challenge for a teacher of Max is to expose the students to the whole range of things you can do with it; most music students come in with limited exposure to interactive music and its potential, so you have to get people to realize 'oh wow, I didn't know you could do that with a computer.' That kind of thing is easily taken for granted by people who are conversant with computer music, but you have to remember that for a lot of people this stuff is still complete magic.

I think part of the goal of teaching Max is to show how it can undermine your assumptions about what machines can do musically. MIDI is an excellent example; the first major assignment I give my class at NYU is to make them use the MIDI keyboard in the room to make a relational synthesizer, where different keys trigger specific actions or parameter changes but not on a one-to-one correspondence. There are a lot of people out there who walk away from computer music thinking that all they'll ever get out of the machine is a deaf-mute reverb processor, and it's your job as an instructor to deflect that bias.
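As an illustration of the assignment rather than any particular student's solution, a 'relational' mapping might look something like this Python sketch, where the meaning of a key press depends on context instead of a fixed one-to-one assignment. The specific rules here are invented.

```python
# Running state that gives each key press its meaning relative to the past.
state = {"count": 0, "transpose": 0}

def handle_note(pitch, velocity):
    """Interpret a MIDI note contextually rather than playing it literally."""
    state["count"] += 1
    if pitch < 60:                        # low keys shift a global transposition
        state["transpose"] += pitch - 60
        return f"transpose now {state['transpose']}"
    elif state["count"] % 4 == 0:         # every fourth key press inverts the keyboard
        return f"play {127 - pitch} at velocity {velocity}"
    else:                                 # otherwise velocity chooses the action
        return ("trigger sample" if velocity > 100
                else f"play {pitch + state['transpose']}")

# Simulated incoming (pitch, velocity) pairs.
for note in [(48, 90), (64, 110), (72, 50), (60, 80)]:
    print(handle_note(*note))
```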

The best experiences I've had in teaching Max have been when I've been pleasantly surprised by people without formal musical training. A lot of our undergrads at Columbia come from the experimental new music division at WKCR. So we get these students who don't necessarily compose but are very familiar with current experimental music. They'll want to make the computer play like John Zorn, or they'll want to do something like a Merzbow-matic, just tons of noise. This woman in my class two years ago did a really nice realization of Lucier's 'I am sitting in a room...'. She did a really interesting Max patch where she stuck a CD in the CD-ROM drive, took a couple of samples off of it and made these sort of ring modulated feedback things. It was really great; she just hooked it all up to the key object - there was no real interface or anything - and you would arbitrarily press keys on the keyboard just to see what it did. Actually, it's funny - I never showed them Bomb, but it sort of ended up working like Bomb because you never knew what the fuck was going on unless you felt like reading the manual.

... and why do that?

Seriously. We had some graduate sculptors in the class once, and this guy named Karsten Krejcarek did this very cool thing. He built this very Freudian chaise longue, and a papier-mâché rock, and stuck his iMac in the rock. He had one of those stress sensors, those things that you put your finger on and it puts out a whine based on your sweat or something. So he hooked that into Max and had it pitch track the sensor. He sat down on the couch with a little clip mike and had Max detect when he was speaking, and as his stress level rose the responses from the computer became more and more Freudian. So he was sitting there saying things like 'I'm really worried about my job,' and if the stress was high enough the computer would spit out 'what do you think this has to do with your mother?' It went on like that, and it was a really funny thing.

It was using something like Eliza, then.

Yeah, it was a lot like Eliza, except without any pretense of natural language recognition. He had five or six responses for each stress level. It was meant to be performance art, so it was somewhat scripted. The patch was designed so that the first five responses were going to be about his mother, and the second five responses were going to be about his love life. It was really great.
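A toy version of that scripted, Eliza-like logic might look like the following Python sketch: the stress sensor's value just selects a bank of canned responses, with no natural language processing at all. The thresholds and response texts here are invented, not Karsten's.

```python
import random

# Canned response banks keyed by stress level; purely illustrative text.
RESPONSES = {
    "low":    ["Go on.", "Tell me more.", "I see."],
    "medium": ["How does that make you feel?",
               "Why do you think that worries you?"],
    "high":   ["What do you think this has to do with your mother?",
               "Perhaps this is really about your childhood."],
}

def stress_level(sensor_value):
    """Map a raw sensor reading in the range 0.0-1.0 to one of three banks."""
    if sensor_value < 0.3:
        return "low"
    return "medium" if sensor_value < 0.7 else "high"

def respond(sensor_value):
    """Pick a scripted reply; speech detection would gate when this fires."""
    return random.choice(RESPONSES[stress_level(sensor_value)])

# Simulated readings as the speaker gets more anxious.
for reading in [0.1, 0.4, 0.85]:
    print(f"{reading:.2f} -> {respond(reading)}")
```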

I had another student, a sculptor named Thomas Charveriat, who is a real genius. He programs those PIC microcontrollers and Basic Stamps, and so he made a whole bunch of drawing easels that put out MIDI for the class. You drew on a piece of paper with charcoal and the paper was hooked up to an RC circuit. When you darkened the charcoal line it decreased the resistance, and the chip translated that to MIDI which drove a Max patch controlling the timbre of a drone. Very cool.

Whoa.

Yeah. He was using the resistance to change a waveshaper. So it started out as a sine wave, and then it turned into this waaaaaaaa, sounding like a didgeridoo. So people could just draw, and when they were done they could erase everything and it would be a sine wave again. It was really interesting.
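The signal path is easy to sketch. Assuming a roughly logarithmic resistance-to-controller mapping and a simple tanh waveshaper (stand-ins for whatever the actual circuit and patch used), the chain from charcoal line to MIDI to timbre could look like this in Python:

```python
import numpy as np

def resistance_to_midi(ohms, r_min=1e3, r_max=1e6):
    """Darker charcoal -> lower resistance -> higher MIDI controller value."""
    ohms = np.clip(ohms, r_min, r_max)
    # log scale so the response feels even across the whole range
    norm = 1.0 - (np.log10(ohms) - np.log10(r_min)) / (np.log10(r_max) - np.log10(r_min))
    return int(round(norm * 127))

def drone(midi_cc, freq=110.0, sr=44100, seconds=0.1):
    """Sine wave pushed through a tanh waveshaper; drive follows the controller."""
    t = np.arange(int(sr * seconds)) / sr
    drive = 1.0 + 20.0 * (midi_cc / 127.0)   # drive 1 is close to a clean sine
    return np.tanh(drive * np.sin(2 * np.pi * freq * t)) / np.tanh(drive)

for ohms in [1e6, 1e5, 2e3]:                 # blank paper -> heavy charcoal
    cc = resistance_to_midi(ohms)
    wave = drone(cc)
    rms = np.sqrt(np.mean(wave ** 2))        # rises as the sine squares off
    print(f"{ohms:9.0f} ohms -> CC {cc:3d}, RMS {rms:.3f}")
```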

Fixing it in Post

That day in New York, Gregory didn't realize that there was a Cycling '74 record label lurking in the future, let alone a Freight Elevator Quartet release. His pal Ben Nevile takes up where the conversation trailed off:

First of all, my compliments to all of you for producing such an excellent CD.

Thanks. It was a pretty exciting project. I've been wanting to put together a live album of our more recent material for ages - our last live record came out in 1997. It's very exciting to have the first release on C74.

Many of the tracks are composed of segments from several different performances. Can you talk a bit about the process that you undertook to compile the album? How did you achieve such seamless integration of the different performances?

We were lucky in that pretty much every show we've ever done was recorded to DAT. Stephen Krieger does all the mixing live on stage with a portable mixer, so we've always just chained a portable DAT machine off of it and recorded our gigs. We had about fifty or so tapes of material to work with. I made a huge database of all the DATs and we picked about fifteen shows that we thought were really on, both in terms of how we played and the sound quality.

I burned CDs for everyone in the band to go through, then based on everyone's recommendations we sat down and made rough cuts of the tracks we thought would work well on a new live album. We assembled all the tracks in Digital Performer and Stephen worked very hard to get reasonably seamless crossfades between all the different segments, so that you wouldn't really notice where the cuts went from one performance of a song to another. We wanted to include different performances of the same song in each track, so on most of the songs on the record there are as many as five or six different performances of the song in there, all crossfaded together and occasionally overlapped.

My one regret is that we never did multi-track recording live, so we had to jettison a lot of material simply because there was something a little iffy in the mix. Had we recorded everything to DA-88 or something we could have really 'fixed it in post', as it were. As it stands I'm amazed we got such a good sound considering everything was mixed live to 2-track DAT; but that's really a testament to Stephen's ability to keep a grip on everyone's levels while we're all improvising live.

Parts of the album were performed with modular synthesizers, and other parts with your own software. How did your approach to live performance change as you transitioned to Max/MSP?

The transition from the analog gear to the PowerBook was fairly sudden and went along with a few changes in the way we were working in the band. The main change that prompted a shift to the laptop was the collaboration record we did with DJ Spooky. We had a lot of raw material - beats and bass lines and strange noises - that he had given us to work with or had recorded in the studio with us. I really wanted to change my role on the record from 'maker of strange beeping noises' to 'purveyor of strange signal processing techniques'. Since the modular synths could only filter or put strange rhythmic envelopes on the sound, we ran all the samples through little Max/MSP patches that did all sorts of strange things and then re-recorded the output. As a result there's hardly anything on that album that I 'played'; most of my contribution to that record is in the nature of programming.

The second change was that the analog gear wasn't lending anything in the way of precision to the band when we played live, and we were starting to perform 'songs' live, so we needed some consistency. The Buchla was especially troublesome, but the Serge was pretty flaky too, both in terms of getting consistent MIDI sync and in tuning: whenever someone turned on a light switch in the bathroom at some of the clubs we were playing at, my tuning would drop by up to an octave. We got a commission sometime in 1998 for a new piece to play at the Kitchen as part of Columbia's Interactive Arts Festival, and I wrote all my parts on the laptop instead of using the synth, and I just realized it was so much easier and more reliable. I only use the analog gear now in the studio or on special occasions like release parties. I miss it sometimes, but the headaches of carrying around two flight cases of synth modules and sync equipment and having to set it all up I don't miss.

The talk you had with Gregory gave me the impression that your patches were modeled after the way you interacted with the modular. Has that changed?

My Max programming was originally very much based around making a 'virtual' Buchla. I used to call the project the Virtchla synth... the idea was to design an interface that roughly simulated the sheer randomness of tweaking lots of knobs and re-patching things onstage. There are lots of weighted randomness algorithms and gates and switches in my patches, as well as a robot which periodically goes around and changes things every four measures or so in case things are getting boring. Gradually I moved into using samples and strange effects live, and I also have patches that take live instrument input to either trigger events or add some strange effect to the sound. The signature sound of my laptop is still a waveshaped sine wave with a variable-Q filter, which is the closest I can get to the rich distorted oscillator sound I used to have.
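The 'robot' is simple to picture: every few measures a weighted random choice nudges one visible parameter so things never stagnate. The following Python sketch uses made-up parameter names and weights purely for illustration; it isn't the Virtchla patch itself.

```python
import random

# Placeholder parameters and a bias toward timbral rather than structural changes.
PARAMS = {"filter_q": 2.0, "waveshape": 0.1, "sequence_length": 16}
WEIGHTS = {"filter_q": 0.5, "waveshape": 0.35, "sequence_length": 0.15}

def robot_step(measure):
    """Called once per measure; intervenes only every fourth measure."""
    if measure % 4 != 0:
        return None
    name = random.choices(list(WEIGHTS), weights=WEIGHTS.values())[0]
    if name == "sequence_length":
        PARAMS[name] = random.choice([8, 12, 16])
    else:
        PARAMS[name] = round(PARAMS[name] * random.uniform(0.5, 2.0), 3)
    return f"measure {measure}: robot set {name} -> {PARAMS[name]}"

for m in range(1, 13):
    change = robot_step(m)
    if change:
        print(change)   # in the patch, the corresponding dial would visibly move
```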

Basically you can't beat the tactile experience of working with modular synth equipment, and I never even tried to imitate it. I hardly ever use a fader/knob box for that kind of thing. The one piece of gear I do use for interface work which I find indispensable is the Wacom tablet, because it's impossible to use with perfect repeatability, in the same way that you can never turn a knob on a modular synth to exactly the same spot as before... you're always doing something a little different with the pen, which adds a much-needed dimension of expressiveness to the sounds coming out of the computer.

What about the other members of the band - have they changed the way they play in response to any changes you've made?

The transition to laptop has made it slightly more common for me to play lead melodies in the head of the song than before, since I can guarantee that it will come out in tune and in sync every time. Whether this will last is a little hard to say, because every year or so we have an equipment explosion where everyone in the band gets a new piece of gear that they'll use for everything for a while. Gradually we settle down to a comfortable equilibrium where nothing dominates the soundscape too much.

In "traditional" bands there are a lot of physical cues, things you can understand by watching the other people play their physical instruments. Do you miss this, or is it not really an issue?

We work, as far as I can tell, pretty much like any other regular gigging improv band on stage. The fact that we're playing downtempo and drum and bass dance music and not jazz changes the dynamic slightly, but we still make eye contact with each other, yell at each other and do lots of hand waving. We still get most of our cues from just watching each other and listening, like any group would. I think the technology is still very much something we enjoy using to facilitate creating a unique sound experience, rather than it being the whole point of the exercise.

The fact that the digerati have over the years taken the term 'interaction' and turned it into a piece of technological lingo is fairly offensive to me; four people on stage with instruments is a priori a more interactive experience than four computers on stage could ever be. That we use a lot of cool gear and computers shouldn't really take attention away from the fact that there are four people up there in charge of everything making the music.

Can you talk a little bit about how you treat live instruments with your computer?

My favorite array of MSP patches contains a lot of granular processing, some weird out of tune comb filters, and a Chebyshev waveshaper which I use to mutilate everything. Pretty much any instrument we use is fair game to go into the PowerBook, but on stage the one I process the most is my guitar. I'm a pretty lame guitarist, so I'm sure a lot of it is a subconscious effort on my part to clean up my playing and make it more interesting.
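Chebyshev waveshaping is the one piece of that chain that's easy to show in a few lines: the nth Chebyshev polynomial maps a full-scale cosine onto its nth harmonic, so a weighted sum of polynomials sculpts a spectrum out of a pure tone. A minimal Python sketch, with invented harmonic weights:

```python
import numpy as np

def chebyshev_shaper(x, weights):
    """Apply sum_n weights[n] * T_n(x) to a signal x in [-1, 1]."""
    out = np.zeros_like(x)
    t_prev, t_curr = np.ones_like(x), x          # T_0 = 1, T_1 = x
    for n, w in enumerate(weights):
        basis = t_prev if n == 0 else t_curr
        out += w * basis
        if n >= 1:                               # recurrence: T_{n+1} = 2x*T_n - T_{n-1}
            t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return out

sr = 44100
t = np.arange(sr) / sr
tone = np.cos(2 * np.pi * 220 * t)               # full-scale 220 Hz cosine

# Hypothetical weights: mostly fundamental, plus some 2nd and 5th harmonic grit.
shaped = chebyshev_shaper(tone, [0.0, 1.0, 0.4, 0.0, 0.0, 0.25])
print("output peak:", round(float(np.max(np.abs(shaped))), 3))
```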

So what's on the horizon for computer-based musicians?

It's a bit hard to predict the future of computer music per se, but I can comment on two trends that I expect will continue into the near future. One is how computers are doing a fair bit to democratize, if you will, making music. This has a good side and a bad side. The good side is that there are more people making music, both on an amateur and professional level, and a lot of this has to do with the fact that computers bring down the expense curve of doing professional quality recording. The risk of all this is that computer software will become less and less sophisticated to try and target beginners who might not realize all the possibilities that exist; the amount of music out there that sounds like 'compose-by-numbers' will probably increase before it decreases.

I feel that it's really important for music educators to teach aspiring computer musicians how to subvert the system (whatever the system may be) at a very early stage, if only to show that computers can really help you work outside the box if you want them to. All computer software that purports to facilitate the creation of art is biased, whether intentionally or not, and it's very important to locate those biases and know how to work around them. One of the reasons I like Max so much is that the bias isn't an aesthetic one, but rather it simply promotes a specific working methodology.

The other trend that I've noticed as computers have become faster is that the barriers between different types of media are gradually collapsing. You're going to see a lot more painters-turned-musicians and musicians-turned-video artists; I became pretty keenly aware of this when I was at STEIM with the Freight's video artist, Mark McNamara. He taught himself Metasynth in about a day and a half and was making perfectly respectable beats in no time just by manipulating image patterns.

I think the most important thing about technology is that it can break barriers both aesthetically and synesthetically. Maybe in five years we'll all be beta-testing MOP (Max Olfactory Processing). Then again, maybe not, but I wouldn't put it past anyone.

by Gregory Taylor on September 13, 2005