An Interview With Carl Stone


San Francisco resident Carl Stone has composed electro-acoustic and computer music exclusively since 1972. He has been commissioned to compose and perform his works in the United States, Canada, Europe, Asia, Australia, South America and the Near East. In this 1999 interview with Gregory Taylor, Stone talks about his methods for composing with new technologies and the artistic implications of sampling.

Old Technology

One of the things I think is both somewhat invisible and still intriguing about your work is the technology you use to make it. In part, that might be because the materials you use direct the listener’s attention away from the “how” to the “what”. I expect that you might be one of those people who either gets all kinds of questions about gear or no questions at all. Which is it?

It depends where I am. I lectured at the Art Institute of Chicago and I didn’t get one question about gear or software – the questions were all from an aesthetic point of view, which is great. When I’m in Japan I do get a lot of questions about gear. It’s a legitimate question because it’s not obvious, but I frankly appreciate this particular line of discussion we’re having a little more – more on the aesthetic line of attack as opposed to which sampler I prefer.

Presumably your choice of tools had something to do with the kind of idiosyncratic ends you were after, but I’m wondering about the way things like hard disk recording changed how you worked on your early pieces.

Sure. Hard disk recording allowed me to do things that I aspired to. Years ago I was working with turntables and delay lines. Then, this Publison DM89 came along. It was my instrument of choice for a number of years. It was a high-end box designed for studios to use. It was very expensive at the time, too – and I had saved up for quite a long time to get one. It was only after it was stolen from me twice -

The same box?

The same box. I had a studio in my house, and my house was burglarized. They cleaned me out, including this box – which was a five or six thousand dollar piece of gear. Of course, it was insured – so I had this check after break-in number two and I asked myself, “Should I get this box for the third time?” It’s 1986, MIDI has arrived, the Macintosh is affordable, and sampling has come into the realm of the consumer. So, basically, I decided to try something new. I saw it as a chance to do what I’d been hoping to do for a long time with my Publison, which was to get some kind of programming ability and some kind of precise control over time – which I’d really never had. So the answer was obvious – it was time to try this.

What’s the first recording that comes out, after you acquire this equipment?

It was a compilation on the Music & Arts label that had two of my pieces on it – Hop Ken and Wal Me Do. Those pieces reappear in different performances on the CD that I brought out myself in 1989 called Four Pieces.

Was the business of trying to use the different technology to reinvent yourself stressful?

No, I didn’t feel stressed at all. I just cancelled all my appointments for half a year – my social life went completely to hell. I just worked more or less continuously for a long period of time ’cause I was really totally fascinated by the technology.

New Technology

So I hear you’re a Max and MSP guy now.

Well, I’ve been a Max user since the product first became commercially available around ’91 or ’92. But before that, I was interested in the use of computer programs for algorithmic composition, one of which – M – David Zicarelli had had a hand in. M was certainly an important program for me because of its interactivity; it was a kind of performance tool and a performance instrument that allowed a certain amount of controlled randomization that you couldn’t get with straight sequencing tools. Sequencers for me are not really appropriate for live performance – I’m not just interested in the straight playback of MIDI material – so M was really a wonderful thing.

And then Max extended the ideas of M, and gave you really a full palette – a complete toolkit for building your own MIDI processors to do whatever you wanted. Because my approach to making music is pretty much outside the mainstream, I couldn’t really be satisfied with most of the tools that were in the commercial marketplace for software at that time. From ’92 to ’96 I was using the standard setup – using a computer with a program like Max to control MIDI sampling instruments.

But when Max/MSP came along, it became obvious that this was the key to the future. These new fast G3 computers eat MIDI for breakfast; everything you had previously done with MIDI and external boxes, you can now do internally with a single machine. From just a convenience standpoint, MSP is a wonderful thing – I’m very grateful not to have to be hauling around lots of racks and pounds of equipment when I perform and tour. But the other thing is that MSP allows you to build and customize your own tools to do exactly what you want, or even things that you might not know that you want. The commercial devices, which are created for the generalized tastes of a mass market, cannot do that.

To be fair, you can perhaps tinker within limits….

They’re optimized for a general purpose, which is driven by market factors – that’s just the reality. But MSP is different – it’s more flexibility-driven than market-driven. It’s just great for composers like me – we can build and customize the things we use for ourselves. If other people are interested in them, that’s just great. But the tool itself is non-specific – it doesn’t have a lot of assumptions built into it in the way that a sequencer does. There are a lot of things you can do to defeat the assumptions of a sequencer, but you have to consciously do it – with MSP you always start with a blank slate and build out from there. Really the thing that drives it is not the marketing imperatives of a software tool, but rather your own instincts and imagination – and that’s what’s really great.

So before the arrival of MSP, you were running Max and triggering a rack of samplers and effects processors?

That’s right. My system at that time was pretty stripped down – I was kind of proud of the fact that I was doing everything that I was doing with just a very simple sampler, a Powerbook, an Alesis Quadraverb and a mixer, all being controlled by Max using MIDI. Now I’ve basically thrown out the sampler and the mixer and it’s just done all with a fast computer and MSP.

So it’s all laptop. That’s gotta make touring a lot easier – you just go and plug it in…

I’ve always admired a trumpet player, the guy who gets the call at five o’clock and hears he’s got this gig, and is heading out to the airport at 5:05 with his instrument. I’ve always wanted to do that and now it’s a reality because all I really need to take is my Powerbook and a change of underwear.

So now that you’re an MSP guy, I’m curious about your current situation. How do changing technologies alter what you do? I would be inclined to think not a lot except for the efficiency of working without all those encumbrances….

You’re right, it hasn’t made any fundamental changes. My approach is still the same after all these years, in the most general way. The tools are different and the efficiency is increased, you’re right. When I first got MSP, my first task was to sit down and figure out a way to model and emulate what I had been doing with sampling – external devices and MIDI – using just a Powerbook. But it didn’t take much time to realize that, and then I was left with everything else that’s possible, which is not necessarily about imitating or modeling preexisting devices, but taking new approaches that were available to me for the first time….

Compositional Techniques

Let me see if I’m getting this right – you learned MSP by taking what you already understood how to do and then doing it using MSP?

Well, yeah. After being in this field for all these years, you develop habits and approaches which are basically the way you think or conceptualize a compositional problem. So, rather than completely rewriting the book, the first thing you do is figure out how you can do what you already know, and do it better. Then – because you don’t want to be doing the same thing for the rest of your life – at least I don’t – it leads to something else, something really new.

Once you’re done with the implementation, you have a system that runs on your Powerbook that does what you did the last time you did something. When you think about moving past that, is the next step altered by your encounter with the technology?

It surely is. And it has to do with my fundamental approach to composing, which is that I very rarely start out with a fixed idea that I wish to realize; rather, the act of composing for me involves a considerable amount of simple play, using materials and processes which I construct myself. The process of play reveals something about the sound and material and the sources that I’m using, which in turn suggests something about form and content and so on. Eventually, this becomes a piece of music. And because MSP suddenly expanded the whole range of processes enormously, you have a whole new sonic world available to you once you break out of the old way of thinking and start a kind of new, extended way of thinking.

So, what you’re doing before you create the thing that will become the piece is a kind of interactive listening.

Yeah, that’s exactly what it is – interactive listening which is predicated on some kind of harebrained idea about what might work and what might be interesting, constructed usually from some very simple process which could be used to generate an entire piece or to generate one line in a piece, or one section in a piece, or some subset of an entire piece.

I guess a lot of the other people I’ve been talking to have described perfecting a process, rather than creating a process and then kind of listening to the way it interacts with things that you bring to it. So I’m curious about your post-MSP work – about where you’ve gone since that happened.

The first big piece I did using MSP was about an hour’s worth of music that I created during a one-month period when I was composer-in-residence at the Djerassi artist-in-residence program. Djerassi is a beautiful compound in the hills south of San Francisco, down the Peninsula on the way to Woodside, in the Silicon Valley area. It’s a ranch owned by Carl Djerassi, and about twelve artists go there to live in an isolated natural setting, just themselves and a small staff – to really be away from the distractions of everyday life and – in my case – compose. I had a project at hand, a collaboration with a choreographer and dancer from Japan by the name of Akira Kasai. The performance was coming up a month after my residency ended, so I just sat down with my copy of MSP, my manual, and a formal scheme for a 90-minute dance piece – about an hour of music – and began to jam. The hardest part was pre-selecting the materials that I would bring with me to use as fodder for my sampling, so I just brought an enormous amount of stuff and selected from that. Along the way, I constantly tried out different techniques based on my imagination. Some of them worked, some of them didn’t. You know, you work really hard for a month and the fax machine is off, and the phone is off, and you don’t have email – you get a lot done.

Coming out of that month, what do you wish you knew about MSP before you started?

Hmm… I have no real complaint about that, because I really started from zero.

There are people who encounter the learning curve of a technology and come out the other side and say, “I really wish I’d had a sense of this,” or “I’m accustomed to working in a situation where the languages I work with for programming are hierarchical, and I find Max and MSP complicated to work with because I expect hierarchy and I don’t find it.”

In my case, it’s neither, because I don’t crave hierarchy. I like the kind of non-hierarchical blank-slate approach that Max and MSP give you. Because I’d been a Max user for a number of years before adopting MSP, I was very comfortable with the interface and the programming style and approach that Max embodies as it’s further implemented in MSP, so I felt reasonably comfortable. There are some obscurities in MSP and – this is not a knock on MSP – I’m not personally very well grounded in math or acoustics or digital-audio theory. When you start to work with the objects that are really deep, like buffer~ and fft~ and ifft~, you do kind of need that stuff. At that point, I had very little grounding in it, and there wasn’t really time to go deep in that period because I had to produce a whole hour of music in a month. So I kind of put those on the shelf and out of reach for that period. I guess I wish I had known more about those things at the time, but it worked out okay.

I think that one of the really helpful things about trying to learn digital signal processing is that you can sit down, fire MSP up, and really start screwing around with an audio stream or the contents of a buffer – tweaking it and discovering that what comes out the other end sounds kind of like reverb, or a swarm of bees. While some people are happy with the math or bithead parts of it, I never got the hang of it until I saw what it could do. I came out the other end of it thinking, “That’s why I need it. Crap! If I’d known this, it would have been a lot quicker.” But there’s a lot of wonderful damage to be done to stuff in the process of learning.

Of course, even if you don’t know that, you can still spend a lifetime doing everything else. Last April I was in Italy for a month at another artists’ retreat, a place like Djerassi called Bellagio, sponsored by the Rockefeller Foundation. The difference for me was that I didn’t have a project at hand or anything I needed to do – I just used it for research, going deeper inside some of these objects like buffer~ and so on, and that was very useful.

50 iMacs make a lot of noise

So what kind of Max and MSP work are you doing now?

More and more towards these kinds of things that I’m talking about where you get in at the bit and byte level and start peeking and poking around buffers and fooling around that way. I’m interested in resynthesis, and doing more with that – that’s what preoccupies me at the moment.

You said that one of the things you’re working on now is installation work. That’s quite a change in direction, isn’t it?

Sure. Almost all my work has been for performance or working with other media – time-based work that starts, proceeds, and then ends. I was very happy to have had the invitation to participate in a project in Japan where a number of composers were asked to create pieces for a kind of a sculptural instrument called Incubator – 50 iMac computers in a network. The instigator for Incubator was Mr Masayuki Akamatsu, who also made one of the pieces for it. This opportunity got me thinking about networking ideas and using Max and MSP to realize that. It was completely the appropriate tool.

So what did you actually do?

My piece for Incubator was one of the richer ones compared to the other composers’ in terms of the materials. It used a lot of sound files, graphics, QuickTime, dynamic texts…

There were 50 screens to watch at the same time?

Yes, 50 screens – although there was a certain amount of visual repetition across the 50. Sonically, you can make a joyful noise – a lot of racket – with 50 iMacs. They were set up in a kind of grid pattern, and people could wander around in between them, observe what was going on on the screens, and listen to the sounds. The sounds tended to mass in the room, but if you got in close to any computer you could hear what was going on with that particular one. There were actually at most six different programs running at any time.

Were the machines networked together?

They were networked together and some composers actually passed data in between. One composer had an interesting idea where they used the internal mike and internal speakers of the computers plus objects that could sense the absence or presence of sound so that machines would actually listen to each other and have conversations based on what they heard. In my case, the network was very rudimentary and simple. The programs were designed and installed on the machines. All the programs looked at a master synchronization clock, which was supplied by the network. Depending on the time of the hour, they would react in certain ways.
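The scheme Stone describes – every machine consulting a shared master clock and choosing its behavior from the time within the hour – can be sketched roughly as follows. This is only an illustration of the idea: the schedule windows, behavior names, and clock function here are invented, not the actual Incubator programs or network protocol.

```python
# A rough sketch of time-of-hour scheduling: every machine reads the
# same master clock and picks a behavior based on the current minute
# within the hour. Windows and names below are hypothetical.
import time

SCHEDULE = [
    (0, 15, "drone"),      # minutes 0-14: sustained tones
    (15, 30, "pulse"),     # minutes 15-29: rhythmic material
    (30, 45, "silence"),   # minutes 30-44: near-silence
    (45, 60, "tutti"),     # minutes 45-59: everything at once
]

def behavior_for(minute_of_hour):
    """Map a minute within the hour (0-59) to a scheduled behavior."""
    for start, end, name in SCHEDULE:
        if start <= minute_of_hour < end:
            return name
    return "idle"

def current_behavior(master_clock=time.time):
    """Each networked machine calls this against the shared clock,
    so all 50 switch behaviors in step without passing data around."""
    minute = int(master_clock() // 60) % 60
    return behavior_for(minute)
```

Because every machine derives its state from the same clock rather than from messages, the network can stay as rudimentary as Stone describes while the ensemble still changes behavior in unison.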

I’ve never heard of you doing anything like that before. What did you think about it?

It was my first time working with a network like that, so I learned a lot along the way – what worked, what didn’t work, what could’ve been better. 50 machines – it’s kind of an orchestra. If I examine the piece I did for Incubator, it could’ve been more dynamic – I should’ve gone through the whole range from one to two to ten to 50, but things tended to be either on or off, very soft or very loud. In fact, I have an upcoming piece in Mexico City for 20 computers. This time, I’ll really play with the dynamics so that each computer is its own voice and you have the full range.

Your use of the word orchestra interests me. My first thought was that what you’re describing didn’t seem very much like the way we think of an orchestra as a kind of timbral engine. Maybe after the 19th century we don’t do that anymore….

One tendency in my work is an interest in compounded masses of sound through a technique of layering. I’ve done that with my old tape pieces and then through the digital cloning of materials – up to 16,000 layers of the same sound – and when you’ve got 50 computers in the same room, it kind of cries out for that kind of layering.

And you can spatialize it as well. Nice, inexpensive spatialization – all you need is 50 computers.

Yes, very inexpensive [sarcastic laugh].

Layering

Since you brought up the notion that layering has been a longtime compositional interest of yours, I guess this is a good place to ask about how you started working like this. How’d it happen?

I was interested in and passionate about music from an early age, and studied classical piano when I was very young, but never really took it all that far. I wasn’t disciplined enough to really practice, and I had no aspiration to be a pianist. But I kept up my keyboard skills, playing in high school bands – some groups that included people you might know, even, like the percussionist Z’ev, who at that time was a Valley boy like myself by the name of Stephan Weiser. We had a blues band for a while, and also more of a western improvisational ensemble called The Sonic Arts Group. The Hogfat Blues Band featured a female vocalist named Wendy Steiner who later became the Nashville songwriter Wendy Waldman. My influences at that time were Captain Beefheart, Frank Zappa and The Soft Machine – I really liked what Michael Ratledge was doing with his keyboard work, doing a lot of modification of his sound. That’s what got me interested in electronics, and that led to an interest in synthesizers. The early 70s were when that was all coming to the fore – the Moog and the Buchla synthesizers were becoming viable instruments. Cal Arts started in 1970, and I began there as a student, majoring in electronic music composition. I was equally divided between working with electronic sounds from the Buchla synthesizer and working with microphone-collected sounds in the tape studios at Cal Arts. I parlayed that into a job as a music person – I worked in radio for a number of years as music director at KPFK, the Pacifica station in Los Angeles, composing in parallel, until I left the station staff in 1981.

How did you start working with appropriated and found material? What was the genesis of your interest in appropriation?

There were a couple of things. At Cal Arts, I had a work-study job in the music library. My job was to tape all of the records in the music library onto cassette. They did this as a kind of archival project because they figured that the records would wear out – and I guess they figured cassettes would last forever [laughs]. They set me up in this tiny room with four turntables and four tape recorders and a patch panel. Basically, my job was to just run dubs constantly – I would set up four in a row and play them all together. At first, the challenge was deciding which one I wanted to monitor – I could listen to Machaut or electronic music, or I could listen to some Pygmy music, and so on. Basically, it was up to me. Then I started getting into listening to combinations of them and doing a kind of collage and mixing. Maybe that was the genesis of it all – although I wasn’t thinking as a composer, I was thinking as a guy working in the music library – but it definitely had some aesthetic impact.

After I finished Cal Arts, I worked in a radio station, where my resources were completely different. At Cal Arts, I had access to what was then a fifty thousand dollar electronic music synthesizer and a lot of tape recorders and mixers and stuff like that. After I left Cal Arts, I had nothing except what the radio station had – which was a couple of turntables, a couple of tape recorders and a big classical music library. So I asked myself – how can I make my piece? What can I do?

And I had what you could politely call an inspiration – you could also call it sort of a stupid idea. I recorded the sound of Igor Kipnis playing Henry Purcell’s Rondeau from Abdelazer – the same theme that Benjamin Britten used for his Young Person’s Guide to the Orchestra – onto the left channel of a tape recorder. Then, I went back and recorded it on the right channel – the same tape, but displaced a little bit in time, so you had this kind of small delay effect that created a kind of rhythm. I mixed those two channels and recorded them onto the left channel of my second tape recorder, and then went back and recorded those two tracks again onto the right channel of the second tape recorder – again displaced in time, but by a different amount from what I had done before. So you had two delays happening, four tracks of material, and a little more complicated rhythm starting to happen. I rewound both tape recorders, went back to the beginning, mixed the four tracks I now had down to one in mono onto the left channel of my first tape recorder, and then went back and recorded them onto the right channel of the first tape recorder.

You can see what I had going here – my one had become two, had become four, had become eight… I just kept it going through 16, 32, 64, all the way up to 1,024 tracks of the same material. And what I noticed first of all was that the character of the sound changed completely. The harpsichord – which in the beginning had this kind of plinky thing happening – took on more of a shimmering effect as I added the layers, but still with a feeling of rhythm, however complicated. As I got up into the higher levels of layers – 256, 512, 1,024 – all feeling of rhythm basically fell out. All the smaller time details of the sound dropped away completely and you were left with the broader harmonic contour of the piece – the harpsichord sound had evolved into something completely out of this world, both literally and figuratively. It was interesting to me that this had happened – in large part because of where the sound had started out in the beginning. So I thought, well, the audience would find this interesting too. Why not just present the work as a series where the form and the content and the process are all merged into one thing? Kind of pedantic, when you think about it in retrospect. But it was very much in keeping with the minimalist movement of the time and the work people like Alvin Lucier were doing in pieces like I am Sitting in a Room. I was very influenced by that when I was a student, and so I basically applied it as a kind of formal device.
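The doubling process Stone describes maps naturally onto a few lines of code. Below is a minimal digital sketch – not his actual tape procedure, which involved analog mixing and generational loss – in which each pass sums the current mix with a time-displaced copy of itself, so ten passes with different displacements yield 2^10 = 1,024 layers. The delay values are arbitrary placeholders.

```python
import numpy as np

def layer_by_doubling(signal, delays):
    """Each pass mixes the current mix with a time-displaced copy of
    itself, doubling the layer count: 1 -> 2 -> 4 -> ... -> 2**len(delays)."""
    mix = np.asarray(signal, dtype=float)
    for d in delays:
        shifted = np.concatenate([np.zeros(d), mix])  # displaced copy
        padded = np.concatenate([mix, np.zeros(d)])   # original, padded to match
        mix = 0.5 * (padded + shifted)                # mix the two "channels"
    return mix

# Ten passes, each with a different displacement (in samples), give
# 2**10 = 1024 superimposed copies of the source material.
delays = [241, 383, 119, 67, 311, 53, 173, 97, 29, 13]  # arbitrary values
```

With short, varied delay times the early passes read as echo and rhythm; as the layer count climbs toward 1,024 the fine rhythmic detail smears away and only the slower harmonic contour survives – the transformation of the harpsichord Stone describes.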

There are composers who find it difficult to understand the idea of appropriation in anything other than specifically ideological terms. Your story seems to be more about a compositional technique that you happened on serendipitously. Do you find that some of your audience expects you to account for a set of attitudes about appropriation that you may not have, or are there attitudes that you developed about the notion of appropriation and ownership in the process of working?

You’re correct. When I started on this path, I didn’t really come to it from an ideological point of view, and I had not developed a specific set of theories behind it. I was aware of movements in the art world that used appropriation – Warhol, Lichtenstein and Rauschenberg. I also knew of Duchamp, who had used found objects and appropriated work in his own. So I knew intuitively that what I was doing was consistent with that, and I did not have any doubt about the ethics of it. But in musical terms, I really hadn’t thought about or searched for any precedents and wasn’t all that much aware of them – although if I’d thought about it, I would have realized that in fact the precedents in music go back much farther than our current century.

The Business of Sampling

But it seems to me that when we think of quotation as a musical device – at least in those historical situations – we have this idea that when a “real” musician borrows something, there are these notions of compositional skill used in that borrowing and the formal devices used to do it. It seems to me as though they’re presented as being somehow qualitatively different than skills that you’re talking about – the skills to imagine the timbres which emerge from displacement, and so on. I’d think that there’s some critical pushback there….

They may be different, but so what? They’re still skills – you can’t deny that skills are involved. For me, the issue with sampling, when you listen to a piece of music whose composer uses appropriated musical material as a starting point, comes down to a simple question: does the musical interest of the final piece derive from the material that’s sampled in the beginning, or does it come from something that’s done to it? If the answer is the former, I have my doubts about the work. But if it comes from the processing of the music somehow, then I have absolutely no problem with that.

It seems to me part of the question about the idea of working with appropriated material has to do with its visibility and transparency. I suspect that there are a lot of people in the universe who listen to your work the way I do, which is to occasionally play trainspotter – to try to recognize source material. Is the invisibility of source material important to you? Is there a certain sense where the invisibility of the source material is attached to more visibility for you as the composer?

It really runs the gamut. A lot of times in the earlier works it’s very exposed, and very often from the beginning. Early on, I had basically two techniques – I was kind of in a rut. A piece would either start out naked to the world and then somehow develop into something completely new, or it would begin with something completely foreign and unknowable which transformed itself into the familiar over the course of time. While both are fine, I like to take a little more complicated approach now, where things are sometimes a bit more understandable and you can put your finger on them, or nearly so. In a piece like Nyala, the ambiguity is a little higher – if you listen, you hear Jimi Hendrix jamming together with Miles Davis, but you’re never sure if those are really the samples that I’m using or if it’s an illusion I’ve created. There’s an element of alchemy that I’m interested in now, and I like that kind of ambiguity and uncertainty – that queasiness you have when you’re not exactly a hundred percent sure what’s going on. But even today, in the music that I’m doing, there are times when it still becomes patently apparent.

But it’s also the question of where we stand with the law and how that’s changed. In the beginning when I had no reasonable expectation that my work would be released commercially, I could go in and use Michael Jackson or use the Temptations, Four Tops, or those kinds of things. I have a piece called Shibucho that’s all based on Motown. Since that work was never intended to see the light of day commercially, I could just let it all hang out. I guess I never thought about this consciously, but now that my work has found some release in the commercial marketplace, I guess that I am subconsciously putting things a little more under the skin, putting a veil of gauze over the sample – so that I won’t be getting a letter from Island Records one of these days.

Do you see the business of live performance as being qualitatively different than the material you produce on compact disc because of the nature of the performance or because of the circumstances under which it’s made? I don’t want to belabor the technology issue, but in a sense different technological resources might be brought to bear on what you do.

It’s a very legitimate question. Nyala could not be performed live. I’ve done concert performances of sections of it, but the piece in total could not be done live – it is, therefore, by definition a studio work. Other pieces that have found their way onto CD are documents of live performances that I do, and others are simply created in the studio. My first released recording, which came out on LP – Woo Lae Oak – was a studio work done in a tape studio, commissioned for the purpose of radio broadcast. Along the way, it falls into about a fifty-fifty split. My bread and butter is performance now, so works tend to be created with that in mind. So I’d say the balance tips in favor of pieces made for live performance, which I then eventually like to bring out on CD in hopes of reaching a larger audience.

A Shift in the Balance

One of the interesting things about Max and MSP for me is a kind of historical irony – that Max saw some of its beginnings in the bowels of an institution dedicated to High Modernism and a certain set of attitudes towards composition. By virtue of a number of hilarious historical incidents and accidents, it has now landed on the desktop of all kinds of people who have no particular investment in that cultural discourse and are busy building their own nonhegemonic memes, just whacking away with it.

I think the work that Miller Puckette did at IRCAM back in those days must’ve been considered somewhat heretical, to a certain extent. But even to the extent that it wasn’t, I think that the trend toward the democratization and personalization of music was absolutely inevitable. It couldn’t have been stopped by even the most powerful music czar.

Most of the time I think of interesting cultural exchanges of music as things that make their way from the margins to the center. Max and MSP strike me as kind of peculiar because in a way they have gone in the opposite direction – from the corridors of cultural power to the clubs and art galleries of the world. In turn, there’s this reflected wave thing that happens, too – the technologies have moved out to the edge, and the work that happens on the edge washes back to the center and renews and changes things. I just heard that this year’s Ars Electronica prize didn’t go to an academic composer, it went to Richard James (the Aphex Twin).

Yeah. Institutionally, they made a conscious effort to open up and proactively involve composers from outside of the academy. By soliciting submissions from outside for the Ars Electronica digital music awards, they faced a whole breadth of musical material that they didn’t have before – and that’s what they got.

How do you feel about that?

Well, without commenting on the specific wisdom of who did or didn’t get the prize, I think it’s a very good thing. Prizes like that should really look at the whole breadth of musical activity for what it’s worth. To say that a prize is only open to the academy is obviously very self-limiting and silly – so no, I don’t have a problem with it at all.

I’m curious about the way that you think about the kind of manipulations that you used to perform. There are any number of artists – DJ Shadow comes to mind here – whose recordings have a pretty bewildering mix of interleaved material from turntableland. The overall feel is that there’s something immediate about it. It seems to me that your work comes out of a more gestational process; it’s not precisely in the moment in that sense. Have you ever felt the urge to return to doing turntable work? Do you see the kind of assembly that you do as being a different kind of process?

I was doing turntable work back in the early eighties – pieces like Dong Il Jang and Shibucho. Frankly, I was completely unaware of any parallel movements which were happening at the same time in terms of hip-hop and the things that Grandmaster Flash was doing. I was in this fuzzy-headed classical new music world that just wasn’t really paying attention to that. Eventually I found out that this was going on too, and I really liked it. But at that time, I’d pretty much left that world and started working with sampling, working from a computer or a keyboard-controlled point of view. And, of course, there’s been so much innovation and there are so many great technicians out there now that it’s kind of scary to think about delving into that.

Is there any kind of music happening out there on the margins that interests you personally?

I think most of the music that interests me is out on the margins at this point. I’m kind of a weird case because I’m not a big consumer of music; I don’t listen to a lot because I tend to get more absorbed in making it than listening to it. I spent a lot of time in Japan and I’m pretty interested in some of the composers working there.

It seems to me that the material that you use is so carefully chosen that I assume that you must listen to stuff all the time.

You’d be surprised. Maybe because I don’t listen all that much, things really come out and grab me when I do. There’s also the idea of what I use as my musical material and sampling material. I never use electronic music for my sampling.

Is there a technical reason for that?

No technical reason, no. There are several reasons. First of all, electronic music is – almost by definition – already highly processed. Because what I’m interested in is using musical cognates for their significance, they have to be fairly concrete to begin with. Usually, I use things that are somewhat iconographic – classical music or pop music or something like that.

So the things you bring in bring their resonance with them….

They bring their resonance, plus it gives me room to make changes – and the changes have significance that you can ascribe to what I’m doing. If I were to start with something that was already abstracted, like the electronic music of Robert Normandeau, Michel Redolfi, or Otomo Yoshihide, it would be meaningless because it’s already once removed.

You’ve worked with Otomo Yoshihide before. Your collaboration with him was interesting because in some respects, it seemed to me that you met in the center – you were both very interested in this business of selectively mining this set of shared cultural objects, taking that material and doing something with it.

That’s right. But what we did was the opposite of what I told you – we did what I don’t normally do, which is to use each other’s developed materials in furtherance of this kind of sampling project. But the only reason we did it – and the only reason it worked in my mind – is that if you went back to stage one, you would hear the original materials at the beginning.

Are you doing any collaborative work now? I suppose that using MSP now means that you can prototype instruments quickly. That’s bound to make collaboration a little easier….

Yeah, you can prototype instruments quickly, and with adc~ and dac~ you have terrific input and output – you can pass audio and other kinds of data back and forth between computers. I haven’t done a lot of work with that yet, but it would be interesting to have two composers playing together over a network, actually passing not only audio but other kinds of data through some kind of networking scheme. I’d be interested in trying that.

It would be interesting to see if there’s a way to do that that didn’t mystify the process of passing materials back and forth.

Yeah, hopefully it would not become totally oblique.

It seems to me that good process work is a little like the classical music of India, where the performance of a given raag contains the information you need to navigate through the listening experience – from the statement of the raag, to the entrance of a pulse and rhythmic structures, to its virtuosic elaboration. At the end, you’re in a position as a listener to appreciate what you’ve actually heard, because you’ve been led along the way.

Sure. Well, there is something very attractive about that. The metaphor of Indian music is not the first one that came to mind, but I can accept it in terms of the elegance, the simplicity, and the coherence of musical structure and form. But now I’m also interested in taking that and then maybe, at a certain point, destroying it altogether. I want to move from coherence into incoherence and then maybe back up from there…

…exposing people to the process by which things have been dismembered? Sounds like fun.

Well, stick around.