An Interview With Andrew Schloss
With a set of experiences that includes playing with Tito Puente, touring with Peter Brook’s theatre ensemble in the ‘70s, and recently playing percussion with Rickie Lee Jones for the opening of the Experience Music Project in Seattle, it’s clear that Andrew Schloss has been all over the map for the past 30 years. In the mid-80s, shortly after discovering the radiodrum (originally called the radio drum or radio baton, an electronic instrument created by Bob Boie and Max Mathews at Bell Labs), he went to IRCAM where Miller Puckette and David Wessel introduced him to MAX. The young program’s power and flexibility bowled him over, and since then Schloss has been working with MAX to make the radiodrum respond with the same subtlety as a traditional percussion instrument. On a warm summer day at his home in Seattle he and Ben Nevile talked about the challenges that a performer faces when trying to take advantage of the enhanced possibilities of computer music.
Where is music going?
Ben Nevile: It might be interesting to talk about the way that we’ve treated music in the past, and how music has changed. For many, many years music was so special, because you couldn’t record it…
Andrew Schloss: Yeah, and it’s only been a hundred years or so that we’ve lived in the age of recorded sound.
…and now? For most chart-topping popular music, a CD is better than a performance. It’s backwards.
For many thousands of years, live music was the only source. Now, in the blink of an eye, we’ve had two or three revolutions in music that some people thought would already have killed it. First there was radio, then there were audio recordings, then electronic music, then computer music, and yet still people listen to music, they go to concerts, they buy records, they learn how to play instruments…so things have shifted, but we still live in a musical world. Now it is true that, say, if you look a hundred years ago, in North America, for example–what percentage of the population played the piano? It was high! If you wanted to listen to music, you played music! And now, probably one out of a hundred people play.
It’s really decreased, has it?
Oh yes. In 1910 (I just happen to have seen some of these statistics) the production of pianos in the United States was astonishing. They were producing hundreds of thousands of pianos a year, because it was like having a stereo. Now if you go to a neighborhood department store, you see fifty varieties of these cheap boom box stereo things. That’s what you buy now, because it costs a hundred bucks. In 1910 you went out and bought an upright piano.
Is it interesting maybe that we’re using personal stereo equipment—turntables is what I’m thinking of, of course—as instruments?
Yeah. I didn’t think that was going to happen. I’m surprised by it. I wonder where it’s going. I don’t want to take away from it…it’s an art form, or it can be an art form. But learning to play the violin is definitely a life-long endeavor. Learning to play turntables is probably not a life-long thing, although some people I guess are getting pretty good at it.
This harks back to the idea of virtuosity. Would you pay money to see DJ Foo, who’s astonishingly good at this shit? Yeah! I mean, look at this guy, he’s doing amazing stuff. Yeah, that’s virtuosity. That’s going to see something that you couldn’t do yourself. At the same time, some of the stuff that I’ve seen people doing…you know, it takes only a few weeks. People like to have things they can really sink their teeth into. On the other hand, we have two hundred cable channels, people have less and less time, and in a way they’re pretty lazy. Maybe they’re not willing to spend the ten thousand hours that it takes to learn how to play the cello.
But the first people who played a cello—or whatever distant relative of the cello came first—were surely unskilled as well. It’s only through tradition and time that an instrument’s intricacies are learned.
Yes, of course. Or maybe we can make these instruments easier to play. Max Mathews has talked about this in terms of making instruments “play themselves.” For example, let’s say that the instrument played the right notes for you, and all you had to do was control the nuances of it. This is like his version of the radiodrum, which he calls the “radio baton.” It’s really fun, and it may be a nice device for people in their homes. It might make music more accessible to more people. You probably wouldn’t pay to see somebody else do that, but you might do it at home for your own enjoyment.
You can always look at all of these things from positive and negative standpoints. For example, a music teacher or performer might say “this is sick. These people think they’re playing music, and they really haven’t learned how to do it.” But the other side of the coin is that you can say “look at all these people for whom music is more accessible.” It is true that if you pick up a cheap Casio keyboard that will play automatically for you, chances are you’re not going to stick with it long, because…well, it’s kind of stupid. But then again, your chance of quitting piano lessons is high because…well, it’s really hard to learn to play the piano.
But it’s the hardness that makes it so special!
Exactly—that you have to work at it. This is why I wonder if it’s going to go away, or what. What’s different now is that in the past you didn’t have any choice. If you wanted to make nice sounds, it took considerable effort and patience. Whether it was scraping a bow over strings, or blowing into a hole, there was no easy way to make music. Now we have lots of easy ways to make music.
So what do people really want? I don’t think we know. I think we just throw the stuff out and see what people do with it. I think it would be a sad thing if kids only learned how to play turntables, and then said “why would I bother learning how to play the trumpet? It’s too hard, it sounds shitty. I can’t even play drum’n’bass on the trumpet. What’s the point?” I think that would be really bad. For one thing, at some point there would be no music left to source!
I keep trying to get back at this issue of understanding. How do you understand music? I think you understand it by playing it, I really do.
More than just by listening to it?
Absolutely. If I’m in a classroom, and there’s something funny about a certain instrument (say I can’t tell if it’s real, or how it’s played, or whether it was done by a computer), I will look around the room and find a person who plays that instrument, and ask them, and they’ll tell me right away. They’re not just listening, there’s a corporeal process going on. They’re “playing” the music when they’re listening to it. If I’m listening to a symphony orchestra, I can tell you all kinds of details about the percussion, and you’d go “percussion? I didn’t hear any percussion.” I’m listening to the sounds that I’m familiar with, and I’m familiar with the way they’re physically generated.
I think there may be some biological basis for this—not to get too profound here—but I know that they’ve studied bird songs, and it turns out that there are some theories about how birds identify song. One of them is this really funny thing, that they’re actually identifying the physical method of generating the sounds rather than the sounds themselves. To some extent that might happen with us as listeners. When you listen to music that you’ve never played, and have never had any concept of playing, it’s not that you don’t enjoy it, but there’s something…
You’re missing a dimension of the music. It’s like having one eye closed and losing your depth perception: you still take in everything, but your brain lacks a whole level of understanding.
Good analogy. I think about this with my own instrument. What if it got to the point where the radiodrum finally became the instrument that I wanted it to be…and let’s say there’s a kid who’s eight years old and wants to learn how to play drums. Do you give him a little Remo djembe to bang on, or do you give him this radiodrum or some other electronic apparatus? You know, I think maybe you should give him both, but not to give him the physical drum seems like a mistake.
Even if the radiodrum has developed just as much subtlety and dynamic feel as a real drum?
Maybe. I’m not sure about this. When you play a drum, and you hit it, and you notice that it sounds different with all these different ways of hitting it, and you learn how to make the drum sing, it’s pretty deep, and it takes a lot of physical effort. Will people continue to play orchestral instruments in three hundred years? Plenty of people predict that there will be no symphony orchestras, because they’re all going bankrupt as it is. People don’t want to pay, for some reason.
We have so many entertainment options. Wham! Eight thousand TV shows.
There it is, any time you want it. Much of it is garbage, but it’s very seductive. I just wonder about the whole idea of music education. What is it? What do you learn? When you’re eight years old, you probably don’t go to lectures on music theory. That would be silly. You start learning how to play an instrument. I think the coordination and the mental discipline are really important, especially for young people.
This sounds a bit like the discussions that people had when calculators began being used by kids in schools.
Well, it’s true – people can’t do arithmetic very well any more.
But maybe I don’t need to memorize my thirteen times table.
You still need to know how to do it for those times when your calculator doesn’t work!
Right – you don’t want to become overly reliant on the technology. The denizens of the plastic planets they visited in Star Trek taught me that. There’s no doubt, though, that technology becomes a part of us. Maybe the way that we make music is changing because we’re changing along with the technology.
I do wonder about this. I can’t imagine what my colleagues (the instrumental/performance faculty) must think! Luckily for me, I teach electronic/computer music, so I’m not going to lose this argument, but at the same time, I’m worried about it, because I really believe that people need to play instruments. For example, I find that of the students I’ve had, the ones that do the best are often the ones who have a command of some instrument. If somebody comes in to my class who is a really clever engineer, but doesn’t know anything about acoustic music, they often sit there like…what do I do? What are the raw materials? They need a musical context/understanding to be effective.
Okay, here I am in the studio, and I know how to use the tools. Er…how do I make music?
Exactly: how do you make music? What happens is that if you give a student like that a defined project, they’ll do it because they’re good programmers, but the problem is that sometimes they don’t know where to go with it. That’s where, whatever it is, this musical intuition…
It’s a language. You have to find a voice, and then learn how to speak it.
Having taught all sorts of people, musical and non-musical, you must have had some pretty interesting students.
Indeed! I wasn’t always teaching composition, you see. Sometimes I was just teaching a point of view, so if someone came to my class from cognitive science, or computer science, linguistics, physics or even visual art, they could take it in their own direction. A bunch of interesting students have taken my classes over the years, starting at Brown, where I first taught. Scott Draves, for example, the guy who wrote Bomb, was in my class at Brown, where I taught from 1985 to 1989. Another example is Tecumseh Fitch, who took my introductory class at Brown, and is now a world-renowned expert in bioacoustics and runs a major lab in Vienna. Lisa Loeb, who’s a bit of a rock star these days, was a clever songwriter even then. She was writing really cool songs, and she was learning how to do overdubs and things like that. I think all I taught her was a bit about tape recorders. I’m still friends with a lot of my students from that time.
In any case, the point is that the students in my classes at Brown would come from all these different departments, and I would try to arouse their curiosity, and they were very open to that. There can be a problem in music schools or conservatories, in which the students erroneously believe that if they learn anything besides their instrument, they’re wasting their time. I refer them to Yo Yo Ma, math major at Harvard and superb musician. Not a normal guy, though.
I teach some classes at the University of Victoria that some of my colleagues initially thought were a waste of time, because they’re not specifically designed for music students. “Music, Science and Computers” is an example. It’s similar to the course I taught at Brown and UCSD many years ago. My answer to that is that I’m teaching people, not necessarily musicians. I don’t know what these students are going to do for a living, but I do know one thing: they’re going to understand the world of sound better, and that’s useful, interesting, and even at times inspiring!
How many of our music students are going to end up as professional instrumentalists? It’s a scary thought, and I’m almost afraid to voice it, because it’s so terrifying–I mean, what are we doing in our music schools/conservatories? I would even fear that the provincial government could say: look, what do you need all this money for? How many oboe players do we need in the province of British Columbia? I’m certainly never going to say that–of course I believe it’s vitally important to have music schools, and to train professional musicians–but it’s a scary prospect these days with all of these orchestras shutting down. As I said before, though, sometimes not having command of an instrument makes things difficult, and there are many diverse reasons to study music.
Another example: I went to a lecture recently by an ethnomusicologist named Charlie Keil. He was talking about his students who can’t play clave, which is a Cuban rhythm, the basis of a lot of Latin music. The students can sort of do it, but they can’t really do it, or they can’t keep it going; they lose the rhythm after a short time. He’s really distressed about this because he thinks that these people are handicapped in some profound way. He finds that if they can’t do it by the time they’re in the university, they’re screwed. You can’t teach it to them by then, he says. He tried at first to be nice and keep these people in his drumming ensembles, but now he kicks them out. He says they just mess everybody up, and they never learn. He really didn’t want to be a pessimist, but he just found that if they couldn’t do it on day one, they couldn’t do it on day one hundred.
Some people are born without rhythm, I guess.
But are they born without rhythm, or are they simply not exposed to it in a physical way? I mean, maybe one out of every thousand people is born “without rhythm,” let’s say, like being tone deaf. But sadly, maybe one out of every three people—at least in the US—although born with a potential sense of rhythm, never develop it. It’s a cultural thing. If you go to Cuba and try to find someone who can’t play clave, you’d have to search, and that person would have to be pretty screwed up. This also has to do with that physical, corporeal thing. Keil also talked about what he calls “head bobbing.” When he listens to music, his head moves—it’s a physical thing. He was saying that one of the reasons his head bobs is that he identifies with some physical aspect of making the music, whether it’s playing the drums, or singing, or something. I had never really thought about that.
I bob my head to a lot of artificial music.
I had a student this year who I think was a sociology major. He really didn’t know much about music, but he was super enthusiastic, and he really wanted to learn about sound editing. He was good at the things he wanted to do, but he was totally ignorant about other things. He didn’t know anything about new music, contemporary music, Western music, he didn’t play an instrument…I found him really fun and interesting to teach one day, and other days he was really annoying, because he was so totally clueless about certain things. But after the class was over, he went out and got a job at some fancy movie studio in Vancouver, and now he’s doing sound effects for TV shows, and he’s probably very happy and making a good living. I think that’s wonderful. In contrast, I think there are plenty of graduates of the School of Music who are now selling records at the local record store. I wouldn’t want to do that.
No, that seems like a huge waste of talent.
Aesthetics and the Social Milieu
At least as musicians they’d be able to point the customers towards good music.
Well, maybe. This idea of how you appreciate music…is it true that musicians appreciate music more than other people? I personally don’t listen to recorded music that much.
I don’t know. There are lots of non-musicians who listen to music ten times more than I do. There are various reasons for that. I really like to listen to music more deliberately—I don’t like to have music playing in the background or walk around with a walkman on my head. Another reason is that especially in the last few years, I have a very strong emotional response to certain music, and I find that it’s unpleasant sometimes – it’s more of an emotional response than I necessarily want to have. So I turn the music off.
What do you think about the idea that we’re moving more towards music being like a painting, a work of art captured at one point in time? Obviously as a performer and not a studio artist that’s not the direction you’re moving – you’re still standing up in front of the crowd and drawing – but especially with electronic music, a lot of it is … built, constructed.
Yes, with meticulous attention to detail.
I think that’s fine if the method of construction is interesting. If I sit here and write musical notation, it’s not music until a performer plays it. If I’m working in a studio with Pro Tools or Max/MSP or whatever, and I’m making this very careful and constructed thing, it’s a finished product when it leaves my studio. I guess to me the viability of that relates to the intellectual property question. A lot of people will do it out of pure love, but those people who are trying to make a living at it, if they can’t sell their work, it’ll be a problem. Performers will still have the option of performing.
The practical implications of this change still have to be sorted out, but somehow musicians will continue to extract money from their fans. I’m wondering more what you think of the philosophical change, I guess. I think this is the key difference between turntable-based music and the rest of pop music culture. There are some turntablists whose performance is based on their dexterity. There are others–the majority, I would argue–who approach things more like an art gallery: here’s a nice painting, here’s another nice painting, now I’ve got three nice paintings playing at once…
One of the extraordinary things that has happened in the aesthetics and social milieu of electronica in the past ten years is that…well, just a few years ago, John Oswald made a record called Plunderphonics, which he got sued for. It was intended to be a poke in the face at commercial music and sampling, which it was! I thought it was funny, clever and interesting, and also philosophically interesting. I began to teach a section in my class based on Plunderphonics, and until recently it was exotic to plunder existing music in this way, and it was useful, because it was a way to get even inexperienced people making music, because anybody can do it.
This year it suddenly dawned on me that young people are doing this all the time. It’s like nothing now, it’s what everyone does! They take the music, move it around in Pro Tools or Ableton or whatever…and it’s what all of these kids are doing! Aesthetically and sociologically, something really changed between John Oswald’s record and now. People don’t think twice about the fact that all their sound sources are other people’s copyrighted material. That’s not even an issue. They’re not even thinking about making their own sounds to manipulate. The whole world, their CD collection, and their mp3 files, that’s their paintbrush. It took me—you know, thick skull—until this year to realize it. How did this happen?
The technology is cheap…I guess it was just a logical step. There it is—why not use that material?
Sure. It’s rich, it has a beat…When I first discovered modern electronica I thought, okay, I could see how you would make the stuff. Use a drum machine, make the rhythms, design the sounds…that was sort of interesting, because I am interested in new techniques, and I’m interested in what’s going on, too. What I find very interesting is that not a single one of my students this year was actually making those beats. They were just using beats that somebody else made!
There are also people who make their own sounds and beats. Laziness manifests itself in sampled loops.
It’s more appealing to me to generate your own stuff.
Yeah. It’s more like art and less like theft.
Indeed! On the other hand, I know some very erudite computer music composers who were doing similar things way before the DJ craze. For example, some “serious” composers were “digesting” other people’s music for fun and profit. That is to say, they might take a Mahler symphony or something like that, load it onto their hard drive, and then write very complex, non-realtime Csound programs that would process this symphony into something totally different and completely unrecognizable. Of course, it had some connection to the original, because that was the sound source, but there was no way you could get this guy for copyright, because there was absolutely nothing recognizable at the end of the process. That was weirdly prophetic. They were using sound sources that were already complex orchestral sounds, and then creating a Csound “meat grinder” to run the source through. Nowadays, this is commonplace in techno circles. In fact, as I just saw in Berlin at the late-night “off-ICMC” clubs (during the 2000 ICMC), it is a very hot application for Max/MSP these days.
All the rich timbres of classical instruments…
—but radically modified! It’s hard to synthesize sounds that are as interesting and complex as orchestral instruments, so why not start with them? When I first saw MSP, I thought it was a great synthesis environment—which it is—but now I think it’s an even better processing environment.
Can we talk about “computer extended ensembles”? In one of your papers you describe how in your joint performances with David Jaffe the “boundary between performers is like a permeable membrane.” I think it’s fascinating that you can push what you’re doing onto the other person, and that they can push back.
Yeah. An analogy in bluegrass music: there’s this trick where one guy reaches around and plays the chords for the other guy’s banjo. With musical instruments in the MIDI paradigm you can have this complete flexibility in who’s talking to the synthesizer and in what way. MIDI isn’t that complicated, there aren’t that many streams of data. The thing that makes it more interesting, let’s say with Max: if you have an interesting Max patch between the two players and the sound generators, then everything becomes, as we said, like a semi-permeable membrane between the sound-generator and the players and the algorithms themselves. What I play into the system may not directly affect the sound, but it may affect what the other player can do. For example, if David’s doing pitch bend, but I have control over sustain, then if I make the sounds that he’s playing staccato, the pitch bend won’t be audible. He’s dependent on me to enable the sustain so that you can hear the pitch bend.
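The sustain/pitch-bend dependency described here can be sketched in a few lines. This toy Python model (the function name, timing values, and threshold are invented for illustration, not part of any actual Max patch) shows how one player’s sustain control gates whether the other player’s pitch bend ever becomes audible:

```python
def render_note(pitch, bend_cents, sustain_on):
    """Player A supplies the pitch bend; player B independently controls sustain.

    With sustain off, every note is staccato, and a slow pitch bend never
    has enough time to become audible: one player gates what the other
    player can express. (Durations and threshold are hypothetical.)
    """
    duration = 2.0 if sustain_on else 0.05   # seconds
    # A bend sweep needs a sustained note to be perceptible at all.
    audible_bend = bend_cents if duration >= 0.5 else 0
    return {"pitch": pitch, "bend": audible_bend, "duration": duration}

# Player A plays the same bent note twice; player B flips sustain in between.
print(render_note(60, 200, sustain_on=False)["bend"])  # 0 -- bend is inaudible
print(render_note(60, 200, sustain_on=True)["bend"])   # 200 -- bend comes through
```

The point of the sketch is only the dependency structure: A’s data reaches the synthesizer, but whether it matters is decided by B.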
I remember you describing another interesting technique…
I started experimenting with this idea with the jazz pianist Jeff Gardner, when we were in Paris together. I also did it later with David Jaffe and his pitch tracking Zeta violin. For example, the violinist is playing pizzicato, or if it were a pianist he’d be playing the MIDI keyboard, and I’m “stealing” the notes. So, the person who’s playing the live instrument is feeding me notes basically, and then I have a Max patch…
Right—the notes go into a buffer that you can access and play back with your radiodrum.
Right. So what’s weird about it is that by hitting different parts of the surface of the radiodrum you can traverse this buffer, and thereby not go up and down in pitch (as you would expect), but rather go backwards and forwards in time in the buffer. You can do really cool things with the stuff that the other person has played–it’s a form of improvisation that you just couldn’t do any other way. Let’s say you were a brilliant jazz musician and you had a perfect ear. If somebody played a really fast figure on the piano you could play it back on your saxophone, and you could riff on it, but what you couldn’t do is “digest” it in real time, and keep flinging it back at the other person in infinite permutations. I like doing it on the radiodrum, because you have precise control over the rhythm. When it’s in the buffer you can play it note-by-note, or you can be in what I call continuous mode, where the position of the drumsticks continuously triggers the computer (without touching the surface of the drum), so you just have a constant stream of notes. The technique gives really interesting (and sometimes blindingly fast) textures.
The buffer stays constant and is only changed by his playing, correct?
That’s right. If my partner does an arpeggio and fills up my whole buffer, then that’s all I can play. We could have done it another way, but we like certain constraints. We can, of course, make this buffer as short or as long as we desire, which completely changes the “feel” of the improvisation.
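The note-stealing buffer might be sketched like this: a toy Python model, assuming the partner’s notes arrive as MIDI note numbers and stick position is normalized to 0–1 across the drum surface (the class and method names are hypothetical, not the actual Max patch):

```python
class NoteBuffer:
    """Fixed-length buffer of 'stolen' notes; stick position indexes time, not pitch."""

    def __init__(self, length=16):
        self.length = length
        self.notes = []          # MIDI note numbers fed in by the other player

    def feed(self, note):
        # Newest notes push out the oldest once the buffer is full,
        # so the buffer always holds the most recent material.
        self.notes.append(note)
        if len(self.notes) > self.length:
            self.notes.pop(0)

    def strike(self, position):
        # position in [0, 1): the left edge of the drum is the start of the
        # buffer, the right edge the most recent note. Moving the stick
        # traverses time through the captured phrase, not pitch.
        if not self.notes:
            return None
        index = min(int(position * len(self.notes)), len(self.notes) - 1)
        return self.notes[index]

buf = NoteBuffer(length=8)
for n in [60, 62, 64, 67, 71]:   # partner plays an arpeggio
    buf.feed(n)
print(buf.strike(0.0))    # 60 -- earliest captured note
print(buf.strike(0.99))   # 71 -- most recent note
```

Shortening `length` would model the tight constraint mentioned above: a partner’s arpeggio that fills the whole buffer becomes the entire palette.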
In Uni, the piece I’ve been working on with Randy Jones, we’re performing images and sound together, and there are some similar things going on. This brings up another question, which is: where are we going with images—is this a fad, or is it the beginning of a whole new set of performance possibilities? I think it is something that’s going to keep progressing. The last time people really did this was in the ‘60s during rock concerts when they did fantastic light shows, and then it faded away…
…except for the Pink Floyd die-hards who like to get stoned in planetariums. It’s also been a big part of rave culture.
[laughing] That’s true, and that’s something I haven’t been tracking. Now I don’t know exactly how they’ve been doing them at raves, whether they’re like light shows…
They’re typically visuals on a big screen. The most complicated ones use computers, but I don’t really know much about them.
Randy has been working on this software called Onadime that allows you to talk to these image streams, whatever that means. If you’re “listening” to MIDI—say, a performer sending out a MIDI stream…
…every single event can be used to trigger a visual phenomenon?
Right. For example, in a piece I did with David called The Seven Wonders of the Ancient World, the original version uses a Yamaha Disklavier, and the keys physically move. Watching the keys of the piano move is like an animation—it’s a visual thing, because people can see the piano keys moving in perfect synchrony with the sound. When you take the piano keyboard away and you play a synthesizer, which is what I do sometimes in venues that don’t have a Disklavier, suddenly there’s nothing to look at, and some of the impact goes away. That’s why I first talked to Randy about doing an Onadime patch for The Seven Wonders. It’s sort of like a fantasy on the motion of the keyboard – every single MIDI event triggers a one-to-one correspondence with some image. It’s cool, it’s like a super-duper piano mapping, but abstracted and multiplied. How far can we go with this? Is it a fad? Is it distracting? Appealing or appalling? I don’t know! Anyway, I think there’s value in it.
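A one-to-one MIDI-to-image mapping of this kind can be sketched simply. This toy Python function (the event tuple and shape vocabulary are invented for illustration, not Onadime’s actual interface) maps each note-on to a drawing command, with pitch as horizontal position, like a key on the keyboard, and velocity as size:

```python
def midi_to_visual(event):
    """Map one MIDI event to one drawing command, in the spirit of
    'animating the keyboard': every note paints a shape in sync with
    the sound. Event format and shape names are hypothetical.
    """
    status, pitch, velocity = event
    if status == "note_on":
        return {"shape": "circle",
                "x": pitch / 127.0,       # keyboard position, left to right
                "size": velocity / 127.0}  # louder note, bigger shape
    return None  # note-offs, controllers, etc. could map to other gestures

cmd = midi_to_visual(("note_on", 60, 100))
print(round(cmd["x"], 3), round(cmd["size"], 3))  # 0.472 0.787
```

Because the mapping is strictly one-to-one, the visuals stay in perfect synchrony with the MIDI stream, just as the moving Disklavier keys do.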
I don’t think it detracts from the music in any way—it certainly adds something to a performance.
Sure, for us, but there are cases when it could detract. For example, if I were listening to concert music, which was written as concert music, and the composer intended it to be concert music, and then somebody made a visual patch and projected it behind the musicians, I could certainly see that as being distracting. The music was conceived as an entire experience in itself. If you put this crazy visual stuff behind them, who knows? I’m most interested in things that are designed to be performed visually. In the case of the Seven Wonders, our mission was animating the keyboard and visualizing the music, whatever that means. I hadn’t thought about it for years, and then last year I went to a conference on visualizing, and what I realized was that there are all these different worlds that I hadn’t thought about as being connected at all. There are a whole bunch of people doing different versions. The idea that there are interesting ways of visualizing sound…I think we can go a lot further with that. I’m just beginning to experiment with it.
The performance that I saw you give in Vancouver had so much going on in the visual aspects, I can’t really even remember what the sounds were like. I was especially fond of the twirly things [makes hand motions].
Those twirly things were, of course, supposed to be something else.
When Things Go South
We ought to talk about that—with highly technical instruments like yours, often you get up on stage and the equipment malfunctions, or a bug shuts you down, or something. What do you do?
Well, I get really upset. It’s just not fair. It’s really a level of stress that…I mean, some things can’t be helped. You can get up on stage and there can be an earthquake. Okay, fine. But the number of times that something goes wrong with this computer music, and the number of times I’ve seen very experienced, very thorough performers, like Jean-Claude Risset, Mari Kimura (who’s not as technical but certainly very professional), get up on stage and then they’re just hosed! It’s like, now what? Some people have options, like George Lewis—if his computer breaks, he just kicks it off the stage and plays the trombone for an hour. He’s a great player. Mari could do that too if she had to; I mean, she’s a great violinist. But that’s not why we’re there. We’re here to see this piece that involves technology, and many pieces don’t even have an acoustic component to fall back on.
The question is, how far do you go to protect yourself from disasters?
Right. Do you, for example, have a complete duplicate of your entire set-up on stage? Do you really go that far? That’s pretty extreme.
It’s what U2 and Rush do.
But they have roadies, huge trucks…they’ve got people to back everything up!
To me this is not a trivial matter at all. I’ve suffered too much when my apparatus didn’t quite work. As a professional you get off stage, and you smile, and people congratulate you…but at the same time, it’s so frustrating to have all this stuff prepared and have it not work. If you’re extremely sophisticated technically you’re more confident because you can fix most things. But even the most brilliant programmer will occasionally run into something they can’t fix. I’m not technical enough to solve all possible problems, especially on the spot when I’m nervous and I’ve got a concert.
So if you’ve been preparing for months, what kind of things go wrong? It’s incredible what can go wrong! We’re not just talking about something inconvenient, we’re talking about something profoundly destructive, the feeling that you’re at the mercy of these machines. I imagine that someone who’s really technical and sharp, like Adrian Freed for example, would be more likely to troubleshoot an issue on the spot than I would, perhaps. However, he’s not a performer; if you put him on stage, get him scared enough, and add on all the other issues that performers deal with, well, at some point we all fall apart.
I remember once watching the wonderful composer and scientist Jean-Claude Risset on stage. His piece simply didn’t work. Who walked up on stage? Miller Puckette, David Zicarelli. On that particular evening, the audience was full of experts, and many got up on stage and they couldn’t figure out what was wrong either! What was wrong? I think it may have been a bad MIDI cable or something silly like that. There are too many things that can go wrong, and debugging when you have an audience waiting for you is a whole different experience than debugging in the studio.
You’ve hit a nerve here, because I have to say, I have paid in blood! Some of the things can’t be helped—there’s a finite possibility that a disk drive will fail at any time, for example. What’s much more likely and much more irritating is that the bloody programmers were careless, and that is inexcusable. If I added up the number of hours that I messed around with extensions in the Macintosh OS, I could be a concert pianist! (I mean if I traded those hours in and played the piano instead). Why am I sitting there selecting and deselecting extensions? I mean, that is so stupid! There’s no doubt though, we’ve all been there. One wrong extension and you’re dead.
MSP and Max are very robust, don’t get me wrong. But, like everybody, my computer crashes randomly. It’s very frustrating. We need computers that run for three years without crashing.
We have them in traffic lights and pacemakers…and control towers.
That’s right…and keyboards! Keyboards don’t crash. Most of the time I’m inspired to keep going despite all the headaches. The only time that slides is when I either think there’s no remedy, or when I’m just so furious that I can’t stand it.
I get to that point pretty easily. I’ve got about twenty minutes of debug inside of me before I get frustrated and look for something to throw against the wall.
I’ll spend twenty hours on something if I think it’s worth doing. The kind of stuff where it’s mindless, where you might as well be a monkey, where your thought process is irrelevant, and not only do you have to blindly do it, but also wait for the computer to reboot every time. Your cycle is very slow, and you can’t really do much while your computer is booting. You can go pee or something. It’s like a diabolical system that’s designed to keep you from getting anywhere. It’s not enough time to read a book or go for a walk, but it’s too long to stare at the screen and wait…
Part of the problem with crashing, and also the difficulty with acquiring virtuosity, is that you keep changing your instrument. If you were to use the same radiodrum patch for ten years, you’d have it all sorted out, and it wouldn’t crash.
Exactly. We’re constantly modifying things and end up stepping on ourselves. I think that’s inevitable. I think that technical people tend to have this disease more than musicians (“featuritis”). Everything that can possibly go wrong in the physical domain is multiplied by a million when you’re using computers. I have a friend named Jaron Lanier, whom you might have heard of…
Sure. Wired magazine. The “virtual reality” guy.
Yes, I guess he coined that phrase. Anyway, Jaron used to say that if you want to be good in computer music then you have to be a hick. What he meant by that was that if you’re one of those people who, every time a new synthesizer comes out you buy it, you’re never going to learn how to play it! You have to hold onto these older things for a while and really learn how they work, and if they don’t have the newest features, that’s OK.
I think that’s played out in more…can I say popular electronic music circles? The instruments that have come to the forefront are old analog instruments, and even more recently FM synths are kind of the rage.
I didn’t know that! I’ve had a bunch of those. I think physical modeling will be a huge thing. It just hasn’t happened yet.
Modeling of real instruments?
Yes, and extrapolating from the real to the unreal, but still using physical models.
Like maybe modeling a super drum that you’re hitting with an 8000 pound stick?
Something like that. Taking the physical parameters outside the normal range. Physical modeling for me means being able to explore timbre space. I mean, as a percussionist I’m looking for whole ranges of sound that need to be controllable from an instrument that’s something like a drum in an integral way, so that if you’re moving away from a physical location on the virtual drum, there’s some meaning to that, that it relates to the sound that results. That’s so important for improvising, because when you’re improvising you can’t sit down and spend three hours figuring out where your sounds are coming from. You can’t do that. You have to be in a perceptual space you can traverse, one that makes sense. Once you’re on stage, there it is, it’s your world that you play.
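The idea that strike position should map meaningfully onto timbre, not just volume, can be sketched with a toy modal model: a struck object as a sum of damped sinusoids, where the strike point scales how strongly each mode is excited. The frequency ratios, decay rates, and mode shapes below are illustrative stand-ins, not measurements of any real instrument:

```python
import math

def struck_bar(strike_pos, duration=0.5, sr=8000, f0=440.0):
    """Toy modal model of a struck bar: a sum of damped sinusoids.

    strike_pos (0..1) scales how strongly each mode is excited; a mode
    with a vibration node near the strike point barely sounds, so moving
    the strike point changes the timbre, not just the loudness.
    """
    ratios = [1.0, 2.76, 5.40, 8.93]   # illustrative modal frequency ratios
    decays = [3.0, 5.0, 8.0, 12.0]     # higher modes decay faster (1/s)
    n = int(duration * sr)
    out = [0.0] * n
    for k, (r, d) in enumerate(zip(ratios, decays), start=1):
        gain = abs(math.sin(math.pi * k * strike_pos))  # toy mode shape
        f = f0 * r
        for i in range(n):
            t = i / sr
            out[i] += gain * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
    peak = max(abs(x) for x in out) or 1.0
    return [x / peak for x in out]   # normalized audio samples
```

With this toy mapping, a strike at the center (0.5) silences the even-numbered modes while a strike near the edge brings them in, so traversing the surface traverses a small timbre space, a crude version of the perceptual space described above.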
What I’m most interested in and working towards is getting to the point where I can jam. Simple. I haven’t really been able to do that. All the musical situations where I have played my high-tech instrument were either with other people who know what it’s like, like David or Randy, or they were with geniuses like Chucho Valdés, where I worked for six months on my own and then played for an hour. It was great, but the ratio is too high!
It must have been a thrill to work with an instrumentalist of his caliber.
Absolutely! Chucho Valdés, I think, is one of the greatest pianists living today. The idea of playing with him is wonderful and terrifying. You know, you can get together with one of your weirdo computer musicians and say “oh, wait a minute, I have to work on my computer programs,” but when you sit down with Chucho Valdés you can’t screw around!
As Jeff Gardner used to say when I was endlessly messing around with the computer, “It don’t mean shit on the bandstand.” Meaning: I don’t care, don’t tell me about your computer problems—let’s just play! To be able to simply walk into a room and pull out your axe and play…I suspect most of my colleagues are not at that point. I know what ingredients I need to get there. I need a radiodrum that works a little bit better, and I need a physical modeling environment that gives me a sonic physical world to be inside of. If you’ve got physical models of percussive things, pieces of wood and pieces of metal, and you arrange them in a reasonable way depending on the musical context, I think it would be a huge step towards my being able to jam.
Do you think it’s important for us to pattern our electronic instruments after acoustic instruments?
I do think it’s important. When I first went to CCRMA in 1978 I was annoyed when people tried to imitate real instruments. John Chowning was working on that, for example. At first I thought, “what’s the point? We can already play real instruments. I want to make sounds that have never been heard before.” That’s valid, but what Chowning would say in answer was, “yes, that’s very interesting, but we need to develop the subtlety and the craft of the sounds first.” In terms of my own work, I think that the constraints, whatever it is you’re pushing against, are very liberating. If you always try to expand your horizons, then you never play. At some point you have to say, I’m done, this is my instrument. One’s intuition about music is based on making music. I really feel that conventional instruments are an inspiration, and always will be. I’m not sure why…maybe it’s because we’re physical animals and we live in a physical world. “Art cannot be simulated,” I like to say.
A lot of it would have to be that we’ve had real instruments for thousands of years. The violin, for example–we know a lot about the physical techniques that coax great sound out of it.
This relates to an article that David Jaffe and I wrote on the future of musical performance. If you’re Marvin Minsky, or you’re somebody who wants to write for Wired magazine and stuff, it’s very thought-provoking (and sometimes a bit glib) to talk about getting rid of your body, but me…I like having a body.
Well, the body is what our brain is about, right? The brain protects our body, it gets our body to feed ourselves, it’s using our body to make more bodies…I think it’s okay for you to like having a body.
Thanks. So at some point, okay, maybe you can download yourself to a hard drive or something. But for the foreseeable future we’ll have bodies, so I think the dance thing is really interesting and important, and the physicality of playing…the history of musical instruments is based on physicality. Maybe we’ll get past that, and that’s worth thinking about: maybe we’ll evolve musically beyond any correlation with physical instruments, or even with a physical body.
I also think the social aspect of music is not to be forgotten. I think a lot of people into techno forget about the social aspect. Maybe not, because I mean, raves are definitely a social thing.
I think they used to be more so.
Oh? I went to one thing that was kind of like a rave. The musicians were all on stage with computers, one of them with a keyboard—there were four of them—and for the life of me, I could not tell who was playing, if they were playing, or if it was a tape and they were just waving their arms. To me, it was a bad performance. What was interesting was that to their peers, it was not a bad performance. I was really surprised! Their peers didn’t seem to mind that there was no apparent correlation between what they were doing on stage, and what we heard.
In this way I’m sort of reactionary in that I believe in cause and effect from a performance standpoint. The virtuosity question is in relation to cause and effect too. I mean, if there’s no cause and effect there can’t be virtuosity. You have to see someone doing something, making something happen. It may be a juggler juggling steak knives, or it may be a guitarist, or singer…if there is no cause and effect, then there’s no reason for virtuosity to exist. This has already affected me in the negative sense, where I’ll do something with my high tech stuff and somebody will come to me and say “well, I can do that! I can wave my arms like that!”
It’s true that some of the things I do anybody can do. Why is that? Well, in some cases it’s because I spend a whole bunch of time thinking about how it would be possible to do what I do, and that’s part of what I consider to be the piece. In Seven Wonders, for example, there are certainly things that anybody could do. Take the Shepard tones at the end of the seventh movement, where the piano is playing octaves: all I have control over during that part is the shape of the amplitude envelope of the descending sound. So, for example, if I had a heart attack and just collapsed during the performance, it would continue to play perfectly. The point is that in a performance with technology, there are always going to be variations in the extent to which the performer is directly making something happen.
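For readers curious how an endlessly descending Shepard tone with a single performer-shaped amplitude envelope might be generated, here is a minimal sketch. The partial count, spectral window, and glissando rate are arbitrary illustrative choices, not the parameters used in Seven Wonders:

```python
import math

def shepard_tone(duration=2.0, sr=8000, rate=-0.5, envelope=None):
    """Endlessly gliding Shepard tone built from octave-spaced partials.

    rate is in octaves per second (negative = descending). envelope is an
    optional function of time giving an overall amplitude: the one knob
    the performer shapes while the illusion runs by itself.
    """
    n_oct = 8       # number of octave-spaced partials
    f_low = 20.0    # bottom of the spectral window (Hz)
    phases = [0.0] * n_oct
    out = []
    for i in range(int(duration * sr)):
        t = i / sr
        s = 0.0
        for k in range(n_oct):
            # fractional octave position; a partial that slides off the
            # bottom wraps around and re-enters (inaudibly) at the top
            pos = (k + rate * t) % n_oct
            f = f_low * 2.0 ** pos
            phases[k] += 2.0 * math.pi * f / sr
            # raised-cosine loudness window over log-frequency hides
            # each partial's entry and exit at the spectral edges
            amp = 0.5 * (1.0 - math.cos(2.0 * math.pi * pos / n_oct))
            s += amp * math.sin(phases[k])
        g = envelope(t) if envelope is not None else 1.0
        out.append(g * s / n_oct)
    return out
```

The glissando runs autonomously; only the `envelope` callable reflects the performer, which is exactly the limited cause-and-effect relationship described in the passage above.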
So, will electronic concerts become more physical, or will people wrap their heads around a new form of concert where the visual cues are absent?
I think it’s too early to tell, and I don’t know how many years, or decades, or centuries, or millennia it will take to know whether our concept of music, which is based on the physicality of conventional musical instruments, is a historical relic of the fact that that’s all we could think of doing—banging on membranes, scraping strings and blowing in holes. That’s all we knew how to do, so that’s our concept of music! Maybe we really will get totally past that. I don’t know.
An Epilogue: August 2015
This interview is now more than ten years old, but I’m struck by how many of the issues and problems discussed here are still current and relevant as I re-read it. The hardware and software available today are both vastly more powerful and sophisticated of course, and we can do far more in real time with a little laptop or even an iPad. Is the software more reliable now? Definitely – OSX will run for weeks or months now without crashing. In “the old days,” previous versions of the OS wouldn’t last more than a day or two without a crash.
As far as education goes, I think things are definitely looking up, for two reasons:
- We now have educational programs that are specifically designed for people interested in music technology, programs that really integrate science and art. We have one such combined program now at the University of Victoria, and it is a resounding success. There is now a path for students who are interested in such things, and they are flourishing, both in school and out in the world.
- I am delighted to report that the students now really love to play music; almost all of them play an instrument or would like to – my pessimism back then about people not bothering to learn to play physical instruments when they have laptops and DAWs available to them did not come to pass, happily.
Finally, the internet has spawned an entire world of audio and music technology, which has created all kinds of employment opportunities for our students – most of whom are landing on their feet after they leave school. That is very gratifying. When I was a student a few decades ago, computer music was so obscure that no one understood what I was working on. Now the technology is absolutely everywhere. What’s next? I look forward to finding out.
One thing I have not managed to do in the past 10 years is to jam on demand – I still don’t have a ready-made musical world that is both infinitely flexible and instantaneously available. Why is that? To some extent, this is maybe a disingenuous comment, because when you play a drumset or a conga drum, you know what your instrument is: it’s sitting right in front of you.
In the world of virtual instruments like the radiodrum, there is no a priori object or sound world that defines who you are or what you’re doing. So in a way – by definition – you’re not ready. Like many artists using technology, I am seduced and also crippled by unlimited choices…
Despite all of those choices, I’ve managed to stay busy since Ben and I sat down for that interview. I’ve continued to work and perform in the same vein as before, using Max extensively in my work (and also in my teaching) for nearly 30 years now. Also, my instrument, the radiodrum, has come a long way since the original one that Max Mathews and Bob Boie “loaned” me back in 1986 (I still have it). There have been many versions of the radiodrum since then, made by Max and Tom Oberheim after Max moved to CCRMA from Bell Labs. But for me, the instrument took a different turn after the “audio” version that Ben Nevile (yes, the guy who interviewed me) worked on. After a visit to Bob Boie’s sheep farm in Vermont in 2008, Bob generously offered to come out of retirement and make me a new instrument, which he finished in late 2009. Indeed, it works better than the other ones: it’s less sensitive to electromagnetic noise than any of the previous instruments, and it’s more accurate.
But there is one big problem: I only have ONE working instrument, which is too precarious. I have tried twice to clone it, and even though I have all of Boie’s schematics, I couldn’t get them to work properly. At the moment, Adrian Freed at CNMAT is trying to help me figure out what’s wrong. Once we have two working instruments, I will breathe easier, and also it will then be possible to make more of them.
I have continued to collaborate extensively with David A. Jaffe, a process that began more than 35 years ago at CCRMA. The piece The Space Between Us (you can watch it here) developed from David’s relationship with composer Henry Brant, who left David some of his percussion instruments when he died. We had been talking with Trimpin about collaborating, and it turned out that Trimpin and Brant had been planning to work together just before Brant died. The instruments that Brant gave Jaffe included a glockenspiel, a xylophone, and a set of orchestra chimes. Trimpin turned these into mechanical instruments, and Jaffe wrote the piece for two string quartets, accompanied by the ensemble of robotic percussion plus Disklavier piano. As the soloist, I controlled the robotic percussion and Yamaha Disklavier using the radiodrum. The world premiere was at the 2011 Other Minds Festival in San Francisco. Later, the Canadian premiere was given in 2013 at the Open Space gallery in Victoria. Recently we received a New Music USA grant to present the Seattle premiere at the Chapel Performance Space in Seattle in March, 2016.
Collaborations with Trimpin have continued over the years. I began as a spectator, but evolved into a participant over many years. From the first time I saw his installations, I yearned to “play” them; that is, to control them from my own instrument. It took a long time, but it finally happened for the first time in 2003, via an installation Trimpin made called Klavier Nonette, which consisted of nine computer-controlled toy pianos located throughout the Jack Straw Gallery in Seattle. The piece is called Maravillas, and was co-composed with David Jaffe. You can find a link to a recording of it here.
My next opportunity was Trimpin’s installation made of eight computer-controlled turntables. I browsed in a local record shop and found eight pristine copies of Jane Fonda’s Workout Record (1982). With these, I created a piece called Fonda Mix (a tongue-in-cheek homage to Cage’s Fontana Mix) for eight computer-controlled turntables, each with an identical copy of Jane Fonda’s LP, which generated an octophonic tapestry of Fonda’s voice emanating from the turntables with roars and complex phasing.
My most recent piece with Trimpin, created at the Open Space Gallery in Victoria, BC, was based on an installation he created called CanonX+4:33=100, built using old donated pianos otherwise destined for the dump. Trimpin spent a week at the University of Victoria, where students helped him strip the pianos down to the soundboards and begin constructing the installation. Once the installation was up and running, members of MISTIC (Music Intelligence and Sound Technology Interdisciplinary Collective) spent as much time as we could in the gallery, getting to know the sounds and quirks of the new ensemble and composing new works specifically for Trimpin’s robotic, disembodied pianos. My piece, entitled 8:66 for Trimpin (a reference to Cage’s 4:33), was for voice (Cathy Fern Lewis) and radiodrum (myself). There’s a video of the performance here.
You can see another piece using Ajay Kapur’s electro-mechanical Indian percussion instruments controlled by the radiodrum (Mahadevibot Variations) here.
An ongoing theme of my work has been trying to merge electronic music with Cuban jazz. An experimental trio with Cuban pianist Hilario Durán and violinist Irene Mitri, the Durán/Schloss/Mitri Trio performed in various contexts, including a concert series entitled Jazz: The Second Century, part of the Earshot Jazz series in Seattle, and a fascinating tour of Cuba in 2008, performing at the Museo de Artes Plásticas in Pinar del Rio, the Escuela Nacional de Arte (ENA) in Havana, and the Museo Nacional de Bellas Artes as part of the XII Festival de Música Electroacústica. You can see a new piece we performed on this tour called Underground Economy by David A. Jaffe here.
There was also a very fruitful collaboration with Matt Wright, who spent a year at UVic on a postdoctoral fellowship funded by SSHRC (the Social Sciences and Humanities Research Council of Canada). We worked in the area of computational ethnomusicology, concentrating on the analysis of Afro-Cuban rhythms. Matt is now back at CCRMA, where he serves as the Technical Director.
1989 The Radio Drum as a Synthesizer Controller, by Robert Boie, Max Mathews and W. Andrew Schloss. Proceedings of the 1989 International Computer Music Conference (ICMC) at Ohio State University, pp. 42-45.
1990 Recent Advances in the Coupling of the Language MAX with the Mathews/Boie Radio Drum, by Andrew Schloss. Proceedings of the 1990 International Computer Music Conference (ICMC), Glasgow, pp. 398-400.
1992 The Making of “Wildlife”: Species of Interaction, by Andrew Schloss and David A. Jaffe. Proceedings of the 1992 ICMC, San José, California, pp. 269-272.
1993 Intelligent Musical Instruments: The Future of Musical Performance or the Demise of the Performer? by Andrew Schloss and David A. Jaffe. INTERFACE Journal of New Music Research, The Netherlands, 22:3, pp. 183-193.
1994 A Virtual Piano Concerto—The coupling of the Mathews/Boie Radio Drum and Yamaha Disklavier Grand Piano in “The Seven Wonders of the Ancient World” by David Jaffe and Andrew Schloss. Proceedings of the 1994 ICMC, Aarhus, Denmark, pp. 192-195.
1994 The Computer-Extended Ensemble by David Jaffe and Andrew Schloss. Computer Music Journal, 18:2, pp. 78-86.
2002 Designing New Musical Instruments–The Artist/Engineer Collaboration by W. Andrew Schloss and Peter Driessen. Innovation, Journal of the Association of Professional Engineers and Geoscientists of B.C., December 2002, pp. 14-17. APEGBC Editorial Board Award, best article of 2002.
2003 Using Contemporary Technology in Live Performance: The Dilemma of the Performer, by Andrew Schloss. Journal of New Music Research, The Netherlands. Special Issue: Research in Musical Performance. Guest editor: Johan Sundberg. vol. 32 no. 3 September, 2003.
2003 A New Control Paradigm: Software-Based Gesture Analysis for Music, by Ben Nevile, Peter Driessen, Andrew Schloss. IEEE Pacific Rim Conference, pp. 360-363, August 2003.
2006 Radio drum gesture detection system using only sticks, antenna and computer with audio interface, by Ben Nevile, Peter Driessen, Andrew Schloss. Proceedings of the 2006 ICMC, New Orleans, LA.
2007 Controlling a Physical Model with a 2D Force Matrix, by Randy Jones and Andrew Schloss. Proceedings of the 2007 NIME Conference (New Interfaces for Musical Expression), New York.
2011 Gesture Analysis of Radiodrum Data, by Stephen Ness, Gabrielle Odowichuk, George Tzanetakis, W. Andrew Schloss, Sonmaz Zehtabi. Proceedings of the 2011 ICMC, Huddersfield, England.
2015 Snare Drum Motion Capture Dataset, by Robert van Rooyen, W. Andrew Schloss, George Tzanetakis. Proceedings of the 2015 NIME Conference, Louisiana.