David Wessel is Professor of Music at the University of California, Berkeley where he directs the Center for New Music and Audio Technologies (CNMAT). Wessel worked at IRCAM between 1979 and 1988; his activities there included starting the department where Miller Puckette first began working on Max on a Macintosh. Since Wessel’s arrival in Berkeley over ten years ago, CNMAT has been actively involved in teaching Max/MSP as well as developing freely available Max-based software projects. In this 1999 interview with Gregory Taylor, Wessel talks about his musical background, his relationship with French composer and IRCAM founder Pierre Boulez, the origins of Max, and some perspectives on his current work.
Musical and Technical Background
I was raised in a kind of musical bath from the time I was fairly young. My mother brought music around to the house and I liked it and started playing in the grade school orchestra at the age of about 9 or 10. When I got into high school I played in a rock band, and became a jazz snob pretty early on, at about 17 – I just wanted to be a bebopper. All along, my father wanted me to be an engineer. I was very oriented towards that and loved it, too – model airplanes and mechanical drawings and all that stuff. But I was sort of being raised to become an engineer eventually.
So I had this technical orientation from the time I was very young and I had this musical thing going on in parallel. When I got out of high school I went to a jazz clinic that was held at Indiana University for a few weeks and I ended up getting a scholarship to the Berklee School of Music in Boston. I came home and was excited about the possibility of becoming a professional musician and my father and mother just freaked out and…well, there just wasn’t any question about it. So I went off to the University of Illinois as an undergraduate engineering student.
It wasn’t very long after that that I got involved in the musical scene in Champaign-Urbana. By then, I’d switched to mathematics. About 1963, I heard Lejaren Hiller talk about the application of information theory to analysis in music. I was studying information theory at the same time in another class and well… suddenly it just connected up my interest in music and science and technology. That was sort of the first piece of connective tissue.
A lot of people talk about the art part of their lives and the science part of their lives as though they were pretty separate. Was that the case for you?
That talk by Hiller was the moment where they connected. Up until then, I had no concept that they could be unified in any way – it didn’t even occur to me ’til that moment. I ran home and I actually analyzed a solo transcription of Sonny Rollins that I happened to have laying around using Markov chains. I could just take this data that I got from my analysis and use it right away. That was a very important moment for me.
Was it the analysis part of it, or the generative bit?
I started to think about what it would be like to do my work in mathematical and theoretical psychology – which is what I was really interested in – and music. But it didn’t… it wasn’t quite all there yet, but it was an important moment.
Then I went to Stanford as a grad student in mathematical psychology (that’s what it was called at the time) – the sort of application of mathematical models to psychological theory. At Stanford, I kind of abandoned music for a while. Being a grad student was kind of hard for me at first, and I wanted to do well. I was still following music very much but I wasn’t actively performing. And then in ’66, Coltrane came and I went to the concert at Stanford in the Kresge Auditorium. It was really a memorable thing for me – I had seen Coltrane a number of times and Miles, too – but something happened that day after that concert: I had to get back into it again, back into music.
So this would have been Coltrane with Jones and Tyner and Pharoah Sanders? No wonder….
It was the end of that period of his work – his last gig with Elvin Jones, I think. I think that he actually walked off the stage because he wasn’t getting along with Rashied Ali. This was the time he made Kulu Se Mama – basically, that group. But that gig was it. I called up my parents right away and I got the drums out. Since I decided I had to do something about my musical skills, I wanted to play in the orchestra at Stanford. I also got involved in the Free University’s improvisation class and I was heavily into music once again.
So I went over to take percussion class at Stanford and my teacher was John Chowning – the next big piece of the connection – first Hiller, and then Chowning. He started asking me all kinds of questions about perception because he had just finished his work on the simulation of moving sound sources… it was just an incredible contact. Through him, I learned about the whole computer music thing. I’d heard about the idea of doing it before, but now it became real. I decided at that point that I really wanted to orient my work in psychology and perception towards musical problems.
I went to my advisor William K. Estes, who was a very famous learning theorist and asked him if I could somehow reorient my thesis work toward some problem related to music. The first thing he said to me was, “Damp the oscillations, Dave.” (laughs) He told me to finish up what I was doing and then go and do what I wanted. I think his advice was just right, but I did take a long time to finish because I got so involved in music at that point that I kind of let my research go by the wayside.
So one thing led to another and we started doing this live electronic music. I met Stockhausen while he was at UC-Davis and I got to hear him talk and hear performances of his music. Again, Chowning being the sort of messenger – the person who brought these people to many of our attentions. I got involved and wanted to do this computer music thing. I just saw this fantastic idea that you could make any sound – the totality of this idea of generality was there, you know. In ’69 I participated in the first computer music workshop that they had at Stanford. Max Mathews came out and I learned Music V and got even more enthusiastic about it.
So when I went to take this position at Michigan State University, the first thing I decided was that I was going to do some music perception and cognition work and get this computer music thing going. So we installed Music V on our mainframe, got a group and a little laboratory going, and started doing some work in music perception while I kept up the idea of trying to do some performance.
IRCAM and Boulez
Then, one Christmas Eve – it was in 1973 – I was sitting reading the New Yorker magazine and there was a profile of Pierre Boulez written by Peter Heyworth, I think – I have to locate this piece sometime because it had such a big impact on me. I’m reading along in this text and in the second of this two-article series, there was something about Boulez’ new institute in Paris that was going to be set up and how it was based on ideas that he had about applying science and technology to musical problems. I said “Well now, this has got my name on it!”
I just got all excited about this possibility and immediately contacted Chowning and Max Mathews and Jean-Claude Risset about this institute and found out that, yes, it was really a scene. I wanted so much to be part of that scene that I worked it out so that I could take a sabbatical leave at my first opportunity to do so, and I went to IRCAM. I went there on July 4th, 1976. I remember it so well because we were flying out of Chicago during the 200th anniversary of the Declaration of Independence, and there was a huge fireworks display going off as we were flying out of Chicago – it was quite a send-off.
IRCAM really wasn’t finished yet – it was just kind of a hole in the ground at the time and a concrete shell, and a staff with not a lot of people on it… I got really interested in the way the whole institute was going to turn out. After my first year there on sabbatical, they asked if I would stay on a second year and be on their staff. So I worked out a deal where I could get an additional leave of absence. Towards the end of that second year, they offered me a permanent or full-time position. I went back to Michigan State briefly to clean things up, and in ’79 I became a sort of permanent part of the IRCAM scene.
In those days, it was a lot of mainframe computing and non-realtime sound synthesis on the one hand, and then realtime stuff going on around this engineer from Italy named Peppino di Giugno, who I worked with. In ’77 I made a piece called Antony with this realtime oscillator bank; it was recorded on Wergo and I’ve gotten a lot of performances out of it. Working on Antony really piqued my interest in the realtime aspects of live performance, although I was still working primarily with these music languages like Music V and Music X and so on.
In ’79 I was asked to run the pedagogy department at IRCAM and to be the sort of connective tissue between the scientific world and the musical world. Of course, I loved being in that sort of a situation because I felt comfortable in both worlds and felt like I’d resolved the tension between them…
There’s one little piece of this that I’m sort of curious about. The kind of art that I understood that Pierre Boulez was interested in making a place for – particularly if you read his stuff like, say, Technology and the Composer – sits in a very distinguished kind of cultural niche – things that follow in the train of High Modernism. But what I’m hearing from you leads me to wonder how Coltrane and your improvisor’s background fits into all this…
That’s right. That’s a pretty interesting question.
Were you a closeted jazz fan while you were there? Tell me how that works.
That’s interesting, I think, because my interest in contemporary music evolved out of my interest in jazz. It happened early on – there was this saxophonist named Sam Andrea who lived in my home town when I was in high school. He must have been in his early 30s or so – sort of the “hip cat” in town. And Sam would tell me what to listen to – the Bartok string quartets and the Rite of Spring and Debussy. Later, when I was a freshman at Illinois, I was reading a Downbeat blindfold test with Yusef Lateef. They asked him what music he was into at the time – what he’d heard recently that excited him. He said that he’d heard this piece called Zeitmasse by Stockhausen which was very surreal. So I ran down to the record store right then and got it – I can remember that day so well. The record had Robert Craft conducting – one side of this record was Stockhausen, and the other side was Boulez’ Le Marteau Sans Maitre. That’s the first I ever heard of or knew about Boulez. I listened to this music and I couldn’t figure out what the hell it was about – it was just a total mystery to me, but that’s what interested me.
Was there something compelling about its impenetrability, or were you curious about the notion that here was something you couldn’t make sense of?
There’s a kind of delirious quality to both of these pieces that got to me. I listened to it a lot and then I started following up what these guys were interested in – particularly Stockhausen. I think there was more of a pole of attraction there for me than Boulez – because of the electronic works like Gesang der Jünglinge, Zeitmasse, and Kontakte. These were all things that got me really excited. Berio came on the scene, too – more music that a jazz player recommended for me to listen to. But you were asking about what happened with this interest in jazz when I got to IRCAM…
Right. I’m wondering whether or not you had trouble reconciling those two different worlds – not so much personally, but within the microculture that was IRCAM in those days. Was it an exclusivist kind of place?
Oh no. Some of the IRCAM people were quite knowledgeable about the jazz scene – not Boulez so much, but I was closely involved at IRCAM with Berio and Vinko Globokar, for example, and these guys were much more interested in these other musics than Boulez himself. In particular, Globokar was very much involved in doing improvisation. So I kind of found some partners along the way there. Boulez and I had a very interesting relationship – I kind of think he found me a bit amusing, and maybe he liked my enthusiasm for things – but the point was that he and I got along okay. I’ve never had any problems with him, whereas a lot of people really did have disagreements about particular aesthetic issues.
That’s what I was sort of curious about because in some measure, the place really bears his stamp – some of the writing about IRCAM I’ve read suggested that there were some real cultural divisions at work there…
You probably have seen Georgina Born’s book…
That’s the one. It was a really entertaining read.
Well, I’ll tell you – my code name in that book is Rig. R-I-G. You can go back into the book and probably read about some of what I’m saying here. I think that somebody like Boulez is a fairly complicated mind – he has a kind of a way of zigzagging around. While he does, in some way, have a very strong aesthetic and he’s certainly someone to be reckoned with and he has expressed some intolerance for certain musical directions and so on, remember this, too – for three years in a row, he asked me to organize concerts at the Pompidou Center of new improvised music under the auspices of IRCAM. That’s thirty concerts of what some people call free jazz.
That hardly sounds exclusionary to me. Man, that must have been a ball. So you basically got thirty people you really wanted to hear….
Yeah. I would put on Steve Lacy, and the Dutch people would come down, and the East Germans, the Italians, and a number of groups from Paris. You know, if you’d gotten a different type of individual in there than Boulez, they might not have been as flexible. I always found Boulez, though, to be a pretty complicated man and a pretty interesting guy – I wouldn’t want to try to pin him down or categorize him. I think that Georgina Born simplifies him a little bit too much in the book. I mean, there are certain problems, but if you think about the music that he’s writing now…
The stuff he’s written since Repons?
It’s really quite light, quite luscious, just quite different from anything you might expect. You can’t just say he’s a serial composer – certainly, there are some Modernist issues, but sometimes I think I might be a closet Modernist myself… so beware!
One of the things that I’ve talked to him about directly is what it is that attracts him in music, and he talked about délire – the French word for delirium. I think you hear this delirium he’s pointed out in his works – Pli Selon Pli, Le Marteau Sans Maitre – and in Repons as well. He does like that a lot, and so I think he can hear that elsewhere. Now, of course, he might have problems with some of the screeching and screaming that goes on in extreme situations…
… and sometimes it is undisciplined in a way…
No, but the idea of the delirious body in music is something that is really attractive to him – and it is to me as well… and I think that’s what got me interested early on in music. When I heard some of these things happening early on in the music that these guys told me to listen to, it was like things almost ready to explode. That – and things that I couldn’t understand – just attracted me in music.
We’re talking across genres, too. The first time that I can remember hearing Coltrane recordings I thought – this was like grabbing a live wire – you just had this stuff spilling out at you and you thought, “I don’t understand this, but whatever this is, I want it.”
The point of commonality there – one that he would agree on as well – is this idea of the delirious state. And you hear it, you just hear it…
Real-time and Non-real-time
So at IRCAM you sort of had… you found yourself with the resources to begin to look at and explore that quality. But I think of IRCAM in its early days as more a place where interesting taped music was done – Jonathan Harvey, York Höller, Saariaho…
Oh well now – quite the contrary. And that’s interesting that you bring that up, because by ’81 – that’s quite a while ago – Boulez made it very clear to everyone that he did not want non-realtime music being made.
At all. Now that did change. I remember being called into his office one day. I had a note from his secretary that said, “Please come and see me.” So I kind of said, “Oh my god, what did I do now?” So I was summoned to Boulez’ office and he said, “There have been reports that this person is using Music V….” And I said, “Wait a minute, Pierre. Look – a lot of people are using this non-realtime stuff because there isn’t anything else really available. That’s how this person’s piece is being made, and, in fact, that’s how…” – and I just named a bunch of pieces. Boulez felt really strongly about this, but I think he wasn’t quite in touch with what was going on in the institute. He had made this kind of mandate that there shouldn’t be taped music made, and he had a real distaste for taped music concerts – I mean, to the point where he just wouldn’t tolerate it. He really wanted realtime live performance to… to be the key, and that’s why di Giugno was such a favored person within IRCAM. But part of the trouble was that the stuff they had developed wasn’t really being made available to… or it was too hard to use and… I hope I’m not making any revisionist history here…
All oral history is probably a little revisionist, I think…
Yeah, I guess it is. The thing was – maybe I should tell this story: I went to Japan in the fall of 1982 and was invited back again in the fall of 1983. That was really just the beginning of MIDI – the MIDI 1.0 spec was sort of put in concrete around August of 1983. Anyway, this girlfriend I had – Ushio Torikai – had arranged to get a concert sponsored by Roland that we were going to do in Studio 200 at the Seibu Department Store in Tokyo. So I went down to Hamamatsu and we met with Roland. The IBM PC had finally hit, and they had a prototype of the MPU401. I got this gear from them and went back and wrote programs in Basic for a week and did a concert. In this concert, the idea was that we were going to have three different musicians playing the same instrument via MIDI and influencing what each other had done somewhat.
Well, that was my first experience with this stuff, and it became clear to me that somehow the personal computer was going to have a big influence on what goes on in music. So I went back with incredible enthusiasm about all of this to IRCAM and I wrote a little proposal which I entitled, “Just a little bit of real time music for 30,000 francs” which was about $6,000 at the time. The proposal consisted of an IBM PC and a DX7 and an MPU401 – a little MIDI system. We had a budget arbitration meeting and my proposal was handed around the table to be either voted up or down by the IRCAM staff. Man, I wish I had that document now – I wasn’t able to retrieve it. But it had all kinds of nasty remarks written on it by various people in the room. Of course it was voted down completely as being vulgar in the French sense of the word “vulgaire” – commonplace – not something that we should be doing.
The Macintosh Invades IRCAM
But I didn’t exactly give up. In 1984, the Macintosh appears, and now Adrian Freed was around at IRCAM as the systems person. Adrian’s one of these remarkable people who’s just on top of technology and what’s going on, what’s happening. He’s really a boon to what we do. So he was the systems guy around the institute at that time and we were running a VAX and UNIX. Anyway, I managed to make contact with Jean-Louis Gassée through a woman named Marianne de Gordelfis and we ended up getting six Macintosh 512s donated by Apple to IRCAM. I’d been working on this throughout the summer of 1984 – the 128s had hit Europe in the late spring, and the 512s were about to hit in September of ’84. So the following fall – the year after my IBM proposal – I had the machines in hand, and then someone objected in a meeting – the budget arbitration meeting in the early fall of ’84. The question was, should we keep these or give them back to Apple, because it was considered maybe a “cadeau empoisonné” – vulgar machines coming in, machines that people might have access to…
— oh, that would be terrible!
I could not believe it and I don’t want to mention names in this – we’ve somehow gotten over some of this. But I felt then as if I had been accused of going around the back door and somehow of doing this on my own and — getting this involvement with Apple without proper institutional approval. But, to give Boulez credit, he said, “Okay, look – you can keep them, but you’re not going to have any money to do anything with them…”
Basically, “There are some limitations on the resources you have, and we’re not going to invest in it, so don’t spend time over this…”
Basically, it became clear that people liked these machines – they started doing all their work on them. MacPaint and MacWrite and MacDraw – these things were a lot easier to use than anything we had at the time. At this point, Adrian came back to IRCAM after some time at Bell Labs. We sat down one evening in a restaurant in the summer of ’85 and designed the MacMix program on a napkin. Basically, it was a Macintosh front end to a mixing program that was running on the VAX. It was a way you designed your mix with envelopes and so on. Anyway, he developed all that very quickly on the Macintosh. Also, there was a protocol for talking over the serial port to the VAX. It got put to use right away by George Benjamin to make a piece.
At the same time, Miller Puckette came around, saw what we were working with, and started trying to develop software that could be used in live performance on the Macintosh. One thing led to another, and with a lot of attention they finally decided at that point that maybe I should go off and start a separate group at IRCAM based around personal computers – they called it the “Système personnel” group. So I was moved out of the pedagogy world over to this new group, and I moved up out of the hole into a different space in the old school building. We were actively involved in this LISP development we called MIDI LISP. Adrian had taken the MacMix idea and gotten it into the commercial world – it became the Studer Dyaxis system. So, things were changing at this point. The battle was clearly won.
Maybe I’m being dense here, but how could the idea that musicians would want “personal” instruments, so to speak, have been that hard to see? It just seems like such a no-brainer…
…That somehow musicians want personal instruments? A real no-brainer. I just thought that the whole mainframe idea just wasn’t going to go anywhere. So I did, you know, feel vindicated by all that happened. I guess maybe that’s something I’d sort of like to claim a little responsibility for, anyway. So Miller Puckette and Philippe Manoury started working together, and Miller then was using a Macintosh to actually control the IRCAM 4X machine – the di Giugno machine – with an early version of what became Max. The earliest version of it was called “Patcher”.
Well, I switched over to using it from a LISP environment at that time too because it… no, at that time, I was working in the MIDI LISP environment, doing this piece with Roscoe Mitchell. That meant that I stayed with the LISP environment for a while, but I switched over to using Patcher from a LISP environment. About that time, I started getting courted by Berkeley and wound up getting this offer to come to Berkeley in the fall of ’88. I turned in my resignation at IRCAM in April of ’88. I’d met David Zicarelli a few times. I’d gone to Opcode early on to get MIDI interfaces back in the very beginning when they were in a garage in Palo Alto. I think I met David the very first day, and I learned about his patch editors and the librarians – the stuff he was doing. So what came of that was that I wanted David to come to IRCAM and to talk about Macintosh programming. So I invited him and it was in the fall of ’88. I’d already resigned, but it was sort of the last thing I did. And that’s when Miller and David met. He and Miller hit it off right away.
Early Experiences with Max
And the rest – with some gaps – is uh…history. But you brought Patcher or Max or whatever it was called by this time with you to Berkeley?
That’s right. Of course I was really enthusiastic about it and I started teaching it right away even though I didn’t really have a computer lab when I came for my first year here. I was kind of teaching things which were yet to be in a program that was in development in a building… my first year here was nothing. I didn’t even have an office really yet because this building wasn’t quite ready yet. But anyway, that’s it. That’s how I got here.
So this is some kind of answer to the question, “Daddy, where did Max come from?”
Oh, that’s sure not the whole story – I didn’t really do anything but be someplace at a certain time with some ideas and provide a space and encouragement to Miller. And I did start using it in its very primitive form at the beginning.
What attracted you to Max?
Well, see, I was already interested in this notion of doing some kind of interactive improvisatory thing with a computer where the computer would be able to play lots of notes – multiple events and gestures – that I could control in some way. So I had made these things in this MIDI LISP environment a long time ago. I wrote a paper about it I’ve got somewhere…
When Max came along, it was just a lot more efficient implementation that I was able to work with. I didn’t have these problems with garbage collection and so on, and I had this patching concept from the Music V and Music X world already. But it wasn’t about sound at that point – it was just about events flowing around. And it was early, too – I can remember the way the right-to-left order thing happened. It’s probably hard to imagine, but at first, Max didn’t have any notion of eager and lazy evaluation; if you put anything in any inlet of the plus object, you’d get an addition. You had to build all this stuff around your objects to keep things from getting weird. You’d really be careful about when you excited an object, because any inlet would make it fire and then… well, Miller got the idea of making only one inlet – the left one – the one that made it fire.
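The inlet rule Wessel describes can be sketched outside of Max itself. The following Python fragment is a loose illustration, not Max source (Max objects are graphical, and all names here are invented): only the leftmost, “hot” inlet makes the object compute and fire; the other, “cold” inlets just store their values.

```python
class Plus:
    """Toy model of a Max-style [+] object: hot left inlet, cold right inlet."""

    def __init__(self, initial_right=0):
        self.right = initial_right  # value held by the cold inlet
        self.output = None          # last value the object fired

    def inlet_right(self, value):
        # Cold inlet: remember the value, produce no output.
        self.right = value

    def inlet_left(self, value):
        # Hot inlet: compute and fire immediately.
        self.output = value + self.right
        return self.output


p = Plus()
p.inlet_right(10)         # nothing fires yet
result = p.inlet_left(5)  # firing: 5 + 10
print(result)             # 15
```

In the earliest version Wessel recalls, every inlet behaved like `inlet_left`, so patchers had to guard carefully against objects firing at the wrong moment.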
(laughing) We just spent some time in MSP night school learning how to make it fire on both again.
Now the other thing that Miller added was the idea of abstractions. I had this piece that required me to replicate the same set of modules many times, and abstraction didn’t exist in the first versions of Max. In fact, it didn’t even have the notion of making a sub-patch. I always remember spending a whole night copying a patch over and over and over again, wiring up the same thing like 88 times – I had to do it for each key. Miller saw me doing that and said, “Yeah, you know, I’ve got abstractions coming along….” He told me the other day while we were talking about it that seeing me suffer was really the thing that motivated him to get the abstraction thing going!
Dipping Into Your Own Instruments
But what was at the heart of things was that high degree of interactivity in music, dialogue… these things come out of the jazz world for me. But I was also very interested in Indian music – music in which people perform together and kind of have a discourse of some kind. I always just thought that computers ought to be involved in something like that.
If that’s the case, what does the computer do?
Let me say something a little bit more about that, because there’s another feature that was coming out of this jazz. Now, you gotta understand the time – it was the ’60s. Jazz was going through incredible evolution in terms of its language and going in many different directions…
Electrification, the new timbres, the introduction of modal structures….
… modal stuff, what happened when people like Cecil Taylor appeared on the scene, and then the free jazz kind of thing – although I never liked that term very much. I mean, there was just this incredible evolution going on. It seemed to me that jazz musicians were people who had to invent a sound of their own – a personal sound. It was almost like they were instrument makers in a sense – they built a sound. Oftentimes it was a very important part of what happened, and they would invent a kind of language that was a personal language. It seemed to me that computer musicians could do that kind of thing, simply because they were required to invent their instrument almost from the ground up. It seemed totally compatible to me with this kind of aesthetic – or ethic in aesthetics – that I had about music, the way it should go. It seemed to me that was kind of an ideal. My involvement with my friend George Lewis, who was at IRCAM in the early 80s, was important too – the kind of thing he was doing where the machine would react to things. So… how is the machine involved, you’re asking…
How has working with technology changed the way you work?
I guess that part of my interest in it has to do with the simple fact that if you see this stuff and what it can do, you can’t ignore it – you have to use it, you have to do it. I think I saw that already when I first read about it in ’63 or so…
Well this sounds like your story about leaving the Lejaren Hiller lecture: you see the thing and then you rush home and say, “I gotta do this, I gotta mess with this…”
…Gotta mess with it, gotta, gotta use it. If it’s there and has this potential, then it has to be that you incorporate it in art. And then there’s a tradition in music of development through technology – the whole way acoustic instruments evolved and so on: as new ideas would creep in, they’d just have to be exploited.
One of the things about your own work that I think is interesting is the set of ideas about the use of input material in improvising situations, and the kinds of abstractions you use to work in this area. I wonder if you could talk about it a little bit.
I’ve used this in all kinds of contexts. But the basic idea is that you have an underlying process that is running along, but it’s actually silent. You’re also able to set it up and select various options. You can call up the kinds of data that are running along, and then dip into them as if there are multiple streams of things. I can dip into that one and dip into this one and dip into that one over there and bring these things to the surface.
That paradigm I’ve used means that I don’t have to be responsible for the activation of each note. There could be like lots of material and what I’m doing is, in some sense, driving it around, focusing it, and orienting where it’s going to go. I’m able to control larger parameters associated with what I bring to the surface like the density of notes and the kind of rhythmic structure, but at the same time I can stop and have it all just be under kind of tight control. One of the problems that I had early on in some of the pieces I would try to make is that I didn’t have good control over the dynamics and I just couldn’t stop the interaction – I didn’t have a neat way to, like, get into a little dialogue kind of thing, a relationship with another musician where…
…where you’re directing from a discrete distance?
I’d basically take it all in. I didn’t control the dynamics, and I didn’t control the densities and the stopping and starting and turning on a dime. That’s still a problem for me – I often feel terribly frustrated in performance, where I can’t go somewhere where I just wanna go. That motivates me to go on back and modify the software so I can try to build in more of that flexibility, so that I’m “dipping” into material that I know things about from experience and practice. I think that all kinds of temporal details are important for feeling and phrasing and articulation and so on – while I’m maybe not changing the way that the notes come out in time, I can alter their lateness or earliness – you know, where they’re around where they’re supposed to come.
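The “dipping” paradigm Wessel describes might be sketched like this – a loose Python illustration, not his actual Max software, with invented stream names and pitch material: several note streams run silently in the background, and the performer surfaces a handful of events from a chosen stream on demand, controlling density rather than triggering each individual note.

```python
import itertools
import random

def note_stream(pitches, seed):
    """An endless, silent stream of notes drawn from a pitch collection."""
    rng = random.Random(seed)
    while True:
        yield rng.choice(pitches)

# Two hypothetical underlying processes, always "running" but making no sound.
streams = {
    "low":  note_stream([36, 38, 41, 43], seed=1),
    "high": note_stream([72, 74, 77, 79], seed=2),
}

def dip(stream_name, density):
    """Bring `density` notes from one underlying stream to the surface."""
    return list(itertools.islice(streams[stream_name], density))

phrase = dip("high", 4)  # surface four notes from the high stream
```

The point of the sketch is the division of labor: the streams supply the note-level material, while the performer’s controls (which stream, how many notes) shape density and direction – the “larger parameters” Wessel mentions.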
I’m also motivated at this point by the idea of using intelligent listeners to help me listen to the other performer and analyze the data that’s coming in. In this piece that I did the other night with Georg Graewe I used these kinds of ideas. All the material is generated from what the other musician is doing: In Georg’s case, he’s playing the piano. I’m able to capture what he played – in this case it was MIDI data – into a bank of 12 recorders I have arranged on a normal keyboard. I can play back that material, transpose it, and play back further abstractions of that material from other keys that I’ve set up. So I do a sort of realtime analysis using Markov chains and various hidden Markov models to try and track melodic processes and so on. I keep these tone profiles around – ideas that have come out of music perception and cognition work – and try to use that.
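The simplest layer of the Markov-chain tracking Wessel mentions can be illustrated as follows. This sketch only counts pitch-to-pitch transitions in captured material and guesses continuations; his actual system also used hidden Markov models and tone profiles, and the data here is invented.

```python
from collections import defaultdict, Counter

def build_transitions(pitches):
    """Count how often each MIDI pitch follows each other pitch."""
    table = defaultdict(Counter)
    for a, b in zip(pitches, pitches[1:]):
        table[a][b] += 1
    return table

def most_likely_next(table, pitch):
    """Best bet for the continuation of `pitch`, or None if unseen."""
    if pitch not in table:
        return None
    return table[pitch].most_common(1)[0][0]

# Hypothetical captured MIDI pitches from the other performer.
captured = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]
table = build_transitions(captured)
print(most_likely_next(table, 62))  # 64 (follows 62 more often than 60 does)
```

A first-order table like this is enough to bias generated material toward the melodic habits of the incoming performance, which is the “tracking melodic processes” idea in miniature.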
In fact, my interest in music perception and cognition is often primarily driven by the idea of trying to build things like parsers. I’ve got something that finds the beginnings of phrases and the ends of phrases and tries to say, “Well, this is a phrase, and here’s my best bet at what the next one might be” – so I use this sort of time-gap rule notion and…
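The time-gap rule notion can be sketched as a simple segmenter: assume a phrase boundary wherever the gap between successive note onsets exceeds some threshold. The threshold and onset data below are illustrative guesses, not Wessel’s actual parameters.

```python
def segment_phrases(onsets, gap_threshold=0.5):
    """Split a list of onset times (in seconds) into phrases at large gaps."""
    if not onsets:
        return []
    phrases = [[onsets[0]]]
    for prev, cur in zip(onsets, onsets[1:]):
        if cur - prev > gap_threshold:
            phrases.append([cur])    # big gap: best bet for a phrase boundary
        else:
            phrases[-1].append(cur)  # small gap: same phrase continues
    return phrases

onsets = [0.0, 0.2, 0.4, 1.5, 1.7, 1.9, 2.0, 3.6]
print(segment_phrases(onsets))  # three phrases, split at the two big gaps
```

In a real-time setting the same test would run as notes arrive, so the parser can commit to “this phrase just ended” as soon as the gap since the last onset exceeds the threshold.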
So your performance is a sort of a guided journey through a landscape whose features you’re pretty familiar with, but whose particulars are unpredictable – and thus interesting to you?
Yeah. Also, the software has features that allow me to make new combinations of the material that I wouldn’t normally think of. I get excited about the idea of working with something where I’m… when I’m performing or playing, I mean that I get… yeah, into some new territory.
There are people who think about doing computer music or experimental music as an act of constructing and specifying. While some people don’t accept that there’s a distinction that you can make between pieces and instruments at all, there is this sort of notion for some people that a piece is an instantiation of a certain kind of technological solution. What I’m hearing from you sounds more like there’s this kind of basic core set of ideas and those core ideas are some kind of faceted thing that you’re turning over and showing us another side of. Sounds like what you’re doing isn’t a single solution at a time kind of thing – you’re moving this “dipping into it” paradigm forward and modifying the thing as you go.
I like to keep evolving the software. I guess that my dipping thing has had something like 10 revs on it for different situations by now – all in Max, by the way.
Do you think that Max is particularly suited to that? Why?
Well, Max is just the best thing. I mean, I look out there and ask what else there is and… there isn’t anything. I don’t mean to sound like I’m blatantly making a super plug here, but I am: there is nothing else that satisfies such a complex set of requirements – something that runs on machines that are available, that I can just take out there in public, that’s got reasonably low latency, that sort of thing. That’s just not the case in the PC world, or with the MIDI Toolkit from CMU, or the other languages that people have suggested are interactive. You can build things that act like instruments with it – that’s what drives me and my enthusiasm for it. It involves you at that instrument level. I can build this whole highly interactive, reactive system and work with it, evolving it as I go.