Kim Cascone has worked as a synth tech, edited music for David Lynch films, founded San Francisco’s first ambient electronic music label, and helped design new systems of audio for video games.
In this conversation with Ben Nevile, Cascone discusses his electronic history, his interest in genetic algorithms, and a fresh compositional direction that he calls “New Density”.
Movin’ to California
Where’d you get started with all this stuff?
I went to Berklee College of Music in Boston. I studied music there, electronic music mostly.
When was that?
’73 to ’76.
So that would have been modulars…
Yeah, all basically patchable analog synths back then, but also the beginnings of computer music. There were languages around at MIT.
Did you have any exposure to it in Boston?
Not at that time, no. When I left Berklee I came back to New York to study privately. It was at that point that I discovered I had a talent for electronics, so I went to technical school. I had been working for a company called Electronics for Medicine. They made heart rate monitors and sleep activity monitors. I was a tech, and they sent me to school to study electronics, and so I got some training in the technical side of things, math and physics and what not. At that point I started getting interested in microprocessors, and through my interest in microprocessors, the Z80 and the 6502, I sort of bumped my head up against the whole computer music thing. And I discovered CMJ…
Were you making music this whole time?
Yep. I’d built a synth out of kits made by a company called Aries. They were based in Massachusetts, and they made a modular system that was very much like the Arp 2600. One of the design engineers from Arp actually went over to Aries…
I imagine those were pretty small companies.
They were very small, sort of grass roots… kind of similar to today’s software and dot com companies. There were a lot of people trying to do interesting things with electronic music.
So when did you move to California?
I moved to California in ’83.
What precipitated the move?
Various things, one of which was that in New York it was very difficult to break in on any sort of artistic level. Lots of little downtown mafias, art mafias, and it was very difficult to play anywhere, it was very locked in. So I visited San Francisco and discovered a sort of free spirited anything goes kind of attitude that I really liked. There was a lot of experimentation. The Bay Area has a history of that experimentation in various ways, you know, lifestyles, or psychedelics or what have you. That appeals to me, the spirit of experimentation.
That was 1983, and your record label, Silent, started a couple of years after that?
I started Silent in 1986. Although it was easy to get gigs, it wasn’t easy to get record releases. I had been releasing some work on a little label from Massachusetts called RRR. They released my first two LPs as PGR, then I had a French company release my third one. I just decided at that point that I wanted the means of production in my own hands, so I started my own record label. But I didn’t want to necessarily have it be all about me, so I started releasing work by other people.
I remember reading the story behind PGR somewhere, but I forget now… what was it, something something research?
I hate what it turned into, because everybody seems to know it as Poison Gas Research, and it wasn’t like I really wanted it to be known like that, because at that time there were a lot of industrial bands trying to be really dark. It was more of a joke than anything, because the Oakland rehearsal space we were working in was in a very bad area. This friend of mine Tom had been living there. A lot of guys from the neighbourhood came over and asked him what he was doing there. He didn’t want to be bothered so he just said “government work”. So they said “what kind of government work?” and he said “poison gas research.” So from that point on nobody bothered him. They left him alone, and that was kind of like our cloak of invisibility. We were able to practice there and nobody ever hassled us coming and going. You know, we were lugging strange cases of stuff…
The poison gas, naturally!
So what was it that caused you to give up Silent?
A multitude of things. One thing was the way that indie labels had evolved at the time. You had to release a certain amount of work in order to sustain the operation. When techno started becoming popular we became the electronic music label in San Francisco. This is like, 1991. Raves had been going on for some time, but they were becoming more popular, and more people were making this kind of music. So we started getting deluged with tapes. People were coming by and were interested in what we were doing. We just sort of got pulled along. People that were working for the company were very involved in the scene as well. It started evolving into an area that was very different than what we had started out doing. There was some correlation there… a lot of the guys that had been doing industrial work or experimental work were trying to do techno, and there was a lot of cross over and experimentation. So we started releasing a lot of product.
At one point we switched to a different distributor who promised us a lot of exposure and a wide net of distribution. What ended up happening was that the company we signed with got bought by a bigger company, and then they bought a smaller company. So all of a sudden our account, unbeknownst to us, had been switched to this smaller distributor they bought and we had no idea. They had discussed with us the whole strategy: we would press up this amount of records, we would have all these chains covered, we had to send out all these promos… We adhered to all of this, and then discovered at Christmas time that they had not been distributing our stuff. Our stuff had not made it to Tower or Borders, or any of the chains. They basically dorked us. Not necessarily on purpose…
But you just got lost in the shuffle.
Exactly. So we had, like, $30,000 worth of returns. It was a big hit, we were not doing very well. So I was going to actually bankrupt the company, take it down, but an employee stepped up to the plate and said I think I can make a go of it, why don’t you sell it to me? So that’s what happened. Also at that time my wife and I were getting very interested in working in the internet…
…the whole explosion was about then I guess.
Yeah, it was everywhere.
Since you’ve got the computer open, why don’t we talk about your performance Max patch? You were using the patch from your Residualism CD [on Mille Plateaux/Ritornell] last night, were you not?
The original studies for Dust Theories were done with the patch from the Residualism CD on Mille Plateaux/Ritornell and yielded a couple of pieces that I liked, but it was still not what I wanted to interact with while performing. Before my trip to Europe in the late summer I took my Residualism patch and revamped it completely. I replaced the sfplay~ objects with groove~ objects so I could control the playback rate of each sample, and added a lot of plumbing in order to have better control over the vst~ object. I was also working on a new patch that was going to make use of genetic algorithms. The objects that I had gotten ahold of were kind of difficult to work with. There wasn’t a lot of documentation, so I had to ask around on the Max list, and I got some help, but it wasn’t really yielding results that were all that interesting. I think it’s going to take a little more work in terms of playing around with it.
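[Since a Max patch can’t be reproduced on the page, here is a rough idea of what swapping sfplay~ for groove~ buys: sfplay~ plays a file straight through, while groove~ reads a buffer at an arbitrary rate. This is a minimal, hypothetical Python sketch of variable-rate playback with linear interpolation, not Cascone’s actual patch. —Ed.]

```python
def play_at_rate(buf, rate):
    """Read a sample buffer at an arbitrary playback rate, linearly
    interpolating between frames -- a toy illustration of the kind of
    rate control groove~ provides over sfplay~. `buf` is a list of
    sample values; `rate` of 0.5 plays at half speed, 2.0 at double."""
    out, pos = [], 0.0
    while pos < len(buf) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between the two neighbouring frames.
        out.append(buf[i] * (1 - frac) + buf[i + 1] * frac)
        pos += rate
    return out
```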
How were you trying to incorporate the genetic algorithms?
I’m really interested in behaviours. I’m not all that interested in DSP. What I am interested in is imparting behaviour to the playing back of wave files. We did something similar at Staccato that we called event modelling. We essentially modelled a lot of natural behaviours like explosions, car crashes, and ambiences, through either random generation of waveforms or a stochastic envelope of density.
Stochastic envelopes of density… a granular thing?
It is sort of like a granular thing except we used very small sound files instead of grains. You can think of them as grains…
…but they’re really individual sound files that get randomly played back and spread across a space?
That’s right, but with a certain distribution, because the way that car crashes and broken glass and all that tend to happen is with a very dense beginning and then it sort of trails off.
Oh, I see, like this? [Ben traces an exponential decay with his hand]
Exactly. So they have the ability to create an envelope of density with the random playing back of wave files. The sound files are randomly chosen every time. So there’s a pool of files and every time you trigger the events to happen, different wave files will be called, so you’ll get different car crashes every time.
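[The decaying “envelope of density” described above can be sketched in a few lines of Python. This is a toy stand-in for the Staccato event model, not the actual engine code; the function name, rates, and file names are all invented for illustration. —Ed.]

```python
import math
import random

def density_envelope_events(pool, duration=3.0, initial_rate=40.0,
                            decay=2.0, seed=None):
    """Schedule one-shot samples with an exponentially decaying density:
    dense at the start, trailing off, like the car-crash model described
    above. `pool` is a list of sound-file names; each run deals a
    different random sequence of files over the same envelope shape."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < duration:
        rate = initial_rate * math.exp(-decay * t)  # events/sec, decaying
        if rate < 0.5:                  # density has trailed off; stop
            break
        t += rng.expovariate(rate)      # next inter-event gap at current density
        if t < duration:
            events.append((round(t, 3), rng.choice(pool)))  # random file each time
    return events
```

Each call with a different seed deals a different “car crash” from the same pool, matching the behaviour described: the envelope shape is fixed, but the files chosen are not.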
Has that type of work fueled your interest in the mathematics of chaos?
Well, it went kind of hand in hand: I already had the interest, and it coincided with some research that was going on at Staccato. My interest was part of what got me hired there. Although those guys are, you know, serious PhD types, I have enough of an intuitive grasp that I can contribute…
…and you have the sound design skills.
Exactly. And I have a bit of the Music N background too, Csound… so although I’m self taught, it wasn’t that difficult for me to grasp a lot of the concepts because I had been dealing with them before.
When did you start using Max?
I started using Max shortly after getting very frustrated with Csound, after bluecube() came out. I worked at Headspace with Chris Muir, the guy who wrote the Uzi external. He was a big Maxxer, so he was constantly using it on the job, and he was showing me stuff, and telling me how great it was, so I just finally got it. I started using it and became obsessed with it, pretty much. That’s kind of what got me the job at Staccato.
Max and Csound, being able to work in both. The tool that I used at Staccato was very much like Max.
Oh, what was it called?
Synthbuilder. It has the same patchable paradigm, so I was very familiar with it.
Does it have the same level of control?
Yeah, it’s pretty granular. It doesn’t have the same community that adds interesting externals to it, but it can do pretty much anything in terms of sound generation that Max can do.
Did you build Synthbuilder right into games?
Yeah. The engine itself is bound into video games, and then we created algorithms that do different things: physical models of race cars, behavioural things for environments, or car crashes… that sort of thing.
I’ve always thought that would be a good thing for Max to be used for…
You wouldn’t want to get into it. The video game business is really brutal.
Yeah. Audio is always the low man on the totem pole. No matter how much lip service a company pays to having killer audio, it’s just a check mark on the box. You’ve got this situation where you’ve got the game programmer and the sound designer. They’re two completely different ways of thinking and viewing the project. The game programmer’s obviously somebody who’s coding so that it works flawlessly in the game. The sound designer is someone who’s creating all the sounds and doesn’t really care about the coding as much, but the two people need to be able to talk to one another. That’s where a lot of implementation issues come into play. The programmer doesn’t necessarily know anything about audio or how it should be implemented, so they’ve got to communicate on some common ground. Most of the problems with bad audio really come from that inefficient communication between the designer and the programmer. So, we thought we had a really good tool that enabled the sound designer to generate an algorithm that can be handed off to the game programmer. They don’t have to worry about talking the same language, because all he has to say is okay, I’ve exposed all these controls to you. You need to send values in this range with this name to these controls.
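[The hand-off Cascone describes amounts to a small contract between sound designer and game programmer: the designer publishes the exposed controls with their names and valid ranges, and the programmer only sends values that respect them. A hypothetical Python sketch; the control names are invented, and Staccato’s actual API is not shown in the interview. —Ed.]

```python
# Published by the sound designer: control name -> (min, max).
# These names are illustrative, not from any real Staccato algorithm.
EXPOSED_CONTROLS = {
    "engine_rpm":   (800.0, 9000.0),
    "skid_amount":  (0.0, 1.0),
    "crash_energy": (0.0, 100.0),
}

def send_control(name, value, sink):
    """Validate a value against the published contract, then forward it
    to the audio engine (here `sink` is just a dict standing in for the
    engine). The programmer never needs to know how the sound works,
    only which controls exist and what ranges they accept."""
    lo, hi = EXPOSED_CONTROLS[name]
    if not lo <= value <= hi:
        raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    sink[name] = value
```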
So you can give it to the programmer and he can make it work.
Exactly. It becomes an interchange format. It really has helped a lot in terms of being able to get better audio to happen in games. We still had a lot of work to do on smoothing out what the sound designer hands off to the game programmer. Not all sound designers can work in Max or Synthbuilder. They should, but they don’t have time. The way they think is Pro Tools. They want plug-ins, they want linear, they want to be able to see their sound files, mix it down… it’s the mixing board and tape deck kind of paradigm that they’re dealing with. Max or any kind of patchable architecture is not linear, and most sound designers don’t think in terms of non-linear architecture.
But some games are non-linear. They’re piecewise linear, I guess.
They’re interactive, but they’re still linear in some way. They still have some narrative. They may be branching narratives, or conditional narratives, but they’re still narratives. The sound designer doesn’t initially concern themselves all the time with that aspect of games. They are given a hit list of all the sounds. They say okay, we’ve got this character, we want you to cover the footsteps, they kind of sound like this, he’s a really heavy guy and he’s wearing big boots and he’s got armor, so he’s going to jiggle… Sound designers cover all the sounds for various objects, but how they’re controlled in terms of behaviours and stuff, that’s the game programmer.
So maybe there’s room then for somebody who would just be a sound programmer who would take what the designer does…
Yeah, that’s the missing link. A lot of game companies have audio programmers, but typically they’re people who are not sound designers, they’re kids out of school who get their first programming gig doing audio. They don’t do a much better job than the game programmer or sound designers. We were working on an EA title – Nascar 2000 – and the audio programmer was pretty sharp, but still he had to call us all the time with questions about the API. It was a lot of hand holding. They don’t have time to read the manual or the docs. They just want to be able to look at it, understand it, and go.
I guess that’s what I mean when I say it’s a brutal industry. There’s absolutely no time for anybody. It’s all deadlines and audio’s always the last thing to get implemented, and it’s two weeks to do three months of audio.
The Behaviour of Mistakes
So what about your music, then? How has it changed since you’ve started using Max, or how has your approach changed?
Well, one good thing is that I can basically have my studio contained all within the laptop. That makes life a lot easier, having everything in one room, so to speak. Thinking in terms of behaviours – that’s opened up a whole new area that didn’t exist before.
Can you talk a little bit about that?
Sure. Similar to what I was saying before about random behaviours and being able to read behaviours or being able to impart behaviours to sound files… that’s really what I’m interested in, and with Max I can really experiment with those ideas very easily.
Does the Residualism patch have any of those behaviours?
Yeah – it’s pretty much random. I have four players, and basically they just look into a folder and fill a list with all the files that are in there and randomly throw them at me. Each player has different random files that I get dealt, and basically what I do is mix on the fly. If something isn’t working I can go in and select a different file, but typically what I do is just mute it. It’s sort of a discipline – I wanted to try and get better at mixing on the fly, and trying to make it work all the time, work with it. It’s really difficult because sometimes it’ll deal you bad cards. You get a bad hand and you don’t know what to do with it.
Where do the sounds come from?
The sound sources are all internal, all basically stuff I’ve developed using Csound and Max and various softwares. I just keep working in them. I typically prepare all my files in Vision with VST plugins, a lot of Pluggo plugins, stuff like that. I just keep generating stuff that way.
Working it over and over…
Yeah. As I find and develop new files I just keep adding to the reservoir of sounds. Everything is silicon-based, nothing comes from the outside world.
It still sounds very organic. Maybe that’s the random element.
It’s partly the sound files too. Any time you generate material you have that self-policing process where you aesthetically choose certain sounds over others. What I choose obviously has a big effect on the overall sound. I think I tend to gravitate towards either highly synthetic sounds or somewhat more fluid organic sounds. I like to mix them too so that there isn’t just one or the other. I think there have been times when I have gravitated towards more synthetic or more organic sounds, but I think I’m gaining a balance.
The sound is very lush and full… just natural, like you’ve plunked yourself down in the middle of a dark field in a foreign country where you aren’t familiar with the wildlife.
A lot of people have said that they hear insects, or birds… I guess there’s a little bit of the misty bog kind of thing, this natural synthetic habitat. That’s the other interest of mine – I’m not really into narrative, I’m really into different kinds of space. I don’t particularly like to have a point A point B, start finish – I just like to throw you into a space. I like for people to build their own narrative, whatever they want to do with that space. If they want to envision themselves in a field, or there are crickets, or birds, or what have you, I think I’m more comfortable with that than A, B, C sharp. I’m not really into loop oriented performances for non-beat oriented music, either. I find it kind of boring. I like to keep things moving.
That’s maybe the greatest thing about Max – like you said to me last night, it allows you to get your ideas into a simple form and execute them. It’s just so flexible.
Every time I go back and develop a new patch in Max I go on a total binge. You get in a zone. First you want to make the patch work, then you start wanting it to look better, and be more efficient and beautiful…
That’s an interesting way to perform, to leave it up to the randomness and balance whatever is thrown at you.
Exactly. It’s a very Cageian thing. It’s actually harder for me to be in control of it on a very exact level. I don’t find that very fulfilling, I like to interact with the process. The fact that I can get dealt all these random events and try to make sense of it on the fly… I think artistically it’s very satisfying because it kind of develops a certain way of thinking about the material.
Has your technique improved?
When I’m doing it a lot I get better at it. When I’ve been away from it for a while it seems like it’s rusty. When I was touring Europe last summer, the difference was amazing… doing it for a period of three weeks almost every other night… you know the material, you know how to deal with certain situations… there’s a certain fluidity that develops.
You seem to be doing a lot of things that involve random events. Did that lead to an interest in “glitch”, music made out of mistakes?
When I built my own synthesizer from Aries kits back in the late ’70s I had made some mistakes in soldering or diode orientation that resulted in some really frightening noises… but I didn’t really follow this method of working. In 1977 I heard a record titled the Sonance Project by Reese Williams which was a sound collage of vocal detritus or mistakes that really moved me. These types of vocal sounds are often edited out of dialog for films or commercials but were used here as a sound source. It served as a microscope into the little things people say or vocalize that we usually mentally throw away. From then on I became very interested in “systems”, whether they be physical or conceptual.
It wasn’t until I owned a sampler that I discovered some bugs in the OS that created wonderful sounds. If I took the sample playback pitch parameter and turned it all the way to 0.00 the sampler would start randomly playing through its entire memory. It would find parts of samples I had erased from weeks prior as well as make current samples sound very broken. This “feature” was rather intermittent so I couldn’t rely on it to happen but when it occurred I’d record some of it to DAT and then pull it back into the sampler for further work. Unfortunately I don’t think any of these experiments yielded anything very useful at the time so none of it was released.
When I started doing all of my work in the computer I found the space and impetus to start exploring the “edge-boundaries” of software a bit more. At Staccato we used alpha software that was often in a very broken state and resulted in some very interesting sounds. This beta-test technique carried over into my compositional ideas. I’m no longer as focussed on the idea of failure/mistakes as I used to be but I now contextualize it within the framework of information of how a system can be prone to failure. I’m writing an article for Mille Plateaux on this new direction. I call it “New Density”.
Oh, I’d like to hear a little bit about “New Density”. You told me in the car that you were tired of minimal music. Is this new direction a reaction to the sparseness that’s dominated the last few years?
Yes, the aesthetic problems of minimalism are well known and have proven to be a dead end. There’s no clear aesthetic solution to minimalism and artists abandoned the movement in the ’70s for that very reason. The new austerity evident in microsound or glitch music is an interesting approach to minimalism, but after a while it gets to be a little tiresome. There’s too much “me-too” product emerging, and it obscures the work of the more vital artists.
Even without the glut of knock-offs there remains the issue of what it is that minimalism is trying to say in the year 2001. The mediascape today is overloaded and extremely dense and that’s much more exciting to me than the lack of information found in minimalism. One of the problems I have with minimal electronic music is that it hinges on deep listening, which I view as a passive listening mode. I prefer to listen to sound in an active mode where the music is read like text and where multiple channels of information are presented simultaneously, forcing one to aurally multitask. This allows the listener to situate themselves in the audio information in a variety of ways, sort of like a mix of sonic cubism and futurism.
The work I’ve been doing lately is informed by information theory and informational aesthetics. There’s a simple model used in information theory that shows the channel of information being mixed with noise. You can extend this model to one of modulation where the information is the carrier signal and is modulated by another signal containing information. So in sound art this opens up many areas of investigation. One example would be current work in documentary audio like Alejandra & Aerons “La Rioja” on Lucky Kitchen or artists who use field recordings mixed with electronic sounds or manipulated via DSP. Although I used to work with field recordings quite a bit back in the mid-80s, I find more interest in constructing my environments with completely synthetic sounds. I’m fascinated with how information travels up the chain of abstractions in a computer and I’m trying to clarify that process for myself in my new work. Part of the solution so far is to keep all information synthetic and layered in a dense manner so that information can come from various sources such as the patch, the soundfiles, the interface, etc. I haven’t worked this through all the way yet as I find myself drifting on a surface of ideas that pull and push me in many directions.
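[The modulation model Cascone mentions, one information-bearing signal acting as a carrier modulated by another, can be illustrated with two purely synthetic signals, in keeping with his all-silicon approach. A toy Python sketch; the frequencies and rates are arbitrary choices, not taken from his patches. —Ed.]

```python
import math

def modulate(carrier_freq=440.0, mod_freq=3.0, sr=8000, n=8000):
    """Amplitude-modulate a sine carrier by a slow sine: the carrier is
    one channel of information, the modulator a second channel riding
    on top of it. Returns `n` samples at sample rate `sr`."""
    out = []
    for i in range(n):
        t = i / sr
        carrier = math.sin(2 * math.pi * carrier_freq * t)
        # Modulator rescaled to 0..1 so the output stays within -1..1.
        mod = 0.5 * (1.0 + math.sin(2 * math.pi * mod_freq * t))
        out.append(carrier * mod)
    return out
```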
Learn more about Dust Theories, the Kim Cascone CD on the C74 label.