Articles

A Video and Text Interview with Alex Stahl

Alex Stahl is a veteran collaborator, and this has never been more evident than in his work with composer Paul Dresher on the opera Schick Machine. As Robert Henke pointed out at the recent Max/MSP/Jitter conference, Expo '74, many of us spend years working on the same Max patch. Alex Stahl has spent years developing the Max/MSP patches at the core of Schick Machine. Along the way he's developed skills that landed him a fascinating job at Pixar Animation Studios. Collaboration can be quite useful in this world.

Tell me a little about your background.

I spent five years at the Evergreen State College in Olympia, Washington. The interesting thing there was that this state-funded school had some idea that media arts and electronic communication were important topics in education. So they funded a $5 million communications building that was full of analog synthesizers, Buchlas, and multitrack studios, and there was a room where you could just go check out Portacams and Neumann microphones with your library card.

This was early '80s, late '70s?

This was late '70s, yeah. There were a couple people who had enough experience from their own work to support this, but there was no curriculum to speak of. That school was also very big on multidisciplinary learning. It was a great place for people who were fairly self-guided. What I mean by that is they let you do these things called 'independent contracts' where you would propose a field of study that had tie-ins to science, art, history and humanities. They'd basically let you go do your own educational research, and meet once a week with an advisor. And this was as an undergrad. [Laughs.]

So Peter Randlette and I did that in this electronic music studio. That entailed taking the entire studio apart, moving it over to the cafeteria and reconstructing it in the middle of the night for a Halloween concert. It completely surprised even the president of the college--who I don't think even knew this stuff was going on at his school. The interesting thing about that was, after that incident, the department got a lot more visibility, like 'What the heck is going on there?' Peter is now the director of the new Center for Creative Media. They're getting all new equipment, and he's teaching.

At Evergreen?

Yeah, back at Evergreen. Actually, he just kinda stayed there… and I moved down here. We're both still pursuing this idea of working 'hands on'--making, or at least understanding how, the tools of this virtual world are made--which is so important to people who really want to use them. Like anything else, if you know how it's built, then you'll understand what it's good for. For me, using Max is all about that… I want it to work exactly the way I want it to. So that experience of just literally taking the studio apart, without permission, and then rebuilding it, was a funny story but actually one that I think is fairly important.

Did you have this curiosity for building things even as a kid?

You know, going back, this was probably sixth grade or so, there was a company called PAIA--they're still around actually--that sells kits for electronic music equipment. They had a little battery-powered synthesizer kit, and a little battery-powered amplifier. I saved up my money, got one of those and built it. I used to ride around on the buses, the public transit system in Portland, Oregon, and make these weird noises in the back of the bus. Strangely enough, I'm still kind of doing that, in terms of being interested in presenting unusual sound experiences in public venues, in terms of just encouraging people to listen in a different way.

When did you start working with Max?

I remember it very clearly. At Evergreen, I became very focused on creative signal processing. I used to joke to Bob Ostertag, “yeah, you're a music major… and your instrument is the Eventide Harmonizer?” I think Richard Zvonar and I were the other two people who majored in 'Harmonizer'. I still consider that my main musical instrument--actually, I still think of it as a big part of my career at this point. I got into Max because I was playing music with Patti Clemens and Barbara Imhoff, and that musical context was very much real time--keep the groove, keep the emotion--but I wanted to bring in everything you can do in a recording studio in terms of really tweaked-out post-production mixing, reverbs and layers of sound, and do that in real time, with an almost, I would say… folk-music simplicity.

So in the studio, you've got maybe 200 knobs. I'm sitting in front of a console right now, looking at them, and I want to have one foot pedal to go between this one particular mix of effects that I had worked out--and that took hours--and then another one for the chorus and the verse. I had no idea how to do that, so I looked at a few things, and that's how I found Max. It was at that point mostly about MIDI processing, and I built a foot-pedal system that let me control a six-foot rack of outboard gear, with my feet, while playing bass. I had control of effects on the whole band, and could trigger samples. Essentially, I considered it building an actual musical instrument. The bass instrument I built out of wood and metal, but this one was built out of some pedals and software in Max… and it worked! [Laughs.] So that was really encouraging. I guess that would have been about 16 years ago or so. Since then, there's been this gradual process of getting rid of the outboard gear.
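
[As a rough illustration of that "one pedal press recalls a whole mix" idea--not Alex's actual patch--here is a hypothetical Python sketch in which each pedal recalls a named snapshot of settings and turns it into a batch of MIDI control-change messages for an outboard rack. The parameter names, controller numbers, and channels are invented for the example.]

    # Hypothetical snapshots of outboard settings; values are MIDI 0-127.
    SNAPSHOTS = {
        "verse":  {"reverb_mix": 30, "delay_feedback": 10, "harmonizer_wet": 0},
        "chorus": {"reverb_mix": 80, "delay_feedback": 55, "harmonizer_wet": 64},
    }

    # (controller number, MIDI channel) assumed for each parameter.
    CC_MAP = {
        "reverb_mix": (91, 0),
        "delay_feedback": (94, 1),
        "harmonizer_wet": (93, 2),
    }

    def pedal_pressed(snapshot_name):
        """Return the control-change messages one pedal press would send."""
        messages = []
        for param, value in SNAPSHOTS[snapshot_name].items():
            cc, channel = CC_MAP[param]
            messages.append(("control_change", channel, cc, value))
        return messages

    print(pedal_pressed("chorus"))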

And then you embraced MSP when it was released?

When MSP came out, it was just amazing, because suddenly some of the things I was controlling externally, in terms of MIDI control of some expensive pieces of outboard gear, I was able to do inside the computer as well. The best example of that is probably looping--being able to do looping inside the computer instead of controlling an external delay and tying it back into the mix.
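
[For readers who haven't built one: the simplest software form of the looping described here is a delay line whose output is fed back into its own input and mixed with the dry signal--the in-the-box equivalent of an external delay tied back into the mix. A minimal, purely illustrative sketch in Python with NumPy follows; it is not Alex's actual MSP patch, and the buffer length, feedback, and gain values are arbitrary.]

    import numpy as np

    def feedback_loop(dry, sr=44100, loop_seconds=2.0, feedback=0.6, wet_gain=0.8):
        """Circular delay buffer whose output is fed back into its input,
        so whatever goes in keeps repeating, fading with each pass."""
        delay_len = int(sr * loop_seconds)
        buf = np.zeros(delay_len)              # the "tape loop"
        out = np.zeros_like(dry)
        write = 0
        for n in range(len(dry)):
            delayed = buf[write]               # oldest sample = loop playback
            out[n] = dry[n] + wet_gain * delayed
            buf[write] = dry[n] + feedback * delayed   # re-record input plus feedback
            write = (write + 1) % delay_len
        return out

    # Example: a short click repeats every two seconds, quieter each time around.
    sr = 44100
    click = np.zeros(sr * 6)
    click[:100] = 1.0
    looped = feedback_loop(click, sr=sr)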

More and more, I've been getting closer to having the whole rig run on a laptop. It's even more like a musical instrument, because now I can take it--whether it's back on the bus, or to Australia, or out in the woods, or just over to a friend's house for a rehearsal--and bring this whole finely-tuned studio setup with me.

On a really simple level, I like to geek out in the studio, get hypnotized by sounds and stay up all night… only to decide that it wasn't worth it and then do it all again and come up with something that's actually pretty cool. My band mates don't necessarily need to sit around for that. Being able to do that in software, so easily, with Max, is very helpful in terms of blending and integration. This realm of bringing all of your studio work with you into a real-time situation… Max works for that.

Let's talk about the Paul Dresher project, Schick Machine.

When I finished up at Evergreen, one of the last things I did was drive from Olympia to Minneapolis for, I think, the second New Music America festival. It was an amazing experience--we saw all kinds of people. One of them was Paul Dresher performing this liquid, stellar tape music on an elaborate, 'Rube Goldberg-like' analog tape system.

Since I had just completely failed in my senior performance trying to use another Rube Goldberg-like analog tape system with my brand-new bass, I was very encouraged to see that it was possible to make this kind of thing work. Ever since then--that would have been 1980 or so--I'd been in and out of touch with Paul, wanting to figure out some way to take that performance tool to the next level. Of course, lots of people are doing this, but it took 30 years for us to finally find a way that was practical and possible. So I wrote this patch to do looping in the way that he wanted, which was very much informed by the tools he used… basically Echoplexes. In the meantime, I was working on my own looping engine, which came from a very different aesthetic. For me it was a wonderful experience to find that this simple idea of looping--which is almost too simple on the surface--can mean so many different things to different people.

It was really cool, on one level, to finally do something I'd wanted to do for 30 years, and on another level to find out it was a completely new thing… it wasn't what I thought it was going to be. Since the whole show has a lot of threads of instrument-building in it, it was also a nice context for this collaboration--specifically, the fact that we both had at least 30 years of experience with this idea.

The idea of 'performance looping'?

I want to be careful about calling it looping, because that implies mechanical repetition, and there's some of that, but I see it more as a way to take a performance in and out of the moment. If you hear an echo, or a loop, or a replay of something that you heard earlier, you get into this whole business of playing with the perception of time. That's probably one way of explaining why I'm so interested in music: I'm interested in how we perceive time. Sound is a great venue for that because you can't really take sound out of time. You can take motion pictures, take out the motion, and have a still picture, but I'm still not sure there's such a thing as a still frame of sound. The closest thing I've found is a loop: you can repeat it, and it's a little story in a microcosm. It's kind of a big idea, but thanks to the ease with which we can iterate and rapidly prototype ideas with Max and MSP, we were able to converge on a system that worked for him, worked for me, and worked for the show. It ran successfully and reliably enough to be a core part of the sound that everyone heard in the performance.

I thought it was great that we met for about a month and a half and went from a 30-year-old abstract idea to a working system on stage. I think we were both very happy with how efficient that was. That speaks a lot to Paul's experience--knowing what he wants and using it--to my experience working with Max for 15 years, and to the tool itself allowing that kind of rapid, convergent work.

So, you built it in Max 5?

I did it in Max 5, and I can honestly say I couldn't have done it without Max 5. It's an example of using a technology to assist human expression. People have been building instruments, which are technological--I consider a violin very high-tech--to help express themselves for longer than pretty much anything else we can think of. To wax philosophical for a minute: we can take in so much--we can see and hear and feel--but without any external aids we can express only so much… well, we've got our body language, we've got our words, but that's about it. We take so much in, and imagine things, and get inspired, but how do we let it out? Musical instruments are one way to do that, but to make it work, you have to really be able to play that instrument. To be able to play it, it has to be really well designed, and you have to have the right controls available at the right time. That's where Max 5 specifically lets us very quickly take this complete mess of a 10-year-old Max patch, which was insane on the inside, clean it up, and pull out just the controls that he needed. These weren't the most important ones for the way I use my looper, but we figured, 'Oh, these are the things he needs.' In an hour, we could make this front panel that looks really clean--and needs to be really clean--so he could play it as an instrument.

You used the new presentation mode?

The presentation mode in Max 5 was absolutely essential for that. So thank you, thanks to everybody who put that together. I've learned that the audio engine and the overall stability of Max running on current hardware, which is amazingly powerful, mean that I can trust this stuff to work. I've had other patches that I've done for Pixar and for myself that have been running pretty much unattended for two or three years now--knock on wood--with only one or two crashes, which weren't Max's fault. In addition to the ease with which you can do user-interface design, the underlying reliability--at least for doing things the way I do them--is one of the really nice things about the latest version of Max/MSP.

So, what is your job at Pixar, and how are you using Max there?

I've worked at Pixar for close to 15 years. I started out for about two years as a sound designer in the interactive group. Back then, I first used Max to do some experiments--it was part of a proposal for an interactive product that we actually never made--in what I was calling "expressive gibberish." The idea was to synthesize, in real time, voice-like sounds that had no literal meaning but would tell you something about how the characters in this game were feeling. So you might get a little excitement, or a little confusion, or a little sense that this one might be telling you something you should know, so you should follow them in the maze.

So much speech synthesis is about very understandable but completely cold and robotic voices. I wanted to go in the other direction: speech that may be completely meaningless but can be very evocative and communicative. I used to joke that--yeah, it's kind of a geeky joke, but--I wanted to make a speech synthesizer that had 'Wernicke's aphasia' rather than 'Broca's aphasia'. Broca's aphasia is a speech disorder where people say something like, "Walk dog now. Go to bathroom." Wernicke's aphasia is where people ramble on in a sort of grammatical, very flowery way, where you really have no idea what they're saying, but you know exactly how they feel… like they're struggling, or something. So in the sound realm, for this particular interactive project, I was working with Max to try and generate sounds that approximated that. Unfortunately, that whole division closed.
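
[A purely hypothetical sketch of the "expressive gibberish" idea--meaningless syllables whose pacing and pitch follow an emotion parameter. The syllable list and the mappings below are invented for illustration; this is not the system Alex built or proposed.]

    import random

    SYLLABLES = ["ba", "do", "ki", "mu", "ne", "ra", "to", "vi"]

    def gibberish(excitement):
        """excitement in [0, 1]: higher means more syllables, faster and higher-pitched."""
        count = 3 + int(excitement * 8)
        words = [random.choice(SYLLABLES) + random.choice(SYLLABLES)
                 for _ in range(count)]
        return {
            "text": " ".join(words),
            "pitch_hz": 180 + excitement * 160,   # rough pitch target
            "rate": 2.0 + excitement * 6.0,       # syllables per second
        }

    print(gibberish(0.2))   # calm, slow mumbling
    print(gibberish(0.9))   # excited chatter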

I went on to spend 12 years at Pixar, where I founded and built up the audio-visual engineering group. That means designing, building, and maintaining the screening rooms, the recording studios, the editorial facilities, and the different systems that are basically about getting Pixar's films out of the computer and onto the big screen.

What are you doing now, there at Pixar?

I'm currently working on several projects in audio research and development. Starting at the beginning of this year, I've shifted my focus at Pixar to some work that's a little more in collaboration with the theme parks and Disney R&D. It's some new research, so it's hard to go into a lot of detail, but Max/MSP is a really good rapid-prototyping environment, and we're using it now to explore some new ideas and get them out of pure theory. When you're doing research on perception and performance, you need to see it and hear it. The research has to take place in the real world to really answer some of these questions. Thank goodness that's possible to do really quickly, with Max.

So, you've used Max/MSP on other projects for Pixar?

There are a few things that we did with Max at Pixar. The more interesting ones, I think, are more recent, starting with this thing we did about three years ago. Pixar has--it's still on the road--an exhibit of traditional-media artwork that goes into the design of Pixar's films. In addition to having a lot of paintings and drawings, we wanted to have a few special new parts of this exhibit. One of them is called the Toy Story 3-D Zoetrope. It's an 8-foot spinning disk that holds plaster sculptures printed by a machine, which represent frames of animation--like 18 Woodies in different poses, just like 18 frames of film, but they're actual, physical objects. It spins around really fast, and there's a synchronized strobe light, which works like the shutter of a projector, so when you watch it, you see this thing spin up, and when the strobe light hits, these sculptures that are sitting in front of you are animated, and they start hopping around, doing what they do.
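
[The timing behind that effect is simple to sketch: with the poses evenly spaced around the disk, the strobe has to flash once each time the disk advances by one pose, so it fires (number of poses) x (revolutions per second) times a second, and that flash rate is also the apparent frame rate. The little calculation below is illustrative only--the real disk's rotation speed isn't given in the interview.]

    def strobe_rate_hz(num_poses, revs_per_second):
        """Flashes per second needed so each flash catches the next pose
        exactly where the previous one stood; equals the apparent frame rate."""
        return num_poses * revs_per_second

    # e.g. 18 poses spinning at 1 revolution per second -> an 18 fps animation
    print(strobe_rate_hz(18, 1.0))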

Now, I just have a soft spot for people who work behind the scenes, because that's kind of what I do, so I think, these poor gallery staff, if they have to hear the exact same two-minute sample, looped perfectly, for 10 hours a day--and this exhibit runs for three months at a time--they're going to go crazy. So the sound is a randomized, dynamic mix, and it's never quite the same. It's very subtle, but that was kind of a nod to the people who have to hear this really cool thing a thousand times [laughs]--to keep it interesting. And that's just something that's easy to do in Max.
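
[A hypothetical sketch of that "never quite the same" mix--not the actual Pixar patch. Each pass through the show, a few layers are drawn from a pool and given slightly different levels, so no two passes are identical. The file names and level ranges are invented for the example.]

    import random

    SOUND_POOL = ["gears_a.wav", "gears_b.wav", "springs.wav",
                  "music_bed_1.wav", "music_bed_2.wav", "crowd_murmur.wav"]

    def build_pass(pool, min_layers=2, max_layers=4):
        """Pick a random subset of layers and give each a slightly different level (dB)."""
        layers = random.sample(pool, k=random.randint(min_layers, max_layers))
        return [(name, round(random.uniform(-6.0, 0.0), 1)) for name in layers]

    for cycle in range(3):
        print(f"pass {cycle + 1}:", build_pass(SOUND_POOL))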

So Max/MSP 'runs the show' and saves lives… or at least sanity [laughs]

For that machine, the sound and the control system that times the performances is a Max patch running on a little Mac Mini. That Toy Story Zoetrope started out as an exhibit at MoMA in New York, and from there it's gone to the Science Museum in London, and several museums in Japan, Australia, Korea, Finland, and Mexico. It's currently in Taipei, and it'll finally be at the Oakland Museum next year, in 2010.

Andrew [Stanton], the director of WALL-E, recently said that he liked it because it helped him remember that there's a sort of childlike sense of wonder that he didn't realize we could still experience as adults. Every time the lights hit, it doesn't matter if you're 5 or 65… people "Ooooh" and "Ahhh"… there's just this beautiful moment.

Video and text interview by Marsha Vdovin and Ron MacLeod for Cycling '74.

by Marsha Vdovin on August 18, 2009