An Interview with Elise Baldwin
Elise Baldwin is an intermedia artist who works with music and projections. I find her work to be both fragile and dense, and it inspires me creatively on several levels. She creates music that interacts with both live and recorded video.
Let’s start with a little of your background.
I grew up in a rural part of Idaho, on a farm. My parents moved to the Bay Area when I was like 11 or 12. And I’ve been here, in Northern California, more or less since.
Did you play music when you were younger?
Yeah. I played the piano from a pretty early age. I was always very attracted to recording technologies, which I’m sure a lot of sound people will tell you. I remember doing a lot of radio plays with my brother in our rooms, when we were kids.
My mom had one of those old-school cassette recorders that was bigger than my computer is. [Laughs]. We would make these really elaborate radio plays. I was always very interested in theater, in sound and music. I’m very into literature as well.
I feel like a lot of my artistic and professional life has been about trying to synergize and combine mediums in different ways. When I was younger, I did a lot of theater, because I was attracted to both the liveness and the technical part of it.
Then I moved into filmmaking, and was really into that, because there was sound and acting and script, and light and cinematography. All these different art forms that I loved.
In the early ’90s I started working in the multimedia CD-ROM industry and was really interested in how stories came together in this electronic medium. I think my professional work in video games is just another example of me trying to intuitively put together different mediums and see what happens.
I also think there’s been an increasing move towards live performance, and improvisation in my artwork.
Where were you educated?
I went to UC Santa Cruz for my undergraduate film/video degree. It was part of the theater major at that time, so you had to do all the theater stuff, too.
That was the place where I realized that I really enjoyed working with technology. The film department there had one of the first computerized online video editing systems, a Grass Valley system. [Laughs.] That huge tape-based editing system was so clunky when I think about it now, but at the time it was the height of—I mean, I couldn’t believe we had one. It was 1989, probably. I was so excited to work on it. And I loved sitting in a dark room, by myself, with a wall of gear, and just working. Editing just really clicked for me.
Were you doing music on the side? Were you in bands?
No, I never did that much performing with other people until later, when I went to Mills. That was enforced by the curriculum, as was improvisation, which I had not done a lot of and was pretty freaked out about. But Max helped with that.
When did you go to Mills?
I went to Mills fairly recently, actually. At that point I had worked mostly as a professional sound designer, but I was also doing some After Effects work and a lot of video editing. From 1991 to 2002, though, I was specializing more and more in audio production.
In 2002 I was working for MTV Networks as an audio director in their online division and running their recording studio. I had this moment where I said to myself, “I love my job, but maybe I want to go back to school.” I had stopped making much art, and felt a little burnt out from doing creative services work professionally. I was basically making these fixed sound art pieces, all made with postproduction technology. There was nothing live about it. I was doing some electronic music, but again, it was very pre-programmed, and everything was fixed.
This sound art was completely separate from my piano playing, and my visual-arts interests as well. They were all very disparate practices. The only place in my sound life where I really felt like there was a live thing happening was when I would do sound design for theater, which I used to do a lot of. That kept me attached to the excitement of live performance, but I wasn’t an improviser. I didn’t feel comfortable improvising.
Mills turned out to be the perfect place for me. It was really cool, because on a lark, not even really sure that I would go if I got in, I made this application. Then I got laid off from my job about a month later, and I said to myself, “Perfect. Time to make art.” [Laughs.]
It was really fortuitous, because I got to go back to graduate school with a lot of work experience, and a strong sense of putting my work life, my career aside to make art right now. So it felt very focused to me.
What professors did you work closely with?
I worked closely with Maggie Payne, who I adore, and I worked very closely with Les Stuck. He’s primarily responsible for any facility that I have in Max. He really got me on the road.
I had been studying SuperCollider. It was the first time I’d ever used a text-based programming environment for music creation, and I was really frustrated by the lack of visual data. Coming from a postproduction and recording engineer background, I wanted to be able to see what was plugging into what and what effect that was having on the audio signal. I’m very visually spatial and I couldn’t do that, so it was quite challenging.
I thought, ‘I’m not stupid, but this is really counterintuitive for me.’ I was just plugging away at it, feeling a lack of agency in any aesthetic successes that occurred, although I did manage to make several pieces I loved—mostly through a series of happy accidents, I felt. Then I started looking at Imagine, and a couple of other programs that encouraged signal cross processing between audio and video, because I knew that I really wanted to combine those mediums for myself.
Then Jitter came out. I thought, “Hallelujah!” So I trucked off to Les Stuck’s office, and told him, “Les, I need you to teach me Jitter.” And he looked at me and said, “Well… you might want to know a little something about Max before you start on Jitter.” And I said, “Oh, no, no. I’m fine. I just want to know Jitter.”
Les is a great teacher, and he was very gracious, because I continued to put the Jitter cart in front of the Max horse—and he let me do it. I mean, with helpful suggestions, but he didn’t get frustrated with me. I had very elaborate ideas for things I wanted to build, and I would go prattle at him all about them and then he would help me build a tiny part of the thing I had described. And of course six months later I said, “Oh, I see what you meant. You were right.” [Laughs.]
But it was great. I learned a lot from him. And it was really wonderful, because not only could I combine these two mediums that I loved and had been semi-consciously trying to interconnect for 20 years, but also Max came to me at a time when I was actually ready to start improvising. That was a difficult juncture for me, and frightening. As a musician, I had played the piano classically for years, and just didn’t really think about music that way. Then I got to Mills, and it was like, “Play” and I thought ‘Play what? Where’s the score?’ [Laughs.]
So Max was great, because it was a real treat to be able to build my own setup for improvisation. I had tried to improvise with the piano and found that I didn’t take to it at all. So now I had this other way to do that.
Was it a hard learning curve?
Yeah. I’d say so. It was difficult. That experience in itself was interesting for me, because I have always thought of myself as being extremely technically apt. So it was kind of crushing that it was hard. I was really used to just opening something up and making something with it. And it wasn’t like that at all.
But once you got over the learning curve?
I had a lot of experience working with hardware and hardware signal flows, which gave me mental models of things I might build in Max. That knowledge made the translation to the programming environment much easier for me. And once I grokked the freedom and power of the toolset and moved beyond those mental models, it became a whole other obsessive creative activity for me.
Is Max your main program?
I would say Max is really the only program I use for live performance these days.
Do you use it in your work at all, at EA?
Actually, yeah, we do use it. It’s been really gratifying to see it make a production pipeline incredibly efficient. I work with a tight-knit community of audio artists there, and there are several people who’ve contributed significantly to the development of our voice-editing tools, which are all done in Max/MSP.
Kent Jolly, who’s an Audio Director for the Spore team, built the first iteration of the voice pipeline tools, and then Dave Swenson and Marielle Jakobsons, who I work with, have further refined them. I had a little hand in it, but mostly I was just beta testing.
I also use Max a lot at work to prototype game-engine mechanics without actually having them made or integrated in game code. As an example, we do a lot of voice pitch shifting in our games, and I built a pitch shifter that I can use on voiceover auditions to see how somebody’s voice is going to pitch shift before we cast them and put it in game.
We actually eliminate or choose people sometimes based on whether or not their voice will pitch shift well, without artifacting. Every voice is different in its ability. It has to do with the timbral and frequency characteristics of the voice—some pitch shift better than others. So it was a fun tool that only took me a couple days to build. It’s been really useful.
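She doesn’t describe how her audition pitch shifter is built, but one common approach to shifting pitch while keeping duration, granular resampling, can be sketched in pure Python. This is a simplified illustration under my own assumptions, not her Max/MSP tool; the grain and hop sizes are arbitrary choices:

```python
import math

def pitch_shift(samples, ratio, grain=1024, hop=256):
    """Granular pitch shifter: overlap-add of Hann-windowed grains, each
    read back at `ratio` times normal speed, so the pitch moves by `ratio`
    while the overall duration stays the same."""
    out = [0.0] * len(samples)
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / grain) for n in range(grain)]
    for start in range(0, len(samples) - grain, hop):
        for n in range(grain):
            src = start + n * ratio            # resampled read position
            i = int(src)
            if i + 1 >= len(samples):
                break
            frac = src - i
            # linear interpolation between neighboring input samples
            s = samples[i] * (1 - frac) + samples[i + 1] * frac
            out[start + n] += s * win[n]
    # compensate for the overlapping Hann windows, which sum to grain/(2*hop)
    scale = hop / (grain * 0.5)
    return [v * scale for v in out]

# Try it on a 256 Hz test tone: a ratio of 2.0 should double the pitch.
sr = 8192
tone = [math.sin(2 * math.pi * 256 * n / sr) for n in range(sr)]
shifted = pitch_shift(tone, 2.0)
```

The artifacting she mentions shows up here too: grains whose phases don’t line up partially cancel, and how audible that is depends on the timbral content of the voice being shifted.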
When you finished grad school, did you have expectations of how that was going to affect your career, or what you were going to do next?
It’s funny. I think that I told myself all along that graduate school wouldn’t actually influence my career. In part I think that was because mentally I really wanted to make this reclamation of my art life, and so I said to myself, ‘This is about art, not about career.’ But of course, the two were intrinsically linked. I got an internship at EA as a result of being in the graduate program, and then they hired me as a contractor for many years thereafter. So it turned out to be causal after all.
I had worked on several games in the ’90s, but EA is a really nice environment to be a sound artist in, because they support their audio teams; technology-wise, we have the tools we need.
It must be hard to have a 9 to 5 job and still make art?
I would say it’s been one of the bigger challenges of my adult life, honestly. Especially because the two activities are so closely related. I do sound design all day, and sit in front of the computer for nine hours, listening, and then I come home, and want to sit in front of a computer and listen again for a few more hours?
I think that that challenge is sort of why my work is tending more and more towards physical props used with Max. I’m basically using Max to control lighting cues, and sound, and image. But there’s also this very physical component, which I’ve received a lot of good feedback on. It’s very accessible; people can see what I’m doing.
I’m setting up these little dioramas, and manipulating them, and combining other media with them. There’s a really tangible, physical aspect of the performance, which people definitely respond to. I’m not just checking email up there. [Laughs.]
Carl Stone says if you just walk by a room where he’s doing a concert, it just looks like an accountant at his computer.
Oh yeah. For me, electronic music can be a particularly hermetic form. You see somebody performing in that environment, and there’s often no perceptible connection with the audience. Maybe because of my interest in theater, and my long-running attachment to that form, I’m always very interested if people who are watching me perform are connecting with what I’m doing.
Am I really taking them somewhere with this work? Are they experiencing things, other than constructing ideas about the piece? Is the piece transportive for them? So yeah, it’s interesting. It’s hard to get a little peek into someone as a personality, a performance personality, when they’re working in a computer-based form, I think.
Lately, I find myself moving more towards wanting to do something a little more theatrical.
Is your work carefully composed, or is any of it more freeform when you’re performing?
I would say that I’ve gone through different phases with that. I never improvised at all before I started using Max/MSP. I was really only making tape pieces, kind of a postproduction paradigm. Then I went to Mills in 2002 and it became apparent very quickly that I was going to be expected to improvise in this program.
I was excited about that, but very nervous, because I hadn’t really worked improvisationally, and I had been in digital postproduction for so much of my professional life, that I was accustomed to having very fine control over everything that I was editing and arranging.
So it was challenging. I made a couple of installations in Max/MSP that were more like interactive behavioral systems that had preprogrammed behavior. There was a randomness within the systems, but the range of what they would do was pretty controlled.
After I had done a couple of those, I started building musical instruments to improvise with. I worked for about a year and a half with a group of three friends, and we formed a quartet that we called Mire, that was purely improvisational. It was really the first time that I had ever done that—and it was so much fun! They were tremendously skilled players with a lot of experience. They were also incredibly sensitive to me, and to each other, as improvisers. So it was really a great experience.
A lot of the electronic music work that I do is improvisational. The audio-visual pieces, not so much. Those are more carefully sequenced and planned out, because there’s a lot of pre-production in terms of setting up how the sequencing on those works. Because there’s so much going on running both audio and visuals simultaneously, and I’m always teetering on that brink of not really being able to control it all in real time, I tend to automate big parts of the system.
For each piece, I’ll figure out what the most important element is for me to perform live, and then I’ll link everything else into that. So for example, I just did a piece that I’m calling Theatre of Plants, in which the video sequence is totally predetermined. Not in terms of how long the clips are running—there’s some liveness to that—but what order they happen in is totally locked before I ever play the piece.
But the music part of it is open-ended for me when I perform it live. Although I wrote a quote-unquote ‘composition,’ and in my mind it has a certain structure and compositional form, it changes every time I perform it.
How are you using Jitter?
Well, usually I’ll build two instruments and have them talk to each other via OSC over UDP. I used to run a lot of multi-computer pieces: when I would do A-V work, I would run the audio off one machine and the video off another, just for processing reasons.
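The two-machine setup described here, one patch pushing control data to another over the network, can be mimicked with plain UDP datagrams. Below is a minimal Python sketch using only the standard library; it stands in for Max’s udpsend/udpreceive objects or a real OSC library, and the “/video/blur” address and text serialization are invented for illustration (real OSC is a binary format):

```python
import socket

def send_param(sock, addr, path, value):
    # OSC-style "address + value" message, serialized here as plain text.
    sock.sendto(f"{path} {value}".encode(), addr)

# Receiver: the "video machine" listening on any free local port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2.0)
port = receiver.getsockname()[1]

# Sender: the "audio machine" pushing a control value across.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_param(sender, ("127.0.0.1", port), "/video/blur", 0.75)

data, _ = receiver.recvfrom(1024)
path, raw = data.decode().split()
print(path, float(raw))

sender.close()
receiver.close()
```

UDP’s fire-and-forget delivery is a good fit for a performance context: a dropped control message is better than a patch stalling while it waits for an acknowledgment.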
I’ve stopped doing that so much lately, because I’ve become a little bit more knowledgeable about OpenGL.
What is OpenGL?
OpenGL is a graphics programming interface. Within Jitter, it’s used to offload QuickTime video processing to the graphics card, rather than using the machine’s central processor to compute graphical data.
What that’s done for me is it frees up the central processor solely for audio processing. Because I’ve become a little more expert in terms of OpenGL programming within Jitter, I can now offload almost all of the video processing onto the graphics card, and not really tap the other resources of my machine.
So I’ve stopped doing so many networked-machine pieces. Which is great, because besides being a little gear-heavy, it’s also a little overstimulating in performance to be running two machines and looking at two different patches. [Laughs.]
Do you have any favorite objects, both for music, and for video, that you rely on?
I do. I love the pattrstorage object, and all of its associated pieces. It’s just the most powerful data communication tool that I’ve found in Max. Like I said earlier, I’m kind of always teetering on that brink of not being able to manage all of the aspects of a live performance, with both audio and video. I don’t really have that feeling when I’m performing with only my audio patches, and just doing electronic music. That feels manageable to me. I’m very comfortable with the interface I use, so that feels less overwhelming.
I use pattrstorage to do a lot of presetting, and a lot of interpreting between presets. Everything I do, at its heart, is basically sample sequencing, and sample manipulation, so it’s really nice for me to be able to store those and then move between them.
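The preset interpolation she relies on can be illustrated outside Max. This is a hypothetical Python sketch of the idea, not pattrstorage itself; the sampler parameter names are invented for illustration:

```python
def interpolate(preset_a, preset_b, t):
    """Weighted blend of two presets: t=0.0 recalls a, t=1.0 recalls b.
    This mirrors what pattrstorage does when interpolating numeric
    parameters between two stored presets."""
    return {name: preset_a[name] + (preset_b[name] - preset_a[name]) * t
            for name in preset_a}

# Two stored states of a hypothetical sample-manipulation patch.
presets = {
    1: {"grain_size_ms": 40.0, "playback_rate": 1.0, "wet_dry": 0.2},
    2: {"grain_size_ms": 120.0, "playback_rate": 0.5, "wet_dry": 0.8},
}

# Recall a point 25% of the way from preset 1 to preset 2.
blend = interpolate(presets[1], presets[2], 0.25)
print(blend)
```

Sweeping `t` from 0 to 1 over time is what turns a bank of static presets into a performable gesture, which is presumably why she describes “moving between” stored states rather than just recalling them.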
Do you have a favorite Jitter object?
On just a pure functionality level, because I’m very pragmatic in my programming, I like the jit.gl.slab objects a lot. That group of objects performs the OpenGL functions I mentioned, so it’s helped me a lot in terms of making efficient patches.
For your visuals, is most of the material found footage? Or do you shoot visuals?
I do shoot some. Again, I feel like this has changed over time. I used to work almost exclusively with archival footage. I wasn’t really shooting at all. Recently, I’ve started doing more pieces with a live camera, so I’ll often be mixing between something that’s happening physically in the performance space, and then something that is off of disk.
It’s kind of scary, the live camera thing, in performance situations. It’s a little more risky for me as a performer—and I’m a bit of a control freak—but I’ve been really enjoying making these pieces. I’ve gotten good feedback about the work, because, I think, there’s something for people to watch, there’s something very clear physically that’s happening, and there’s a real accessibility to it. I was actually startled at the positive feedback I got, because I didn’t really think of it as being that different than what I had done before. But a lot of friends and colleagues expressed to me that they liked how it was very translatable.
How about your patching style? How do you organize your patches?
I tend to do a lot of nesting of patches. When I first started Maxing, over at Mills, I used to put everything in one level, and just patch it all together, and it would just make Les cry. I mean, I would unlock the patches, and his eyes would just start to water. [Laughs.]
He taught me some very good housekeeping habits, which at the time of course I resisted, but they’ve come to serve me greatly as I have continued to make more and more complicated patches.
Les also taught me an enormous amount, before Max 5, about how to debug. It’s a lot easier now, with the debugging tools in Max 5. When I started, I would make these really complicated monolithic patches that were impossible to debug and pick apart. Often I couldn’t even figure out why they worked the way they did. [Laughs.]
So yeah, my programming style has gotten more modular, and more compartmentalized. I tend to build modules, and then use those modules over and over again in different configurations, which I didn’t used to do.
I think that shift in my programming style was both a practical necessity—I like patching, but sometimes you have to stop working on the patch to actually make some art!—and also came from studying under someone who had such an intelligent and ordered way of looking at patching. I’m really thankful for that… retrospectively. [Laughs.]