An Interview with Tom Erbe
Audio wizard Tom Erbe is a generous guy. His SoundHack program is a legendary and beloved tool for mangling sound, and he gives it away for free. Now he has made VST plug-ins, so I called him up to see what the man behind this benevolent act was like. I found a funny and wise educator and musician who loves what he does.
So where did you grow up?
I grew up in the Midwest, in Milwaukee and Chicago, and went to school at the University of Illinois Urbana-Champaign. I got involved with music technology pretty early on. My great uncle was a radio engineer at WCFL, and my grandpa was a police radio operator, back in the ‘30s. My great uncle gave me an oscilloscope and microphone when I was about ten or so.
So I ended up at a high school in Illinois that happened to have a radio station, and as soon as my friends found out that I could solder and knew about microphones, I became the technical director there at 15.
I had a nice crowd of friends in high school who were very into music. We all had our own radio shows, and we were always competing to find the most obscure music. The weekly trip to the record store was the basis of high-school social life for me.
Did you play an instrument?
Not really. I really got into things from being a DJ. Also, being the guy who could fix all the gear, as well. We used to do a little outdoor fair that the community put on, and our radio station would go there and put on the music. I would be the one climbing the telephone pole to hang the speakers. It was all fun, and it was just getting together with friends who were really interested in music.
It seems so unusual for a high school to have a radio station.
Yeah. This was back in the ‘70s, and back then there was a provision by the FCC for a Class D radio station license. It allowed small organizations to run low-powered radio stations, under 10 watts.
So there used to be a lot of high-school radio stations in the ‘70s. I was lucky, because I could get into music technology very early.
Then, when I was finishing high school, I found a recording studio that I was able to intern at for a bit. I learned quite a bit there as well. They let me play with the mixing board, and some of the equipment.
When I got into college, I went into computer science and music. At the same time, I worked at a record store, DJ’ed at a couple of radio stations, and interned at Faithful Sound Studio, which was where Mark Rubel worked. He now runs Pogo Studio, a recording studio in Champaign.
I learned quite a bit then. I also played synthesizer in a band. I just got really, really interested in electronic music. Of course, as my tastes got more adventurous, I got into weirder, as well as more serious, electronic music.
So yeah, after I got out of college, I found an opening at the Computer Audio Research Lab at UCSD and took it. It was just perfect for someone with a degree in computer science and a minor in music.
What year was that?
This was 1984.
That must have been quite a culture shock, coming from the Midwest.
Yeah, I guess so. [Laughs.] The options in the Midwest for jobs at the time were very conservative, or at least that’s what it seemed like. Someone with an engineering background was going to work for an industrial company. I wanted to get into something more related to music. So it really seemed like quite the right thing to come out to California.
It must have been amazing. There were a lot of really interesting composers that came through back then.
Oh, yeah. A lot of people were visiting. Gordon Mumma was here for a good long time. That was very fun. John Cage visited. Of course, Roger Reynolds was here—and still is. This was also the time when the Computer Audio Research Lab [CARL] was here, at the Center for Music Experiment. We were developing a lot of software tools for signal processing that all ran on the mainframe computer. The music software all ran in non-real time.
Mark Dolson was at CARL. He developed sound-file convolution and phase vocoder software—a lot of cool stuff. And Dick Moore, who developed cmusic, one of the more interesting Music V-style languages, written in C.
So there was a lot of good software development going on, and I just spent all my time trying to figure out how all of it worked. [Laughs.] I worked on a project developing a real-time pitch detector for electronic violin. Also more basic things like a MIDI interface for a Sun workstation. Which, at that point, we thought was sort of a nice, small, compact computer. [Laughs.]
We were trying to get things working in real time, but at the same time, there was a lot of interest in research with signal processing. I worked there for about three years, but I was itching for a place that was even more creatively active. In ’87, a position opened up at Mills College for the technical director, so I went for that position.
That was such an exciting time in the Bay Area.
It really was. I was at CCM [Center for Contemporary Music at Mills College]. Anthony Braxton was there, David Rosenboom, and Larry Polansky; Chris Brown had just started working. The Hub was there. Bob Ashley came by for a couple of years and I recorded and played synthesizer on his album, Improvement.
So that’s where I started developing the software that became SoundHack, back in ’89.
What was your philosophy behind it? Or did it just evolve organically?
Well, a lot of the fascinating things at UCSD, it seemed to me, were unavailable to people who didn’t have access to a mainframe. So for a couple of years, I tried hard to get a mainframe-type computer at Mills and only got so far. I ended up with a sort of cast-off Hewlett-Packard Bobcat computer, and I got a lot of things working on it.
Then suddenly the Mac II came out, which had a floating-point coprocessor. It was the first Mac that could actually run serious signal processing, because it had floating point built in. The earlier Macs didn’t, so it was completely impractical to do anything on them.
So at that point, I thought, “Well, maybe I should learn how to program the Mac, and bring some of these interesting things to the Macintosh.” And that’s what I went ahead and did. I really love the sounds that you can get out of convolution, and out of the phase vocoder.
It took a couple years. I think the first version of SoundHack came out in ’91. And then I spent maybe five years just continually developing and updating it and adding new processes, always as a standalone application.
Why the decision to make it free?
I didn’t think anyone would want to pay for something that took a whole day to process three minutes of sound. I was excited about the software, and wasn’t really thinking about marketing.
This was before there was any sort of notion of open source software—at least it hadn’t hit me yet. I just wanted to get something out there that would be helpful to experimental musicians, and would help people make a lot of different sounds.
What’s your relationship with Max/MSP?
I’m a teacher, computer music developer, recording engineer and occasional musician. I use Max/MSP and PD [Pure Data] in all of those roles.
At UCSD, I teach the fundamentals of music synthesis. It’s extremely helpful to have a program that is modular, that allows me to show the architecture of an oscillator, or a filter, for instance, and show students how to build these things up from small components and into a hierarchy.
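To make that concrete, here is a minimal sketch, in C rather than Max, of the kind of component-level decomposition he describes: an oscillator split into a frequency-to-increment stage, a phase accumulator, and a waveshaper. The names and structure are purely illustrative, not taken from his course materials.

```c
/* A sine oscillator built from its smallest parts -- a sketch of the
   component-level architecture described above. Names are illustrative. */
#include <math.h>

typedef struct {
    double phase;   /* accumulator: current phase in [0, 1) */
    double incr;    /* phase increment per sample */
} Osc;

/* component 1: map frequency to a per-sample phase increment */
static void osc_set_freq(Osc *o, double freq, double sample_rate) {
    o->incr = freq / sample_rate;
}

/* components 2 and 3: accumulate and wrap the phase, then waveshape it */
static double osc_tick(Osc *o) {
    double out = sin(2.0 * M_PI * o->phase);  /* waveshaper stage */
    o->phase += o->incr;                      /* accumulator stage */
    if (o->phase >= 1.0) o->phase -= 1.0;     /* wraparound */
    return out;
}
```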
My other relationship with Max/MSP is that I am a plug-in developer, and I use Max/MSP to prototype all my software. For example, three years ago, I designed a bunch of delay effects for VST/RTAS/AU, one of which was just called +delay, but it’s really based on the old multi-head, rotating delay lines from the early ‘60s. I prototyped everything in Max. I wanted to give this delay analog-like behavior, so I needed to do some sub-sample interpolation. I also wanted to put some sort of tape saturation, as well as some nice filtering, in the feedback path, so I could emulate the high-frequency loss of tape, but still allow people to go farther than that.
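As a rough illustration of the structure he describes, here is a minimal C sketch of a fractional delay line with saturation and lowpass filtering in the feedback path. It is an assumption-laden sketch of the general technique, not Erbe’s actual +delay code: tanh() stands in for tape saturation, and a one-pole lowpass stands in for the high-frequency loss of tape.

```c
/* Fractional (sub-sample) delay with filtered, saturated feedback.
   A sketch of the general structure only; all names and constants
   are illustrative. */
#include <math.h>
#include <string.h>

#define MAX_DELAY 48000          /* one second at 48 kHz, illustrative */

typedef struct {
    float buf[MAX_DELAY];
    int   write;                 /* write index */
    float lp_state;              /* one-pole lowpass memory */
} Delay;

static void delay_init(Delay *d) { memset(d, 0, sizeof *d); }

/* delay_samps may be fractional; feedback < 1; damp in [0, 1) */
static float delay_tick(Delay *d, float in, float delay_samps,
                        float feedback, float damp)
{
    /* sub-sample read: linear interpolation between two adjacent taps */
    float rpos = (float)d->write - delay_samps;
    while (rpos < 0.0f) rpos += MAX_DELAY;
    int   i0   = (int)rpos;
    int   i1   = (i0 + 1) % MAX_DELAY;
    float frac = rpos - (float)i0;
    float out  = d->buf[i0] + frac * (d->buf[i1] - d->buf[i0]);

    /* feedback path: the lowpass emulates high-frequency tape loss,
       tanh() stands in for tape saturation */
    d->lp_state += (1.0f - damp) * (out - d->lp_state);
    float fb = tanhf(in + feedback * d->lp_state);

    d->buf[d->write] = fb;
    d->write = (d->write + 1) % MAX_DELAY;
    return out;
}
```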
So I built this all up as a huge patch, before I ever went to the C compiler. Then after I built the patch, I was able to quickly go into C and build a plug-in out of it. Using that process, I built a pitch-shifting delay, which uses the classic, multi-head technique for pitch shifting. I also built a granular delay, where the delay line is being sampled with a grain stream.
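For the multi-head pitch-shifting technique he mentions, a common textbook realization sweeps two read taps across a window of the delay buffer at a rate set by the pitch ratio, crossfading between them so the jump when a tap wraps is inaudible. The sketch below shows that classic approach in C; again, all names and constants are illustrative, not Erbe’s implementation.

```c
/* Classic multi-head (doppler) pitch shifter on a delay line. Two
   read taps sweep a window of the buffer; an equal-power sine
   crossfade hides the moment each tap jumps back. Sketch only. */
#include <math.h>
#include <string.h>

#define BUF_LEN 48000            /* illustrative: 1 s at 48 kHz */

typedef struct {
    float  buf[BUF_LEN];
    int    write;
    double ph;                   /* sweep phase in [0, 1) */
} Shifter;

static void shifter_init(Shifter *s) { memset(s, 0, sizeof *s); }

/* interpolated read at a fractional delay behind the write head */
static float read_tap(const Shifter *s, double delay_samps) {
    double rpos = (double)s->write - delay_samps;
    while (rpos < 0.0) rpos += BUF_LEN;
    int    i0   = (int)rpos;
    int    i1   = (i0 + 1) % BUF_LEN;
    double frac = rpos - (double)i0;
    return (float)(s->buf[i0] + frac * (s->buf[i1] - s->buf[i0]));
}

/* ratio > 1 shifts up, < 1 shifts down; window_samps sets grain size */
static float shifter_tick(Shifter *s, float in, double ratio,
                          double window_samps)
{
    s->buf[s->write] = in;
    s->write = (s->write + 1) % BUF_LEN;

    /* each tap's delay sweeps across the window, producing a
       constant doppler shift equal to `ratio` */
    double ph2 = fmod(s->ph + 0.5, 1.0);
    float  a   = read_tap(s, s->ph * window_samps);
    float  b   = read_tap(s, ph2   * window_samps);

    /* equal-power sine crossfade between the two heads */
    float out = a * (float)sin(M_PI * s->ph)
              + b * (float)sin(M_PI * ph2);

    s->ph += (1.0 - ratio) / window_samps;   /* sweep rate from ratio */
    s->ph -= floor(s->ph);                   /* wrap into [0, 1) */
    return out;
}
```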
So that’s become my process for the past three or four years: build everything in either Max or PD first. Once I’m convinced I have something that sounds good, I implement it in C, and maybe do some refinements.
You came out with some plug-ins for Max for Live?
I did. I’ve programmed about 15 plug-ins now. I’ve been doing them since, oh, I guess since about ’99, 2000. I found a lot of people were using my plug-ins within Max, using the VST~ object. So they were using my Decimate plug-in, or my Binaural plug-in, and I thought that was possibly a little inefficient for them, because the GUI does take up some CPU.
Also, within Max, you can get to parameters much quicker than you can through the VST~ object—at least more direct access.
So this last year, a couple of grad students and undergrads and I ported all of my plug-ins to Max/MSP as externals. So now those are all running under Max/MSP and Max for Live.
Have they been popular?
I don’t know. [Laughs.] It’s really hard to say. I never look at how many people download them. I see a lot of people talking about them, and a lot of people saying, “Hurray!” when they got announced. That’s one thing about developing software: when you give your software away for free, you don’t really get a lot of feedback. So I think they’re popular. I should check how many downloads there are.
But people were definitely asking about them a lot before they came out. Then, when they came out, the announcement got retweeted quite a bit. [Laughs.]
I retweeted it!
There ya go. It seemed to go up on every blog. But I don’t know if that indicates popularity. I do hear from time to time that, oh, I use SoundHack’s externals for doing this or that. So I get feedback here and there, but I don’t spend all of my time looking at other people’s blogs or press releases.
Good for you. But that can be a hard thing to resist.
[Laughs.] I’m usually focused on the new software. So I’ll assume that people like it. If they do send me some notes back, it’ll be encouragement for me to do more. So I guess I actually would like to hear whether people are using them or not.
They are sort of different from other externals for Max in that they’re very complete processes. They’re more like stand-alone studio effects than typical externals. So I’m really not sure how that will gel with a typical Max user.
At first, I thought there was no reason to turn my VST plug-ins into externals. I thought, well, a Max user could just build them themselves, so I don’t need to do that for them. But then when I found enough people using the plug-ins, I figured, well, maybe it would be nice for them to have some convenience.
Especially the ‘Max for Live’ people. They just want to get going—fast.
Yeah, definitely. As Max gets more popular, and Max for Live gets more popular, there’s a wider variety of users, and some who don’t want to sit there twiddling so much, or programming so much. So I imagine there’s a need, but I haven’t had a lot of feedback yet.
What are you working on right now?
Right now I’m working on a set of plug-ins that are based on the classic phase vocoder algorithm, which I feel has really not been explored enough in commercial plug-ins. I’m working on a real-time time stretcher, which takes a real-time stream of sound and captures multiple windows from the incoming stream, and layers a time-stretched output. I’m still developing it, but it looks like it’s going to be a really nice way to develop a big, ambient, stretched sound out of any incoming sound.
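The heart of a phase-vocoder time stretcher is per-bin phase propagation: each analysis frame’s bin phases are compared with the previous frame’s to estimate each bin’s true frequency, and the synthesis phase is then advanced by a larger hop than the analysis hop, which stretches time. A hedged C sketch of that core step, assuming the FFT analysis and overlap-add resynthesis are handled elsewhere and with all names illustrative, might look like this:

```c
/* Core phase-vocoder time-stretch step (sketch). The FFT and the
   overlap-add resynthesis are assumed to happen elsewhere; names
   are illustrative. hop_s > hop_a stretches time. */
#include <math.h>

/* wrap a phase value into (-pi, pi] */
static double princarg(double p) {
    return p - 2.0 * M_PI * floor(p / (2.0 * M_PI) + 0.5);
}

/* mag/phase: current analysis frame; prev_phase: last frame's phases;
   synth_phase: accumulated output phases; out_re/out_im: spectrum to
   inverse-FFT and overlap-add at intervals of hop_s samples */
void pv_stretch_frame(const double *mag, const double *phase,
                      double *prev_phase, double *synth_phase,
                      double *out_re, double *out_im,
                      int fft_size, int hop_a, int hop_s)
{
    for (int k = 0; k <= fft_size / 2; k++) {
        /* expected phase advance of bin k over one analysis hop */
        double omega    = 2.0 * M_PI * k / fft_size;
        double expected = omega * (double)hop_a;

        /* deviation from the expected advance gives the bin's true
           instantaneous frequency */
        double delta     = princarg(phase[k] - prev_phase[k] - expected);
        double true_freq = omega + delta / (double)hop_a;
        prev_phase[k]    = phase[k];

        /* advance the synthesis phase by the synthesis hop: laying
           frames down hop_s apart while keeping each bin's measured
           frequency is what stretches time */
        synth_phase[k] = princarg(synth_phase[k] + true_freq * hop_s);

        out_re[k] = mag[k] * cos(synth_phase[k]);
        out_im[k] = mag[k] * sin(synth_phase[k]);
    }
}
```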
In real-time? How exciting.
I’m also working on some stuff with pitch. Pitch shifting out to the ridiculous. I always like taking algorithms beyond the beautiful, to the point where it gets noisy—from sublime to ridiculous. I’ve also done a pitch-shifting vocoder with it, which is sounding really nice.
Then there is also a phase-vocoder looper that I’m developing. It’s sort of like a conventional looper, but it will be using phase-vocoder style pitch shifting and time stretching on all of the loops.
I’m at the difficult part: making this thing fun to play with, interactive, able to lock to a beat, and all those kinds of good things. That’s what I’m doing right now.
I expect to be finished with these, hopefully some time in the next couple of months.
That’s really exciting. Those should get a lot of attention.
Especially with experimental and fringe electronic music getting bigger and bigger.
Interview by Marsha Vdovin and Ron MacLeod for Cycling '74, February 2, 2011.