An Interview with Geoff Martin
The name Bang and Olufsen conjures up a mystique of Bauhaus-inspired design elegance that borders on the sensuous. But every product is packed with the seriously advanced audio technology they are renowned for. Geoff Martin uses Max to design, prototype and test new products for the design-savvy Danish audio/video company. His Max-designed audio DSP algorithms become directly implemented in the final hardware. Each product is heavily tested using the most accurate of measurement devices and methodology in one of the world’s largest privately owned electroacoustic measurement facilities, known as The Cube. But when it comes down to the final tuning and approval of said systems, precise measurement is not enough; B&O still relies on their most critical instrument: Martin’s Tonmeister-trained ears.
Can you tell us a bit about your background and education?
When I left high school I was caught between two worlds. I loved math and I loved music, but I couldn’t figure out a way to do both. Then I found a program at McGill University that basically let me split the difference — a Master’s in sound recording.
The McGill program, which is a Tonmeister degree, required that you do a Bachelor of Music and then you go to McGill where you receive instruction in electronics, acoustics, programming and sound recording. So I continued on with my music and did a Bachelor’s degree in pipe organ, as well as a lot of choral conducting.
From there I went to McGill and got into the Masters program in Sound Recording. While I was in that program, I also spent a lot of time hanging out with music technology people who, at the time, were using what was then a pretty early version of Max. MSP didn’t exist yet back then.
The early ‘90s then? We came out with the first commercial version of Max in 1990.
Yeah. I started at McGill in 1990, so they probably would have been using the first release. Some of the professors at McGill came from IRCAM, so they had experience with it on the NeXT machines. They were actually at IRCAM the same time as David Zicarelli.
Before MSP came out, we were playing around with ISPW on the NeXT computers that we had at McGill.
It was more important that the acoustic simulation sound like the real thing rather than measure like the real thing.
So, you got your PhD there at McGill?
Yes, after the Master’s program finished, I just stuck around and did a hybrid PhD between the Sound Recording and the Music Technology departments.
So I actually wound up having two thesis advisors. One was Wieslaw Woszczyk who was the head of the Sound Recording program, and the other was Philippe Depalle who had just recently started at McGill having come from IRCAM maybe a year or two before.
It was basically a PhD in Acoustics Simulation. But it was within the Music Department, so it was more about simulating acoustics from a phenomenological point of view. It was more important that the acoustic simulation sound like the real thing rather than measure like the real thing.
That sounds a lot like what you are doing now at your present job.
Now I have a job that still splits those two worlds. I’m the Tonmeister and Technology Specialist in Sound Design at Bang and Olufsen, working in Struer, Denmark. What that means is that I’m working in the Acoustics Department, where I’m part of the team that develops our loudspeakers up to the point of where we start production.
I start right at the beginning with our designers on initial concepts, and work with our engineers straight through all the development up to a week or so after we start rolling off the production line.
I’m doing the final tweaks on our loudspeaker design, that all happens in Max.
So, you are involved throughout the whole process…
Well, the reality is that my job only intersects that development in two points. One is at the very beginning, where I help to define what our loudspeakers should do when they reach the point of being a commercial product. So we have discussions very early on to determine, say, for a given product, how loud should it go, how low in frequency should it go, what it is for, who is the customer? And so on. I help design the specifications based on that product description.
Then I come in again at the end, where I’m working with the acoustical engineers and the DSP engineers, to do work on the final sound and the final performance of the loudspeakers. So I’m coming in at the very end of the development process.
Once we have a nearly finished product and start measuring the loudspeaker, I’m working with the engineer who’s doing those measurements to look at how it performs, purely based on the technical measurements. When we’ve cleaned up as much as we can, based on the measurements, then I sit in a listening room and do a lot of listening, evaluating and tweaking – the final finishing touches on the loudspeaker before we start production.
Are you using Max at that stage?
Yes, the bulk of that work happens either in Max or in [Mathworks’s] MATLAB. It depends on the application. But certainly toward the end, when I’m doing the final tweaks on our loudspeaker design, that all happens in Max.
I then feed my algorithms or my coefficients out of the filters from Max back to the engineers, and they port it into the embedded DSP in the products.
So, that’s about 80 percent of what I do.
And the remaining 20%?
The remaining 20 percent is looking a lot further ahead in the future and deciding what our products are generally going to do for our audio signals, say, 5 years or 10 years into the future.
So, for example, we’ve just released a family of televisions called the BeoVision 11 with a built-in high-end surround processor. I, and a couple of other guys, started working on the audio flow in that surround processor almost six years ago!
That entire algorithm was built in Max/MSP before we had it running on any DSP platform. So it in fact grew organically in Max/MSP. When we were done, I fed all the algorithms over to our DSP engineers, who then ported it to the embedded DSP.
When it came time to test their system, essentially all we were doing was running measurements on my Max/MSP implementation on a Mac, and their DSP implementation on the hardware, and if the two outputs were the same, using the acoustic or the electro-acoustic measurements, then the porting was correct.
So, it was interesting. We got to a point where the Max/MSP version was the reference version for the product that hadn’t yet been released.
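A rough sketch of that comparison approach, in Python rather than Max: feed the same test signal to a reference implementation and to a port, and accept the port only if the outputs agree within a tolerance. Both processing functions here are hypothetical stand-ins, not anything from the actual product.

```python
def reference_process(x):
    """Stand-in for the Max/MSP reference implementation."""
    return [0.5 * sample for sample in x]

def ported_process(x):
    """Stand-in for the embedded-DSP port being verified."""
    return [sample / 2.0 for sample in x]

def outputs_match(a, b, tol=1e-9):
    """True if both outputs have the same length and agree within tol."""
    return len(a) == len(b) and all(abs(p - q) <= tol for p, q in zip(a, b))

test_signal = [1.0, -0.5, 0.25, 0.0]
print(outputs_match(reference_process(test_signal),
                    ported_process(test_signal)))  # → True
```

In practice the comparison at B&O was done with acoustic and electroacoustic measurements of real hardware, but the accept/reject logic is the same idea.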
Wow! You know you have a pretty great job, don’t you?
[Laughs!] Oh, I know! It’s pretty fun. Of course, there are days when it’s not so fun, when there’s a problem on the production line and people are scrambling, because production has come to a halt, or we’re really banging our heads against the wall. But there’s always something new to learn and to play with, pretty much every day. So I have to be honest, I really enjoy it.
When did you first learn Max?
The first time I saw it was at McGill, back in 1990. In those days, we were running on an SE/30 or a Mac Classic. Just running MIDI, of course, because that’s all that existed. But I first started actually using it, oh, it may have been six months later.
By the time I finished my Master’s degree, about two or three years after that, I was helping to tutor in the electronic music studio at McGill, so I was teaching Max and the old Opcode librarians for DX-7 for example.
How do you have the time to keep up with the newer versions and newer objects?
I’m not sure that I do. I don’t actually keep up with it intentionally. However, basically, I wouldn’t say every day, but certainly every week, there’s something new that’s coming at me that I need to implement that requires me to go out and find a new object to get the job done. I wasn’t trained as a programmer, I’m trained as a musician, so I’m more interested in just implementing something than the actual programming environment itself.
But I am always working toward a goal, and I will learn what I need to learn in Max based on what needs to be done. So when a new version comes out, I won’t be the first person to go trying the new toys.
For example, I’ve only just started to teach myself Jitter, literally a week ago, because I thought I could make some cool toys for my kids to play with.
When Gen came out, I got it, and I still haven’t touched it. I looked at it once and thought, “Oh, that’s complicated,” and ran away. But I’m sure within some period of time I’m going to need to use Gen, and then I’ll sit down and start cursing at my computer.
So I don’t really think I do keep up with the latest developments in it. Although there are other times when, because the kind of work I’m doing with Max is different from that of most users, I find I’m stumbling against things that maybe haven’t been tested or behaviors that no one else has experienced.
In what other ways are you using it?
Most of the time I’m using it for designing filters for our loudspeakers. What that usually means is a VERY long string of biquads in series. That’s the simplest case, but it’s a very typical case. That’s what I’m doing almost every day.
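As a rough illustration of what a string of biquads in series does computationally, here is a sketch in Python rather than Max; the coefficient values are placeholders, not any real loudspeaker tuning.

```python
def biquad(x, b0, b1, b2, a1, a2):
    """Direct Form I biquad:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

def cascade(x, sections):
    """Run the signal through each biquad section in series."""
    for coeffs in sections:
        x = biquad(x, *coeffs)
    return x

# Two identity sections (b0 = 1, everything else 0) leave the signal unchanged.
identity = (1.0, 0.0, 0.0, 0.0, 0.0)
signal = [1.0, 0.5, -0.25, 0.0]
print(cascade(signal, [identity, identity]))  # → [1.0, 0.5, -0.25, 0.0]
```

A real tuning would use many such sections, each designed from the loudspeaker measurements.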
Another example that I mentioned before is the new B&O televisions, with a 16-channel upmixer in them. That was entirely built in Max first, and then ported over to our DSP platform after the fact. So we have a 16.5 upmix algorithm in the televisions where the reference system is my Mac with Max.
Another example is from the days when I first started at B&O. I was hired originally to design the sound inside our first automotive platform, which was for an Audi A8. That entire system, the upmix algorithm and the tuning system, were built in Max first — the entire thing — and then ported over to the automotive DSP platform.
So I do a lot of algorithm design in Max, but it’s a very organic process. For example, when I was told I had to go build a 16-channel upmixer, I sat in the listening room with 16 loudspeakers, a 16-channel sound card and Max and thought, “OK, I’ve got two channels coming in, and I’ve got to put something in between the two inputs and the 16 outputs,” and then just started building things from there, output by output.
So it’s a really organic process for me, adding outputs, and tweaking and fixing, and coming back and doing more of the same. Basically when I get to the end, I have no idea how I got to where I am from where I started. Then I have to go back and reverse engineer my own patchers, to find out what I did and how I got here, in order to draw it as the flowchart to give to the DSP engineers for porting.
Do you have any go-to objects?
I use biquad every day. I have built a couple of my own objects and abstractions that I use a lot, mostly for EQ adjustment and dynamic processing. I also use sfplay and sfrecord; they get used a lot, along with Soundflower – although the glitch problem that has been hanging around in Soundflower for the past 5 years makes it almost as annoying as it is useful. That’s about it, really. It’s actually a really simple corner of Max that I live in.
I do a lot of MS processing — just old-fashioned sum and difference signals — as well as EQ’ing of the M and S components differently. I know this is sort of verboten if you want to pay attention to phase responses, and keep things “correct”, but I kind of ignore that and just go by the seat of my pants and find out how things sound as I’m playing.
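For readers unfamiliar with MS processing, the sum-and-difference idea can be sketched in a few lines of Python. The gain applied to the side signal here is just an illustrative stand-in for the separate EQ Martin describes.

```python
def ms_encode(left, right):
    """Encode stereo L/R into mid (sum) and side (difference) signals."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Decode mid/side back to stereo L/R."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

left, right = [1.0, 0.5], [0.5, 0.5]
mid, side = ms_encode(left, right)

# Process M and S differently (a simple gain stands in for separate EQ):
side = [s * 2.0 for s in side]   # boosting the side signal widens the image

wide_left, wide_right = ms_decode(mid, side)
print(wide_left, wide_right)
```

With unity gain on both components, decode undoes encode exactly, which is why the technique is attractive: all the processing happens in between.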
I do have to say, one of the drawbacks in Max is the filtergraph object, which only has one view of the universe in terms of how Q is defined.
Can you elaborate on that a bit?
As far as I have found out, there are at least three or four different versions of what Q means when you’re designing a filter. The real problem is that none of this is explained in the help files for the filtergraph object, although, to be fair, it takes quite a bit of digging to figure this out from anyone.
So, for people like me who do an electroacoustic measurement which is then implemented in Max, it is not easily evident that, for example, if my measurement says that I need a 12 dB boost at 1 kHz with a Q of 4, and I implement those parameters in Max, I will not necessarily get what I think I’m getting – or what I want.
The problem lies in how different people define the bandwidth of a filter. It’s in style these days to use the definition based on the half-gain point [i.e., 6 dB down on a 12 dB boost]. This is probably because it’s easy to copy-and-paste the equations from Robert Bristow-Johnson’s Cookbook.1 However, the “classic” definition is based on the -3 dB points. To complicate matters, there is a hybrid version where, if the gain is greater than 6 dB, you use the 3 dB down points, but for gains of less than 6 dB, you use the half-gain points.2
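The difference between conventions is easy to demonstrate. The sketch below builds a peaking filter from the Bristow-Johnson cookbook formulas (which use the half-gain convention) and evaluates its magnitude response: with a 12 dB boost and Q = 4, the band-edge frequencies implied by Q = f0/bandwidth land at roughly 6 dB of gain, the half-gain points, not 3 dB below the peak. This is an illustrative Python sketch, not code from the interview.

```python
import cmath
import math

def rbj_peaking(f0, q, gain_db, fs):
    """Peaking-EQ biquad coefficients per the RBJ cookbook
    (which defines Q from the half-gain bandwidth)."""
    amp = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * amp, -2.0 * math.cos(w0), 1.0 - alpha * amp]
    a0 = 1.0 + alpha / amp
    a = [a0, -2.0 * math.cos(w0), 1.0 - alpha / amp]
    return [c / a0 for c in b], [c / a0 for c in a]

def gain_db_at(f, b, a, fs):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20.0 * math.log10(abs(h))

fs, f0, q, boost = 48000.0, 1000.0, 4.0, 12.0
b, a = rbj_peaking(f0, q, boost, fs)

# Band edges implied by Q = f0 / bandwidth (geometrically symmetric about f0):
half_bw = 1.0 / (2.0 * q)
f_lo = f0 * (math.sqrt(1.0 + half_bw ** 2) - half_bw)
f_hi = f0 * (math.sqrt(1.0 + half_bw ** 2) + half_bw)

print(round(gain_db_at(f0, b, a, fs), 2))   # → 12.0 (full boost at the centre)
print(round(gain_db_at(f_lo, b, a, fs), 2)) # ~6 dB: the half-gain point,
print(round(gain_db_at(f_hi, b, a, fs), 2)) # not 3 dB down from the peak
```

A measurement tool that reports Q from the -3 dB points would call this a different Q, which is exactly the mismatch Martin describes.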
I’m still hoping that multichannel audio will truly become “normal” some day.
I know that a large part of your job is ‘looking ahead’. Any hints about the direction of future audio?
The growth in personal audio devices has resulted in a change in the way a lot of people listen to music. It is now completely commonplace to see people walking down the street listening over headphones – many of them with much higher quality than the “free” earbuds that come with players. I wonder if this will change the way recordings are mixed and mastered. For example, I wonder if, 10 years from now, it will be commonplace to be buying binaural mixes of recordings rather than mixes made on loudspeakers, in much the same way that we’re starting to see tracks that are “Mastered for iTunes”.
I’m still hoping that multichannel audio will truly become “normal” some day. Those of us who have been listening to SACDs and DVD-Audio discs for years can’t wait until the masses hear how good a well-recorded/well-mixed/well-mastered multichannel recording can sound.
For a while, it was great to see bit rates, and therefore quality, come back up, and for people to start thinking of 128 kbps MP3 as a thing of the past. However, it seems that in the past couple of years, some new codecs have come out that have dropped bit rates again. Of course the codecs are better, but there’s no replacement for good ol’ uncompressed audio, even if it’s “just” 44.1/16. I hope for a future when people don’t have to sacrifice quality for bandwidth — and therefore convenience.
1 Bristow-Johnson, R. (1994). The equivalence of various methods of computing biquad coefficients for audio parametric equalizers. In Proceedings of the 97th Convention of the Audio Engineering Society. Audio Engineering Society.
2 Moorer, J. A. (1983). The manifold joys of conformal mapping: Applications to digital filtering in the studio. Journal of the Audio Engineering Society, 31(11):826–841.