
An Interview with Geoff Martin

The name Bang & Olufsen conjures up a mystique of Bauhaus-inspired design elegance that borders on the sensuous. But every product is packed with the seriously advanced audio technology the company is renowned for. Geoff Martin uses Max to design, prototype and test new products for the design-savvy Danish audio/video company. His Max-designed audio DSP algorithms are implemented directly in the final hardware. Each product is heavily tested, using the most accurate measurement devices and methodology, in one of the world's largest privately owned electroacoustic measurement facilities, known as The Cube. But when it comes down to the final tuning and approval of those systems, precise measurement is not enough; B&O still relies on its most critical instrument: Martin's Tonmeister-trained ears.

Can you tell us a bit about your background and education?

When I left high school I was caught between two worlds. I loved math and I loved music, but I couldn’t figure out a way to do both. Then I found a program at McGill University that basically let me split the difference — a Master's in sound recording.

The McGill program, which is a Tonmeister degree, requires that you first do a Bachelor of Music; then you go to McGill, where you receive instruction in electronics, acoustics, programming and sound recording. So I continued on with my music and did a Bachelor’s degree in pipe organ, as well as a lot of choral conducting.

From there I went to McGill and got into the Masters program in Sound Recording. While I was in that program, I also spent a lot of time hanging out with music technology people who, at the time, were using what was then a pretty early version of Max. MSP didn’t exist yet.

The early ‘90s then? We came out with the first commercial version of Max in 1990.

Yeah. I started at McGill in 1990, so they probably would have been using the first release. Some of the professors at McGill came from IRCAM, so they had experience with it on the NeXT machines. They were actually at IRCAM at the same time as David Zicarelli.

Before MSP came out, we were playing around with the ISPW on the NeXT computers that we had at McGill.

It was more important that the acoustic simulation sound like the real thing rather than measure like the real thing.

So, you got your PhD there at McGill?

Yes, after the Master’s program finished, I just stuck around and did a hybrid PhD between the Sound Recording and the Music Technology departments.

So I actually wound up having two thesis advisors. One was Wieslaw Woszczyk who was the head of the Sound Recording program, and the other was Philippe Depalle who had just recently started at McGill having come from IRCAM maybe a year or two before.

It was basically a PhD in Acoustics Simulation. But it was within the Music Department, so it was more about simulating acoustics from a phenomenological point of view. It was more important that the acoustic simulation sound like the real thing rather than measure like the real thing.

That sounds a lot like what you are doing now at your present job.

Now I have a job that still splits those two worlds. I’m the Tonmeister and Technology Specialist in Sound Design at Bang and Olufsen, working in Struer, Denmark. What that means is that I’m working in the Acoustics Department, where I’m part of the team that develops our loudspeakers up to the point of where we start production.

I start right at the beginning with our designers on initial concepts, and work with our engineers straight through all the development up to a week or so after we start rolling off the production line.

When I’m doing the final tweaks on our loudspeaker design, that all happens in Max.

So, you are involved throughout the whole process…

Well, the reality is that my job only intersects that development at two points. One is at the very beginning, where I help to define what our loudspeakers should do when they reach the point of being a commercial product. So we have discussions very early on to determine, say, for a given product: how loud should it go, how low in frequency should it go, what is it for, who is the customer, and so on. I help design the specifications based on that product description.

Then I come in again at the end, where I’m working with the acoustical engineers and the DSP engineers, to do work on the final sound and the final performance of the loudspeakers. So I’m coming in at the very end of the development process.

Once we have a nearly finished product and start measuring the loudspeaker, I’m working with the engineer who’s doing those measurements to look at how it performs, purely based on the technical measurements. When we’ve cleaned up as much as we can, based on the measurements, then I sit in a listening room and do a lot of listening, evaluating and tweaking – the final finishing touches on the loudspeaker before we start production.

Are you using Max at that stage?

Yes, the bulk of that work happens either in Max or in [MathWorks’] MATLAB. It depends on the application. But certainly toward the end, when I’m doing the final tweaks on our loudspeaker design, that all happens in Max.

I then feed my algorithms, or the coefficients of my filters, from Max back to the engineers, and they port them into the embedded DSP in the products.

So, that’s about 80 percent of what I do.

And the remaining 20%?

The remaining 20 percent is looking a lot further ahead into the future and deciding what our products are generally going to do for our audio signals, say, 5 or 10 years into the future.

So, for example, we’ve just released a family of televisions called the BeoVision 11 with a built-in high-end surround processor. A couple of other guys and I started working on the audio flow in that surround processor almost six years ago!

That entire algorithm was built in Max/MSP before we had it running on any DSP platform. So it in fact grew organically in Max/MSP. When we were done, I fed all the algorithms over to our DSP engineers, who then ported it to the embedded DSP.

When it came time to test their system, essentially all we were doing was running the same measurements on my Max/MSP implementation on a Mac and on their DSP implementation on the hardware; if the two outputs were the same, using the acoustic or electro-acoustic measurements, then the porting was correct.
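That measurement-based comparison is easy to picture in code. Hypothetically, if both renderings of the same test signal were captured as arrays, a check like the following would express the pass/fail criterion (the function name and tolerance here are illustrative, not B&O's actual test harness):

```python
import numpy as np

def port_matches_reference(reference, ported, tol_db=-100.0):
    """Return True if the ported DSP's output matches the reference
    rendering of the same test signal to within tol_db re full scale."""
    diff = np.asarray(reference, dtype=float) - np.asarray(ported, dtype=float)
    worst_db = 20.0 * np.log10(np.max(np.abs(diff)) + 1e-30)
    return worst_db <= tol_db
```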

So, it was interesting. We got to a point where the Max/MSP version was the reference version for the product that hadn’t yet been released.

Wow! You know you have a pretty great job, don’t you?

[Laughs!] Oh, I know! It’s pretty fun. Of course, there are days when it’s not so fun, when there’s a problem on the production line and people are scrambling, because production has come to a halt, or we’re really banging our heads against the wall. But there’s always something new to learn and to play with, pretty much every day. So I have to be honest, I really enjoy it.

When did you first learn Max?

The first time I saw it was at McGill, back in 1990. In those days, we were running it on a Mac SE/30 or a Mac Classic. Just running MIDI, of course, because that’s all that existed. But I first started actually using it, oh, it may have been six months later.

By the time I finished my Master’s degree, about two or three years after that, I was helping to tutor in the electronic music studio at McGill, so I was teaching Max and the old Opcode librarians, for the DX7 for example.

How do you have the time to keep up with the newer versions and newer objects?

I’m not sure that I do. I don’t actually keep up with it intentionally. But, I wouldn’t say every day, certainly every week, there’s something new coming at me that I need to implement, which requires me to go out and find a new object to get the job done. I wasn’t trained as a programmer; I’m trained as a musician, so I’m more interested in just implementing something than in the programming environment itself.

But I am always working toward a goal, and I will learn what I need to learn in Max based on what needs to be done. So when a new version comes out, I won’t be the first person to go trying the new toys.

For example, I’ve only just started to teach myself Jitter, literally a week ago, because I thought I could make some cool toys for my kids to play with.

When Gen came out, I got it, and I still haven’t touched it. I looked at it once and thought, “Oh, that’s complicated,” and ran away. But I’m sure within some period of time I’m going to need to use Gen, and then I’ll sit down and start cursing at my computer.

So I don’t really think I do keep up with the latest developments in it. Although there are other times when, because the kind of work I’m doing with Max is different from that of most users, I find I’m stumbling over things that maybe haven’t been tested, or behaviors that no one else has experienced.

In what other ways are you using it?

Most of the time I’m using it for designing filters for our loudspeakers. What that usually means is a VERY long string of biquads in series. That’s the simplest case, but it’s a very typical case. That’s what I’m doing almost every day.
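For readers who haven't strung biquads together before, here is a rough sketch of the idea in Python with SciPy; the generic band-pass below stands in for a real loudspeaker tuning, and none of this is from Geoff's patchers:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000  # sample rate in Hz

# Any high-order filter factors into biquads ("second-order sections");
# each row of `sos` is one biquad: [b0, b1, b2, a0, a1, a2]. In a real
# loudspeaker tuning, these rows would come from measurements.
sos = butter(8, [80, 12000], btype="bandpass", fs=fs, output="sos")

x = np.random.randn(fs)   # one second of test noise
y = sosfilt(sos, x)       # run the whole series of biquads in one call
```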

Another example, which I mentioned before, is the new B&O televisions, which have a 16-channel upmixer in them. That was built entirely in Max first, and then ported over to our DSP platform after the fact. So we have a 16.5 upmix algorithm in the televisions where the reference system is my Mac with Max.

Another example is from the days when I first started at B&O. I was hired originally to design the sound inside our first automotive platform, which was for an Audi A8. That entire system, the upmix algorithm and the tuning system, were built in Max first — the entire thing — and then ported over to the automotive DSP platform.

So I do a lot of algorithm design in Max, but it’s a very organic process. For example, when I was told I had to go build a 16-channel upmixer, I sat in the listening room with 16 loudspeakers, a 16-channel sound card and Max and thought, “OK, I’ve got two channels coming in, and I’ve got to put something in between the two inputs and the 16 outputs,” and then just started building things from there, output by output.

So it’s a really organic process for me, adding outputs, and tweaking and fixing, and coming back and doing more of the same. Basically, when I get to the end, I have no idea how I got to where I am from where I started. Then I have to go back and reverse-engineer my own patchers, to find out what I did and how I got here, in order to draw it as a flowchart to give to the DSP engineers for porting.

Do you have any go-to objects?

I use biquad every day. I have built a couple of my own objects and abstractions that I use a lot, mostly for EQ adjustment and dynamic processing. I also use sfplay and sfrecord; they get used a lot, along with Soundflower – although the glitch problem that has been hanging around in Soundflower for the past 5 years makes it almost as annoying as it is useful. That’s about it, really. It’s actually a really simple corner of Max that I live in.

I do a lot of MS processing — just old-fashioned sum and difference signals — as well as EQ’ing of the M and S components differently. I know this is sort of verboten if you want to pay attention to phase responses, and keep things “correct”, but I kind of ignore that and just go by the seat of my pants and find out how things sound as I’m playing.
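The sum-and-difference step he describes is small enough to show in full. A minimal sketch, with function names of our own choosing; the interesting part is whatever EQ gets applied to M and S between the two steps:

```python
def ms_encode(left, right):
    """Mid/side encode: M carries what the two channels share,
    S carries what differs between them."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Exact inverse of ms_encode, provided M and S are untouched;
    EQ'ing them differently is where the fun (and the phase
    'incorrectness') comes in."""
    return mid + side, mid - side
```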

I do have to say, one of the drawbacks in Max is the filtergraph object, which only has one view of the universe in terms of how Q is defined.

Can you elaborate on that a bit?

As far as I have found out, there are at least three or four different versions of what Q means when you’re designing a filter. The real problem is that none of this is explained in the help files for the filtergraph object, although, to be fair, it takes quite a bit of digging to figure it out from anyone.

So, for people like me who do an electroacoustic measurement which is then implemented in Max, it is not easily evident that, for example, if my measurement says that I need a 12 dB boost at 1 kHz with a Q of 4, and I implement those parameters in Max, I will not necessarily get what I think I'm getting – or what I want.

The problem lies in how different people define the bandwidth of a filter. It’s in style these days to use the definition based on the half-gain point [i.e., 6 dB down on a 12 dB boost]. This is probably because it’s easy to copy-and-paste the equations from Robert Bristow-Johnson’s Cookbook [1]. However, the “classic” definition is based on the -3 dB points. To complicate matters, there is a hybrid version where, if the gain is greater than 6 dB, you use the 3 dB down points, but for gains of less than 6 dB, you use the half-gain points [2].

To make things worse, shelving filters have a similar, but different, problem. I was happy to see that Max 6 finally started calling the Q of its shelving filters in the filtergraph object the slope instead – but this is just a small window into a larger problem for those of us who need to be both precise and accurate. The end result of all of this is that I’ve had to make my own JavaScript objects that define exactly which filter coefficients come out when I put in EQ parameters, to make sure that I’m compatible with the different definitions.
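To make the ambiguity concrete, here is the peaking-EQ recipe from the Bristow-Johnson cookbook [1], sketched in Python with an illustrative function name. Q enters the math only through alpha, and the bandwidth it implies is measured between the half-gain points rather than the classic -3 dB points:

```python
import math

def rbj_peaking_eq(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients per the RBJ Audio EQ Cookbook.
    Note: this Q is tied to the cookbook's half-gain bandwidth
    definition; a 'Q of 4' from a measurement system that uses the
    -3 dB definition will not produce the same curve."""
    a = 10.0 ** (gain_db / 40.0)         # square root of the linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)     # the only place Q appears
    b0, b1, b2 = 1.0 + alpha * a, -2.0 * math.cos(w0), 1.0 - alpha * a
    a0, a1, a2 = 1.0 + alpha / a, -2.0 * math.cos(w0), 1.0 - alpha / a
    # Normalize so a0 == 1, the usual convention for embedded biquads.
    return [b0 / a0, b1 / a0, b2 / a0, 1.0, a1 / a0, a2 / a0]

# Example: the 12 dB boost at 1 kHz with a Q of 4 mentioned above.
coeffs = rbj_peaking_eq(fs=48000, f0=1000.0, gain_db=12.0, q=4.0)
```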

I'm still hoping that multichannel audio will truly become "normal" some day.

I know that a large part of your job is ‘looking ahead’. Any hints about the direction of future audio?

The growth in personal audio devices has resulted in a change in the way a lot of people listen to music. It is now completely commonplace to see people walking down the street listening over headphones – many of them with much higher quality than the "free" earbuds that come with players. I wonder if this will change the way recordings are mixed and mastered. For example, I wonder if, 10 years from now, it will be commonplace to be buying binaural mixes of recordings rather than mixes made on loudspeakers, in much the same way that we’re starting to see tracks that are “Mastered for iTunes”.

I'm still hoping that multichannel audio will truly become "normal" some day. Those of us who have been listening to SACDs and DVD-Audio discs for years can't wait until the masses hear how good a well-recorded/well-mixed/well-mastered multichannel recording can sound.

For a while, it was great to see bit rates, and therefore quality, come back up, and to see people start thinking of 128 kbps MP3 as a thing of the past. However, it seems that in the past couple of years some new codecs have come out that have dropped bit rates again. Of course the codecs are better, but there's no replacement for good ol' uncompressed audio, even if it's "just" 44.1/16. I hope for a future when people don't have to sacrifice quality for bandwidth — and therefore convenience.

Bang & Olufsen's Website

[1] Bristow-Johnson, R. (1994). "The Equivalence of Various Methods of Computing Biquad Coefficients for Audio Parametric Equalizers." Presented at the 97th Convention of the Audio Engineering Society.
[2] Moorer, J. A. (1983). "The Manifold Joys of Conformal Mapping: Applications to Digital Filtering in the Studio." Journal of the Audio Engineering Society, 31(11), 826–841.

Text interview by Marsha Vdovin and Ron MacLeod for Cycling '74.

by Marsha Vdovin on July 22, 2013

Brian H.

Great to hear more about what you're doing over there, Geoff! I gotta say, I'm pleasantly surprised to see Max being used so deeply in these areas where, I have to say, I did not expect it!

ehdyn

Solid interview questions

robert bristow-johnson

wow, i just happened to stumble by this.... hi Geoff !

the way that Q and BW are defined for the old classic analog EQs is, i think, the Q or BW of the bandpass filter that is in parallel with the "wire" in a peaking EQ. with a BPF there is always a -3 dB point. but the -3 dB points of the BPF are not the same as the -3 dB points of the peaking EQ *if* the peaking EQ even *has* -3 dB points (i.e. how do you define -3 dB points when the boost of the EQ is only 2 dB?).

i think the relationship between Q and BW should remain pretty much the same, no matter how the parameter gets applied to the filter. the reason why i chose that weird definition (which was, BTW, motivated by Andy Moorer's "Manifold Joys..." paper) was to have the cut EQ perfectly mirror a boost EQ (for the same dB, same resonant frequency, and same BW) without having any tedious "if" statements (i.e. a consistent mathematical rule).

if you were willing to put in an "if" statement into the coefficient cooking code, you could have the BW and Q be exactly how it is for the classic EQ (i.e. the electrical engineering definition) for *boost*, and then for cut you fudge the definitions even *more* than what the cookbook does in order to get a symmetrical boost and cut. all that means is that one can put in a mapping of what the cookbook means by "Q" and "BW" and what the classic definition of those parameters are. i.e., one can use the cookbook along with their favorite definition of Q as long as they map it from their definition to the cookbook definition. the cookbook sorta splits the difference between the classic definition for boost and the classic definition for cut in such a way that the boost and cut are mirror images of each other.

i understand that some folks may rightly not like that definition. your mileage may vary.

Geoff Martin

Hi Robert!

Basically, your if-statement suggestion is what I've done - allowing me to choose which version of "Q" I happen to be working with on a particular project by making a "universal translator" between the definitions. Typically, I use your half-gain BW definition, since it is the one that behaves best when I want to scale the gains of multiple filters simultaneously. The big problem as I see it (and as you and I have discussed in the past) is the fact that there are different definitions. For example, if I send EQ parameters that I'm using in my Max/MSP patcher to someone who will use a hardware EQ to implement the curve, there's no guarantee they're hearing what I hear. In fact, if the hardware EQ is from the days before your cookbook came out, it's almost certain that they won't. So, instead (of course) it's smartest to just send biquad coefficients around instead - if your device can understand how to deal with those...

Personally, I don't really care which version of "Q" (and "slope", if we're talking about a shelving filter) a particular piece of software or hardware uses, as long as the version type is explicitly stated somewhere in a manual or help file. Unfortunately, many (possibly even most) people don't realise that different definitions exist, so they don't know that they have to be explicit at all.

Of course, there are other places where this problem occurs. For example, a "ton" could mean 1000 kg, 2000 pounds or 2240 pounds, depending on where and when you live - and the world keeps on turning despite the fact... :-)

Cheers
-geoff

jaxon

Amazing! This is my first time here, and I got to know Geoff through this interview. He really loves music; I could say that music is Geoff's world.

cdeckard

First, thanks for a great interview! It's particularly interesting to me because I began implementing Max/MSP for our speaker voicing about a month ago. I have two questions, however. Were you able to determine which definition of Q the filtergraph object implements? Also, the acoustic engineer I work with has expressed a desire for odd-order filters, which filtergraph doesn't support. Did you run into a similar issue, and were you able to resolve it?

Cheers!
Christopher Deckard
Seattle, Wa

Roman Thilenius

why the heck would anyone want two different filters to sound the same?

Geoff Martin

Hi Christopher,

My measurements certainly indicate that the filtergraph~ object uses Robert's "cookbook" definition of Q. I notice that there's a small deviation at higher Q values: for example, when I ask for a Q of 16, it appears that I get an actual Q (using the half-gain definition of bandwidth) of 16.1522, give or take. But, for example, for a Q of 1, I get a measured Q of 1.0054 (again, using the half-gain definition). I'll write up a blog posting in the next day or two with some measurements and stick it on my website (www.tonmeister.ca/wordpress) so that you can see some of the details.

As for odd-order filters, I have to confess that I cheat. I do have one abstraction that I use occasionally, which is a first-order allpass filter based on half of a biquad. However, for all of the rest, I just use the cascade~ object and feed it coefficients that I calculate in MATLAB.

Cheers
-geoff
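One way to read the "half of a biquad" trick above: a first-order section is just a biquad with its z^-2 terms zeroed out, so it can ride along in an ordinary biquad cascade. A minimal sketch, assuming the common [b0, b1, b2, a0, a1, a2] coefficient layout (this is not Geoff's actual abstraction):

```python
def first_order_allpass_as_biquad(c):
    """First-order allpass H(z) = (c + z^-1) / (1 + c z^-1), padded
    with zeroed z^-2 terms so it can sit in a biquad cascade.
    Keep |c| < 1 for stability."""
    return [c, 1.0, 0.0, 1.0, c, 0.0]   # [b0, b1, b2, a0, a1, a2]
```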

Geoff Martin

Hi again,
I've posted a quick analysis of the peaknotch setting in the filtergraph~ object at http://www.tonmeister.ca/wordpress/2013/09/12/q-vs-q/ if you're interested.
Cheers
-geoff