
Developer Focus: Surreal Machines

Do you ever wonder about the people behind your favorite devices? Once in a while, I’m in a position to have a sit-down with the people who make the tools I use - in this case, two of the men behind Surreal Machines’ Max for Live devices: Matt Jackson and Peter Dowling. We had a nice, leisurely conversation about them, their work, and their hopes for the future.

Hello Matt & Pete, thanks for agreeing to an interview with me. I’ve been a big fan of Surreal Machines’ audio plugins since I first laid eyes on them, and have used them many times both live and in recordings.

Tell me, where did the idea first kick off? Particularly the collaboration between you two?


(Matt:) I think a delay with saturation was one of the very first things I ever made musically. I got started in Reaktor (Generator) in the 1990s and have been kind of constantly evolving the ideas ever since.

When I started working at Ableton, coming from NI (Native Instruments), I switched from Reaktor to Max for Live. In the beta forum, I had the idea of translating some zero-delay feedback filters from Vadim Zavalishin that I had used in Reaktor Core over to Max using the gen~ object. I think I posted what I hoped would work, asking the forum why it didn’t, and Pete solved it right away. From then on, we were always exchanging pretty advanced (and some utilitarian) snippets of gen~ code. Then the idea of the delay came back up and I wanted to see if Pete could optimize my version, which I thought was done. Turns out my code wasn’t that bad in the end - it was just really doing a lot of modeling, and that cost CPU. But Pete took the project to another level - like the second-stage booster on a rocket - adding all kinds of character modes, realizing UX ideas I had, and improving the quality of the DSP and structure in general. That’s kind of the way we work now: we discuss ideas, I usually do something I think is finished, and then I hand it over to Pete and he takes it to the next level while we work together on the UX.
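
To give a rough idea of what that kind of port looks like, here is a minimal zero-delay feedback (TPT) one-pole lowpass written in GenExpr for a gen~ codebox, following Zavalishin's published method - a generic textbook sketch with an illustrative parameter name, not Surreal Machines' actual code:

    // minimal zero-delay feedback (TPT) one-pole lowpass, after Zavalishin
    Param cutoff(1000);                   // cutoff in Hz (illustrative default)
    History s(0);                         // integrator state
    g = tan(pi * cutoff / samplerate);    // prewarped integrator gain
    v = g * (in1 - s) / (1 + g);          // solve the implicit (zero-delay) feedback
    out1 = v + s;                         // lowpass output
    s = out1 + v;                         // update state for the next sample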

(Peter:) A few years ago Matt and I met on the forums. We liked each other’s code and attitude towards building. We decided to work on a delay together because it seemed like an obvious, simple, well-loved choice to test out our new collaboration. It got a little out of hand and became the ‘Magnetic’ M4L device, a crazy mix of informed research and playful fantasies. We didn’t care about the craziness because we thought only 10 people might buy it. In the end we created this Dub Machines beast that many people loved and expected us to develop further. We had become obsessed, so we bit. The rest is… as they say.

By the way, Matt and I never met in person until after we had already released the M4L version of Dub Machines. We even did a Skype interview with Darwin Grosse over at the Art + Music + Technology podcast before we’d met! Mostly we have worked together remotely.

Hugely important to our recent work has been the collaboration with Alex Harker (of HISSTools fame, amongst many other things). Without his skills, expertise, creative interventions and friendship, we would not be where we are today as a company.

(Matt:) Yeah, Alex basically coded everything in the VSTs except the core of Diffuse, which we exported from gen~. While we all worked together on the ideas and Pete did the code reviews, it is basically all Alex’s C++ code. We even used his convolution engine in the original Max devices. Alex and I had already worked together on the Ableton Max for Live convolution reverbs.

I’m curious about the ‘analog emulation’ component. This has almost become a catchphrase in effects and instrument plugins lately as the world sees a resurgence in analog hardware.

Can you tell me about the lengths you’ve gone to on these new devices to capture the essence of hardware?


(Peter:) This is a difficult question to be asked in public. I mean, I suppose I should put on the good company face and give you some aphoristic marketing speak to back up the prevalent mood of the times. However, I would much rather answer as an individual, not speaking for Matt or for Surreal Machines, so I can express how this term that gets bandied around, ‘analog emulation’ (or variants thereof), is basically complete bullshit.

Yes, with our latest VST/AU Dub Machines plug-in releases we have worked tirelessly to create analog-inspired sound worlds based on the concept of ‘emulating’ existing hardware. I just feel that the terms used to summarise this intent are not useful. The ‘analog xxx’ moniker should at least be reserved for code that attempts to capture the nuances of analog behaviour within a digital context.

In Dub Machines (especially the recent plug-ins), we started with our own musical preconceptions, based on being musicians with analog and digital pasts. We did not trust ourselves, so we employed well-grounded scientific methods for measuring, recording, creating impulse responses of different pieces of equipment, etc. Then came the analysis. Then we found exciting results and were not afraid to play with them a little, creatively, finding our own ways of creating analog-like behaviour.

There are many different techniques for achieving this through programming. We studied hard, learnt how to implement them, and made up a few of our own. The only real ongoing battle is with CPU usage.

So, when we say ‘analog emulation’, we are actually saying that we do not just commit lazy, digital, theoretical maths on ones and zeros representing audio signals to our DAC. We strive to use various well-grounded techniques to make our audio signals feel alive, malleable, human and interesting. And the model we look to is the analog domain, where we have an intuitive and in-depth social, cultural and analytical relationship with a long history of human sound making. This is a long-winded and pompous way of saying, “We want our digital audio to sound fantastic”.

(Matt:) I agree with Pete that the term gets thrown around a lot, is so vague, and is starting to lose its merit. However, the first draft of Modnetic (or even Magnetic) started with the schematics of a Roland 201, and using that block diagram we started to approximate each part - an input amplifier, for example. We didn’t do the math with resistor values, calculating Kirchhoff’s laws like some of my smarter DSP friends at Ableton and elsewhere actually do. But you know, we did take measurements of frequency responses and clipping, mostly coming from recordings, like from our friend Andreas Tilliander or other devices we had on hand. In the case of many parts, you could even say “Analog Emulation” because we are using samples from actual gear to recreate the same vibe. For example the noises, the springs, the wobble... all of that is recreated from actual recordings of the parts of the gear we wanted to inject into our sound.

But Pete - you have to admit that the new BBD circuit in Modnetic really is recreating, mathematically, the same phenomena that happen in a bucket-brigade chip. When we started the VSTs, we went to this shop in Berlin called Echoschall (the whole team - Alex, Pete and I; this was maybe the second and last time we were all together). There you can rent whatever classic gear you can imagine; really, it’s incredible what they have there. We took out a 501 and a bunch of pedals and did lots of recordings of the sound and behavior to match things like the BBD degradation and the delay time ramping, etc. When I returned the 501, they told us it had broken in transit, and we helped pay for its repair. At that moment, I said to Pete and Alex, “We’ve got a good idea here, because traveling with this gear is so inconvenient and difficult.”

(Peter:) Yes, I would go so far as to say that the BBD modulation section in our VST Modnetic is the piece of coding I am most proud of in my life. I do not mind saying that I think it is fantastic sounding and is a great piece of audio design. That modulation section sums up Surreal Machines and the answer I gave earlier, in the sense that it is a high quality piece of true ‘analog emulation’ but also affords the ability to be smashed apart to make new, strange sound worlds. Really this is the best essence of ‘analog inspired’.
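
To unpack “delay time ramping” a little: in a tape or BBD echo, changing the delay time changes the tape or clock speed, so the effective read position glides to the new value (with the familiar pitch bend) rather than jumping. A rough GenExpr illustration of that behaviour - purely a toy sketch with made-up values, not the Modnetic code - might look like this:

    // toy tape/BBD-style delay: the delay time is slewed, not jumped,
    // so changes glide and produce a pitch bend
    Param time_ms(300);                     // target delay time in ms (illustrative)
    History cur(0);                         // current (smoothed) delay in samples
    Delay d(96000);                         // delay line, about 2 seconds at 48 kHz
    target = time_ms * 0.001 * samplerate;  // target delay in samples
    cur = cur + 0.0005 * (target - cur);    // one-pole slew toward the target
    d.write(in1);
    out1 = d.read(cur, interp="cubic");     // interpolated read at the gliding position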

I have to agree - sometimes the marketing lingo and BS get in the way of people actually making smart creative decisions. However, it seems the analog vs. digital argument still prevails, even though most people are working with hybrid systems these days.

What do you look forward to most as software meets hardware and vice versa?


(Matt:) I think the next frontier is embedded systems. A typical computer is amazing in that it is so mutable, and can basically take on any task someone capable can think of, due to its reprogrammability. However, it lacks a dedicated interface for anything other than typing and pointing. And I don’t think tablets are really that much better in this regard for music, because they essentially suffer from a similar problem - being non-dedicated. But at least their interface is more adaptable, although it lacks a certain physical feedback.

I see things going towards hardware that is infused with the power and intelligence of a microprocessor, but with more dedicated functions and even more purposeful, physical limitations... Analog sound is an example of the charm of a deliberate physical limitation - in this case, the electromagnetic properties of materials. I see things like Elektron’s Analog Four, or even the new MPC X (which doesn’t even have much in terms of analog circuits as far as I know), or the Novation Peak, or even pioneers like the Mutable Instruments Ambika becoming more popular, because they have just enough of the flexibility that a plugin or computer might have, but integrate a dedicated interface and sound.

The real hurdle that has to be overcome here is the cost of production and space... but there doesn’t seem to be a clear solution to that. Maybe if Elon Musk were making instruments, we’d have a clue (laughs).

On the software-only side, I see AI playing a big role, as you can already see in some Google experiments and in the fields of mixing and mastering. The idea with mixing and mastering is that, like the digital camera with autofocus, auto-aperture, etc., things become possible for everyone, whereas before someone needed the expertise and training to do it by hand. Intelligent mixing aids like Neutron (by iZotope) and LANDR for mastering will now automate tasks for home musicians. But really, I think mixing and mastering is just the start. I expect big things. I sometimes stay up late just thinking of the possibilities. Yeah, that, coupled with the data available from networked hardware... you get kind of a recipe for a steeper evolution.

(Peter:) What Matt said. Oh, and Max running on embedded systems, please. If you look at the last 30 years, we’ve gone from specific IRCAM / Stanford reliance on NeXT through FTS & MusicKit, to Pd/Max and Faust - specific solutions becoming more useful, generic solutions as the technology progresses. I really hope that we continue to get creative generic tools (e.g. gen~ code export) and not closed systems where we have to jump through horrible hoops (e.g. the tyranny of changing plug-in specifications, or iOS, etc.).

It seems we’re seeing new things like plugins that work with hardware [e.g. the new Akai MPCs], relegating the computer to a less prominent role.

What are some of your predictions for software over the next decade? Don’t worry, I won’t hold you to these - go wild!


(Matt:) Yeah, AI is one stream of evolution we will see. More collaborative software probably. If I were to make a wild guess, I think that VR will probably become a platform before all commercial records are made on tablets or phones.

(Peter:) I think it is simpler than that. I have a pessimistic version and an optimistic version.

Pessimistic: everything we do with a computer gets increasingly commodified. We have already seen this creeping in. 15 years ago I was in control of my own computer, and now I am not. In general, this leads to a lack of innovation and a rebranding of creativity as ‘cultural usefulness’. The only hope for creative people working with technology in the near future is Linux (not because it is any good, but because it is the least worst), and sadly Cycling '74 are behind on this one. None of this should preclude commerce, funding R&D, or making a living from one’s skills. It is just that there should be a way to rethink the model so that it is possible to put time and effort into areas of creative research that may not ‘sell the most copies’.

In fairness, Cycling has managed this amazingly over the years; I mean, we wouldn’t have gen~ or jit.physics or dictionaries, etc., if they had not managed to fund edge-case software development in a commercial context. And it is great to see the likes of u-he starting Linux support, but sadly this is not the norm. With the future so obviously being embedded computing, we need high-quality creative software that runs on everything and seamlessly scales from a Raspberry Pi to a GPU beast. Creative people need to be DIY, but they need some help. We want direct access to that embedded platform (Bela), not just to buy it (MPC). Sadly, I don’t think we will get it; or at least it will become increasingly difficult to ‘own’ one’s digital environment.

Optimistic: I learn to stop worrying and love the bomb.

(Matt:) Pete [laughing], I like the optimistic version…

(Laughing) We got a little dark there, Pete, but I like your optimism. I’m going to drag this discussion back to Max and - in particular - to gen~ code export. I’ve known for some time that this was a big part of Surreal Machines and of both of your own personal development.

I’m curious - how did you both get into gen~?


(Peter:) I was very lucky. I got sent a beta in 2011. My head exploded. I stayed indoors for a few months. It was like Max but better. I was hooked before I’d even worked out what I could do (there was no documentation at that stage). I started studying my DSP algorithms all over again. I’ve used it daily ever since.

So, I guess you could say I was sitting around waiting for gen~, and then suddenly, out of the blue, it appeared. I needed it to carry on using Max and most importantly to stay part of the Max community, as if one disappears into more and more personal and specific solutions, one ends up in a social vacuum. Every time Max extends itself, it makes the investment even more worthwhile.

(Matt:) Before Max, I started with similar environments very early on and tried nearly all of them, getting quite proficient. I worked for a while at NI right after Reaktor introduced Core and before Max for Live or gen~ came along. When Max for Live came out and I started working at Ableton, I immediately switched to Max and months later Gen came out. I was super excited, because programming in Max is such a friendly environment, but I had gotten pretty deep into DSP at that point from using Core and saw gen~ as the perfect way to transfer all my knowledge into making Live devices.

My first real Max for Live device was a sort of test to see if I could make some delays in gen~ similar to ones I had made in Reaktor and in C++ - those later turned into Dub Machines.

When we decided we wanted to make VSTs, Cycling had already added Code Export and shown us how to use it; since all the audio processing in Diffuse was done 100% in gen~, it was super simple to export and wrap the code as a proof of concept. In fact, it worked so well that we basically made a few changes to the gen~ code and exported it again, and that is what we are using in the Diffuse VSTs we just released. (Modnetic needed more optimisation and new parts like the modulation, and it used Alex Harker’s convolution, so we decided to take our gen~ code there as a reference and write it from scratch in C++ - but we still used Max/gen~ for all the prototyping.)

What is it that you like about gen~?

(Peter:) Speed, quality, depth. Being in control. Having the fine level of granularity to realise almost any idea, all from within the Max ecosystem, with the knowledge that it can be exported to generic C++ for different prototyping situations. I like gen~. Ultimately, the reason I think it is so clever is that it is a creative language, not an overtly technical one (unlike Faust, for example). It is a logical extension of the Max universe. The designers knew exactly what they were doing in this regard. I really appreciate that.

At Surreal Machines we used code-export for the Diffuse VST. It was not simple or straightforward, and there is a lot of other work in that plugin as well. But I can appreciate that this will get more refined as time goes on. I hope that we can share our experiences in some more defined way in the future to help others with the gen~-patch-to-plugin process.

(Matt:) I also like the control. I really like to know what’s happening inside my DSP, and I even try to make simple things like MIDI-to-frequency or RT60 calculations with faster approximations when necessary (even though now we have fastpow and such). We have hand-coded tanh approximations with various trade-offs so that we can pick the most optimised one that has the least impact for our particular need.
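
As a rough illustration of the kind of trade-off Matt is describing - not the actual Surreal Machines code, and with made-up parameter names and drive amounts - a GenExpr codebox might swap exact maths for cheaper approximations like this:

    // MIDI note to frequency using gen~'s fast power approximation
    Param note(69);                              // MIDI note number (illustrative)
    freq = 440 * fastpow(2, (note - 69) / 12);
    // cheap Pade-style tanh() stand-in for saturation: clamped so it
    // reaches exactly +/-1 at +/-3 instead of growing without bound
    x = clamp(4 * cycle(freq), -3, 3);           // drive a test tone into the shaper
    out1 = x * (27 + x*x) / (27 + 9*x*x);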

But what I like best about it is being able to code visually at a low level - that, and the fact that it compiles in real time. I’m really just so rooted in visual DSP that I can’t imagine typing in routines somewhere, tracing them back when reading through something, and then hitting F5 and reloading an application just to find out what happens. I just like being able to trace things and dive in wherever I’m working. Pete’s totally the other way, though. He loves setting up a big list of code that he can call at will, and going back to change the source in one place.

What would you like to see in the future for gen~?

(Peter:) Specifically, even better optimisations through a robust and consistent interface to standard approximations. Or a Gen SDK so we can do this ourselves. One of the main difficulties with gen~ is that it is an always-on, always-64-bit-doubles environment. Which also makes it great of course!

Broadly, I’d like to see the whole of Max just become Gen-like, with new features to allow for non-audio-graph behaviour. This means addressing anything through the JIT-compiled interface of either patching or scripting. We have always been able to create parts of Jitter chains referenced just through JavaScript, but if that were GenExpr and everything were compiled - for Max and MSP and Jitter and GL and everything - wow.

I’d also like to see “Gen for X”, so instead of just a compiling toolchain for Owl (for example), we’d get a gen.owl~ object, and a gen.arm~ object, and a gen.arduino object, etc. And if we cannot get the whole of Max on a Raspberry Pi, maybe at least a standalone Gen application?

So, my wishes are modest.

(Matt:) Dangerous question. I really love gen~, so I’m pretty satisfied, but I think the Cycling crew (at least here in Berlin) know that I’d love to be able to do branching visually instead of with a code block and IF or CASE statements. Currently, the ? operator will run both sides of the branch all the time behind the scenes.
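
In other words, the patching-level ? operator computes both of its branches every sample and then selects one result, whereas an if/else written inside a codebox gives you an actual branch, as Matt describes. A small GenExpr sketch of the difference (the condition and the processing are arbitrary, just for illustration):

    // ternary: both expressions run every sample, then one result is chosen
    out1 = (in2 > 0) ? tanh(in1 * 8) : in1;

    // codebox if/else: only the taken branch runs
    if (in2 > 0) {
        out2 = tanh(in1 * 8);
    } else {
        out2 = in1;
    }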

Thinking a bit bigger, I’d like to have something like abstractions that exist locally to a gen~ patcher - something I can make where, instead of Copy/Paste Replace, I would get an aliased copy. But if I were to edit it (like making a compromise for a tanh), it would only affect the instances in that gen~, so I know my other projects don’t change sound.

Some sort of Code Export that also exports a Max UI would be killer, but I have no idea how that would work and I guess it’s not really a gen~ feature.


For more information on Surreal Machines:

https://www.surrealmachines.com/product/dub-machines-vstau/

by Tom Hall on August 29, 2017