Articles

An Interview with Eric Lyon

Eric Lyon is, among other things, a composer, performer, teacher, and developer of Max external objects. More generally, Eric is someone who regularly pushes the envelope. Fittingly, he has just published the first book on writing audio external objects for Max and Pd: a practical guide to implementing synthesis and signal processing techniques that extend these two popular audio environments.

Eric will be one of the featured artists at Code Control, Europe's biggest Max meetup, taking place in Leicester (22-24 March 2013). For more information, visit the Code Control website.

First, before we talk about your book, I wonder if you could tell me how you first learned about Max and what you did with it?

I first learned Max so that I could play in a band. In 1996 I had just joined IAMAS, and my colleague Masahiro Miwa invited me to perform some live computer music around Japan. The band was called Psychedelic Bumpo, and our ensemble consisted of a turntablist, an electric guitarist, and two computer musicians. Masahiro advised me to learn Max, saying, “just play around with it for two hours and you’ll know Max.” I had never done any data-flow programming before, but Masahiro was right; Max was very easy to learn. My system configuration consisted of Max 3 on a PowerBook, controlling a Kyma Capybara system for sound generation and processing. I built a set of drum machines and weird sample processors. We gave some really fun concerts around Japan, but I don’t know if anything was ever recorded. We were pre-YouTube.

And then, for the benefit of our readers, perhaps you could highlight a musical project you've done with the software and talk a little bit about how it works.

In 2005 I wrote a piece called Introduction and Allegro. It was commissioned by Shiau-uen Ding for the NeXT Ens, who premiered it in 2006 with Meg Schedel performing the computer part. The computer performer samples the ensemble live, and these samples are then used in combination with the instrumental parts, often with a great deal of audio processing on the samples and instrumentalists. There is some tight rhythmic interplay between instruments and computer, and samples need to be captured live, edited down, and then played back, sometimes within a few seconds, and other times much later in the piece. For example, toward the end of the piece, the ensemble is combined with a drum machine playing samples that were all captured during the performance. If you miss your chance to record the snare drum, that becomes a real problem later on! The piece is a nail-biter to perform, but very satisfying. The audio processing and specific musical materials to be sampled are all fully notated. But unlike playback of prerecorded materials, each performance of Introduction and Allegro is sonically distinct, since the live-captured sounds will always have different nuances.

I wanted to bring Max users lacking much coding experience to a level where they could independently develop externals according to their own needs and interests.

I suppose the first question is to introduce the context of an audio external object -- what's a quick way to say what it is and why we should care about them?

Audio objects produce digital sound directly, in contrast to MIDI objects like noteout, which require an external synthesizer to produce sound. When you use someone else’s synth, you’re stuck with their sound design. Using audio objects, you can design sound however you like, with incredible flexibility.
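
To make that concrete for readers who have never looked inside one, here is a minimal sketch of a complete signal external in the Pd flavor of the API (one of the two APIs Eric's book covers). The object name simplegain~ and everything in it are invented for illustration; this is not code from the book. The heart of it is the perform routine, where your C code touches every sample directly.

/* simplegain~ : a hypothetical, minimal Pd signal external that
   multiplies its signal input by a gain set on the right inlet. */
#include "m_pd.h"

static t_class *simplegain_class;

typedef struct _simplegain {
    t_object x_obj;
    t_float x_f;       /* required by CLASS_MAINSIGNALIN: holds floats
                          sent to the signal inlet */
    t_float x_gain;    /* gain factor, set via the right inlet */
} t_simplegain;

/* The perform routine runs once per signal vector; this is where the
   external computes sound, one sample at a time. */
static t_int *simplegain_perform(t_int *w)
{
    t_simplegain *x = (t_simplegain *)(w[1]);
    t_sample *in    = (t_sample *)(w[2]);
    t_sample *out   = (t_sample *)(w[3]);
    int n           = (int)(w[4]);
    while (n--)
        *out++ = *in++ * x->x_gain;
    return (w + 5);   /* skip past our four arguments */
}

/* Called when DSP is switched on: put the perform routine on the DSP chain. */
static void simplegain_dsp(t_simplegain *x, t_signal **sp)
{
    dsp_add(simplegain_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, sp[0]->s_n);
}

static void *simplegain_new(void)
{
    t_simplegain *x = (t_simplegain *)pd_new(simplegain_class);
    x->x_gain = 1.0;
    floatinlet_new(&x->x_obj, &x->x_gain);    /* right inlet writes the gain */
    outlet_new(&x->x_obj, gensym("signal"));  /* signal outlet */
    return (void *)x;
}

void simplegain_tilde_setup(void)
{
    simplegain_class = class_new(gensym("simplegain~"),
        (t_newmethod)simplegain_new, 0, sizeof(t_simplegain),
        CLASS_DEFAULT, 0);
    CLASS_MAINSIGNALIN(simplegain_class, t_simplegain, x_f);
    class_addmethod(simplegain_class, (t_method)simplegain_dsp,
        gensym("dsp"), A_CANT, 0);
}

The Max/MSP version differs in registration details, but the shape is the same: a struct, a perform loop, and a dsp method that schedules it.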

So, why did you originally think the world needed a book about writing audio externals for Max/MSP and Pd?

Writing Max and Pd externals has been extremely useful for my own work. I thought other musicians could use some help to develop those skills for themselves. The Max API documentation from Cycling '74 is excellent, but it is aimed at experienced programmers. I wanted to bring Max users lacking much coding experience to a level where they could independently develop externals according to their own needs and interests. Once they have worked through my book, they can use the Max API documentation to progress even further.

My formal training is as a musician and composer, not a computer scientist. So the book covers many of the issues that I needed to learn about when I was teaching myself how to code audio, including issues that might seem obvious to professional programmers, but not necessarily to musicians. Essentially, I wrote the guidebook that I wish I had owned when I started writing audio signal processing software back in the day.

Learning how to code externals in C expands your awareness of the range of possibilities for sound processing in Max.

Why should Max users learn how to write audio externals in the first place?

When Miller Puckette first wrote Max, he expected that users of the software would write externals in C code as a matter of course, which is largely what happened. It was rare at first for Max users to write patches using just the existing externals. A few decades later, the situation is nearly reversed: most Max users work exclusively with existing externals, whether provided by Cycling '74 or by the many third-party developers. It is rare for users to write their own externals.

So why learn to code your own audio externals? Some Max users may simply want to gain a deeper understanding of how audio externals work. The graphical data-flow interface of Max is very intuitive, but it is a visual metaphor for a more fundamental system of samples, signals and C code. My book can help readers learn what’s going on “under the hood” in Max.

But the book is not just for tourists! Even with all of the externals out there, I often encounter situations where the best solution is to write a new external. For example, I recently needed a version of the Max object “sel” to run in the signal domain with sample-accurate response. I quickly wrote an external called “el.sel~” to solve this problem. I think other Max users may find the ability to write externals very useful for situations where they can’t quite find the right external to solve a given problem.

Sometimes writing an external in C can yield the most straightforward solution to a problem, even for problems that might be solvable using a large, complicated Max patch. DSP coded in C is generally very efficient, compared to the same algorithm written as a Max patch. This is true even for gen~ patches. Above all, C code is a fundamentally different way of thinking about how an audio external does its work, compared with patching. Learning how to code externals in C expands your awareness of the range of possibilities for sound processing in Max.
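
To give a flavor of the quick problem-solving Eric describes with el.sel~, here is a guess at the core idea, sketched in the same Pd style as the earlier example. This is not Eric's actual code, and all the names are invented; it only shows why a perform loop makes the response sample-accurate.

/* Hypothetical sketch of a signal-domain "sel": output a one-sample
   trigger (1.0) whenever an input sample equals the match value.
   Not Eric's el.sel~; the _new/_dsp/_setup boilerplate would follow
   the simplegain~ pattern above. */
#include "m_pd.h"

typedef struct _sigsel {
    t_object x_obj;
    t_float x_f;
    t_float x_match;   /* value to watch for in the incoming signal */
} t_sigsel;

static t_int *sigsel_perform(t_int *w)
{
    t_sigsel *x   = (t_sigsel *)(w[1]);
    t_sample *in  = (t_sample *)(w[2]);
    t_sample *out = (t_sample *)(w[3]);
    int n         = (int)(w[4]);
    while (n--) {
        /* the trigger lands on the exact sample where the match occurs,
           which is what sample accuracy buys over a control-rate sel */
        *out++ = (*in++ == x->x_match) ? 1.0f : 0.0f;
    }
    return (w + 5);
}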

If I were teaching computer music, I might think your book would be a great resource for a class on DSP programming. Have you had the opportunity to use the material that way?

First of all, thank you! The book has come out so recently that I have not yet had a chance to use it in classes. It could indeed be used as a textbook for classes with an emphasis on developing externals for Max/MSP or Pd. The book is designed for self-learning, whether in or outside a classroom setting. My PhD student Christopher Haworth learned how to write externals by reading drafts of the book without any further explanation on my part, so I know that it worked at least once!

I suppose most readers of the book will assume you've written your own audio externals. Can you talk a little about what externals you've made and what you used them to do?

I have developed two major collections of externals: FFTease, a collaboration with Christopher Penrose that implements a variety of spectral processors, and LyonPotpourri, a grab bag of processors for things like sample-accurate timing, non-realtime editing of buffers, and other audio fun and games. All of my externals were built for my own music, and I use them all the time for creating and performing music.

One thing I really like about the book is the iterative nature of the examples. For example, you have a delay line with a certain set of features, and then you add the ability to adjust to changes in the sample rate. Did you use this iterative process when developing your own externals?

Absolutely. Most of my externals are developed during the course of composing. As a piece evolves, I add new features to my externals if the piece requires it. One of my favorite parts of the book is Miller Puckette’s Afterword, where he describes his philosophy of externals design. Miller tries to reduce externals to their essential functions, whereas I try to cram as many fun ideas into my externals as possible. Miller and I have different backgrounds and different reasons for writing externals, so naturally we have different styles and approaches to the design process. I thought it was important to show that in the book, to encourage readers to develop their own personal approaches to designing externals.
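
The sample-rate example from the question is worth sketching, because the pattern generalizes: store musical parameters in seconds, and let the dsp method, which runs whenever audio is switched on, rebuild any sample-dependent state. Again this is an invented illustration in the Pd style used above, not code from the book; the _new and _setup functions are omitted (x_buf would first be allocated with getbytes, and x_sr initialized to 0).

/* Hypothetical simpdelay~ sketch: a full-buffer delay whose storage is
   rebuilt whenever the sample rate changes, so the delay time in
   seconds stays constant. */
#include <string.h>
#include "m_pd.h"

typedef struct _simpdelay {
    t_object x_obj;
    t_float x_f;
    t_sample *x_buf;            /* delay line storage */
    long x_buflen;              /* current length in samples */
    long x_writepos;
    double x_delay_seconds;     /* the musical parameter we preserve */
    double x_sr;                /* sample rate the buffer was sized for */
} t_simpdelay;

static t_int *simpdelay_perform(t_int *w)
{
    t_simpdelay *x = (t_simpdelay *)(w[1]);
    t_sample *in   = (t_sample *)(w[2]);
    t_sample *out  = (t_sample *)(w[3]);
    int n          = (int)(w[4]);
    while (n--) {
        long rp = x->x_writepos + 1;   /* oldest sample in the ring */
        if (rp >= x->x_buflen)
            rp = 0;
        x->x_buf[x->x_writepos] = *in++;
        *out++ = x->x_buf[rp];
        if (++x->x_writepos >= x->x_buflen)
            x->x_writepos = 0;
    }
    return (w + 5);
}

static void simpdelay_dsp(t_simpdelay *x, t_signal **sp)
{
    if (sp[0]->s_sr != x->x_sr) {
        /* sample rate changed: recompute the buffer length so the
           delay in *seconds* stays the same */
        x->x_sr = sp[0]->s_sr;
        long newlen = (long)(x->x_delay_seconds * x->x_sr) + 1;
        x->x_buf = (t_sample *)resizebytes(x->x_buf,
                       x->x_buflen * sizeof(t_sample),
                       newlen * sizeof(t_sample));
        memset(x->x_buf, 0, newlen * sizeof(t_sample));
        x->x_buflen = newlen;
        x->x_writepos = 0;
    }
    dsp_add(simpdelay_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, sp[0]->s_n);
}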

This is a nice way to quickly arrive at rich, expressive, refined sound materials for composition.

The spectral processing chapter is really informative and dispels a lot of the mystery behind writing code that operates on audio in the spectral domain. Now that readers will feel more comfortable with spectral processing code, do you have any suggestions for what we could do with this knowledge? Any objects you'd like to see created?

Spectral analysis is the first step for all kinds of feature analysis, such as source separation, beat tracking, timbre matching, etc. There are endless potential uses for such feature analysis in live electronic music, so it would be great to see some new externals that implement such ideas. There is also room for expansion of the pfft~ system itself. At present, pfft~ implements FFT/IFFT transformations. Both of the spectral examples in my book rely on pfft~. But some of my favorite FFTease objects incorporate an internal estimation of instantaneous frequencies, which are then modified and resynthesized using an oscillator bank, rather than an IFFT. It would be nice to have a pfft~-friendly external to convert from an amplitude/phase spectrum to an amplitude/frequency spectrum. At the output stage, there would be an option for fftout~ to use an oscillator bank for resynthesis. With all that in place, it would be trivially easy to write a pfft~ patch to implement a phase vocoder, by simply scaling all the frequencies in the amplitude/frequency spectrum. For more complicated frequency manipulations (such as occur in my pvtuner~ object), you could write a small external to manipulate the amplitude/frequency spectrum, but the pfft~ system would do all of the heavy lifting.
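
For readers curious about the conversion Eric describes, this is the standard phase-vocoder calculation: each bin's instantaneous frequency is its center frequency plus a deviation measured from the phase advance between successive FFT frames. The function and names below are hypothetical, not from FFTease or the book.

/* Hypothetical sketch: convert one FFT frame's amplitude/phase
   representation to amplitude/frequency by estimating each bin's
   instantaneous frequency. N = FFT size, H = hop size in samples,
   R = sample rate; lastphase[] carries state between frames. */
#include <math.h>

#define TWOPI (2.0 * M_PI)

void phase_to_freq(const double *phase, double *lastphase, double *freq,
                   int N, int H, double R)
{
    for (int k = 0; k <= N / 2; k++) {
        /* raw phase advance since the previous frame */
        double dp = phase[k] - lastphase[k];
        lastphase[k] = phase[k];
        /* subtract the advance expected for a sinusoid sitting exactly
           on bin k over a hop of H samples */
        dp -= TWOPI * (double)k * (double)H / (double)N;
        /* wrap the deviation into (-pi, pi] */
        while (dp > M_PI)  dp -= TWOPI;
        while (dp < -M_PI) dp += TWOPI;
        /* bin center frequency plus the measured deviation, in Hz */
        freq[k] = k * (R / N) + dp * R / (TWOPI * H);
    }
}

Scaling every freq[k] by a constant before oscillator-bank resynthesis then gives exactly the phase vocoder transposition Eric mentions.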

Another promising area is in the development of non-realtime externals that operate on the contents of buffers. My LyonPotpourri object el.buffet~ does this, and I also present an external in my book called “bed,” which lays the foundations for a really cool audio editor in Max. I find the tight integration of realtime and non-realtime processing musically compelling, and not easily available elsewhere. You can perform some rich, expressive sound manipulations live, capture the result into a Max buffer, and then refine and transform those materials in a more reflective, non-realtime mode inside the buffer. This is a nice way to quickly arrive at rich, expressive, refined sound materials for composition. I also use realtime/non-realtime combinations in live contexts. For example, in a Trio for flute, clarinet and computer, I capture a passage played by the flute and clarinet to a buffer, convolve it in non-realtime with a synthetic 30-second-long impulse response using a custom external, and deploy the resulting highly reverberated texture several bars later in the piece.
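
The convolution step Eric mentions is conceptually just the textbook sum y[n] = sum over m of x[m] * h[n - m], applied once over a whole buffer rather than inside a perform routine. A naive sketch with hypothetical names follows; a real external would use FFT-based fast convolution, since a direct loop against a 30-second impulse response would be extremely slow.

/* Naive non-realtime convolution over whole buffers: shows the math,
   not an efficient implementation. */
void convolve_buffer(const float *x, long xlen,
                     const float *h, long hlen,
                     float *y)   /* length xlen + hlen - 1, pre-zeroed */
{
    for (long n = 0; n < xlen; n++)
        for (long m = 0; m < hlen; m++)
            y[n + m] += x[n] * h[m];
}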

Anything else you would like to add? Any other Max-related projects we should look for in the near future?

There are new versions of FFTease and LyonPotpourri gradually approaching release. I have been exploring spatialization for large numbers of speakers, working at the Sonic Lab of SARC, the Klangdom of ZKM, and soon, ICAST. This work has required the development of new externals for flexible spatialization, including spectral spatialization. Those externals will be rolled into the next version of LyonPotpourri.

I have been commissioned by Marianne Gythfeldt to create a new work for clarinet and Max. I love creating music for classical instrumentalists with computer, because classical instruments have a rich, established vocabulary of effective performance strategies, while the computer is wide open for new ideas. This time out, I’m going to try to really blur the boundaries between the musical behavior of the clarinetist and that of the computer performer. Ideally it will sound like no one is fully in control of the situation.

Eric's Website

Eric's book Designing Audio Objects for Max/MSP and Pd

by David Zicarelli on February 22, 2013

jamie bell: Great interview :) I would love to get this book, if I can find a copy at the right price