The first version of MSP was released eight years ago — December 21, 1997 to be exact. As MSP is now as old as a child who can read and understand a few rudimentary swear words, I felt it was appropriate to reflect briefly on MSP’s past, present, and future.
The origin of MSP was the signal processing additions to Max that were part of the ISPW project at IRCAM. “Audio Max,” as it was known to aficionados, was actually an unofficial development by Miller Puckette that eventually became the standard tool for productions at IRCAM during the early to mid 1990s. I was at IRCAM during this time and was strongly impressed by the community of knowledge that had developed around Audio Max. Never having used the system myself, I listened in on conversations and looked at other people’s patches. When I first sat down and tried to make something, I remember thinking that this was definitely a system for smart people. My goal then, and now, was to see if there was a way to build some of the intelligence needed to use Audio Max into the software itself, to make it clearer how things worked (or, more importantly, didn’t work). I was also fascinated by the community of applied signal processing knowledge embodied by the people at the core of the project, such as Miller, Cort Lippe, Zack Settel, François Déchelle, and Les Stuck. I was too busy as a Max developer to ever rise to this level as a Max user, but I hoped I could at least pretend to be helpful to power audio users someday.
The more immediate problem was that special-purpose hardware was necessary to run Audio Max, while I was a Mac programmer and the Mac was still a lowly 68K machine. I managed to compile the source code (knowing that a faster PowerPC processor was on the horizon), but the machine was only able to produce a sine-wave oscillator. I added a multiply object and the machine choked.
After leaving IRCAM I was part of a synthesis project that used an SGI machine, which was at the time the computer of choice for doing anything interesting with audio. But this was dedicated software for additive synthesis, not a general-purpose architecture like Audio Max. As newer and faster PowerPC Macs began to appear, I kept getting e-mail from ISPW users such as Paul Doornbusch reminding me of their dream to have an ISPW-like system on their Mac with no special hardware. At this time, the first software that did “native” DSP directly on the processor started to appear, perhaps most notably the VST system from Steinberg. Combined with the introduction of audio cards and hacks to fool the operating system into real-time continuous sound processing, the pieces were starting to come together in late 1996 and early 1997. The PowerPC port of Max itself was released in December of 1996.
My first attempt at Audio Max on the Mac was simply to port the ISPW. Most of the work was interfacing the Mac “sound manager” to the signal processing code. In addition, the Mac would need a different model than the ISPW, one in which audio and timed events were processed together. On the ISPW, the NeXT cube’s 68040 held a copy of the patch for UI purposes. The patch was duplicated on one or more of the processors of the special-purpose DSP board, which alternated between processing audio samples and timed events. The Mac implementation on a single processor required both the real-time audio and the UI to live together. The Mac OS did not really support multi-threading at the operating-system level as OS X does today, so the implementation was largely dependent on audio hardware generating periodic interrupts.
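The alternation between computing audio samples and dispatching timed events can be sketched in a few lines. This is a minimal illustration of the general idea, not Max/MSP internals; the names (`Scheduler`, `tick`, `VECTOR_SIZE`) are invented for this example.

```java
import java.util.PriorityQueue;

// Sketch of a single-processor model: on each audio interrupt, fill one
// signal vector, then run every timed event that falls within that vector.
class Scheduler {
    static class Event implements Comparable<Event> {
        final long sampleTime;
        final Runnable action;
        Event(long t, Runnable a) { sampleTime = t; action = a; }
        public int compareTo(Event o) { return Long.compare(sampleTime, o.sampleTime); }
    }

    static final int VECTOR_SIZE = 64;          // samples computed per interrupt
    private final PriorityQueue<Event> events = new PriorityQueue<>();
    private long now = 0;                       // current time, in samples

    void schedule(long sampleTime, Runnable action) {
        events.add(new Event(sampleTime, action));
    }

    // Called once per audio interrupt (here the "synthesis" is just a sine).
    void tick(float[] out, float cyclesPerSample) {
        for (int i = 0; i < VECTOR_SIZE; i++)
            out[i] = (float) Math.sin(2.0 * Math.PI * cyclesPerSample * (now + i));
        now += VECTOR_SIZE;
        // Dispatch events whose time has now passed.
        while (!events.isEmpty() && events.peek().sampleTime < now)
            events.poll().action.run();
    }
}
```

The key property is that event timing is quantized to the signal vector: an event scheduled at sample 100 actually fires after the vector covering samples 64–127 has been computed.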
After being unable to reach agreement with IRCAM about licensing their software for a personal computer implementation, I decided to start over and base a new design on Miller Puckette’s new Pd software. This turned out to be a better solution in the end, as I was able to take advantage of features of the Mac version of Max that were lacking in the NeXT version used by the ISPW. For instance, this new version added the ability to have user interface objects that processed audio, so we were able to have meters, oscilloscopes, and the all-important button that would turn the audio on and off. Other improvements included the distinctive striped audio patch cords, the “auto-update” mechanism that made the audio reflect changes in patching immediately rather than the next time the audio was switched on, and the ability for objects to run different DSP code if signal patch cords were connected.
The name MSP was suggested by Christopher Dobrian, who wrote the documentation (and also wrote the first version of the Max documentation in 1990). MSP referred to “Max Signal Processing” and also happened to be Miller’s initials. As a bonus, it was the airport code for Minneapolis where I grew up. I can’t remember if I had another name in mind, but MSP was approved by all I consulted. Miller didn’t seem to mind either.
I sent out a few versions of MSP to people familiar with the ISPW such as Zack Settel and David Wessel. There was a real sense of excitement that a system previously requiring an obsolete computer and a minimum of $12,000 worth of hardware would become a $300 program on a computer you probably already owned. The main concern was “how much of an ISPW do you get,” and this was measured by seeing how many instances of the cycle~ oscillator object could be used at the same time. For the first few years of MSP’s existence, each new Mac model would rapidly be assessed in terms of how many cycle~ objects it could run. Eventually it just seemed like people didn’t care; the assumption was that the computer could run more than enough. At first, however, the efficiency of the computer wasn’t sufficient to implement everyone’s compositional fantasies, so a lot of effort was spent trying to optimize the code. I remember starting with about 20 oscillators and managing to increase the count to about 74 using a variety of techniques, including a special compiler from Motorola that, alas, is no longer supported.
If we fast-forward to the present, it’s interesting to note how the legacy of the ISPW in defining what MSP is and how it performs is gradually disappearing. MSP is becoming less a collection of oscillator objects than a framework for synchronous audio processing, incorporating other languages and systems that have been made to work within it. As with Pd, MSP was designed to be extended. While this was possible with the ISPW, the development tools tended to require the presence of one of the original ISPW team members. With Pd and MSP, source for new objects and programming information was included from the outset. MSP quickly began to interface to existing audio programming projects and existing audio software standards. Examples include support for ReWire and VST plug-ins, multiple audio driver standards on Windows, the open-source csound~ object whose development was supported by Cycling ’74, Brad Garton’s port of RTcmix, and Topher LaFata’s mxj~, included with MSP 4.5.5. mxj~ is interesting because it shows a higher-level model for developing signal processing objects, where the programmer can fill in a few Java classes and have a working object. It also supports a model for dynamic patching of DSP routines that we hope will be the basis of a future MSP. And unlike a traditionally written MSP object, when your Java DSP code in the mxj~ object crashes, the mxj~ object stops producing sound but the rest of your audio continues to play.
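The “fill in a few classes” model might look something like the sketch below. To be clear, this is not the actual mxj~ API (which lives in the com.cycling74 packages); all names here are invented to illustrate two ideas from the paragraph above: the author writes only a perform routine, and the framework isolates a crashing object so the rest of the audio keeps playing.

```java
// Hypothetical framework class: the author fills in only perform(),
// which is called once per signal vector.
abstract class SignalObject {
    abstract void perform(float[][] ins, float[][] outs);
}

// A complete "object" under this model: a gain stage is one loop.
class Gain extends SignalObject {
    float gain;
    Gain(float g) { gain = g; }
    void perform(float[][] ins, float[][] outs) {
        float[] in = ins[0], out = outs[0];
        for (int i = 0; i < in.length; i++)
            out[i] = in[i] * gain;
    }
}

// An object with a bug in its user-supplied code.
class Crasher extends SignalObject {
    void perform(float[][] ins, float[][] outs) {
        throw new RuntimeException("bug in user DSP code");
    }
}

// The framework catches exceptions per object: a crashing object is
// muted (its outputs zeroed) while the remaining objects keep running.
class Chain {
    static void run(SignalObject[] objects, float[][] ins, float[][] outs) {
        for (SignalObject obj : objects) {
            try {
                obj.perform(ins, outs);
            } catch (RuntimeException e) {
                for (float[] out : outs)
                    java.util.Arrays.fill(out, 0f);
            }
        }
    }
}
```

Running a chain containing a Crasher still produces the Gain object’s output, which is the crash-isolation behavior described above.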
For me, MSP represents the movement of a beautiful piece of software, Audio Max, into the larger world outside of a few wealthy institutions. This was largely made possible by the tremendous increase in signal processing power on personal computers over the past decade, but in terms of the software itself, there was also the need to increase the clarity, self-documentation, convenience, and power of the system in order for it to be accepted by an audience without direct connections to a central institution. This second aspect of the technological evolution of MSP is even more important now that we are starting to take the computational power of computers for granted. Powerful processors make it easier to imagine powerful software, but they do not automatically guarantee it. So what might make MSP more powerful in the future?
One of the areas in which I would like to see MSP expand in the future is handling the demands of real-time performances in which the configuration of the machine evolves while the performance is occurring. For example, perhaps you want your computer to be a psychotic guitar pedal at the beginning of your performance, but need to change it into a drunken speed synthesizer later on. How do you characterize this change in the software in a way that reflects your intentions? Another area that I want to work on involves interactive exploration of audio patches. When trying to figure out why a patch isn’t making sound, it would be wonderful to be able to “probe” each signal patch cord and find out the nature of the signal going through it. A third area of research is attempting to make the software embody the knowledge of experts. There are only so many ways to do DSP, and we believe the software should provide easy access to as many of these techniques as possible. We’ve started down this road with the template and prototype concepts in Max/MSP 4.5.5, but this is just the beginning. Making MSP more powerful requires that we look at the needs of people who are encountering the system for the first time in addition to those who have used it for the past decade (or longer). The problems are as much cognitive as they are technological, although for me, it’s often hard to separate these two aspects of software design.