The Poly Papers (part 1)
One of the more difficult things for many beginning Max/MSP users is dealing with polyphony. So I thought I'd write up a few articles about the poly~ object, and help take some of the heat off of a new user's head.
One thing that is important to understand about the MSP portion of Max/MSP is the creation of a DSP chain. When you create a patch containing audio functions, all of the signal-rate objects are used to create a chain of DSP functions that will execute once audio is activated. In order to maintain a stable audio path, this set of functions, called the DSP chain, is compiled at runtime, and cannot be changed without interrupting the signal path.
Unfortunately, polyphony requires changes in the signal path. With a polyphonic synthesizer patch, you will only want the voices that are currently sounding to be active (in order to save CPU usage). Under normal circumstances, this would require a constant recompilation of the DSP chain, which just wouldn't be good for business.
So, along comes the poly~ object. This object acts as a switching router for the DSP chain. When the DSP chain is executed, and it encounters a poly~ object, the currently active voices of the poly~ object are each executed - allowing the audio path to be altered in real time. Understanding this bit of magic is the key to understanding polyphony in Max/MSP.
A basic poly~ drone patch
In order to exercise the poly~ object, I've created a little patch that will drone out a number of triangle waveforms, with voices coming in and out randomly. The core of this patcher is found in the ppaper_sub01.pat patch, shown here:
There are three things to consider here:
Inputs and outputs: Rather than using the inlet and outlet objects, like you would in a subpatcher, you have to use the in and out objects (or in~ and out~ for audio signals) to pass information from poly~ to the subpatcher. In this case, we don't have any audio coming in, so a single in object is sufficient for our needs. Likewise, we only have audio going out of the subpatcher, so only a single out~ object will be used for this patch.
The Audio Engine: Not much going on here. A single tri~ object produces the sound, and a *~ object scales the output level to prevent output overload. Pretty simple, as befits a demo patch.
The Voice Muting System: This is particular to a poly~ implementation, and deserves a close look. Whenever the incoming on message is "0", the message "mute 1" is generated and sent to the thispoly~ object. The result is that all of the DSP functions for that voice are turned off. This way, we save the CPU use of this instance whenever it is not sounding - and that is the great advantage of using poly~ over simply placing six copies of the subpatcher in our main patcher. A rough text sketch of the voice subpatcher appears just below this list.
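In this sketch (and the ones that follow), [square brackets] stand for object boxes and (parentheses) for message boxes. The message parsing via route, the scaling amount and the exact connections are guesses of mine; the real wiring lives in ppaper_sub01.pat and may differ in the details:

    [in 1]                                   <- messages arriving from the main patch via poly~
    [in 1] -> [route on freq]                <- split "on" and "freq" messages (my assumption)
    "freq" value -> [tri~ 0]                 <- tune the triangle oscillator
    [tri~ 0] -> [*~ 0.25] -> [out~ 1]        <- scale the level (0.25 is a placeholder) and send it out
    "on" value -> [sel 0 1]                  <- watch the on/off state
    [sel 0 1] (0 outlet) -> (mute 1) -> [thispoly~]   <- "on 0": shut down the DSP for this voice
    [sel 0 1] (1 outlet) -> (mute 0) -> [thispoly~]   <- "on 1": wake it back up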
Now that we have a "voice" subpatcher created, we need to build a main patcher to host it. Here is a complete "main" patch:
Don't worry about all of the initialization stuff on the left hand side for now - instead, focus on the bottom of the patch. Here, we've instantiated a poly~ object with our subpatcher, and a voice count of 6 (since we will have six tones in the drone). Now let's take a look at that initialization mess.
In order to set up each of the voices for playback, we need to send messages to each one. When the loadbang fires (when you first load the patch), it sends its messages in right-to-left order. The first message, "target 0", tells poly~ to send subsequent messages to all voices; it is followed by "on 0", which turns off every voice. After that, we target each voice in turn with a "freq" message that tunes its oscillator. This sets up the voices for subsequent playback.
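Boiled down to text, the bottom of the main patch and its initialization amount to something like the following. The level scaling and the specific frequencies are placeholders of mine (the download contains the actual Cm13 voicing), and in the real patch the messages live in separate boxes that fire right-to-left; I have simply listed them in the order they need to arrive:

    [loadbang]
      -> (target 0, on 0)                    <- address every voice, then silence them all
      -> (target 1, freq 130.81)             <- voice 1 (example frequency)
      -> (target 2, freq 155.56)             <- voice 2 (example frequency)
      ... one target/freq pair for each of the six voices ...
    all of these messages -> [poly~ ppaper_sub01 6]    <- six instances of our voice patcher
    [poly~ ppaper_sub01 6] -> [*~ 0.5] -> [dac~]       <- overall level (a guess) and output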
The right-hand side of the main patch is the drone playback system, which randomly turns voices on and off. In order to see the generated messages, you might want to attach a print object to the output of the pack object - this way, you can watch the voices as they are affected. Turn on the dac~, then turn on the metro object - you should hear the Cm13 chord blip in and out as the voices are randomly turned on and off.
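There are many ways to build that playback logic; here is one plausible construction, where the metro rate and the exact message formatting are assumptions of mine rather than the wiring in the download:

    [toggle] -> [metro 250]                        <- the metro you turn on (rate is a guess)
    [metro 250] -> [t b b]                         <- trigger both branches, right outlet first
    right outlet -> [random 2]                     <- pick a state: off (0) or on (1)
    [random 2] -> right inlet of [pack 0 0]        <- store the state
    left outlet -> [random 6] -> [+ 1]             <- pick a voice, 1 through 6
    [+ 1] -> left inlet of [pack 0 0]              <- the voice number triggers the output
    [pack 0 0] -> (target $1, on $2)               <- address that voice, set its state
    (target $1, on $2) -> [poly~ ppaper_sub01 6]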
Getting voices to self-initialize.
This works fine, but that initialization code is a real mess. Wouldn't it be cool if the individual voices could just initialize themselves? Yeah - that's what I thought, too. So I created a variation of the subpatcher that uses a unique property of the thispoly~ object: a bang message sent to thispoly~ will cause the object to output the current voice number. From there, it is pretty simple to make the subpatcher tune its own oscillator.
Since no frequency information is coming from the main patcher, there is no need to route messages, so message handling becomes a lot simpler. Of course, all of this implies some changes to the main patch, which can be seen in this new version:
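Here is a sketch of one way the self-tuning can work. Mapping the voice number to a pitch with a select object and message boxes is my own choice, and the frequencies shown are examples rather than the exact Cm13 voicing in the download:

    [loadbang] -> [thispoly~]                      <- a bang makes thispoly~ report this instance's voice number
    [thispoly~] -> [sel 1 2 3 4 5 6]               <- one branch per voice
    each branch -> a frequency message, e.g. (130.81), (155.56), (196.0) ...
    that value -> [tri~ 0]                         <- the voice tunes its own oscillator
    (the "on"/mute handling from the first version stays exactly as it was)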
Much simpler, and a whole lot easier to visualize. In general, if you have voice initialization to do, you can save yourself a lot of grief by placing it in the voice patcher rather than having the main patcher send piles of messages.
Smooth it out.
While this patch is all nice and fine, it sure would be more interesting if the voices would fade in and out, rather than just appearing. So, placing a little logic into the subpatcher accomplishes that pretty easily:
This looks a bit more complicated than you thought it might, doesn't it? That's because of how the mute system operates: when you mute a voice, the audio stops dead. Therefore, when you turn off a voice, you have to make sure you delay the voice mute until the fade-out is complete. This is done by storing the 0 (using the set message) prior to starting the fade, then having the completion of the fade-out force the voice muting to occur. Since there is no similar issue on fade-in (as long as the voice is un-muted prior to the fade-in, everything will be OK), we don't have to worry about the timing of the "on" message.
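In sketch form, the fade logic works out to something like the following. The 250 ms fade time, and the use of a delay object to wait out the fade before muting, are my assumptions - the download may detect the end of the fade differently:

    [in 1] -> [t b f f]                            <- the incoming on/off value; outlets fire right-to-left
    rightmost outlet -> [prepend set] -> ( )       <- quietly store the new state in a message box
    middle outlet -> [sel 1] -> (mute 0) -> [thispoly~]   <- un-mute immediately, before fading in
    middle outlet -> ($1 250) -> [line~]           <- ramp the gain toward 0 or 1 over 250 ms
    [line~] -> right inlet of [*~]                 <- the ramp scales the tri~ signal on its way to [out~ 1]
    leftmost outlet -> [del 250] -> bang the stored message box    <- wait out the fade...
    stored value -> [sel 0] -> (mute 1) -> [thispoly~]             <- ...then mute, but only if we just faded out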
In order to add a bit of gloss to this version of the patch, I've added a simple digital delay subpatcher, along with a control for regeneration. Turn on the dac~, start up the metro and check out the difference a fade makes.
Conclusion
In this initial tutorial, we've seen how to create a poly~ patch that can prevent duplication within your patch system, and save CPU as part of the bargain. This concept can be extended to as many voices as you need, with only a change in the initialization scheme to support that expansion.
In our next tutorial, we will look into using the poly~ voice allocation mechanism for polysynth emulation, and also into some of the details for efficient voice programming. In the meantime, drone away!
DSP chain: A set of objects (or, actually, the functions they represent) that are compiled into a processing chain for realtime playback.
signal-rate objects: MSP objects that work on high-speed data. These are called signal-rate objects because they don't always pass audio; sometimes they carry high-resolution control values as well. Any objects that operate at signal rate are compiled into the DSP chain.
by Darwin Grosse on March 28, 2005 at 20:00