My mental poly~ block

May 10, 2009 at 1:34am

I’ve been “new” to Max for about 5 months now, and I’ve had a great deal of fun making monophonic synthesizers, crazy noise generators, et cetera. I’ve strictly avoided the poly~ object though, and for one reason: I can’t fucking figure it out. Every time I’ve tried to make a polyphonic synth, it fails miserably. No matter how many times I read over the tutorial, the reference page, or the help file, nothing that I try to make ever works.

The other day while I was in class, I drew out a synthesizer on paper that I’d really like to make, but it makes very heavy use of the poly~ object. When I try to realize it within Max, nothing ever happens. So I humbly ask all you Max veterans, please help me.

I’m not asking you to program my entire synthesizer, just to give me some sort of framework that I can adapt to what I need. It’s an additive synthesizer with 96 voices. Two inputs need to be directed to all instances at all times: one is an input frequency, and the other is a resonance value for a [filtercoeff~]. I also need to control the amplitude of each voice with a 96-bar multislider.

The idea is very much inspired by that picture on the Max/MSP Wikipedia article. In the discussion thread about it, someone suggested that the ridiculous array of toggles was the partials table of an additive synthesizer… regardless of what it really is, I’d like to know what that would actually sound like.

If anyone can help me figure this out, I’d greatly appreciate it… and if you have any suggestions for a better approach, that’d be great too.

#43761
May 10, 2009 at 5:06am

I think that the key to poly~ is using adsr~ inside of it. thispoly~ is designed to work with adsr~ and makes voice allocation a snap. poly~ usually needs a “midinote” or “note” message followed by some note data; in my patch below the voice allocation is handled by the multislider, so neither of those is necessary. Notice how CPU usage increases as you turn up more of the multislider’s bars. The target message sent to the first inlet tells which voice should receive the next incoming message, and “target 0” sends the next message to all voices. Notice how I use target 0 to get the frequency of the oscillator to all voices, and target $1 to get specific slider values to specific partials (voices). Hopefully this will get you started. I didn’t do the filter part of it (but I kinda like the aliasing you get with high frequencies). It also gets really loud really quickly and could use some work there.
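
To make the target routing concrete, the messages arriving at the poly~ look roughly like this – the numbers are placeholders for illustration, not values taken from the pasted patch:

“target 0, 220.”    (target 0 routes what follows to every voice, so the 220. reaches each oscillator)
“target 12, 0.8”    (target 12 routes what follows to voice 12 only, e.g. that partial’s amplitude from the multislider)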

First save this as add96.maxpat

[Pasted Max patch]

then save this as whateveryouwant.maxpat

[Pasted Max patch]
#157148
May 10, 2009 at 6:39am

Using poly~ is first of all not different from using a normal subpatcher. So before you start using poly~, make your monophonic synth a patcher, load it into a new patcher, and connect your controls (interface elements? midiin?) until it works the way you want it to.

Step two is using a [poly~ mysynth 1] instead of a [mysynth] – until it works.
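
For reference, a bare-bones voice patch for this step might look something like the sketch below – stock Max objects and an arbitrary test gain, not a patch from this thread:

[in 1] → [cycle~] → [*~ 0.1] → [out~ 1]

[in 1] receives the frequency as a plain float, [cycle~] is one sine partial, [*~ 0.1] is a fixed gain just for testing, and [out~ 1] carries the signal back to the parent patch. In a poly~ context the [in]/[in~] and [out]/[out~] objects define the inlets and outlets, taking the place of [inlet] and [outlet~].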

Step three is making a [poly~ mysynth 16]. At first I would recommend against using automatic voice allocation. For the moment, send data to your synth (say “noteon”, or “bang”) via [prepend target 1] and [prepend target 2] and see if you get that to work. Open the windows for voices 1 and 2 (as described in the help file) and maybe put a meter~ or a message box in your synth so that you have visual control over what’s happening inside.
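
For that test, two message boxes into the left inlet of [poly~ mysynth 16] do the same job as the [prepend target] route – just a sketch, assuming the voice patch simply expects a frequency at [in 1]; the values are arbitrary:

“target 1, 440.”    (voice 1 gets 440.)
“target 2, 660.”    (voice 2 gets 660.)

If the meter~ in voice 1’s window moves on the first message and the one in voice 2’s window moves on the second, the targeting works.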

There are many ways of doing voice allocation, and many ways to make voices turn off when not needed. Of course you could also just leave unneeded voices running.

The main caveat with poly~ is that you cannot easily get a separate signal into (or out of) each individual voice – a signal connected to an [in~] goes identically to every instance. So always build all your signal processing into the synth patch itself.

#157149
May 10, 2009 at 8:04am

Thank you for the responses so far – you’ve given me a good start… but I have a question about the [in]s and [out~]s: if I have an [in 1] and an [in~ 1], shouldn’t I be able to send both messages and signals to the same inlet on the poly~?

Also, what messages are reserved for the [poly~] object itself, and what actually gets routed? From what I understand, if I send a “note xx xx” message, the poly~ will dynamically find a voice, and “target xx” will manually select a target… err, I’m tripping over my own shoelaces here.

I’m aware that an [in~] will route a signal to each voice, which is exactly what I want to happen in this case. But how does [poly~] handle Jitter matrices? Could I send the list of amplitudes as a matrix? Would it be distributed to all targets the way a signal is? Or is the [in] specifically for messages routed to a particular voice?

If I set a target for an inlet, is it applied only to that inlet? Is it only for the next message, or does it keep that target until it’s reset?

I’m sorry for asking questions that could be answered by looking through the documentation, but for some reason I’m just too thick to grasp any of it unless I can ask and be answered.

#157150
May 10, 2009 at 9:14am

Quote:
if I have an [in 1] and [in~ 1], shouldn’t I be able to send both messages and signals to the same inlet on the poly?

Yes, they will refer to the same inlet.

Quote:
but how does [poly] handle jitter matrices? could I send the list of amplitudes as a matrix? would it be distributed to all targets like a signal is?

The problem is that INSIDE the poly~ you can NOT send messages to specific voices anymore (because there is no “inside the poly~” anymore; there is only “inside the instances” when it is a poly~ 16). :)

This must be taken into account when using variables, receives, jit.matrix and all this stuff.
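
(For the 96 amplitudes in particular, one workaround – just an idea, not taken from the patches in this thread – would be to broadcast the whole multislider list to every voice with “target 0” and have each instance pick out its own element, for example by using the voice number reported by [thispoly~] as the index for a [zl nth]; check the zl and thispoly~ reference pages for the exact inlets to use.)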

Personally I don’t use automatic allocation, because it is very confusing when you need to do “target” stuff anyway.

Quote:
if I set a target for an inlet, is it applied only to that inlet? is it only for the next message, or does it keep that target until it’s reset?

Good question. The answer is no: the target message is independent of what follows. It only tells the poly~ OBJECT to which instance of the poly~ PATCHER input will be sent (from now on).

So when “target 3” is sent to inlet 5, all inlets of the poly~ are now routing all input to instance 3, until a new target is specified.

examples:

• change the filter setting for voices 3 and 4:

“target 3, 20 Hz, target 4, 20 Hz”

• change the filter, the gain, the envelope and the wavetable for voice 7:

“target 7, 20 Hz, gain 0.5,
(and then continue sending messages, maybe into another inlet)
0 4. 2. 0.5 1., wavetable 51”

Quote:
I’m sorry for asking these questions which could be answered by looking through documentation

or not.

-110

#157151
May 10, 2009 at 9:49am

I’ve got it working almost perfectly, thanks to your guidance… and I’d be happy to share the end result when I’m finished. But there’s one issue that doesn’t make any sense to me. When I send the message
“mute $1 $2, target $1, bang”
to inlet 1 of the poly~, the target voice never receives the bang.

#157152
May 10, 2009 at 8:08pm

quote:

“I think that the key to poly~ is using adsr~ inside of it. […] Hopefully this will get you started.”

This is great stuff!!

Instead of using filters, would it be possible to give each partial a kind of exponential amplitude decay, so that the harder I press the MIDI keyboard the longer the partials decay, and so that higher partials decay faster than lower ones, kind of like a lowpass filter? It would also be awesome to do the same for partial panning, controlled by MIDI velocity. Pan, decay and velocity would be great :)

I’m just thinking… :)

Thanks for this patch, great!

#157153
