poly~ within poly~ for polyphonic granular synth

jonbash's icon

I'm attempting to build a polyphonic granular synth within Max at the moment, and it is giving me worlds of trouble and making my brain hurt.

I want the end product to be something where I can play my MIDI keyboard polyphonically... and for each note pressed, it would cycle through a series of instances of poly~, layering several 'grains' of a segment of one sample on top of each other to make a cloud of sound grains. I would call myself a beginner-intermediate user of Max; I've taken one single-quarter class on it and have been using it off and on for about two years, mostly for fairly simple stuff. This is probably my first foray into actually using the poly~ object (beyond making an uber-simple polyphonic synth).

Anyone have any advice or ways they might go about this? Thanks!

commathe's icon

Just a week or so ago, someone posted a series of fantastic bare-bones granular synths. I can't remember the user's name, and I can't find the thread. I took the liberty of attaching what I still have saved; I hope he comes forward so he can get the credit he deserves! There is also a granular synthesis patch in the example patches that is really great and has a ton of features. It's well commented. I used it to teach myself granular synthesis by reverse-engineering it.

I don't know about nesting poly~ objects inside one another, and I haven't yet tried it. I want to for the exact same reason as you though! Polyphonic granular synthesis would be a lot of fun. I know you can nest a pfft~ inside a poly~ and vice versa (there was a thread about it not so long ago) so I see no reason why you can't put a poly~ in a poly~.

5544.5410.aprilfreeze.zip
brendan mccloskey's icon

Hi
this is the latest incarnation of a LONG series of experiments in granular playback in Max; I'm glad it's found a home in someone's folder.

Two things I have learned:
- it's pretty much impossible (and fruitless) to try to improve upon Sakonda's original algorithm; I've merely added some extra touches (pitch/size independence, further blurring/smoothing);
- someone has always done it first and often better.

So, I'd recommend trying out Wolek, Sakonda, Robert Henke, Timo Rozendal, grainstretch and grainfreeze FIRST, to see if you get the results you want. Most of these will do excellent clouds. I can see no real need to use poly~ within poly~ to achieve what you want.

Brendan

tada's icon

Brendan, I really enjoyed your patch, though!!
(allpass still confuses me!)
Every patch has its own unique sound... in the past I have achieved some wonderful results without audio-rate, mind-blowing patches...
simple control-rate poly tricks...
jonbash, here's a very simple start for what you are asking for...

tada's icon

oops!

5545.poly4poly.rar
brendan mccloskey's icon

Hi tada,
Amplitude modulation artefacts are unavoidable when applying small amplitude envelopes to grains. That cycling, whirring graininess is also annoying, so I found that the allpass filters, used in reverb algorithms to blur transients, helped to 'soften the edges', as it were.
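For anyone wondering what blurring a transient with an allpass actually does, here's a minimal sketch of the idea in Python/NumPy rather than Max (the delay lengths and gain are arbitrary illustrations, not the values from Brendan's patch): a Schroeder allpass has a flat magnitude response but smears a click out in time, which is the 'edge-softening' effect being described.

```python
import numpy as np

def schroeder_allpass(x, delay_samples=113, g=0.5):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-M] + g*y[n-M].
    Flat magnitude response, but it smears transients in time."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        x_d = x[n - delay_samples] if n >= delay_samples else 0.0
        y_d = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = -g * x[n] + x_d + g * y_d
    return y

# A single click run through two allpasses comes out as a short, diffuse
# smear rather than one hard transient -- the 'softened edge' effect.
click = np.zeros(2048)
click[0] = 1.0
softened = schroeder_allpass(schroeder_allpass(click, 113, 0.5), 229, 0.5)
```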

Btw grainstretch is friggin awesome

Brendan

brendan mccloskey's icon

..and I agree with the control-rate comment too. Machinesleet on YouTube has a gorgeous grain-like playback engine, using dynamically variable line messages.

Ben Carey's icon

I started a similar topic with an attached patch that I think does what you're looking for, though as FM granular synthesis rather than sound-file granulation. It's a poly~ nested within a poly~, though I have issues with voice allocation in the parent poly~.

Feel free to download the patch and tinker with it, and if anyone has any ideas on how to solve my issue whilst they're at it, I'd much appreciate your thoughts!

Dave Mollen's icon
Max Patch
Copy patch and select New From Clipboard in Max.

This is how I've done it. I don't think this will work for a phasor-driven granular synth; I used it for a metro- and line~-driven granular synth. To make the voice allocation work, I activate a metro with a note-on and note-off. When I receive a note-off, I start measuring the amplitude from the nested poly~. When it goes silent, I mute the parent poly~.
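For anyone trying to follow that, here's a rough control-flow sketch of the same idea in Python (not the actual patch; the class, method names and threshold are just illustrative): keep the parent poly~ voice alive after note-off, watch the nested poly~'s measured amplitude, and only mute the parent once the grain cloud has actually died away.

```python
SILENCE_THRESHOLD = 1e-4  # hypothetical amplitude floor

class ParentVoice:
    def __init__(self):
        self.held = False      # is the note currently held?
        self.watching = False  # waiting for the grain cloud to die out?

    def note_on(self):
        self.held = True
        self.watching = False
        self.set_mute(False)   # un-mute / mark this parent voice busy

    def note_off(self):
        self.held = False
        self.watching = True   # start measuring the nested poly~'s level

    def on_amplitude(self, peak_amp):
        """Called periodically (e.g. by the metro) with the measured
        peak amplitude of the nested poly~'s output."""
        if self.watching and peak_amp < SILENCE_THRESHOLD:
            self.watching = False
            self.set_mute(True)  # grains have faded: free the parent voice

    def set_mute(self, muted):
        print("mute" if muted else "unmute")  # stand-in for a mute message
```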

hzd's icon

Incredible!!!!

I've been dreaming of a granular synth that I trigger with the multiXY in TouchOSC, so I can choose different grains from the same buffer and manipulate them individually (position and grain size), and hey presto, I have it.

Thanks to you guys, and grainstretch~

I'll have to tidy the patch up a bit before I can post it here, but I'll get on it.

Peter McCulloch's icon

I agree with Brendan: avoid the poly~ within a poly~, since it's going to add another layer of control complexity that will probably limit your musical flexibility. (This assumes the control is done at control-rate, not signal-rate.)

Some strategies/ideas I've found helpful:

  • The grain voice itself (the patch that's loaded into poly~) should have very little control logic inside of it--make it as dumb and general as possible, so avoid using random/drunk/itable etc. inside the poly~ voice (definitely use them outside, though!). You can do all of these via messages and it makes it much easier to have multiple streams of grains going at once.

  • Use the "note" message with poly~ to send big lists of parameters. Use zl.nth/zl.slice/zl.ecils inside of the patch to grab elements that aren't in their incoming unpack right-to-left ordering. Avoid the "target" message unless broadcasting to all voices.

  • Make sure that thispoly~ is getting the envelope inside the patch. It's going to allocate voices better than you can.

  • If you are using buffer~ for your envelopes, make your envelopes longer (~1 second) and turn off interpolation for the envelope since you've basically pre-interpolated it. (I usually use wave~) It saves a little CPU.

  • Prefer higher level random objects/algorithms over plain random. Drunk and itable will give much more interesting results than random ever will. You might also look at Peter Castine's litter objects for more control over random, or the randomvals object which lets you control gaussian-distributed random values. Experiment with sequential/semi-sequential fading into random values.

  • Try logarithms/exponents for parameter distribution. Random amplitude sounds quite different when done in decibels, for instance (see the sketch after this list).
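Not Max, but here's a small Python sketch pulling a few of those ideas together (all names, ranges and the sample rate are invented for illustration): a long pre-computed envelope table, gaussian rather than flat random for grain position, and amplitude randomized in decibels, all generated outside the voice and sent along as one "big list".

```python
import random
import numpy as np

SR = 44100

# Pre-computed grain envelope: one long (~1 s) Hann window stored in a
# table, analogous to filling a buffer~ once and reading it without
# interpolation inside each grain voice.
env_table = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(SR) / SR)

def db_to_amp(db):
    return 10.0 ** (db / 20.0)

def grain_params(center_pos_ms, spread_ms=20.0):
    """One grain's 'big list' of parameters, generated outside the voice."""
    # Gaussian jitter around a centre position reads very differently from
    # flat/uniform random (cf. drunk, litter, gaussian random objects).
    pos = center_pos_ms + random.gauss(0.0, spread_ms)
    dur = random.uniform(30.0, 80.0)            # grain length in ms
    # Randomising amplitude in decibels, then converting to linear,
    # sounds different from randomising the linear amplitude directly.
    amp = db_to_amp(random.uniform(-18.0, 0.0))
    return [pos, dur, amp]

# Each trigger sends the whole parameter list with the grain,
# so the voice itself stays dumb and general.
print(grain_params(500.0))
```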

I also highly recommend Curtis Roads' book MicroSound.

Peter McCulloch's icon

@Brendan:

Cool trick on the voice vs. amplitude management via peak!

Can't figure out how to PM on the new forum software, but had some ideas for your patch that might improve performance.

  • You could move the pink~ outside of the poly~ patch (and use in~ to talk to it). That way you only have one instance of it running instead of 16. If the possibility of the voices grabbing the same value is bad, you could use delay~ and the voice number to stagger the input accordingly (see the sketch after this list).

  • It works fine as it is, but a useful trick with click~ and sah~ is to set it to use a two-sample impulse via the "set" message. I like to use absurd values like "set -99999. 99999." to make sure that when it's triggered, I'm sure to get a resample (if other signals are being summed with the click~, for instance).
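Here's a toy Python illustration of the first point (not Max, and purely a conceptual stand-in): one shared noise source feeding all voices, with each voice's copy staggered by its voice number so simultaneous sample-and-holds don't all latch the identical value.

```python
import numpy as np

NUM_VOICES = 16
BLOCK = 64  # hypothetical signal vector size

# One shared noise block standing in for a single pink~ outside the poly~,
# rather than sixteen separate noise generators inside it.
shared = np.random.uniform(-1.0, 1.0, BLOCK + NUM_VOICES)

def voice_block(voice_number):
    """Each voice reads the shared signal delayed by its own voice number
    (the delay~-by-voice-number idea), so sample-and-holds firing at the
    same instant in different voices latch different values."""
    return shared[voice_number : voice_number + BLOCK]

# Voices 1 and 2 now see different samples at the same moment.
print(voice_block(1)[0], voice_block(2)[0])
```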

Max Patch
Copy patch and select New From Clipboard in Max.

And now, some stupid envelope trix:

Floating Point's icon

Hey Peter, great set of guidelines there.

Are you able to explain this line a little more?

Avoid the “target” message unless broadcasting to all voices.

The target message is used to target params for a particular voice (poly instance), so why would you want to broadcast to all voices using the target message?

I'm sure I'm missing something obvious here.

T

brendan mccloskey's icon

Hi Peter

thanks for continuing to pass your experienced eye over this glacially-developing synth.

Brendan

Peter McCulloch's icon

@Floating Point: What I meant regarding "target" is that if you're not targeting all the voices (target 0), you're more or less responsible for voice allocation since you have to know which voice is going to get the value, which you should probably let poly~ handle. This is totally fine for voices that are on all the time; if I'm building a vocoder in poly~, for instance, I'll use target to pass the frequencies to the individual voices.

On the other hand, it doesn't work nearly as well for a synth or granular sampler because you're reusing voices with different values. If voice 1's ADSR envelope is 1., 150., 0.5, 200 and voice 2's ADSR envelope is 10000., 10000., 0.2, 5000., you're going to get completely different results depending on which voice gets the note from poly~. If you send the ADSR information with the note, however (what I term the "big list" strategy), it doesn't matter, since the voice will always have the appropriate envelope that goes with the note. Since you're getting parameters at a note boundary, you are also less likely to need interpolation for volume, etc., and you know the order in which things will arrive; these are very good things. I always try to think about "do I need to know who's playing this?" when designing a poly~ patch. Usually, if it has note-like events, the answer is no.
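To make the contrast concrete, here's a toy Python model of the "big list" idea (invented names, and not how poly~ itself is implemented): because every note travels with its own parameter list, it doesn't matter which free voice the allocator happens to hand it to.

```python
# Toy model of poly~-style voice allocation with the "big list" approach:
# every note carries its own parameters, so the result is the same no
# matter which voice plays it.

class Voice:
    def __init__(self, index):
        self.index = index
        self.busy = False

    def note_on(self, pitch, velocity, attack, decay, sustain, release):
        self.busy = True
        # The envelope always matches this note, regardless of voice index.
        print(f"voice {self.index}: note {pitch} ADSR "
              f"{attack}/{decay}/{sustain}/{release}")

voices = [Voice(i) for i in range(1, 9)]

def allocate(big_list):
    """Hand the whole parameter list to the first free voice."""
    for v in voices:
        if not v.busy:
            v.note_on(*big_list)
            return v

# Two notes with very different envelopes: whichever voice is free gets
# the right envelope, because it travels with the note.
allocate([60, 100, 1.0, 150.0, 0.5, 200.0])
allocate([64, 90, 10000.0, 10000.0, 0.2, 5000.0])
```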

Also, if you are using target 0, you could use send/receive just as easily (if not more easily). Target 0 makes sense if you're trying to limit the scope, or if you could be running multiple instances of the same voice patch within different poly~ objects. I tend to prefer send and receive because I always know that the message is going to all of the copies--it's helpful for students because it's one less thing to forget, and they can always double-click the send/receive to see where it's going. I also generally avoid having more than one "in" to my poly~ voice patch, unless I'm just distributing data to static voices, and use route instead.

There is one little problem with the "big list" strategy having to do with the midinote message. It's a pain to make sure that parameters don't change between the attack of the note and the release. (especially if you're randomizing/scaling things based on velocity, etc.) You definitely don't want to change the ADSR sustain value when releasing the note, for instance, since that sudden change will cause a click.

Here's an abstraction (PM.PolyList) that I use within voices that are being addressed with the "midinote" message. (which tells poly~ "handle voice allocation using note-ons and note-offs")
It passes pitch and velocity out the two left outlets for note-ons and note-offs; all the extra data is only passed on note-ons.

Max Patch
Copy patch and select New From Clipboard in Max.

This is certainly not the only way to solve these problems with polyphony, but it's been reliable for me and has meant that I can use poly~ more than I might have otherwise. Anyway, that's probably enough pontificating for one night...

Floating Point's icon

Thanks Peter for your erudite and pithy explanation. I tend to use the so called big-list approach as a 'default' method for my patches in general, but particularly for granular stuff. After reading your post I may well try to coerce my existing granular patches into a note-mode style reception.

At the moment I just leave all voices 'on' and do the voice allocation manually using the target message (cycling through from 1 to n). Not the most efficient approach, I'd have to say... :-)