poly~ within poly~ for polyphonic granular synth
I’m attempting to build a polyphonic granular synth within Max at the moment, and it is giving me worlds of trouble and making my brain hurt.
I want the end product to let me play my MIDI keyboard polyphonically… for each note pressed, it would cycle through a series of poly~ instances, layering several ‘grains’ of a segment of one sample on top of each other to make a cloud of sound. I would call myself a beginner-intermediate user of Max; I’ve taken one single-quarter class on it and have been using it off and on for about two years, mostly for fairly simple stuff. This is probably my first foray into actually using the poly~ object (beyond making an uber-simple polyphonic synth).
Anyone have any advice or ways they might go about this? Thanks!
Just a week or so ago, someone posted a series of fantastic bare-bones granular synths. I can’t remember the user’s name, and I can’t find the thread. I took the liberty of attaching what I still have saved; I hope he comes forward so he can get the credit he deserves! There is also a granular synthesis patch in the example patches that is really great and has a ton of features. It’s well commented. I used it to teach myself granular synthesis by reverse-engineering it.
I don’t know about nesting poly~ objects inside one another, and I haven’t yet tried it. I want to for the exact same reason as you though! Polyphonic granular synthesis would be a lot of fun. I know you can nest a pfft~ inside a poly~ and vice versa (there was a thread about it not so long ago) so I see no reason why you can’t put a poly~ in a poly~.
this is the latest incarnation of a LONG series of experiments in granular playback in Max; I’m glad it’s found a home in someone’s folder.
Two things I have learned:
- it’s pretty impossible (and fruitless) to try to improve upon Sakonda’s original algorithm; I’ve merely added some extra touches (pitch/size independence, further blurring/smoothing);
- someone has always done it first and often better.
So, I’d recommend trying out Wolek, Sakonda, Robert Henke, Timo Rozendal, grainstretch and grainfreeze FIRST, to see if you get the results you want. Most of these will do excellent clouds. I can see no real need to use poly~ within poly~ to achieve what you want.
Although, Brendan, I really enjoyed your patch!!
(allpass still confuses me!)
Every patch has its own unique sound… in the past I have achieved some wonderful results without audio-rate mind-blowing patches…
simple control-rate poly tricks…
jonbash, a very simple start for what you are asking for…
Amplitude modulation artefacts are unavoidable when applying small amplitude envelopes to grains. That cycling whirring grainy-ness is also annoying, so I found that the allpass filters, used in reverb algorithms to blur transients, helped to ‘soften the edges’ as it were.
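The allpass trick mentioned above can be sketched outside Max too. Below is a minimal Python sketch of a first-order Schroeder allpass (the building block used in reverb algorithms); the function name, delay, and gain values are illustrative assumptions, not anything from the patch:

```python
def allpass(signal, delay=44, gain=0.5):
    """First-order Schroeder allpass: y[n] = -g*x[n] + x[n-D] + g*y[n-D].

    Passes all frequencies at equal magnitude but smears phase over time,
    which softens the clicky transients of short grain envelopes.
    """
    out = [0.0] * len(signal)
    for n in range(len(signal)):
        x_d = signal[n - delay] if n >= delay else 0.0  # delayed input
        y_d = out[n - delay] if n >= delay else 0.0     # delayed output
        out[n] = -gain * signal[n] + x_d + gain * y_d
    return out
```

Feeding an impulse through this shows how the energy gets spread out in time rather than arriving all at once, which is exactly the “softening the edges” effect.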
Btw grainstretch is friggin awesome
…and I agree with the control-rate comment too. Machinesleet on YouTube has a gorgeous grain-like playback engine, using dynamically variable line messages.
I started a similar topic with an attached patch that I think does what you’re looking for, though as fm granular synthesis rather than sound file granulation. It’s a poly~ nested within a poly~, though I have issues with voice allocation in the parent poly~
Feel free to download the patch and tinker with it, and if anyone has any ideas on how to solve my issue whilst they’re at it, I’d much appreciate your thoughts!
This is how I’ve done it. I don’t think this will work for a phasor-driven granular synth; I used this for a metro- and line~-driven granular synth. To make the voice allocation work, I activate a metro with a note-on and note-off. When I receive a note-off, I start measuring the amplitude from the nested poly~. When it goes silent, I mute the parent poly~.
----------begin_max5_patcher---------- 1475.3ocyZsriihCEccxWgEqqtDFBjjVylY9Blsi5dTIGvEwcC1HrS8XZ002 93G.ARBDGThSsInbswbtGee4K7q4y71vdCy8.eE7MvrY+Z9rYZQJAyp++LuB zaI4HtdZdIrhBLU38fYLA9MgVNWfpDfrJDcWNpBvemJ11LomYTAm7eX0Dg9O 5WKltqfPywB8BC2KjsSzHsYpkHQxVBM6oJbhv.WXbjbk.vPyE+E5KqdzG7u0 2DIUiL1le7kUcgBEUngh2eVQP4f+hkm1NbU1F8y8QekfeOet5mGrjan3WkOr inlBrnhIQnu+YHDihKduDaTQuMHZlWq9ziuBNIeAGgupIJCsEGNFcEeN5ZBj ifkkki8FRUIRapSqovopolKA9M+dJMcgW65TI0SAt5ILEsIWCqoXCLj+wNZw NAF75VLEf.Tl.+EFEP3.ItwjWvo2Tmkv0ZVHzWeAtb4X69gKch2xPL0ddpjk +9GfuSyXXNfSx6L6bBEmv1QEc8EtMwYVYntP8k0ZlKX8.LWzck4XkRRKC0Pe s1YO+r6LzBCdLRxVKMtfqCF0PK7dFVtD7OrcU+sxF6RiKyIYTT90Nd0ZSj4U iQYAqsjxzOGbUMsTyKx6R5E8BthSXzNnZlGprri3YctEEY9CldghenUDgZDA aEUgegb3xJW2Jo1Jjp5tJCReaUrmYP4F37lasgJhLNY5b4wKz+YwpNgtka0Y 4rjehS6XGNSa3SnkUXtzs.IpQQ6vo3mQ6xEOc5s29i+LJAO3MOLeOyKqhjxn Jjz61UhadlxsZyNcTWMROCJp7D2rrnJwN9FTkhZqSJEzLnfwx6OT6toze.QI ExHABhAwA9sKJonrhXBb1JyjyaKOohkm2aoLi7xIFIUtgmfekjJKzy3ateD4 zIkM6CdsbTJICyE8kIPY79R5USZWKwtN58jefCu1WrY4NegUG6B62eficiGx UNpNMqNIQTyu8dPcblgcwXSPcMUzHVGoq1YYhjQuneGTyNI6itCLf+gMg.GL L3T4PnoTkP3gQCOJh3hCUggJX8JQnFMw6x0aeqz6kqry3IX4kX8rObipxoA7 tzfRM9oYAtLcYRy9eC0C5CPoqufPaCA+s8X8fItkjl1OZmI.AWEtIss56iTi 4cSc3wQxBZdR9Hjr3SHgnhrQV8H+vrdWk.7mOKxED06jH4bHXnmrLk3FTdcF +1E3DQXmumNuN0PkyPo5XoWuC1dwkOEzqjynnwJeBt5Fbx1BLmixvm9fLW7Y 9uVm2OL1T9julPVXNue7.zxxa.sLfIi.r4dZsTSK0mDNXz1f.CcGsvw4fo0e nGtPalfQHmZVw3DENN436Px4UT4kxMpRbeXr9JMU1wzJknQagR.zcjyNZIJ4 m.Bfb0oH3DonXanHGZ+Pn.3UJN7kGvwu6YBF8v8qccxI3cK4ju8ImhbXtosD d4TZHzMySZQnEU0bKRSMTOFSwpt4X5FaBF7bEq.7c56xiF.jmqPfSA8HP20m 1.ecYf0cQC5OZiZCbRiGGx26d61U2KaScyCVSXrCiFc+pRdQj0LRfCKGT0E+ 6sUh4.U0bxPYwcXz4JTQIufwDa+.3CBVLgWn5nMtObhMtutbYnxtYDlJ1cLE ZC+yw6zHvzGqvQe47AqbGyfSyveL0iYcwmCM3rDTcfGyq5Xv76NrT4+3CUxN ee3U09YpesBAq0kFFMpmU3kbXK8BzuInF15vleVyYG2zyvt8xbfFd1qwnG2q yi6y4Qad1BGUZ5OQvwFzndGlNBNKr.NwNCMJC0yBGkytiviJv642sbH+nTc3 4vSTuIcS4GarlOXS8lhmHKvSjyfiMnIzcjiuMFyP2hGnM3wMFyqsgdbWrGnM aWvX2hmyscAcm4LzlxLBbn6kM4tfNL2kU7Szmq5vbIdrgefKcGdV9oJaAb0m 
pToAVEcd0Dwi4LOG7I9ofxAeZeG7Y8c7mz2PeNexmyum++fbDCvL -----------end_max5_patcher-----------
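The voice-freeing logic described above (watch the nested poly~’s amplitude after note-off, then mute the parent voice once it falls silent) can be sketched at control rate. This Python sketch is purely illustrative; the class name and threshold are assumptions, not Max objects:

```python
SILENCE_THRESHOLD = 1e-4  # amplitude below which a voice counts as silent

class Voice:
    """Control-rate model of one parent-poly~ voice."""

    def __init__(self):
        self.note_held = False
        self.muted = True

    def note_on(self):
        self.note_held = True
        self.muted = False  # unmute the voice and start the grain metro

    def note_off(self):
        self.note_held = False  # begin watching the amplitude

    def amplitude_report(self, amp):
        """Called with each peak-amplitude reading from the nested poly~."""
        if not self.note_held and amp < SILENCE_THRESHOLD:
            self.muted = True  # equivalent to sending 'mute 1' to thispoly~
        return self.muted
```

The key design point is that muting waits for both conditions: the note must be released *and* the tail must have decayed, so grains never get cut off mid-release.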
I’ve been dreaming of a granular synth, that I trigger with the multiXY in TouchOSC, so I can choose different grains from the same buffer and manipulate them individually (pos and grain size) and hey presto, I have it.
Thanks to you guys, and grainstretch~
I’ll have to tidy the patch up a bit before I can post it here, but I’ll get on it.
I agree with Brendan: avoid the poly~ within a poly~, since it’s going to add another layer of control complexity that will probably limit your musical flexibility. (this is assuming that the control is down at control-rate, not signal-rate)
Some strategies/ideas I’ve found helpful:
- The grain voice itself (the patch that’s loaded into poly~) should have very little control logic inside of it–make it as dumb and general as possible, so avoid using random/drunk/itable etc. inside the poly~ voice (definitely use them outside, though!). You can do all of these via messages and it makes it much easier to have multiple streams of grains going at once.
- Use the "note" message with poly~ to send big lists of parameters. Use zl.nth/zl.slice/zl.ecils inside of the patch to grab elements that aren’t in their incoming unpack right-to-left ordering. Avoid the "target" message unless broadcasting to all voices.
- Make sure that thispoly~ is getting the envelope inside the patch. It’s going to allocate voices better than you can.
- If you are using buffer~ for your envelopes, make your envelopes longer (~1 second) and turn off interpolation for the envelope since you’ve basically pre-interpolated it. (I usually use wave~) It saves a little CPU.
- Prefer higher level random objects/algorithms over plain random. Drunk and itable will give much more interesting results than random ever will. You might also look at Peter Castine’s litter objects for more control over random, or the randomvals object which lets you control gaussian-distributed random values. Experiment with sequential/semi-sequential fading into random values.
- Try logarithms/exponents for parameter distribution. Random amplitude sounds quite different when done in decibels for instance.
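Two of the ideas above (the big-list "note" message and decibel-domain randomness) can be sketched in a few lines of Python. The field layout and function names here are invented for illustration; the point is that every per-note parameter travels with the note, and that uniform randomness in dB sounds nothing like uniform randomness in linear amplitude:

```python
import random

# Invented field layout for the big list; a real patch defines its own.
FIELDS = ("pitch", "velocity", "grain_size_ms", "position", "amp_db")

def parse_note(big_list):
    """Split a flat parameter list the way zl.nth / zl.slice would."""
    return dict(zip(FIELDS, big_list))

def random_amp_db(lo_db=-24.0, hi_db=0.0):
    """Uniform randomness in decibels, converted to linear amplitude.

    Perceptually much more even than random.uniform(0.0, 1.0) on
    linear amplitude, which clusters near 'loud'.
    """
    db = random.uniform(lo_db, hi_db)
    return 10 ** (db / 20.0)
```

Because the voice only ever sees one flat list per note, it stays "dumb and general": all the interesting randomization happens outside, before the list is assembled.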
I also highly recommend Curtis Roads’ book Microsound.
Cool trick on the voice vs. amplitude management via peak!
Can’t figure out how to PM on the new forum software, but had some ideas for your patch that might improve performance.
- You could move the pink~ outside of the poly~ patch (and use in~ to talk to it). That way you only have one instance of it running instead of 16. If the possibility of the voices grabbing the same value is bad, then you could use delay~ and the voice number to stagger the input accordingly.
- It works fine as it is, but a useful trick with click~ and sah~ is to set it to use a two sample impulse using the "set" message. I like to use absurd values like "set -99999. 99999." to make sure that when it’s triggered, I’m sure to get a resample (if other signals are being summed with the click~ for instance)
And now, some stupid envelope trix:
----------begin_max5_patcher---------- 744.3oc4WtsbaBCDF9Z7SACW1w0CRbvPuJ8x9LzoSFYXisxfkXDJt4vj7rWw B1A2ZaTcHD2o2HBq2n8mO9WIwSSb7VHuGp7b+h62ccbdZhiCFpNfS68NdqY2 mUvpvz7DvOkKt0aZyOog60X3O8x1P2HEZAaMfg+phyJ19Kh6VyEEfFmHZmzq 3OhoSny7aCWpfJPnYZtTbsBxzMZLjDYRwMfhW7aGb+wqUPdmdaIHsQaBoenD ZlDuJ9RgQV692JY5rUbwx8JTRmBEPmEM0k1sT7b7AzvhOS88pi87jI0CSsDj UYxRXG0zJ9xkfpqrOHtNJWRQcljZAW76TyBXCTfAm4S5GGMyOIve2kc0PwDK aA7mMwM7hXFODuHomCutoPZdL52kUxTl3ZPcMHXKJftOw6gTxocf6yL5w7RF YwzdSc8VXd9OokhlhdoHjjQHAOlihj7u.gNlWLsNE23DKrhC.V2qb8f03+KM dsMuVQnnO7kxnw36y.R36yRYz3zAborfygWizdnC2tgsLilD1+tgD54fjL45 0FyvevjuI1.JMj6pUrR3QIOmU3BhMy5GU2nVtn0N7l6B8ONahCZYC1dQ5o+h 7weTAJsQnyee5uhoo01i1922X+U5.1dUthUIUuXjezEaeVyQDRiqGmGdJmTx .Rlc8V0vgFYFlG4d0Jt4ObupPhA+KQVvHiLRXSeG4jseyOGnAOlyxd4jFCqW rfj1zNDiWBi1NdHwFOdKU7la5I9QC2lpC4dpYOjU.W76qFRe0NbzsN5RELAu Bt32+3crH0w2GUUx6TYaEV62rZV2eWcxgJMWfaUzIIyGp0MoU77bPz0gjyqp OoKJR+C9RyV8XN0P+5AEM4BSOiDeRrPOIimbhsPNyM4PGG4DYobtfLOlOcXz daYibniGdrUNiCcRrz6LNpYtEpI8hRMjKJiy41U0riJqrz78lUsSIJDywLtU hGgJdJdKWzbKNidJXCea9ISpmsmm7K.qja0Q -----------end_max5_patcher-----------
Hey Peter, great set of guidelines there.
Are you able to explain this line a little more?
Avoid the "target" message unless broadcasting to all voices.
The target message is used to target params for a particular voice (poly instance), so why would you want to broadcast to all voices using the target message?
I’m sure I’m missing something obvious here.
thanks for continuing to pass your experienced eye over this glacially-developing synth.
@Floating Point: What I meant regarding "target" is that if you’re not targeting all the voices (target 0), you’re more or less responsible for voice allocation since you have to know which voice is going to get the value, which you should probably let poly~ handle. This is totally fine for voices that are on all the time; if I’m building a vocoder in poly~, for instance, I’ll use target to pass the frequencies to the individual voices.
On the other hand, it doesn’t work nearly as well for a synth or granular sampler because you’re reusing voices with different values. If voice 1's ADSR envelope is 1., 150., 0.5, 200 and voice 2's ADSR envelope is 10000., 10000., 0.2, 5000. you’re going to get completely different results depending on which voice gets the note from poly~. If you send the ADSR information with the note, however (what I term the "big list" strategy), it doesn’t matter, since the voice will always have the appropriate envelope that goes with the note. Since you’re getting parameters at a note boundary, you are also less likely to need interpolation for volume, etc., and you know the order in which things will arrive; these are very good things. I always try to think "do I need to know who’s playing this?" when designing a poly~ patch. Usually, if it has note-like events, the answer is no.
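A minimal Python sketch of the contrast just described (class and method names invented for the example): with per-voice "target" state the result depends on which voice happens to get the note, while with the big-list strategy it never does:

```python
class GrainVoice:
    """Toy model of one poly~ voice holding an ADSR as local state."""

    def __init__(self, default_adsr=(1., 150., 0.5, 200.)):
        self.adsr = default_adsr  # state left over from a 'target' message

    def play_targeted(self, pitch):
        # The voice keeps whatever ADSR was last targeted at it, so the
        # sound depends on which voice poly~ happens to allocate.
        return (pitch, self.adsr)

    def play_big_list(self, pitch, adsr):
        # The ADSR arrives with the note itself, so any voice produces
        # the same result for the same note.
        self.adsr = adsr
        return (pitch, adsr)
```

Run two voices with different leftover state and the targeted version diverges while the big-list version agrees, which is the whole argument for sending parameters with the note.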
Also, if you are using target 0, you could also use send/receive just as easily (if not more so). Target 0 makes sense if you’re trying to limit the scope, or could be running multiple instances of the same voice patch within different poly~ objects. I tend to prefer send and receive because I always know that the message is going to all of the copies–it’s helpful for students because it’s one less thing to forget, and they can always double-click the send/receive to see where it’s going. I also generally avoid having more than one "in" to my poly voice patch, unless I’m just distributing data to static voices, and use route.
There is one little problem with the "big list" strategy having to do with the midinote message. It’s a pain to make sure that parameters don’t change between the attack of the note and the release. (especially if you’re randomizing/scaling things based on velocity, etc.) You definitely don’t want to change the ADSR sustain value when releasing the note, for instance, since that sudden change will cause a click.
Here’s an abstraction (PM.PolyList) that I use within voices that are being addressed with the "midinote" message. (which tells poly~ "handle voice allocation using note-ons and note-offs")
It passes pitch and velocity out the two left outlets for note-ons and note-offs; all the extra data is only passed on note-ons.
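The behaviour just described can be sketched in Python (the function name and message layout are assumptions for illustration, not the actual abstraction): pitch and velocity pass through on both note-ons and note-offs, while the extra parameters are only released on note-ons, so they cannot change between a note’s attack and its release:

```python
def poly_list(msg):
    """msg = [pitch, velocity, *extras]; velocity 0 means note-off.

    Returns (pitch, velocity, extras), with extras suppressed (None)
    on note-offs so release-time parameter changes can't cause clicks.
    """
    pitch, velocity, *extras = msg
    if velocity == 0:
        return pitch, velocity, None  # note-off: pass pitch/velocity only
    return pitch, velocity, extras    # note-on: release the full big list
```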
This is certainly not the only way to solve these problems with polyphony, but it’s been reliable for me and has meant that I can use poly~ more than I might have otherwise. Anyway, that’s probably enough pontificating for one night…
----------begin_max5_patcher---------- 984.3oc0X1ziSCCDF9b6uBiOf.oPjsy2HwA3vJsRbiiHDJMws0PpcUh69Eh+ 6L1oIM61zrgtcKrRq11LwwdlG+5YlzeMcBdl5FdEF8dzWQSl7qoSlXMYLLY6 0SvqRuIqHsxNLblZ0JtTicpumlei1Z+SoxEH0FMRujipzpRdNpPT0Nv0o5rk B4huWxyz0KHk56FjPhShbPdDOWuj.Jk5fnddtDGDi3RPea6iOubwLySA11ZQ jaWW0re7NZbypLWI0xzUb6s9XoHs.coNsPj0L.4lUBYAWaCF5Nifm2XkzYtp D2YmKJykXr96oSM+y4IBquX3ikTFDgpTnq4nrToAfqAFJznKu3BX.hJD7WJR pz72ojMSTgPxyTaj1YK3vD1iBjjQRRBAp5G3FDF56AzNlY.bj+nAbzKL.e47 8gmqqau7iMfBMhX4WLnJYLlqebR.gAjiZ3mQlNR9E9BieWHJAco9ZEHE4q.N JqUpuYs.fD50nq3EpLg912hVnrG6K3y0NH9U7xa0FJh3EUb3l7p1rBkhEK0C tELayrYE7tQdOaJc2SRbiiiojHyWiL6IArQum3iu2pVIx466KqUhZWDlijCu G9IUQ9+lcPI+ZHZ1aCrhWfnMVqWY8sq40DDOCRVicPXbKp5gyL+n1zyrPPwm Dk.h9j8xM2Epd3i.RrdgD6YGRPhVj3fPxrw6T+wPXJl1JGoQr1TrTBYPPwvm L0zyOnd0GPjA4zn0QcRh1SU9N.J4zIjnO674tB2R9hCRn+piZc6DptNygPT7 +km0pm0l0d41zm3GVY3MaqJCkGjE29p217.oR3FoZAzri4wviDZ9gtgQwdTe 3jWrquuOIHryW6CeQGQB6l5kiJhNB3Y8iCqiFTCE4F4EEEBmrBBMxlwxgvd4 .YviRc4vmMMFnlipaMnow.jBJ4WRq+fU+gGhqyvmNMUyZgGU14iQi3+D0Hsd 3oKnsXFOnNvzaJzseBDUArwGrdOwfs1yNcYU2HWml8ySQM5NmMnPOiDuX+PH qQDanzqunpPeWApBdKBNh8DpBsiRIr1FqorAoD8rVDx5A12c3A+nE1vwX+9n qRsoLqA.aaq.sKNx4UZgrsZyW20DamAsTjmykcE84hpT3cCrPn+8vw5Nwivc hNadS3XfyYyaXiva7NqdC8wXC6r4NOPi1u+De9bmwfmjyq67Xzw+74Ni.Nw2 aL66MqD41eKhs48n9DWVSIMOlK0mFZeoJBTdCZJ195ns12s1mzX5wH7wdbnN Ke550PG1UamRqi.kB+gpzbYni8Rgr9R6LhK4WIZFe7Tyr86o+AvMLVkp -----------end_max5_patcher-----------
Thanks Peter for your erudite and pithy explanation. I tend to use the so called big-list approach as a ‘default’ method for my patches in general, but particularly for granular stuff. After reading your post I may well try to coerce my existing granular patches into a note-mode style reception.
At the moment I just leave all voices ‘on’ and do the voice allocation manually using the target message (cycle through from 1 to n). not the most efficient approach I’d have to say… :-)
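For anyone curious, the cycle-from-1-to-n allocation described above amounts to a simple round-robin counter; this tiny Python sketch (names invented) shows the idea of stepping a target index through the voices:

```python
class RoundRobin:
    """Manual voice allocation: cycle a target index from 1 to n,
    as if prepending each event with a 'target $1' message."""

    def __init__(self, n_voices):
        self.n = n_voices
        self.current = 0

    def next_target(self):
        # Step 1, 2, ..., n, then wrap back to 1 (poly~ voices are 1-based).
        self.current = self.current % self.n + 1
        return self.current
```

As noted, this ignores whether a voice is actually free, which is exactly the job poly~’s own allocation (or the big-list/midinote approach) does better.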