I’m currently working on an asynchronous granular synthesis instrument for an assessment piece, and I’ve run into some brick walls implementing voice/grain/polyphony allocation.
The instrument is supposed to be capable of producing clouds with grain density up to 1000 grains/second, which has meant that most of the patches have been written to operate at audio rate.
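To give a sense of the scale involved, here’s a rough back-of-envelope calculation (Python rather than Max, and the 100 ms grain length is just an assumed upper bound for illustration) of why this has to run at audio rate and how many voices a worst case implies:

```python
# Rough worst-case voice count: density (grains/s) x max grain length (s).
sample_rate = 44100
density = 1000          # grains per second (the instrument's upper limit)
max_grain_len = 0.100   # seconds -- an assumed upper bound, for illustration

avg_onset_gap = sample_rate / density   # samples between grain onsets
voices_needed = int(density * max_grain_len)

print(avg_onset_gap)    # 44.1 -- a new grain roughly every 44 samples
print(voices_needed)    # 100 -- overlapping grains in the worst case
```

With onsets only ~44 samples apart, control-rate (scheduler/event) timing isn’t precise enough, hence the audio-rate approach.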
So far I’ve created a patch that generates all the necessary specifications for each grain, at up to a thousand per second, including duration, envelope type, waveform type, amplitude, spatial displacement, etc.
The problem I’ve run into is how to organise the polyphony needed to handle this information. My original plan was to use poly~, but I haven’t found any way of controlling instance allocation at audio rate (is there a way of doing this? The only mechanisms I know of are the note and target messages). I’ve also searched for externals, but have yet to find anything that would help.
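To make the allocation problem concrete, what I’m effectively after is a sample-accurate round-robin voice allocator. A sketch of the logic I have in mind (in Python, not Max; the class and names are mine, and in the real patch each voice index would correspond to a poly~ instance addressed via target):

```python
class GrainVoicePool:
    """Round-robin allocation over a fixed pool of grain voices.

    A voice is considered busy until its grain's end time (in samples)
    has passed. This is only a sketch of the allocation logic, not Max code.
    """
    def __init__(self, num_voices):
        self.end_times = [0] * num_voices   # sample time each voice frees up
        self.next_voice = 0                 # round-robin cursor

    def allocate(self, now, grain_len):
        """Return a free voice index for a grain starting at sample `now`,
        or None if every voice is still sounding (grain gets dropped)."""
        n = len(self.end_times)
        for i in range(n):
            v = (self.next_voice + i) % n
            if self.end_times[v] <= now:
                self.end_times[v] = now + grain_len
                self.next_voice = (v + 1) % n
                return v
        return None  # a voice-stealing or drop policy would go here

pool = GrainVoicePool(4)
print(pool.allocate(0, 100))    # 0
print(pool.allocate(10, 100))   # 1
print(pool.allocate(20, 100))   # 2
print(pool.allocate(30, 100))   # 3
print(pool.allocate(40, 100))   # None -- all four voices still busy
print(pool.allocate(150, 100))  # 0 -- the first voice has freed up
```

The sticking point is that I can’t see how to express this per-sample decision inside poly~, since note/target messages are Max messages rather than signals.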
I should also mention that the instrument is required to be able to produce up to 8 clouds simultaneously.
@tremblap: I followed the link and it appears the downloads are down; my browser returns an error saying the page/file could not be found, so it might be a server issue. I downloaded the archive at the SourceForge link on the IRCAM site, but it doesn’t appear to contain the FTM tools (it just has a bunch of patches and some sort of instrument that seems to require the FTM externals).
edit: Unfortunately, Oli, your patch doesn’t solve my problem, as I’m on Windows and the steps~ external is Mac-only.