I'm currently working on an asynchronous granular synthesis instrument for an assessment piece, and I've hit some brick walls when it comes to implementing voice/grain/polyphony allocation.
The instrument needs to be capable of producing clouds with grain densities of up to 1000 grains/second, which has meant that most of the patches have had to operate at audio rate.
So far I've created a patch that generates all the necessary specifications for each grain, at up to a thousand per second - including duration, envelope type, waveform type, amplitude, spatial displacement, etc.
The problem I've run into is how to organise the polyphony needed to handle this information. My original plan was to use poly~, but I haven't found any way of controlling instance allocation at audio rate (is there a way of doing this? All I know of are the note and target messages). I've also searched for externals, but have yet to find anything that would help.
I should also mention that the instrument is required to be able to produce up to 8 clouds simultaneously.
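To make the allocation problem concrete, here is a minimal sketch (in Python, purely illustrative - the names Grain and GrainPool are my own, not Max objects) of the kind of per-sample round-robin voice allocation I'm imagining poly~ would need to do, with one pool per cloud:

```python
from dataclasses import dataclass

@dataclass
class Grain:
    active: bool = False
    remaining: int = 0  # samples left in this grain

class GrainPool:
    """One pool of grain voices, i.e. one cloud's worth of polyphony."""
    def __init__(self, n_voices):
        self.voices = [Grain() for _ in range(n_voices)]
        self.next = 0  # round-robin search pointer

    def trigger(self, duration_samples):
        """Claim the next free voice (round-robin); return its index, or None if full."""
        n = len(self.voices)
        for i in range(n):
            idx = (self.next + i) % n
            v = self.voices[idx]
            if not v.active:
                v.active = True
                v.remaining = duration_samples
                self.next = (idx + 1) % n
                return idx
        return None  # all voices busy: drop the grain (or steal the oldest voice)

    def tick(self):
        """Advance every voice by one sample, freeing finished grains."""
        for v in self.voices:
            if v.active:
                v.remaining -= 1
                if v.remaining <= 0:
                    v.active = False
```

Sizing-wise, at 1000 grains/second with (say) 100 ms grain durations, up to around 100 grains can overlap in one cloud, so each cloud would need on the order of 128 voices, times 8 clouds. It's essentially this trigger/tick logic that I can't see how to express at audio rate with poly~'s note/target messages.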
Thank you for your time