Hello everyone, I have a question about working with poly~ that I'm trying to figure out.
So, in the poly~ tutorials, they show you how to cleanly manage voice activity by muting voices and flagging them as busy dynamically from within the voice instance. This is usually controlled in a canned, timed fashion: a pseudo-note message triggers a line segment, with the start of the note unmuting the voice and marking it busy, and the end of the line segment's time muting the voice and clearing its busy flag.
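Just to make sure I understand that lifecycle, here's a rough Python stand-in for what I think each voice is doing (the class and method names are mine, not Max objects; in the patch this would be `thispoly~` receiving `mute` and `busy` messages around a `line~` ramp):

```python
class Voice:
    """Sketch of one poly~ voice's mute/busy lifecycle."""

    def __init__(self):
        self.muted = True   # voice starts silent
        self.busy = False   # and free for allocation

    def note_on(self, duration_ms):
        # start of the pseudo-note: unmute and flag busy
        # (like sending "mute 0" / "busy 1" to thispoly~)
        self.muted = False
        self.busy = True
        self.remaining = duration_ms

    def tick(self, elapsed_ms):
        # advance the line segment; when the ramp ends,
        # mute the voice and clear its busy flag
        if self.busy:
            self.remaining -= elapsed_ms
            if self.remaining <= 0:
                self.muted = True
                self.busy = False

v = Voice()
v.note_on(100)
v.tick(50)                  # mid-note: still sounding
print(v.busy, v.muted)      # → True False
v.tick(60)                  # past the end of the ramp: voice frees itself
print(v.busy, v.muted)      # → False True
```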
In another example, a noise~ source is fed into 16 voices. Using a looping counter, a center frequency is randomly set on a filter inside each voice (or something like that, I can't remember exactly).
My question is: if I combine these mechanisms with events generated from continuously streaming audio, can I control which voice I route a streaming instance of my MSP signal to?
To break it down:
- Streaming audio is always piping through the system, so it is always audible through one poly~ voice.
- Every time amplitude tracking on the input stream generates an event, the two channels of the stream switch which voice they're routed to.
- As you'd normally expect with poly~, each voice has its own release envelope from the other effects processes inside it, which independently mutes and frees up that voice.
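The switching logic I have in mind looks roughly like this (again a hypothetical Python sketch, not Max code; `next_free_voice` is my own name): on each amplitude-tracking event, point the stereo stream at the next voice whose busy flag is clear, the way selecting a [gate~ 16] outlet would.

```python
def next_free_voice(busy_flags, current):
    """Return the index of the next non-busy voice after `current`,
    scanning round-robin; fall back to the current voice if all are busy."""
    n = len(busy_flags)
    for step in range(1, n + 1):
        candidate = (current + step) % n
        if not busy_flags[candidate]:
            return candidate
    return current

# Amplitude-tracking event fires: voice 1 is still in its release
# envelope (busy), so the stream should skip ahead to voice 2.
busy = [False, True, False, True]
voice = next_free_voice(busy, 0)
print(voice)  # → 2
```

In the patch, I imagine that index would drive the control inlet of a [gate~] in front of poly~ (or a `target` message to poly~ itself), while each voice's own envelope clears its busy flag when it finishes.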
Can I use some combination of gate~ and poly~ to re-route in this way?
Anyone have an idea?