non real-time composition
This may be very entry-level, but I've only dealt with real-time audio... Let's say I have an automated process (with randomized elements) controlling a [vst~] object, and I want to record a soundfile in which this process is duplicated thousands of times and the results blended together. Obviously this is impossible to do in real time because of CPU limitations, so I'm turning instead to some sort of pre-rendering, a process I know nothing about. Any tips? Example patches? Thanks in advance.
You can use the NonRealTime driver, selected in the DSP Status window (Options > Audio Status). With it, MSP computes audio as fast as your CPU allows rather than in real time, and you capture the result to disk with [sfrecord~].
The trick is that [metro] and friends stay on the real-time scheduler, so you have to use MSP objects to control timing, something like [phasor~] -> [delta~] -> [edge~].
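To tie the two answers together, here is a rough ASCII sketch of a signal-rate clock (not a runnable .maxpat; the [<~ 0.] stage is my addition, a common idiom for turning the phasor's wrap into a clean transition for [edge~] to detect, and the rate and recording wiring are placeholders you'd adapt to your own process):

```
[phasor~ 1]        clock: one ramp per second (pick your own rate)
    |
[delta~]           sample-to-sample difference; a large negative
    |              jump appears at each ramp wrap
[<~ 0.]            outputs 1 only on the wrap sample
    |
[edge~]            left outlet bangs on the zero-to-nonzero transition
    |
  (bang)  ->  trigger your randomized process here
```

With the NonRealTime driver selected, turn DSP on, open a file in [sfrecord~] (e.g. [open render.aif( then [1( to start, [0( to stop), and the whole patch, clock included, renders as fast as the machine can manage.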