Is there a way to mix real-time and offline DSP processing in the same patch? I’d like to build a patch that plays back sound files (so I need the ‘normal’ real-time DSP chain), but at the same time I need to do some offline processing on those buffers (mainly with [pfft~] patches). For the latter, it would be ideal if I could have a separate DSP chain running with the ‘NonRealTime’ audio driver. Is there any way to achieve this? (One solution would be to start two separate instances of Max with different DSP settings; I’d like to know whether that is my only option, or if there’s anything better.)
EDIT: This project will be based on Max 5, so I can’t use Max 6 features — I don’t know whether that makes any difference…
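For anyone unfamiliar with what the ‘NonRealTime’ driver buys you: offline processing just means crunching through a buffer as fast as the CPU allows instead of at the soundcard’s pace. As a rough analogy (not Max code — all names here are illustrative), here is a minimal Python sketch of the kind of per-frame spectral work a [pfft~] subpatch does, applied offline to a whole buffer at once:

```python
import numpy as np

def offline_spectral_gate(signal, frame_size=1024, hop=256, threshold=0.01):
    """Offline STFT -> zero out weak bins -> inverse STFT with overlap-add.
    Roughly what a [pfft~] subpatch does frame by frame, but run as fast
    as possible over the whole buffer rather than in real time."""
    window = np.hanning(frame_size)
    out = np.zeros(len(signal) + frame_size)
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        spectrum = np.fft.rfft(frame)
        # Crude spectral gate: discard bins below a magnitude threshold.
        spectrum[np.abs(spectrum) < threshold * frame_size] = 0
        out[start:start + frame_size] += np.fft.irfft(spectrum) * window
    return out[:len(signal)]
```

The point of the sketch is only that the offline pass has no scheduling constraint; the difficulty in Max 5 is that one instance has a single audio driver, so the real-time chain and the offline chain can’t coexist in one patch.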
This might be possible with [poly~], but I’m not sure — I haven’t tried it yet.
unfortunately [poly~] won’t help here…
As far as I know, running two instances is the only way to use two different drivers (although I’d love to hear otherwise). I’d go that route, and use Jack or Soundflower to pipe audio back and forth between them, if needed.
I’ve done similar stuff before with multiple standalones (be sure to give each one a unique preferences folder, defined in the [standalone] object), in order to push Max to the max on one machine (e.g. I managed to render 5000 graphics-driven sound objects across a multi-channel system on one machine).
© Copyright Cycling '74