Gen~ to real-world accuracy
I’ve had this question floating around in my head for quite some time now. I hope somebody can give me an answer to this.
If I create a gen~ device, do some gen~ magic, and output it to the MSP domain, will it create accuracy or latency problems because of the vector-block limitation in the MSP domain?
If I then create another gen~ device and thus pull the signal back into the gen~ realm, will that create even more discrepancies?
Or should I create one big gen~ patcher which does all the work?
Thanks for listening
It’s not magic, though it is very cool. As far as other MSP objects are concerned, gen~ looks just like another MSP object, so there are no differences with regard to signal vectors when connecting to other objects. Gen~ can do single-sample delays, but only inside of gen~, not with other objects. (If you want that, you can set your signal vector size to 1, though CPU usage will shoot up massively.)
Gen~ is sample-accurate within its time frame. That time frame is going to be affected by two things: the IO vector and the signal vector. I’m not sure what discrepancies you’re seeing, but feel free to post a patch and people can take a look at it.
The IO Vector sets the input/output latency. There’s always going to be some degree of latency in digital audio, but it’s not necessarily a problem provided it’s short enough.
The signal vector sets the number of samples that Max processes in a chunk. The bigger the chunk, the longer it’ll be until you have results. There’s a tradeoff between latency and processing power here: longer signal vectors (e.g. 128 or 256 instead of 64) will use less CPU. One way of thinking about it is that it’s like an assembly line, and it’s faster to do one task for a long time, then a second one for a long time, than it is to constantly switch back and forth between them. There’s some overhead at the start and end of processing a chunk, so the more often you process the chunks, the more overhead you’re going to run into.
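The overhead argument above can be sketched with some back-of-the-envelope arithmetic. This is plain Python, not Max; the cost constants are made-up numbers purely to show the shape of the tradeoff:

```python
# Why bigger signal vectors cost less CPU: each chunk carries some
# fixed bookkeeping overhead on top of the per-sample work, so fewer,
# larger chunks mean less total overhead per second of audio.

SAMPLE_RATE = 44100
PER_SAMPLE_WORK = 1.0       # arbitrary cost units per sample
PER_CHUNK_OVERHEAD = 50.0   # arbitrary fixed cost per chunk (assumed)

def cost_per_second(vector_size):
    chunks_per_second = SAMPLE_RATE / vector_size
    return (SAMPLE_RATE * PER_SAMPLE_WORK
            + chunks_per_second * PER_CHUNK_OVERHEAD)

for vs in (1, 64, 128, 256):
    print(vs, cost_per_second(vs))
```

The exact numbers are fictional, but the ordering is the point: a signal vector of 1 (the single-sample-delay trick mentioned above) pays the per-chunk overhead on every single sample, which is why CPU usage shoots up.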
You may have read somewhere on the forum about a signal-vector delay. This only occurs when you have a feedback routing (and only at tapin~/tapout~ or send~/receive~, not everywhere!).
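That signal-vector delay in a feedback routing can be simulated roughly in Python (not Max code; the block size and the 0.5 feedback gain are arbitrary). A path like send~/receive~ can only hand back the previous block, so an impulse's feedback copy arrives one whole vector late rather than one sample late:

```python
VECTOR_SIZE = 4

def process_with_block_feedback(input_blocks):
    """Mix each block with a feedback tap that is one block old."""
    feedback = [0.0] * VECTOR_SIZE  # the feedback path starts silent
    out = []
    for block in input_blocks:
        current = [x + 0.5 * f for x, f in zip(block, feedback)]
        out.extend(current)
        feedback = current  # only available on the *next* block
    return out

blocks = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]
print(process_with_block_feedback(blocks))
# the 0.5 echo lands at sample 4 (one vector late), not at sample 1
```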
I posted a patch some time ago (a granular engine built in gen~), and a contributor asked the valuable question: why do it in gen? And I was unable to answer, or at least justify it.
If there is an algorithm, piece of functionality, or subpatch that you absolutely must run on a sample-by-sample basis, then do it in gen~; outside the gen~ space, you are constrained by the IO buffer and signal vector sizes, which can, as Peter says, be so small as to be negligible.
This question is one I’ve also been asked, and being a bit of a gen~ evangelist…
The main situation is this one:
The input of a process depends on its output or internal state.
In a granular example, this could be a stream of grains where the next grain is triggered immediately at the end of the previous grain.
Another scenario is for things like changing the speed of a phasor, but only at the end of the ramp. You can’t do it in normal MSP without a one-signal-vector latency.
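As a rough illustration of why this needs single-sample feedback, here is a per-sample phasor sketch in Python (not GenExpr; the sample rate, frequency values, and function name are all invented for the example) that latches a new frequency only at the moment the ramp wraps:

```python
def phasor_update_at_wrap(freqs, sr=1000.0):
    """Per-sample phasor; `freqs` holds one requested frequency per sample.

    The requested frequency is only picked up when the ramp wraps:
    deciding this needs the phase from one sample ago, i.e. a
    single-sample feedback path.
    """
    phase = 0.0
    active = freqs[0]           # frequency currently driving the ramp
    out = []
    for f in freqs:
        out.append(phase)
        phase += active / sr
        if phase >= 1.0:        # ramp wrapped: now we may change speed
            phase -= 1.0
            active = f          # latch the newly requested frequency
    return out
```

For example, requesting a jump from 2 Hz to 5 Hz (at 10 Hz sample rate) mid-ramp keeps the old slope until the wrap, then continues at the new slope; in block-based MSP the change could only land on a vector boundary.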
It’s also great for building complex analog sequencers. It was way easier to build the Rotating Clock Divider (now in BEAP!) in gen~ than it was to patch it.
It also comes into play with filters, of course.
There’s tons of stuff you can do in plain MSP, though.
. . . and there’s my justification right there: phasor ramp and grain pitch manipulation at the sample level. Not quite an evangelist myself, yet: evangelism requires fundamental knowledge :)
(must check out this BEAP stuff soon)
Thanks for your answers Peter and NOOB.
OK, so say you create a phasor ramp in gen~ to use as a playback position for granular synthesis.
Does it matter if you go out of gen~ and back into another gen~ playback instance? Is the phasor ramp still as accurate as it was before its journey through MSP land?
You see where I’m going with this?
As I understand it now, the transition has no influence at all. The only thing is that you have to deal with the vector-size blocks.
It’s like looking through different glasses. 1-sample glasses (much like laforge or NOOB_MEISTER ;) ) and vector sized block fido dido shades.
Gen~ is, for me, the ultimate granular playground. But this transition vagueness (the thread topic) is bugging me. I just don’t know when to stop with gen~ and be content with good old plain MSP.
You need gen~ when the input of your process is dependent on the output of your process without incurring a signal vector delay.
Connecting a gen~ (A) to a second gen~ (B) is the same as connecting two MSP objects. The single-sample-delay world only exists inside of gen~; its inputs and outputs are normal signal-vector blocks. (So no, the transition has no influence.)
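A toy Python sketch of that point (not Max code; the two stages are arbitrary per-sample operations standing in for two gen~ objects): as long as no feedback path crosses the boundary, splitting a chain across two block-processing "objects" is bit-identical to doing it all in one.

```python
def stage_a(block):
    return [x * 0.5 for x in block]   # e.g. attenuation in gen~ "A"

def stage_b(block):
    return [x + 1.0 for x in block]   # e.g. offset in gen~ "B"

def combined(block):
    return [x * 0.5 + 1.0 for x in block]  # both steps in one gen~

blocks = [[0.0, 1.0], [2.0, 3.0]]
split = [stage_b(stage_a(b)) for b in blocks]
single = [combined(b) for b in blocks]
assert split == single  # no accuracy lost crossing the boundary
```

The block boundary only matters when B's output needs to feed back into A within the same sample; then the feedback-delay situation from earlier in the thread applies, and the whole loop belongs inside one gen~.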
Just think of it as being able to write an MSP object in place. It operates in all the usual ways with other MSP objects. You’re just controlling the internals.