Communicating between gen~ and jit.gen
I am contemplating two projects using the new Gen in Max 6, and one of them seems ideally suited to gen~, as it’s just a lot of math operating on audio, driven by MIDI input (a transposable just intonation tuning system built with set theory).
But my other, more complex project will require constant communication from gen~ to jit.gen, where an extensive set of visual parameters will be modified by audio parameters on the fly (basically realtime audiovisual synthesis, with the visual parameters subordinate to the audio parameters).
My question is whether there is a way to send the per-sample signal level directly between the objects without going up to the Max level (where, if I read correctly, samples get bottlenecked into packets) and back down.
If not, building a data pipe between these objects would be a great feature for an update!
You can send data between MSP signals and Jitter matrices using jit.poke~ and jit.peek~. Samples between MSP objects are packaged into blocks the size of your signal vector setting. jit.poke~ and jit.peek~ should be more than adequate.
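To make the block-wise handoff concrete, here is a rough Python sketch (not Max code; the names and sizes are illustrative) of what jit.poke~-style writing looks like: the audio side delivers samples one signal vector at a time, and each vector lands in a shared matrix-like buffer that the visual side can later read as a whole.

```python
import numpy as np

SR = 44100           # audio sample rate
VECTOR_SIZE = 64     # MSP signal vector size: samples move in blocks this big
MATRIX_CELLS = 1024  # stand-in for a 1-plane float32 Jitter matrix

# The "matrix" that jit.poke~ would write into and the Jitter side would read.
matrix = np.zeros(MATRIX_CELLS, dtype=np.float32)
write_pos = 0

def audio_vector_callback(samples: np.ndarray) -> None:
    """Emulates the poke: each signal vector is written into the matrix as a block."""
    global write_pos
    for s in samples:
        matrix[write_pos] = s
        write_pos = (write_pos + 1) % MATRIX_CELLS

# Simulate four signal vectors of a 440 Hz sine being poked into the matrix.
t = np.arange(4 * VECTOR_SIZE) / SR
signal = np.sin(2 * np.pi * 440 * t).astype(np.float32)
for start in range(0, len(signal), VECTOR_SIZE):
    audio_vector_callback(signal[start:start + VECTOR_SIZE])

# A frame-rate consumer (the jit.gen side) sees whole blocks, never single samples.
print(write_pos)  # 256: four 64-sample vectors have been written
```

The point of the sketch is that the matrix is only ever updated a vector at a time, which is exactly the granularity the original question is asking to get below.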
Thank you, that’s part of the answer I was looking for.
I was still curious, though, whether the same per-sample processing benefits hailed by Gen for audio could be extended to communication from gen~ to jit.gen without packaging the samples into blocks, as would be necessary to send the data "up and through" the top level of Max/MSP rather than directly between Gen objects as a direct data pipe.
I suppose the reason I feel this would be even more ideal than using jit.poke~ and jit.peek~ is that, down the road, ever-higher frame rates may be able to respond in time to per-sample changes, resulting in an even stronger link between matrix transformations and changes in the audio data.
I see what you mean about it being more than adequate for now, but as this is a long-term project, I feel it is never too soon to make what may become a valuable suggestion for the future. Imagining Max/MSP/Jitter down the road, I picture it as the pioneering environment for seamless audiovisual artistry, even if it is perhaps too soon to see that coming.
What do you mean by "progressive frame rates may be able to respond in time to per-sample changes"? I understand the gist of what you’re saying, but I’m having a hard time figuring out where this might be useful. Audio typically runs at 44.1 kHz. Graphics are usually 30 to 60 Hz, sometimes 120 Hz for active stereo. That’s at least two orders of magnitude of difference, so having the visuals respond per-sample doesn’t make a lot of sense. Or do you have something specific in mind that I’m not considering?
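To put that rate mismatch in numbers, a quick back-of-the-envelope sketch (the rates are just the typical values mentioned above):

```python
AUDIO_RATE = 44100  # Hz, a typical MSP sample rate

# How many audio samples elapse during one rendered video frame:
for fps in (30, 60, 120):
    samples_per_frame = AUDIO_RATE / fps
    print(f"{fps} fps -> {samples_per_frame:g} samples per frame")
# 30 fps -> 1470, 60 fps -> 735, 120 fps -> 367.5
```

Even at 120 Hz active stereo, every frame summarizes several hundred samples, which is why per-sample response on the visual side buys so little today.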