Rendering non-realtime videos with audio-driven events

Matteo Marson

Hi everybody!
I know this topic has been covered a lot, but I still struggle to find a solution.
I'm using jit.catch~ @mode 3 to generate a visual representation of a signal (see patch below). I want to export a video out of it, but I can't figure out how to do it in non-realtime. I understood how jit.record works, but since the video processing is strictly connected to the audio, I'm not able to render frame by frame properly. I don't know how to use the NonRealTime audio driver in this context, since jit.catch~ is driven by bang messages and I can't use audio to schedule events (even with phasor~ + edge~, the resulting bangs are not scheduled at audio rate).

I could avoid this problem by accessing an audio buffer's content in a non-audio fashion (with jit.buffer~, for example), but jit.catch~ in mode 3 works as a sort of oscilloscope, and I don't know how to reproduce the same effect from scratch.
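
To make "reproducing it from scratch" concrete, here's the kind of thing I have in mind, sketched outside Max (Python with numpy, purely as an illustration, none of this is in the patch): step through the file in hops of samplerate/fps samples and draw each window as an oscilloscope trace, so rendering is fully decoupled from realtime playback. The file name and image size are made up.

```python
import wave
import numpy as np

FPS = 30
WIDTH, HEIGHT = 640, 480

with wave.open("input.wav", "rb") as wav:    # "input.wav" is a hypothetical name
    sr = wav.getframerate()
    nchan = wav.getnchannels()
    raw = wav.readframes(wav.getnframes())

# Assumes 16-bit PCM; keep the first channel and normalize to [-1, 1].
samples = np.frombuffer(raw, dtype=np.int16)[::nchan] / 32768.0

hop = sr // FPS                              # 48000 / 30 = 1600 samples per frame
for i in range(len(samples) // hop):
    window = samples[i * hop:(i + 1) * hop]
    # One sample per pixel column, oscilloscope style.
    cols = np.linspace(0, hop - 1, WIDTH).astype(int)
    ys = ((1.0 - window[cols]) * 0.5 * (HEIGHT - 1)).astype(int)
    img = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
    img[ys, np.arange(WIDTH)] = 255
    # One binary PGM per frame; assemble afterwards with e.g.
    #   ffmpeg -framerate 30 -i frame_%05d.pgm out.mov
    with open("frame_%05d.pgm" % i, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))
        f.write(img.tobytes())
```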

Maybe (and i hope so) it's just a silly problem, but i'm quite stuck!

Here's the patch, hope you can help me out!!

Non-realtime-recording.maxpat
YaleO'neil

Have you found a solution to this by any chance?

I'm trying to store matrix data into a jit.matrixset object using an audio file and jit.catch~ in non-realtime, for a similar purpose. I'm wondering if it's possible to use jit.catch~ in a way that lets jit.matrixset store the matrix data precisely, reflecting all the samples of the audio file without dropping any frames.

Let's say the sample rate of the audio file is 48 kHz and the matrix data is to be stored at 30 fps, so theoretically each matrix would reflect 1600 samples (48000 / 30). I suppose it has to do with reading the audio file in non-realtime and synchronizing it with qmetro somehow, but I'm not sure if it's possible.
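
For what it's worth, here's how I picture the arithmetic, sketched in Python (just to check the numbers, nothing Max-specific, and the 10-second file length is made up): partition the file into frame-sized windows using cumulative rounding, so consecutive windows tile the file exactly and nothing drifts or gets dropped, even when samplerate / fps isn't an integer.

```python
SR = 48000          # sample rate of the audio file
FPS = 30            # matrix / frame rate
TOTAL = SR * 10     # hypothetical: a 10-second file

# Frame i covers samples [round(i*SR/FPS), round((i+1)*SR/FPS)).
# Rounding the cumulative position, rather than a fixed hop, means
# the windows have no gaps or overlaps at non-integer rates too.
def frame_bounds(i):
    return round(i * SR / FPS), round((i + 1) * SR / FPS)

nframes = TOTAL * FPS // SR
print("frames:", nframes)       # frames: 300
for i in range(3):
    print(i, frame_bounds(i))   # 0 (0, 1600)  1 (1600, 3200)  2 (3200, 4800)
```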