I'm trying to record into a continuous audio buffer and frame buffer that stay in sync with one another, so I can grab the last 2 seconds of audio and video and offset them against each other to create various delay effects. I'm a little lost on the best way to do this, though. I'm using poke~ and count~ to write into the buffer~ object, then converting the count~ signal to an integer and scaling it to the equivalent frame index to drive jit.matrixset. (I'm actually doing this with a GPU/texture method, but I used jit.matrixset in the example because it demonstrates the problem more simply.)
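For context, the scaling I'm doing is roughly the following (a Python sketch of the math, not the actual patch; the 44.1 kHz sample rate, 30 fps, and 60-frame ring size are assumptions for illustration):

```python
SR = 44100             # assumed audio sample rate
FPS = 30               # assumed video frame rate
RING_FRAMES = 2 * FPS  # 2 seconds of video = 60 frames in the ring

def frame_index(sample_count: int) -> int:
    """Map a running sample count (count~) to the matching ring-buffer frame."""
    return (sample_count * FPS // SR) % RING_FRAMES

# One video frame spans SR / FPS = 1470 samples, so every sample count
# within that span maps to the same frame index.
print(frame_index(0))       # frame 0
print(frame_index(1469))    # still frame 0
print(frame_index(1470))    # frame 1
print(frame_index(2 * SR))  # 2 seconds in: wraps back to frame 0
```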
My problem is that in converting the signal to an integer, many frame indices get skipped or dropped, leaving quite a few junk frames in the frame buffer. Normally I would just index each frame as a unique frame arrives from the jit.grab object, but since I'm syncing with the audio buffer that option isn't available. I understand why my current solution is less than ideal, but I'm a bit stuck on other ways to approach it. Any ideas?
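To show what I think is happening (this is my hypothesis, sketched in Python rather than as a patch): if the sample counter is snapshotted at a nominal video rate but the polls arrive with scheduler jitter, consecutive snapshots sometimes jump by two frame indices, so the index in between is never written and its slot keeps a stale frame. The jitter range here is made up for illustration:

```python
import random

SR, FPS = 44100, 30
random.seed(1)

t = 0.0        # simulated clock time in seconds
prev_idx = 0
skips = 0
for _ in range(300):
    # Poll roughly every 1/30 s, with +/-30% timing jitter (assumed).
    t += (1.0 / FPS) * random.uniform(0.7, 1.3)
    idx = int(t * SR) * FPS // SR     # same integer conversion as the patch
    if idx - prev_idx > 1:
        skips += 1                    # an index was jumped over entirely
    prev_idx = idx

print(skips)   # nonzero: some frame indices are never produced
```

That nonzero skip count is exactly the "junk frames" symptom: those ring slots never get overwritten during a pass.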