syncing tapin~ delay time to a recorded video loop (using jit.matrix, jit.submatrix)
Hi guys, I'm trying to add video to a basic tapin~/tapout~ delay line, and I am
having some timing issues. My math skills aren't the greatest, so I can't seem to wrap my head around the
issue.
Ideally, the video portion of the patch would be an exact copy of the tapin~/tapout~ delay line,
only doing video instead of audio. I can't seem to find any Jitter objects that have this functionality
by themselves, but if somebody has an external that does this kind of thing, that would be awesome!
I am using a set of matrices and jit.submatrix to record and play back video captured over the same
amount of time as the delay time of the audio delay line, but the main issues are:
- irregular timing of the video playback;
- how to alpha-blend multiple video loops (ideally with no limit on the number of layers, as with the audio delay line).
Thank you for any comments on this!
I think of jit.matrixset as being comparable to tapin~/tapout~ in the sense that you can store the recent past sequence of video frames in a (potentially circular) buffer and access it at any point. See Example 35: A delay buffer for Jitter matrices.
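If it helps to see the logic outside of Max, here's a rough Python/numpy sketch of what that circular frame buffer does. The frame size, frame rate, and function name are placeholders I made up, not anything jit.matrixset actually exposes:

import numpy as np

FPS = 30
DELAY_SECONDS = 2.0
BUF_FRAMES = int(FPS * DELAY_SECONDS)  # maximum delay, like the buffer length you give tapin~

# A circular buffer of frames, which is essentially what jit.matrixset provides
buffer = np.zeros((BUF_FRAMES, 240, 320, 3), dtype=np.uint8)
write_index = 0

def delay_frame(frame, delay_frames):
    """Store the incoming frame and return the frame from delay_frames ago."""
    global write_index
    buffer[write_index] = frame                             # like tapin~: write the newest frame
    read_index = (write_index - delay_frames) % BUF_FRAMES  # like tapout~: tap the buffer at the delay point
    delayed = buffer[read_index].copy()
    write_index = (write_index + 1) % BUF_FRAMES
    return delayed

The point is that the delay time is just the distance between the write position and the read position, measured in frames instead of samples.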
Yeah, I tried with jit.matrixset, but it doesn't seem to have the same kind of layering possibilities as tapin~/tapout~?
By that I mean, you can't record one matrix on top of another without automatically deleting the first one.
So I guess the question is: would I have to make use of some type of poly~-style structure based on, say, jit.matrixset, or
is there another way around it? I'm concerned about CPU load if the poly~ option is the only way.
Thanks for your reply, by the way!
The process you're describing with tapin~/tapout~, as in the MSP Tutorials "27 Delay Lines" and "28 Delay Lines with Feedback", involves three stages: delay, scaling, and addition. The delay is accomplished by making a circular buffer of recent past samples (tapin~) and accessing that buffer at the desired point(s) (tapout~), the scaling is achieved by multiplying that delayed signal by some coefficient (usually less than 1), and the original is then added to the delayed-and-scaled past input (or, in the case of feedback, past output).
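In code form, that feedback process is just the difference equation y[n] = x[n] + g * y[n - D]. Here's a minimal Python sketch, assuming a fixed integer delay D in samples and a gain below 1 (both values made up):

import numpy as np

def feedback_delay(x, delay_samples, gain=0.5):
    """y[n] = x[n] + gain * y[n - delay_samples]: delay, scale, add."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        past = y[n - delay_samples] if n >= delay_samples else 0.0  # read the delayed past output
        y[n] = x[n] + gain * past                                   # scale it and add the current input
    return y

(For the non-feedback version from tutorial 27, you'd read from the past input x instead of the past output y.)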
In video you could think of that as delay, modification, and compositing. You can make a circular buffer of past frames (jit.matrixset), retrieve and alter a past frame (with jit.op or any other matrix processing object), then composite it with the original (by addition or subtraction, alphablend, or any other method you want). If you want to do that repeatedly, you can record the result back into the jit.matrixset.
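Again as a rough sketch rather than Max code (the frame format, gain, and names are all just assumptions on my part), the whole video loop with feedback might look like:

import numpy as np

BUF_FRAMES = 60  # e.g. 2 seconds of buffer at 30 fps (placeholder values)
buffer = np.zeros((BUF_FRAMES, 240, 320, 3), dtype=np.float32)  # frames as floats in [0, 1]
write_index = 0

def video_feedback(frame, delay_frames, gain=0.6):
    """Delay, modification, compositing, with the result recorded back into the buffer."""
    global write_index
    read_index = (write_index - delay_frames) % BUF_FRAMES
    delayed = buffer[read_index] * gain        # retrieve a past frame and scale it (like jit.op *)
    out = np.clip(frame + delayed, 0.0, 1.0)   # composite with the original by addition (like jit.op +)
    buffer[write_index] = out                  # record the result back: this is the feedback
    write_index = (write_index + 1) % BUF_FRAMES
    return out

Note that because each composited frame is written back into the buffer, the layers accumulate on their own: one buffer gives you effectively unlimited repeats, just as feedback does with tapin~/tapout~, so you shouldn't need a poly~-style bank of buffers for that part.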
I hope I'm understanding your question correctly, and that this explanation helps.
Well, that makes a lot of sense!
Thanks again!