For a theater production I will have some scenes where I need live video. Since the whole visual part of the show is managed in Max/Jitter, it would be great to have the live video as part of the Jitter patch as well. I.e., on a certain cue, the played video would fade to live video.
Now the idea is to have a capture card (DeckLink HD Extreme) which I would grab with the jit.qt.grab object and then somehow feed to my window (or jit.gl.videoplane or whatever), without much video-effect processing.
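For what it's worth, the chain I have in mind would look roughly like this (a sketch, not a working patch; the context name ctx and the @unique/@transform_reset attributes are just my assumptions about a typical setup, and jit.xfade would handle the cue fade):

```
jit.qt.movie              jit.qt.grab @unique 1     <- capture card input
       \                   /
        jit.xfade @xfade 0.                         <- ramp 0. -> 1. on cue
               |
        jit.gl.videoplane ctx @transform_reset 2    <- fills the window
               |
        jit.gl.render ctx  ->  jit.window ctx
```

The point being: no matrix processing beyond the crossfade, so any latency would come from the capture and display stages, not the patch itself.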
My question is whether somebody has experience with live video, capture cards, and their latency. Could it stay lip-sync with the live sound on stage?
I could imagine that getting an SDI signal onto the capture card, which is then transferred to the CPU and then sent out again through the graphics card, could be quite a bottleneck.
What do you think? How many frames will I lose? Are there any tricks?
Many thanks for your help.
For live capture I have had good experience with Osprey cards. I didn't measure the latency, but it looked quite acceptable…
From the manual of the 250e/450e card (SD):
Some deinterlacing modes introduce one frame time of latency to the processing of captured video frames. That is, the processing adds 33 msec (525-line, NTSC), or 40 msec (625-line, PAL/SECAM) of delay to the time between end of frame capture and return to the client. In all cases, this latency is in addition to the time for processing after capture – which is typically 1 to 5 msec. The one frame of latency is inserted or not inserted as follows:
If deinterlace mode is Off, there are zero frames of latency.
If you request Motion Adaptive and select 2-Frame algorithm, there are zero frames of latency.
If you request Motion Adaptive and select 3-Frame algorithm, there is one frame of latency.
If you request Inverse Telecine or Auto, there is one frame of latency. In Auto mode, there is one frame of latency regardless of whether you select the 2-Frame or 3-Frame algorithm for Motion Adaptive fallback.
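To put those numbers into perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, not from the manual), taking the manual's 1–5 ms processing time plus the optional one-frame deinterlace delay:

```python
# Rough latency budget for the capture stage alone, based on the
# Osprey manual's figures. Display and render latency not included.

FRAME_MS = {"NTSC": 1000 / 29.97, "PAL": 1000 / 25.0}  # ~33.4 ms / 40 ms per frame

def capture_latency_ms(standard, deinterlace_frames, processing_ms=5.0):
    """Worst-case capture latency: optional one-frame deinterlace
    delay plus the manual's 1-5 ms post-capture processing time."""
    return deinterlace_frames * FRAME_MS[standard] + processing_ms

for standard in ("NTSC", "PAL"):
    for frames in (0, 1):  # 0 = deinterlace off / 2-frame, 1 = 3-frame / auto
        ms = capture_latency_ms(standard, frames)
        print(f"{standard}, {frames} frame(s) deinterlace delay: ~{ms:.0f} ms worst case")
```

So even in the worst mode the capture stage alone stays around one frame (roughly 38–45 ms); the bigger unknowns for lip-sync are the rest of the pipeline (driver buffering, render loop, projector/display delay), which this arithmetic doesn't cover.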