Most efficient way to use jit.qt.movie + OpenGL


    Feb 21 2006 | 10:29 pm
    Hi,
    I thought this should be of interest to a lot of users here.
    My question is: what is the most efficient (CPU/GPU-wise) way to play back 4 simultaneous movies WITHOUT hiccups? (Not PhotoJPEG -- it's DV, and yes, I know PhotoJPEG is more efficient, but there is no time to convert all those movies.)
    I tried a lot and could find no real theoretical/practical answer as to what would be best. My starting patch for discussion is below (think of it as 4 copies).
    Variation1 (as in the patch):
    1) Use individual qmetros for jit.gl.render and jit.qt.movie
    2) Use @out_name in jit.qt.movie
    Variation2
    1) Use one "Master" qmetro to bang everything (in the "right" order: t b b b erase)
    2) Use direct connection to jit.gl.slab from jit.qt.movie
    Variation3
    1) Use jit.qt.movie -> jit.gl.texture -> jit.gl.gridshape -> jit.gl.videoplane
    To be honest, I couldn't manage this without a loss in picture quality.
    Anyhow, here is the starting patch for discussion:

    • Feb 21 2006 | 11:19 pm
      Did you look at the uyvy-4dv-vidplane.pat Jitter example? Here's a
      summary:
      1. use uyvy colormode for all movies
      2. make sure that @highquality is enabled for all movies (can be set
      beforehand inside QT Player Pro if desired)
      3. make sure that @unique is enabled for all movies
      4. either use jit.gl.slab->jit.gl.videoplane or 4 instances of
      jit.gl.videoplane with blend enabled and various alpha values to blend
      between them. Make sure that you use @colormode uyvy for jit.gl.slab
      or jit.gl.videoplane (if sending directly).
      On a fast dual-processor G5 (2.0 GHz or higher), you should be able
      to play 4 DV streams at 30fps. Not likely on anything less than that.
      As far as hiccups, you may not be able to avoid this when switching
      clips without multiple instances of jit.qt.movie preloaded in
      advance. As always you need to make sure you don't have too much low
      priority processing to ensure optimum framerate. There is no
      advantage to using a named output matrix.
      The last suggestion to improve performance is to use 4 separate
      drives, one for each media file.
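      A rough back-of-the-envelope check on that last point (a sketch: the 25 Mbit/s figure is DV25's nominal video bitrate; the ~3.6 MB/s per-stream figure including audio and overhead is an assumption):

```python
# Rough disk-bandwidth estimate for 4 simultaneous DV streams.
# Assumption: DV25 (plain DV / DVCPRO) has a nominal video bitrate of
# 25 Mbit/s; with audio and container overhead a DV file streams at
# roughly 3.6 MB/s from disk.
DV25_MBIT_PER_SEC = 25
STREAM_MB_PER_SEC = 3.6
STREAMS = 4

video_only = DV25_MBIT_PER_SEC / 8      # ~3.1 MB/s of video per stream
total = STREAMS * STREAM_MB_PER_SEC     # ~14.4 MB/s sustained, in total

print(f"one DV stream (video only): {video_only:.2f} MB/s")
print(f"four DV streams from disk:  {total:.1f} MB/s")
```

      Around 14 MB/s sustained, plus seeks whenever clips loop or switch, is why spreading the files over separate drives helps.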
      Here's a simple variant of your patch, if you want to use slab. I
      would *definitely* recommend batch converting to PhotoJPEG, however.
      How much time could it take? The performance benefits are noticeable.
      Good luck.
      -Joshua
    • Feb 22 2006 | 8:05 am
      Thanks Joshua for your explanations and the example.
      Two more things to ask:
      1) My basic patch may have given the impression that I want to mix the 4 movies on one screen (jit.window). In fact they should go to 4 different jit.windows (screens) on different monitors.
      So the question: is there a difference when using one "Master" qmetro for all render contexts, or is it not relevant to have multiple qmetros?
      2) Is there a performance penalty when sending jit.qt.movie's output through an outlet of a patch to an inlet of another patch (as opposed to using @out_name)?
      As always, thanks a lot.
      Bernd
    • Feb 22 2006 | 9:07 am
      If you are going to loop the 4 videos, you might do better to preload
      them in advance using loadram, if your RAM is large enough. Otherwise
      the separate jit.qt.movie instances take up a lot of bandwidth to and
      from your HD.
    • Feb 22 2006 | 1:34 pm
      Ah, and I forgot to ask a third one:
      When getting the time from a jit.qt.movie, is it more efficient to trigger the gettime message with the same qmetro (i.e. every 20 ms via [t b gettime]), or with a gettime message that comes from another qmetro every 200 ms (and probably interferes with the faster 20 ms "playout" thread)?
      Thanks
      Bernd
    • Feb 22 2006 | 1:45 pm
      Naturally, executing the "gettime" function takes some time, and the
      less often you send the message, the less time, overall, it will take.
      Why not use a qlim object to thin the data out from every 20 ms to
      every 200 ms or so, and trigger the gettime message that way, instead
      of having a second qmetro?
      jb
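      To illustrate the idea, here is a toy Python model of what qlim does (an illustration only, not Max code -- the real object runs in Max's low-priority queue, and also holds the most recent value to emit when the interval expires, which this simplified version skips):

```python
class QlimSketch:
    """Toy model of Max's [qlim]: let a value through at most once per
    interval; values arriving faster than that are simply dropped in
    this simplified version."""

    def __init__(self, interval_ms):
        self.interval = interval_ms
        self.last_emit = None   # time of the last value that passed

    def input(self, value, now_ms):
        """Feed `value` at time `now_ms`; return it if it passes, else None."""
        if self.last_emit is None or now_ms - self.last_emit >= self.interval:
            self.last_emit = now_ms
            return value
        return None

# A 20 ms qmetro feeding a 200 ms qlim: only 1 bang in 10 gets through.
q = QlimSketch(200)
passed = [t for t in range(0, 400, 20) if q.input("gettime", t) is not None]
print(passed)   # [0, 200]
```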
    • Feb 22 2006 | 7:56 pm
      On Feb 22, 2006, at 12:05 AM, Bernd wrote:
      > 1) My basic patch led to the conclusion that I wanted to mix the 4
      > movies on one screen (jit.window). In fact they should go to 4
      > different jit.windows (screens) on different monitors.
      >
      > So the question: Is there a difference when using one "Master"
      > qmetro for all render contexts or is it not relevant to have
      > multiple qmetros.
      Note that performance slows down for multiple windows with sync
      enabled. As previously mentioned on the list, divide the monitor
      refresh rate by the number of windows with sync enabled -- e.g. for 4
      windows @ 30 fps, you'll need a monitor refresh rate of 120 Hz.
      This is often not a possibility. If you are limited to 60 Hz,
      you should not use more than two instances of jit.window with sync
      enabled (the default). You can sometimes span a window across two
      monitor outputs without problems if they are both on the same card.
      Another solution is to use a 1280x960 output and send it to a quad
      splitter box, or four converters which can take a subrectangle (see
      Johnny DeKam's posts on this subject). Otherwise, you should use two
      machines to get four outputs. This is probably desirable anyway, as
      bandwidth to a second card (unless PCI-E) is poor.
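      The arithmetic behind that rule of thumb, sketched with the numbers from the post above:

```python
# With vertical sync enabled, each jit.window's buffer swap waits for a
# monitor refresh, so the available refreshes are divided among the
# synced windows.
def max_fps_per_window(refresh_hz, synced_windows):
    return refresh_hz / synced_windows

print(max_fps_per_window(120, 4))   # 30.0 -- 4 windows @ 30 fps need 120 Hz
print(max_fps_per_window(60, 2))    # 30.0 -- at 60 Hz, stop at 2 synced windows
print(max_fps_per_window(60, 4))    # 15.0 -- why 4 synced windows @ 60 Hz stutter
```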
      As for the qmetro being global or movie-specific, it should not matter
      much from a performance standpoint; however, a single qmetro might
      simplify your patcher logic.
      > 2) Is there a performance penalty when sending jit.qt.movie's
      > output through an outlet of a patch to an inlet of another patch
      > (as opposed to using @out_name)?
      No.
      -Joshua
    • Feb 26 2006 | 11:44 pm
      ... picking up an old thread:
      The dvducks.mov example movie is compressed in DV/DVCPRO - NTSC,
      720x480.
      The question is:
      Do the other DV formats (DVCPRO - PAL, DVCPRO50, ...) also have the
      same advantage using the uyvy colour-mode? Is there an optimal DV
      compression? And what about frame size ... could I, for instance, go
      576x384 or 480x320 and get even faster results?
      In the quest for more frames per second
      ~Carl Emil
    • Feb 27 2006 | 7:44 am
      Well, DV PAL has a nicer color space IMO: it's 4:2:0 vs. 4:1:1 in NTSC,
      which means colors won't bleed and text will stay sharp; basically it
      looks better if you start off with an uncompressed/clean source. You
      can do a test yourself: take some uncompressed footage and encode it
      to different codecs.
      Here's a pixel sampling link that explains it:
      http://www.adamwilt.com/DV-FAQ-tech.html#colorSampling
      But note there are some caveats with transcoding/generational loss, etc.
      Now, DV-NTSC and DV-PAL are video standards, but DV-25, DV-50 etc. are
      codecs - you can definitely play with the frame size. A smaller frame
      means a smaller matrix in Jitter, which means less data to process each
      'frame', thus higher frame rates. UYVY is a good mode to use if you use
      OpenGL, as it results in lower bandwidth to the video card (YUV chroma
      is half-sampled, IIRC, compared to RGB). Although slightly unintuitive,
      sometimes lowering the frame rate of your source clip results in faster
      overall processing, esp. if you do many Jitter 'fx' in serial.
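      The bandwidth point can be made concrete (a sketch: byte sizes assume packed UYVY at 2 bytes/pixel versus an ARGB char matrix at 4 bytes/pixel):

```python
# UYVY (YUV 4:2:2) shares one U/V chroma pair between two adjacent pixels,
# so a frame packs into 2 bytes per pixel, versus 4 bytes/pixel for ARGB.
def frame_bytes(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

argb = frame_bytes(720, 480, 4)
uyvy = frame_bytes(720, 480, 2)
print(argb, uyvy, uyvy / argb)   # 1382400 691200 0.5 -- half the upload per frame
```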
      v a d e //
      www.vade.info
      abstrakt.vade.info
    • Feb 27 2006 | 6:20 pm
      Thanks Vade for the very precise answer.
      > Here's a pixel sampling link that explains it:
      > http://www.adamwilt.com/DV-FAQ-tech.html#colorSampling
      DV makes more sense now. This one helped too:
      > You can do a test yourself and take some uncompressed footage and
      > encode it to different codecs.
      I tried that, and it seems like PAL 640x480 is a bit slower than NTSC
      720x480, which makes sense given the colour information you
      mentioned.
      > Well, DV pal has a nicer color space imo, its 4:2:0 vs 4:1:1 in
      > NTSC which means colors wont bleed and text will stay sharp
      I don't know about that; I couldn't see any notable difference in
      sharpness on text or photos when comparing PAL and NTSC. Perhaps the
      colours in PAL are a bit fresher, but that's it. Comparing NTSC
      720x480 and PhotoJPEG at 400x300, I found that they look equally
      blurred, but NTSC+UYVY is still faster and has a smaller file size. I
      saw almost no visible difference comparing NTSC 720x480 and NTSC
      640x480, but I squeezed out a couple of extra frames per second using
      640x480.
      It seems like this works best for me:
      Format: NTSC (DV/DVCPRO)
      Frame-size: 640x480
      Frame-rate: 25
      Pixel aspect: Square
      Scan mode: progressive
      Field dominance: none
      Audio: no track!
      Please respond if you think there is a better video format for
      jit.qt.movie + opengl.
      ~Carl Emil
    • Sep 14 2008 | 12:45 pm
      What do you do when several movies have different fps?
      Logically, I'd think to set the metro to the highest rate, with @unique 1.
      Or do you use one metro for each movie, and one at the highest rate for the render?
      thx.