generative film and audio synch


    Oct 04 2006 | 1:50 pm
    hi there,
    i've made a generative film in jitter which also generates its own soundtrack in msp. is there a way of outputting some of it to a quicktime movie, complete with audio, in sync? the thing is, i want to do it at pretty high resolution, so it can't really be done as a realtime dump to a camera, because it drops frames at high res.
    thanks for any help
    owen

    • Oct 04 2006 | 3:10 pm
      perhaps use jit.vcr. if you want it high res, make sure your audio timing is linked to your video timing, and run things in non-realtime mode.
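      the "link audio timing to video timing" idea can be sketched outside Max. this is a hedged Python illustration, not Max code; the frame rate and sample rate are assumptions (25 fps, 44.1 kHz), not values from the thread:

      ```python
      # Locking the audio clock to the video clock for non-realtime rendering.
      # Each rendered frame advances the soundtrack by a fixed sample count,
      # so audio and video cannot drift no matter how slowly frames render.

      SAMPLE_RATE = 44100   # assumed audio sample rate
      FPS = 25              # assumed video frame rate

      def samples_per_frame(sample_rate=SAMPLE_RATE, fps=FPS):
          # number of audio samples that belong to exactly one video frame
          return sample_rate // fps

      def audio_position(frame_index):
          # sample offset of a given video frame within the soundtrack
          return frame_index * samples_per_frame()
      ```

      with these numbers, every frame owns 1764 samples, and frame 25 starts exactly one second into the audio.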
      v a d e //
      www.vade.info abstrakt.vade.info
    • Oct 04 2006 | 3:19 pm
      thanks, i'll check it out :)
      owen
    • Oct 04 2006 | 4:30 pm
      search the archives for randy jones's render_node, too
    • Oct 05 2006 | 4:56 pm
      i've had a look at render_node and am trying to figure out how to integrate it into my patch.
      i wonder if jit.vcr is simpler, but in my tests both realtime and non-realtime modes drop video frames. i'm probably making a simple mistake, but all timings (audio and video) are linked, and i'm using the same realtime message that jit.qt.record uses. is this correct?
      thanks again for all help
    • Oct 06 2006 | 11:00 am
      We do a record stage first, in which 1) all basic parameters of the processing are stored every 40 ms (make sure you use a metro (not qmetro) and overdrive for this) and 2) the audio is recorded. In the second stage we use jit.qt.record to render the frames back one by one, setting all the parameters we recorded in the first stage. In stage three the audio file is added to the resulting movie with jit.qt.movie's editing commands.
      Btw, this only works if none of your processes are time-based.
      Greets, Mattijs
    • Oct 12 2006 | 8:32 am
      thanks mattijs, i think i'll give this route a try :) it's all a bit brain-frying (i am quite new to jitter)
      cheers