Render_node for Jitter.


    Feb 22 2006 | 4:07 am
    Okay,
    I've cleaned up the patch which I've been using to record my own work
    lately, in hopes that it will be generally useful. I have tried to
    address some of the most common problems regarding patch preview and
    rendering. From the help file:
    A handy tool for managing both real-time preview and offline
    rendering in Jitter OpenGL patches. Render_node can record both audio
    and visuals offline in sync. Visuals can be rendered with proper
    temporal antialiasing or "motion blur." Playback is controlled via a
    timeline interface.
    There are a few simple steps to making a patch which will work with
    render_node. These are demonstrated in the companion
    "render_node_test_pattern."
    Render_node can render your OpenGL scenes through shader plugins,
    both offline and in realtime. Examples are in the "plugins" folder.
    Open one to activate it.
    Tested on OS X only. Theoretically, everything is cross-platform.
    If you are on Windows and you find yourself applying tweaks to get it
    to work, please share and I'll redistribute.
    I am currently getting some errors using filepath for my
    initialization, but the patch is working OK. If some javascripts or
    shaders are not found when you run the patch, then this is causing a
    problem. If anyone has worked out a good way to add a subdirectory
    of the patch's folder to the search path, let me know.
    Please visit http://2uptech.com/archive.html to download.
    -Randy

    • Feb 22 2006 | 8:02 am
      I think the best way to add a subfolder to a patch's path is to do the
      following:
      in a loadbang JS method:
      ->use something like this to get the folder path:
      var patch = this.patcher;
      var patchFile = new File(patch.filepath);
      var patchPath = patchFile.foldername.substring(0, patchFile.foldername.length - 1);
      Then, there's a Max object (I forget the name) for adding search
      paths. Use it in conjunction with the JS script to add appropriate
      subdirectories.
      One caveat is that the patch has to be fully loaded for this to work.
      So, for things like shaders etc that depend on it, you will have to
      load them after the JS loadbang method executes.
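      Putting those pieces together, a minimal sketch of the whole loadbang
      method might look like this (the "plugins" subfolder is just an example
      name, and the trailing-separator trim follows the snippet above):
      outlets = 1;
      // sketch of a js loadbang that works out the patch's folder and sends a
      // subfolder path out its outlet, where a path-adding object can pick it up
      function loadbang()
      {
          var patchFile = new File(this.patcher.filepath);
          // foldername appears to end in a separator, so trim the last character
          var folder = patchFile.foldername.substring(0, patchFile.foldername.length - 1);
          outlet(0, folder + "/plugins");
      }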
      wes
    • Feb 22 2006 | 7:28 pm
      Hi Wes,
      Thanks for the help. This is basically what I did in Max, using
      thispatcher and filepath. You can ask thispatcher for the patcher's
      path. But filepath is returning errors (-1) when I add the
      subpaths. Does the render_node patch work for you? I'd be keen to
      hear if it works on Windows as well.
      best
      Randy
    • Feb 22 2006 | 7:38 pm
      The patch is semi-functional. You have some paths for pictctrl that
      are specific to your machine on there. Also, I couldn't get the
      plugins to work although they were registered. It could just be my
      misunderstanding the patch. I would need to look at how it's
      constructed to give a more detailed analysis.
      Also, what's the purpose of the blur? I'm not familiar with the
      technique of temporally blurring recordings of renderings.
      thanks,
      wes
    • Feb 22 2006 | 7:46 pm
      After thinking about it more, I think the whole relative path thing
      should be rolled into a max external. Something that would
      automatically add subpaths to the search path when it's placed in a patch.
      wes
    • Feb 22 2006 | 10:31 pm
      Adding the render_node folder to your search path should solve all of
      the file problems for the time being until I get the filepath error
      sorted. If you do this, open the patch, open the test pattern, and
      hit play, you should see a rendered scene. Did you try the help
      hints? Hopefully they will explain the looping timeline and other
      controls.
      The bias plugin defaults to look like the input image-- you'll need
      to tweak the swatches to change the colors. The lumadisplace plugin
      should produce an obvious warp by default.
      re: blur. Motion blur (temporal antialiasing) is needed just like
      spatial antialiasing to make computer animations look decent. It's
      why film looks good even at 24fps-- the shutter records an
      integration of the light in the scene over time through what is
      essentially a box filter. High-end packages like Pixar's PhotoRealistic
      RenderMan implement motion blur through stochastic oversampling of each frame.
      Accumulating frames on the GPU is what I'm doing-- with enough
      frames, it looks good. Really it should be done adaptively, but let
      me get the patch working first...
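      If it helps to see the idea spelled out, the per-frame logic amounts to
      roughly this sketch (renderSubframe and writeFrame are made-up
      placeholders, not real Jitter calls, and in the patch the accumulation
      actually happens in a texture on the GPU):
      var fps = 30;        // output frame rate
      var subframes = 8;   // samples per output frame; more = smoother blur
      var shutter = 0.5;   // fraction of the frame interval the virtual shutter is open
      function renderFrame(n) {
          var frameTime = n / fps;
          var accum = 0;   // stands in for the accumulation texture
          for (var i = 0; i < subframes; i++) {
              // render the scene at evenly spaced times within the shutter interval
              var t = frameTime + (i / subframes) * (shutter / fps);
              accum += renderSubframe(t);
          }
          // box filter: the output frame is the plain average of the subframes
          writeFrame(accum / subframes);
      }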
      thanks,
      Randy
    • Feb 23 2006 | 4:29 pm
      Hi Randy,
      Thanks a lot for sharing this! I hope I can figure out how to adapt my
      patches to render_node. The test animation is working almost without
      problems on windows too. The only thing I encountered is a distortion in the
      output movie, similar to what I experience if I use @doublebuffer 0 on a
      window in any jitter patch (this has always been like that on my computer).
      Only now it's not visible in the jit.window (when the patch is running or
      recording) but just the output movie.
      My animations use a lot of MIDI-triggered ramps and LFOs synced to an
      external sequencer. I used to just divide the time constants by 4 and run
      the render metro at fps/4 while recording to disk.
      Any thoughts on how to adapt this to your SMPTE-based render? Thanks again.
      Best, Thijs
    • Feb 23 2006 | 5:31 pm
      I was able to get the render node to work after I added the render_node
      folder to my File prefs. I found some errors in your "sprintf" usage
      for making paths - spaces in the name will mess it up, and probably
      cause filepath errors.
      I've been grappling with a lot of the issues this patch tackles, so I'm
      grateful for your post! I'll give it a shot on windows later. My
      partner Jay has been having trouble on a Windows laptop (X300-based
      video card) getting my texture readback stuff to work properly. I am
      eager for him to try your patch out so I can find out if it's my code
      or his machine (I'm guessing it's the computer's fault...speaking of
      which - does anyone know a good source for 3rd party video card
      drivers? I had him update the drivers, but they are from Dell - the
      ATI ones didn't cooperate - and that didn't help the problem...)
      Because this is sure to be a useful reference for a while for a lot of
      people, here is a contribution to sort out the file paths issues for
      Max and standalone usage. You can insert this into the render_node
      patch as you see fit, Randy, as you are the master keeper!
    • Feb 24 2006 | 7:35 pm
      On Feb 23, 2006, at 8:29 AM, Thijs Koerselman wrote:
      > Hi Randy,
      >
      > Thanks a lot for sharing this! I hope I can figure out how to adapt
      > my patches to render_node. The test animation is working almost
      > without problems on windows too. The only thing I encountered is a
      > distortion in the output movie, similar to what I experience if I
      > use @doublebuffer 0 on a window in any jitter patch (this has
      > always been like that on my computer). Only now it's not visible in
      > the jit.window (when the patch is running or recording) but just
      > the output movie.
      >
      > My animations use a lot of midi-triggered ramps and lfo's synced to
      > an external sequencer. I used to just divide the time constants by
      > 4 and run the render metro at fps/4 while recording to disk.
      >
      > Any thoughts on how to adapt this to your smpte based render?
      > Thanks again.
      You don't really give me enough information to go on. But if you are
      using the fps/4 to ensure that all your frames get recorded and not
      skipped when going to disk, this is what the nonrealtime mode of
      render_node takes care of for you. You should just be able to remove
      that /4 stuff. When recording offline, render_node uses the
      Nonrealtime DSP driver, clocks everything from the DSP, and runs the
      time messages through jit.qfaker so that all messages will be allowed
      to complete before the next time tick.
      If you have other areas in your patch where you convert from signals
      to messages to trigger events, you can use the same switch logic to
      do the qfaker thing. There's an example in the "messages recorder"
      section of the "...test_pattern" patch.
      best,
      Randy
    • Feb 24 2006 | 8:06 pm
      Thanks I'll check it out.
      best t_
    • Feb 24 2006 | 8:25 pm
      On Feb 23, 2006, at 9:31 AM, Peter Nyboer wrote:
      > I was able to get the render node to work after I added the
      > render_node folder to my File prefs. I found some errors in your
      > "sprintf" usage for making paths - spaces in the name will mess it
      > up, and probably cause filepath errors.
      Thanks a lot Peter! I incorporated your paths fix and it works a charm.
      I had to give up and put my jsui script in the same directory as the
      patcher. I couldn't find a way to load the jsui script dynamically
      without getting an error message at startup when its file attribute
      is set to something nonexistent. I wonder if you've handled this too?
      Anyway, there's a new version of render_node out now, 0.51. It
      solves these path problems and has another minor fix or two. Now the
      timeline is initialized correctly to loop the whole file. The help
      file mentions that the keyboard commands space (start/stop) and esc
      (fullscreen) are enabled by default.
      all the best,
      Randy
    • Feb 27 2006 | 6:41 pm
      let me say, Thanks Randy! You have some great patches on your site,
      and this is sure one of them.
      However, I wonder if you or others here may be able to get me past this issue.
      I am using a patch driven by MIDI events. I can use the MIDI files in
      Max, but what is the best way, with say a "seq" object, to bang through
      MIDI files? I'd like to load in my files and then have them drive at
      the same speed as they did originally, around 103 bpm.
      Any suggestions on how to work with the Worldtime or Phasor to make
      the seq object bangable?
      Thanks,
      Computo
    • Feb 27 2006 | 7:53 pm
      Hi Joe,
      The "messages_recorder" subpatch of the test_pattern shows how to use
      seq~ synched with the recorder. You can play your MIDI data into
      seq~ (from seq or wherever) and then play it back at the original speed.
      Possibly seq (sans ~) would work with the nonrealtime driver/qfaker
      setup I am using, but I wouldn't assume anything until trying it
      out. If you wanted to use seq, you would have to hook up a few of
      the globals from the transport in order to get the scrubbing and such
      to work.
      -Randy
    • Feb 27 2006 | 8:09 pm
      But won't playing it back at original speed screw with the timing once
      the frame-by-frame capture is going?
      This is where I get confused. I don't understand how the MIDI file is
      supposed to play back and sync with the video...
      Joe
    • Feb 27 2006 | 8:20 pm
      seq~ is driven by a phasor which represents "world time" regardless
      of whether the animation is running in real time or offline.
    • Feb 27 2006 | 8:53 pm
      You the man, I'll try it all when I get home.
      THANK YOU!!
      Joe
    • Feb 27 2006 | 11:40 pm
      Perhaps this will do what you need (from the Max manual)...
      tick
      After seq has received a start -1 message, it waits for tick messages to
      advance its clock. In order to play the sequence at its original recorded
      tempo, seq must receive 48 tick messages per second. This is equivalent to
      24 ticks per quarter note (the standard for a MIDI Clock message) at a tempo
      of 120 MM. By using tick messages to advance the sequencer, you can vary the
      tempo of playback or synchronize seq with another timing source (such as
      incoming MIDI Clock messages).
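      As a quick worked example of that arithmetic (nothing here beyond the
      manual's own figures):
      // 48 ticks per second plays the sequence at its original recorded tempo
      var msPerTick = 1000 / 48;   // about 20.83 ms between tick messages
      // scaling the tick rate scales playback speed proportionally,
      // e.g. 24 ticks per second (1000 / 24, about 41.67 ms) is half speed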
      Cheers,
      Gary Lee Nelson
      TIMARA Department
      Oberlin College
      www.timara.oberlin.edu/GaryLeeNelson
    • Feb 27 2006 | 11:49 pm
      I have a related problem. I am using a frame-based sequencing method. I
      found a place to get running frame numbers in Render_node by dividing the
      world time by the appropriate value. This works great in nonreal mode but
      in realtime, it seems to skip frame numbers. I tried watching for a
      particular frame number and printing a bang when it arrived, but it was accurate
      only 50% of the time. Is there a better way?
      Cheers,
      Gary Lee Nelson
      TIMARA Department
      Oberlin College
      www.timara.oberlin.edu/GaryLeeNelson
    • Feb 28 2006 | 2:13 am
      On Feb 27, 2006, at 3:49 PM, Gary Lee Nelson wrote:
      >
      > I have a related problem. I am using a frame-based sequencing
      > method. I
      > found a place to get running frame numbers inRender_node by
      > dividing the
      > world time by the appropriate value. This works great in nonreal
      > mode but
      > in realtime, it seems to skip frame numbers. I tried watching for a
      > particular frame number and print a bang when it arrived but it was
      > accurate
      > only 50% of the time. Is there a better way?
      Hmm. Realtime mode will skip frame numbers by design, when the
      rendering cannot keep up with actual time. If you need sequential
      frame numbers, possibly you just want to hang a frame counter off of
      grn_worldtime or grn_render_done_bangs.
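      A counter in that spirit can be as small as this sketch (assuming a js
      object that gets one bang per completed frame, e.g. from
      grn_render_done_bangs):
      outlets = 1;
      var frame = 0;
      // one bang per completed render gives strictly sequential frame numbers
      function bang() {
          outlet(0, frame);
          frame++;
      }
      // send "reset" when playback (re)starts to begin counting from zero again
      function reset() {
          frame = 0;
      }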
      -Randy
    • Feb 28 2006 | 4:19 pm
      Here's my solution to frame counting following Randy's advice above.
      This code fragment verifies that all of the frame numbers arrive in play
      mode.
      grn_framecounter_start is my addition to play_controls as follows:
      Cheers,
      Gary Lee Nelson
      TIMARA Department
      Oberlin College
      www.timara.oberlin.edu/GaryLeeNelson
    • Feb 28 2006 | 5:20 pm
      This thread keeps getting better!
      I'm gonna try that tick idea, as seq~ seems a bit too finicky for my
      taste... If I could load my MIDI file right in, it would be a lot
      nicer.
      Joe
    • Feb 28 2006 | 5:30 pm
      Tick works with seq, the MIDI file player. seq~ is something else.
      Cheers,
      Gary Lee Nelson
      TIMARA Department
      Oberlin College
      www.timara.oberlin.edu/GaryLeeNelson
    • Feb 28 2006 | 7:50 pm
      yeah,
      seq~ was discussed previously in this thread, as a possibility for
      resequencing my MIDI file, but I think seq is what I'm looking for.
      Joe
    • Mar 01 2006 | 6:28 pm
      Let me begin by saying this is BY FAR the best way to output OpenGL work.
      But I am experiencing one noticeable issue.
      For some reason, my "pak camera" refuses to send data to my "render"
      object. Is there some sort of issue with this message in nonrealtime mode?
      Everything else is working EXACTLY as expected...But two important events
      are coming into the camera object, with no result.
      Any Ideas?
      thanks SO much everyone,
      Joe
    • Mar 01 2006 | 7:11 pm
      On Mar 1, 2006, at 10:28 AM, Joe Caputo wrote:
      > Let me begin by saying this is BY FAR the best way to output OpenGL
      > work.
      Glad to hear it.
      >
      > But I am experiencing one noticable issue.
      >
      > For some reason, my "pak camera" refuses to send data to my
      > "render" object. Is there some sort of issue with this message in
      > nonrealtime mode?
      I can't imagine how that would be the case. If you want to narrow the
      behavior down to a small test patch, I could look at it.
      -Randy
    • Mar 02 2006 | 7:02 pm
      About the motion blur - I see that you have a combination of slabs
      adding up frames on the GPU, how come you don't use either tp.slide.jxs
      to do a temporal fade, or use a single custom slab? Just curious what
      your reasoning is, I usually just use the tp.slide (with mixed
      results). Stochastic oversampling sounds very tasty...
      -evan
    • Mar 02 2006 | 7:30 pm
      So, I am SOOO close now... the only issue keeping me down is that
      of timing.
      My original work was at 103 BPM, but the playback seems like it's going at
      near half that.
      Basically all of my actions are supposed to start at around 16 secs in, but
      now don't start until around 30 secs.
      I tried to do the math to get a basic count at around 103, but even when I
      change my math object to a different divisor, I get the same timing.
      How and where can I change the tempo?
      Thanks all,
      Joe
    • Mar 02 2006 | 7:48 pm
      Hi Randy,
      I'm not getting any output to the jit.qt.record - I see "error: attempt
      to close track with no media" when I stop recording. The audio records
      fine, and writes to a file. I'm first setting a file name by clicking
      the button, then using the "rec" ubumenu to start/stop recording, and I
      opened the "render_node_test_pattern" patch after opening the
      render_node0.51 patch initially. I can't figure out why this is so
      after looking through the patch. I can see a preview in the "rn"
      window, but NOT the preview pwindow. I'm on max 4.5.6 and jitter 1.5.2
      on OS X 10.3.9. There are no loading errors (except the fpic not found
      error), all of the shaders are loaded correctly. Any thoughts?
      One issue I noticed is that in the "render_and_window" subpatch the
      [jit.gl.render rn] object has "@blend_enable1" when I believe it should
      be "@blend_enable 1", although oddly I don't see an error message in
      the max window.
      -Evan
    • Mar 02 2006 | 9:41 pm
      On Mar 2, 2006, at 11:48 AM, evan.raskob wrote:
      > Hi Randy,
      >
      > I'm not getting any output to the jit.qt.record - I see "error:
      > attempt to close track with no media" when I stop recording. The
      > audio records fine, and writes to a file. I'm first setting a file
      > name by clicking the button, then using the "rec" ubumenu to start/
      > stop recording, and I opened the "render_node_test_pattern" patch
      > after opening the render_node0.51 patch initially. I can't figure
      > out why this is so after looking through the patch. I can see a
      > preview in the "rn" window, but NOT the preview pwindow. I'm on
      > max 4.5.6 and jitter 1.5.2 on OS X 10.3.9. There are no loading
      > errors (except the fpic not found error), all of the shaders are
      > loaded correctly. Any thoughts?
      What machine are you running on? If it's not able to run the
      gl.slabs, this would probably produce the results you describe.
      I take it when you click the "p" toggle you do not see the preview?
      > One issue I noticed is that in the "render_and_window" subpatch the
      > [jit.gl.render rn] object has "@blend_enable1" when I believe it
      > should be "@blend_enable 1", although oddly I don't see an error
      > message in the max window.
      Meaningless attribute arguments are silently ignored. Thanks for
      catching that.
      -Randy
    • Mar 02 2006 | 9:47 pm
      On Mar 2, 2006, at 11:30 AM, Joe Caputo wrote:
      > So, I am SOOO close now...the only issue that is keeping me down
      > now is that of timing.
      >
      > My original work was at 103 BPM, but the playback seems like its
      > going at near half that.
      >
      > basically all of my actions are supposed to start at around 16 secs
      > in, but now dont start until around 30 secs.
      >
      > I tried to do the math to get a basic count at around 103, but even
      > when I change my math object to a different divisor, I get the same
      > timing.
      >
      > How and where can I change the tempo?
      In your patch. Currently there's no varispeed playback in
      render_node, but it's not a bad idea. Probably, you want to rework
      your timing to run from the world time.
      -Randy
    • Mar 02 2006 | 10:00 pm
      Actually, I'm trying to run off of the worldtime, but I'm not sure how
      to attenuate the data to make the devices act AS IF it's running at
      103 bpm...
      I thought I was doing it right, but what do I need to put between the
      worldtime and my action objects to make them THINK it's 103 bpm?
      Thanks,
      Joe
    • Mar 02 2006 | 10:01 pm
      On Mar 2, 2006, at 11:02 AM, evan.raskob wrote:
      > About the motion blur - I see that you have a combination of slabs
      > adding up frames on the GPU, how come you don't use either
      > tp.slide.jxs to do a temporal fade, or use a single custom slab?
      > Just curious what your reasoning is, I usually just use the
      > tp.slide (with mixed results). Stochastic oversampling sounds very
      > tasty...
      The frame accumulator comprises two slabs. They are both needed in
      order for cumulative feedback to occur. There's no way to create
      feedback within one shader.
      The reason to use the GPU rather than the CPU is because it's a lot
      faster.
      Also, the basic idea is not to fade from one frame to another, but to
      accumulate multiple subframes into one frame. This is not just a
      blurring effect with previous frames but real temporal antialiasing.
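      In other words it's a classic two-buffer ping-pong. A rough sketch of the
      idea, with plain numbers standing in for the two textures:
      // a pass can't read and write the same texture, so two buffers trade
      // roles each subframe: one holds the running sum, the other receives
      // the new sum
      var accumA = 0;
      var accumB = 0;
      function addSubframe(subframe) {
          accumB = accumA + subframe;   // write the new total into the "other" buffer
          var tmp = accumA;             // swap so the new total gets read next time
          accumA = accumB;
          accumB = tmp;
      }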
      -Randy
    • Mar 02 2006 | 10:14 pm
      On Mar 2, 2006, at 2:00 PM, Joe Caputo wrote:
      > Actually, Im trying to run off of the worldtime, but Im not sure how
      > to attenuate the data, to make the devices act AS IF its running at
      > 103bpm...
      >
      > I thought I was doing it right, but what do I need to put between the
      > worldtime, and my action objects to make them THINK its 103 bpm?
      World time counts in milliseconds, so you can start by asking
      questions like "how many milliseconds are in one quarter note at 103
      bpm," and forming the answers as Max expressions. I would need to
      know a lot more about your patch to get more specific.
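      For example (just the arithmetic, with the 103 taken from your earlier
      posts):
      // milliseconds in one quarter note at 103 bpm
      var msPerQuarter = 60000 / 103;   // about 582.5 ms
      // so a running beat count is roughly worldtime / msPerQuarter,
      // which an expr object can compute from the grn_worldtime value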
      -Randy
    • Mar 03 2006 | 7:20 am
      The issue stems from seq objects. I'm trying to drive them in -1
      "tick" mode with the incoming worldtime, and I just can't get them to
      play back properly. From the reference manual, I have to have 48 ticks
      per second, which is around 20.83 ms, but using speedlim to cut down
      the incoming signal, I am not getting proper playback.
      I'm trying everything that seems reasonable, and I still get odd results.
    • Mar 03 2006 | 12:16 pm
      On 2 Mar 2006, at 21:41, Randy Jones wrote:
      > What machine are you running on? If it's not able to run the
      > gl.slabs, this would probably produce the results you describe.
      A PowerBook G4 1.25 GHz w/ATI Radeon 9600 Mobility 64MB. Definitely
      runs slabs and shaders well (I just used the mac mini for a one-off
      project as a test)
      I traced through the entire patch, and it looks like all the correct
      messages are being sent and received, except I can't confirm that the
      jit.render is correctly rendering to the rn_subframe texture. I
      frankly don't think that the "to_texture" message is working properly,
      but I don't understand why I seem to be the only one having this issue.
      I also tried running Jitter Recipe 24.GLFeedback.mxt, and I have the
      same problem (no rendering to the to_texture texture destination) -
      until I use fullscreen mode in that patch, and then it works properly.
      But entering/exiting fullscreen mode doesn't fix the render_node patch
      problem.
      I'll see if anyone at cycling is monitoring this thread, if not I'll
      have to take it up with support and see what they say.
      Otherwise, very nice patch!
      -Evan
    • Mar 03 2006 | 12:20 pm
      Right, sorry, but I didn't explain myself well.
      What I meant was [jit.gl.slab @file tp.slide.jxs] --> [jit.gl.texture]
      --> (right inlet of original slab)
      Like in example jit.gl.slab-slide.pat in the
      jitter-examples/render/slab/ directory.
      That way you wouldn't need a crossfade or frame accumulator, you could
      just use a single slab and texture feedback. If I can fix whatever
      seems to be amiss with my powerbook, I'll show you what I mean...
      -Evan
    • Mar 03 2006 | 5:42 pm
      On Mar 3, 2006, at 4:16 AM, evan.raskob wrote:
      >
      > A powerbook G4 1.25 MHz w/ATI Radeon 9600 mobility 64MB.
      > Definitely runs slabs and shaders well (I just used the mac mini
      > for a one-off project as a test)
      Exactly the same machine I have.
      > I traced through the entire patch, and it looks like all the
      > correct messages are being send and received, except I can't
      > confirm that the jit.render is correctly rendering to the
      > rn_subframe texture. I frankly don't think that the "to_texture"
      > message is working properly, but I don't understand why I seem to
      > be the only one having this issue. I also tried running Jitter
      > Recipe 24.GLFeedback.mxt, and I have the same problem (no rendering
      > to the to_texture texture destination) - until I use fullscreen
      > mode in that patch, and then it works properly. But entering/
      > exiting fullscreen mode doesn't fix the render_node patch problem.
      That's weird, but at least you can focus on fixing the small
      GLFeedback patch. The render destination must be the exact same
      pixel size as the texture. Are you running any extensions which
      might have to do with display?
      render_node uses its own fullscreen mode based on jit.displays, and
      not "fullscreen 1". This probably explains the difference.
      -Randy
    • Mar 03 2006 | 5:58 pm
      On Mar 3, 2006, at 4:20 AM, evan.raskob wrote:
      > Right, sorry, but I didn't explain myself well.
      >
      > What I meant was [jit.gl.slab @file tp.slide.jxs] -->
      > [jit.gl.texture] --> (right inlet of original slab)
      Sorry, I scanned "slide" and saw jit.slide.
      You can get feedback with one slab this way, but how do you send its
      output somewhere only once every n frames? This could be done with a
      gate, but I am using another slab to flip the image anyway, so I just
      made that slab "thru 0".
      What I tried first was a higher-resolution accumulator which
      would permit HDR rendering in the future, but I ran into some
      problems with using float textures.
      -Randy
    • Mar 03 2006 | 11:52 pm
      Seq object anybody?
      How can I be doing this wrong...?
      How many ms between ticks do I need to get a play speed of 103 BPM?
      The reference manual is completely confusing.
      I am boggled.
      Thanks,
      Computo
    • Aug 26 2012 | 1:03 pm
      hi there, even though this seems to be a pretty old topic... let's see if I can get an answer to this here.
      I'm a bit confused regarding how to drive a jit.qt.movie from the grn_worldtime count so that it plays back right when rendering.
      I have a feeling it shouldn't be too hard but I can't figure it out.
      thanks in advance
      g.