Render_node for Jitter.

Feb 22, 2006 at 4:07am

Okay,

I’ve cleaned up the patch which I’ve been using to record my own work
lately, in hopes that it will be generally useful. I have tried to
address some of the most common problems regarding patch preview and
rendering. From the help file:

A handy tool for managing both real-time preview and offline
rendering in Jitter OpenGL patches. Render_node can record both audio
and visuals offline in sync. Visuals can be rendered with proper
temporal antialiasing or “motion blur.” Playback is controlled via a
timeline interface.

There are a few simple steps to making a patch which will work with
render_node. These are demonstrated in the companion
“render_node_test_pattern.”

Render_node can render your OpenGL scenes through shader plugins,
both offline and in realtime. Examples are in the “plugins” folder.
Open one to activate it.

Tested on OS X only. Theoretically, everything is cross-platform.
If you are on Windows and you find yourself applying tweaks to get it
to work, please share and I’ll redistribute.

I am currently getting some errors using filepath for my
initialization, but the patch is working OK. If some JavaScript files or
shaders are not found when you run the patch, this is the likely cause.
If anyone has worked out a good way to add a subdirectory
of the patch’s folder to the search path, let me know.

Please visit http://2uptech.com/archive.html to download.

-Randy

#24546
Feb 22, 2006 at 8:02am

I think the best way to add a subfolder to a patch’s path is to do the
following:

In a JS method triggered by loadbang, use something like this to get
the folder path:

var patch = this.patcher;                  // the patcher containing this js object
var patchFile = new File(patch.filepath);  // look up the patcher’s own file
// strip the trailing separator from the folder name
var patchPath = patchFile.foldername.substring(0, patchFile.foldername.length - 1);

Then, there’s a Max object (I forget the name) for adding search
paths. Use it in conjunction with the JS script to add the appropriate
subdirectories.

One caveat is that the patch has to be fully loaded for this to work.
So, for things like shaders, etc., that depend on it, you will have to
load them after the JS loadbang method executes.
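
Putting the pieces together, a minimal js sketch might look like this
(untested; the subfolder names are only examples, and the outlet is
meant to feed whatever object you use to add search paths):

// for a [js] object with one outlet
outlets = 1;

function loadbang()
{
    var patch = this.patcher;                   // the patcher containing this js object
    var patchFile = new File(patch.filepath);   // look up the patcher's own file
    // strip the trailing separator from the folder name
    var patchPath = patchFile.foldername.substring(0, patchFile.foldername.length - 1);

    // send each subfolder path out the outlet, to be added to the search path
    var subfolders = ["shaders", "scripts"];    // example names only
    for (var i = 0; i < subfolders.length; i++) {
        outlet(0, patchPath + "/" + subfolders[i] + "/");
    }
}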

wes

#71148
Feb 22, 2006 at 7:28pm

Hi Wes,

Thanks for the help. This is basically what I did in Max, using
thispatcher and filepath. You can ask thispatcher for the patcher’s
path. But filepath is returning errors (-1) when I add the
subpaths. Does the render_node patch work for you? I’d be keen to
hear if it works on Windows as well.

best
Randy

#71149
Feb 22, 2006 at 7:38pm

The patch is semi-functional. You have some paths for pictctrl that
are specific to your machine in there. Also, I couldn’t get the
plugins to work, although they were registered. It could just be me
misunderstanding the patch. I would need to look at how it’s
constructed to give a more detailed analysis.

Also, what’s the purpose of the blur? I’m not familiar with the
technique of temporally blurring recordings of renderings.

thanks,
wes

#71150
Feb 22, 2006 at 7:46pm

After thinking about it more, I think the whole relative path thing
should be rolled into a Max external – something that would
automatically add subpaths to the search path wherever it’s used in a patch.

wes

#71151
Feb 22, 2006 at 10:31pm

Adding the render_node folder to your search path should solve all of
the file problems for the time being, until I get the filepath error
sorted. If you do this, then open the patch, open the test pattern, and
hit play, you should see a rendered scene. Did you try the help
hints? Hopefully they will explain the looping timeline and other
controls.

The bias plugin defaults to look like the input image – you’ll need
to tweak the swatches to change the colors. The lumadisplace plugin
should produce an obvious warp by default.

re: blur. Motion blur (temporal antialiasing) is needed, just like
spatial antialiasing, to make computer animations look decent. It’s
why film looks good even at 24 fps – the shutter records an
integration of the light in the scene over time through what is
essentially a box filter. High-end packages like Pixar’s PRMan
implement motion blur through stochastic oversampling of each frame.
Accumulating frames on the GPU is what I’m doing – with enough
frames, it looks good. Really it should be done adaptively, but let
me get the patch working first…
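
Just to spell out what the accumulator approximates (my notation, not
anything from the patch): one output frame is roughly the average of N
subframes spread evenly across the shutter interval Δt,

$$ I_{\mathrm{frame}} \;\approx\; \frac{1}{N}\sum_{k=0}^{N-1} I\!\left(t_0 + \tfrac{k}{N}\,\Delta t\right) $$

which is just the box-filter integral sampled at N points – the more
subframes per frame, the better the approximation (and the longer the
render).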

thanks,
Randy

#71152
Feb 23, 2006 at 4:29pm

Hi Randy,

Thanks a lot for sharing this! I hope I can figure out how to adapt my
patches to render_node. The test animation is working almost without
problems on Windows too. The only thing I encountered is a distortion in the
output movie, similar to what I experience if I use @doublebuffer 0 on a
window in any Jitter patch (this has always been like that on my computer).
Only now it’s not visible in the jit.window (while the patch is running or
recording), but only in the output movie.

My animations use a lot of MIDI-triggered ramps and LFOs synced to an
external sequencer. I used to just divide the time constants by 4 and run
the render metro at fps/4 while recording to disk.

Any thoughts on how to adapt this to your SMPTE-based render? Thanks again.

Best, Thijs

#71153
Feb 23, 2006 at 5:31pm

I was able to get the render node to work after I added the render_node
folder to my File prefs. I found some errors in your “sprintf” usage
for making paths – spaces in the name will mess it up, and probably
cause filepath errors.

I’ve been grappling with a lot of the issues this patch tackles, so I’m
grateful for your post! I’ll give it a shot on Windows later. My
partner Jay has been having trouble on a Windows laptop (X300-based
video card) getting my texture readback stuff to work properly. I am
eager for him to try your patch out so I can find out if it’s my code
or his machine (I’m guessing it’s the computer’s fault… speaking of
which – does anyone know a good source for third-party video card
drivers? I had him update the drivers, but they are from Dell – the
ATI ones didn’t cooperate – and that didn’t help the problem…)

Because this is sure to be a useful reference for a while for a lot of
people, here is a contribution to sort out the file path issues for
Max and standalone usage. You can insert this into the render_node
patch as you see fit, Randy, as you are the master keeper!

#P window setfont Geneva 9.;
#P window linecount 1;
#P newex 60 58 30 196617 t b b;
#P message 41 157 29 196617 path;
#P newex 60 133 47 196617 gate 2 1;
#P newex 60 112 27 196617 + 1;
#N vpatcher 25 70 324 282;
#P window setfont Geneva 9.;
#P newex 14 110 19 196617 t 1;
#P newex 13 90 37 196617 r menu;
#P button 47 40 15 0;
#N comlet 1 if is runtime;
#P outlet 13 157 15 0;
#P newex 15 61 27 196617 t b 0;
#P inlet 19 34 15 0;
#P window linecount 2;
#P message 50 63 164 196617 ; max runtime 1 sendapppath menu;
#P window linecount 1;
#P newex 13 134 16 196617 t i;
#P window linecount 0;
#P comment 34 113 100 196617 the above message to max will only output
to “r menu” if this is in a standalone;
#P connect 4 1 1 0;
#P connect 8 0 1 0;
#P connect 1 0 5 0;
#P connect 7 0 8 0;
#P connect 3 0 4 0;
#P connect 6 0 4 0;
#P connect 4 0 2 0;
#P pop;
#P newobj 60 83 86 196617 p runtimecheck;
#N vpatcher 80 70 618 420;
#P origin 0 -189;
#P window setfont Geneva 9.;
#P window linecount 1;
#P comment 98 199 378 196617 sets up a value to be used anywhere you
need the root path in your app;
#P outlet 80 199 15 0;
#P comment 96 173 378 196617 if it doesn’t match , it comes out the
last outlet. Good for Windows XP systems :);
#P button 62 33 15 0;
#P newex 40 195 33 196617 v app;
#P window linecount 0;
#P newex 40 56 30 196617 t b b;
#P window linecount 1;
#P newex 40 113 32 196617 r init;
#P window linecount 2;
#P message 40 81 103 196617 ; max sendapppath init;
#P inlet 40 31 15 0;
#P window linecount 0;
#P message 110 134 240 196617 re
“(.*\\/)STANDALONE.app/Contents/MacOS/$”;
#P newex 40 138 50 196617 tosymbol;
#P newex 40 169 53 196617 regexp;
#P comment 98 152 274 196617 be sure to replace STANDALONE with the
name of your app;
#P connect 4 0 7 0;
#P connect 9 0 7 0;
#P connect 7 0 5 0;
#P connect 6 0 2 0;
#P connect 3 0 1 0;
#P connect 2 0 1 0;
#P connect 1 1 8 0;
#P connect 1 3 8 0;
#P connect 1 1 11 0;
#P connect 1 3 11 0;
#P connect 7 1 3 0;
#P pop;
#P newobj 126 172 76 196617 p app-rootpath;
#P hidden newex 134 217 112 196617 s grn_patcher_path;
#N thispatcher;
#Q end;
#P hidden newobj 41 174 64 196617 thispatcher;
#N vpatcher 20 74 526 335;
#P window setfont Geneva 9.;
#P window linecount 1;
#P comment 307 208 130 196617 see wha’appens to spaces?;
#P newex 165 149 51 196617 tosymbol;
#P newex 50 149 51 196617 tosymbol;
#P newex 50 91 51 196617 tosymbol;
#P message 307 191 107 196617 “Macintoshscripts/”;
#P newex 307 167 62 196617 prepend set;
#P window linecount 0;
#P message 307 126 132 196617 Macintosh HD:/Folder one/;
#P window linecount 1;
#P newex 307 147 109 196617 sprintf “%sscripts/”;
#P window linecount 0;
#P newex 165 119 96 196617 sprintf %sscripts/;
#P newex 165 173 78 196617 prepend append;
#P newex 50 119 99 196617 sprintf %sshaders/;
#P newex 165 197 86 196617 filepath search;
#P newex 50 197 86 196617 filepath search;
#P newex 50 173 78 196617 prepend append;
#P inlet 50 32 15 0;
#P comment 307 109 100 196617 here’s how it was:;
#P connect 1 0 12 0;
#P connect 12 0 5 0;
#P connect 5 0 13 0;
#P connect 13 0 2 0;
#P connect 2 0 3 0;
#P fasten 12 0 7 0 55 113 170 113;
#P connect 7 0 14 0;
#P connect 14 0 6 0;
#P connect 6 0 4 0;
#P connect 9 0 8 0;
#P connect 8 0 10 0;
#P connect 10 0 11 0;
#P pop;
#P hidden newobj 56 217 60 196617 p subdirs;
#N vpatcher 20 74 687 533;
#P window setfont Geneva 9.;
#P window linecount 1;
#P newex 62 251 131 196617 s grn_loadbang_last;
#P newex 88 230 131 196617 s grn_loadbang_after_js;
#P button 18 142 15 0;
#P newex 109 208 131 196617 s grn_load_from_subdir;
#P newex 52 164 64 196617 t b b b b;
#P newex 51 142 64 196617 loadbang;
#P outlet 191 281 15 0;
#P window linecount 0;
#P comment 49 66 200 196617 This little nugget adds some subdirectories
of the patcher’s folder to the search path before sending a bang to
grn_load_from_subdir. This way scrips and shaders can be kept in
subfolders.;
#P connect 2 0 3 0;
#P connect 5 0 3 0;
#P connect 3 0 7 0;
#P connect 3 1 6 0;
#P connect 3 2 4 0;
#P connect 3 3 1 0;
#P pop;
#P hidden newobj 60 37 75 196617 p subdirs_load;
#P window linecount 8;
#P comment 154 57 100 196617 if this is a standalone , then send the
bang to elsewhere – thispatcher’s path msg is useless in standalones
(last time I checked!);
#P fasten 8 0 9 0 65 153 46 153;
#P connect 9 0 3 0;
#P fasten 5 0 2 0 131 204 61 204;
#P fasten 3 1 2 0 100 204 61 204;
#P connect 1 0 10 0;
#P connect 10 1 6 0;
#P connect 6 0 7 0;
#P connect 7 0 8 0;
#P connect 10 0 8 1;
#P fasten 8 1 5 0 102 161 131 161;
#P fasten 5 0 4 0 131 204 139 204;
#P fasten 3 1 4 0 100 204 139 204;
#P window clipboard copycount 11;

#71154
Feb 24, 2006 at 7:35pm

On Feb 23, 2006, at 8:29 AM, Thijs Koerselman wrote:

> Hi Randy,
>
> Thanks a lot for sharing this! I hope I can figure out how to adapt
> my patches to render_node. The test animation is working almost
> without problems on windows too. The only thing I encountered is a
> distortion in the output movie, similar to what I experience if I
> use @doublebuffer 0 on a window in any jitter patch (this has
> always been like that on my computer). Only now it’s not visible in
> the jit.window (when the patch is running or recording) but just
> the output movie.
>
> My animations use a lot of midi-triggered ramps and lfo’s synced to
> an external sequencer. I used to just divide the time constants by
> 4 and run the render metro at fps/4 while recording to disk.
>
> Any thoughts on how to adapt this to your smpte based render?
> Thanks again.

You don’t really give me enough information to go on. But if you are
using the fps/4 to ensure that all your frames get recorded and not
skipped when going to disk, this is what the nonrealtime mode of
render_node takes care of for you. You should just be able to remove
that /4 stuff. When recording offline, render_node uses the
Nonrealtime DSP driver, clocks everything from the DSP, and runs the
time messages through jit.qfaker so that all messages will be allowed
to complete before the next time tick.

If you have other areas in your patch where you convert from signals
to messages to trigger events, you can use the same switch logic to
do the qfaker thing. There’s an example in the “messages recorder”
section of the “…test_pattern” patch.

best,
Randy

#71155
Feb 24, 2006 at 8:06pm

Thanks, I’ll check it out.

best t_

#71156
Feb 24, 2006 at 8:25pm

On Feb 23, 2006, at 9:31 AM, Peter Nyboer wrote:

> I was able to get the render node to work after I added the
> render_node folder to my File prefs. I found some errors in your
> “sprintf” usage for making paths – spaces in the name will mess it
> up, and probably cause filepath errors.

Thanks a lot Peter! I incorporated your paths fix and it works a charm.

I had to give up and put my jsui script in the same directory as the
patcher. I couldn’t find a way to load the jsui script dynamically
without getting an error message at startup when its file attribute
is set to something nonexistent. I wonder if you’ve handled this too?

Anyway, there’s a new version of render_node out now, 0.51. It
solves these path problems and has another minor fix or two. Now the
timeline is initialized correctly to loop the whole file. The help
file mentions that the keyboard commands space (start/stop) and esc
(fullscreen) are enabled by default.

all the best,
Randy

#71157
Feb 27, 2006 at 6:41pm

Let me say thanks, Randy! You have some great patches on your site,
and this is surely one of them.

However, I wonder if you or others here may be able to get me past this issue.

I am using a patch driven by MIDI events. I can use the MIDI files in
Max, but what is the best way, with, say, a “seq” object, to bang through
MIDI files? I’d like to load in my files and then have them play at
the same speed as they did originally, around 103 bpm.

Any suggestions on how to work with the Worldtime or Phasor to make
the seq object bangable?

Thanks,
Computo

#71158
Feb 27, 2006 at 7:53pm

Hi Joe,

The “messages_recorder” subpatch of the test_pattern shows how to use
seq~ synched with the recorder. You can play your MIDI data into
seq~ (from seq or wherever) and then play it back at the original speed.

Possibly seq (sans ~) would work with the nonrealtime driver/qfaker
setup I am using, but I wouldn’t assume anything until trying it
out. If you wanted to use seq, you would have to hook up a few of
the globals from the transport in order to get the scrubbing and such
to work.

-Randy

#71159
Feb 27, 2006 at 8:09pm

But won’t playing it back at original speed screw with the timing once
the frame-by-frame capture is going?

This is where I get confused. I don’t understand how the MIDI file is
supposed to play back and sync with the video…

Joe

#71160
Feb 27, 2006 at 8:20pm

seq~ is driven by a phasor which represents “world time” regardless
of whether the animation is running in real time or offline.

#71161
Feb 27, 2006 at 8:53pm

You the man, I’ll try it all when I get home.

THANK YOU!!

Joe

#71162
Feb 27, 2006 at 11:40pm

Perhaps this will do what you need (from the Max manual)…

tick
After seq has received a start -1 message, it waits for tick messages to
advance its clock. In order to play the sequence at its original recorded
tempo, seq must receive 48 tick messages per second. This is equivalent to
24 ticks per quarter note (the standard for a MIDI Clock message) at a tempo
of 120 MM. By using tick messages to advance the sequencer, you can vary the
tempo of playback or synchronize seq with another timing source (such as
incoming MIDI Clock messages).
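
At 103 bpm, for example, that works out to 103 x 24 / 60 ≈ 41.2 ticks
per second, i.e. one tick roughly every 24.3 ms.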

Cheers,
Gary Lee Nelson
TIMARA Department
Oberlin College
http://www.timara.oberlin.edu/GaryLeeNelson

#71163
Feb 27, 2006 at 11:49pm

I have a related problem. I am using a frame-based sequencing method. I
found a place to get running frame numbers in Render_node by dividing the
world time by the appropriate value. This works great in nonrealtime mode but
in realtime, it seems to skip frame numbers. I tried watching for a
particular frame number and printing a bang when it arrived, but it was accurate
only 50% of the time. Is there a better way?
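
(By “the appropriate value” I mean the frame duration in ms – e.g.
1000/30 ≈ 33.3 ms at 30 fps, so frame number ≈ world time / 33.3.)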

Cheers,
Gary Lee Nelson
TIMARA Department
Oberlin College
http://www.timara.oberlin.edu/GaryLeeNelson

#71164
Feb 28, 2006 at 2:13am

On Feb 27, 2006, at 3:49 PM, Gary Lee Nelson wrote:
>
> I have a related problem. I am using a frame-based sequencing
> method. I
> found a place to get running frame numbers inRender_node by
> dividing the
> world time by the appropriate value. This works great in nonreal
> mode but
> in realtime, it seems to skip frame numbers. I tried watching for a
> particular frame number and print a bang when it arrived but it was
> accurate
> only 50% of the time. Is there a better way?

Hmm. Realtime mode will skip frame numbers by design, when the
rendering cannot keep up with actual time. If you need sequential
frame numbers, possibly you just want to hang a frame counter off of
grn_worldtime or grn_render_done_bangs.

-Randy

#71165
Feb 28, 2006 at 4:19pm

Here’s my solution to frame counting following Randy’s advice above.

This code fragment verifies that all of the frame numbers arrive in play
mode.

#P window setfont “Sans Serif” 9.;
#P window linecount 1;
#P newex 705 104 129 196617 r grn_framecounter_start;
#P button 705 126 15 0;
#P number 534 151 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#N counter;
#X flags 0 0;
#P newobj 571 148 66 196617 counter;
#P newex 571 124 126 196617 r grn_render_done_bangs;
#P newex 571 171 40 196617 change;
#P message 619 173 33 196617 clear;
#P newex 570 197 44 196617 capture;
#P connect 6 0 4 2;
#P connect 6 0 1 0;
#P connect 4 0 5 0;
#P connect 4 0 2 0;
#P connect 7 0 6 0;
#P connect 3 0 4 0;
#P connect 1 0 0 0;
#P connect 2 0 0 0;
#P window clipboard copycount 8;

grn_framecounter_start is my addition to play_controls as follows:

#P window setfont “Sans Serif” 9.;
#P window linecount 1;
#P newex 182 140 129 196617 s grn_framecounter_start;
#P number 11 274 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 54 106 64 196617 change;
#P message 48 304 50 196617 toggle 0;
#P newex 59 130 64 196617 select 1 0;
#P newex 52 82 64 196617 route toggle;
#P outlet 48 342 15 0;
#P newex 48 274 108 196617 select 0;
#P newex 48 240 108 196617 r grn_transport_state;
#P window setfont Charcoal 10.;
#P message 85 189 44 -1331691510 record;
#B color 10;
#P window setfont “Sans Serif” 9.;
#P newex 274 101 50 196617 select 1;
#P newex 166 101 50 196617 select 1;
#P window setfont Charcoal 10.;
#P message 241 190 32 -1331691510 stop;
#B color 10;
#P message 166 189 30 -1331691510 play;
#B color 10;
#P window setfont “Sans Serif” 9.;
#P newex 223 232 111 196617 s grn_to_record_logic;
#P inlet 274 81 15 0;
#P inlet 52 48 15 0;
#P inlet 166 81 15 0;
#P connect 13 0 8 0;
#P connect 13 0 17 0;
#P connect 6 0 4 0;
#P connect 6 0 17 0;
#P connect 8 0 3 0;
#P connect 4 0 3 0;
#P connect 2 0 7 0;
#P connect 13 1 5 0;
#P connect 7 0 5 0;
#P connect 5 0 3 0;
#P connect 0 0 6 0;
#P connect 15 0 13 0;
#P connect 12 0 15 0;
#P connect 1 0 12 0;
#P connect 14 0 11 0;
#P connect 10 0 14 0;
#P connect 9 0 10 0;
#P connect 9 0 16 0;
#P window clipboard copycount 18;

Cheers,
Gary Lee Nelson
TIMARA Department
Oberlin College
http://www.timara.oberlin.edu/GaryLeeNelson

#71166
Feb 28, 2006 at 5:20pm

This thread keeps getting better!

I’m gonna try that tick idea, as seq~ seems a bit too finicky for my
taste… If I could load my MIDI file right in, it would be a lot
nicer.

Joe

#71167
Feb 28, 2006 at 5:30pm

Tick works with seq – the MIDI file player. seq~ is something else.
Cheers,
Gary Lee Nelson
TIMARA Department
Oberlin College
http://www.timara.oberlin.edu/GaryLeeNelson

#71168
Feb 28, 2006 at 7:50pm

yeah,

seq~ was discussed previously in this thread as a possibility for
resequencing my MIDI file, but I think seq is what I’m looking for.

Joe

#71169
Mar 1, 2006 at 6:28pm

Let me begin by saying this is BY FAR the best way to output OpenGL work.

But I am experiencing one noticeable issue.

For some reason, my “pak camera” refuses to send data to my “render”
object. Is there some sort of issue with this message in nonrealtime mode?

Everything else is working EXACTLY as expected… But two important events
are coming into the camera object, with no result.

Any ideas?

thanks SO much everyone,
Joe

#71170
Mar 1, 2006 at 7:11pm

On Mar 1, 2006, at 10:28 AM, Joe Caputo wrote:

> Let me begin by saying this is BY FAR the best way to output OpenGL
> work.

Glad to hear it.

>
> But I am experiencing one noticable issue.
>
> For some reason, my “pak camera” refuses to send data to my
> “render” object. Is there some sort of issue with this message in
> nonrealtime mode?

I can’t imagine how that would be the case. If you want to narrow the
behavior down to a small test patch, I could look at it.

-Randy

#71171
Mar 2, 2006 at 7:02pm

About the motion blur – I see that you have a combination of slabs
adding up frames on the GPU. How come you don’t use either tp.slide.jxs
to do a temporal fade, or a single custom slab? Just curious what
your reasoning is; I usually just use the tp.slide (with mixed
results). Stochastic oversampling sounds very tasty…

-evan

#71172
Mar 2, 2006 at 7:30pm

So, I am SOOO close now… the only issue that is keeping me down now is that
of timing.

My original work was at 103 BPM, but the playback seems like it’s going at
near half that.

Basically, all of my actions are supposed to start at around 16 secs in, but
now don’t start until around 30 secs.

I tried to do the math to get a basic count at around 103 bpm, but even when I
change my math object to a different divisor, I get the same timing.

How and where can I change the tempo?

Thanks all,
Joe

#71173
Mar 2, 2006 at 7:48pm

Hi Randy,

I’m not getting any output to the jit.qt.record – I see “error: attempt
to close track with no media” when I stop recording. The audio records
fine, and writes to a file. I’m first setting a file name by clicking
the button, then using the “rec” ubumenu to start/stop recording, and I
opened the “render_node_test_pattern” patch after opening the
render_node0.51 patch initially. I can’t figure out why this is so
after looking through the patch. I can see a preview in the “rn”
window, but NOT the preview pwindow. I’m on Max 4.5.6 and Jitter 1.5.2
on OS X 10.3.9. There are no loading errors (except the fpic not found
error), all of the shaders are loaded correctly. Any thoughts?

One issue I noticed is that in the “render_and_window” subpatch the
[jit.gl.render rn] object has “@blend_enable1” when I believe it should
be “@blend_enable 1”, although oddly I don’t see an error message in
the Max window.

-Evan

#71174
Mar 2, 2006 at 9:41pm

On Mar 2, 2006, at 11:48 AM, evan.raskob wrote:

> Hi Randy,
>
> I’m not getting any output to the jit.qt.record – I see “error:
> attempt to close track with no media” when I stop recording. The
> audio records fine, and writes to a file. I’m first setting a file
> name by clicking the button, then using the “rec” ubumenu to start/
> stop recording, and I opened the “render_node_test_pattern” patch
> after opening the render_node0.51 patch initially. I can’t figure
> out why this is so after looking through the patch. I can see a
> preview in the “rn” window, but NOT the preview pwindow. I’m on
> max 4.5.6 and jitter 1.5.2 on OS X 10.3.9. There are no loading
> errors (except the fpic not found error), all of the shaders are
> loaded correctly. Any thoughts?

What machine are you running on? If it’s not able to run the
gl.slabs, this would probably produce the results you describe.

I take it when you click the “p” toggle you do not see the preview?

> One issue I noticed is that in the “render_and_window” subpatch the
> [jit.gl.render rn] object has “@blend_enable1″ when I believe it
> should be “@blend_enable 1″, although oddly I don’t see an error
> message in the max window.

Meaningless attribute arguments are silently ignored. Thanks for
catching that.

-Randy

#71175
Mar 2, 2006 at 9:47pm

On Mar 2, 2006, at 11:30 AM, Joe Caputo wrote:

> So, I am SOOO close now…the only issue that is keeping me down
> now is that of timing.
>
> My original work was at 103 BPM, but the playback seems like its
> going at near half that.
>
> basically all of my actions are supposed to start at around 16 secs
> in, but now dont start until around 30 secs.
>
> I tried to do the math to get a basic count at around 103, but even
> when I change my math object to a different divisor, I get the same
> timing.
>
> How and where can I change the tempo?

In your patch. Currently there’s no varispeed playback in
render_node, but it’s not a bad idea. Probably you want to rework
your timing to run from the world time.

-Randy

#71176
Mar 2, 2006 at 10:00pm

Actually, I’m trying to run off of the worldtime, but I’m not sure how
to attenuate the data to make the devices act AS IF it’s running at
103 bpm…

I thought I was doing it right, but what do I need to put between the
worldtime and my action objects to make them THINK it’s 103 bpm?

Thanks,
Joe

#71177
Mar 2, 2006 at 10:01pm

On Mar 2, 2006, at 11:02 AM, evan.raskob wrote:

> About the motion blur – I see that you have a combination of slabs
> adding up frames on the GPU, how come you don’t use either
> tp.slide.jxs to do a temporal fade, or use a single custom slab?
> Just curious what your reasoning is, I usually just use the
> tp.slide (with mixed results). Stochastic oversampling sounds very
> tasty…

The frame accumulator comprises two slabs. They are both needed in
order for cumulative feedback to occur. There’s no way to create
feedback within one shader.

The reason to use the GPU rather than the CPU is that it’s a lot
faster.

Also, the basic idea is not to fade from one frame to another, but to
accumulate multiple subframes into one frame. This is not just a
blurring effect with previous frames but real temporal antialiasing.

-Randy

#71178
Mar 2, 2006 at 10:14pm

On Mar 2, 2006, at 2:00 PM, Joe Caputo wrote:

> Actually, Im trying to run off of the worldtime, but Im not sure how
> to attenuate the data, to make the devices act AS IF its running at
> 103bpm…
>
> I thought I was doing it right, but what do I need to put between the
> worldtime, and my action objects to make them THINK its 103 bpm?

World time counts in milliseconds, so you can start by asking
questions like “how many milliseconds are in one quarter note at 103
bpm,” and forming the answers as Max expressions. I would need to
know a lot more about your patch to get more specific.
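
For example, one quarter note at 103 bpm lasts 60000 / 103 ≈ 582.5 ms,
so something like [expr 60000./103] gives you the quarter-note length
in milliseconds to build from.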

-Randy

#71179
Mar 3, 2006 at 7:20am

The issue stems from seq objects. I’m trying to drive them in -1
“tick” mode, with the incoming worldtime, and I just can’t get them to
play back properly. From the reference manual, I have to have 48 ticks
per second, which is around 20.83 ms per tick, but using speedlim to cut down
the incoming signal, I am not getting proper playback.

I’m trying everything that seems reasonable, and I still get odd results.

#71180
Mar 3, 2006 at 12:16pm

On 2 Mar 2006, at 21:41, Randy Jones wrote:

> What machine are you running on? If it’s not able to run the
> gl.slabs, this would probably produce the results you describe.

A PowerBook G4 1.25 GHz w/ ATI Radeon 9600 Mobility 64MB. Definitely
runs slabs and shaders well (I just used the Mac mini for a one-off
project as a test).

I traced through the entire patch, and it looks like all the correct
messages are being sent and received, except I can’t confirm that the
jit.render is correctly rendering to the rn_subframe texture. I
frankly don’t think that the “to_texture” message is working properly,
but I don’t understand why I seem to be the only one having this issue.
I also tried running Jitter Recipe 24.GLFeedback.mxt, and I have the
same problem (no rendering to the to_texture texture destination) –
until I use fullscreen mode in that patch, and then it works properly.
But entering/exiting fullscreen mode doesn’t fix the render_node patch
problem.

I’ll see if anyone at Cycling is monitoring this thread; if not, I’ll
have to take it up with support and see what they say.

Otherwise, very nice patch!

-Evan

#71181
Mar 3, 2006 at 12:20pm

Right, sorry, but I didn’t explain myself well.

What I meant was [jit.gl.slab @file tp.slide.jxs] -> [jit.gl.texture]
-> (right inlet of original slab)

Like in example jit.gl.slab-slide.pat in the
jitter-examples/render/slab/ directory.

That way you wouldn’t need a crossfade or frame accumulator; you could
just use a single slab and texture feedback. If I can fix whatever
seems to be amiss with my PowerBook, I’ll show you what I mean…

-Evan

#71182
Mar 3, 2006 at 5:42pm

On Mar 3, 2006, at 4:16 AM, evan.raskob wrote:
>
> A PowerBook G4 1.25 GHz w/ ATI Radeon 9600 Mobility 64MB.
> Definitely runs slabs and shaders well (I just used the mac mini
> for a one-off project as a test)

Exactly the same machine I have.

> I traced through the entire patch, and it looks like all the
> correct messages are being send and received, except I can’t
> confirm that the jit.render is correctly rendering to the
> rn_subframe texture. I frankly don’t think that the “to_texture”
> message is working properly, but I don’t understand why I seem to
> be the only one having this issue. I also tried running Jitter
> Recipe 24.GLFeedback.mxt, and I have the same problem (no rendering
> to the to_texture texture destination) – until I use fullscreen
> mode in that patch, and then it works properly. But entering/
> exiting fullscreen mode doesn’t fix the render_node patch problem.

That’s weird, but at least you can focus on fixing the small
GLFeedback patch. The render destination must be the exact same
pixel size as the texture. Are you running any extensions which
might have to do with display?

render_node uses its own fullscreen mode based on jit.displays, and
not “fullscreen 1”. This probably explains the difference.

-Randy

#71183
Mar 3, 2006 at 5:58pm

On Mar 3, 2006, at 4:20 AM, evan.raskob wrote:

> Right, sorry, but I didn’t explain myself well.
>
> What I meant was [jit.gl.slab @file tp.slide.jxs] ->
> [jit.gl.texture] -> (right inlet of original slab)

Sorry, I scanned “slide” and saw jit.slide.

You can get feedback with one slab this way, but how do you send its
output somewhere only once every n frames? This could be done with a
gate, but I am using another slab to flip the image anyway, so I just
made that slab “thru 0”.

What I tried first was a higher-resolution accumulator which
would permit HDR rendering in the future, but I ran into some
problems with using float textures.

-Randy

#71184
Mar 3, 2006 at 11:52pm

Seq object anybody?

How can I be doing this wrong…?

How many ms do I need to limit the ticks to in order to get a playback tempo of 103 bpm?

The reference manual is completely confusing.

I am boggled.

Thanks,
Computo

#71185
Aug 26, 2012 at 1:03pm

Hi there, even though this seems to be a pretty old topic… let’s see if I can get an answer to this here.
I’m a bit confused about how to drive a jit.qt.movie from the grn_worldtime count so that it plays back correctly when rendering.
I have a feeling it shouldn’t be too hard, but I can’t figure it out.

thanks in advance
g.
