Forums > Jitter

audio/video record at highest rez possible

January 4, 2007 | 6:04 pm

i’ve been using jitter for about 6 months, and this is my first post to the list.

i have a patch that uses peak~ to trigger plume and brcosa effects on the video of the same mov. in addition, 3 interweaving decay functions also feed the plume and brcosa objects so that the effects build, flow, and ebb over time.

i’m having a hard time finding the best way to record the output of this patch with:

1) as high a resolution as possible
2) as close to the original's picture-sound sync as possible
3) as little time and pitch distortion in the sound as possible

i’ve tried realtime and framedumping with both jit.vcr and jit.qt.record/sfrecord~ as well as audio out with spigot~ and with soundflower.

it works best with the qt rendered at 320×240 in photo-jpeg at 15fps (as stated in the jitter manual appendix), but i’m hoping to get as close to 720×640 as possible. i’ve had limited success at higher resolutions, frame rates, and codecs. the mov is long (about an hour), so it takes a while to create variations.

i’ve yet to record directly out to a deck, try to loadram or buffer the movie, or split the video and audio sources apart.

until someone points me in a firm direction, i’ll slowly attempt each solution, but i could use help.

hopefully, there’s a dumb simple solution that i just don’t know about.

sorry for the long first post. i’m running max 4.6/jitter 1.62 on a 2.7ghz g5.

ps- i just read about 1.63b, so i’ll try that as well.

thanks in advance.


January 4, 2007 | 10:38 pm

Our method is to write only the parameters of every effect we use into a coll during a low-resolution record stage, then render them back frame by frame (non-realtime) in a second stage, at as high a resolution and framerate as you wish.
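To show the shape of the idea, here is a toy version of the two stages in Python (not Max; all parameter names and values are invented for illustration):

```python
# Stage 1 (low resolution, realtime): skip recording video entirely and
# just log a timestamped snapshot of every effect parameter -- the role
# the coll plays in Max.
automation = []

def record_snapshot(t_ms, params):
    automation.append((t_ms, dict(params)))

# pretend three snapshots were captured while the piece played
record_snapshot(0,  {"plume_gain": 0.1, "brcosa_contrast": 1.0})
record_snapshot(33, {"plume_gain": 0.5, "brcosa_contrast": 1.2})
record_snapshot(66, {"plume_gain": 0.3, "brcosa_contrast": 0.9})

# Stage 2 (non-realtime): replay the log one entry per frame, at any
# resolution; each frame may take as long as it needs to render.
rendered = [f"frame@{t_ms}ms:{params['plume_gain']}"
            for t_ms, params in automation]
```

The point is that stage 1 only has to be fast enough to log numbers, and stage 2 has no realtime constraint at all.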

During the low resolution recording stage, put max in overdrive and make sure you use a qmetro for your video output and a metro for the parameter recordings.

Not a very elaborate description but hope it helps,
Mattijs


January 4, 2007 | 11:05 pm

i get the idea. thanks for the tip. i’ll let you know how i do.


January 5, 2007 | 2:40 am

This sort of question came up in a recent Jitter course that I attended, and the answer was to record to an external recorder via S-video output (everyone in the class had a powerbook/macbook with S-video output). Writing to the hard drive is the weakest link on many personal computer systems.

Tim


January 10, 2007 | 7:00 am

Mattijs wrote:

"During the low resolution recording stage, put max in overdrive and make sure you use a qmetro for your video output and a metro for the parameter recordings."

i think i understand the logic behind the qmetro, but why a separate metro for the parameter recordings? i want one reading per frame of video don’t i?

please take a look at the simplified patch enclosed. i’ve set it up with the crashtest mov that comes with the max application for illustration’s sake. i’m actually testing my patch with a 640×480 version of an hour-long video i’ve put together.

in the low rez pass patch, i tie the parameter recordings to the qmetro and qt units of the mov. this means that with every bang from the qmetro, a gettime message is sent and this elapsed mov time becomes the index for the coll reading. during the high rez pass, every bang corresponding to a frame from framedump should send a next message to the coll object. this triggers the retrieval of the next parameter reading, which gets passed to an effect object (plume in this case).

in my simplistic logic, this seems like it should work, but something is off: the hi rez pass results in frame-accurate video, but the coll recordings only play partly through.

the coll time index (which is expressed in qt units) ends about 10k units short of the mov after the framedump of an hour long movie.
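to spell my logic out (a python stand-in, numbers invented), and where i suspect it breaks:

```python
# pass 1 (low rez): every qmetro bang stores the elapsed movie time as
# the coll index. suppose the bang for time 1200 was dropped under load.
coll = [(t, f"param@{t}") for t in (0, 600, 1800, 2400)]

# pass 2 (framedump): one bang per frame sends "next" to the coll
frames = (0, 600, 1200, 1800, 2400)
cursor = 0
played = []
for _ in frames:
    if cursor < len(coll):
        played.append(coll[cursor])  # what "next" would return
        cursor += 1
# the coll runs out before the frames do, so the parameter
# recordings end short of the mov -- like my missing ~10k qt units
```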

also, i eventually want to render out at hdv specs, but even at 640×480, my output is currently a whopping 40 plus gigs.

any suggestions on bringing the effects closer in sync with the audio, as well as streamlining the resulting video, would be appreciated.

sorry for the unwieldy post. thanx in advance for the help.


January 10, 2007 | 10:09 am

To chime in here, the problem could be in "lost" messages to coll.
I’ve found that during intensive operations (like recording video),
messages, lists, etc. can be discarded by the Max scheduler in the
interests of keeping the patch running. I’ve especially seen this
with coll and other more-than-basic objects. [deferlow] will solve
this problem, in my experience, but you have to be careful when you
use it because it pushes messages, lists, etc. to the back of the
scheduler’s queue and may change the order of operations.

One solution I use is a gated record using [onebang] to guarantee
that all of my events happen before the frame is recorded:

max v2;
#N vpatcher 191 250 858 781;
#P origin 0 15;
#P window setfont "Sans Serif" 10.;
#P window linecount 1;
#P comment 460 449 68 196618 evan.raskob;
#P comment 460 468 129 196618 http://lowfrequency.org;
#P window setfont "Sans Serif" 9.;
#P newex 178 431 106 196617 bgcolor 200 200 200;
#N vpatcher 30 89 630 489;
#P outlet 148 262 15 0;
#P inlet 148 67 15 0;
#P window setfont "Sans Serif" 9.;
#P comment 183 155 125 196617 you’d put some stuff here;
#P connect 1 0 2 0;
#P pop;
#P newobj 381 126 108 196617 p some-jitter-stuff;
#P newex 381 90 87 196617 r recording-clock;
#P newex 274 255 72 196617 r resumeBang;
#P newex 218 256 50 196617 loadbang;
#P flonum 198 196 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P button 196 255 15 0;
#P newex 136 220 72 196617 qmetro 33.45;
#P newex 136 257 50 196617 onebang;
#P user jit.fpsgui 239 307 60 196617 0;
#P newex 136 307 87 196617 s recording-clock;
#P toggle 136 143 36 0;
#P newex 381 211 20 196617 t b;
#P newex 381 187 165 196617 jit.qt.record 640 480 @realtime 0;
#P newex 381 155 177 196617 jit.matrix something 4 char 640 480;
#P window setfont "Sans Serif" 10.;
#P window linecount 3;
#P comment 102 80 233 196618 using a gate to guarantee events happening before frame is finished being rendered and recorded;
#P connect 4 0 8 0;
#P connect 8 0 7 0;
#P connect 7 0 5 0;
#P fasten 3 0 7 1 386 247 181 247;
#P fasten 9 0 7 1 201 277 190 277 190 247 181 247;
#P fasten 11 0 7 1 223 277 190 277 190 248 181 248;
#P fasten 12 0 7 1 279 277 190 277 190 248 181 248;
#P connect 10 0 8 1;
#P fasten 7 0 6 0 141 292 244 292;
#P connect 13 0 14 0;
#P connect 14 0 1 0;
#P connect 1 0 2 0;
#P connect 2 0 3 0;
#P pop;

On Jan 10, 2007, at 7:01 AM, t ozawa wrote:

> [earlier post quoted in full; trimmed]
> –
> Toshiaki Ozawa
> Director of Photography
> NYC
>


January 10, 2007 | 10:54 am

Quote: bokemono wrote on Wed, 10 January 2007 08:00
—————————————————-
> Mattijs wrote:
>
> "During the low resolution recording stage, put max in overdrive and make sure you use a qmetro for your video output and a metro for the parameter recordings."
>
>
> i think i understand the logic behind the qmetro, but why a separate metro for the parameter recordings? i want one reading per frame of video don’t i?

I understand your confusion. There is a fundamental mistake in your understanding of how jit.qt.movie works: you seem to assume that it outputs all the frames it contains one by one. This is not the case. In short: it has a timeline that runs independently of the loaded movie, and only when you send it a bang does it choose which frame to output.

It is important to consider that you can’t rely on the output rate of a jit.qt.movie being constant. The idea behind qmetro is that it outputs as many bangs as possible given the available CPU time (read more in the jit.qball help and about max threading in general). If you use this mechanism to record values to your coll, the number of entries per second will vary over time. Metro, on the other hand, interrupts all other processing (when max is in overdrive), so it records your parameters as accurately timed as possible.
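A toy simulation of the difference, in Python rather than Max (the per-frame render costs are invented):

```python
PERIOD_MS = 1000 / 30        # a 30 fps metro interval
DURATION_MS = 10_000         # ten seconds of recording

# metro in overdrive: fires at its fixed rate no matter how long each
# frame takes to render, so the coll gets evenly spaced entries
metro_bangs = DURATION_MS * 30 // 1000   # 300 evenly spaced bangs

# qmetro: the next bang is only serviced once the previous frame is
# done, so heavy frames stretch the interval between entries
render_cost_ms = [30, 45, 80]            # invented per-frame costs
t = 0.0
qmetro_bangs = 0
while t < DURATION_MS:
    t += max(PERIOD_MS, render_cost_ms[qmetro_bangs % 3])
    qmetro_bangs += 1
# qmetro_bangs ends well below 300: fewer, unevenly spaced entries
```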

Practical:

In the recording stage
- write new lines to your coll at a fixed rate (with a separate metro and max in overdrive). You could set this metro to twice your movie framerate, to minimize aliasing effects
- also record the movie time to your coll (note that gettime will give a different value even if you didn’t send the jit.qt.movie a new bang)

In the render stage, do this for each frame, in the following order:
- read a line of the coll,
- set your parameter values (of jit.plume in your case)
- set the time of the jit.qt.movie,
- bang the jit.qt.movie once, which results in one frame being written to jit.qt.record.
- use jit.qt.record to trigger the rendering of the next frame

note: in the render setup, set jit.qt.movie to rate 0. You don’t need its time functionalities, you only need the separate frames of the times that you recorded in your coll.

note: jit.qt.record has to write the output file in a format with the same framerate as the metro used to record to the coll in the recording stage (see what this implies? There has to be a fixed framerate somewhere. That’s why you need the separate metro.)
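The render-stage loop above could be sketched like this in Python (a stand-in only; the real patch uses coll, jit.qt.movie with rate 0, and jit.qt.record, and all names here are invented):

```python
# rows from the record stage: (movie time in qt units, effect params)
coll_rows = [
    (0,    {"gain": 0.1}),
    (600,  {"gain": 0.4}),
    (1200, {"gain": 0.2}),
]

class FakeMovie:
    """Stands in for jit.qt.movie at rate 0: seek, then bang for a frame."""
    def set_time(self, t):
        self.t = t
    def bang(self):
        return f"frame@{self.t}"

movie = FakeMovie()
rendered = []
for movie_time, params in coll_rows:      # 1. read a line of the coll
    plume_settings = dict(params)         # 2. set the effect parameters
    movie.set_time(movie_time)            # 3. set the movie time
    frame = movie.bang()                  # 4. bang: exactly one frame out
    rendered.append((frame, plume_settings))
    # 5. in the patch, jit.qt.record's output would now trigger
    #    reading the next coll line, instead of this for-loop
```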

>
> please take a look at the simplified patch enclosed. i’ve set it up with the crashtest mov that comes with the max application for illustration sake. i’m actually testing my patch with a 640×480 version of an hour long video i’ve put together.

That’s a nicely simplified patch. If you hadn’t included it, I couldn’t have given you this reply. :)

> also, i eventually want to render out at hdv specs, but even at 640×480, my output is currently a whopping 40 plus gigs.

that is 640×480 (pixels) × 3 (bytes per pixel) × 15 (frames per second) × 60 (seconds) × 60 (minutes), roughly 50 GB. Seems correct to me. ;)
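Spelled out (assuming uncompressed frames at 3 bytes per pixel):

```python
# uncompressed size of a 640x480, 15 fps, one hour render
bytes_total = 640 * 480 * 3 * 15 * 60 * 60
gigabytes = bytes_total / 1e9
# bytes_total == 49_766_400_000, i.e. just under 50 GB --
# consistent with the observed "40 plus gigs"
```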

Consider photo jpeg at high quality or any other codec that does compression.

> sorry for the unweildy post. thanx in advance for the help.
>

np, I hope this helps.
Mattijs


January 10, 2007 | 6:54 pm

thanks so much. this is great info.

i was confused about the role of qmetro in the equation. i also didn’t understand that metro in overdrive provides the steadiest stream of bangs. i thought i understood the stability provided by qmetro, but i need to grasp it better.

> - also record the movie time to your coll (note that gettime will give a different value even if you didn’t send the jit.qt.movie a new bang)

is it okay to continue making the movie time the coll index or is there a reason not to?

> note: jit.qt.record has to write the output file in a format with the same framerate as the metro used to record to the coll in the recording stage (see what this implies? There has to be a fixed framerate somewhere. That’s why you need the separate metro.)

gotcha. the coll metro rate becomes the reference rate. makes sense.

>That’s a nicely simplified patch. If you wouldn’t have included it I couldn’t have given you this reply. :)

thanks! it’s the least i can do for all the info.

>that is 640×480 (pixels) x 3 (bytes) x 15 (frames) x 60 (seconds) x 60 (minutes). Seems correct to me. ;)

i guess if i’d done the math first i wouldn’t have been caught off guard :)

thanks again. let’s see how i do.


January 11, 2007 | 9:42 am

Quote: bokemono wrote on Wed, 10 January 2007 19:54
—————————————————-
> is it okay to continue making the movie time the coll index or is there a reason not to?
>

In your case it shouldn’t matter because you play the movie linearly. Maybe one day you’ll do real-time cuts in your movie ;)

Mattijs

