Using Jitter as a lossless NLE?
Most of the Jitter material I've seen focuses on live performance and often runs at a variable ('as fast as possible') frame rate, but I've been wanting to apply some Max processes to my generic video editing workflow (e.g. using audio onset detection to shuffle the video around, etc.).
So ideally I want to be able to write single frames at a time, where 1 frame = 1 frame, so if I start with a 24p 4K video, I end with a 24p 4K video.
I'm also super confused by video codecs and containers, so I'm not sure how best to go about opening a video, shuffling the frames around, and then saving/exporting it without transcoding it or messing with the quality at all.
I built this toy example a while ago which just randomly pulls frames from the video and writes them out one by one, but I imagine there has to be a better way than this.
I'm not too bothered about speed/efficiency since it's an offline process anyway, so I don't care whether it's CPU or GPU.
Am I on the right track here? Are there any example patches which would be worth checking out that focus on offline fixed framerate "editing"?
there is no way to save frames or movies from jitter without re-encoding. the default avf engine prores4444 codec is probably your best bet for minimal loss. otherwise tiff or png image sequences (via jit.matrix) might be worth exploring.
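For concreteness, here is a minimal sketch of that image-sequence route: stitching numbered PNGs written out from jit.matrix back into a fixed-frame-rate ProRes 4444 file with ffmpeg (this assumes ffmpeg is on the PATH; the file names and paths are placeholders).

```python
# Hypothetical sketch: assemble a PNG sequence exported from jit.matrix
# into a fixed-frame-rate ProRes 4444 movie with ffmpeg.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "24",              # force a fixed 24p timeline, 1 frame = 1 frame
    "-i", "frames/frame_%05d.png",   # numbered image sequence written from Jitter
    "-c:v", "prores_ks",             # ffmpeg's ProRes encoder
    "-profile:v", "4444",            # ProRes 4444 profile, minimal loss
    "out.mov",
], check=True)
```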
a recent forum discussion regarding offline capture that may be of interest:
https://cycling74.com/forums/offline-rendering-frame-per-frame-and-hiq-video-production-with-max
Oh awesome! I totally missed that thread before. I think that covers a lot of what I'm interested in.
And the option of doing it as an image sequence is a solid idea too. Thanks!
"there is no way to save frames or movies from jitter without re-encoding"
i don't understand this. isn't that exactly what happens when you export something with the codec "none"?
I totally missed that thread before.
in that thread (right under where i dedicate my sword and axe to the cause in emoji-form; emojis are my only real weapons in life 😭)... i highly recommend the thingie Timo uploaded there.
i don't understand this.
me neither... although, a jitter matrix is its own format, so it would need to be encoded into something else to be saved in any other file format... but what about saving a single frame in the native jitter matrix format? isn't saving a jit.matrix in the .jit binary format the same as 'saving a frame without re-encoding'?
i believe the format of a jitter matrix "frame" and an uncompressed still image file are very similar, but i could be wrong. i mean you can "convert" losslessly between them, so for me it's the same. :D
but good point, i was wondering that too. instead of using jit.record, why not export single frames in the jitter binary format? or in other words: why is there no "player" external for that kind of thing?
or is jitter binary always 64 bits? no, it's smaller when i use char.
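as a quick back-of-envelope check (assuming a 4-plane ARGB 4K frame; the numbers below ignore the small file header), the per-frame size difference between types is large:

```python
# Rough check that a saved matrix's size depends on the matrix type:
# a Jitter matrix stores planecount * bytes-per-cell for every pixel.
width, height, planes = 3840, 2160, 4   # 4K UHD, ARGB

char_bytes    = width * height * planes * 1   # char    = 1 byte per plane
float64_bytes = width * height * planes * 8   # float64 = 8 bytes per plane

print(f"char:    {char_bytes / 2**20:.1f} MiB")    # ~31.6 MiB per frame
print(f"float64: {float64_bytes / 2**20:.1f} MiB") # ~253.1 MiB per frame
```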
also, the word 'codec' in general is something i'm blurry on (for a hardware 'audio codec' like the one here:
https://forum.electro-smith.com/t/interested-in-details-about-the-codec/1289
it refers to the chip and its dc filtering for smooth digital-to-analog conversion, etc.? so a software codec is probably something similar... it reformats the data for some reason or another?)
(my guess, though, is that Rob was answering in a form catered to Rodrigo's problem; Rodrigo probably won't find the .jit binary file format useful for such purposes...
but now, thinking about it all, WHAT CONFUSES ME MOST: how is it that only people with names that start with 'R' have coalesced here within this thread?! 😳🤯🕵️♂️🧐🤓)
maybe i am even wrong about what a "codec" is?
but hey, i also didn't know that "resident" was someone's name.
maybe it is about a technical difference: in one case the content of a data set is translated into another representation (the value 7 is still 7 afterwards, just encoded differently), and in the other case the whole set is just copied.
for example if you "convert" an mp4 file into a quicktime movie, the content is only copied into a new container.
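a sketch of that container-copy case using ffmpeg's stream copy (assuming ffmpeg is installed; the file names are placeholders):

```python
# Remux an .mp4 into a QuickTime .mov without touching the encoded
# frames: no transcode, only the container changes.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "in.mp4",
    "-c", "copy",    # copy the audio/video bitstreams verbatim, bit-identical
    "out.mov",       # only the container (mp4 -> mov) changes
], check=True)
```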
I did something like this a while back, just before fast, large SSDs were super cheap.
What I found is that I got decent results by essentially slowing things down to approx 2 frames per second and recording out to ProRes 4444 via Syphon. To get around HD bottlenecks I converted my source material to raw Jitter files. Everything was in one long clip, which I basically indexed with frame numbers and coll.
What I found was that - at least up to a certain size of material - I had two quite good ways of storing my performance information. The first was essentially encoding everything into a multichannel buffer (i.e. scaling my parameters so that I could record them as audio, then loading and rescaling them back for the offline bounce). The other approach was to reference each edit via MIDI. For this I worked with a low-res version of the video, and used combinations of different velocities, MIDI notes, CCs and MIDI channels to save the data for playback (recording into a DAW). For example, MIDI note 1 would hold one frame per velocity value, and you could ultimately extend this virtually infinitely using high and low bits. I recall that I only indexed edits (i.e. the command would be: move the playhead to the referenced frame, then play at X frame rate until another reference frame is given).
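A minimal sketch of that note/velocity packing as I understand it (the function names and the 14-bit range are assumptions, and a real patch would also need to avoid velocity 0, which MIDI treats as note-off):

```python
# Pack a frame index into two 7-bit MIDI fields (note = high bits,
# velocity = low bits), so a DAW can record and replay edit points
# as ordinary notes.

def frame_to_midi(frame: int) -> tuple[int, int]:
    """Split a frame index (0..16383) into (note, velocity), 7 bits each."""
    assert 0 <= frame < 128 * 128
    return frame // 128, frame % 128  # note = high 7 bits, velocity = low 7 bits

def midi_to_frame(note: int, velocity: int) -> int:
    """Recombine note/velocity back into the original frame index."""
    return note * 128 + velocity

note, vel = frame_to_midi(5000)       # e.g. "jump playhead to frame 5000"
assert midi_to_frame(note, vel) == 5000
```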
I didn't arrive at a beautiful, elegant solution - and I'd imagine you'd have to change your approach on a case-by-case basis - but it did work!