Jitter Video Cables and What They Are Carrying

Neal Johnson:

I have been updating a bunch of old Max 5 and 6 Jitter projects, primarily video effects. Being a firm believer in not reinventing the wheel (just steal it), I have been hacking Vizzie modules, modifying their UI and behavior to make new things. The problem lies in GRABBR. The output of the old Vizzie One GRABBR worked with my stuff just fine as I patched it into various jit. objects. When I moved to Vizzie Two, the video appeared or didn't appear; it wasn't passing through because of data errors. I stole the patch with the video-handler subpatch, fooled around a bit, and got some of my things to work. (I don't understand a lot of the GL things floating around in there, and I am not using GL.) With a new patch, I discovered that if I just wire in a [jit.world @enable 1], I can take the output of GRABBR, feed it right into a [jit.matrix], and then right into my "code", and everything works. With no [jit.world @enable 1] there is no pass-through. And I get an unwanted window popping up.

My question: just as there are BEAPCONVERT and VIZZIECONVERT modules to turn one datatype into another, I am trying to either find or make something that will take the GRABBR output and filter out all the GL datatypes (which some of the [jit.-------] objects dislike) from the stream, without getting the [jit.window] popping up from the [jit.world @enable 1] thing. Basically, if there is a video outlet on a Vizzie Two module, I want to make sure it carries just Vizzie One video. I have been through a lot of the documentation, and it's not clear just what comes out of an outlet and what is proper to go into an inlet for a lot of things. The Vizzie Two modules are all plug-and-play and work fine, but I'm trying to make my own. The modular CV-like stuff I can understand, but this is about the audio and now video cables and what datatypes are streaming there. Audio just works, but in Vizzie and Jitter there seems to be some level of complexity to the data stream that I can't figure out. Thanks for any assistance, or just a point in the right direction.

Dante:

With video there are matrices and textures: green cables and blue cables. Some of the Vizzie modules are built to distinguish between the two. What is inside a matrix is outlined in many places in the reference, but what might be most useful is the 'Video and Graphics' section, particularly the 'Jitter matrix exploration' parts 5 and 6.

Pedro Santos:

Matrices reside on RAM and are processed by the CPU.
Textures reside on VRAM and are processed by the GPU (graphics card processor).
Generally, you would like to offload image processing to the GPU as much as possible, for performance reasons. In a patch that is constantly switching between matrices and textures, the image data has to constantly travel from RAM/CPU to VRAM/GPU and that severely limits performance.
Vizzie 1 modules relied exclusively on the CPU.
Vizzie 2 modules rely more on the GPU but, as Dante Lentz mentioned, most of them can adapt their processing based on the incoming data (matrix or texture).

If you want to connect a texture output into a matrix input, you can use a [jit.matrix] object between the two.
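For instance, a minimal sketch of that conversion (the downstream object here, [jit.brcosa], is just an example of a matrix-based effect):

```
[GRABBR]        <- Vizzie 2 module, outputs a jit_gl_texture by default
   |
[jit.matrix]    <- reads the texture back into a CPU matrix
   |
[jit.brcosa]    <- any matrix-based jit. object can now process the stream
```

Keep in mind this forces a GPU-to-CPU readback every frame, so do it only where you actually need matrix processing.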

Rob Ramirez:

Unfortunately, the green cord / blue cord distinction is not 100% reliable for some hybrid objects such as jit.movie. What is always reliable is simply using a message box to see the output (a list starting with jit_matrix for matrices and jit_gl_texture for textures).
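For example, patching an outlet into the right inlet of a message box shows the stream type at a glance (the names after the type keyword are auto-generated and will differ in your patch):

```
matrix stream:   jit_matrix u330000007
texture stream:  jit_gl_texture u095001532
```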

In Max 8, Vizzie modules will output textures by default, but FX modules will adapt to outputting matrices if they receive matrix input.

See if this patch clarifies things, along with the useful posts above:

Max Patch
Copy patch and select New From Clipboard in Max.

And make sure you take a look at this recent article: https://cycling74.com/tutorials/best-practices-in-jitter-part-1

Neal Johnson:

Cable colors... The cables coming from the Vizzie GRABBR are gray. I can put a watchpoint there and see what's up.

Adding the [jit.matrix]... I have a generic question about matrix size and adaptability. A plain [jit.matrix] is 160 x 120. If I modify it to 640 x 480 and then connect GRABBR to it, it drops back to 160 x 120; it seems that GRABBR is forcing things to 160 x 120. How can you build a video EFX patch where you might be changing the input video source and you would like the video frame size to adapt through the patch? If I turn off Adapt for the [jit.matrix], it stays at what I set; but as above, if Adapt is on, GRABBR drops it to 160 x 120. I'm using the FaceTime camera on my iMac at the moment, and that is supposedly 1280 x 720. I poked around in the GRABBR code and found some places where resolution is calculated; it looks like it's initialized to 640 x 480. Inside the bpatcher GRABBRCONTROLS there are size calculations and a coll with loads of standard video screen dimensions, but I can't seem to get that resolution passed through from Vizzie.
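A sketch of pinning the size via the Adapt observation above (1280 x 720 here is just the FaceTime camera's nominal resolution):

```
[jit.matrix 4 char 1280 720 @adapt 0]   <- stays 1280 x 720; input is resampled to fit
```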

Thanks for all the reading material posted here. I got more ideas from that.

Rob Ramirez:

I strongly recommend you use Vizzie with texture processing (the default) for adapting video. All texture-processing modules adapt their dims implicitly.
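A sketch of an all-texture chain, where the dims follow the source automatically (the FX and display modules here are just examples):

```
[GRABBR]    <- texture out (the default); dims follow the camera
   |
[BRCOSR]    <- Vizzie FX module, processes the texture as-is
   |
[VIEWR]     <- Vizzie display module; no manual dims needed anywhere
```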

Max Patch
Copy patch and select New From Clipboard in Max.