Forums > Jitter

video out a javascript

April 17, 2009 | 8:54 pm

Hi all.
I got a couple of questions;
first one.
I would like to display a "render javascript" (let’s take the jsglnurbstendril example, for instance) on an external monitor.
So I’ve made a "render" and a "matrix render" object in order to broadcast the script’s output through the "videoout" object… but it doesn’t work.
Do I have to script this? Or is it possible to just use the videoout object??

And my second point: do I have to create a matrix to display an OpenGL render on an external monitor? Because apparently, creating matrices really slows down my computer…

I can not use the because I am on Max 4.6, Mac.

Thanks for saving me from a headache!!


April 17, 2009 | 10:02 pm

you should be able to render an opengl context to a named matrix to send to jit.qt.videoout. if something isn’t working, post a stripped-down patch showing your attempt.
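in patch form the chain is roughly this (object names and dims here are illustrative; in 4.6, a jit.gl.render whose destination is a named matrix does a software render into it):

```
[jit.gl.render glout]                     <- destination "glout" is the matrix below,
                                             so GL frames land in the matrix
[jit.matrix 4 char 720 480 @name glout]   <- named matrix; bang it each frame
[jit.qt.videoout]                         <- patch the matrix outlet here
```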

this method will possibly be quite slow, btw.

April 18, 2009 | 9:42 pm

You are right.
The patch is attached below.

April 19, 2009 | 8:53 am

Here’s the patch with the JS not compiled, so you can see why Max quit!

April 19, 2009 | 12:20 pm

I’ve fixed it!
All I had to do was delete the "jit.window" code in the js object.

What do you think of this patch below ?
I still have a resolution problem.
My matrix is 1000 x 600; that should be enough to get a good image through videoout, shouldn’t it?
If you have any ideas …

April 19, 2009 | 5:42 pm

i’ve never used videoout, but i assume your output resolution is hardware dependent. isn’t dv 720×480? you should just set your matrix to your output resolution.
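as a sketch of that arithmetic in plain javascript (fitToOutput is a hypothetical helper, not a jitter object): scaling a matrix down to a fixed hardware output size while keeping its aspect ratio:

```javascript
// compute the largest centered rectangle of the source's aspect ratio
// that fits a fixed hardware output size such as dv's 720x480.
// fitToOutput is a made-up helper name, not part of jitter.
function fitToOutput(srcW, srcH, outW, outH) {
    var scale = Math.min(outW / srcW, outH / srcH); // scale to fit, no cropping
    var w = Math.round(srcW * scale);
    var h = Math.round(srcH * scale);
    return {
        width: w,
        height: h,
        left: Math.floor((outW - w) / 2),  // pillarbox offset
        top: Math.floor((outH - h) / 2)    // letterbox offset
    };
}

// the 1000x600 matrix from above, fitted into 720x480:
var r = fitToOutput(1000, 600, 720, 480);
// r.width = 720, r.height = 432, r.top = 24 (24px bars top and bottom)
```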

you also may want to take a look at, which will perform the readback to a matrix much more efficiently on most machines.

April 19, 2009 | 5:55 pm
> i’ve never used videoout, but i assume your output resolution is hardware dependent. isn’t dv 720×480? you should just set your matrix to your output resolution.
>
> you also may want to take a look at, which will perform the readback to a matrix much more efficiently on most machines.

I am on Max 4.6, so that object doesn’t exist there…

You’ve never used jit.qt.videoout? So how do you output two different renders to two different monitors, for instance?


April 19, 2009 | 6:29 pm

two graphics cards, or something like matrox dual-head-to-go

April 20, 2009 | 7:48 am

Ok I’ll check it out !
Can you easily route any render window to any monitor ?
Thanks for helping anyway.

April 20, 2009 | 5:09 pm

there are several recent posts on multiple windows for opengl that you should do a quick search for.

with one graphics card that has two ports, it’s not really an issue. you just drag your window where you want it. depending on what you’re trying to achieve, @shared_context might be the way to go. search for andrew benson’s "preview" jitter recipe for more info.

with two graphics cards, the short answer is to make sure your windows are instantiated on the correct graphics card before you begin rendering (before you turn your qmetro on). otherwise, you may experience a dramatic drop in frame rate. the way to do this is to provide a @rect attribute with the coordinates of your additional monitors. jit.displays can help with this.
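as a sketch of that @rect arithmetic in plain javascript (rectForDisplay is a hypothetical helper; jit.displays does report per-monitor coordinates like these, but the monitor sizes below are made up):

```javascript
// turn a monitor's global screen coordinates (left top right bottom,
// as reported by jit.displays) into a @rect for a window placed on
// that monitor. rectForDisplay is a made-up helper, not part of jitter.
function rectForDisplay(coords, inset) {
    inset = inset || 0;  // optional border, in pixels
    return [coords[0] + inset, coords[1] + inset,
            coords[2] - inset, coords[3] - inset];
}

// example: main display is 1440x900, second display sits to its right
// and is 1680x1050, so its global coords start at x = 1440:
var rect = rectForDisplay([1440, 0, 1440 + 1680, 1050], 0);
// rect = [1440, 0, 3120, 1050] -- set this as @rect before rendering starts
```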

actually, why don’t i just quote anton’s definitive response from another mailing list:

GL to multiple displays with multiple GPUs can be TRICKY. The issue is that you have 2 GPUs, and whenever you move data from GPU a to GPU b, things will *slow down* big time (the data is read back to main memory and then re-uploaded). This also means you should NOT share texture resources across multiple GPUs (using @shared_context etc.), as the data GPU b needs resides in GPU a’s VRAM, and depending on how the OS X driver works, it may not be smart enough to automatically mirror the data in both.

I would make 100% doubly sure that you:

Make 2 completely distinct GL contexts.

Upload your data twice, to each context separately.

Span one window across two monitors only if both are on the same GPU (you can’t do it if you have GPU a on monitors 1 and 4; how would it span?). Fix your display preferences and re-order your monitors physically if needed.

And most importantly, you will have to be very careful how you init your renderer -> window pairs:

If your patch starts up with all windows on a single monitor, turns on rendering, and then moves the windows to their proper locations, you’ve effectively initted resources on GPU a, and moving things to GPU b after rendering is enabled DOES NOT MOVE THE RENDER CONTEXT, TEXTURES OR RESOURCES to GPU b. Thus the slow-ass readback I mentioned.

You can work around this: for context a, make a named matrix, e.g. jit.matrix temp_a 4 char 320 240.

Send "draw_to temp_a" to the renderer. This will force the renderer to switch drawing to your context, and THEN to your window, which is now in the right location, and force it to move things onto the proper GPU. Now do the same thing for b.
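As a message-order sketch (window_a / window_b are placeholder names for the two contexts; temp_a / temp_b are the scratch matrices described above):

```
jit.matrix temp_a 4 char 320 240    <- scratch destination for context a
jit.matrix temp_b 4 char 320 240    <- scratch destination for context b

(windows already positioned on their target monitors, rendering already on)

to renderer a:  draw_to temp_a, draw_to window_a
to renderer b:  draw_to temp_b, draw_to window_b
```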

This stuff is very subtle and somewhat annoying if the conceptual pieces of how GL works are not understood, because the ramifications of unintentionally ordering things the ‘wrong’ way can have serious performance implications.

June 11, 2009 | 7:05 pm

I am getting some weird behavior from the JitterMatrix and I couldn’t figure out what is happening…
In my Java external I am drawing into a JitterMatrix using sketch, then displaying the JitterMatrix in a window. My order of operations is:

(1) Draw some stuff using sketch to the JitterMatrix
(2) Change the pixels in the JitterMatrix
(3) Draw more stuff using sketch to the JitterMatrix
(4) Send the JitterMatrix to a window

Logically, I expected that (2) would change what I had drawn in (1), and then (3) would appear on top of those changes without any effect from (2).
But what I see is that (3) is completely missing; only (1) and (2) work.
If I comment out (2), everything draws properly.

Can you please explain what is happening here?

– Pasted Max Patch –
