splitting an OpenGL screen
Hello,
I am attempting to do a performance involving four screens of video, and I'm looking into setup options. As it is, I have a videoplane divided into four 360x240 'screens', totaling 1440x240. I have a Triplehead2Go to handle three of the screens. I would now like to split off the rightmost screen from my videoplane and send it via serial to another computer for output, keeping 1080x240 to output on the Triplehead.
Is there a scissors/glue kind of option for OpenGL? I'd prefer to work on a full four-screen videoplane and then split it, since there is some spatialization logic going on and I'd like to keep it uniform.
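For what it's worth, the 'scissors' half of that operation is just source-rectangle arithmetic. A minimal Python sketch (the helper name is mine, not a Max object) of the coordinate pairs you'd use to slice a 1440x240 plane into four 360x240 tiles, assuming Jitter's inclusive srcdimstart/srcdimend indexing:

```python
# Illustrative helper: per-tile source rectangles for slicing a
# 1440x240 videoplane into four 360x240 screens.

PLANE_W, PLANE_H = 1440, 240
TILES = 4
TILE_W = PLANE_W // TILES  # 360

def tile_rect(i):
    """(x_start, y_start, x_end, y_end) for tile i, with inclusive
    end coordinates in the style of Jitter's srcdimstart/srcdimend."""
    x0 = i * TILE_W
    return (x0, 0, x0 + TILE_W - 1, PLANE_H - 1)

# Rightmost tile -- the screen to hand off to the second computer:
print(tile_rect(3))  # (1080, 0, 1439, 239)
```

The remaining tiles 0-2 cover the 1080x240 region that stays on the Triplehead.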
Thanks!
Also if anyone has any awesome links to performance optimization techniques so I can go bigger than 360x240, I'd appreciate it!
Check this topic: https://cycling74.com/forums/one-jit-window-across-two-displays/
Your second question about performance optimisation is rather general, and it's difficult to give advice without knowing exactly what you want to do. But typically, if we're talking about Jitter, your GPU is your friend, so try to move as many calculations as possible onto the graphics card (use jit.gl.slab, textures, Gen, shaders).
I looked at that link, and this one:
https://cycling74.com/forums/opengl-context-share/
Still beating my head against a wall. The example patches don't work: only one screen is showing anything. My computer exceeds the specs they list. I'm using Max 6 on OS X 10.9.5.
I'm not near the Triplehead at the moment. Perhaps I can run all 4 from one computer, using the main screen as the 4th. That makes me nervous, because I can't do anything on the computer while things are running without the audience seeing it.
I'll keep plugging on.
Dear SUPERSARA
Try to describe your problem in a bit more detail:
Have you already checked that the Triplehead works (you see the desktop on all outputs) but can't display an image from Max, or is it that you can't split your image inside Max into 4 independent windows?
Yaniki's method seems perfectly valid. You can take any of the 4 render windows and replace it with a network output (are you sure you want to do that? It's likely much more performant to have the computer that renders it also output it to the screen/projector).
In my setup I have 1 monitor display (that the audience doesn't see) and 4 projectors. On the monitor display, depending on the project's needs, I either have a window with the full output scaled down to fit the display, or I mirror 1 of the 4 projector outputs on it.
Unless you're doing very complex rendering, have a slow CPU/GPU, or are patching very inefficiently, you should be able to get a much higher resolution than 4 x 360x240. The GPU/OpenGL is your friend.
Btw, I prefer to have one wide render window spanning all 4 projector outputs instead of a separate window for each. That also works.
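For anyone trying the single-window route: it comes down to one borderless window whose rect spans the combined desktop area of the outputs. A hypothetical Python helper (the layout numbers below are examples, not from either setup in this thread) for computing the left/top/right/bottom values you'd send to a jit.window as a rect message:

```python
# Hypothetical helper: rect (left, top, right, bottom) for one window
# spanning n side-by-side outputs of equal size.

def spanning_rect(origin_x, origin_y, screen_w, screen_h, n_screens):
    """Rect covering n_screens displays laid out horizontally,
    starting at desktop coordinate (origin_x, origin_y)."""
    return (origin_x, origin_y,
            origin_x + screen_w * n_screens,
            origin_y + screen_h)

# Example: three 360x240 Triplehead outputs placed to the right of a
# 1440-pixel-wide main display:
print(spanning_rect(1440, 0, 360, 240, 3))  # (1440, 0, 2520, 240)
```

The actual origin depends on how the OS arranges the displays in its display preferences.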
@DTR: of course! It's the simplest and most stable solution, but it may become a bit problematic if you have beamers with different resolutions.
Btw, in your screenshot you can see that the adjacent windows cast a shadow on each other at the borders, caused not by Max but by OS X. Are those shadows there when connected to projectors?
Yes, that's why I added the "border" attribute to every window. No border = no shadow.
But if you are positioning the windows across your beamers (1 window = 1 beamer), it works fine. The shadow problem only arises when you have two or more windows on the same beamer (or on multiple beamers connected via a video splitter).
Thanks for clarifying!
YANIKI you rock. That one definitely works.
DTR, I'm not sure how to send all four as a whole out of one computer. I only have one video out, and that would be hooked up to the Triplehead. I'd have to get a USB-VGA adapter and maybe extend my potential screen space that way? I still have a feeling I'd have to slice off a screen to do that as well. Haven't played with it yet; I'm home for the holidays with no equipment.
I do have a whole lot going on here... Arduino input controlling variables in several layers of video.
This guy is one of my layers:
https://cycling74.com/forums/js-jit-gl-sketch-to-external-texture/
If I get above 1000 'particles' of static, things start getting pretty creaky, even at 360x240. There's probably something wonky going on in there, because when I do the same thing in Processing it's not nearly as slow.
This runs either with a layer of jit.noise with variable contrast/saturation, or with 4 jit.qt.movies.
About the 4th output: you haven't told us what you are running this on. I'm talking from the perspective of a desktop graphics card with 4 or so outputs, 1 feeding a Triplehead and 1 for the 4th projector. A laptop indeed has bigger restrictions there. I wouldn't recommend a USB-VGA adapter; it will likely destroy rendering framerates, as it is entirely software driven, with no GPU hardware. I haven't actually tried one, so I could be wrong, but if that weren't the case we wouldn't need expensive Tripleheads, right?
We can't say much about performance without seeing your actual patch.
Running this off two MacBook Pros, unless I miraculously get funding for a new computer (fingers crossed): 2.53 GHz dual-core Intel Core i5, 8 GB of memory, NVIDIA GeForce GT 330M with 256 MB of graphics memory.
Here's the patch, if anyone is brave enough to dig through it and tell me of my follies. In the past I've used Isadora for large video projects, siphoning in Processing sketches if needed. I'm fairly new to OpenGL in Max. I'm using Live/M4L on the other computer, so I figured I'd be consistent and keep software requirements to a minimum.
Some quick observations:
Disable those jit.pwindows when you don't need them to check what's going on in your render chain. That should save a good bunch of fps. Make sure that, when live, only what you really need gets rendered/drawn on screen.
Same for the number boxes connected to the line outputs and the button triggering the jit.qt.movies. Disable/disconnect them when not strictly necessary.
You have 2 [r draw]s connected to jit.gl.slabs. I think those are unnecessary and might cause things to be processed twice within 1 render cycle.
The lines output at a 20 ms grain interval, i.e. 50 Hz. If that's far more (or less) than your rendering fps, you could adjust it so superfluous messages don't get sent out.
I didn't check the js for possible inefficiencies. I'm no hero in that.
Apart from that, the 256 MB of memory on your graphics card might be a bottleneck at the high resolutions required for multi-screen rendering.
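Two of the points above are easy to put numbers on: a line object's grain interval translates directly into a message rate (1000 / grain ms), and an 8-bit RGBA texture costs roughly width x height x 4 bytes (four times that for float32). A quick back-of-the-envelope in Python; the figures are illustrative, not measurements from the patch:

```python
# Back-of-the-envelope numbers for two of the observations above.

def messages_per_second(grain_ms):
    """Message rate implied by a line object's grain interval (ms)."""
    return 1000.0 / grain_ms

def texture_mb(w, h, bytes_per_pixel=4):
    """Approximate texture size in MB (4 bytes/pixel = 8-bit RGBA)."""
    return w * h * bytes_per_pixel / (1024 * 1024)

print(messages_per_second(20))              # 50.0 Hz, more than a ~30 fps render uses
print(round(texture_mb(1440, 240), 2))      # ~1.32 MB per 8-bit RGBA frame
print(round(texture_mb(1440, 240, 16), 2))  # ~5.27 MB as float32 RGBA
# Each jit.gl.slab stage typically holds its own output texture, so a
# long slab chain at higher resolutions multiplies these figures.
```

So a single frame is cheap; it's the accumulated slab intermediates, movie buffers and framebuffers that eat into a 256 MB card.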
Those pwindows... they're buggers, aren't they? Getting rid of them let me double my screen size easily. I'm running 720x480 at around 30 fps now. I also sorted out the draw situation and set the line grain to 50. That will do until I win the lottery and can buy one of those beautiful 6-core Mac Pros they're selling now. Thanks!
Yup, no pwindows is Jitter GL optimization 101 ;)
Sara, lacking private messaging on the forum, I'm telling you here: when opening your website, Firefox warns that it is a 'reported attack page'...
Reported Attack Page!
This web page at sarawentworth.com has been reported as an attack page and has been blocked based on your security preferences.
Attack pages try to install programs that steal private information, use your computer to attack others, or damage your system.
Some attack pages intentionally distribute harmful software, but many are compromised without the knowledge or permission of their owners.
There's a more info link which opens: https://safebrowsing.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://sarawentworth.com/
Thanks... It's a WordPress site, and Google Analytics can't find the problem code. New project for the week :/