How to select the GPU for OpenGL rendering?

Ahto:

OK, what I'm doing here is rendering a 3072x768 jit.window with OpenGL. All processing is done on the GPU; only one or two 1440x1080 matrices are copied to the GPU from QuickTime. The picture is to be split over three projectors (1024x768 each).

I have two ATI Radeon HD 2600 boards in my Mac Pro (let's call them GPU1 and GPU2), but I don't need to split the window over multiple boards for more outputs, since I'm planning to use a Matrox TripleHead for that. The TripleHead hasn't arrived yet, so for now I'm using a DualHead on GPU2, with the third projector connected to GPU2's second output. I may even have to stick with that, since I'm not sure the HD 2600 will go any higher than 2560x1600. There is also a monitor connected to GPU1.

Now, if I connect two monitors (1920x1200 and 1280x1024) to GPU1 and display the window across both, I get a solid 60 fps. Obviously Max is rendering on GPU1, and splitting the picture over two outputs of the same board is no big deal. This led me to think that if I moved the whole window over to the outputs on GPU2, so it's displayed on the projectors, the rendering would also happen on GPU2. Apparently that isn't the case: the fps drops to 15, and the whole system is jammed with data being copied from GPU1 to GPU2.

How can I move the rendering to GPU2?

Switching the projectors to GPU1 and the monitor(s) to GPU2 doesn't do it either. Rendering always happens on what some posts call the 'main GPU', but there seems to be no apparent logic to how it is chosen.

Does anyone have a clue what is happening here, and could you point me in the right direction?

vade:

Ooof. This is a subtle and complicated issue. Are you on Windows or OS X?

If you create a window with Jitter on one monitor, the GPU that the monitor is connected to will 'own' the content, i.e. its textures and memory will be resident on that GPU.*

If you MOVE the window (at least on OS X), my understanding is that OS X's drivers will move the GPU resources to the GPU that owns that monitor. You may have to re-create your GL world. You can do this by having a named matrix in your patch and sending your jit.gl.render 'drawto namedmatrix, drawto windowdestination', where namedmatrix is a jit.matrix namedmatrix 4 char 320 240 (drawing to a matrix forces software rendering), then immediately switching back to your named window, which goes back to hardware land. This may kick it onto the second GPU.
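A minimal patch-text sketch of that trick (the names here are placeholders for whatever your patch actually uses):

[jit.window mywindow]                       <- the destination you actually render to
[jit.matrix dummy 4 char 320 240]           <- a named matrix to bounce the context onto (software rendering)
[jit.gl.render mywindow]                    <- your renderer

message to the jit.gl.render:   drawto dummy, drawto mywindow

Send that message after the window has been moved: it tears the context down onto the dummy matrix and immediately rebuilds it on the window, which may land it on the GPU that now owns that monitor.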

If you have two monitors hooked up to one GPU, nothing needs to move; it just works, as you've stated.

If you have two monitors hooked up to two different GPUs, the memory/textures etc. are moved and set up on the second GPU, as far as I'm aware (using the drawto trick).*

Now if you span the window across two monitors, depending on the GPU configuration, some things can happen:

If you have two monitors on one GPU, you are fine; the content plays just as fast (or at least as fast as it should, given that it's being resized, as you noticed).

If you have two monitors on two GPUs, the content IS NOT AUTOMATICALLY MOVED AND REBUILT AT FULL SPEED ACROSS THE TWO. Since the content is NEEDED ON BOTH GPUS, the GPU where the window was initially created 'OWNS' the content, and READBACK happens. This readback makes things SLOW: it downloads the data to main (CPU) memory and then re-uploads it to the second GPU.**

It's not clear from your description exactly what is going on with that DualHead on the second GPU.

Is there nothing GL-related on the 'main' GPU anymore when you move the window to the projectors?

Something like:

GPU1 (DVI output 1) -> monitor 1
GPU1 (DVI output 2) -> monitor 2

GPU2 (DVI output 1) -> DualHead -> monitor 1 & monitor 2

If everything is on GPU2 like the above, try the drawto command, which rebuilds the GL client list, IIRC, and may move the context over.

If you aren't on OS X, man, have fun, because GL on Windows with Jitter seems to be really weird and not quite 'the same'.

Good luck!

* This is how I understand it. I may be wrong, but it seems to match what I've seen.

** You can do something about it if you need to render on both GPUs (just for your info), but it requires care and a deep understanding of GL.

Make N OpenGL contexts, one for each GPU you want to use (say, one monitor per GPU). Make them resident on their GPUs by creating each window on that GPU, or by moving it there.

Upload your textures to each context. I.e.:

If you want textureA on all three monitors (assuming three GPUs), upload the original bitmap three times, to three different textures you've built, one in each context.

If you want your scene 'spanned', offset the camera in each context so the views line up next to one another, and use an ortho projection so it looks like one whole scene.
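A rough patch-text sketch of the spanning idea with two GPUs, one window per GPU (the rects, camera values and names below are made up for illustration, so adjust them to your monitors and check the jit.gl.render reference for the ortho flag):

[jit.window left_ctx @rect 0 0 1024 768]        <- window on a monitor owned by GPU1
[jit.window right_ctx @rect 1024 0 2048 768]    <- window on a monitor owned by GPU2

[jit.gl.render left_ctx @ortho 2 @camera -0.5 0. 2. @lookat -0.5 0. 0.]
[jit.gl.render right_ctx @ortho 2 @camera 0.5 0. 2. @lookat 0.5 0. 0.]
    ^ identical scenes, with the cameras shifted sideways so the two views butt up against each other

[jit.gl.texture left_ctx @name tex_l]           <- upload the SAME source matrix to a texture
[jit.gl.texture right_ctx @name tex_r]             built in EACH context (the bitmap lives once per GPU)

Then build identical geometry in each context (say a jit.gl.videoplane per context, one showing tex_l, the other tex_r), and drive both renderers from the same qmetro with the usual [t b erase] into each.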

Since you will eventually be using a TripleHead, it's not an issue for you, but multi-GPU (non-SLI'd) rendering is NOT easy, and someone smarter than me will probably point out issues in what I've said, which makes it all the more complicated to manage and deal with.

Ahto:

Thanks vade, I'll try the drawto trick tomorrow.

System: Mac Pro Intel (2x 4-core 2.8 GHz)
Graphics: 2x Radeon HD 2600
OS: OS X Leopard 10.5.6
Max/MSP: version 5

Monitors and projectors are connected as follows:

GPU 1 (DVI 1) -> control monitor (1920x1200)
GPU 1 (DVI 2) -> monitor 2 (1280x1024)
I only connected 'monitor 2' to see the whole wide window (3072x768) and confirm that it does indeed render at full speed when the rendering happens on that same GPU. Otherwise this second monitor wouldn't be needed or connected.

GPU 2 (VGA 1) -> dual head (2048x768) -> projector center / projector right
GPU 2 (VGA 2) -> projector left (1024x768)

As I've intended it, the control monitor will not contain anything GL-related. The render destination will be a window spanned over VGA1 and VGA2 on GPU2. At this point the patch window contains no previews, nor does the patch do any readback itself.

If I disconnect everything from GPU1, rendering runs smoothly on GPU2. When I connect the control monitor again, everything is probably rebuilt and the rendering ends up who-knows-where, which seems to match your description. If everything is connected from the beginning, the window is indeed BEING CREATED ON THE CONTROL MONITOR AND THEN MOVED, NOT CREATED ON GPU2.

As for SLI, it wasn't connected in my Mac as it came from the store; at least, I had assumed it would be. It's a bit out of scope for this forum, but don't the OS X drivers support it?

And another thing that isn't Jitter-related: I've checked all the specs and compatibility lists, but it is still unclear to me whether the following configuration will work on a Radeon HD 2600.

GPU 1 (DVI) -> monitor (1920x1200)
GPU 1 (VGA) -> Triple Head (3072x768) -> projectors 1/2/3

This would eliminate all multi-GPU issues, but is such a wide resolution possible over analog on that particular board? I'm using analog RGBHV because of the cabling already in place. Or could the TripleHead be connected to DVI-I but output analog? The HD 2600 would definitely output higher resolutions over dual-link DVI.

MJ:

hi

I'm also busy with a multi-GPU Max project.

I have to drive six projectors, one 'work' screen, and a UI screen, using a Mac Pro with four ATI 2600 GPUs in it.

I also had a lot of GPU crashes during Max patching.

Now I first move the jit.windows to the right monitors/GPUs, then make the textures, using shared_context to copy textures to different video outputs on the same GPU, and then run the patch.

The point is to put the windows in the right place first and only then create the GL world.
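In patch text the ordering looks roughly like this (just a sketch, and the coordinates are examples):

[loadbang]
 |
[t b b]                 <- trigger fires its right outlet first, then its left
  |    |
  |    +-> "pos 1920 0, size 3072 768" to [jit.window proj]        (1) park the window on the target GPU's outputs first
  |
  +-> "drawto proj" to [jit.gl.render], then start the [qmetro]    (2) only then build the GL world and begin rendering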

I tried to find some sort of GPU monitoring tool, but couldn't find any.

Ahto:

Hi, MJ

Would you please explain your implementation in more detail, or point me to some resources where this has been done?

How do you control the order of window and texture creation if those are just Max objects in the patch, i.e. they are created by the fact that they are simply there, not by a bang message or the like?
And doesn't a shared context need shared memory between the GPUs?

This makes me wonder: would it make any sense to start the qmetro driving jit.gl.render only after receiving some message from jit.window confirming its creation?

MJ:

There is a lot of banging in my patch, triggered by loadbang, before the metro starts.

With jit.displays I get the coordinates of all the extra screens, so the patch can put the windows in the right positions. After that I trigger the part of the patch that makes the textures.
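Per screen it is roughly this (I'm writing the jit.displays message and output names from memory, so check its help file for the exact format):

[loadbang]
 |
"coords 1"                    <- ask jit.displays for the rectangle of screen 1
 |
[jit.displays]
 |
(unpack the reported left / top / right / bottom)
 |
"rect $1 $2 $3 $4"            <- place the window on that screen
 |
[jit.window proj1]

Only after every window has received its rect do I fire the part of the patch that builds the textures.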

Now I have six jit.windows on six DVI outputs, and six jit.gl.renders.
I'm using shared_context to reduce the amount of video RAM, so I only have to make the textures three times instead of six.
Sharing textures between video outputs is only possible on one video card, and most video cards only have two outputs.
Sharing textures between two GPUs gives a kernel panic and the GPU will crash.

I have to make sure that the two jit.window/jit.gl.render pairs that share the same textures are on the same GPU. I had to follow the cables and mark the monitors (1A 1B 2A 2B 3A 3B 4A).
Monitor 4A is my work screen, which lives on output 1 of GPU 4.
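For reference, the sharing setup on one card looks something like this in patch text (the shared_context spelling is from memory, and as I said, both destinations must be outputs of the same physical card or you get the kernel panic):

[jit.window outA @rect 0 0 1024 768]      [jit.window outB @rect 1024 0 2048 768]     <- two outputs of ONE card
[jit.gl.render outA]                      [jit.gl.render outB @shared_context outA]   <- outB reuses outA's resources
[jit.gl.texture outA @name tex1]                                                      <- texture uploaded once
[jit.gl.videoplane outA @texture tex1]    [jit.gl.videoplane outB @texture tex1]      <- drawn in both contexts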

Gary Lee Nelson:

Not sure I am in the right place. I have a 27" iMac running Yosemite, with an Apple 27" monitor hooked to one of the Thunderbolt ports. Are both monitors served by a single GPU?

Jesse:

Yes.

Gary Lee Nelson:

OK. Thanks. That simplifies things. I have just learned how to render on the GPU.