OK, what I'm doing here is rendering a 3072x768 jit.window with OpenGL. All processing is done on the GPU; only one or two 1440x1080 matrices are being copied to the GPU from QuickTime. The picture is to be split over three projectors (1024x768 each).
I have two ATI Radeon HD 2600 boards in my Mac Pro (let's call them GPU1 and GPU2), but I don't need to split the window over multiple boards for more outputs, since I'm planning to use a Matrox TripleHead for that. Actually, the TripleHead hasn't arrived yet; for now I'm using a DualHead on GPU2, with the third projector connected to GPU2's second output. I may even have to stick with that, since I'm not sure the HD 2600 will go any higher than 2560x1600. A monitor is also connected to GPU1.
Now, if I connect two monitors (1920x1200 and 1280x1024) to GPU1 and display the window over those two, I get a solid 60 fps. Obviously Max is rendering on GPU1, and splitting the picture over two outputs of the same board isn't much of a deal. This leads me to think that if I now move the whole window over to the outputs on GPU2, so it's displayed on the projectors, the rendering will also happen on GPU2. Apparently this isn't the case, since fps drops to 15 - the whole system is jammed with data being copied from GPU1 to GPU2.
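For scale, here's a quick back-of-envelope sketch of how much data has to cross the bus every frame if the rendered window is read back from GPU1 and re-uploaded to GPU2. The window size and frame rates are from my setup above; the 4 bytes/pixel (RGBA8) figure is an assumption:

```python
# Per-frame data volume for the 3072x768 window if it must be
# read back from GPU1 and re-uploaded to GPU2 each frame.
# RGBA8 (4 bytes/pixel) is assumed.

WIDTH, HEIGHT = 3072, 768
BYTES_PER_PIXEL = 4  # assumption: RGBA8 framebuffer

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
frame_mb = frame_bytes / 2**20
print(f"per frame: {frame_mb:.1f} MB")  # -> per frame: 9.0 MB

# Readback plus upload means the pixels cross the bus twice.
for fps in (60, 15):
    rate_mb_s = 2 * frame_bytes * fps / 2**20
    print(f"at {fps} fps: {rate_mb_s:.0f} MB/s over the bus")
# -> at 60 fps: 1080 MB/s
# -> at 15 fps: 270 MB/s
```

So sustaining 60 fps would need roughly a gigabyte per second of readback-plus-upload traffic, and GPU readback on hardware of this era is typically far slower than that, which would explain the drop to 15 fps.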
How can I move the rendering to GPU2?
Switching the projectors to GPU1 and the monitor(s) to GPU2 doesn't do it. Rendering always happens on what some writings call the 'main GPU', but there seems to be no apparent logic to how it's chosen.
Does anyone have a clue what's happening here and could point me in the right direction?