I'm currently trying to build a GL patch that can drive three 800 x 600
windows (each on a separate NVidia 6600 card, stacked in a quad 2.5 GHz G5). The
tricky part of this project is that I have to subdivide each window into 12
smaller blocks/video screens and have a gl.shader available to do real-time
processing on all the video outputs.
I'm currently reading the texture files (800 x 600, photo-JPEG, 15 fps) from
RAM, slicing them up with jit.scissors, and tying them to independent
gl.texture and videoplane objects. I'm using a separate rendering context
for each window (this seems to be what hardware rendering requires).
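For reference, here is a rough sketch of the chain I'm describing for one window (object names as in the patch; the @rows 3 @columns 4 split is just one way to get 12 blocks, and tex01/ctx1 are placeholder names):

```
jit.qt.movie 800 600                 <- plays the 800x600 photo-JPEG clip from RAM
        |
jit.scissors @rows 3 @columns 4     <- 12 outlets, one ~266x200 block each
   |    ...    |
jit.gl.texture tex01 @drawto ctx1    <- one texture per block, bound to that
        |                              window's rendering context
jit.gl.videoplane @drawto ctx1 @texture tex01
```

This is repeated three times, once per window/context, which is where I suspect the load is coming from.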
I'm able to get 14 fps (if I can trust the fpsgui object...), but the video
seems to "clog up" at moments, and the computer eventually crashes after 15
minutes. I was wondering if there would be a better way to structure this patch.
I've read previous threads concerning multiple video outputs and OpenGL, plus
others relating to texturing issues. I found valuable information there
(concerning rendering contexts, colorspace, window syncing and such), but I
wanted to ask if anyone on the list has tried something similar, and what kind of
performance I should expect out of my setup.