I am using Syphon-fed video feeds from 3 different cameras (PS3 EYEs) in combination with cv.jit.blobs to track IR sources in a stage performance.
Right now I am simply connecting jit.gl.syphonclient to jit.matrix, since cv.jit only accepts matrices, not textures. This causes a serious slowdown (presumably from reading the texture back from the GPU to the CPU), dropping the incoming video from upwards of 40 FPS to a crawling 20 FPS, which makes blob tracking much harder.
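For context, here is a rough back-of-the-envelope estimate of the readback traffic involved. This assumes 640x480 4-plane char matrices (the PS3 Eye's native resolution; my actual matrix dims and planecount may differ), so treat it as a sketch rather than a measurement:

```python
# Estimate GPU-to-CPU readback bandwidth, assuming 640x480 RGBA
# (4-plane char) frames from each of 3 cameras at 40 FPS.
width, height, planes = 640, 480, 4   # assumed matrix dimensions
cams, fps = 3, 40                     # three PS3 Eyes at ~40 FPS

bytes_per_frame = width * height * planes          # one camera, one frame
total_bytes_per_sec = bytes_per_frame * cams * fps

print(total_bytes_per_sec / 1e6, "MB/s")  # → 147.456 MB/s
```

So on the order of 150 MB/s is being pulled back across the bus every second, which may explain why the matrix conversion dominates.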
I also tried rendering each texture to its own videoplane, placing the three planes side by side in 3D space, and rendering the whole scene to a single matrix (to be scissored back into three afterwards), but the slowdown is just as bad with that setup.
Is there a more efficient way to do this? Am I doing the texture-to-matrix conversion suboptimally?
It would truly be a lifesaver if the answer to either of those questions was "yes"; we are very late in the production cycle and time is running out.
A thousand thanks in advance!