I am using Syphon-fed video feeds from 3 different cameras (PS3 EYEs) in combination with cv.jit.blobs to track IR sources in a stage performance.
I am currently simply connecting jit.gl.syphonclient to jit.matrix in order to get something that works with cv.jit, which accepts only matrices, not textures. This causes a serious slowdown (the texture has to be read back from the GPU to the CPU), reducing the incoming video from upwards of 40 fps to a crawling 20 fps, which makes blob tracking much harder.
I also tried rendering each texture to its own videoplane, placing the planes side by side in 3D space, and rendering the whole scene to a single matrix (later to be scissored into three), but the slowdown is just as bad with that setup.
Is there a more efficient way to do this? Am I doing the texture-to-matrix conversion in the best way possible?
It would truly be a lifesaver if the answer to those questions were 'yes'; we are very late in the production cycle and time is running out.
well, i don’t think there’s gonna be a magic bullet for this.
the first suggestion is, obviously, to downsample as much as possible. i'm not sure whether it's more efficient to downsample texture-to-texture or matrix-to-matrix; you'll have to run some comparisons.
downsampling to texture would be something like sending the output of syphon to a jit.gl.texture @adapt 0 @dim 80 60
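roughly, the chain would look like this (the context name "blobctx" and the 80x60 dims are just placeholders — check the jit.gl.texture helpfile for the attribute details):

```
[jit.gl.syphonclient blobctx]                 <- texture out (left outlet)
        |
[jit.gl.texture blobctx @adapt 0 @dim 80 60]  <- downsample on the GPU to 80x60
        |
[jit.matrix 4 char 80 60]                     <- readback still happens, but on a tiny texture
        |
(your cv.jit chain)
```

the point is that the expensive GPU-to-CPU readback then only moves an 80x60 texture instead of the full camera resolution.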
the other suggestion is to use jit.gl.asyncread instead of reading back to a matrix directly.
again, keep your resolution as low as possible.
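something like this, going from memory of the asyncread helpfile (again, the context name is a placeholder, and note that asyncread's double-buffered readback adds roughly one frame of latency):

```
[jit.gl.syphonclient blobctx]
        |
[jit.gl.videoplane blobctx @transform_reset 2]   <- fills the context with the syphon feed

[jit.gl.render blobctx] --bang after drawing-->  [jit.gl.asyncread blobctx]
                                                         |
                                                 matrix out -> (your cv.jit chain)
```

because asyncread grabs the framebuffer asynchronously, the readback overlaps with the next frame's rendering instead of stalling it.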