I use Jitter a lot, and I'm curious to know whether I'm using it in the most efficient way I can.
In the Jitter Gen world, we have jit.pix and jit.gl.pix - two objects doing the same thing, but the former using the CPU, and the latter using the GPU.
When deciding between these two objects, the choice seems obvious: use the GPU wherever possible... right? Is it as simple as that, or is there more to it?
FYI, I have a MacBook Pro and the graphics card is an AMD Radeon HD 6750M. Can I (and should I) harness the GPU on this card to make my Jitter patches more efficient? (Happy to provide more info on the graphics card if required.)
Also, I have read somewhere in the documentation that the jit.matrix object does not use the GPU. If this is the case, do I have to refactor all my patches to use objects that *do* use the GPU? (jit.matrix is used heavily in my patches.)
As some additional background info: everything I do is in 2D, at very low resolutions. However, I have several layers of video running at once, and I alpha-blend them in real time. Also, in each layer I daisy-chain many matrices, each performing a different function/effect on my video.
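To give a concrete picture of the kind of per-pixel work each layer involves, here's a rough GenExpr codebox sketch of a two-input jit.gl.pix that blends two layers. It's a plain crossfade standing in for my actual alpha-blend, and the Param name is just illustrative, but it shows the sort of operation I'd be moving to the GPU:

```
// Rough sketch: blending two video layers per pixel in a two-input
// jit.gl.pix codebox (GenExpr). A plain crossfade, not a true alpha-over
// blend; it's only meant to show the shape of the per-pixel operation.
Param amount(0.5);              // blend amount: 0 = only in1, 1 = only in2
out1 = mix(in1, in2, amount);   // evaluated per pixel on the GPU
```

The same codebox would run on the CPU if dropped into a jit.pix instead, which is exactly the choice I'm asking about.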