
A few questions on Jitter and GL

April 20, 2014 | 12:21 pm

I use Jitter a lot, and I’m curious to know whether I’m using it in the most efficient way I can.

In the Jitter Gen world, we have jit.pix and jit.gl.pix – two objects doing the same thing, but the former using the CPU, and the latter using the GPU.

When deciding between these two objects, the choice seems obvious: use the GPU wherever possible… right? Is it as simple as that, or is there more to it?
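(For what it's worth, my understanding is that both objects run the same Gen patcher / GenExpr code, so a made-up codebox like the one below should compile unchanged for either the CPU or the GPU – which is partly why the choice feels like it should just be "pick the GPU":

    // illustrative only: simple gain on the incoming frame
    px = in1;                          // pixel at the current cell
    out1 = clamp(px * 1.2, 0., 1.);    // brighten and clamp to 0..1
)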

FYI, I have a MacBook Pro and the graphics card is an AMD Radeon HD 6750M. Can I (and should I) harness the GPU on this card to make my Jitter patches more efficient? (Happy to provide more info on the graphics card if required.)

Also, I have read somewhere in the documentation that the jit.matrix object does not use the GPU. If that is the case, do I have to refactor all my patches to use objects that *do* use the GPU? (The jit.matrix object is used heavily in my patches.)

As some additional background info, everything I do is in 2D, at very low resolutions. However, I have several layers of video running at once, and I alpha-blend them in real time. Also, in each layer I daisy-chain many matrices, each performing a different function/effect on my video.
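(To give an idea of the per-layer blending, each step is essentially the classic "A over B" mix – something like this hypothetical two-input Gen codebox, just to show the kind of operation I mean:

    a = in1;                 // foreground layer, RGBA
    b = in2;                 // background layer, RGBA
    alpha = swiz(a, "a");    // foreground's alpha channel
    out1 = mix(b, a, alpha); // alpha = 0 -> background, alpha = 1 -> foreground

Nothing exotic, just a lot of these chained together.)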



dtr
April 21, 2014 | 5:01 am

Are you running into CPU performance limits? If not, keep things as they are.

If yes, the GPU can speed things up dramatically, but you have to be consistent. For optimal results, once your data is on the GPU it should stay there, so no going back and forth between GL and CPU (jit.matrix) objects. This especially applies to high-res video; at lower resolutions you might get away with shuttling data back and forth, depending on its size and your hardware.
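Roughly, a chain that never leaves the GPU looks like this (a sketch from memory, substitute your own player and effects):

    jit.qt.movie
      -> jit.gl.pix          (the matrix is uploaded to a texture here, once)
      -> jit.gl.pix          (texture in, texture out: stays on the card)
      -> jit.gl.videoplane   (drawn by jit.gl.render into a jit.window)

Dropping a jit.matrix into the middle of that forces a readback to the CPU and then a re-upload, which is exactly the round trip you want to avoid.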

Even antique OpenGL-accelerated cards can provide big improvements over the CPU.

Watch out for jit.pwindow objects. They can be detrimental to performance/frame rates in GL patches.


April 24, 2014 | 10:26 am

Hi @dtr – thanks for your reply.

Ok, so the main thing is to not hop back and forth between CPU and GPU – that makes sense.

In practice, would either of the following patching examples create such a hop?

- A jit.gl.pix object whose output is sent directly to another jit.gl.pix object;
- as above, but with a send-receive pair between the two instead of a direct patch cord.

Also, regarding the jit.pwindow object: do you mean to say that if I add a pwindow at the end of a chain of jit.gl.pix objects, it would actually slow down the processing of the gl objects that come before it?


April 24, 2014 | 12:53 pm

Adding any jit.pwindow will slow down a patch. I used to use two or three of them to see what was happening in my render chains with jit.gl.slab, and they slowed my patch dramatically. There is an option (something like "onscreen") that should be unchecked to gain some CPU.
Objects like float or integer number boxes (and message boxes whose contents change) can slow down the CPU as well. If possible, remove any objects that have to be updated graphically.
In my biggest patch I had over 300 float boxes; after cleaning up I managed to remove more than 260 of them, and my frame rate went from 25 to 50!

Try sending a video to a matrix object and then to a jit.gl.videoplane to see the performance difference – that should make it obvious ;-)
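Something like this, as a rough sketch (object names from memory):

    jit.qt.movie -> jit.matrix -> jit.pwindow      (CPU path)
    jit.qt.movie -> jit.gl.videoplane              (GPU path, via jit.gl.render + jit.window)

The second chain should hold its frame rate far better as the resolution goes up.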



dtr
April 24, 2014 | 1:23 pm

In practice, would either of the following patching examples create such a hop?

- A jit.gl.pix object whose output is sent directly to another jit.gl.pix object;
- as above, but with a send-receive pair between the two instead of a direct patch cord.

Both are fine and will stay on the GPU.
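(If you're curious what actually travels down the cord, or through the send/receive pair, it's just a texture reference – a message along the lines of

    jit_gl_texture u330000001

so only a name gets passed around in Max, while the pixels themselves stay on the card.)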


April 24, 2014 | 1:29 pm

Hi @Andro – good call on disabling the jit.pwindows when they're not needed; I do this already.

I think you’ve also given a good rule of thumb there: anything that updates graphically should be avoided. I always use the [int] object over the [number] object.

@dtr – thanks for the confirmation, that’s really good to know.


May 4, 2014 | 12:16 pm

I have one more scenario for which I’d like to know whether processing stays on the GPU:

A Max for Live device with a jit.gl.pix object connected to a send object, and another Max for Live device in the same Live Set containing the matching receive object connected to a second jit.gl.pix object.

(I do visuals work using Ableton Live as a platform!)

Do you know the answer to this too, @dtr? If not, do you have any pointers on how I could find this out for myself?



dtr
May 4, 2014 | 1:42 pm

I haven’t tried this but have no reason to think it wouldn’t stay on the GPU.
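One quick way to check for yourself: put a print object on the receive and look at what comes out. If it is still a

    jit_gl_texture <name>

message rather than a jit_matrix <name> one, nothing has come back to the CPU.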

