Jitter/Max for Live performance optimization help needed
I need some help figuring out some performance issues. This is my first real Jitter project, though I've done a few Max projects. I have a few concerns, and so this is going to be a bit long, but I would really appreciate any help anyone can give me. I've tried to search the list for answers on this stuff but I'm having a hard time.
I'm on a 2.16 GHz Core Duo iMac (10.6.2) with 2 GB of RAM and the latest Max/Jitter versions.
Okay, my project is this, at present:
1) Three video sources:
Two of them are modified versions of the "sketchpad" tutorial rendering to 320 x 240 matrices. The other is a jit.qt.grab object, at the same resolution.
2) Two processing blocks:
One for each matrix on the render side, for downsampling and streaking, basically copied from the tutorial on "feedback using named matrices," which involve three more "320x240 or less" matrices each.
3) A couple of mixers to combine the three post-processing sources.
4) A whole bunch of MIDI routing to route incoming messages from my hardware to different variables of the sketchpad objects and processing blocks. (Nothing very complicated).
The sketchpad rendering products are fairly simple: one is just two circles, two rectangles and eight triangles, and the other is four tori. No lighting, textures, fog, etc. Almost everything is driven by qmetros at 30 milliseconds; there are zero interface objects.
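For reference, a qmetro interval maps to a frame rate as fps = 1000 / interval_ms, so 30 ms works out to roughly 33 fps. A quick sanity check of the arithmetic (plain Python, just for illustration, not Max code):

```python
# Frame rate implied by a qmetro interval (milliseconds between bangs).
def qmetro_fps(interval_ms: float) -> float:
    return 1000.0 / interval_ms

print(qmetro_fps(30))  # roughly 33.3 frames per second
```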
Now, my first question: with these objects running, without even any incoming MIDI, but with all the render objects and the camera on, processing and mixing, I'm at 40% CPU usage (and I can't go above 50% because Max can only use one core, correct?). It seems to me (and I know nothing) that rendering sixteen simple shapes and a camera feed shouldn't cost nearly that much. Does that seem reasonable with this computer?
Second, I believe part of this is related to the following passage from the tutorials:
"Hardware vs. Software Rendering: One of the great advantages about using OpenGL for rendering graphics is that most of the work can be done by the graphics accelerator hardware in your computer, freeing the CPU up for other work such as generating audio. When drawing into jit.window objects or jit.pwindow objects, the hardware renderer can be used. Unfortunately, due to limitations of the current system software (Mac OS 9) the hardware renderer cannot draw directly into jit.matrix objects. This is not a limitation inherent in OpenGL, and may change in the future. Now, however, this means that drawing directly into Jitter matrices is significantly slower than drawing to jit.window or jit.pwindow objects, especially for complex scenes or those involving textures."
But I wanted to check to see if this is still accurate, since it was obviously written some time ago. There's no way for me to render to a window and then grab the contents of that window into a matrix, and thus use the GPU, is there?
Third, in searching the forums I did find a bunch of vague suggestions about adjusting various parameters to optimize Jitter performance, but no specific details I could use. And while I think Cycling '74 sets the bar for every other software company in terms of documentation and tutorials, I can't find any help on what these settings mean or how I should adjust them. I appreciate that it might be taken for granted that people using Jitter have more familiarity with these terms than I do; this is my first excursion into video.
NEXT:
So, today I bought Live 8 and Max for Live and transferred my app into a Live set, and I was delighted with how quickly I got it working. But when things are running, I start to get buffer problems. I increased my buffer size to compensate, but I'd like to add a bunch of audio processing, and I feel like I'm between a rock and a hard place: I have to increase latency to get the processing I want running alongside the Jitter happenings.
My old audio interface kicked the bucket and I'm using the USB send/return on my Allen & Heath Zed14 mixer, which is obviously less than high end (it's a $400 mixer, new). I'm due for a new interface, and I have my eye on the Focusrite Saffire 24 DSP, but I'm also sort of broke and I have to be careful with my spending…
The point I'm vaguely spiralling towards is that I don't really understand latency and buffer size issues. That is, if the buffer settings are something happening on the CPU side, does that mean the benefits of a new audio interface would not be all that significant? Also, if I want to go up to 4 ins and 4 outs, are my problems going to get worse? Or might the Saffire let me cut that latency down?
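For a rough sense of scale: the latency a buffer adds is just buffer size divided by sample rate, per direction, regardless of which interface is attached. What a better interface and driver mostly buy you is the ability to run a smaller buffer without glitches. A quick calculation (plain Python, illustrative only; 44.1 kHz assumed):

```python
# One-way latency contributed by an audio I/O buffer, in milliseconds.
def buffer_latency_ms(buffer_frames: int, sample_rate: int = 44100) -> float:
    return buffer_frames / sample_rate * 1000.0

# Doubling the buffer doubles this figure; a round trip pays it twice
# (once on input, once on output), on top of the converters' fixed latency.
print(buffer_latency_ms(256))   # about 5.8 ms
print(buffer_latency_ms(1024))  # about 23.2 ms
```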
Or is the only thing that is going to help me going to be a new computer with a better processor?
In summary:
a) Does the CPU usage seem reasonable for a patch of that complexity?
b) Is the bit about GPU rendering still relevant? Any hope that will be addressed in a future release? Any suggested work-arounds?
c) Will a new audio interface help my buffer issues significantly, or not so much?
d) Can anyone help me with, or point me to, some info on optimizing my Jitter setup?
Thanks so much in advance. This community has always been tremendously helpful, but I really suspect I'm pushing your goodwill with a post this long…
a - does not seem unreasonable, but of course it depends on your patch. make sure you check out vade's optimizations for processing quicktime:
http://abstrakt.vade.info/?p=147
b - rendering opengl to a matrix is slower than rendering to the window, and probably always will be. instead, you can capture to a jit.gl.texture and process that texture with jit.gl.slab chains (as opposed to matrix processing) for significant speed boosts (depending on your graphics card).
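To make that concrete, here is a minimal sketch of the capture-to-texture idea. The object names are standard Jitter GL objects, but the exact attributes, messages, and shader file here are from memory, so check them against the Jitter reference before relying on them:

```
jit.gl.texture ctx @name captex        <- named texture used as the capture target
jit.gl.gridshape ctx @capture captex   <- @capture draws this object into captex instead of the window
jit.gl.slab ctx @file td.rota.jxs      <- GPU processing; pass it the texture via "jit_gl_texture captex"
jit.gl.videoplane ctx                  <- displays the slab's output texture in the window
```

Chaining several jit.gl.slab objects output-to-input keeps the whole processing path on the GPU, which is where the speedup comes from.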
c - ask in a separate post on the max/msp main forum.
d - see above.
Thanks!
b) saved me 20% on the CPU usage meter! This was exactly what I needed, thank you.