Forums > Jitter

Jitter performance enhancing

January 24, 2013 | 6:37 pm

Hey all,

I’m using a laptop with an 8-core i7 processor, a 1 GB NVIDIA GPU and 8 GB of RAM. (Win7 64-bit)
I’m checking the CPU and RAM usage while doing some live processing in Jitter, and Max does not seem to be making full use of the machine, yet the performance is poor (20–24 fps). It says I have 5 GB of free RAM, and the overall CPU usage is around 35 percent. Max is the only program running; there are no heavy background processes.
Is there any way to make Max use the computer’s full capacity (or at least more of it than now)? Or is it possibly some kind of configuration problem?
Thanks in advance


January 24, 2013 | 8:30 pm

hi,
are you reading several files from the hard drive?
maybe that process is decreasing the frame rate …



dtr
January 24, 2013 | 10:04 pm

Some elements of Max can make use of multiple cores/threads, but not all. Whether there’s room for optimization really depends on your actual system/patch. Impossible to say anything useful without seeing it.


January 25, 2013 | 9:39 pm

No files. It’s an OpenGL context with two meshes feeding two independent jit.gl.multiple objects, both using 20 20 float32 matrices for position, rotatexyz and scale. For personal reasons I’m afraid I can’t post the actual patch.
Ideas off the top of my head:

I’m using several jit.op objects; maybe merge them into a single jit.expr?
For transformation automation I use blines with a metro 1. Could the short interval slow the system? I noticed it’s running way slower than real time.
There’s a live audio feed coming in that passes through an svf~ and a peakamp~ to make things react to different frequency bands with bangs (i.e. audio comes in; if it reaches a peak value, bang a bline envelope). Could this eat up the power?
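On the jit.expr idea, something like this is what I have in mind (a rough sketch, if I understand the @expr syntax right — the @op and @val values here are made up, not my actual chain):

```
[jit.op @op * @val 0.5] → [jit.op @op + @val 0.25]

collapsed into one object:

[jit.expr @expr "in[0]*0.5+0.25"]
```

That would at least cut the number of intermediate matrix copies per frame, if I’m reading the docs correctly.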

Many thanks guys. This forum and the people here are lifesavers


January 25, 2013 | 9:57 pm

a single instance of Max will never use all the resources of a multi-core machine, except maybe with audio processing using a poly~ object. not your case, apparently.

you could possibly split up your OpenGL work, if it makes sense in your specific case, and run two (or more) Max 6 instances. render them to textures and (assuming you are on OS X) share them using the Syphon external.

replace all your jit.op objects with jit.gen. better yet, move to the GPU using jit.gl.pix.
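as a rough sketch of what that looks like (the ops here are placeholders, substitute your own chain), a couple of chained jit.op stages become one GenExpr codebox inside a jit.gl.pix, so the whole thing runs in one shader pass on the GPU:

```
// inside a [jit.gl.pix] codebox — hypothetical ops, adjust to your patch
a = in1;           // incoming texture
b = a * 0.5;       // was [jit.op @op * @val 0.5]
out1 = b + 0.25;   // was [jit.op @op + @val 0.25]
```

the point is that each jit.op in a chain costs a full matrix pass on the CPU, while a single jit.gl.pix does all of it per-pixel on the GPU.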

you should definitely not be driving blines with a metro 1; use the same qmetro that you are driving your render context with.

the peakamp~, snapshot~, whatever, should also be driven by the render-context qmetro.

remove *all* GUI objects from your GL patch.
anything that needs a GUI, move to a separate patch and run in a separate instance of Max, communicating with the GL patch via udpsend/udpreceive.

no idea if any of these apply to your specific case.

