Forums > MaxMSP

best ways to push down cpu usage?…

October 23, 2009 | 8:17 am

hello all,

as the title suggests, i am looking for ways to push down the cpu usage of a patch (any patch, for that matter), but i don't really know where to start.

i do know [poly~] and have used it in my patches when needed, and it works a treat.
but even so, the cpu usage is still quite high, around 30%.
i'm not saying that my patch is small: six [bpatcher] objects, all loading the same patch, which are in turn connected to the outputs, plus a record function i have going.
each [bpatcher] has things like a stereo delay, spectral shifting of sorts and a few other effects.
this is a video i posted of it, although i have edited it somewhat recently.
http://www.vimeo.com/7022326

as you can tell i am in no way a max ninja, and i do know that more effects etc. add to the cpu load.
but then i saw the 'livid looper' software and ran it, and it runs at around 8%, which is really low for what is actually in the program, and it has as many effects as i have, if not more.

so my basic question, after that little explanation, is how to push the cpu down more and more.
what would be the best ways to really push it down, [poly~] aside?

many thanks to anyone who can help…


October 23, 2009 | 12:01 pm

Generally, the easiest way to decrease cpu usage is to use a larger vector size and a lower sample rate.
The downside of a large vector size is of course a higher latency.

You can also set the vector size/samplerate for only the objects inside a poly object!
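The latency cost of a larger vector size is simple arithmetic: one I/O vector's worth of samples divided by the sample rate. A small illustrative sketch (the numbers are generic, not Max internals):

```python
def io_latency_ms(vector_size, sample_rate):
    """Latency of one I/O vector, in milliseconds."""
    return vector_size / sample_rate * 1000.0

# Larger vectors mean fewer scheduler interrupts (less CPU overhead)
# but proportionally more latency:
for vs in (64, 256, 1024):
    print(f"{vs:5d} samples @ 44100 Hz -> {io_latency_ms(vs, 44100):.2f} ms")
```

So quadrupling the vector size quadruples that component of the latency, which is the trade-off described above.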

If you really want to optimize the performance of a patch, in my experience, it’s best to use a profiler program (such as Shark, which comes with the Developer tools on OS X).
The profiler will show you which objects use the most cpu power, so you can try to improve those.
This especially makes sense if your patch uses multiple instances of the same sub-patch.


October 23, 2009 | 12:24 pm

aye, vector size etc. is all well and fine, but shark does seem the way to go.

i would rather not mess with settings like vector size and such, because i have everything fine as is.
shark it is then :)

cheers…



nit
October 23, 2009 | 6:28 pm
mudang wrote on Fri, 23 October 2009 14:01
Generally, the easiest way to decrease cpu usage is to use a larger vector size and a lower sample rate.
The downside of a large vector size is of course a higher latency.

You can also set the vector size/samplerate for only the objects inside a poly object!

If you really want to optimize the performance of a patch, in my experience, it’s best to use a profiler program (such as Shark, which comes with the Developer tools on OS X).
The profiler will show you which objects use the most cpu power, so you can try to improve those.
This especially makes sense if your patch uses multiple instances of the same sub-patch.

Thanks for this suggestion. I’m having the same problem with a terrifyingly huge patch which is therefore quite hard to debug.


February 3, 2011 | 5:54 pm

Hello old thread, I was searching for this Shark program but I didn’t know the name of it.

Anyway, what do I make of this? How can I tell which objects are taking the most CPU? Do I need to select some different options?

===== shark profile majig ====

8.5% MaxMSP juce::LowLevelGraphicsSoftwareRenderer::clippedBlendImage(int, int, int, int, juce::Image const&, int, int, int, int, int, int, float)

======================


February 3, 2011 | 6:03 pm

Ah not to worry, best to turn the audio on when doing the test!


February 3, 2011 | 8:45 pm

i’m not sure shark works properly when you just test a max patch. to use shark you’re meant to build your application in xcode with "generate profiling code" enabled. i might be wrong about this.

[mtof~] can be a cpu killer. i’ve attached my own version, which is at least 100% faster.

oli

http://www.olilarkin.co.uk

Attachments:
  1. ol.mtof.zip
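For context on why [mtof~] is expensive: it computes the standard MIDI-to-frequency conversion f = 440 · 2^((m − 69) / 12), which means a pow() call per sample. A common way to speed that up (a sketch only, not necessarily how ol.mtof works) is a precomputed table over the 128 MIDI notes with linear interpolation for fractional pitches:

```python
def mtof(m):
    """Standard MIDI-to-frequency conversion (what mtof~ computes)."""
    return 440.0 * 2.0 ** ((m - 69.0) / 12.0)

# Pay the pow() cost once per semitone at build time,
# instead of once per sample at run time.
TABLE = [mtof(m) for m in range(129)]

def mtof_fast(m):
    """Table lookup with linear interpolation for fractional MIDI notes.
    Linear interpolation between semitones is an approximation; the error
    is largest midway between notes but small for many musical uses."""
    if m <= 0.0:
        return TABLE[0]
    if m >= 127.0:
        return TABLE[127]
    i = int(m)
    frac = m - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])
```

Exact at integer MIDI notes; fractional notes pick up a small interpolation error in exchange for dropping the per-sample pow().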

February 3, 2011 | 9:13 pm

Hi Oli, thanks for the response.

I do get some useful results when patching (object names, cpu values etc.).

Is there reason to think these would change drastically in an application?


February 3, 2011 | 9:27 pm

Hi Lewis, do you use the [thispoly~] object? you can mute individual poly instances when they are not in use, which decreases the overall cpu load a lot.
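The saving comes from the fact that a muted voice's DSP chain is skipped entirely rather than computed and discarded. A rough Python sketch of that idea (hypothetical voice objects, not Max internals):

```python
class Voice:
    def __init__(self):
        self.muted = True  # corresponds to sending "mute 1" via [thispoly~]

    def process(self, n):
        # stand-in for the voice's DSP work
        return [0.1] * n

def process_block(voices, n):
    """Mix one signal block; muted voices cost essentially nothing."""
    out = [0.0] * n
    active = 0
    for v in voices:
        if v.muted:
            continue  # no DSP at all for this voice
        active += 1
        buf = v.process(n)
        for i in range(n):
            out[i] += buf[i]
    return out, active
```

With, say, one of eight voices playing, only that one voice's DSP runs per block, which is why muting idle [poly~] instances pays off so well.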


May 24, 2013 | 6:28 am

This is slightly off topic, but since Shark was replaced by Instruments, I have no idea which test to perform to get the same results.

In Shark I used to be able to see object names, but I can’t generate the same information when testing with any of the CPU templates in Instruments.

Any ideas how to go about this?


May 24, 2013 | 8:43 am

I’ve not seen the patch, but don’t underestimate the cost of GUI objects. I removed most of the meters/scopes from my patch and got tons of CPU back. Also check Activity Monitor in addition to what Max is telling you, as Max only accounts for audio-rate objects, not overall CPU use.


May 24, 2013 | 12:34 pm

Yes, there’s definitely a lot of CPU taken up by LCD drawing, but it was nice to see the specific signal objects in Shark.

I think I have an update rate of 200ms for the meter~ objects, which seems like a good trade-off between display responsiveness and CPU. [waveform~] is also a bit of a CPU hog for what it does (or perhaps I don’t understand what it does down at the code level).

