So for the last few months I've been designing what I'd call a "basic" synthesizer in Max/MSP (something fairly similar in feature set to Reason's Subtractor). Everything has gone great, except that now that I'm almost finished, the CPU performance is really bad. I've done some basic testing against other similarly specced synths (by loading both as plugins in Live and comparing their CPU usage), and my synth seems 4 or 5 *times* more processor intensive with 5 voices going at once.
Unfortunately, for a variety of reasons, I am not quite ready to put all the code out there yet (I plan on giving it away for free with the source eventually, I'm just not there yet). I know that not being able to see exactly what I'm working with will greatly limit any responses I get. But here is a screenshot of the top-level signal chain, which I think captures the basic idea pretty well:
The oscillators each have a choice of four waveform types, the lowpass filter uses a biquad~, the amplitude envelope uses an adsr~, and the LFO has three waveforms. I've tried adding and removing each section in turn to identify bottlenecks, but each section uses roughly the same amount of CPU (i.e. there are no individual bottlenecks/the whole thing is the bottleneck).
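In case the screenshot doesn't come through, here's one voice in text form (simplified to a single oscillator; these are real Max/MSP objects, but treat the specifics as illustrative rather than my exact patch):

    [saw~]                  <- one of the four selectable waveforms per oscillator
       |
    [biquad~]               <- lowpass; coefficients driven by the filter controls
       |
    [*~]  <--  [adsr~]      <- amplitude envelope scaling the signal
       |
    [out~ 1]                <- back out to the parent patch via poly~

The second oscillator mixes in before the filter, and the LFO modulates targets along this chain.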
That whole signal chain is contained in a poly~ patch (which also contains the interface). The interface sends data into the patches above via pattr. (I believe pattr is considered inefficient, but the pattrs are only fired when the user is actively manipulating the interface, so I don't think it's a big deal.)
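For reference, the top-level wrapper is just this ("voicepatch" standing in for my voice subpatcher's real name):

    [poly~ voicepatch 5]    <- five instances of the voice above (interface included)
       |
    [dac~]

with the interface's pattr objects forwarding parameter changes into each instance, as described above.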
I guess my questions are the following:
1. Should Max/MSP be able to achieve the same performance as a similarly specced synth written in, for example, C++? Or will Max/MSP simply always be more processor intensive than something written in a lower-level language?
2. Is there a blanket way of reducing CPU utilization that I can use in a synth like this? One idea I found while searching the forums is to downsample the container poly~, but when I tried "down 2" (as an argument to poly~) the audio became mangled; I've put a concrete sketch of what I tried below. My first thought was that the internal sample rate has to match the dac~ sample rate, so I tried lowering the dac~ rate, but it was already set to the lowest allowed (44100; further reading leads me to believe the options available here are the ones supported by your hardware audio interface).
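Concretely, the change I tried was:

    [poly~ voicepatch 5 down 2]   <- each voice runs at half the parent rate (22050 Hz under a 44100 Hz dac~)

and that's when the output got mangled. My current guess is aliasing: at 22050 Hz the Nyquist frequency drops to about 11 kHz, so bright waveforms (saw/square) would fold back audibly, even though poly~ is supposed to resample the signal back up at its outlets; artifacts from that resampling itself could also be part of it.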
If there is a way of reducing the total CPU usage per voice, that would be great! I know not having the full source is problematic, but I'd greatly appreciate any suggestions you might have.
Thanks in advance! Sorry about the provocative thread title... Max/MSP is a great language in all other respects, hopefully (and likely) I am just doing something wrong here.