I have an mxj external I created: essentially a trigger sequencer, but with pattern morphing, pattern filters, and a few other bits like Bjorklund/Euclidean pattern generation thrown in (internally it uses BigInteger and lots of bitwise logic). Anyway, I have spent hours lovingly crafting Max for Live devices so that I can use this Java code to control drum racks and instruments in Live. Each track of the sequencer loads up 4 instances of the mxj to sequence various things. I can run about ten tracks before I start getting sloppy timing and audio dropouts (this is just firing samples in Live, nothing fancy) on an 8-core Mac Pro. In total we are talking about forty mxj instances being banged roughly every 100ms.
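To give an idea of the kind of work each instance does per bang, here is a rough sketch (the names euclid and stepIsActive are illustrative, not my real API; this Bresenham-style formulation produces the same patterns as Bjorklund's algorithm, up to rotation):

```java
import java.math.BigInteger;

public class EuclidSketch {
    // Build an n-step Euclidean pattern with k pulses as a BigInteger
    // bitmask (bit i set = trigger on step i).
    static BigInteger euclid(int pulses, int steps) {
        BigInteger pattern = BigInteger.ZERO;
        for (int i = 0; i < steps; i++) {
            if ((i * pulses) % steps < pulses) {
                pattern = pattern.setBit(i);
            }
        }
        return pattern;
    }

    // The per-step test done on every bang.
    static boolean stepIsActive(BigInteger pattern, int step, int steps) {
        return pattern.testBit(step % steps);
    }

    public static void main(String[] args) {
        BigInteger p = euclid(3, 8);
        for (int i = 0; i < 8; i++) {
            System.out.print(stepIsActive(p, i, 8) ? "x" : ".");
        }
        System.out.println(); // prints x..x..x. (a rotation of E(3,8))
    }
}
```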
Now, I am sure my code has plenty of room for optimisation, but I was really surprised at how poorly it performed.
I prepared a fairly unscientific performance test outside of Max. I created a Java test method (using JUnit) that creates 1000 instances of the class, bangs each one, issues a morph command and generally gets it to do some work, then sleeps for 10ms and does it again. This uses up around 90% of one CPU according to Activity Monitor, compared to around 150% when running in Live, and it is hitting my code about 250 times more frequently (1000 calls every 10ms vs. 40 every 100ms!).
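The harness looks roughly like this (MorphSequencer here is a bounded stand-in for my real class, which does much more BigInteger/bitwise work per call):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class SequencerLoadTest {
    // Stand-in for the real sequencer class.
    static class MorphSequencer {
        private BigInteger pattern = BigInteger.valueOf(0b10010010);
        private int step = 0;

        void bang() {
            pattern.testBit(step++ % 8); // read the current step
        }

        void morph() {
            // Rotate the 8-bit pattern left by one as token "work".
            boolean top = pattern.testBit(7);
            pattern = pattern.shiftLeft(1).clearBit(8);
            if (top) pattern = pattern.setBit(0);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<MorphSequencer> seqs = new ArrayList<>();
        for (int i = 0; i < 1000; i++) seqs.add(new MorphSequencer());
        while (true) {
            // 1000 bang + morph calls, then a 10ms rest.
            for (MorphSequencer s : seqs) {
                s.bang();
                s.morph();
            }
            Thread.sleep(10);
        }
    }
}
```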
Now, obviously there is a lot more going on when I run my mxj through Max for Live, but the difference I am seeing is huge.
So my question is: what actually happens when Max calls out to Java? Is there an expensive marshalling step that is really hurting my performance? What might be the low-hanging fruit when optimising? For example, is it worth trying to reduce the number of times I cross from Max to Java and back again?
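To make that last question concrete, this is the kind of restructuring I mean. Instead of four mxj objects each firing their own outlet per bang (four Max/Java crossings), one object could compute all four lanes and emit a single list. A sketch against the standard com.cycling74.max API, assuming I have its signatures right (computeTrigger is a placeholder for my real per-lane logic):

```java
import com.cycling74.max.Atom;
import com.cycling74.max.MaxObject;

public class MultiLaneSeq extends MaxObject {
    private static final int LANES = 4;
    private int step = 0;

    public MultiLaneSeq() {
        declareIO(1, 1); // one inlet (bang), one outlet (list of triggers)
    }

    public void bang() {
        Atom[] out = new Atom[LANES];
        for (int i = 0; i < LANES; i++) {
            out[i] = Atom.newAtom(computeTrigger(i, step) ? 1 : 0);
        }
        step++;
        outlet(0, out); // one crossing back into Max instead of four
    }

    private boolean computeTrigger(int lane, int step) {
        return (step + lane) % 4 == 0; // placeholder pattern
    }
}
```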
If there are performance issues in moving between Max and Java, do the same issues exist when using native C/C++ externals?