Simple patch causes CPU spike in M4L, but not in straight Max

Noah Neumark:

It's just sending a list of 8 random numbers to a group of 8 live.numboxes inside each of 8 bpatchers. When I load this in straight Max, I don't see any issue, but when it's loaded in Max driven by Live, it gets hung up. I have a similar algorithm running in a big project I'm working on, and it's causing some issues.

[Max patch attached: copy it and select New From Clipboard in Max.]

Noah Neumark:

FYI, the join object with all inlets hot is redundant in this example patch, but it makes sense in the context of the M4L device I'm working on.

tyler mazaika:

Aside from the 7x duplicate messages being sent with [join @triggers -1], I don't see any reason to think this would be especially stressful. In Live, do you need all 64 numboxes set to "automated and stored"? For writing undo history, I could see that maybe being more problematic.

Depending on how time-critical these are, you could also try replacing the first uzi with a zl.queue of your numbers (or, better yet, pre-compiled symbol addresses) and a defer between their outputs to stagger the load. I've done that for large pattrstorage recalls and it helps with CPU spikes.
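Not a Max snippet, but the staggering idea can be sketched in Python: instead of firing all updates in a single event (as uzi does), you drain a queue one item per deferred pass. The class and address names here are illustrative, not from the patch.

```python
from collections import deque

class StaggeredDispatcher:
    """Drain queued updates one per tick instead of all at once,
    spreading work across the event loop (like zl.queue + defer)."""

    def __init__(self):
        self.queue = deque()

    def enqueue(self, target, value):
        self.queue.append((target, value))

    def tick(self, handler):
        """Process exactly one pending update; call once per scheduler pass."""
        if self.queue:
            target, value = self.queue.popleft()
            handler(target, value)
        return bool(self.queue)  # True while work remains

# Usage: queue 8 updates, then drain them one per tick.
d = StaggeredDispatcher()
for i, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6]):
    d.enqueue(f"bpatcher{i + 1}data", v)

processed = []
while d.tick(lambda t, v: processed.append((t, v))):
    pass
```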

Noah Neumark:

Tyler, thanks for the ideas. Those are some good things to try for further testing. I'm not sure I know what you mean by "pre-compiled symbol addresses." Can you elaborate?

Are you referring to the send location of the forward object?

I suspect the "automated and stored" setting might be the culprit. In straight Max, Max does nothing with that, but Live has to process it, so I can see there being a big difference there.

tyler mazaika:

Ah, "pre-compiled" was too fancy a word. I was just thinking that if you use a queue, you could either make it a list of numbers that you feed into your [combine] object, or make a list of symbols like "bpatcher1data", "bpatcher2data", etc., so you don't have to do the [combine] step at all. Performance-wise this would be a tiny win at best, though.
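The distinction can be shown in a few lines of Python (illustrative only; the "bpatcherNdata" names are stand-ins for whatever the patch actually uses): build each target symbol per message, or build the whole address table once at load time and just pair it with values.

```python
values = [60, 62, 64, 65, 67, 69, 71, 72]

# Per-message construction (analogous to assembling the address
# with [combine] every time a value is sent):
built = [(f"bpatcher{i + 1}data", v) for i, v in enumerate(values)]

# Pre-built address table, computed once up front:
ADDRESSES = [f"bpatcher{i + 1}data" for i in range(8)]
prebuilt = list(zip(ADDRESSES, values))

# Same messages either way; the second just skips per-send assembly.
assert built == prebuilt
```

As the post says, this is a tiny win at best; it mainly tidies the message path.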

[Max patch attached: copy it and select New From Clipboard in Max.]

I'm curious about the need for [join @triggers -1] in the Live context. With lots of potentially simultaneous changes, that makes a ton of redundant updates. Or is the random-numbers thing just for performance debugging?

Noah Neumark:

Ah I see about the addressing. Actually that’s less of an issue because I’ll probably have it send to one bpatcher at a time. For context, this is used in an arpeggiator, so instead of random numbers, there are incoming notes going through a borax for distribution to the 8 number slots. I’m using all “@triggers -1” because a note on or off can come in at any slot, and needs to update the final list. I could perhaps change to having 8 forward objects and then it wouldn’t have to process the entire list each time, but again, I think all this is a trivial cost. The real expense I predict is with the automate/store. I haven’t had a chance to test that yet. If that’s the culprit, I have a way to rework it.
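The borax-style distribution described above (incoming notes assigned to 8 slots, with a note-off freeing its slot) can be sketched in Python. This is a rough analogue of borax's voice numbering, with made-up names, not the actual device logic:

```python
class VoiceSlots:
    """Assign incoming pitches to a fixed number of slots, freeing a
    slot on note-off (roughly what borax's voice numbering does)."""

    def __init__(self, n_slots=8):
        self.slots = [None] * n_slots  # slot index -> held pitch or None

    def note_on(self, pitch):
        for i, held in enumerate(self.slots):
            if held is None:
                self.slots[i] = pitch
                return i  # index of the slot to update
        return None  # all slots busy; ignored (no voice stealing here)

    def note_off(self, pitch):
        for i, held in enumerate(self.slots):
            if held == pitch:
                self.slots[i] = None
                return i
        return None

v = VoiceSlots()
a = v.note_on(60)   # lands in slot 0
b = v.note_on(64)   # lands in slot 1
v.note_off(60)      # frees slot 0
c = v.note_on(67)   # reuses slot 0
```

With per-slot sends, only the slot returned by note_on/note_off would need an update, rather than reprocessing the whole 8-element list.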

tyler mazaika:

Oh, I see, so it's your "held pitches" or "active voices" list then? That makes sense. Thanks, and happy patching!

Noah Neumark:

Yeah, I split the data into individual units, each numbox with its own receive object. It significantly cuts down on the redundancy. Changing "automated and stored" to "hidden" did nothing, and turning off "visible to mapping" also did nothing. I'm still not sure why it spikes so much, but I think if I split the data and use defers appropriately, it will keep the issue manageable.

[Max patch attached: copy it and select New From Clipboard in Max.]