Question About Max For Live API Mapping Efficiency


Hey Guys,

I'm working on a project and we're running into some major performance issues, so I'm looking at redesigning how we get data from an external Max patch into Max for Live. We have four XBees, each sending four analog values (per sensor), coming into an external Max patch that forwards everything via UDP to various Max for Live devices. The sensor values are mapped to various MIDI instruments and effects in Live. Yes, I know this is a bit crazy from a CPU standpoint, but it's what the client wants!
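To give a concrete picture of the data flow, here's a rough Python sketch of what one XBee's frame looks like going out over UDP. This is purely illustrative (the real sender is the external Max patch), and the port, address pattern and value range are my own assumptions:

# Hypothetical sketch of the sender side only -- the actual work happens in Max.
import socket

UDP_HOST, UDP_PORT = "127.0.0.1", 9001   # assumed destination; one per M4L device

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_sensor_frame(xbee_id, values):
    """values: the four analog readings from one XBee (assumed 10-bit, 0-1023)."""
    msg = f"/xbee/{xbee_id} " + " ".join(str(v) for v in values)
    sock.sendto(msg.encode(), (UDP_HOST, UDP_PORT))

send_sensor_frame(1, [512, 300, 1023, 0])  # one frame: 4 values for XBee 1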

Essentially I've created a modification of the 'M4L Api AMap1.amxd' device to manage all of the mappings on a track-by-track basis. We have many of these in the Live project (potentially about six tracks, sometimes with three of these on a track, or a variation with four nested bpatchers in one unit). I'm looking at switching to a MIDI control structure in a Max patch, mapped with Ableton's MIDI mapper, to see if it lightens the load. Attached are two examples of the kind of mapping system I'm talking about. I notice the general approach is to have the Max data come in as a MIDI CC (see Max Api CtrlMIDIcc.amxd) or a live.numbox, and then convert it into a signal before feeding it into the mapping bpatcher.
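For context, the conversion on the way into the CC route is basically just a range scale. Roughly this, assuming 10-bit sensor readings (the input range is an assumption about my sensors, not something from the attached devices):

def to_midi_cc(raw, in_min=0, in_max=1023):
    """Scale an assumed 10-bit sensor reading into the 0-127 MIDI CC range."""
    raw = max(in_min, min(in_max, raw))
    return round((raw - in_min) * 127 / (in_max - in_min))

print(to_midi_cc(512))   # -> 64, i.e. mid-range sensor value lands mid-range CC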

My question is: why is the Max data converted into a signal in the attached Live devices? Is it simply because a signal responds faster than MIDI / Max data? There are smoothing, exponential-curve, etc. algorithms that finesse the signal; is the conversion to a signal done because the MSP objects make those sorts of calculations easier to implement?
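By smoothing I mean something like a one-pole / exponential smoother, which I'm assuming is roughly what the signal-domain objects (slide~-style) are doing. A rough control-rate sketch in Python, just to show the idea (the coefficient is an arbitrary assumption):

class OnePoleSmoother:
    """One-pole exponential smoothing: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    def __init__(self, a=0.1):
        self.a = a      # smoothing coefficient (assumed value)
        self.y = 0.0    # last output

    def step(self, x):
        self.y += self.a * (x - self.y)
        return self.y

smooth = OnePoleSmoother(a=0.05)
for raw in [0, 127, 127, 127, 0, 0]:   # jumpy incoming CC-style values
    print(round(smooth.step(raw), 2))  # smoothed ramp instead of hard steps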

Any insight would be much appreciated. If there's anything that needs clarification, ask away, as this is a pretty important project in terms of bread and butter! Thanks in advance!

Max-Api-AMap1-Project.zip
Max-Api-CtrlMIDIcc-Project.zip