You are right, send~ does not work between devices in Max for Live.
Stefan Tiedje already suggested a multi-channel live.out~ object (http://forum.ableton.com/viewtopic.php?f=35&t=129706), which I think would be a perfect solution for spatialization or multi-channel audio. It could work exactly like the "Audio To" routing in Live itself and would overcome the stereo-output limitation of Max for Live.
send~/receive~ between devices didn't make it into the first version of Max for Live, but we are definitely looking at a solution for future updates. We know this is something a lot of people want to do, and it is high on our list of features to implement.
If you don't mind introducing a fair amount of latency in the transfer, you might be able to hack together a buffer~-based solution.
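To make the latency trade-off concrete, here is a conceptual sketch in plain Python (not Max) of how a buffer~-style hand-off works: one device writes into a shared circular buffer, and the other reads a fixed distance behind the write position. That fixed distance is exactly the latency you accept. The class and parameter names are illustrative, not anything from the Max API.

```python
# Conceptual sketch (plain Python, not Max) of a buffer~-style hand-off:
# a "sending" device writes into a shared circular buffer and a "receiving"
# device reads a fixed number of samples behind it; that gap is the latency.

class SharedRingBuffer:
    def __init__(self, size, latency):
        self.buf = [0.0] * size
        self.size = size
        self.write_pos = 0
        self.latency = latency  # samples of delay between write and read

    def write(self, samples):
        # The sending device appends samples at the write head.
        for s in samples:
            self.buf[self.write_pos % self.size] = s
            self.write_pos += 1

    def read(self, n):
        # The receiving device reads n samples, `latency` samples behind
        # the write head, so it never races the writer.
        start = self.write_pos - self.latency
        return [self.buf[(start + i) % self.size] for i in range(n)]

rb = SharedRingBuffer(size=64, latency=8)
rb.write([float(i) for i in range(16)])  # sender has written samples 0..15
print(rb.read(4))  # reader lags 8 samples behind: [8.0, 9.0, 10.0, 11.0]
```

The larger you make the latency gap, the more scheduling jitter between the two devices you can absorb without the reader overtaking the writer.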
send/receive and coll are more or less fine. Let me explain the "more or less" a bit: the devices are likely running in different threads, so with send/receive you will get an unknown latency, and with coll one device might try to access some data at the same moment another device clears the same coll, and there is no protection against that.
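The coll hazard described above is a classic unguarded shared-data race. Here is a small Python sketch of the pattern (Max itself offers no equivalent lock around coll; the lock here just shows what protection would look like). All names are illustrative.

```python
import threading

# Sketch of the coll-style hazard: two devices share one data store.
# Device A clears it while device B reads it; without some guard, B can
# find the data gone (or half-updated) between its check and its access.
shared = {"notes": [60, 64, 67]}
lock = threading.Lock()  # stand-in for the protection coll does NOT have

def device_a_clear():
    with lock:  # without this, the read below can interleave mid-clear
        shared["notes"] = []

def device_b_read():
    with lock:
        data = shared["notes"]
        return data[0] if data else None

device_a_clear()
print(device_b_read())  # None: the store was cleared before the read
```

In Max for Live the practical mitigation is to design the devices so that only one of them ever writes to a given coll, or to sequence clear/read operations explicitly with messages rather than assuming they cannot overlap.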
One of the cool things I like in Live is the Instrument (and Audio/MIDI Effect) Racks. You can use a step sequencer and still drive 4 different synths, for instance. Communication between Max for Live devices using send and receive is supported, but there may be some latency involved when sending data between devices. Multithreading is tricky... c'est la vie.
i'm assuming (based on the above, and the fact that the only signal-rate MSP objects are live.remote~ and live.param~) that there is no way to tap into the audio output of another track *without* having a separate M4L device in that track (i.e. via the API).
is this sort of thing possible in principle (reading signal-rate audio directly through the API)? with a platform like M4L, it would be highly beneficial for sharing devices to keep things as self-contained as possible. for instance, i'm working on a patch that would ideally operate globally, from the master track, but read audio from other tracks (for analysis purposes), without depending on the rest of the set being configured in any particular way.
any thoughts on this? i don't consider it critical to my projects, but it would be a big step up in my mind to be able to do this eventually.