Routing audio and MIDI to a Max for Live device simultaneously

Henrik:

Hi there,

I'm trying to program my first Max/Jitter visuals, which should react to my Ableton Live set.
The Max for Live device contains several visuals that react to MIDI or audio signals (or both).

Is there a way to route an audio channel (e.g. the master) and MIDI signals to my Max for Live device simultaneously?

Henrik:

My first workaround is the following:
I create a Max Audio Effect and access the MIDI clips via the Live API.
But this is not exactly what I want, because when I play a live set I have to switch the audio and MIDI inputs to different tracks in Ableton.
Is there another, faster and more flexible way to solve this problem?
(e.g. routing several audio tracks and several MIDI tracks to my Max effect simultaneously?)

Thanks

Lee:

Hi, audio can only be received on the track that the device sits in. This is a limitation of M4L. Whilst this also applies to MIDI, it is possible to use send/receive to route MIDI data from multiple sources into a single device without any perceivable delay.

Andro:

I found it far easier to build my visuals in a Max standalone patch.
I then use a live.observer (watching the transport) with udpsend to send a bang to Max when Live is playing, which activates the qmetro.
You can then send all your MIDI and audio analysis data via udpsend to udpreceive in Max to control your visuals.
I found it far easier to have a lot of small M4L patches than to do it all in Ableton.
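As far as I know, Max's udpsend/udpreceive objects transmit their messages as OSC packets over UDP, so the data link Andro describes can be sketched outside Max too. Here's a minimal, illustrative Python stand-in for what a [udpsend 127.0.0.1 7400] hookup does; the address "/level" and port 7400 are my assumptions, not from the thread:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC packet: null-padded address string,
    type tag ",f", then one big-endian 32-bit float."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# fire one analysis value at a hypothetical [udpreceive 7400] in the visuals patch
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/level", 0.5), ("127.0.0.1", 7400))
```

Since UDP is connectionless and this all stays on localhost, sends are fire-and-forget and the per-message overhead is tiny, which is why the latency stays imperceptible.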

Andro:

Another thing I do is use the snapshot~ object to transform an audio signal into a float and send it into udpsend. I then create 8 send/returns and place a copy of the patch on each return, with a unique port number per copy.
You can then send any audio track to any return and swap them on the fly while playing live.
I normally set the returns to pre instead of post so that changing track volumes live doesn't affect the data.
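On the receiving side, that setup means listening on one port per return track. A rough Python sketch of that pattern (ports 7400-7407 are an assumption, and each packet is assumed to be a minimal OSC ",f" float message, where the last 4 bytes hold the value):

```python
import selectors
import socket
import struct

PORTS = range(7400, 7408)  # one port per return track (assumed numbering)

sel = selectors.DefaultSelector()
for port in PORTS:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ, data=port)

def poll(timeout=0.0):
    """Drain pending packets; return {port: latest float} seen this poll."""
    levels = {}
    for key, _events in sel.select(timeout):
        packet, _addr = key.fileobj.recvfrom(1024)
        # a minimal OSC float message ends with the big-endian float32 payload
        levels[key.data] = struct.unpack(">f", packet[-4:])[0]
    return levels
```

The visuals patch would call poll() once per frame (e.g. from the qmetro callback) and map each port's latest level onto a parameter, so swapping which track feeds which return never touches the receiving side.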

Henrik:

Thanks Andro,

I will try your workaround; it seems like a good solution for my project.
But I'm a little worried about the latency. Have you had good experiences with the udpsend/udpreceive objects?

broc:

@Andro

"I found it far easier to build my visuals in a Max standalone patch."

Interesting. But what exactly is the benefit? Why not build the visuals as an M4L device on the master track and communicate with it inside Live via UDP or send/receive?

Andro:

I just find it easier to have a dedicated program for visuals. It allows a more modular approach. Latency is minimal: I've got live visuals reacting to 8 channels from a live band and it runs lightning fast. I also sometimes need to load a new Live set, and the visuals carry on during the load.