Why can't I find a reasonable way to create sidechain effects in Ableton with Max for Live?!

Jesse Meijer:

I'm really bugged out by the fact that I can't find a way to send and receive audio between M4L devices in Ableton without latency (or at least with almost none).
I've searched all over the internet, and even 10 years ago Max developers said it was a high priority, but there still doesn't seem to be a working solution.

I've tried using [plugsend~] and [plugreceive~], and the latency is far too long for a sidechain input.
I've tried writing into a [buffer~] object and reading it on the other side; that can get me something close to real time, but it produces digital distortion and echoes and is therefore unusable.

Is there a magic doorway to a faraway land of no/low-latency hyperloop infrastructure between M4L devices that I'm missing? If so... teach me, masters.

If there isn't a way to do this, can someone please explain why not? It seems to me that this is very basic functionality that any M4L patcher would need at some point.

The only workaround I've found is this:
Ableton -> audio interface -> Max/MSP (send your sidechain signals back and forth and do whatever processing you need) -> back into the audio interface on a different channel -> back into Ableton on a different channel.

This means that one round trip takes up two channels on your interface: one to get from Ableton into Max and one to get back from Max into Ableton.

There must be a better way to do this; otherwise, developers, please fix this... :'(

Roman Thilenius:


this doesn't answer your question (and maybe there is no answer) but... in a dynamics effect you have latency anyway - i.e. you might want to delay the carrier signal anyway to match it to the sidechain. so if plugsend~ adds some more, just sum it up and compensate for it.
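for illustration, a minimal sketch of that compensation in plain javascript (not max-specific - the delay length is a made-up value, measure your own round trip, e.g. by timing a click through plugsend~/plugreceive~):

```javascript
// Minimal delay-line sketch: hold the carrier back by the measured
// sidechain latency so carrier and sidechain stay time-aligned.
// `latencySamples` is a placeholder -- measure it for your own setup.
function makeDelayLine(latencySamples) {
  const buf = new Float32Array(latencySamples); // circular buffer, zeroed
  let writeIdx = 0;
  return function process(sample) {
    const delayed = buf[writeIdx]; // oldest sample = the delayed output
    buf[writeIdx] = sample;        // overwrite it with the newest input
    writeIdx = (writeIdx + 1) % latencySamples;
    return delayed;
  };
}

// e.g. at 44.1 kHz a 512-sample compensation is roughly 11.6 ms
const delayCarrier = makeDelayLine(512);
// carrierOut = delayCarrier(carrierIn); // call once per sample
```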

also, while you experiment with using DAC channels... you might want to try one of the various virtual sound device drivers for mac or windows.

third idea: build mono devices which use the right channel as sidechain input, and then use the new live audio routing system to route the tracks accordingly. that's how we did it in VST/RTAS before they supported sidechaining/multichannel.

Jesse Meijer:

hey roman, thanks for the reply.

I like the idea of splitting a stereo channel into a carrier input and a control input. I'll check it out, but it's a shame that some stereo shapes and sounds will have to be sacrificed. Also, I need more than one control-signal path into each sidechain, so that'll be a puzzle by itself.

Compensating for delay times won't work because I'm working on dynamics processing for live music, so everything has to happen in real time. You're right about always having some latency, and I've been able to deal with that nicely when using the interface as a sort of transit station. Within Max itself everything works as smoothly as possible; the thing is that it's now costing me 10 channels to process one mono signal and two stereo signals. I can cut it down to 5 channels by sending the audio like this: 'source -> converter -> Max -> converter -> Ableton', but then recording becomes a destructive process.
I find it strange that something as seemingly basic as having M4L devices communicate with each other within Ableton has been so far-fetched (or otherwise clunky) for such a long time, with no straight answer as to why.

Maybe I should just email the developers. I'm still hoping someone has found a way :)

Roman Thilenius:


i have no idea how that works in ableton - in many DAWs you can use the AUX outs to split audio and put it on an additional mixer channel.

otherwise, well, just make a copy of the channel. (easy if it contains an audio track, but impossible if it runs an instrument driven by a probability sequencer.)

sidechaining is questionable to implement directly in a program, because you would have to make sure that the user does not start building feedback loops - and that can include actions like loading a plug-in or re-routing mixer settings - that is a real mess if you (as a developer of a host program) have to take care of it.

the best option is when a plug-in developer cares about that himself, inside his range of plug-ins, like powercore, orange vocoder and a few others once successfully did.

suppresses a comment about pluggo,

-110

Roman Thilenius:


maybe it is an option for you to create control data and drive parameters with that? control data is easier to send across tracks and devices. something like 20 ms resolution, then interpolated down to 4 ms, could work.
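a rough sketch of that interpolation in plain javascript (in a max [js] object you would use the Task object rather than setTimeout; the intervals below are just the ones from this example):

```javascript
// Receive a control value every ~20 ms, ramp to it in ~4 ms steps so the
// parameter moves smoothly instead of stepping. Values are illustrative.
const COARSE_MS = 20;               // rate at which values arrive
const FINE_MS = 4;                  // rate at which we drive the parameter
const STEPS = COARSE_MS / FINE_MS;  // 5 interpolation ticks per value

let current = 0; // last value actually applied

function onControlValue(target, applyParam) {
  const start = current;
  for (let i = 1; i <= STEPS; i++) {
    const v = start + (target - start) * (i / STEPS); // linear ramp
    setTimeout(() => { current = v; applyParam(v); }, i * FINE_MS);
  }
}

// usage: onControlValue(0.77, v => console.log("param ->", v.toFixed(3)));
```

(a real version would also cancel a ramp in progress when a new value arrives.)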

benj3737:

Can you use the built-in audio routing?
https://cycling74.com/articles/audio-routings-a-new-system-for-multi-channel-routing-in-ableton-live
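For example, a minimal [js] sketch that inspects a track's routing options through the Live API (assuming Live 10+; the property names come from the Live Object Model docs, so verify them against your version):

```javascript
// Query routing info for the first track. LiveAPI calls must happen after
// the device has fully loaded, so trigger this with a bang (e.g. from
// live.thisdevice), not from global code.
function bang() {
    var track = new LiveAPI("live_set tracks 0");
    post("track:", track.get("name"), "\n");
    // The properties behind the routing system the article describes:
    post("available inputs:", track.get("available_input_routing_types"), "\n");
    post("current input:", track.get("input_routing_type"), "\n");
}
```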

Jesse Meijer:

haha, you're right about the feedback loops. that's exactly what I'm trying to do: I'm building a system where 3 signals influence each other and themselves via controlled feedback loops. So I understand that developers don't want people to break their carefully built systems. It's a valid answer to my first question.

Creating control data is a good idea. I'm using audio to create clicks, and I can use those clicks to generate envelopes, so it might just as well be a bang coming out of one patch and into another patch, where it's translated back into an audio click. I'm still worried about dealing with vector-size timing then, though. So far I've built the whole process in gen~ to make it as accurate and fast as possible.

A new chain using data would then be:
source -> converter -> Ableton (little Y split so it sends its audio onto a second track) -> converter -> Max (generate data) -> data into the second audio track in order to process the audio.

This could cut my channel count down to 3, because I only need them to carry a control signal, so they can be mono.

This might be the way to get it working, but I don't have a good idea of how I would get the data from Max into an M4L audio device. Should I use a virtual MIDI port, or is there a cleaner way to do this?
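One candidate I want to try is Max's plain named [send]/[receive] pairs, which M4L devices share (only names prefixed with "---" are local to a device). A rough sketch of the sending side in a [js] object, with "sc_env" as a made-up bus name:

```javascript
// messnamed() sends a message to every object bound to the given name,
// so a [receive sc_env] in another M4L device will pick this up.
// Note: this is control-rate messaging, not sample-accurate audio.
function envelope(v) {
    messnamed("sc_env", v); // forward the envelope value onto the bus
}
```

On the receiving side, a plain [receive sc_env] into [line~] or [slide~] could smooth the values back to signal rate.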

By the way, thanks for braining on this with me :)

👽'tW∆s ∆lienz👽:

this is not necessarily something the Cycling74 developers can fix: Live's PDC (plugin delay compensation) has shown this unpredictable latency, growing the further down the device chain you go, for years now:
https://cycling74.com/forums/can-i-get-the-audio-buffer-size-of-live

Jesse Meijer:

@raja, thanks for the heads-up. Now when I encounter a ghost in my system, I'm aware of a possible culprit.

@benj, I'm going to check your option out and learn what I can about the Live API. I used to work only in Max/MSP, and for this project it would be smoother to have everything entangled in Ableton. So I guess I'm still scratching the surface of the possibilities within M4L, but I'm being forced to learn new things, which is never a bad thing.

Thank you guys! Loving the open source community here. I'll let you know when the problem is solved and how. Otherwise the quest continues :)

Roman Thilenius:


indeed, if you need that multiple times, you could pack it all in one device.

it would receive several channels of audio, analyse peak and power and whatnot, and let you route that to live knobs, VST parameters or other devices.
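the analysis itself is just peak and rms math - a plain javascript sketch of one block (in max you would rather reach for [peakamp~], [average~] or gen~, but the numbers are the same):

```javascript
// Peak and RMS ("power") of one block of samples for one channel.
function analyzeBlock(samples) {
  let peak = 0;
  let sumSquares = 0;
  for (const s of samples) {
    const a = Math.abs(s);
    if (a > peak) peak = a;  // track the largest absolute sample
    sumSquares += s * s;     // accumulate energy for the RMS
  }
  return { peak: peak, rms: Math.sqrt(sumSquares / samples.length) };
}

// e.g. analyzeBlock([0.1, -0.5, 0.25]) -> { peak: 0.5, rms: ~0.33 }
```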

(if you use a third party VST you can measure if and how fast it internally interpolates its threshold or ratio parameter already... by sending it a constant signal, for example "1.", as audio input, then sending it a parameter change at control rate, for example "0.33" ... "0.77", then recording its output and looking at it - and then send your parameter data at that same speed.)
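and a sketch of reading such a recording, again in plain javascript (the tolerance is arbitrary - adjust it to the plug-in's noise floor):

```javascript
// Estimate how long the plug-in's internal parameter ramp takes:
// `recorded` is the captured output after the parameter step.
function settleTimeMs(recorded, sampleRate, tolerance) {
  tolerance = tolerance || 0.001;
  var finalValue = recorded[recorded.length - 1];
  // walk backwards to the last sample still outside the tolerance band
  for (var i = recorded.length - 1; i >= 0; i--) {
    if (Math.abs(recorded[i] - finalValue) > tolerance) {
      return ((i + 1) / sampleRate) * 1000; // ms from capture start
    }
  }
  return 0; // already settled at the start
}

// e.g. settleTimeMs(outputSamples, 44100) tells you roughly how fast you
// can push parameter changes before the plug-in starts lagging behind.
```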


Roman Thilenius:


do you have suite by any chance? it really sounds like something i would rather do in max.

Jesse Meijer:

Hey Guys.

Little update: the audio routing patches were the solution I was looking for. For me, the Live API is a whole new language in itself and therefore feels a little uncomfortable, but it's really nice that they left their patches unlocked and made some utilities that you can freely implement.

I've managed to reverse engineer their 'audio routing example' patch to take signals from the audio tracks I needed. From there I built my entire system to do all of the processing inside this one patch, then used their output system to send the results to different 'parallel' tracks. So now I have three clean signals (which will be muted in a real-life situation), one patch that does all the processing for multiple signals, and three processed signals. When I solo these channels (clean and processed) there's no phasing, and that's good enough for me :)

So thank you once again for all the suggestions and bringing me closer to where I need to be.

I'm still working out all of the kinks in my system and testing it to see if it really works the way I need it to. When I'm done I'll post the result in this thread.

For now, it seems I can mark this thread as resolved :D