Forums > Max For Live

Latency with max4live devices

September 18, 2010 | 4:50 pm

I built an M4L device which gates the signal at a synced rate (e.g. 1/4). I have several other M4L devices in this Live set as well. Depending on where in the effect chain this gater effect is placed, there is more or less latency involved, which is not compensated. This means that the gater effect fires too late, for example.
Does anyone know of a solution to automatically compensate latency?

September 19, 2010 | 1:36 am

Automatic latency compensation in Live (and other DAWs) aligns the latency among different *tracks*. Technically, this can easily be implemented by delaying some tracks so that all tracks end up with the same latency.
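As a rough sketch of that idea (the latency values here are hypothetical, just for illustration): each track is delayed by the difference between its own latency and the slowest track's latency, so everything lines up.

```python
# Sketch of per-track latency compensation as described above:
# delay every track so all tracks end up at the latency of the
# slowest one. Latency values (in ms) are hypothetical.

def compensation_delays(track_latencies_ms):
    """Return the extra delay to apply to each track so all match."""
    target = max(track_latencies_ms)
    return [target - lat for lat in track_latencies_ms]

# Three tracks with 0, 6 and 14 ms of plug-in latency:
print(compensation_delays([0, 6, 14]))  # → [14, 8, 0]
```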

Latency among different devices on the same track is a new problem specific to M4L, in particular with synced devices. It's another level of complexity, and I think that automatic compensation would be very difficult to implement, if it is possible at all.

September 19, 2010 | 12:24 pm

So, here is a simple example to demonstrate and investigate the issue.

Take an M4L audio device that generates clicks on the beat and load two instances one after the other on the same track. When running, you'll notice that the clicks are not in sync. However, sync can be achieved by delaying the second instance; on my system it requires a delay of 13.6 ms (at a 512-sample audio buffer size).
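As a sanity check on that figure, the measured delay can be converted to samples and compared with the buffer size (assuming a 44.1 kHz sample rate, which is not stated above):

```python
# Convert the measured inter-device delay to samples and compare it
# with the audio buffer size. The 44.1 kHz sample rate is an
# assumption, not stated in the thread.

def ms_to_samples(ms, sample_rate=44100):
    """Convert a delay in milliseconds to a sample count."""
    return ms * sample_rate / 1000.0

measured_ms = 13.6      # delay reported above at a 512-sample buffer
buffer_samples = 512

samples = ms_to_samples(measured_ms)
print(round(samples))                    # ≈ 600 samples
print(round(samples - buffer_samples))   # extra delay beyond one buffer
```

So on that system the per-device delay is somewhat more than one full audio buffer, which fits the per-device, buffer-proportional picture discussed below.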

In theory, this "device delay compensation" could also be done automatically by introducing an appropriate sync delay depending on a device's actual position within the chain. But the feasibility can only be judged by the developers.

For anybody interested, here is a click generator patch with adjustable delay. As described above, it can be used to measure the actual delay per device in a chain of your setup.

– Pasted Max patch –

October 20, 2010 | 2:15 pm

Interesting topic. I have already noticed that when a new Live set is instantiated, there is an overall audio latency that increases as tracks, devices, and clips are added to the set. We noticed this while syncing two computers with a clock and adding new tracks and devices on each Ableton instance.

Is it possible to get the value of the new global latency from an M4L patch right after dropping in new tracks or devices?

October 20, 2010 | 10:26 pm

I think measuring it automatically would be difficult, but you can estimate it. The global latency is determined by the track that contains the chain with the highest number of M4L devices. Each device added to that chain increases the global latency by an amount proportional to the audio buffer size.
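That estimate can be sketched as follows. The one-buffer-of-delay-per-device figure and the 44.1 kHz sample rate are assumptions for illustration; the actual per-device contribution should be measured, e.g. with the click-generator patch above.

```python
# Hedged sketch of the estimate described above: global latency is set
# by the chain with the most M4L devices, each contributing a delay
# proportional to the audio buffer size. One buffer per device is an
# assumption for illustration only.

def estimate_global_latency(devices_per_chain, buffer_size, sample_rate=44100):
    """Estimate global latency in ms from the longest M4L chain."""
    worst_chain = max(devices_per_chain)   # chain with most M4L devices
    samples = worst_chain * buffer_size    # assumed delay in samples
    return samples * 1000.0 / sample_rate

# Three tracks whose chains hold 1, 3 and 8 M4L devices, 128-sample buffer:
print(round(estimate_global_latency([1, 3, 8], 128), 1))  # → 23.2 (ms)
```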

October 20, 2010 | 11:06 pm

"Technically, this can be easily be implemented by delaying some tracks such that all tracks have the same latency."

what most audio apps do is pre-delay the track with the plug-in by reading files from disk earlier, or by shortening the track buffer.

delaying all the other tracks sounds more like a workaround for emergency cases.


October 21, 2010 | 9:24 am

Yes, shortening the track buffer is an elegant solution that works for conventional setups, but probably not with M4L, since in general you can have multiple devices in a chain on the same track. For example, with 8 devices in the chain and a buffer size of 128 samples, the shortened buffer would be only 16 samples.
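The arithmetic behind that example, made explicit (splitting the buffer evenly across the devices in the chain, as the example above does):

```python
# The arithmetic from the example above: pre-delaying by shortening
# the track buffer leaves less and less headroom per device as M4L
# devices are added to a chain.

def shortened_buffer(buffer_size, num_devices):
    """Buffer headroom left per device when split across the chain."""
    return buffer_size // num_devices

print(shortened_buffer(128, 8))  # → 16 samples
```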

October 21, 2010 | 9:53 am

i think live is fixed at 64 samples, isn't it? i'm very interested, so that was not really a rhetorical question; please correct me if i'm wrong

October 21, 2010 | 10:03 am

No, in the audio preferences you can choose any buffer size from 64 to 2048.

October 21, 2010 | 2:33 pm

yes, sorry, i was off topic, thinking about the signal vector size…

April 14, 2012 | 5:54 am

There’s also the "Defined Latency" value in the device’s inspector. (Not to wake the zombies, just in case anyone else ends up here.)

April 20, 2012 | 7:17 pm

As I understand it, the delay increases as you go down the effect chain. Although the audio may be delay compensated, it still takes x + y + z samples to process, where these are the respective delays of the devices in your chain. So if your device is first, it will already be processing audio when the buffer request for audio is made. If you can, try putting it first and seeing if that changes how it works; it would be interesting to know…
