metro for syncing to Live transport / quantized MIDI note output

soundyi:

Hi there,

I am wondering about the inaccuracy of the metro object when trying to use it to synchronize MIDI note output to the Live transport, or more precisely to a quantized "beat grid".

I grabbed the core logic from Cycling 74's sample patch M4L.bg.06.MetricalTiming, which comes with one of the free Packs in Live 11 Suite.

In a nutshell: metro @interval 16n @quantize 16n @active 1 -> makenote 100 16n @repeatmode 1 -> noteout.

My assumption: all notes should be quantized to a 16th-note beat grid.

But the issue is: recording the generated MIDI notes onto another track shows that all notes are "off the beat grid", i.e. not quantized as set in the metro attribute - in this example the notes are not quantized to 16th notes. Only the first note starts on time, but even its duration is already offset from a 16th note, which is weird.

That said, it might not only be the metro object: the repeatmode attribute of makenote is also needed to generate a sequence of continuous (100% adjacent) notes, and it might even be the source of the problem.

The time offset is tiny, but I am curious why it exists at all - I was more than surprised, and I wonder whether this offset could become bigger under certain circumstances and what kind of "not 100% integration" of Max for Live inside Live it reveals.

You can see this time offset even in Live's status bar when the MIDI clip is open in the MIDI editor and a note is selected.

To measure the offset more precisely, I wrote a little patch that reads the notes of the clip via the Live API and analysed their timing.

For example, the start time of a note (measured in fractional beats, where a beat is a quarter note) has a deviation of 0.000013 (taken modulo 0.25, which is a 16th note expressed in fractional beats), or a note duration has a deviation of 0.249988.

So the offset mostly shows up around the 5th decimal place - which is not even a breath of any living being ;-) … but it is something, and something that I think should not exist.
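To make the analysis reproducible, here is a minimal sketch in Python of the same kind of calculation (hypothetical code, assuming the start times and durations have already been read out of the clip via the Live API):

```python
# Hypothetical sketch of the timing analysis: note start times and durations
# in fractional beats (quarter note = 1.0), checked against a 16th-note grid.
GRID = 0.25  # a 16th note in fractional beats

def grid_deviation(value, grid=GRID):
    """Distance of a beat value from the nearest grid line."""
    remainder = value % grid
    return min(remainder, grid - remainder)

# Example values, as they might be read from the clip via the Live API
notes = [(0.000000, 0.249988), (0.250013, 0.249987), (0.500000, 0.250002)]

for start, duration in notes:
    print(f"start {start:.6f} -> off grid by {grid_deviation(start):.6f} beats, "
          f"duration {duration:.6f} -> off grid by {grid_deviation(duration):.6f}")
```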

And by the way, it's not the MIDI routing between two tracks: playing a 100% quantized MIDI clip on track A and recording its MIDI notes onto track B, the recorded notes on track B are still 100% quantized - no time offset at all, everything is perfectly "on time".

So I am wondering whether I am just missing some setting or have stumbled upon a fundamental limitation (aka pitfall) of Max for Live that makes building an accurate sequencer nearly impossible - which would be crazy and something I cannot believe … I do not want to believe ;-).

Any ideas or explanations on this mystery are very much appreciated.

Cheers
soundyi

SyncWithTransport.amxd
amxd 3.67 KB
MIDI Generator

Note Analyser.amxd
amxd 62.79 KB
Note Timing Analyser

broc:

This just happens because timing in Live is based on float numbers, which are inherently not exact, in contrast to the integer ticks of Max. For example, a triplet 8th note in Max is exactly 480/3 = 160 ticks, while in Live it is 1/3 = 0.333... beats. But as you have measured, the inaccuracy is very small, on the order of microseconds, so I don't see any musical problem. Note however that floating-point time calculations must be handled with care. In particular with relational operators it may sometimes be useful to eliminate the inaccuracies by rounding.
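For example, a quick sketch in Python of how such float arithmetic behaves (nothing Live-specific, just generic floating point):

```python
# A triplet 8th note is 1/3 of a beat; twelve of them should make exactly
# one 4/4 bar (4.0 beats), but float accumulation does not guarantee that.
step = 1 / 3
position = 0.0
for _ in range(12):
    position += step

print(position)                     # very close to 4.0, not necessarily exact
print(position == 4.0)              # a naive equality check is fragile
print(abs(position - 4.0) < 1e-9)   # True - compare with a tolerance ...
print(round(position, 6) == 4.0)    # True - ... or round the inaccuracy away
```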

soundyi:

Hi Broc,
I get your point, and from a purely musical perspective I can agree - this small offset will most probably never be noticeable, and if I were using this just for myself, I would simply live with it.

From a Max for Live device development perspective, though, your explanation is valuable but not satisfying ;-).

My thoughts: a sequencer that is not able to quantize its MIDI note output to the "beat grid" of the DAW is "non-professional" (or maybe even a tool non grata in the DAW ;-), and even worse are the technical limitations this implies.
Creative techniques that a sequencer developer wants to come up with are blocked, or cannot shine, because of the loose timing.

Or in other words: a car has to be able to stop before the line at the traffic lights if the lights are red. It's that fundamental.

I hope that some current or former Cycling 74 developer or insider might share their wisdom on this as well - @Ben Bracken?

Regarding "time measurement" in ticks versus beats - are you sure that Live operates internally in fractional beats? The Live API delivers note timing in fractional beats, but that does not mean that Live's internal clock & MIDI system is driven that way.

I see the timing offset in the MIDI editor as well; I only chose the note information from the Live API to get a better grip on it than the visual indication of offsets and the not-so-informative time display in Live's status bar.

But in the end the "timing bridge" between Max for Live and Live is what matters - if this bridge suffers from rounding issues, we have a problem … and this problem would be "by design" (by Cycling 74 & Ableton), which is something I can hardly believe.

Which drives me somewhat crazy: music has two fundamental concepts (there are other perspectives for sure, but for Max for Live device development this view is quite handy), namely pitch & time (time not only as a means for rhythm). And when you think about it more theoretically and treat pitch as frequency, it all comes down to one basic thing, and that is time: a waveform that is to be played at a certain frequency needs time to unfold - hence without the concept of time, the concept of pitch cannot exist.

And this fundamental concept (or rather this bridge to Live) was, or still is, "broken by design" in Max for Live … I do not want to believe this ;-).

+++

In the world of "pure" Max I have seen some folks use the approach of sequencing with signals; Philip Meyer did a small series of tutorials on the basics of it, see: https://youtu.be/lC9RJW57Dnk

But I am not sure whether this approach is usable in Max for Live as well, or whether it has other side effects.

If this signal-based approach works flawlessly in Max for Live, the two recent Max books come into play (https://cycling74.com/books).

Any experience with this signal-based approach in Max for Live?

Cheers
soundyi

P.S.: By the way, I own two paid M4L sequencers, which I like very much, that suffer from timing issues in their MIDI note output as well - even worse than in Cycling 74's sequencer samples (one drops its first note). So this is a real issue and not only a theoretical complaint … Cycling 74, can you hear us? ;-)

Wil:

"Any experience with this signal-based approach in Max for Live?"

Hi soundyi,
I don't have Max for Live.
But I use phasor~ to drive everything these days in Max - audio envelopes, everything video, pitch triggers, dynamics, etc.
I don't use phasor~ directly for now though - for audio envelopes I use snapshot~, for pitch triggers I use what~ and a typical trigger setup.
I am sure there must be a way to connect the floats from snapshot~ and the triggers to whatever is going to Live.
snapshot~ doesn't dilute the accuracy of the phasor~, it only dilutes how often the signal is checked. [snapshot 10] seems a good -

phasor~ has been super accurate. I recently had 40 channels of nested tuplets 4 levels deep with adjustable probability, and the timing was perfect. (That's like 32nd notes blazing along at 250 BPM quarter notes.)

Best to set up a Philip Meyer patch and test it out. Just set up a simple phasor~ and see if you can use it in Live, then check out tutorials on phasor, subdiv, stash, what and rate to get started.
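Roughly the idea, sketched in Python instead of a patch (just a mental model of using a phase ramp as the clock, not actual Max code):

```python
# Mental model of phasor-driven sequencing: a phase ramp of 0..1 per bar,
# subdivided into steps; a trigger fires whenever the step index changes,
# i.e. edge detection on the ramp instead of scheduling events with a metro.
def step_triggers(num_samples, sample_rate, bpm, steps_per_bar=16, beats_per_bar=4):
    samples_per_bar = sample_rate * 60.0 / bpm * beats_per_bar
    last_step = -1
    for n in range(num_samples):
        phase = (n / samples_per_bar) % 1.0   # the "phasor" ramp
        step = int(phase * steps_per_bar)     # current subdivision
        if step != last_step:                 # edge: a new step has started
            yield n, step                     # sample-accurate trigger
            last_step = step

# Half a bar of 16ths at 120 BPM, 48 kHz: triggers land exactly every 6000 samples
for sample_index, step in step_triggers(48000, 48000.0, 120.0):
    print(f"step {step:2d} at sample {sample_index}")
```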

Iain Duncan:

This comes up frequently and is a very common source of misinformation. You don't need phasor~ in modern Max to get accurate clocking. You can certainly use it, and in some contexts it has advantages, but the idea that you need it to compensate for issues with metronomes is way out of date.

The issue is that the scheduler thread and the audio thread do not run at the same time. There will always, no matter how you do it, be some small jitter when you trigger events in the scheduler thread compared to timing in the audio thread, because the scheduler runs once per signal vector. In Max for Live, that is every 64 samples. So even if you clock off a phasor, if what you are trying to trigger is an event (i.e. you eventually go from audio patch cords to message patch cords), then you will get the jitter at that point as you wait for the next vector boundary. In modern Max, the clock functions underlying metronomes are just as accurate as doing it with a phasor (this was not the case several versions of Max ago). You can test this by setting the signal vector size to 1 - you will get sample-accurate timing. The important thing is that this jitter does *not* accumulate - metronomes, just like phasors, keep accurate clocking under the hood.
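To put a rough number on that worst case (a back-of-the-envelope sketch, assuming typical sample rates):

```python
# Worst-case lateness of a scheduler event when the scheduler only runs
# once per signal vector (rough upper bound, not a measurement).
def worst_case_jitter_ms(vector_size, sample_rate):
    return vector_size / sample_rate * 1000.0

for sr in (44100.0, 48000.0):
    print(f"64 samples at {sr:.0f} Hz -> up to {worst_case_jitter_ms(64, sr):.2f} ms")
# 64 samples at 44100 Hz -> up to 1.45 ms
# 64 samples at 48000 Hz -> up to 1.33 ms
```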

Now, if you make a purely audio chain (like what Eric Lyon describes in his books and papers), then yes, you can get better timing that way. But that means all objects in the chain must be audio in and audio out, with no message event triggers - no snapshots! You have to come up with your own protocol for triggering things from audio. This makes sense if you are using modular synthesizers or building modular-style patches of all audio cables, but it doesn't change anything if you then go to messages. For it to work, note onsets (envelopes etc.) must be triggered by gate signals within the block of audio samples. If you are doing *this*, then yes, phasors will get rid of that jitter.

You should really test it. But in my tests there was literally no difference between clocking off a phasor and going through a snapshot versus just clocking off a clock. I recorded output for a long time at various signal vector sizes and analysed it. At a signal vector of 1, timing was accurate to the sample. At higher signal vector sizes, jitter happened up to one vector but never more.

Iain Duncan:

Also, it is worth mentioning that this same issue is common in lots of other audio software too - many platforms use the same pattern of rendering a block of samples at a time and running MIDI/event stuff once per block. It makes for much less CPU load. So it's all tradeoffs. (For example, VCV Rack and ChucK do not do this, but as a result you use way more CPU for a comparable patch.)

The one thing I think is unfortunate in the design is that in Live we cannot change the vector size if we would like to prioritize lower jitter over CPU use, but I'm sure they have their reasons. Pure Data also uses a vector of 64 samples.

soundyi:

Thank you very much Iain for your detailed insights - this is the kind of information I was looking for!

But that does not devalue your information, Wil & broc - I appreciate it as well, and it inspired me to research further.

For example, different BPMs resulted in different offsets. These tests were driven by a first theory, which I did not expect to be "the technical truth" (but one has to start with something ;-): if Live used a 1 ms high-precision timer (as offered by most operating systems) to record the MIDI note event times, then a note whose length is a fractional number of milliseconds could never be measured as "on grid".

Results at 100 BPM (where 16th notes are 150 ms long) showed that 11% of the note start times and 23% of the durations were offset.

Hence a "misalignment to whole milliseconds" of the 16th-note length (i.e. a 16th note whose length is a fractional number of milliseconds) could not be the source of the issue / effect.

And by the way: at 60 BPM, at least in a short recording of 4 bars, no offset appeared.
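For reference, the 16th-note lengths behind these tests, computed from the tempo (a trivial sketch; 90 BPM is only included as an example of a tempo where the 16th note is not a whole number of milliseconds):

```python
# 16th-note length in milliseconds for a given tempo.
def sixteenth_ms(bpm):
    quarter_ms = 60_000.0 / bpm   # one beat (quarter note) in ms
    return quarter_ms / 4.0       # a 16th note is a quarter of that

for bpm in (60, 100, 120, 90):
    length = sixteenth_ms(bpm)
    print(f"{bpm} BPM -> 16th note = {length:.4f} ms "
          f"(whole ms: {length.is_integer()})")
# 60 BPM -> 16th note = 250.0000 ms (whole ms: True)
# 100 BPM -> 16th note = 150.0000 ms (whole ms: True)
# 120 BPM -> 16th note = 125.0000 ms (whole ms: True)
# 90 BPM -> 16th note = 166.6667 ms (whole ms: False)
```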

+++

Long story short: I realised that the MIDI pipeline of Live is processed differently from the audio pipeline and that its timing accuracy is not as high as in the audio pipeline (also by testing VST-based sequencers as MIDI note sources and looking at their recording offsets) … but I was stuck with no idea of why & how.

Iain, could you please point me to books or articles that discuss the concepts you mentioned in detail, especially the threading model of Max, how it works in Max for Live and how it interacts with the threads / internals of Live? I think this is crucial to understand. I saw a video by Cycling 74 on their YouTube channel, but that only gives a brief overview.

You mentioned Eric Lyon, but it seems that his book is (at least ;-) one level too deep - something that makes sense further down the road of Max development.

I have a software development background, but no background in DSP.

+++

Regarding your side note about other DAWs, Logic Pro seems to work differently. I did a test with HY-RPE2, a Euclidean sequencer, running with 16 steps, 16 pulses and a 100% gate (hence a sequence of continuous 16th notes) at 120 BPM.

In Logic Pro I enabled "Record MIDI to Track Here", which means that the MIDI generated by the Audio Unit MIDI FX plugin (not the normal / legacy Audio FX version of the plugin) can be recorded onto the same track, and recorded some bars. The result: only the first note is offset, all other notes are perfectly on the 16th-note grid.

Logic has a nice note inspector (list editor) which lists all note events in a MIDI clip - hence it's very easy to see what is on or off the grid.

Cheers
soundyi

soundyi:

Hi Iain,

and especially for everyone else who stumbles upon the curiosity of the "jittered" timing of MIDI (messages and events that run in the Max scheduler) inside Max for Live, in contrast to the accurate timing of signals / audio.

At 8:00 in the "Threading in Max" video from Cycling 74 (https://youtu.be/7n-sl687tkI?t=480) Timothy Place says: "… the scheduler could also actually happen in the audio thread (Scheduler in Overdrive checked + Audio Interrupt checked)
… now it (the Scheduler) actually happens in the audio thread, in between processing vectors of audio
… and this is the default in Max for Live devices …"


When I watched it some time ago, it put me on the wrong path: I picked up "the scheduler runs in the audio thread" and that this is the default for Max for Live, but did not realise what his words "happens in between processing vectors of audio" actually mean.

My wrong assumption: everything (including MIDI / message processing) inside Max for Live happens in the audio thread (of Live) - hence, why the hell are there these timing offsets when recording MIDI note events generated by a Max for Live device ;-).

The real situation (as Iain pointed out): the scheduler runs "in between" the 64-sample vectors of audio processing - or in other words, only once every 64 samples.


And for clarification (please correct me if I am wrong): if you look at 10:58 in the video, when Timothy turns on "Audio Interrupt" (Overdrive is already on), we see that a message routed through some "time sensitive" Max object (in the video it's delay, but this should be the same with metro) comes out of the outlet that verifies that it has run on the audio thread.

This can also be confusing - just because something runs on the audio thread does not mean it runs sample-accurately; it is still "delayed" to the boundaries of the audio vector processing.

Or in other words: in the video it looks like the "message-based processing" (like MIDI) runs "on par" with the audio processing - it's not obvious that messages and signal / audio processing are handled differently.

It's not obvious that the "message work", which would normally run on the scheduler thread, is processed on the audio thread but only before or after each 64-sample block of audio - hence the offset (jitter) in the recorded MIDI notes.
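A little thought experiment in Python to make this tangible (my own simplified model for illustration, not how Live is actually implemented): if scheduler events can only be emitted at 64-sample block boundaries, the notes of a 16th-note grid get pushed to the next boundary and end up slightly late.

```python
# Simplified model (an assumption for illustration, not Live's actual code):
# scheduler events are only delivered at signal-vector boundaries.
import math

SR = 48000.0                     # sample rate (assumed)
VECTOR = 64                      # signal vector size in Max for Live
BPM = 120.0
SIXTEENTH_S = 60.0 / BPM / 4.0   # 0.125 s per 16th note at 120 BPM

for step in range(4):
    ideal_sample = step * SIXTEENTH_S * SR
    # event fires at the next vector boundary at or after its ideal time
    fired_sample = math.ceil(ideal_sample / VECTOR) * VECTOR
    jitter_ms = (fired_sample - ideal_sample) / SR * 1000.0
    print(f"step {step}: ideal {ideal_sample:.1f}, fired {fired_sample}, "
          f"late by {jitter_ms:.3f} ms")
```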

+++

Another thing one needs to get used to, if one has a background in conventional programming, is the naming of "thread-related things" in Max … hence I am looking for a book or article that introduces the concepts and names in a clear fashion, in the context of MIDI & signal / audio processing, and gives some hands-on examples to make it obvious.

Elsewhere, a scheduler is often a concept related to a thread pool, not a single thread of its own.

The main thread is often the UI thread, and the fact that something like defer routes processing work from the scheduler thread to the main thread is also "unusual". In GUI-based software development, one lifts heavy processing away from the main thread onto a background thread, in order to free the main (UI) thread from the computational work so that it stays responsive to user input.

In Max it's just the other way round … funny ;-) … but understandable if one reflects on what Max (and Max for Live) is all about: it's not so much about the UI, it is more about messages (non-UI events), MIDI, and signal & audio processing … the most important "processor" (also in the sense of a thread) whose computational load has to be watched is the one that does timing-related / time-sensitive work, and that's the scheduler thread.

Maybe this inspires some Cycling 74 person or experienced Max / Max for Live user to write an up-to-date article about the threading model in Max and how it applies in Max for Live.

Another motivation for the Cycling 74 team (especially as they are part of Ableton ;-): with Ableton Push 3 standalone, Max for Live gains a new level of relevance.

Running Live on a Push 3 standalone, Max for Live is the one & only technology that can be used to build and run custom (third-party) MIDI & audio plugins!

This is a huge creative opportunity … and clear & hands-on documentation of the basic "runtime concepts" (related to timing), like the threading model and how it applies to MIDI & audio in Max for Live, is crucial for seizing that opportunity ... thanks in advance! ;-)

Cheers
soundyi

soundyi:

Two further sources of information on this topic which might be interesting, although they are not explicitly in the context of Max for Live:

https://docs.cycling74.com/max8/tutorials/04_mspaudioio: "The Scheduler in Audio Interrupt feature is available when Overdrive is enabled. It runs the Max event scheduler immediately before processing a signal vector's worth of audio."

https://cycling74.com/articles/event-priority-in-max-scheduler-vs-queue: this article is a great supplement to the "Threading in Max" video mentioned in the last post; understanding that "low priority queue" = "main thread" and "high priority queue" = "scheduler thread" ties these two sources of know-how together ;-).

Although this article is from 2004, I hope its technical information is still correct ... Cycling 74, are you with us? ;-)

Also worth trying out are the "Threadcheck" Max externals from Timothy Place, which he uses in his video and which are available for download here: https://cycling74.com/tutorials/advanced-max-learning-about-threading.

I gave them a quick test in Live, or rather in Max for Live, and on Windows they work fine and make it clearer what runs on the main thread and what runs on the scheduler thread.

Sadly, on a Mac with an Apple Silicon CPU I experienced a crash - but that is understandable, as the Max external is presumably built for Intel CPUs.

If anyone knows of a comparable Max external or technique for making these "low level observations" that is also Apple Silicon compatible, it would be great if it got posted here.

Cheers
Soundyi