
Event Priority in Max (Scheduler vs. Queue)

The following article is designed to shed some light on the different priority levels of Max events. We will cover low priority events, high priority events, threading issues related to these two priority levels, and when it is important and/or useful to move events from one priority level to the other. We will also cover the MSP audio thread and how it can interact with low and high priority events. And finally, we will touch on some additional threading issues that will be of interest to Javascript, Java and C developers.

An event in Max is a fundamental element of execution that typically causes a sequence of messages to be sent from object to object through a Max patcher network. The time it takes an event to execute includes all of the time it takes for its messages to be sent and operated upon in the patcher network. These messages traverse the patcher network depth first, meaning that a path in the network is always followed to its terminal node before messages are sent down adjacent paths.

Events can be generated from different sources--for example, a MIDI keyboard, the metronome object, a mouse click, or a computer keyboard. The first two examples typically have timing information associated with the event--i.e. the events have a scheduled time at which they are to be executed. The latter two do not--i.e. they are simply passed by the operating system to the application to be processed as quickly as possible, but at no specifically scheduled time. The first two (MIDI and metro) fall into the category of high priority or scheduler events, while the second two (mouse click and key press) fall into the category of low priority or queue events.

Overdrive and Parallel Execution

Only when overdrive is enabled are high priority events actually given priority over low priority events. With overdrive on, Max uses two threads for the execution of events, so that a high priority event can interrupt and be executed before a low priority event has finished. Otherwise, Max processes both high priority and low priority events in the same thread, neither one interrupting the other--i.e. a high priority event must wait for a low priority event to complete its execution before the high priority event itself may be executed. This waiting results in less accurate timing for high priority events, and in some instances a long stall while waiting for a very long low priority event, like loading an audio file into a buffer~ object. So the first rule for accurate timing of high priority events is to turn overdrive on.

With overdrive on, however, messages can be thought of as passing through the patcher network simultaneously, so it is important to keep in mind that the state of a patch could change mid-event if both high and low priority events are passing through the same portion of the patcher network at the same time. Using multiple threads also has the advantage that on multi-processor machines, one processor can execute low priority events while a second processor executes high priority events.

Changing Priority

Sometimes during the course of a high priority event's execution, there is a point at which the event attempts to execute a message that is not safe to perform at high priority, or that is a long operation which would affect the scheduler's timing accuracy. Messages that cause drawing, perform file I/O, or launch a dialog box are typical examples of operations that are not desirable at high priority. All (well-behaved) Max objects that receive messages for these operations at high priority will generate a new low priority event to execute at some time in the future. This is often referred to as deferring execution from the high priority thread (or scheduler) to the low priority thread (or queue).

Occasionally, you will want to perform the same kind of deferral in your own patch, which can be accomplished using either the defer or the deferlow object. An important thing to note is that the defer object places the new event at the front of the low priority event queue, while the deferlow object places the event at the back of the low priority event queue. This is a critical difference, as the defer object can cause a message sequence to reverse order, while deferlow will preserve a message sequence's order. Another difference worth noting is that the defer object will not defer event execution if executed from the low priority queue, while the deferlow object will always defer event execution to a future low priority queue servicing, even if executed from the low priority queue.

There may also be instances when you want to move a low priority event to a high priority event, or make use of the scheduler for setting a specific time at which an event should execute. This can be accomplished by using the delay or pipe objects. Note that only the high priority scheduler maintains timing information, so if you wish to schedule a low priority event to execute at a specific time in the future, you will need to use the delay or pipe objects connected to the defer or deferlow objects.

Feedback Loops

You may encounter situations where the output of one sub-network of your patch needs to be fed back into that sub-network's input. A naïve implementation of such a feedback loop, without some means of separating each iteration of the sub-network into separate events, will quickly lead to a stack overflow. The stack overflow results because a network containing feedback has infinite depth, and the machine runs out of memory attempting to execute such an event. In order to reduce the depth of the network executed per event, we can break up the total execution into separate events, one for each iteration of the sub-network. This can be done using either the delay or pipe objects if it is important that the events execute at high priority, or the deferlow object if it is important that they execute at low priority. These objects can be placed at any point along the cycle in the patcher network, but typically between the "output" node and the "input" node.

Note that the defer object alone will not solve the above problem, as it will not defer execution to a future event if called at low priority. However, the combination of delay and defer could be used to accomplish this task.

Event Backlog and Data Rate Reduction

With very rapid data streams, such as a high frequency metronome or the output of the snapshot~ object at a rapid interval like 1 ms, it is easy to generate more events than can be processed in realtime. This can lead to event backlog--i.e. the high priority scheduler or low priority queue has more events being added than it can execute in realtime. This backlog will slow down the system as a whole and can eventually crash the application. The speedlim, qlim, and onebang objects are useful for performing data rate reduction on these rapid streams to keep up with realtime. One common case of such backlog reported by users is connecting the output of "snapshot~ 1" to lcd, js, or jsui, each of which defers incoming messages to low priority. Here the solution would typically be to use the qlim object to limit the data stream.

High Priority Scheduler and Low Priority Queue Settings

As of MaxMSP 4.5, there is an Extras menu item patch titled "PerformanceOptions". This patch demonstrates how to change a variety of settings related to how the high priority scheduler and low priority queue behave--both the interval at which the scheduler and the queue are serviced and the number of events executed per servicing (aka throttle). There is also a mechanism called scheduler slop that can be used to balance whether long term or short term temporal accuracy is more important, as well as settings for the rate at which the display is refreshed. Each of these settings is sent as a message to Max, and while these values are not stored in the preferences folder, you can make a text file formatted to send these messages to Max and place it in your C74:/init/ folder if you want to set these values to something other than the default each time you launch Max. An example which would set the default values would contain the following:

max setslop 25;
max setsleep 2;
max setpollthrottle 20;
max setqueuethrottle 10;
max seteventinterval 2;
max refreshrate 30;
max enablerefresh 1;

For more information on the various settings exposed by this patch, please read the descriptions contained in the Performance Options patcher.

Scheduler in Audio Interrupt

When "Scheduler in Audio Interrupt" (SIAI) is turned on, the high priority scheduler runs inside the audio thread. The advancement of scheduler time is tightly coupled with the advancement of DSP time, and the scheduler is serviced once per signal vector. This can be desirable in a variety of contexts; however, it is important to note a few things.

First, if using SIAI, you will want to watch out for expensive calculations in the scheduler; otherwise it is possible that the audio thread will not keep up with its realtime demands and hence drop vectors, causing audible clicks and glitches in the output. To minimize this problem, you may want to turn down the poll throttle to limit the number of events serviced per scheduler servicing, increase the I/O vector size to build in more buffer time for varying performance per signal vector, and/or revise your patches so that no such expensive calculations occur in the scheduler.

Second, with SIAI the scheduler will be extremely accurate with respect to the MSP audio signal; however, due to the way audio signal vectors are calculated, the scheduler might be less accurate with respect to actual time. For example, if the audio calculation is not very expensive, events may clump towards the beginning of each I/O vector's worth of samples. If timing with respect to both DSP time and actual time is a primary concern, a decreased I/O vector size can help, but as mentioned above, it might lead to more glitches if your scheduler calculations are expensive. Another trick to synchronize with actual time is to use an object like the date object to match events against the system time as reported by the OS.

Third, if using SIAI, the scheduler and audio processing share the same thread, and therefore may not be as good at exploiting multi-processor resources.

Javascript, Java, and C Threading Concerns

The first thing to note is that, at the time of this writing, the Javascript implementation is single threaded and will only execute at low priority. For this reason, the js and jsui objects should not be used for timing-sensitive events. However, this may change in a future release.

External objects written in both Java and C support execution at either low or high priority (except where those objects explicitly defer high priority execution to low priority). When writing any Java or C object, this multithreaded behavior should not be overlooked. If your object uses thread-sensitive data, it is important to limit access to that data using the mechanisms provided in each language--i.e. the synchronized keyword in Java, and critical regions, mutexes, semaphores, or another locking mechanism in C. It is important not to hold a lock around an outlet call, as this can easily lead to thread deadlock. Deadlock occurs when one thread holds one lock while waiting on a second lock held by a second thread, while the second thread is waiting on the lock held by the first thread. Neither thread can advance execution, and your application will appear frozen, although not crashed.

Finally, if you are writing an object in Java or C which creates and uses threads other than Max's low priority and high priority threads, you may not make outlet calls in the additional threads your object has created. The only threads through which it is safe to output data into a Max patcher network are the low priority queue and high priority scheduler threads. The Max Java API will automatically detect an attempt to output into a patcher network from an unsafe thread, and will generate an event to be executed by the low priority queue when using the outlet() method, or an event to be executed by the high priority scheduler when using the outletHigh() method. In C there is no such safety mechanism when calling from an unsafe thread, so it is up to the C developer to generate such events using defer_low() or the qelem functions for low priority events, and schedule() or the clock functions for high priority events. Otherwise the C developer risks crashing the application.

More information on related issues can be found in the Java and C developer SDKs. A good introduction to the concepts behind multi-threaded programming and other scheduling concepts may be found in "Modern Operating Systems" by Andrew S. Tanenbaum.

by jkc on September 10, 2004

Creative Commons License
jhaysonn's icon

Hard to believe nobody has thanked you for this! Let me be the first to...well thank you =) Very informative on a topic I've been trying to understand.

_HasBeen_'s icon

Hard to believe I'm the second one ! :-)

Julien's icon

Third! A question seven years later, then: does the javascript code still only run in the low priority queue?

vichug's icon

eeerrr... another question years later: what about that 'PerformanceOptions' patcher? Does it still exist in Max 5 and 6? Is it deprecated? If it still exists, where is it? Are the example messages to max still relevant today?

Alessandro Quaranta's icon

Thank you JKC. It's more than a decade since this has been published, but it's still the most informative article about this topic around the web.

FP's icon

Great. Thx.

Florent Ghys's icon

Thanks for this article.
It would be awesome to see this article as a tutorial in the first Max tutorials. Something like "How Max works" with examples.

dhjdhjdhj's icon

I've actually run into a rather nasty problem with this stuff. I have been using snapshot with a 20 ms poll to generate a MIDI note event to be used for a click track from a short sample loop, but I had this number hard coded, and when we used it on a faster loop, the click became irregular (unsurprisingly). However, when I tried polling at a faster rate (10 ms), the click would be fine until I switched to another window outside Max, at which point there is always a noticeable glitch in the click generation.
I've been experimenting with various scheduling parameters but haven't been able to solve this one. Quite a bit of a headache actually.

Florent Ghys's icon

Did you try snapshot~ with a tempo-relative attribute and the transport object? It should be pretty tight if you're using Scheduler in Audio Interrupt.

dhjdhjdhj's icon

I'm not even sure I understand what you're suggesting. Do you happen to have a simple example of this?

Florent Ghys's icon
Max Patch
Copy patch and select New From Clipboard in Max.

I was thinking about something like this, but now I'm not sure it's actually different from what you're describing, and I don't know your patch.

dhjdhjdhj's icon

Fair enough --- attached is the abstraction I use to play a sample that also generates an index to be used to generate clicks (there are references to some other objects that won't be found but they're not relevant).

The idea is that as the loop plays, I'm calculating the current position of its "playhead". For a simple 4-beat loop, I need to know when the current position has reached each quarter of the loop. Clearly the more often I poll, the better the accuracy of that detection and so the less jitter. The output is then just an index typically going from 0 to 3 (for a 4/4 loop) that I can use to create suitable MIDI events (accented if the index is 0 say).

Max Patch
Copy patch and select New From Clipboard in Max.

It would not surprise me to find that there's a much better way to do this ---

phiol's icon

bumping this !
just as a reminder for old users and insight for new users :-)

This is really relevant and buried in the forum.

Florent Ghys's icon

hey, I'm just gonna put this here: https://youtu.be/7n-sl687tkI

lasmiveni's icon

Very useful, thanks a lot Lilli! It would probably need a refresh nowadays.

Etna_Labs's icon

I normally lurk on threads here, but logged in to say my thanks as well. Thank you @JKC.