Ideal snapshot~ time interval for MIDI performance?
In your own experience, what would be the ideal compromise for the snapshot~ object's time interval setting, between high resolution and CPU-light MIDI performance? If I recall correctly, the snapshot~ object defaults to 20 ms, but I'm afraid I'd be losing precision for faster MIDI performances (I use a breath controller with performance-oriented instruments such as SWAM and so forth...). On the other hand, I do orchestral composition, so I need to stay as CPU-light as possible with my devices.
Any thoughts?
My only thought is: what does snapshot~ have to do with MIDI?
Could you specify what you would use snapshot~ for?
There have been reports about snapshot~ not being reliable when set to auto-report, and suggestions to use external bangs instead (roughly as sketched below), or other means of converting a signal to a float:
https://cycling74.com/forums/snapshot~-dysfunction
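Schematically, something like this - just a sketch, with the metro rate as an example only:

[metro 10]        <- your own clock, running only while you actually need readings
 |
[snapshot~ 0]     <- an interval of 0 should disable the internal clock so it reports only when banged, if I remember right
 |
float out to whatever consumes it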
Yes, sorry, I forgot to specify exactly how it's used. I convert the MIDI input into a signal for smoothing and mapping it to parameters, but sometimes I'll also convert it back to integers for track output. There's a good example of this in the native Expression Control device used for mapping MIDI to parameters. The snapshot~ object there is only used to display the output visually through a slider object, but in my case I'm actually outputting the MIDI data to use it with my instruments.
Thank you, I wasn't aware of that bug and have never had problems with it, but I'll use bangs as a precaution.
everything clear.
You probably use scale / line~ with some ramp time to create a smoothed signal,
and that signal is used to control something directly as a signal, or?
If the parameters get controlled with a non-signal float at the end, then I don't really see
the need to convert MIDI to signal.
I use either slide~ for logarithmic smoothing or rampsmooth~ for linear. As you can see from the patch, after the smoothing the signal branches out into two paths (roughly as sketched below): the first is scaled to match Live's remote object range and connects directly to the live.remote~ object; the second is converted back into integers, scaled back to 0-127, and sent out for track output. This way I can use this device either for mapping to Live's objects or for controlling plug-in instruments.
I basically based this device on the Expression Control device that comes with Live and added many more options, so as for why it's being converted to signal, it's only because that was the way the original was programmed, so I figured it was the best way to do it. Maybe the smoothing has a higher resolution with signals? Also, I'm not sure it's possible to smooth logarithmically without signals...
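Schematically it looks something like this - just a sketch; the smoothing amounts, CC numbers, and scaling ranges are placeholders:

[ctlin 2]                           <- one of the 8 MIDI inputs (CC 2 = breath, as an example)
 |
[scale 0 127 0. 1.]
 |
[sig~]                              <- message to signal
 |
[slide~ 2205 2205]                  <- or [rampsmooth~ 2205 2205] for linear smoothing
 |
 +--> [scale~ 0. 1. 0. 1.] --> [live.remote~]                  <- branch 1: rescale to the mapped parameter's range
 |
 +--> [snapshot~ 20] --> [scale 0. 1. 0 127] --> [ctlout 2]    <- branch 2: back to integers for track output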
It is not good practice to convert messages to signals and then back to messages. You can do all of that using control-rate objects such as slide and line.
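A minimal control-rate version of that, as a sketch - the values are placeholders, and note that [slide] filters each incoming value, so the input rate matters:

[ctlin 2]
 |
[scale 0 127 0. 1.]
 |
[slide 20 80]                            <- message-rate logarithmic-style smoothing (slide-up / slide-down factors)
 |                                          or: [pack 0. 20] -> [line 0. 5] for linear ramps at a 5 ms grain
 +--> [scale 0. 1. 0 127] --> [ctlout 2]
 |
 +--> to the Live parameter mapping (live.remote~ accepts floats as well as signals, as far as I know)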
Interesting. May I ask why it's not a good practice?
Also, why do you think Ableton devs converted messages to signal for ramping and mapping to Live's objects?
Well, I would respectfully answer Roman's point by saying that what he suggests works... unless the amount of message data created by the line object overloads the scheduler. I've been mucking with this myself, and am finding the optimal solution depends a lot on how much other audio processing or scheduler load is being generated in the patch. (I generate a ton of scheduler messages from Scheme for Max sometimes when I want to do something in Scheme)
I hardly know why I do things the way I do, so I can't comment on what other people do and why. ^^
But I suppose that Max for Live (as opposed to Pluggo) doesn't even have a high-priority thread (?), and then it might of course make sense not to overload the runtime with messages.
That the message-to-signal round trip requires tons of CPU and introduces at least one signal vector of delay is clear though, so I wouldn't want to use it in a sequencing context.
In addition, using signals for messages will scramble your message order, so complex patches can get complicated.
@iain:
Why do you think that you can send more messages within 1 ms if the scheduler ticks faster? Wouldn't it need to be slower in order to catch up with the spikes?
This particular device/patch has 8 MIDI inputs, so it would handle a maximum of 8 MIDI data streams. The data only gets scaled and then smoothed before being output either as MIDI track output or as remote control of a Live object.
Could someone explain what the scheduler and runtime are? I'm guessing the consequences of an overloaded scheduler would be delayed events?
Just had a quick look at the slide object, and I'm guessing the best way to use it would be to use a metro with a 1 ms interval?
Re my use: my patches generate high loads of messages with Scheme for Max in the high-priority thread. They all need to get dealt with, but they come in at uneven rates. My assumption is that how many the scheduler can deal with successfully depends on how much real time it is allowed to eat up. If the poll throttle is at, say, 200 events per pass, and the thread runs once per ms, then at most it can deal with 200 scheduler messages per ms. My hope is that allowing the scheduler to run twice as frequently should double that. It will, of course, rob *something*, namely audio-processing CPU cycles. I do need to conduct some proper tests to see if I'm correct on this.
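For what it's worth, I believe these throttles can also be changed from a patch with messages to Max - the numbers below are just examples, and this is from memory, so check the "Messages to Max" documentation:

[; max setpollthrottle 200]       <- events serviced per pass of the scheduler (high-priority) queue
[; max setqueuethrottle 100]      <- same idea for the low-priority (main thread) queue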
Simon, the scheduler thread is the same thing as the high-priority thread. If you have Overdrive selected, it runs at higher priority. All events from metronomes, MIDI inputs, or messages that are explicitly put into the high thread with delay or pipe objects go into this thread, as well as those from some other objects (like, I believe, line?). If you have Scheduler in Audio Interrupt selected, this is the same thread as the audio-processing thread. Under the hood, objects send messages to each other on a queue, so the scheduler queue can get too full if, for example, you put enough metro objects in your patch with short enough time intervals.
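A small sketch of the thread plumbing, if it helps (with Overdrive on):

[metro 100]       <- with Overdrive on, this bang is generated in the scheduler (high-priority) thread
 |
[defer]           <- [defer] / [deferlow] hand an event over to the main (low-priority) queue
 |
[print]           <- conversely, [delay 0] or [pipe 0] move a main-thread event into the scheduler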
There is a great article about all that stuff from around 2016 here on the site, but it may still be a bit too complicated for a beginner.
Programs have various threads; think of them as streets with different priorities relative to each other.
In Max those are mainly audio, graphics, signals, and messages.
If Overdrive is ON, Max also has a high-priority thread for messages. That is what all audio apps with a sequencer have, and it is the usual way to get good timing where needed.
If Overdrive is OFF, all messages run in the same thread, the main thread.
Only objects such as delay, pipe, metro, or line output at high priority - if it is available - others never do.
To your question:
"consequences of an overloaded scheduler would be delayed events?"
Nope - in Overdrive mode these high-priority messages will never come late; coming late is what happens in the main thread. Have a look at the [uzi] helpfile for a simple example (or the quick sketch below).
In Overdrive / the high-priority scheduler you instead get a runtime error if the queue of that thread is "full"; otherwise it is guaranteed that things arrive on time.
Having said that, I'd like to add that you will hardly ever see this. I do hardcore sequencing with long lists on a 20-year-old computer and my scheduler is never "full".
(The only thing that happens is that sooner or later the graphics will become slow - and that the audio DSP seems to require more CPU since less is available.)
On an M1 Mac mini you can probably send something like 100,000 numbers per millisecond and they will all arrive at the next logical millisecond. That is enough for the data coming from 150 x 10-channel MIDI interfaces.
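If you want to see it for yourself, a quick stress test could look like this (the counts are arbitrary):

[toggle] -> [metro 1]       <- one tick per millisecond
             |
            [uzi 1000]      <- 1000 bangs per tick, all stamped with the same logical time
             |
            [counter 0 999]
             |
            [number]        <- watch the CPU meter and the Max window; raise the uzi count until something complains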
Thank you both for your explanations! Is this the article you were referring to? https://cycling74.com/articles/event-priority-in-max-scheduler-vs-queue
Most of it is a little over my head but it still gave me some basic knowledge of the way events get handled.
So all in all, what you're saying is that if I stick with messages there is only a very small chance that the scheduler gets overloaded, and if it does... what happens then? You say I get a runtime error, but what does that entail concretely?
Also, I've never used the slide object before, and I find it a little odd that I need to drive it with a metro at a 1 ms interval to get the right timing (higher intervals multiply the up and down smoothing times). Is this the correct way to use it?
The more things happen in the scheduler thread, the slower the graphics and mouse input can get, and when you hit the limit Max goes boom. This doesn't happen with one metro, but it might with 500.
Using an 8 ms interval to trigger snapshot~ and then interpolating to 1 ms steps with another metro is probably more lightweight than talking to snapshot~ every 1 ms, but I haven't tried it (roughly as sketched below).
If you plan to trigger polyphonic external hardware with MIDI, you shouldn't care much about 1-2 ms of latency, inaccuracy, or jitter, because most synths already have more than that on their own.
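Roughly what I mean, untested - the intervals are just examples:

[metro 8]
 |
[snapshot~]                   <- read the smoothed signal every 8 ms
 |
[pack 0. 8]                   <- build "value ramptime" pairs
 |
[line 0. 1]                   <- ramp to each new value over 8 ms, outputting every 1 ms
 |
[scale 0. 1. 0 127] --> [ctlout]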