delaying notes by samples (a better way than using a pipe?)

wildjamin's icon

I tried to set a pipe to "samples" in the inspector, but it kept reverting to ms..
I was told to append "samples" to each delaytime.

is there a better way to do it?
maybe a script? (I want to do some coding anyway)

thanks, benjamin

Stephane Morisse's icon

Do you have Gen ?

Christopher Dobrian's icon

If you're talking about MIDI notes when you say "delaying notes", then pipe is the best way to do it, and the advice you got in your previous thread regarding how to set the delay time in samples is correct. If you're talking about delaying an audio signal, with the delay time expressed in samples, see the delay~ object.

wildjamin's icon

I bought the Max 6 Crossgrade + Gen for Max For Live Owners
(because I thought one day I might need it.. and it looks like a pretty nice software)

so what is it about gen?
---------------------------
appending a string to a number seems like a crazy thing to do if you want something as clean as possible.. I wish things were a bit more straightforward (like they were when I programmed my c64..) -> in max I'm lost + a complete newbie

to me javascript seems closer to what I was used to..
if it's about speed & max for live I should go for java?

Christopher Dobrian's icon
Max Patch
Copy patch and select New From Clipboard in Max.

If that offends your sensibilities, no problem, just use ms in pipe and use the translate object to convert samples to ms.
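The arithmetic behind that conversion is simple: milliseconds = samples / sample rate × 1000. A minimal JavaScript sketch of the same math (hypothetical helper functions, not the translate object itself):

```javascript
// Hypothetical helpers mirroring the unit math that [translate samples ms]
// performs: ms = samples / sampleRate * 1000, and the inverse.
function samplesToMs(samples, sampleRate) {
  return (samples * 1000) / sampleRate;
}

function msToSamples(ms, sampleRate) {
  return (ms * sampleRate) / 1000;
}

console.log(samplesToMs(441, 44100)); // 441 samples at 44.1 kHz -> 10 ms
console.log(msToSamples(10, 48000)); // 10 ms at 48 kHz -> 480 samples
```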

wildjamin's icon

I don't really care about the unit, it's about the resolution.
are you sure pipe/max objects work with values like 0.xy without rounding them?

broc's icon
Max Patch
Copy patch and select New From Clipboard in Max.

I think resolution in the Max event world is generally about 1 millisecond.
You can use [cpuclock] to measure the actual timing with microsecond precision.

Christopher Dobrian's icon

Furthermore, each MIDI note message takes about 1 ms. The I/O latency of just about any MIDI device is more than that. I advise not to sweat sample accuracy and microseconds of delay when you're dealing with MIDI.

Wetterberg's icon

I advise not to sweat sample accuracy and microseconds of delay when you’re dealing with *just about everything in Max* :)

We've seen many a programmer delve too deeply into that hole and never return...

Roman Thilenius's icon

no idea about "samples" mode. i would just do it by milliseconds which i converted from sample values on my own and ignore this built-in bs.

wildjamin's icon

to me this is not about latency (a minimal latency would be great of course..),
but about building a "humanizer" which varies normally (gaussian), so if I compute this (the values should be drawn from a precalculated array), it should have some resolution.. if this is not possible in max, I'll have to find another way.. for this reason I was asking for advice on building a code-based solution (I saw that someone built his own interface to output midi..)

thanks for your replies.. before I really felt alone/helpless/stupid

Peter Castine's icon

If you're dealing with MIDI, I would use milliseconds as your unit and forget about anything behind the decimal point. There's not only latency in MIDI, there's also jitter (with a lowercase "j").

BTW, if you want a Gaussian distribution, you probably want to look at lp.gsss from Litter Power. That's a real Gaussian generator; precalculated arrays are never going to give you the real thing.
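For context, a "real" Gaussian generator needs only two uniform random numbers per draw, so no precalculated array is required. Here is a minimal JavaScript sketch of the textbook Box-Muller transform (just the general technique; this is not how lp.gsss is implemented):

```javascript
// Box-Muller transform: turns two uniform random numbers into one
// normally distributed value. Illustrative sketch only.
function gaussian(mean, stdDev) {
  let u1 = 0;
  while (u1 === 0) u1 = Math.random(); // avoid log(0)
  const u2 = Math.random();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return mean + stdDev * z;
}

// e.g. a timing offset centered on 0 ms with a 5 ms standard deviation
const offsetMs = gaussian(0, 5);
```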

If you're with Max/MSP 6.1, the news (good or bad, I don't know) is that you would have to use the commercial Litter Pro packet. The free Litter Starter Pack is currently not quite up to Max 6.1. It's on the list of things to do, but updating free software, unfortunately, is on the back burner right now. Alternately, there is a Gaussian generator in the Classic Litter package (these are all abstractions built as Max patches, rather than externals programmed in C, like the Pro and Starter versions). If you care about random numbers, you owe it to yourself to take a look at Litter.

wildjamin's icon

thanks for the link..

maybe you didn't get the point: jitter in midi was to some extent a good thing -> it is simply more natural.. think of it: the best electronic music was done outside the computer, with synthesizers and sequencers, but also with midi-to-cv boxes.. the thing that led me to this is: I tried silent way (the plugin for controlling vintage synths via your soundcard) and my ms20 turned out to sound too much like a plugin (with lots of latency, but probably minimal jitter..)
then I stumbled across this basic humanizer (that came with live 8)

I think this could be done better..
(if not, I'll go back to just sequencing midi)

ps.
if you think of drum'n'bass: it's really interesting if you see it from a jitter point of view. it combined 3 rhythmical qualities: a sampled loop (the human timing of the best parts played by rhythmically exceptional musicians), frozen in a computer-style way (within a sample the timing stayed sample-accurate), fired by midi (with its sloppy timing) - this was interesting, but as it turned too much into a software-based thing it became boring/grim(?)..

Peter Castine's icon

Over and above what I've written to you privately: I'm not saying that jitter is a problem in the case of computer-generated "humanization." Rather the reverse. What I was trying to say is that there is really no point in worrying about sample-accuracy in the Max world if you're going to send things through a MIDI-protocol with jitter that is several orders of magnitude larger than the millisecond-versus-sample question you're worrying about.

Still, if that's what you're after, there are techniques in Max/MSP for sample-accurate playback control. For instance, check out the patches in examples/sequencing-looping/audio-rate-sequencing-looping/. There are other people here who may be able to say more about incorporating these techniques into the M4L world.

Peter Castine's icon

One more thing: wanting to use a Gaussian distribution instead of the more common band-limited uniform distribution for "humanization" is a nice touch. As I wrote elsewhere, a log-normal or exponential distribution might be even more "real life."

Does anyone here have anything to add about the factors in real human playing and how best to model them? What would the most appropriate distribution be? Does anyone have some psychological chops to add to this?

brendan mccloskey's icon

digital expressivity?

can open.

worms everywhere.

wildjamin's icon

if I want to avoid appending "samples" in max, can I simply return a string (''+x+'samples')?

and if so, how does it work in javascript:
outlet(0,''+x+'samples');

Peter Castine's icon

No, what you've done in Javascript is create a single symbol reading "666samples" (or whatever the current value of x is). That is definitely not what you want.

Check the Max JS documentation, but I'm pretty sure the JS syntax to send a list (consisting of a number followed by your symbol) through an outlet is outlet(0, x, "samples");.
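The difference can be seen in plain JavaScript: string concatenation fuses the number and the unit into one token. A quick illustration (the value 666 is just an example):

```javascript
// String concatenation produces one fused symbol, not a number + unit pair.
const x = 666;
const fused = '' + x + 'samples';
console.log(fused); // "666samples" -- Max would see this as a single symbol

// Inside a [js] object you would instead pass separate arguments, which Max
// receives as a two-element list (number, symbol):
//   outlet(0, x, "samples");
```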

wildjamin's icon

thanks! this works..

Peter McCulloch's icon

@Peter Castine: this is tangential, but a fascinating read:

It discusses analyses of time vs dynamic variances with expert performers. (using machine-learning techniques!)

Roman Thilenius's icon

"Does anyone here have anything to add about the factors in real human playing and how best to model them? What would the most appropriate distribution be? Does anyone have some psychological chops to add to this?"

definitely, peter.

not only that gaussian seems more "organic" or "natural" than uniform or linear, there is another important factor which makes triggering notes more realistic.

i am often using different ranges for "earlier" and "later", because that is a perfect reflection of how an instrumentalist "works": some are late most of the time, others tend to be too early.

of course, as a result of the above theory, different settings for every instrument should be taken. this can do wonders to choir voices, and if you carefully play with the settings, also for percussion.
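The early/late asymmetry described above is easy to sketch: draw a Gaussian offset and scale the negative (early) and positive (late) sides independently. A hypothetical JavaScript illustration (function and parameter names are made up for the example, not taken from any existing device):

```javascript
// Asymmetric humanizer offset: some players rush, some drag.
// earlyMs / lateMs scale the two halves of a standard normal draw.
function humanizeOffset(earlyMs, lateMs) {
  let u1 = 0;
  while (u1 === 0) u1 = Math.random(); // avoid log(0) in Box-Muller
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * Math.random());
  return z < 0 ? z * earlyMs : z * lateMs; // negative = early, positive = late
}

// a player who tends to rush: wide early spread, narrow late spread
const offsetMs = humanizeOffset(8, 2);
```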

-110

Roman Thilenius's icon

btw, i have never (==not yet) been doing this, but when you think of a guitar or a piano, you will easily come to the conclusion that there are most likely different delay times depending on note, string, last interval, or number of voices.

everything depends on everything ...

-110

Joe Sayer's icon

Bumping a nearly decade old thread here!

I'm looking to adapt a 'Random Note Delay' Max for Live device, which is currently limited to a precision of 1ms.

https://maxforlive.com/library/device/3448/random-note-delay

What I've figured out so far is that this device uses the 'Pipe' function to delay / offset notes in 1ms increments.

Similar to OP, I'm hoping to increase the resolution of the delays by converting the device to delay in samples instead of ms.

If it wasn't obvious already, I am a clueless beginner to Max programming, but willing to learn!

Any help would be massively appreciated.

Thanks!

Iain Duncan's icon

First, you should start a new thread. Reviving old threads from long ago versions of Max is likely to get you misinformation and confusion.

Second, there is another thread here in which I explain the situation with timing delays, it might be helpful!
https://cycling74.com/forums/delay-480-ticks

Roman Thilenius's icon


a search for "converting samples to milliseconds" should reveal some interesting ideas, too.

wildjamin's icon

it's really strange to get an alert (because I was involved in this thread) and feel the tone of the posts.. -> I didn't miss feeling helpless/clueless in this forum! and I'm not any more, because I had to take the hard, long and lonely way. I tried for a long time to alter the timing in a subtle way, but max for live isn't made for this // first I tried controlling a pipe, then my external received midi and set a clock with a normally varied amount of time for the output.. but midi still had to flow from midiin to my external and back to midiout, and there was no way to know what was really going on internally (you don't have to worry if you patch most things, but if you want to humanise most of your midi (in a dance music context) you have to know..)

Joe Sayer's icon

@Wildjamin,

I was hoping to summon you somehow, but I couldn't find a PM function on here, hence me bumping this ancient thread.

Did you ever manage to delay MIDI notes in values smaller than 1ms?

Like you I'm interested in humanization. In this case I'm attempting to generate percussion flams by randomly offsetting large clusters of simultaneous notes in real time.

I'm seeing on this article that Pipe can be told to delay in samples with an attribute argument:

https://docs.cycling74.com/max7/refpages/pipe

I think this may be the only change I need to make, but I'm not sure how to apply this. I believe the solution may be very simple to those in the know.

To all responders, thanks so much for the help.

Iain Duncan's icon

You can ask for a delay in samples, but the actual minimum delay will always be the amount of time between scheduler thread passes. It doesn't matter what you do; this is always the case for Max *messages*. If you want to make this frequency sub-1ms, you need to set Audio in Interrupt to On and set your signal vector size to whatever frequency you want (in samples). This is the fundamental thing you need to understand: message frequency can never, no matter what you do, be higher than that of the scheduler runs - it's how Max works. If you are willing to pay the processing price, you can make this sub-1ms, but all your Max processing gets heavier as a result.
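As a toy illustration of the quantization described above (purely conceptual; Max's scheduler is not literally implemented this way), a requested delay effectively cannot resolve finer than the scheduler interval:

```javascript
// Toy model: a delay request is only serviced on the next scheduler pass,
// so effective delays land on multiples of the scheduler interval.
function effectiveDelay(requestedMs, schedulerIntervalMs) {
  return Math.ceil(requestedMs / schedulerIntervalMs) * schedulerIntervalMs;
}

console.log(effectiveDelay(0.3, 1)); // 1 -- a 0.3 ms request waits for the next 1 ms pass
console.log(effectiveDelay(2.5, 1)); // 3
console.log(effectiveDelay(0.3, 64 / 44.1)); // ~1.45 -- one 64-sample vector at 44.1 kHz
```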

Joe Sayer's icon

@IAIN

I'm only interested in manipulating MIDI here. 'Audio in Interrupt' sounds like it would pertain only to audio. Please remember you are dealing with an absolute beginner here, correct me if I'm wrong! Thanks for your patience.

Your point about the scheduler is very interesting, I think that will be a stumbling block.

I've found settings in the preferences regarding this: Event Interval and Scheduler Interval.

Side question: are these settings saved per patch or universally to all patches running on the system?

I see the scheduler interval cannot go below 1ms, so what I'm looking for isn't possible?

Putting that issue to one side for a moment, what I'm stuck on now is much more basic. I don't even know how to apply the argument to the Pipe object (telling Pipe to delay in samples instead of ms).

I have attempted to find the answer elsewhere, my research has brought me here. Sorry to be asking such an elementary question.

👽'tW∆s ∆lienz👽's icon

Sorry to be asking such an elementary question.

no worries

• first off to apply timing in samples you can just specify 'samples' for the "@delaytime" attribute... as you saw in the reference page, for example, for 1ms at 48kHz, something like this:
[pipe @delaytime 48 samples]
once the object is initialized in samples, you can create an 'attrui' object (click the yellow arrow, available when mousing over the left side of any object, then from the pop-up menu choose the attribute you want to control, and it'll create an 'attrui' pre-connected to the object for you), then send messages to the attrui like '24 samples', etc. to keep changing the delaytime in samples... looks like this:

but this still won't do much good if you're trying for anything under 1 ms (no matter the sample rate)

• next, for your question about scheduler settings: using pipe with 'samples' in this way will still be limited to the minimum scheduler interval, which should, for most purposes, be fine to leave at 1 ms
(when i say 'minimum scheduler interval', this is referred to in Preferences as 'Scheduler Interval', while the 'Event Interval' is more for lower-priority events (i.e. when using objects like 'defer' or 'deferlow'))
Important: I wouldn't mess with any of those Preference settings (leave the Scheduler Interval at 1 ms, and you probably aren't interested in 'defer'/'deferlow'/low-priority events for what you want, so leave 'Event Interval' alone as well)...
To Answer Your SideQuestion: These are global settings! Don't mess with them if you're a beginner (or if you do, remember the defaults so you can set them back); these can seriously affect CPU performance.

• and finally to answer the main question:

I see the scheduler interval cannot go below 1ms, so what I'm looking for isn't possible?

ya, pretty much not possible without efficiency hits, but sometimes the efficiency hit can be negligible. Iain has done significant research on this, so I'll defer to their expertise in the thread they linked above for further info and solutions:
https://cycling74.com/forums/delay-480-ticks

also, a quick summary of 'Audio Options' settings:
turning on 'Scheduler in Overdrive' puts the scheduler in a high-priority thread along with audio, and 'Audio in Interrupt' additionally polls the scheduler at the beginning of each audio signal vector to create better synchronization between audio signals and events ('overdrive' makes it highest priority, same as audio, and 'interrupt' synchronizes it to signal vectors)... you could try very low signal-vector sizes with these settings and you'll gain accuracy, but at a severe cost to CPU (especially if you go as low as 1-sample vectors).

hope it helps 🍻

Iain Duncan's icon

Yeah, it's not obvious from the settings label. But if you want midi events to happen more frequently than every 1 ms, you need the scheduler thread to run that frequently. The way to make that happen is to turn on Audio in Interrupt and lower the signal vector size, so you get a scheduler pass per audio vector and your audio vector is shorter than 1 ms.

Now, that said, I also did an experiment in a Computation Analysis of Music class on timing onsets of people playing, and the truth is that the slop in a real player is *very big*. So you really don't need sub 1ms accuracy if you're going for humanization - people move around the clock by a lot more than that!

Also, if you are talking about Max for Live, you don't get to set the signal vector size, it's locked at 64 samples I think?

Roman Thilenius's icon


besides the surprising information that pipe and del actually take float values, maybe i should add another thought:

for those who use a USB-based midi connection to physical devices in this context: you don't have to worry much about 1 ms of jitter in the max scheduler anyway, because USB has that exact same issue, too.
not to speak of the problem that devices require an unpredictable, non-constant 0.5 - 5 ms to actually produce sound from midi note input... similar to the way MSP can not react as fast as the scheduler asks it to.

Roman Thilenius's icon


there are three different topics mangled up here, hopefully i dont cause even more confusion with the following.

"Side question: are these settings saved per patch or universally to all patches running on the system
I see the scheduler interval cannot go below 1ms, so what I'm looking for isn't possible?"


scheduler and dsp settings are global, but can be controlled from within a patch.

to change the audio settings for one patch only, you would need to use poly~ (but i am almost sure that does not work in m4l)

and sure, you can create a metro 0.5 and hook a del 0.33 behind it, and it will do exactly what you expect.
but the most frequent use case for max messages is to send them to audio objects - and those are the bad guys, which almost always only take and execute incoming stuff with the next audio vector.

at 44 khz and a vector size of 32 samples, the vector is already smaller than 1 millisecond - with this setup, everything going from messages to signal is at least correct within the audio-vector "grid" of about 0.7 milliseconds.
if the audio vector size is bigger than the scheduler interval, some messages might get delayed to the next vector even when they are closer to another message than to a third one.

this is called jitter https://en.wikipedia.org/wiki/Jitter

as a result, if you want a sequencer (or a bouncing-ball delay or a flam effect) to run smoothly in m4l, you would need to use at least 88 khz.
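The numbers behind that claim are easy to check: one signal vector lasts vectorSize / sampleRate seconds. A quick sketch (illustrative helper; the 64-sample figure assumes M4L's vector size is fixed at 64, as mentioned earlier in the thread):

```javascript
// Duration of one signal vector in milliseconds.
function vectorMs(vectorSize, sampleRate) {
  return (vectorSize * 1000) / sampleRate;
}

console.log(vectorMs(32, 44100)); // ~0.73 ms -- under the 1 ms scheduler interval
console.log(vectorMs(64, 44100)); // ~1.45 ms -- a 64-sample vector at 44.1 kHz
console.log(vectorMs(64, 88200)); // ~0.73 ms -- why 88.2 kHz helps in M4L
```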

Joe Sayer's icon

OK guys, huge thanks for all the responses.

Looks like what I want is impossible in Max For Live.

At the very start of my journey in Max programming here, gonna have to work my way through the help files like everybody else!

You may see me rearing my head in these forums again in the future.

Thanks again!

Iain Duncan's icon

It's not impossible, you're just limited to a minimum delay size. I would encourage you to try out that size, 1-2 ms is not a large size for humanization. Real humans have much larger "jitter"

wildjamin's icon

my notifications went into the spam folder..
(I may be reached here: wildjamin@gmx.de)
I'm still into trying to (re)humanise midi
>>as original midi had a really nice (and nasty in the case of simultaneous notes) quality to it!

"You can ask for a delay in samples, but the actual minimum delay will always be the amount of time between scheduler thread passes. It doesn't matter what you do, this is always the case for Max *messages*."

this is what I came to think, but couldn't express.. to me there is an easy way to check:
just double a midi drum track and listen to the flanging that will occur..

my idea to overcome the scheduler was to reach for a direct output from my external with something like https://www.music.mcgill.ca/~gary/rtmidi/

but back then I didn't know how to incorporate it with my external (and/or it wasn't working anymore)
there is an update: Latest Release (16 November 2021): Version 5.0.0

I wish there was a method like :
class_addmethod(c,(method)MyExternal_int,"int",A_LONG,0);
for direct midi input (from LIVE-> the context is timing sensitive dance music!)

it's strange, but a thing one can do:
you can process raw midi (for notes it's 3 separate ints coming from midiin) received by calling the above "int" method three times.

more important: a direct midi output
(otherwise you have to send 3 bytes, or a list of three bytes, by cable through the scheduler for most things)

#define MIDI x->outlet[0]
outlet_int(MIDI, 144); outlet_int(MIDI, DATA); outlet_int(MIDI, n);

it would be nice, if max for live/it's externals could be as close as possible to the source/target
(if one wants to)
Benjamin Wild




Roman Thilenius's icon


i am still not sure if Iain's statement

"You can ask for a delay in samples, but the actual minimum delay will always be the amount of time between scheduler thread passes."

was not a bit misleading.

the "actual" time when a message is sent and processed does not really matter, because it is later used to control something - audio, midi, serial, or whatever - so you would of course always program stuff using the desired time values.

for example, when you want to send two note-on events to a VST instrument with one being 0.01 milliseconds behind the other, you can make them appear at 0.0 ms and 0.01 ms and preserve exactly that time difference until the end of the processing chain.

what happens later, when you convert it to USB-midi or to a live device instrument is another story.


Iain Duncan's icon

huh, Roman is right, at least when I trigger clicks. I am getting recordings of clicks out with sub-1 ms delays. However, I had previously experienced what I'm describing in a different patch, where onsets were jittered to the boundary of the nearest audio vector. That said, I cannot remember offhand how I was making those notes in my previous tests; it could have been that the VST hosting or midi output introduced that minimum jitter, or that the delay worked differently. I will explore later!

At any rate, you should definitely test it! A good way to look at different scenarios is to record stereo outs to something like Reaper where you can then zoom in to the sample level. But also realize that what you get from Max standalone may be different in Live.

Roman Thilenius's icon


yes you can test it with click~ ;

if you send one bang delayed by ((3*vectorsize) - 0.01 ms) and another by ((3*vectorsize) + 0.01 ms), the clicks will be one sample apart, so everything at the "conversion" to signal is interpreted with the "right" time values (the ones you see)
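Checking that arithmetic: the two requested delays differ by 0.02 ms, which at 44.1 kHz is just under one sample:

```javascript
// Delta between the two delay times in the click~ test, in samples.
const samplesPerMs = 44.1; // 44.1 kHz = 44.1 samples per millisecond
const deltaMs = 0.01 - (-0.01); // (+0.01 ms) vs (-0.01 ms) around 3 vectors
const deltaSamples = deltaMs * samplesPerMs;
console.log(deltaSamples); // ~0.88 samples, i.e. the clicks land one sample apart
```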

i would go so far as to say that you do not even need to think in ticks - not even as an external developer - all you need is to make sure that at runtime the scheduler interval is smaller than the vector size, which is the case at 32 samples / 44 khz (in max4live you would need to use 88 khz to get a proper scheduler), and then perhaps round or dither these values before they go into a signal object (but don't overrate the latter)

>> "where jittered to the boundary of the nearest audio vector"

yeah but that is caused by the audio object or VST plug-in, not by the main thread / scheduler thread.

data rate itself has a much higher "resolution" than audio samples, limited only by thread type, size, and available CPU cycles.