If you're talking about MIDI notes when you say "delaying notes", then pipe is the best way to do it, and the advice you got in your previous thread regarding how to set the delay time in samples is correct. If you're talking about delaying an audio signal, with the delay time expressed in samples, see the delay~ object.
I bought the Max 6 Crossgrade + Gen for Max For Live Owners
(because I thought one day I might need it.. and it looks like pretty nice software)
so what is it about gen?
appending a string to a number seems like a crazy thing to do if you want to keep things as clean as possible.. I wish things were a bit more straightforward (like they were when I programmed my c64..) -> in max I'm lost + a complete newbie
if it's about speed & max for live, should I go for java?
Furthermore, each MIDI note message takes about 1 ms. The I/O latency of just about any MIDI device is more than that. I advise not to sweat sample accuracy and microseconds of delay when you're dealing with MIDI.
to me this is not about latency (a minimum latency would be great of course..),
but about building a "humanizer" which varies normally (gaussian), so if I compute this (the values should be drawn from a precalculated array), it should have some resolution.. if this is not possible in max, I'll have to find another way.. for this reason I was asking for advice on building a code-based solution (I saw that someone built his own interface to output midi..)
thanks for your replies.. before I really felt alone/helpless/stupid
If you're dealing with MIDI, I would use milliseconds as your unit and forget about anything behind the decimal point. There's not only latency in MIDI, there's also jitter (with a lowercase "j").
BTW, if you want a Gaussian distribution, you probably want to look at lp.gsss from Litter Power. That's a real Gaussian generator; precalculated arrays are never going to give you the real thing.
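To illustrate what "the real thing" means in code terms — this is a sketch in Java (which came up earlier in the thread as the speed option), not the actual lp.gsss source: `java.util.Random.nextGaussian()` produces a fresh normally distributed value on every call, so no precalculated array is needed. The class and parameter names are made up for illustration.

```java
import java.util.Random;

// Sketch of a true Gaussian delay generator (illustrative, not lp.gsss code).
// nextGaussian() returns values from N(0, 1); we scale and shift them to
// get a delay in milliseconds.
public class GaussianDelay {
    private final Random rng = new Random();
    private final double meanMs;   // average delay in milliseconds
    private final double sigmaMs;  // spread (standard deviation) in ms

    public GaussianDelay(double meanMs, double sigmaMs) {
        this.meanMs = meanMs;
        this.sigmaMs = sigmaMs;
    }

    // Returns a delay in ms, clamped to >= 0 so a note is never
    // scheduled in the past.
    public double nextDelayMs() {
        return Math.max(0.0, meanMs + sigmaMs * rng.nextGaussian());
    }
}
```

In an mxj external, each incoming note would simply call `nextDelayMs()` and feed the result to a delay stage; with MIDI's millisecond-scale granularity, that resolution is plenty.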
If you're on Max/MSP 6.1, the news (good or bad, I don't know) is that you would have to use the commercial Litter Pro package. The free Litter Starter Pack is currently not quite up to Max 6.1. It's on the list of things to do, but updating free software, unfortunately, is on the back burner right now. Alternatively, there is a Gaussian generator in the Classic Litter package (these are all abstractions built as Max patches, rather than externals programmed in C, like the Pro and Starter versions). If you care about random numbers, you owe it to yourself to take a look at Litter.
maybe you didn't get the point: jitter in midi was to some extent a good thing -> it is simply more natural.. think of it: the best electronic music was done outside the computer, with synthesizers and sequencers, but also with midi-to-cv boxes.. the thing that led me to this is: I tried Silent Way (the plugin to control vintage synths via your soundcard) and my ms20 turned out to sound too much like a plugin (with lots of latency, but probably minimal jitter..)
then I stumbled across this basic humanizer (that came with live 8)
I think this could be done better..
(if not, I'll go back to just sequencing midi)
if you think of drum'n'bass, it's really interesting from a jitter point of view. it combined 3 rhythmic qualities: a sampled loop (the human timing of the best parts played by exceptionally good musicians), frozen in a computer-style way (within a sample the timing stayed sample-accurate), fired by midi (with its sloppy timing) - this was interesting, but as it turned too much into a software-based thing it became boring/grim(?)..
Over and above what I've written to you privately: I'm not saying that jitter is a problem in the case of computer-generated "humanization." Rather the reverse. What I was trying to say is that there is really no point in worrying about sample-accuracy in the Max world if you're going to send things through a MIDI-protocol with jitter that is several orders of magnitude larger than the millisecond-versus-sample question you're worrying about.
Still, if that's what you're after, there are techniques in Max/MSP for sample-accurate playback control. For instance, check out the patches in examples/sequencing-looping/audio-rate-sequencing-looping/. There are other people here who may be able to say more about incorporating these techniques into the M4L world.
One more thing: wanting to use a Gaussian distribution instead of the more common band-limited uniform distribution for "humanization" is a nice touch. As I wrote elsewhere, a log-normal or exponential distribution might be even more "real life."
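To make the log-normal suggestion concrete, here's a hypothetical sketch (in Java, the language raised earlier in the thread; names and numbers are my own, not from any existing object). A log-normal offset is always positive and skewed: most notes land close to the beat, with an occasional noticeably late one, which is arguably closer to real playing than a symmetric bell curve.

```java
import java.util.Random;

// Illustrative log-normal "humanization" offset generator.
// exp(mu + sigma * z), with z ~ N(0, 1), is log-normally distributed:
// strictly positive, with a long tail toward late notes.
public class LogNormalDelay {
    private final Random rng = new Random();
    private final double mu;     // mean of the underlying normal (log scale)
    private final double sigma;  // spread of the underlying normal

    public LogNormalDelay(double medianMs, double sigma) {
        // The median of a log-normal is exp(mu), so parameterizing by the
        // median in ms keeps the numbers intuitive.
        this.mu = Math.log(medianMs);
        this.sigma = sigma;
    }

    public double nextDelayMs() {
        return Math.exp(mu + sigma * rng.nextGaussian());
    }
}
```

With `sigma` small (say 0.2–0.5) the distribution hugs the median; larger values make the occasional dragged note more pronounced.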
Does anyone here have anything to add about the factors in real human playing and how best to model them? What would the most appropriate distribution be? Does anyone have some psychological chops to add to this?
"Does anyone here have anything to add about the factors in real human playing and how best to model them? What would the most appropriate distribution be? Does anyone have some psychological chops to add to this?"
it's not only that gaussian seems more "organic" or "natural" than uniform or linear; there is another important factor which makes triggered notes more realistic.
i am often using different ranges for "earlier" and "later", because that is a perfect reflection of how an instrumentalist "works": some are late most of the time, others tend to be too early.
of course, as a result of the above theory, different settings should be used for every instrument. this can do wonders for choir voices, and, if you carefully play with the settings, also for percussion.
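The asymmetric early/late idea could be sketched like this in Java (the language mentioned earlier in the thread for mxj work; class and parameter names are invented for illustration): draw one Gaussian value and scale it differently depending on its sign, so a player who tends to drag gets a wider late range than early range, or vice versa.

```java
import java.util.Random;

// Illustrative sketch of asymmetric humanization: one Gaussian draw,
// scaled by a different width for early vs. late deviations.
public class AsymmetricHumanizer {
    private final Random rng = new Random();
    private final double earlyMs; // typical spread ahead of the beat
    private final double lateMs;  // typical spread behind the beat

    public AsymmetricHumanizer(double earlyMs, double lateMs) {
        this.earlyMs = earlyMs;
        this.lateMs = lateMs;
    }

    // Negative result = play early, positive = play late.
    public double nextOffsetMs() {
        double z = rng.nextGaussian();
        return z < 0 ? z * earlyMs : z * lateMs;
    }
}
```

A "rushing" player would get `earlyMs > lateMs`; a "dragging" one the opposite. Per-instrument settings then become just a pair of numbers each.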
btw, i have never (==not yet) done this, but when you think of a guitar or a piano, you will easily come to the conclusion that there are most likely different delay times depending on note, string, last interval, or number of voices.