delaying notes by samples (a better way than using a pipe?)
I tried to set a pipe object's time unit to "samples" in the inspector, but it kept reverting to ms..
I was told to append "samples" to each delaytime.
is there a better way to do it?
maybe a script? (I want to do some coding anyway)
Do you have Gen?
If you’re talking about MIDI notes when you say "delaying notes", then pipe is the best way to do it, and the advice you got in your previous thread regarding how to set the delay time in samples is correct. If you’re talking about delaying an audio signal, with the delay time expressed in samples, see the delay~ object.
I bought the Max 6 Crossgrade + Gen for Max For Live Owners
(because I thought one day I might need it.. and it looks like a pretty nice software)
so what is it about gen?
appending a string to a number seems like a crazy thing to do if you want something as clean as possible.. I wish things were a bit more straightforward (like they were when I programmed my C64..) -> in Max I'm lost + a complete newbie
if it's about speed & Max for Live, should I go for Java?
If that offends your sensibilities, no problem, just use ms in pipe and use the translate object to convert samples to ms.
Copy all of the following text. Then, in Max, select New From Clipboard.
----------begin_max5_patcher----------
551.3oc0VssbaBCD8Y3qfQO6lAwMSya86nSmLB7FGkAjXjDotMS92qt.o10D PNMg1LdrvZ05Um8r23wv.TE+.HQQWG80nffGCCBrhLBBF1GfZIGpaHRqZnZd aKvTnMtyTvAkUdqbTTGQUeGks+FATqblF+4jqh2DUFaVSJrq5eG8sg+Bcm0F 7p6+DNazN2xYJFoErG8EAkzLdBqukxZ.kEQ3Q0E6qL6M2wypw6Ui5EejYkze ZMKVCKizmBCMKa9K4.IosqA7fHRsq44yRDI++SDL36ZndFOv3JPegyvCYkFW Gm3dfmkHJuPdH8enWSYK5zthfxzybYGXU+nCbpinlTrSdLE8r8UllbJ8j9tS OczNHJ184pkSMhydoTiy3IC6X+NE8TbgzS16M8baCWa645Qjk867j73k8esE I1TjJBa+jjP9xjPGQnkq.wM.iT0.GWy3Q9SxaX9SeaEHVlfFZhF6cgzKROoe nnmoKuTBBS1PTPzvTnn4mH63v7BWe3LOpxlbH0aRyG7xri8xQMT1e91JVeyH +TJSx6E0iPenOPTxytvNPpnLhhxYGoS4I5bGc2NfcbfdGUZh8V2e5P2kfFrG nAuZnI1CzDuJnYqGQphUKRs0iHUwpEo15QjpX0hT4dhlzUAMXOPS9pwModfF 7qDMtlgjttG.gbvjVfnmTbOWX1VrwtkxbascXQB3A5n912nAQD5ACJ8Tgdgq 08gxBTn4ddJ7Wz9W6LB
-----------end_max5_patcher-----------
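The conversion that translate does between samples and milliseconds is simple arithmetic; here is a minimal standalone sketch in JavaScript. The 44.1 kHz rate in the example is an assumption — in Max the actual rate comes from your DSP settings, not from this code:

```javascript
// Sample/millisecond conversion, as [translate] would do it.
// NOTE: sampleRate is a parameter here; 44100 in the example below is
// an illustrative assumption, not read from Max's actual DSP settings.
function samplesToMs(samples, sampleRate) {
  return (samples / sampleRate) * 1000;
}

function msToSamples(ms, sampleRate) {
  return (ms / 1000) * sampleRate;
}

// e.g. at 44.1 kHz, 441 samples is exactly 10 ms:
// samplesToMs(441, 44100) -> 10
```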
I don’t really care about the unit; it’s about the resolution.
Are you sure pipe and other Max objects work with fractional values (0.xy) without rounding them?
I think resolution in the Max event world is generally about 1 millisecond.
You can use [cpuclock] to measure the actual timing with microsecond precision.
Copy all of the following text. Then, in Max, select New From Clipboard.
----------begin_max5_patcher----------
452.3ocyV9saBBCEF+Z3onoWqFZk+sc21qwhYo.UsaPKQJYtY7cezBD0YTqV BY2.gSObNe8GebfctNvDwVZED7L3MfiyNWGGcHU.mtqcfEjso4jJcZvk4Bdc AbR6RKEbYE6GpZIj2LutvMYv34To9VPcAYY5BHR9XJBebA3jBcAfurgQxAuJ xy5WtjHSWy3qdeCMU1JSj2SM8AfiiTmBTMEfhm4AVbn4hZYe2wcQaCI+tj1V F0FgHgS.vDBeEDrPk1dWW0gIFxBN8qlcSuVkzsZIBmB7lcO.BeNfhsmO9gpS ywyBtJePWmOCIWRKqSyEoeZo4Ixd13gzdG+a3cFQ1HAIVhkPqwBJDqoy7GEK C9aRCjiIvBzD625Xv++bLLq8L9V.lPsmINvnoLWbJLia4L3BZUEYE8L7frbB 7bqQSvwnI79sLCpaojURa19VBEj8yXv5Oa6G7nuHcJUz2MLmw+6OwnUfJ9on pRTuIsuTcOlAGzPFsRx3DISvOJG+SxYMKKipWtmZErrRQiQtSBW3wloJR0Mz MTTvnpnPCXTznpnHCTT7npn.CUDZzTTrAJR8e3ir09lRZTUDx.EEZghZtXu6 uzUPVvA
-----------end_max5_patcher-----------
Furthermore, each MIDI note message takes about 1 ms. The I/O latency of just about any MIDI device is more than that. I advise not to sweat sample accuracy and microseconds of delay when you’re dealing with MIDI.
I advise not to sweat sample accuracy and microseconds of delay when you’re dealing with *just about everything in Max* :)
We’ve seen many a programmer delve too deeply into that hole and never return…
no idea about "samples" mode. i would just do it by milliseconds which i converted from sample values on my own and ignore this built-in bs.
to me this is not about latency (though minimal latency would be great, of course),
but about building a "humanizer" that varies normally (Gaussian). If I compute this (with the values drawn from a precalculated array), it needs some resolution.. if this is not possible in Max, I'll have to find another way.. for this reason I was asking for advice on building a code-based solution (I saw that someone built his own interface to output MIDI..)
thanks for your replies.. before I really felt alone/helpless/stupid
If you’re dealing with MIDI, I would use milliseconds as your unit and forget about anything behind the decimal point. There’s not only latency in MIDI, there’s also jitter (with a lowercase "j").
BTW, if you want a Gaussian distribution, you probably want to look at lp.gsss from Litter Power. That’s a real Gaussian generator; precalculated arrays are never going to give you the real thing.
If you’re on Max/MSP 6.1, the news (good or bad, I don’t know) is that you would have to use the commercial Litter Pro package. The free Litter Starter Pack is currently not quite up to Max 6.1. It’s on the list of things to do, but updating free software, unfortunately, is on the back burner right now. Alternatively, there is a Gaussian generator in the Classic Litter package (these are all abstractions built as Max patches, rather than externals programmed in C, like the Pro and Starter versions). If you care about random numbers, you owe it to yourself to take a look at Litter.
Almost forgot: <http://www.bek.no/~pcastine/Litter/>
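For anyone who wants to experiment before getting Litter: a Gaussian variate is easy to sketch with the Box–Muller transform. This is a stand-in sketch, not what lp.gsss actually does internally (its algorithm isn't documented here), in plain JavaScript:

```javascript
// Box-Muller transform: turns two uniform random numbers into one
// normally distributed variate. A sketch, not lp.gsss's algorithm.
function gaussian(mean, stddev) {
  let u = 0, v = 0;
  while (u === 0) u = Math.random(); // avoid log(0)
  while (v === 0) v = Math.random();
  const z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  return mean + stddev * z;
}

// A humanizing offset in ms, centered on the written position
// (the 5 ms standard deviation is an arbitrary illustrative choice):
// const offsetMs = gaussian(0, 5);
```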
thanks for the link..
maybe you didn’t get the point: jitter in MIDI was, to some extent, a good thing -> it is simply more natural.. think of it: the best electronic music was done outside the computer, with synthesizers and sequencers, but also with MIDI-to-CV boxes.. the thing that led me to this: I tried Silent Way (the plugin that controls vintage synths via your soundcard) and my MS-20 turned out to sound too much like a plugin (with lots of latency, but probably minimal jitter..)
then I stumbled across this basic humanizer (that came with live 8)
I think this could be done better..
(if not, I’ll go back to just sequencing midi)
if you think of drum’n’bass: it’s really interesting from a jitter point of view. it combined 3 rhythmic qualities: a sampled loop (the human timing of the best parts from rhythmically exceptional musicians), frozen in a computer-style way (within a sample the timing stayed sample-accurate), fired by MIDI (with its sloppy timing) – that was interesting, but as it turned more and more into a software-based thing it became boring/grim(?)..
Over and above what I’ve written to you privately: I’m not saying that jitter is a problem in the case of computer-generated "humanization." Rather the reverse. What I was trying to say is that there is really no point in worrying about sample-accuracy in the Max world if you’re going to send things through a MIDI-protocol with jitter that is several orders of magnitude larger than the millisecond-versus-sample question you’re worrying about.
Still, if that’s what you’re after, there are techniques in Max/MSP for sample-accurate playback control. For instance, check out the patches in examples/sequencing-looping/audio-rate-sequencing-looping/. There are other people here who may be able to say more about incorporating these techniques into the M4L world.
One more thing: wanting to use a Gaussian distribution instead of the more common band-limited uniform distribution for "humanization" is a nice touch. As I wrote elsewhere, a log-normal or exponential distribution might be even more "real life."
Does anyone here have anything to add about the factors in real human playing and how best to model them? What would the most appropriate distribution be? Does anyone have some psychological chops to add to this?
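To make the log-normal/exponential suggestion concrete, here is a hedged JavaScript sketch (the parameter values you'd feed it are illustrative guesses, not measurements of real players). Both distributions produce strictly positive delays — small lags most of the time, occasional larger ones — which is arguably closer to how players drag:

```javascript
// Exponential delay via inverse-transform sampling; mean is meanMs.
function exponentialMs(meanMs) {
  return -meanMs * Math.log(1 - Math.random());
}

// Log-normal delay: exp of a normal variate (Box-Muller inside).
// mu and sigma are in log-space, so the median delay is exp(mu) ms.
function logNormalMs(mu, sigma) {
  let u = 0, v = 0;
  while (u === 0) u = Math.random(); // avoid log(0)
  while (v === 0) v = Math.random();
  const z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  return Math.exp(mu + sigma * z);
}
```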
if I want to avoid appending "samples" in Max, can I simply return a string in JS (something like "" + x + " samples")?
Check the Max JS documentation, but I’m pretty sure the JS syntax to send a list (consisting of a number followed by your symbol) out an outlet is
outlet(0, x, "samples");.
thanks! this works..
@Peter Castine: this is tangential, but a fascinating read:
It discusses analyses of timing and dynamics variance in expert performances (using machine-learning techniques!).
"Does anyone here have anything to add about the factors in real human playing and how best to model them? What would the most appropriate distribution be? Does anyone have some psychological chops to add to this?"
not only does Gaussian seem more "organic" or "natural" than uniform or linear; there is another important factor that makes triggering notes more realistic.
I often use different ranges for "earlier" and "later", because that is a perfect reflection of how an instrumentalist "works": some are late most of the time, others tend to be too early.
of course, following the theory above, you should use different settings for every instrument. this can do wonders for choir voices and, if you carefully play with the settings, also for percussion.
btw, I have never (== not yet) done this, but when you think of a guitar or a piano, you will easily come to the conclusion that there are most likely different delay times depending on note, string, last interval, or number of voices.
everything depends on everything …
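The "different ranges for earlier and later" idea is easy to prototype. A hedged sketch in JavaScript — all range and probability values are made up for illustration, and a uniform draw within each side keeps it simple (you could combine it with a Gaussian instead):

```javascript
// Asymmetric humanizer offset: a player who drags gets a wider
// "late" range and a higher chance of being late. All numbers
// passed in below are illustrative, not measured from real players.
function asymmetricOffsetMs(earlyMaxMs, lateMaxMs, lateProbability) {
  if (Math.random() < lateProbability) {
    return Math.random() * lateMaxMs;    // behind the beat (positive delay)
  }
  return -(Math.random() * earlyMaxMs);  // ahead of the beat (negative delay)
}

// A laid-back drummer: at most 3 ms early, up to 12 ms late,
// late about 70% of the time:
// const offset = asymmetricOffsetMs(3, 12, 0.7);
```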