efficient LFO for MIDI CC modulation
Hi there,
I am looking for the "right" way to implement an LFO for MIDI CC in Max, which should also run in Max for Live.
For a sine LFO I thought of using cycle~, but I wonder whether this would waste CPU resources and might even cause issues on older systems when used on a lot of (virtual) tracks, especially as I want to modulate MPE Pressure and Slide on the tracks running an MPE synth.
Running cycle~ inside poly~ to implement MPE should be possible, but it also means there could be up to 15 cycle~ instances for just one track.
On the other hand, an LFO could easily be calculated as f(t), with t derived from the raw ticks outlet of transport, or from plugsync~ when I am in Live.
But why reinvent the wheel when there are already efficient Max objects at our disposal?
What I am not sure about, performance-wise, is the audio-rate to MIDI/message-rate transition that happens if I end up using the values of cycle~ to send MIDI messages.
In other words: PPQ vs. audio rate. I do not need all the values cycle~ produces between two MIDI ticks, and I wonder whether there is a way to run cycle~ in an "efficiency mode", or another Max object that does this right.
And what happens in Live when this Max abstraction runs in a MIDI device: will cycle~ be processed at message rate anyway, so that no performance is wasted?
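To make the f(t) idea above concrete: here is a minimal Python sketch (not Max code; the function and parameter names are mine) of a sine LFO evaluated only when a new tick value arrives from [transport] or [plugsync~], assuming Max's default resolution of 480 ticks per quarter note:

```python
import math

PPQ = 480  # Max transport default: 480 ticks per quarter note

def lfo_cc_value(ticks, period_in_quarters=4.0, depth=1.0):
    """Sine LFO evaluated once per incoming tick, not at audio rate.

    ticks: raw tick count from the transport
    period_in_quarters: LFO period in quarter notes (4.0 = one bar in 4/4)
    depth: modulation depth, 0..1
    Returns an integer MIDI CC value in 0..127.
    """
    phase = (ticks / (PPQ * period_in_quarters)) % 1.0
    bipolar = math.sin(2.0 * math.pi * phase)      # -1..1
    unipolar = 0.5 + 0.5 * bipolar * depth         # 0..1
    return round(unipolar * 127)

# One-bar sine: quarter-note positions across the bar
print(lfo_cc_value(0))      # phase 0.0  -> 64 (centre)
print(lfo_cc_value(480))    # phase 0.25 -> 127 (peak)
print(lfo_cc_value(1440))   # phase 0.75 -> 0 (trough)
```

The same arithmetic is easy to build from [transport], [expr] and [scale] objects; the point is that the function only runs once per tick, so nothing is computed at audio rate.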
You can have a look at the jit.mo package for easily creating event-based LFOs. [jit.time.sin] will give you a sine function whose @freq and @scale you can change, and you can send it a "phase 0" message to reset the phase. It's meant to work with jit.world, but it works without it if you set @automatic to 0 and feed it regular bangs from a [metro].
The [metro] interval lets you define how often the data is updated.
In terms of efficiency, you can use a single metro to drive all jit.time objects, and send a 1/0 to enable/disable them individually.
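The single-clock idea can be illustrated outside Max with a hypothetical event-rate sketch (Python, not Max code), where one shared tick stands in for the [metro] bang and each LFO carries its own enable flag:

```python
import math

class EventLFO:
    """Event-rate sine LFO, updated only when banged (cf. jit.time.sin)."""
    def __init__(self, freq_hz=1.0, enabled=True):
        self.freq_hz = freq_hz
        self.enabled = enabled
        self.phase = 0.0
        self.value = 0.0

    def bang(self, dt_ms):
        if not self.enabled:      # a 0 message disables this LFO
            return self.value
        self.phase = (self.phase + self.freq_hz * dt_ms / 1000.0) % 1.0
        self.value = math.sin(2.0 * math.pi * self.phase)
        return self.value

# One "metro" drives all LFOs; disabled ones cost almost nothing.
lfos = [EventLFO(0.5), EventLFO(2.0, enabled=False), EventLFO(5.0)]
interval_ms = 10.0                # shared metro interval
for _ in range(100):              # one second of updates
    values = [lfo.bang(interval_ms) for lfo in lfos]
```

The design point is the same as in the patch: there is only one clock, and the per-LFO cost is a handful of arithmetic operations per bang rather than a continuous audio-rate computation.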
Not sure how it performs compared to an audio-domain LFO with a snapshot~. I would say less, but I haven't tested!
One thing I just noticed that seems like a bug though: a @freq below 1 doesn't seem to work properly, so you need to use @speed to get proper sub-1 Hz frequencies.
Thanks @TFL - this is an interesting solution & a great idea.
Using a metro to generate a new LFO output value from the jit.mo objects fits naturally into my patching idea - awesome!
But what do you mean by it performing "less" compared to an audio-domain LFO with snapshots?
I would assume that jit.mo objects use fewer CPU resources than a cycle~ (I don't believe they are executed on the GPU, as it's not jit.gl.mo).
Since jit.mo objects only produce values on demand via a bang, and a metro at the full PPQ rate of Max (or Ableton Live) triggers far less often than audio-rate signals, it should use fewer CPU resources. Or am I missing some Jitter performance overhead that is "baked into" the Jitter motion objects per se?
> I would assume that jit.mo objects use fewer CPU resources than a cycle~ (I don't believe they are executed on the GPU, as it's not jit.gl.mo).

That's what I meant by "less", sorry for the unclear phrasing! I just mixed up "performance" and "resources" in my head.
the [snapshot~] object takes about 300 times more CPU than a dozen math objects, and in exchange it gives you far worse timing accuracy than [metro] or [line] do. so forget about it, unless music signals are the source of your MIDI generator.
it is clearly a good idea to drive such things from [plugsync~], but you will find that you want to add a few custom things to it, to override its output when required.