Lots of MIDI I/O objects vs as few as possible
Is there any advantage/disadvantage to using lots and lots of MIDI input/output objects in a complicated MIDI patcher?
Max has a set of objects that take MIDI in and send MIDI out: [midiin]/[midiout], [ctlin]/[ctlout], [rtin], etc. Now MPE objects too!
Since these objects connect Max to the "outside world" I try to limit the number of such objects in a patcher.
My thinking is that, as with a physical cabling setup, it's better to have a few connections each doing as much as possible than a lot of connections that each do very little.
However I am starting to question this method. It can lead to byzantine connections inside Max, and some arbitrary-seeming patcher structures.
So I'm asking here: does it make any difference, performance-wise or otherwise, to use lots of MIDI I/O objects rather than as few as possible in a patcher hierarchy?
One assumption: active MIDI input and output ports are carefully managed via the Max MIDI Setup or via messages to Max.
Your experience/knowledge/ideas are most welcome.
Although I don't know all that much about Max, I'd expect those objects to connect to some other part of Max that handles the I/O rather than directly to the ports. In that case, creating many of them would have about the same performance impact as making multiple connections to a single object. Depending on how Max handles sends/receives and routing to subpatchers, I could even see it being more efficient to connect directly to that I/O handling layer instead of passing data around yourself.
@jeremy I see you posting on the forum I bet you know a thing or two about this.
Bump for daytime in the Americas
Nobody else? I think this is mildly interesting!
I have absolutely no idea of the CPU usage of this stuff. On Mac there's Audio MIDI Setup, so for all I know it does most of what's needed and the path from there to Max is really easy?
So I just use whatever's easier for patching, whatever makes it simpler and clearer. If you're processing individual controllers or individual MIDI message types, they have to get separated out somewhere along the line, so the point at which that happens quite possibly doesn't matter much. If they don't need to be separated, then just a [midiin] is fine.
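To put it another way, either of these ends up doing the same separation work, just at a different point (rough sketch, not a real patch):

[ctlin 1]  [notein]  ...one object per message type, separated right at the input

versus

[midiin] -> [midiparse]  ...one object per port, separated at [midiparse]'s outlets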
When thinking about efficient patching I just remind myself how much data a single stereo 44.1 kHz audio file uses. Compared to that, MIDI messages would have to be a walk in the park, wouldn't they?
I did once try to investigate a bit further, but it took a *LOT* of simultaneous MIDI processes to make much difference in the Mac's Activity Monitor. I did some MIDI thing (can't remember what), copied it 9 times, encapsulated it, copied it 9 times, encapsulated it... and so on till there were maybe 10000 of the same thing happening. The Activity Monitor jumped and went back to normal, but nothing freaked out and I've hardly thought about it since then. That doesn't exactly answer your question, but it seems to suggest that MIDI's pretty easy for the CPU.
Bill, thank you.
Some of the things I work on do pass a lot of MIDI, and, strangely, even though audio is obviously a lot more intensive, timing can get weird, especially when interfacing with other applications (not Max for Live).
But, yeah, I concur that the MIDI protocol is lightweight, and thanks for confirming this. I have done similar stress tests and found Max much more accurate timing-wise than most DAW applications.
The thing I wonder about is the method Max uses to expose its MIDI I/O objects to the OS.
Does instantiating lots of MIDI input objects, at all kinds of different patcher levels, lead to unintended complications for Max's overall MIDI I/O, performance, or anything else?
Bump for the weekend. I will stop bumping this, just v curious about the topic.
It must work differently on Windows and Macintosh.
i totally know what you mean... the solution is probably to make a custom abstraction which can take it all.
Roman, good to know I'm not alone.
And I do make all kinds of abstractions/patchers to make complicated apps more modular.
Nonetheless I still wonder about many vs. few MIDI input/output objects. Is the Max application indifferent to this, or does MIDI from the outside world entering Max through 1 vs. 16 or 256 MIDI input objects at varying patcher levels have implications for, perhaps, the scheduler thread or any other core Max processes?
the input order is controlled by the source, the output order is in your hands.
the order of, for example, two identical [ctlin] objects is right to left, bottom to top, subpatcher before mother patcher.
i believe that the smaller midi objects date from the days when computers ran at 30 MHz. since y2k you should be fine using almost exclusively midiin/midiout/oms.
about the scheduler thread: [midiin] output is low priority, and you can move it up using [del 0].
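roughly like this (just a sketch of that idea):

[midiin]     <- raw bytes arrive at low priority
|
[del 0]      <- re-queues each byte into the scheduler (high-priority) thread
|
... whatever processing follows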
but there is also an option to globally have midi in the scheduler thread. in my ancient max 4 it is called "prioritize midi" i think. i don't use it.
Yes, the thread MIDI runs in can change:
1) Default: main thread
2) Overdrive on: scheduler thread
3) Overdrive on, Scheduler in Audio Interrupt enabled: audio thread
Regarding my post:
Nonetheless I still wonder about many vs. few MIDI input/output objects. Is the Max application indifferent to this, or does MIDI from the outside world entering Max through 1 vs. 16 or 256 MIDI input objects at varying patcher levels have implications for, perhaps, the scheduler thread or any other core Max processes?
In my case the scheduler is usually set to Overdrive, and sometimes Audio Interrupt is on.
So, if there are 2 [notein] objects, one taking in and passing MIDI notes through for processing and another that is gated, does that double the number of events Max has to assign to whatever thread MIDI is handled in? (Rough example sketched below.)
1) If so, this seems like a reason to have as few MIDI in/out objects as possible in a patcher system.
2) If not, it seems easier in certain cases to have MIDI in/out objects connected directly to their subsequent processing, selectively gated.
My hunch till now has been (1). Is that correct?
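Something like this (pitch outlets only, for simplicity):

[notein] -> note processing                      (always listening)
[notein] -> [gate] -> other processing, gated    (only passes notes when the gate is open)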
yes of course. 2 midi objects mean twice the data, or in other words, when overdrive is on it is not a good idea to use 145 midiin objects.
otoh, i am not so sure what happens when the second midiin is not connected to anything. or gated... :)
i'd say if you really need different types of incoming data at very different places, you are well advised to use only one main midi object and then send/receive to other places. the same is probably true for OSC, TCP, serial or mouse and keyboard.
well - one per port! in some studios there are maybe four dozen ports...
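something like this, roughly (the send/receive name is just an example):

[midiin]               <- the one object per port that talks to the outside world
|
[send ---rawmidi]

...and anywhere else in the patcher hierarchy:

[receive ---rawmidi]
|
[midiparse]            <- or whatever processing is needed at that spot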
-110
Thank you for taking this journey with me.
Seems obvious that more I/O = more to schedule, even if the I/O objects don't send anything past an almost-immediate gate.
But, I'm always curious whether something that seems self-explanatory has other dimensions.
For what it's worth, I've always just used a single [midiin] object per port. This is because I immediately follow the [midiin] with a [midiparse] and then I distribute notes, CC events and so forth as atomic entities rather than parsing raw MIDI bytes all over the place. It also means that I only have to define my MIDI ports in one place, which is important if the actual MIDI device names can change often.
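Roughly like this (a simplified sketch; the send names are just examples, and most [midiparse] outlets are omitted):

[midiin]            <- the only place the MIDI port is named
|
[midiparse]         <- splits the raw bytes into messages
|           |
(note)      (ctl)   <- e.g. the note outlet gives pitch/velocity lists, the ctl outlet gives controller/value lists
|           |
[send ---notes]  [send ---ctls]   <- from here on, events travel as atomic entities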
FWIW, my take in a recent rather big MIDI project has been to use one [ctlin] and one [notein], as well as one [ctlout] and one [noteout]. It was just logical given the MIDI data I was dealing with, and it made my patch design smoother than going through [midiparse]. Your decision will come not only from performance, but also from style, code maintenance, documentation, potential evolution, etc.