Anyone got an example of how to wrap a vst~ in a poly~
Trying (for probably the 4th time) to refactor my system to use the poly~ object but I just can't figure out how routing is supposed to work. I was looking at a tutorial I found on how to build a polyphonic synth but there were aspects that made no sense to me.
For example, I see that one prepends "midinote" to a note message, but I assume I need to construct VST "midievent" messages and then prepend "midinote" to those. Is that correct?
I also don't understand what it means to have multiple "in" objects with the same index value. What's the purpose of that?
I believe that I'm supposed to just have 1 voice for the poly~ and depend on the encapsulated VST~ for polyphony. Will that work?
I had a lot of trouble just getting off the ground because, unlike abstractions in bpatchers, the poly~ patcher did not refresh after I inserted [in] or [out] objects into the abstraction until I forced the poly~ to reload its argument.
My current abstraction for managing VSTs ultimately sends audio signals via [send~] but if I do that, will I still get the benefit of poly~ when a particular vst~ is not active?
If anyone has a simple example of how to manage/control a vst~ from inside a poly~, I'd really appreciate a peek at it.
Thanks,
D
Have you looked at MSP Tutorial 21? It explains the main function of poly~, i.e. creating a polyphonic synthesizer from multiple instances of a monophonic synthesizer. And as I understand it, the [cycle~] object used in the examples can basically be replaced by [vst~].
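In case a picture helps, here's a rough text sketch of what a single voice patch might look like with [vst~] standing in for the tutorial's [cycle~]. The patch name "vst-voice", the 2-in/2-out plugin and the raw MIDI bytes are just placeholders:

    [in 1]                 <- message inlet; receives e.g. "144 60 100" from the parent
      |
    [prepend midievent]    <- turns that into "midievent 144 60 100" for vst~
      |
    [vst~ 2 2]             <- load a plugin into it with a 'plug' message
      |      |
    [out~ 1] [out~ 2]      <- stereo signal outlets back to the parent

and in the parent patch simply [poly~ vst-voice 1]. Other vst~ messages (plug, read, etc.) would need their own routing so they don't get wrapped by the [prepend].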
i remember there is a problem with send~/receive~ inside a poly~, but i don't remember what exactly
Yes, I've looked at that tutorial -- I get the main function of poly~, that's why I'm trying to use it. But among other things, that tutorial doesn't explain how one gets the different kinds of messages into a VST (I don't think you can just replace cycle~ with vst~; the latter wants lots of different kinds of messages), nor what it means to have multiple [in~] with the same index, etc.
The examples I've found seem to assume that the target object inside a poly~ receives (and only receives) note messages. So questions as to how one deals with the various messages that come out of a VST (parameters, etc.) as well as the other commands that can be sent to a VST (to load a particular plugin, for example) are not addressed. For example, is it the case that I could use an [in] just for MIDI notes and use regular [send/receive] stuff inside the poly~ to control other aspects of the VST?
Can I do away completely with the [in/out] stuff and just receive MIDI notes inside the poly?
Each of my poly~ objects will contain just one VST with just one voice. All I really care about is being able to turn off processing when a particular VST is not in use.
Is there a reason to not just use the vst~ object outside a poly~? The vst~ object understands 'disable' and 'bypass' messages that turn off processing.
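For example (boxes ending in "(" are message boxes):

    [disable $1(   [bypass $1(    <- driven by toggles; 1 = off, 0 = back on
          \           /
           [vst~ 2 2]

As I understand it, 'disable' stops the plugin from processing entirely, while 'bypass' passes the dry input signal straight through.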
That's exactly what I'm doing now. But it's my understanding (and if I'm wrong, someone please correct me) that this doesn't stop processing, it just causes 0s to be continuously sent through the audio chain, so Max still does DSP work. I think when you mute a poly~, it actually stops DSP processing for whatever is inside the poly~ and hence saves a lot of CPU cycles.
My hope was to encapsulate my entire VST abstraction (which has lots of other stuff in it) inside the poly~, but it may be that I need some serious refactoring first! That's why I'm trying to understand how to use poly~ with a vst~.
If you don't need the voice allocation feature of poly~ you could use explicit 'target' messages and thus send any message you like directly to the VST through a control inlet [in 1]. And to turn off processing when an instance is not busy, I think within poly~ you have to send a 'mute' message to [thispoly~].
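A rough sketch of what I mean; the voice patch name and the message contents are only examples:

    [target 1, plug MySynth.vst(   [midievent 144 60 100(
                  \                 /
               [poly~ vst-voice 1]    <- left inlet; messages reach [in 1] of the targeted instance

and inside the voice patch, for muting:

    [mute $1( --> [thispoly~]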
Yeah, 'target' does seem to be part of the solution. I also think I need to tap into the output somehow so that when the volume reaches (within some epsilon of) zero, I can make that mute happen automatically. I had noticed in one of the tutorials that an ADSR~ was being used for this, but that won't work for me.
I was just hoping that someone knew of a simple example of poly encapsulating vst so that I didn't have to start from scratch to figure it all out.
How about controlling mute from MIDI input: note-on > 'mute 0', note-off > 'mute 1'?
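Roughly, assuming the velocity arrives via [in 1]:

    (velocity)
      |
    [sel 0]            <- velocity 0 (note-off) bangs left, anything else goes right
     |        |
    [mute 1(  [mute 0(
        \      /
      [thispoly~]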
Because the VST sounds themselves often have release segments.
A quick and dirty test of vst~ using Max's built-in DSP CPU monitor and a convolution reverb plugin (NI Reflektor): with 'disable 0' (plugin running) CPU utilization is 9%-10%; with 'disable 1' it's 1%. Might just be this plugin, but I get the same results using bypass. This is on a 2012 15" MBP w/retina display, Mavericks, Max 6.1.6.
I see, but maybe you could delay the mute 1 message by some reasonable amount. Otherwise I would measure the signal output with peakamp~ and trigger mute 1 if the value becomes close to zero.
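Something like this inside the voice patch; the 100 ms interval and 0.001 threshold are arbitrary:

    (signal from [vst~])
      |
    [peakamp~ 100]    <- reports the peak amplitude every 100 ms
      |
    [< 0.001]         <- 1 once the peak falls below the threshold
      |
    [change 0]        <- only pass changes of state
      |
    [sel 1]           <- bang only when it goes silent
      |
    [mute 1(
      |
    [thispoly~]

Note that once the instance is muted its DSP stops, so this chain can't un-mute itself; the 'mute 0' has to come from outside, e.g. with the next incoming note.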
@broc --- thanks for peakamp~, that will be very helpful.
Delaying the mute message is not practical --- depending on what's being triggered, the release time can be anywhere from essentially instant (most things) to several minutes (for some special effects that fade out at the end of a song, for example).
@bdc --- that is very interesting. I haven't seen those results myself, so I'll have to check again in case I made a mistake in my implementation. I am using a number of NI VSTs. On the other hand, I'd really like to take advantage of both parallelization and automatic disabling when a VST is not sounding. That way, I can have a lot of VSTs loaded, with different ones used at different times in the song.
@broc ---- thanks again for the guidance ---- I have the basic mechanism working, encapsulating one of my VST~ abstractions ---- muting works beautifully and I am enjoying watching the CPU utilization go way down.
I created an [in 2] inlet that goes directly to the encapsulated VST~ and I'm just sending the standard VST messages through that inlet. I guess if I had more than one instance of a VST inside each poly, the target message would be relevant.
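In case it's useful to anyone else, the shape of it is roughly this (the file name and MIDI bytes are just examples). In the parent, into the second inlet of [poly~ vst-voice 1]:

    [plug(   [read mybank.fxb(   [midievent 144 60 100(

and inside the voice:

    [in 2] --> [vst~ 2 2]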
One question --- if I don't use the midinote or note messages, which seem to be related to choosing instances automatically, it seems to me that I have two approaches going forward.
1) Create a separate poly~ for each VST abstraction and control them separately.
2) Create a single poly~ that would hold my entire bank of VSTs but then use 'target' to send commands (plug, read, midievent, etc) to the desired instance.
What are the benefits/drawbacks to one approach vs. the other? Do I get the multicore benefit if I have multiple individual poly~ objects?
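To make option 2 concrete, I'm imagining something like this (the instance count and plugin names are placeholders):

    [target 1, plug Reflektor(    [target 2, plug Massive(
                   \                /
                [poly~ vst-voice 8]    <- one loaded plugin per instance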
I still don't understand the purpose of the numeric argument to [in] and why you can have multiple [in] objects with the same argument. How come [in] doesn't use the same mechanism as regular inlets to control the order of input ports on a poly~? In fact, why can't one just use the regular inlets?
Regarding multicore benefit you can find some explanation on the reference page under the 'parallel' attribute. But I'm not sure if that info is up-to-date.
Perhaps the numbering of in/out was introduced to distinguish poly~ patches from ordinary abstractions.
Thank you ---- I'm busy working to encapsulate my large object now --- but it's very cool to see the CPU utilization drop down to 0% (not even 1%) when the sound fades away. Can't wait to see how well this will work on my real system.
Again, really appreciated the pointers.
I have poly~ working now in my environment. I don't know if I'm getting the benefit of multiple cores, but I can now leave 8 or 9 VSTs loaded and only spend CPU cycles on whatever is currently sounding. That peakamp~ did the trick perfectly, both inside the poly~ to detect when the sound has died away enough to mute, and outside the poly~ to detect incoming audio (some of my VSTs are effects and don't have any MIDI input, so I need to detect audio from another VST) and unmute.
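In case anyone else needs it, the outside-the-poly~ unmute looks roughly like this (again, the interval and threshold are arbitrary):

    (audio tapped from the upstream VST, outside the poly~)
      |
    [peakamp~ 100]
      |
    [> 0.001]          <- 1 as soon as signal shows up
      |
    [change 0]
      |
    [sel 1]            <- bang on the silent-to-sounding transition
      |
    [mute 1 0(         <- poly~'s mute message: voice number, then 0 = unmute
      |
    (left inlet of the [poly~])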
However, I'm left with one question: what exactly is happening when a poly~ is muted? I thought it disabled the entire DSP chain, but clearly it only disables signals inside the poly~; otherwise I wouldn't be able to detect arriving audio.
So is it the case that all audio connections going into the poly~'s inputs, as well as all those following it, still get processed? I.e., when the poly~ is muted, there's still a 0 signal being propagated?
I ask because I'm wondering whether it makes sense for me to throw other stuff in the audio chain into separate poly~ objects just to disable processing when I don't need them.