Can’t achieve polyphony to save my life!
First of all, I am sorry for the double posting! I am trying my hardest to achieve polyphony in my synthesizer. I have followed the new video tutorial and looked at tonnes of stuff on poly~, but I don’t know where to start!
I want to play chords from the input of a MIDI instrument but can’t seem to find a way. Is the organisation of my patch totally wrong? Should I have built it polyphonically straight away?
I don’t know why I find this so difficult. Could someone please point me in the right direction? I can’t seem to get any input from my MIDI keyboard at the moment.
I am sorry I am stressing over getting this to work!
seems clear to me?
in the beginning one tends to think that there is only a magic button to press and then
the world is polyphonic (like the polyphony in your forum post – by the click of one
back button you make 2 copies of the same post.)
in fact you will either have to investigate the help file or find your own way to
automatically switch between voices and turn them on and off.
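to make that concrete: the switching is really just bookkeeping. this is not max code – just a python sketch of the idea, with made-up names, of what the note-on/note-off allocation does:

```python
# Sketch of the bookkeeping behind note-on/note-off voice switching.
# Not Max code -- just the allocation logic, in Python for illustration.

class VoicePool:
    def __init__(self, num_voices):
        # None means the voice is free; otherwise it holds the MIDI pitch.
        self.voices = [None] * num_voices

    def note_on(self, pitch):
        """Find a free voice and turn it on; return its index, or None."""
        for i, held in enumerate(self.voices):
            if held is None:
                self.voices[i] = pitch
                return i          # in Max terms: address this voice, start it
        return None               # all voices busy -- this note is dropped

    def note_off(self, pitch):
        """Release whichever voice currently holds this pitch."""
        for i, held in enumerate(self.voices):
            if held == pitch:
                self.voices[i] = None
                return i
        return None

pool = VoicePool(4)
a = pool.note_on(60)   # first free voice, index 0
b = pool.note_on(64)   # index 1
pool.note_off(60)      # voice 0 is free again
c = pool.note_on(67)   # reuses voice 0
```

(in max, poly~ can do this for you with the midinote message and voice busy flags – the sketch is only the logic you’d replicate if you roll your own.)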
the cycling video is nice, but personally i find the way the adsr object is used there questionable:
using adsr’s fade-out time you introduce 10 ms of latency in your whole
synth engine only to save one running voice – you could as well make a custom
release function for your custom zl reg / line~ envelope and then use 17 voices.
also, i think that 10 ms will still plop anyway, so why not just cut it off totally.
last but not least, voice stealing might not be the best idea anyway. it means using
a limited system. the composer will still have to take this number-of-voices barrier
into account, and it will still "sound different" when too many voices are played.
the 17th will start – but at the cost of the 1st being stopped. so what is it good for? ^^
if i (the composer) know that i will need 17 or 18 voices, i will enable 20 for my poly~.
or 32. the instances which are not running aren’t using CPU anyway, they’re just loaded.
or the other way round: if i know that i don’t have enough voices for envelopes with
long fade-out times, i will make shorter envelopes. but making the fade-out time
dependent on the number of voices seems a bit unmusical.
there is one exception to this idea: if you model an acoustic original ("trombone quartet"),
you might want to use voice stealing. :)
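if you do want stealing, the rule is simply: no free voice? cut (or quickly fade) the oldest one and reuse it. again a python sketch of the logic only, not max code, names made up:

```python
from collections import deque

class StealingPool:
    """Voice allocator that steals the oldest voice when all are busy."""
    def __init__(self, num_voices):
        self.voices = [None] * num_voices   # pitch held by each voice
        self.order = deque()                # voice indices, oldest first

    def note_on(self, pitch):
        for i, held in enumerate(self.voices):
            if held is None:                # a free voice exists: use it
                self.voices[i] = pitch
                self.order.append(i)
                return i, False
        victim = self.order.popleft()       # steal the oldest voice --
        self.voices[victim] = pitch         # its note gets cut short here
        self.order.append(victim)
        return victim, True

pool = StealingPool(2)
pool.note_on(60)              # voice 0
pool.note_on(64)              # voice 1
v, stolen = pool.note_on(67)  # no free voice: steals voice 0
```

(note-offs are left out to keep it short – the point is only the "oldest first" steal rule, which is exactly the barrier i’m complaining about above.)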
bla bla bla.
The 10 ms fade time is:
a – probably imperceptible in most cases
b – only an issue when voice stealing
when not in voice stealing mode, why would you use it then? :)
Do I need to think about reconstructing my patch from the ground up? With all the in ports in the patch so far, do I need to send the target 0 message to each input of poly~ so that different parameters can be controlled, for example the waveforms and the different carrier frequencies? And just send target $1 from the notein’s pitch output?
Is it too difficult to add polyphony to my existing patch? Although I have looked at numerous poly~ tutorials and had feedback, I am still really struggling to see how I can apply this to my existing patch and am getting really frustrated about it.
Thanks for the help
the target message needs to be sent to the first inlet only (or actually to any, i think,
but i use the 1st), and it only has to be sent once to route many messages to the same
voice.
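a toy model of that dispatch behavior (python, not the actual max internals – just how i understand target: it is sticky until the next target, and target 0 broadcasts to every voice):

```python
class PolyModel:
    """Toy model of poly~'s target dispatch: 'target n' routes the
    messages that follow to voice n; 'target 0' broadcasts to all."""
    def __init__(self, num_voices):
        self.inbox = {i: [] for i in range(1, num_voices + 1)}
        self.current = 0                    # 0 = broadcast mode

    def send(self, msg):
        if isinstance(msg, tuple) and msg[0] == "target":
            self.current = msg[1]           # sticky until the next target
        elif self.current == 0:
            for box in self.inbox.values(): # target 0: every voice gets it
                box.append(msg)
        else:
            self.inbox[self.current].append(msg)

p = PolyModel(3)
p.send(("target", 0))
p.send("cutoff 800")   # goes to all three voices
p.send(("target", 2))
p.send("note 64")      # only voice 2
p.send("vel 90")       # still voice 2: target was sent once
```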
I bet if you had started with a smaller project and converted that to run under poly~, the current project would not cause you so much grief. Once you have come to terms with poly~, it’s probably a task on the order of a handful of hours to convert your big patch, but I don’t think it’s a good place to learn. I suggest that you strip this down considerably, make that core run under poly~, then add stuff back in. Learning is helped by being able to make mistakes faster, and your patch is big enough that it takes a long time to make a mistake.
Further, I would forgo Roman’s advice to do your own voice allocation until I had things running smoothly and determined whether it was really needed. The poly~ allocation works much like normal polyphonic hardware synths do, so you may find that doing your own allocation is overkill.
Thanks guys. I will do what you both suggest and start off with a new patch and try to make that polyphonic and then will start adding other synthesis elements back in. Thank you once again
Is it bad coding to use sends and receives inside of poly~?
No, it’s not bad, and in your case (lots of parameters to control) it would probably make things a lot easier for you, just as long as you are aware that s and r message order is indefinite.
Thanks Terry for the help.
Ok, my brain is fried now. I had polyphony working in my patch, but then I started to add filters back into it and it stopped working. I then removed the filters and put it back to normal and it’s still not working. Have I missed something here? Sorry for asking so many questions, but my patch has completely fried my brain!
Whoops, I uploaded the patch twice; I don’t know how to delete one of them.
Wow that patch is excellent! How long have you been using Max? Was that for an undergraduate course or a postgraduate course?
using send and receive inside poly~ is far too dangerous. you would need to
watch out carefully for each of them if you want them to be per-voice or global.
as a basic rule for the interface, try to do as many things inside which
are 1. per-voice and 2. signal, while everything global and everything data
should happen outside.
sending data from outside into the poly~ via send can also be confusing, and using
send~ from outside can even cause crashes if you do something wrong.
i know that it is also a matter of taste, but using s/r where a cable would also
have worked seems like bad behavior everywhere. :)
That really helped me, mate. So probably the best thing to do is keep the interface parameters (osc selectors, frequency etc.) inside poly~ and get the user to control them from the main patch using a bpatcher, is it?
When you talk about the different voices, isn’t this just the main frequency being sent in by a MIDI keyboard?
Should I also put the filter in poly~, or do you think it’ll be fine outside? Thanks for the help, mate.
All the things that should happen per voice need to be inside poly~. This includes the VCA-equivalent and its associated envelope. The poly~ needs to know when voices are in use or not. I have a couple of poly~ examples. The simplest one is called StupidSynth, and is available here. There is a more complicated example called Simple FM Synth, available here.
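For what it’s worth, the per-voice envelope/VCA math is simple. Here is a rough Python sketch (not Max; the function name is made up) of the kind of linear attack/release ramp you would build per voice with line~, computed one value per sample:

```python
def gate_envelope(gate, attack_samps, release_samps, n):
    """Linear attack/release amplitude ramp, one value per sample --
    roughly the math a line~-style per-voice VCA envelope computes."""
    out, level = [], 0.0
    for _ in range(n):
        if gate:   # ramp up toward 1.0 while the note is held
            level = min(1.0, level + 1.0 / attack_samps)
        else:      # ramp down toward 0.0 after note-off
            level = max(0.0, level - 1.0 / release_samps)
        out.append(level)
    return out

attack = gate_envelope(True, 4, 8, 6)   # ramps to full level, then holds
```

Multiplying the voice’s oscillator output by this ramp is the VCA-equivalent; when the ramp reaches zero after note-off, that is when the voice can report itself free.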
silly question, but what is the easiest way of putting my controls such as waveform, frequency etc. for both oscillators in the parent patch? If I use a bpatcher in the main patch, how will the various parameters be able to control the different settings in the poly~ subpatch?
Would it be best to put the control knobs and radiogroups in the main patch, then run the various inlets into poly~?
Hope to hear back from you guys, as I don’t know why this is frying my head so much. I have looked at nearly all the objects and still am unable to get a working patch!