Dec 06 2013 | 3:08 pm
    I'm building a drum sequencer. It started with 5 voices for the drum sounds and now has 10. Each voice is a subpatch with various function boxes for ADSR, number boxes for frequency, etc., all synthesized from cycle~ and noise~. My problem is that the patch has become quite slow since I added the extra 5 voices (10 altogether now), even though my CPU usage is only around 10%.
    My understanding of poly~ isn't absolutely clear right now, but could I wrap these 10 subpatches in a poly~ object to make the patch a bit more responsive? Maybe I could create one set of function and number boxes and send those values to each poly~ voice? Or is there a better solution?

    • Dec 06 2013 | 5:01 pm
      Post your patch. Could be a lot of things causing inefficiencies.
    • Dec 14 2013 | 3:56 pm
      I think I figured it out: I moved all my pan objects into the subpatcher and temporarily got rid of all the meter~ objects, and for some reason it seems to be running fine now.
      I've just run into another issue that I'd appreciate some insight on.
      It seems like I need to create some sort of connection between the sequencer's tempo and the domain of each function box in order to avoid clicking: when the tempo is fast enough and the domain short enough, I get a POP at the beginning of each triggered sound, even though my ADSR function boxes are set to fade in.
      Also, is it a good rule of thumb to scale every amplitude by 0.25 to avoid clipping in a situation like this? I read something like that somewhere and it's been in the back of my mind for some time now. Thanks in advance for any help.
    • Dec 14 2013 | 4:10 pm
      Regarding the popping: I guess what I'm asking is whether there's a way to interrupt those waveforms without causing a pop, and without adjusting the domain to match the sequencer's tempo.
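      For illustration, the pop can be reproduced outside Max. This is a hedged plain-Python sketch (not a Max patch; sample rate, frequency, and the 5 ms ramp length are assumed values): retriggering a still-sounding voice cuts the old waveform at an arbitrary amplitude, which creates a sample-to-sample discontinuity, i.e. a click. Fading the old tail to zero over a few milliseconds before the retrigger removes it.

```python
# Hedged sketch (plain Python, no Max objects): why retriggering a voice
# mid-waveform pops, and how a short fade-out ramp before the retrigger
# removes the discontinuity. Numbers are illustrative, not from the patch.
import math

SR = 44100          # sample rate (assumed)
FREQ = 220.0        # assumed voice frequency

def sine(n):
    return [math.sin(2 * math.pi * FREQ * i / SR) for i in range(n)]

def max_jump(samples):
    """Largest sample-to-sample step; big jumps are audible as clicks."""
    return max(abs(b - a) for a, b in zip(samples, samples[1:]))

# Hard retrigger: the old waveform is cut off at an arbitrary value,
# then the new note restarts at 0 -> discontinuity -> pop.
old_tail = sine(450)               # note is still sounding...
hard = old_tail + sine(450)        # ...and gets cut off abruptly

# Ramped retrigger: fade the old tail to zero over ~5 ms first.
ramp_len = int(0.005 * SR)         # 5 ms declick ramp (assumed length)
ramped_tail = old_tail[:-ramp_len] + [
    s * (1.0 - i / ramp_len)
    for i, s in enumerate(old_tail[-ramp_len:])
]
soft = ramped_tail + sine(450)

print(max_jump(hard))   # large step at the cut point (the pop)
print(max_jump(soft))   # no step bigger than the sine's own slope
```

      In Max terms the same idea is usually done with a short signal ramp (e.g. a few-millisecond fade via a line~-style ramp) to zero before restarting the envelope, so the fade-in never starts from a non-zero sample.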
    • Dec 14 2013 | 5:34 pm
      Hi, you should really post an example of what you're describing, a small excerpt from your main patch. As for amplitude scaling: divide the total amplitude, 1.0, by the number of combined signals, then scale each audio signal by that value. For example, 8 voices should each be scaled by 0.125, and 2 voices by 0.5.
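      The scaling rule above is just 1/N. A hedged plain-Python sketch (not a Max patch; the four example frequencies are assumptions) showing why it works: summing N full-scale voices can peak well above 1.0, while scaling each voice by 1/N keeps the mix inside [-1, 1].

```python
# Hedged sketch (plain Python, not Max): the 1/N per-voice scaling rule.
# Summing N full-scale sines can peak near N; scaling each by 1/N
# guarantees the summed peak stays within [-1, 1].
import math

SR = 44100

def voice(freq, n=1000):
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

freqs = [100.0, 200.0, 300.0, 400.0]   # 4 example voices (assumed)
gain = 1.0 / len(freqs)                # per-voice scale: 4 voices -> 0.25

unscaled = [sum(samples) for samples in zip(*[voice(f) for f in freqs])]
scaled = [s * gain for s in unscaled]

print(max(abs(s) for s in unscaled))   # exceeds 1.0: would clip the DAC
print(max(abs(s) for s in scaled))     # stays at or below 1.0
```

      Note that 1/N is conservative: it assumes all voices can peak simultaneously, which is the worst case for a drum machine where hits do often land on the same beat.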