Tricks to avoid clipping during Granular Synthesis

Rodolfo Cangiotti

Hello all,
I am currently working on a project to build a kind of all-in-one granular synthesizer. The algorithm is similar to the one implemented in the example available on this website: a poly~ object generates instances/voices that randomly calculate parameters for the synthesis, and the range in which these parameters are generated is sent to all voices through the send/receive system. The main issue with this approach is that the random generation of the grains frequently produces clipping. Instead of manually lowering the output level, are there any tricks that would control the signal amplitude without involving the user? I have tried generating a coefficient that takes the number of active voices into account, but clipping still occurs sometimes, and when there are many active voices the level is attenuated too much. I was also thinking about a compression algorithm. What about that? Could it avoid clipping?
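(As a side note on the over-attenuation: when the grains are uncorrelated, the peak of the mix tends to grow much more slowly than the voice count, so dividing by n leaves a lot of headroom unused. This is just a quick numeric illustration in plain Python, not a patch; the grain model and all values are made up.)

```python
import math
import random

def mix_peak(n_voices, n_samples=2000, seed=1):
    """Peak amplitude of a sum of n sine 'grains' with random phase/frequency."""
    rng = random.Random(seed)
    params = [(rng.uniform(0.01, 0.2), rng.uniform(0.0, 2.0 * math.pi))
              for _ in range(n_voices)]
    peak = 0.0
    for i in range(n_samples):
        s = sum(math.sin(2.0 * math.pi * f * i + p) for f, p in params)
        peak = max(peak, abs(s))
    return peak

n = 16
raw = mix_peak(n)
print(raw / n)             # 1/n scaling: guaranteed <= 1.0, but usually leaves headroom unused
print(raw / math.sqrt(n))  # 1/sqrt(n): more even loudness, but no hard guarantee against clipping
```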
Thanks in advance for your replies.

daddymax

Compression could squash the peaks before they hit clipping point; I'd certainly try it.
Sometimes, if I want to add some level control, I'll run a sound through peakamp~, invert the output, and use that to control an amp envelope: if the volume gets too high, it compresses it; if it gets too low, it boosts it. Finally, I have tied voice count to amp level in the way you mentioned, with some success; it's just a case of getting the scaling right.
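(A rough sketch of that idea in plain Python rather than a patch, assuming a peakamp~-style follower with fast attack and slow decay; the target level, decay constant, and function name are all made up for illustration.)

```python
def auto_gain(samples, target=0.5, decay=0.999):
    """Track the running peak and scale the signal so peaks sit near `target`.
    The gain is the inverse of the tracked peak, scaled by the target level."""
    peak = 1e-9            # running peak estimate (tiny floor avoids divide-by-zero)
    out = []
    for x in samples:
        peak = max(abs(x), peak * decay)   # fast attack, slow exponential decay
        out.append(x * (target / peak))    # loud input -> gain < 1; quiet -> gain > 1
    return out

loud = [0.9, -0.8, 0.95, -0.9]
leveled = auto_gain(loud)
print(max(abs(v) for v in leveled))  # peaks now sit near the 0.5 target
```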

brendan mccloskey
Max Patch
Copy patch and select New From Clipboard in Max.

Hi,
Are you using this method (scaling each voice's output by 1 / the total number of voices):

This always works for me; try 0.8 or even 0.5 divided by the total number of voices. But you might have some filters or EQ or something that is otherwise affecting the output signal. Alternatively, put [tanh~] on the output - this will colour the sound, though, as will any kind of folding. Compression is another option, but it's a sticking-plaster solution: find out why ONE voice is peaking first.

HTH

Brendan

Rodolfo Cangiotti

Thanks for your replies.
@Brendan: I have checked your method and it's a good solution if the number of active voices is constant (if I remember correctly, you used the same method to control grain amplitude in your Granary project). Currently, I am using a similar implementation - attached here - to "dynamically" control the amplitude of each grain, taking into account the number of active voices:

Max Patch
Copy patch and select New From Clipboard in Max.

The result is pretty good but, as I mentioned in my first post, the signal still clips sometimes. I think this is also due to the phase of each grain - constructive and destructive interference - and its duration.

EDIT: I have also tried tanh~. It seems like a good alternative because it adds harmonics, but in a smoother way.

Rodolfo Cangiotti

@Daddymax: I have also checked the object you suggested, but I haven't properly understood how you compress or boost a signal using the inverted output of peakamp~. Could you please post a brief example? Thanks in advance.

Lume

Ever checked out Nobuyasu Sakonda's Sugarsynth?
It uses thispoly~ to determine the phase of grains. Might help.

sugarSynth.zip
Peter McCulloch

I'd also recommend using 1/sqrt(n) as the scaling factor. It produces more even scaling when the sound sources are not strongly correlated. You can use an asymmetric smoother (slide~) to reduce clicks as the scaling factor changes; make the release time shorter than the attack time, maybe something like slide~ 2000 200.
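(The slide~ suggestion, sketched in Python: the object's documented recurrence is y[n] = y[n-1] + (x[n] - y[n-1]) / slide, with separate slide values for rising and falling input. The voice counts and sample counts below are made up for illustration.)

```python
import math

def slide(xs, slide_up=2000.0, slide_down=200.0, y=0.0):
    """Asymmetric one-pole smoother modeled on [slide~ 2000 200]:
    slow when the input rises, faster when it falls."""
    out = []
    for x in xs:
        s = slide_up if x > y else slide_down
        y += (x - y) / s
        out.append(y)
    return out

# Scaling factor 1/sqrt(n) as the active voice count jumps from 4 to 16:
target = [1 / math.sqrt(4)] * 4 + [1 / math.sqrt(16)] * 4
smoothed = slide(target, y=target[0])
print(smoothed)  # the gain glides toward 0.25 instead of stepping, so no click
```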

Rodolfo Cangiotti

Thanks for your suggestions, guys.
I have tried to build a compressor, but the final result is not satisfactory because, in my opinion, it influences the timbre of the sound too much. Moreover, clipping still occurs sometimes, maybe because the phenomenon happens in a very short time interval, shorter than the attack time needed for smooth compression.
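(One way around the attack-time problem is lookahead: delay the audio slightly and compute the gain from the un-delayed signal, so the reduction is already in place when the transient arrives. In Max this would mean a delay~ on the audio path; the sketch below is a deliberately crude Python illustration with an arbitrary window length, not a production limiter.)

```python
def lookahead_limit(samples, ceiling=0.9, lookahead=4):
    """Crude lookahead limiter: each output sample is scaled by the gain
    needed for the loudest sample in the upcoming window, so peaks can
    never exceed `ceiling` (no gain smoothing, so expect some distortion)."""
    out = []
    n = len(samples)
    for i in range(n):
        window_peak = max(abs(samples[j]) for j in range(i, min(n, i + lookahead)))
        gain = ceiling / window_peak if window_peak > ceiling else 1.0
        out.append(samples[i] * gain)
    return out

burst = [0.1, 0.2, 1.4, -1.2, 0.3, 0.1]   # a grain transient that would clip
limited = lookahead_limit(burst)
print(max(abs(v) for v in limited))        # peak held at the ceiling
```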
@Peter: I am currently using the equation you suggested to scale the amplitude of each grain. The number of active voices is sent to all the voices using the send and receive objects, so the coefficient is computed when a voice is triggered and remains constant for the whole duration of the instance. Clipping still happens because that coefficient is calculated for a state which can change over time, I suppose.
Maybe it would be better to send the voice count as a signal to all the voices? That way the coefficient could vary during an instance, and I could smooth the variation using a LPF with a really low cutoff frequency. What do you think? Thanks in advance.
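(That signal-rate smoothing idea would look roughly like this: a one-pole lowpass along the lines of [onepole~] applied to a per-sample gain target. The cutoff frequency, sample rate, and voice-count sequence below are placeholder values for illustration only.)

```python
import math

def onepole_lp(xs, cutoff_hz=2.0, sr=44100.0, y=0.0):
    """One-pole lowpass, y[n] = (1 - a) * x[n] + a * y[n-1],
    with a = exp(-2*pi*fc/sr); smooths a signal-rate gain coefficient."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    out = []
    for x in xs:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

# Gain target 1/sqrt(n), stepping as the active voices go from 4 to 16:
target = [1 / math.sqrt(4)] * 100 + [1 / math.sqrt(16)] * 100
gain = onepole_lp(target, y=target[0])
# The step from 0.5 down to 0.25 becomes a slow, click-free glide.
```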