gen~ works differently in 6.0.7?

Carlo's icon

Hi,
I have a spectral delay patch, built with a gen~ patch inside a pfft~, which used to work perfectly, with very good sound quality, in Max 6.0.5.
The same gen~ patch no longer works the same way in Max 6.0.7.

It seems I can no longer get good synchronization between the branch that carries the amplitude and phase data and the branch that handles the bin indexes driving the filters, delays and feedback buffers (the same buffer is used for both the filters and the delays).

As a consequence, a small bit of filtered sound comes out immediately when the spectral delay is fed a sound, as if it had a delay of 0. Moreover, this bit of sound does not repeat in the feedback loop, even though I set the feedback buffer to the maximum level of 1. None of this happened in Max 6.0.5.

I can avoid this behavior by using a 1-sample delay on the branches that carry the delay and feedback values (not on the filter!), but in doing that I get a small unwanted pitch shift.

I'm attaching an example patch to test all of this, along with the special buffers I'm using with this spectral delay.

The internal temporal structure of these buffers might also be part of the problem, but in any case, they worked perfectly with the previous Max 6 release!

Can anyone help with this?

Thanks in advance to anyone who answers.

Carlo

4474.SpectralDelaytest.zip
Carlo's icon

Anyone?..

Graham Wakefield's icon

Hi there,

Sorry I didn't see this sooner. I'm not sure I fully understand the question, but I'll comment on a few things that might be causing problems. Let me know if this helps.

One thing I notice is that the delay time is not quantized to the spectral frame size. When using delays within a pfft~, one sample of delay equates to a frequency shift of one bin, two samples to two bins, and so on; eventually this wraps around at the frame size, which accounts for a whole spectral frame with no frequency shift. So to avoid unwanted frequency shifts, the delay time should be quantized to the frame size. In the gen~.spectraldelay example patch, this is done by dividing by the frame size, stripping the fractional component, and multiplying by the frame size again. I've made this change to your SD.gendsp patcher below.
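
In codebox form, that quantization step amounts to something like this (just a sketch; the names framesize, in1 and out1 are illustrative, not taken from your patch):

    // GenExpr sketch: snap an incoming delay time (in samples) to whole spectral frames
    framesize = 512;                    // example FFT frame size in samples
    frames = floor(in1 / framesize);    // whole number of frames requested
    out1 = frames * framesize;          // delay time quantized to a frame boundary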

Another possible issue is that, in discrete signal processing, a feedback delay of zero is of course impossible. The absolute minimum delay for feedback is 1 sample, so if a delay of 0 is requested, the [delay] operator will apply 1 sample of delay. In the pfft~ context, this causes a frequency shift of one bin, which isn't ideal (maybe that's what was causing the additional sound you mentioned?). One way around this is to insert a frame's worth of delay in the feedback path instead, or, more simply, to enforce a minimum delay equal to the frame size. That way each bin gets exactly one frame's worth of delay, which is the minimum. I've also added this to the gendsp below.
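
Continuing the sketch above (same caveat about the placeholder names), enforcing that minimum looks roughly like:

    // GenExpr sketch: feedback delay time, never less than one whole frame
    framesize = 512;                                  // example FFT frame size in samples
    quantized = floor(in1 / framesize) * framesize;   // quantize to whole frames as before
    out1 = max(quantized, framesize);                 // at least one frame of delay, so the delay never shifts bins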

When I was working on the gen~.spectraldelay_feedback.maxpat (in the gen examples folder), I found the effect to be slightly cleaner if framedelta/frameaccum are applied inside the feedback loop rather than outside (I don't fully understand why...). I've added that change too, in case it helps.

Best,

Graham

Max Patch
Copy patch and select New From Clipboard in Max.

PS. There's a separate forum for gen-related topics here: https://cycling74.com/forums/forum.php?id=13

Carlo's icon

Hi Graham,
thank you very much for the thorough answer!

Actually, I didn't include frame quantization for the delays because that was already done outside gen~, by an external object that directly generates the buffer with a delay time per bin. That way, the delay values should already be quantized to the frame size.

I even tried cleaning the delay buffer to get rid of unwanted zero values, but as before I still get the small bit of sound leaking out at the beginning, with a delay of zero, and it doesn't get captured in the delay loop.

If I do as you suggested, quantizing the delay times to spectral frames and also imposing a minimum of 1 frame of delay, I no longer get the leaking sound at the beginning.
Still, I'm wondering why a simple "max 1" isn't enough to solve the problem (I've tested that), and why I also need the frame quantization.

I would also really like to know in detail what the differences are between the gen~ in Max 6.0.5 and the current version, since with the previous one I didn't have this problem at all, and above all the output was louder, whereas now it's a little softer.

Can you explain? At least, what has actually changed in gen~ between 6.0.5 and 6.0.7?

Thank you very much,

All my best,

Carlo

Carlo's icon

...I mean, was the minimum delay time of 1 frame already implemented in gen~ in Max 6.0.5?

This would explain why I didn't have problems with that version.

I also compared the output sound from the two versions. It's slightly different; neither better nor worse, just different.

And by the way, it's true, having framedelta/frameaccum inside the feedback loop makes the sound cleaner.

Best,

Carlo

Graham Wakefield's icon

Hi there,

The minimum delay of 1 sample is a theoretical constraint of any discrete feedback system; if something changed between 6.0.5 and 6.0.7, this isn't it. :-) What a clean sound needed, however, was a minimum delay of the frame size, which the added quantization in the patcher ensures.
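
To put example numbers on it (these are illustrative, not from your patch): inside pfft~ each sample of delay moves a bin's data by one bin index, wrapping at the frame size, so:

    // illustration only, with an example frame size
    framesize = 512;
    shift_with_max_1 = 1 % framesize;           // clamping the delay to 1 sample still leaves a 1-bin offset -> audible frequency shift
    shift_with_frame = framesize % framesize;   // clamping to a whole frame gives a 0-bin offset -> no shift, just one frame of latency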

Yes, I'm not sure why putting the delta/accum within the feedback loop helps, but it does.... I'd like to understand why.

G