I am working on delays at the moment, and one of my requirements is being able to change delay times dynamically without pitch-shifting or Doppler artifacts.
I have read every thread on the subject and used most of the abstractions that implement a variable delay buffer (Ejies, Bennies, etc.). In my patches I now use the [M4L.vdelay~] object pretty regularly. There are many versions of the basic algorithm, and they all manage to suppress pitch-shifting artifacts *fairly* well. However, no matter the size of the crossfading window, when I change the delay time dynamically I can still hear glitches in the resulting audio, especially with tones that have fairly pure harmonic content, such as flutes and bells.
I am wondering if it is possible to improve on the typical crossfading algorithm as implemented, for example, in the [M4L.vdelay~] abstraction.
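To make sure we're all picturing the same thing, here is a minimal sketch of the two-tap crossfade as I understand it, written in plain C rather than as a patch. All names (vdelay_t, vdelay_set_delay, the buffer and fade sizes) are mine, and I'm not claiming this is literally what [M4L.vdelay~] does internally; it's just the general idea of one circular buffer, two read taps, and an equal-power crossfade whenever the delay time is retargeted:

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979
#endif

#define BUF_LEN  96000      /* max delay in samples (2 s at 48 kHz)  */
#define FADE_LEN 4096       /* crossfade window in samples           */

/* linear-interpolated read, `del` samples behind the write head */
static float tap_read(const float *buf, int write, float del)
{
    float pos = (float)write - del;
    while (pos < 0.0f) pos += (float)BUF_LEN;
    int   i0 = (int)pos;
    float fr = pos - (float)i0;
    int   i1 = (i0 + 1) % BUF_LEN;
    return buf[i0] + fr * (buf[i1] - buf[i0]);
}

typedef struct {            /* zero-initialize before use            */
    float buf[BUF_LEN];
    int   write;            /* write index                           */
    float del_a, del_b;     /* delay (in samples) of the two taps    */
    int   fade;             /* samples left in the current crossfade */
    int   active_b;         /* nonzero while tap B is the new tap    */
} vdelay_t;

/* Retarget: park the idle tap at the new delay and start a fade.
   Caveat: if this is called while a fade is still in progress, the
   tap gains jump discontinuously, which clicks. */
void vdelay_set_delay(vdelay_t *v, float new_del)
{
    if (v->active_b) v->del_a = new_del;
    else             v->del_b = new_del;
    v->active_b = !v->active_b;
    v->fade     = FADE_LEN;
}

float vdelay_process(vdelay_t *v, float in)
{
    v->buf[v->write] = in;
    v->write = (v->write + 1) % BUF_LEN;

    /* equal-power crossfade; t runs from 1 down to 0 over the fade */
    float t = v->fade > 0 ? (float)v->fade / (float)FADE_LEN : 0.0f;
    if (v->fade > 0) v->fade--;
    float g_in  = cosf(0.5f * (float)M_PI * t);   /* new tap: 0 -> 1 */
    float g_out = sinf(0.5f * (float)M_PI * t);   /* old tap: 1 -> 0 */

    float a = tap_read(v->buf, v->write, v->del_a);
    float b = tap_read(v->buf, v->write, v->del_b);
    return v->active_b ? g_out * a + g_in * b
                       : g_in  * a + g_out * b;
}
```

Note the caveat in vdelay_set_delay: if a new delay time arrives while a fade is still running, one tap's gain jumps discontinuously. That would explain why changing the crossfade window size never fully cured the clicks for me.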
This 10-year-old thread makes me think there is hope. In it, Roman Thilenius said:
"you should use 3 and not only 2 parallel processes for best results."
Roman, could you elaborate on that, please?
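While waiting for Roman to chime in, here is my guess at what "three parallel processes" could mean, continuing the C sketch above (it reuses BUF_LEN, FADE_LEN, and tap_read from the previous block; again, these are hypothetical names, not anyone's actual implementation). With three taps, a retarget can always land on a tap that has already faded to silence, so no audible tap ever has its delay or gain yanked mid-fade:

```c
#define NTAPS 3

typedef struct {
    float del;     /* delay in samples       */
    float gain;    /* current gain, 0..1     */
    float step;    /* per-sample gain change */
} tap3_t;

typedef struct {               /* zero-initialize before use  */
    float  buf[BUF_LEN];
    int    write;
    tap3_t taps[NTAPS];
    int    cur;                /* tap currently fading in     */
} vdelay3_t;

/* call once before processing to set the initial delay time */
void vdelay3_set_delay(vdelay3_t *v, float new_del)
{
    /* pick the quietest non-current tap; with three taps it is
       normally already silent, so retargeting it cannot click */
    int next = 0; float g = 2.0f;
    for (int i = 0; i < NTAPS; i++)
        if (i != v->cur && v->taps[i].gain < g) { g = v->taps[i].gain; next = i; }

    v->taps[next].del    = new_del;
    v->taps[next].step   =  1.0f / FADE_LEN;   /* fade new tap in  */
    v->taps[v->cur].step = -1.0f / FADE_LEN;   /* fade current out */
    v->cur = next;
}

float vdelay3_process(vdelay3_t *v, float in)
{
    v->buf[v->write] = in;
    v->write = (v->write + 1) % BUF_LEN;

    float out = 0.0f;
    for (int i = 0; i < NTAPS; i++) {
        tap3_t *t = &v->taps[i];
        t->gain += t->step;                    /* linear ramp      */
        if (t->gain <= 0.0f) { t->gain = 0.0f; t->step = 0.0f; }
        if (t->gain >= 1.0f) { t->gain = 1.0f; t->step = 0.0f; }
        if (t->gain > 0.0f)
            out += t->gain * tap_read(v->buf, v->write, t->del);
    }
    return out;
}
```

Each superseded tap simply finishes its own fade-out, so even rapid-fire delay changes never produce a gain discontinuity; at worst the sum of the linear ramps dips slightly. If that matches what Roman had in mind, I'd love confirmation.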
Does anybody know of a more polished or effective strategy for dynamically changing delay times without introducing pitch-shifting artifacts? Any patches to share?
Thanks a million for any suggestions or advice.