Gen~ Tape Delay Emulation (tape speed)
Hi! I've successfully managed to recreate pitch movement in a delay by changing the write and read index positions. The issue with this is that once the delay time has reached its target, all pitching goes back to normal, because the difference between read and write index is constant again and there is no more Doppler effect. The only way to combat this is a really long smoothing time between tempo changes, but that seems inelegant.
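For context, here's roughly the shape of what I have now, as a codebox sketch (simplified to one channel; the buffer size and smoothing coefficient are just example values):

```
// Classic delay with a smoothed, interpolated read tap.
// Pitch only bends while `smoothed` is still gliding toward the
// target delay time; once it settles, pitch returns to normal.
Data buf(96000);
History widx(0);          // write index, always advances by 1
History smoothed(24000);  // smoothed delay time, in samples
Param delaySamps(24000, min=1, max=95000);

size = dim(buf);
// one-pole smoothing of the delay time (coefficient is arbitrary)
smoothed = mix(delaySamps, smoothed, 0.9995);
// write at the full, fixed rate
poke(buf, in1, widx);
// interpolated read, `smoothed` samples behind the write index
rpos = wrap(widx - smoothed, 0, size);
ri = floor(rpos);
rf = rpos - ri;
out1 = mix(peek(buf, ri), peek(buf, wrap(ri + 1, 0, size)), rf);
widx = wrap(widx + 1, 0, size);
```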
I'm looking to fully recreate the effect of a slowed-down tape:
When I change the delay time, the whole 'tape' changes speed: all previously recorded sounds stay pitched forever, and only newly recorded sounds are at the 'correct' pitch. As I understand it, in this model the read and write heads (index positions) stay fixed, and the delay time is affected only by the speed of the tape.
My thought process was therefore to keep the index positions at a constant difference and only change the rate of the phasor that writes and reads the data buffer.
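To make that concrete, here's a naive codebox sketch of the model I mean (my own names; the write side has no interpolation or antialiasing, which I assume is exactly where my real attempt goes wrong):

```
// Tape model: read and write heads at a fixed distance apart,
// while the 'tape' itself moves at a variable rate.
// Naive: the write just rounds down to the nearest sample, so it
// drops or doubles samples whenever rate != 1, and it will alias.
Data tape(96000);
History pos(0);                   // tape position, in samples
Param rate(1, min=0.25, max=4);   // tape speed (1 = normal)
Param dist(24000);                // fixed head distance, in samples

size = dim(tape);
pos = wrap(pos + rate, 0, size);
// write head: naive truncated write at the current tape position
poke(tape, in1, floor(pos));
// read head: interpolated read, a fixed distance behind
rpos = wrap(pos - dist, 0, size);
ri = floor(rpos);
rf = rpos - ri;
out1 = mix(peek(tape, ri), peek(tape, wrap(ri + 1, 0, size)), rf);
```

With this structure, changing rate repitches everything already on the tape permanently, and only new input is recorded at the new speed, which is exactly the behaviour I'm after. The problem is doing the write side properly.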
I've found an amazing forum post that tackles the exact question I have. I tried to implement everything, but I failed to do so correctly. Maybe somebody with more gen~/Max knowledge would know of a better way to convert this explanation into gen~.
https://forum.mutable-instruments.net/t/simulating-tape-analogue-delay-in-dsp/7789/7
(This is the post, by user TheSlowGrowth)
If somebody could give an insight into where I'm going wrong, I'd be extremely thankful! Especially with the linked forum post being exactly my question, I feel so close and yet so far.
Thanks!
Cornelius
Hey Cornelius,
Did you ever make any progress on this patch? I'm also trying to do the exact same thing, but having no luck.
I read that forum post and got the idea that you'll need to upsample, then perform your delay, then downsample back to your original sample rate.
I'm wondering if this type of thing is even possible in gen~. Does anyone have a good example of an upsampler in gen~?
wrap your gen~-based delay effect in a poly~ patcher.
upsampling x2 is usually enough for delay effects; if you plan to modulate the speed like crazy, you might want to play it safe and go with x4.
I'm wondering if using a codebox inside gen could also be a way to implement an upsampler?
Each tick it would interpolate the last few input samples to produce X new samples and store them in a data object. That data object would have to be Y times the length of your normal circular buffer, where Y is the maximum upsampling factor you want. Still in the codebox, you'd then peek that upsampled data object at the delay offset to get X delayed samples, which you'd downsample to a single output sample (rough sketch at the end of this post).
To get this specific feedback pitching effect, would you need to vary the speed of the upsampling? Are fractional upsampling factors even a thing?
Maybe changing X would also create that effect? hmm...
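Here's a rough codebox sketch of what I mean, hard-coded to X = 2 with plain linear interpolation (so no real bandlimiting, and all the names are made up):

```
// Upsampled circular buffer: each tick, write 2 samples (the
// linear midpoint between the previous and current input, then
// the current input) into a data object twice the length of the
// normal delay buffer. Read 2 consecutive samples at the delay
// offset and average them as a (very crude) downsampler.
Data up(192000);                 // 2x a 96000-sample buffer
History widx(0);
History prev(0);                 // previous input sample
Param delaySamps(24000, min=1, max=95000);

size = dim(up);
// upsample by 2 with linear interpolation, write both samples
poke(up, mix(prev, in1, 0.5), widx);
poke(up, in1, wrap(widx + 1, 0, size));
// read 2 samples at the (upsampled) delay offset...
rpos = wrap(widx - floor(delaySamps) * 2, 0, size);
a = peek(up, rpos);
b = peek(up, wrap(rpos + 1, 0, size));
// ...and average them down to one output sample (crude filter)
out1 = (a + b) * 0.5;
widx = wrap(widx + 2, 0, size);
prev = in1;
```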
for things like tapping a buffer you could use 2 parallel streams, but that will also cause 1 or more samples of latency. plus you cannot make a proper, steep bandlimiting filter like that.
it is surely easier to wrap the gen~ up in a poly~.
Yes -- it is possible to do upsampling inside of gen~, but it does require quite a lot of extra work and thought compared to sticking things in a poly~!
For general-purpose cases of e.g. 2x or 4x upsampling/downsampling, this can be done by patching (maintaining multiple signal paths for each polyphase tap and carefully managing the state across them) or in a codebox (there's a way to do it in codebox using for loops that can be neat, but also tricky). Either way, you also have to make your own bandlimiting filters for the up- and downsampling.
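As a rough shape for the codebox route, something like the following, with trivial one-pole lowpasses standing in for real halfband/polyphase filters and a pass-through placeholder where the delay itself would run at 2x the host rate:

```
// Skeleton of 2x oversampled processing in a codebox.
// The one-pole lowpasses are placeholders only: for real use you
// would design proper halfband (polyphase) filters here.
History upState(0);                 // anti-imaging filter state
History downState(0);               // anti-aliasing filter state
Param coeff(0.5, min=0, max=0.99);  // one-pole coefficient

for (k = 0; k < 2; k += 1) {
    // zero-stuff: input on the first subsample, 0 on the second,
    // scaled by 2 to preserve gain
    x = (k == 0) ? in1 * 2 : 0;
    // crude anti-imaging lowpass
    upState = mix(x, upState, coeff);
    // ... the actual delay effect would process upState here,
    // running at twice the host sample rate ...
    y = upState;
    // crude anti-aliasing lowpass before decimation
    downState = mix(y, downState, coeff);
}
// decimate: keep one of the two filtered subsamples
out1 = downState;
```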
However, this doesn't necessarily help for a varispeed write delay with non-integer resampling ratios, which is a much fiddlier case. There are a few different ways to approach the problem. Probably the easiest is to always write at the current sample rate and use a variable-speed playback rate, but then you run into the issues you mention in the first post.
Another way is to use a variable-speed buffer approach, rather like a BBD. That can be done by making potentially multiple writes per sample (and again, it needs good antialiasing filtering/interpolation of an input history for the sample rate conversion here); and potentially multiple reads per sample (again with appropriate filtering/interpolation -- much like a wavetable oscillator) -- and yes this can be done with a codebox. Fiddly to get right but it can sound quite lovely when it does.
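In sketch form, that write/read structure looks something like this (linear interpolation only, so nowhere near adequate antialiasing, but it shows the mechanics):

```
// BBD-style variable-speed buffer: the write head lays input onto
// the 'tape' at a variable rate (possibly several writes per
// sample), and the read head trails it at a fixed distance.
Data tape(96000);
History wpos(0);                  // fractional write position
History prev(0);                  // previous input, for interpolation
Param rate(1, min=0.25, max=4);   // tape speed (1 = normal)
Param dist(24000);                // head distance, in tape samples

size = dim(tape);
newpos = wpos + rate;
// write a tape sample at every integer position crossed this tick,
// interpolating between the previous and current input
i = ceil(wpos);
while (i < newpos) {
    frac = (i - wpos) / rate;     // position within this input step
    poke(tape, mix(prev, in1, frac), wrap(i, 0, size));
    i += 1;
}
// interpolated read, a fixed distance behind the write head
rpos = wrap(newpos - dist, 0, size);
ri = floor(rpos);
rf = rpos - ri;
out1 = mix(peek(tape, ri), peek(tape, wrap(ri + 1, 0, size)), rf);
wpos = wrap(newpos, 0, size);
prev = in1;
```

At rates above 1 the read side especially wants better interpolation than this -- that's where the wavetable oscillator comparison comes in.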
Another way is to bounce signals between buffers, such that writing is still locked to the current sample rate but reading is managed carefully to maintain the new loop size. This works great, is a bit cheaper, and has the right pitch effects you would expect from an analog tape delay, but handling continuously variable delay times is a lot more difficult.
It's a very big topic!
Thanks for the ideas, Roman and Graham! I think a poly~ would be the easiest to implement, but since I'm porting this to a Daisy using oopsy~, I'll have to confine myself to gen~.
I think I'll try out the BBD-style variable-speed buffer, and I'll update this thread with any progress I make.