I'm working with a bank of four phase vocoders at the moment, and was interested to read Dudas and Lippe's articles on the subject (PDFs can be found here: http://www.richarddudas.com/publications.html).
The techniques described there interest me because I would like my phase vocoders to access one central buffer~, rather than maintaining a separate buffer for each pfft~ into which I must record spectral data (as shown in MSP Tutorial 26). For background: I'm working with one large (15-minute) buffer into which continuously live-sampled material is recorded and time-stamped according to phrase boundaries. These phrases are then recalled in the course of a live performance and transformed by various means, including my phase vocoders.
My current set-up for phase vocoder playback references a recorded phrase (a position in the main buffer) and then records that phrase into one of four buffers inside a pfft~. This works well; however, in a live setting there is clearly a delay while the phrase is being re-recorded. The articles above instead use index~ to read from the parent buffer before performing the FFT with the standard fft~ object inside the pfft~. To allow for transposition, though, they then use play~, which as many of us know produces some terrible aliasing when working with larger buffers. I have integrated the ideas from these articles into my patch, and although phrase playback is now immediate, as I suspected the sound quality diminishes markedly once play~ is introduced for this purpose.
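If I understand the cause correctly (this is my assumption, not something the articles state), the problem is that play~ addresses the buffer with a 32-bit float position signal, and a 15-minute buffer at 44.1 kHz has far more samples than float32 can index precisely. A quick Python sketch of just the float32 arithmetic, nothing Max-specific:

```python
import numpy as np

# Why I suspect play~ degrades with a 15-minute buffer: the playback
# position is a 32-bit float signal, and at these sample counts float32
# can no longer represent fractional (or even every integer) index.
sr = 44100
samples = 15 * 60 * sr           # ~39.7 million samples
print(samples > 2**24)           # True: beyond float32's exact-integer range

pos = np.float32(samples - 1)    # a position near the end of the buffer
print(float(pos) - (samples - 1))  # rounding error, in samples: 1.0
print(np.spacing(pos))           # gap between representable positions: 4.0
```

With positions quantized to 4-sample steps near the end of the buffer, sub-sample interpolation is impossible, which would explain the distortion I'm hearing.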
My question is - how can I avoid this kind of aliasing when designing a phase vocoder in the manner described above? I am currently using gizmo~ in my original patch, but since the phase vocoder described by Dudas and Lippe requires a full-spectrum FFT, gizmo~ doesn't seem to work - I guess because it is designed for the half-spectrum frames that pfft~ normally provides, rather than the full-spectrum output of fft~.
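To make concrete what I mean by transposing inside the spectral frame (rather than resampling with play~ before the FFT), here is a toy single-frame sketch. It only remaps magnitude bins and ignores the phase propagation a real phase vocoder must handle, so it's an illustration of the idea, not the Dudas/Lippe method:

```python
import numpy as np

# Toy spectral transposition: shift energy from bin k to bin round(k * ratio).
# A real phase vocoder must also rotate phases frame-to-frame; omitted here.
n = 1024
sr = 44100
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 430.66 * t) * np.hanning(n)  # 430.66 Hz ~ bin 10

spec = np.fft.rfft(frame)
ratio = 1.5                       # transpose up a fifth
shifted = np.zeros_like(spec)
for k in range(len(spec)):
    j = int(round(k * ratio))
    if j < len(shifted):
        shifted[j] += spec[k]     # bins above Nyquist are simply dropped

out = np.fft.irfft(shifted)
# the spectral peak moves from bin 10 to bin 15 (430.66 Hz -> ~646 Hz)
print(np.abs(spec).argmax(), np.abs(shifted).argmax())
```

Nothing in this sketch needs a variable-rate read of the big buffer, which is why I'd hoped something gizmo~-like could replace play~ here.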
Does anyone have any ideas that could help here?
Thanks in advance,