OT: "Convolution" Clarification
This is not really a Max question, but I wonder if someone can clarify something for me.
It seems there are different methods of convolving audio. In Max, for instance, we can use the FFT to convolve two signals. But this is an ongoing process, in the sense that each frame of one signal is convolved with the current frame of the other. It continues as long as you like.
In a convolution reverb, if I understand correctly, each sample of a signal is convolved with the entire "impulse response" of the reverb sample.
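To make the distinction concrete, here's a rough numpy sketch of the two processes as I understand them (illustrative only, not Max code):

```python
import numpy as np

fft_size = 1024

# 1) Ongoing frame-by-frame convolution: multiply the current spectra
#    of two live signals, frame after frame, for as long as you like.
def convolve_frames(frame_a, frame_b):
    # frame_a, frame_b: the current fft_size-sample frames of each signal
    spectrum = np.fft.rfft(frame_a) * np.fft.rfft(frame_b)
    return np.fft.irfft(spectrum, n=fft_size)

# 2) Reverb-style convolution: every sample of the input is mapped
#    onto the whole (fixed) impulse response.
def convolve_with_ir(signal, impulse_response):
    return np.convolve(signal, impulse_response)
```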
How does one refer to the first as opposed to the second process? Are there names for them?
Thanks in advance for any help,
cheers, Dan.
"Moving" convolution for the first, and "Real" convolution for the second.
That was fast! Thank you Peter.
cheers, Dan
take a look at alex harkers convolution externals (especially his no-latency convolution reverb), he has both time-domain and frequency-domain convolution objects: http://www.alexanderjharker.co.uk/Software.html
Interesting. Thanks for the link Timo.
Alex refers to "Partitioned" and "Non-partitioned" convolution. I assume Partitioned is "moving", and non-partitioned is "real"?
Or is Alex referring to something else?
Well, it doesn't matter that much; I think time-domain vs. frequency-domain matters more.
And if you choose your FFT frame to be at least as big as your impulse response, this is non-partitioned convolution.
You can implement a real-time convolution reverb in either the time domain or the frequency domain; the latter is computationally more efficient (if done well), but it introduces latency. Alex's (partitioned) trick of combining the two lets him do a no-latency convolution reverb while still taking advantage of the computational benefit of frequency-domain processing.
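To make that concrete, here's a rough numpy sketch of block-based frequency-domain (overlap-add) convolution; this is my own minimal illustration, not Alex's code. Notice that no output can be produced until a whole block has been collected, which is exactly the latency that Alex's partitioned scheme removes:

```python
import numpy as np

def fast_convolve(signal, ir, block_size=1024):
    """Overlap-add fast convolution: process the input in blocks,
    doing the convolution with the IR in the frequency domain.
    Zero-padding to at least block_size + len(ir) - 1 avoids
    time-aliasing."""
    fft_size = 1 << int(np.ceil(np.log2(block_size + len(ir) - 1)))
    ir_spectrum = np.fft.rfft(ir, n=fft_size)   # computed once, up front
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block_size):
        block = signal[start:start + block_size]
        spectrum = np.fft.rfft(block, n=fft_size) * ir_spectrum
        segment = np.fft.irfft(spectrum, n=fft_size)
        end = min(start + fft_size, len(out))
        out[start:end] += segment[:end - start]
        # nothing can be output until a whole block has been gathered:
        # that block is the latency of the process
    return out
```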
Hello all,
There are a couple of distinct issues here with the frame-by-frame approach:
1 - With the frame-against-frame approach the impulse is essentially changing every frame (the distinction between impulse and signal is in fact a false one, as the two can be swapped, but often one is constant and the other changes, as with reverb).
2 - If you don't zero-pad at least one of the inputs then the results of the convolution can wrap around the frame in time (time-aliasing); thus this is known as circular or periodic convolution. Zero-padding in Max is a little tricky to achieve, as there is no nice easy way to do it.
With reverb-style convolution you are properly modelling a linear time-invariant system, by mapping the full, invariant impulse onto every input sample.
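As a minimal numpy sketch of that mapping (mine, and not how you'd write an efficient implementation):

```python
import numpy as np

def direct_convolve(signal, ir):
    """Time-domain convolution: each input sample places a scaled copy
    of the entire impulse response into the output, which is exactly
    the behaviour of a linear time-invariant system."""
    out = np.zeros(len(signal) + len(ir) - 1)
    for n, x in enumerate(signal):
        out[n:n + len(ir)] += x * ir
    return out
```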
AFAIK the terms above (moving/real) are not universally understood. The terms circular/periodic convolution, however, are. That said, Peter's terms are perhaps a useful way of thinking about it.
Partitioning is to do with the implementation of fast (efficient) convolution with an FFT, which, as Timo says, incurs latency. Time-domain convolution does not incur latency.
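For illustration, uniformly partitioned fast convolution looks roughly like the numpy sketch below (simplified: a real implementation caches the input spectra, and a no-latency reverb additionally runs the first partition as time-domain convolution):

```python
import numpy as np

def partitioned_convolve(signal, ir, part_size=256):
    """Uniformly partitioned fast convolution (simplified sketch).
    The IR is split into equal partitions; each input block is
    convolved with every partition in the frequency domain and
    overlap-added at the appropriate delay. Latency is one block
    (part_size samples) rather than a whole IR-length FFT frame."""
    fft_size = 2 * part_size                  # zero-padded: no time-aliasing
    n_parts = -(-len(ir) // part_size)        # ceiling division
    n_blocks = -(-len(signal) // part_size)
    parts = [np.fft.rfft(ir[p * part_size:(p + 1) * part_size], n=fft_size)
             for p in range(n_parts)]         # IR spectra, computed once
    out = np.zeros((n_blocks + n_parts) * part_size)
    for b in range(n_blocks):
        block = signal[b * part_size:(b + 1) * part_size]
        spec = np.fft.rfft(block, n=fft_size)
        for p, part in enumerate(parts):
            seg = np.fft.irfft(spec * part, n=fft_size)
            start = (b + p) * part_size       # this partition's delay
            out[start:start + fft_size] += seg
    return out[:len(signal) + len(ir) - 1]
```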
A.
Thanks Alex. Things getting clearer, even if the terminology isn't. :-)
If we're doing a moving convolution (i.e. both signals are changing constantly), is there still a need for zero-padding? My gut feeling is that if the same window size is used for both signals, padding isn't necessary.
Depends what you mean by necessary. If you have two signals of length M then the result of convolving them will be 2M - 1 samples long. Obviously the FFT frame is not big enough to hold almost double its own length, so the extra M - 1 samples wrap back round in time to the beginning of the frame.
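A quick numpy demonstration of that wrap-around (my sketch, with a toy length of M = 8):

```python
import numpy as np

M = 8
a = np.random.randn(M)
b = np.random.randn(M)

# Linear convolution: 2M - 1 samples long.
linear = np.convolve(a, b)                      # length 15

# Circular convolution: multiply M-point spectra with no zero-padding.
circular = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=M)

# The extra M - 1 samples of the linear result wrap back round
# and add onto the start of the frame:
wrapped = linear[:M].copy()
wrapped[:M - 1] += linear[M:]
print(np.allclose(circular, wrapped))           # True

# Zero-padding both inputs to 2M makes the FFT product linear again:
padded = np.fft.irfft(np.fft.rfft(a, n=2 * M) * np.fft.rfft(b, n=2 * M),
                      n=2 * M)
print(np.allclose(padded[:2 * M - 1], linear))  # True
```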
I've never bothered to code up a convolution with two live inputs and zero-padding to prevent time aliasing to see if it sounds better or not, so I've no idea, but I'd be interested in the results if anyone does.
A.
I've never bothered to code up a convolution with two live inputs and zero-padding to prevent time aliasing to see if it sounds better or not, so I've no idea, but I'd be interested in the results if anyone does.
This is on my todo list. Might be able to get to it in the next couple weeks, but most likely I'll be writing this external in March. I'll post a link when it is ready.
@Roth - cool, but you don't need to write an external to do this if you don't want to. You just need a custom window with the zero-padding built into the window itself, and an appropriate overlap. It wouldn't take too long to code in Max; I've literally just never bothered to take the time to check out how it sounds....
True, but these days I really prefer working with FFTs in C rather than in MSP.
No worries - any particular reason out of interest?
Well, as we've discussed in other threads, I've got a need for speed and like wrapping up complicated processes in C externals for efficiency reasons—although I'll often prototype my non-FFT signal processing ideas in MSP if that is simpler than prototyping in Octave or C.
One reason I prefer working in C rather than MSP for FFT stuff is that, for many types of processes, the sequential signal-based approach of MSP is not ideal for working with frame-based data like DFTs. If I want to design an effect that requires access to the bins in another order, or access to all of the bins before outputting the DC bin to fftout~ or ifft~, then an additional frame of buffering is required, which increases the latency of the process (not ideal for processing live performance on acoustic instruments, when you already have the latency of the sound card, the I/O buffer latency, and the latency of the FFT analysis). When working in C (or Octave) I've got easy access to the DFT as an array and can do whatever I'd like with it.
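A toy example of the kind of thing I mean, in Python rather than C for brevity (the effect itself is made up):

```python
import numpy as np

frame = np.random.randn(1024)      # one analysis frame
spectrum = np.fft.rfft(frame)

# A made-up effect that reverses the bin order: every output bin
# depends on a bin that arrives later, so in a bin-by-bin streaming
# model you'd need a whole extra frame of buffering. With the DFT
# as an array, it's just indexing.
flipped = spectrum[::-1]
out = np.fft.irfft(flipped, n=1024)
```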
The other reason is the fact that FFT size and window size cannot be decoupled with pfft~, which makes things like zero-padding require the use of fft~ instead. I know any sort of funny windowing stuff I'd want to do could be done with fft~ and ifft~, but a couple of years ago, when starting to write my general C DSP library, I wrote some wrapper functions for handling the DFT and iDFT that take care of everything for me (i.e. any combination of FFT size, window size, window function, overlap, and buffering of signal vectors that are a different size than the window size; for now I'm stuck with power-of-two signal vector sizes, which is fine for MSP, but I plan to fix that so I can use the library in any environment I need).
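Sketching the decoupling idea in Python (the names are made up, and this is far simpler than my actual C wrappers):

```python
import numpy as np

def stft_frames(signal, window, fft_size, hop):
    """Decoupling window size from FFT size: each windowed frame is
    zero-padded up to fft_size before the transform, which is the
    thing pfft~ won't let you do directly."""
    win_size = len(window)
    assert fft_size >= win_size
    for start in range(0, len(signal) - win_size + 1, hop):
        frame = signal[start:start + win_size] * window
        yield np.fft.rfft(frame, n=fft_size)   # rfft zero-pads to fft_size

# e.g. a 1024-sample Hann window inside a 4096-point FFT:
for spectrum in stft_frames(np.random.randn(48000), np.hanning(1024),
                            fft_size=4096, hop=256):
    pass  # each spectrum is a plain array of 2049 bins
```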
So with these two things combined, it is much easier for me to work in C than in MSP for FFT stuff. After explaining some of this to a few friends of mine who are Max users but not C programmers, I got inspired to write some externals to help users work with DFT bins out of order in MSP. I've got the functionality all mapped out in my head, but I'm stuck on how to build secondary DSP chains (the few experiments I tried in December did not seem to work, and I've been meaning to ask about this on the Dev forum once I had time to properly formulate my questions). I'm assuming starting your own DSP chains is how your dynamicdsp~ stuff works; do you have any tips you could share, on or off list?
Hi Roth,
Wow - a lot of detail. Yes - it all makes sense. You can actually zero-pad with pfft~, but it gets complicated, especially due to a few under-documented or not well known aspects of the implementation.
I can maybe help you with the DSP chains offlist if you drop me an email. It's not super hard. Interestingly, I was considering something similar for frame processing a few years back, but I never finished it due to lack of time and the complexity of my ideas. In my case I was actually building my own thing like a dspchain, but not quite, for various long and boring reasons I won't go into here.
I'm interested in what your ideas are, though, as I have considered re-using DSP chains in a different way for FFT stuff, but in my head it always gets waaaay too hacky, unless you just want to operate over frames within an object, in which case conceptually you might as well make a standard object for use in pfft~ (if it weren't so slow).
Anyway - drop me an email and we can continue this offlist if you like.
Roth,
Have you managed to complete this external yet? I'd be interested in using a partitioned convolution....