FFT is one of the areas of Max I'm not particularly confident with beyond the basics.
I’m making something for a friend, which uses this idea to create a responsive sort of drone within a space.
I have an idea for some live processing involving a master/meta sound (a pre-recorded, already-defined sound file) and a microphone input recording the external environment.
The incoming sound is analysed with an FFT, and the partials present in it are subtracted from the pre-recorded sound. In theory this creates a constant, fixed sound: a synthesis between the live environment and Max/MSP.
To clarify: the full set of pitches/frequency bands is always sounding, but each band comes from one source only; the computer fills in whatever the environment isn't currently supplying.
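In Max this would most naturally live inside a pfft~ subpatch, but the per-frame maths can be sketched offline. Below is a minimal NumPy illustration of the idea, under my own assumptions: a fixed magnitude threshold decides which bins the environment already "covers", and those bins are muted in the master sound so that live + output together span the full spectrum. The function name `fill_gaps` and the threshold value are hypothetical, just for the sketch.

```python
import numpy as np

def fill_gaps(live_frame, master_frame, threshold=1.0):
    """Per FFT bin, pass the master sound only where the live input
    is quiet, so the combined result covers the full spectrum."""
    n = len(master_frame)
    window = np.hanning(n)
    live_spec = np.fft.rfft(live_frame * window)
    master_spec = np.fft.rfft(master_frame * window)
    # Mute master bins where the environment already supplies energy
    mask = (np.abs(live_spec) < threshold).astype(float)
    return np.fft.irfft(master_spec * mask, n=n)

# Toy example: live input is a 440 Hz tone; master holds 440 + 880 Hz.
# The output should keep mostly the 880 Hz component.
sr, n = 44100, 1024
t = np.arange(n) / sr
live = np.sin(2 * np.pi * 440 * t)
master = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 880 * t)
out = fill_gaps(live, master)
```

A hard binary mask like this will click badly frame to frame; in practice you'd smooth the mask over time (and per bin), which also helps with the feedback problem mentioned below.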
Having just typed this, I've realised an obvious problem I totally overlooked: feedback, or something similar, since the patch's own output will be picked up by the microphone and analysed as if it were the environment. I'd imagine there might be a way around this depending on the frequency bands used, so I'd still be interested if anyone has ideas related to the initial question.