(Max 7) formant scaling...
When transposing audio during playback from a buffer~ or a sound file, pitch shifting and formant scaling can be coordinated to preserve the formants. But there does not seem to be any formant-scaling functionality for live audio input, even though a pitchshift~ object is available. So, any ideas on how formant scaling could be applied to the audio coming from adc~?
as a little workaround you could just combine pitch shifting with formant preservation with a second pitch-shifting method which does _not_ preserve formants: run both with reciprocal ratios and the pitch cancels out while the formants move.
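to spell out the bookkeeping behind that trick (just ratio arithmetic, not Max code, and the two-shifter model is my own simplification): a plain shifter scales pitch and formants together, while a formant-preserving shifter scales only the pitch, so chaining them with reciprocal ratios leaves the net pitch at 1.0 and scales just the formants.

```python
# plain shifter (ratio r_plain): pitch *= r_plain, formants *= r_plain
# formant-preserving shifter (ratio r_fp): pitch *= r_fp, formants unchanged
# for a net formant scaling of f with pitch unchanged: r_plain = f, r_fp = 1/f
f = 2 ** (4 / 12)                  # raise formants by 4 semitones (~1.26x)
r_plain, r_fp = f, 1 / f
print("net pitch  :", r_plain * r_fp)  # 1.0 -> pitch unchanged
print("net formant:", r_plain)         # ~1.26 -> formants shifted up
```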
otherwise you would have to build it yourself; i don't think there is a third-party external for proper formant shifting. my favorite tool would be Melodyne, but you will also find some okay VST plug-ins such as the one from Akai; for Windows there are even free ones.
I wonder if a trick involving continuous recording into a very short buffer~ and playing back from there with the formant feature would work...
I mean, the objects that work with formants must be using a certain look-ahead time, hence the limitation to pre-stored audio. The question is just how many milliseconds, and whether the latency would be tolerable if designed as I described...
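just to make that latency question concrete, here is a little Python model of the scheme (my own sketch, not Max code): a write head records continuously into a short circular buffer and a play head trails it by a fixed number of samples, so the whole trick costs exactly lookahead/sr seconds of delay.

```python
import numpy as np

def short_buffer_loop(x, lookahead):
    # write head records into a short circular buffer; the play head
    # trails it by `lookahead` samples, so the output is simply the
    # input delayed by that amount
    buf = np.zeros(2 * lookahead)
    out = np.zeros_like(x)
    for n, s in enumerate(x):
        buf[n % len(buf)] = s                      # record head
        out[n] = buf[(n - lookahead) % len(buf)]   # play head
    return out

sr = 44100
lookahead = int(0.030 * sr)   # a guessed 30 ms look-ahead window
print(lookahead, "samples =", 1000 * lookahead / sr, "ms of latency")
```

whether 30 ms is what the formant objects actually need is an open question; the point is only that the latency of the scheme equals the look-ahead, nothing more.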
funnily enough, the name of one of the standard ways to analyze formants, linear predictive coding (LPC), does not suggest that looking forward would be needed.
then again, you can always implement things with a certain amount of latency if you need them to work in realtime. for dynamics effects or FFT processing there is no other choice anyway.
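for a feel for the numbers involved (my own back-of-the-envelope figures, assuming the minimum latency of block-based FFT processing is one analysis frame):

```python
sr = 44100
for frame in (512, 1024, 2048, 4096):
    print(f"{frame:5d} samples -> {1000 * frame / sr:5.1f} ms minimum latency")
```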
the buffer you are looking for is tapin~/tapout~. but processing in parallel and using a simple delay~ MSP object should be enough, too.
in the case of LPC a buffer will not be of much help, because the data LPC generates will not be an audio signal at all. its basic model is optimized for speech resynthesis using a pulse train (see ->train~ msp, ircam ->resonators~ msp, ->chant), and you would probably have to write your own external to replay a form of LPC. there should be some C++ stuff in ->sndtools for LPC, but i can't help with that at all.
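for anyone curious what that analysis/resynthesis split looks like, here is a minimal numpy/scipy sketch of the classic autocorrelation-method LPC (Levinson-Durbin recursion) followed by pulse-train resynthesis through the all-pole filter. the frame, filter order, and pulse period are arbitrary example values of my own; a real implementation would run this per frame over overlapping windows.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(frame, order):
    # autocorrelation method + Levinson-Durbin recursion
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= 1.0 - k * k                  # residual prediction error
    return a, err

sr = 44100
t = np.arange(1024) / sr
# toy "voiced" frame: two resonances under a Hann window
frame = np.hanning(1024) * (np.sin(2 * np.pi * 700 * t)
                            + 0.5 * np.sin(2 * np.pi * 1200 * t))
a, err = lpc(frame, order=12)

# resynthesis: drive the all-pole filter 1/A(z) with a pulse train
period = sr // 150                      # ~150 Hz excitation
excitation = np.zeros(len(frame))
excitation[::period] = np.sqrt(err)     # rough gain match
synth = lfilter([1.0], a, excitation)   # buzzy re-synthesis of the envelope
```

the coefficients `a` are exactly the kind of non-audio data meant above: to get sound back out you always need an excitation model on the resynthesis side.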