I’ve been trying to make a patch which quantizes an incoming audio stream in real time. The approach I’ve been trying is to start recording to a buffer when the mic level goes above a threshold, but not play that recorded sound back until the start of the next beat/half-beat/etc. The resulting gap could then be filled by stretching or reversing the original input stream. Is this a possibility, and do you know of any other attempts at this? I know it would never sound perfect, but it would be totally awesome to walk around town with it enhancing the natural rhythms of the city.
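For what it’s worth, the timing part of the idea can be sketched outside of Pd. This is a minimal, hypothetical illustration (the names, tempo, and subdivision are mine, not from any existing patch): given the sample position where the level crossed the threshold, compute the next beat-grid boundary and the gap that would need to be filled before playback starts.

```python
# Hypothetical sketch of the quantize-to-next-beat idea: an onset is
# recorded immediately, but playback is deferred to the next grid
# boundary (beat, half-beat, ...). Constants are illustrative.

SR = 44100      # sample rate in Hz
BPM = 120       # tempo
SUBDIV = 2      # 2 = quantize to half-beats

def samples_per_grid(sr=SR, bpm=BPM, subdiv=SUBDIV):
    # length of one grid step (e.g. a half-beat) in samples
    return int(sr * 60.0 / bpm / subdiv)

def next_grid_sample(onset_sample, grid):
    # first grid boundary at or after the onset (ceiling division)
    return ((onset_sample + grid - 1) // grid) * grid

def quantize_onset(onset_sample, sr=SR, bpm=BPM, subdiv=SUBDIV):
    # return (playback_start, gap): where playback begins, and how many
    # samples of "filler" (stretched/reversed input) are needed first
    grid = samples_per_grid(sr, bpm, subdiv)
    start = next_grid_sample(onset_sample, grid)
    return start, start - onset_sample
```

At 120 BPM a half-beat is 11025 samples at 44.1 kHz, so an onset at sample 5000 would be deferred to sample 11025, leaving a 6025-sample gap to fill. In Pd itself the same logic would live around [threshold~], [timer], and a [delay] driven by a beat clock.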
> I’ve been trying to make a patch which quantizes in real-time an
> incoming audio stream.
> it would be totally awesome to walk around town with it enhancing
> the natural rhythms of the city.
Check out Roman Haefeli’s WorldQuantizer; it does exactly what you
describe and runs on an iPhone. It records rhythmic events via onset
detection and "randomly" sequences them (rather than using a loop-based
approach with time stretching). There are some videos on YouTube, and
you can download the Pure Data source code from the rjdj wiki.
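To make the contrast concrete, here is a rough sketch of that "record onsets, then sequence them randomly" approach. This is purely illustrative pseudologic, not the actual WorldQuantizer code; the threshold-crossing onset detector and the step sequencer are my own simplifications.

```python
# Illustrative sketch: detect onset events in a stream of level
# measurements, then fill a fixed-length step sequence by drawing
# recorded events at random (instead of looping them in order).
import random

def detect_onsets(levels, threshold=0.5):
    # indices where the level crosses the threshold upward
    onsets = []
    for i in range(1, len(levels)):
        if levels[i] >= threshold and levels[i - 1] < threshold:
            onsets.append(i)
    return onsets

def sequence_events(events, steps, rng=random):
    # one randomly chosen recorded event per sequencer step
    return [rng.choice(events) for _ in range(steps)]
```

The key design difference from the loop-based idea above: nothing is time-stretched to fill gaps; each sequencer step just retriggers one of the recorded events.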