Newbie: Extreme real-time timestretch patch possible?
Well... just a little project I wanted to take on, but I'm feeling a bit lost in terms of direction, so could anyone clarify whether this is actually possible with Max/MSP, and suggest possible routes to actually making it?
A patch which time-stretches audio in real time, with the audio coming IN to Max live through a standard analogue input (imagine, for example, an iPod plugged into an input of your audio interface).
I would like the time-stretching to be as smooth and transparent as possible; I was thinking granular sampling/time-stretching? Something along the lines of 9 Beet Stretch or Paulstretch's time-stretch application. Can that even be implemented in a Max patch?
Help....?
Thanks!
Amour
real-time time-stretch very possible in max. not very easy.
see this example for starters on the granular approach:
Max5/examples/sampling/granular
it's not exactly what you want to do, but it will teach you the basics you need to know and implement. granular is not always smooth and transparent compared to other forms (or mixes of forms) of stretching... this is because of the amplitude windowing (even with overlap). many of these things get easier for advanced people mainly because the patches become so extensive that it's easier to code a max external, so if you're not quite that advanced yet, you may want to simplify and go for any kind of granular time-stretch to start off, even if it's not the most smooth and transparent.
as for it being real-time, there will always be a slight delay between recording into a buffer and playing it back, in any environment. but to my ears, when you set the DSP settings properly for tight scheduler response, this delay can be pretty negligible. in time-stretch applications, though, a delay is inevitable by the mere fact that granular time-stretch implements delay by nature.
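not max code, but if it helps to see the granular idea outside the patcher: below is a rough python/numpy sketch (my own illustration, not any existing patch or external) of windowed grains being overlap-added while the read position creeps forward more slowly than normal playback. grain size, overlap and stretch factor are made-up numbers; in a real-time patch the same logic runs against a circular record~/buffer~ pair, which is exactly why the output has to lag behind the input.

import numpy as np

def granular_stretch(x, sr=44100, stretch=4.0, grain_ms=80, overlap=4):
    # offline illustration of granular time-stretch by overlap-add.
    # in a real-time patch the same logic reads from a circular buffer,
    # which is why the stretched output necessarily trails the input.
    grain = int(sr * grain_ms / 1000)        # grain length in samples
    hop_out = grain // overlap               # output hop: rate at which grains are written
    hop_in = hop_out / stretch               # input hop: how slowly the read head advances
    window = np.hanning(grain)               # amplitude window hides the grain edges
    n_grains = int((len(x) - grain) / hop_in)
    out = np.zeros(n_grains * hop_out + grain)
    for i in range(n_grains):
        start = int(i * hop_in)              # read pointer (slow)
        pos = i * hop_out                    # write pointer (normal rate)
        out[pos:pos + grain] += window * x[start:start + grain]
    return out / (overlap / 2)               # rough gain compensation for the overlap

# quick test on a chirp: the output is roughly 4x longer than the input
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
sig = np.sin(2 * np.pi * (220 + 100 * t) * t)
print(len(sig) / sr, "s in ->", len(granular_stretch(sig, sr)) / sr, "s out")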
If the example doesn't suit you, you might also look into third-party externals like the Granular Toolkit, etc.:
http://www.lowkeydigitalstudio.com/2007/03/granular-toolkit-v1-49/
for ultimate smoothness (if you're more interested in getting the sound than in learning to patch), elastic~ gives a good smooth sound, although it is CPU-expensive:
http://www.elasticmax.co.uk/
hope it all helps.
________________________________
*Never fear, Noob4Life was never here!*
I would suggest you start with this tutorial:
then go to the example suggested by Noob4Life.
After this you will be ready for:
http://www.microsound.info/?p=22 (Nobuyasu Sakonda – MSP Granular Synthesis patch v2.5 )
The patch from Sakonda seems the easiest at first glance, but it is actually the hardest to grasp if you're new to Max and especially MSP. That said, it produces amazing results with very low CPU cost.
Next step would be phase vocoder, ...
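If a text sketch helps once you get to that step: the phase-vocoder idea is to take overlapping FFT frames from the buffer, advance the analysis hop more slowly than the resynthesis hop, and keep things coherent by accumulating the per-bin phase increments. In Max this is pfft~ territory; the numpy sketch below is just the textbook algorithm with arbitrary parameters, not anyone's patch.

import numpy as np

def pvoc_stretch(x, stretch=4.0, n_fft=2048, hop=512):
    # textbook phase-vocoder time-stretch: analysis hop = synthesis hop / stretch
    window = np.hanning(n_fft)
    hop_in = hop / stretch
    n_frames = int((len(x) - n_fft) / hop_in)
    out = np.zeros(n_frames * hop + n_fft)
    bins = np.arange(n_fft // 2 + 1)
    nominal = 2 * np.pi * bins * hop_in / n_fft   # expected phase advance per analysis hop
    phase_acc = np.zeros(len(bins))               # running output phase per bin
    prev_phase = np.zeros(len(bins))
    for i in range(n_frames):
        start = int(i * hop_in)
        spec = np.fft.rfft(window * x[start:start + n_fft])
        mag, phase = np.abs(spec), np.angle(spec)
        # deviation of the measured phase advance from the bin's nominal advance
        delta = phase - prev_phase - nominal
        delta = (delta + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi]
        phase_acc += (nominal + delta) * stretch          # rescale the advance to the output hop
        prev_phase = phase
        frame = np.fft.irfft(mag * np.exp(1j * phase_acc))
        out[i * hop:i * hop + n_fft] += window * frame    # overlap-add (gain not normalised)
    return out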
there was a "freeze spectrum" patch I played with a while back, awesome:
incredible sound possibilities there... still trying to understand how it all works
:) Will check them all out and post my opinion. Thanks!
Jean-Francois Charles' patches are really great,
I suggest reading the accompanying CMJ article as well, very informative,
OK, after looking further into them, I've got a few questions:
1. The phase vocoder works on an existing sound and accesses it through a buffer. I might have confused myself, but doesn't this only apply when the sound is, for example, a pre-recorded .wav file on your hard drive and not coming in in real time?
2. I'm not quite sure how to implement the Jean-Francois Charles files in a patch in Max? The YouTube example he has with the oud sounds excellent!
Thanks
ah_london, your question number 1 brings up a basic conceptual issue about how you plan to implement a real-time time-stretch. You can only really apply the process to a relatively small fragment, after which the sound source is now "somewhere else", so to speak.
Perhaps it's more realistic to apply a spectral freeze, so that at one point you are "live" and then, with a quick crossfade, the sound is frozen? You can actually do this quite a few ways, even using something like sigmund~ or analyzer~, depending on the sounds you want to stretch.
It's not that it isn't possible to grab one or more frames of FFT data "on the fly", but the stretched version will lag progressively behind the source, so that the relationship becomes kind of arbitrary unless you choose strategic points to crossfade between the two. Also, it will probably be the case that the actual stretch "factor" needs to be smoothly approached from unity to make the change between real time and stretch more effective (i.e. go from real time to 50x or whatever on a curve).
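To put a number on "lags progressively behind": at a stretch factor r, every second of live input only consumes 1/r seconds of buffer, so after t seconds the read point trails the input by t*(1 - 1/r); one minute at 50x leaves you about 58.8 seconds behind. And to make the freeze-and-crossfade idea concrete, here's a rough numpy sketch (my own illustration, not the sigmund~/analyzer~ route or any existing patch) of capturing one analysis frame, sustaining its magnitude spectrum, and crossfading from the live signal into the frozen one:

import numpy as np

def lag_seconds(t, stretch):
    # how far the stretched playback trails the live input after t seconds
    return t * (1.0 - 1.0 / stretch)

def spectral_freeze(frame, n_out, n_fft=2048, hop=512):
    # sustain one windowed frame by resynthesising its magnitude spectrum
    # with a fresh random phase every hop (a crude freeze; fancier versions
    # keep and increment the original phases)
    window = np.hanning(n_fft)
    mag = np.abs(np.fft.rfft(window * frame))
    out = np.zeros(n_out + n_fft)
    for pos in range(0, n_out, hop):
        phase = np.random.uniform(-np.pi, np.pi, len(mag))
        out[pos:pos + n_fft] += window * np.fft.irfft(mag * np.exp(1j * phase))
    return out[:n_out]

def freeze_with_crossfade(live, freeze_at, fade_len, n_fft=2048):
    # pass the live signal through until freeze_at, then crossfade into a
    # frozen version of the frame captured at that point
    frozen = spectral_freeze(live[freeze_at:freeze_at + n_fft], len(live) - freeze_at, n_fft)
    fade = np.linspace(1.0, 0.0, fade_len)
    out = live.astype(float).copy()
    out[freeze_at:freeze_at + fade_len] = fade * out[freeze_at:freeze_at + fade_len] + (1 - fade) * frozen[:fade_len]
    out[freeze_at + fade_len:] = frozen[fade_len:]
    return out

print(lag_seconds(60, 50))   # ~58.8 s behind after one minute at 50x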
As far as implementation goes, it might be a bit easier to go the granular route than a moving-FFT method. Apart from the suggestions made earlier: Noob4Life mentioned elastic~, which is a good granulator for real-time signals, and there are the GMU externals (free), one of which is a real-time granulator that is also pretty capable (although I suspect that editing patches with the GMU objects while audio was running caused crashes on my machine... must follow that up!). You could also implement something like this in IRCAM's FTM, but there is a bit of a learning curve there (though I recommend the effort if you have the time/inclination).
There is a thread called "alternatives to elasticx~" which, while mostly concerned with file-based stretching, has quite a few links to the externals I've mentioned and patches that may provide a few clues to get where you want... https://cycling74.com/forums/alternatives-to-elastic
As far as question number 2 goes, you may be able to get the patches from the link seejayjames posted above. I believe JFCharles also has a share page here on C74 where he has a suite of Jitter-based FFT patches available for download.