Time Stretching in Real Time? External Sensor to Start effect?
Hello,
I want to apply time stretching, without modifying the pitch, at different times and in real time on audio input. I want to use an analog signal (via BNC) to start and stop the acoustic modulations. The signal comes from a pressure sensor.
Has anyone already tried, or done, similar applications?
I'm using a Mac with an M-Audio FireWire 1418 box (18 inputs and 1 MIDI port).
My questions are :
1) I want to apply time stretching without modifying the pitch. Does Max have that kind of DSP or VST, in real time (or not in real time)?
2) I found the Real-Time Granular Synthesizer from LowNorth; is it possible to import that kind of plugin into Max/MSP?
3) Is Max/MSP suited to that kind of application? (I was thinking of using Logic Pro, but I'm not sure whether it can do this.)
Thank you for answering my question, and for any advice...
Sincerely
Boris
If you're on a PPC Mac then check out Topher Lafata's
stretch~ object (see www.maxobjects.com). Sadly
there's no UB version yet (Topher?)
There's also a nice granular timestretcher in Marcel
Wierckx's Grain Tools, which are at
http://www.lownorth.nl/
hth,
Roger
> 1) I want to apply time stretching without modifying the pitch. Does Max have
> that kind of DSP or VST, in real time (or not in real time)?
Yes.
> 2) I found the Real-Time Granular Synthesizer from LowNorth; is it possible to
> import that kind of plugin into Max/MSP?
In Max, you would build your own granulation patches, possibly with externals.
www.cycling74.com -> forum -> search "granular"
> 3) Is Max/MSP suited to that kind of application? (I was thinking of using
> Logic Pro, but I'm not sure whether it can do this.)
For real time, you'd be better off with Max.
www.cycling74.com -> forum -> search "time stretching"
JF.
Quote: BorisR. wrote on Thu, 15 February 2007 09:13
----------------------------------------------------
> 1) I want to apply time stretching without modifying the pitch. Does Max have that kind of DSP or VST, in real time (or not in real time)?
Also consider FFT-based pitch shifting (gizmo~).
> 2) I found the Real-Time Granular Synthesizer from LowNorth; is it possible to import that kind of plugin into Max/MSP?
> 3) Is Max/MSP suited to that kind of application? (I was thinking of using Logic Pro, but I'm not sure whether it can do this.)
Ableton Live, on the other hand, does this pretty well. You'll have to make some smart MIDI bindings, but that's about it.
- Mattijs
----------------------------------------------------
Hi Mattijs,
Thanks for the answer. I've tested zplane's élastique Pro; it works well, but not for use in a live environment on audio input. Am I right?
You told me that with Ableton Live I could do it pretty well. I've contacted them, and they say Live can time-stretch prerecorded sound, but not real-time audio input.
I wonder if you have some knowledge about this; perhaps you know something the Ableton tech support is not aware of?
Thanks for the help.
Boris
Quote: BorisR. wrote on Fri, 23 February 2007 13:42
----------------------------------------------------
> I wonder if you have some knowledge about this; perhaps you know something the Ableton tech support is not aware of?
Ah, hehe. I'd say there are two different things:
a) real-time time stretching
A pre-recorded sound is played slower but at the original pitch. The processing to make this happen is done in real time, instead of first having to render something to disk.
b) real-time time stretching of real-time audio input
a) is performed on audio that has only just been sampled.
I was talking about a), while I see that you were talking about b).
So, let's talk about b). As you can imagine, it's the same as a), only working on a buffer that is being written at the same time. Live doesn't do that. It does something that comes pretty close, namely recording audio and then applying the time stretch immediately after recording.
In Max it is possible to time-stretch a buffer that is being written at the same time. But if you want to do it correctly you'll need to do it as well as zplane does, and as far as I know nobody has come up with something that good in Max, although it is definitely possible.
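To illustrate the granulation principle discussed above (reading overlapping grains out of a buffer more slowly than they were written, so duration changes but pitch does not), here is a minimal offline sketch in Python/NumPy. The function name, grain size, and hop are my own choices for illustration; this is nowhere near zplane quality, just the basic overlap-add idea:

```python
import numpy as np

def granular_stretch(x, stretch, grain=1024, hop=256):
    """Naive granular time stretch: overlap-add Hann-windowed grains,
    advancing the input read pointer slower (stretch > 1) than the
    output write pointer, so duration changes but pitch does not."""
    win = np.hanning(grain)
    n_out = int(len(x) * stretch)
    out = np.zeros(n_out + grain)
    read_hop = hop / stretch      # input advances slower when stretching
    w, r = 0, 0.0                 # output write / input read positions
    while w + grain <= n_out and int(r) + grain <= len(x):
        out[w:w + grain] += x[int(r):int(r) + grain] * win
        w += hop
        r += read_hop
    return out[:n_out]

# Demo: a 1 s, 440 Hz tone stretched to twice its length.
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stretched = granular_stretch(tone, 2.0)   # twice as long, still ~440 Hz
```

A real-time version would read from a circular buffer being written at the same time, and a serious one would align grain phases to avoid the characteristic granular "phasiness".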
I'm dying to share some granular time-stretching patches I recently made, but since I know that wouldn't be smart, given the amount of time and money they cost to develop, I won't. I hope to, later.
Kind regards,
Mattijs
Post salutem: I warmly encourage everyone to take up the challenge of making something in Max that sounds as good as the demo apps on zplane's website. It's possible, you know. The only restriction is your own insight :)
BorisReynaud wrote:
> I wonder if you have some knowledge about this; perhaps you know
> something the Ableton tech support is not aware of?
They probably don't know Max... ;-)
The basic problem that would come up is that you need to define a
maximum amount of time you could possibly stretch. As you read out the
buffer slower than you record into it, it will run out at a certain point.
(This is something the support team of a mass-market sequencer would never
want to deal with, so they'd better keep it offline... ;-)
I'd just take a tapin~/tapout~ pair and change the delay time with a line~.
Since this pitches the signal down, place a gizmo~ FFT-based pitch shifter
(or a granular one) behind it to bring it back to the original pitch...
If you need it in Ableton, create a pluggo...
If you are concerned about the quality of the zplane algorithm, measure
how long it takes to do it non-realtime with zplane, and you'll get an
idea of the processing power required for an optimized algorithm
(I have no idea)...
Finally, after correcting your misspelling of the tool, I found their
website, and it seems obvious that the algorithm is completely aimed at
non-realtime processing...
It might rely on information that just doesn't exist (yet) in a
realtime situation; it might be necessary to look into the future somehow...
Stefan
--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com
Quote: Stefan Tiedje wrote on Mon, 26 February 2007 11:08
----------------------------------------------------
>
> If you are concerned about the quality of the zplane algorithm, monitor
> how long it takes to do it non-realtime with zplane, and you'd get an
> idea of the processing power which is required for an optimized
> algorithm (I have no idea)...
>
> Finally, after correcting your misspelling of the tool, I found their
> website, and it seems obvious that the algorithm is completely aimed at
> non-realtime processing...
> It might rely on information that just doesn't exist (yet) in a
> realtime situation; it might be necessary to look into the future somehow...
What do you mean? It -is- real-time. Ableton uses these algorithms.
Mattijs
----------------------------------------------------
>>
>>something that the ableton tech support is not aware of?
>
>They probably don't know Max... ;-)
>
From what I heard (about Ableton), they know Max inside out.
kasper
--
Kasper T. Toeplitz
noise, composition, bass, computer
http://www.sleazeArt.com
Mattijs Kneppers wrote:
> What do you mean? It -is- real-time. Ableton uses these algorithms.
Ah, I was suspecting it wasn't, because the original post somehow
suggested it...
(I don't know Ableton, but I have seen a lot of non-realtime algorithms...)
Stefan
First, I want to thank all of you for your answers.
I also see that to do what I want, as Stefan Tiedje says, "in a realtime situation, it might be necessary to look into the future somehow..." It seems not so easy...
To be clearer about my project, I will explain it in more detail:
Subjects will chew some crispy products, and I want to modify the original mastication sound (mainly cracks and snaps) and replay it to the subject's ear in "real time".
I'm using a sensor to detect the beginning and end of each mastication cycle. The delay between the original sound and the modified one has to be imperceptible to the subject. That's why I need to perform the effects in real time.
One of the effects I want is to increase, or decrease, the number of acoustic events during the mastication process. (For an acoustic wave in the time domain, the signal crosses a preset threshold a certain number of times; to count the acoustic events, you just count the number of peaks above this threshold.)
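The event count described here (peaks above a preset amplitude threshold) can be sketched in a few lines of Python; the function name and the one-count-per-excursion rule are my own assumptions about what is meant:

```python
def count_events(signal, threshold):
    """Count acoustic events as upward crossings of a preset amplitude
    threshold: one count per excursion above it. The signal must fall
    back to or below the threshold before the next event is counted."""
    count, above = 0, False
    for s in signal:
        if not above and s > threshold:
            count += 1          # entered a new excursion above threshold
            above = True
        elif above and s <= threshold:
            above = False       # excursion ended
    return count

# Three separate excursions above 0.4 -> three events.
n = count_events([0.0, 0.5, 0.1, 0.6, 0.7, 0.2, 0.9, 0.0], 0.4)
```

A real mastication signal would first be rectified or enveloped, and a short refractory time would help avoid counting one crack twice.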
I was thinking of using a time-stretching effect: if I stretch the time, I will increase the number of events per second (that's the theory). But as you said, to do it in real time I would need to know the future (not easy). Perhaps there are other methods, like smart algorithms able to stretch a very small buffer. Imagine a 20 ms buffer stretched to 90%, so the stretched sound lasts 18 ms. Is it possible to extrapolate the missing 2 ms from the 18 ms to reach the initial 20 ms again, while keeping good, reliable sound quality? Of course the extrapolated time (2 ms in the example) has to be a function of the stretched 18 ms...
In the end we would have some "kind of" time-stretch effect, but the time is not stretched, only the perceived sound.
Whew, I hope I made myself clear?
Do you have any suggestions for how I can modify the number of acoustic events in real time, without affecting the pitch and while keeping good sound quality?
Do you know if anybody has tried that before? Are there any software or hardware tools for that effect?
Thanks in advance for your help and advice.
Sincerely,
Boris
BorisReynaud wrote:
> I was thinking of using a time-stretching effect. If I stretch the time,
> I will increase the number of events per second (that's the theory).
No, you will stretch the length of the events, but the number of events
will not change.
If you want to control the number of events, you need to work on
concrete events; everything that has been said about time stretching
doesn't apply to what you want to do...
I'd record each event automatically into a buffer~, and then play back
these buffers with some kind of stochastic trigger. That way you can
easily increase the number of events without loss of quality, as they
remain completely intact... If you combine this with an analysis of
the current number of events in the input, you can relate it to your
live signal...
Instead of looking into the future, you repeat the past...
(Like politics... ;-)
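One way to read this suggestion is as a Poisson-style trigger: each captured event buffer is replayed at random times whose density you control. The Python sketch below is purely illustrative (the function name, exponential waiting times, and uniform buffer choice are my assumptions, not the exact proposal); in Max the events would be buffer~s played from a poly~:

```python
import random

def stochastic_retrigger(events, duration_s, rate_hz, seed=0):
    """Return a (time, event) playback schedule: trigger times are drawn
    from exponential waiting times (a Poisson process at `rate_hz`
    triggers per second), each trigger playing one randomly chosen
    recorded event buffer."""
    rng = random.Random(seed)
    t, schedule = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)   # waiting time to next trigger
        if t >= duration_s:
            break
        schedule.append((t, rng.choice(events)))
    return schedule

# Two captured chewing-sound buffers, replayed for 2 s at ~5 events/s.
schedule = stochastic_retrigger(["snap_1", "crack_1"], 2.0, 5.0)
```

Setting `rate_hz` above the measured event rate of the live input increases perceived event density; setting it below thins the events out, which is exactly the control the project needs.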
Stefan
Stefan Tiedje wrote:
> I'd record each event automatically into a buffer~, and then play back these buffers with some kind of stochastic trigger.
Thanks for the tip, but I'm not sure I understand it well.
What do you mean by a stochastic trigger?
If I understand correctly, you propose adding random events (following a stochastic mathematical rule) to a buffer and then playing back that buffer live? Are you sure I will not create artifacts? How can I link all the buffers together without affecting the sound?
Are there tools in Max/MSP for that kind of stochastic effect?
Have you tried that before? Or do you know anybody who has?
By the way, you're right about politics ; )
Sincerely,
Boris
BorisReynaud wrote:
> Thanks for the tip, but I'm not sure I understand it well. What do
> you mean by a stochastic trigger? If I understand correctly, you
> propose adding random events (following a stochastic mathematical rule)
> to a buffer and then playing back that buffer live?
Yes. The rules for how to play them should be related to the real events
somehow. It does not need to be stochastic; it just seems the easiest
approach...
> Are you sure I will not create artifacts? How can I link all the
> buffers together without affecting the sound?
You just play one buffer~ with each trigger. It will not create
artifacts; it will play the event exactly as it was recorded, and if you
make it dense, the events just overlap...
> Are there tools in Max/MSP for that kind of stochastic
> effect?
There is a whole palette of externals (my externals folder lists 343
that come with the standard distribution... ;-)
I'd start with [random], create some buffers, use a record~ with some
kind of trigger detection to find the events, play back within a poly~, etc...
> Have you tried that before? Or do you know anybody who has?
If you are willing to dive into that, it's worth the effort. It's not an
easy task; I would not give it as an arbitrary assignment to a student.
But if your interest is high enough, and you have done all the tutorials
and read all the docs, I'd say it is manageable even for a beginner. You
will, however, need a good amount of time to do it the way you envision it.
From your description, I'd say you may be the first to think this
way (of course I could be wrong... ;-).
I have had projects involving stochastic granular synthesis, which would
need some of what you need; other projects had the analysis part. But
even for me it would not just be a snap; it would require some
experimentation...
Good luck, it sounds very interesting. I am looking forward to the
progress of your project...
Kasper T Toeplitz wrote:
>>>
>>> something that the ableton tech support is not aware of?
>>
>> They probably don't know Max... ;-)
>>
>
> From what I heard (about Ableton), they know Max inside out.
>
> kasper
I know for a fact that you're right.
Andreas.