Doppler effect, some leads needed
Jun 6, 2012 at 10:58pm
I’d need some leads to work on the Doppler effect in myUniverse (= a 3D space with a cam and moving objects emitting sound).
My cam is where the ears are.
My objects emit sounds.
0/ should the Doppler effect be applied to my master output?
1/ what would be required to calculate my Doppler shift?
ANY ideas/leads would be appreciated.
Jun 6, 2012 at 11:33pm
my approach would be to calculate the distance from your mic(s) to each object at every update (presumably every audio vector) and then delay the audio emitted from each object by the time equivalent to that distance (i.e. delay time = distance / speed of sound). Each sound source would have its own delay.
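The recipe above can be sketched numerically. A minimal Java sketch (hypothetical names; I'm assuming 343 m/s for the speed of sound and a 44.1 kHz sample rate, neither of which is stated in the thread):

```java
// Sketch of the per-source propagation delay described above.
// As a source's distance changes between updates, the changing
// delay is what produces the Doppler shift.
public class DopplerDelay {
    static final double SPEED_OF_SOUND = 343.0; // m/s, dry air at ~20 °C
    static final double SAMPLE_RATE = 44100.0;  // Hz (assumed)

    // Seconds of delay for a source at the given distance in metres.
    static double delaySeconds(double distanceMetres) {
        return distanceMetres / SPEED_OF_SOUND;
    }

    // The same delay in samples, ready to feed a delay line.
    static double delaySamples(double distanceMetres) {
        return delaySeconds(distanceMetres) * SAMPLE_RATE;
    }

    public static void main(String[] args) {
        // A source 343 m away arrives exactly one second late.
        System.out.println(delaySeconds(343.0)); // 1.0
        System.out.println(delaySamples(343.0)); // 44100.0
    }
}
```

Each sound source would keep its own delay line and update its read position from `delaySamples` every audio vector.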
Jun 7, 2012 at 11:42am
Hi terry and thanks a lot for your answer.
I’ll use SuperCollider as the sound generator, handling synths and voices inside it.
Indeed, it is totally true that changing the pitch itself wouldn’t be ok.
I’ll make a post about the mic(s) implementation for spatialization purpose.
thanks a lot again.
Jun 7, 2012 at 5:48pm
Since you’re synthesizing, you can probably just change the pitch of the synthesized sound via line~. The doppler patch in the examples is a good starting point. (IIRC there’s maybe even a synth version in there)
Since you’re already doing the distance calc in your Java code, it’s really just a matter of interpolating that over time into the appropriate units. You could optionally do it with delay, but that’s probably more expensive, since you can’t downsample the interpolation there if need be, whereas for the synth you could, if you were using line~ to change the frequency.
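For the "interpolating into the appropriate units" step, the textbook moving-source Doppler formula gives a pitch ratio directly from the rate of change of the distance. A sketch under my own assumptions (this is not code from the Max examples folder):

```java
// Turn successive distance readings into a Doppler pitch ratio
// (stationary listener, moving source; standard formula).
public class DopplerRatio {
    static final double SPEED_OF_SOUND = 343.0; // m/s

    // Radial velocity estimated from two distance readings dt seconds
    // apart. Positive means the source is receding.
    static double radialVelocity(double prevDist, double currDist, double dt) {
        return (currDist - prevDist) / dt;
    }

    // observedFreq = emittedFreq * pitchRatio(v);
    // v > 0 (receding) gives a ratio below 1, i.e. a lower pitch.
    static double pitchRatio(double radialVelocity) {
        return SPEED_OF_SOUND / (SPEED_OF_SOUND + radialVelocity);
    }

    public static void main(String[] args) {
        double v = radialVelocity(10.0, 44.3, 1.0); // ~34.3 m/s, receding
        System.out.println(pitchRatio(v));          // below 1.0: pitch drops
    }
}
```

In the SuperCollider synth you would multiply the synth's base frequency by this ratio, smoothing the control as discussed below.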
Jun 7, 2012 at 6:12pm
So these are two options.
About the “it's really just a matter of interpolating that over time into the appropriate units” part, I guess I missed something.
Currently, I'm firing an event whenever the position changes.
By interpolating, you mean using line~, I guess.
Jun 7, 2012 at 6:20pm
Hi Julien, I forgot that your sound engine is in SuperCollider, so you’d just do the interpolation there, so line~ wouldn’t really be needed in this case. For the interpolation time on that, you probably could just set this to some small value (< 20 ms?) and I imagine it'd be fine. Alternatively, you could derive this from the framerate, but I suspect that level of precision will be overkill.
Also, the volume is inversely proportional to the distance^2, so it’s handy to have that value floating around…
As far as the tuning goes, there’s this from the doppler example patch:
– Pasted Max patch (not reproduced here). –
To use it, copy the patch text from the original thread, then, in Max, select New From Clipboard.
Jun 7, 2012 at 7:48pm
Ok Peter, I got your point.
About the distance, I’d use it squared, then.
As for what I have to do in my sound sources (basically, voices of synths in SuperCollider), I’ll put a little module in each synth.
The (now famous) objects know their distance to the cam and angles to the mic, so they can fire & tweak SC synths in realtime.
Does this schematic make sense?
Jun 7, 2012 at 8:32pm
Yeah, I think that’s pretty much it.
I use spatialization rather than volume to handle density in my pieces, and it works well. I use a onepole~ filter to roll off the highs as things become more distant.
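The onepole~ idea translates directly to a one-pole lowpass whose cutoff drops with distance. A sketch with an illustrative (made-up) distance-to-cutoff mapping, not Peter's actual curve:

```java
// One-pole lowpass in the spirit of onepole~:
//   y[n] = y[n-1] + a * (x[n] - y[n-1])
// with the cutoff lowered as the source gets more distant.
public class DistanceLowpass {
    static final double SAMPLE_RATE = 44100.0; // Hz (assumed)
    double coeff;
    double state;

    // Close sources stay bright, far sources get dull.
    // This mapping is a guess for illustration only.
    static double cutoffForDistance(double distance) {
        return Math.max(500.0, 18000.0 / (1.0 + distance));
    }

    void setCutoff(double cutoffHz) {
        // Standard one-pole coefficient for the given cutoff.
        coeff = 1.0 - Math.exp(-2.0 * Math.PI * cutoffHz / SAMPLE_RATE);
    }

    double process(double input) {
        state += coeff * (input - state);
        return state;
    }

    public static void main(String[] args) {
        System.out.println(cutoffForDistance(1.0));  // 9000.0
        System.out.println(cutoffForDistance(35.0)); // clamped to 500.0
    }
}
```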
Jun 7, 2012 at 8:38pm
you mean you don’t use the 1/r^2 volume decrease ?
Jun 7, 2012 at 9:13pm
Should have said “spatialization rather than just volume”… I have amp, filter cutoff, and dry level stored in a lookup table, so I just treat distance as a 0-1 value where 0 is “IN YO FACE!” and 1 is infinity.
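That lookup-table scheme might look like this (the breakpoint numbers are placeholders, not Peter's actual tuning):

```java
// Breakpoint table over normalized distance 0..1
// (0 = right in front, 1 = infinity), linearly interpolated.
public class DistanceTable {
    static final double[] POS    = {0.0,     0.25,   0.5,    1.0};
    static final double[] AMP    = {1.0,     0.5,    0.2,    0.0};
    static final double[] CUTOFF = {18000.0, 8000.0, 3000.0, 500.0};
    static final double[] DRY    = {1.0,     0.7,    0.3,    0.0};

    // Linear interpolation into one of the parameter rows.
    static double lookup(double[] values, double d) {
        d = Math.min(1.0, Math.max(0.0, d));
        for (int i = 1; i < POS.length; i++) {
            if (d <= POS[i]) {
                double t = (d - POS[i - 1]) / (POS[i] - POS[i - 1]);
                return values[i - 1] + t * (values[i] - values[i - 1]);
            }
        }
        return values[values.length - 1];
    }

    public static void main(String[] args) {
        System.out.println(lookup(AMP, 0.0));    // 1.0: full level up close
        System.out.println(lookup(CUTOFF, 1.0)); // 500.0: dull at infinity
    }
}
```

One table lookup per source per update then yields all three parameters at once.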
You can also do a global reverb for each output (separate from dry), so really distant sounds can only be in reverb, close sounds can be mostly dry, etc.
These threads have been kind of funny for me (the whole universe thing) as I’ve been re-reading some Douglas Adams recently…
Jun 7, 2012 at 9:33pm
I got that.
(and I could really be interested in Douglas Adams too!)
Jun 8, 2012 at 2:51am
I’ve always just done it in 2D, either cartesian or polar, mostly out of laziness… The only thing the angles probably matter for is the spatialization of the position, since the other parameters are usually functions of distance. In 3D you could use the azimuth to do some high-frequency tailoring via a high shelf, but that’s more subtle and something that might be worth doing if you find you have extra CPU hanging around. Mostly, I just worry about distance, since that’s the thing that seems to have the biggest impact on everything.
I use the angles to adjust the dry signal that goes to the various speakers and in combination with the distance to determine where the reverb goes, so that really far off sounds on the left-front are only in the left-front reverb whereas sounds that are really close are in all of the reverbs.
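A sketch of that routing, reduced to a simple stereo case (the equal-power pan law and the linear dry/reverb crossfade are my own choices; a multichannel rig would have one gain per speaker and per-speaker reverbs):

```java
// Angle-based dry panning plus a distance-based reverb send.
public class SpatialSends {
    // Equal-power pan: angle -pi/2 = hard left, +pi/2 = hard right.
    static double[] panGains(double angleRadians) {
        double p = angleRadians / Math.PI + 0.5;   // map to 0..1
        p = Math.min(1.0, Math.max(0.0, p));
        double theta = p * Math.PI / 2.0;
        return new double[] { Math.cos(theta), Math.sin(theta) }; // {left, right}
    }

    // Crossfade: close sounds mostly dry, really distant sounds reverb-only.
    static double drySend(double normDistance)    { return 1.0 - normDistance; }
    static double reverbSend(double normDistance) { return normDistance; }

    public static void main(String[] args) {
        double[] g = panGains(0.0); // centre: equal left/right gains
        System.out.println(g[0] + " " + g[1]);
    }
}
```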
As an alternative approach, you could also treat your speakers as virtual microphones (the old room-within-a-room reverb trick) projected from your camera at fixed distances. (Imagine that your camera is at the center of a square and that the virtual microphones are at the corners.) There are a lot of nice things you get from this, but it comes at a higher computational price, since you’re now dealing with four distance calculations.
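The virtual-microphone idea in sketch form (the square size, the choice of a horizontal x/z plane, and all names here are my assumptions):

```java
// Four virtual mics at the corners of a square centred on the camera,
// each with its own propagation delay per source.
public class VirtualMics {
    static final double SPEED_OF_SOUND = 343.0; // m/s

    // `half` is half the square's side length; x/z is the camera's
    // horizontal plane.
    static double[][] micPositions(double camX, double camZ, double half) {
        return new double[][] {
            {camX - half, camZ - half},  // front-left
            {camX + half, camZ - half},  // front-right
            {camX - half, camZ + half},  // rear-left
            {camX + half, camZ + half},  // rear-right
        };
    }

    // One delay per mic: the four distance calculations mentioned above.
    static double[] delaysFor(double srcX, double srcZ, double[][] mics) {
        double[] delays = new double[mics.length];
        for (int i = 0; i < mics.length; i++) {
            double dx = srcX - mics[i][0], dz = srcZ - mics[i][1];
            delays[i] = Math.hypot(dx, dz) / SPEED_OF_SOUND;
        }
        return delays;
    }

    public static void main(String[] args) {
        double[][] mics = micPositions(0.0, 0.0, 1.0);
        double[] d = delaysFor(0.0, 0.0, mics); // source at the centre: all delays equal
        System.out.println(d[0]);
    }
}
```

Feeding each mic's signal to its matching speaker then gives you inter-channel delay and level differences for free.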
Jun 8, 2012 at 8:15am
I made a very light schematic: one part about the distance attenuation, another about the spatialization/volume distribution to my speakers.
I guess I'm missing something (a value to measure, for sure).