Doppler effect, some leads needed
I'd need some leads to work on my Doppler effect in myUniverse (= a 3D space with a camera and moving objects emitting sound).
My camera is where the ears are.
I still don't know if I'll use a multi-microphone approach. One microphone = 1-dimensional panning, which is actually enough.
My objects emit sounds.
0/ Should the Doppler effect be applied to my master output?
I mean, the camera is moving near two other moving objects.
Which frequencies would be shifted?
The answer is: any change in relative distance involves a shift.
What would be the approach?
1/ What would be required to calculate my Doppler shift?
I guess I'll have to make some continuous changes to the frequency of my sources.
Is there a global formula that could be applied directly inside my objects to modify the nominal frequency?
Is this the approach to follow?
ANY ideas/leads would be appreciated.
My approach would be to calculate the distance from your mic(s) to each object at every update (presumably every audio vector) and then delay the audio emitted from each object by the time equivalent to that distance (i.e. delay time = distance / speed of sound). Each sound source would have its own delay.
Use tapin~/tapout~ to vary the delay; changes in delay will cause a perceived pitch shift corresponding to the Doppler effect.
So you don't vary the pitch of the sound sources directly, but only via the changing delay.
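To make that concrete, here is a minimal numeric sketch (Python purely for illustration; in Max this would be the tapin~/tapout~ delay itself): the delay time tracks distance, and the rate of change of the delay implies the perceived pitch ratio. The 343 m/s speed of sound and the example distances are assumptions.

```python
# Minimal sketch: delay time tracks distance, and the rate of change of that
# delay implies a perceived pitch ratio. Speed of sound assumed 343 m/s.

SPEED_OF_SOUND = 343.0  # m/s

def delay_seconds(distance_m):
    """Propagation delay for a given distance (the tapin~/tapout~ delay time)."""
    return distance_m / SPEED_OF_SOUND

def doppler_ratio(prev_distance_m, curr_distance_m, dt_s):
    """Pitch ratio implied by the delay change over one update interval.
    A shrinking distance (approaching source) gives a ratio > 1 (pitch up)."""
    d_delay = delay_seconds(curr_distance_m) - delay_seconds(prev_distance_m)
    return 1.0 - d_delay / dt_s

# Example: a source closing from 100 m to 90 m over one second (10 m/s approach)
ratio = doppler_ratio(100.0, 90.0, 1.0)
```

A receding source gives a ratio below 1 (pitch down), so the same calculation covers both directions.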
Hi Terry, and thanks a lot for your answer.
It is totally clear.
I'd call this implementation: the natural Doppler effect :-)
I'll use SuperCollider as the sound generator, handling synths and voices inside it.
I'll add the delay for each source in SC.
Indeed, it's totally true that changing the pitch itself wouldn't be right.
I'll make a post about the mic(s) implementation for spatialization purposes.
I'm still trying to figure out the concept.
Thanks a lot again.
Since you're synthesizing, you can probably just change the pitch of the synthesized sound via line~. The Doppler patch in the examples is a good starting point. (IIRC there may even be a synth version in there.)
Since you're already doing the distance calc in your Java code, it's really just a matter of interpolating that over time into the appropriate units. You could optionally do it with delay, but that's probably more expensive, since you can't downsample the interpolation if need be, whereas for the synth you could do that if you were using line~ to change the frequency.
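For the direct pitch-shift option, a small sketch of the textbook Doppler formula for a moving source and a stationary listener (Python for illustration; the function names and the 343 m/s constant are assumptions):

```python
# Sketch of the direct pitch-shift option: scale the synth's nominal frequency
# by the classic Doppler factor for a moving source, stationary listener.
# c = 343 m/s is an assumed speed of sound; v_radial > 0 means receding.

C = 343.0  # speed of sound, m/s

def v_radial(prev_dist_m, curr_dist_m, dt_s):
    """Radial velocity estimated from two successive distance readings."""
    return (curr_dist_m - prev_dist_m) / dt_s

def doppler_frequency(nominal_hz, v_radial_ms):
    """Shifted frequency heard by a stationary listener."""
    return nominal_hz * C / (C + v_radial_ms)

# A 440 Hz source approaching at 10 m/s sounds slightly sharp:
f = doppler_frequency(440.0, v_radial(100.0, 90.0, 1.0))
```

The distance values would come straight from the per-update distance calc already done in the Java code; only the conversion to a radial velocity is new.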
I got it.
These are 2 options.
About the "it's really just a matter of interpolating that over time into the appropriate units", I guess I missed something.
Currently, I'm firing an event as soon as the position is changing.
I don't know the frequency of the updates. It is driven by the rate of the position information popped out from jit.gl.camera.
By interpolating, you mean to use line~ I guess.
Indeed, from discrete values, line~ can generate continuous, sample-accurate values.
Is this what you mean?
Hi Julien, I forgot that your sound engine is in SuperCollider, so you'd just do the interpolation there; line~ wouldn't really be needed in this case. For the interpolation time, you could probably just set it to some small value (< 20 ms?) and I imagine it'd be fine. Alternatively, you could derive it from the framerate, but I suspect that level of precision would be overkill.
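As a sketch of that kind of short-lag smoothing (a one-pole lag, similar in spirit to what a lag on the SC side would do; the 1 kHz control rate here is an assumption):

```python
# One-pole lag smoothing: each new distance value is approached gradually,
# so synth parameters glide instead of stepping. The 20 ms lag matches the
# figure above; the 1 kHz control rate is an assumption.

import math

def lag_coeff(lag_time_s, update_rate_hz):
    """Coefficient so a step input reaches ~63% of its target in lag_time_s."""
    return math.exp(-1.0 / (lag_time_s * update_rate_hz))

def smooth(prev, target, coeff):
    """One update of the one-pole smoother."""
    return target + (prev - target) * coeff

c = lag_coeff(0.020, 1000.0)  # 20 ms lag at a 1 kHz control rate
x = 0.0
for _ in range(20):           # run 20 ms of updates toward a target of 1.0
    x = smooth(x, 1.0, c)     # x ends up ~63% of the way to the target
```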
Also, the volume is inversely proportional to the distance^2, so it’s handy to have that value floating around…
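As a tiny illustration of that inverse-square relation (following the thread's volume ∝ 1/distance^2 convention; the reference distance of 1.0 is an assumption):

```python
# Inverse-square attenuation, following the 1/distance^2 convention above.
# The reference distance of 1.0 (full level at or inside it) is an assumption.

def distance_gain(distance_m, ref_m=1.0):
    """Gain scale: 1.0 at ref_m, falling off as 1/d^2 beyond it."""
    d = max(distance_m, ref_m)
    return (ref_m / d) ** 2

g1 = distance_gain(2.0)  # doubling the distance -> a quarter of the level
g2 = distance_gain(4.0)
```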
As far as the tuning goes, there’s this from the doppler example patch:
----------begin_max5_patcher---------- 1056.3oc2YF0cihBE.94jeEr9Tm8jlQPPw4s82wd1ybHJok4nniP51tyYxu8 EAs0lZrlrwXm8EQPBbuebuWtP9wxEdaJdjq7.eA7mfEK9wxEKrMU2vhl5K7x YOljwT1t4kTjmykZuUtuo4OpssmJTZlLgC1VUjCTE6pLueS9mZ6nbWdwNcFW aGF+lVKY5j6Ex69ZEOQ6Di.+09q.P2yfn5BjoB3uZ9IhT67Ur4a2h7aG8sER sjkyse5OpDrrNyqP1NsvNcWI9Ga2gv0jXBjFU+oetbY8iU+GgQovnVfJlVT. 1VTYXB+663xDAWAJ1VSGYZCiNS9DfwVxfHCAHX7LCHmJcx5XjSEgVciz9rWM jNndzt.cpKJWPaA88bfpjySeYRtI+ypOAtI.6uFECXZfhy.Y7G3Ymt6BDEX4 Sn0r.gCGzdHZlsGj7+1HGugQedOvAi9U+Vov0j9oRtS28Th6jFI+YU8nlRPT cAEOHbBOQ3ftvvYaVgY76GAnig.yOhYL5.daXx6FlDHmgh0qh3OHJHuOJJYU l107puxkrMY7t1oygMzuc6DaDgnVOsvfAIGdlMhNVXnJdlIx2CbfIJSQhP+z gw7bQkN83OXGbPNqK+gcwBl63O6x2vq1eh9XMVHqZ81FxTogCtBHZv7WfnKo wRy2xKR4c6qQ3sKgq8ub9Z+9d.FC88WOQtZvXaQ3vAofePc0bFYV+KVdYlI8 hRSUEOoPldF6ua8nfQthP5fHYtSHdh2BCScrvlRXzfnH9WucvXoliPo2o.pp Sjf0vSH0igc3Xqm0fwknyriUNWoX2weCfNqs2GDJAjl3M1zlcIOCo8SknYlJ Gwpw37jZcbNMr7t9ZMngX80Hu0WKoHqnx0U+0wXheLcU8alyriH0ug8in3.C O6GmgePO6dpPUlwR30ezE31FD+rtKCHo4xLftf2CteF4++4FQ5RigyMB+KZp Q2tehxIJxFbJftFG4SnwGmbAeLiSkZN.xS6AP.7xCHjK7sKowAOABZlgicjm JKjQbcY8pj98baYG89b6nz1YvKSHO7JjsRXc6ulDMWtlSqZRRC7hblxUZgr9 94jc5ScZ9Fql1NcuHMkK6J0FIsN6MqR1ui7XEG5qlo9EmWKxuUZxEokElbwZ HBzmrlTmxp87flsTaq7xzbwD+nwPSx6H+m015GnzAgg05IhXcEovmqLAJc3H TZ50Smg3tKzXzroyQSgNe4TgfQnB3SzUyt.DSc+2AsuOSxtMpk+UIpEZDhSv UKFJbLFmP5UiNGDv6HxS30iOiRdvui7bfoeSHVJr4VyZqLAF+P7X44UZ8ENV d9QRdPmSnMDL18mY8bkoX8cjQStRzbDRC5LkFW1qrxxG3UplgzJHlL1+lauw vU1pBoqpcD8p3OHZ6OYY8n8yk+KvtiqpB -----------end_max5_patcher-----------
Ok Peter I got your point.
I guess there is something I missed in your patch, btw.
The pitch ratio would change only when there is a distance variation (i.e. when a movement occurs).
About the distance, I would use it squared.
But indeed, everything is virtual… I mean, I also have some doubts about the units in the virtual world.
I can make very little objects and set the max speed to a very low value, OR make huge objects, etc…
I'll post a metaphysical post/question about that in a few minutes :p
About everything I have to do in my sound sources (basically, voices of synths in SuperCollider), I'll put a little module in each synth.
This little module will be responsible for:
- sound attenuation
- sound spatialization
- Doppler stuff (delay? or direct pitch alteration, as you mentioned)
- sound modifications in case I want to create some specific atmospheric FX (just some filters, or I don't know)
The (now famous) objects know the distance to the camera and the angles to the mic, so they can fire & tweak SC synths in real time.
(OMG … poor cpu)
Does this schematic make sense ?
It is only a schematic to sketch the global chain at the end of my sound sources (=synths in SC)
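For illustration only, that global chain could be sketched like this: from one position update, compute the control values a synth voice would receive. The function name, the simple one-dimensional pan math, and the camera-right vector are assumptions, not the thread's final design.

```python
# Hypothetical sketch of the per-voice module: attenuation, a simple 1-D pan,
# and the Doppler delay time, all derived from one position update.
# The pan math and the camera-right vector are illustrative assumptions.

import math

SPEED_OF_SOUND = 343.0  # m/s, assumed

def voice_controls(source_xyz, cam_xyz, cam_right_xyz):
    sx, sy, sz = source_xyz
    cx, cy, cz = cam_xyz
    dx, dy, dz = sx - cx, sy - cy, sz - cz
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    amp = 1.0 / max(dist, 1.0) ** 2            # inverse-square attenuation
    rx, ry, rz = cam_right_xyz                 # unit vector pointing "right"
    pan = (dx * rx + dy * ry + dz * rz) / max(dist, 1e-9)  # -1 .. 1
    delay = dist / SPEED_OF_SOUND              # tapin~/tapout~-style delay time
    return {"amp": amp, "pan": pan, "delay": delay}

# Source at (3, 0, 4), camera at the origin, camera-right along +x:
ctl = voice_controls((3.0, 0.0, 4.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

Each value would then be sent as a control-rate parameter to the corresponding SC synth.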
I already posted it there: http://cycling74.com/forums/topic.php?id=40379, about spatialization (for which I still have to figure out the mic & the calculations I'll have to do).
Yeah, I think that’s pretty much it.
I use spatialization rather than volume to handle density in my pieces, and it works well. I use a onepole~ filter to roll off the highs as things become more distant.
You mean you don't use the 1/r^2 volume decrease?
I should have said "spatialization rather than just volume"… I have amp, filter cutoff, and dry level stored in a lookup table, so I just treat distance as a 0-1 value where 0 is "IN YO FACE!" and 1 is infinity.
You can also do a global reverb for each output (separate from dry), so really distant sounds can only be in reverb, close sounds can be mostly dry, etc.
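A rough sketch of that lookup-table idea (the breakpoint values below are illustrative guesses, not the actual table):

```python
# Sketch of the 0-1 distance lookup: amp, filter cutoff, and dry level read
# from breakpoint tables indexed by normalized distance (0 = "IN YO FACE!",
# 1 = infinity). All breakpoint values here are illustrative guesses.

def lookup(table, norm_dist):
    """Linear interpolation into a sorted list of (distance, value) pairs."""
    t = max(0.0, min(1.0, norm_dist))
    for (d0, v0), (d1, v1) in zip(table, table[1:]):
        if d0 <= t <= d1:
            return v0 + (v1 - v0) * (t - d0) / (d1 - d0)
    return table[-1][1]

AMP    = [(0.0, 1.0), (0.5, 0.2), (1.0, 0.0)]   # level falls to silence
CUTOFF = [(0.0, 18000.0), (1.0, 800.0)]         # highs roll off with distance
DRY    = [(0.0, 1.0), (1.0, 0.0)]               # far sounds end up reverb-only

amp = lookup(AMP, 0.25)   # halfway between 1.0 and 0.2
dry = lookup(DRY, 0.9)    # mostly wet at 90% of max distance
```

Sending 1 - dry to a global reverb bus would give the distant-sounds-only-in-reverb behaviour described above.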
These threads have been kind of funny for me (the whole universe thing) as I’ve been re-reading some Douglas Adams recently…
I got that.
About spatialization, are you calculating things with angles in 3D? Or by projecting everything into the camera plane?
(and I could really be interested by Douglas Adams too!)
I've always just done it in 2D, either Cartesian or polar, mostly out of laziness… The only thing the angles probably matter for is the spatialization of the position, since the other parameters are usually functions of distance. In 3D you could use azimuth to do some high-frequency tailoring via a high shelf, but that's more subtle, and something that might be worth doing if you find you have extra CPU hanging around. Mostly, I just worry about distance, since that's the thing that seems to have the biggest impact on everything.
I use the angles to adjust the dry signal that goes to the various speakers and in combination with the distance to determine where the reverb goes, so that really far off sounds on the left-front are only in the left-front reverb whereas sounds that are really close are in all of the reverbs.
As an alternative approach, you could also treat your speakers as virtual microphones (the old room-within-a-room reverb trick) that are projected from your camera at fixed distances. (imagine that your camera is at the center of a square and that the virtual microphones are at the corners) There’s a lot of nice things that you get, but it comes with a higher computational price, since you’re now dealing with four distance calculations.
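A minimal 2D sketch of that virtual-microphone idea (camera at the center of a square, one mic per corner, one distance-based gain per speaker; the square's half-size and the inverse-square gain law are assumptions):

```python
# 2D sketch of the virtual-microphone trick: four mics at the corners of a
# square centered on the camera, one inverse-square gain per mic/speaker.
# The half-size of the square (1.0) is an assumed parameter.

import math

def mic_positions(cam_xy, half=1.0):
    cx, cy = cam_xy
    # order: front-left, front-right, rear-left, rear-right
    return [(cx - half, cy + half), (cx + half, cy + half),
            (cx - half, cy - half), (cx + half, cy - half)]

def speaker_gains(source_xy, cam_xy, half=1.0):
    """One gain per virtual mic: the four distance calcs mentioned above."""
    sx, sy = source_xy
    gains = []
    for mx, my in mic_positions(cam_xy, half):
        d = math.hypot(sx - mx, sy - my)
        gains.append(1.0 / max(d, 1.0) ** 2)
    return gains

# A source ahead and to the right is loudest in the front-right mic:
g = speaker_gains((3.0, 3.0), (0.0, 0.0))
```

Because each speaker gets its own distance, two sources at the same angle but different distances produce different gain patterns, which is exactly what a single angle-only pan cannot do.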
In my case, I'll probably add more atmospheric effects a bit later.
Indeed, I like the basic idea of using azimuth to control the amount of sound sent to this or that speaker.
I made a very light schematic.
About the distance attenuation.
I can distinguish the two sources (1) & (2) by distance:
(1) will be heard a bit less loudly than (2).
About spatialization/volume distribution to my speakers:
If I take only the basic first angle to the source, I cannot distinguish (1) & (2).
And in fact, (1) should be heard a bit louder on the FRONT RIGHT than on the FRONT LEFT & REAR RIGHT, compared to (2).
I guess I'm missing something (a value to measure, for sure).