Max 8 having a nice "Binaural sound objects in 3d space" or HRTF library

phiol's icon

Hi all, as the title indicates,
I was wondering whether, in Max 8, we could expect a nice "audio objects in 3D space" engine.
Something like this https://valvesoftware.github.io/steam-audio/

Unity has audio objects, and even TouchDesigner (a visual software) has released this in its experimental downloads.
Please direct me if you already know of something else that works in Max.
But with VR etc. being so popular, it is surprising that Cycling '74 doesn't have (not that I know of) its own vanilla library for this: "binaural sound objects in 3D space".

I am presently building my own "hack" system using Jitter,
but I question how robust it can really be.
-----
here is a quick sketch

Max Patch
Copy patch and select New From Clipboard in Max.

Graham Wakefield's icon

Hi phiol,

I'm working on something for the VR package that builds on the Steam Audio API -- I talked about it at the end of this thread https://cycling74.com/forums/oculus-rift/replies/4

Would welcome thoughts/ideas/collaborative contributions etc.!

Graham

Pedro Santos's icon

Hi, Phiol and Graham. I've been using a very simple method with HOA or ICST libraries:
Instead of directly using the coordinates of the objects / sound sources in the OpenGL scene, I do the math to account for the camera/user position and orientation. It's these new relative coordinates that are sent to the sound spatialization system. The math behind it is in this patch, inside jit.gen:
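A minimal sketch of that camera-relative math in Python, assuming for simplicity that the listener's orientation is just a yaw angle around the vertical Y axis, with the OpenGL convention of -Z as straight ahead (a full version would invert the complete orientation quaternion, as the jit.gen patch does):

```python
import math

def world_to_listener(source, cam_pos, cam_yaw):
    """Express a world-space source position relative to the listener.

    cam_yaw is the camera's rotation around the vertical (Y) axis, in
    radians. Translate so the listener sits at the origin, then rotate
    by the inverse of the camera's yaw; the result is what gets sent to
    the spatializer.
    """
    dx = source[0] - cam_pos[0]
    dy = source[1] - cam_pos[1]
    dz = source[2] - cam_pos[2]
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * dx + s * dz, dy, -s * dx + c * dz)
```

With yaw 0 the listener frame coincides with the world frame; as the camera turns, a fixed source sweeps around the listener, which is exactly the effect you want before binaural panning.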

Max Patch
Copy patch and select New From Clipboard in Max.
phiol's icon

Thanks guys,

Pedro, I remember seeing that patch a while back. Any reason why you're using both ICST and HOA?
I think HOA is cool, but you can only have one HRTF (which is normal in this case, since there is only one user). And maybe I'm wrong, but you can't tweak the roll-off zone that much: you can't make it narrower, or have the sound drop completely to zero amplitude. For example, if you take the hoa.2d.map~ help file, the middle circle is your sweet spot, and the only way to tweak the roll-off is with the @zoom attribute. Even at 0.01, the smallest value, you can still always hear the 4 sounds. Not like, say, the vanilla Max object [nodes].
Unless again, I overlooked how to.

@Pedro, I wonder how you use your setup. You have to work upside down, where your sounds come towards you in hoa.2d.map (again with @zoom 0.01), and you treat the circle diagram as a near clip and far clip in a 3D scene, where objects come towards you and are muted when far enough away.

It might be hacky, but I like the shared patch I started because it behaves like a 3D version of [nodes], where your head/hand can touch and fade in a 3D sound object. But I would still need to send the outputs into something like HOA to get the correct binaural panning.
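For what it's worth, that [nodes]-style bubble behavior lifted into 3D is just a distance-to-gain mapping; a hypothetical sketch in Python (the names and the linear fade are my assumptions, not how the shared patch is actually built):

```python
import math

def bubble_gain(listener, source, radius, inner=0.0):
    """Nodes-style 3D bubble: full amplitude inside `inner`, a linear
    fade out to `radius`, and exactly zero beyond it -- unlike a plain
    roll-off curve, which never quite reaches silence."""
    d = math.dist(listener, source)
    if d <= inner:
        return 1.0
    if d >= radius:
        return 0.0
    return 1.0 - (d - inner) / (radius - inner)
```

Per sound object you would evaluate this once per control period and scale that object's signal before it reaches the binaural panner.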

I've never really delved into this, but I do know there is software out there that has already fully covered ambisonics. I know about Spat Revolution but have never tried it; maybe all the solutions are already there.
I've heard that Unity has something similar to what I started, where you can very simply put 3D sound objects in your scene and voilà. It would be nice to have the same in Max.

Looking forward to your advice and expertise.

Thanks.

ps. @Graham, as you can see, I'm way too much of a beginner in this field to be of any help.
Plus I don't even have access to a Vive at the moment.
Simply having quick and easy "3D sound objects" in a regular on-screen 3D scene, working in a binaural setup (if using headphones) or a 7.1 surround setup, would make my day!

thanks a lot guys

phiol

phiol's icon

@Pedro, how I understand you use the HOA library is:
you place sound objects in the 3D scene as a group in hoa.map, and then you dynamically influence the parameters of that group based on the camera position; the center point is your head.

I have 2 questions:
1. When moving the Cartesian x/y and changing the azimuth based on the head's Y axis, how do I avoid the skip? Or how can I control both simultaneously? See the attached video link.
https://www.dropbox.com/s/d6of1s147efuhag/HRTF_hoa.map.mov?dl=0

2. How do I control the diameter of the source? By raising the order value?
There is no lookup table like in the ICST library.

Thanks again

phiol

kcoul's icon

Hi @Graham,

I've been trying to reach you on this and related topics concerning the VR package and how I'm hoping to contribute towards it in conjunction with proposed improvements to the Envelop for Live project. This was all instigated by the discovery that it would become much simpler to route audio between Live and Max with Live 10 / Max 8.

Would it be possible to correspond over email and/or the vr package mailing list? My address is kieran.coulter@gmail.com

I'm also looking to write objects for Playstation Move controllers, and to either further develop the Kinect objects or else at least offer better plug and play support with the dp.kinect objects instead.

It would be interesting to compare the pros/cons of using Envelop (HOA) vs Steam Audio.

Phiol, if it's not already clear, this is something that even now is still in the hands of the 3rd-party package community; I am not sure at what point Ableton/Cycling '74 will decide that 3D audio needs to be a 1st-party offering (hopefully soon!)

To answer your questions:
1. You essentially need a 2-axis controller. I am working on supporting a range of gestural input devices for this task, a project I started in university called GestureLab. In such a case, it is much better to use polar coordinates and control the azimuth and elevation simultaneously through gesture, leaving distance as the independent variable to automate with or without gesture (distance scaling may not map well to the reach of the arm... audio loudness over distance is logarithmic and our arms are... well... linear!). Varying distance effectively varies sound source diameter, because when you really think about it, sound sources don't have an actual diameter. What you have instead is a large number of combined point sources creating a perceived diameter. Check out the Sound Particles application to see what I mean.

Far away sources may have a wide perceived diameter because of reverberation. Close sources may have a wide perceived diameter because the interaural time difference is very low, especially for sources above or below.

But in effect acoustic diameter is always an illusion so models are better built up without any direct control for this kind of a variable.
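The polar decomposition described above can be sketched as follows (a hedged example; the axis convention -- x right, y up, -Z straight ahead -- is my assumption, not something fixed by the libraries in this thread):

```python
import math

def cart_to_polar(x, y, z):
    """Listener-relative Cartesian -> (azimuth, elevation, distance).

    Angles are in radians; azimuth 0 is straight ahead (-Z), positive
    to the right; elevation is positive upward. Azimuth and elevation
    can then be driven gesturally as a pair, while distance is
    automated independently.
    """
    distance = math.sqrt(x * x + y * y + z * z)
    if distance == 0.0:
        return 0.0, 0.0, 0.0
    azimuth = math.atan2(x, -z)
    elevation = math.asin(y / distance)
    return azimuth, elevation, distance
```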

phiol's icon

Yup, I get you, kcoul, but I mean it just as a tool, like we have with [nodes], not in a real VR situation.
I agree that in real life things don't have spherical bubbles, cones, or geos around them.
But it would be a fun tool. Basically what I built in the 1st post.

3rd party: yup, I find it funny to have to go to visual software like Unity and TouchDesigner for these tools, when they are not generative music software.
Please, Ableton / Cycling '74, give us these great tools in the next release. Please.

Graham Wakefield's icon

As said, I'm hoping to have a working tool in the VR package as soon as I can fix up the filtering (as per https://cycling74.com/forums/oculus-rift/replies/4 ), and the goal is to do it in a way that the objects can be hacked upon and experimented with freely, as well as having a decent enough 'plug and play' example. I think that's more appropriate to Max than a fixed pipeline. Building spherical bubbles rather than continuous decay is an example of one of the things it will be open enough to support.
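For contrast with a hard-edged bubble, "continuous decay" usually means something like the clamped inverse-distance law (as in OpenAL's distance models); a sketch, with parameter names that are my assumptions rather than the VR package's actual API:

```python
def inverse_distance_gain(distance, ref_dist=1.0, rolloff=1.0):
    """Continuous decay: gain = ref / (ref + rolloff * (d - ref)),
    clamped to full level inside ref_dist. With rolloff = 1 this is
    just ref_dist / distance, halving per distance doubling -- and it
    never quite reaches zero, which is exactly why a bubble with a
    hard cutoff is a distinct, useful alternative."""
    d = max(distance, ref_dist)
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))
```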

And, you don't need a VR headset etc. to use these audio objects.

---

@kcoul -- I just emailed you. Thanks!

kcoul's icon

Well, I think part of convincing them is offering 3rd-party options that are decent for what an individual contributor or small group can do in their spare time, but of course no match for a 1st-party offering.

I am working on a 2D tool, similar to [nodes], for native Max use, because it is just an adaptation of the Envelop for Live plugins offered with that platform. So instead of plugins for Live channels, they can just be bpatchers fed by sound sources within Max. For my use case this will be good for stand-alone live gestural control of ambisonically spatialized synthesis, because none of the sequencing/automation facilities provided by Ableton are needed in that scenario. Check it out:

Of course the dream would be to work on this stuff directly for Ableton and/or Cycling '74, but in the meantime, before dreamtime, if you want it you gotta build it yourself! (Or at least make some noise about it ;) )

phiol's icon

Thanks guys