Sound Particles in Max

Anthony Palomba's icon

I found this interesting product called Sound Particles, a 3D audio rendering application...
http://www.sound-particles.com/demos.html

"Sound Particles is something completely different from any other audio software that exists today. It’s like a CGI software (e.g. Maya or Blender), but for sound, capable of reproducing thousands of sound sources spread over a 3D space.
Imagine a virtual 3D world, void. Now, imagine that you add sound sources (particles) that reproduce sound (each point on the image is a sound source). Finally, add one or more microphones to be able to capture the overall sound of that world."

The audio demos are pretty impressive. I could see this tool being excellent for designing soundscapes. But I could also see it as a way of simulating players in a room with mics at different locations. For example, simulating a multi-mic'ed 100-piece orchestra. Then you could simulate driving by said orchestra at 80 mph ;)

I was wondering how one would go about implementing this in Max? The particle system would be pretty easy. Mapping the sound sources to 3D particles might be tricky. I imagine one might have to convolve all the sound sources with each mic source.
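For a rough sense of what per-mic rendering involves, here's a minimal Python sketch (not Max code) that mixes particle signals into a single mic channel using inverse-distance attenuation and propagation delay. The function name and the plain-list signals are illustrative assumptions, and it skips convolution/HRTF processing entirely:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 44100      # Hz

def render_mic(particles, mic_pos, n_samples):
    """Mix every particle's signal into one mic channel.

    particles: list of (position_xyz, sample_list) pairs
    mic_pos:   (x, y, z) of the microphone
    """
    out = [0.0] * n_samples
    for pos, signal in particles:
        d = math.dist(pos, mic_pos)
        gain = 1.0 / max(d, 1.0)                       # inverse-distance law
        delay = int(d / SPEED_OF_SOUND * SAMPLE_RATE)  # propagation delay in samples
        for i, s in enumerate(signal):
            j = i + delay
            if j < n_samples:
                out[j] += gain * s
    return out
```

A real engine would replace the gain/delay pair with per-source filtering (air absorption, directivity, room response), but the outer structure — loop over particles, accumulate into each mic — is the same.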

Any ideas?

kcoul's icon

It all comes down to your spatialization engine and the interface it provides -> ideally inputs for mono sources that can each have an associated (azimuth, elevation, distance) attribute which can be updated in realtime.
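As a sketch of that attribute mapping, here's how a Cartesian particle position could be converted to (azimuth, elevation, distance). Angle conventions differ between engines (Spat and the ICST tools don't agree on where 0° azimuth points or which way is positive), so the convention below — 0° straight ahead along +y, angles in degrees — is just an assumption:

```python
import math

def to_aed(x, y, z):
    """Convert a Cartesian position (listener at origin) to
    (azimuth, elevation, distance), angles in degrees."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, y))  # 0 deg = straight ahead (+y)
    elevation = math.degrees(math.asin(z / distance)) if distance > 0 else 0.0
    return azimuth, elevation, distance
```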

If you look around you'll find solutions for this, usually involving ambisonics... A few I know that exist for Max are Spat from IRCAM and the ICST Ambisonics Tools from the Zurich University of the Arts (ZHdK).

Then the real work begins - to make 1-to-1 relationships between particles in a Jitter world and sound source inputs to the spatialization engine.
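That 1-to-1 bookkeeping is mostly slot management: a fixed pool of spatialiser inputs, with particle ids mapped onto free slots when particles are born and released when they die. A hypothetical Python sketch (class and method names are my own invention):

```python
class SourcePool:
    """Map particle ids onto a fixed pool of spatialiser source slots."""

    def __init__(self, n_slots):
        self.free = list(range(n_slots))  # unassigned slot indices
        self.slot_of = {}                 # particle id -> slot index

    def acquire(self, particle_id):
        """Return this particle's slot, assigning one if available."""
        if particle_id not in self.slot_of and self.free:
            self.slot_of[particle_id] = self.free.pop()
        return self.slot_of.get(particle_id)  # None if the pool is exhausted

    def release(self, particle_id):
        """Give the slot back when the particle dies."""
        slot = self.slot_of.pop(particle_id, None)
        if slot is not None:
            self.free.append(slot)
```

With, say, a 16-input Spat instance, a particle that can't get a slot simply isn't voiced — which is also where a nearest-first priority scheme would plug in.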

Finally, you need to think of a good interface for positioning the particles - mouse-based? gestural? There are pros and cons to every type of UI.

metamax's icon

I agree a particle system isn't too much trouble, since there are patches available that demonstrate how to create one. But I haven't seen many that clearly demonstrate how to track particles (position and other features) in a way that allows for precise manipulation in other domains. The ol' "just link the particles" or "check out such-and-such object" doesn't help me much. GL gets a lot more complicated than ZL...

One specific issue I encounter is getting the data out of a GL render context without slowing everything down by transferring the results back to the CPU via matrices and lists. GL-based solutions seem great for visuals and for functions that stay on the GPU, but as soon as audio processing or control voltage is involved, the performance benefits of GL are negated (or worse) by the constant back-and-forth transfer of data. The alternative almost seems worse: Jitter handling everything on the CPU avoids the GPU-to-CPU bottleneck, but adds an obvious drain on CPU resources otherwise used by the DSP.

I'd love to see a simple, scalable example of generating particles and using that data in some other context. It seems a lot more can be done on the GPU than is typically considered, but it's even more important to avoid inefficient GPU-to-CPU round trips. Either way, I probably need more than one computer + OSC for what I have in mind, so I'd consider just about anything.

spectro's icon

'COSM' http://www.allosphere.ucsb.edu/cosm is a (ca 2010) object library for virtual worlds that can place and move 'agents' in a (limited) 3D world, to which one can, among other things, attach audio sources. I've used it in projects for extensive spatialization of multiple (usually 'autonomous') sound sources - apart from some near-field zippering artefacts on fast motion, it works pretty well in creating a convincing ambisonic sound field. As luck would have it (and as far as I can determine), it still (mostly) works in Max 7.

Though some of its components are now arguably better served by newer versions of Max - e.g. Jitter Physics etc. - the audio functionality is still pretty useful. As mentioned, I'd like to achieve a smoother (non-zippering) version of the audio spatialisation engine one day (maybe using gen~?). Perhaps Graham Wakefield (thank you!) - one of the creators of COSM - might like to offer some tips or suggestions to that end...
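For what it's worth, the usual gen~-style fix for zippering is per-sample parameter smoothing, e.g. a one-pole lowpass on each spatialisation parameter (gain, azimuth, etc.) so control-rate jumps become gradual ramps. A minimal Python sketch of the idea; the coefficient value is an arbitrary assumption:

```python
def smooth_param(target, state, samples, coeff=0.001):
    """One-pole lowpass applied per sample to a control parameter.

    Each step moves `state` a fraction `coeff` of the way toward
    `target`, turning a stepped control change into a smooth ramp.
    """
    out = []
    for _ in range(samples):
        state += coeff * (target - state)
        out.append(state)
    return out
```

In gen~ this is a one-liner history/mix loop; the point is just that the parameter is interpolated at audio rate instead of jumping once per control message.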

Anyway, I'd suggest those interested check out the COSM audio examples.

Now to check out Sound Particles...