Spatialization in m4l
I need to create an 8/12-channel spatializer in Max for Live.
So far I’ve been able to make something that works using 12 returns, one dial controlling each send.
My problem is that I need to make a sound rotate among 8 or 12 speakers at constant intensity,
and I'm finding myself unable to come up with a good algorithm for that.
Looking at pan2s in the examples folder, it appears the volume of a speaker has to decrease from 1 to 0 following a square-root function.
But even when testing this example with white noise, the sound is louder when it's on one speaker than when it's between two speakers.
Anyone worked on something similar?
I had a look at the ICST Ambisonics library, but I can't find how the volume of a track is controlled.
Has anyone worked with some of these libraries inside m4l?
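For reference, here is what a constant-intensity ("equal-power") crossfade between two speakers looks like as a quick Python sketch (just the math, not an M4L patch): the two gains follow a sin/cos law so their squared sum — the total power — stays at 1 throughout. That's also why the midpoint sits at roughly 0.707 (-3 dB) per speaker, not 0.5.

```python
import math

def equal_power_pan(x):
    """Equal-power pan between two speakers.

    x in [0, 1]: 0 = fully on speaker A, 1 = fully on speaker B.
    Returns (gain_a, gain_b) with gain_a**2 + gain_b**2 == 1,
    so perceived loudness stays constant during the transition.
    """
    theta = x * math.pi / 2          # map position onto a quarter circle
    return math.cos(theta), math.sin(theta)

# At the midpoint both gains are ~0.707 (-3 dB), not 0.5:
ga, gb = equal_power_pan(0.5)
print(round(ga, 3), round(gb, 3))  # 0.707 0.707
```

Whether this sounds right through Live's sends is another matter, since the send dials apply their own gain curve on top.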
Unless Ableton has pulled a rabbit out of their hat recently, there is no way to do 'true', accurate multichannel work natively in Live (i.e. not relying on sends). Regarding the panning algorithm: you probably need to look at 'constant intensity panning' (see here: http://www.cycling74.com/docs/max5/tutorials/msp-tut/mspchapter22.html). The transition 'between' sends most likely won't follow the same law anyway, so you'll have to tailor a 'panning law' of sorts to the characteristics of Live's send gains.
So, it comes down to two choices, depending on how crucial Live's other features are to your project: find a way of routing audio sources out of Live/M4L – perhaps something like this: http://cycling74.com/forums/topic/ambisonics4live/ (which I haven't tried) – OR use a host, like Max (or perhaps REAPER), that enables flexible multichannel routing internally.
- This reply was modified 2 months ago by spectro.
I need to use it in Live, and to record automations, etc., so I'm still searching for the mathematical function that will work well with Ableton's sends.
I reproduced the algorithms described in 'constant intensity panning', and they don't work that well on a transition between Ableton's sends. At the moment, what works best for the transition is a fifth root, x^(1/5) for 0 < x < 1.
Log, sin, etc. don't work well.
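For anyone who wants to experiment with that curve, here is the fifth-root law as a small Python sketch. The exponent is purely empirical (tuned by ear against Live's send dials, per the post above), so treat it as a starting point, not a spec.

```python
def fifth_root_fade(x):
    """Empirical crossfade law from the thread: gain = x**(1/5) for
    0 <= x <= 1. The idea is to compensate for the taper of Live's
    send dials so the *resulting* loudness transition sounds even."""
    return x ** 0.2

def crossfade(x):
    # Gain for the speaker being left and the speaker being approached.
    return fifth_root_fade(1.0 - x), fifth_root_fade(x)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    a, b = crossfade(x)
    print(f"x={x:.2f}  out={a:.3f}  in={b:.3f}")
```

Swapping in other exponents (or a sin/cos law) in `fifth_root_fade` makes it easy to A/B different curves against the same send setup.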
Do you need just rotation, or also 2D positioning?
Here is how you can do it, and it doesn't involve Max for Live at all.
Let's say you want to spatialize an Operator synth over a circular array of 8 speakers.
1) Right after the Operator, place an empty Audio Rack, name it "Array Panner", and set "Audio To" to Sends Only
2) Create 9 empty chains and name them 1 to 9
3) Edit the zones and fades for all chains as in pic 1 and pic 2
4) Create nine audio tracks and name them s1 to s9
5) Route each one to a single speaker: track s1 to speaker 1, s2 to speaker 2… IMPORTANT: s9 also goes to speaker 1!
6) Set "Audio From" for each track as follows
(the Operator track's name is "Operator"):
s1: Audio From 1-Operator
Array Panner | 1 | Post Mixer
s2: Audio From 1-Operator
Array Panner | 2 | Post Mixer
s3: Audio From 1-Operator
Array Panner | 3 | Post Mixer
7) Activate monitoring (Monitor: In) for all the s tracks.
It should look like pic 3
Now you can use the Chain Select Ruler (pic4) control to pan smoothly between all the speakers, automate it or map it to an external controller.
If you need to compensate for something, just modify the zones and crossfades till you get what you need.
…and the project in case you need it
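The zone/crossfade idea in the steps above can also be sketched numerically. Assuming evenly spaced triangular fade zones across the 0–127 chain-select ruler (the actual shapes are whatever you draw in the rack), the gains per chain come out something like:

```python
def chain_gains(pos, n_chains=9, ruler_max=127):
    """Sketch of the overlapping-zone trick from the steps above.

    The n_chains zones are spread evenly over the chain-select ruler
    (0..ruler_max) with triangular crossfades, so at any position at
    most two adjacent chains are audible. The last chain doubles
    chain 1's speaker, which closes the circle for smooth rotation.
    Returns a list of linear gains, one per chain. (This is just the
    triangular case; real zones/fades can be drawn freely.)
    """
    span = ruler_max / (n_chains - 1)      # spacing between zone centres
    gains = []
    for i in range(n_chains):
        centre = i * span
        g = max(0.0, 1.0 - abs(pos - centre) / span)
        gains.append(g)
    return gains
```

With triangular fades the two active gains always sum to 1 (constant amplitude, not constant power), which is exactly what you'd reshape via the zone editor if it sounds uneven.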
I tried it with 4 speakers, and it works nicely!
I didn't know this method in Live.
Comparing it with my M4L patch, I can't tell which one works better. I need to test with more speakers…
great! let me know how it goes!
I also might have a solution for you. For a recent project I developed an Ambisonics-based M4L device/patch pair. It's not in a stable release version at the moment, so I'd rather not publish it here, but if you are interested you can e-mail me (jan[at]janmech[dot]de). It works, and I produced a multi-channel audio installation with it.
The actual spatialization is done by a Max patch that receives the audio via external routing (e.g. Soundflower) along with the spatial position information for each Live channel. All movements can be recorded as automation in Live. As ambisonics is the basis, it's pretty flexible with regard to loudspeaker setup, number of speakers and input channels.
Let me know if you are interested.
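For readers curious what the ambisonic approach buys you, here is a minimal first-order (horizontal B-format) sketch in Python — not taken from Jan's device, just the textbook encode-and-rotate idea: movement is applied to the encoded sound field itself, independently of how many speakers eventually decode it.

```python
import math

def encode_foa(s, azimuth_deg):
    """First-order Ambisonics (horizontal B-format) encode of a mono
    sample s at a given azimuth. A sketch of the general idea only."""
    az = math.radians(azimuth_deg)
    return (s / math.sqrt(2),       # W: omnidirectional component
            s * math.cos(az),       # X: front-back
            s * math.sin(az))       # Y: left-right

def rotate_foa(w, x, y, angle_deg):
    """Rotate the whole sound field. The speaker layout is untouched,
    which is why ambisonics is so flexible about speaker counts."""
    a = math.radians(angle_deg)
    return (w,
            x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

Rotating an encoded source by 90° gives the same B-format signal as encoding it at the rotated azimuth in the first place; a decoder for the actual speaker ring is then a separate, layout-specific stage.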
- This reply was modified 2 months ago by Jan.
Nice work Jan, what about minimal latency vs stability?
Also Salvador, here is a quadra-panner I did using only empty Audio Racks.
Latency is a problem, as the I/O vector sizes have to be rather large when using Soundflower. I haven't tried JackAudio as a routing interface, which some people claim is superior to Soundflower in some respects. My bet is that for using it with many speakers/channels in a live performance, hardware routing would be necessary. It served me as a production tool; in the end I recorded the entire piece, so latency wasn't such a problem.
The device itself is stable, but there are still some smaller bugs, and the distance-filter option is not yet where it should be… There is also a joystick control built in, which I used to record more organic movements, but that part is only designed to work with the joysticks I had… These are the main reasons why I don't consider it done.
Regarding the earlier part of this thread about equal-power panning using Live sends:
I did some work on this with Live 8 to make an equal-power 8-channel panner that compensated for the response curve of the Live send pot. Details at:
I don't know if the response has changed for Live 9.
Sounds great; hope JackAudio improves latency etc.
@Richard Garrett: thanks, I'm looking at your solution, it seems interesting. We'll see how it goes. I'd still like to find an arithmetic solution, but it seems hard to find.
@Jan: I emailed you; I'm interested in having a look at your patch.