But my answer was a bit unclear. What I wanted to say is that it is probably too complicated a task to generalize this and build a set of classes for a programming language.
I mean, we are talking about hundreds of channels and millions of possible arrangements of them - and no two locations look alike.
If I remember right, Robert from Monolake made a simple controller for something like this.
But it was for a 4-speaker setup, which makes it a study only, useless in any conceivable real-life situation - and it already looked pretty complicated.
Probably what is most challenging is a good user interface. The Game of Life foundation makes available the software developed for its own system (written in SuperCollider by Wouter Snoei and Miguel Negrao), which, if I'm well informed, will work with any arbitrary loudspeaker setup:
Hi JVKR -
Yes, thanks - maybe it would be better to write my own software in Max - it might be quicker. (Edited to add: after looking at that paper I would have to learn several years of math first, and I have a limited timescale - so probably not quicker!) I have been trying to download the Game of Life software but can't find the Windows installation files in the zip file - I guess I'll have to use a Mac at uni or install Linux, which would mean messing around with Jack.
Roman - it is not necessarily hundreds of channels - you could use as few as 16 - my system has 32.
The speaker configurations are not that complicated, and there are not millions of them - generally you use straight line arrays. In my case one straight line, though you could enclose the listener in a box, have a line on either side, or even two line arrays one above the other for 3D WFS. However, most programs (there are several available already) let you specify the speaker positions first (a bit like the ICST ambisonics plugins).
Yeah, I was reading about Monolake's setup - I didn't realise he only used 4 speakers. I don't think you could really do WFS with so few speakers.
For some reason it was clear to me that WFS is something you'd only do to a whole room such as a theatre or cinema, but it seems I am not up to date.
I wasn't aware of what you call a line setup, but I just found some examples by searching the web.
So there are two different tasks in question: either moving a single source (sound design/mixing), or encoding an existing mix (performing/playing).
The latter usually means that the source was 5.1 and you have to directly recode that to something like 600.0, which is probably very, very complicated. ;)
For moving sources around in a room where speakers are arranged in a circle or a square, it should be possible to use something like a "regular" spatialisation expression to get the gain parameters, and in the case of a circular setup you can probably live without any frequency filters at all - or, if you are anal about the spectrum, maybe something like an inverse binaural filter effect will work.
The doppler effect and/or Haas delay involved when moving sources should also be doable with a simple geometric speaker setup.
But I bet that nobody has ever written it in Max/MSP, so we're looking forward to seeing yours. ;)
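The gain-plus-delay idea could be sketched roughly like this - in Python rather than as a Max patch, with the speaker layout, the inverse-distance gain law, and the speed-of-sound constant all chosen purely for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, room temperature air

def pan_params(source, speakers):
    """For each speaker, derive a gain from inverse distance and a
    delay (seconds) from distance / speed of sound. The geometric
    delay gives you the Haas effect - and doppler, if the source
    moves and the delays are interpolated smoothly."""
    params = []
    for sx, sy in speakers:
        d = math.hypot(source[0] - sx, source[1] - sy)
        d = max(d, 0.01)            # avoid blowing up at zero distance
        gain = 1.0 / d              # simple inverse-distance gain law
        delay = d / SPEED_OF_SOUND  # geometric propagation delay
        params.append((gain, delay))
    return params

# hypothetical 4-speaker square setup, source near the front-left corner
speakers = [(-1, 1), (1, 1), (-1, -1), (1, -1)]
for gain, delay in pan_params((-0.5, 0.5), speakers):
    print(f"gain {gain:5.2f}  delay {delay * 1000:5.2f} ms")
```

In a real patch you would smooth both parameters per speaker to avoid zipper noise, but the geometry is the whole trick.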
Jesus, I don't know - I can program Max to a reasonably high level, but I am not a mathematician, and I have a very limited timescale before I have to begin production. If there was somebody who understood the maths who could just give me instructions, then I could program them easily enough. I was hoping to spend more time actually creating and composing in this project than getting bogged down in technical stuff.
I understand the principle of delays and Huygens' principle - but the filtering involved to imitate near and far sound sources, doppler, etc. is where it will get complicated.
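The Huygens part really is just delays and gains per speaker: each driver in the line array re-radiates the source with a delay proportional to its distance from the virtual source. A minimal sketch, using a 1/sqrt(distance) gain as a rough stand-in for the usual WFS amplitude term - the array geometry here is invented:

```python
import math

C = 343.0  # speed of sound, m/s

def wfs_driving(source, array_x, array_y=0.0):
    """Per-speaker (delay, gain) for a virtual source behind a straight
    line array at height array_y. Each speaker acts as a Huygens
    secondary source: delayed by travel time from the virtual source,
    attenuated roughly as 1/sqrt(distance) (the usual 2.5D-style
    approximation - real WFS driving functions add a filter too)."""
    out = []
    for x in array_x:
        r = math.hypot(x - source[0], array_y - source[1])
        out.append((r / C, 1.0 / math.sqrt(max(r, 1e-3))))
    return out

# hypothetical 8-speaker line at y = 0, 0.5 m spacing, source 2 m behind it
array_x = [i * 0.5 for i in range(8)]
for delay, gain in wfs_driving((1.75, -2.0), array_x):
    print(f"{delay * 1000:6.2f} ms  gain {gain:4.2f}")
```

The curvature of the delay profile across the array is what reconstructs the wavefront - the hard part the papers deal with is the extra filtering and the truncation artifacts of a finite array.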
Actually, the way we estimate sound distance is not just gain: distant sounds have less top end (air absorption etc.) and a greater reverb-to-source ratio, while closer sounds are more bassy, and so on. So frequency filters are quite important in creating distance/proximity illusions - it may seem anal, but when all these little things add up it's the difference for the audience between "hmm, it's OK" and "WOW!"
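Those cues could be roughed out as simple mappings from distance to a gain, a lowpass cutoff, and a reverb wet/dry mix - the constants below are made-up illustrative numbers, not measured air-absorption data:

```python
def distance_cues(d, d_ref=1.0):
    """Very rough distance cues for a source d metres away:
    - inverse-distance gain,
    - a lowpass cutoff that drops with distance (fake air absorption),
    - a reverb wet fraction that rises with distance.
    All curves and constants are invented for illustration."""
    d = max(d, d_ref)
    gain = d_ref / d
    cutoff_hz = 20000.0 / (1.0 + 0.1 * (d - d_ref))  # farther -> duller
    wet = min(0.9, 0.1 + 0.05 * (d - d_ref))         # farther -> more reverb
    return gain, cutoff_hz, wet

for d in (1.0, 5.0, 20.0):
    gain, cutoff, wet = distance_cues(d)
    print(f"{d:5.1f} m  gain {gain:4.2f}  lowpass {cutoff:7.0f} Hz  wet {wet:4.2f}")
```

Driving a per-source lowpass filter and reverb send from the same distance value the panner uses keeps all the cues consistent, which is what sells the illusion.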
I still don't know whether they are used much in wave field synthesis - I'm getting through some papers now. The best explanation I've seen so far of how wave field synthesis works is actually an animation - imagine a line of speakers along the top of the video:
I don't think there'd be much point in directly translating a 5.1 mix - you'd be better off just starting with the original tracks and panning them where you want with a WFS panning object.
Anyway we'll see how it all pans out... pun intended.
I recall seeing something about a wfs~ external on the PD list a few years back. It was done at UCSD, although I can't remember by whom. It might help to look at their source code as a starting point.