The principle of wavefield synthesis is simple and easily realized in Max. What you need per loudspeaker is an interpolating delay line, a filter, and amplitude control. However, you need to get the math right. This thesis by a student at the Delft University of Technology, where the first research was done, is quite insightful:
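To make the "delay line plus amplitude control per loudspeaker" idea concrete, here is a minimal sketch of how you might compute the per-speaker delays and gains for a virtual point source behind a linear array, following Huygens' principle. This is a deliberately simplified model (plain 1/r amplitude law, no spectral prefilter or amplitude tapering, which a proper WFS driving function would include); all names here are my own for illustration.

```python
import math

C = 343.0  # speed of sound, m/s

def wfs_params(source, speakers):
    """For a virtual point source behind a linear array, compute the
    per-speaker (delay_seconds, gain) pairs that approximate the
    wavefront the source would produce (Huygens' principle).
    `source` and each speaker position are (x, y) tuples in metres."""
    params = []
    for sx, sy in speakers:
        r = math.hypot(sx - source[0], sy - source[1])
        delay = r / C               # more distant speakers fire later
        gain = 1.0 / max(r, 0.1)    # 1/r law, clamped near a speaker
        params.append((delay, gain))
    return params

# Example: 8-speaker line array at y = 0, spaced 0.5 m,
# with a virtual source 2 m behind the array
speakers = [(i * 0.5, 0.0) for i in range(8)]
delays_gains = wfs_params((1.75, -2.0), speakers)
```

In Max the resulting delays would feed interpolating `tapout~` (or similar) taps, one per output channel.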
Probably the most challenging part is a good user interface. The Game of Life foundation makes available the software developed for its own system (written in SuperCollider by Wouter Snoei and Miguel Negrao) which, if I'm well informed, will work with any arbitrary loudspeaker setup:
Yes, thanks - maybe it would be better to write my own software in Max - it might be quicker. (Edited to add: after looking at that paper I would have to learn several years of math first, and I have a limited timescale - so probably not quicker!) I have been trying to download Game of Life's software but can't find the Windows installation files in the zip file - I guess I'll have to use a Mac at uni or install Linux, which would mean messing around with JACK.
Roman - it is not necessarily hundreds of channels - you could use as few as 16 - my system has 32.
The speaker configurations are not that complicated, and there are not millions of them - generally you use straight-line arrays - in my case one straight line, though you could enclose the listener in a box, have a line on either side, or even two line arrays one above the other for 3D WFS. However, most programs (there are several available already) allow you to specify the speaker positions first (a bit like the ICST ambisonics plugins).
Yeah, I was reading about Monolake's setup - I didn't realise he only used 4 speakers - I don't think you could really do WFS with so few speakers.
Jesus, I don't know - I can program Max to a reasonably high level - but I am not a mathematician, and I have a very limited timescale before I have to begin production. If there were somebody who understood the maths who could just give me instructions, then I could program them easily enough. I was hoping to spend more time actually creating and composing in this project than getting bogged down in technical stuff.
I understand the principle of delays and Huygens' principle - but the filtering involved to imitate near and far sound sources, Doppler, etc. is where it will get complicated.
Actually, the way we estimate sound distance is not just gain: distant sounds have less top end (air absorption etc.) and a greater reverb-to-direct-sound ratio, while closer sounds are more bassy, and so on - so frequency filters are quite important in creating distance/proximity illusions. It may seem anal, but when all these little things add up, it's the difference for the audience between "hmm, it's OK" and "WOW!"
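As a rough sketch of those distance cues in code: a 1/r gain plus a one-pole lowpass whose cutoff falls with distance (imitating high-frequency air absorption). The cutoff numbers here are purely illustrative, not measured absorption data, and the function names are my own; in Max you would do the same with a `lores~` or `onepole~` per source.

```python
import math

SR = 44100.0  # sample rate

def distance_cues(distance_m):
    """Rough perceptual distance cues for a virtual source:
    gain falls off as 1/r, and a one-pole lowpass coefficient
    darkens the sound as the source moves away.
    (A fuller model would also raise the reverb/direct ratio.)"""
    r = max(distance_m, 1.0)
    gain = 1.0 / r
    cutoff = 20000.0 / r  # illustrative: 20 kHz at 1 m, 1 kHz at 20 m
    # coefficient for the one-pole lowpass y[n] = (1-a)*x[n] + a*y[n-1]
    a = math.exp(-2.0 * math.pi * cutoff / SR)
    return gain, a

def lowpass(samples, a):
    """Apply the one-pole lowpass; a closer to 1 means darker sound."""
    y, out = 0.0, []
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

near_gain, near_a = distance_cues(1.0)
far_gain, far_a = distance_cues(20.0)
```

The point is simply that the far source is both quieter (smaller gain) and duller (heavier filtering) than the near one.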
I still don't know if they are used much in wavefield synthesis - I'm getting through some papers now. The best explanation I've seen of how wavefield synthesis works so far is actually an animation - imagine a line of speakers lining the top of the video:
I recall seeing something about a wfs~ external on the Pd list a few years back. It was done at UCSD, although I can't remember by whom. It might help for you to look at their source code as a starting point.