WFS DIY
This is a Wave Front Synthesis (WFS) system built entirely in Max. It is a low-latency processor aimed at live productions, and it aims to bring down the cost of such a system. It can handle up to 64 inputs and 32 outputs if your computer allows it, and it can be configured to your needs and your processing hardware. It has some rather unique functions to cope with specific live-performance issues: for instance, when a loud source comes close to a speaker, the level on that speaker is lowered to avoid over-amplification. It also has different level maps that can be used, for example, to open and close channels depending on source positions.
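To make the proximity behavior concrete, here is a minimal Python sketch of a gain-ducking function of that kind. It is not the actual Max/gen code, and the 2 m radius and -12 dB depth are hypothetical tuning values:

    import math

    def proximity_duck(source_xy, speaker_xy, radius=2.0, max_cut_db=-12.0):
        """Cut a speaker's level (in dB) as a source gets closer than
        radius meters; the radius and depth are illustrative values."""
        d = math.hypot(source_xy[0] - speaker_xy[0],
                       source_xy[1] - speaker_xy[1])
        if d >= radius:
            return 0.0                          # no cut outside the radius
        return max_cut_db * (1.0 - d / radius)  # fades to max_cut_db at 0 m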
It comes with documentation in French and English. Manual positioning of the sources can be done with Lemur. There are some templates with macros for QLab to build triggers that can make changes in the WFS system via OSC.
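As an illustration of such a trigger, a QLab macro (or any script) could send a position change with a few lines of Python using the python-osc package. The /wfs/source/1/xy address and port below are illustrative; the actual OSC namespace is in the manual:

    # pip install python-osc
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # host/port of the Max patch (adjust)

    # Move source 1 to (x, y) in meters; address pattern is hypothetical.
    client.send_message("/wfs/source/1/xy", [1.5, -3.0])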
More theoretical and practical info on the blog.
Hell of a job ;-)
Congratulations, and thank you for being open source!
H
Thanks!
I'll be at the international multichannel sound forum at the Théâtre de Chaillot in early November, if you happen to be around... I'm going undercover. I'm not going to ruin their business with my big patch. ;)
Hats off!
The GUI, the iPad: a "plain" computer manages to do it. Good news! Thanks
And about the info: Chaillot, I've registered, I should be able to be there. Looking forward to crossing paths with you, hope so
michel
Thanks Michel!
There are other odds and ends for QLab in preparation, using a small key pad (El Gato Stream Deck) to create steps in the cue list, so you don't have to type OSC commands in the middle of a creation session.
I also have some changes to the distance attenuation planned, for finer control.
++
It's actually called Wave Field Synthesis. Nice project!
The Game of Life Foundation made open source software for WFS (programmed in SuperCollider).
https://sourceforge.net/projects/wfscollider/
http://gameoflife.nl/en
In case you're interested in this subject.
Wow! This is really, really awesome, well done and thank you for providing it! :D I'm literally in the process of implementing a WFS system using Max 7 and IRCAM's SPAT package. It will be a prototype for a WFS system we're developing using distributed SHARC processors on an AVB network. Each processor handles the delay, gain and filtering processing for 10 individual speakers, and these are calculated by a central server. We're hoping that this will provide an easily scalable and powerful WFS tool for DIYers and enthusiasts! :D
I have a few questions regarding your system. It would really help me if you'd be willing to take a bit of time to answer them :)
1. What processing limits did you encounter during the project?
2. What processor did you use for development?
3. What was the max delay that you implemented and how did you determine it?
4. What speaker configuration did you use? 32 channels seems a bit small to me if the speakers need to be extremely close together.
5. What resources helped you develop your project? In particular, any other open source implementations? I've seen WFSCollider (but have no background in SuperCollider whatsoever), wfs-designer and wfs-pd by toshiro. Are there any others I've missed? I'm still a bit unsure of certain intricacies, like which speakers (on a linear array) should be used when synthesizing the wavefront (i.e. which arrays are active and which aren't when rendering sources), as well as how to encode directional information, apply the correct tapering windows for linear arrays, calculate incidence gains correctly, and choose gain-limiting functions.
6. I'm hoping the Max prototype I build will handle 32x80 processing blocks (each block applies filtering, gain and delay to the signal). In total that's 2560 blocks, which is larger than the 2048 you get from 64x32. Do you think reducing the number of input channels would allow for an increase in output channels? I feel my 2560 might be unrealistic for a small Mac mini with a 2.6 GHz i5 processor, what do you think?
If you can answer these I'd greatly appreciate it. Otherwise, well done again on creating such a cool open source project. Very excited to know other DIYers are working on WFS implementations :)
All the best,
Hi Sean,
For live shows, as in theater or music, you can't really have that many speakers, and even if you don't fully recreate the wave front the result is quite good. But don't expect to place sounds in front of the array with such a small number. So basically my biggest limit is my wallet, since this is all self-funded in my case.
The 32-channel limit is purely arbitrary. You can change this easily in my project if you like; it will mess up the UI a bit.
I'm going to start learning FPGA code on a Digilent Zybo Z7 dev board.
Currently I use an Intel i7 7700K at 4.8 GHz. It runs at about 30~40% for 32 inputs and 18 outputs with hardly any dropped buffers at 256 samples on an RME HDSPe MADI FX. It runs at 64 samples with a similar load, but it's more sensitive to OS background routines. This is under Windows 10. The same hardware running as a hackintosh is at 60% and not so stable, whatever the buffer setting. A friend of mine has his own patch, less interface-intensive, and has rather good benchmark figures too. If I split the UI from the processing I get more or less the same performance figures, and it's less risky to start some CPU-intensive application that way.
The max delay is 96000 samples, so 1 s @ 96 kHz or 2 s @ 48 kHz. It's in the gen code.
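As a rough illustration (my Python paraphrase, not the gen code itself), the delay for a virtual source could be derived from its distance and clamped to that buffer length; the 343 m/s speed of sound is an assumption:

    C = 343.0          # speed of sound in m/s (assumed)
    SR = 96000         # sample rate in Hz
    MAX_DELAY = 96000  # delay-line length in samples, as in the gen code

    def distance_to_delay_samples(distance_m, sr=SR):
        """Propagation delay in samples for a source distance_m meters
        away, clamped to the available delay-line length."""
        return min(int(round(distance_m / C * sr)), MAX_DELAY - 1)

    # At 96 kHz the 96000-sample line gives 1 s of headroom,
    # i.e. roughly 343 m of virtual source distance.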
There are some resources on the blog. Some of the pages are still only in French at the moment, but I'm sure Google Translate can help you a bit. I mostly spent time on some smaller but regular shows to fine-tune rather coarse approaches. I went from this to a Pd patch a year ago, then moved to Max because it offered lower latencies for sound reinforcement of live sources. This is actually my very first Max project. :)
As for which speaker or which level for each source, I use some level damping proportional to distance. It's not the actual physical reality, but this way people off axis still get some sound. I find that about -0.7 dB/m, with speakers spaced about 2 m from each other, gives clear localization.
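In code, that linear-in-dB damping might look like the following sketch (again my Python paraphrase, not the patch itself):

    def distance_gain(distance_m, slope_db_per_m=-0.7):
        """Attenuation that is linear in dB with distance, instead of
        the physical 1/r law, so off-axis listeners still get level."""
        gain_db = slope_db_per_m * distance_m
        return 10.0 ** (gain_db / 20.0)  # dB -> linear amplitude

    # e.g. a source 10 m away is damped by -7 dB, about 0.45 in amplitude.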
In my usage I don't have any possibility to rig speakers on the sides. Venues are too different from one another: in a classical opera house you have boxes all around, and there is simply no way to rig speakers on the sides and behind the audience. So I keep things simple: it's limited to frontal arrays, either in a line or a curve.
Keep me posted about the SHARC AVB system you're working on.
And if you'd like to join:
https://www.facebook.com/groups/884013488430058
There have been a few updates since the last post here. You now have more control over the directivity of inputs and outputs.
More updates since the last post here. It's still at the same URL.
Now with more control of speakers as groups, and filters/EQ on the outputs coming soon.
There are some dedicated reverb feeds and returns computed through the WFS system. No reverb is integrated into the processor itself yet.