Using B-Format Audio Inside ICST Ambisonics Objects

RonanQ:

Back for another round of questions with regard to these objects.

I'm looking to play back the ambisonic B-format example recordings from the SoundField website through the ICST ambisonics objects and wanted to ask how I would go about doing so. The recordings I have are already encoded as B-format audio, so would I be correct in assuming that I can just skip the ambiencode~ object and route four sfplay~ or groove~ etc. objects directly into the four inputs of a first-order system, with the channels in WXYZ order?

Any other advice on things I should watch out for (settings etc.) would be much appreciated too :).

Trond Lossius:

Hi,

This is correct. Also make sure that you use the Furse-Malham mode when decoding.
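
(For reference: at first order the Furse-Malham convention means the channel order is W, X, Y, Z and W carries a -3 dB (1/√2) weighting. The numpy sketch below is just a textbook "basic" horizontal decode to an assumed square layout, handy for sanity-checking a file outside Max; it is not a description of what the ICST decoder does internally, and the speaker angles and gain flavour here are assumptions.)

```python
import numpy as np

# Textbook basic (mode-matching) first-order horizontal decode of
# Furse-Malham B-format to a regular square layout. For sanity-checking
# only -- not the ICST decoder's internal algorithm; exact gains differ
# between basic, max-rE and in-phase decoder flavours.

speaker_az = np.radians([45.0, 135.0, -135.0, -45.0])   # FL, BL, BR, FR (assumed layout)

def decode_fuma_first_order(W, X, Y):
    """W, X, Y: 1-D sample arrays in FuMa convention (W weighted by 1/sqrt(2))."""
    n = len(speaker_az)
    feeds = [(np.sqrt(2.0) * W
              + 2.0 * np.cos(az) * X
              + 2.0 * np.sin(az) * Y) / n
             for az in speaker_az]
    return np.stack(feeds)   # shape: (n_speakers, n_samples)
```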

Cheers,
Trond

RonanQ:

Good to know, thanks. Another question I've just thought of: what about A-format material that has been recorded with a SoundField SPS200 microphone? I have a few recordings I've made that I'd like to use here, but I'm unsure whether these objects can encode audio that already contains directional information. How would I do this (or can it even be done)?

Trond Lossius:

Hi,

You'll first need to use the free VST plugin provided by SoundField to encode the A-format to B-format. Make sure you render to a 6-channel signal, as the VST expects four channels in (A-format) but returns the B-format on channels 1, 2, 5 and 6. Afterwards you'll need to dispose of the empty channels 3 and 4. I currently end up doing this in the now-discontinued WaveEditor program, and it's all surprisingly convoluted.
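
(If you'd rather not depend on WaveEditor, the channel clean-up can also be scripted. The Python sketch below, using the soundfile library, just keeps channels 1, 2, 5 and 6 of the 6-channel render and writes a 4-channel B-format file; the file names are placeholders, and the channel layout is simply the one described above, so verify it against your own renders.)

```python
import soundfile as sf

# Keep channels 1, 2, 5 and 6 (1-based) of the 6-channel render -- i.e.
# drop the empty channels 3 and 4 -- and write a 4-channel B-format file.
# File names are placeholders; the resulting channel order is assumed to
# be W, X, Y, Z, so check it against your own renders.
data, sr = sf.read("sps200_render_6ch.wav")   # data shape: (frames, 6)
b_format = data[:, [0, 1, 4, 5]]              # channels 1, 2, 5, 6
sf.write("sps200_bformat_4ch.wav", b_format, sr)
```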

You might want to search my blog; I believe I posted about the workflow in the past while I was battling with it.

Cheers,
Trond

RonanQ:

To explain this a bit better, the reason I asked is that I wanted to compare ambisonics recorded with a microphone against ambisonics synthesized via the Max objects. Surprisingly, the synthesized version is currently performing better with a mono source, but I wanted to look into encoding the material as well as decoding it in Max. Have you had any experience with this?

Either way, if you could send me a link to your blog, that would be good; it would make for an interesting read :) I'm also trying to track down a dialogue recording, if possible. Would you know where I could find one? Ideally it would just state the position the speaker is talking from in relation to the listener (front, front left, etc.), in both mono and A-format, so I can compare the two through the Max objects and the SoundField VST. I've recorded some myself, but I don't know if they're working properly, as I can't seem to get them to work via the SoundField VST plugin even though they were recorded with the SoundField SPS200 microphone =/ It could be that the microphone was damaged when I rented it.

Trond Lossius:

I'd basically think of SoundField recordings and synthesised sound fields as two different beasts. The SoundField is fantastic for capturing surround ambiences, and when listening to the recordings on a surround rig I often find that they can be used for what I'd call a sonic scenography.

Synthesised ambisonic sound fields will tend to be much less rich and diverse in terms of the directions sound comes from (since the sound comes from a limited number of discrete directions rather than being a continuous field), but if you synthesise a mono source (e.g. a spoken voice), the location or direction of that voice will be much more precise than if the voice is captured in a room using the SoundField. So both have their uses.
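
(For comparison, first-order encoding of a mono source is essentially just a set of gains derived from the intended direction. The numpy sketch below shows the standard first-order Furse-Malham panning equations; the angle conventions, azimuth counter-clockwise from the front and angles in radians, are assumptions, and this is only meant as a reference point, not as a description of what ambiencode~ does internally.)

```python
import numpy as np

# Standard first-order Furse-Malham encode of a mono signal at a given
# direction -- a reference point for comparing a synthesised source with
# a microphone recording. Angle conventions are assumed (azimuth
# counter-clockwise from front, elevation upwards, both in radians).

def encode_fuma_first_order(mono, azimuth, elevation=0.0):
    W = mono / np.sqrt(2.0)                         # FuMa weights W by 1/sqrt(2)
    X = mono * np.cos(azimuth) * np.cos(elevation)
    Y = mono * np.sin(azimuth) * np.cos(elevation)
    Z = mono * np.sin(elevation)
    return np.stack([W, X, Y, Z])                   # WXYZ channel order
```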

I was writing up some notes on different spatialisation techniques the other day. They're unfortunately in Norwegian, but hopefully Google Translate might be of some help. The notes are based on my practical experience with different spatialisation techniques over the past 12 years. You can find them here:

Cheers,
Trond