This isn't Max/MSP specific, but I thought this community, more than the others I'm connected to, would be able to help.
When dealing with stereo audio, panning "moves" the signal strength between the two channels, so a hard L pan sends the entire signal out of the L earbud/speaker and leaves silence in the R earbud/speaker.
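For reference, here's roughly what I mean, as a quick Python sketch of a constant-power pan (the function and values are just my own illustration):

```python
import numpy as np

def pan_stereo(mono, pan):
    """Constant-power pan: pan = -1.0 is hard L, 0.0 is center, +1.0 is hard R."""
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono         # hard L: cos(0) = 1, hard R: cos(pi/2) = 0
    right = np.sin(theta) * mono        # hard L: sin(0) = 0, hard R: sin(pi/2) = 1
    return np.stack([left, right], axis=-1)

# Hard L pan: the whole signal in the left channel, silence in the right.
signal = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
stereo = pan_stereo(signal, -1.0)
```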
I want to replicate that concept with audio encoded into a YouTube 360 video. Say the camera is located in the center of a room with four walls, and each wall is playing a different video. I want the listener to be able to rotate around the 360 space and have the sound of the wall they're facing treated as a "hard pan" - they hear the audio of that wall, and all of the other walls are silent or at least very distant.
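To be concrete about the behaviour I want, something like this per-wall gain curve, where the wall you face is at full level and everything else falls away almost immediately (this is just me sketching the idea in Python, not anything YouTube actually exposes):

```python
import numpy as np

# Walls at 0, 90, 180 and 270 degrees around the camera.
WALL_ANGLES = np.radians([0.0, 90.0, 180.0, 270.0])

def wall_gains(head_yaw, sharpness=8.0):
    """Gain for each wall given the listener's yaw (radians).

    The wall being faced gets gain ~1; off-axis walls fall off very quickly,
    so the opposite wall is effectively silent. 'sharpness' controls how
    narrow the "hard pan" region is.
    """
    diff = WALL_ANGLES - head_yaw
    # Raised cosine narrowed by an exponent: 1 on-axis, near 0 everywhere else.
    return (0.5 * (1.0 + np.cos(diff))) ** sharpness

print(wall_gains(np.radians(0)))    # facing wall 0: roughly [1, 0.004, 0, 0.004]
print(wall_gains(np.radians(90)))   # facing wall 1: the pattern rotates with the head
```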
I've experimented with this and done a bunch of research, and it seems like it might not be possible - I know I can attach a spatial (ambisonic) audio track to the video that makes whatever the viewer is currently facing louder, but I can't find a way to attenuate everything else down to silence, or at least to an incredibly faint background. I feel like binaural ambisonics just isn't designed for this kind of thing: the point of immersive audio is for the ambisonic sphere to sound as realistic as possible, and this breaks that.
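Part of what I ran into while researching: as far as I can tell, YouTube's spatial audio is first-order ambisonics, and a first-order decode steered toward one direction is basically a first-order virtual microphone pattern, so its off-axis rejection is limited. A rough sketch of what I mean (the pattern formula is the standard first-order one, the numbers are mine):

```python
import numpy as np

def first_order_pattern(angle_deg, a=0.5):
    """Polar response of a first-order virtual mic: a + (1 - a) * cos(angle).
    a = 0.5 is a cardioid, roughly the tightest 'focus' you can steer from a
    first-order ambisonic (FOA) signal toward one direction."""
    return a + (1.0 - a) * np.cos(np.radians(angle_deg))

for angle in [0, 90, 180]:
    g = abs(first_order_pattern(angle))
    db = 20 * np.log10(max(g, 1e-6))
    print(f"{angle:3d} deg off-axis: gain {g:.2f} ({db:6.1f} dB)")

# 0 deg:   gain 1.00   <- the wall you face
# 90 deg:  gain 0.50   <- the side walls are only about 6 dB down
# 180 deg: gain 0.00   <- only the exact opposite direction actually nulls out
```

So with first order, the neighbouring walls are only around 6 dB quieter than the one you're facing, which lines up with what I'm hearing - the other walls never get anywhere near silent.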
Does anyone have any insight as to how I could potentially achieve this, or am I trying to fit a square peg into a round hole?