I am trying to create an "auralization" system that allows for the acoustical modeling of physical spaces. To do this, a 3D computer model of a space is built, and an artificial impulse response (IR) is generated from that model (e.g. using CATT-Acoustic or Odeon). From there I want to convolve this impulse response with an anechoic audio file to model how the anechoic audio would sound if it were played in the modeled space.
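For reference, the offline version of this step is just FFT-based linear convolution of the dry signal with the IR. A minimal sketch in Python/numpy (function name and toy signals are mine, purely illustrative):

```python
import numpy as np

def auralize(anechoic, ir):
    """Convolve a dry (anechoic) signal with a room impulse response.

    FFT-based linear convolution: zero-pad both signals out to the full
    output length, multiply the spectra, and transform back.
    """
    n = len(anechoic) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()   # next power of two >= n
    wet = np.fft.irfft(np.fft.rfft(anechoic, nfft) * np.fft.rfft(ir, nfft),
                       nfft)[:n]
    return wet

# Toy example: a unit impulse played through a two-tap "room"
dry = np.array([1.0, 0.0, 0.0])
ir = np.array([0.5, 0.25])
wet = auralize(dry, ir)   # same result as np.convolve(dry, ir)
```

This is fine for rendering files to disk; the real-time case below is harder because you cannot wait for the whole input before transforming.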
I am now stuck trying to create a patch that performs real-time direct convolution of a multi-channel impulse response with an input audio signal. This is basically a convolution reverb. I have looked into the HISS toolbox and am hoping to build something similar to its multiconvolve~ object, but with more control than multiconvolve~ provides. I think this should be achievable with pfft~, but I am struggling with the details. I am currently storing the IR in a buffer~.
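To make the real-time problem concrete: the usual scheme behind objects like multiconvolve~ is uniformly partitioned convolution, where the IR is split into equal blocks, each input block is FFT'd once, and past input spectra are multiplied with the partition spectra and overlap-added (a "frequency-domain delay line"). A hedged sketch in Python/numpy, offline but block-by-block, to show the bookkeeping (all names are mine; this is not how pfft~ itself is patched):

```python
import numpy as np

def partitioned_convolve(x, ir, block=4):
    """Uniformly partitioned overlap-add convolution, block by block.

    Each input block's spectrum is kept in a delay line and multiplied
    with every IR-partition spectrum, each product landing one block
    later per partition index.  Illustrative sketch only.
    """
    nfft = 2 * block                        # >= 2*block-1, so no wraparound
    nparts = -(-len(ir) // block)           # ceil-divide: number of partitions
    ir_pad = np.concatenate([ir, np.zeros(nparts * block - len(ir))])
    H = [np.fft.rfft(ir_pad[p*block:(p+1)*block], nfft) for p in range(nparts)]

    nblocks = -(-len(x) // block)
    x_pad = np.concatenate([x, np.zeros(nblocks * block - len(x))])
    out = np.zeros((nblocks + nparts) * block)

    X = []                                  # spectra of past input blocks
    for b in range(nblocks + nparts - 1):   # extra passes flush the tail
        if b < nblocks:
            X.insert(0, np.fft.rfft(x_pad[b*block:(b+1)*block], nfft))
        else:
            X.insert(0, np.zeros(nfft // 2 + 1, dtype=complex))
        acc = np.zeros(nfft // 2 + 1, dtype=complex)
        for p, Xb in enumerate(X[:nparts]): # frequency-domain delay line
            acc += H[p] * Xb
        out[b*block : b*block + nfft] += np.fft.irfft(acc, nfft)  # overlap-add
    return out[:len(x) + len(ir) - 1]
```

Note that pfft~ handles the FFT/overlap framing for you, but the per-partition spectral multiply-and-accumulate inside the sub-patch is the part you would have to build yourself (e.g. with the IR partitions held in a buffer~).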
I have recently learned Max to try to accomplish this task, so I am still a relative novice.
Thank you in advance!