Direct Impulse Response Convolution

    Mar 13 2019 | 6:16 pm
    I am trying to create an "auralization" system for the acoustical modeling of physical spaces. To do this, a 3D computer model of a space is built and an artificial impulse response (IR) is generated from that model (e.g. using CATT-Acoustic or Odeon). From there I want to convolve this impulse response with an anechoic audio file to simulate what the anechoic audio would sound like if it were played in the modeled space.
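    For anyone reading along, the offline version of this step is just a linear convolution of the anechoic signal with the IR. Here is a minimal Python/NumPy sketch for illustration (synthetic signals stand in for the real audio files; sample rate and tap positions are made up):

```python
# Offline auralization sketch: convolve a stand-in "anechoic" signal with a
# synthetic impulse response. In practice the IR would come from CATT-Acoustic
# or Odeon and both signals would be read from audio files.
import numpy as np
from scipy.signal import fftconvolve

sr = 48000                        # assumed sample rate
anechoic = np.random.randn(sr)    # 1 s of stand-in anechoic audio
ir = np.zeros(sr // 2)            # 0.5 s impulse response
ir[0] = 1.0                       # direct sound
ir[int(0.03 * sr)] = 0.5          # a single early reflection at 30 ms

# Full linear convolution: length = len(anechoic) + len(ir) - 1
wet = fftconvolve(anechoic, ir)
print(wet.shape)  # (71999,)
```

    With a real IR the result is the "wet" auralized signal; the toy two-tap IR above just delays and attenuates a copy of the input.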
    I am now stuck on a patch that performs real-time direct convolution of a multi-channel impulse response with an input audio signal; this is essentially a convolution reverb. I have looked into the HISS toolbox and hope to build something similar to its multiconvolve~ object, but with more control than multiconvolve~ provides. I think this should be achievable with pfft~, but I am struggling with the details. I am currently storing the IR in a buffer~.
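    The standard technique behind objects like multiconvolve~ (and what a pfft~-based patch would approximate) is uniformly partitioned frequency-domain convolution: the IR is split into equal-length partitions, each partition's spectrum is precomputed, and every incoming audio block is multiplied against all partitions via a frequency-domain delay line, then overlap-added. A NumPy sketch of that idea (all names and block sizes are illustrative, not Max code):

```python
# Uniformly partitioned overlap-add convolution with a frequency-delay line.
# Each input block costs one forward FFT, one multiply-accumulate per IR
# partition, and one inverse FFT, which is what makes long IRs feasible in
# real time.
import numpy as np

def partitioned_convolve(x, ir, block=256):
    nfft = 2 * block
    nparts = -(-len(ir) // block)                 # ceil(len(ir) / block)
    irp = np.pad(ir, (0, nparts * block - len(ir)))
    # Pre-transform each IR partition once.
    H = [np.fft.rfft(irp[k*block:(k+1)*block], nfft) for k in range(nparts)]
    # Frequency-domain delay line of past input-block spectra.
    fdl = [np.zeros(nfft // 2 + 1, dtype=complex) for _ in range(nparts)]
    out_len = len(x) + len(ir) - 1
    # Zero-pad the input so the reverb tail is flushed through the delay line.
    xp = np.pad(x, (0, (-len(x)) % block + nparts * block))
    out = np.zeros(len(xp) + block)
    overlap = np.zeros(block)
    for i in range(0, len(xp), block):
        fdl.insert(0, np.fft.rfft(xp[i:i+block], nfft))
        fdl.pop()
        # Multiply-accumulate all partitions, then one inverse FFT.
        y = np.fft.irfft(sum(f * h for f, h in zip(fdl, H)), nfft)
        out[i:i+block] = y[:block] + overlap      # overlap-add
        overlap = y[block:]
    return out[:out_len]
```

    The output matches a direct linear convolution, but the per-block cost is bounded, which is the property a real-time reverb needs. The uniform block size trades latency (one block) against CPU; non-uniform partitioning, as used in low-latency convolvers, refines this further.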
    I have recently learned Max to try to accomplish this task, so I am still a relative novice.
    Thank you in advance!

    • Mar 20 2020 | 8:50 pm
      Hey, it's been a year, but I saw your post and wondered if you were still working on this. My research group does something very similar, so I might be able to help.
    • May 23 2020 | 5:10 pm
      Personally, we ended up building a convolution reverb based on multiconvolve~ for first-order Ambisonic B-format files. We tend to record with microphones and create multichannel IRs using a custom patch. multiconvolve~ works quite well, and after several tests we could produce a fairly reliable simulation in the studio. Curious how your research group works, maybe we can get some collaboration going on, ciao J
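      For the multichannel case described above, the mono dry signal is convolved independently with each channel of the B-format IR. A self-contained sketch with synthetic data (real IRs would be loaded from disk; sizes here are arbitrary):

```python
# Mono input convolved against a 4-channel first-order B-format IR
# (W, X, Y, Z channels stacked as rows), giving a 4-channel wet signal.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(7)
dry = rng.standard_normal(4800)              # mono anechoic input
bformat_ir = rng.standard_normal((4, 960))   # stacked W/X/Y/Z IRs

# One independent convolution per IR channel.
wet = np.stack([fftconvolve(dry, ch) for ch in bformat_ir])
print(wet.shape)  # (4, 5759)
```

      The resulting 4-channel signal can then be decoded to a speaker layout or binaural with the usual Ambisonic tools.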