I've been studying this paper about convolution techniques in Max/MSP:
It describes two techniques for increasing the spectral intersection between two sounds. One is compression, which I was able to build. The other is amplitude spectrum smoothing (section 2.1 of the paper).
I have made a patch for the spectrum smoothing, but I don't think it works correctly. It does smooth the amplitudes over multiple bins, but I think something is still missing.
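To make clear what I mean, here is a minimal sketch (in Python/NumPy rather than Max, and with my own function name and parameters) of how I understand amplitude spectrum smoothing: take the FFT, average each magnitude with its neighboring bins, and resynthesize with the original phases left untouched.

```python
import numpy as np

def smooth_amplitude_spectrum(x, width=9):
    """Smooth the amplitude spectrum of x with a moving average
    over `width` neighboring bins, keeping the original phases."""
    X = np.fft.rfft(x)
    mag = np.abs(X)
    phase = np.angle(X)
    # moving average over neighboring bins flattens spectral peaks
    kernel = np.ones(width) / width
    smoothed = np.convolve(mag, kernel, mode="same")
    # recombine smoothed magnitudes with the untouched phases
    return np.fft.irfft(smoothed * np.exp(1j * phase), n=len(x))
```

In Max I'm doing the equivalent inside pfft~ (cartopol~, averaging the amplitudes, then poltocar~), so maybe the problem is in how I recombine amplitude and phase.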
Does anyone know what I'm missing, or how this is usually done?