Procedural voice synthesis for a game


Hi all,

For my graduation project I'm using MAX/MSP to procedurally generate audio for creatures in a game. The idea is that the audio will be driven by in-game parameters like creature size, creature type, etc. We're using Unity to send this data to MAX/MSP, which will send the audio in real-time to Wwise for mixing. The project is still in its early stages. The reason I'm posting here is to see if anyone has experience doing something similar and can share any big no-no's or best practices, especially in terms of signal flow.
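
To make the Unity → MAX/MSP data path a bit more concrete, here's a minimal Python stand-in for the Unity side that I've been thinking of using to test the patch on its own: it just fires the creature parameters at the patch as OSC messages over UDP (the addresses, port, and the "aggression" parameter are placeholders, not decided yet).

```python
# Minimal stand-in for the Unity side: send creature parameters to the
# MAX/MSP patch as OSC messages over UDP. Addresses, port number and the
# extra "aggression" parameter are placeholders.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # patch would listen with [udpreceive 7400]

# Example creature state; in the actual project these values come from Unity.
creature = {"size": 0.8, "type": 2, "aggression": 0.3}

client.send_message("/creature/size", creature["size"])
client.send_message("/creature/type", creature["type"])
client.send_message("/creature/aggression", creature["aggression"])
```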

I'd also love to hear suggestions on how to handle the "performance" aspect, which primarily comes down to modulating the carrier's frequency; creating logic that produces acceptable results is shaping up to be one of my major challenges (rough sketch of what I mean below). Since I'm relatively new to both procedural audio and MAX/MSP, all help is appreciated.
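
For example, the kind of mapping logic I mean, sketched in Python rather than as a patch (all the constants and the exponential size-to-pitch curve are just things I'm experimenting with, not settled):

```python
import math

def carrier_freq(size, t, base_hz=440.0, vibrato_hz=5.0, vibrato_depth=0.03):
    """Rough mapping from creature size (0..1) to a carrier frequency in Hz.

    Bigger creatures get lower pitches (exponential mapping so the drop
    sounds roughly even across the range), plus a slow vibrato as a
    stand-in for the 'performance' modulation. All constants are placeholders.
    """
    # Map size 0..1 onto roughly two octaves below the base pitch.
    pitch = base_hz * 2.0 ** (-2.0 * size)
    # Simple sinusoidal vibrato around that pitch, t in seconds.
    vibrato = 1.0 + vibrato_depth * math.sin(2.0 * math.pi * vibrato_hz * t)
    return pitch * vibrato

# e.g. a large creature (size = 0.9) at t = 0.1 s
print(carrier_freq(0.9, 0.1))
```

The hard part is making mappings like this sound like an intentional performance rather than a static tone, which is exactly where I'd appreciate pointers.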