For part of my degree course, I'm looking at building a live/real-time system that is to be controlled with some sort of hardware device.
I am proposing to build something that is fairly basic and within the context of a live installation piece.
So far my basic idea is to aurally simulate the effect of walking through a busy city, using a USB dance mat as the hardware controller.
The premise is not dissimilar to the way a computer game engine handles audio dynamically, with each audio element (sample) controlled separately to simulate the sound source's position, distance, etc. (similar to acoustic environment modelling and 3D audio).
The main problem at the moment is that I don't really have any idea where to start in terms of actually implementing this idea.
Forgive me if this sounds a little cheeky, but does anyone have any suggestions as to a good simple method/approach, or just any general advice?
Maybe someone here can point me to a similar project?
I should point out that my experience with Max MSP is fairly basic and limited.
I would guess that I'd need a lot of buffer~ and groove~ objects, each handling its own sample individually, each with its own volume, filter, EQ and reverb parameters.
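In case it helps to see what I mean, here is a rough sketch (in Python, purely to illustrate the structure — in the patch this would be a buffer~/groove~ pair per sound) of the parameter set each voice might carry. The field names, roll-off curves and constants are all my own invented placeholders, not Max attributes:

```python
from dataclasses import dataclass

@dataclass
class Voice:
    """One city sound: a playback voice with its own mix parameters."""
    sample: str                   # buffer~ name, e.g. "traffic_loop"
    volume: float = 1.0           # linear gain
    pan: float = 0.0              # -1 hard left ... +1 hard right
    lowpass_hz: float = 20000.0   # crude air-absorption / distance filter
    reverb_send: float = 0.0      # wet level into a shared reverb

    def distance_update(self, metres: float) -> None:
        """Cheap distance cues: quieter, duller and wetter as the source recedes."""
        self.volume = 1.0 / max(metres, 1.0)
        self.lowpass_hz = max(500.0, 20000.0 / max(metres, 1.0))
        self.reverb_send = min(1.0, metres / 20.0)
```

So moving a sound "away" would just mean automating those three parameters together from a single distance value.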
The "spots" on the dance mat could send bangs to control the automation of the system and trigger real-time audio events, e.g. a car driving past on the left.
In terms of the interaction I am imagining, the person would walk on the spot, and aurally it would sound as though they are moving from one place to the next. If they stop, it would sound like they are standing still, and the sound of that "area" would persist.
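The walking-on-the-spot interaction above could be sketched as a step counter that advances a virtual position and looks up which ambience "zone" the listener is in — stopping simply leaves the position (and so the zone's sound) where it is. The zone layout, pad names and stride length below are my own invented placeholders; in Max each pad press would just be a bang into this logic:

```python
STRIDE = 0.7  # metres advanced per step (an assumed average stride)

# Virtual "areas" along the walk: (start_metre, ambience_name)
ZONES = [(0.0, "quiet_street"), (10.0, "market"), (25.0, "main_road")]

class CityWalk:
    """Tracks the listener's virtual position along the street."""

    def __init__(self):
        self.position = 0.0  # metres walked so far

    def current_zone(self) -> str:
        """Return the ambience for the current position (last zone whose start we've passed)."""
        name = ZONES[0][1]
        for start, zone in ZONES:
            if self.position >= start:
                name = zone
        return name

    def on_bang(self, pad: str) -> str:
        """Handle a bang from a mat pad; only the forward pad moves us."""
        if pad == "up":
            self.position += STRIDE
        return self.current_zone()
```

If no bangs arrive, `current_zone()` keeps returning the same ambience, which is exactly the "standing still, sound still exists" behaviour.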
I guess I am having trouble imagining how I could achieve this real-time city ambience, with convincing distance and localisation cues, in a fairly basic stereo system.
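For what it's worth, the two basic stereo cues are usually approximated with inverse-distance gain plus an equal-power pan, which could be sketched like this (the coordinate convention and clamping are my assumptions; in Max this would be a gain stage and a cos/sin panner fed by the source position):

```python
import math

def equal_power_pan(pan: float) -> tuple[float, float]:
    """pan in [-1, 1] -> (left_gain, right_gain), constant total power."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

def distance_gain(distance: float, ref: float = 1.0) -> float:
    """Inverse-distance amplitude roll-off, clamped at the reference distance."""
    return ref / max(distance, ref)

def stereo_gains(x: float, y: float) -> tuple[float, float]:
    """Source at (x, y) metres relative to a listener facing +y:
    pan from the bearing, overall level from the distance."""
    d = math.hypot(x, y)
    pan = max(-1.0, min(1.0, x / max(d, 1e-9)))
    g = distance_gain(d)
    left, right = equal_power_pan(pan)
    return g * left, g * right
```

A car passing on the left would then just be its (x, y) position automated over time through `stereo_gains`, with the low-pass/reverb tricks layered on top for extra depth.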
I apologise for the lengthy post.