City Ambience Installation Idea
Hi,
For part of my degree course, I'm looking at building a live/real-time system that is to be controlled with some sort of hardware device.
I am proposing to build something fairly basic, within the context of a live installation piece.
So far I have a basic idea: to aurally simulate the effect of walking through a busy city, using a USB dance mat as the hardware controller.
The premise is not dissimilar to the way a computer game engine handles audio dynamically, with each audio element ("sample") controlled separately to simulate the sound source's position, distance, etc. (similar to acoustic environment modelling and 3D audio).
The main problem at the moment is I don’t really have any idea where to start in terms of actually implementing this idea.
Forgive me if this sounds a little cheeky, but does anyone have any suggestions as to a good simple method/approach, or just any general advice?
Maybe someone here can point me to a similar project?
I should point out that my experience with Max MSP is fairly basic and limited.
I would guess that I'd need a lot of [buffer~] and [groove~] objects, each handling its own sample individually with its own volume, filter, EQ and reverb parameters.
The “spots” on the dance mat could be used to send bangs which would control the automation of the system and trigger real-time audio events, e.g. a car driving past on the left.
In terms of the interaction I am imagining, the person would walk on the spot and, aurally, it would sound as though they are moving from one place to the next; if they stop, it would sound as though they are standing still, while the sound of that “area” persists.
I guess I am having trouble imagining how I could achieve this real time city ambience effect with authentic distance and localisation cues in a fairly basic stereo system.
I apologise for the lengthy post.
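A minimal Python sketch (not Max/MSP; all names and numbers here are illustrative assumptions, not anything from the thread) of the interaction described above: each mat step advances a virtual listener position, and every source's level and stereo pan are derived from its distance and direction. If the listener stops stepping, the parameters simply stop changing, so the ambience of that "area" persists.

```python
import math

class Source:
    """A fixed sound source at (x, y) in the virtual city."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

def step(listener, direction, stride=1.0):
    """Advance the listener on a step event; direction is (dx, dy) from the mat arrow."""
    dx, dy = direction
    return (listener[0] + dx * stride, listener[1] + dy * stride)

def render_params(listener, source, ref_dist=1.0):
    """Per-source gain (1/r attenuation) and equal-power stereo pan."""
    dx = source.x - listener[0]
    dy = source.y - listener[1]
    dist = max(math.hypot(dx, dy), ref_dist)
    gain = ref_dist / dist                  # simple 1/r distance attenuation
    azimuth = math.atan2(dx, dy)            # 0 = straight ahead, positive = right
    pan = 0.5 + 0.5 * math.sin(azimuth)     # fold onto a 0 (left) .. 1 (right) axis
    left = gain * math.cos(pan * math.pi / 2)
    right = gain * math.sin(pan * math.pi / 2)
    return left, right
```

In Max terms, each source would be one [groove~] voice (e.g. inside [poly~]) whose gain and pan these two numbers would drive.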
It may be of interest that your tutors most likely read this list, so be careful what you ask for when asking for help. Word of warning over.
Look at [poly~] and [hi].
Oh, and read the tutorials. THE BEST source of information for Max/MSP.
Hi,
I'm sure there is a good chance that tutors/lecturers do read these forums, but I also fully understand the rules regarding plagiarism and referencing in academia.
I'm not asking for pre-built patches or anything like that, just general advice or approaches, such as useful objects/externals; anything that may be useful for this idea.
From reading my previous post, I was perhaps a little vague about the reproduction of the sound. I have begun searching the forums for general advice about implementing surround sound; however, with very limited resources and budget at this time, I am having to settle for stereo playback using either near-field monitors or headphones. The latter is possibly the best solution, if I can find a way to create 3D sound in Max.
Thank you for your help though and I have already begun reading the max/msp tutorials.
i don't think your post gave the impression you were looking for a freebie at all.
there are many directions you can take this sort of thing, and lots to learn if you are new to max (or programming in general).
there aren't really any 3D spatialization objects that come with standard Max/MSP, but i'm sure there are some 3rd-party objects out there that do the trick. however, there are example patches that come with max that should help you out. check out:
Max5/examples/spatialization/
as for your usb dance pad, look at the [hi] object and search the forum for "game controllers" or something similar.
Yeah there seems to be lots to learn with this program, it can be quite intimidating which is why I want to keep it fairly simple to begin with.
I already have the dance mat hooked up successfully with Max, where it is able to respond to particular "button" presses; it's just a case of figuring out how to best utilise this to control the system in some way.
I will look into those examples, thanks.
I once made a soundscape generator driven by an image.
I used Photoshop to "airbrush" into the R, G & B channels separately (see attached), using an aerial photo/map as a template. Roads are red, train lines and stations are blue, and parks/gardens are green (obviously in places they overlap).
Then I loaded the image into a Jitter Matrix and navigated around the map reading out pixel values at the desired coordinates (you could use the lcd object but one limitation at the time was that the image needed to be onscreen).
The three colour channels then controlled three banks of appropriate sounds. The higher the value from a particular colour channel, the louder, drier (less reverb) and brighter (more high frequencies) the associated sound bank became; the lower the value, the quieter, more reverberant and duller it became. Since the image was airbrushed, the transitions as you navigated between areas were smooth but not too regular. I considered using geometry to measure the distance from simulated point sources of sound, but this would easily get complex for even very simple soundscapes, and I didn't think it would be more convincing.
Clearly you could use multiple images for more than three sounds or greyscale images for easier management of multiple sounds.
M
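The colour-channel mapping described above could be sketched in Python roughly as follows (the original used Jitter; this tiny nested-list "image", the channel-to-parameter curves and all names are illustrative assumptions): sample R/G/B at the listener's map coordinates and map each channel's 0-255 value to gain, reverb mix and brightness for its sound bank.

```python
def channel_to_params(value):
    """Map a 0-255 colour value to (gain, reverb_mix, cutoff_hz)."""
    v = value / 255.0
    gain = v                          # brighter pixel -> louder
    reverb_mix = 1.0 - v              # brighter pixel -> drier (less reverb)
    cutoff_hz = 200.0 + v * 7800.0    # brighter pixel -> more high frequencies
    return gain, reverb_mix, cutoff_hz

def sample_map(image, x, y):
    """Read (r, g, b) at integer coordinates and derive per-bank parameters."""
    r, g, b = image[y][x]
    return {"roads": channel_to_params(r),    # red channel
            "trains": channel_to_params(g),   # green channel (swap to taste)
            "parks": channel_to_params(b)}    # blue channel

# A 2x2 stand-in for an airbrushed aerial map:
# top-left is pure road, bottom-right is pure park.
tiny_map = [[(255, 0, 0), (128, 64, 0)],
            [(0, 0, 128), (0, 0, 255)]]
```

Because an airbrushed map changes gradually between pixels, walking across it produces the same smooth-but-irregular transitions described in the post.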
Coincidentally, I was one of your students when you taught at the Technology Innovation Centre in Birmingham. I remember that map example with the helicopter from a lecture, and it actually partly inspired this idea.
That does sound like an interesting and manageable approach, one I’ll explore further.
Thanks.
search 'binaural' on www.maxobjects.com for some good simple spatialization techniques with example patches
On Thu, Jul 31, 2008 at 12:02 PM, Elliot wrote:
I've put a copy of it here:
Max 5
I removed the image of the map since I'm not sure about the copyright for that. The sounds are BBC SFX.
M
Do your research on HRTFs, because you can probably simulate many of the effects with simple methods like filtering & small time delays. That may be a good way to get started; then, once you have a working interactive interface, you can embellish the algorithm with more layers of sound and different reactions.
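The "filtering & small time delays" idea can be sketched numerically: instead of full HRTF convolution, approximate binaural cues with an interaural time delay (ITD) and a crude interaural level difference (ILD) derived from azimuth. This is a hedged Python illustration, not Max/MSP; the constants are textbook approximations and the ILD curve is an arbitrary assumption.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, at roughly room temperature
HEAD_RADIUS = 0.0875     # m, an average head radius

def itd_seconds(azimuth_rad):
    """Woodworth's ITD approximation for a spherical head (0 = straight ahead)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def ild_gains(azimuth_rad, max_atten_db=6.0):
    """Crude ILD: attenuate the ear facing away by up to max_atten_db."""
    pan = math.sin(azimuth_rad)              # -1 = hard left, +1 = hard right
    far_gain = 10.0 ** (-max_atten_db * abs(pan) / 20.0)
    # Returns (left_gain, right_gain); the far ear gets far_gain.
    return (far_gain, 1.0) if pan >= 0 else (1.0, far_gain)
```

In Max this would amount to a short [delay~] (well under a millisecond) plus a gain stage on the far channel, with a low-pass filter on the far ear as an optional refinement.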
One thing to consider is that there may be no reason to 'artificially' spatialize. Why not just use 6-10 speakers and pan between them?
On Wed, Jul 30, 2008 at 1:17 PM, Elliot wrote:
--
Morgan Sutherland
Quote: Walter Odington wrote on Thu, 31 July 2008 14:58
----------------------------------------------------
> Do your research on HRTFs, because you can probably simulate many of the effects with simple methods like filtering & small time delays. That may be a good way to get started; then, once you have a working interactive interface, you can embellish the algorithm with more layers of sound and different reactions.
----------------------------------------------------
This is kind of the idea I'm thinking would be most suitable for me, as I do not have access to more than two speakers or a surround decoder. Using headphones will provide an opportunity to explore 3D spatialization techniques and HRTFs within Max.
I have done a search for "binaural" on the Max objects database as someone advised and located the ep.binSpat~ object, which looks very good, though I have to question how CPU-intensive it will be to implement.