Interactive installation floor grid question

Joshua Hickman's icon

Hello everyone, first time poster.

I am currently writing a proposal for an interactive installation for my MMus degree. The idea is to have a room that uses six speakers to immerse a person in sound. When a person walks into the room and starts to explore, I want the room to recognize where they are and change elements of the sound output accordingly (via digital signal processing and by adding new sounds).

As of now, I am having some issues figuring out how to develop a grid that would 'section' the room into four areas. Originally I was going to use IR motion sensors, but the room I will be using is roughly 5 meters square (after I modify it for the project). I was tempted to use lasers as a tripwire, but apparently that may be a health and safety risk. My next idea was to buy some LED flex light line (similar to what you would use at Christmas) and use an overhead camera to detect (using Jitter) when a person crosses into another section of the room.
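Whatever the sensing method, the sectioning logic itself is simple once you have a tracked position. A minimal sketch in Python (rather than a Max patch), assuming the position has already been normalized to 0.0-1.0 across the room's width and depth:

```python
# Hypothetical sketch: map a tracked (x, y) position from an overhead
# camera to one of four room sections. Coordinates are assumed to be
# normalized to the range 0.0-1.0 across the room.

def section_for_position(x: float, y: float) -> int:
    """Return a section index 0-3 for a 2x2 grid over the room."""
    col = 0 if x < 0.5 else 1   # left / right half
    row = 0 if y < 0.5 else 1   # near / far half
    return row * 2 + col

# Example: a visitor standing in the far-right quarter of the room.
print(section_for_position(0.8, 0.7))  # -> 3
```

In a Max patch the same comparison could be done with a couple of [split] or [if] objects fed by the tracker's XY output.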

The question I am asking is: what would be the ideal method of sectioning off a room? The overall goal is to have a bed of sound that responds to the visitor's position.

Unfortunately, my experience with Max/MSP has been limited to producing algorithmic music (mostly using random/coll/urn etc.). So the use of Jitter and/or Arduino will be a steep learning curve for me. This is compounded by the limited time I have to set up the project (eight weeks). I have added a rough sketch of the room, if that is any help.

Immersion-booth.jpg
Andro's icon

Maybe use an infrared camera in the ceiling? (The problem is getting it high enough to cover the area; most cameras are 4:3 and your room looks like it's about 5:3.)
Blob tracking will give you the XY position.
Either way, there's a problem or two with how to parse the data.
Can there be more than one person?
If so, how does their position affect the sound?
A laser tripwire would only give you 0 or 1, on or off. That would cause big jumps in the audio unless you smooth the data out.
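That smoothing can be as simple as a one-pole filter (similar in spirit to Max's [slide] object). A toy sketch in Python, with a made-up coefficient you would tune by ear:

```python
# Illustrative one-pole smoothing: turns an abrupt 0/1 tripwire value
# into a gradual ramp so gain changes don't jump or click.
# The coefficient here is an assumption, not a recommended value.

def smooth(previous: float, target: float, coeff: float = 0.05) -> float:
    """Move a fraction of the way from the previous value to the target."""
    return previous + coeff * (target - previous)

# A tripwire flips from 0 to 1; the smoothed value ramps up over frames.
value = 0.0
for _ in range(60):
    value = smooth(value, 1.0)
print(value)  # close to, but not yet at, 1.0
```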

Ultrasound sensors can be used, but they're tricky to get working right and, again, will only detect the person closest to the sensor.
A cheap hack is a standard camera on the opposite side of the entrance.
Get the visitor to wear an LED headband with a unique colour and then use blob tracking to track each user.
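To give a feel for what blob tracking does, here is a toy illustration in Python: find the centroid of pixels brighter than a threshold in a grayscale frame. A real patch would use Jitter objects (for example the cv.jit externals) or OpenCV; the frame below is a made-up 2D list of brightness values 0-255.

```python
# Toy blob tracking: centroid of bright pixels in a grayscale frame.
# A bright LED headband would show up as a cluster of high values.

def bright_centroid(frame, threshold=200):
    """Return the (x, y) centroid of pixels above the threshold, or None."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

frame = [
    [0,   0,   0, 0],
    [0, 250, 255, 0],
    [0, 240,   0, 0],
    [0,   0,   0, 0],
]
print(bright_centroid(frame))  # centroid of the three bright pixels
```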

Joshua Hickman's icon

Hello, Thank you so much for responding.

Unfortunately, I do not have the exact measurements for the room just yet, although I will be putting up false walls to modify its size.

One person will be within the installation at a time.

I will be using two sources of sound: an ambient 20-minute loop created for the booth, and several small loops that will be triggered when the participant walks within a certain area.

Overall, I am trying to produce an installation that puts the person in another environment (a rainforest, a living room in a mansion, etc.). I have yet to pick a theme, but once I have, I will cover the walls with blown-up images recreating it. So, for example, if there were a picture of a fireplace at one end of the booth, participants would hear a fire crackle as they walked towards it, growing louder as they got near. The fire crackling would be one of those small loops, and its gain would increase as they approached. The overall 20-minute ambient loop will provide a constant sound source to take the individual's mind off the shallow visuals.

DSP-wise, I was going to use low-pass filters on certain objects to help produce a feeling of distance as the participant walked away from virtual objects within the booth.
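The gain-plus-lowpass idea can be sketched as a single mapping from listener distance to control parameters. A hypothetical Python version, where the ranges, the quadratic fade, and the cutoff sweep are all assumptions to be tuned by ear:

```python
# Hypothetical mapping from listener distance to loop gain and low-pass
# cutoff: a virtual sound source gets quieter and duller with distance.
# max_distance_m (5 m, matching the room's rough size) and both curves
# are made-up values for illustration.

def distance_params(distance_m: float, max_distance_m: float = 5.0):
    """Return (gain, cutoff_hz) for a virtual sound source."""
    # Clamp distance into [0, max] and normalize to 0.0-1.0.
    d = min(max(distance_m, 0.0), max_distance_m) / max_distance_m
    gain = (1.0 - d) ** 2               # quadratic fade toward silence
    cutoff_hz = 20000.0 - d * 19000.0   # 20 kHz up close, 1 kHz far away
    return gain, cutoff_hz

print(distance_params(0.0))  # -> (1.0, 20000.0), right next to the source
print(distance_params(5.0))  # -> (0.0, 1000.0), far side of the booth
```

In Max the two outputs would feed something like a [*~] for gain and the cutoff inlet of a [lores~] or [onepole~].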

Originally I was going to have certain sections simply trigger sounds on or off, but I have only just recently heard of Jitter, so the option of a more fluid approach is enticing.

I could easily modify the room to make space for a camera and buy some hi-vis vests, but I have never used Jitter or blob tracking before. (My experience within Max has been mostly in sampling and generative music.)

I am curious about Blob tracking, are there any tutorials I could look up?

I hope this helps.

LSka's icon

Seems like a good setting for a Kinect and user tracking with dp.kinect:
https://hidale.com/shop/dp-kinect/

Joshua Hickman's icon

Thank you, LSka.

I have contacted my head instructor, who is helping me through this as well. Apparently the room may not be high enough for a camera/Jitter setup. However, dp.kinect may be what I am looking for. I will have to research it a bit further before I make a decision for the proposal.

I appreciate the input, guys. Cheers.