All software for the installation was built in Max, using security cameras as input and projecting onto transparent screens. Three Max patches ran on three networked computers. The first patch used the cv.jit library to perform blob tracking on the audience in the first room. The blob-detection data was sent to the second patch, where a 3D model built with jit.gen reacted and moved in response. Finally, the third patch extracted audio data from the camera image using a slit-scan process and used it to drive an oscbank~.
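The Max patches themselves are not reproduced here, but the slit-scan sonification step can be sketched outside Max. The sketch below is a minimal Python illustration (all function names are hypothetical, not from the installation): a single pixel column is read as the "slit", and each pixel's brightness sets the amplitude of one partial in a bank of sine oscillators, analogous to feeding amplitude data into oscbank~.

```python
import math

def slit_to_amplitudes(frame, column):
    # frame: 2D list of 0-255 grayscale values; extract one vertical
    # slit and normalize brightness to 0..1 amplitudes
    return [row[column] / 255.0 for row in frame]

def oscbank(amplitudes, base_freq=110.0, duration=0.1, sr=44100):
    # One sine partial per slit pixel: brightness -> amplitude,
    # row index -> harmonic number (frequency), as in an oscillator bank
    n = int(sr * duration)
    out = [0.0] * n
    for i, amp in enumerate(amplitudes):
        freq = base_freq * (i + 1)
        for s in range(n):
            out[s] += amp * math.sin(2 * math.pi * freq * s / sr)
    # normalize by partial count so the sum stays within -1..1
    return [v / max(len(amplitudes), 1) for v in out]

# Stand-in for a 64x64 grayscale camera frame
frame = [[(x * y) % 256 for x in range(64)] for y in range(64)]
amps = slit_to_amplitudes(frame, column=32)
audio = oscbank(amps)
```

In the actual installation this mapping ran inside Max on live video, with the slit position advancing over time to produce the scanning effect.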