Sonoplastic


Sonoplastic is an audiovisual performance based on gesture analysis to produce and control sounds and images.
Musicians’ gestures during performance have historically depended on the ergonomics and functionality of their instruments: most of the movement involves the body parts responsible for activating the exact pitch, on the exact spot, with the exact pressure, at the exact moment, and only part of it serves the desired expression and meaning.
Now that technology opens up new scenarios, it is time for a paradigm shift: eliminating the dichotomy between interpretation (understood merely as body movements aimed at controlling instruments that set air particles in motion) and a corporeal experience and mental representation of movement that generates and elaborates creative processes as a sense-giving activity. In other words, bringing body-related gestures as close as possible to sound-related gestures.
To achieve this, the first step is not to stop at what technology most immediately offers us, such as mapping XYZ values onto the control parameters of the predefined interfaces of virtual instruments and sequencers; it is far more inspiring to dwell on the metaphoric potential that new technologies can offer.
In Sonoplastic I track and map the gestures of my two hands not only to detect their location in three-dimensional space or to recognize predefined gestures, but rather to create a sensitive environment in which other properties of the movement are detected.
That is: frequency, density, coarseness, consistency, character, grain, flexibility, roughness, pattern, smoothness, stiffness, strategy, warp and woof, rather than disposition, form, organization, quantity, scheme and structure.
In other words, I think about how to generate and process sound through the metaphor of the plastic manipulation of fabric in space, rather than its cutting and its ‘ready-to-wear’ wrapping.

How did this project use Max?

Technically, I started with one of the most straightforward ways to do video sensing: frame differencing followed by a mean filter, which gives a reading of the amount of movement detected.
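In the performance this is done with Jitter objects, but the underlying idea can be sketched in a few lines of Python as an illustration only; the OpenCV camera capture, the camera index 0, and the fixed frame count here are assumptions, not part of the actual patch.

```python
# Illustrative sketch of frame differencing + mean filter (not the Jitter patch).
# Requires OpenCV (cv2) and NumPy.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)      # assumption: default camera at index 0
prev = None                    # previous grayscale frame

for _ in range(300):           # run for a fixed number of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    if prev is not None:
        # Frame differencing: absolute pixel-wise change between frames.
        diff = cv2.absdiff(gray, prev)
        # Mean filter over the whole frame collapses the difference image
        # into a single "amount of movement" reading.
        motion_amount = float(diff.mean())   # 0 = still, larger = more motion
        print(motion_amount)

    prev = gray

cap.release()
```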
Recently, since the Leap Motion device became available, I have integrated it into my system (thanks to aka.leapmotion) to gain wider and finer control.
Using these techniques and tools within the Max/MSP/Jitter environment, I am able to map the amount of motion in the scene where my hands are moving to specific algorithms that I conceived and keep enhancing.
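Those algorithms live in Max patches; the sketch below only illustrates, in Python, the general shape of such a mapping stage: smoothing the motion reading and scaling it onto sound parameters. The parameter names (grain_density_hz, grain_size_ms) and their ranges are hypothetical examples, not the actual algorithms of Sonoplastic.

```python
# Hypothetical mapping sketch: motion amount -> sound parameters.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from one range to another, clamped to the output range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

class MotionMapper:
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing   # one-pole smoothing to tame frame-to-frame jitter
        self.smoothed = 0.0

    def update(self, motion_amount):
        self.smoothed = (self.smoothing * self.smoothed
                         + (1.0 - self.smoothing) * motion_amount)
        return {
            # Example mapping: calm gestures -> sparse, long grains;
            # agitated gestures -> dense, short grains.
            "grain_density_hz": scale(self.smoothed, 0.0, 30.0, 2.0, 80.0),
            "grain_size_ms":    scale(self.smoothed, 0.0, 30.0, 250.0, 20.0),
        }

# Example: feed each new motion reading into the mapper.
mapper = MotionMapper()
print(mapper.update(12.5))
```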