The dfscore system is a real-time dynamic score system built to function over a local network, displaying a variety of musical material.
La mélodie du bonheur is the musicalization of the emotions of our society.
Umbra is an interactive dance performance that uses Max/MSP and Jitter along with a webcam to track the dancers' movement, which is manipulated with Jitter in real time.
“The Conductor’s Philosophy” is an audio-visual work composed by Damian T.
SOMO is a wireless wearable motion tracking device that turns body movement into sound.
La Plataforma / The Platform is a modular, small-scale performance platform and audio-visual instrument for creating small-scale performances.
El viaje de Lissajous (Lissajous's Journey) is a three-piece visual-sound project: El viaje, Lissajous, and a=3 b=4.
“She Was A Visitor” is a vocal piece written by Robert Ashley in 1967.
The Typewriter is an OSC controller and software that allows performers to create performances by generating or manipulating written text, sounds, and images within the same instrument, using the same gestures. The hardware is built from an old mechanical typewriter, equipped with contacts that recognize the different keystrokes (including upper-case and lower-case characters, carriage return, space, and backspace).
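The keystroke-to-OSC mapping can be sketched as a simple dispatch from detected key events to OSC-style address/argument pairs. This is a minimal illustrative stand-in, not the project's actual software; the `/typewriter/...` address scheme and the function name are hypothetical.

```python
def keystroke_to_osc(key, shift=False):
    """Map a detected typewriter keystroke to a hypothetical OSC
    address and argument list (the address scheme is illustrative,
    not the Typewriter project's actual one)."""
    # Special mechanical keys get their own OSC addresses.
    if key == "\r":
        return ("/typewriter/return", [])
    if key == " ":
        return ("/typewriter/space", [])
    if key == "\b":
        return ("/typewriter/backspace", [])
    # Character keys carry the typed character, honoring the shift state.
    char = key.upper() if shift else key.lower()
    return ("/typewriter/key", [char])
```

In a real setup these tuples would be sent over UDP with an OSC library; here they simply show how one physical gesture can drive text, sound, and image engines at once.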
EPIC-Tom (the performance) broadens the field of audio-visual performance in significant ways: through the use of interspecies collaborative modes, through the presentation of the canine point of view as an alternative outlook on creativity, and through the expansion of electronic sound using live theremin performance.
The “La Petite Mort” audio-visual live act is based on Fine Cut Bodies' music, but this time we've focused on how we can connect musical events to their visual representation.
dublab's all-night ambient happening at Sonos Studio... Live performances: Jon Hassell (performing his 1969 piece Solid State), Sun Araw M.
“Neurointegrum” is an artistic interpretation of the processes taking place in modern society: the processes of augmenting human capabilities with machines. Humans have only one channel for outputting information: the muscles (speech, gestures, etc.).
The Voice-Controlled Interface for Digital Musical Instruments (VCI4DMI) is a system driven solely by voice for the real-time control of electronic musical instruments, including sound generators (synthesizers) and sound processors (effects). The VCI4DMI implements a generic and adaptive method for mapping the voice onto the real-valued parameters of digital musical instruments; it is based on several real-time signal-processing and offline machine-learning algorithms that produce and use maps between heterogeneous spaces, with the aim of maximizing the expressivity of the interface and minimizing user intervention during system setup.
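The core idea of mapping one space (voice features) onto another (instrument parameters) can be sketched with a simple linear rescaling. This is only a hedged stand-in for the learned, adaptive maps the description mentions; the function name, the feature choices, and the ranges are all assumptions for illustration.

```python
def map_features_to_params(features, feature_ranges, param_ranges):
    """Normalize each extracted voice feature to [0, 1] over its
    observed range, then rescale it to the target instrument
    parameter range. A linear stand-in for VCI4DMI's learned maps."""
    params = []
    for x, (fmin, fmax), (pmin, pmax) in zip(features, feature_ranges, param_ranges):
        # Guard against a degenerate (constant) feature range.
        t = (x - fmin) / (fmax - fmin) if fmax > fmin else 0.0
        t = min(1.0, max(0.0, t))  # clamp out-of-range inputs
        params.append(pmin + t * (pmax - pmin))
    return params
```

For example, a vocal pitch of 220 Hz observed within a 110–440 Hz range would land halfway-by-ratio (one third, linearly) into whatever synthesizer parameter range it is mapped to; the real system replaces this linear map with ones learned offline.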
The main patches control a fully automated live session (via MIDI; Bloom Box started before Max4Live arrived…) and allow advanced sequences of clip recording, playback, and stopping to be defined.
This is a Max object that implements variable-order Markov models for generating melodies, beats, or other musical data.
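A variable-order Markov model stores transition counts for contexts of several lengths and, when generating, tries the longest matching context first, backing off to shorter ones. The sketch below illustrates that technique in Python under stated assumptions (class and method names are hypothetical; the actual Max object's interface will differ).

```python
import random
from collections import defaultdict, Counter

class VariableOrderMarkov:
    """Variable-order Markov model: counts next-symbol frequencies for
    every context up to max_order, and backs off to shorter contexts
    during generation when a longer context was never observed."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> next-symbol counts

    def train(self, sequence):
        for i in range(len(sequence)):
            for order in range(1, self.max_order + 1):
                if i - order < 0:
                    break
                context = tuple(sequence[i - order:i])
                self.counts[context][sequence[i]] += 1

    def next(self, history, rng=random):
        # Try the longest available context first, then back off.
        for order in range(self.max_order, 0, -1):
            context = tuple(history[-order:])
            if context in self.counts:
                dist = self.counts[context]
                symbols, weights = zip(*dist.items())
                return rng.choices(symbols, weights=weights)[0]
        # No context matched: fall back to a uniform choice over seen symbols.
        seen = {s for c in self.counts.values() for s in c}
        return rng.choice(sorted(seen))

    def generate(self, seed, length, rng=random):
        out = list(seed)
        for _ in range(length):
            out.append(self.next(out, rng))
        return out
```

Trained on a short pitch sequence, the model reproduces local patterns (a longer context yields more faithful continuations) while the back-off keeps generation going when an unseen context appears, which is what makes the variable-order variant practical for live melodic or rhythmic material.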