Artist Focus: fuse*
Some of you might recall Ben Bracken’s article from 2016 on the installation AMYGDALA by fuse*, an Italian digital arts studio. fuse* has a new multi-media performance, Dökk, that’s currently on tour in Europe. While this piece uses some similar techniques to AMYGDALA - in that it explores ‘Sentiment Analysis,’ a way to determine the emotional status of messages shared by users across a network - it also stands completely on its own as a visually and sonically powerful performance. It’s difficult to summarize this massive piece, so I’ll let fuse* do so in their own words:
“Dökk (‘darkness’ in Icelandic) is a journey through a sequence of digital landscapes where the perception of space and time is altered. Dökk’s scenography is designed to deliver a sense of deep interdependence between the protagonist and the world surrounding her. To represent this concept, a system was created that processes real-time data from biometric and movement sensors (worn by the performer and placed on the stage) as well as data coming from social networks. These data modify the performance’s digital and sound landscapes: every time the performance is staged, the system analyzes in real time the messages that people from all over the world share on social networks, inferring their emotional state through a sentiment analysis algorithm. These data, together with the performer’s biometric data, make the performance different every time it is staged.”
Over the past week I interviewed two members of fuse* who were involved in this production, Mattia Carretti and Riccardo Bazzoni. We talked about the history of fuse*, the background for this particular piece, and of course how Max is integrated into the performance. You can read below for the full interview, but first take a look at scenes from Dökk:
How did fuse* first form?
Mattia: fuse* was founded at the end of 2007 by Mattia Carretti and Luca Camellini, united by their desire to use technology and new media to realize projects with a great impact on the audience. We do this by creating a sense of profound empathy, which follows from the deep connections generated between the artwork and the people who come into contact with it.
Digital technologies let us intertwine different languages - first and foremost, the two languages fundamental to us: image and sound. This is our starting point - trying to blend music and its visualization as closely as possible. In 2009, Riccardo Bazzoni, our sound designer, joined the studio. From that moment, Max became one of the most-used tools in the studio, giving us the chance to keep pushing our limits and to generate more complex processes for synesthetic connections between images and sound.
I know that fuse* produces many different types of works, from digital artworks to interactive installations and architecture. Can you talk about the commonalities and differences between these projects? Do fuse* members stay focused in a particular area of expertise, or do you contribute across disciplines?
Mattia: Our structure has become more elaborate over time. The expressive possibilities offered by digital technologies let us experiment in different directions and connect various disciplines. We started using interaction to build bridges between the artwork and the audience, and then we started thinking about how new media communication could relate to space and architecture. This process led to the creation of new divisions that cooperate with fuse*: fuse*interactive and fuse*architecture. While fuse* works are more concerned with experimental projects and our self-productions, fuse*interactive and fuse*architecture are the brands we use when we work on multimedia or commissioned architectural projects.
The projects are becoming more integrated over time, and allow - or even require - all the professionals in our studio to diversify their skills. Our approach is to create ad-hoc multidisciplinary teams who are capable of realizing custom-made projects with the ambition to generate original artworks with a positive effect on the community.
How many people are currently a part of the fuse* team?
Mattia: At the moment, the studio is composed of a core team of 10 people, with the addition of external collaborators who may be involved in the process whenever we find that specific expertise can be useful for creative or production needs.
Tell me about your new piece, Dökk.
Mattia: Dökk is the latest fuse* production. It is a 55-minute live media performance, distinguished by scenes that are generated in real-time using custom software, in which all the scenic elements - images, sound, light, and the dancer’s movements - are interconnected and influence one another in different ways every time the show is performed.
Dökk contains a great number of hidden messages and metaphors, but the one thing that probably drove us the most during the conception phase is the principle of interdependence. In order to get this theme to permeate the whole show, it was fundamental to work in real-time. The great challenge became the interdependent connection of everything that happens within and between the scenes.
How did the idea for Dökk first form, and how long did it take to realize this project?
Mattia: The first idea for Dökk took shape in 2015; the work evolved over almost three years, during which we dedicated approximately 4500 hours to its development.
The visuals in Dökk are really stunning. Can you talk a bit about your idea behind the visuals and how you created them?
Mattia: One of the underlying principles of Dökk is the concept of interdependence, which says that everything that exists is in some way connected. All the things that surround us are nothing but a collection of atoms, particles, and electromagnetic fields, vibrating without any apparent meaning. When these impulses are interpreted by our mind, they become colors, tastes, music, memories, and emotions - the foundation of what each one of us perceives as reality. Everything that happens in Dökk is represented with a universe that is initially displayed in its real shape, with particles representing the galaxies positioned at their real coordinates. Then, during the journey, this universe changes its shape - assuming different configurations until it reaches a new balance at the end of the show. The journey of Dökk begins with birth and ends with death, in a cycle that is repeated every time the show is performed on stage. The universe has its real shape only at the beginning because, as children, we are able to see things for what they really are, without any filter. Growing up, our vision of reality is modified and distorted by a series of forces, until it finds a new vivid and clear view at the end of the cycle, with death.
The graphical part is entirely generated in real-time every time the show goes live. The scene rendering is carried out using GLSL shaders through openFrameworks. The digital environments that follow one another are rendered inside a 3D environment and split into two physical layers that correspond to the two projections - one in front of the performer on a holographic fabric and one behind her. Real-time rendering lets us change the generative landscapes through interaction with the different elements of the show:
The dancer’s movements (by means of a Perception Neuron suit and two Kinect V2 sensors)
The dancer's heartbeat (through a cardio band)
Tweets that are produced during the show
Real-time generated audio (sample and frequency analysis of three separate audio tracks).
The ten scenes in a Dökk performance can be subdivided into three principal visualizations:
The universe according to the 2MASS Redshift Survey.
A depiction of a neural net that transforms the scale of the filaments of the universe into axons and dendrites of the human brain.
The universe as a snapshot of the EAGLE simulation (Virgo Consortium) which is further modified.
I read on your website that a number of the musical pieces in Dökk were influenced by a music therapy session for children with mental disabilities. How were your recordings of this session integrated into Dökk?
Riccardo: During these sessions, a diverse set of instruments was recorded, such as drums, xylophone, metallophone, and flutes. Later we sampled the recordings, extracting small sound sections and playing them at different velocities and pitches to create dense sound layers with a chaotic evolution. The intent was to simulate a constantly changing, unpredictable flow as an external element that accompanies the whole composition.
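To make that idea concrete, here is a minimal sketch in Python (not the studio’s actual tools) of this kind of layering: short sections are extracted from a recording, resampled at random rates, and summed into a dense, chaotically evolving texture. The synthetic input signal, fragment length, rate range, and gain values are all assumptions for illustration, and “velocities” is treated here as playback speed.

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

# Stand-in for a music-therapy recording: a few decaying sine bursts.
# The real source material is of course not reproduced here.
t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)
recording = np.sin(2 * np.pi * 523.25 * t) * np.exp(-4 * (t % 0.5))

rng = np.random.default_rng(0)
layer = np.zeros(SR * 8)  # 8-second output bed

for _ in range(200):  # many overlapping fragments -> dense texture
    start = rng.integers(0, len(recording) - SR // 4)
    fragment = recording[start:start + SR // 4]        # ~250 ms section
    rate = rng.uniform(0.5, 2.0)                       # random playback speed (and pitch)
    idx = np.arange(0, len(fragment) - 1, rate)
    resampled = np.interp(idx, np.arange(len(fragment)), fragment)
    pos = rng.integers(0, len(layer) - len(resampled))  # random placement in time
    layer[pos:pos + len(resampled)] += resampled * rng.uniform(0.05, 0.2)

layer /= np.max(np.abs(layer))  # normalize the summed texture
```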
Did you record the music therapy session with the intent of using it in Dökk, or did you decide to use those recordings later?
Mattia: We wanted to use them for Dökk from the beginning. The idea came from the fact that we wanted to represent the age of childhood in the opening scenes of the show. For this reason, we decided that the music had to start from material performed by children with different disabilities and associated with a pre-school mental age. The sounds that would characterize these scenes had to have that energy.
How is Max used in this piece?
Riccardo: Max was used throughout the production of Dökk. We used chains of effects for sound processing and real-time audio synthesis patches for live parts of the show as well as the whole soundtrack.
An important aspect was the organization of the different patches inside Ableton Live and the communication with the software managing the graphical part and the data acquisition from the different motion-tracking devices. For this reason, independent patches were built so that we have a flexible and fast way of managing the project. These patches communicate through virtual links (send/receive objects) with the main patch, which in turn uses the OSC protocol (via the udpreceive object) to receive data packages from the software handling the graphical elements roughly 40-50 times per second (based on the frame rate).
Some of this data includes the coordinates of the joints from the Perception Neuron device (used for motion tracking), percentages of six emotions (happiness, sadness, fear, anger, disgust, and surprise) produced by a sentiment analysis algorithm, the frequency of the performer’s heartbeat, and control messages for various Ableton Live features such as clip launch, arrangement position, and volume control.
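For readers who want a feel for this data flow outside of Max, here is a rough sketch of an equivalent OSC receiver written in Python with the python-osc library. The port number and the address names (/joints, /emotions, /heartbeat) are hypothetical placeholders; the interview does not specify fuse*’s actual message layout.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_joints(address, *coords):
    # e.g. x/y/z values for the tracked hands and feet, arriving ~40-50 times/sec
    print(address, coords)

def on_emotions(address, happiness, sadness, fear, anger, disgust, surprise):
    # six percentages from the sentiment-analysis stage
    print(address, happiness, sadness, fear, anger, disgust, surprise)

def on_heartbeat(address, bpm):
    print(address, bpm)

dispatcher = Dispatcher()
dispatcher.map("/joints", on_joints)
dispatcher.map("/emotions", on_emotions)
dispatcher.map("/heartbeat", on_heartbeat)

# Port and host are placeholders for illustration only.
server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()  # blocks; messages arrive at roughly the render frame rate
```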
The Perception Neuron values are used to control four granular-synthesis patches (each with sixteen voices of polyphony). The parameters of the patches are driven by the spatial position of the joint sensors on the hands and feet of the performer. Every little movement creates small fragments of sound that overlap one another, creating rich sound textures.
Moving along the x-axis changes the size of the grains (between 600 and 3000 ms) and explores the content of the audio sample. The frequency and range of the grains vary according to movement on the y-axis.
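As a sketch of how such a mapping might look, assuming joint coordinates normalized to the 0-1 range, the function below scales the x-axis into the stated 600-3000 ms grain-size range and a read position in the sample, and the y-axis into a pitch ratio. Only the grain-size range comes from the interview; everything else is an illustrative guess, not the studio’s actual patch.

```python
def grain_params(x, y, sample_length):
    """Map a normalized joint position (x, y in 0-1) to grain settings.

    Only the 600-3000 ms grain-size range is taken from the interview;
    the read-position and pitch mappings are illustrative assumptions.
    """
    grain_ms = 600 + x * (3000 - 600)      # x-axis: grain size in milliseconds
    read_pos = int(x * sample_length)      # x-axis also scrubs through the sample
    pitch_ratio = 0.5 + y * 1.5            # y-axis: frequency/pitch range of the grains
    return grain_ms, read_pos, pitch_ratio

# Example: a hand near the centre of the tracked space
print(grain_params(0.5, 0.5, sample_length=441000))  # -> (1800.0, 220500, 1.25)
```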
The percentages of the analyzed emotions control the volume of six audio channels with different “ghost” tracks that are mixed with the main soundtrack. In some sections of the show, a patch controls the repetition speed of a sound using the frequency of the performer’s heartbeat.
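A minimal sketch of those two mappings, again in Python with assumed scaling rather than fuse*’s actual implementation: the six emotion percentages are mapped directly to channel gains for the “ghost” tracks, and the heartbeat frequency sets the time between repetitions of a sound.

```python
def ghost_track_gains(emotions):
    """emotions: dict of six percentages (0-100) from the sentiment analysis.
    Mapping percentages directly to linear gains is an assumption here."""
    return {name: pct / 100.0 for name, pct in emotions.items()}

def repeat_interval(heart_rate_bpm):
    """Seconds between repetitions of a sound, one repetition per heartbeat."""
    return 60.0 / heart_rate_bpm

print(ghost_track_gains({"happiness": 40, "sadness": 10, "fear": 5,
                         "anger": 15, "disgust": 5, "surprise": 25}))
print(repeat_interval(75))  # -> 0.8 seconds between repeats
```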
Since the dancer's movements alter the audio through the use of joint sensors, was the choreography composed with this in mind? In other words, was the choreography composed to produce certain sounds, or was the sound design made to work with certain planned physical movements?
Mattia: Indeed, the process wasn’t linear. The scene where we use this technique comes after a very dark part of Dökk in which we wanted to instill a feeling of emptiness and loss. After this scene, we needed to find a metaphor inside the show that could represent a sort of rediscovery and connection, and it is at this moment that Elena takes control over the sound. So, we knew in which part of the show that would happen and roughly how the scene would be composed, but the choreography was still to be defined. Riccardo did a few tests at the beginning, combining the data from Perception Neuron (the motion-tracking suit we used) with sound variations. We then tested a different set of sounds before finding the right ones, just before beginning a test phase in which Elena tried to play the soundtrack while moving and performing. From that moment on, we worked closely together, going back and forth between the choreography and the patch until we found the right balance.
You mentioned earlier that Max allows you to keep pushing your limits. What are a few things that you're able to do with Max that you couldn't do before?
Riccardo: I believe that Max has revolutionized the way music is created; the ability to build “instruments” suited to your needs offers great expressive power. Generative patches can significantly speed up your sound-creation workflow and the realization of real-time audio applications for interactive installations. I think that’s a tremendous opportunity for sound designers.
Is there anything that you wish Max could do, that it's not currently capable of?
Riccardo: An internal research project we recently carried out in the studio involved integrating the entire workflow of the patches we build into mobile applications. The gen~ object offers this possibility, but it can only export its own contents, which is certainly a limitation. It would be great to have an export option for mobile devices in addition to the Max Collective and Standalone formats.
I understand that you're releasing a double LP of Dökk's soundtrack. Why was having a physical release important to you for this project, as opposed to other projects that did not have physical releases? What about the format is relevant to the work? Why LP instead of just digitally?
Mattia: The whole Dökk soundtrack is also available in digital format, but we wanted to make it available as a real, physical object as well. We did the same thing for Ljós, because it’s a meaningful gesture for us and a way to celebrate an important phase of intense production by having a tangible reminder of the project. Moreover, we think that the experience of enjoying music on vinyl is very different from listening to a digital album. In a way, vinyl forces you to listen more carefully, to stop and slow down, and is therefore an experience more similar to going to the theater to see a show.
Looking forward, do you have further plans for Dökk? Or are there other projects you're working on that you would like to share?
Mattia: With respect to Dökk, we’re in the promotion phase of the show, aiming to do as many performances around the world as possible. We’ve also got a new project in the definition phase that builds on some of the research we did for Dökk, though it will be an installation rather than a live-media show.
Dökk’s original soundtrack will soon be available on double 12″ gatefold vinyl and in all digital formats exclusively at fuseworks.bandcamp.com/releases.
If you are located in Europe and would like to see the live performance, here’s a list of upcoming shows:
February 3rd, 2018 - Teatro Testoni / Bologna, Italy, as part of the “IN BETWEEN – Dialoghi di luce” exhibition at CUBO w/ Paolo Scheggi and Joanie Lemercier
March 22nd, 2018 – L’Avant Scène / Cognac, France
May 27th, 2018 – Week53 / Salford - Manchester, England
Everyone else can view clips of Dökk and other fuse* productions on their Vimeo page, and read more about their projects on their website.
For those of you who use Max for Live, check out the free device that fuse* created. Contact is a MIDI sequencer that uses the binary code of the Arecibo message to create rhythmic patterns. And for information on other fuse* projects that use Max, look at their profile here.
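As a rough idea of how such a sequencer could work, here is a small Python sketch that reads a binary string as a step pattern, with 1s as triggers and 0s as rests. The bit string is an arbitrary placeholder rather than the actual 1,679-bit Arecibo message, and the step layout is an assumption, not a description of how Contact is built.

```python
bits = "0101110001011010"  # placeholder pattern, not the real Arecibo data

steps_per_bar = 16
for step, bit in enumerate(bits):
    if bit == "1":
        bar, pos = divmod(step, steps_per_bar)
        print(f"bar {bar + 1}, step {pos + 1}: note on")
```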
by Ashley on January 30, 2018