geneSynth

May 2013
Atlanta, GA

geneSynth explores genetic algorithms, multi-agent control, interactivity in music, and the simulation of autonomous flocks of agents. It was designed to perform standalone as well as alongside a human musician, producing evolving ambient music. It draws inspiration from philosophical and scientific ideas: metaphysics, gene sequences, and the ancient Indo-Greek five-element theories.

geneSynth is built on Max (controlling the genetic algorithm and audio/MIDI processing) and Processing (for the visuals and physics simulation), which communicate over OSC.
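As a rough illustration of the Max/Processing link, the sketch below hand-encodes an OSC message carrying a single float parameter, following the OSC 1.0 binary layout (NUL-padded address, a ",f" type tag, then a big-endian float). The address /flock/cohesion is a hypothetical name chosen for this example; geneSynth's actual address space isn't documented here, and a Processing sketch would more likely use a library such as oscP5 rather than encoding bytes by hand.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class OscEncode {
    // Pad a string with NULs to a multiple of 4 bytes, per the OSC 1.0 spec.
    static byte[] oscString(String s) {
        int len = s.length() + 1;            // include the terminating NUL
        int padded = (len + 3) & ~3;         // round up to a multiple of 4
        byte[] out = new byte[padded];
        System.arraycopy(s.getBytes(), 0, out, 0, s.length());
        return out;
    }

    // Encode an OSC message carrying one float argument.
    static byte[] encode(String address, float value) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        buf.writeBytes(oscString(address));
        buf.writeBytes(oscString(",f"));     // type tag: a single float
        buf.writeBytes(ByteBuffer.allocate(4).putFloat(value).array()); // big-endian
        return buf.toByteArray();
    }

    public static void main(String[] args) {
        byte[] msg = encode("/flock/cohesion", 1.5f);
        System.out.println(msg.length);      // 16 (address) + 4 (tag) + 4 (float)
    }
}
```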

The heart of geneSynth lies in the simulation of the flock of agents, using the Boids algorithm by Craig Reynolds. A microcosmic world is simulated top-down, with finite boundaries and the five elements: earth, fire, air, water, and the surrounding aether. A flock of triangular, mortal agents resides in this world; their sole purpose is to feed on the ambrosia that periodically appears in random locations, giving them a short boost of life. When their life eventually runs out, they die, and their genetic offspring replace them in the world. The behavior and dynamics of the flock are governed by three parameters, which control the separation, alignment and cohesion of the agents in the flock. The five elements possess different physical properties and impose unique restrictions on the agents, as seen in the video.
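The three steering rules can be sketched as below. The weights and radii are illustrative placeholders, not geneSynth's actual settings, and the agents are updated sequentially for brevity; a fuller simulation would compute every agent's forces before moving anyone.

```java
import java.util.List;

public class Flock {
    // The three flocking weights named in the text; values are illustrative only.
    static double SEPARATION = 1.5, ALIGNMENT = 0.05, COHESION = 0.01;
    static final double NEIGHBOR_RADIUS = 50, SEPARATION_RADIUS = 20;

    static class Agent {
        double x, y, vx, vy;
        Agent(double x, double y) { this.x = x; this.y = y; }

        // Apply Reynolds' three steering rules against nearby agents, then move.
        void step(List<Agent> flock) {
            double sepX = 0, sepY = 0, aliX = 0, aliY = 0, cohX = 0, cohY = 0;
            int n = 0;
            for (Agent other : flock) {
                if (other == this) continue;
                double dx = other.x - x, dy = other.y - y;
                double d = Math.hypot(dx, dy);
                if (d == 0 || d > NEIGHBOR_RADIUS) continue;
                n++;
                if (d < SEPARATION_RADIUS) { sepX -= dx / d; sepY -= dy / d; } // steer away
                aliX += other.vx; aliY += other.vy;   // accumulate neighbor velocities
                cohX += other.x;  cohY += other.y;    // accumulate neighbor positions
            }
            if (n > 0) {
                aliX = aliX / n - vx; aliY = aliY / n - vy;  // match average heading
                cohX = cohX / n - x;  cohY = cohY / n - y;   // seek neighbor center
            }
            vx += SEPARATION * sepX + ALIGNMENT * aliX + COHESION * cohX;
            vy += SEPARATION * sepY + ALIGNMENT * aliY + COHESION * cohY;
            x += vx; y += vy;
        }
    }

    public static void main(String[] args) {
        Agent a = new Agent(0, 0), b = new Agent(30, 0);
        List<Agent> flock = List.of(a, b);
        for (int t = 0; t < 5; t++) { a.step(flock); b.step(flock); }
        System.out.println(Math.hypot(b.x - a.x, b.y - a.y)); // cohesion pulls them closer
    }
}
```

Raising SEPARATION relative to COHESION spreads the flock out; the reverse clumps it, which is what makes these three numbers such a musically consequential control surface later on.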

Every agent is identified by a musical note, which is made to drone throughout its lifetime: its contribution to the world. The selection of notes is controlled by a genetic algorithm, which in turn can be controlled by a human musician (not documented in the video). A rudimentary key-detection algorithm detects the key the user is playing in, out of an arbitrary selection of three scales representing angry, mysterious and serene moods. If a change in key is detected and sustained, the genetic algorithm is directed to a new solution (the key the musician is playing). The algorithm is slowed down to produce interesting consonant and dissonant intervals along the way; the progeny bear notes of the new scale (and those created by mutations and crossovers), and the drone evolves.
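A minimal sketch of what such a detection-and-evolution step could look like follows. The scale templates and their mood labels are assumptions for illustration; the text does not say which three scales geneSynth actually uses. Detection simply counts how many recently played pitch classes fall inside each candidate scale, and one generation of evolution keeps agents whose notes already fit the detected scale while replacing the rest with mutated offspring of randomly chosen parents.

```java
import java.util.*;

public class KeyEvolve {
    // Three illustrative scale templates (pitch classes 0-11); the mood
    // mapping is an assumption, not geneSynth's documented choice.
    static final Map<String, int[]> SCALES = Map.of(
        "phrygian",      new int[]{0, 1, 3, 5, 7, 8, 10},  // "angry" (assumed)
        "harmonicMinor", new int[]{0, 2, 3, 5, 7, 8, 11},  // "mysterious" (assumed)
        "majorPent",     new int[]{0, 2, 4, 7, 9}          // "serene" (assumed)
    );

    // Rudimentary key detection: score each scale by how many of the
    // recently played notes' pitch classes it contains.
    static String detectScale(List<Integer> recentNotes) {
        String best = null;
        int bestScore = -1;
        for (var e : SCALES.entrySet()) {
            Set<Integer> pcs = new HashSet<>();
            for (int pc : e.getValue()) pcs.add(pc);
            int score = 0;
            for (int note : recentNotes) if (pcs.contains(note % 12)) score++;
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    // One GA generation: agents whose notes fit the target scale survive;
    // the rest are replaced by offspring of random parents, mutated +-1 semitone.
    static int[] evolve(int[] notes, int[] targetScale, Random rng) {
        Set<Integer> pcs = new HashSet<>();
        for (int pc : targetScale) pcs.add(pc);
        int[] next = new int[notes.length];
        for (int i = 0; i < notes.length; i++) {
            if (pcs.contains(notes[i] % 12)) {
                next[i] = notes[i];                    // fit: survives unchanged
            } else {
                int parent = notes[rng.nextInt(notes.length)];
                next[i] = parent + rng.nextInt(3) - 1; // mutated offspring
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // Notes 60,62,64,67,69 are all in the (assumed) major pentatonic.
        System.out.println(detectScale(List.of(60, 62, 64, 67, 69)));
    }
}
```

Running evolve once per clock tick, rather than to convergence, mirrors the deliberate slowing described above: out-of-key notes linger for a while, producing the passing dissonances.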

One immortal agent is also present in the system, but rather anticlimactically, it lives, feeds and behaves just like all the other agents. The Immortal forms the center of a sonification method that, when activated, generates a momentary force field through the world. The Immortal then knows the presence of every other living agent (by its note) and its distance, and maps this data to MIDI pitch and velocity for the sonification. The distances and relative positions of the agents depend largely on the aforementioned three flocking parameters, which in turn shape the musical output of this sonification method. The output of the force-field sonification is compelling: a phrase rarely repeats. Permutations of phrases can be heard as the agents move around, and notes are constantly added to and removed from the phrase as agents are created and destroyed. The intervals between the notes within a phrase are chosen by a random-walk algorithm, and a phrase is generated at regular intervals.
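One plausible reading of the distance-to-velocity mapping is a linear ramp: near agents sound loud, distant ones soft. The sketch below assumes that reading, with an assumed world radius, plus a bounded random walk over the time gap between notes in a phrase; interpreting the text's "intervals between the notes" as timing rather than pitch is itself an assumption.

```java
import java.util.Random;

public class ForceField {
    // Map an agent's distance from the Immortal to MIDI velocity (0-127),
    // assuming a linear ramp over an assumed world radius maxDistance.
    static int velocityFor(double distance, double maxDistance) {
        double clamped = Math.min(Math.max(distance, 0), maxDistance);
        return (int) Math.round(127 * (1.0 - clamped / maxDistance));
    }

    // Bounded random walk over the gap (ms) between successive notes in a
    // phrase: drift by -20, 0, or +20 ms, clamped to a playable range.
    static int nextGap(int currentGap, Random rng) {
        int step = (rng.nextInt(3) - 1) * 20;
        return Math.min(300, Math.max(60, currentGap + step));
    }

    public static void main(String[] args) {
        System.out.println(velocityFor(0, 400));    // adjacent agent: full velocity
        System.out.println(velocityFor(400, 400));  // at the world edge: silent
    }
}
```

Because velocity is a pure function of distance, changing the flocking weights directly reshapes the dynamics of each phrase: a tightly cohering flock plays as a cluster of similar velocities, a dispersed one as a wide dynamic fan.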

geneSynth offers many parameters with varying levels of control. The positions of the elements in the world can be arranged, the densities and life spans of the agents can be modified, and the rates of creation of ambrosia and agents can be altered; the group dynamics and the genetic algorithm offer many more possibilities, each with potentially different musical outcomes.

How was Max used?

Max was responsible for pitch to MIDI conversion, simple key detection, the genetic algorithm and generating MIDI output.
