This algorithm focuses on when to generate notes, basing its decisions on analyses of syncopation and repetition within Live MIDI clips. As the algorithm morphs between inputs, transition tables are continually updated with the probabilities of next attack times. The application is written in Swift and communicates with Live via Max-for-Live and OSC.
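The transition tables can be understood as a first-order model mapping the current attack time to a distribution over next attack times. The sketch below illustrates the idea under assumed details: attack times quantized to sixteenth-note steps within a bar, and a type and method names (`AttackTransitionTable`, `observe`, `probabilities`) that are illustrative, not the actual implementation.

```swift
import Foundation

// Illustrative sketch: a first-order transition table from the current
// attack position (in 16th-note steps within a bar) to a probability
// distribution over the next attack position.
struct AttackTransitionTable {
    private var counts: [Int: [Int: Double]] = [:]

    // Observe one (current, next) attack pair from an incoming MIDI clip.
    mutating func observe(current: Int, next: Int, weight: Double = 1.0) {
        counts[current, default: [:]][next, default: 0.0] += weight
    }

    // Normalized probabilities of next attack times, given the current one.
    func probabilities(after current: Int) -> [Int: Double] {
        guard let row = counts[current] else { return [:] }
        let total = row.values.reduce(0, +)
        guard total > 0 else { return [:] }
        return row.mapValues { $0 / total }
    }
}

var table = AttackTransitionTable()
// Feed in a simple attack sequence (16th-note positions):
let attacks = [0, 4, 8, 12, 0, 4, 6, 12]
for (a, b) in zip(attacks, attacks.dropFirst()) {
    table.observe(current: a, next: b)
}
// Attack 4 was followed once by 8 and once by 6:
print(table.probabilities(after: 4))  // [8: 0.5, 6: 0.5] (in some order)
```

Because `observe` only increments counts, the table can keep updating continuously while the algorithm runs, which matches the continual updating described above.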
Two types of visualization are implemented in Jitter:
1) A Max-for-Live device displays two graphs; each plots, along the X-axis, the probabilities of the most likely next attacks, given the current attack shown on the Y-axis.
2) A separate video window continually jumps among a collection of candidate future video frames. Each future frame is crossfaded according to the probability that the next attack time coincides with that frame. A second video window shows time unfolding normally, catching up with the frame shown in the first window just as each attack is generated.
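The probability-weighted crossfade in the first video window amounts to a weighted sum of candidate frames, with each frame's weight equal to the probability of its attack time. A minimal sketch, treating frames as flat arrays of pixel intensities (the function name and representation are assumptions for illustration, not the Jitter implementation):

```swift
import Foundation

// Illustrative crossfade: blend candidate future frames, weighting each
// by the probability that the next attack lands on that frame.
func crossfade(frames: [[Double]], weights: [Double]) -> [Double] {
    precondition(frames.count == weights.count && !frames.isEmpty)
    let pixelCount = frames[0].count
    var out = [Double](repeating: 0, count: pixelCount)
    for (frame, w) in zip(frames, weights) {
        for i in 0..<pixelCount { out[i] += w * frame[i] }
    }
    return out
}

// Two one-pixel "frames" with attack probabilities 0.75 and 0.25:
let blended = crossfade(frames: [[1.0], [0.0]], weights: [0.75, 0.25])
// blended[0] == 0.75
```

In Jitter itself this weighting would be done per-plane on matrices rather than on Swift arrays, but the arithmetic is the same.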
These visual components are intended less as artistic output than as attempts to build a better intuition for the algorithm's behavior.