Cool interview. I thought these words were pretty important to a post like this:
"i’d really love to be able to say to them, ‘I just wrote a computer algorhythym, and the computer did it all. I wrote a program and it all just intelligently works it out,’ but it doesn’t exist, it’s fools gold thinking that someone can sit there writing a piece of software that can make intelligent decisions about pace and animation, the closest I have seen is perhaps itunes."
In these kinds of cases, you can begin by analyzing the audio.
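As a very basic illustration of that starting point, here is a sketch in Python (the actual work would live in a Max/MSP patch, not in a script like this): compute the RMS amplitude of each incoming audio block, which already gives you one usable control signal for visuals.

```python
import math

def rms(block):
    """Root-mean-square amplitude of one audio block.

    `block` is a sequence of samples in [-1.0, 1.0]; the result is a
    single level value you could map onto a visual parameter.
    """
    return math.sqrt(sum(s * s for s in block) / len(block))

# A constant-amplitude square-ish block has RMS equal to that amplitude.
level = rms([0.5, -0.5, 0.5, -0.5])
```

From there you can go further (per-band analysis, onset detection), but even a raw level envelope is often enough to drive brightness or scale.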
Of course, you can also use the MIDI information coming from each track (if it is available).
I’m currently building my next visuals+sounds live performance.
I have some hesitations about what to grab and what to use.
The current patch grabs all MIDI note data from each track in Live.
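To make concrete what you can do with that note data, here is a small sketch, again in Python rather than Max, and with a completely made-up mapping (pitch class to hue, velocity to brightness): it is only an illustration of the idea, not the mapping used in the patch.

```python
def note_to_visual(pitch: int, velocity: int) -> dict:
    """Map one MIDI note to hypothetical visual parameters.

    The mapping here is purely illustrative: pitch class picks a
    position on the color wheel, velocity sets intensity.
    """
    return {
        "hue": (pitch % 12) / 12.0,      # pitch class -> [0, 1) on the color wheel
        "brightness": velocity / 127.0,  # velocity -> normalized intensity
        "octave": pitch // 12 - 1,       # conventional MIDI octave numbering
    }

# Middle C at full velocity.
params = note_to_visual(60, 127)
```

The nice thing about MIDI (versus audio analysis) is that it is already discrete and clean; the downside is that it tells you nothing about timbre or the actual mix.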
I’m also analyzing the master audio output, and my only hesitation is whether to do the (fffbank-based) analysis in a Max for Live device that sends data over OSC to my Jitter-based standalone, or to send the audio via Soundflower to the Jitter-based standalone.
The first solution currently seems more efficient.
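For anyone curious what the OSC side of the first solution looks like at the byte level, here is a minimal sketch in Python of encoding band levels as an OSC 1.0 message and sending it over UDP. In Max you would normally just use `udpsend`; the address `/analysis/bands` and port `9000` are my own placeholders, not anything from the actual patch.

```python
import socket
import struct

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC 1.0 message whose arguments are all float32."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(args)).encode("ascii"))  # type tag string
    for a in args:
        msg += struct.pack(">f", a)  # big-endian float32 per the OSC spec
    return msg

# Four hypothetical filter-bank levels, fired off to a local listener.
packet = osc_message("/analysis/bands", 0.12, 0.56, 0.33, 0.05)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))  # UDP: fine even with no listener yet
```

Sending a handful of floats per analysis frame like this is far lighter than shipping full-rate audio between applications, which is probably why the first solution feels more efficient.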