I’m a graduate student at the HKU (an art academy in the Netherlands, based in Utrecht). For my graduation project I want to create an interactive video installation. In short, the user controls an animation with their voice: the animation is driven by loudness (decibels) and pitch (hertz). Does anyone have an idea how I should set up a project like this?
Well, that’s a big and very general question… ;) Try breaking it down into more specific sub-topics; otherwise people won’t be able (or willing) to help you… ;)
Basically you are dealing with two main fields:
The visuals: What do you want to create? Animation is a broad field. And second, what do you want to control inside that visual data?
The sound: Look around for ‘pitch detection’. There are plenty of posts on this forum.
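To make the pitch-detection idea concrete outside of a Max patch, here is a minimal sketch of the classic autocorrelation approach: find the lag (period, in samples) where the signal best matches a shifted copy of itself, then convert that lag to hertz. This is an illustrative toy, not what objects like fiddle~ or sigmund~ do internally, and all the function and parameter names here are my own invention.

```python
import math

def detect_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Naive autocorrelation pitch detector (sketch only).

    Searches lags corresponding to fmin..fmax Hz and returns the
    frequency of the strongest self-correlation, or 0.0 if none.
    """
    lag_min = int(sample_rate / fmax)          # shortest period to try
    lag_max = int(sample_rate / fmin)          # longest period to try
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, len(samples) // 2)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Example: a 220 Hz sine sampled at 8 kHz should come out near 220 Hz.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(2048)]
pitch = detect_pitch(tone, sr)
```

In a real installation you would run something like this over short overlapping windows of microphone input and smooth the result, since raw voice is far noisier than a test sine.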
Pitch detection and something like bonk~ can work, but it can be simpler and still quite effective just to use thresholds and envelope following (slide~, for example). If you have Max for Live, you can pull apart the CellDNA Max for Live device "CellDNA-Soundtrigger" and see what I do to convert audio to data using just a biquad, slide~, and some thresholds. The download is here: http://lividinstruments.com/support_downloads.php#dnaforlive and there’s a bit more info on our blog.
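For anyone who wants to see the envelope-following-plus-thresholds idea outside Max, here is a rough Python sketch. The `slide` function mimics the behavior of slide~ (fast rise, slow fall via two smoothing factors), and the trigger uses two thresholds (hysteresis) so the gate doesn’t chatter near the boundary. The thresholds and slide factors are made-up example values, not anything taken from the CellDNA device.

```python
import math

def slide(samples, slide_up=10.0, slide_down=100.0):
    """One-pole envelope follower modeled loosely on Max's slide~:
    rectifies the input, then rises quickly and falls slowly."""
    y, out = 0.0, []
    for x in samples:
        x = abs(x)                                # rectify
        factor = slide_up if x > y else slide_down
        y += (x - y) / factor                     # fractional step toward x
        out.append(y)
    return out

def threshold_trigger(envelope, on=0.3, off=0.1):
    """Schmitt-trigger style gate: turns on above `on`, off below
    `off`; the gap between them suppresses rapid re-triggering."""
    active, gates = False, []
    for level in envelope:
        if not active and level > on:
            active = True
        elif active and level < off:
            active = False
        gates.append(active)
    return gates

def to_db(level, floor=-60.0):
    """Convert a linear envelope level to decibels, clamped at a floor."""
    return floor if level <= 0.0 else max(floor, 20.0 * math.log10(level))

# Example: a loud burst followed by silence opens then closes the gate.
signal = [1.0] * 200 + [0.0] * 2000
env = slide(signal)
gates = threshold_trigger(env)
```

The same structure maps directly onto the patch described above: biquad to pre-filter, slide~ to smooth, thresholds to turn a messy audio level into clean on/off (or scaled dB) control data for the animation.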