Hi everybody!
I’m a music student at a university in Paris. My research is about the possibilities
computer vision techniques offer musicians and composers, and particularly about the open-source cv.jit library by Jean-Marc Pelletier (http://www.iamas.ac.jp/~jovan02/cv/).
I’d like to know more about your common (or uncommon…) uses of computer vision for musical purposes. Do you use it for interactive installations? For live instrumental pieces with electronics? For music with dance? Maybe for analyzing movie files? What kinds of techniques do you use: motion analysis, point tracking, shape recognition…? How do you handle the segmentation/cleaning of the input video before analysis?
Did you need to learn the details of CV techniques to do this?
Since I know/use neither Cyclops, SoftVNS, nor any non-Max environments (EyesWeb, Isadora, Processing, etc.), your opinions on those are welcome as well.
I hope this many questions won’t scare you off: all kinds of remarks are of interest to me.
Thanks in advance.
It would be very good to get in contact with you directly, since I’ve worked with Jitter and cv.jit on an interactive installation with real-time audio and video synthesis.
My email is: firstname.lastname@example.org
The webpage is under construction now… sorry…