Data visualization and visual music
Hi, I'm a beginner and I need help with this project. I have to make a visual music work that transforms an image based on sound (and/or vice versa) using objects from the Jitter Special FX library and/or the Jit Math library. It has to be transmodal (transmodality refers to changing data from one modality, like sound, into another modality, like drawings and/or video). I'm confused about where to start and how to handle this. Can someone help me out?
I'd suggest breaking your project down into pieces, like this:
How can I play a soundfile and get information from it (e.g. peakamp~)?
or, to go about it the other way....
How can I get information from a movie (e.g. jit.3m)?
then....
What kinds of things in a video would I like to use Max messages to control? (Choose a jit object.)
or, again, to go about it in the other way....
What kinds of modifications to a playing audio file would I like to use Max messages to control? (Filtering, delay, etc.)
finally....
How do I take output from the first part (audio or movie data) and convert that information into a form that I can use to make my modifications? (There's a rough sketch of this step at the end of this post.)
That's it, basically.
As a simple example, behold the canonical throbbing techno-donut: https://cycling74.com/forums/live-object-distortion
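In case it helps to see that last mapping step outside of a patch, here's a rough Python sketch of the same idea (not Max code, just the concept): measure the peak amplitude of each short window of a sound file, roughly what peakamp~ reports at its interval, and rescale it into a 0-255 control range that you could send to a Jitter object's attribute or use as a drawing size in jit.lcd. The file name "loop.wav", the 50 ms window, and the 0-255 output range are just placeholders; in the patch itself you'd do the same thing by running peakamp~ into a scale object.

import wave
import struct

WINDOW_MS = 50           # analysis window, roughly like [peakamp~ 50]
OUT_LO, OUT_HI = 0, 255  # target range for the visual parameter

def peak_envelope(path, window_ms=WINDOW_MS):
    """Yield one peak-amplitude value (0.0 to 1.0) per analysis window."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        frames_per_window = int(wf.getframerate() * window_ms / 1000)
        while True:
            raw = wf.readframes(frames_per_window)
            if not raw:
                break
            samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
            yield max(abs(s) for s in samples) / 32768.0

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale, the same job as Max's [scale] object."""
    if in_hi == in_lo:
        return out_lo
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

if __name__ == "__main__":
    for i, peak in enumerate(peak_envelope("loop.wav")):
        control = scale(peak, 0.0, 1.0, OUT_LO, OUT_HI)
        # In Max, 'control' is the number you'd send to a jit object's
        # attribute (e.g. brightness) or use as a drawing size in jit.lcd.
        print("window %4d  peak %.3f  ->  control %.1f" % (i, peak, control))

The whole transmodal pipeline really boils down to that: analysis value in, rescale, parameter out.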
Hi Sara, did you figure something out? I'm trying to accomplish the same thing (draw to a jit.lcd using a song) but can't quite figure it out. I'm not sure how to isolate the beats / pitches.