I’m currently designing a video generator for my music-to-visual personal research and live performances.
Basically I’m grabbing data from the music (MIDI, but also audio via some FFT wizardry) and sending it to around 10 send/receive busses.
I want to be able to dynamically load what I call "video generator snippets".
I’m currently doing that with a js object that scripts the removing/loading of abstractions. I tested bpatcher too, but I don’t need any UI, so abstractions it is.
Basically, each snippet may (or may not) contain receive objects in order to "subscribe" to the data flows/busses described above. They all draw to an OpenGL context instantiated and handled at the top level. The renderer’s ortho attribute is set to 2 (orthographic projection on, ignoring lens attributes) because I only need 2D, and everything ends up on a videoplane.
- In order to draw basic primitives (rectangles, circles, very basic stuff), I’m thinking of using js to dynamically manipulate Jitter OpenGL objects. Any opinions?
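To make the idea concrete, here is a minimal sketch of driving Jitter OpenGL primitives from a js object. The context name "vidctx" is invented for the example (use your top-level renderer’s context name), and the name-to-shape mapping is just an illustration:

```javascript
// Sketch: spawning 2D primitives from js via JitterObject.
// "vidctx" is a placeholder for the shared jit.gl.render context name.

// Pure helper: map a friendly primitive name to a jit.gl.gridshape shape.
// Kept Max-free so the mapping logic is easy to test on its own.
function shapeFor(name) {
    var shapes = { rectangle: "plane", circle: "circle", ring: "torus" };
    return shapes[name] || "plane";
}

var prims = [];

// Called from the patcher (e.g. a message box) to spawn a primitive.
function spawn(kind, x, y, size) {
    var s = new JitterObject("jit.gl.gridshape", "vidctx");
    s.shape = shapeFor(kind);
    s.position = [x, y, 0.];
    s.scale = [size, size, size];
    s.lighting_enable = 0; // flat 2D look
    prims.push(s);
}

// Free the peers when clearing, otherwise they linger in the context.
function clear() {
    for (var i = 0; i < prims.length; i++) prims[i].freepeer();
    prims = [];
}
```

Since nothing Jitter-specific runs at load time, the script stays cheap to instantiate inside each snippet.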
- In order to "cover" the abstraction remove/load moment (where the video freezes for a bit), what would you suggest? (Or maybe another approach entirely, instead of load/unload?)
Any leads, even small ones, would be appreciated, especially for the second question.
I finally ended up putting all the required abstractions in the "playground", and made a simple routing system for the master qmetro bangs.
That way, afaik, all non-banged structures stay idle and don’t hurt performance.
In the end, load/unload doesn’t buy me much: performance doesn’t suffer when 10 snippets are off (not banged) and only one is on.
So I can really load all of them (even a lot) when the main root patch loads, then just change the routing of my bangs.
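The routing idea above could be sketched in js like this. The snippet receive names are made up for the example; only the currently selected name gets the master clock’s bang, forwarded with messnamed:

```javascript
// Sketch: route the master qmetro's bang to one snippet's named [receive].
// Receive names like "snippet-one" are hypothetical.

var active = null;

// "route snippet-one" message selects which snippet the clock drives.
function route(name) {
    active = name;
}

// Expose the current target (handy for debugging from the patcher).
function getActive() {
    return active;
}

// The master qmetro's bang lands here; non-banged snippets stay idle.
function bang() {
    if (active !== null) messnamed(active, "bang");
}
```

A plain [route] object would do the same job in the patcher; the js version just makes it easy to pick targets programmatically.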