Work in progress here that I thought I'd share, since it is relevant to a number of questions people have been asking. Basically, what the patch does is this:
1. "Tracks" motion by thresholding a video input to black and white. I intended this for use with dancers, hand movements, or any other moving object against a blank, contrasting background. (Not real motion tracking, but a simple trick that made sense here for preserving whole outlines.)
2. Translates the black (or white) pixels to a series of vertices for use as OpenGL geometry.
3. Renders the result as overlapping planes in 3D, here texture-mapped with an image of a dot to give a quasi-particle-system look.
That wouldn't be all that exciting on its own, but the effect looks nice after randomizing each plane's Z position for a live-sketched feel.
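In case the three steps above are clearer as code: here's the rough idea as a Python/numpy sketch rather than actual Jitter objects (the function name, threshold value, and z_spread amount are all mine, just for illustration):

```python
import numpy as np

def frame_to_vertices(frame, threshold=128, z_spread=0.1):
    """Steps 1-3 in miniature: threshold a grayscale frame,
    turn the foreground pixels into vertices, randomize Z."""
    # Step 1: hard threshold -- anything brighter than `threshold`
    # is foreground (flip the comparison to track black instead).
    mask = frame > threshold

    # Step 2: pixel coordinates of the foreground, normalized to
    # a -1..1 OpenGL-style range, with Y flipped so up is up.
    ys, xs = np.nonzero(mask)
    h, w = frame.shape
    x = xs / w * 2.0 - 1.0
    y = 1.0 - ys / h * 2.0

    # Step 3 (plus the randomization): a random Z offset per vertex
    # gives the hand-drawn, live-sketched depth wobble.
    z = np.random.uniform(-z_spread, z_spread, size=x.shape)

    return np.stack([x, y, z], axis=-1)  # (N, 3): one row per vertex
```

In the patch, each of those rows ends up as the position of one dot-textured plane.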
If that didn't make sense, feel free to have a look at the patch itself. Since it doesn't look like much without the image, I've zipped it up together with the source image and placed it here:
It's especially fun if you change the texture source or use colors like light orange.
This is just one of a number of ideas I'm working on, but I'd love some feedback on how I'm doing this. Specifically:
* Is there a way to make this more efficient? I thought of using jit.gl.mesh in place of the jit.gl.gridshape objects, but I'm new to this and didn't immediately know how to format the vertex matrix that mesh expects (my current guess is the first sketch after this list).
* I tried using a Wacom external to take tablet input for parameters, but couldn't get the polling rate right . . . should I drive the Wacom object from a separate metro instead of the qmetro? (This reveals some things I don't fully grasp about how Jitter schedules events.)
* The next step for me would be the reverse visual effect: a particle system that "avoids" either the black or the white pixels. I know how to measure distance from a single point, a la the marbles example in the documentation, but I'm still wrapping my head around calculating distance from edges . . . anyone have suggestions on which way to go? (The second sketch after this list is the direction I'm currently leaning toward.)
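A partial answer to my own first question, in case someone can confirm: my reading of the docs is that jit.gl.mesh takes its vertices as a one-dimensional float32 matrix, one cell per vertex, with planes 0-2 carrying x/y/z and optional extra planes for texture coordinates and so on. In numpy terms, the packing I have in mind (the plane layout is my assumption, please correct me):

```python
import numpy as np

def vertices_to_mesh_matrix(vertices, texcoords=None):
    """Pack per-vertex data the way I *think* jit.gl.mesh wants it:
    float32, one cell per vertex, planes 0-2 = x/y/z, with planes
    3-4 = s/t texture coordinates if present."""
    v = np.asarray(vertices, dtype=np.float32)   # (N, 3) positions
    if texcoords is None:
        return v                                 # 3-plane matrix
    t = np.asarray(texcoords, dtype=np.float32)  # (N, 2) tex coords
    return np.concatenate([v, t], axis=1)        # 5-plane matrix
```

The idea would then be to send that to a single jit.gl.mesh with @draw_mode points and let the dot texture do the rest, instead of driving one gridshape per vertex.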
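And on the last question, the direction I'm leaning toward (no idea yet whether it's feasible in real time inside Jitter) is a distance transform: compute, for every pixel, the distance to the nearest forbidden pixel, then push each particle along the gradient of that field so it slides away from the edges. A scipy sketch just to test the math; the names and the strength constant are made up:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def repulsion_field(mask):
    """mask: True where pixels should be avoided (e.g. the black blob).
    Returns per-pixel distance to the nearest avoided pixel plus the
    gradient of that distance, which points *away* from the blob."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert: avoided pixels become 0, free space stays nonzero.
    dist = distance_transform_edt(~mask)
    gy, gx = np.gradient(dist)  # gradient points toward larger distance
    return dist, gx, gy

def step_particles(particles, gx, gy, strength=1.5):
    """Nudge (N, 2) particle positions (x, y in pixels) away from the
    avoided region by sampling the gradient at each particle."""
    xi = np.clip(particles[:, 0].astype(int), 0, gx.shape[1] - 1)
    yi = np.clip(particles[:, 1].astype(int), 0, gx.shape[0] - 1)
    particles[:, 0] += strength * gx[yi, xi]
    particles[:, 1] += strength * gy[yi, xi]
    return particles
```

Bilinear sampling of the gradient would be smoother than this integer lookup, but the principle is the same.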
Hope this is interesting and stimulating to, well, someone. Thanks!