Motion to 3D planes patch example


    Mar 08 2006 | 12:04 am
    Hi everyone,
    Work in progress here that I thought I'd share, since it is relevant to a number of questions people have been asking. Basically, what the patch does is this:
    1. "Tracks" motion by turning a video input into black and white. I intended this for use with dancers, hand movements, or any other moving object against a blank, contrasting background. (Not real motion tracking, but a simple trick that made sense here for preserving whole outlines.)
    2. Translates the black (or white) pixels to a series of vertices for use as OpenGL geometry.
    3. Renders overlapping planes in 3D; here mapped to a quasi-particle system by using an image of a dot.
    That wouldn't be that exciting, but the effect looks nice after adding some Z-plane randomization for a live-sketched feel.
    If that didn't make sense, feel free to have a look at it. Since this doesn't look like much without the image, I've zipped this up with the image file source and placed it here:
    It's especially fun if you change the texture source or use colors like light orange.
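Steps 1 and 2 above can be sketched in plain JavaScript (not the patch itself, just the logic; the function name, flat byte-array frame layout, and threshold value are all illustrative assumptions):

```javascript
// Sketch of steps 1-2: threshold a grayscale frame to black/white,
// then collect the (x, y) positions of every white pixel as vertices
// for OpenGL geometry. Assumes a row-major Uint8Array, one byte/pixel.
function frameToVertices(pixels, width, height, threshold) {
  const vertices = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (pixels[y * width + x] >= threshold) {
        // Map pixel coordinates into GL-style [-1, 1] space.
        vertices.push([
          (x / (width - 1)) * 2 - 1,
          1 - (y / (height - 1)) * 2,
        ]);
      }
    }
  }
  return vertices;
}
```

In the patch these vertices then feed the 3D rendering in step 3, one textured plane per point.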
    This is just one of a number of ideas I'm working on, but I'd love some feedback on how I'm doing this. Specifically:
    * Is there a way to make this more efficient? I thought of using mesh in place of gridshapes, but I'm new to this and didn't immediately know how to format the matrix with vertices for mesh.
    * I tried using a Wacom external to use tablet input for parameters, but couldn't get polling speed right . . . should I hook up Wacom to a separate metro object instead of qmetro? (This reveals some things I don't fully grasp about how Jitter schedules events.)
    * Next step for me would be to do the reverse visual effect, to create a particle system that "avoided" either black or white pixels. I know how to measure distance from a single point, a la the marbles example in the documentation, but I'm still wrapping my head around calculating distance from edges . . . anyone have suggestions for which way to go?
    Hope this is interesting and stimulating to, well, someone. Thanks!
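One way to think about the edge-distance question in the last bullet: first find the boundary pixels of the silhouette, then measure a particle's distance to the nearest one. A brute-force JavaScript sketch (helper names and the 0/255 matrix layout are assumptions; a real patch would precompute a distance transform rather than loop per particle):

```javascript
// An edge pixel is a white pixel with at least one black (or
// out-of-bounds) 4-neighbour in a flat Uint8Array of 0/255 values.
function edgePixels(pixels, width, height) {
  const edges = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (pixels[y * width + x] === 0) continue;
      const neighbours = [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]];
      const onBoundary = neighbours.some(([nx, ny]) =>
        nx < 0 || ny < 0 || nx >= width || ny >= height ||
        pixels[ny * width + nx] === 0);
      if (onBoundary) edges.push([x, y]);
    }
  }
  return edges;
}

// A particle can steer away when this distance gets small, the same
// way the marbles example repels from a single point.
function distanceToNearestEdge(px, py, edges) {
  let best = Infinity;
  for (const [ex, ey] of edges) {
    best = Math.min(best, Math.hypot(px - ex, py - ey));
  }
  return best;
}
```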

    • Mar 08 2006 | 1:33 am
      Hi Peter, I took a look at your patch. The first thing I noticed in terms of efficiency is your use of jit.iter. Avoid this object like the plague; it will really slow you down if you have a matrix of any size. I've done similar things in the past -> taking video, extracting cells, drawing a geometry. In the end, I created some Jitter externals to do this because I wanted to stay in matrix land, and the basic Jitter objects don't really lend themselves to doing this efficiently.
      Here's a modified patch that uses 2 objects I created. It runs a lot faster now. Hopefully it's what you were going for and you'll find it useful. The 2 objects I used were xray.jit.cellcoords and xray.jit.line2quad. The first extracts white pixels (you could easily extract the black pixels by inverting the matrix). The second takes a line and generates a quad as well as the appropriate texture coordinates. This is what redraws the video as circles with your png texture. If you don't already have the externals, you can get them at http://www.mat.ucsb.edu/~whsmith/xray.html .
      Admittedly, there are a few things I had to do to adapt to my object's input type that are a slight knock on efficiency, such as the jit.transpose. Also, if there was an object that created a quad from a point, then the jit.op wouldn't be needed. In any case, below is the patch.
      enjoy, wes
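Conceptually, the two externals do something like the following. (The real xray.jit.cellcoords and xray.jit.line2quad are compiled Jitter externals that stay in matrix land; this plain-JavaScript sketch only illustrates the logic, and the function names and quad layout are assumptions.)

```javascript
// cellcoords-style pass: scan a 0/255 matrix and emit the coordinates
// of the white cells. Invert the matrix first to get black cells.
function cellCoords(pixels, width, height) {
  const coords = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (pixels[y * width + x] !== 0) coords.push([x, y]);
    }
  }
  return coords;
}

// line2quad-style pass: expand a point into a quad (four corner
// vertices plus texture coordinates covering the whole dot image).
function pointToQuad([x, y], halfSize) {
  return {
    vertices: [
      [x - halfSize, y - halfSize],
      [x + halfSize, y - halfSize],
      [x + halfSize, y + halfSize],
      [x - halfSize, y + halfSize],
    ],
    texcoords: [[0, 0], [1, 0], [1, 1], [0, 1]],
  };
}
```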
    • Mar 08 2006 | 2:38 am
      Hi Wes, Funny you should say that, because right after posting that I came across your externals and said . . . oh, duh, that'd be helpful. And yes, you had exactly the right idea.
      What about implementing the same thing as JavaScript? I've noticed a huge difference in performance there, particularly with anything recursive.
      I've certainly seen which objects speed up / slow things down, and usually even test this. I don't always have a sense of why. The JavaScript examples in Jitter seem to improve efficiency by reducing repeated instructions and recursive code. I guess that the problem with jit.iter is you suddenly have this huge list that has to get moved around from object to object, all the while being scheduled. (that, at least, SOUNDS nasty) Replacing jit.xfade with jit.op moves from CPU to GPU on operations that benefit from that, just to give another example . . .
      Anyway, any additional light you can shed on the situation is appreciated. Thanks for sharing your fantastic externals; I'll let you know how this turns out for me.
      Cheers, Peter
    • Mar 08 2006 | 3:32 am
      > What about implementing the same thing as JavaScript? I've noticed a huge difference in performance there, particularly with anything recursive. >
      Sometimes you can model things more efficiently in JS than in a patcher. This is where speedups occur. If you're doing lots of messaging or number crunching, this will definitely be a slowdown.
      >I guess that the problem with jit.iter is you suddenly have this huge list that has to get moved around from object to object, all the while being scheduled.
      It's due to the massive amount of Max messages.
      > (that, at least, SOUNDS nasty) Replacing jit.xfade with jit.op moves from CPU to GPU on operations that benefit from that, just to give another example . . .
      Not quite sure what you're getting at here. jit.xfade doesn't operate on the GPU. A jit.xfade-like fragment shader would be a good GPU speedup, however.
      best, wes
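The message-volume point is easy to quantify with back-of-the-envelope arithmetic (frame size and rate here are hypothetical, just to show the scale):

```javascript
// Why jit.iter hurts: every cell of every frame becomes its own Max
// message, so even a modest matrix floods the scheduler.
function messagesPerSecond(width, height, planes, fps) {
  return width * height * planes * fps;
}

// A 320x240 single-plane matrix at 30 fps:
messagesPerSecond(320, 240, 1, 30); // 2304000 messages per second
```

Staying in matrix land means one matrix reference is passed per frame instead.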
    • Mar 08 2006 | 5:45 am
      I believe I misspoke. I've certainly benchmarked jit.op combining two matrices faster than jit.xfade. I meant working with video on the GPU as a separate issue. Anyway, back to this patch . . .
      I've worked with your modified patch more, and it looks great; it made sense to me for the most part (I'm taking it apart a little more and thinking through each step). I changed the mesh scale to 10 10 to account for a change in scale elsewhere.
      Now, my challenge is to recreate the effect of random values for the Z coordinates. I would naturally want to do this with a jit.noise matrix rather than the random object, since I'm happily back to matrices in place of lists.
      I'm just honestly unsure of where/how to combine the noise matrix with the matrix that's feeding line2quad. Your line2quad object does work with 3D data in mode 1, correct? So if I took the existing matrix and added in Z values, say between 0 and 1 or 0 and 0.5 or some other range . . .
      (Sorry if this is a dumb question, just trying to think through the best place to glue in the Z values, again using as little CPU bandwidth as possible . . . )
      This would presumably be useful, too, if someone wanted to map the color data to Z, or some other purpose I didn't devise here, instead of choosing random Z values within a range as I'm doing.
      Thanks, Wes, this is all really great. If California and New York were closer, I could buy you a drink . . .
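The "glue in the Z values" step amounts to appending a third plane of noise to the 2-plane coordinate matrix before it reaches line2quad. A JavaScript sketch of that operation (the point layout, range, and injectable random source are assumptions for illustration; in the patch this would be a jit.noise plane combined via matrix operators):

```javascript
// Append a random Z in [zMin, zMax] to each (x, y) point. Passing a
// custom `rand` makes the sketch testable/deterministic.
function addRandomZ(points, zMin, zMax, rand = Math.random) {
  return points.map(([x, y]) => [x, y, zMin + rand() * (zMax - zMin)]);
}
```

Mapping color data to Z instead would just replace the random source with a per-point lookup.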
    • Mar 08 2006 | 7:09 am
      > I'm just honestly unsure of where/how to combine the noise matrix with the matrix that's feeding line2quad. Your line2quad object does work with 3D data in mode 1, correct? So if I took the existing matrix and added in Z values, say between 0 and 1 or 0 and 0.5 or some other range . . . > > (Sorry if this is a dumb question, just trying to think through the best place to glue in the Z values, again using as little CPU bandwidth as possible . . . ) >
      You can use it with 3D coordinates. Below is a simple modification to do just that. The @mode 1 of xray.jit.line2quad is a bit quirky right now and is something I need to fix up a little, but I think it works to decent effect in the patch. Potentially, you could do the random z displacement in a shader as well. Your preference.
      wes