I have been working on a project where I am trying to track hand gestures so that things happen when a particular gesture is made. I've downloaded the cv.jit objects, and cv.jit.opticalflow appears to be what I need, but my inexperience is getting in the way of what I would think is a simple task. I see a nice, colorful picture that appears only when I move. This is good. Now how do I get this into usable X/Y data?
"I used max msp/jitter with open CV, Horn Schunk optical flow tracking is built into open cv, it outputs 2 matrix sets one for up down and one for left right (visually red and green values for left and right movement and yellow and blue values for up and down) – I simply use those values to rotate the cube and add a ‘Line’ object to soften the rotation, (giving the spin the appearance of momentum)"
I still can't figure out how to get workable number values from this. If anyone can explain what I am missing, I would really appreciate it.
cv.jit.opticalflow returns a 2 plane matrix, with one plane corresponding to the x-axis displacement, and another to the y-axis displacement. If a pixel has values (-3.0, 1.0), you know it moved 3 pixels left and 1 pixel down. Note that the results aren’t precise enough to perform solid tracking, but they give you an idea of how things are moving.
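To make the matrix layout concrete, here is a small sketch outside of Max, modeling the 2-plane flow matrix as a NumPy array (this is just an analogy for illustration; in a real patch you would read the values with jit.spill, jit.iter, or similar, and the pixel coordinates below are made up):

```python
import numpy as np

# Model of cv.jit.opticalflow's output: a (height, width, 2) array where
# plane 0 holds the x displacement and plane 1 the y displacement.
height, width = 240, 320
flow = np.zeros((height, width, 2), dtype=np.float32)

# Suppose the pixel at row 100, column 50 measured a flow of (-3.0, 1.0):
flow[100, 50] = (-3.0, 1.0)

dx, dy = flow[100, 50]
# dx < 0 means the pixel moved left; dy > 0 means it moved down
# (in image coordinates, y increases downward).
print(dx, dy)  # -3.0 1.0
```

The same sign convention is what the colors in the flow picture encode: the hue tells you the direction of (dx, dy) and the brightness the magnitude.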
If I were to reproduce that video, I would probably extract only part of the flow with jit.submatrix and then average the flow using jit.3m. This will tell you the general velocity in the region under the cube. With this information, you can then control OpenGL parameters or whatever you wish.
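The submatrix-then-average step can be sketched the same way. This is only an analogy for what jit.submatrix (slicing a region) and jit.3m (per-plane mean) do to the flow matrix; the region coordinates and flow values here are invented for illustration:

```python
import numpy as np

# Hypothetical 2-plane flow field, as produced per frame by the tracker.
flow = np.zeros((240, 320, 2), dtype=np.float32)

# Simulate motion in one area: 2 px/frame right, 0.5 px/frame up.
flow[80:160, 100:200] = (2.0, -0.5)

# Analogue of jit.submatrix: slice out the region under the cube.
region = flow[80:160, 100:200]

# Analogue of jit.3m's mean output: average each plane over the region.
mean_dx = float(region[..., 0].mean())
mean_dy = float(region[..., 1].mean())
print(mean_dx, mean_dy)  # 2.0 -0.5
```

Those two averaged numbers are the "workable values": feed them (smoothed with a line object, as the quoted post suggests) into your rotation parameters.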