Draw OpenGL circle over body-tracked points?

    Oct 12 2011 | 11:25 pm
    Hey all, I've seen a few similar posts on the forum but I couldn't quite get the info I need out of them, mostly because I am a beginner to visual programming languages. I want to use the OpenCV system, which finds the centers of masses of white (which I will use the Kinect's depth map to grab), and use the X and Y coords to position circles over the blobs (which will be projected onto the users from above). I have sort of figured out how to extract the coordinates with jit.cellblock, but I can't for the life of me figure out OpenGL and drawing circles. It's important to me that these look aesthetically pleasing as well! I imagine I will have to do some texturing to achieve this. Anyhow, any help would be GREAT. Here is my test patch:

    • Oct 13 2011 | 10:37 am
      Personally I would use OSCeleton to do this; you'd save processing power and it would be a lot easier. You can have the /newuser x y coordinates coming into Max (or /torso x y if you are calibrating) and just use them to position your circles.
      As for the circles, jit.gl.gridshape is a relatively simple way to start with that.
      If you want an easy way to set OSCeleton up you can use Zigfu and then just get the precompiled version of OSCeleton from github.
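      To illustrate the idea above outside of Max: once a joint coordinate arrives over OSC, positioning the circle is just a scale from the tracker's coordinate range to your output resolution. This is a minimal Python sketch, not Max code; the assumption that OSCeleton's coordinates are normalized to [0, 1] depends on how your OSCeleton build is configured, so check its output format first.

```python
# Sketch (assumption: tracker coords are normalized to [0, 1];
# verify against your OSCeleton build's actual output range).

def joint_to_pixels(x_norm, y_norm, width=1024, height=768):
    """Map a normalized joint coordinate to output pixel coordinates."""
    return (x_norm * width, y_norm * height)

# A /torso message at x=0.5, y=0.25 lands at the horizontal center,
# a quarter of the way down a 1024x768 output.
px, py = joint_to_pixels(0.5, 0.25)
print(px, py)  # 512.0 192.0
```

      In Max you would do the same with a scale or zmap object feeding the position attribute of your circle.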
    • Oct 13 2011 | 8:20 pm
      Very interesting! Do you think OSCeleton would work for a top-down camera view?
    • Oct 13 2011 | 11:45 pm
      Hi. I may be able to help. However, part of your inquiry is unclear to me. First, the OpenCV codebase can see pixels of colors and assign weights and centroids to them. Separately, the Kinect can create depth maps. What I do not understand in your inquiry is what you mean about masses of white (a color) and a depth map (physical depth) being related.
      Are you only asking how to draw a circle? If so, then I also need to understand: what is the thing doing the drawing? Is this an OpenGL scene? Is this some image that you want to project onto a sculpture or people?
    • Oct 15 2011 | 6:34 pm
      Hey, thanks for the heads up.
      To clarify, what I envisioned was generating a depth map with the Kinect and forcing that into a black and white movie through a little manipulation (I've done this bit before in Processing), then using the CV centroids object to find the centers of those white masses, which are people's bodies. The camera and the projector will be positioned above the people in the installation, and the projector will project these circles back down onto them (I am going to use the circles as masks to reveal video underneath). So really I just need the best/fastest way to track bodies and draw circles over them. In addition to this, at some point I will be programming some logic that will allow two circles (two people) to combine mass when they are close enough, making one big circle instead of two.
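      The centroid step described here is simple arithmetic: average the coordinates of the white pixels. cv.jit.centroids does this for you inside Max (per blob, on the binarized frame); this pure-Python sketch just shows what that computation is, using a tiny made-up frame.

```python
# Sketch of the centroid arithmetic on a binarized (0/255) frame.
# cv.jit.centroids performs this per blob inside Max; this is only
# the underlying math for illustration.

def centroid(frame):
    """frame: 2D list of 0/255 values. Returns (x, y) mean of white pixels,
    or None if the frame has no white pixels."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v == 255:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A 3x3 frame with a white L-shape:
frame = [
    [0, 255, 0],
    [0, 255, 0],
    [0, 255, 255],
]
print(centroid(frame))  # (1.25, 1.25)
```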
    • Oct 18 2011 | 1:25 pm
      Ah, much clearer. For the Kinect, you'd have to use either the imagemap (RGB data) or irmap (the IR camera's image data). The body tracking would likely not work (or not be reliable enough) given the Kinect is overhead.
      One thing to keep in mind: the Kinect is limited in its range to detect depth. For skeletons/bodies it's about 4m. I haven't tested the raw depthmap myself, but the doc seems to suggest it might get up to 10m; who knows how accurate that is. And finally, remember that the camera has a field of view, so have fun, really :-), positioning it high enough to get a wide enough spread of the camera's field of view to see all the people you want.
      Let's set that physical stuff aside. If you are on Windows, use my jit.openni object to get the depthmap. Full wiki doc, install, etc. at https://github.com/diablodale/jit.openni
      If you are on Mac, try one of the others like http://jmpelletier.com/freenect/
      Finally, the rest of the center-coordinate code you need is directly from the help patches for cv.jit.centroids or cv.jit.blobs.centroids. Install the CV objects, right-click on them, and you'll see. A quick test is to replace the cv.jit.grab in those help patches with my jit.openni or freenect. All the image-to-binary code is already there. If you use the depthmap instead of the RGB camera, you can remove the unneeded jit.rgb2luma object because it's already a greyscale image.
      The fun part will be mapping the center you see through this code back to another matrix that you will send to your projector. You have to account for the projector having a different lens and placement than your Kinect. There are many options: some math, for example, or trial and error through scaling and warping with objects like jit.mxform2d.
      This isn't impossible, just takes time and planning. Clearer?
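      To make the mapping step concrete: in the simplest case (projector and camera roughly parallel to the floor), the camera-to-projector correction is a scale plus an offset per axis, which is within what jit.mxform2d can apply. This Python sketch shows that arithmetic only; the calibration numbers are made up, and in practice you would find yours by trial and error, projecting a test circle and nudging values until it lands on the tracked body.

```python
# Sketch of a simple affine camera-to-projector mapping (scale + offset
# per axis). All calibration constants below are invented placeholders;
# real values come from calibrating your own camera/projector pair.

def camera_to_projector(cx, cy, sx=1.25, sy=1.5, ox=-30.0, oy=12.0):
    """Map a camera-space centroid (cx, cy) into projector pixels."""
    return (cx * sx + ox, cy * sy + oy)

print(camera_to_projector(100.0, 200.0))  # (95.0, 312.0)
```

      If the projector is at an angle to the floor, a plain scale/offset won't be enough and you'd need the full warp (perspective correction) that jit.mxform2d or similar can do.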
    • Oct 25 2011 | 11:55 pm
      Many thanks! I followed your instructions and have had lots of success in experimenting. I'm running into a new problem, however, and if anyone has advice I would appreciate it. When two bodies come close enough together, blobs.centroids thinks of them as a single blob... is there a better way to keep track of people so that they are always their own blob until they exit the space? What I would like is for their OpenGL circle to grow when two people come together, let's say to 2.5 times the size of a single person's circle, and the same scheme would continue for 3, 4, 5 people and so on. I'd rather not do it just based on the size of the blob, because I want children and larger people to get the same circle size.
    • Oct 26 2011 | 6:55 pm
      What about using a threshold on the size of the blob? Take the size of a large person as your maximum, and then if a blob is over this size (i.e. more than one person) you get a bigger circle.
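      The threshold idea above can be sketched as: estimate how many people a merged blob contains by dividing its area by the area of one large person, then grow the circle 2.5x per extra person. This is an illustrative Python sketch, not Max code; both constants are made-up calibration values you would measure in your own setup.

```python
# Sketch of the blob-size threshold. MAX_SINGLE_AREA and BASE_RADIUS
# are invented placeholders; measure real values in your installation.

MAX_SINGLE_AREA = 5000   # assumed pixel area of one large adult's blob
BASE_RADIUS = 80.0       # assumed radius of a single person's circle

def circle_radius(blob_area):
    """Estimate occupant count from blob area; scale the circle
    by 2.5x for each additional person in the merged blob."""
    people = max(1, round(blob_area / MAX_SINGLE_AREA))
    return BASE_RADIUS * (2.5 ** (people - 1))

print(circle_radius(4000))   # 80.0  -> one person (child or adult)
print(circle_radius(9500))   # 200.0 -> two people merged
```

      Because any blob at or below the single-person maximum maps to the same base radius, children and large adults get the same circle, which matches what was asked for.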
    • Oct 28 2011 | 3:19 am
      Elegant solution. I feel like I should have gotten that myself, it's been a hell of a semester though. Thank you so much for your help. I'm gonna get some sleep. Occupy Wall Street!