Take photo automatically when face is in position
I’m trying to do the following: I need to take a lot of profile shots of faces in the same position and convert the face to white and the background to black (I’ll be taking the photos against a blue background).
What I need is for the patch to automatically take the picture when the person’s nose is two thirds of the way into the frame.
Basically I need somehow to detect when the person’s nose crosses a certain position in the matrix the camera is feeding into.
I don’t really know how to get started on this…any help would be appreciated. I’m including the patch that takes the pictures and processes the colors.
You could search for the cv.jit externals and install them on your computer. You can use "cv.jit.faces" to detect a face and then trigger the shot…
(you should load a haar-cascade file for "profile" into cv.jit.faces — I’m not sure it’s part of cv.jit, but you can easily find one on the web…)
Thanks, I’ll check out cv.jit, but that’s not exactly what I need. I need the picture to be taken not when a face appears, but when it reaches a certain spot on the screen. Basically I need to look at one of the columns of the image, and when one of the pixels in that column becomes white (face color), take the picture…
Did you have a look at Jitter Tutorial 25? You can use jit.findbounds to search for white (and you can put jit.submatrix in front of it to search only a specific area of your screen).
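Outside Max, the column-scan idea looks something like this — a minimal Python sketch of the jit.submatrix + jit.findbounds logic on a grayscale frame (the function name and the brightness threshold are my own assumptions, not anything from the patch):

```python
def first_white_pixel_in_column(frame, column, threshold=200):
    """Return the row index of the first near-white pixel in one column
    of a grayscale frame (a list of rows), or None if the column is dark.
    This mimics restricting jit.findbounds to a single-column submatrix."""
    for row_index, row in enumerate(frame):
        if row[column] >= threshold:
            return row_index
    return None

# A tiny 4x4 grayscale "frame" with one bright pixel in column 2:
frame = [
    [0, 0,   0, 0],
    [0, 0, 255, 0],
    [0, 0,   0, 0],
    [0, 0,   0, 0],
]
print(first_white_pixel_in_column(frame, 2))  # -> 1 (a hit: take the photo)
print(first_white_pixel_in_column(frame, 0))  # -> None (nothing there yet)
```

In the patch you’d do the equivalent with jit.submatrix cropping to the trigger column and jit.findbounds reporting whether anything in the white range was found.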
Given that there are many colors of skin beyond white, that’s probably not an approach I would recommend. Instead, you might have more luck finding the moments when your image deviates from your blue background. There will be some blue in most skin colors, but not at the same ratios.
A quick look at a picture of a tanned Caucasian woman shows RGB percentages of 69, 59, 55. Grace Jones’s are 42, 28, 23. I suspect that if you measured the RGB percentages of your background, the blue component would be significantly higher than both the red and the green. Perhaps you can look for a column in which a significant number of pixels have that larger percentage of blue.
Or apply a similar approach with jit.findbounds, looking only at the blue channel for values that are less than your background’s blue value.
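As a rough sketch of that blue-ratio test in Python (all names, ratios, and thresholds here are my own illustrative assumptions — you’d calibrate them against your actual background):

```python
def column_has_skin(frame_rgb, column, bg_blue_ratio=0.5,
                    tolerance=0.1, min_pixels=3):
    """Count pixels in one column whose blue share of R+G+B falls clearly
    below the background's blue ratio; report a hit when enough do."""
    hits = 0
    for row in frame_rgb:
        r, g, b = row[column]
        total = r + g + b
        if total == 0:
            continue  # pure black: no ratio to compare
        if b / total < bg_blue_ratio - tolerance:
            hits += 1
    return hits >= min_pixels

# Column 0 is blue background; column 1 has three skin-toned pixels
# (blue share ~0.30, well below the 0.4 cutoff):
frame_rgb = [
    [(0, 0, 255), (176, 150, 140)],
    [(0, 0, 255), (176, 150, 140)],
    [(0, 0, 255), (176, 150, 140)],
    [(0, 0, 255), (0, 0, 255)],
]
print(column_has_skin(frame_rgb, 1))  # -> True (trigger the photo)
print(column_has_skin(frame_rgb, 0))  # -> False (still all background)
```

The `min_pixels` floor is there so a single noisy pixel doesn’t fire the camera.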
This, of course, fails with Na’vi.
Oy, way too many complications… Simply take a reference background shot (when no one is in the camera’s view) and store it in a matrix. Use this image to compute the difference between the background reference and the live video stream with jit.op @op absdiff. Then feed the result to jit.findbounds and watch the output values until they reach a point you find desirable, and trigger the photo. No need to worry about skin color…
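That pipeline, sketched in Python rather than Jitter objects (assuming grayscale frames as lists of rows, a subject entering from the left, and a noise threshold — all of which are my assumptions, not part of the post):

```python
def diff_frame(frame, reference):
    """Per-pixel absolute difference between the live frame and the
    stored background reference -- the jit.op difference step."""
    return [[abs(a - b) for a, b in zip(row, ref_row)]
            for row, ref_row in zip(frame, reference)]

def leading_edge_column(diff, threshold=30):
    """Rightmost column containing a changed pixel (a jit.findbounds-style
    bounding edge for a subject walking in from the left), or None."""
    cols = [c for row in diff for c, v in enumerate(row) if v > threshold]
    return max(cols) if cols else None

def should_shoot(frame, reference, trigger_column):
    """Fire the photo once the subject's leading edge reaches trigger_column."""
    edge = leading_edge_column(diff_frame(frame, reference))
    return edge is not None and edge >= trigger_column

# Empty 3x6 background; live frame differs in columns 0 and 3:
reference = [[0] * 6 for _ in range(3)]
frame = [row[:] for row in reference]
frame[1][0] = 200
frame[1][3] = 200
print(should_shoot(frame, reference, 4))  # -> False (edge at column 3)
print(should_shoot(frame, reference, 3))  # -> True
```

For a two-thirds-of-frame trigger you’d set `trigger_column` to two thirds of the frame width, and flip the `max`/`>=` to `min`/`<=` if the subject enters from the right.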