My last project worked out wonderfully, thanks to pmpd and this message board. I am now working on another project that needs to be completed before December 31st. I was wondering if you could point me in the right direction and tell me how feasible this idea is.
I need to make composite faces out of people detected in a video; only one face would show up at a time. I could use cv.jit.faces to find the face, and then use alpha masks for the eyes, nose, mouth, cheeks, etc., each scaled and positioned within the box cv.jit.faces reports. I would need to scale, position, and combine those individual masks into one alpha mask that fits over the face in the original video stream. I would then take a snapshot, scale it to my desired final image size, and average it into my running portrait. I want a separate alpha mask for each facial feature so that differently shaped faces do not cause strange blurs in the final composite image.
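To show the averaging step I have in mind, here is a rough numpy sketch (grayscale only, and `blend_into_average` is just a name I made up, not anything from cv.jit or Jitter). The idea is to use the combined alpha mask as a per-pixel weight in a running average, so pixels outside a given face's mask never contribute:

```python
import numpy as np

def blend_into_average(composite, count, snapshot, alpha):
    """Fold one aligned face snapshot into the running composite.

    composite : running average image (H, W), float
    count     : per-pixel sum of alpha weights seen so far (H, W)
    snapshot  : new face image, already cropped/scaled to (H, W)
    alpha     : combined 0..1 feature mask for this face (H, W)
    """
    # weighted running average: only alpha-covered pixels contribute
    new_count = count + alpha
    # avoid dividing by zero where no face pixel has landed yet
    safe = np.where(new_count > 0, new_count, 1)
    composite = (composite * count + snapshot * alpha) / safe
    return composite, new_count
```

Because the weight is per-pixel, a narrow face simply leaves the cheek regions of the composite untouched instead of smearing background into them.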
Does this sound like the right way to do what I am trying to do? Is this possible in my time frame, and how would I go about positioning, scaling, and layering the facial feature masks so that they can be applied to my source images?
I need to figure out a way to take the part of my input matrix that falls inside the box returned by cv.jit.faces and scale it to my final composite size. After that I can apply the alpha masks to get the facial features I am looking for, but I am stuck at this first step of scaling the initial matrix.
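To be concrete about what I mean by this first step, here is the crop-and-scale math sketched in numpy (nearest-neighbour sampling; `crop_and_scale` and the (x, y, w, h) box layout are my own assumptions, not the actual cv.jit.faces output format):

```python
import numpy as np

def crop_and_scale(frame, box, out_h, out_w):
    """Crop the detector's face box out of a frame and rescale it
    to the composite size using nearest-neighbour sampling."""
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]
    # map each output row/column back to a source row/column
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return face[rows][:, cols]
```

In other words: slice the box out of the frame, then resample it onto a fixed-size grid so every snapshot lines up before the masks and averaging are applied.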