How would you generate 'polygon graphics' from a webcam input?
I’m trying to generate ‘polygon graphics’ and ‘wire-frame-based polygon graphics’ (please see the attached) from a webcam input, but have been stuck.
I would appreciate your insights.
Many thanks for your help,
I just came across a video that shows what I’ve been trying to achieve.
The video was made with vvvv. Still trying to figure out how I can simulate that with Max.
The author of the video says ‘Live video feed from a webcam>erode/dilate>contour tracking>Delaunay triangulation. Playing with some adaptive thresholding values here to make it more lively’.
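That pipeline (erode/dilate, contour tracking, Delaunay triangulation) can be sketched outside of vvvv or Max. Below is a minimal Python/NumPy stand-in, not the video author's patch: binary morphology via shifted masks, boundary pixels as a crude substitute for real contour tracking, and a brute-force Delaunay (every function name and parameter here is invented for illustration).

```python
import numpy as np
from itertools import combinations

def erode(mask):
    """Binary erosion with a cross-shaped kernel (edges wrap via np.roll)."""
    out = mask.copy()
    for shift in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        out &= np.roll(mask, shift, axis=(0, 1))
    return out

def dilate(mask):
    """Binary dilation with the same cross-shaped kernel."""
    out = mask.copy()
    for shift in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        out |= np.roll(mask, shift, axis=(0, 1))
    return out

def boundary_points(mask, step=7):
    """Every step-th pixel on the mask's boundary, as (row, col) pairs --
    a crude stand-in for real contour tracking."""
    edge = mask & ~erode(mask)
    pts = np.argwhere(edge)
    return pts[::step]

def delaunay(points):
    """Brute-force Delaunay: keep a triangle when no other point lies
    strictly inside its circumcircle. O(n^4), fine for small point sets."""
    pts = np.asarray(points, dtype=float)
    tris = []
    for i, j, k in combinations(range(len(pts)), 3):
        a, b, c = pts[i], pts[j], pts[k]
        d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
        if abs(d) < 1e-12:               # collinear triple, no circumcircle
            continue
        ux = ((a@a)*(b[1]-c[1]) + (b@b)*(c[1]-a[1]) + (c@c)*(a[1]-b[1])) / d
        uy = ((a@a)*(c[0]-b[0]) + (b@b)*(a[0]-c[0]) + (c@c)*(b[0]-a[0])) / d
        center = np.array([ux, uy])
        r2 = ((a - center) ** 2).sum()
        if all(((p - center) ** 2).sum() >= r2 - 1e-9
               for m, p in enumerate(pts) if m not in (i, j, k)):
            tris.append((i, j, k))
    return tris
```

In a live patch the "adaptive thresholding" the author mentions would feed the mask; here you would threshold a greyscale frame into `mask`, open it with `dilate(erode(mask))` to kill noise, then triangulate the sampled boundary points.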
i’m guessing you’re using the webcam to make a low-poly representation of human faces, instead of, let’s say, your cat or cactus?
well, generating the polygon pattern is fairly simple using [jit.bfg] with voronoi. in the help patch for [jit.bfg], under the voronoi tab, check out the "id" option in the "prepend metric" section. that will give you a hard-edged, single-color greyscale polygon pattern. using the area of each polygon, sample the average brightness/color in the corresponding area of your webcam matrix, and color each polygon to that sampled color. Using [jit.cellblock] you should see fairly clearly where the coordinates for each polygon’s bounds are (since it’ll be a one-plane greyscale matrix).
and then using the cv.jit objects, track a person’s face, and using the xray objects (???) determine the facial perimeter contours, and use that to carve the facial contours into the colored polygon-pattern image and mask out all the surrounding polygons.
that’s probably confusing, mainly because i’m still hashing it out in my head, and my solution is pretty overwrought. maybe there’s some shader out there that will convert everything into low poly?
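The "sample the average brightness/color per polygon" step above has a compact equivalent outside Max. This is a hypothetical NumPy sketch, not Max code: `labels` plays the role of the one-plane cell-ID matrix from [jit.bfg]'s voronoi "id" metric, and `frame` the webcam matrix.

```python
import numpy as np

def recolor_by_cell_mean(labels, frame):
    """Replace every pixel with the mean colour of its Voronoi cell.
    labels: (H, W) int array of cell ids; frame: (H, W, C) float array."""
    flat = labels.ravel()
    n = flat.max() + 1
    counts = np.bincount(flat, minlength=n)          # pixels per cell
    out = np.empty_like(frame)
    for c in range(frame.shape[2]):                  # one pass per channel
        sums = np.bincount(flat, weights=frame[..., c].ravel(), minlength=n)
        means = sums / np.maximum(counts, 1)         # per-cell average
        out[..., c] = means[flat].reshape(labels.shape)
    return out
```

The `np.bincount` trick does the "average over each polygon's area" in one vectorized pass per channel, which is the same accumulate-then-divide idea a Jitter patch would implement with matrix operators.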
I’d love to see solutions to this as well.
Thank you very much Greg. I will try what you suggested.
For now, I’ll share another relevant example I came across.
The author of the video wrote "real-time mocap > Delaunay Triangulation > Voronoi Tessellation > .obj sequence > 3DSMax"
Now I have started to build the actual patch. But I am not sure how I can do this part: "using the area of each polygon, sample the average brightness/color in the corresponding area of your webcam matrix, and color each polygon to that sampled color". I know what you mean, but I am not skilful enough to implement it… Could you kindly give me some clues? All I can guess for now is that I could use jit.gl.pix for the sampling.
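A jit.gl.pix patch would run per-pixel logic on the GPU. A simpler variant of the sampling idea, which per-pixel code handles naturally, is to colour each pixel from the single webcam pixel under its nearest Voronoi seed rather than the cell's average. Here is a hedged CPU sketch of that per-pixel logic in NumPy (names and shapes are assumptions, not Jitter API):

```python
import numpy as np

def voronoi_color(frame, seeds):
    """Paint every pixel with the webcam colour sampled at its nearest
    seed point -- nearest-neighbour sampling, an approximation of the
    per-cell average. frame: (H, W, C); seeds: sequence of (row, col)."""
    seeds = np.asarray(seeds, dtype=int)
    h, w = frame.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    # squared distance from every pixel to every seed: shape (N, H, W)
    d2 = ((rows[None] - seeds[:, 0, None, None]) ** 2 +
          (cols[None] - seeds[:, 1, None, None]) ** 2)
    nearest = d2.argmin(axis=0)          # (H, W) index of the closest seed
    return frame[seeds[nearest, 0], seeds[nearest, 1]]
```

Because "nearest seed" is exactly the Voronoi partition, this recovers the hard-edged cell look without ever computing cell boundaries explicitly, which is why a shader version is attractive.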
Also could you kindly tell me a bit more about "xray objects"…?
Thanks a lot
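On the masking step Greg described (keeping only the polygons inside the tracked facial contour): once the contour is available as a list of vertices, a standard even-odd ray-casting test decides which cells to blank out. A hypothetical Python sketch, not tied to the xray objects' actual API:

```python
def point_in_poly(x, y, poly):
    """Even-odd ray casting: True when (x, y) is inside the closed
    polygon given as a list of (px, py) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge crosses the scanline
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside

def cells_outside(centroids, contour):
    """Cell ids whose centroid falls outside the contour -- these are
    the surrounding polygons to mask out.
    centroids: dict mapping cell id -> (x, y)."""
    return {cid for cid, (cx, cy) in centroids.items()
            if not point_in_poly(cx, cy, contour)}
```

Each Voronoi cell's centroid stands in for the whole cell, which is a reasonable approximation when cells are small relative to the face.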
Hi Freeka, not really. For visuals I have switched to TouchDesigner.