For the project I'm planning, I need to track people in front of a
white wall so that I can capture their silhouettes. I want to use this
data in Processing to build an ecosystem that interacts with the user.
I've heard of a couple of ways to bring Jitter data into Processing:
maxlink and OSC. There are a couple of issues, though. Besides tracking
the user, I also want to show them in the applet. In other words, I
want two copies of the picture: one will be the sensing picture, and
the other will be the display picture.
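To give a sense of what the OSC route involves: Jitter can send tracking values (say, a blob's x position) over UDP with its udpsend object, and a Processing sketch would normally receive them with the oscP5 library. Purely as an illustration of what travels over the wire, here is a minimal sketch of encoding and decoding a single-float OSC message. The address "/blob/x" and the class name are my own invented examples, not anything from Jitter or Processing.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a one-float OSC message, the kind a Jitter patch
// might emit for a tracked position. In a real Processing sketch you
// would use the oscP5 library rather than parsing packets by hand.
public class OscSketch {

    // Build an OSC packet with one float argument.
    static byte[] encode(String address, float value) {
        byte[] addr = pad(address);
        byte[] tags = pad(",f");          // type-tag string: one float
        ByteBuffer buf = ByteBuffer.allocate(addr.length + tags.length + 4);
        buf.put(addr).put(tags).putFloat(value); // OSC data is big-endian
        return buf.array();
    }

    // Null-terminate a string and pad it to a 4-byte boundary,
    // as the OSC 1.0 spec requires.
    static byte[] pad(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        int len = ((raw.length + 1 + 3) / 4) * 4; // +1 for terminating null
        byte[] out = new byte[len];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    // Pull the float argument back out of the packet.
    static float decodeFloat(byte[] packet) {
        ByteBuffer buf = ByteBuffer.wrap(packet);
        skipString(buf); // address pattern
        skipString(buf); // type tags
        return buf.getFloat();
    }

    // Advance past a null-terminated, 4-byte-padded OSC string.
    static void skipString(ByteBuffer buf) {
        while (buf.get() != 0) { }
        while (buf.position() % 4 != 0) buf.get();
    }

    public static void main(String[] args) {
        byte[] packet = encode("/blob/x", 123.5f);
        System.out.println(decodeFloat(packet)); // prints 123.5
    }
}
```

In practice you would wrap the received values in whatever coordinates your ecosystem simulation uses; the point is only that the sensing side (Jitter) and the drawing side (Processing) exchange small numeric messages, not video frames.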
One question: while Jitter is using the camera for sensing, is it
possible to display the same picture in Processing? I'm pretty sure
this is a no-go with one camera. Any thoughts on that? If so, it
doesn't seem to make much sense to do the sensing in Jitter and then
do the processing based on that sensing in Processing.
I just don't want to deal with doing the people-sensing in Java. Has
anyone any experience with that? Any ideas or thoughts would be great.