For the project I am planning, I need to track people in front
of a white wall so that I can extract their silhouettes. I want to use
this data in Processing to build an ecosystem that interacts with the
user. I have heard of a couple of ways to bring Jitter data into
Processing: MaxLink and OSC. There are a couple of issues, though.
Besides tracking the user, I also want to show them in the applet. In
other words, I want two copies of the picture: one will be the sensing
picture, and the other will be the display picture.
One question is: while Jitter is using the camera to sense the user,
is it possible to use the same camera to display the picture in
Processing? I am pretty sure this is a no-go with one camera. Any
thoughts on that? If so, it doesn't seem to make much sense to do the
sensing in Jitter and the processing based on that sensing in Processing.
I just don't want to deal with Java for sensing the people. Has anyone
any experience with that? Any ideas or thoughts would be great.
Actually, I need to bring in lots and lots of data from the web
according to what is sensed. Ideally I am thinking of sensing colors
from the users, mapping this data to color names, and then running
Google and Flickr searches on those names. Eventually the data would be
displayed in the Processing window with some physical properties that
the user can interact with.
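The color-to-name mapping step could be a simple nearest-neighbor lookup in RGB space: sample a color from the user, find the closest entry in a palette of named colors, and use that name as the search term. A minimal sketch, assuming a tiny hand-picked palette (a real project would use a much larger named-color list):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ColorNamer {
    // A tiny illustrative palette; the names and values are assumptions.
    static final Map<String, int[]> PALETTE = new LinkedHashMap<>();
    static {
        PALETTE.put("red",    new int[]{255, 0, 0});
        PALETTE.put("green",  new int[]{0, 255, 0});
        PALETTE.put("blue",   new int[]{0, 0, 255});
        PALETTE.put("yellow", new int[]{255, 255, 0});
        PALETTE.put("white",  new int[]{255, 255, 255});
        PALETTE.put("black",  new int[]{0, 0, 0});
    }

    // Return the palette name closest to (r, g, b) by squared
    // Euclidean distance in RGB space.
    static String nearestName(int r, int g, int b) {
        String best = null;
        int bestDist = Integer.MAX_VALUE;
        for (Map.Entry<String, int[]> e : PALETTE.entrySet()) {
            int[] c = e.getValue();
            int d = (r - c[0]) * (r - c[0])
                  + (g - c[1]) * (g - c[1])
                  + (b - c[2]) * (b - c[2]);
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(nearestName(250, 30, 20)); // prints red
    }
}
```

The resulting name ("red", "yellow", ...) could then be dropped into a Flickr or Google image search query string; Euclidean distance in RGB is crude but good enough for a coarse palette like this.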
Personally I would stay in the Max environment, but it also depends on the equipment and processor power you have. Tracking people can cost a lot of processing, the more so if you want to send the video over OSC to Processing.
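The cost of shipping raw video rather than just tracking data can be put in numbers with a quick back-of-the-envelope calculation; the frame size and rate below are assumptions, just for scale:

```java
public class VideoBandwidth {
    public static void main(String[] args) {
        // Assumed stream: 320x240 pixels, 3 bytes per pixel (RGB), 30 fps.
        int w = 320, h = 240, bytesPerPixel = 3, fps = 30;
        long bytesPerSecond = (long) w * h * bytesPerPixel * fps;
        System.out.printf("%.1f MB/s uncompressed%n", bytesPerSecond / 1e6); // prints 6.9 MB/s uncompressed
    }
}
```

Nearly 7 MB/s of uncompressed pixels versus a few dozen bytes per tracked blob is why doing the vision work in one environment and sending only coordinates to the other is the usual approach.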