
3-D image using Binocular Camera system

October 29, 2006 | 5:49 pm

I’m creating an art installation and need to do something very specific as a critical step. Using two cameras of the webcam sort (or better), separated by about the distance between a pair of eyes, I can send both QT signals into a patch. What I want to do then is process them together so that I am left with a single black-and-white movie signal in which closer objects are whiter and farther objects are increasingly darker. It would work like our own binocular eyes: everything past a certain distance would be all black, but once that threshold is crossed, a person’s image would become lighter as they approach the cameras. Also, the greater the resolution, the better.
Any ideas for this?
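
To make the mapping concrete: if I somehow had a per-pixel depth map, this is roughly the conversion I’m imagining (a rough Python/NumPy sketch with an arbitrary 3-metre threshold; producing the depth map from the two cameras is exactly the part I don’t know how to do):

import numpy as np

def depth_to_brightness(depth_m, threshold_m=3.0):
    # Black beyond the threshold; whiter as objects approach the cameras.
    out = np.zeros_like(depth_m, dtype=np.float32)
    near = depth_m < threshold_m
    # Linear ramp: at the threshold -> 0 (black), right at the cameras -> 1 (white).
    out[near] = 1.0 - depth_m[near] / threshold_m
    return (out * 255).astype(np.uint8)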


October 29, 2006 | 10:30 pm


October 29, 2006 | 11:30 pm

I think you may have to be a bit more reasonable about what you want
from stereo cameras. This is a very open research field that a lot of
EE and CS people are working on, and there are no definitive solutions. I
would suggest thinking about how you want your installation concept to
unfold, figuring out some simple things that can be done, and working up
to getting depth images from stereo cameras. There is, however, a
solution if you've got the bucks:
http://www.ptgrey.com/products/stereo.asp

wes


October 30, 2006 | 1:50 pm

Quote: wesley.hoke@gmail.com wrote on Sun, 29 October 2006 16:30
—————————————————-

> There is however a
> solution if you got the bucks:
> http://www.ptgrey.com/products/stereo.asp .

I use a Bumblebee2, and here are a few comments.

First, I needed to write an external to get the image into Jitter. This was on Windows, using the SDK provided by Point Grey. I haven’t managed to get it to work on a Mac, but the SDK is essential in any case, and it’s only available for Windows.

Second, the quality of the output is quite variable, even with a rather expensive and well-calibrated solution like the one above. Forget about getting anything out of two off-the-shelf webcams. It’s possible to tweak the space so that you get fairly good results; however, getting the environment just right for stereo vision may not be possible in an artistic setting.

I’m pretty much seconding what Wesley wrote, but don’t underestimate the complexity of the stereo vision problem. It’s something we take for granted because our brain does it so well, but it is in fact an impressive data-processing feat.

Anyway, if you have the money and aren’t afraid of programming, the only option that approaches what you want to do is the Bumblebee2 mentioned above (or, if you’re rich, the Digiclops). It works best if the area you’re interested in is fairly narrow – say about 1 meter along the z-axis.

Jean-Marc


October 30, 2006 | 3:05 pm

i am pretty sure this is an example of the bumblebee2 output.

http://www.uva.co.uk/wp/wp-content/projects/colder/colder.mov



October 30, 2006 | 5:06 pm

Yes it is. And it’s a really great creative use of the errors the
Bumblebee2 produces, at that.
wes



October 30, 2006 | 10:03 pm


October 30, 2006 | 10:24 pm

>
> this relatively cheap ($2500) tabletop 3D

yeah, "cheap" sometimes is relative ;-)

some time ago i found an article about a tech award for students who
built a 3d scanner based on a webcam and a laserpointer.
website unfortunately only in german:
http://www.cs.tu-bs.de/rob/forschung/auszeichnungen/dagm2006.htm
here they’ll also post the software sooner or later.

but the paper on the technology is available in english:
http://www.cs.tu-bs.de/rob/literatur/download/swi_2006_09_konferenz_dagm.pdf

jan


October 31, 2006 | 6:24 am

a very interesting *product* is TrackIR from NaturalPoint.
they sell a 120 fps(!) 1-bit camera that tracks a reflective IR dot, and they
also offer a thingy called 3vector that adds 6DOF to the equation.
it costs $120-170 and comes with a non-commercial SDK (links below).

80% of the guys on this thread could get it to track multiple dots ;)

http://www.naturalpoint.com/trackir/02-products/product-TrackIR-4-PRO.html

http://www.naturalpoint.com/trackir/05-developers/which-sdk.html



October 31, 2006 | 10:36 am

> process them both so that I am left with one black and white movie signal
> where closer objects are whiter and farther objects are increasingly
> darker.

hello,

a simple left-right difference might just tell you that there is a change in
depth, but it would not tell you how far a pixel is from the camera.
imagine a completely white room with one small red ball in it, something like
redball.mov, but in 3d. now imagine that the red ball is so small that on
your camera it is just one red pixel. it is standing still, at, say,
coordinates 160,120 in the left image. if it is far enough from the camera, it
would be at the same coordinates in the right image, right? but if it is
close enough to the camera, what was red in the left image at 160,120 would be
white in the right one. the difference would be significant, but you would not be
able to extract the depth from it. to get the depth, you would
have to know how far that red pixel moved from 160,120, and that's where the
hell that joost, jean-marc and wesley mentioned starts :)
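
to put a number on it, the standard pinhole relation is depth = focal length x baseline / disparity, so it is the size of that pixel shift (the disparity) that encodes distance. a quick sketch with invented numbers:

# depth from disparity, pinhole model -- all values here are made up
focal_px = 800.0      # focal length in pixels
baseline_m = 0.065    # camera separation, roughly the distance between two eyes
print(focal_px * baseline_m / 4.0)    # shift of 4 px  -> 13.0 m away
print(focal_px * baseline_m / 40.0)   # shift of 40 px ->  1.3 m away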

best,
n


October 31, 2006 | 1:12 pm

Quote: yair r. wrote on Mon, 30 October 2006 08:05
—————————————————-
> i am pretty sure this is an example of the bumblebee2 output.
>
> http://www.uva.co.uk/wp/wp-content/projects/colder/colder.mov
>

The Bumblebee2 output is just the two images: one is colour, the other greyscale. The SDK, however, provides the functionality for computing a depth map from the left and right images, and that depth map is more or less what the original message was asking about. The SDK also provides tools for creating 3D models from the depth map, and by using the colour image as a texture you get something that looks like what’s in the movie above. The sample application that comes with the SDK demonstrates this functionality, so for most people it’s one of the first things you’ll see when trying out the camera.
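
For what it’s worth, if you wanted to try computing a disparity map yourself from two plain (rectified) cameras rather than going through the Point Grey SDK, block matching is the standard starting point. Here is a rough Python/OpenCV sketch, not the Point Grey API; the file names and parameters are placeholders, and as said above, expect poor results from uncalibrated webcams:

import cv2
import numpy as np

# Placeholder inputs: rectified left/right greyscale frames.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Larger disparity = closer, so normalising the disparity already gives
# the "closer is whiter" image the original post asked for.
disparity[disparity < 0] = 0
near_is_white = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("near_is_white.png", near_is_white)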


October 31, 2006 | 3:36 pm


October 31, 2006 | 5:10 pm

