Structured Lighting in Jitter?
Hello all,
Can Jitter do structured lighting using a regular camera (not necessarily a Kinect) and a projector that projects white lines?
They talk about this possibility in the following video presenting the app MadMapper.
https://vimeo.com/27253954
Thank you,
ygr
Yes, one can do this in Jitter, but Jitter is a programming language, not an application, meaning that you or another programmer would have to write the code to do what you want.
If the question were: "Is Jitter a good language with which to program structured lighting techniques?", the answer would be "yes".
Of course, lembert.dome. I didn't ask properly. What I meant was "can it be done in Jitter?" I have enough experience to know Jitter is a programming language. Thanks for correcting me.
Any ideas how, though? Like what should I look for?
Thank you
My reply was not meant to be disparaging; excuse me if it came across that way. I thought that perhaps you were not yet familiar with Jitter.
What do you want to do, specifically? As a simple example: years ago I made a simple distance sensor by projecting two red points on a wall and detecting them with a cheap webcam. The number of pixels between the two points in the webcam feed could then be used to calculate the distance.
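In case it helps, here is a rough sketch of that idea in Python with OpenCV (my original was a Max patch, so everything here, thresholds included, is illustrative; it assumes OpenCV 4's findContours signature):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # cheap webcam
ok, frame = cap.read()
cap.release()

# Isolate strongly red pixels; the thresholds must be tuned per setup.
b, g, r = cv2.split(frame)
mask = ((r > 200) & (g < 80) & (b < 80)).astype(np.uint8) * 255

# Treat the two largest red blobs as the projected points.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]

centers = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

if len(centers) == 2:
    # Pixel separation between the dots; mapping this to a physical
    # distance needs a calibration measured for your own setup.
    px = np.hypot(centers[0][0] - centers[1][0],
                  centers[0][1] - centers[1][1])
    print("pixel separation:", px)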
Right now I am working on a Max object which calculates the transforms between a projector and a camera, based on a projected calibration pattern.
What are you looking to do? If you want depth data, I strongly recommend using a Kinect; they are cheap and effective. When they came on the market, I had already been waiting for such a thing for a couple of years. If you want to emulate that sort of functionality in Jitter, you will probably run into heavy CPU use and some real lag time. It is possible, however.
One clue could be: use existing tools.
You can find the intrinsic and extrinsic parameters with OpenCV (chessboard or asymmetric circle-grid calibration). The intrinsics give you the 3x3 camera matrix, and together with the extrinsic rotation and translation you can deduce the 4x4 transform matrix which can be used in an OpenGL context.
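To make that concrete, here is a rough sketch of the pipeline in OpenCV's Python bindings (untested, OpenCV 4 assumed; the filenames and pattern size are placeholders):

import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the printed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["view1.png", "view2.png", "view3.png"]:  # placeholder files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics (3x3 camera matrix), distortion, and per-view extrinsics.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Build a 4x4 transform from one view's rotation/translation.
R, _ = cv2.Rodrigues(rvecs[0])
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = tvecs[0].ravel()

# OpenCV's camera looks down +Z with Y down; OpenGL looks down -Z
# with Y up, so flip those axes before using M in an OpenGL context.
flip = np.diag([1.0, -1.0, -1.0, 1.0])
modelview = flip @ M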
I have some resources about those methods if you are interested.
You can also take a look at openFrameworks, which has some OpenCV functions included too.
ad
@Ad: yes, I'm interested; I am just beginning to look into this. I was going to try to dig up some material from a workshop I attended seven years ago, so if you have info available it would save me some time.
Basically, I will build an object which outputs a pattern that can be projected, finds the transformation, and outputs it so that it can be used by jit.gl.camera. After that it would be nice to add a routine to compensate for lens distortion.
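For the lens-distortion part, the OpenCV side of it is compact. A minimal Python sketch, assuming K and dist come from a cv2.calibrateCamera run (the values below are dummies just to keep the example self-contained):

import cv2
import numpy as np

# Dummy intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
# in practice these come out of cv2.calibrateCamera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

frame = cv2.imread("camera_view.png")        # placeholder file name
undistorted = cv2.undistort(frame, K, dist)  # straightens the lens warp
cv2.imwrite("camera_view_undistorted.png", undistorted)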
I looked briefly for threads in this forum and will do so again in more depth, but it did not seem that this was available yet.
Hi guys,
Thank you for the info! Don't worry, lembert.dome! I didn't feel offended in any way! :)
So there isn't a ready-made solution for Jitter yet. Hmm...
Ad, I am trying to figure out what you mean by "find the intrinsic and extrinsic parameters with OpenCV (chessboard or asymmetric circle-grid calibration) ... deduce the 4x4 transform matrix which can be used in an OpenGL context." The only experience I have with OpenCV is using face detection :) Which objects from the cv.jit package do you think I should look into?
lembert.dome, maybe you can reverse-engineer MadMapper or Johnny Lee's examples (http://johnnylee.net/projects/, which is where, I am guessing, the creators of MadMapper got their inspiration).
I would like to invite you all to this thread. I would love to get all of your input on this...