Augmented Reality - need help inserting 3D models into a live video feed
I am trying to create an augmented reality (AR) piece and have started off by working with what's built into Max 5. I have connected live video with color tracking, and now I am trying to add a 3D model. Is it possible to composite the 3D model into the live video?
I am very new to all of this and am not expecting much, but at the very least I plan to create the simplest of ARs.
Here is my patch.
Use Copy Compressed when posting your patches. This one looks like it got truncated... no one will be able to open it :(
Thank you and here is the new patch:
Try this... It was adapted from a patch I made that Rob fixed up for me, using a videoplane with @transform_reset 2 enabled.
Sorry for the inconvenience, but I'm having trouble connecting this patch to the live video feed.
Just change jit.qt.movie to jit.dx.grab and give it an open message.
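If you're working from a js object instead of patching it, the same swap looks roughly like this (an untested sketch; the dimensions are placeholders, and it's jit.qt.grab on a Mac):

    // was: var grab = new JitterObject("jit.qt.movie", 320, 240);
    var grab = new JitterObject("jit.dx.grab", 320, 240); // jit.qt.grab on a Mac
    grab.open(); // the equivalent of the open message: starts the camera instead of reading a movie file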
I didn't get your patch to work with the live feed, but I did get a 3D model on the same screen as the live video feed. I say the same screen because it flashes between the 3D model (with its background) and the live video feed. Is there any way to combine them properly and remove the 3D model's background?
I've also been trying to tie the position of the 3D model to the color tracking. I started by trying to make the 3D model follow the mouse, so I could see the relationship as I build the patch. I have been somewhat successful: the model follows the mouse's x and y coordinates, but only separately. I can't seem to combine both x and y coordinates together.
Don't take this the wrong way, but when things flash like that, you're doing it wrong.
I noticed that with the example I posted, when I changed the position attribute of jit.gl.videoplane to -90 it seemed to work. Give that a try.
In your example, the image you are putting into jit.window is in direct competition with the render context. 3D models render to a render context; 2D images must be converted into a format that works in that 3D coordinate system.
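To make that concrete, here's a rough js-object sketch of the usual fix: texture the live feed onto a jit.gl.videoplane that lives in the same context as the model, instead of sending the matrix to jit.window. The context name "ARctx", the model file, and the dimensions are all placeholders; drive the bang from a qmetro.

    autowatch = 1;

    // one shared GL context for the window, the renderer, and everything drawn in it
    var mywindow = new JitterObject("jit.window", "ARctx");
    var myrender = new JitterObject("jit.gl.render", "ARctx");

    // camera input (jit.dx.grab on a PC)
    var grab = new JitterObject("jit.qt.grab", 320, 240);
    grab.open();
    var frame = new JitterMatrix(4, "char", 320, 240);

    // the live feed is textured onto a videoplane behind everything else
    var vp = new JitterObject("jit.gl.videoplane", "ARctx");
    vp.transform_reset = 2; // fill the window regardless of the camera transform
    vp.layer = 0;           // draw first, i.e. in the background

    // the 3D model draws on top of the videoplane
    var model = new JitterObject("jit.gl.model", "ARctx");
    model.read("apple.obj"); // placeholder model file
    model.layer = 1;         // draw after the videoplane

    function bang() {
        grab.matrixcalc(frame, frame); // pull a camera frame
        vp.jit_matrix(frame.name);     // texture the videoplane with it
        myrender.erase();
        myrender.drawclients();        // videoplane first, then the model, per @layer
        myrender.swap();
    }

The @layer values are what stop the flashing: everything is drawn into one context in a fixed order, rather than the window being asked to show a raw matrix and the GL scene at the same time.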
Hello,
Sorry to barge in on your discussion here, but this is something I'm also trying to figure out how to do for my own project.
Cap10, when I open your patch (the one with the apple) I can't see the video either.
I can only hear the soundtrack when I load a video. I assume this has to do with what you wrote: "3D models render to a render context; 2D images must be converted into a format that works in that 3D coordinate system."
If that is the case,
1. how should this rendering be done?
2. is it possible to use the built-in camera for your patch as well?
Thanks,
(it's my first post so be gentle :)
Setting jit.gl.videoplane to -90 does work, but it is blinky. The biggest issue I have now is positioning the 3D model at x/y coordinates.
btw thanks for all the help cap10subtext.
Hi guys, I don't mean to be rude, but it's important to remember that these are advanced concepts, particularly how the live feed works in relation to the projected image. Putting the videoplane for the webcam in the background isn't an exact science in Max; there are several approaches. For starters, be clear about which coordinate system you are working in: 3D or screen.
If you are trying to put the live feed directly in the background, you need to find examples that use drawpixels (I believe this is the approach in the jit.artkmulti help patch, so you can look there).
If you are rendering on a videoplane it's much more complicated: basically you are working with the @transform_reset 2 attribute, and I'm not 100% sure about all of its quirks. I do know the position of the videoplane matters, but I can't recall how or why at the moment.
You should NEVER drop the live video matrix directly into jit.window; you need to use drawpixels. Doing this tells me you need to look through the Jitter tutorials more carefully and understand what each part you copy into your patch does. I hope this helps.
Sorry, I misspoke: you shouldn't drop it into jit.window if you are using it as a render context.
I am not aiming that high in terms of how polished this AR will be. My short-term goal is to put the live video and the 3D model together, then use color tracking to find the x/y coordinates for the 3D model to follow. The only thing really left for this to work is to properly connect the 3D model to the coordinates from the color tracking.
Thanks Cap10, I think you're right that it is a bit advanced.
I just finished the Jitter tutorials, so I guess I should go over them again to understand the concepts better. I wish they had a tutorial on this specific subject, though, because it's so interesting and full of potential.
There are a lot of questions in this thread, so hopefully this patch will answer them.
It uses the color tracking from the Jitter tutorial, a jit.gl.sketch object to convert screen coords to world coords, the layer attribute, and a jit.gl.videoplane to overlay a model on a movie.
Please let me know if anything is not clear. If this doesn't give you a direction to move in, please restate your question, being as specific as possible.
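For the x/y question specifically, the screen-to-world conversion looks something like this from a js object (a sketch with placeholder names; I believe screentoworld hands back the [x, y, z] world coordinates, but check it against the jit.gl.sketch reference):

    var mysketch = new JitterObject("jit.gl.sketch", "ARctx");
    var model = new JitterObject("jit.gl.model", "ARctx");

    // x and y are the tracked blob's center in screen pixels,
    // e.g. derived from jit.findbounds in the color-tracking tutorial
    function track(x, y) {
        var world = mysketch.screentoworld(x, y); // screen coords -> GL world coords
        model.position = [world[0], world[1], 0]; // move the model, keeping it on the z = 0 plane
    }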
This is a very handy patch, but I'm having trouble inputting live video instead of the movie. Any solutions?
"This is a very handy patch but im having trouble inputting live video instead of the movie. Any solutions?"
SangMinLee: I'm just having trouble wrapping my head around this... What are you having trouble with? All you have to do is change jit.qt.movie to jit.dx.grab (on a PC) and send an open message. Run this patch:
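(The original patch attachment is missing here; as a stand-in, a bare-bones camera test in js would look something like this, with placeholder dimensions; use jit.dx.grab or jit.qt.grab to match your OS.)

    // minimal camera test: grab frames and send them straight to a window
    var win = new JitterObject("jit.window", "grabtest");
    var grab = new JitterObject("jit.dx.grab", 320, 240); // or "jit.qt.grab" on a Mac
    grab.open();
    var frame = new JitterMatrix(4, "char", 320, 240);

    function bang() { // drive from a qmetro
        grab.matrixcalc(frame, frame);
        win.jit_matrix(frame.name); // fine here, since this window is not a render context
    }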
And if neither one accesses your camera, it's a problem with the webcam drivers OR your webcam is being used by another program.
Unless you can be more specific about what your "trouble" is, we can only guess....
Hi, sorry to dredge up an old thread, but this is exactly what I'm trying to figure out as well, except I'm also trying to take the finished composite (e.g. Robert's great apple flying around over the dishes) and turn it into a usable matrix. I tried using jit.gl.asyncread, but it only output a gray matrix. It seems to only be able to handle one rendered object at a time (which makes sense to me, but just barely). What kind of setup can I use to render the complete package?
Thanks
jit.gl.asyncread is what you need to read the GL output back into a matrix.
Make sure you set the @layer attribute of asyncread to something high (like 1000).
This ensures the asyncread will render after all other GL objects in the context.
Also, make sure you really want to render back to a matrix from GL land, as this will make your patch less efficient.
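In js terms the setup is just this (a sketch, reusing the placeholder context name from the earlier examples):

    // read the composited GL frame back into matrix land
    var reader = new JitterObject("jit.gl.asyncread", "ARctx");
    reader.layer = 1000; // render last, after every other GL object in the context

In the patch itself, jit.gl.asyncread's outlet should then hand you the full composite (videoplane, model, and all) as a jit_matrix each time the context renders.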
Thanks, Robert. The @layer attribute was exactly what I needed to alter.