Geometry Matrix for Anaglyph
I'm trying to make an automatic way of creating an anaglyph view of an OpenGL render by sending a geometry matrix to two jit.gl.render objects with slightly different camera positions. After this step, however, I need to process the images to remove the correct colour channels in each view and then overlap them. Is there a way to do this while keeping the processing on the GPU rather than the CPU (which is what I believe would happen if I told jit.gl.render to output a matrix rather than draw to a window)?
You need to use jit.gl.sketch as well as jit.gl.render - here's the basic recipe:
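In plain OpenGL terms (jit.gl.sketch accepts the same calls as lowercase messages, e.g. "glcolormask 1 0 0 1"), the recipe boils down to a two-pass render with glColorMask. Here's a rough sketch of that idea only, with hypothetical setCamera()/drawScene() helpers and an illustrative eye offset standing in for your own scene code:

```c
/* Rough sketch of the two-pass anaglyph idea in plain OpenGL 1.x.
 * In jit.gl.sketch these calls become lowercase messages such as
 * "glcolormask 1 0 0 1". */
#include <OpenGL/gl.h>            /* <GL/gl.h> on Linux/Windows */

extern void setCamera(float eyeOffsetX);  /* hypothetical: position one eye */
extern void drawScene(void);              /* hypothetical: your scene code  */

void drawAnaglyphFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Left eye: write only the red channel. */
    glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);
    setCamera(-0.03f);                    /* half the interaxial, to the left */
    drawScene();

    /* Clear depth so the right view isn't occluded by the left pass. */
    glClear(GL_DEPTH_BUFFER_BIT);

    /* Right eye: write only green and blue (cyan). */
    glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);
    setCamera(0.03f);                     /* half the interaxial, to the right */
    drawScene();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  /* restore the mask */
}
```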
Thanks for that, pseudostereo - I've seen this method of achieving anaglyph before. However, I'm more interested in using a geometry matrix to accomplish this, as it saves any possible confusion caused by having to send the rendering instructions twice.
However, the glcolormask command you used looks like it might help me to achieve what I want.
If you can stand by for a couple days, I will be publishing a Jitter Recipe that might help with what you are trying to accomplish.
That would be great, thanks Andrew. I'll look out for it.
Anaglyph Recipe is here:
https://cycling74.com/tutorials/jitter-recipes-book-3/#anaglyphrender
Hope that helps.
Now don't get me wrong, Andrew, I love the Jitter Recipes, but...
Because you're converging the left and right cameras (that is, moving them farther apart while leaving the target point the same), you can end up with some pretty nasty keystone distortion as the distance between the cameras increases.
To demonstrate, just load the simple_test patch and bang the two jit.texture objects without loading a texture first, so that they have the default checkerboard pattern. If you increase the "camera shift" (the interaxial) past +/-1.0, you will begin to see vertical parallax in the corners of the videoplane. It happens because the cameras are no longer facing the videoplane head-on, so they no longer see corresponding horizontal points at the same height. Of course, this is a large interaxial, but it can happen with much smaller ones depending on the scene and the view angle. And with stereo, any vertical parallax is too much.
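To make "converging" concrete, here's roughly what the toed-in setup looks like in raw GL terms (illustrative values, not the actual patch):

```c
/* Toed-in ("converged") stereo: both eyes aim at the same target point,
 * so each camera rotates inward as the interaxial grows -- which is
 * exactly what introduces the vertical parallax in the corners. */
#include <OpenGL/glu.h>           /* <GL/glu.h> on Linux/Windows */

void setToedInCamera(float eyeOffsetX)    /* -interaxial/2 or +interaxial/2 */
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeOffsetX, 0.0, 4.0,       /* eye, shifted along x      */
              0.0,        0.0, 0.0,       /* same target for both eyes */
              0.0,        1.0, 0.0);      /* up vector                 */
}
```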
You could fix it by rotating the lefty and righty videoplanes slightly to compensate - but you'll still have the gaps on the left and right edges when you adjust "camera spread" (convergence).
This is why the generally preferred method for stereo (at least in the virtual realm) is to use parallel cameras (increasing the distance between the left and right target points at the same rate as the interaxial), and to control the horizontal shift with an off-axis frustum (as in my example above).
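For reference, here's a sketch of that parallel-camera / off-axis construction in plain GL, under the usual assumptions (symmetric vertical field of view, zero-parallax plane at the convergence distance; the variable names are mine):

```c
/* Parallel cameras with an asymmetric (off-axis) frustum: the eyes face
 * straight ahead, and the horizontal image shift comes from skewing the
 * frustum rather than rotating the camera -- so no vertical parallax. */
#include <math.h>
#include <OpenGL/gl.h>            /* <GL/gl.h> on Linux/Windows */

void setOffAxisCamera(float eyeOffsetX,   /* -interaxial/2 or +interaxial/2  */
                      float fovY,         /* vertical field of view, degrees */
                      float aspect,
                      float nearZ, float farZ,
                      float convergeZ)    /* distance to zero-parallax plane */
{
    float top   = nearZ * tanf(0.5f * fovY * 3.14159265f / 180.0f);
    float right = top * aspect;
    /* Shift the frustum opposite the eye, scaled to the near plane, so
     * the two views coincide exactly at the convergence distance. */
    float shift = eyeOffsetX * nearZ / convergeZ;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right - shift, right - shift, -top, top, nearZ, farZ);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eyeOffsetX, 0.0f, 0.0f);  /* translate only; no rotation */
}
```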
David, I'm not sure why you have a problem with sending rendering instructions twice. In my example, I could have banged one message twice for the left and right views - I just set it up the way it is for clarity. In fact I don't know of any stereo method that doesn't have to render the same scene twice, one way or another.
hi pseudostereo,
Thanks for the feedback. In the interest of providing a recipe that actually represents a sort of current best practice, I'd love to incorporate your ideas. I'll check out your patch in more detail and post a revision here soon.
Pseudostereo, I know that obviously I will have to render the scene twice. My interest is in creating wrapper patches for jit.gl.sketch and jit.gl.render that will allow me to render anaglyph scenes without thinking too much about it, i.e. by using them in exactly the same way as the original objects but with a stereo output. It's purely to improve my workflow: I have a very big project coming up making very heavy use of anaglyph imagery, and I'm trying to find the most efficient and flexible way to render the scenes.
Ok, below you'll find a revision of the "object_register" subpatch that includes some of pseudostereo's suggestions. Hopefully that works. I tend to get a little foggy when viewing frustums come into play.
Also, if you are interested in this subject, Paul Bourke has an informative article here: http://local.wasp.uwa.edu.au/~pbourke/exhibition/vpac/theory.html
That's it exactly Andrew.
David, I understand your concern, but as long as you keep your scene generation functionality separate from your stereo render parameters (like Andrew's done with his RenderMaster framework) it shouldn't be much more complicated than everyday 2D rendering.
And Andrew - while we're on the subject of stereo 3D - do you know if we will ever have access to the stereo buffers of jit.gl.render, so that we can do active quad-buffered (frame-sequential) stereo with any of the fairly cheap 3D-capable projectors that are now available? That would be nice...
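(For reference, in raw OpenGL quad-buffered stereo just means drawing into the left and right back buffers of a stereo-capable context - something like the sketch below. It needs a pixel format with stereo support, which is the part jit.gl.render would have to expose.)

```c
/* Frame-sequential (quad-buffered) stereo in raw OpenGL. Requires a
 * stereo-capable pixel format and driver support; setCamera()/drawScene()
 * are the same hypothetical helpers as in the anaglyph sketch above. */
#include <OpenGL/gl.h>            /* <GL/gl.h> on Linux/Windows */

extern void setCamera(float eyeOffsetX);
extern void drawScene(void);

void drawQuadBufferedFrame(void)
{
    glDrawBuffer(GL_BACK_LEFT);           /* left-eye back buffer  */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setCamera(-0.03f);
    drawScene();

    glDrawBuffer(GL_BACK_RIGHT);          /* right-eye back buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setCamera(0.03f);
    drawScene();

    /* Swap buffers via the windowing layer; the driver then alternates
     * the eyes in sync with the shutter glasses. */
}
```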
For active stereo, you should check out Wesley and Graham's COSM objects, which manage this:
And for all you anaglyph enthusiasts, here's some Nebular fun, courtesy of the Hubble telescope: