Geometry Matrix for Anaglyph

    Sep 07 2010 | 2:45 pm
    I'm trying to make an automatic way of creating an anaglyph view of an OpenGL render by sending a geometry matrix to two objects with slightly different camera positions. After this step, however, I need to process the images to remove the correct colour channels in each view and then overlap them. Is there a way to do this whilst still processing the render on the GPU rather than the CPU? (That is what I believe would happen if I tell it to output a matrix rather than draw to a window.)

    • Sep 07 2010 | 3:54 pm
      You need to use as well as - here's the basic recipe:
    • Sep 07 2010 | 4:07 pm
      Thanks for that, pseudostereo. I've seen this method of achieving anaglyph before; however, I'm more interested in using a geometry matrix to accomplish this, as it avoids any possible confusion caused by having to send the rendering instructions twice.
      However, the glcolormask command you used looks like it might help me to achieve what I want.
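      For reference, the colour-mask step amounts to keeping only the red channel of the left view and the green/blue channels of the right view. A minimal NumPy sketch of that compositing (done on the CPU here purely to illustrate what the mask does; in Jitter the equivalent masking would stay on the GPU):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine left/right RGB frames into a red/cyan anaglyph.

    Mirrors what the colour-mask step does on the GPU: the left view
    keeps only its red channel, the right view keeps green and blue.
    left, right: uint8 arrays of shape (H, W, 3).
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red from the left eye
    out[..., 1:] = right[..., 1:]  # green + blue from the right eye
    return out
```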
    • Sep 07 2010 | 11:44 pm
      If you can stand by for a couple of days, I will be publishing a Jitter Recipe that might help with what you are trying to accomplish.
    • Sep 08 2010 | 1:07 pm
      That would be great, thanks Andrew. I'll look out for it.
    • Sep 14 2010 | 9:43 pm
    • Sep 15 2010 | 4:48 am
      Now don't get me wrong, Andrew, I love the Jitter Recipes, but...
      Because you're converging the left and right cameras (toeing them in: moving them farther apart while leaving the target point the same), you can end up with some pretty nasty keystone distortion as the distance between the cameras increases.
      To demonstrate, just load the simple_test patch and bang the two jit.texture objects without loading a texture first, so that they show the default checkerboard pattern. If you increase the "camera shift" (the interaxial) past +/-1.0, you will begin to see vertical parallax in the corners of the videoplane. It happens because the cameras are no longer facing the videoplane head-on, so they no longer see corresponding horizontal points at the same height. Of course, this is a large interaxial, but it can happen with much smaller ones depending on the scene and the view angle. And with stereo, any vertical parallax is too much.
      You could fix it by rotating the lefty and righty videoplanes slightly to compensate - but you'll still have the gaps on the left and right edges when you adjust "camera spread" (convergence).
      This is why the generally preferred method for stereo (at least in the virtual realm) is to use parallel cameras (increasing the distance between the left and right target points at the same rate as the interaxial), and to control the horizontal shift with an off-axis frustum (as in my example above).
      David, I'm not sure why you have a problem with sending rendering instructions twice. In my example, I could have banged one message twice for the left and right views - I just set it up the way it is for clarity. In fact I don't know of any stereo method that doesn't have to render the same scene twice, one way or another.
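      The vertical parallax from toed-in cameras can be checked numerically. A small sketch (my own illustration, not taken from the patches in this thread) projecting one corner point through two pinhole cameras, toed-in versus parallel:

```python
import math

def project(point, cam_x, toe_deg):
    """Project a world point through a pinhole camera at (cam_x, 0, 0),
    yawed toe_deg degrees about the y axis, looking down -z.
    Returns normalized image-plane coordinates (x', y')."""
    t = math.radians(toe_deg)
    x, y, z = point[0] - cam_x, point[1], point[2]
    # world -> camera space: inverse yaw rotation about y
    xc = math.cos(t) * x - math.sin(t) * z
    zc = math.sin(t) * x + math.cos(t) * z
    return (xc / -zc, y / -zc)

corner = (1.0, 1.0, -5.0)
# toed-in pair: the corner lands at different heights in each eye
yl = project(corner, -0.5, 5.0)[1]
yr = project(corner, 0.5, -5.0)[1]
# parallel pair: identical heights, no vertical parallax
pl = project(corner, -0.5, 0.0)[1]
pr = project(corner, 0.5, 0.0)[1]
```

      With the toed-in pair, yl and yr differ (vertical parallax at the corner); with the parallel pair they match exactly.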
    • Sep 15 2010 | 4:39 pm
      hi pseudostereo, Thanks for the feedback. In the interest of providing a recipe that actually represents a sort of current best practice, I'd love to incorporate your ideas. I'll check out your patch in more detail and post a revision here soon.
    • Sep 15 2010 | 5:31 pm
      Pseudostereo, I know that obviously I will have to render the scene twice. My interest is in creating wrapper patches for and that will allow me to render anaglyph scenes without thinking too much about it, i.e. by using them in exactly the same way as the original objects but with a stereo output. It's purely to improve my workflow: I have a very big project coming up that makes very heavy use of anaglyph imagery, and I'm trying to find the most efficient and flexible way to render the scenes.
    • Sep 15 2010 | 8:10 pm
      Ok, below you'll find a revision of the "object_register" subpatch that includes some of pseudostereo's suggestions. Hopefully that works. I tend to get a little foggy when viewing frustums come into play.
      Also, if you are interested in this subject, Paul Bourke has an informative article here:
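      The off-axis approach discussed in this thread boils down to an asymmetric near-plane frustum per eye. A hedged sketch of the arithmetic (the function name and parameter layout here are my own, not from the recipe):

```python
import math

def stereo_frustum(fov_y_deg, aspect, near, convergence, eye_sep, eye):
    """Asymmetric (off-axis) frustum for parallel-camera stereo.

    eye: -1 for the left camera, +1 for the right.
    Returns (left, right, bottom, top) bounds at the near plane,
    suitable for a glFrustum-style call. Both cameras stay parallel;
    the horizontal shift places zero parallax at `convergence`.
    """
    top = near * math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = aspect * top
    shift = -eye * (eye_sep / 2.0) * near / convergence
    return (-half_w + shift, half_w + shift, -top, top)
```

      Each camera is then translated by eye * eye_sep / 2 along its own x axis, with no rotation.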
    • Sep 16 2010 | 4:24 am
      That's it exactly Andrew.
      David, I understand your concern, but as long as you keep your scene generation functionality separate from your stereo render parameters (like Andrew's done with his RenderMaster framework) it shouldn't be much more complicated than everyday 2D rendering.
      And Andrew - while we're on the subject of stereo 3D - do you know if we will ever have access to the stereo buffers of so that we can do active, quad-buffered (frame-sequential) stereo with any of the fairly cheap 3D-capable projectors that are now available? That would be nice...
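      The separation of concerns mentioned above (scene generation vs. stereo render parameters) can be sketched as a tiny render loop, in Python purely for illustration:

```python
def render_stereo(draw_scene, set_camera, eyes=("left", "right")):
    """Render one pass per eye. draw_scene knows nothing about stereo;
    set_camera holds all per-eye parameters (interaxial, frustum
    shift, colour mask), so the scene patch stays unchanged."""
    for eye in eyes:
        set_camera(eye)
        draw_scene()
```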
    • Sep 16 2010 | 6:14 pm
      For active stereo, you should check out Wesley and Graham's COSM objects, which manage this:
      And for all you anaglyph enthusiasts, here's some Nebular fun, courtesy of the Hubble telescope: