I've been experimenting with various multiple screen arrangements in order
to generate a CAVE-like environment. What I found out about Jitter's
behavior is truly baffling.
Here's a hypothetical situation:
One camera is positioned at 0,0,1 (xyz) and looks at 0,0,0.
A second camera is also positioned at 0,0,1, but looks at -1,0,1 (so it
is angled 90 degrees to the left of the first).
Each camera is rendered to its own matrix/window with identical
OpenGL content, thus emulating the same scene with two viewports.
Now the logical thing would be to pick a 90-degree lens angle for each
camera, so that the two views mesh perfectly together, with the vertical
seam connecting the two viewports falling along the 45-degree diagonal
from 0,0,1 to -1,0,0.
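To double-check my geometry, here's a quick sketch (plain Python, nothing Jitter-specific) showing that with a 90-degree horizontal field of view, the outer edge rays of the two frustums should coincide exactly along that diagonal:

```python
import math

# Camera at (0, 0, 1). Camera A looks down -z, camera B down -x.
# With a 90-degree horizontal FOV, each half-angle is 45 degrees.
half = math.radians(45.0)

# Left edge ray of camera A (rotated 45 degrees from -z toward -x):
edge_a = (-math.sin(half), 0.0, -math.cos(half))
# Right edge ray of camera B (rotated 45 degrees from -x toward -z):
edge_b = (-math.cos(half), 0.0, -math.sin(half))

# Both rays point along (-0.707, 0, -0.707), i.e. from (0,0,1)
# through (-1,0,0), so the frustums should tile with no gap.
print(edge_a)
print(edge_b)
```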
What I found, however, is that the only way this connection works
seamlessly is not with a lens angle of 90, but of 73.
So, does anyone know why this is so? The only logical conclusion would be
that the camera is represented not as a point which spreads out into a
rectangle, but rather as a smaller rectangle which grows into a larger one.
Yet I have no idea what the size of the smaller rectangle would be, nor
what angle it actually requires (other than the aforementioned
73/90 ≈ .811111 ratio).
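One thing worth checking, and this is an assumption on my part rather than something I've confirmed in Jitter's documentation: many OpenGL-based renderers (gluPerspective, for instance) interpret the lens angle as the *vertical* field of view. If that's the case here, then on a standard 4:3 viewport a 90-degree horizontal FOV corresponds to a vertical FOV of about 73.7 degrees, which is suspiciously close to the 73 that works:

```python
import math

def vertical_fov(horizontal_fov_deg, aspect_w_over_h):
    # Convert a horizontal field of view to the equivalent vertical
    # one for a viewport with the given width/height aspect ratio.
    half_h = math.radians(horizontal_fov_deg) / 2.0
    half_v = math.atan(math.tan(half_h) / aspect_w_over_h)
    return math.degrees(2.0 * half_v)

# A 90-degree horizontal FOV on a 4:3 viewport:
print(vertical_fov(90.0, 4.0 / 3.0))  # ~73.74
```

If that matches, the fix would be to keep the 90-degree geometry but feed the converted vertical angle to the renderer, rather than hunting for a magic ratio.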