true 360° video
I have been able to render a true 360° image by changing the camera rotation for each pixel in the image and saving the 1x1 pixel texture into a matrix using setcell. Obviously this is very slow, since I have to render the entire world again for each pixel in the image. What would be a quicker way to do this?
I have been able to record 360° videos in 8K using 4 cameras (youtube link), but the resampling of the textures results in some distortion, and there is always an empty spot at the top and bottom of the image. I would also like to move the camera around ever so slightly for each pixel (or each column of pixels) to enable "true" omnidirectional stereo imaging. That will not work properly with 4 cameras, or with any number of cameras that is less than the number of pixels in a row. Does anyone have some thoughts about this?
Thanks!
This is my current (slow!) true 360 patch:
Maybe use more than one camera that outputs a texture, and then use jit.gl.pix with a shader to stitch the 4 (or more) inputs together? This would then run on the GPU instead of the CPU.
That is exactly what I did to create the linked YouTube video. Unfortunately it does not have the desired result, for the reasons I mentioned in my original post.
But thanks for the reply! Any other ideas?
Just to be clear, this is what I am going for. The cameras for the left eye (upper image) and right eye (lower image) move around a circle while rotating, to capture each column of pixels. Both dodecahedrons are centered in the world, around the cameras. The purple one is just smaller and therefore closer, so it shows up on the right and on the left side of the green one respectively, due to the moving positions of the cameras. This cannot be done with stationary cameras.
I hope I have been able to communicate my idea clearly now, does anyone have an idea how to do this any quicker?
Afaik there are 3 options:
1. Render pixel by pixel as you are doing currently, which is pretty much raytracing. Or increase the patch size to multiple pixels... it might not be apparent in the result. Search Google Scholar for "omnistereo" for a method using cylindrical slices.
2. Render to cubemaps, each cubemap meaning six renders with 90° FOV cameras. You want the cube faces to have a higher resolution than your final image and/or use multisampling to eliminate aliasing in the final output. To get omnistereo you can render with an offset to the position of each vertex in the scene, but the stereo disparity needs to peter out at the vertical poles (and obviously this means one cubemap per eye) -- see the rough GLSL sketch after this list. It works as long as your models have plenty of vertices. After the cubes are rendered there's a final resample pass to warp them to the 360° format, e.g. equirectangular.
3. Do something clever with the depth buffer to offset pixels -- this is how cheap plugins make regular 3D games stereoscopic, and what some stitching algorithms do. It requires some heuristics to fill in missing data.
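To make option 2 a bit more concrete, here is a rough GLSL vertex-shader sketch of the per-vertex offset (just an illustration with a made-up eye_offset uniform, not code from any particular engine, and with arbitrary sign conventions):

// Per-vertex offset for omnistereo rendering (option 2), applied in view space.
// eye_offset is the signed camera displacement along the viewing circle:
// +half the interpupillary distance for one eye, negative for the other.
uniform float eye_offset;

vec4 omni_displace(vec4 v) {
    vec3 dir = v.xyz;                              // camera sits at the origin in view space
    float horiz = length(dir.xz);                  // horizontal distance to the vertex
    // sideways direction, perpendicular to the horizontal direction to the vertex
    vec3 side = horiz > 0.0 ? vec3(-dir.z, 0.0, dir.x) / horiz : vec3(0.0);
    // fade the disparity out towards the vertical poles
    float falloff = horiz / max(length(dir), 1e-6);
    v.xyz -= side * eye_offset * falloff;          // shifting vertices = shifting the camera the other way
    return v;
}

void main() {
    gl_Position = gl_ProjectionMatrix * omni_displace(gl_ModelViewMatrix * gl_Vertex);
}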
Thanks! I really like your second idea, if I understand it correctly.
Do you mean instead of moving the cameras, I would rotate each vertex in the scene around the camera in opposite directions for each eye, the amount depending on its (horizontal) distance to the camera? That might actually work!
It does indeed work -- I used exactly this method in the infrastructure for the AlloSphere (a 3-storey spherical VR instrument with nearly total 360 stereoscopic coverage), and there are many projects in there using it. :-)
There's a part of the process here -- might be a useful source of the relevant GLSL code; the omni_render() function effectively replaces the projection matrix in the vertex shader:
https://github.com/AlloSphere-Research-Group/AlloSystem/blob/devel/alloutil/alloutil/al_OmniStereo.hpp
That is one awesome sphere!
I'm definitely going to try to implement this method in my previous rectilinear-to-equirectangular conversion patch. I'll see if I can use (but first understand -- this is not my area of expertise) any of your code. Thanks! Btw, I think there should be a tangent function in there somewhere to get the exact displacement for each vertex instead of an approximation.
I think it worked! I used your method of rotating each vertex in the scene instead of moving the camera, and now I can render everything at once (or once per eye at least). I have also added the top and bottom cameras to complete the sphere and have less distortion during the conversion to equirectangular. Does anyone know a way to transform all vertices in a scene at once? Right now I have to add a jit.gen to every object in the scene separately; it would be nice to do that all in one place.
Thanks!
Interesting. For now I've solved it by creating the faces of a cubemap inside of Max with 6 jit.gl.camera objects, then routing them via Spout/Syphon into TouchDesigner, which has a "one-click" solution to generate an equirectangular view from a vertical-cross cubemap. Tricky, but working.
It would be nice to have a "one-click" solution inside of Max too for this kind of operation, without reinventing the wheel!
Can you be explicit what the one-click solution would do?
Hi Graham,
of course:
Basically, in TD you have a so-called "Projection TOP" node with the possibility to switch to an equirectangular render mode if a cubemap is given as input.
TD's render object can output a rendered cubemap straight from the GL context instead of just a 2D textured plane; you can then process this into an equirectangular image with the Projection TOP mentioned above, record it, and inject the 360° metadata with the YouTube app to get a 360° video.
TOPs are GPU-based "Texture OPerators".
So a TOP would be something like a jit.gl.pix or jit.gl.slab I guess.
Here's a quick attempt feeding six cameras as a cubemap into jit.gl.pix.
It's possible that this could be made more efficient, but it works. If you wanted to move the viewpoint around (rather than animating the scene as I did), a chain of jit.anim.nodes on the cameras would do the job. It's also possible that it might be better to render into a single destination texture (as an unfolded cube), which might help for the texture resolution, for example (and also be handy as an intermediate format).
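For anyone curious, the heart of that conversion is only a few lines. A minimal GLSL fragment sketch (not the exact code in the patch above, and assuming the six faces are bound as a samplerCube) looks roughly like this:

// Map each pixel of the equirectangular output to a direction on the
// sphere, then sample the cubemap in that direction.
uniform samplerCube cube;          // the six camera renders, bound as a cubemap
varying vec2 uv;                   // 0..1 across the equirectangular output

void main() {
    float lon = (uv.x - 0.5) * 2.0 * 3.1415926;    // longitude, -pi .. +pi
    float lat = (uv.y - 0.5) * 3.1415926;          // latitude, -pi/2 .. +pi/2
    vec3 dir = vec3(cos(lat) * sin(lon),
                    sin(lat),
                    -cos(lat) * cos(lon));
    gl_FragColor = textureCube(cube, dir);
}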
Animating the camera:
And here's a better version, which captures the scene into a regular unfolded cubemap image (like a Finnish flag), and then processes this into an equirectangular image. It's wrapped up into a subpatcher that can be parameterized on the cube face and output resolutions, as well as the scene context name and navigation jit.anim.node name. Note that for equirectangular mode, the cubemap lens angle is slightly increased above 90 degrees, to avoid causing seams when sampling near the texture boundaries.
This is as close to one-click as I can get, I think.
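In case anyone wants to reason about how much the lens angle needs to be widened: one way (an assumption on my part, not necessarily what the subpatcher computes) is to keep a one-texel guard band on each edge of a face, which gives tan(fov/2) = tan(45°) * N / (N - 2) for a face that is N pixels wide. For N = 512 that works out to roughly 90.2°, just enough to keep the equirectangular lookup away from the texture borders.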
Graham, I know your post is a bit old, but I'm messing with 360° now and I found your patch. It is working pretty well (actually the second-to-last version), but when I tried to stream it to YouTube, the geometry seems to be distorted. I uploaded a test screen capture here: https://drive.google.com/file/d/1L959ELXvBltlA52FZHkGiNHeWMcU6KD9/view?usp=sharing
In the example below I created a sphere and, as you can see, it is weirdly distorted. It seems that it is longer than it is wide, and the top and bottom points are also distorted. Any ideas of what is happening?