Forums > Jitter

true 360° video


Jun 04 2016 | 3:49 am

I have been able to render a true 360° image by changing the camera rotation for each pixel in the image and saving the 1×1 pixel texture in a matrix using setcell. Obviously this is very slow, since I have to render the entire world again for each pixel in the image. What would be a quicker way to do this? I have been able to record 360° videos in 8K using 4 cameras (youtube link), but the resampling of the textures results in some distortion, and there is always an empty spot at the top and bottom of the image. I would also like to move the camera around ever so slightly for each pixel (or each column of pixels) to enable "true" omnidirectional stereo imaging. That will not work properly with 4 cameras, or with any number of cameras smaller than the number of pixels in a row. Does anyone have any thoughts about this?
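For reference, the per-pixel approach boils down to mapping each equirectangular pixel to a camera yaw/pitch before rendering that one pixel. A minimal sketch of that mapping in Python (the half-pixel centering and the exact angle ranges are assumptions about how the matrix is sampled, not taken from the patch):

```python
import math

def pixel_to_yaw_pitch(col, row, width, height):
    """Map an equirectangular pixel to the camera yaw/pitch (radians)
    used to render that single pixel, assuming the image spans the
    full 360 x 180 degree sphere and pixels are sampled at their centers."""
    yaw = (col + 0.5) / width * 2.0 * math.pi - math.pi     # -pi .. +pi
    pitch = math.pi / 2.0 - (row + 0.5) / height * math.pi  # +pi/2 .. -pi/2
    return yaw, pitch
```

Looping this over every pixel is exactly why the approach is so slow: one full scene render per output pixel.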

Thanks!

This is my current (slow!) true 360 patch:

— Pasted Max Patch —
Attachments:
  1. true360-test2.png

Jun 04 2016 | 3:53 am

Maybe use more than one camera that outputs a texture, and then use jit.gl.pix with a shader to stitch the 4 (or more) inputs together?
This would then run on the GPU, not the CPU.

Jun 04 2016 | 3:57 am

That is exactly what I did to create the linked youtube video. This does have the desired result for the reasons that I mentioned in my original post.

Jun 04 2016 | 4:23 am

*does not

But thanks for the reply! Any other ideas?

Jun 04 2016 | 7:44 am

Just to be clear, this is what I am going for. The cameras for the left eye (upper image) and right eye (lower image) move around a circle while rotating to capture each column of pixels. Both dodecahedrons are centered in the world, around the cameras. The purple one is just smaller and therefore closer, so it shows up on the right and left side of the green one, respectively, due to the moving positions of the cameras. This cannot be done with stationary cameras.

I hope I have been able to communicate my idea clearly now. Does anyone have an idea how to do this more quickly?
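The camera path described above can be sketched numerically: one camera pose per pixel column, sitting on a circle of radius ipd/2 and looking tangentially. This is an illustrative Python version (the ipd value, the sign convention for which eye is which, and the omission of the half-pixel column offset are all assumptions):

```python
import math

def ods_camera_for_column(col, width, ipd=0.064, eye=+1):
    """Position and view direction of the camera used to render one
    pixel column of an omnidirectional-stereo panorama.
    eye=+1 / -1 selects the eye (sign convention assumed).
    The camera sits on a circle of radius ipd/2 around the origin and
    looks perpendicular to its offset, i.e. tangentially."""
    yaw = col / width * 2.0 * math.pi          # azimuth of this column
    r = ipd / 2.0
    view = (math.sin(yaw), 0.0, -math.cos(yaw))         # unit view dir, XZ plane
    pos = (eye * r * math.cos(yaw), 0.0, eye * r * math.sin(yaw))  # eye offset
    return pos, view
```

With stationary cameras you only ever get a handful of fixed poses, which is why this per-column motion cannot be faked with 4 cameras.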

Attachments:
  1. exportleftright6.fw_.png

Jun 04 2016 | 12:21 pm

Afaik there are 3 options:

1. Render pixel by pixel as you are currently doing, which is pretty much what you can do with raytracing. Or increase the patch size to multiple pixels; the error might not be apparent in the result. Search Google Scholar for "omnistereo" for a method using cylindrical slices.
2. Render to cubemaps, where each cubemap means six renders with 90° FOV cameras. You want the cube faces to have a higher resolution than your final image and/or use multisampling to eliminate aliasing in the final output. To get omnistereo you can render with an offset to the position of each vertex in the scene, but the stereo disparity needs to peter out at the vertical poles. (And obviously this means one cubemap per eye.) It works as long as your models have plenty of vertices. After the cubes are rendered, there's a final resample pass to warp them to the 360 format, e.g. equirectangular.
3. Do something clever with the depth buffer to offset pixels – this is how cheap plugins make regular 3D games stereoscopic, and what some stitching algorithms do. It requires some heuristics to fill in missing data.
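For the final resample pass in option 2, each output pixel of the equirectangular image is turned into a 3D direction that is then used to look up the cubemap. A minimal Python sketch of that mapping, assuming u wraps longitude and v runs pole to pole (axis conventions are an assumption; they vary between setups):

```python
import math

def equirect_dir(u, v):
    """Normalized texture coords (u, v in 0..1) of the equirectangular
    output -> unit 3D direction used to sample the cubemap.
    u = 0.5, v = 0.5 is assumed to face -z here."""
    lon = (u - 0.5) * 2.0 * math.pi   # longitude, -pi .. +pi
    lat = (0.5 - v) * math.pi         # latitude, +pi/2 .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = -math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

In Jitter this per-pixel mapping is exactly the kind of thing a jit.gl.pix stage can do on the GPU.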

Jun 05 2016 | 6:43 am

Thanks! I really like your second idea, if I understand it correctly.
Do you mean that instead of moving the cameras, I would rotate each vertex in the scene around the camera, in opposite directions for each eye, with the amount depending on its (horizontal) distance to the camera? That might actually work!
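For what it's worth, the rotate-each-vertex idea can be sketched as a small azimuthal rotation whose angle shrinks with horizontal distance. An illustrative Python version (the ipd value, the small-angle approximation, and the sign convention per eye are all assumptions):

```python
import math

def ods_displace_vertex(x, y, z, ipd=0.064, eye=+1):
    """Rotate a vertex about the vertical (y) axis by the stereo
    disparity angle for its horizontal distance d from the viewer.
    Small-angle approximation: angle ~ (ipd/2) / d.
    eye=+1 / -1 selects the eye (sign convention assumed)."""
    d = math.hypot(x, z)
    if d < 1e-6:
        return x, y, z            # on the axis: no well-defined disparity
    a = eye * (ipd / 2.0) / d     # approximate rotation angle (radians)
    ca, sa = math.cos(a), math.sin(a)
    return ca * x + sa * z, y, -sa * x + ca * z
```

Because the rotation preserves each vertex's distance from the axis, nearby geometry shifts more across the two eyes than distant geometry, which is the disparity effect the moving cameras produce.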

Jun 05 2016 | 7:35 am

It does indeed work — I used exactly this method in the infrastructure for the AlloSphere (a 3-storey spherical VR instrument with nearly total 360 stereoscopic coverage), and there are many projects in there using it. :-)

There’s a part of the process here — might be a useful source of the relevant GLSL code; the omni_render() function effectively replaces the projection matrix in the vertex shader:
https://github.com/AlloSphere-Research-Group/AlloSystem/blob/devel/alloutil/alloutil/al_OmniStereo.hpp

Jun 05 2016 | 8:32 am

That is one awesome sphere!
I’m definitely going to try to implement this method in my previous rectilinear-to-equirectangular conversion patch. I’ll see if I can use (but first understand – this is not my area of expertise) any of your code. Thanks! Btw, I think there should be a tangent function in there somewhere to get the exact displacement for each vertex instead of an approximation.
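On the tangent-function point: with an eye circle of radius r = ipd/2, the viewing ray for a vertex at horizontal distance d is tangent to that circle, which gives sin(angle) = r/d, so the exact azimuthal displacement would be asin(r/d) rather than the small-angle approximation r/d. A quick comparison in Python (the ipd value is an assumption, and whether the AlloSphere code does exactly this is not verified here):

```python
import math

def exact_disparity_angle(d, ipd=0.064):
    """Exact azimuthal displacement for a vertex at horizontal distance d:
    the viewing ray is tangent to the eye circle of radius r = ipd/2,
    so sin(angle) = r / d."""
    r = ipd / 2.0
    if d <= r:
        raise ValueError("vertex inside the eye circle")
    return math.asin(r / d)

def approx_disparity_angle(d, ipd=0.064):
    """Small-angle approximation of the same displacement."""
    return (ipd / 2.0) / d
```

For any realistic scene distance the two differ only in the far decimal places, so the approximation mainly matters for geometry very close to the viewer.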

Jun 06 2016 | 1:16 pm

I think it worked! I used your method of rotating each vertex in the scene instead of moving the camera, and now I can render everything at once (or once per eye, at least). I have also added the top and bottom cameras to complete the sphere and reduce distortion during the conversion to equirectangular. Does anyone know a way to transform all vertices in a scene at once? Right now I have to add a jit.gen to every object in the scene separately; it would be nice to do that all at once.
Thanks!

Attachments:
  1. complete360stereo.jpg

Mar 17 2017 | 6:30 am

Interesting. By now I’ve solved it by creating the faces for a cubemap inside of Max with 6 jit.gl.camera objects, then routing via Spout/Syphon into TouchDesigner, which has a "one-click" solution to generate an equirectangular view from a vertical-cross cubemap. Tricky, but working. It would be nice to have a "one-click" solution inside of Max too for this kind of operation, without reinventing the wheel!

Mar 22 2017 | 10:19 am

Can you be explicit about what the one-click solution would do?

Mar 22 2017 | 10:32 am

Hi Graham,

of course:
basically in TD you have a so-called "projection TOP" node with the possibility to switch to an equirectangular render mode if a cubemap is given as input.
TD’s render object can output a rendered cubemap straight from the GL context instead of just a 2D textured plane. You can then process this into an equirectangular image with the projection TOP mentioned above, record it, and inject the video with the YouTube metadata app to get a 360 video.

Mar 22 2017 | 10:45 am

TOPs are GPU-based "Texture OPerators".

Mar 24 2017 | 9:38 pm

So a TOP would be something like a jit.gl.pix or jit.gl.slab, I guess.

Here’s a quick attempt feeding six cameras as a cubemap into jit.gl.pix.

It’s possible that this could be made more efficient, but it works. If you wanted to move the viewpoint around (rather than animating the scene, as I did), a chain of jit.anim.node objects on the cameras would do the job. It might also be better to render into a single destination texture (as an unfolded cube), which could help with texture resolution, for example, and would also be handy as an intermediate format.
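For anyone following along, the stitching step amounts to a per-pixel direction-to-cube-face lookup. A Python sketch of the usual dominant-axis rule (the face names and UV orientations follow the common OpenGL convention, but renderers differ, so treat these as assumptions):

```python
def cube_face_uv(x, y, z):
    """Pick the cubemap face and in-face (u, v) for a 3D direction:
    the largest-magnitude component selects the face, and the other
    two components become -1..1 coordinates, remapped to 0..1."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax, -y / ax) if x > 0 else (z / ax, -y / ax)
    elif ay >= az:
        face = '+y' if y > 0 else '-y'
        u, v = (x / ay, z / ay) if y > 0 else (x / ay, -z / ay)
    else:
        face = '+z' if z > 0 else '-z'
        u, v = (x / az, -y / az) if z > 0 else (-x / az, -y / az)
    return face, (u * 0.5 + 0.5, v * 0.5 + 0.5)
```

Inside jit.gl.pix this branch-and-divide is what effectively selects which of the six camera textures to sample for each output pixel.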

— Pasted Max Patch —
Mar 24 2017 | 9:46 pm

Animating the camera:

— Pasted Max Patch —