Manipulating VR source video without changing the dimensions
Hi all,
I have a simple VR camera that produces 2800 x 1800 VR files.
I can upload these directly to YouTube, but I don't want to do that.
I want to open the file within Jitter, jump to specific frames, and then change which area of the image is the "front" for the viewer.
Right now I can open the file within Jitter and jump to specific frames.
However, when I want to change what is in front of the viewer, I have only been able to do that by mapping the file onto a sphere, using a jit.gl.texture and jit.window, and then pointing a camera at it. That camera will only go to about 90 degrees wide. The result is that when I export the video to a file for YouTube using jit.gl.asyncread, the image is just a fraction of the 2800 x 1800 original (it's been cropped), so when I view it in Cardboard the resolution is far too low.
I believe that to keep the full resolution when I take it on to YouTube, I need a camera with a 360-degree lens angle and then to spin the sphere containing the video image, but the Jitter camera seems to stop working (the image goes grey) at around 90 degrees.
Does this make sense? Or am I approaching this wrong? How can I manipulate the video file and export it to a new video file without losing the full resolution?
Thanks in advance for any and all help.
Robert
I have part of my answer from Tutorial 32.
If you post a cleaned-up patch showing your attempts and indicating what's not working, we may be able to help better.
Thanks Rob. I have made progress in other areas with the patch, but could still use help with the 360 export. I will upload a patch either this evening or tomorrow evening at the latest.
Hi All.
I am trying to create an engine that will create output that I can view through Cardboard as a 360-VR video.
Ideally, I could stream it live from my Jitter patch. Minimally, I'd like to save it from the patch in the proper 360 format, so I can play it back from the saved 360 file.
For my particular (and peculiar) use, I need the following:
- Ability to perform jumps to random frames at a search-and-playback rate of 30 fps on a fast MacBook Pro.
- Ability to perform horizontal and/or vertical pans within the 360 stream according to incoming offset numbers, in real time (so that if you held your head still, the 360 image would pan according to the input offset numbers).
I have attached a patch that can read in the 360 video, read frame jump numbers (from an outside patch not included here),
and output the video. The jit.gl.handle object should work to rotate the video, though it is not presently working, and I would rather move the position within the frame numerically than with a mouse anyway.
What I don't know how to do:
- Read a 360 video into jit.movie in real time, perform a frame-number jump and a pan, and output the video (for viewing and/or recording) while retaining the 360-degree format and as high a frame rate as possible.
Thanks in advance for your help!
Robert
The current patch is here:
I'll upload an update in about an hour.
I've updated my patch. You can use it to load a 360-degree VR file.
It maps that file to the inside of a sphere.
You can then move your viewpoint inside the sphere and rotate the sphere with your mouse.
The questions I have now are specific:
1. How do I ensure a perfect mapping of the file to the inside of the sphere?
2. How do I get an output (for viewing or recording) that will provide the same VR projection mapping that the original video has?
Thanks in advance for any suggestions.
I anticipate that others may be interested in a working VR engine.
I've included the compressed patch below.
Since even a short 360 video file is too large to attach, here is a link: https://vimeo.com/205710951
You can use it to download a short 360-video file to test this patch.
Best,
Robert
There's no need to muck around with mapping onto a sphere and rendering in 3D -- everything you need to do can be done in image-space operations.
Here's a patcher demonstrating how to apply rotations to cylindrical 360 VR footage. The trick is to convert each texel coordinate into a 3D unit vector, rotate that vector as desired, then convert it back to a cylindrical texel coordinate to read from the source image.
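In GLSL terms, the round trip looks roughly like this. It's a sketch rather than the exact shader in the attached patcher: the uniforms tex0, yaw, and pitch and the varying texcoord0 are placeholder names to adapt to your own jit.gl.slab / .jxs wrapper, and note that Jitter slabs often use sampler2DRect with pixel coordinates rather than the normalized coordinates assumed here.

// Sketch of an image-space rotation shader for equirectangular (360) frames.
// Assumed names: tex0, texcoord0, yaw, pitch -- adapt to your own wrapper.
uniform sampler2D tex0;   // source 360 frame
uniform float yaw;        // rotation about the vertical axis, in radians
uniform float pitch;      // rotation about the horizontal axis, in radians
varying vec2 texcoord0;   // normalized 0..1 coordinates

const float PI = 3.141592653589793;

void main() {
    // 1. Texel coordinate -> longitude/latitude.
    float lon = (texcoord0.x * 2.0 - 1.0) * PI;   // -pi .. pi
    float lat = (texcoord0.y - 0.5) * PI;         // -pi/2 .. pi/2

    // 2. Longitude/latitude -> 3D unit vector.
    vec3 dir = vec3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon));

    // 3. Rotate the vector: yaw about Y, then pitch about X.
    float cy = cos(yaw), sy = sin(yaw);
    dir = vec3(cy * dir.x + sy * dir.z, dir.y, cy * dir.z - sy * dir.x);
    float cp = cos(pitch), sp = sin(pitch);
    dir = vec3(dir.x, cp * dir.y - sp * dir.z, cp * dir.z + sp * dir.y);

    // 4. Back to an equirectangular texel coordinate and sample.
    vec2 uv = vec2(atan(dir.x, dir.z) / (2.0 * PI) + 0.5,
                   asin(clamp(dir.y, -1.0, 1.0)) / PI + 0.5);

    gl_FragColor = texture2D(tex0, uv);
}

Because every output pixel just re-samples the source, a 2800 x 1800 input comes out at 2800 x 1800 -- nothing is cropped -- and a pure yaw pan reduces to a simple horizontal wrap of the image.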
Graham
Ah, thanks Graham. I'm trying this out right now.
Looks like a much more elegant approach than I was using.
Robert