I have a simple VR camera that produces 2800 x 1800 VR files.
I can upload these directly to YouTube, but I don't want to do that.
I want to open the file within Jitter, jump to specific frames, then change the area that is the "front" of the viewer.
Right now I can open the file within Jitter and jump to specific frames.
However, the only way I've found to change what is in front of the viewer is to map the file onto a sphere (via jit.gl.texture and a jit.window) and then point a jit.gl.camera at it. The camera's lens angle only goes to about 90 degrees wide. The result is that when I export the video with jit.gl.asyncread to a file for YouTube, the image is just a fraction of the 2800 x 1800 original (it's been cropped), so when I view it using Cardboard, the resolution is far too low.
I believe that to keep the resolution when I take it on to YouTube, I need a camera with a 360-degree "lens angle", and then I can spin the sphere containing the video image; but the jit.gl.camera seems to stop working (the image goes grey) at around 90 degrees.
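For the yaw part at least, I suspect what I'm after doesn't need a sphere or a camera at all: if the file is equirectangular (longitude mapped linearly to the x axis — an assumption, since I'm not sure of my camera's exact projection), then changing the viewer's "front" is just a horizontal pixel shift with wraparound, which loses no resolution. A rough pure-Python sketch of the idea (in a patch this might correspond to a matrix shift, e.g. jit.rota with boundmode set to wrap, though I haven't verified that):

```python
def yaw_equirect(frame, yaw_degrees):
    """Rotate an equirectangular frame around the vertical (yaw) axis.

    frame: a list of rows, each row a list of pixels (full-resolution).
    Shifting every row horizontally by yaw/360 of the width, with
    wraparound, re-centers the view without any cropping or resampling.
    """
    w = len(frame[0])
    shift = round(yaw_degrees / 360.0 * w) % w
    # slice-and-concatenate = horizontal roll with wraparound
    return [row[shift:] + row[:shift] for row in frame]

# e.g. an 8-pixel-wide test frame turned 90 degrees shifts by 2 columns
frame = [list(range(8)) for _ in range(4)]
rotated = yaw_equirect(frame, 90)
```

Pitch and roll would still need a real spherical reprojection, but if yaw is all I need, this kind of shift would keep the full 2800 x 1800.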
Does this make sense, or am I approaching this wrong? How can I manipulate the video file and export it to a new video file without losing the full resolution?
Thanks in advance for any and all help.