Matrox TripleHead2Go HELP with Video Scaling, Sizes etc.
I have a database of videos rendered at 720×576, and I’m trying to upscale them to the Matrox TripleHead2Go three-screen resolution of 3840×1024.
I’m wondering what the best way to do this is. Is there a scaling setting in Jitter that would minimise the stretching and loss of aspect ratio? And what would be the optimum resolution to render images at to maintain the aspect ratio when going to the TripleHead’s full res?
Any help would be greatly appreciated.
That is a huge upscale… and the aspect ratio doesn’t exactly match either.
If you scale up 720 x 576 to 1280 x 1024, that’s the same aspect ratio (and one of your 3 screens).
So either you have black bars on the sides (each one screen wide), or you lose a third of your video at the top and another third at the bottom; and then the part you see on any one screen comes from a source region of only 240 x 192 pixels, which is tiny.
So my answer, in short, would be to re-render the videos. Or play 3 copies of the video on the screens.
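To make that arithmetic concrete, here’s a quick sanity check in plain Python (the `fit`/`fill` helpers are just illustrative, not anything Jitter or the TripleHead provides):

```python
def fit(src_w, src_h, dst_w, dst_h):
    """Scale source to fit inside destination, preserving aspect (letterbox/pillarbox)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

def fill(src_w, src_h, dst_w, dst_h):
    """Scale source to cover the destination, preserving aspect (crop the overflow)."""
    scale = max(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

print(fit(720, 576, 3840, 1024))   # (1280, 1024): fills one screen, bars either side
print(fill(720, 576, 3840, 1024))  # (3840, 3072): full width, 2/3 cropped top+bottom
```

The `fill` result is where the cropping figure comes from: only 1024 of the 3072 scaled lines are visible, i.e. the middle third of the source.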
OK, so it’s not an option to have three separate copies, one per screen; it needs to be one video across the three screens. It’s interactive and follows the viewer from side to side.
For the sake of playback speed, would rendering my videos at 1920×512 and then upscaling to 3840×1024 yield better results?
My main concern is maintaining the aspect ratio (otherwise the image looks warped) while still keeping a good frame rate.
thanks for your help
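For what it’s worth, 1920×512 and 3840×1024 do share the same aspect ratio, so that upscale wouldn’t warp the image; a two-line check (plain Python, just for the arithmetic):

```python
from math import gcd

def aspect(w, h):
    """Reduce a width/height pair to its simplest ratio."""
    g = gcd(w, h)
    return (w // g, h // g)

print(aspect(1920, 512), aspect(3840, 1024))  # (15, 4) (15, 4) -- identical ratios
```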
For playback speed, it really depends what kind of machine you’re using to play back the videos.
For example, I just finished a theatre project using a DualHead2Go to achieve edge blending. The source videos were 1792 by 768. I couldn’t get my laptop to play them back at all, not even at a very low frame rate. What’s more, I couldn’t get any of the brand-new iMacs and Mac Pros in my uni media lab to play them back either.
In the end I halved the resolution to 896 by 384. The videos then played, but whenever I attempted to fade from one into another, playback became jittery. I also had to render the edge blend directly onto the videos as processing it on-the-fly slowed playback down to a crawl.
Admittedly, this whole project used QLab rather than Jitter, but I’d imagine video-playback performance would be similar.
My advice would be to experiment a lot with various resolutions; I’d worry that if you upscale your videos to something massive beforehand, you might have difficulty playing them back at all.
If you can re-render the videos, I’d pick a resolution with the same aspect ratio as your output. An iMac can play a full-HD video (1920 x 1080) with the right settings, but I don’t have much experience with upscaling. Fading is another story too (I’d do that on the GPU). If things get too heavy, maybe it’s an option to drop to a lower output resolution altogether, like 3 x 800 x 600…
That is a pretty huge upscale: if it needs to span the three monitors and you want to maintain the aspect ratio, you’ll be doing a lot of cropping. The TripleHead2Go is connected to three 4:3 monitors, so the overall aspect ratio of the three screens is 12:3, i.e. 4:1. If you crop your 720×576 video to fit that, you end up with 720×180, which is… pretty low. So you might as well use the TripleHead’s lowest resolution.
Or, you probably want to re-render at something higher.
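The 720×180 figure falls out of a simple crop-to-aspect calculation; a small sketch in Python (the helper is hypothetical, just to show the maths):

```python
def crop_to_aspect(src_w, src_h, aspect_w, aspect_h):
    """Largest centred crop of the source that matches the target aspect ratio."""
    target = aspect_w / aspect_h
    if src_w / src_h > target:
        # Source is wider than the target ratio: trim the sides.
        return round(src_h * target), src_h
    # Source is taller than the target ratio: trim top and bottom.
    return src_w, round(src_w / target)

print(crop_to_aspect(720, 576, 12, 3))  # (720, 180)
```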
As for playback speed, I’ve found Vade’s Jitter optimizations invaluable.
Using these techniques, I’ve easily managed HD playback from a laptop, and massive multi-channel HD from a Mac Pro.
Bottlenecks are likely to be a) codecs and b) disk speed, and these are related. Highly compressed files (e.g. H.264) are slower to decode, whereas less compressed files (e.g. ProRes) are faster to decode but much larger, and hence require a faster disk drive.
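A quick back-of-envelope way to check the disk side of that tradeoff (the 150 Mbit/s figure below is only an illustrative ballpark for a ProRes-class stream, not a measured number):

```python
def disk_mb_per_s(bitrate_mbps, streams=1):
    """Sustained read throughput (MB/s) needed for N streams at a given bitrate."""
    return bitrate_mbps / 8 * streams

# e.g. three simultaneous ~150 Mbit/s streams:
print(disk_mb_per_s(150, streams=3))  # 56.25 MB/s sustained read
```

If that number exceeds what your drive can sustain, playback stutters no matter how fast the decode is.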
WOW, some really great feedback here. So if I render my videos at 2400×600 and run the TripleHead2Go at its lowest res, 800×600 per screen, the upscale will be smaller, right?
Maybe VisiCrop will help you.