Forums > Jitter

scaling framddump

March 13, 2013 | 8:59 am

Hey folks,
I am trying to create a batch processor to increase the length of a video using framedump. I have an 11.2-second video (6720 via getduration) and I want it to be 15 seconds (presumably 9000 in QT time: 6720/600 = 11.2, so 15*600 = 9000). To output the new video at the right length, I need to produce a new FPS, which is passed via my write statement. I thought that since 6720/224 = 30, 9000/224 should give me my new FPS, but no dice. Totally stuck on this one; I appreciate any insight.
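For concreteness, here's the arithmetic I'm working from, sketched in Python (assuming the clip has 224 frames, per 6720/224 above, and QuickTime's default timescale of 600 units per second). Note that 6720/224 = 30 comes out in time units per frame, not FPS:

```python
TIMESCALE = 600          # QuickTime's default time units per second
duration_units = 6720    # reported by getduration
frames = 224             # frame count (6720 / 224 = 30 units per frame)

duration_sec = duration_units / TIMESCALE   # 11.2 s
units_per_frame = duration_units / frames   # 30.0 -- time units, not FPS
orig_fps = frames / duration_sec            # 20.0 fps (= TIMESCALE / 30)

target_sec = 15.0
new_fps = frames / target_sec               # ~14.93 fps to stretch to 15 s
```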


March 14, 2013 | 12:41 am

i don’t know whether this is of any help, and it’s less a solution than a workaround, but what if you just read your frames into a matrix or matrixset and trigger that with a counter, so you can control the speed just the way you want…?
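the idea, roughly, in illustrative python rather than a max patch (the class and names here are just a sketch of the store-then-index approach, not jitter objects):

```python
class FramePlayer:
    """Counter-driven playback over stored frames (the matrixset idea)."""

    def __init__(self, frames, step=1.0):
        self.frames = frames  # frames read into storage once, up front
        self.pos = 0.0        # fractional playback position (the counter)
        self.step = step      # step < 1 slows playback, e.g. 11.2/15

    def next_frame(self):
        # clamp so the counter can't run past the last stored frame
        frame = self.frames[min(int(self.pos), len(self.frames) - 1)]
        self.pos += self.step
        return frame
```

with step = 0.5 each stored frame is emitted twice, so the clip plays at half speed without touching the frame data itself.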
k


March 14, 2013 | 1:13 am

Karl,
Thanks for the reply. I tried something like this; the problem I run into is that the quality suffers. Framedump (NOT framddump!) gives far better results for rendering video. I’ll try again and see if maybe I can optimize it.
topher


March 14, 2013 | 9:47 am

what do you mean in terms of quality? the image data should be just the same; try setting your matrix to @adapt 1.
does quicktime perform something like frame blending when retiming? you could do that as well with a matrix, but preferably on the gpu.
if you want to lower the framerate a lot, you will need some sort of advanced interpolation beyond frame blending, and i’d guess when it comes to that you’re way more flexible with a matrix anyway.
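frame blending itself is just a weighted mix of the two source frames an output frame falls between. a sketch in python (flat lists of pixel values stand in for matrices; none of this is jitter syntax):

```python
def blend(frame_a, frame_b, t):
    """Mix two frames; t=0 gives frame_a, t=1 gives frame_b."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def retimed_frame(src, src_fps, out_fps, n):
    """Blended source frame for output frame n when playing at out_fps."""
    pos = n * src_fps / out_fps        # fractional source position
    i = int(pos)                       # frame before the position
    t = pos - i                        # blend weight between the two
    j = min(i + 1, len(src) - 1)       # frame after, clamped at the end
    return blend(src[i], src[j], t)
```

on the gpu you'd do the same mix per pixel in a shader instead of per list element.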
k

