frame resampling/interpolation

Roman Thilenius's icon


what are your personal ways to up- and downsample the framerate in moving images?

say like, you produce in 100fps but want to export in 24fps.

sure, the best way would be to use an external conversion tool (such as quicktime and its successors), but there might be reasons why you'd want to do it inside max.

Matteo Marson's icon

To downsample, the best way is always to have a target fps that is an integer divisor of the original rate (e.g. 100fps to 25fps). When this is not possible, there are some approaches you can consider:
1) drop some frames without caring much. Sometimes, this works well and nobody can see the difference
2) use linear interpolation of the color values. The interpolation factor "i" in [0;1] is the fractional part of the output frame's position on the source timeline:
i = ( output_frame_number * nativeFPS / targetFPS ) % 1
(e.g. going from 100fps to 24fps, output frame 1 sits at source frame 100/24 ≈ 4.17, so i ≈ 0.17). Then, to interpolate the frames:
final_color = (1 - i)*current_frame + i*next_frame
This works well for fairly static scenes. Sometimes, dropping frames results in more transparent downsampling (IMO): when the target rate is close to, but not an exact integer divisor of, the original rate, blending produces a sort of "periodic blurring" that increases and decreases over the span of a second. Besides, to blend the frames correctly, you should move the colors to linear color space, compute the blend, and transform the result back to sRGB:
linear_color = (1 - i)*pow(current_frame, 2.2) + i*pow(next_frame, 2.2)
final_color = pow(linear_color, 1/2.2)
3) If you feel adventurous, you can consider combining the color interpolation with a displacement of the in-between frames. You can estimate the optical flow of your original video (using the jit.cv objects) and use the same value "i" to guide the amount of displacement to apply. This method works great but requires a bit of preparation (see the sketches after this list).
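For reference, here's a minimal Python/numpy sketch of approach 2 (this isn't Jitter code; the read_frame helper, float RGB frames in [0,1], and the pure power-law gamma are assumptions):

import numpy as np

def resample_frame(read_frame, out_index, native_fps=100.0, target_fps=24.0):
    # position of the output frame on the source timeline, in source frames
    pos = out_index * native_fps / target_fps
    base = int(pos)       # index of the "current" source frame
    i = pos - base        # interpolation factor in [0;1)
    cur = read_frame(base)          # hypothetical helper returning a float RGB array
    nxt = read_frame(base + 1)
    # blend in (approximately) linear light, then go back to sRGB
    linear = (1.0 - i) * np.power(cur, 2.2) + i * np.power(nxt, 2.2)
    return np.power(linear, 1.0 / 2.2)

And a rough OpenCV sketch of the flow-displacement idea from approach 3 (the Farneback parameters here are generic defaults, not a recommendation):

import cv2
import numpy as np

def flow_interpolate(cur, nxt, i):
    # dense optical flow; per OpenCV's convention, cur(p) ~ nxt(p + flow(p))
    g0 = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # sample "nxt" a fraction (1 - i) along the flow: i=0 gives cur, i=1 gives nxt
    map_x = xs + (1.0 - i) * flow[..., 0]
    map_y = ys + (1.0 - i) * flow[..., 1]
    return cv2.remap(nxt, map_x, map_y, cv2.INTER_LINEAR)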

About upsampling, I'd say that the only convincing approach is the third one, as the first simply makes no sense and the second is usually quite noticeable.
That said, I prefer to use external video editors for the job :)

Roman Thilenius's icon


i am somewhat familiar with the options for how to calculate these things, but until now i was only using basic jitter objects and did it all by myself - and only with factors of 2 (which is why i mentioned the specific 100/24 example)

but aren't there perhaps ready-made solutions (aka compiled externals) i am missing? i am familiar with maybe 15% of the current jitter externals.

also, until now i did not need it to work in realtime, so i based it more or less around a multistage shift register of jit.matrix objects (7 or 9 stages for 100:25 and so on).
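in python terms the idea is roughly this (not a jitter patch; "frames" is assumed to be an iterable of float numpy arrays):

from collections import deque
import numpy as np

def shift_register_downsample(frames, stages=4):
    # fixed-length shift register of frames, emitting one averaged frame
    # per "stages" input frames
    register = deque(maxlen=stages)
    for n, frame in enumerate(frames, start=1):
        register.append(frame)
        if n % stages == 0:
            yield np.mean(register, axis=0)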

3)
there is a scenario where i was thinking in that direction: it would be great to detect when a frame is overwhelmingly different from the last one (a "scene cut") and then use a different interpolation method, and maybe even move that moment of transition to the closest new frame. this is something one might consider with regard to later compression.

Matteo Marson's icon

I'm not aware of any external that can take care of rate conversion, but if you can do it in non-real-time, I suggest using the [shell] object and ffmpeg to change the fps.
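For example, ffmpeg's fps, framerate, and minterpolate filters correspond roughly to the three approaches above (exact options may depend on your ffmpeg build):

# 1) plain frame dropping/duplication
ffmpeg -i input.mov -vf fps=24 output.mov
# 2) frame blending
ffmpeg -i input.mov -vf framerate=fps=24 output.mov
# 3) motion-compensated interpolation (slow)
ffmpeg -i input.mov -vf "minterpolate=fps=24:mi_mode=mci" output.mov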

About scene cuts, that's something worth considering. One possible way to catch them is to compute the difference between consecutive frames (maybe blurring the images first?), sum the values of all the pixels, and test the result against a fixed threshold.
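In Python/OpenCV terms, that test would look roughly like this (the blur size and threshold are placeholders to tune; frames are assumed to be uint8 BGR arrays):

import cv2
import numpy as np

def is_scene_cut(prev, cur, threshold=30.0):
    # blur first so noise and grain don't trip the detector
    a = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (9, 9), 0)
    b = cv2.GaussianBlur(cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY), (9, 9), 0)
    # mean absolute pixel difference against a fixed threshold
    return float(np.mean(cv2.absdiff(a, b))) > threshold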