Is there an OpenGL equivalent of jit.scissors?

Mar 12, 2010 at 2:02am


Like the msg says… is there an OpenGL equivalent of jit.scissors?

I want to read a video, slice it into vertical strips, and send each strip to a separate videoplane.

Mar 12, 2010 at 3:24am

I’ve tried reading in my movie, then sending it to five videoplanes, using tex_scale_x set to 5, and tex_offset_x at 0.0, -0.2, -0.4, -0.6 and -0.8 to select each of the different slices.
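The offsets above follow a simple rule: for N vertical strips, set tex_scale_x to N and give strip i a tex_offset_x of -i/N. A quick sketch of that arithmetic (plain Python for illustration, not Max code):

```python
def strip_transforms(n_strips):
    """Texture transforms for slicing a texture into n vertical strips.

    Each videoplane shows 1/n of the texture: scale the texture
    coordinates up by n, then shift strip i left by i/n.
    """
    scale_x = float(n_strips)
    offsets = [-i / n_strips for i in range(n_strips)]
    return scale_x, offsets

scale, offsets = strip_transforms(5)
# scale == 5.0, offsets == [0.0, -0.2, -0.4, -0.6, -0.8]
```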

This works – the display looks the same as when using jit.scissors… but… the framerate drops from 30fps using jit.scissors to 15fps using the tex_scale and tex_offset method.

I’ve attached the patch that uses jit.scissors (v3) and the patch that uses tex_offset and tex_scale (v4).

What could be wrong? Is there a (much) better way to do this? I’m trying to squeeze out as many fps as possible.

Mar 12, 2010 at 4:51pm

I think td.resample.jxs (in slab helpers – texdisplace) should do this.

Mar 12, 2010 at 9:40pm

The problem is that you are sending the full-scale video to the GPU many times. For each rendering context you can use a single jit.gl.texture contextname @colormode uyvy to feed all the videoplanes in that context. That way you transfer the memory once per context (expensive) rather than five times, and the texture is simply reused by the videoplanes (cheap).
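The bandwidth argument is easy to quantify. A rough sketch (plain Python; the 720x480 frame size is an example, not from the thread):

```python
def upload_bytes(width, height, bytes_per_pixel, uploads_per_frame):
    """Host-to-GPU traffic per frame for a given upload strategy."""
    return width * height * bytes_per_pixel * uploads_per_frame

w, h = 720, 480  # illustrative frame size
# RGBA matrix sent separately to 5 videoplanes:
naive = upload_bytes(w, h, 4, 5)
# one uyvy texture per context (uyvy packs 2 bytes/pixel), uploaded once:
shared = upload_bytes(w, h, 2, 1)
print(naive // shared)  # 10x less data across the bus
```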

If you are able to use shared contexts (I'm not sure whether, on your setup, both are on the same card), then you should be able to do something like output2ch @shared_context output3ch, and use just one texture to feed all five videoplanes. This would be your fastest option if you are not using multiple graphics cards.

Mar 12, 2010 at 10:50pm

Great, thanks!

So does output2ch @colormode uyvy actually replace the object I'm currently using, or does it just go in the chain before it?

Mar 12, 2010 at 11:37pm

OK, adding two jit.gl.texture objects (one per output context) before the videoplanes led to an improvement, but it is still slower than the jit.scissors approach (maybe 60% of the fps).

Having the movie feed into a jit.gl.slab (two, one per output context), and from there to the videoplanes (five, one per output channel), got speeds back up to the same level as jit.scissors. This is *without* @colormode uyvy on the texture.

Removing the slab entirely, and having the jit.gl.texture @colormode uyvy feed the videoplanes, led to a drop in framerate. It seems that using the cc.uyvy2rgba.lite.jxs shader in the slab is faster than using @colormode uyvy in the texture.

I think maybe I need to use td.resample.jxs to cut out the slices before sending them to the videoplane…

Mar 13, 2010 at 3:39am

Yes, definitely on Windows, cc.uyvy2rgba.lite.jxs will be much faster. I didn't examine your patch heavily. Just use one slab for each context (or one total if you're sharing), and send the output texture to the respective videoplanes. I doubt that you will get much of a boost from td.resample.jxs.
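For background on what the shader route does: cc.uyvy2rgba.lite.jxs decodes the packed uyvy chroma on the GPU. Per macropixel the math is roughly the following (a Python sketch; the constants are standard BT.601-style coefficients for illustration — the actual shader may use different ones):

```python
def uyvy_macropixel_to_rgba(u, y0, v, y1):
    """Decode one UYVY macropixel (two pixels sharing chroma) to two RGBA tuples.

    Full-range BT.601-style conversion, for illustration only.
    """
    def yuv_to_rgb(y, cb, cr):
        d, e = cb - 128, cr - 128
        r = y + 1.402 * e
        g = y - 0.344136 * d - 0.714136 * e
        b = y + 1.772 * d

        def clamp(x):
            return max(0, min(255, round(x)))

        return clamp(r), clamp(g), clamp(b), 255

    return yuv_to_rgb(y0, u, v), yuv_to_rgb(y1, u, v)

# Neutral chroma (128) leaves the luma values as grey levels:
print(uyvy_macropixel_to_rgba(128, 0, 128, 255))
# ((0, 0, 0, 255), (255, 255, 255, 255))
```

The point is that uyvy halves the upload size (2 bytes/pixel instead of 4) and this per-pixel decode is trivially cheap on the GPU.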

Mar 13, 2010 at 3:24pm

Thanks, everything is working well now. Sending the output of the movie directly to 2 slabs, and from there to the 5 videoplanes, works well. The jit.gl.texture seems to be unnecessary, though adding it to the chain makes little difference.

Jun 10, 2011 at 9:35am


I know this post is old, but I just came across it as I'm experimenting with the same setup.

gpvillamil, can you quickly explain how you get the 2 slabs to the 5 videoplanes and use td.resample.jxs? I don't seem to get it…


Sep 29, 2013 at 3:00am

I’d like to second Alain’s request here. Could someone demonstrate a simple, even slicing of an image into several planes with td.resample.jxs?

Sep 29, 2013 at 5:53am

Go to the Cycling '74 examples and check out the resample example. Play around and you’ll have it figured out in no time!

Sep 29, 2013 at 8:59am

Well, I seem to be able to resize alright – the thing I don’t understand is how to get the two planes to merge as one, though. Is there another example I’m missing?

  1. resample.maxpat
Oct 1, 2013 at 1:56pm

Other than fixing your patching problems (pak is an object, not a message), you need to enable alpha blending on your gl.videoplanes (@depth_enable 0, @blend_enable 1).
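With depth testing off and blending on, overlapping planes composite with the standard "over" equation, out = src·a + dst·(1−a). A minimal sketch of that math (plain Python, components in 0..1):

```python
def blend_over(src, dst):
    """Standard alpha blending: out = src*a + dst*(1-a).

    src and dst are (r, g, b, a) tuples; this mirrors the
    GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend function.
    """
    a = src[3]
    rgb = tuple(s * a + d * (1 - a) for s, d in zip(src[:3], dst[:3]))
    return rgb + (1.0,)

# A half-transparent white plane over a black background:
print(blend_over((1.0, 1.0, 1.0, 0.5), (0.0, 0.0, 0.0, 1.0)))
# (0.5, 0.5, 0.5, 1.0)
```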




