Is there an OpenGL equivalent of jit.scissors?
Like the msg says… is there an OpenGL equivalent of jit.scissors?
I want to read a video, slice it into vertical strips, and send each strip to a separate videoplane.
I’ve tried reading in my movie, then sending it to five videoplanes, using tex_scale_x set to 5, and tex_offset_x at 0.0, -0.2, -0.4, -0.6 and -0.8 to select each of the different slices.
This works – the display looks the same as when using jit.scissors… but… the framerate drops from 30fps using jit.scissors to 15fps using the tex_scale and tex_offset method.
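The offsets above follow a simple pattern. As a sanity check, here is a small Python sketch that reproduces the numbers from the post (the function name `slice_params` is mine, not a Jitter API — the per-strip values are just scale `n` and offset `-i/n` for strip `i`):

```python
def slice_params(n):
    """For n equal vertical strips, each videoplane gets the same
    tex_scale_x (n) and a per-strip tex_offset_x of -i/n."""
    return [(n, -i / n) for i in range(n)]

for scale, offset in slice_params(5):
    print(f"tex_scale_x {scale}  tex_offset_x {offset}")
```

For five strips this yields offsets 0.0, -0.2, -0.4, -0.6, -0.8, matching the values used in the patch.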
I’ve attached the patch that uses jit.scissors (v3) and the patch that uses tex_offset and tex_scale (v4).
What could be wrong? Is there a (much) better way to do this? I’m trying to squeeze out as many fps as possible.
I think td.resample.jxs (in slab-helpers – texdisplace) should do this.
The problem is that you are sending the full-scale video to the GPU many times. For each rendering context you can use a single jit.gl.texture contextname @colormode uyvy and send it to multiple videoplanes in that context. This way you transfer the frame to the GPU (expensive) once per context rather than five times, and the texture is then reused by the videoplanes (cheap).
If you are able to use shared contexts (not sure whether, on your setup, both outputs are on the same card), then you should be able to do something like jit.gl.render output2ch @shared_context output3ch, and use just one jit.gl.texture to feed all five videoplanes. This would be your fastest option if you are not using multiple graphics cards.
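The upload cost is easy to estimate with back-of-the-envelope arithmetic. A short sketch (the 720×480 resolution and 30 fps are illustrative assumptions, not figures from this thread): UYVY packs a pixel into 2 bytes versus 4 for RGBA, and uploading the frame once instead of once per videoplane cuts the transfer a further five times.

```python
def upload_mb_per_sec(width, height, fps, bytes_per_pixel, uploads_per_frame):
    """Approximate texture-upload bandwidth in MB/s."""
    return width * height * bytes_per_pixel * uploads_per_frame * fps / 1e6

# Illustrative numbers only: 720x480 video at 30 fps.
naive = upload_mb_per_sec(720, 480, 30, 4, 5)   # RGBA, one upload per videoplane: ~207 MB/s
shared = upload_mb_per_sec(720, 480, 30, 2, 1)  # UYVY, one shared jit.gl.texture: ~21 MB/s
```

Roughly a tenfold difference in bus traffic, which is consistent with the framerate gap described above.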
So does jit.gl.texture output2ch @colormode uyvy actually replace the jit.gl.slab as I use it, or does it just go in the chain before it?
OK, adding two jit.gl.textures (one per output context) before the jit.gl.slab led to an improvement, but it is still slower than the jit.scissors approach (maybe 60% of the fps).
Having the jit.gl.texture feed into a jit.gl.slab (two, one per output context), and hence to the videoplanes (five, one per output channel), got speeds back up to the same level as jit.scissors. This is *without* @colormode uyvy on the texture.
Removing the jit.gl.slab entirely, and having the jit.gl.texture @colormode uyvy feed the videoplanes led to a drop in framerate. It seems that using the cc.uyvy2rgba.lite.jxs shader in the jit.gl.slab is faster than using @colormode uyvy in the jit.gl.texture?
I think maybe I need to use td.resample.jxs to cut out the slices before sending them to the videoplane…
Yes, definitely: on Windows, cc.uyvy2rgba.lite.jxs will be much faster. I didn't examine your patch heavily. Just use one jit.gl.slab for each context (or one total if you're sharing contexts), and send the output texture to the respective videoplanes. I doubt that you will get much of a boost from td.resample.jxs.
Thanks, everything is working well now. Sending the output of the jit.qt.movie directly to 2 jit.gl.slabs, and from there to the 5 videoplanes, seems to work well. The jit.gl.texture seems to be unnecessary, though adding it to the chain makes little difference.
I know this post is old, but I just came across it as I’m experimenting with jit.gl.slab.
gpvillamil, can you quickly explain how you get from the 2 jit.gl.slabs to the 5 videoplanes, and how you use td.resample.jxs? I don't seem to get it…
I’d like to second Alain’s request here. Could someone demonstrate a simple slicing of an image evenly into several planes with jit.gl.slab?
Go to the Cycling '74 examples and check out the resample example. Play around and you'll have it figured out in no time!
Other than fixing your patching problems (pak is an object, not a message), you need to enable alpha-blending on your jit.gl.videoplanes (@depth_enable 0, @blend_enable 1).
----------begin_max5_patcher---------- 1625.3oc6bs0biZCF8Y6eELL8wTOnq.8osyz+Ec5jQ1H6nTtsfxk1c1+6EIg ioYCXgiQ3MNylwNbIvQG9Ne2jX+1xE9qKdlW668ad+o2hEea4hE5co1wh1sW 3mwddSJqVeZ947mJVeu+MlCI4OK06Nsfkjwqq8BVQ1evsE4xZw+xUm..rhDS .QgsGq3AYJWJ+mRt4l6668WsGJ+gLQdyA02OP6NKYxM2Ix2caEeiz7mP.AqB twCo+Lhp9DDuJ3kqiHQirFz9qjvtXJmkouq9+dkfk5e31Z.k49p142WtT8wM mMpA5JtACsmantfa1lVzbQ5YzGivgHPei9l+Tlz+FO+0r7cimJnAHMUfIpuH ZFQwNuEWfhNNWTxpZ1ujWcKOmsNUeFAuIOA+ohmHDn87T7ULOE.rlmvAyMO0 iOoR1e6kTKSDYMdkT+.T+bRzYOLHY.mSzPMCFpIR.bXJD7w18D1XAYm4D75U 1ghs2MNldEySX6ciiCud4IXr8twIWxtwqq1LOtwgDh8twIevciCw16Fm.uTM mp3k77DO8M+bZEMj+p1JUhi0kpDOHwg9XaDEAzw3.A1jB9Uru6PJzZdBbMyS .f87Tzmkp7FMOHTWpB.YSLNP7G7NoPCr1bBFb8J6Z63jcxNzUeGmrimHW8cb xNdB+YoJuUGmHivMN7pniS1YNA9rTkC5Pyzo.HVTpBD6BiHlTV8fvQyoDMvX p.Z+JZnYUBO9bgZ2qZLYdDWTKjhhb+KdhAnURXif5XDSzrRL8HmtWHWsKc0i hDdQYJKm6sk48EYEKudaQUVyfslK8fdeoQemmzJz8.deIgWJua+1AmDg2buu MiIqDOq7oMd5GDErhzP+gsMRXX5ez5R3YmkqSYq076VQCoISV0vtrrxT9p6e t9jovco2ptQOTwGfFgGmFQZ+ZpsFhFItfF2Tjkwyk+.OVKq3M32i0D.nINPq bvqc36Ix8f+gM731pcq0QqVELVaNijGYlYcSnf93Jzn4pfyHWMf40V1Fd2wY WpBhVAhCPg3yEUQ5XUgf8vTXWvTCHNeRjmT7Ti17jTg5zwNMWXFCHHV8UHdH qIX7L5.qrRbvDaHtw9bJBUNcfTsSaDZvQdzLac7yZ.x8IlDQrH.IL7y.jCSi lY54HAHgTWPipkUFaG+G3wcbotvnTQs7blD7PrCLVqjCfc7gE1C4Pl8JrGd3 +dmCdnVhgCOTfce1IfYuOo8YCUwjbueA3HymVJiFef35y5wIcYuOVoVVT5HJ AFiOPIMoLQ5mRhlSJohyRbEkP5XkfFzGybtheUgw9pbUVwiBtZBtUwLlmRiC MEnX5bCnc5.6wUDc9C7W0j6DuRG5unRdWgGbdRDukuPvAW3zyHeI8V27OdEq l+tXn8eatRmHcQ0os21h89ZXvXIKzjFwuY3IxZNUcEsjoZdan6cPox5dvVDb g1k8u1bGqdWpvw2FJSF.ZAHYvWbAmLMMxhc6ReQi8CiQU4vmpSlv8yev.S+x 3e52cPpuj9oh7W+J9nAkZ+++QdcwCUa1Oz1+dy3c.YI7ZoHmo687gSR8x.z4 jtSjjvy6BvDQsByI8WR9YEO3P2gGpE3Qsb+tjvC9BCOfKrmWpkZfiviZxfN9 yKr6vC4BCOwVfGn6refzKP7.NFdhNBdxDIkEMQxZCRPBIldxpy9k1Fmh1sm8 mQEIxFKN2oHQVoHahK6gcCdhsEOH2nHCrEOP2fGns3A3F7.rIi.z3Tj3lx3I pZHLMv2zoU8VSfhDaUNMtSAfCsEOtQAPBrEOtQAPf1hG2n.HSgB.RwtSA.w1 Dkm5tXR1n.TK9bGoHAViG2nHAQ1hG2nHgA1hG2nHAVkEMdbJRZjNlDv7R.RP wur0DnHA1jknZc65JE.wV73HE.1V73FE..XKdbjB.NAJ.bH1cJ.ajvzwgeDR 
u1aZW7ZQj8aLAnOZBPe.RA3XXGzGOM0nFd9QOjp49XRWzOMVNzKqBroWV4Vg snAJi7YKAneZF08QazzTplMYZ6TpD9NQyqnRiGVy+SM.i1+6S.SZS9EtKbEj bNDIuNbkYJHgP8jpgAZpUu0TTBkMNcblkIvlDxc2rXArzX6TPiYJ+XkkOxqp aujZf3mwtuP+9iPuYoYFvMapmnR+J9ih8mORuGV0l6DR9F8BlTMgjOGQ8Wpt Oee4+AfKLHpH -----------end_max5_patcher-----------