jit.gl.slab with greyscale images
I’m trying to optimise a complicated patch that uses lots of GPU based processes. One part of the patch deals with keystone correction of an incoming image from a grab object. I’m using the td.repos.jxs shader for this, and it works nicely, but I’ve just realised that I’m doing a lot of unnecessary processing here, as I’m doing all the calculations in full RGBA space, and yet the camera image is greyscale only. What I’m wondering is…
Is there a way to pass a greyscale image to a jit.gl.slab object? I am happy to write a customised shader to perform the repos. operation, but don’t want to embark on this if getting a greyscale image into the slab is going to be too difficult. Does anyone have any experience with this?
Thanks for the input
short answer is no, i asked this question before and got some alternatives
from wesley or Jkc dealing with packing/unpacking luma images into different
color spaces (every quarter to a different channel). but i think it was only
mentioned relating to cpu>gpu bottleneck and not internal gpu processing
which looks like a world of pain, shader wise.
i’ll try to find the thread but later.
here it is
What you could conceivably do is interleave your luminance data into
a vec4 (@dimscale 0.25 1.) in a similar fashion to the uyvy or grgb
code, and then once back on the CPU, use [jit.coerce 1 char] to unfold the
packed data back into a single-plane matrix.
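For anyone attempting this, the packing pass described above might look something like the following GLSL fragment program (a sketch only, modeled on the grgb/uyvy shaders; the texture name and the coordinate scaling are assumptions, not code from an actual shipping Cycling ’74 shader):

```glsl
// Hypothetical luma-packing pass, run with @dimscale 0.25 1. so the output
// is a quarter of the input width. Each output RGBA texel carries 4
// neighbouring luminance samples from the input.
uniform sampler2DRect tex0;

void main()
{
    // leftmost of the 4 source pixels this output texel represents
    vec2 pos = vec2(gl_TexCoord[0].s * 4., gl_TexCoord[0].t);
    float y0 = texture2DRect(tex0, pos).r;
    float y1 = texture2DRect(tex0, pos + vec2(1., 0.)).r;
    float y2 = texture2DRect(tex0, pos + vec2(2., 0.)).r;
    float y3 = texture2DRect(tex0, pos + vec2(3., 0.)).r;
    gl_FragColor = vec4(y0, y1, y2, y3); // 4 luma samples per texel
}
```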
On Wed, Feb 11, 2009 at 5:26 PM, Tom W wrote:
A couple of points:
– I’m not sure reducing the calculations from full RGBA to luma in a
slab is going to speed things up much. Graphics cards have dedicated
vector processing hardware, so an operation on a vec4 is going to be
just as fast as (or negligibly slower than) the same operation on a
float. The key to faster shaders is reducing instruction count, not
converting vec4s to floats.
– Slabs use framebuffer objects. FBOs (on OS X especially, but on
most systems in general) only allow 4-channel buffers. In other words,
you can’t get a grayscale-only result out of a slab; the best you can
do is set each color channel to the same value. Even if you could
operate on 1 channel, the hardware would allocate the other 3 anyway
because of limitations in its design.
– You can use any texture as input to a slab, so if you can get
grayscale data into a texture, then you can use that as input to a
slab, no problem.
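As a concrete illustration of the last two points, here is a minimal, hypothetical slab fragment program that treats its input as greyscale (the tex0 input name is the usual convention, but this is a sketch, not a shipping shader). Even with a one-channel texture coming in, the result written to the FBO is still four channels:

```glsl
// Minimal greyscale pass-through sketch. The luminance is read from .r,
// but the slab's FBO output is always 4-channel, so the single value
// has to be written to r, g and b alike.
uniform sampler2DRect tex0;

void main()
{
    float luma = texture2DRect(tex0, gl_TexCoord[0].st).r;
    gl_FragColor = vec4(luma, luma, luma, 1.);
}
```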
----------begin_max5_patcher---------- 1209.3oc0Y9sbhpCGG+Z7oHCydoGGRPPXu5bdN1YGmfDwzFBtjXa2cm8c+jD PEzfkhT11oyH0PB4a976eg3um43lT7BQ3B9J3a.GmeOywwzjtAm5u63lieYC CKLcykSdtH4A24U2RRdQZZ9ApbQFagfgS.YLv+tkxH.Y5hGwLBMsXwCuHNNF ZpYDpmx+3GbrwsEbIGmSL25+JoX1w6rGK2rixyVWR1HqTJJvag2bfuOReAtL zbIZgG360CheHmxYDoQynyMVbPdYq5YVP+kYlg5maUyU8T9y8jp4TuBWmwVq WwGJItyAttfuq66elMS+w7QAfOQSIE6YXNwfQYIlK1VTlqV8BhDfrAwk2EDC poG71PDNZPLGKKou7dAvZ6ihd1Pk+8fJTbr9xJzTQp2Y2sxrDD6PN1FnP2En BWpuDh9r3RkSDBbF4JJURvofTpXGQrHu3IafBNLPEGX.kGZQvbPjAWvv2VBL 3aDTit+yOjZnPI.ejG.szyBdPw2kejugRwq9jmZpDjf4Y1vSzcgmJuG+3aiG u+hdOcGXoplYAHgChGPOiGRjuAG9eJClTkt1g4orp59T9NRoxg7T8e.zVxm6 gVPuPMtfgQSQz0viojEYYLhkUOL3FKReSJ0fU5kHpNZ4ssBgcsTnb4nZ8+QN QVV.Nu2jlqwgYhqV8QlzCACZ2wu03ASBtwjJhtxYBQCzquhCJznfxxAsaWuN fx3spkfD0ejRrvpGO7N7FfdlUdvfB28Gh6v7yWqVQio+wd.QrwFi7Fl6QfAR UdG0.6cthZcHSSMQJqAQMITOL06Q+DoTPK3MlCG2yJ2jby7Q3RykkQMxzona FqXyijzFB2wMkrs+Ogh8Dtsde7i18jx2qqtykXYsnaNq3CL45VDBsv582h2P 5bvVMqNtYkzzBtVDsFot4iSm1RaLtAM0soGb7dKCVVTvRvkOQEzDFokMP42h 4T0dFIRZkdPdmFGMeeooRQimkJQONEKwUBogIp4gvzv52J1nU62JF4h7nDF. sp4srrcqaGuzULS0tPgHSh0k9WVjsyRMcuchqieN6cby7L0oVbNldo5elO93 7QxO6fkAiAK8LWBhOtsktYoWmrb4.Yo1actsKiJaql1NXX7qQo52Z13wgBN9 Y2TB1IkZhhME44jpXUWWKq0yQpLJuqHUib022NDDEGJ2bD009Kf1JOkHjT9o zle6TL5E8aGMMscVJCkoo6KTlKwwrK8xn8VUcXOUc7Hp55FUxupTo.+DIcsZ VT9EqwR0qrmbPVYVZU5r6BEuVgHUIyDLqtj7omwogas9zXWTa1YJLVuwTzs1 .TbiM.8Q4Ml553C1dfwDaJIDN3K1dw3gch30HPsCYEBBW8o7XDdlxSKd1bFB oj8xcIG1tkTZ8zCP2CjhflCOvObJN7fKpxOpG5RIgmp.j0etf650sphjVA+6 AHyL1tjUEutLoeM0tJYueqZTcjoW+yO0nSWkl2dJ9qrg8VS98QSASqlP8QS9 SplPw8QSnoUSQ8PSWH72caG7imlB6ioaZoTOTDbRUDrO4lfSKkf8BSSLmfe7 Lc5oC8wSRvWSRSa1R8IY9pVtnIUR8I+8ztcfk8I68cnH0W9yr+GP14lGA -----------end_max5_patcher-----------
Thanks for both the replies. I suspected as much about the 4-channel processing being as fast as single channel on a graphics card, but thought I’d investigate to see whether there were any gains to be had. If it looks like it’s not going to be worth the pain, then I have to thank you for saving me lots of time!
The CPU > GPU bottleneck was also something I was concerned about, as I’m transferring a largish (720×576) texture. I notice that if I pass a single channel matrix to the GPU, it turns up in the shader as a 4-channel texture with the same values in every channel. Is this 1 -> 4 channel conversion happening before or after the CPU > GPU transfer? That is, am I saving any processing time by converting my qt.grab output to luma before passing to the GPU?
Thanks for the input,
On Feb 11, 2009, at 10:12 AM, Tom W wrote:
> That is, am I saving any processing time by converting my qt.grab
> output to luma before passing to the GPU?
I would expect that if you use jit.qt.grab @colormode uyvy, and pass
to the graphics card for conversion with jit.gl.slab @file
cc.uyvy2rgba.jxs @dimscale 2. 1, it would be your fastest option with
the current architecture.
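For reference, the unpacking the uyvy shader performs is roughly the following (a greyscale-only sketch of the idea behind cc.uyvy2rgba.jxs, not the actual shader source; the sampling details are assumptions). Each uyvy texel packs (U, Y0, V, Y1) for two horizontal pixels, which is why the slab runs at @dimscale 2. 1 and why the luma survives at full resolution:

```glsl
// Sketch: unpack uyvy to greyscale rgba at double the input width.
uniform sampler2DRect tex0;

void main()
{
    vec2 pos = gl_TexCoord[0].st;               // output (doubled) coords
    vec4 uyvy = texture2DRect(tex0, vec2(pos.x * 0.5, pos.y));
    // even output pixels take Y0 (.g), odd ones take Y1 (.a)
    float y = (fract(pos.x * 0.5) < 0.5) ? uyvy.g : uyvy.a;
    gl_FragColor = vec4(y, y, y, 1.);           // luma only; chroma ignored
}
```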
Of course, that makes sense – I’ll give that a go. If my (limited) understanding is correct, UYVY maintains 100% of the luma info anyway, and as I’m working in greyscale I shouldn’t even notice the difference.
Shouldn’t the @colormode luminance or lumalpha attribute on jit.gl.slab come into play here?
I can’t get it working.
No, jit.gl.slab and jit.gl.videoplane don’t support all the colormodes that jit.gl.texture does as their own arguments: just uyvy and rgba. For luminance or lumalpha, insert a jit.gl.texture with the colormode of your choice ahead of jit.gl.slab or jit.gl.videoplane. Once on the card and rendered to jit.gl.slab’s output, the information is rgba regardless of the incoming texture type (rgba, uyvy, lumalpha, luminance).
That being said, jit.qt.grab also doesn’t offer a luminance only colormode, so uyvy -> the graphics card will most likely be faster than uyvy->luma conversion on CPU -> the graphics card.