jit.gl.slab with greyscale images

February 11, 2009 | 3:26 pm

Afternoon all.

I’m trying to optimise a complicated patch that uses lots of GPU based processes. One part of the patch deals with keystone correction of an incoming image from a grab object. I’m using the td.repos.jxs shader for this, and it works nicely, but I’ve just realised that I’m doing a lot of unnecessary processing here, as I’m doing all the calculations in full RGBA space, and yet the camera image is greyscale only. What I’m wondering is…

Is there a way to pass a greyscale image to a jit.gl.slab object? I am happy to write a customised shader to perform the repos. operation, but don’t want to embark on this if getting a greyscale image into the slab is going to be too difficult. Does anyone have any experience with this?

Thanks for the input

Tom


February 11, 2009 | 5:41 pm

Short answer is no. I asked this question before and got some alternatives
from Wesley or JKC dealing with packing/unpacking luma images into different
color spaces (every quarter into a different channel). But I think it was only
mentioned in relation to the CPU>GPU bottleneck, not internal GPU processing,
which looks like a world of pain, shader-wise.
I'll try to find the thread later.

here it is
"
What you could conceivably do is interleave your luminance data into
a vec4 (@dimscale 0.25 1.) in a similar fashion to the uyvy or grgb
code, and then once on the CPU, use [jit.coerce 1 char] to unfold the
data.

-Joshua
"

http://www.cycling74.com/forums/index.php?t=msg&goto=86185&#msg_86185
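The packing idea in that quote can be sketched outside of Jitter. This is a hypothetical plain-Python illustration (not Jitter code): pack a 1-channel luma row into 4-channel cells at quarter width, which is what @dimscale 0.25 1. implies, then unfold it back, which is the job [jit.coerce 1 char] does on the CPU.

```python
# Hypothetical sketch (plain Python, no Jitter) of interleaving luma
# data into vec4 cells and unfolding it again.

def pack_luma_to_rgba(luma):
    """Interleave 4 consecutive luma values into one RGBA cell each."""
    assert len(luma) % 4 == 0, "row width must be divisible by 4"
    return [tuple(luma[i:i + 4]) for i in range(0, len(luma), 4)]

def unfold_rgba_to_luma(rgba):
    """Flatten RGBA cells back into a 1-channel luma row."""
    return [v for cell in rgba for v in cell]

row = [10, 20, 30, 40, 50, 60, 70, 80]   # 8 luma pixels
packed = pack_luma_to_rgba(row)          # 2 RGBA cells: quarter the width
assert packed == [(10, 20, 30, 40), (50, 60, 70, 80)]
assert unfold_rgba_to_luma(packed) == row
```

The round trip is lossless; the point is simply that the same bytes travel as a quarter-width 4-channel texture instead of a full-width 1-channel one.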



February 11, 2009 | 5:57 pm

A couple of points:

- I’m not sure reducing the calculations from full RGBA to luma in a
slab is going to speed things up much. Graphics cards have dedicated
vector processing hardware, so operations on a vec4 are just as fast
as (or negligibly slower than) operations on floats. The key to faster
shaders is reducing instruction count, not going from vec4 to float.

- Slabs use framebuffer objects. FBOs (on OS X especially, but on most
systems in general) only allow 4-channel buffers. In other words, you
can’t get grayscale-only results from slabs without setting each color
channel to the same value. Even if you could operate on one channel,
the hardware would allocate the other three because of limitations in
its design.

- You can use any texture as input to a slab, so if you can get
grayscale data into a texture, then you can use that as input to a
slab, no problem.

wes
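To make the second point concrete, here is a hypothetical sketch (plain Python, not Jitter) of what a "grayscale" slab result actually amounts to when the framebuffer is always 4-channel: the single luma value gets written to R, G and B, with alpha left opaque, rather than being stored as one channel.

```python
# Hypothetical illustration: a grayscale image rendered into a
# 4-channel framebuffer simply replicates each luma value across RGB.

def gray_to_rgba(image, alpha=255):
    """Expand a 2D grayscale image into 4-channel RGBA pixels."""
    return [[(y, y, y, alpha) for y in row] for row in image]

img = [[0, 128],
       [255, 64]]
rgba = gray_to_rgba(img)
assert rgba[0][1] == (128, 128, 128, 255)
assert rgba[1][0] == (255, 255, 255, 255)
```

Storage-wise, nothing is saved: four channels are allocated per pixel either way.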

(Max patch attached to this post.)

February 11, 2009 | 6:12 pm

Thanks for both the replies. I suspected as much about the 4-channel processing being as fast as single channel on a graphics card, but thought I’d investigate to see whether there were any gains to be had. If it looks like it’s not going to be worth the pain, then I have to thank you for saving me lots of time!

The CPU > GPU bottleneck was also something I was concerned about, as I’m transferring a largish (720×576) texture. I notice that if I pass a single channel matrix to the GPU, it turns up in the shader as a 4-channel texture with the same values in every channel. Is this 1 -> 4 channel conversion happening before or after the CPU > GPU transfer? That is, am I saving any processing time by converting my qt.grab output to luma before passing to the GPU?

Thanks for the input,

Tom


February 11, 2009 | 6:28 pm

On Feb 11, 2009, at 10:12 AM, Tom W wrote:

> That is, am I saving any processing time by converting my qt.grab
> output to luma before passing to the GPU?

I would expect that if you use jit.qt.grab @colormode uyvy, and pass
to the graphics card for conversion with jit.gl.slab @file
cc.uyvy2rgba.jxs @dimscale 2. 1, it would be your fastest option with
the current architecture.

-Joshua


February 11, 2009 | 6:44 pm

Of course, that makes sense – I’ll give that a go. If my (limited) understanding is correct, UYVY maintains 100% of the luma info anyway, and as I’m working in greyscale I shouldn’t even notice the difference.
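For what it's worth, that understanding is right, and it can be sketched in plain Python (hypothetical illustration, not Jitter code): in a UYVY stream each 4-byte macropixel is (U, Y0, V, Y1), so chroma is shared between two pixels but every pixel keeps its own Y byte, and grayscale content survives intact.

```python
# Hypothetical sketch: UYVY subsamples chroma, not luma, so pulling
# the Y bytes out of a flat UYVY byte stream recovers every pixel.

def uyvy_to_luma(stream):
    """Extract the per-pixel Y values from a flat UYVY byte sequence."""
    assert len(stream) % 4 == 0, "UYVY data comes in 4-byte macropixels"
    luma = []
    for i in range(0, len(stream), 4):
        u, y0, v, y1 = stream[i:i + 4]
        luma.extend([y0, y1])
    return luma

# Two macropixels = four pixels; all four Y values come back unchanged.
stream = [100, 10, 120, 20, 100, 30, 120, 40]
assert uyvy_to_luma(stream) == [10, 20, 30, 40]
```

This also shows why the slab wants @dimscale 2. 1: the uyvy texture holds two pixels per 4-channel cell, so the output needs to be twice as wide.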

Cheers

Tom


February 12, 2009 | 2:38 pm

Shouldn’t the @colormode luminance or lumalpha attribute in jit.gl.slab have something to do with this?

I can’t get it working.


February 12, 2009 | 5:23 pm

No, jit.gl.slab and jit.gl.videoplane don’t support all the colormodes of jit.gl.texture as their own arguments; just uyvy and rgba. For a luminance or lumalpha colormode, insert a jit.gl.texture with the colormode of your choice prior to jit.gl.slab or jit.gl.videoplane. Once on the card and rendered to jit.gl.slab’s output, the information is rgba, regardless of the incoming texture type (rgba, uyvy, lumalpha, luminance).

That being said, jit.qt.grab also doesn’t offer a luminance only colormode, so uyvy -> the graphics card will most likely be faster than uyvy->luma conversion on CPU -> the graphics card.

-Joshua

