Forums > Jitter > OpenGL points, then gaussian blur over them

June 9, 2011 | 6:45 pm

I’m always a bit lost with OpenGL:

1 – I create some points* in OpenGL using ‘draw_mode points’ (so I can see these points on screen).

2 – Then I want a gaussian blur on these points, using the OpenGL-accelerated gaussian blur shader that comes with Jitter.

How can I combine the two?


* or lines or polygons

June 9, 2011 | 11:28 pm

I would render the geometry to a jit.gl.texture using the @capture attribute, then blur the texture using jit.gl.slab. Here’s an example that uses commands to the renderer to handle render order and capture:

– Pasted Max Patch –

June 10, 2011 | 12:27 am

There are several examples of how to accomplish this in the forums and in the Jitter examples, but here is a quick, basic patch.
Good luck

– Pasted Max Patch –

June 10, 2011 | 9:49 am

Thanks a lot guys !

Is there an advantage to using jit.gl.slab to do the blur instead of jit.gl.imageunit?

June 10, 2011 | 3:53 pm

The most obvious advantage is probably the fact that jit.gl.slab is supported on both Mac and Windows, while jit.gl.imageunit only runs on Mac OS X.

About the gaussian blur itself, I have no idea which implementation performs best: Jitter’s included shader or Apple’s Image Unit? A gaussian blur seems to be a very heavy algorithm. Has anybody compared these two?

June 14, 2011 | 5:42 am

Apple’s Gaussian blur uses Core Image, so it entails a gamma- and color-correction pre-pass; then the filter is run multiple times (it’s a multi-pass algorithm); and then the color is converted back to the original image’s color space and gamma. So you have *at least* two more passes than in Jitter with OpenGL.
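To make the extra passes concrete, here is a plain-Python sketch of the gamma-aware pipeline described above — not Core Image’s actual code, just the shape of it: linearize sRGB, filter in linear light, re-encode. The blur is stood in for by a trivial box filter.

```python
# Illustrative sketch of a gamma-correct filter pipeline (NOT Core Image's
# implementation). The linearize/re-encode steps are the two extra passes
# that bracket the blur itself.

def srgb_to_linear(c):  # c in [0, 1], standard sRGB decode
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):  # inverse: linear light back to sRGB
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def box_blur(values):   # stand-in for the (multi-pass) gaussian filter
    out = []
    for i in range(len(values)):
        lo, hi = max(i - 1, 0), min(i + 1, len(values) - 1)
        window = values[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def blur_gamma_correct(pixels):
    linear = [srgb_to_linear(p) for p in pixels]   # pre-pass: linearize
    blurred = box_blur(linear)                     # the filter itself
    return [linear_to_srgb(p) for p in blurred]    # post-pass: re-encode
```

Filtering in linear light is why Core Image’s result looks “right” around bright edges, at the price of those two conversion passes.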

My understanding is that Core Image by default operates internally on 16-bit-per-channel floating-point buffers for color accuracy, which is slower than the typical 8 bits per channel (usual for the char data type in Jitter), and depending on the context it can work on 32-bit-per-channel temporary images before returning an output. This is good for fidelity, but tends to be bad for speed, as there is more memory moving about. I also believe that Core Image does not always use FBOs, but sometimes uses the slower, more broadly supported PBuffer objects for temporary image rendering. PBuffers have a non-trivial context-switching overhead associated with them.
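The memory-traffic point is easy to quantify. This is illustrative arithmetic only (the frame size is an assumption, not anything measured from Core Image): per-pass buffer sizes for an RGBA image at the three bit depths mentioned above.

```python
# Illustrative only: per-pass buffer size for a 1920x1080 RGBA frame
# at different per-channel precisions. Higher-precision intermediates
# mean proportionally more memory moved on every pass.
W, H, CHANNELS = 1920, 1080, 4  # assumed frame size for illustration

def buffer_bytes(bytes_per_channel):
    return W * H * CHANNELS * bytes_per_channel

char_buf = buffer_bytes(1)   # 8-bit per channel (Jitter char)
half_buf = buffer_bytes(2)   # 16-bit float per channel
float_buf = buffer_bytes(4)  # 32-bit float per channel

for name, size in [("char", char_buf), ("float16", half_buf), ("float32", float_buf)]:
    print(f"{name}: {size / 2**20:.1f} MiB per pass")
```

A multi-pass filter pays that cost on every intermediate pass, so a float16 pipeline moves twice the data of a char pipeline before the algorithm itself does any work.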

I’ve personally noticed drastic performance differences between the same "kernel" running in GLSL compared to the Core Image Kernel Language. I would suggest avoiding Core Image if you can, unless you really need to leverage some of its functionality via third-party or built-in Image Units.

C74 provides a nice, fast separable gaussian blur GLSL shader. I’d say use that, and look at its help file to understand how the passes are separated for speed.
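For anyone unsure what “separable” buys you: this is not C74’s shader, just a plain-Python sketch of the property it exploits — a horizontal 1D gaussian pass followed by a vertical one produces exactly the same image as the full 2D convolution, at 2·(2r+1) taps per pixel instead of (2r+1)².

```python
# Sketch of gaussian separability (not C74's shader code).
import math

def gauss_kernel(radius, sigma):
    # Normalized 1D gaussian kernel of width 2*radius + 1.
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_h(img, k):
    # Horizontal pass; edges clamped (like GL_CLAMP_TO_EDGE).
    r, h, w = len(k) // 2, len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(kv * img[y][min(max(x + i - r, 0), w - 1)]
                            for i, kv in enumerate(k))
    return out

def convolve_v(img, k):
    # Vertical pass, same clamping.
    r, h, w = len(k) // 2, len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(kv * img[min(max(y + i - r, 0), h - 1)][x]
                            for i, kv in enumerate(k))
    return out

def blur2d(img, k):
    # Reference: full 2D convolution with the outer-product kernel k[i]*k[j].
    r, h, w = len(k) // 2, len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j, kj in enumerate(k):
                for i, ki in enumerate(k):
                    acc += kj * ki * img[min(max(y + j - r, 0), h - 1)][min(max(x + i - r, 0), w - 1)]
            out[y][x] = acc
    return out
```

On the GPU the two 1D passes are typically implemented as two slab/shader passes with an intermediate texture, which is exactly the pass separation the help file walks through.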
