jit.gl.mesh then gaussian blur over it using jit.gl.imageunit

Alexandre:

Hi,
I'm always a bit lost with OpenGL:

1 - I create some points* using jit.gl.mesh, as in jit.gl.mesh.maxhelp, with 'draw_mode points' (so I can see these points using jit.gl.render)

2 - Then I want to apply a Gaussian blur to these points, using the OpenGL-accelerated Gaussian blur in jit.gl.imageunit

How can I combine these?

Thanks,

* or lines or polygons

Jesse:

I would render the jit.gl.mesh to a jit.gl.texture using the @capture attribute, then blur the texture with jit.gl.slab. Here's an example that uses commands to jit.gl.sketch to control render order and capture:

[Max patch included in the original post: copy it and select New From Clipboard in Max.]

Pedro Santos:
[Max patch included in the original post: copy it and select New From Clipboard in Max.]

There are several examples of how to accomplish this in the forums and in the examples, but here is a quick, basic patch.
Good luck

Alexandre:

Thanks a lot guys !

Is there an advantage to using jit.gl.slab for the blur instead of jit.gl.imageunit?

[Attached patch: 2343.glbluronpointsslower.maxpat]
Pedro Santos:

The most obvious advantage is probably the fact that jit.gl.slab is supported on Mac and Windows while jit.gl.imageunit only runs on Mac OS X.

As for the Gaussian blur, I have no idea which implementation performs better: Jitter's included shader or Apple's Image Unit. A Gaussian blur is a fairly heavy algorithm. Has anybody compared these two?

vade:

Apple's Gaussian blur uses Core Image, so it entails a gamma and color-correction pre-pass, then the filter is run multiple times (it's a multi-pass algorithm), and then the color is re-corrected to the original image's color space and gamma, so you have *at least* two more passes than in Jitter with OpenGL.

My understanding is that Core Image by default operates internally on 16-bit-per-channel floating-point buffers for color accuracy, which is slower than the typical 8 bits per channel (the usual char data type in Jitter), and depending on the context it can work on 32-bit-per-channel temporary images before returning an output. This is good for fidelity, but tends to be bad for speed, because more memory is moving around. I also believe that Core Image does not always use FBOs, but sometimes uses the slower yet more broadly supported PBuffer objects for temporary image rendering. PBuffers carry a non-trivial context-switching overhead.

I've personally noticed drastic performance differences between the same "kernel" running in GLSL and in Core Image Kernel Language. I would suggest avoiding Core Image if you can, unless you really need to leverage functionality that is only available through third-party or built-in Image Units.

Cycling '74 provides a nice, fast separable Gaussian blur GLSL shader. I'd say use that, and consult its help file to understand how to separate the passes for speed.
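
To give a sense of what "separable" means here: a 2D Gaussian blur can be split into a horizontal 1D pass followed by a vertical 1D pass, costing roughly 2*N texture reads instead of N*N. Below is a rough, generic GLSL sketch of the horizontal pass only. This is not the actual shader that ships with Jitter, and the uniform/varying names (tex0, texelSize, width, texcoord) are placeholder assumptions of mine; Jitter slabs typically use rectangle textures, so the real shader samples somewhat differently.

// Horizontal pass of a separable Gaussian blur (generic GLSL sketch).
// Run once with horizontal offsets, then again with the offset on the
// y axis for the vertical pass, to get the full 2D blur.
uniform sampler2D tex0;      // input texture (e.g. the captured mesh)
uniform vec2 texelSize;      // 1.0 / texture dimensions
uniform float width;         // blur radius scale, in texels

varying vec2 texcoord;       // interpolated texture coordinate

void main()
{
    // 9-tap kernel: center weight plus 4 mirrored pairs (sigma ~= 2)
    float w0 = 0.227027;
    float w[4];
    w[0] = 0.194595; w[1] = 0.121622; w[2] = 0.054054; w[3] = 0.016216;

    vec4 sum = texture2D(tex0, texcoord) * w0;
    for (int i = 0; i < 4; i++) {
        // horizontal offset; swap x/y for the vertical pass
        vec2 offset = vec2(float(i + 1) * width * texelSize.x, 0.0);
        sum += texture2D(tex0, texcoord + offset) * w[i];
        sum += texture2D(tex0, texcoord - offset) * w[i];
    }
    gl_FragColor = sum;
}

In a patch, each pass would be one jit.gl.slab stage, so the blurred result comes from chaining two slabs (horizontal, then vertical) after the captured texture.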