Forums > Jitter

motion blur and other effects for jitter open gl please



Anonymous
October 10, 2009 | 6:20 pm

jit.gl.slab-slide.maxpat

under Max5/examples/Jitter-examples/render/slab

November 9, 2009 | 6:49 pm

to elaborate further, motion blur is a post processing 3d effect (same as glow, gauss blur, etc.). this means you must capture your 3d scene or individual 3d objects to a texture, and process the texture with slabs.

search the forum for many examples, as well as the jitter recipes.
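The feedback idea behind this kind of post-processing can be shown in a toy CPU-side Python sketch (my own illustration, not actual Jitter/slab code): each output frame is a mix of the current frame and the running accumulation, which is what a slab feedback chain does per pixel.

```python
# Toy sketch of feedback-based motion blur: blend each incoming
# frame with the accumulated previous output. "frames" are lists
# of pixel intensities; in Jitter this would be a texture feedback loop.
def feedback_blur(frames, feedback=0.5):
    """Blend each incoming frame with the running accumulation."""
    accum = None
    out = []
    for frame in frames:
        if accum is None:
            accum = list(frame)
        else:
            accum = [feedback * a + (1.0 - feedback) * f
                     for a, f in zip(accum, frame)]
        out.append(list(accum))
    return out

# A bright pixel moving one step per frame leaves a decaying trail.
frames = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
trail = feedback_blur(frames, feedback=0.5)
```

A higher `feedback` value gives a longer trail, just like raising the feedback amount in a slab chain.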


t
February 5, 2012 | 4:19 pm

so the only way to achieve motion blur in 3D space is to use the erase_color attribute of jit.gl.render? otherwise it is only possible to blur textures?

Cheers

February 5, 2012 | 5:35 pm

"so the only way to achieve motion blur in 3D space is to use erase color attribute in jit.gl.render? otherwise it is possible only to blur textures?"
You want to do blurry 3d objects?
Me too, shaders are the way to go.
Another way to do it would be multiple render passes.. not sure how well this works with jitter.

February 7, 2012 | 12:39 pm

In this patch, I illustrate a way to obtain motion blur by capturing a rendered object and processing it with pixel shaders: a feedback system with gaussian blur.

I also illustrate the possibility of doing sub-frame motion blur: calculating and adding intermediary object positions between frames. This way, for instance, we can have a patch calculating 240 fps but only showing 60 fps (useful for video, by reducing the fps but maintaining the fluidity of movement).
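The sub-frame idea can be sketched in plain Python (my own illustration of the averaging, not the actual Max patch): simulate positions at a high internal rate, then average each group of sub-frame samples down to one displayed frame.

```python
# Hedged sketch of sub-frame motion blur: compute object positions
# at sim_fps and average sim_fps/out_fps samples per displayed frame,
# e.g. calculate at 240 fps but only show 60 fps.
def subframe_blur(position_at, sim_fps=240, out_fps=60, n_out=3):
    """Average sim_fps/out_fps sub-frame position samples per output frame."""
    k = sim_fps // out_fps          # sub-frames per displayed frame (4 here)
    displayed = []
    for i in range(n_out):
        samples = [position_at((i * k + j) / sim_fps) for j in range(k)]
        displayed.append(sum(samples) / k)
    return displayed

# e.g. an object moving linearly at 1 unit/second:
shown = subframe_blur(lambda t: t)
```

Each displayed value is the average of the four intermediate positions, which is what smears the motion within a frame.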

I hope it helps.

Another technique I haven’t done before is Vector Motion Blur. Anyone?

– Pasted Max Patch –



t
February 7, 2012 | 3:17 pm

Thank you Pedro, very nice motion blur! There is only one thing that is bothering me: when the picture is still, the blur does not disappear. If I set the cycle frequency to 0, the circle is still blurry.

February 7, 2012 | 7:07 pm

Yeah, I’ve noticed it.

Did a lot of changes:
Changed the feedback shader from "screen" to "lighten"

The blur shader used is now Andrew Benson’s luminance based gaussian filter (http://cycling74.com/forums/topic.php?id=18001). In this case, I use the luminance parameter as the amount of change between successive frames. This way, if the image is the same, no blur will be processed.
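A rough per-pixel sketch of that idea in plain Python (my own illustration, not Andrew Benson's shader): the frame-to-frame change drives how much of the blurred value gets mixed in, so a static image stays sharp.

```python
# Sketch of change-driven blur: where successive frames match, the
# current pixel passes through; where they differ, more of the
# blurred value is used. Pixels are plain float intensities here.
def change_driven_blur(prev, curr, blurred):
    """Mix curr toward blurred by the per-pixel absolute change."""
    out = []
    for p, c, b in zip(prev, curr, blurred):
        change = min(1.0, abs(c - p))   # 0 = static pixel, 1 = big change
        out.append((1.0 - change) * c + change * b)
    return out

# The static pixel is untouched; the changed pixel picks up blur.
prev = [0.2, 0.2]
curr = [0.2, 0.8]          # second pixel changed by 0.6
blurred = [0.5, 0.5]
result = change_driven_blur(prev, curr, blurred)
```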


t
February 7, 2012 | 7:34 pm

Thank you Pedro for the update! But when I changed the circle into a torus, for instance, with poly_mode 1 1, the blur still remains on the surface. Just wanted to let you know.

– Pasted Max Patch –
February 7, 2012 | 8:05 pm

This fixes it, but the results are not very nice with poly_mode 1 1.

– Pasted Max Patch –
February 17, 2015 | 6:49 am

I want to make 2 rendering channels: one for shapes that have motion blur and one with no effects.
The problem is that my light can't go behind the cube shape, for a very obvious reason: it is a blend of 2 gl textures.

any tips ?

– Pasted Max Patch –
February 17, 2015 | 9:48 am

Hello. In your situation, I guess you would need to:
Draw the light and a depth channel of the scene.
Draw the cube and a depth channel of the scene.
Compare the values of the two depth channels to obtain the information of what to show in each pixel (image A or B).
Use this "image mask" to composite the two rendered images.

I’ve never done this, it’s just speculation on my part…
More experienced users or Rob Ramirez might be able to offer a much easier solution…

February 18, 2015 | 6:10 am

I think I get what you are advising me. Simple question: how can I filter the depth channel to use it as a mask?

February 18, 2015 | 7:04 am

If you’re using Max 7, search for pass.rebuild.depth.maxpat example in the file browser. It shows exactly what you want to do. I probably wouldn’t bother doing it in Max 6 now.

February 18, 2015 | 10:56 am

hi guys. yes this is a tricky thing to pull off, however there are some features that may help you out, as demonstrated in pass.rebuild.depth. this patch demonstrates a special gl.pass technique for rebuilding the depth-buffer in one context (i.e. a capturing gl.node) and passing it along to another context (i.e. the main rendering context), allowing you to depth-blend between the two contexts.

there’s also a little hidden attribute called "depth_drawto" that allows you to specify which context to draw this depth pass to. e.g. if you want to share with another capturing gl.node rather than the main context. currently there’s a bug where depth_drawto can only be set via attrui. an additional caveat is objects must be bound to a gl.material in order to rebuild their depth.

however, as this feature is currently implemented, i don’t think it will work for you without getting your hands dirty and hacking at the shader and pass files. the relevant files are "mrt.depth.jxs" shader file, and "depth.jxp" pass file.

if i can find some time, i’ll try and come up with a solution, but in the meantime i hope this gives you enough info to start experimenting. please post back here if you get things sorted out.

February 18, 2015 | 11:28 am

Hi, Rob. Thanks for the clarification!
Regarding the mentioned patch example, I've noticed that jit.gl.node is configured with @capture 2. I've searched the documentation and the forums to no avail. I suppose @capture 1 outputs the color buffer and @capture 2 outputs the color and depth buffers. Is that the case? If so, are there any more modes?

Thanks for the help!

February 18, 2015 | 6:44 pm

jit.gl.node @capture simply indicates the number of render targets currently enabled. if users want complete control, they can manually set @capture and write custom shaders to control what gets written to those targets (the gl_FragData array in the fragment shader). you can see this demonstrated in the mrt.deferred.shading example, where gl.node @capture is set to 3, and each object in the sub-context is bound to a shader that writes data to those 3 capture targets.

when capture is set and a jit.gl.material is bound, the gl.material object generates a shader that writes color data to target 1, normals and depth to target 2, and velocity to target 3. this is demonstrated and explained in the mrt.basic.material example.

the final wrinkle is jit.gl.pass. when gl.pass is bound to a gl.node sub-context, it takes over the gl.node @capture attribute based on whatever effect is currently loaded. eg, if an effect needs depth or normals gl.node capture is set to 2, if an effect needs velocity info, gl.node @capture is set to 3. this is explained in the jit.gl.pass help and reference files.
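The interplay described above can be reduced to a toy model in plain Python (my own illustration, not an actual Jitter API; the target numbers follow the material layout described here): the requirements of the loaded effect determine the @capture count.

```python
# Toy model: map each piece of data an effect needs to the render
# target it lives on (per the gl.material layout described above),
# then @capture becomes the highest target any requirement uses.
TARGET_OF = {"color": 1, "normals": 2, "depth": 2, "velocity": 3}

def capture_count(effect_needs):
    """gl.node @capture ends up as the highest target required."""
    return max(TARGET_OF[need] for need in effect_needs)

# An effect needing depth implies @capture 2; velocity implies 3.
simple = capture_count(["color"])
deferred = capture_count(["color", "depth", "velocity"])
```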

hope this helps!

February 19, 2015 | 1:25 am

Thank you Rob for the additional info. Somehow I missed those files.
I’m used to doing multiple render passes the old way, with to_texture and manual triggering (automatic 0) and that makes sense to me because I understand the process ordering.
With jit.gl.node, in spite of being more practical to set up, the real process is hidden.
Now in Max 7, with jit.gl.pass and multiple render passes it’s even more abstracted, so it’s very useful to understand the inner workings…
Thank you once more and great job on these Max 7 features!

February 19, 2015 | 1:44 am

All of these are good to know, but my problem remains.

In the "motion blur" part of my patch (February 17, 2015 | 6:49 am in this topic) I am extracting the render as a texture because of a feedback process. As far as I understood, in your method you can chain shader effects and manage sub-renders, which doesn't allow me to make a motion blur on an object that goes in front of and behind my cube. Maybe there is another way to make a motion blur, but in any case this process requires a feedback loop of a texture.

Any ideas ?

February 19, 2015 | 1:21 pm

ok here’s the working example based on NICOLASNUZILLARD’s patch. i had to create a new jxp pass file and jxs shader file. these two files will eventually be added to the distro, along with a demo patch.

the technique involves taking the depth data from one sub-context, passing as a texture to another sub-context, reconstructing the view-space positions of each context, and discarding pixels that fail the depth-test. the two color outputs are then blended together.

there are different ways this technique can be tweaked to get different results.

February 19, 2015 | 3:07 pm

Thank you so much for the explanation and this addition to my patch!
These kinds of depth tips are not well documented for beginner and advanced Jitter users, which is frustrating, as light effects and motion blur in a 3D context are very common ideas for VJing.

