to elaborate further, motion blur is a post-processing 3d effect (like glow, gaussian blur, etc.). this means you must capture your 3d scene or individual 3d objects to a texture, and process that texture with slabs.
search the forum for many examples, as well as the jitter recipes.
In this patch, I illustrate a way to obtain motion blur by capturing a rendered object and processing it with pixel shaders: a feedback system combined with a Gaussian blur.
I also illustrate the possibility of doing sub-frame motion blur: calculating and adding intermediary object positions between frames. This way, for instance, we can have a patch calculating at 240 fps but only displaying 60 fps (useful for video: the displayed fps goes down while the fluidity of movement is maintained).
I hope it helps.
Another technique I haven't done before is Vector Motion Blur. Anyone?
Thank you Pedro, very nice motion blur! There is only one thing that is bothering me: when the picture is still, the blur does not disappear. If I set the cycle frequency to 0, the circle is still blurry.
Changed the feedback shader from "screen" to "lighten"
The blur shader used is now Andrew Benson's luminance-based Gaussian filter (http://cycling74.com/forums/topic.php?id=18001). In this case, I use the luminance parameter as the amount of change between successive frames. This way, if the image is the same, no blur is processed.
hi guys. yes this is a tricky thing to pull off, however there are some features that may help you out, as demonstrated in pass.rebuild.depth. this patch demonstrates a special gl.pass technique for rebuilding the depth-buffer in one context (i.e. a capturing gl.node) and passing it along to another context (i.e. the main rendering context), allowing you to depth-blend between the two contexts.
there's also a little hidden attribute called "depth_drawto" that allows you to specify which context to draw this depth pass to. e.g. if you want to share with another capturing gl.node rather than the main context. currently there's a bug where depth_drawto can only be set via attrui. an additional caveat is objects must be bound to a gl.material in order to rebuild their depth.
however, as this feature is currently implemented, i don't think it will work for you without getting your hands dirty and hacking at the shader and pass files. the relevant files are "mrt.depth.jxs" shader file, and "depth.jxp" pass file.
if i can find some time, i'll try and come up with a solution, but in the meantime i hope this gives you enough info to start experimenting. please post back here if you get things sorted out.
Regarding the mentioned patch example, I've noticed that jit.gl.node is configured with @capture 2. I've searched the documentation and the forums to no avail. I suppose @capture 1 outputs the color buffer and @capture 2 outputs the color and depth buffers. Is that the case? If so, are there any more modes?
jit.gl.node @capture simply indicates the number of render targets currently enabled. if users want complete control, they can manually set @capture and write custom shaders to control what gets written to those targets (the gl_FragData array in the fragment shader). you can see this demonstrated in the mrt.deferred.shading example, where gl.node @capture is set to 3, and each object in the sub-context is bound to a shader that writes data to those 3 capture targets.
when capture is set and a jit.gl.material is bound, the gl.material object generates a shader that writes color data to target 1, normals and depth to target 2, and velocity to target 3. this is demonstrated and explained in the mrt.basic.material example.
the final wrinkle is jit.gl.pass. when gl.pass is bound to a gl.node sub-context, it takes over the gl.node @capture attribute based on whatever effect is currently loaded. e.g., if an effect needs depth or normals, gl.node @capture is set to 2; if an effect needs velocity info, gl.node @capture is set to 3. this is explained in the jit.gl.pass help and reference files.
All of these are good to know, but my problem remains.
In the "motion blur" part of my patch (February 17, 2015 | 6:49 am in this topic) I am extracting the render as a texture because of a feedback process. As far as I understand, your method lets you chain shader effects and manage sub-renders, but it doesn't allow me to apply motion blur to an object that passes in front of and behind my cube. Maybe there is another way to make a motion blur, but in any case this process requires a feedback loop of a texture.
ok here's the working example based on NICOLASNUZILLARD's patch. i had to create a new jxp pass file and jxs shader file. these two files will eventually be added to the distro, along with a demo patch.
the technique involves taking the depth data from one sub-context, passing it as a texture to another sub-context, reconstructing the view-space positions in each context, and discarding pixels that fail the depth-test. the two color outputs are then blended together.
there are different ways this technique can be tweaked to get different results.
Thank you so much for the explanation and this addition to my patch!!
These kinds of depth tips are not well documented for beginner or advanced Jitter users, which is frustrating, as light effects and motion blur in a 3D context are very common ideas for VJing.