to elaborate further, motion blur is a post-processing 3d effect (same as glow, gaussian blur, etc.). this means you must capture your 3d scene or individual 3d objects to a texture, and process that texture with slabs.
search the forum for many examples, as well as the jitter recipes.
"so the only way to achieve motion blur in 3D space is to use erase color attribute in jit.gl.render? otherwise it is possible only to blur textures?"
You want to do blurry 3d objects?
Me too, shaders are the way to go.
Another way to do it would be multiple render passes... not sure how well this works with Jitter.
In this patch, I illustrate a way to obtain motion blur by capturing a rendered object and processing it with pixel shaders: a feedback system with gaussian blur.
I also illustrate the possibility of doing sub-frame motion blur: calculating and adding intermediary object positions between frames. This way, for instance, we can have a patch calculating at 240 fps but only showing 60 fps (useful for video: reducing the fps while maintaining the fluidity of movement).
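The sub-frame idea above can be sketched in plain Python (an illustration of the concept only, not the actual Jitter patch; `position()` and `render()` here are hypothetical stand-ins for the patch's motion calculation and texture capture):

```python
# Sketch of sub-frame motion blur: compute several intermediate
# object positions per displayed frame and average their "renders".
import math

SUBSTEPS = 4       # e.g. 240 fps internal / 60 fps displayed
DISPLAY_FPS = 60

def position(t):
    """Object position at time t (a circular path, as in the patch)."""
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

def render(pos):
    """Stand-in for a render pass: just return the position as a 'pixel'."""
    return pos

def blurred_frame(frame_index):
    """Average SUBSTEPS renders taken between this frame and the next."""
    acc_x = acc_y = 0.0
    for s in range(SUBSTEPS):
        t = (frame_index + s / SUBSTEPS) / DISPLAY_FPS
        x, y = render(position(t))
        acc_x += x
        acc_y += y
    return (acc_x / SUBSTEPS, acc_y / SUBSTEPS)
```

Each displayed frame is the average of four intermediate positions, so fast motion smears along its path while a stationary object stays sharp.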
I hope it helps.
Another technique I haven't done before is Vector Motion Blur. Anyone?
Thank you Pedro, very nice motion blur! There is only one thing that is bothering me: when the picture is still, the blur does not disappear. If I set the cycle frequency to 0, the circle is still blurry.
I want to make 2 rendering channels: one for shapes that have motion blur and one for shapes with no effects.
The problem is that my light can't go behind the cube shape, for a very obvious reason: the result is a blend of 2 gl textures.
Hello. In your situation, I guess you would need to:
1. Draw the light and a depth channel of the scene.
2. Draw the cube and a depth channel of the scene.
3. Compare the values of the two depth channels to determine which image (A or B) to show in each pixel.
4. Use this "image mask" to composite the two rendered images.
I've never done this, it's just speculation on my part...
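The compositing step described above can be sketched per-pixel in plain Python (a hypothetical illustration with hand-made 4-pixel buffers, not Jitter code; in the patch this comparison would happen in a shader):

```python
# Sketch of depth-based compositing: two rendered layers (light and cube),
# each with a color and a depth value per pixel. The layer closer to the
# camera (smaller depth) wins at each pixel.
def depth_composite(color_a, depth_a, color_b, depth_b):
    """Composite two images per-pixel using their depth buffers."""
    out = []
    for ca, da, cb, db in zip(color_a, depth_a, color_b, depth_b):
        out.append(ca if da < db else cb)  # show whichever is nearer
    return out

# Tiny 4-pixel example: the light ('L') passes behind the cube ('C')
# in the middle two pixels, where the cube's depth is smaller.
light_color = ["L", "L", "L", "L"]
light_depth = [0.3, 0.5, 0.5, 0.3]
cube_color  = ["C", "C", "C", "C"]
cube_depth  = [0.9, 0.4, 0.4, 0.9]
print(depth_composite(light_color, light_depth, cube_color, cube_depth))
# → ['L', 'C', 'C', 'L']
```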
More experienced users or Rob Ramirez might be able to offer a much easier solution...
hi guys. yes this is a tricky thing to pull off, however there are some features that may help you out, as demonstrated in pass.rebuild.depth. this patch demonstrates a special gl.pass technique for rebuilding the depth-buffer in one context (i.e. a capturing gl.node) and passing it along to another context (i.e. the main rendering context), allowing you to depth-blend between the two contexts.
there's also a little hidden attribute called "depth_drawto" that allows you to specify which context to draw this depth pass to. e.g. if you want to share with another capturing gl.node rather than the main context. currently there's a bug where depth_drawto can only be set via attrui. an additional caveat is objects must be bound to a gl.material in order to rebuild their depth.
however, as this feature is currently implemented, i don't think it will work for you without getting your hands dirty and hacking at the shader and pass files. the relevant files are "mrt.depth.jxs" shader file, and "depth.jxp" pass file.
if i can find some time, i'll try and come up with a solution, but in the meantime i hope this gives you enough info to start experimenting. please post back here if you get things sorted out.
Hi, Rob. Thanks for the clarification!
Regarding the mentioned patch example, I've noticed that jit.gl.node is configured with @capture 2. I've searched the documentation and the forums to no avail and didn't see any info on it. I suppose @capture 1 outputs the color buffer and @capture 2 outputs the color and depth buffers. Is that the case? If so, are there any more modes?
jit.gl.node @capture simply indicates the number of render targets currently enabled. if users want complete control, they can manually set @capture and write custom shaders to control what gets written to those targets (the gl_FragData array in the fragment shader). you can see this demonstrated in the mrt.deferred.shading example, where gl.node @capture is set to 3, and each object in the sub-context is bound to a shader that writes data to those 3 capture targets.
when capture is set and a jit.gl.material is bound, the gl.material object generates a shader that writes color data to target 1, normals and depth to target 2, and velocity to target 3. this is demonstrated and explained in the mrt.basic.material example.
the final wrinkle is jit.gl.pass. when gl.pass is bound to a gl.node sub-context, it takes over the gl.node @capture attribute based on whatever effect is currently loaded. eg, if an effect needs depth or normals gl.node capture is set to 2, if an effect needs velocity info, gl.node @capture is set to 3. this is explained in the jit.gl.pass help and reference files.
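As I read the explanation above, the capture-count logic amounts to a small rule. Here is a plain-Python paraphrase of it (the function name and parameters are illustrative only, not part of the Max API):

```python
# Sketch of how gl.pass raises gl.node @capture based on what a
# loaded effect needs, per the description above:
#   target 1: color, target 2: normals + depth, target 3: velocity.
def capture_targets(needs_depth_or_normals=False, needs_velocity=False):
    """Return the number of render targets (@capture) an effect requires."""
    if needs_velocity:
        return 3   # color + normals/depth + velocity
    if needs_depth_or_normals:
        return 2   # color + normals/depth
    return 1       # color only

print(capture_targets())                             # plain color pass
print(capture_targets(needs_depth_or_normals=True))  # e.g. a depth-based effect
print(capture_targets(needs_velocity=True))          # e.g. vector motion blur
```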
Thank you Rob for the additional info. Somehow I missed those files.
I'm used to doing multiple render passes the old way, with to_texture and manual triggering (automatic 0) and that makes sense to me because I understand the process ordering.
With jit.gl.node, in spite of being more practical to setup, the real process is hidden.
Now in Max 7, with jit.gl.pass and multiple render passes it's even more abstracted, so it's very useful to understand the inner workings...
Thank you once more and great job on these Max 7 features!
All of these are good to know, but my problem remains.
In the "motion blur" part of my patch (February 17, 2015 | 6:49 am of this topic) I am extracting the render as a texture because of the feedback process. As far as I understood, with your method you can chain shader effects and manage sub-renders, which doesn't allow me to make a motion blur on an object that goes in front of and behind my cube. Maybe there is another way to make a motion blur, but in any case this process requires a feedback loop of a texture.
ok here's the working example based on NICOLASNUZILLARD's patch. i had to create a new jxp pass file and jxs shader file. these two files will eventually be added to the distro, along with a demo patch.
the technique involves taking the depth data from one sub-context, passing as a texture to another sub-context, reconstructing the view-space positions of each context, and discarding pixels that fail the depth-test. the two color outputs are then blended together.
there are different ways this technique can be tweaked to get different results.
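The "reconstructing the view-space positions" step usually starts by linearizing the stored depth-buffer value. As a hedged sketch, here is the common formula for a standard perspective projection with depth stored in [0, 1] (this mirrors the usual GLSL idiom and is not taken from the jxs files mentioned above):

```python
# Convert a nonlinear depth-buffer sample d in [0, 1] back to a
# view-space (eye-space) distance, given the camera clip planes.
def linearize_depth(d, near, far):
    """Linearize a perspective-projected depth-buffer value."""
    return (near * far) / (far - d * (far - near))

near, far = 0.1, 100.0
print(linearize_depth(0.0, near, far))  # a sample at the near plane
print(linearize_depth(1.0, near, far))  # a sample at the far plane
```

Once both sub-contexts' depths are in the same linear space, the per-pixel depth-test (and the discard/blend described above) becomes a simple comparison.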
Thank you so much for the explanation and this addition to my patch!!
These kinds of tips about depth are not well documented for beginner and advanced Jitter users, which is frustrating, as light effects and motion blur in a 3D context are very common ideas for VJing.