jit.gl.pass _ where to put jxp files in library?
It seems that Max is not finding jxp files in the user Library, in a package, etc.,
but it does find MRT shaders (jxs files).
Is the only solution to put them in the program folder, i.e.:
Max 7/resources/media/jitter/passes ?
Is it the same for a standalone application?
Windows 8.1 and 10
Max 7.0.6
bump
there's no need to bump a topic after one day.
you should be able to put them in any standard package location, but they must be in Packages/your-package/media/jitter/passes/yourpass.jxp
you should also be able to read in passes from any location, with an absolute path.
Hi Rob, sorry for that.
Given the new, more rigid file structure since Max 7, perhaps that could be documented somewhere.
Ah, and 'old' files like jxs don't have this problem; they can be anywhere in the Library. Is that for legacy reasons?
Should we take the resources folder of the Max 7 app as the example for the file structure?
For materials, models, etc., for example...
Bump...
It's just that many specific questions about OpenGL sometimes get no answer.
I never got any hints about OpenCL either...
I suppose you are quite busy.
Thanks for the hard work.
Patrice
i just tested by copying a jxp file named myfx.jxp into my Max 7/Library folder, started Max, opened the help file, and sent "read myfx.jxp" to the gl.pass, and it worked as expected. so i'm not entirely sure what problem you are experiencing, and will need exact details of the steps you are taking to determine if there's a bug.
note, this won't add the effect to the fxname parameter, as that requires effects to be placed in the specific locations i mentioned above.
let me know if that doesn't clear things up.
OK, so it finds it in the Library by sending a read message,
but if I put @fxname myfx.jxp in my Max object, it does not find it in the Library.
Nor with @file myfx.jxp.
With jxs, it does find it in the Library with @file myfx.jxs.
By the way, I'm comparing passes against mrt.jxs in terms of flexibility for post processing.
mixing light and blending..
I did not manage to obtain blending with the automated pass system.
If I set blend_enable 1, dof.jxp gets killed: it no longer gets the right depth info.
So I'm trying to make an MRT shader for my obj3d that maintains the alpha channel and depth,
a kind of simplified multi-output shader. I'm z-sorting my objects with layers.
See attachment: mrt.rgba_nd.shade.jxs
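Roughly, this is the idea (a simplified sketch of the approach, not the attached file itself; tex0 and the varying names are just placeholders):

// write color+alpha, view-space normal and normalized depth to separate MRT outputs
uniform float far_clip;
uniform sampler2DRect tex0;
varying vec4 position;      // view-space position from the vertex stage
varying vec3 normal;        // view-space normal
varying vec2 texcoord0;

void main()
{
    vec4 color = texture2DRect(tex0, texcoord0);
    float depth = length(position.xyz) / far_clip;

    gl_FragData[0] = color;                                      // rgba, alpha preserved
    gl_FragData[1] = vec4(normalize(normal) * 0.5 + 0.5, 1.0);   // normal packed to 0..1
    gl_FragData[2] = vec4(depth, depth, depth, color.a);         // normalized linear depth
}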
Is my depth formula ok?
Do you have the lambert and blinn fragments so I could include them with alpha?
Perhaps i should start a new thread...
Do you recommend always attaching a material to an obj3d instead of sending a texture directly to the obj3d?
Is it because of the fixed-function pipeline problem? (Nvidia GTX 980M)
Given that I've got many objects, do many materials (one material/texture per object) drag down resources?
Thanks
@file myfx.jxp should also work fine, it's the exact same codepath as read. i would check that again. @fxname will not work unless the jxp's are in a package passes folder, as mentioned.
you will have to modify the built-in shaders wherever the alpha value gets killed. you can plug in a jit.matrix to each output of the jit.gl.pass, and unpack the alpha channel to see where this happens. it's tricky with something like DOF, where a screen space blur is happening. what should the alpha value be for the blurred part?
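for illustration, the kind of edit i mean is usually a one-liner in the pass's fragment program: carry the incoming alpha through instead of overwriting it (sampler/varying names below are placeholders, not the actual dof.jxp source):

// keep the source texture's alpha instead of hard-coding it to 1.0
uniform sampler2DRect tex0;
varying vec2 texcoord0;

void main()
{
    vec4 src = texture2DRect(tex0, texcoord0);
    // ... whatever processing the pass does to src.rgb ...
    gl_FragColor = vec4(src.rgb, src.a);   // pass the alpha through rather than writing 1.0
}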
the depth formula we use in jit.gl.material is length(position.xyz) / far_clip
where position and far_clip are calculated as they are in your example. this is a normalized, linear depth value in view space.
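in fragment-shader terms that's just (a sketch, assuming position is the view-space position from the vertex stage and far_clip is passed in as a uniform):

uniform float far_clip;
varying vec4 position;    // view-space position

void main()
{
    float depth = length(position.xyz) / far_clip;   // 0..1, linear in view space
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}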
lambert is:
diffuse = max(dot(N, L), 0.);
where N is the surface normal, and L is:
L = -normalize(position.xyz - gl_LightSource[0].position.xyz);
blinn is:
vec3 H = normalize(L + V);
return pow(max(dot(N, H), 0.), Ns);
where V is:
vec3 V = normalize(-vec3(position));
and Ns is a shininess value.
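put together, a minimal fragment sketch of those terms (variable names and the shininess uniform are illustrative, this isn't the shipped gl.material code):

varying vec4 position;    // view-space position
varying vec3 normal;      // view-space normal
uniform float Ns;         // shininess

void main()
{
    vec3 N = normalize(normal);
    vec3 L = -normalize(position.xyz - gl_LightSource[0].position.xyz);
    vec3 V = normalize(-vec3(position));
    vec3 H = normalize(L + V);

    float diffuse  = max(dot(N, L), 0.);              // lambert
    float specular = pow(max(dot(N, H), 0.), Ns);     // blinn

    vec3 color = gl_LightSource[0].diffuse.rgb * diffuse
               + gl_LightSource[0].specular.rgb * specular;
    gl_FragColor = vec4(color, 1.0);
}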
we don't do any batch rendering based on material value, so it shouldn't make much of a difference how many unique materials you have (other than memory use), but of course you should do some analysis of your own situation to determine this.
mrt.render.directional.jxs is another example of calculating lambert and blinn values without using gl.material, as demonstrated in mrt.deferred.shading.maxpat.
I was just looking at mrt.render.directional.jxs, but it works with a different kind of matrix transform.
Never mind, I've got the parts to build the shader.
I'm aware of the conflict between transparency and depth writing in the buffer.
That's why I'm z-sorting my obj3ds.
I've attached a patch that looks to me like a bug with blend_enable 1:
the depth value is not correct, for no apparent reason.
Perhaps writing the depth with a shader for each object could let me control this depth write mixed with alpha,
like in mrt.depth.jxs: gl_FragDepth.
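Something like this is what I have in mind (just a sketch with placeholder names, not mrt.depth.jxs itself):

// explicitly control the depth a fragment writes, based on its alpha
uniform sampler2DRect tex0;
varying vec2 texcoord0;

void main()
{
    vec4 src = texture2DRect(tex0, texcoord0);
    gl_FragColor = src;
    // opaque fragments keep the rasterizer's depth; near-transparent ones are
    // pushed to the far plane so they don't occlude what is behind them
    gl_FragDepth = (src.a > 0.1) ? gl_FragCoord.z : 1.0;
}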
____
There's a dirty solution that I translated to Max years ago, with 2-pass transparency.
I started another thread for this:
https://cycling74.com/forums/sharing-2pass_transparency-_-possible-with-jit-gl-node/
_____________________
thanks for the patch. i'll see if there's a bug to be fixed on our end.
some comments:
length(position.xyz) / far_clip
gives a spherical depth.
Don't you think that:
-position.z / far_clip
is more appropriate for the depth calculation?
This way, a plane facing the camera has the same depth everywhere, like a real camera.
It's also more logical when using DOF.
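The two conventions side by side, as a sketch (same assumed position and far_clip as before):

uniform float far_clip;
varying vec4 position;    // view-space position

void main()
{
    // spherical: distance from the eye point, larger toward the edges of a flat wall
    float spherical = length(position.xyz) / far_clip;
    // planar: distance along the view axis; every point of that wall gets the same value
    // (view space looks down -z, hence the minus sign)
    float planar = -position.z / far_clip;
    gl_FragColor = vec4(spherical, planar, 0.0, 1.0);   // visualize both for comparison
}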
did you have time to look at my patch:
blend_depth-problem.maxpat
for depth calculation problems when blend_enable=1
Thanks
thanks for the suggestion. you may be correct, but without studying this further i can't say whether we can implement this as you suggested without breaking behavior.