Jitter/Shaders - Applying jit.gl.pix to a gridshape object (Gravitational Lensing)
I'm wondering if it is possible to apply the output of a jit.gl.pix shader to an arbitrary object such as a gridshape. All the documentation/examples I've seen that use jit.gl.pix map the output to a videoplane. I'd like to be able to apply the output to, ideally, a sphere, or to use the jit.gl.pix as a shader for the sphere. I don't know enough GLSL (yet) to be able to write a 'proper' shader, but if push comes to shove I'll just have to give it a go.
EDIT: Just to clarify, I don't want to use the output of the pix as a texture; rather, what I would like to do is somehow apply the pix as a shader to the sphere. See the screenshot a couple of posts down for what I mean.
I'm trying to make a lens sort of thing, so the sphere should behave effectively like a glass ball and bend/warp/distort the background object, in this case a skybox (or whatever is behind the sphere), accordingly. I was hoping to achieve this somehow by using the pix object.
Alternatively, if anyone has any other ideas on how to make an interactive 3D lens sort of object in Jitter, I'd love to hear about them.
Any help on this would be extremely appreciated. Thanks in advance!
If you just run the output of the pix through a jit.gl.texture, or stick it in the first input of a jit.gl.material, you can then link either of those objects to a GL object.
Mmm, unfortunately I now realise that the wording of my question left a little to be desired. Following your suggestion, I can map what would have been on the videoplane onto the sphere as a texture; I realise my original wording makes it sound like this is what I was after, so my bad, but it's not quite it. I don't want to use the output of the pix as a texture; rather, what I would like to do is somehow apply the pix as a shader to the sphere.
As I said, I'm trying to make a lens sort of thing, so the sphere should behave effectively like a glass ball and bend/warp/distort the background object, in this case a skybox (or whatever is behind the sphere), accordingly. I was hoping to achieve this somehow by using the pix object. I'll edit my original post in case anyone else reads this.

hi Tony.
i believe what you want is an environment map. attach a jit.gl.material to your gridshape, and send in the cubemap texture to the environment_texture input. check out the gl.material help file for some more details.
This comes a little closer but it's still pretty far from what I'm trying to do. I said 'lens' in an attempt not to overcomplicate matters, but perhaps being frank will shed more light: I'm actually trying to simulate a gravitational lens. From what I've seen there are two main options:
1. Write a proper shader that actually does ray tracing/marching to bend the light paths accordingly. https://www.shadertoy.com/view/llGXWm more or less hits the nail on the head. I'm trying to avoid this if at all possible, since time is becoming something of an issue and I really don't have the knowledge to write a full GLSL vertex + fragment shader, at least I don't think I do.
2. Use something like a normal map or a refraction shader to get an approximate effect going. https://unity3dt.wordpress.com/2015/08/28/interstellar-black-hole-gargantua-tutorial/ looks promising, but it's for Unity3D.
I figure I can surely transfer some of those techniques over to jit.gen.
http://www.rittertec.at/blackhole/ (http://www.rittertec.at/marcel/bakk/bakk_bh.pdf) demonstrates a shader made in Maya that also looks pretty good.
Note that in these examples it basically boils down to placing a sphere in the scene and getting it to distort the image sampled from the skybox/background texture/whatever.
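For what it's worth, the point-mass lens equation these approximations reduce to is beta = theta - theta_E^2 / theta: a pixel at angular distance theta from the lens centre shows the background from position beta, where theta_E is the Einstein radius. In screen space that's just a radial displacement of the sample coordinate by theta_E^2 / theta, which is exactly the kind of per-pixel remapping a pix/gen patch is good at.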
I've managed to get the effect going with jit.gl.pix, but it's only planar; see the pasted patcher. So maybe a better question is: given the patch below, how can I translate it into a 3D environment?
Surely this is possible, since as I understand it jit.gen, gl.pix and so forth act like gl.slab but with nice native Max/Gen expressions instead of having to write the shader code from scratch.
You might want to put the patch below into presentation mode to make it a bit easier to read.
My apologies for perhaps not making it clear before; hopefully this clarifies what I'm after.
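In case the pasted patch doesn't come through, the heart of the effect is just a radial remapping in the gl.pix codebox; a minimal sketch along these lines (the names and constants here are mine, not the exact patch):

// minimal planar point-mass lens in a jit.gl.pix codebox
Param center(0.5, 0.5);   // lens position in normalized coords
Param einstein(0.01);     // theta_E squared, controls the strength

d = norm - center;        // vector from this pixel to the lens
r = length(d);
// weak-lens remap, beta = theta - theta_E^2/theta, in vector form;
// the small epsilon avoids a divide by zero at the centre
src = center + d * (1. - einstein / (r*r + 0.0001));
out1 = sample(in1, src);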
i played around a little. maybe this gets you closer.
basically, render the same scene twice: the first time, send the output through your pix shader and map it to a view-aligned gridshape, applying some circular masking to the final texture.
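the masking pass can be another jit.gl.pix; something like this (parameter names are mine, and you'd want blending enabled on the gridshape so the alpha actually hides things):

// fade alpha to 0 outside a feathered disc
// note: a circle in norm coords assumes a square texture;
// scale for aspect ratio otherwise
Param center(0.5, 0.5);
Param radius(0.3);
Param feather(0.05);

c = sample(in1, norm);
r = length(norm - center);
a = 1. - smoothstep(radius - feather, radius + feather, r);
out1 = vec(c.x, c.y, c.z, a);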
This is really excellent Rob, you've pretty much hit the nail on the head I think. At the very least, this gives me a pretty good basis to work from. Using a gl.node and the like is the sort of thing I just didn't know how to apply, but your patch has been quite instructive.
I have a couple of follow-up questions now, but maybe they're pushing the limit of what can be done (I've sketched roughly what I'm imagining after the questions):
1. I've managed to now sample arbitrary geometry in the scene and send it through the shader (just by connecting the 2nd output of the jit.gl.node to any other geometry I want to sample), which works pretty well, but now I'm wondering if I can somehow incorporate 'depth' into the shader. By this I mean: if I have, say, a sphere in front of the 'hole', can I get it such that the sphere isn't sampled, but then if the sphere moves 'behind' the hole it does get sampled? I have a feeling this might not be possible with just the gl.pix, since as I understand it, it simply samples whatever is on screen and that's that, but if there's a way to incorporate depth somehow that'd be awesome.
2. I've been mainly ignoring the masking and just looking at that version of the scene by enabling locklook on the camera, which is fine for all intents and purposes, but I got to thinking: instead of just a simple fade on the mask, could I somehow interpolate (I think that's the best term to use?) the pixels to the background scene? The attached image attempts to illustrate what I mean.
2b. If I could manage this, then I think I could see a way of solving 1, if I could also figure out how to sample the geometry and render it in both the 'shader' version of the scene and the regular version.
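To make that concrete, here's roughly what I'm imagining inside a single pix, assuming there were some way to feed it the un-lensed scene and a depth texture as extra inputs (which is the part I don't know how to do, so this is pure speculation):

// hypothetical: in1 = lensed scene, in2 = regular scene,
// in3 = depth of the other geometry (no idea yet how to capture this)
Param center(0.5, 0.5);
Param radius(0.3);
Param feather(0.08);
Param lensdepth(0.5);     // depth of the lens plane, made up

lensed = sample(in1, norm);
plain  = sample(in2, norm);
dtex   = sample(in3, norm);
d      = dtex.x;

// question 2: feather the lensed disc into the regular scene
// rather than fading to transparent
m = 1. - smoothstep(radius - feather, radius + feather, length(norm - center));
// question 1: only lens where the geometry is behind the lens plane
// (depth buffers usually store larger values for farther fragments)
m = m * step(lensdepth, d);
out1 = mix(plain, lensed, m);

No idea if that maps onto what jit.gl.node can actually output, but that's the behaviour I'm after.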

Again, thanks for all your help on this, it's been a great learning experience just to get this far.