render to texture and checkerboard edges with @rectangle 0


    Jul 11 2015 | 3:57 pm
    I'm trying to do vertex displacement on a mesh with a texture that's being drawn elsewhere. I get checkerboard edges on my texture if I set it to @rectangle 0, and no drawing at all on my mesh if I set rectangle to 1. I've tried stretching my texture so the checkerboard bits don't get drawn on the mesh, but I can't seem to make that work either. I'm stuck; any ideas?
    Just to be clear, how do I get rid of the checkerboard?

    • Jul 11 2015 | 9:00 pm
      Hi Robin,
      Everything seems to work fine for me with rectangle 1. I mean it draws fine, no checkerboard. Max 7.0.4.
      May I ask why you don't use [jit.gl.node @capture 1] instead? Just curious.
      phiol
    • Jul 11 2015 | 11:56 pm
      Hmmm, what OS & gfx card you got?
      When I enable rectangle I just get a black mesh and no vertex displacement; when I set it to zero it draws a checkerboard round the edge.
      I can't use gl.node for my application because I'm not using automatic rendering.
    • Jul 12 2015 | 12:11 am
      Hi Robin
      My Mac is: Retina, 15-inch, Mid 2014; 2.8 GHz Intel Core i7; 16 GB 1600 MHz DDR3; NVIDIA GeForce GT 750M 2048 MB; OS X 10.9.5 (13F34)
    • Jul 13 2015 | 7:33 am
      I can confirm that rectangle 1 stretches the image across the whole window, showing no checkerboard, and rectangle 0 shows the image in just over half the window and checkerboard for the rest.
      Also, whenever you hit the |floating| attrui box, the checkerboard appears for a fraction of a second (regardless of rectangle 0 or 1) before it is replaced by the image.
      This checkerboard is plaguing my own patch (where many slab and 'non-slab' effects are combined) whenever I go full screen, and it takes ages for the checkerboard to get 'blended out'. Where does the checkerboard come from?
      MBP (17") - Mid 2011 AMD Radeon 6750M Max 7.0.0 OSX 10.10.1
    • Jul 13 2015 | 2:27 pm
      the checkerboard is simply the default jit.gl.texture image. if you are seeing the checkerboard when you go fullscreen, it's because the entire render context is being rebuilt, and possibly texture images have not been uploaded by the time they are rendered, hence the checkerboard is drawn. if you want to avoid this, then i would not toggle fullscreen while rendering, or wait a few frames before enabling your feedback effects. you can also change the default image using the jit.gl.texture @defaultimage attribute (or if using jit.gl.slab, "sendoutput defaultimage black").
      Robin Price, the problem with your patch is that the vert-displace shader needs the displacement texture as a non-rectangular texture (it's defined as "uniform sampler2D dm" rather than "sampler2DRect" in the shader file). therefore @rectangle 0 is required on the displacement texture in order for it to function. however since you are capturing with to_texture, rather than jit.gl.node, your capture dimensions are coupled to the render context window.
      you have a couple options: either send the capture texture to a secondary texture (jit.gl.texture @filter nearest @type float32 @rectangle 0 @adapt 0 @dim 100 100) before sending to the gl.mesh, and set @rectangle 1 on the capture texture. this will capture the scene using the native context dimensions, in rectangular mode, and then copy to a non-rectangular texture for use with the vertex-shader.
      second option is to capture using "jit.gl.node @capture 1 @adapt 0 @dim 100 100" (or whatever dim you want). this will decouple the capturing size from the render window size, and allow you to specify square dimensions and eliminate the dreaded checkerboard. you will also have to force the gl.node output to non-rectangular with "sendoutput rectangle 0".
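      The sampler2D vs. sampler2DRect distinction above comes down to the texture-coordinate convention. A minimal sketch of that difference (plain Python for illustration only, not Jitter or GLSL code):

```python
# Illustrative sketch: the practical difference between the two GLSL
# sampler types is how the texture is addressed.
# sampler2D (non-rectangular, @rectangle 0) uses normalized coordinates
# in [0, 1]; sampler2DRect (@rectangle 1) uses pixel coordinates in
# [0, width] x [0, height]. A shader written against sampler2D therefore
# needs a non-rectangular texture to sample anything meaningful.

def normalized_to_rect(uv, dim):
    """Convert normalized (sampler2D-style) coords to pixel (sampler2DRect-style)."""
    u, v = uv
    w, h = dim
    return (u * w, v * h)

def rect_to_normalized(xy, dim):
    """Convert pixel coords back to normalized coords."""
    x, y = xy
    w, h = dim
    return (x / w, y / h)

# e.g. the center of a 320x240 texture:
print(normalized_to_rect((0.5, 0.5), (320, 240)))  # (160.0, 120.0)
```

      This is why the vert-displace shader silently samples nothing useful when fed a rectangular texture: its [0, 1] lookup coordinates only cover a one-pixel corner of a pixel-addressed texture.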
    • Jul 13 2015 | 10:56 pm
      Cheers Rob, that was both informative and helpful.
      One more question: am I right in thinking jit.gl.node does not currently support rendering with @automatic 0? That's the only reason I'm rendering to texture atm.
    • Jul 15 2015 | 4:18 pm
      jit.gl.node does not support non-automatic capture to texture.
      is there any particular reason you need this feature?
    • Jul 15 2015 | 5:03 pm
      I couldn't make my application work using jit.gl.node and capturing to texture so I ended up rendering to_texture instead. It's a ripple generator / wave simulator that takes an initial texture / video then feeds it into a glsl ripple thing that is a port of this technique
      It then feeds that to the mesh for z plane displacement. It was all very dependent on render order, i.e. the initial texture that displaces the ripples has to be rendered before the ripple bit (which is itself quite fussy about the render order) which has to then be fed to the last part. I had tried to use nodes capturing to textures but came unstuck when I couldn't control the order in which things happened.
      Would @layer have done the trick?
    • Jul 16 2015 | 4:54 pm
      yeah, you should be able to have complete control over draw order of automatic, non-automatic, and capturing nodes using gl.node @layer, the jit.world prerender draw bang, and using the gl.node capture output to trigger.
      see if this patch helps:
    • Jul 16 2015 | 7:58 pm
      Thanks, that example clears up some questions. I'll try and see if I can do it the node way when I have the next big tidy up.
    • Nov 23 2015 | 9:12 pm
      reviving this old thread as I run into the same questions :
      - @Rob_Ramirez: "jit.gl.node does not support non-automatic capture to texture. is there any particular reason you need this feature?" => It's been asked a few times across the forum, but sure, here are some of the reasons (non-exhaustive list):
      1) benefit from the elegant jit.gl.node / jit.gl.camera setup (no more sending clock messages: use layers)
      2) benefit from its straightforward C implementation (rather than dumping all viewing parameters for each view with the to_texture method, and handling lists in colls, which means dealing with reserved words like "bang", "set", "mode", etc. in the chain)
      3) benefit from jit.gl.node's output texture size attribute, to be able to perform low-def pre-rendering (e.g. for depth-grabbing, or HUD compositing)
      4) benefit from composition of hierarchical viewing transforms, rendering at the desired stage
      5) being able to render the same objects in various nodes
      ... Who needs any more reasons? :)
      BTW, there is another alternative to handle the "checker problem": using the tex_zoom / tex_anchor attributes on the target object. You need to rescale these considering that your output texture "canvas" size is equal to the next power of two above the window dimensions. In other words, a 1024x768 jit.window will render to_texture with a size of 1024x1024, and the size of the "checker" stripe corresponds to the difference between 768 and 1024.
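      The rescale described above can be sketched as follows (plain Python for illustration; it assumes the capture is padded up to the next power of two per dimension, and the exact tex_zoom attribute semantics should be checked against the jit.gl object reference):

```python
# Hedged sketch: how much of a power-of-two-padded capture texture
# actually holds image data, for a given window size.

def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def valid_fraction(window_dim):
    """Fraction of the padded texture containing real image data,
    per axis - the kind of rescale you'd feed to tex_zoom-style
    attributes (exact attribute semantics may differ)."""
    return tuple(d / next_pow2(d) for d in window_dim)

print(next_pow2(768))              # 1024
print(valid_fraction((1024, 768))) # (1.0, 0.75)
```

      So for a 1024x768 window, the bottom quarter of the 1024x1024 texture is padding, which is exactly where the checkerboard stripe shows up.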
    • Nov 24 2015 | 7:01 pm
      hi vincent.
      these all sound like reasons for jit.gl.node automatic-mode capturing. i'm looking for a concrete example of a situation or technique that requires non-automatic mode capturing to work, where there are no workarounds without it.
      hopefully it's obvious, but this is not a trivial feature request, and there are reasons it's not currently implemented.
    • Nov 25 2015 | 12:10 am
      hi Rob,
      trying to make a demo patch made me realize that nodes do not capture independently from child/parent nodes, and that enabling capture on a node preempts capturing on child/parent nodes.
      So maybe what I had in mind are actually two different features:
      - one related to the viewing transform: being able to pipeline the various transformations taking place in different nodes
      - one related to rasterization: being able to render the same scene in several different sub-contexts
      Now, it is not very clear to me how and when those 2 actions are performed in the jit.gl.nodes, so maybe I miss the point... If there is a documentation explaining this somewhere, I am interested in reading it.
      Besides, I noticed the capture output of the node exhibits a slow framerate (see the patch below). I had already noticed that I had to "re-contextualize" the node output to the main render context to prevent glitches in feedback situations; I don't know if this other bug is already known to you. If not, I'll try to make an example patch. And there's a crash happening in one situation detailed in the patch.
    • Nov 25 2015 | 9:18 pm
      hi vincent.
      thanks for the patch and the explanation. you can capture multiple views from a scene using multiple jit.gl.camera objects. the patch below demonstrates this. i realize this is not exactly what you're asking, but perhaps it's a technique you haven't thought of.
      i understand the request better now, and i think i can frame it more as a feature to allow sharing geometry among different node sub-contexts. consider this request registered.
      i'm not seeing any glitches or crashes in the patch you posted.
      the slowdown is due to sending a gl-texture to a jit.pwindow. if you want fast preview windows, you should use shared contexts (create a separate gl.render/jit.pwindow pair, and enable @shared 1 on all p/windows).
    • Nov 26 2015 | 3:05 pm
      Hi Rob, OK, I know about jit.gl.camera, but the goal was not so much to display those 4 balls on 2 videoplanes (for which there's more than one way to do it, anyway) but truly to improve the overall ergonomics of jit.gl.node (which is already a great object, don't get me wrong). I understand this may not be a "trivial feature", though.
      For the crash, it happens only with Max 6(.1.10) and sometimes needs a few hectic clicks before it happens, but it is pretty much systematic.
      Thanks for the explanations on the slowdown issue in pwindow.
    • Nov 24 2016 | 11:21 am
      does anybody know how to get lighting to work properly with vertex displacement? The light works with the sphere but not with the displaced sphere. I found various displacement patches but the lighting is never correct; it only ever affects the base shape, not the displacement on it.