jit.gl.node and jit.gl.camera pipeline (+bug report)

    Nov 26 2015 | 4:13 pm
    I am trying to figure out how cameras and nodes respond to each other, and it is still unclear to me. I don't know if this is the expected behavior, but as it seems strange and counter-intuitive to me, I thought it worth posting to ask.
    For the jedis able to solve it mentally without even opening the patch, I'll try to explain here. Otherwise, just copy/paste the patcher below.
    The patch consists of a node "B" with two subnodes, "C" and "D", attached to it, each containing a gridshape: A (render) => B (node) => C (node containing red shape) => D (node containing green shape). I also have a camera with @capture 1. Now, I want to switch between drawto's on the camera. If I drawto A or B, I get both shapes displayed in the texture output by the camera, with the viewing transform of nodes B/C/D. And that is fine.
    Now if I drawto C: 1) node D goes its own way through B to the render, and is therefore displayed in the window, which is OK; 2) the rasterization of node C is intercepted by the camera, which is OK, but the red plane also undergoes the viewing transform from B.
    I would have expected the camera to capture the red plane with only the viewing transform of node C, as this would kind of reproduce what happens if I set the gridshape's context to A (aka the main render): the gridshape does *not* undergo the viewing transform.
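    To make sure I describe the behavior I'm seeing correctly: here is a generic scene-graph sketch (plain Python/NumPy, not Jitter code; the node names only mirror my patch) of how a child's world transform composes the whole parent chain, which would explain why capturing at C still carries B's transform:

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

class Node:
    """Minimal scene-graph node: world transform = parent chain composed with local."""
    def __init__(self, name, local, parent=None):
        self.name, self.local, self.parent = name, local, parent

    def world(self):
        # walk up to the root, composing parent transforms before the local one
        if self.parent is None:
            return self.local
        return self.parent.world() @ self.local

# A (render root) => B => C, mirroring the patch hierarchy
A = Node("A", np.eye(4))
B = Node("B", translation(1, 0, 0), parent=A)
C = Node("C", translation(0, 2, 0), parent=B)

# a capture "at C" still bakes in B's transform, because C's world
# matrix is the whole parent chain:
print(C.world()[:3, 3])  # -> [1. 2. 0.]
```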
    Maybe there are good reasons for that, but I fail to understand them so far.
    BTW, there is also an initialization bug mentioned in the patch.
    What do you think ?

    • Nov 27 2015 | 1:59 pm
      OK, after thinking about it for a while, I understand the node hierarchy, which seems related to a parent/child articulation philosophy that makes sense for articulated bodies. However, it feels like a limitation and a constraint on the wonderful "connect anything to anything" logic of Max.
      As for the rasterization, here is a demo patch, and the question is: is it possible to do the same with jit.gl.node / jit.gl.camera (without duplicating objects)?
    • Nov 30 2015 | 8:00 pm
      the below gets you close. can't really say if this is more useful for you than your existing technique.
    • Nov 30 2015 | 8:57 pm
      thanks for the reply... that's a smart solution! ;) However, I assume there would be more of a performance hit as the number of objects increases, since there is a separate rendering + GL mixing pass for each object. I can only make an assumption here, as I am not sure how the GPU handles optimization at that low level.
      The other thing that makes me prefer the first solution at the moment is that I am targeting a general rendering-pipeline approach where I can easily switch destinations without having to instantiate new objects for the purpose (like the pix alphablend in your case).
      The ugly thing with the to_texture technique is having to explicitly send all those bangs. That makes for a lot of function calls, whereas I assume the gl.node approach is lighter.
      At the moment, a "drawto" message ties the object to both the viewing transform *and* the capture in the gl.node/camera chain. Would it be relevant at some point to dissociate the "context" into 1) a viewing-transform context on one hand, and 2) a rasterization context on the other? I can imagine jit.gl.camera using an @automatic 0 attribute and a name (e.g. "myCamera1"), and the jit.gl.*whateverObject having a "@capturer myCamera1 myCamera1 etc." attribute which would make them rendered by these cameras. I can't judge as well as you whether it would cause backward-compatibility issues, though... just my 2 cents. What do you think?
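      To be clearer about what I mean by dissociating the two roles, here is a toy sketch (plain Python, not Jitter code; names like "capturers" and "transform_parent" are made up for illustration, not real Jitter attributes): the viewing transform would still come from the node hierarchy, while each camera would only rasterize the shapes that opt in to it:

```python
class Shape:
    """Hypothetical model where the transform parent and the capture
    target(s) of a drawable are independent of each other."""
    def __init__(self, name, transform_parent=None, capturers=()):
        self.name = name
        self.transform_parent = transform_parent  # node hierarchy, as today
        self.capturers = set(capturers)           # cameras that rasterize this shape

def render_pass(camera, shapes):
    """A camera rasterizes only the shapes that list it as a capturer."""
    return [s.name for s in shapes if camera in s.capturers]

red   = Shape("red",   transform_parent="C", capturers={"myCamera1"})
green = Shape("green", transform_parent="D", capturers=set())

print(render_pass("myCamera1", [red, green]))  # -> ['red']
```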
    • Dec 02 2015 | 5:48 pm
      of course it depends on the complexity of your scene, but the technique i posted will generally be faster for the specific case demoed in your patch, where you want the same geometry and modelview rendered in different places, because the geometry is rendered only once and the remaining operations are performed on textures. your technique renders the blue ball geometry twice.
      again, if your technique is working for you, then i would stick with it.
      i'll add the rest of your thoughts as notes to the feature request we discussed in the previous thread.
    • Dec 02 2015 | 11:36 pm
      well, if I were completely satisfied with my technique, I wouldn't be investigating these possibilities.
      A much simpler solution (from a user perspective) than what I proposed in the previous post, and one that would work nicely, would be for the parent node to output its texture (with @capture 1) *even if* some child node is capturing too. In the demo patch, that would make the parent node output *both* the red and cyan shapes, while the child nodes would output the red and cyan shapes *respectively*. Does that make sense?
      For the record, below is the version using nodes, with the feedback effect achieved with planes as in my original post. Just a question about it: why does one need to attach a texture (the one called "tmp" in the patch below) to the main context, rather than using jit.gl.node's output directly? What happens here exactly?