Converting GL into jit.matrix

    Sep 21 2013 | 12:37 am
    Hello all, I've been working on a project that uses one of the physics patch-a-day patches. I am not so great at OpenGL in Max yet, and was hoping someone could help me out.
    The problem I am having: I want to convert the GL stuff in my patch into a matrix, so I can use the jit.scissors object to break up the frames of the video. For some reason I can't figure out which objects need the rendering context names; I have tried several things and it keeps "breaking" every time.
    If someone could show me in the patch (below) which objects need a specific name, and how to connect the patch to a jitter matrix, I would be super grateful! Thank you for reading this.

    • Sep 21 2013 | 1:38 am
      you need to render directly into a matrix, so you won't have your floating jit.window display... or you display the matrix after the fact. For that, give a destination name to your jit.gl.render, and the same name to a jit.matrix which will be receiving everything.
      BUT (that "but" is important) that's probably not the way to go: if you do that, fullscreen, mouse interaction, or any other facility tied to the window is gone, because the jit.window won't be the primary render context target. Moreover you have some constraints with your matrix that are not necessarily easy to figure out. You had better stay in the OpenGL world once the render is done. That's all because jit.gl.render can only render into one context, so you generally want to do any other operation before, using jit.gl.node or jit.gl.videoplane.
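      For reference, a rough sketch of the render-to-matrix routing described above (the context/matrix name `slicectx`, the dims, and the gridshape stand-in for your scene are just examples):

```text
[qmetro 33]
 |
[t b erase]                            <- erase first, then bang draws
 |
[jit.gl.render slicectx]               <- destination name matches the matrix below

[jit.gl.gridshape slicectx]            <- your GL scene, drawing to the same context

[jit.matrix slicectx 4 char 640 480]   <- receives the rendered frame (slow readback!)
 |
[jit.scissors @rows 1 @columns 10]     <- now you can slice it as plain matrices
```

      As noted above, this forces a GPU-to-CPU readback every frame, which is why it renders slowly.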
    • Sep 21 2013 | 4:31 am
      Thank you vichug, I tried out the first part and yeah, it rendered really slow and bad lol. In the patch below that is basically what I was trying to do with scissors (the interlacing part).
      But! If I want this interlaced effect, how do I do it with the phys patch I posted in the first message? I am having a hard time understanding how to do so with either jit.gl.node or the videoplane. Sorry if you're just repeating yourself; I'm having a tough time understanding.
    • Sep 21 2013 | 1:27 pm
      well, what you want to do is not easy! The least obvious part is how it will behave with jit.phys.picker. Also, the problem is that I don't have a Kinect to test it myself, but the idea is like this:
      you'll notice 10 instances of jit.gl.camera; each has a @viewport defining the area of the destination where it is rendered. Those zones are strips which are 1/10th of the rendering zone in width. You then need to set the position of each camera manually. I made one for testing. If you have two different "things" to render: put each of those "things" inside a jit.gl.node; both will render in the same context, and each camera will target a different node.
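      To make the strip idea concrete, here is a sketch of the camera setup (the context name `ctx` and the exact camera positions are assumptions; @viewport takes x, y, width, height in normalized 0..1 coordinates of the destination):

```text
[jit.gl.camera ctx @viewport 0.0 0. 0.1 1.]   <- strip 1: leftmost tenth of the window
[jit.gl.camera ctx @viewport 0.1 0. 0.1 1.]   <- strip 2
[jit.gl.camera ctx @viewport 0.2 0. 0.1 1.]   <- strip 3
 ... (and so on, up to)
[jit.gl.camera ctx @viewport 0.9 0. 0.1 1.]   <- strip 10: rightmost tenth
```

      Each camera then gets its own @position / @lookat so the ten strips show different views of the same scene, which is what produces the interlaced look.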
    • Sep 22 2013 | 3:12 am
      Awesome, I was playing with it and this seems to be a step in the right direction. But I am having a bit of a problem: with so many GL cameras it is eating up way too much FPS, getting like 15 when I have all ten up and the "particles" going. I think I can simplify the patch a bit, and in doing so, do you think there is another way to achieve this interlacing thing I want?
      What I mean is, instead of the picker being controlled by the Kinect position, I just change the central_force based on Kinect data. It will do the same thing I want it to do, so I can skip using the jit.phys.picker and the movable object entirely, only using the static points.
      I tried this with that method and it was smoother, but the FPS only went up a little. I was researching this a bit more and some people suggested using jit.gl.slab and GLSL shaders, but I couldn't get that to work. Any ideas?
      Here is the updated patch with the picker gone and everything controlled by a slider on the right. This will work perfectly, now if I could only get the interlacing look.
    • Sep 22 2013 | 10:53 am
      I don't know how either of those works, so I can't really help you with that, sorry... GLSL is for writing shaders and it's another programming language entirely. If you want better performance, though, it's probably the way to go.
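      For what it's worth, a shader-based interlace would be a small GLSL fragment program loaded into jit.gl.slab. The sketch below is a hypothetical fragment stage only; the surrounding .jxs wrapper, the texture binding, and the 64-pixel strip width are all assumptions, not a tested patch:

```glsl
// Hypothetical interlace fragment shader for jit.gl.slab (GLSL 1.x, rect textures).
uniform sampler2DRect tex0;   // incoming texture, bound by the .jxs wrapper
varying vec2 texcoord0;       // passed through from the standard vertex stage

void main() {
    vec4 c = texture2DRect(tex0, texcoord0);
    // Keep every other 64-pixel-wide column, black out the rest,
    // for a vertical-strip "interlaced" look.
    float strip = mod(floor(texcoord0.x / 64.0), 2.0);
    gl_FragColor = c * strip;
}
```

      Since this runs entirely on the GPU, it avoids both the multi-camera overhead and the matrix readback discussed earlier.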