jit.gl.render to two different targets at two widely different resolutions?
I think that is what I want.
I am still trying to understand the Jitter pipeline at a fundamental level... and am not quite sure I fully understand contexts and jit.gl.node objects. My basic understanding is that jit.gl.mesh is an OpenGL source that needs an OpenGL target (or context) to render to, and the dimensions of the output are determined by the target (e.g., a jit.window).
The part where I get confused is when you add a jit.gl.node sub-context and, on top of that, a jit.gl.videoplane rendering of that jit.gl.node.
Ok, that sounds kinda confusing... Here is what I am trying to accomplish. I have a single source of data - that is represented by a jit.gl.mesh. I want to render that data to two separate targets simultaneously. One target is actually a high-resolution jit.gl.syphonserver (which I have connected to a jit.gl.node sub-context). The other target is a low-resolution jit.window (which should be just a small preview of what is being sent to the jit.gl.syphonserver).
I sort of have this working, using the following block diagram:
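(In rough text form, in case the diagram doesn't come through:)

    [jit.gl.mesh]  ->  drawn into the [jit.gl.node] sub-context (1920x1080)
    [jit.gl.node] captured texture  ->  [jit.gl.syphonserver]
    [jit.gl.node] captured texture  ->  [jit.gl.videoplane] in the main context  ->  [jit.window]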

The problem is, the mesh is being rendered at the resolution determined by the jit.gl.node (which in this example is 1920x1080) and is being output to Syphon at this resolution (which is great) - it is also being output to the jit.window - however, the jit.window is a down-sampled view of the jit.gl.videoplane's capture of the 1920x1080 jit.gl.node. I don't want this.
Is it possible to render to two different resolutions? I basically want to render my jit.gl.mesh to two separate contexts (I think context is the right concept - I'm still not sure exactly what a context is). One giant-res render goes to the jit.gl.syphonserver and one "preview" size (let's just say 320 x something of a similar aspect ratio) goes to the jit.window.
I think what I want to do is send the output of the jit.gl.mesh (which I am not sure is still a Jitter matrix or is now an OpenGL texture...) to two different jit.gl.nodes - each of a different size - however, I am not sure of the syntax for doing this with named contexts.
I hope all these questions make sense. If anyone can point me to a document which explains these concepts with nice flow-diagrams and calls out what happens where, that would be awesome. Also, if anyone actually understands what I am trying to accomplish and knows how to accomplish this and can provide some pointers, that would be awesome too!
I think Rob Ramirez posted this either in the forums or the Facebook group, but I can't seem to locate it. Anyway, I don't think there's a need for a videoplane or node. You can render the mesh directly to the window and use [jit.world @output_texture 1] to send directly to a syphonserver. Using the "sendnode" message, you can change the resolution of the jit.world's output_texture.
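Roughly (attribute and message names from memory, so double-check the reference pages):

    [jit.gl.mesh]  ->  rendered straight into the [jit.world @output_texture 1] context
    jit.world's output texture  ->  [jit.gl.syphonserver]
    jit.world's own window  ->  your low-res preview

    and to decouple the texture resolution from the window size, messages to jit.world along the lines of:
    "sendnode adapt 0, sendnode dim 1920 1080"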
Hi LSJ, I think Greg's solution is more or less what you're already doing.
You can achieve this with two capturing jit.gl.camera objects. Set the first to @adapt 0 and the dims of your Syphon output, and the second can adapt to your render context window:
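A rough text version of that arrangement ("ctx" is just a placeholder context name):

    [jit.gl.mesh ctx]

    [jit.gl.camera ctx @capture 1 @adapt 0 @dim 1920 1080]
        captured texture  ->  [jit.gl.syphonserver]

    [jit.gl.camera ctx @capture 1]          (adapts to the window's dims)
        captured texture  ->  [jit.gl.videoplane ctx]  ->  [jit.window ctx]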
Oh nice, your example gives the ability to have different aspect ratios. Pretty useful.
Thanks all! This is some really helpful input...
There are still 2 unsolved mysteries...
1) Does jit.world support a "pwindow" mode - where the preview is not a window but is embedded in your presentation layer? Or would I have to just do it the old-school way: qmetro -> jit.gl.render -> jit.pwindow?
2) It seems that jit.gl.syphonserver (or client) is inverting the Y axis (or the viewport is inverted). In my previous method, I was able to invert the jit.gl.videoplane's Y axis using -1 for the Y scale to get the two outputs to match up - however, it doesn't seem that the jit.gl.camera object supports changing the scale attribute. Is there any way to normalize the views of both so they match? Here is a simple patch which shows what I mean about the inverted Y axis. (I am viewing the Syphon output with the Simple Syphon Client - it might be the one doing the viewport inversion.)
1) No. Yes (the old-school route).
2) Yeah, this happens in certain situations. Simply send the output through a jit.gl.texture or jit.gl.slab between the jit.gl.camera and the syphonserver.
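i.e., something along the lines of:

    [jit.gl.camera ctx @capture 1]
          |  (texture)
    [jit.gl.slab ctx]              (a plain pass-through slab, no extra settings)
          |  (texture)
    [jit.gl.syphonserver]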
Thanks Rob!
Adding a jit.gl.slab between the jit.gl.camera and syphonserver did the trick. I didn't even have to adjust any of the scale parameters. Is there an explanation that would help me understand why this magically fixed the viewport discrepancy?

Slightly OT - The flowchart at the beginning of this post makes me think that a flowchart/anatomy-style chart visually describing the hierarchies of various Jitter objects would be extremely useful to people learning "Jitter - The Big Picture".
(me : )
Sort of a "Knee bone connects to the thigh bone, but only if you do this first" with a little bit of "and here's why" added in.
I think the word "pipeline" was used above...
Has anybody made up such a thing?
curious
jd
I haven't seen anything online, and I have tried to read everything I could find - I had to make something like that just to help me understand the flow, since the use of named contexts was really confusing me. I think the Max documentation does a really good job of providing lots of examples - and provides good docs on the properties of each object - but the "Big Picture" of what is going on is not very well represented.
One thing I have found to help, especially with the Jitter pipeline, is to use patch cords instead of named contexts / symbols. They may clutter up your patch a bit, but it is much easier to follow the signal flow.
In the patch I posted above, I have laid things out in a relatively top-to-bottom, left-to-right flow. One of the confusing things I am still unsure of is the fact that the jit.gl.node object seems to use its outlet as both an input and an output - i.e., you connect it to the "input" of an OpenGL 3D object. This seems to kinda break the whole top=in, bottom=out paradigm of objects. Also not very well represented visually is the difference between plain Jitter matrices and the pipeline once things have been converted to the OpenGL realm. Everything going into the jit.gl.mesh object is a standard Jitter matrix - however, everything after the jit.gl.mesh seems to be a new type of object - a 3D OpenGL object - I am not sure if this has a specific name. It would be helpful if these had a different graphical representation - kinda like how Jitter matrix patch cords look different from plain patch cords. The output of the jit.gl.camera (an OpenGL texture object) does use a differently colored patch cord - but the connections between OpenGL 3D objects don't.
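For example, as far as I can tell these two ways of putting a mesh inside a node's sub-context end up doing the same thing (names are just placeholders):

    patch-cord version:
        [jit.gl.node ctx]   (left outlet)
              |
        [jit.gl.mesh]       (left inlet - the cord assigns the mesh to the node's sub-context)

    named-context version:
        [jit.gl.node ctx @name sub]
        [jit.gl.mesh @drawto sub]   (no patch cord needed)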
Hey Rob, not sure if you are still monitoring this thread, but one thing I noticed I could do is connect the output of the jit.world object directly to a jit.pwindow - and turn on either output_texture or output_matrix for the jit.world. Herein lies the question... Is there a fundamental difference between the two in this context? They visually look and behave the same, but I am not sure that means they are doing the same thing.
They are fundamentally the same thing. When a jit.pwindow receives a texture, it converts it to a matrix internally.
In both cases you are doing a matrix readback from the GPU. This is fine for situations where it's necessary; however, if all you want is a preview window, this readback is unnecessary and can impact framerate. It's much better to create a shared-context preview window, as explained here: https://cycling74.com/wiki/index.php?title=OpenGL_Preview_Window
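In other words, the difference is roughly:

    readback path (texture or matrix copied from the GPU back to the CPU every frame):
        [jit.world ctx @output_matrix 1]  ->  [jit.pwindow]

    GPU-side preview (the kind of thing the linked article describes - see it for the exact setup):
        captured texture  ->  [jit.gl.videoplane preview]  ->  [jit.gl.render preview]  ->  [jit.pwindow]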
Well, I think I have finally arrived at the results I was originally looking for - but I am still not sure I have figured out the best way to replace the jit.world with a jit.pwindow using the jit.gl.camera @capture solution. I did notice that using the output_texture method resulted in reduced framerate / quality - it just did not feel smooth.
In this latest attempt I have the jit.gl.camera's capture being sent to a jit.gl.videoplane that is part of the jit.gl.render -> jit.pwindow context. This is what I ended up getting to work tonight... it is still using a jit.gl.videoplane to render to the jit.pwindow - and I am still confused about whether this is necessary - but I couldn't figure out any other way - though a lot of my effort is trial-and-error and not yet based on fundamental understanding... Here is my latest patch - two output destinations: 1 is a jit.pwindow, 2 is a jit.gl.syphonserver. Independent resolution renderings / capture - framerate is awesome - and no down-sampling that I can see...
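In rough text form (context names are just my placeholders, details simplified):

    [jit.gl.mesh ctx]

    output 1 - Syphon at full res:
        [jit.gl.camera ctx @capture 1 @adapt 0 @dim 1920 1080]
              |  (texture)
        [jit.gl.slab ctx]            (pass-through slab to fix the Y-flip)
              |  (texture)
        [jit.gl.syphonserver]

    output 2 - preview:
        [jit.gl.camera ctx @capture 1]
              |  (texture)
        [jit.gl.videoplane preview]  ->  [jit.gl.render preview]  ->  [jit.pwindow]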