How do I manually control render chains?
Hi
I am struggling with controlling the chain of events in multiple jit.gl.nodes:
but I have not been successful controlling them by hand (without layers).
I would like to send each jit.gl.node (@automatic = 0) a bang to control the render sequence. If I switch on the metro (1), nothing happens, as expected.
I have two switches (2) and (3) to control which node gets the first bang. But this has no effect on the print output -> the cameras' captures follow a different order (on my machine first 2, then 1). And changing the @layer numbers does not make a difference.
Now, switching to @automatic = 1, it should be possible to control the render sequence with @layer, but again, no success. This only starts to work if I send each node a "drawto nowhere, drawto ctx" message.
If I delete the first jit.gl.node/jit.gl.camera combo and undo the delete (resetting the objects), I get the same result -> the layer number does not work. Again, the objects only behave after sending the above message.
And if I switch back to @automatic = 0 and don't send any direct bangs to the nodes, the rendering still keeps going, which, according to my understanding of the documentation, it should not.
Can somebody tell me the right way to do it? Is there a misconception on my side?
jit.gl.node and jit.gl.camera capturing only works with automatic-mode rendering.
if you want to turn off automatic rendering, you can use the old-style jit.gl.render "to_texture" message for capturing.
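in rough message terms, the approach is something like this (names like "ctx" and "scene_tex" are just placeholders):

erase -> jit.gl.render ctx
bang -> each non-automatic node / object, in the order they should draw
to_texture scene_tex -> jit.gl.render ctx (copies the current drawbuffer into the named jit.gl.texture)
bang -> jit.gl.render ctx (final draw / swap to the window)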
see if the following does what you need:
Excellent, thank you for this quick reply. It's exactly what I am looking for.
But now I have two additional questions:
1. Is it possible to get the depth map out of the rendering too?
I tried to accomplish this with these adjustments:
But the depth map seems to be, well, not what I expect. It repeats itself four times in a row, with three quarters of the texture black. Are there some settings to correct this?
I want to do multiple renderings of a scene, with different cameras and from different angles. First I need to get the depth maps of each rendering in order to pass all of them as textures to different models and do some shader magic for the final rendering (a bit similar to rendering cast shadows).
The alternative would be to write my own depth-map shader and render it beforehand - but if there is already a method that promises this, I would prefer to go the above way.
2. If I apply a shader to a node, does it apply automatically to all models in the node's subcontext?
if yes: what happens to the shaders each model has individually set?
if yes: what happens if in one render pass I apply a shader to a node, and in a second one I don't? Does the model revert to its own shader or do I have to set it again manually?
the depth_grab works fine, afaict.
i think the problem is displaying the depth texture in the pwindow.
send it to a gl.videoplane and it looks as expected.
if you write your own depth shader you will have more control over the depth format, but it might not be necessary.
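e.g. something like this, where "depth_tex" stands for whatever name your captured depth texture has:

texture depth_tex -> jit.gl.videoplane (in the same context)

and check the result there instead of in the pwindow.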
you can't apply shaders to nodes, at this time.
you will have to continue to use the draw bang trigger to control everything, e.g.:
send shader depth_shader to every object
draw
capture
send shader normal_shader (or no args to unbind)
draw
capture
process textures
final draw
the model will keep whatever shader you last assigned to it.
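spelled out as messages, that sequence is roughly the following (shader and texture names are placeholders, and i'm assuming an erase before each pass):

erase -> jit.gl.render ctx
shader depth_shader -> every object
bang -> each object / node
to_texture depth_tex -> jit.gl.render ctx
erase -> jit.gl.render ctx
shader normal_shader -> every object (or just "shader" to unbind)
bang -> each object / node
to_texture color_tex -> jit.gl.render ctx
(process depth_tex / color_tex)
bang -> jit.gl.render ctx for the final draw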
Thank you for your explanations, I really appreciate the time you take. It makes perfect sense now. And since I have multiple objects that need to be rendered under one node and maybe also under another, this approach seems to be the only one available to have full control. But I still have some questions:
1. I assume the depth map is normalized by the camera's frustum? Even if it is a customized frustum?
2. I further assume that the depth map rendered this way is a bit more performant? And that it has the same resolution as the rendered texture?
3. When is the depth map created? I assume with the bang to the node? Together with the texture? Is it always created, even if not requested with the "depth_grab" message?
4. When exactly is the "normal" texture available? I assume just after jit.gl.render has received the "to_texture" message and before it gets "erase"?
5. HOW do I change the resolution of the texture? I tried @dim all over the place but nothing seems to work, even with @adapt = 0.
I adapted the patch accordingly:
I've tried to use the depth_grab command before, but found it unreliable: the resulting values were not linearized, and its use had a very negative performance impact. So I built a very simple depth shader in order to implement a depth-of-field effect (selective blur). The last patch I attached to the following topic includes the shader and makes use of the procedure suggested by Rob Ramirez.
Good luck!
I really struggle at the moment with the texture size. I have it working in my patch now, but the texture sizes can only be adjusted via the size of the window, and that's very inconvenient, since
A) I have different texture dimension requirements for different render stages
B) I didn't intend to use the window as an output.
So: is it possible to have individual texture output sizes for each node? I can obviously set the dimension of the jit.gl.texture object, which indeed gives me a texture of the specified size, but the captured texture does not fill it. And once I resize the window beyond the dimensions of the jit.gl.texture object, I get a
jit.gl.texture: error disabling texture unit: GL Error: Invalid value
Some of the questions above might have been answered by my head-scratching exercises:
-> I assume now that the textures only become available once jit.gl.render gets a bang, and the "draw" you are referring to is the bang to the jit.gl.render object and not the bang sent to the jit.gl.node object?
Basically the "t b b b b erase" chain lets me set up a sequence of renders, which are then executed with the final bang (draw) to the jit.gl.render object?
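I.e., reading the trigger right to left, I assume something like this (names here are placeholders for my patch):

t b b b b erase
erase -> jit.gl.render ctx (rightmost outlet, fires first)
bang -> first node / object to draw
bang -> second node / object
bang -> triggers the "to_texture my_tex" message to jit.gl.render ctx
bang -> jit.gl.render ctx (leftmost outlet, fires last: the final draw to the window)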
@pedro I just saw your post and I will have a look at it tomorrow. Thank you for the links and the hint.
@pedro After a good night's sleep I was able to have a look at your patch (excellent work, by the way), and thanks to it some more things became clear. I now understand that the to_texture message actually initializes the render; with the bang to the node (or to the objects in Pedro's patch) you indicate what you want to have drawn, and the bang (or draw) to jit.gl.render only updates the window with jit.gl.render's context. Correct?
At least that's what happens in Pedro's patch, while my (unfortunately rather massive) patch needs the final bang to jit.gl.render to get the textures out. So I still seem to misunderstand some of the mechanisms in the background of this black box.
But my main issue stays the same: jit.gl.render only renders out textures at the size of the window. So what I am missing here is an @adapt = 0, but for jit.gl.render, because I am unable to change the values of @dest_dim.
A simple message to send to jit.gl.render, like the one indicated in this patch, would be what I am looking for:
It looks a bit as if I am talking to myself in this thread...
I came up with an alternative to @rob's to_texture solution, and it looks as if it has all the benefits I was looking for:
It's working with @automatic = 1, but it sequentially switches the different nodes on and off via @enable; thus I am able to capture with @capture = 1 and the camera, while I am not dependent on the size of the window for the textures' resolution.
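In message terms it is roughly this, per pass (node names are placeholders):

enable 0 -> all capture nodes
enable 1 -> node_pass1, let the automatic render run (its @capture = 1 texture updates)
enable 0 -> node_pass1, enable 1 -> node_pass2, the next render run captures node_pass2
... and so on, while every captured texture keeps the @dim I give it instead of adapting to the window.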
Now I made some extensive performance tests to see how it compares to all the other solutions above, plus one more I want to discuss later on:
here the two patches:
The first patch contains three tests with two different variations of my suggested solution. It also includes a normal render chain which you can control with @layer, but where you have no way to set individual shaders on each pass. It is expected to be the fastest of all the tests and is used as a reference.
The second patch contains @rob's solution with the equivalent of my two variations as a comparison.
The results:
reference: 86 fps mean with 19.5% CPU
rob's 1: 96 fps mean with 21% CPU
maybites 1: 95 fps mean with 21% CPU
rob's 2: 73 fps mean with 27% CPU
maybites 2: 79 fps mean with 28% CPU
These tests were made on a MacBook Pro 2008 / OS X 10.7.5 / Max 6.1.3 32-bit.
On the performance side there is not much of a difference, but in the way you handle them the differences are huge:
First: in order to work with the window invisible, rob's solution has to start with the window @visible = 1; once the patch is running the window can be set invisible. Very inconvenient if you just want to use jit.gl.render for rendering textures.
Second: with rob's solution it is not possible to set the texture resolution independently of the window size (according to my current knowledge - see previous posts). And that's very, very inconvenient if the window is invisible...
Now, since rob is the expert here: what aspect am I missing in my solution? I have a curious hunch there are some repercussions coming my way...
Now to my second variation: I wanted to see how the performance is when two render contexts share the same 3D model, in this case a simple jit.gl.gridshape. This is because I want to populate my scenes with loads of models (jit.gl.model) and I am reluctant to load a model for each render context (i.e. node subcontext), but would rather share this resource by sending a "drawto" message each time I make a new render pass.
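In message terms, per frame, this looks roughly like the following (node names are placeholders):

drawto node_a -> the shared jit.gl.gridshape / jit.gl.model
(render the pass of node_a)
drawto node_b -> the same shared object
(render the pass of node_b)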
And I am surprised how big the performance penalty is: on average 20 fps less and 8% more CPU.
I know I barely understand anything that happens under the hood, and I am sure you must do a lot of mind-warping voodoo to allow all this JavaScripting and Java externals and what-not to interact with your C++ classes.
But here is a wish from my side: couldn't it be possible to allow each model to draw to multiple contexts? So that @drawto for models actually allows for more than just one symbol? :-)
Another strange thing: in my first test patch I actually included rob's solution as well, but inside this patch it simply doesn't want to play. No error messages, no texture output....
@pedro After a good night's sleep I was able to have a look at your patch (excellent work, by the way), and thanks to it some more things became clear. I now understand that the to_texture message actually initializes the render; with the bang to the node (or to the objects in Pedro's patch) you indicate what you want to have drawn.
Hi, Maybites.
The to_texture message doesn't initialize the render.
If you look closely, you'll see the following order after the initial erase command:
Configuration of the object to use the appropriate shader, immediately followed by a BANG (since the objects have the "automatic 0" attribute). It's this bang that actually renders the object. The to_texture message only copies the resulting image (which we don't actually get to see) to a named texture.
the bang (or draw) to the jit.gl.render only updates the window with the jit.gl.render’s context. Correct?
Exactly. We only see the last render pass, in this case the drawing of the videoplane with the resulting texture applied.
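In other words (the texture name is just a placeholder):

bang -> object: this is what actually renders the object, off-screen
to_texture pass_tex -> jit.gl.render: this only copies what is already in the drawbuffer into the named texture
bang -> jit.gl.render: draws the final pass (the videoplane) to the window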
Hi Pedro, thank you for the clarifications. I don't want to start a dispute here, but I am not convinced. The reason is: you actually send two bangs, so each object receives a bang. And since nothing happens at the same time, one object receives its bang first - but how does the renderer know that another bang is actually coming and another object needs to be added to the render? A test (actually within your patch) has shown that the bang adds the object to the render -> if you send the shader to one object without the bang message, it doesn't appear. So I still believe it's the to_texture message that does it. However, I think this question is not important, since it's clear that it needs a bang to an object and a to_texture to the render. And the only person who can conclusively answer it has to have access to the code, because how should we figure it out? If we don't send the to_texture message, we don't get proof that it was drawn in the first place :-)
But I adapted your patch according to my last post to see if my approach gives similar performance results to yours:
And indeed, I got it working, with a slightly better framerate, but it needs more CPU - maybe because I need another render context to render it out? Still, the real advantage is the decoupling of the texture size from the window size - if one puts importance on this.
Side note: I was unable to set the positions and rotations of the objects in such a way that they got stored with the patch - how do you do that?
to_texture simply copies the current drawbuffer to the named texture. if a non-automatic object was banged prior to the to_texture message, then it is in the drawbuffer.
and yes, it forces the size to be equal to the destination window size.
let me know if there's still anything unclear.
it looks like your solution of enabling/disabling capturing nodes is meeting your needs for capturing to non-adapting textures, so i would stick with that.
let me know if there’s still anything unclear.
Well, don't get me started...
In regards to this topic I think I am through now. Actually, my love for jit.gl.node almost drove me into madness, until I figured out that my setup works without jit.gl.node too. Now I can draw all the objects into the same context without having to use the drawto message - which seems to be a performance killer - and simply switch them on and off.
BUT, I still have some questions in regards to render contexts, shared textures, and multiple GFX cards.
In my soon-to-be app I plan a render chain with up to 18 renderings per frame before it goes to the final 4 to 6 jit.gl.videoplanes to be displayed on 4 to 6 projectors, driven by up to two GFX cards. This is the reason why I look so acutely at the slightest performance benefits.
And I know:
1. the more contexts, the less performance
2. spanning a window over two displays that are connected to different GFX cards means even less performance (notably on OS X machines).
Are there some rules of thumb here?
My approach would be to create one context with one window but multiple viewports (or accordingly sized jit.gl.videoplanes) for each GFX card, and one additional context to do all the pre-renderings and then share the resulting textures with the GFX-card contexts. I know that under Windows with SLI, texture sharing is less of an issue than on OS X, where there is no SLI (or the ATI equivalent -> the new Mac Pro will probably be an exception).
Hey folks, I am very interested in this thread as well. I would like to try some of the patches above but it looks like I am missing some shaders.
Where can I find ab.lumagauss.jxs?
follow pedro's link to his thread:
inside his zip you will find the rest.