Rendering from multiple cameras in different windows with UI.
Hi All, I'd like to render my 3D scene to multiple windows, each with its own interactive camera like in 3D software (orbit, pan, zoom, etc.), and I can't figure out how to do it. I searched all over the forums and found a few approaches, but I can't get any of them to behave.
1. split viewport
I just have one window but split it up into viewports, each with its own camera.
pros:
- very easy
cons:
- when you resize the viewport, it crops the content instead of scaling it. It would need more work to scale the contents properly
- can't make each viewport interactive
2. textures
Use a jit.gl.node and render to textures, then videoplane.
pros:
- useful if I want previews (i.e. the same render displayed in different places)
cons:
- I don't want the viewports as pwindows but as their own floating windows, and I can't figure out how to do that
- Since the final rendered images are videoplanes, I can't make the cameras interactive in the viewport
3. render the 3d scene multiple times, once for each window
I tried two methods for this:
3a. Manually creating an instance of jit.gl.render for each window
3b. using one instance of jit.gl.render, but changing its drawto in a loop
pros:
- since each window is its own 3d render, I'd probably have most control with this method
cons:
- there seem to be some ghosting artifacts (try zooming in and out with ASDW). I'm not sure if I've done something wrong and stuff is being rendered multiple times or out of order (I'm still getting to grips with the whole t b b l b l b l stuff).
- I still can't make each window interactive! I've used jit.anim.drive with @ui_listen 1 but only one of the windows is interactive :(
thanks for your help
I'm not on my computer that has Max installed, but it doesn't look like you've tried rendering to texture directly from the camera - check your camera object - it will allow you to output a texture directly. Make four cameras, output their textures to four videoplanes, and that may do the trick.
Hi. First of all, it's great to see you here, Memo! I've seen your website and appreciate your work.
I've only had time to see your first approach (1. Split viewport). Regarding the cons:
1. The viewports are vertically normalized, so in your example with a horizontal split they appear not to scale. But even in your example, if you resize the window, you'll see that they are in fact scaling. In my example with four cameras this can be seen more clearly.
2. You can make each viewport interactive. My example uses just one anim.drive that targets each camera based on the mouse position on the jit.window.
Good luck!
Hi, thanks for the quick replies!
@Daddymax
Actually, that's the 2nd method I mentioned above, and I mentioned its problems there too.
@PedroSantos
Thanks for the example, that definitely solves the 2nd con (making each viewport interactive) and I learnt something new! I think I could probably apply the same technique (targetname) to the multirender method.
The problem regarding scaling though is still there. I wasn't referring to when you resize the window, but when you change the viewport sizes. See the example below (I've modified yours to add sliders). It actually works when I change the vertical divide, but not when I change the horizontal divide.
The final issue with the viewport method is, can I make the viewport sizes adjustment interactive? (Directly inside the viewports themselves, like in 3D software by dragging handles like the edge of a window - instead of using sliders :)
I just realized there's a logic error in the code I pasted above, in the detection of which viewport should be active for interactive. The logic (vexpr $f1 / $f2 -> round) isn't taking into account the viewport divide sizes :/
The problem regarding scaling though is still there. I wasn’t referring to when you resize the window, but when you change the viewport sizes. See the example below (I’ve modified yours to add sliders). It actually works when I change the vertical divide, but not when I change the horizontal divide.
The problem with scaling has to do with both size and proportion at once. If Max scaled the content whenever the dimensions changed both horizontally and vertically, the resulting image would be distorted. So in order to scale the content without distortion, the system has to normalize against either the vertical or the horizontal dimension, never both. I guess you're expecting it to prioritize the horizontal dimension, but the vertical dimension is usually the more practical reference (from a broadcast perspective, at least): a 4:3 image must contain the essential information, and a 16:9 image keeps all that content and adds additional information on the sides.
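In pseudo-Python, that vertical normalization works out roughly like this (an illustrative sketch of the logic, not Max code; the function name and parameters are made up for the example):

```python
def fit_viewport(win_w, win_h, content_aspect):
    """Fit content into a window while preserving aspect ratio,
    normalizing on the vertical dimension: height always fills the
    window, and width follows from the content's aspect ratio.
    A negative x_off means horizontal cropping; a positive one,
    horizontal padding."""
    view_h = win_h
    view_w = win_h * content_aspect
    x_off = (win_w - view_w) / 2  # center horizontally
    return x_off, 0, view_w, view_h
```

So a square (1:1) image in a 1600x900 window becomes a 900x900 viewport centered at x = 350, rather than being stretched to fill.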
Regarding the sliders in window, I don't think it's very practical to implement it because the sliders themselves would have to be 3d objects in the scene. It would be easier to use the key object and use the up, down, left and right keys to implement it.
Regarding the vexpr, I did it that way thinking in a 4x4 system with equal sizes... the approach would have to be dynamic in your case, taking into account the information from your sliders...
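The dynamic version of that hit-test, taking the slider-controlled divide positions into account, looks roughly like this in Python (an illustrative sketch for a 2x2 split; the names and normalized-coordinate convention are assumptions, not actual Max objects):

```python
def viewport_at(mx, my, h_divide, v_divide):
    """Return (col, row) of the viewport under a normalized mouse
    position (0..1 in each axis), given adjustable divide positions
    (also 0..1). Unlike a fixed vexpr $f1 / $f2 -> round, this
    compares against the actual divide values."""
    col = 0 if mx < h_divide else 1
    row = 0 if my < v_divide else 1
    return col, row
```

The result can then be mapped to the targetname of the camera owning that viewport.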
Regarding the last problem:
Hey, yes that last patch does the job nicely. That pretty much solves the viewport method (#1). Thanks a lot for your help!
There will be times where having separate windows might be needed (with interactive camera navigation, ray picking, etc.), e.g. multiple monitors with different resolutions, each displaying its own window. In that case, would method #3 be the best way? Is the way I'm trying to do it wrong?
- there seem to be some ghosting artifacts (try zooming in and out with ASDW). I'm not sure if I've done something wrong and stuff is being rendered multiple times or out of order (I'm still getting to grips with the whole t b b l b l b l stuff).
I haven't been able to analyze the patch in detail, but I noticed something right away: if you want to explicitly control the drawing order, you must configure the drawing objects with @automatic 0. Otherwise they'll draw (bang) automatically and again when you explicitly send a bang (hence the duplication, maybe?).
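The duplication can be sketched as a toy model in Python (purely illustrative; this is a conceptual model of one frame, not how Jitter is implemented):

```python
def draw_frame(objects, explicit_bangs):
    """Model one frame of drawing. Objects left with automatic
    enabled are drawn by the scheduler; any object you also bang
    explicitly gets drawn a second time, producing the ghosting."""
    order = []
    for obj in objects:
        if obj["automatic"]:
            order.append(obj["name"])   # scheduler draws it
    for name in explicit_bangs:
        order.append(name)              # your bang draws it again
    return order
```

With automatic left on, an object you also bang yourself appears twice in the draw order; with automatic off, it appears exactly once, in the order your bangs dictate.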
Hey, yeah, that's totally it - that nailed it, thanks! (I'm still quite new to the way Max/Jitter works :/ )
I tried using the same technique of setting targetname for the jit.anim.drive but it doesn't work. I also tried using a gate to route the output of the jit.anim.drive to different cameras depending on the active window, but again it didn't work :/ (I'm using mouseidle and mouseidleout to determine which window is the current one). surely there should be a simpler way to do this!?
i would strongly advise against changing drawto destination dynamically.
if you need unique windows, then create unique gl.render contexts. if the windows are going to be displayed on different ports of the same graphics card, then you can set @shared 1 on both windows, and share texture resources between them (e.g. capture texture output from a camera in one context, and display that texture on a gl.videoplane in another context). if the windows are displayed from different gpu's, sharing resources won't work.
if you want complete functionality from each window for UI elements (jit.anim.drive, jit.gl.handle, jit.phys.picker, etc) then you should simply create two separate scenes. you can use jit.anim.node objects to sync transforms of your gl objects between the two scenes, if desired.
capturing to texture using jit.gl.node or jit.gl.camera is the recommended route, but this requires leaving all objects with @automatic 1, and controlling the drawing order with @layer.
hope this helps.
Thanks for the tips Rob. Out of curiosity (so I can learn) why is it bad to change the drawto destination dynamically? Is it because behind the scenes it's allocating new opengl contexts each time?
Also do you advise against it just for the jit.gl.render node? or also for each of the shapes? I.e. is the following ok? (I explicitly create a node for each jit.gl.render context, but change the drawto for the shapes dynamically).
Note in this version I couldn't get it to work with @automatic 0 (even though I was sending bangs after changing the drawto). And I still can't get the anim.drive to work on different windows. (I don't want different physics etc. in the different windows, I want them to be identical, except I want to be able to interactively control each camera independently, just like in a traditional 3D software.)
different objects will do different things when their drawto is changed. but yes, they must free and re-allocate any resources that depend on the drawing context.
you can change the drawto dynamically, but you should not do it every frame. in your patch, each frame you are re-allocating a vertex-buffer for that specific render context.
one possibility in your patch, is to enable @shared 1 on both windows, add every object and camera to view1, set @capture 1 on your view2 gl.camera, and send that texture to a gl.videoplane bound to view2.
in this case, however, you can't use the @ui_listen feature of anim.drive, and will have to hack something in order to control the camera transforms with mouse or keyboard input.
otherwise, i would simply duplicate your scene entirely, and sync transforms using anim.node (or simply getposition, getrotate every frame). this will be less efficient obviously (you have to duplicate all your resources), however unless you are maxing out your cpu, this is not a concern. an additional step would be to create separate patches for each scene, run each patch in a separate standalone (or separate copy of the max app), and send messages using udpsend/udpreceive over the local network. this is actually the best solution, if cpu performance is a problem. may be overkill for your needs.
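The udpsend/udpreceive approach amounts to serializing each object's transform and sending it as a datagram over the loopback network each frame. A minimal Python sketch of the idea (the JSON message format and function names here are assumptions for illustration, not the Max message format):

```python
import json
import socket

def send_transform(sock, addr, position, rotation):
    """One patch broadcasts getposition/getrotate results each
    frame; the other applies them. Here a transform is packed as
    one JSON datagram."""
    msg = {"pos": position, "rot": rotation}
    sock.sendto(json.dumps(msg).encode(), addr)

def recv_transform(sock):
    """Receive one transform datagram and unpack it."""
    data, _ = sock.recvfrom(1024)
    msg = json.loads(data)
    return msg["pos"], msg["rot"]
```

Since UDP is connectionless and local delivery is fast, a dropped frame just means the mirror scene is one frame stale, which is usually acceptable for this kind of sync.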
Ok thanks, this does seem like a much cleaner solution. I'm guessing the jit.gl.render for view1 should be placed to the right of view2 so it's rendered first, and the two views aren't a frame out of sync?
The only issue is the interactive camera controls. When I attach the anim.drive on view1 it works, it's just the other views that don't work now. But I should be able to hack together a camera navigation for each window I think.
A few questions regarding this:
1. this seems to rely on shared context. So if I moved one of the windows onto a separate graphics card would this not work?
2. when I resize the window for view2 it was squashing the image. I guess because of the transform_reset 2. I changed that to transform_reset 1 and now it preserves aspect ratio, but of course clips the edges sometimes when you resize since the viewport its rendering to doesn't match the window.
3. Also the texture resolution is quite low. I tried @adapt 1 but it made no difference. What exactly does adapt do and why didn't it work?
To address the last two issues I tried getting the size of the window and setting the dimensions of the camera texture. It seems to work but I'm not sure if this is the best way to do it since I might be re-allocating a texture for the camera every frame :/
1 - correct, you can't share texture resources across GPUs (or rather, i believe it's possible, but causes significant slowdowns in rendering)
your solution for the last two issues is correct. you might want to stick a zl.change after your route size, so that you are only setting the dimension attribute when the size changes.
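The zl.change behavior is just a one-value memory gate. In Python terms (an illustrative sketch, not Max code):

```python
class Change:
    """Pass a value through only when it differs from the previous
    one, like zl.change: this gates the window-size messages so
    @dim is only set (and the capture texture only re-allocated)
    when the size actually changes, not every frame."""
    def __init__(self):
        self._last = None

    def __call__(self, value):
        if value != self._last:
            self._last = value
            return value  # changed: pass through
        return None       # unchanged: suppress
```

Unlike the plain change object, zl.change compares whole lists, which is why it works directly on a width/height pair without unpacking and repacking.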
you might want to stick a zl.change after your route size, so that you are only setting the dimension attribute when the size changes.
Doh, I was trying to use change (unpacking and packing again) and going crazy that it wasn't working. zl.change is much better, thanks!