I played a bit with the near & far clip OB3D attributes.
We can use them on each visual object, and on the camera too.
Used on the camera, the effect is more global.
In my system, I have to evaluate all distances relative to the camera, so I can (and have to) activate/deactivate OB3D visual objects depending on distance. This improves performance.
But what are the benefits of near & far if they "only" provide a way to visually hide and show things between two distance values, without having any impact on the activation and deactivation of visual objects in the context?
Adjusting the near/far clips also affects the depth-buffer calculation. In addition to the overhead of passing geometry to the GPU, fill rate is sometimes also a performance consideration, especially when rendering several large transparent objects. Clipping helps avoid filling pixels that shouldn't be in view.
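To illustrate the depth-buffer point: an OpenGL-style perspective projection maps eye-space depth to normalized device depth nonlinearly, and tightening the near/far range concentrates depth precision over the distances you actually render. This is a plain-Python sketch of the standard projection math, not anything Jitter-specific; the clip values are made up for the example.

```python
def ndc_depth(dist, near, far):
    """Map a positive eye-space distance in front of the camera to
    OpenGL normalized device depth in [-1, 1] (standard perspective
    projection: eye space looks down -Z)."""
    z_eye = -dist
    return (far + near) / (far - near) + (2.0 * far * near) / ((far - near) * z_eye)

# Two objects 1 unit apart, seen with loose vs. tight clip planes.
# The tighter range spreads them further apart in depth-buffer space,
# which means fewer z-fighting artifacts at those distances.
loose_gap = ndc_depth(51.0, 0.1, 10000.0) - ndc_depth(50.0, 0.1, 10000.0)
tight_gap = ndc_depth(51.0, 1.0, 100.0) - ndc_depth(50.0, 1.0, 100.0)
# tight_gap is roughly ten times larger than loose_gap here
```

The same geometry therefore resolves more cleanly in the depth buffer when near/far bracket the scene snugly, which is why adjusting them matters even though they "only" clip.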
Julien, given the nature and frequency of your questions around these topics, I’d highly recommend that you spend some time researching general OpenGL concepts and architecture (the Red Book), as these are not Jitter-specific topics and there is a deep wealth of info out there that extends beyond what this community can provide.
Thanks a lot for your answer Andrew.
Yes, it is probably now the time to dive into that book (and I'll need it for many other reasons, too).
So if I understand correctly, combining my own system, which deactivates objects outside a particular range, with proper near/far clipping is a common pattern?
I mean, it makes sense, no?
yes, it is common to cull objects in your application before sending them through the OpenGL pipeline.
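The application-side culling described above can be sketched in a few lines. This is plain Python, not the Jitter API: the object list, camera position, and cutoff distance are invented for the example, but the idea is the same as toggling each OB3D object's enable state based on its distance from the camera before it ever reaches a draw call.

```python
import math

def cull_by_distance(objects, cam_pos, max_dist):
    """Split named objects into (visible, hidden) lists by Euclidean
    distance from the camera. Hidden objects would be disabled and
    skipped entirely, saving both geometry and fill-rate cost."""
    visible, hidden = [], []
    for name, pos in objects:
        d = math.dist(cam_pos, pos)
        (visible if d <= max_dist else hidden).append(name)
    return visible, hidden

# Hypothetical scene: a nearby tree and a distant mountain.
scene = [("tree", (0.0, 0.0, -10.0)), ("mountain", (0.0, 0.0, -500.0))]
visible, hidden = cull_by_distance(scene, (0.0, 0.0, 0.0), 100.0)
# visible == ["tree"], hidden == ["mountain"]
```

In practice the cutoff would be chosen a little beyond the far clip, so objects are disabled only once they could no longer contribute pixels anyway.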