I want to pass on something I found out today while experimenting. First, my cut-down scene is made up of a sketch object drawing a matrix that is constructed in a nurbs object with @matrixoutput 1. I'm also rendering a mesh shape that fills up a large part of the scene (but doesn't obscure the sketch primitive). In the sketch command list, before the command to draw the matrix, I have a set of commands for two lights. Now, I'd always thought that the lights from sketch were exclusive to its command list, i.e. not shared with other primitives, which had only the one light source controlled in the render object. Not so, I found out today: by turning off the pushstate attribute in sketch, the second light is then shared across other primitives, and my mesh object was also lit up by my second light (light 1 in the sketch command list).
Now a question. I had understood that OpenGL clips any parts of primitives that are not going to make it to the framebuffer, thus saving on processing. I'm sure that it does this, but my question is: at what stage does the clipping take place? If I render a matrix and scale the primitive up very large, so that the majority of it will not appear in the viewport, I find that the rendering speed slows down. Why? Also, if I bind the phong GLSL shader to this shape, there is a further drop in framerate. Why does this happen? Doesn't the shader do its work on the GPU? Is it having to work out the lighting for the whole of the primitive before it gets clipped?
If you look at this page:
http://www.glprogramming.com/red/chapter03.html at figure 3-2, you'll see where vertices are transformed into clip space. This is where the frustum culls vertices. If you're drawing a lot of vertices, they will still be processed even if they are out of view, but they will not be rasterized (unless partially viewable).

The reason you're probably seeing a slowdown has less to do with how many vertices you're rendering (unless it's a lot) and more with how many pixels the mesh fills up. If, for example, you have a lot of transparent layers that fill the screen five times over, the GPU has to fill the screen that many times, and for all but the best GPUs this can really kill framerate. This is even more true when using fragment-heavy shaders like a Phong shader, which performs a decent number of calculations per fragment (basically per pixel, but not quite the same thing). The short answer is that pixel fill-rate is most likely the biggest bottleneck here.
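To make the clip-space step concrete, here's a toy sketch (plain Python, not Jitter or OpenGL API code, with made-up coordinates): after the projection transform a vertex sits in homogeneous clip coordinates (x, y, z, w), and it is inside the frustum only if every component lies in [-w, w]. Note the GPU still has to transform every vertex before it can make this test, which is part of why a hugely scaled primitive isn't free.

```python
# Toy model of the clip test OpenGL applies after the projection
# transform (illustrative only -- not Jitter/OpenGL API code).
# A vertex in homogeneous clip coordinates (x, y, z, w) is inside
# the view frustum only if -w <= c <= w for each of x, y, z.

def inside_frustum(x, y, z, w):
    return all(-w <= c <= w for c in (x, y, z))

# A vertex near the centre of the view survives...
print(inside_frustum(0.2, -0.3, 0.5, 1.0))   # True
# ...but a vertex pushed far outside the frustum is culled, even
# though the GPU already spent work transforming it into clip space.
print(inside_frustum(5.0, 0.0, 0.5, 1.0))    # False
```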
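To put rough numbers on the fill-rate point (the resolution, overdraw, and frame rate below are made-up illustrative figures, not measurements): fragment work scales with pixels covered times layers of overdraw, not with vertex count.

```python
# Back-of-envelope fill-rate model (illustrative numbers only).
# Fragment cost scales with covered pixels x overdraw, not with
# vertex count -- the visible part of a huge scaled primitive can
# still cover a lot of pixels even after most of it is clipped.
width, height = 1024, 768
overdraw = 5                      # e.g. five transparent layers
fragments = width * height * overdraw
print(fragments)                  # fragments shaded per frame
# At 30 fps, a per-fragment Phong shader runs its lighting maths
# this many times every second:
print(fragments * 30)
```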
As for the multiple lights: yes, the second, third, and fourth lights will leak through to other objects, because the ob3d code that all Jitter OpenGL objects use to manage state only handles the first light, GL_LIGHT0. If you enable others, it's quite easy for them to light up subsequently drawn geometry.
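A minimal sketch of why pushstate matters (a toy state machine in Python, not the real ob3d or OpenGL code): with the state pushed and popped around the command list, the extra light enable is undone before the next object draws; without it, the second light stays enabled and "leaks".

```python
# Toy model of an OpenGL-style attribute stack (not the real ob3d
# code): sketch's @pushstate saves the enabled-lights state before
# the command list runs and restores it afterwards, so GL_LIGHT1
# never leaks to objects drawn later in the same frame.
class GLState:
    def __init__(self):
        self.lights = {"GL_LIGHT0"}   # the one light ob3d manages itself
        self.stack = []

    def push(self):
        self.stack.append(set(self.lights))

    def pop(self):
        self.lights = self.stack.pop()

def draw_sketch(state, pushstate=True):
    if pushstate:
        state.push()
    state.lights.add("GL_LIGHT1")     # second light in the command list
    # ... draw the matrix here ...
    if pushstate:
        state.pop()

state = GLState()
draw_sketch(state, pushstate=True)
print("GL_LIGHT1" in state.lights)    # False: the light stayed local
draw_sketch(state, pushstate=False)
print("GL_LIGHT1" in state.lights)    # True: the light leaks to the mesh
```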
Yes, of course. I wasn't thinking about the fact that more pixels have to be drawn if the screen is filled. Although I had worked out that blending layers requires more processing, I hadn't actually enabled blending when I was spurred to write. Thanks.