GeForce 7950 GX2 test

Aug 11, 2006 at 5:15pm

Hi all,

Just did a little OpenGL test on a PC with a GeForce 7950 GX2 (http://www.nvidia.com/page/geforce_7950.html) with 1 GB of VRAM, Pentium D 3.2 GHz, 1 GB RAM.
What we tested was the difference between dual-GPU and dual-screen
mode, and the results were somewhat puzzling: in dual-GPU mode, using
both GPUs for one screen, fps was 50; in dual-screen mode, using one
GPU on each of two displays, fps only dropped by 1 or 2 frames.

We were expecting a much bigger drop in framerate in the second mode,
since only one GPU is doing the same work. Or is this a false
assumption…

rgds

b.

#27119
Aug 11, 2006 at 5:24pm

On Aug 11, 2006, at 10:15 AM, Brecht Debackere wrote:

> Just did a little OpenGL test on a PC with a GeForce 7950 GX2
> (http://www.nvidia.com/page/geforce_7950.html) with 1 GB of VRAM,
> Pentium D 3.2 GHz, 1 GB RAM.
> What we tested was the difference between dual-GPU and dual-screen
> mode, and the results were somewhat puzzling: in dual-GPU mode,
> using both GPUs for one screen, fps was 50; in dual-screen mode,
> using one GPU on each of two displays, fps only dropped by 1 or 2 frames.
>
> We were expecting a much bigger drop in framerate in the second
> mode, since only one GPU is doing the same work. Or is this a false
> assumption…

For real benchmarking, make sure you turn off the vertical sync
attribute of jit.window, i.e. @sync 0. Otherwise you will only ever see
the framerate of the vertical sync and no higher. I would imagine that
you may see a difference in this case. However, you might not be
executing enough graphics instructions for it to make any difference
in using the dual GPUs.
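
(For reference: the @sync attribute of jit.window corresponds to the GL
swap interval. With sync on, every buffer swap waits for the next
vertical retrace, so the measured fps can never exceed the monitor's
refresh rate. The sketch below shows how that interval is typically
switched off on Windows, assuming a current WGL context and a driver
exposing WGL_EXT_swap_control; it illustrates the mechanism only and is
not Jitter's actual code.)

/* Sketch: turn off vsync at the WGL level, the effect @sync 0 asks for. */
#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

static void disable_vsync(void)
{
    /* The extension function can only be fetched while a GL context is current. */
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);  /* 0 = swap immediately, don't wait for vblank */
}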

-Joshua

#81755
Aug 11, 2006 at 10:49pm

sync was set to 0, but you’re absolutely right about the graphics
instructions. Will try some other stuff and post it if people are
interested.
b.
On Aug 11, 2006, at 7:24 PM, Joshua Kit Clayton wrote:

> For real benchmarking, make sure you turn off the vertical sync
> attribute of jit.window, i.e. @sync 0. Otherwise you will only ever
> see the framerate of the vertical sync and no higher. I would imagine
> that you may see a difference in this case. However, you might not
> be executing enough graphics instructions for it to make any
> difference in using the dual GPUs.

#81756
Aug 12, 2006 at 12:08am

On Aug 11, 2006, at 3:49 PM, Brecht Debackere wrote:

> sync was set to 0, but you’re absolutely right about the graphics
> instructions. Will try some other stuff and post it if people are
> interested.

Then perhaps the metro is only set to 20 ms (which caps output at 50
frames per second), since your card should probably run much faster
than 50 fps unless there really are a lot of graphics instructions, or
CPU load is the bottleneck.

One test for raw GPU performance would be to use some large geometry
in a display list (e.g. jit.gl.gridshape @dim 400 400 @displaylist 1),
so that you're just rendering polygons in bulk via the display list
and not submitting vertices or doing other CPU-based vertex
calculation.
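
(In plain OpenGL terms, @displaylist 1 means the geometry is compiled
into a display list once and replayed each frame, so per-vertex work
stops hitting the CPU. The sketch below shows the standard GL calls
behind that idea; it is an illustration of the mechanism, not Jitter's
internals, and the 400x400 sphere vertices are elided.)

#include <GL/gl.h>

GLuint sphere_list;

/* Done once at setup: record the vertex submission into a display list. */
void build_display_list(void)
{
    sphere_list = glGenLists(1);
    glNewList(sphere_list, GL_COMPILE);
    glBegin(GL_TRIANGLE_STRIP);
    /* ... the 400x400 sphere vertices would be emitted here, one time only ... */
    glEnd();
    glEndList();
}

/* Done every frame: without the list, all the glVertex calls above would
   run again on the CPU each frame; with it, replay is a single call and
   the driver/GPU keeps the compiled geometry. */
void draw_frame(void)
{
    glCallList(sphere_list);
}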

Best of luck, and of course, it’d be great if you share some findings.

-Joshua

#81757
Aug 12, 2006 at 12:58am

OK. Test results are in, using jit.gl.gridshape with a sphere @dim 400 400.

Dual-GPU mode, @displaylist 1:

1280×1024 = 200 fps
1024×768 = 200 fps
800×600 = 120 fps

Here I was expecting a higher framerate at the lower resolution?

Dual-screen mode, using one screen:

1280×1024 = 120 fps

Using two screens, @displaylist 1 doesn't seem to work…? I duplicated
the jit.gl.render, jit.window and jit.gl.gridshape.

1280×1024 = 12.5 fps
1024×768 = 11.8 fps

Are display lists not supported when rendering to two screens at a time?
I'd think that with one card that might be a problem, but since this
one is basically two cards stuck together, each should have its own
storage for the display list…

Also, in all these tests, CPU usage was running up to 100%… I don't
understand what is going on on the CPU side.

rgds,

Brecht.

On Aug 12, 2006, at 2:08 AM, Joshua Kit Clayton wrote:

> Then perhaps the metro is only set to 20 ms (which caps output at 50
> frames per second), since your card should probably run much faster
> than 50 fps unless there really are a lot of graphics instructions,
> or CPU load is the bottleneck.
>
> One test for raw GPU performance would be to use some large geometry
> in a display list (e.g. jit.gl.gridshape @dim 400 400 @displaylist 1),
> so that you're just rendering polygons in bulk via the display list
> and not submitting vertices or doing other CPU-based vertex
> calculation.
>
> Best of luck, and of course, it’d be great if you share some findings.
>
> -Joshua

#81758
Aug 12, 2006 at 1:09am

On Aug 11, 2006, at 5:58 PM, Brecht Debackere wrote:

>
> Using two screens, @displaylist 1 doesn't seem to work…? I
> duplicated the jit.gl.render, jit.window and jit.gl.gridshape.
>
> 1280×1024 = 12.5 fps
> 1024×768 = 11.8 fps
>
> Are display lists not supported when rendering to two screens at a
> time? I'd think that with one card that might be a problem, but
> since this one is basically two cards stuck together, each should
> have its own storage for the display list…

Sorry, there's currently a bug with multiple instances of
jit.gl.gridshape @displaylist 1, which is fixed in the forthcoming
Jitter 1.6 beta for PC. In the meantime you might be able to use
matrixoutput -> jit.gl.mesh instead for fast vertex-buffer-based
rendering.
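
(For context, "vertex buffer based rendering" means the mesh data is
uploaded once into a buffer object on the card and drawn from there,
so, like a display list, the CPU isn't re-submitting vertices every
frame. Below is a rough sketch of the underlying GL calls, not Jitter's
internals; glGenBuffers and friends are core as of OpenGL 1.5, and on
older Windows headers they have to be fetched as extension functions.)

#include <GL/gl.h>
#include <GL/glext.h>   /* buffer-object tokens/prototypes for pre-1.5 headers */

GLuint vbo;

/* One-time upload of the vertex data (xyz triples) into GPU memory. */
void upload_mesh(const float *xyz, int vertex_count)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertex_count * 3 * sizeof(float),
                 xyz, GL_STATIC_DRAW);
}

/* Per-frame draw: the vertices come straight from the buffer object,
   with no per-vertex work on the CPU. */
void draw_mesh(int vertex_count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    glDisableClientState(GL_VERTEX_ARRAY);
}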

> Also, in all these tests, CPU usage was running up to 100%… I don't
> understand what is going on on the CPU side.

It might be due to individual vertex submission or some other aspect
of this bug.

-Joshua

#81759
Aug 12, 2006 at 1:14am

Cheers, thanks for the info. Looking forward to seeing 1.6 running on
PC and comparing speeds.

On Aug 12, 2006, at 3:09 AM, Joshua Kit Clayton wrote:

> Sorry, there's currently a bug with multiple instances of
> jit.gl.gridshape @displaylist 1, which is fixed in the forthcoming
> Jitter 1.6 beta for PC. In the meantime you might be able to use
> matrixoutput -> jit.gl.mesh instead for fast vertex-buffer-based
> rendering.
>
> It might be due to individual vertex submission or some other
> aspect of this bug.
>
> -Joshua

#81760
Aug 14, 2006 at 10:35pm

FWIW, make sure to have a look in the Options->Performance menu and
play with settings there.

I've been using an ATI FireGL V7350 and was expecting blazing
performance, but was surprised to see the frame rate dropping pretty
quickly with 60 gridshapes plus the attendant Max objects for movement
algorithms, some audio stuff, and a few textures. Once I noticed
that only 50% of my CPU was in use, I tweaked the performance
settings and more than tripled my polygon count, at 90% CPU usage.
It seemed like the Max scheduler itself was the limiting factor.

On Aug 11, 2006, at 6:14 PM, Brecht Debackere wrote:

> Cheers, thanks for the info. Looking forward to seeing 1.6 running
> on PC and comparing speeds.

#81761
