Vertices Per Frame

robirt 101 Jul 30, 2003 at 02:49

I was looking at the Nvidia site and read a figure of over 300 million vertices per second for the GeForce FX 5900, and I was wondering if anyone knows how that compares to actual throughput, and whether anything besides raw vertex count affects that rate.

11 Replies


davepermen 101 Jul 30, 2003 at 04:11

All I know is that in just about every real comparison it doesn't really beat a Radeon 9800 Pro.

As for that number, I guess it's a theoretical maximum. I don't know the real numbers, but you can probably look at some 3DMark2001 results to see how fast the vertex transformations actually come out.

I think those figures are best case (no screen image actually rendered, maybe just one pixel; vertex data with position and maybe a colour and nothing else; a vertex shader that just transforms and is done, and so on).

I'll see if I can find some reliable info. That's quite difficult, because Nvidia spreads a lot of its own numbers (not to be taken seriously) and, with all their cheats, has made it a lot of trouble to actually measure real performance.
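
Just to show what a naive measurement looks like, here is a minimal sketch (GLUT plus plain GL 1.1 vertex arrays; the triangle count and window title are made up, everything is untextured and unlit, and all the triangles are degenerate, so this is still basically the advertised best case, not a real scene):

```cpp
#include <GL/glut.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("vertex throughput sketch");

    // 300k triangles per call; every vertex sits at the origin, so the
    // triangles are degenerate and almost nothing is rasterized -- we
    // measure (roughly) pure transform throughput, i.e. the best case.
    const int numTris = 300000;
    std::vector<float> verts(numTris * 3 * 3, 0.0f);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, &verts[0]);

    const int frames = 1000;
    int t0 = glutGet(GLUT_ELAPSED_TIME);            // milliseconds
    for (int i = 0; i < frames; ++i)
        glDrawArrays(GL_TRIANGLES, 0, numTris * 3);
    glFinish();                                     // wait until the GPU is done
    int t1 = glutGet(GLUT_ELAPSED_TIME);

    double seconds = (t1 - t0) / 1000.0;
    double mverts  = numTris * 3.0 * frames / seconds / 1e6;
    std::printf("~%.1f million vertices per second\n", mverts);
    return 0;
}
```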

baldurk 101 Jul 30, 2003 at 18:08

You will never, ever get the quoted vertex rate or triangle rate in a real situation. As davepermen said, it's all best case, meaning they often skip transformation, lighting, etc. Transformed, textured, lit triangles are a lot slower to render.

rogerdv 101 Jul 30, 2003 at 19:10

To put it in raw, unrealistic, ideal numbers, I would say 300 million vertices is about 100 million polygons. At 60 fps that gives you about 1,666,666 polygons per frame (please correct me if I'm wrong). That looks good to me, except for many other niceties like the new OpenGL 1.5: does it support that?
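
Written out as a quick sketch (the 300 million and 60 fps figures are the ones from the posts above; the three-vertices-per-triangle assumption only holds with no vertex sharing at all):

```cpp
#include <cstdio>

int main()
{
    const double vertsPerSec = 300e6;  // Nvidia's quoted peak for the FX 5900
    const double vertsPerTri = 3.0;    // assumes no vertex reuse at all
    const double fps         = 60.0;

    double trisPerSec   = vertsPerSec / vertsPerTri;  // ~100 million
    double trisPerFrame = trisPerSec / fps;           // ~1.67 million

    std::printf("%.0f triangles/sec -> %.0f triangles/frame at %.0f fps\n",
                trisPerSec, trisPerFrame, fps);

    // With indexed meshes and good vertex-cache reuse you approach one
    // vertex per triangle, so the theoretical triangle count would be
    // higher -- but as the other replies say, none of this survives real
    // texturing and lighting anyway.
    return 0;
}
```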

davepermen 101 Aug 03, 2003 at 20:37

@rogerdv

except for many other niceties like the new OpenGL 1.5: does it support that?

It will have to support it, but Nvidia's GL support was never that fantastic.

Especially on GeForce FX cards, if you stick to the standard GL paths you get quite bad performance, for example in fragment programs.

Nvidia is well known for (ab)using OpenGL to just expose its own hardware features instead of working toward a good, evolving, standard OpenGL.

ATI works much harder on a good unified OpenGL, and accordingly their cards expose fewer "features" and more "solutions".

For example, Nvidia exposes floating-point textures, but only on NV_texture_rectangle targets, only with its own NV float formats (R, RG, RGB and RGBA variants; no luminance, luminance_alpha, etc., so again not the standard GL formats), only usable from NV fragment programs, and with tons of other restrictions.

ATI instead just defined GL_RGBA_FLOAT32_ATI (and a few related formats), and you can use it for ANY texture at ANY place in the pipeline.
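
To make the difference concrete, here is a minimal sketch of creating a 32-bit float RGBA texture on each path (assuming a glext.h that provides the ATI_texture_float, NV_float_buffer and NV_texture_rectangle enums, and leaving out all context and extension checks):

```cpp
#include <GL/gl.h>
#include <GL/glext.h>  // GL_RGBA_FLOAT32_ATI, GL_FLOAT_RGBA32_NV, GL_TEXTURE_RECTANGLE_NV

// ATI path: the float format is just another internal format on a normal
// GL_TEXTURE_2D, usable anywhere in the pipeline.
GLuint makeFloatTextureATI(int w, int h, const float* pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                 w, h, 0, GL_RGBA, GL_FLOAT, pixels);
    return tex;
}

// NV path: float formats are tied to the rectangle target (unnormalized
// coordinates, no mipmaps) and are only sampleable from NV fragment programs.
GLuint makeFloatTextureNV(int w, int h, const float* pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
                 w, h, 0, GL_RGBA, GL_FLOAT, pixels);
    return tex;
}
```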

ATI runs fast, well, and simply in OpenGL; it's really beautiful to use.

donBerto 101 Aug 03, 2003 at 22:10

I just hope Nvidia doesn't get too proud and refuse to learn from this. Can they even change their formats?

:yes:

Ed_Mack 101 Aug 03, 2003 at 22:44

Dave, if you point out the good things about ATI, you should point out some of the better bits about nVidia too.

For fragment programs it's the usual matter of quality vs. quantity. nVidia computes at higher precision (128-bit, I think), while ATI uses lower precision, which lets ATI reach better processing rates. But if you really want maximum rate, you can drop to 64-bit mode on the nVidia. The cards are even; it's the usual pros and cons for both, just different ones.

For drivers, I don't use the Windows ones, so I can't make a mainstream comment. But on Linux, ATI is pretty sucky. Just to use their drivers you have to run an older version of X (the thing that handles the display, among other things). Also, there are no pre-built RPMs, and by the sound of their checker script you have to have their specific version of gcc. This is a big difference from the nVidia driver, which took about 30 seconds to install and was flawless (nice installer too).

As for extensions, you have to try NV's UltraShadow. It's hardware optimised of course, and takes a big chunk out of processing time for you. This seems innovative. And you could argue that it would be a standard if ATI would adopt it :)
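
As far as I know, the OpenGL side of UltraShadow is the depth bounds test extension; here is a rough sketch of wrapping a shadow-volume pass with it (the depth range values are placeholders, and you would fetch the entry point yourself):

```cpp
#include <GL/gl.h>
#include <GL/glext.h>  // GL_DEPTH_BOUNDS_TEST_EXT, PFNGLDEPTHBOUNDSEXTPROC

// Fetched at startup with wglGetProcAddress / glXGetProcAddressARB;
// shown as a plain global here to keep the sketch short.
PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT = 0;

void drawShadowVolumePass(double zMin, double zMax)
{
    // Tell the hardware which window-space depth range the light can
    // possibly touch; fragments outside it are rejected before the
    // expensive stencil updates.
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zMin, zMax);

    // ... render the shadow volume geometry here ...

    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
}
```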

Anyway, tit for tat, they’re both great cards.

robirt 101 Aug 04, 2003 at 04:16

That's weird. It always appeared to me that ATI was more into supporting DirectX, based on what I saw in the developer area of their website. I didn't think they would run OpenGL better; I'm glad to know that now.
BTW, about the drivers: do you know how good the ATI drivers are for FreeBSD? It's my OS of choice, and I know Nvidia officially supports it, but I haven't heard anything about ATI support for it.

Ed_Mack 101 Aug 04, 2003 at 04:37

I wouldn’t know what to look for in their FreeBSD specs (if they do support it), so go dig :) www.ati.com

davepermen 101 Aug 04, 2003 at 06:36

I don't know anything about the Linux drivers, sorry. Over there everyone says Nvidia is better.

About the fragment programs:

Actually, one of the MAIN issues in 3DMark03 was that Nvidia cheated by not using their 128-bit power but settling back to 16-bit floats in many cases, i.e. running in 64-bit mode!

Nvidia's fragment programs are ****ing slow in 128-bit mode, while they are quite fast in 64-bit mode. ATI's fragment programs are exactly what DX9 requires, namely 96-bit precision, and they run in that mode at full speed.

Because Nvidia doesn't support a real 96-bit mode, they got very bad results in 3DMark and started cheating all over the place. Their hardware is just way off any standard.

And I prefer hardware whose floating-point calculations have a relative precision of about 0.00152587890625% BUT CAN DO FLOATING POINT ABOUT EVERYWHERE, over hardware that can do 32 bits per float but cannot store them in regular textures, use them in the regular fixed-function pipeline, and so on.
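
For reference, that percentage is just the relative step of a 16-bit mantissa, which is roughly what ATI's FP24 format gives you per channel; a quick sketch, assuming the usual fp32/fp24/fp16 mantissa widths:

```cpp
#include <cstdio>
#include <cmath>

int main()
{
    // Mantissa bits of the per-channel formats behind the marketing names:
    // "128bit" = 4 x fp32 (23 bits), "96bit" = 4 x fp24 (16), "64bit" = 4 x fp16 (10).
    const int   bits[] = { 23, 16, 10 };
    const char* name[] = { "fp32 (NV 128-bit mode)",
                           "fp24 (ATI / DX9 minimum)",
                           "fp16 (NV 64-bit mode)" };

    for (int i = 0; i < 3; ++i) {
        double relStep = std::pow(2.0, -bits[i]);  // relative precision per step
        std::printf("%-26s ~%.14f%% relative step\n", name[i], relStep * 100.0);
    }
    // The fp24 line reproduces the 0.00152587890625% figure above.
    return 0;
}
```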

If you want to use floating point on NV hardware, you currently have to rely entirely on NV extensions.

And while the NV hardware has the full floating-point precision, it does NOT meet the full floating-point requirements! Their floats don't actually follow the IEEE standard in any way, so you still can't really use them for scientific calculations; the results are not exactly defined.

rogerdv 101 Aug 04, 2003 at 14:30

@davepermen

Actually, one of the MAIN issues in 3DMark03 was that Nvidia cheated by not using their 128-bit power but settling back to 16-bit floats in many cases, i.e. running in 64-bit mode!

Well, I thought that issue had been cleared up and the 3DMark people recognized that Nvidia didn't cheat.

davepermen 101 Aug 04, 2003 at 14:42

Money solves any problem... hehe