0
109 Aug 05, 2012 at 04:48

What is the best way to display a static model in OpenGL that is compatible with both NVidia and ATI? And which version should I target to be the most compatible, v1.5 maybe? I have an older ATI card with the latest driver (GL v6.xxx), but Load_GL_version_3_3 fails. So I need something that can work with v1.5 maybe? Or v2? Dunno!

I want to make an OBJ viewer, but some models are big (600K triangles), so when I rotate the scene it takes more than a few seconds to update! Especially when there are transparencies, because I sort those by z.

I don’t know if I should use shaders or not, glDrawArrays or not, etc. I don’t need textures, only vertices, normals and colors.

What I do right now is very slow: I cull the back and draw the front, then cull the front and draw the back, then draw the front transparencies, then the back transparencies (after sorting them!).
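(For reference, the sorting step described above is usually done by ordering only the transparent triangles back-to-front from the eye each time the camera moves. A minimal sketch; the `Tri` struct and function names are made up for illustration:)

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical triangle record: a centroid is enough for coarse depth sorting.
struct Tri {
    float cx, cy, cz;   // centroid of the triangle
    int   id;           // index into the real vertex data
};

// Sort transparent triangles back-to-front relative to the eye position.
// Squared distance avoids a sqrt per triangle.
void sortBackToFront(std::vector<Tri>& tris, float ex, float ey, float ez) {
    auto distSq = [&](const Tri& t) {
        float dx = t.cx - ex, dy = t.cy - ey, dz = t.cz - ez;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(tris.begin(), tris.end(),
              [&](const Tri& a, const Tri& b) { return distSq(a) > distSq(b); });
}
```

Opaque geometry does not need this at all: the depth buffer handles it in a single pass.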

Thanks!

#### 33 Replies

0
151 Aug 06, 2012 at 09:18

If you need transparency, it’s going to be slow. Sorry, you need to reduce the number of tris you are displaying somehow.

Is the camera close enough to the object to allow you to do some quad tree or something?

0
167 Aug 06, 2012 at 17:10

600K triangles isn’t that much, even for an older card. Why do you draw the opaque stuff twice? Surely you have the depth buffer turned on, so you should be able to just draw it once (without culling, if you want to see backfaces) and let the depth buffer sort it all out.

IIRC, in OpenGL for static geometry you should create a vertex buffer object and copy the vertices into it once at load time, then re-use the same VBO for rendering each frame. This should be quite fast. If you don’t have access to VBOs, at least use glDrawArrays.

0
109 Aug 06, 2012 at 20:17

I draw the front then the back because that’s the only way I know how to do it with textures! And then I draw the front then the back transparencies; as you told me before, I had to sort them by eye distance to get it right.

From what I understand, glDrawArrays cannot be used to draw front and back textures all at once, is that right?

Why can’t OpenGL provide array pair for all this? I mean, a struct of vertex and front/back colors and textures…

struct gl_Vertex_Data {
    vert1, vert2, vert3: vec4;
    frontTexUV, backTexUV: vec2;
    frontColor, backColor: vec4;
    frontTexID, backTexID: uint;
}


Bind all our textures to one unit and just use the texture ID. The way I understand it now, only the active texture is used, and if you want 50 of them, you need to combine them all into a single giant texture! Makes no sense, not with the technology we have now!

Then all we have to worry about is rotating, panning and stuff, and, if we pass the data as a pointer instead of uploading it to the card, it can become dynamic instead of static. Why are they making it so hard and complicated? C++ is easier to understand~!

0
167 Aug 06, 2012 at 20:33

So - you want to put two different textures on the front and back sides? In that case, it does make sense to draw front and back in separate draw calls. Anyway, you could use a shader approach for this as well but that may be beside the point.

0
151 Aug 06, 2012 at 21:56

I’m not sure why you want to have different textures on each side of a polygon, but that is what’s causing your problems.

If I were you I would look at the models and decide which triangles absolutely need to have different textures each side, and in your 3d editor duplicate those polygons.

Also extract all the transparent polygons and put them in a separate object.

Then render all the solid polygons with culling enabled, and draw the transparent ones with culling off using a painter’s algorithm.

It depends on the number of polygons that need to be duped, and the number of transparent ones, but it should be a lot quicker

0
109 Aug 07, 2012 at 00:57

Thank you guys, but I’m not understanding your question! A poly has 2 faces, and that means each face can have its own color or texture. Why would I be forced to have the back identical to the front?

Reedbeta, how am I supposed to draw both sides in a single call? Say, one poly with a carpet texture on the front and a brick texture on the back. How can I do that with a single call, without a shader? Or am I supposed to draw only one face, since only one face can be visible at any one time, and if so, how does this work without a shader?

0
167 Aug 07, 2012 at 01:10

Well, first realize that drawing both front and back sides of a polygon, especially with different textures, is not a “normal” way to render things. You can do it if you want to. But maybe you should consider modeling your art with two separate polygons for the front and back sides (with opposite winding), like Stainless said. Then in your modeling program you can put separate UVs, textures, etc. on them, no problem. You’d render it with backface culling turned on, and without doing separate passes for front and back (just one pass over the model, with separate draw calls for each material, as usual). That would be the usual way to do this sort of thing. The renderer doesn’t have to know anything about front/back sides, since it’s all baked into the art.

I was just saying that it’s possible to write a shader that takes two textures, and based on whether you’re looking at the front or back of a particular pixel, samples one texture or the other. That’s not necessarily the best way to do this though. I’m just saying it’s possible.
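(The shader approach mentioned here might look something like the following GLSL 1.10-era sketch. The uniform and varying names are made up, and each sampler has to be pointed at its texture unit with glUniform1i from the application side:)

```glsl
// Fragment shader sketch: sample one of two textures based on facing.
uniform sampler2D frontTex;   // hypothetical name, bound to unit 0
uniform sampler2D backTex;    // hypothetical name, bound to unit 1
varying vec2 uv;

void main() {
    // gl_FrontFacing is true for front-facing fragments.
    gl_FragColor = gl_FrontFacing ? texture2D(frontTex, uv)
                                  : texture2D(backTex, uv);
}
```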

Anyway, all this yadda yadda about front/back and different textures etc. may be a red herring. Have you been able to narrow down your performance problem at all? For instance, if you turn off transparencies, or if you only render front faces, does it improve the speed? By how much? If it’s still really slow when just drawing the opaque front faces, you have some other problem that may be totally unrelated to the whole front/back business.

0
109 Aug 07, 2012 at 01:51

I see what you mean, Reedbeta, and I model the way you suggest, but when I want to view OBJ models, many of them are double-faced. Not all models are made with only front faces.

I use glBegin(GL_TRIANGLES); ... glEnd(); to set the texture, UV, normal and vertex for each poly. And it’s not bad at all, not that slow on 600K polys doing the front and back, without transparencies. But the more transparencies I have, the bigger the list to sort, and again with the front then the back, so it gets very slow.

I wrote a shader that I pass everything to and use glDrawArrays, and the shader uses gl_FrontFacing to know which side, but it only works fine with colors, since from what I can tell, you can’t pass textures as well.

So I’m back to glBegin(GL_TRIANGLES); ... glEnd(); and I can pass one texture using glBindTexture(GL_TEXTURE_2D, …); but how do I pass a 2nd one? And not just that, how do I also pass the 2nd color and the 2nd UV???

0
167 Aug 07, 2012 at 02:00

First of all, using glDrawArrays is definitely FAR faster than using glBegin/glEnd and passing each vertex separately. Especially for large models, it’s no surprise you are getting terrible performance using glBegin/glEnd! Using glDrawArrays is independent of what kind of shader you use, the front-facing/back-facing stuff, etc. And of course you can use textures with glDrawArrays and other batching methods; I’m not sure what the trouble is? You’ll use glTexCoordPointer to set up the UVs and glBindTexture to set the texture, just as normal.

For multiple textures, use glActiveTexture to switch which texture unit is the “current” one, then glBindTexture to assign a texture to the current texture unit. Likewise, use glClientActiveTexture and glTexCoordPointer to set up multiple sets of UVs. So you’d set up all your vertices, UVs, textures, etc. and then do glDrawArrays to send everything off.

0
109 Aug 07, 2012 at 02:57

I was under the impression that glDrawArrays only works with one texture at a time, and I had to call it once for each texture I have!

OK Reedbeta, you really got me puzzled here, because I searched and searched the net for days and nowhere have I found a way to do what you’ve explained.

Can you please explain some more, Reedbeta?

Here’s what I do to setup the vertex…

glGenBuffers(1, &vboVertex);
glBindBuffer(GL_ARRAY_BUFFER, vboVertex);
glBufferData(GL_ARRAY_BUFFER, SizeOfTheArrayInBytes, PointerToTheArray, GL_STATIC_DRAW);
vboAttribVertex = glGetAttribLocation(ProgramID, "inVertex"); // in the vertex shader I have: layout(location = 0) in vec4 inVertex;
glEnableVertexAttribArray(vboAttribVertex);
glBindBuffer(GL_ARRAY_BUFFER, vboVertex); // already bound above; redundant but harmless
glVertexAttribPointer(vboAttribVertex, 4, GL_FLOAT, GL_FALSE, 0, 0);


Then I do the same for the normals, the colors and the UVs, but I have no clue how to add textures to all this.

0
167 Aug 07, 2012 at 04:30

Textures work exactly the same way whether you use vertex buffers or not. For one texture, you just do glBindTexture(GL_TEXTURE_2D, …) sometime before the glDrawArrays call. For multiple textures, you’d do something like

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, aTexture);

glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, anotherTexture);

glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, yetAnotherTexture);
// etc.


That’s all there is to it. The argument to glActiveTexture selects the texture unit; units run from GL_TEXTURE0 up to however many simultaneous textures your card can support. To match units up with specific samplers declared in the shader, get each sampler uniform’s location with glGetUniformLocation and assign it a unit number with glUniform1i.

To provide multiple UVs, just have the vertex shader declare multiple texcoord inputs and call glVertexAttribPointer multiple times, once for each set of UVs.
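(A sketch of what the vertex shader side of that could look like; the attribute names here are made up, and each attribute gets its own glVertexAttribPointer call on the application side:)

```glsl
// Vertex shader sketch with two UV sets.
attribute vec4 inVertex;
attribute vec2 inUV0;   // first set of texture coordinates
attribute vec2 inUV1;   // second set
varying vec2 uv0;
varying vec2 uv1;
uniform mat4 mvp;       // combined modelview-projection matrix

void main() {
    uv0 = inUV0;
    uv1 = inUV1;
    gl_Position = mvp * inVertex;
}
```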

0
109 Aug 07, 2012 at 04:38

ok, but then how do I access them in the shader? Do I have to declare uniform sampler2D tex; one for each texture?

0
167 Aug 07, 2012 at 05:15

Yes, one for each texture.

0
109 Aug 07, 2012 at 16:22

Really? But how do I access the right one in the shader? I can’t simply have a bunch of “if then” branches for each tex1, tex2, tex3, etc.???

0
179 Aug 07, 2012 at 18:05

See this.

0
109 Aug 07, 2012 at 18:18

Great, thank you. But I can only load 32 textures! I was trying it on a model that has 52.

Is there any other workaround that can handle more than 32 samplers?

Thanks guys, I appreciate the help.

0
179 Aug 07, 2012 at 18:28

I won’t ask what you’re doing that requires 52 textures, but if you are doing something crazy like that, I would suggest you look into texture atlases to cut down on the number of textures you need to feed to the shader. If you need something more dynamic, you can look into the megatexturing algorithm, or possibly 3D textures, using each slice as a new texture (I haven’t really used these much). You really shouldn’t be pushing more than 4 to 8 textures, since most hardware will cap out around that range.

0
167 Aug 07, 2012 at 18:38

Well, usually when people use multiple textures it’s because they have a color map, normal map, specular map, etc. So they want to sample all the textures, not choose one. Note that you’d still normally draw each material in its own draw call, drawing only the polygons that use that material, then looping over all the materials. A “material” here means one color map, one normal map, one specular map, etc. A typical model might have several different materials in it, applied to different polygons.

For using different textures on the front and back sides, you would indeed use an if-statement. There’s a fragment shader input for this, gl_FrontFacing in GLSL, which tells you whether you’re shading the front or the back side of the polygon. You could use it in an if-statement to determine which texture to sample.

As I mentioned earlier in the thread, this isn’t necessarily the best way to do it. I think it’s totally reasonable to draw the front faces in one call, and then switch textures and culling modes and do another call to get the back faces. Then the shader would only look at one texture. All of the stuff I posted was by way of if you want multiple textures in one shader, here’s how you’d do it.

0
109 Aug 07, 2012 at 18:41

Oh!!?? But most models have way more than that! Walls, chairs, desks, floors, magazines, vases, plants, you name it!

0
167 Aug 07, 2012 at 18:43

Is your model an entire level environment or something? In that case, it might well have many materials.

0
109 Aug 07, 2012 at 18:56

They are OBJ models, many of them converted from 3DS models. Some have 100+ textures, some only 10 or so.

I was trying Google SketchUp, which uses OpenGL to model your scene, and it has no problem doing front/back and transparencies. I tried it full screen on a model with 200+ textures (a big house, many windows, a pool, etc.) and you can rotate, pan and zoom with no noticeable delay. I just want to make a viewer that can rotate, pan and zoom without crawling, and not just for little models with 4 or 8 textures!

0
167 Aug 07, 2012 at 19:09

For big environments to draw efficiently you’re getting into the realm of frustum and occlusion culling. I.e. you can’t just throw all the polygons at the GPU and expect it to sort them out; you need some kind of scene graph to efficiently cull out the stuff that’s offscreen.

But it sounds like you’re not at the point yet where you can feasibly code that kind of system, if you’re still learning how to do basic drawing with vertex arrays and textures, etc. You might have to ramp down your expectations a bit.
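(For the curious, the core of frustum culling is a cheap test of each object's bounding box against the six frustum planes. A sketch of the standard center/half-extent form, with made-up names; extracting the planes from the projection matrix is a separate step not shown here:)

```cpp
#include <cmath>

// Plane in the form nx*x + ny*y + nz*z + d >= 0 for the inside half-space.
struct Plane { float nx, ny, nz, d; };
struct AABB  { float cx, cy, cz;     // box center
               float ex, ey, ez; };  // half-extents along each axis

// True if the box lies entirely on the outside of the plane.
bool outsidePlane(const AABB& b, const Plane& p) {
    // Projected "radius" of the box onto the plane normal.
    float r = b.ex * std::fabs(p.nx) + b.ey * std::fabs(p.ny) + b.ez * std::fabs(p.nz);
    // Signed distance of the box center from the plane.
    float s = p.nx * b.cx + p.ny * b.cy + p.nz * b.cz + p.d;
    return s < -r;
}

// A box can be culled if it is outside any one of the six frustum planes.
bool cullAgainstFrustum(const AABB& b, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i)
        if (outsidePlane(b, planes[i])) return true;
    return false;
}
```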

0
109 Aug 07, 2012 at 19:18

Even using frustum and occlusion culling would do no good in full screen with the full model in view, since there will be nothing to clip off the edges.

But I know what you’re saying. I just loaded that huge house, and doing it with glDrawArrays is super fast. No textures though, just colors, but it does have transparencies. It’s 3.5 million polys, and it’s super fast without frustum and occlusion culling.

0
109 Aug 07, 2012 at 19:39

Can I use GL_TEXTURE_3D, or does that have a 32-slice limitation too?

0
117 Aug 08, 2012 at 09:03

Eh… I just went through the whole topic. The whole point is that your level is made up of several pieces called meshes (anywhere from one to dozens or hundreds), and each mesh has its own VBO, its own material, etc.

Then when you load an OBJ model, you render all these meshes in a single loop:

for (i = 0; i < mesh_count; i++)
{
    materials[i].bind();
    mesh[i].render();
}


Where the materials::bind() method (or procedure, or function, or whatever, depending on the programming paradigm you’re working in) binds textures to texture units (and by that I mean the diffuse map, normal map, specular map and whatever other maps the material has).
And mesh::render() binds the VBO (or a VAO, if you decide to optimize with Vertex Array Objects) and renders a single mesh from the whole model (through glDrawArrays or glDrawElements mostly, though sometimes different calls are used, like ranged element drawing or instancing, but that’s just for optimization).

After you have this working (e.g. a basic OBJ + MTL loader, plus rendering of the loaded model with lots of textures), you can start adding:
1.) Optimization stuff: frustum culling (kd-trees, BVHs, grids, etc. await!), occlusion culling (hierarchical Z-buffer culling for example, or precomputed PVS data, like in BSP files), dynamic streaming of huge data sets from disk (if you have a 16 GiB mesh, you can’t hold it in RAM/VRAM all at once; this is where streaming comes in), etc.
2.) Fancy stuff: transparency, better lighting, shadows… well, you know the stuff ;)

I can post some basic obj + mtl loader stuff later in the evening (right now I’m forced to help my girlfriend with some work) :D

0
151 Aug 08, 2012 at 09:20

Yes Vilem has spotted what you are doing wrong.

A SCENE is not a single mesh. You might create it as a single file in your 3d editor, but it is NOT a single mesh.

You may have hundreds of meshes in a scene. Each of them can have its own textures, so you can easily get large numbers of textures in the scene.

(Note: you should try to avoid this, because if you run out of video RAM to store the textures, you can get a big overhead as textures are swapped between system RAM and video RAM.)

When you display a scene you can do all sorts of speed ups.

Frustum culling can eliminate whole objects from the display list, saving you lots of state changes and triangles.

You need to think about what you are trying to do, rather than how to display a huge mess of triangles.

0
109 Aug 08, 2012 at 17:27

Ah, that makes sense! Thank you guys, I have it working nicely this way! What a big difference from the old glBegin/glEnd way of doing it. Now I can view an OBJ model with any number of textures, and I even loaded a 4.9 million poly model and the delay was about half a second when rotating in full screen! I guess that’s not bad at all, is it?

0
117 Aug 08, 2012 at 18:58

No, it’s not bad… though you can do ~5M triangle scenes at realtime framerates without any problems (that’s our average game scene, note: without tessellation). What is important now is optimization. Your goals should be (note: I’m not saying this is the only or best way, but it should give you a picture of your TODO):
1.) For each mesh create an AABB and build a hierarchy of your scene (this is often called a scene graph; add the ability to dynamically add/remove objects, refitting the tree after doing so)
2.) With this scene graph it’s quite trivial to add frustum culling (this gives quite a speedup)
3.) It’s worth trying to implement occlusion culling (hierarchical Z-buffer), which might (and also might not) give you a speedup; it really depends (for most common cases it gives slightly better performance, for some it gives a huge speedup). Or, if you have a static scene, precomputing PVS (potentially visible sets) gives you occlusion culling that is always faster.
4.) You surely know by now that shader changes, texture changes and the like are slow operations; pre-sorting draws to achieve as few state changes as possible also speeds things up (sometimes a LOT).
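The sort-by-state idea in that last point can be sketched like this: keep a small per-draw state record and order the draw list so that draws sharing a shader and texture are adjacent, which minimizes bind calls between draws. The struct and function names here are illustrative:

```cpp
#include <algorithm>
#include <tuple>
#include <vector>

// Hypothetical per-mesh render state.
struct DrawItem {
    int shaderId;
    int textureId;
    int meshId;
};

// Order draws so equal (shader, texture) pairs end up adjacent.
void sortByState(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
        return std::tie(a.shaderId, a.textureId) < std::tie(b.shaderId, b.textureId);
    });
}

// Count how many texture binds a given draw order would cost.
int countTextureBinds(const std::vector<DrawItem>& items) {
    int binds = 0, last = -1;
    for (const DrawItem& it : items) {
        if (it.textureId != last) { ++binds; last = it.textureId; }
    }
    return binds;
}
```

On a list that alternates between two textures, sorting halves the bind count; on real scenes with many materials the savings can be much larger.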

So far this should give you quite decent speed for practically any geometry (that is small enough to fit in your RAM/VRAM), and for most games this is enough. Although if you’re planning really huge worlds (or worlds with extreme detail), you can’t go far with your RAM/VRAM; then you’ll need to start doing LOD (level of detail) and streaming data from disk (the background streaming thread rules). With this extension (which is quite huge, because it’s not as simple as it sounds) you’ll probably be able to handle any scene I can handle now :) That means pretty much everything, but it’s still very artist-dependent (someone must create those LODs: a generator can do that, or you can use just a mesh and an impostor, or… there are lots of ways to reduce the artist work).

Note that someone really should write an article on how to build an optimal renderer (e.g. all the stuff with frustum/occlusion culling, reducing state changes, LOD, etc.).

And a second note: I’m not saying the way I mentioned is the best, but it at least works correctly and isn’t the slowest out there, so it works well enough.

0
109 Aug 08, 2012 at 21:09

I know exactly what you mean. I did this: I sorted the polys by material and gained way more speed. Then I used 8 samplers and grouped the polys into one draw array for each set of 8 textures, and wow, what a speed-up. Then I did the max for my card, 32 samplers, and guess what: no delay at all on that 4.9M poly model. You guys are geniuses, thank you all.

What I would like to see on this site, for others starting with GLSL/OpenGL as I did, are short but very clear tutorials…

1. OpenGL basics: how the bindings work, where and how to use them, textures, how they are used, etc.

2. The vertex shader: what exactly it processes, which variables get interpolated, what you can pass to it and how, its limits, etc.

3. The fragment shader: same as above.

You know, the stuff that puzzles us all the time. I didn’t know why some of the vars in my vertex shader were not the same when passed to my fragment shader, only to find out they were getting interpolated and I had to use flat to prevent that. All those little things are obscure and very hard to find out, especially if you don’t know about flat, or that all your vars are getting interpolated.
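(For anyone else tripped up by the same thing, the flat qualifier mentioned here looks like the following in GLSL 1.30+; the variable names are made up, and matching `flat in` declarations go in the fragment shader:)

```glsl
#version 130
// Vertex shader outputs. By default each output is interpolated across
// the triangle; "flat" instead passes the provoking vertex's value
// through unchanged, so every fragment of the triangle sees one value.
flat out vec4 pickColor;   // NOT interpolated
out vec4 litColor;         // interpolated as usual
```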

Simple things, not just code examples (there are plenty), but the why: what happens when I call glActiveTexture, why do I have to call glBindTexture afterward, what happens then, how does it get attributed to shaders, and why sometimes it doesn’t. Blah blah blah.

0
117 Aug 08, 2012 at 22:01

As for those tutorials (articles): to actually explain what is happening, it’s probably best to write a whole article about how 3D rendering works (I mean rasterization). The best would be a CPU implementation, which means loads and loads of math.

In my opinion it would be far better than jumping directly into OpenGL/Direct3D. Why? Simply because you’ll understand what’s going on in the API, what your calls actually do, and thus figure out exactly what you have to do and supply. Of course, explaining how to build your own shading language and implement it in a software renderer isn’t exactly what I mean, but explaining basically and generally how the stuff works, with related code for a software renderer, gives a good idea of what the GPU actually does.

I can’t promise anything (basically because I don’t have much free time - put together paid work, unpaid work, school, crazy girlfriend (+ anime in this field), games, cycling, fencing and linux - and generally time chaos begins (but I think that pretty much everyone here doesn’t have much free time)) - but I can give it a try and put together few lines of text & code.

0
167 Aug 08, 2012 at 23:09

The best modern OpenGL tutorials I have found are at http://www.opengl-tutorial.org/ . These are all-in on VBOs, shaders, and so forth from the beginning, which is nice since they avoid teaching you at all about outdated things that don’t perform well, like glBegin/glEnd, which you usually see in older OpenGL tutorials, but then have to unlearn.

0
109 Aug 09, 2012 at 00:00

Nice tutorial, Reedbeta, but it lacks the “why” parts! Like, it tells you how to draw your first triangle and shows you the code, but there is no “why do I have to call glGenBuffers, what does this do?” kind of thing. Not so much explaining the language and functions, but rather the core: what’s going on when you call glGenBuffers, does it talk to the card, or just the driver? What about states? Who knows about that besides experts like you guys? I didn’t, until you told me to avoid changing states so much to get more speed. Those are the kinds of tutorials I think are better to read first; then you go to opengl-tutorial.org and learn more about the language and functions.

It’s like trying to learn how to drive a car without knowing it has wheels that make it move (not knowing what makes it move). I’d rather learn how a car is made first; then when I learn how to drive it, I can relate my actions to the mechanics and perform stunts I’d never have thought I could do. Just like this draw-arrays deal: I always thought it could only use one texture at a time!

0
103 Aug 20, 2012 at 21:58

You’ve got to know the difference between drawing from video memory and drawing through system calls. The latter gives you hardly any speed; it’s basically not even reaching anywhere near the power of the video card, and, funnily enough, its speed is dependent on the motherboard.

Good for you that you at least managed a video-memory brute force; that’s the first thing to do to get over the pipeline bottleneck.

No pro programmer would ever get a job without knowing this problem in its entirety.