# OpenGL & Transparency

38 replies to this topic

### #1Alienizer

Member

• Members
• 435 posts

Posted 24 March 2012 - 06:34 PM

I know that you are supposed to sort your triangles by transparency order so that they are displayed properly with the z-buffer. I did that and it works, but only sort of. Some show through walls, some come out the wrong color, but most are fine. Is there a specific way to do this? I feel like I'm only doing part of it. Thanks.

### #2Stainless

Member

• Members
• 582 posts
• LocationSouthampton

Posted 24 March 2012 - 09:48 PM

You need to sort the transparent tris by transformed z coordinate, is that what you are doing?

### #3Alienizer

Member

• Members
• 435 posts

Posted 24 March 2012 - 09:55 PM

No, I just sort them by transparency only! What is a "transformed z coordinate"?
Thanks for helping

### #4Alienizer

Member

• Members
• 435 posts

Posted 25 March 2012 - 03:26 AM

OK, I tried to sort by distance to the camera, but it didn't work as well as sorting by transparency!?

### #5Stainless

Member

• Members
• 582 posts
• LocationSouthampton

Posted 25 March 2012 - 09:37 AM

Stick a video on youtube and we'll have a look

Basically you draw the entire scene without any transparent polygons.
Then depth sort the transparent polygons
Then draw them, farthest away from the camera first.
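The depth sort Stainless describes can be sketched like this. This is an illustration only, with hypothetical minimal types (`Vec3`, `Triangle`); a real engine would transform centroids into view space with the camera matrix, but sorting by squared distance to the camera position gives the same ordering:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal types for illustration.
struct Vec3 { float x, y, z; };

struct Triangle {
    Vec3 a, b, c;
    Vec3 centroid() const {
        return { (a.x + b.x + c.x) / 3.0f,
                 (a.y + b.y + c.y) / 3.0f,
                 (a.z + b.z + c.z) / 3.0f };
    }
};

// Squared distance from the camera to a triangle's centroid.
// (Squared distance sorts the same as distance, without the sqrt.)
float depthKey(const Triangle& t, const Vec3& camera) {
    Vec3 c = t.centroid();
    float dx = c.x - camera.x, dy = c.y - camera.y, dz = c.z - camera.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort transparent triangles back-to-front (farthest first) so that
// blending accumulates in the right order.
void sortBackToFront(std::vector<Triangle>& tris, const Vec3& camera) {
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& lhs, const Triangle& rhs) {
                  return depthKey(lhs, camera) > depthKey(rhs, camera);
              });
}
```

After this sort you would draw the triangles in order with blending enabled.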

### #6geon

Senior Member

• Members
• 939 posts

Posted 25 March 2012 - 11:06 AM

What do you mean by "by transparency"? The opacity of each triangle? That makes no sense at all.

Transparent triangles need to be drawn back-to-front. That can be more complicated than it sounds. You need to sort by distance to the camera, but which distance? To the center point? Even then, that can give you wrong results in some cases.

Also, there can be circular overlap, where triangles must be split before they can be sorted at all. Have a look at BSP trees for more details. It is a technique that is mostly obsolete by now, but it explains the problems very well.

I think the most popular approach is to make the sorting work in most cases and restrict the art to not cause any of the known problems.
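To make the "which distance?" ambiguity concrete, here is a small sketch with hypothetical view-space depth values, showing how two reasonable depth keys, centroid depth and nearest-vertex depth, can disagree about which of two triangles is "closer":

```cpp
#include <algorithm>

// View-space depths of a triangle's three vertices (illustrative values).
struct Tri { float z[3]; };

// Depth key 1: depth of the centroid.
float centroidDepth(const Tri& t) {
    return (t.z[0] + t.z[1] + t.z[2]) / 3.0f;
}

// Depth key 2: depth of the nearest vertex.
float nearestDepth(const Tri& t) {
    return std::min({t.z[0], t.z[1], t.z[2]});
}
```

A long slanted triangle (vertex depths 2, 2, 14) has centroid depth 6, so by centroid it sorts *behind* a flat triangle at depth 5, yet its nearest vertex (depth 2) is well in *front* of it. Whichever key you pick, some configuration will sort wrong.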

### #7TheNut

Senior Member

• Moderators
• 1701 posts
• LocationCyberspace

Posted 25 March 2012 - 11:25 AM

A picture is worth a thousand words, so..

In the above screenshot, the rendered image was done in two passes. In the first pass you render only opaque geometry, ideally from front to back to maximize fillrate efficiency. In this case, the sphere is rendered first, then the blue rectangle, and finally the ground. In the second pass, you render transparent objects from back to front. This is also known as the painter's algorithm, because that's how a painter paints. In this case, the red rectangle draws first, then the green rectangle, and finally the yellow rectangle.

Try to avoid scenarios like this. If you must have cross sections, divide the mesh up into smaller pieces. Instead of having 3 rectangles, split them up into 6 and perform the same algorithm as I listed above.
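TheNut's two-pass scheme can be sketched as a draw-order computation. The `Object` type and depth values here are hypothetical stand-ins for real scene data; the point is the partition and the two opposite sort directions:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical scene object for illustration: distance from the camera
// plus an opacity flag.
struct Object {
    float depth;   // distance from the camera
    bool opaque;
    int id;
};

// Returns the order objects should be drawn in, following the two-pass
// scheme: opaque front-to-back (pass 1, for early-z efficiency), then
// transparent back-to-front (pass 2, the painter's algorithm).
std::vector<int> drawOrder(std::vector<Object> objects) {
    std::vector<Object> opaque, transparent;
    for (const Object& o : objects)
        (o.opaque ? opaque : transparent).push_back(o);

    // Pass 1: near-to-far so the depth test rejects hidden fragments.
    std::sort(opaque.begin(), opaque.end(),
              [](const Object& a, const Object& b) { return a.depth < b.depth; });
    // Pass 2: far-to-near so blending layers correctly.
    std::sort(transparent.begin(), transparent.end(),
              [](const Object& a, const Object& b) { return a.depth > b.depth; });

    std::vector<int> order;
    for (const Object& o : opaque) order.push_back(o.id);
    for (const Object& o : transparent) order.push_back(o.id);
    return order;
}
```

With the scene from the post (sphere, blue rectangle, ground opaque; red, green, yellow rectangles transparent), this yields exactly the order TheNut describes.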
http://www.nutty.ca - Being a nut has its advantages.

### #8Alienizer

Member

• Members
• 435 posts

Posted 25 March 2012 - 12:57 PM

Thank you TheNut, it couldn't be any clearer, and it works!

But it seems that I have to re-sort every time I move the camera, is that correct? Because I call glNewList(TriList, GL_COMPILE); only once and then glCallList(TriList); on every frame. So do I have to give up glNewList and re-sort on each frame if the camera has moved? Wouldn't that be really slow?

### #9TheNut

Senior Member

• Moderators
• 1701 posts
• LocationCyberspace

Posted 25 March 2012 - 09:39 PM

Yes, you need to resort your list every frame. It's not that expensive though. 100,000 items sorts in about 10 milliseconds on my PC. You're not going to have anywhere near that many items, so you're well in the safe zone.

You should avoid programming with display lists, as that API is deprecated. GL 2.x and newer expects you to use vertex buffer objects. If you must stick with the 1.x standard, then at least use vertex arrays, as they're more flexible than display lists: you can selectively decide which geometry to render and which not to, and adjust the order in which your geometry is drawn.
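One reason the per-frame re-sort is cheap with vertex arrays or VBOs: the vertex data itself stays untouched, and only a small index list is rebuilt each frame and handed to something like glDrawElements. A hedged sketch of just the index-sorting part (the `depths` array is a hypothetical precomputed camera-distance per triangle):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Per-frame depth sort done on indices only. 'depths' holds one
// camera-distance value per triangle. The returned list is the order
// in which to draw the triangles (farthest first); the vertex buffer
// itself is never touched or re-uploaded.
std::vector<unsigned> triangleOrder(const std::vector<float>& depths) {
    std::vector<unsigned> order(depths.size());
    std::iota(order.begin(), order.end(), 0u);   // 0, 1, 2, ...
    std::sort(order.begin(), order.end(),
              [&](unsigned a, unsigned b) { return depths[a] > depths[b]; });
    return order;
}
```

Each frame you recompute `depths` from the new camera position, call `triangleOrder`, and emit the triangles (or their index-buffer entries) in that order.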

### #10Alienizer

Member

• Members
• 435 posts

Posted 26 March 2012 - 12:06 AM

It works almost perfectly, thank you. The reason it's only "almost" is that I sort each triangle by its centroid, and when triangles are displayed at an angle I sometimes see weird results, though not often.

I didn't know about the display lists! thanks for letting me know.

I wonder why OpenGL is not capable of doing transparency itself. Do you know why? It would seem so simple for a microchip that does all kinds of complex stuff!!! Why give us the burden of writing complex algorithms that can't be perfect, and of wasting time on sorting, when we could be computing something more important for the game or whatever it is!

### #11}:+()___ (Smile)

Member

• Members
• 169 posts

Posted 26 March 2012 - 08:31 AM

It's not OpenGL that is incapable of transparency, it's the hardware itself. The reason is that for true transparency you need multiple Z values per pixel in the Z-buffer. You can approximate that with different techniques (alpha to coverage, stochastic/Monte Carlo methods, etc.), but it's not easy.
Sorry my broken english!

### #12geon

Senior Member

• Members
• 939 posts

Posted 26 March 2012 - 09:51 AM

Alienizer, on 26 March 2012 - 12:06 AM, said:

I wonder why OpenGL is not capable of doing transparency itself. Do you know why? It would seem so simple for a microchip that does all kinds of complex stuff!!! Why give us the burden of writing complex algorithms that can't be perfect, and of wasting time on sorting, when we could be computing something more important for the game or whatever it is!

Hardware manufacturers went down that path once before, with the fixed function pipeline. You had a fixed set of features the hardware could handle, and that was it. If you wanted anything more innovative, you had to do it in software.

It soon became very clear that this was not going to work. Instead, pixel/vertex shaders became the norm. Then multiple render targets, instancing, hardware tessellation, etc. Graphics hardware is now moving in the direction of becoming a completely generic processor, optimized for extremely parallel tasks such as graphics, but also physics, password cracking and all kinds of number crunching.

This means we are not limited to what the hardware manufacturer had in mind, but can invent new techniques. The only drawback is, you have to do more yourself...

### #13Stainless

Member

• Members
• 582 posts
• LocationSouthampton

Posted 26 March 2012 - 10:20 AM

Yes, OpenglES is a prime example of this.

ES 1.0 functioned much the same as OpenGL: you had the same matrix stack, etc. The only real difference was that you couldn't use glBegin(...)/glEnd(); you had to use buffers.

In ES 2.0 all that went out the window and you now just have shaders. You have to manage all your own matrices, etc., but you do get hardware shaders.

Pain in the proverbial for the first project, after that really nice.

### #14Alienizer

Member

• Members
• 435 posts

Posted 26 March 2012 - 12:56 PM

hmm, yeah, I see what you all mean!

But really, they couldn't do something as simple as transparency???

If we can do it in software by simply sorting our objects by z-distance, why can't the GPU do it instead? And even better than per object, do it per pixel, so it's flawless all the time!

They can make microchips and send rovers to Mars, and yet they can't do transparency

### #15.oisyn

DevMaster Staff

• Moderators
• 1842 posts

Posted 26 March 2012 - 01:13 PM

Alienizer, on 26 March 2012 - 12:56 PM, said:

But really, they couldn't do something as simple as transparency???
Transparency is not simple. You think it's simple because you're not accounting for all the corner cases.

Quote

If we can do it in software by simply sorting our object by z-dist, why can't the GPU do it instead?
You can't "simply sort objects by z-dist". A polygon that is closer to the camera than another polygon for one pixel, may be farther away for another (see the case of intersecting polygons given by TheNut a few posts back). The only thing you can do that always works is sort per pixel. Nowadays you can do that on the GPU, but compared to a simple (but possibly wrong) per poly sort it's quite expensive.

Also, for GPUs to be able to sort polygons, they first need to know all the polygons. That is completely contrary to how they work: they simply execute all draw commands in the order you give them. Another thing is that the GPU doesn't think about translucency the way you do. The GPU just blends each pixel with the one already in the backbuffer, given some factors for both values. You think of translucency in terms of src_alpha and one_minus_src_alpha, but that's just one of the available permutations. You could also do simple additive blending, which is order independent. Or perhaps you simply want to blend in a rectangle, regardless of its z-value.
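The order-dependence .oisyn describes can be demonstrated by simulating the blend stage on a single color channel. This is a sketch with illustrative values; the two functions mirror the standard GL_SRC_ALPHA/GL_ONE_MINUS_SRC_ALPHA ("over") and GL_ONE/GL_ONE (additive) blend modes:

```cpp
// "Over" blending: src_alpha / one_minus_src_alpha.
// The result depends on which fragment arrives first.
float blendOver(float dst, float src, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}

// Additive blending: dst + src, clamped like the framebuffer.
// Addition commutes, so draw order doesn't matter.
float blendAdd(float dst, float src) {
    float v = dst + src;
    return v > 1.0f ? 1.0f : v;
}
```

Drawing a bright fragment then a dim one through `blendOver` gives a different final value than the reverse order, while `blendAdd` gives the same result either way, which is why purely additive effects (glows, fire) don't need sorting at all.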
-
Currently working on: the 3D engine for Tomb Raider.

### #16Alienizer

Member

• Members
• 435 posts

Posted 26 March 2012 - 02:46 PM

I wrote a simple 3D engine once, and that's what I was doing: drawing all pixels from back to front into an off-screen buffer, and transparency was handled perfectly. Very simple to do, which is why OpenGL should do it, or at least offer it as an option.

I think the GPU is still very primitive. They add things to it, but in the end it makes the programmer's life more difficult. Why can't we simply send all our polys to the GPU, textures, normals, UVs and all, and simply rotate our camera, have a callback for collision, things like that, even some GPU ray-intersection stuff? Everyone could write professional games without spending a lifetime trying to understand how to make a rotating cube. It's now so complex that it makes no more sense. By the time I understand all of OpenGL and am able to write something worthwhile, it'll be a few years from now and they'll have more for me to learn, so I spend my time learning what it does and never do anything with what I've learned, because it's already deprecated! Or maybe I'm just too dumb.

### #17geon

Senior Member

• Members
• 939 posts

Posted 26 March 2012 - 02:50 PM

.oisyn, on 26 March 2012 - 01:13 PM, said:

Transparency is not simple. You think it's simple because you're not accounting for all the corner cases.

Also, it needs to work together with the rest of your rendering. This is easily getting VERY complex.

- Oh! You wanted your cascading shadowmapping to cast filtered semi-shadows? And you have refraction on that water surface? Well, at least you don't use deferred rendering... Oh you do?

Just because it is simple in YOUR case, it isn't simple in general. So why would they waste development time and silicon on features everyone will have to reimplement themselves anyway?

### #18Alienizer

Member

• Members
• 435 posts

Posted 26 March 2012 - 03:13 PM

What I'm saying is, we use video playback, games, renderers, and that's about it. So a video card should do just that. CUDA lets you do renderers, in real time if you have the hardware. But CUDA is not a chip, it's software.

We should have a video chip that only does rendering, games, and video playback. That's it. A CPU for running our OS and GUI, and another card, with 32768 RISC cores (with a very limited instruction set), to let us do the number-crunching stuff. That would be a real supercomputer and not hard on the programmers. The more features, the more complication, the more questions, the more bugs, the more problems, the more time wasted. Let's see: Windows 3.1 used to run as fast on a 66 MHz machine as Windows 7 does today on a 3 GHz one. They should redesign the standard rather than keep modifying it. OpenGL is very old, and is now very big, too big.

### #19TheNut

Senior Member

• Moderators
• 1701 posts
• LocationCyberspace

Posted 26 March 2012 - 03:36 PM

Try not to look at OpenGL as a final solution. It's a tool, no different than a paint brush is to a painter. It's not the brush that paints the picture, it's the painter who commands the brush. What you're looking for is a high-level engine built on top of OpenGL: Unreal, CryEngine, Unity, etc. These are examples of software that hide or abstract such low-level mechanics. Ever wondered why Unreal Engine is so commonly printed on game boxes? It's for the very reason that programmers do not want to reinvent the wheel.

I also wouldn't say life is made more difficult by the state of things. Rather, I would say there are simply more things to do. Although I think you're starting a debate about rasterization vs. ray tracing. Ray tracing is a more straightforward rendering technique, but it has some disadvantages, the most noticeable being that the two most popular rendering APIs (GL & DX) use rasterization. Take a look at this article.

### #20Alienizer

Member

• Members
• 435 posts

Posted 26 March 2012 - 03:57 PM

I agree with you 100%, but I think the technology is not going in the right direction. Maybe I'm wrong about it, but it seems they simply patch old tech to make it new tech, and we are left with all kinds of things we have to know: oh, can't do this because it doesn't work with that, or it's deprecated so use this instead, but only if the card can do it, but if it's ATI do this instead, and if it's NVIDIA do it that way, etc. etc. etc. It gets very confusing.

I'm still trying to create an OpenGL context on a window with 4x or 2x anti-aliasing!? I've been reading all that stuff for two weeks now, and I still get a black window.
Sure, I can copy code from free examples, but that doesn't mean I'll understand what it does. Maybe I need to go to an OpenGL school, if there is such a thing!
