# FS use tex or color?

31 replies to this topic

### #21Alienizer

Member

• Members
• 435 posts

Posted 23 July 2012 - 09:32 PM

ok, I did this and now the model is showing...

attribute vec3 Vertex;
void main(void)
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vec4(Vertex, 1.0);
}


is that what I need to do? Or do I need to do this in the main app instead?

### #22Alienizer

Member

• Members
• 435 posts

Posted 23 July 2012 - 09:45 PM

Not all the polys were drawn; I had to do glDrawArrays(GL_TRIANGLES, 0, totPoly*3); Is it correct to use totPoly*3?

### #23TheNut

Senior Member

• Moderators
• 1699 posts
• Location: Thornhill, ON

Posted 24 July 2012 - 03:55 AM

Alienizer said:

is that what I need to do

If you want to transform from object space to clip space, yes. This is required by the rasterizer in order to properly clip and draw polygons in the window. You could, for example, supply clip-space vertex positions to begin with and avoid the matrix multiplications (typically done for user interfaces), but I wasn't sure what you were doing.

As for glDrawArrays, you supply the total number of vertices, not polygons; the latter is used with glDrawElements. If totPoly represents the number of triangles, then yes: each triangle has 3 vertices, so you should have totPoly * 3 vertices.
http://www.nutty.ca - Being a nut has its advantages.

### #24Alienizer

Member

• Members
• 435 posts

Posted 24 July 2012 - 04:19 AM

Thanks again, TheNut. Now I've got it working, but I'd like to know if I'm doing it right...

I set up (from your examples) a vertex buffer, normals, colors and UVs. Everything gets colored properly and the normals are working.

I use the following in the vertex shader for the normal...

nrm = normalize(gl_NormalMatrix * inNormals);

is that correct?

But for the colors to work, I had to duplicate the poly color 3 times, once for each vertex! This doesn't sound right, unless it's supposed to be that way so that a triangle with a different color at each vertex can be interpolated across the whole triangle. But in my case the whole triangle is a solid color. Is there a more space-efficient way to do this?

One big thing I'm stuck on: if I have many textures, can I bind them all and use some kind of texture array? I created a buffer just like the ones for vertices and normals, but holding texture IDs. I thought I could use that to select which texture to use for the current vertex, but I'm failing miserably! Can you guide me please?

### #25TheNut

Senior Member

• Moderators
• 1699 posts
• Location: Thornhill, ON

Posted 24 July 2012 - 10:45 AM

gl_NormalMatrix, gl_ProjectionMatrix, and gl_ModelViewMatrix are all part of the old fixed-function compatibility profile for GLSL. You shouldn't really be using those, because it means you're still using OpenGL 1.x functions to set matrices, which are deprecated. If you look at my posted shader example, you'll see that I was using uniform variables for the matrices. Remember what I said: avoid any built-ins that are listed as compatibility mode in the GLSL specification posted on the OpenGL or Khronos website. Everything you should avoid is there under chapter 7. Download and read it.
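A minimal sketch of that approach, with the matrices supplied by the application as uniforms (the names uProjection, uModelView, and uNormalMatrix are placeholders, not from any post in this thread):

```glsl
#version 330 core

// Per-vertex attributes supplied from VBOs by the application
in vec3 inPosition;
in vec3 inNormal;

// Matrices computed on the CPU and uploaded with
// glUniformMatrix4fv / glUniformMatrix3fv
uniform mat4 uProjection;
uniform mat4 uModelView;
uniform mat3 uNormalMatrix; // e.g. transpose(inverse(mat3(uModelView)))

out vec3 nrm;

void main(void)
{
    nrm = normalize(uNormalMatrix * inNormal);
    gl_Position = uProjection * uModelView * vec4(inPosition, 1.0);
}
```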

Alienizer said:

I had to duplicate the poly color 3 times, one for each vertex! This doesn't sound right

This is in fact correct. A vertex shader is just that: a programmable piece of code operating on each vertex. Just like vertex bones, vertex normals, and vertex UVs, colours are also per-vertex. Now, I don't know how you set up your data. It is possible to assign a colour to the entire mesh by simply using a uniform colour variable. If, however, you have several polygons within the same VBO you're rendering and each poly can be different, then vertex colouring is the way to go. You could also use a texture map, but this has a couple of drawbacks and requirements on your mesh as well.

Alienizer said:

if I have many textures, can I bind them all and use some kind of texture array

There's a limit to the number of textures you're allowed to use. You have to call glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &myInt) to find out how many you can use at any given time (or use a 3D texture), although in general you shouldn't have such heavy requirements. A shader is meant to be simple, with logic pertaining only to the bare minimum mathematical formulas and procedures required to pull off the effect. The keyword here is effect. It's the duty of the CPU to sort and organize the polygons being sent to the shaders. If you're trying to bind every texture and then have the shader figure out which sampler to use, your design is flawed. Your shaders should be more specific than that.

### #26Stainless

Member

• Members
• 578 posts
• Location: Southampton

Posted 24 July 2012 - 11:54 AM

Sounds like you've got yourself well and truly confused.

Okay, let's go through what happens when you render a triangle.

The pipeline does the following operations:

1) Work out what parameters need passing to the vertex shader (in your simple case: position, texture coordinates, and colour).
2) Call the vertex shader with these parameters for vertex 1 and extract the results.
3) Call the vertex shader with these parameters for vertex 2 and extract the results.
4) Call the vertex shader with these parameters for vertex 3 and extract the results.
5) For each pixel within the triangle:
   - work out the texture coordinates by interpolating across the triangle
   - work out the colour value by interpolating across the triangle

That's why you need a colour per vertex, not per triangle. The pipeline wants vertex colours, and this is incredibly useful: you can use them for ambient occlusion mapping, Gouraud shading, all sorts of things.

When you are optimising your code, the key things to look out for are nasty state changes. If you have a different texture for every polygon, your code is going to be really slow.

There are two standard setups now, depending on if you need to alpha blend the triangles.

Both are based on you throwing triangles at a class that eventually renders them.

If you need alpha blending:

1) Set up a default render state.
2) When you receive a triangle, check whether it breaks the render state (a different texture is the only case for you).
   If it does:
   2a) Render everything stored so far, then change to the new state.
   If it does not:
   2b) Add this triangle to the store.
3) At the end of rendering, render anything still pending.

If you don't need alpha blending:

1) When you get a triangle, check whether it uses a new texture.
   If it does:
   1a) Create a new heap for this texture and add the triangle to it.
   If it does not:
   1b) Add the triangle to the existing heap for that texture.
2) At the end of rendering, loop through all the heaps, binding the relevant texture and drawing the triangles.

### #27Alienizer

Member

• Members
• 435 posts

Posted 24 July 2012 - 10:46 PM

So if I understand correctly, there's no need to use any compatibility calls like glPushMatrix, glMatrixMode(GL_MODELVIEW) and things like that.

Then in the Draw function, I have to set up the VBO to hold all the triangles that share the same texture material, bind and activate the texture, call glDrawArrays(GL_TRIANGLES, ...), then repeat for each texture. This doesn't seem right to me for some reason! I'm probably still not understanding this, because what if some materials have alpha? Then I have to sort all the triangles from far to near, but if I have to draw them in chunks by texture, they're no longer sorted by depth. So I'm lost again. Please help me again!!!

### #28Alienizer

Member

• Members
• 435 posts

Posted 28 July 2012 - 03:50 AM

ok, I did tons of reading and research, but I'm still stuck a bit... sorry if I'm so ignorant.

I understand that I have to maintain my own transformation matrices and send them as uniforms to the shader. What I'm not sure of are the values in the matrices.

World = Translation * Rotation * Scale
with...

T = [ 1 0 0 x ]
    [ 0 1 0 y ]
    [ 0 0 1 z ]
    [ 0 0 0 1 ]

R = [ r r r 0 ]
    [ r r r 0 ]
    [ r r r 0 ]
    [ 0 0 0 1 ]

S = [ sx 0  0  0 ]
    [ 0  sy 0  0 ]
    [ 0  0  sz 0 ]
    [ 0  0  0  1 ]


So for the translation matrix, what goes in for x, y, z? I mean, where do the numbers come from? The up vector? The camera position?

And the rotation: how does that one work?

The scale matrix is easy enough, but how is it used? When we zoom in/out?

And if I want to make my own gl_NormalMatrix, I have to compute the transpose of the modelview inverse. So how do I compute the modelview matrix to start with?

Thanks for helping!

### #29TheNut

Senior Member

• Moderators
• 1699 posts
• Location: Thornhill, ON

Posted 28 July 2012 - 11:19 AM

Take a look at this website for an overview of transformations. There's more to the transformation equation than a single matrix: you have one for the object, one for the camera, and another for the projection. In some cases you'll have an additional bone matrix for skinning purposes. Each matrix contributes to the final location of each vertex.

$v^{'} = P * V^{-1} * M * v$

Where
v is your object space vertex

M is the model matrix, which transforms the object into world space. The rotation changes its angle, scale adjusts its size, and translation affects its position in the world.

$V^{-1}$ is the inverted view matrix. It's inverted because the world around you moves in the opposite direction of where you're looking: when you move left, the world around you is actually moving right.

In the view matrix, the 3 rotation axes represent your viewing basis vectors. The Z-axis represents the direction you're looking. You should be able to easily visualize this in your head when you think about it. The Y-axis and X-axis represent the up-vector and left-vector; these are used to help orient the viewer. If you're looking down the horizon and your up-vector is pointing down, then the viewer is upside down. If the up-vector is pointing toward the sky but the left-vector is pointing to the right, then the viewer is looking backwards. As with all basis vectors, they must be orthogonal to each other and normalized. If not, your view will start to shear and scale abnormally.

The translation of the view matrix is just that: where in the world the camera is. You don't scale the view matrix; it doesn't have the zoom-in/out effect you think it does. Instead, it will cause your camera to orbit (rotate) differently. If you want to zoom in and out, scale your z-axis / direction vector. Remember how I said this is your direction vector? Well, if this vector is short, objects appear closer; if it is longer, objects appear further away.

The projection matrix is quite specific to the maths involved behind perspective or orthographic projection. You would not apply your standard rotation, scaling, and translation here.

FYI, your normal matrix is identical to your modelview matrix, just without the translation or any scaling (you want your normals to stay normalized!). The easiest way to handle this is to keep scale separate from the modelview matrix and convert the matrix from 4x4 to 3x3 in your GLSL shader. Then just apply it to your normals, which as you can see is just a simple rotation.

### #30Alienizer

Member

• Members
• 435 posts

Posted 28 July 2012 - 02:17 PM

Thanks TheNut. I've read it and understand most of it, but the one thing I'm stuck on is world coords. Say my model fits in a bbox of 100x200x300 (width, height, depth) and the lower left corner is at coords 30,60,90 (x,y,z). Any vertex the vertex shader processes must be converted to world space. I know how to do that with the matrix, simple, but what is world space? Is it a -1..+1 system? If so, what's the logic to transform a point into that world space? I guess I lack the way OpenGL sees things, and what it expects.

### #31TheNut

Senior Member

• Moderators
• 1699 posts
• Location: Thornhill, ON

Posted 29 July 2012 - 01:58 PM

Alienizer said:

Any vertex the vertex shader process must be converted to world space

Correction: it must be converted to clip space. Clip space is where the GPU determines whether the vertex is within the viewing frustum and clips it if it is not (hence "clip space"). The idea is that your (x,y,z,w) vertex coordinate in clip space will be within the range -w to +w. When you perform the perspective division x/w, y/w, and z/w (note: this is done for you automatically by the GPU after the vertex shader stage), the vertex coordinate maps into normalized device coordinates, where -1 <= (x,y,z) <= 1. This is then multiplied by the viewport matrix (again, done automatically by the GPU) to give you the window coordinates. At this point, your fragment shader gets called to colour the pixel at (x,y) and depth z.

### #32Alienizer

Member

• Members
• 435 posts

Posted 29 July 2012 - 03:51 PM

Now I get it. Thanks! It seemed so complicated at first: no gl_Normal, no gl_TexCoord and all that good stuff. But now I know how to make my own and pass them to the shaders. Things like out vec4 MyColor; in the fragment shader: who would have known that MyColor would be assigned to the fragment color, just like gl_Color did? It's not explained anywhere.

Now everything is working fine, but I still have a problem with the way to draw all the polys using a VBO as you suggested in your reply #17. I have it working to draw the polys, but no textures.

The old way, you used glBindTexture(GL_TEXTURE_2D, ...); before glBegin(GL_TRIANGLES); and that worked well. But how do you do this on a whole block of polys where many of them don't have the same texture, color and normals?
