FS use tex or color?

Alienizer 109 Jul 21, 2012 at 17:12

In a fragment shader, how do you know when to use the texture color or the solid color? Right now I test whether the texel is black, but that’s not the right way…

  vec4 texel = texture2D(tex, gl_TexCoord[0].st);
  if (texel.rgb == vec3(0.0, 0.0, 0.0)) { ...

31 Replies


Reedbeta 167 Jul 22, 2012 at 03:43

Well, what are you trying to do? You use a texture or a solid color, whichever is more appropriate for what you want the shader to do! :) Honestly this is like asking whether to use an int or an array.

Alienizer 109 Jul 22, 2012 at 04:13

Here is what I do…

if (there is a texture for this poly) {
    glBindTexture(GL_TEXTURE_2D, TextureList[j].texID);
    glColor4f(1, 1, 1, 1);
} else {
    glBindTexture(GL_TEXTURE_2D, 0);
    glColor4f(clr.r, clr.g, clr.b, clr.a);
}

Now, in the fragment shader, I want to know if I need to use the solid color, or the texture color… gl_FragColor = ?

Reedbeta 167 Jul 22, 2012 at 05:11

So some polygons have a texture, while others don’t? Typically you’d then use two different shaders: one that samples the texture for the textured polygons, and one that uses the solid color for the solid-colored ones. Alternatively, you could just make a 1x1 texture for each solid color.

Stainless 151 Jul 22, 2012 at 09:53

The easiest way is to create a small texture that is totally white, and bind that for un-textured polygons.

This saves the overhead of swapping shaders.

Then in the fragment shader you can do …

vec4 texel = texture2D(tex, texcoord);
gl_FragColor = texel * colour;
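
Creating that white texture on the C++ side might look something like this (a rough sketch; whiteTex is just an example name):

// A 1x1 all-white RGBA texture to bind for un-textured polygons
GLuint whiteTex;
unsigned char whitePixel[4] = { 255, 255, 255, 255 };
glGenTextures(1, &whiteTex);
glBindTexture(GL_TEXTURE_2D, whiteTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, whitePixel);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);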
TheNut 179 Jul 22, 2012 at 11:00

Both Stainless and Reed provide good answers. Separate your shaders if the logic varies a lot, or, in this particular case, you may be able to get away with just assigning all your polygons a texture. Try to avoid conditional logic in shaders if possible; it’s best to have the GPU plow through fragments as quickly as possible and let the CPU do any preparation logic (i.e. sort polygons by shader).

Also, why are you using glColor? You should be using VBOs. And you don’t need to waste cycles by resetting the active texture to null; best just to leave the last texture bound and switch when you’re ready to use a new one. Not a major performance boost, but it gets you into the habit of reducing state switching.

Alienizer 109 Jul 22, 2012 at 16:05

Can I create a 1x1 texture and change the data in it for the color before binding it? or do I have to create a 1x1 texture for each color in the entire model?

Alienizer 109 Jul 22, 2012 at 16:10

@TheNut

why are you using glColor? You should be using VBOs

hmmm, I don’t know! It’s just what I’ve been learning to do!

How can I use a VBO to do this instead? I mean, don’t I have to create a VBO the size of the viewport and fill it with the pixel colors? That would take too long!

TheNut 179 Jul 22, 2012 at 16:50

A note on texture sizes: the OpenGL spec only guarantees that implementations support textures up to at least 64x64 (GL_MAX_TEXTURE_SIZE must be at least 64). A 1x1 texture is legal, but how well very small textures are handled depends on the video driver devs wanting to support such resolutions.
@Alienizer

or do I have to create a 1x1 texture for each color in the entire model?

The idea presented here is that, for whatever reason, if you have one object that is textured and another object that is not, you can render both objects with the same shader by giving the object without a texture a simple white texture.

It looks like in your case you want to support both vertex colours and textures. With vertex colouring, you supply a separate buffer just like you do for vertices, UVs, and normals. Your vertex shader passes the colour attribute to the fragment shader in a varying variable, where it is interpolated across the polygon. In your fragment shader, you typically assign the fragment colour as vertex_colour * texture_colour. The default vertex colour should be set to white to maintain the colour of any textures you apply to the polygon. When supplying a white texture, you can modify the output colour by adjusting the vertex colour.

If you wanted to supply a material colour for the whole object, you could more easily create a uniform vec4 colour variable in your fragment shader. I would not use this as a substitute for setting each polygon’s colour in a loop though; a vertex colour buffer is better suited for that and lets you render the whole geometry in one direct shot.
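
As a rough sketch, that uniform-colour variant of the fragment shader might look like this (MaterialColour and Tex are made-up names):

uniform vec4 MaterialColour; // set once per object from the C++ side
uniform sampler2D Tex;
varying vec2 vertexUv;

void main()
{
    gl_FragColor = MaterialColour * texture2D(Tex, vertexUv);
}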

To understand this better, you need to learn about VBOs, or vertex buffer objects. I’m rather surprised you don’t already know this if you’re working with shaders. I find it bizarre that any learning material on shaders would still be using classic OpenGL rendering routines. You can google for topics on this, but I would recommend reading the specification guides at OpenGL.org to avoid learning deprecated APIs.

Alienizer 109 Jul 22, 2012 at 17:40

What I have is many objects of different colors, and other objects with textures. Some textures have transparency like PNG images. Objects with colors only can also have transparencies.

I’m learning from this board and from docs found on google, but it seems I’m getting more and more confused. Some tutorials tell you to do it “this way” and others “that way”. When I try one way and finally make it somewhat work, then use something I need from another tutorial requiring #version 130 or something, everything I’ve done before gives me a bunch of deprecated messages, so I have to redo everything from scratch and don’t know where to start!

Now I’m looking at the PDF from http://www.khronos.org/opengl/ and in appendix E on deprecated features, it looks like everything is deprecated!!?? So I’m lost, very very lost.

Where do you guys turn to when you need to lookup some references? What docs should I read to get in the right direction and be on the same wavelength as you guys?

Alienizer 109 Jul 22, 2012 at 17:50

Now I’m looking at http://www.opengl.org/sdk/docs/manglsl/ and under Built-in Variables there is only a handful. gl_FrontColor is gone, gl_Color too, etc. Am I reading this correctly? I don’t know what to use anymore :blink:

TheNut 179 Jul 22, 2012 at 20:54

Mostly nowadays I refer to official documentation, such as what you found on OpenGL’s website. I only read websites that are 6 months to 1 year old tops, with some exceptions for older material; even then, I’m only interested in the theory and not the code. The core and GLSL reference docs on the Khronos website will provide accurate support documentation for the platform you want to target. They won’t give you examples, but you can search for those online and reference the doc to follow along.

You can download their latest version, 4.20, although I would personally lean more towards the 2.X documentation, as that platform is widely supported and is quite capable for making games. I would also avoid any built-in or global parameters. Use only the mathematical routines provided by the lowest GLSL version you want to support and do the rest yourself. Those built-in vars were there to cater to old-school devs who continued to use GL 1.X functions, which have all but been phased out now (thank god).

Vertex colouring + texturing will definitely give you what you want. FYI you can pass in any vertex colouring format, including RGBA data in byte, short, or floating point format. Your shader should look something like this.

// Vertex shader

attribute vec4 Vertex; // xyzw data from VBO
attribute vec2 Uv; // uv data from VBO
attribute vec4 Colour; // RGBA data from VBO

uniform mat4 ProjectionMatrix; // camera projection
uniform mat4 ViewMatrix; // camera transformation
uniform mat4 ModelMatrix; // object transformation

varying vec4 vertexColour; // Output to fragment shader
varying vec2 vertexUv; // Output to fragment shader

void main ()
{
    // Don't use ftransform() - assume everything is your responsibility
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * Vertex;

    vertexColour = Colour;
    vertexUv = Uv;
}

// Fragment shader

uniform sampler2D MyTexture; // Set via C++. Objects without a texture will pass in a white texture by default.

// Inputs interpolated from vertex shader
varying vec4 vertexColour;
varying vec2 vertexUv;

void main ()
{
    gl_FragColor = vertexColour * texture2D(MyTexture, vertexUv);
}
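
On the C++ side, the uniforms would be filled in roughly like this (a sketch, assuming program is your linked shader program ID and the matrices are column-major float[16] arrays):

glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "ProjectionMatrix"), 1, GL_FALSE, projection);
glUniformMatrix4fv(glGetUniformLocation(program, "ViewMatrix"), 1, GL_FALSE, view);
glUniformMatrix4fv(glGetUniformLocation(program, "ModelMatrix"), 1, GL_FALSE, model);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID); // or the 1x1 white texture for solid-coloured objects
glUniform1i(glGetUniformLocation(program, "MyTexture"), 0); // sampler reads texture unit 0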
Alienizer 109 Jul 22, 2012 at 21:36

oh! so you mean that ProjectionMatrix, ViewMatrix and ModelMatrix are passed from the main program?

So I see what you mean. Use GLSL for computation only, and everything else (camera pos/dir, lights, etc.) is defined in the main program. So no need to use the 8 built-in lights anymore, but we have to do all the shading ourselves, and that’s what the fragment shader will do, right?

What I’m not sure about in your code example is the VBO part; the rest I understand fully, if I’m right about what I just said above.

So what is the VBO? Is it like a texture2d where we put in info to be passed to the vertex shader?

Thanks for the info and the time to explain! I appreciate it.

TheNut 179 Jul 23, 2012 at 02:45

@Alienizer

and that’s what the fragment shader will do right?

Not just the fragment shader, but also the vertex shader, the tessellation shader (added in GL 4.X), and the geometry shader (added in GL 3.X). The freedom and programmability allow you to, for example, use a uniform variable in any one of these shaders; you could access the ViewMatrix from the fragment shader if you wanted to. Often you spread your uniforms across both vertex and fragment shaders. For example, transformations are usually processed in the vertex shader, while several lights might be passed to the fragment shader to perform per-pixel lighting. You could optionally perform those computations at the vertex level to improve performance at the expense of image quality. It’s all up to the dev to decide what the shader does and what data it needs. The backend part (C++) provides all that data in the form of attributes and uniforms, and a vertex shader can optionally send data, interpolated by the time it reaches the fragment shader, using varyings.

To understand VBOs, you have to rewind the clock a bit. It all started with glBegin, glVertex3f x 1000 calls, and finally glEnd. At first this was simple and it just worked. Then better hardware came out and more polygons were being pushed. The function call overhead on the CPU was starting to become problematic, and so they moved to vertex arrays. Instead of calling glVertex a couple thousand times, you now prepared an array of all your vertices, uvs, normals, etc. and made one call to render the whole thing in one shot. This was great for a while, but then hardware improved again and even more polygons were being pushed. This time, the bottleneck was the bus between system memory and video memory. Sending tens of thousands of polygons every frame was creating New York City style traffic congestion, slowing everything down. To resolve this, they brought in VBOs. They are like vertex arrays, except the geometry data is now optimally stored in video memory, like texture data. When it comes time to draw, instead of pushing all those polygons down the bus, the video card just accesses the data from its own memory, which is blazing fast in comparison. It’s like how every game developer sleeps on the company sofa. No need to waste time going home. Just wake up right at the office :D Booyah!

As for the buffers themselves, it’s up to you to decide how you want to craft them. The vertex shader accesses this data via the attribute variables. Some people create one massive array and stride the data. Others may create several buffers, one to store each component of the geometry. For example, one buffer for vertices, one for UVs, another for colour. This is how I often organize my data because sometimes I may want to update a VBO, but not have to resend the entire geometry back to the video card. For example, with particle engines you will update the vertex coordinates of each particle using something called a dynamic VBO. You don’t need to reupload UV data, so it’s much more efficient to just update the vertex buffer and leave the others alone. The only drawback is that instead of activating a single VBO, you have to activate several for each object before you render it. As you can see, it starts to add a little function overhead, but from my tests it’s quite negligible. To each their own.
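
To illustrate the separate-buffer approach with a dynamic vertex buffer (a sketch only; buffer and array names are made up):

// One buffer per component; positions change every frame, UVs never do.
glBindBuffer(GL_ARRAY_BUFFER, vboPositions);
glBufferData(GL_ARRAY_BUFFER, maxParticles * 3 * sizeof(float), NULL, GL_DYNAMIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, vboUvs);
glBufferData(GL_ARRAY_BUFFER, maxParticles * 2 * sizeof(float), uvs, GL_STATIC_DRAW);

// Each frame: re-upload only the positions, leave the UV buffer alone.
glBindBuffer(GL_ARRAY_BUFFER, vboPositions);
glBufferSubData(GL_ARRAY_BUFFER, 0, numParticles * 3 * sizeof(float), positions);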

Alienizer 109 Jul 23, 2012 at 03:44

Thank you TheNut, that explanation was very clear. Now it makes more sense to me. I know why my terrain with a tank on it runs at 0.01 FPS.

So we don’t need to use any of the gl_* built-in variables anymore? Or do we still need to use gl_Vertex, gl_Normal, gl_FragColor and others? But none of the gl_Lights and stuff?

What about…
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

do we still need this if everything is in the VBO?

I don’t know how to access the VBO from a shader. Is it via the uniform? Something like…
glBindBufferARB(GL_ARRAY_BUFFER_ARB, FragmentsVBO);

Could you guide me to a good site, or give me a skeleton in C++, on how to use VBOs for the basic stuff? I understand your explanation clearly, but now I just need a little kick start to know the layout of the C++ code and how the VBOs are accessed in the shaders. Sorry if I’m so demanding, but I really want to do it right this time and come out of the cave I’m in right now!

Alienizer 109 Jul 23, 2012 at 04:14

aaaaaaaaah I’m getting more confused again…

http://www.opengl.org/wiki/VBO_-_just_examples

why is it deprecated??? or am I worrying for nothing?

TheNut 179 Jul 23, 2012 at 10:48

@Alienizer

So we don’t need to use any of the gl_* built-in variables anymore?

Generally yes, but with some exceptions. Every shader requires output data. A vertex shader must set the gl_Position vector; without this, the card won’t be able to render the triangle. Fragment shaders must set the gl_FragColor vector to actually colour the pixel. You can optionally access the fragment’s current window coordinates (and depth) through the gl_FragCoord input vector, which is essentially the interpolated window position derived from the gl_Position you provided in the vertex shader.

Geometry shaders are a bit different. It’s like executing a vertex shader as many times as you want. For each new vertex you wish to create, you supply gl_Position + any varyings (normals, uvs, etc.) and then call EmitVertex(). You then call EndPrimitive() to complete the polygon and either continue or exit the shader when you’re done.

For a complete list of inputs and outputs of each shader, see chapter 7 “Built-in Variables” in the 4.20 GLSL Shading Language core doc. Any section in that doc marked “Compatibility Profile” you should really ignore. That’s all the 1.X fixed-function stuff like gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0, the kind of GLSL programming you should steer away from. gl_TextureMatrix, for example, is set when you switch the OpenGL matrix mode to GL_TEXTURE and then call glLoadMatrix() (or optionally glMultMatrix()). This kind of behaviour is no longer necessary: if you want a texture matrix, just create a uniform matrix variable in your shader and upload the values. Except for the mandatory outputs from each shader, you’re responsible for all the inputs.
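
To illustrate the geometry shader flow, here is a bare-bones GLSL 1.50 pass-through that re-emits each incoming triangle unchanged (purely a sketch):

#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position; // copy each corner through
        EmitVertex();
    }
    EndPrimitive(); // close the triangle
}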

@Alienizer

I don’t know how to access the VBO from a shader. Is it via the uniform? Something like…

The VBO data is accessed through the attributes, once for each vertex. Uniforms are like your function parameters: where the attributes change for each new vertex you process, the uniforms remain the same throughout. I can’t seem to find any decent up-to-date site explaining this, so I’ll just jot down the steps. Here’s a simple VBO containing only the vertex data, from start to finish.

// Create vertex buffer
GLuint vboVertex;
glGenBuffers(1, &vboVertex);
if ( vboVertex == 0 )
{ /* error! */ }

// Upload vertices to vertex buffer
float vertices[] = {x, y, z, x, y, z, x, y, z......};
glBindBuffer(GL_ARRAY_BUFFER, vboVertex);
glBufferData(GL_ARRAY_BUFFER, NumberOfVertices * 3 * sizeof(float), vertices, GL_STATIC_DRAW);

// You can optionally delete the vertices array at this point to save memory. The video card now has everything.

// Now set the shader attribute (assuming you compiled and loaded your shader)
int attribVertex = glGetAttribLocation(MyShaderProgramID, "Vertex"); // In the vertex shader you should have "attribute vec* Vertex", otherwise this function returns -1.
glEnableVertexAttribArray(attribVertex);

// Bind the vertex buffer to the attribute
glBindBuffer(GL_ARRAY_BUFFER, vboVertex);
glVertexAttribPointer(attribVertex, 3, GL_FLOAT, GL_FALSE, 0, 0);

// Draw
glDrawElements(...) or glDrawArrays(...)

That’s the gist of it. Basically do this for each buffer, or create one massive buffer with everything and define the vertex stride parameter when you call glVertexAttribPointer. You’ll note that I haven’t suffixed any ARB extension to the end of these methods, as you will see many websites do. A long long time ago these were APIs introduced by the ARB to extend OpenGL. They have since become part of the core with OpenGL 2.X. Many people haven’t caught on yet :)
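
For the single interleaved buffer case, the stride variant would look roughly like this (a sketch assuming a packed x,y,z,u,v layout per vertex; attribUv is a made-up name):

GLsizei stride = 5 * sizeof(float); // 3 position floats + 2 UV floats
glBindBuffer(GL_ARRAY_BUFFER, vboInterleaved);
glVertexAttribPointer(attribVertex, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glVertexAttribPointer(attribUv, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));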

Alienizer 109 Jul 23, 2012 at 20:21

Thank you again TheNut. I have it in without errors, but in the draw function, I have…

glDrawArrays(GL_TRIANGLES, 0, totPoly);

and in the vertex shader…

attribute vec3 Vertex;
void main(void)
{
  gl_Position = vec4(Vertex, 1);
}

and in the fragment shader…

void main(void) {
  gl_FragColor = vec4(0.1, 0.4, 0.9, 1.0);
}

But nothing is drawn, just the blue background set by glClearColor

Can you enlighten me a little more please?

TheNut 179 Jul 23, 2012 at 20:47

I presume your vertex coordinates are already in clip space (clamped between -1.0 and 1.0)? You’re not performing any transformations in your vertex shader, so if you’re using actual 3D world coordinates then it’s likely the polygons are getting clipped due to being outside the viewport. Otherwise I would suggest posting a bit more code. I’m not sure right off the bat what could be your problem. So far what you’ve done looks right, but there might be something else missing.

– Edit
BTW, GLSL is quite formal in that you should declare any floating point values with a decimal notation. For example, vec4(Vertex, 1) is considered bad form and will throw errors on certain GLSL profiles. You should declare it as vec4(Vertex, 1.0). Make sure to check for compiler errors to avoid these sort of mishaps.
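
Checking for compile errors looks roughly like this (shader being the GLuint from glCreateShader):

GLint compiled = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (compiled == GL_FALSE)
{
    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    // print the log to find the offending line
}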

Alienizer 109 Jul 23, 2012 at 21:28

Yes, I did 1.0 as opposed to 1, thanks for pointing it out, I usually do this!

I also added glEnableClientState(GL_VERTEX_ARRAY); before glDrawArrays and glDisableClientState(GL_VERTEX_ARRAY); after it. Is this correct? Or do I not need to do this?

I didn’t know I had to clamp my vertices to -1.0…1.0, I just passed them as they are. How do you get them into that range? I mean, a scene bbox could be -100..1000 or even 500…600!?

Alienizer 109 Jul 23, 2012 at 21:32

ok, I did this and now the model is showing…

attribute vec3 Vertex;
void main(void)
{
  gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vec4(Vertex, 1.0);
}

is that what I need to do? Or do I need to do this in the main app instead?

Alienizer 109 Jul 23, 2012 at 21:45

Not all the polys were drawn; I had to do glDrawArrays(GL_TRIANGLES, 0, totPoly*3); is it correct to do totPoly*3?

TheNut 179 Jul 24, 2012 at 03:55

@Alienizer

is that what I need to do

If you want to transform from object space to clip space, yes. This is required by the rasterizer in order to properly clip and draw polygons in the window. You could for example supply clip space vertex positions to begin with and avoid the matrix multiplications (typically for doing user interfaces), but I wasn’t sure what you were doing.

As for glDrawArrays, you supply the total number of vertices, not polygons (glDrawElements similarly takes a count, but of indices). If totPoly represents the number of triangles, then yes: each triangle has 3 vertices, so you should draw totPoly * 3 vertices.
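
In code form (numIndices is a made-up name for the index-buffer case):

glDrawArrays(GL_TRIANGLES, 0, totPoly * 3); // count = vertices, 3 per triangle
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0); // count = indices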

Alienizer 109 Jul 24, 2012 at 04:19

Thanks again TheNut. Now I got it working, but I would like to know if I’m doing it right…

I setup (from your examples) a vertex buffer, normals, colors and UV. It all gets colored properly and the normals are working.

I use the following in the vertex shader for the normal…

nrm = normalize(gl_NormalMatrix * inNormals);

is that correct?

But for the colors to work, I had to duplicate the poly color 3 times, once for each vertex! This doesn’t sound right, unless it’s supposed to be that way so a triangle can have a different color at each vertex to be interpolated across the whole triangle. But in my case, the whole triangle is a solid color. I was wondering if there is a more space-efficient way to do this?

One big thing I’m stuck on is: if I have many textures, can I bind them all and use some kind of texture array? I mean, I created a buffer just like the vertex and normals etc., but with the texture IDs. I thought I could use that to select which texture to use for the current vertex, but I’m failing miserably! Can you guide me please?

TheNut 179 Jul 24, 2012 at 10:45

gl_NormalMatrix, gl_ProjectionMatrix, and gl_ModelViewMatrix are all part of the old fixed-function compatibility profile for GLSL. You shouldn’t really be using those because that means you’re still using OpenGL 1.X functions to set matrices, which is deprecated. If you look at my posted shader example, you’ll see that I was using uniform variables for matrices. Remember what I said, avoid using any built-in functions that are listed as compatibility mode in the GLSL document posted on OpenGL or the Khronos website. Everything you should avoid is all there under chapter 7. Download and read this.

@Alienizer

I had to duplicate the poly color 3 times, once for each vertex! This doesn’t sound right,

This is in fact correct. A vertex shader is just that, a programmable piece of code operating on each vertex. Just like vertex bones, vertex normals, vertex UVs, colours are also per-vertex. Now, I don’t know how you setup your data. It is possible to assign a colour to the entire mesh by simply using a uniform colour variable. If however you have several polygons within the same VBO you’re rendering and each poly can be different, then vertex colouring is the way to go. You could also use a texture map, but this has a couple drawbacks and requirements on your mesh as well.
@Alienizer

if I have many textures, can I bind them all and use some kind of texture array

There’s a limit to the number of textures you’re allowed to use. You have to call glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &myInt) to find out how many you can use at any given time (or use a 3D texture), although in general you should not have such heavy requirements. A shader is meant to be simple, with logic pertaining only to the bare minimum mathematical formulas and procedures required to pull off the effect. The keyword here is effect. It’s the duty of the CPU to sort and organize the polygons being sent to the shaders. If you’re trying to bind every texture and then have the shader find out which sampler to use, then your design is flawed. Your shaders should be more specific than that.
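
Querying and using multiple texture units looks like this, for reference (a sketch; the texture ID variables are made up):

GLint maxUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);

// Bind two textures to two units; the sampler uniforms then get the values 0 and 1.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexID);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, lightmapTexID);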

Stainless 151 Jul 24, 2012 at 11:54

Sounds like you’ve got yourself well and truly confused :D

Okay, let’s go through what happens when you render a triangle.

The pipeline does the following operations

Work out what parameters need passing to vertex shader (in your simple case position, texture coordinates, and colour)

Call vertex shader with these parameters for vertex 1
Extract results

Call vertex shader with these parameters for vertex 2
Extract results

Call vertex shader with these parameters for vertex 3
Extract results

for each pixel within the triangle:
    work out texture coordinates by interpolating across the triangle
    work out the colour value by interpolating across the triangle
    pass these to the pixel shader

That’s why you need a colour per vertex, not per triangle. The pipeline wants vertex colours, and this is incredibly useful: you can use it for ambient occlusion mapping, Gouraud shading, all sorts of things.

When you are optimising your code, the key things to look out for are nasty state changes. If you have a different texture for every polygon, your code is going to be really slow.

There are two standard setups now, depending on if you need to alpha blend the triangles.

Both are based on you throwing triangles at a class that eventually renders them.

If you need alpha blending

1) Setup a default render state
2) When you receive a triangle, check to see if it breaks the render state (different texture is the only one in your case)
If it does
2a) Render all the stuff stored so far, change state to new state
if it does not
2b) Add this triangle to the store
3) At end of rendering, render anything still pending

If you don’t need alpha blending
1) when you get a triangle, check to see if it uses a new texture
if it does
1a) create a new heap for this texture, add triangle to this new heap
if it does not
1b) add to current heap

2) At end of rendering, loop through all the heaps binding the relevant texture, and drawing the triangles
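
A rough C++ sketch of the non-blended (heap-per-texture) version; Triangle, Submit, and Flush are made-up names:

#include <map>
#include <vector>

struct Triangle { float verts[9]; float uvs[6]; }; // hypothetical layout

std::map<GLuint, std::vector<Triangle> > heaps;

void Submit(GLuint texID, const Triangle &tri)
{
    heaps[texID].push_back(tri); // new heap or current heap, automatically
}

void Flush()
{
    for (std::map<GLuint, std::vector<Triangle> >::iterator it = heaps.begin(); it != heaps.end(); ++it)
    {
        glBindTexture(GL_TEXTURE_2D, it->first); // one state change per texture
        // ...upload it->second into a VBO and glDrawArrays(...) here
    }
    heaps.clear();
}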

Alienizer 109 Jul 24, 2012 at 22:46

So if I understand correctly, there’s no need to use any compatibility calls like glPushMatrix, glMatrixMode(GL_MODELVIEW) and things like that.

Then in the Draw function, I have to set up the VBO to hold all the triangles that share the same texture material, bind and activate the texture, and glDrawArrays(GL_TRIANGLES…); then repeat for each texture. This doesn’t seem right to me for some reason! I’m probably still not understanding this, because what if some materials have alpha? Then I have to sort all the triangles from far to near, but if I have to draw them in chunks by texture, they’re no longer sorted by depth. So I’m lost, again. Please help me again!!!

Alienizer 109 Jul 28, 2012 at 03:50

ok, I did tons of reading and research, but I’m still stuck a bit…sorry if I’m so ignorant.

I understand that I have to maintain my own transformation matrices and send them as uniforms to the shader. What I’m not sure of are the values in the matrices.

World = Translation * Rotation * Scale

with...

    [ 1 0 0 x ]        [ r r r 0 ]        [ sx 0  0  0 ]
T = [ 0 1 0 y ]    R = [ r r r 0 ]    S = [ 0  sy 0  0 ]
    [ 0 0 1 z ]        [ r r r 0 ]        [ 0  0  sz 0 ]
    [ 0 0 0 1 ]        [ 0 0 0 1 ]        [ 0  0  0  1 ]

So for the translation matrix, what goes in for the xyz? I mean, where do the numbers come from? the up vector? the cam pos?

And the rotation, how does that one work?

The scale matrix is easy enough, but how is it used? When we zoom in/out?

So if I want to make my own gl_NormalMatrix, I have to compute the transpose of the modelview inverse. So how do I compute the modelview matrix to start with?

Thanks for helping!

TheNut 179 Jul 28, 2012 at 11:19

Take a look at this website for an overview of transformation. There’s more to the transformation equation than a single matrix. You have one for the object, one for the camera, and another for the projection. In some cases, you’ll have an additional bone matrix for skinning purposes. Each matrix contributes to the final location of each vertex.

v' = P * V^-1 * M * v

Where
v is your object space vertex

M is the model matrix, which transforms the object into world space. The rotation changes its angle, scale adjusts its size, and translation affects its position in the world.

V^-1 is the inverted view matrix. It’s inverted because the world around you moves in the opposite direction of where you’re looking at it: when you move left, the world around you is actually moving right. In the view matrix, the 3 rotation axes represent your viewing basis vectors. The Z-axis represents the direction you’re looking at; you should be able to easily visualize this in your head when you think about it. The Y-axis and X-axis represent the up-vector and left-vector. These are used to help orient the viewer. If you’re looking down the horizon and your up-vector is pointing down, then the viewer is upside down. If the up-vector is pointing toward the sky, but the left-vector is pointing to the right, then the viewer is looking backwards. As with all basis vectors, they must be orthogonal to each other and normalized; if not, your view will start to shear and scale abnormally.

The translation of the view matrix is just that: where in the world the camera is. You don’t scale the view matrix. It doesn’t have the effect you might expect of zooming in or out; instead, it will cause your camera to orbit (rotate) differently. If you want to zoom in and out, you should scale your z-axis / direction vector. Remember how I said this is your direction vector? Well, if this vector is short, then objects appear closer. If the vector is longer, objects appear further away.

The projection matrix is quite specific to the maths involved behind perspective or orthographic projection. You would not apply your standard rotation, scaling, and translation here.

FYI, your normal matrix is identical to your modelview matrix, just without the translation or any scaling (you want your normals kept normalized!). The easiest way to handle this is to keep your scale separate from the modelview matrix and convert it from a 4x4 matrix to a 3x3 matrix in your GLSL shader. Then just apply this to your normals, which as you can see is just a simple rotation.
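
In GLSL that boils down to something like this (a sketch assuming the modelview contains rotation and translation only, no non-uniform scale):

attribute vec4 Vertex;
attribute vec3 Normal;
uniform mat4 ProjectionMatrix;
uniform mat4 ModelViewMatrix; // view * model
varying vec3 vNormal;

void main()
{
    gl_Position = ProjectionMatrix * ModelViewMatrix * Vertex;
    // The upper-left 3x3 is pure rotation under this assumption, so it
    // doubles as the normal matrix. With non-uniform scale you would need
    // transpose(inverse(mat3(ModelViewMatrix))) instead (GLSL 1.40+).
    vNormal = normalize(mat3(ModelViewMatrix) * Normal);
}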

Alienizer 109 Jul 28, 2012 at 14:17

Thanks TheNut. I’ve read it and understand most of it, but the one thing I’m stuck on is the world coords. Say my model fits in a bbox of 100x200x300 (width, height, depth) and the lower left corner is at coord 30,60,90 (xyz). Any vertex the vertex shader processes must be converted to world space. I know how to do that with the matrix, simple, but what is world space? Is it a -1..+1 system? If so, what’s the logic to transform a point into that world space? I guess I lack the way OpenGL sees things, and what it expects.

TheNut 179 Jul 29, 2012 at 13:58

@Alienizer

Any vertex the vertex shader process must be converted to world space

Correction, it must be converted to clip space. Clip space is where the GPU will determine if the vertex is within the viewing frustum and clip it if it is not (hence clip space). The idea is that your (x,y,z,w) vertex coordinate in clip space will be within the range -w and +w. When you perform perspective division x/w, y/w, and z/w (note, this is automatically done for you by the GPU after the vertex shader stage), then the vertex coordinate should map into the normalized device coordinates, where -1 <= (x,y,z) <= 1. This is then multiplied by the viewport matrix (again, automatically done for you by the GPU) to give you the window coordinates. At this point, your fragment shader gets called in order to colour the pixel at (x,y) and depth z.
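
Written out as plain code, the two automatic steps look like this (for intuition only; clip is the vec4 your vertex shader output, and vpX/vpY/vpW/vpH are the glViewport parameters):

// Clip space -> normalized device coordinates (perspective division)
float ndcX = clip.x / clip.w; // each in [-1, 1] if the vertex is visible
float ndcY = clip.y / clip.w;
float ndcZ = clip.z / clip.w;

// NDC -> window coordinates (viewport transform)
float winX = vpX + (ndcX * 0.5f + 0.5f) * vpW;
float winY = vpY + (ndcY * 0.5f + 0.5f) * vpH;
float depth = ndcZ * 0.5f + 0.5f; // assuming the default depth range [0, 1]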

Alienizer 109 Jul 29, 2012 at 15:51

Now I get it. Thanks! It seemed so complicated at first: no gl_Normal, no gl_TexCoord and all that good stuff. But now I know how to make my own and pass them to the shaders. Things like out vec4 MyColor; in the fragment shader: who would have known that MyColor gets written as the fragment color, just like gl_FragColor did? It’s not explained anywhere.

Now everything is working fine, but I still have a problem with the way to draw all the polys using VBOs as you suggested in your reply #17. I have it working to draw the polys, but no textures.

The old way, you used glBindTexture(GL_TEXTURE_2D, …); before glBegin(GL_TRIANGLES); and that worked well. But how do you do this on a whole block of polys where many of them don’t share the same texture, colour, and normals?