Vertex and pixel shaders

Gnarlyman 101 May 17, 2012 at 03:53

Starting to look at vertex and pixel shaders…

Pardon if this is really basic, but–

What’s the interplay/flow between a vertex shader and a pixel shader? What I can’t quite understand right now is how they are called an equal number of times; i.e., there are only 3 vertices usually, but many pixels in between those vertices. How can the vertex and pixel shaders be called one right after the other over and over? Does each triangle get its three vertices run through the vertex shader, and then the pixel shader works over the pixels it covers? Or whatever…

Probably not asking the question right, but, well, trying to get the question out somehow.

14 Replies


Reedbeta 168 May 17, 2012 at 06:19

They’re not called equal numbers of times; the vertex shader’s called once for each vertex and the pixel shader once for each pixel covered by each triangle. Each triangle gets its three verts run through the vertex shader, then the rasterizer (part of the hardware) figures out which pixels that triangle covers and runs all those pixels through the pixel shader.

The inputs to the pixel shader are based on the outputs from the vertex shader, but they are interpolated linearly across the triangle, to generate values for all the pixels between the verts.
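
To make that concrete, here is a minimal HLSL-style sketch of the flow (the constant buffer, texture, and sampler names are just placeholders, not from any real project): the vertex shader runs once per vertex and writes out a UV, the rasterizer interpolates that UV across the triangle, and the pixel shader then runs once per covered pixel with the interpolated value.

cbuffer PerObject { float4x4 worldViewProj; }; // assumed transform constant
Texture2D diffuseTexture;                      // assumed texture and sampler
SamplerState linearSampler;

struct VSOutput
{
    float4 position : SV_POSITION; // clip-space position used by the rasterizer
    float2 uv       : TEXCOORD0;   // interpolated across the triangle
};

// Called once per vertex.
VSOutput VSMain(float3 position : POSITION, float2 uv : TEXCOORD0)
{
    VSOutput output;
    output.position = mul(float4(position, 1.0), worldViewProj);
    output.uv = uv;
    return output;
}

// Called once per pixel the triangle covers; input.uv arrives already interpolated.
float4 PSMain(VSOutput input) : SV_TARGET
{
    return diffuseTexture.Sample(linearSampler, input.uv);
}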

Gnarlyman 101 May 18, 2012 at 15:15

Ha, thanks Reedbeta; that helps a lot. That’s sort of what I figured, but for some reason, in all the various tutorials and whatnot I’ve been reading up on, those facts were really never explicitly set forth. Thanks again.

TheNut 179 May 18, 2012 at 16:19

It also helps to read the specification documents (OpenGL or DirectX). It can be daunting due to the amount of reading involved, but it does a good job clearing up a lot of questions. You’re more likely to learn things in official documentation than you would anywhere else. Tutorials are great if you just want to get the gist of it.

Gnarlyman 101 May 19, 2012 at 00:20

Yes, I agree; tutorials usually seem to just give you “the gist” of it (annoyingly). I also had this Q…

In my shader learning here, I’m running across Texcoord0 and Texcoord1 a whole bunch. Like the following code from a tutorial, which wasn’t really explained that well…

BindChannels {
    Bind "Vertex", vertex
    Bind "texcoord", texcoord0
    Bind "texcoord1", texcoord1
}

BindChannels {
    Bind "Vertex", vertex
    Bind "texcoord", texcoord
    Bind "Color", color
}

I assume those are the X,Y (rather, U,V) of the particular UV map? Or are texcoord0 and texcoord1 each individual SETS of UV coords? Once again, the tutorials I’ve started to read on these issues never really explain what’s happening in a clear and detailed way. It makes reading the example code really confusing if one’s just entering the shader arena.

Reedbeta 168 May 19, 2012 at 00:41

Things like TEXCOORD0 and TEXCOORD1, COLOR0, etc. (called “registers” or “semantics”) usually represent a 4-component vector, with XYZW components. It’s possible to store whatever data you wish in this vector. Commonly, you would store your UVs in the first two components (the XY components) of TEXCOORD0, for example. The other two components (ZW) could be left unused, or you could put something else in there if you wish. And if you have more than 4 pieces of data to send, you can use TEXCOORD1, TEXCOORD2, etc.

It’s worth noting that the names of these semantics are kind of a throwback to days of fixed-function hardware where there were specific registers for specific purposes, such as color, texture coordinates, normals, etc. In modern times you can store any data you like in any of the registers, so the semantic is just a label to match things up from vertex buffer to vertex shader, or from vertex shader to pixel shader. In fact, Direct3D10-11 HLSL lets you make up your own semantics. Instead of TEXCOORD0 you could write MY_AWESOME_VALUES or whatever you want. I’m not sure about GLSL.
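
For instance, a vertex-shader output struct in D3D10/11-style HLSL might pack things like this (the layout below is purely hypothetical, just to illustrate the point):

struct VSOutput
{
    float4 position   : SV_POSITION;
    float4 uvAndExtra : TEXCOORD0;         // xy = UVs, zw = whatever else you want to squeeze in
    float4 moreData   : TEXCOORD1;         // four more arbitrary floats if you need them
    float3 worldPos   : MY_AWESOME_VALUES; // custom semantic name, D3D10/11 HLSL only
};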

rouncer 104 May 19, 2012 at 01:32

When you get into the general-purpose GPU type stuff, you’ll be wanting to use these registers for storing absolutely anything, including geometry, IDs, all sorts.

TheNut 179 May 19, 2012 at 02:59

I take it this is Unity? I’ve seen the page you’re reading from. There’s a key point listed here.

“BindChannels has no effect when programmable vertex shaders are used, as in that case bindings are controlled by vertex shader inputs.”

So it seems you’re working with the fixed-function pipeline, or possibly Unity has some sort of “fixed shader” running behind the scenes. The “texcoord” bindings do refer to the (u,v,w) texture coordinates for each texture unit for the purposes of multitexturing. In traditional fixed-function graphics, there’s typically a 1:1 mapping between texture coordinates and textures used. When you work with programmable shaders (that is, you write your own vertex and fragment shaders), you’re in full control over how things work. In most scenarios, you use one set of UVs for several different textures. Fixed function is a bit dumb and doesn’t know what your intentions are, so it generally follows the 1:1 mapping with simple blending operations to mix all the textures together.
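
A quick sketch of that last point, written in HLSL-style syntax since that’s what the semantics discussion above used (the texture names are made up): a pixel shader can reuse one UV set for several textures, which the fixed-function 1:1 mapping can’t express.

Texture2D baseMap;       // hypothetical textures, purely for illustration
Texture2D detailMap;
SamplerState linearSampler;

float4 PSMain(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    // One UV set driving two textures, with the detail map tiled.
    float4 base   = baseMap.Sample(linearSampler, uv);
    float4 detail = detailMap.Sample(linearSampler, uv * 8.0);
    return base * detail;
}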

In a programmable graphics pipeline, at least with OpenGL, you are in full control of the inputs, which are called “attributes”. There are no keywords for “vertex” or “texcoord”, but for obvious reasons it helps, when you write your shaders, to use identifiable variable names. The bindings are done in C/C++, so you already know in advance what goes where. If you were binding the shader inputs yourself from C/C++, it would be clear what “texcoord” and “texcoord1” mean, because you would be explicitly telling the graphics API what the associations are: “Bind my normals here, bind my texture UVs there, etc.”. Since you’re using Unity, you are conforming to Unity’s mapping scheme.

Gnarlyman 101 May 19, 2012 at 03:12

Ah…that makes a lot of sense. Thanks all, thanks a bunch.

TheNut: so, what IS channel binding? Like the following code, which I pasted above, with the comments included:

// Maps the first UV set to the first texture stage
// and the second UV set to the second texture stage
BindChannels {
    Bind "Vertex", vertex
    Bind "texcoord", texcoord0
    Bind "texcoord1", texcoord1
}

// Maps the first UV set to all texture stages
// and uses vertex colors
BindChannels {
    Bind "Vertex", vertex
    Bind "texcoord", texcoord
    Bind "Color", color
}

Not quite understanding some elements on that page; things like “target hardware” or whatnot, UV “sets”, etc.

Stainless 151 May 20, 2012 at 10:20

The number of inputs available in a GPU varies massively. Modern PC graphics cards can do all sorts of fun things, but the graphics cards in mobile phones are a lot more primitive.

BindChannels seems to be Unity’s way of coping with this issue.

When you think about what is actually happening in a pixel shader, you get a block of “stuff” in and output a colour.

How the “stuff” is organised needs to be defined in the shader code, but the shader code is a completely separate entity from the code running on the CPU.

So how do you get the “stuff” organised so that the CPU can supply the GPU with something that matches what the GPU expects? After all, if you pass in a colour and a texture coordinate in the wrong order, the pixel shader doesn’t know you’ve screwed up; it just does what it’s been told with the data you’ve supplied.

There are lots of ways of doing this; it is not standardised. XNA has one way of doing it, OpenGL another, OpenGL ES another, etc.
Looks like Unity has another.

The first structure is saying “give the shader a vertex position and two sets of texture coordinates”.
What that actually gets compiled to should be ignored while you are learning. There are many different possibilities depending on the shader compiler.

The second is saying “give the shader a vertex position, one texture coordinate, and one colour”.

That’s all it really means.
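
In vertex-shader terms, those two layouts would roughly correspond to input structs like the ones below. This is just a hedged HLSL-style sketch, not anything Unity actually generates.

// First layout: position + two UV sets
struct VertexInputA
{
    float3 position : POSITION;
    float2 uv0      : TEXCOORD0;
    float2 uv1      : TEXCOORD1;
};

// Second layout: position + one UV set + a vertex colour
struct VertexInputB
{
    float3 position : POSITION;
    float2 uv       : TEXCOORD0;
    float4 color    : COLOR0;
};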

Gnarlyman 101 May 20, 2012 at 18:47

Thanks Stainless, very appreciated. That makes sense. There always seems to be a clearer explanation out there than what’s usually presented in tutorials and documentation, lol. I usually find things to be a bit simpler in concept than they initially seem.

Gnarlyman 101 May 20, 2012 at 19:19

PS, on shaders: so, I assume shaders can be/are used to generate textures sometimes; a form of procedural generation, really. Can one generate noise via shaders, for instance? I know there are noise generation algorithms, and I sort of assume they are implemented using shaders. But of course, that’s only noise; I’m just curious about textures via shaders in general.

Reedbeta 168 May 20, 2012 at 23:30

It’s possible to create a render target, do some drawing into it and then later turn around and use the rendered image as a texture. This is often called “render-to-texture”. So yes, shaders can be used to generate textures using this approach. Noise can indeed be implemented in a pixel shader and generated this way, though it’s not completely straightforward - most noise algorithms (and indeed most image processing algorithms in general) are usually described as if you were going to implement them on the CPU, so translating that into a GPU implementation can be nontrivial (especially for an efficient GPU implementation).
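
As a very rough illustration of the pixel-shader side, here is the kind of cheap hash-based pseudo-random pattern people often start from (a common trick, not a proper gradient-noise implementation); rendered into a render target, it gives you a generated texture. The function and entry-point names here are made up.

// Cheap per-pixel pseudo-random value derived from the UV.
float Hash(float2 uv)
{
    return frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453);
}

float4 PSNoise(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    float n = Hash(uv); // greyscale white-noise value in [0, 1)
    return float4(n, n, n, 1.0);
}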

Stainless 151 May 21, 2012 at 08:32

Be warned though that render-to-texture is NOT supported as standard on OpenGL ES.

Khronic… err, sorry, Khronos was very limp-wristed when defining the API and didn’t require it.

Which is a pain in the proverbial.

Generating images in shaders can still be done though, you just have to do it straight to the screen.

This vid http://www.youtube.com/watch?v=ikOe0gM8nQE shows an example. The shader is given pixel x, pixel y, and time. No textures are used.

I found http://forthsalon.appspot.com/ and loved it, so ended up writing some code to convert Forth to GLSL.

What can I say, I’m a strange guy.
:huh:

Gnarlyman 101 May 23, 2012 at 01:26

Thanks Reedbeta, that helps a lot. I’d indeed like to play around with texture generation via shaders as I get more experienced with them. I suppose the trick, as usual with shaders, is that you don’t have the same certainty that the texture will come out right in every situation as you would have with a pre-made one. However, I think there’s a lot of potential in shader-generated textures.

Stainless: wow…the forthsalon joint is pretty rad! I’m checking it out right now. Totally up my alley, lol. Wonder who came up with that idea.