# Vertex and pixel shaders

14 replies to this topic

### #1Gnarlyman

Valued Member

• Members
• 109 posts
• LocationMilwaukee

Posted 17 May 2012 - 03:53 AM

Starting to look at vertex and pixel shaders...

Pardon if this is really basic, but--

What's the interplay/flow between a vertex shader and a pixel shader? What I can't quite understand right now is how they could be called an equal number of times; i.e., there are usually only 3 vertices, but many pixels in between those vertices. How can the vertex and pixel shaders be called one right after the other over and over? Does each triangle get its three vertices run through the vertex shader, then the pixel shader works over the pixels in between? Or whatever...

Probably not asking the question right, but, well, trying to get the question out somehow.

### #2Reedbeta

DevMaster Staff

• 5305 posts
• LocationBellevue, WA

Posted 17 May 2012 - 06:19 AM

They're not called equal numbers of times; the vertex shader's called once for each vertex and the pixel shader once for each pixel covered by each triangle. Each triangle gets its three verts run through the vertex shader, then the rasterizer (part of the hardware) figures out which pixels that triangle covers and runs all those pixels through the pixel shader.

The inputs to the pixel shader are based on the outputs from the vertex shader, but they are interpolated linearly across the triangle, to generate values for all the pixels between the verts.
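
To make that flow concrete, here's a rough software sketch (plain Python, toy pass-through shaders, no perspective correction or proper fill rules — an illustration, not how real hardware is written): three vertex-shader invocations per triangle, then one pixel-shader invocation per covered pixel, with the vertex outputs interpolated barycentrically in between.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p with respect to triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    w_b = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    w_c = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return (1.0 - w_b - w_c, w_b, w_c)

def vertex_shader(position):
    """Toy vertex shader: one invocation per vertex; here just a pass-through."""
    return position

def pixel_shader(color):
    """Toy pixel shader: one invocation per covered pixel; here a pass-through."""
    return color

def rasterize(tri_positions, tri_colors, width, height):
    """Vertex shader runs 3 times, then the rasterizer finds the covered
    pixels and runs the pixel shader once per pixel with interpolated inputs."""
    positions = [vertex_shader(v) for v in tri_positions]  # 3 calls
    image = {}
    for y in range(height):
        for x in range(width):
            w = barycentric((x + 0.5, y + 0.5), *positions)
            if all(wi >= 0.0 for wi in w):  # pixel centre inside the triangle
                # Linear interpolation of the per-vertex outputs.
                color = tuple(sum(wi * c[i] for wi, c in zip(w, tri_colors))
                              for i in range(3))
                image[(x, y)] = pixel_shader(color)  # 1 call per covered pixel
    return image
```

Real GPUs do this massively in parallel and use perspective-correct interpolation, but the calling pattern — few vertex invocations, many pixel invocations — is the same.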
reedbeta.com - developer blog, OpenGL demos, and other projects

### #3Gnarlyman

Posted 18 May 2012 - 03:15 PM

Ha, thanks Reedbeta; that helps a lot. That's sort of what I figured, but for some reason, in all the various tutorials and whatnot I've been reading up on, those facts were really never explicitly set forth. Thanks again.

### #4TheNut

Senior Member

• Moderators
• 1695 posts
• LocationThornhill, ON

Posted 18 May 2012 - 04:19 PM

It also helps to read the specification documents (OpenGL or DirectX). It can be daunting due to the amount of reading involved, but it does a good job clearing up a lot of questions. You're more likely to learn things in official documentation than you would anywhere else. Tutorials are great if you just want to get the gist of it.
http://www.nutty.ca - Being a nut has its advantages.

### #5Gnarlyman

Posted 19 May 2012 - 12:20 AM

Yes, I agree; tutorials seem to usually just get "the gist" of it (annoyingly). I also had this Q...

In my shader learning here, I'm running across Texcoord0 and Texcoord1 a whole bunch. Like the following code from a tutorial, which wasn't really explained that well...

    BindChannels {
        Bind "Vertex", vertex
        Bind "texcoord", texcoord0
        Bind "texcoord1", texcoord1
    }

    BindChannels {
        Bind "Vertex", vertex
        Bind "texcoord", texcoord
        Bind "Color", color
    }

I assume those are the X,Y (rather, U,V) of the particular UV map? Or are texcoord0 and texcoord1 each individual SETS of UV coords? Once again, the tutorials I've started to read on these issues never really explain what's happening in a clear and detailed way. It makes reading the example code really confusing if one's just entering the shader arena.

### #6Reedbeta

Posted 19 May 2012 - 12:41 AM

Things like TEXCOORD0 and TEXCOORD1, COLOR0, etc. (called "registers" or "semantics") usually represent a 4-component vector, with XYZW components. It's possible to store whatever data you wish in this vector. Commonly, you would store your UVs in the first two components (the XY components) of TEXCOORD0, for example. The other two components (ZW) could be left unused, or you could put something else in there if you wish. And if you have more than 4 pieces of data to send, you can use TEXCOORD1, TEXCOORD2, etc.

It's worth noting that the names of these semantics are kind of a throwback to days of fixed-function hardware where there were specific registers for specific purposes, such as color, texture coordinates, normals, etc. In modern times you can store any data you like in any of the registers, so the semantic is just a label to match things up from vertex buffer to vertex shader, or from vertex shader to pixel shader. In fact, Direct3D10-11 HLSL lets you make up your own semantics. Instead of TEXCOORD0 you could write MY_AWESOME_VALUES or whatever you want. I'm not sure about GLSL.
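
To picture the packing, here's a toy sketch (plain Python with made-up names, not any real graphics API) of a vertex whose semantics each carry one 4-component vector, with the UVs in the first two components of TEXCOORD0 and spare data tucked into otherwise-unused components:

```python
# Toy model of per-vertex data: each semantic is one 4-component register.
# The names and layout are illustrative, chosen for this example only.
def pack_texcoord(u, v, spare_z=0.0, spare_w=0.0):
    """Pack a UV pair, plus optional spare values, into one (x, y, z, w) vector."""
    return (u, v, spare_z, spare_w)

vertex = {
    "POSITION":  (1.0, 2.0, 3.0, 1.0),                 # xyz position, w = 1
    "TEXCOORD0": pack_texcoord(0.25, 0.75),            # UVs in xy; zw unused
    "TEXCOORD1": pack_texcoord(0.0, 1.0, 5.0, -1.0),   # zw reused for extra data
}

uv = vertex["TEXCOORD0"][:2]  # the shader reads just the components it needs
```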

### #7rouncer

Senior Member

• Members
• 2718 posts

Posted 19 May 2012 - 01:32 AM

When you get into the general purpose GPU type stuff, you'll be wanting to use these registers for storing absolutely anything, including geometry, IDs, all sorts.
You used to be able to fit a game on a disk, then on a CD, then on a DVD; now you can barely fit one on your hard drive.

### #8TheNut

Posted 19 May 2012 - 02:59 AM

I take it this is Unity? I've seen the page you're reading from. There's a key point listed here.

"BindChannels has no effect when programmable vertex shaders are used, as in that case bindings are controlled by vertex shader inputs."

So it seems you're working with the fixed-function pipeline, or possibly Unity has some sort of "fixed shader" running behind the scenes. The "texcoords" do refer to the (u,v,w) texture coordinates for each texture unit, for the purposes of multitexturing. In traditional fixed-function graphics, there's typically a 1:1 mapping between texture coordinates and the textures used. When you work with programmable shaders (that is, you write your own vertex and fragment shaders), you're in full control over how things work. In most scenarios, you use one set of UVs for several different textures. Fixed function is a bit dumb and doesn't know what your intentions are, so it generally follows the 1:1 mapping, with simple blending operations to mix all the textures together.

In a programmable graphics pipeline, at least with OpenGL, you are in full control of the inputs, which are called "attributes". There are no keywords like "vertex" or "texcoord", but for obvious reasons it helps to use identifiable variable names when you write your shaders. The bindings are done in C/C++, so you already know in advance what goes where. If you were programming shaders alongside your own C/C++, it would be obvious what "texcoord" and "texcoord1" mean, because you would be explicitly telling the graphics API what the associations are: "bind my normals here, bind my texture UVs there, etc." Since you're using Unity, you are conforming to Unity's mapping scheme.
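
In C/C++ against OpenGL that association is typically made with a call like glBindAttribLocation (or, in later GLSL versions, layout qualifiers in the shader itself). Here's a toy Python sketch of the bookkeeping, with made-up attribute names, just to show that the names are labels chosen by the application, not by the API:

```python
# Toy stand-in for the C-side binding table: the application, not the API,
# decides which numbered vertex attribute slot feeds which shader attribute.
attribute_bindings = {}

def bind_attrib_location(name, location):
    """Mimics the role of glBindAttribLocation: tie a shader attribute
    name to a numbered vertex attribute slot before the program is linked."""
    attribute_bindings[name] = location

# "Bind my positions here, bind my normals here, bind my texture UVs there..."
bind_attrib_location("position", 0)
bind_attrib_location("normal", 1)
bind_attrib_location("texcoord", 2)
```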

### #9Gnarlyman

Posted 19 May 2012 - 03:12 AM

Ah...that makes a lot of sense. Thanks all, thanks a bunch.

TheNut: so, what IS channel binding? Like the following code, which I pasted above, with the comments included:

    // Maps the first UV set to the first texture stage
    // and the second UV set to the second texture stage
    BindChannels {
        Bind "Vertex", vertex
        Bind "texcoord", texcoord0
        Bind "texcoord1", texcoord1
    }

    // Maps the first UV set to all texture stages
    // and uses vertex colors
    BindChannels {
        Bind "Vertex", vertex
        Bind "texcoord", texcoord
        Bind "Color", color
    }

Not quite understanding some elements on that page; things like "target hardware" or whatnot, UV "sets", etc.

### #10Stainless

Member

• Members
• 575 posts
• LocationSouthampton

Posted 20 May 2012 - 10:20 AM

The number of inputs available in a GPU varies massively. Modern PC graphics cards can do all sorts of fun things, but the graphics cards in mobile phones are a lot more primitive.

BindChannels seem to be a Unity way of coping with this issue.

When you think of what is actually happening in a pixel shader, you get a block of "stuff" in and output a colour.

How the "stuff" is organised needs to be defined in the shader code, but the shader code is a completely separate entity from the code running on the CPU.

So how do you get the "stuff" organised so that the CPU can supply the GPU with something that matches what the GPU expects? After all, if you pass in a colour and a texture coordinate in the wrong order, the pixel shader doesn't know you've screwed up; it just does what it's been told with the data you have supplied.

There are lots of ways of doing this; it is not standardised. XNA has one way of doing it, OpenGL another, OpenGL ES another, etc.
Looks like Unity has another.

The first structure is saying "give the shader a vertex position and two sets of texture coordinates".
What that actually gets compiled to should be ignored while you are learning. There are many different possibilities depending on the shader compiler.

The second is saying "give the shader a vertex position, one texture coordinate, and one colour".

That's all it really means.

### #11Gnarlyman

Posted 20 May 2012 - 06:47 PM

Thanks Stainless, very appreciated. That makes sense. There always seems to be a clearer explanation out there than what's usually presented in tutorials and documentation, lol. I usually find things to be a bit simpler in concept than they initially seem.

### #12Gnarlyman

Posted 20 May 2012 - 07:19 PM

PS--on shaders: so, I assume shaders can be (and are) used to generate textures sometimes; a form of procedural generation, really. Can one generate noise via shaders, for instance? I know there are noise generation algorithms, and I sort of assume they are implemented using shaders. But of course, that's only noise; I'm just curious about textures via shaders in general.

### #13Reedbeta

Posted 20 May 2012 - 11:30 PM

It's possible to create a render target, do some drawing into it and then later turn around and use the rendered image as a texture. This is often called "render-to-texture". So yes, shaders can be used to generate textures using this approach. Noise can indeed be implemented in a pixel shader and generated this way, though it's not completely straightforward - most noise algorithms (and indeed most image processing algorithms in general) are usually described as if you were going to implement them on the CPU, so translating that into a GPU implementation can be nontrivial (especially for an efficient GPU implementation).
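
As a sketch of the idea, here's 2D value noise evaluated independently per texel, the way a pixel shader would run it. This is plain Python standing in for shader code, with a cheap sine-based hash as the lattice randoms — a common shader-style trick, not production-quality noise:

```python
import math

def hash2(x, y):
    """Cheap deterministic hash to [0, 1); a stand-in for a lattice of randoms."""
    h = math.sin(x * 127.1 + y * 311.7) * 43758.5453
    return h - math.floor(h)

def smoothstep(t):
    """Hermite fade curve, used to blend between noise lattice points."""
    return t * t * (3.0 - 2.0 * t)

def value_noise(x, y):
    """2D value noise: blend the hash values at the four surrounding lattice
    corners. Each call is independent, like a pixel-shader invocation."""
    xi, yi = math.floor(x), math.floor(y)
    tx, ty = smoothstep(x - xi), smoothstep(y - yi)
    a = hash2(xi, yi)
    b = hash2(xi + 1, yi)
    c = hash2(xi, yi + 1)
    d = hash2(xi + 1, yi + 1)
    top = a + (b - a) * tx
    bottom = c + (d - c) * tx
    return top + (bottom - top) * ty

# "Render to texture": evaluate the noise once per texel of a small image.
texture = [[value_noise(x * 0.37, y * 0.37) for x in range(8)] for y in range(8)]
```

A real render-to-texture pass would write these values into a render target on the GPU rather than a Python list, but the per-pixel structure of the computation is the same.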

### #14Stainless

Posted 21 May 2012 - 08:32 AM

Be warned though that render to texture is NOT supported as standard on OpenGL ES 1.x (framebuffer objects only became core in ES 2.0).

Khronic... err, sorry, Khronos were very lax when defining the API and didn't require it.

Which is a pain in the proverbial.

Generating images in shaders can still be done though, you just have to do it straight to the screen.

This vid shows an example. The shader is given pixel x, pixel y, and time. No textures are used.

I found http://forthsalon.appspot.com/ and loved it, so ended up writing some code to convert Forth to GLSL.

What can I say, I'm a strange guy.

### #15Gnarlyman

Posted 23 May 2012 - 01:26 AM

Thanks Reedbeta, that helps a lot. I'd indeed like to play around with texture generation via shaders as I get more experienced with them. I suppose the trick, as usual with shaders, is that you don't have the usual certainty of knowing the texture will come out right in every situation, as you would with a pre-made one. However, I think there's a lot of potential in shader-generated textures.

Stainless: wow...the forthsalon joint is pretty rad! I'm checking it out right now. Totally up my alley, lol. Wonder who came up with that idea.
