0
101 Sep 19, 2008 at 14:22

Hello.
After finally completing the shadow volume example, and before setting off down the long road of shadow mapping, I would like to build a small water example.

I’m reading the ShaderX2 article. I’ll explain what I’m doing, hoping that someone will spot the error.

Rendering the Underwater Scene (First Render Pass)
We want to simulate a close-to-reality view into the water. The seabed and objects
like fish or plants should be distortable by faked bumps (see the section “Animated
Surface Bumping”). So, we have to render the underwater scene view each
frame again into a render-target texture. For this job, we use the original camera.
A clip plane cuts off the invisible part of the scene above the water surface.

For this, I created another texture and, after grabbing the back buffer, copied it in with CopyResource, and it works. So no render target is needed (I hope).

The second pass is
Modifications Dependent on Water Depth
The article says to use a vertex and pixel shader, but I didn’t understand where: on the water-grid mesh? In any case, when I use it, I can’t see the plane on the screen.

// VERTEX SHADER (for DX9 hardware and better) VS-1
// FUNCTION: Modifies underwater scene texture
//
// INPUT:
// v0 = position (3 floats)
// c0 - c3 = world/view/proj matrix
// version instruction
vs_2_0
// (Figure 4 in the article: reduction formula exp(-d * ))
// declare registers
dcl_position v0
// transform position into projection space
dp4 oPos.x, v0, c0 // c0 = first row of transposed world/view/proj-matrix.
dp4 oPos.y, v0, c1 // c1 = second row of transposed world/view/proj-matrix.
dp4 oPos.z, v0, c2 // c2 = third row of transposed world/view/proj-matrix.
dp4 oPos.w, v0, c3 // c3 = fourth row of transposed world/view/proj-matrix.
// transfer position to pixel shader
mov oT0, v0 // We pass the vertex position to the pixel shader as tex coord.


I translated it like this.

float4x4    ViewMatrix;
float4x4    WorldMatrix;
float4x4    ProjMatrix;
float4      LightDirection;
float4      MaterialColor;
float3      EyePosition;
texture2D   Texture;
texture2D   CurrentBackbuffer;

SamplerState SamplText
{
Filter = MIN_MAG_MIP_LINEAR;
};

struct VS_INPUT
{
float4  Pos :   POSITION;
float3  Tex :   TEXCOORD;
float3  Nor :   NORMAL;
};

struct PS_INPUT
{
float4  Pos :   SV_POSITION;
float4  WorldPos:   WORLDPOSITION;
float3  Tex :   TEXCOORD;
float3  Nor :   NORMAL;
};

PS_INPUT vs_main(VS_INPUT Input)
{
PS_INPUT Out = (PS_INPUT)0;
Out.Pos = mul(mul(mul(Input.Pos,WorldMatrix),ViewMatrix),ProjMatrix);
Out.WorldPos = mul(Input.Pos,WorldMatrix);
Out.Tex = Input.Tex;
Out.Nor = mul(Input.Nor,WorldMatrix);
return Out;
}

PS_INPUT vs_plane(VS_INPUT input)
{
PS_INPUT output = (PS_INPUT)0;

output.Pos = mul(input.Pos, transpose(WorldMatrix));
output.Pos = mul(output.Pos, transpose(ViewMatrix));
output.Pos = mul(output.Pos, transpose(ProjMatrix));

output.Nor = mul( input.Nor, WorldMatrix );

output.Tex = float3(output.Pos.x,output.Pos.y,output.Pos.z);
return output;
}

float4 ps_plane(PS_INPUT input) :   SV_TARGET
{
return CurrentBackbuffer.Sample(SamplText,saturate(input.Tex));
}

float4 ps_main(PS_INPUT In) :   SV_TARGET
{
return Texture.Sample(SamplText,In.Tex);
}

technique10 WaterEffect
{
Pass PositionTextured
{
SetVertexShader( CompileShader( vs_4_0, vs_main() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, ps_main() ) );
}

Pass Plane
{
SetVertexShader( CompileShader( vs_4_0, vs_plane() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, ps_plane() ) );
}
}


The second part of the article says the modification of the texture depends on the water depth.
OK, but where do I have to apply the texture? On the grid-plane mesh?
Either way, the second pass does the above, but it doesn’t seem to work.

Hints?

70 Replies

0
104 Sep 19, 2008 at 15:28

I know the algorithm you’re talking about; you’re doing well if you get this into your game, it looks pretty cool, and I imagine it looks really nice with specular.
I haven’t implemented it myself yet, I’ve only got a rough idea in my head of how it works, so I can’t really help you.
Good going though, hope you get it into your game system.

0
101 Sep 19, 2008 at 16:20

I’m sorry, but it’s not entirely clear to me what your problem is. You’re trying to render realistic water when your viewpoint is above the water, right (or at least not under the water surface)?

I’m also not totally familiar with the wording the book is using. I do know exactly the technique you’re talking about, though, and it was MUCH easier to implement than I had predicted (much less complicated than shadow volumes, anyway).

So is your first problem that you need to get the image of the refraction (the underwater scene) onto the water itself? That involves a much simpler concept than the book may have you believe.

PS_INPUT vs_plane(VS_INPUT input)
{
PS_INPUT output = (PS_INPUT)0;

output.Pos = mul(input.Pos, transpose(WorldMatrix));
output.Pos = mul(output.Pos, transpose(ViewMatrix));
output.Pos = mul(output.Pos, transpose(ProjMatrix));

output.Nor = mul( input.Nor, WorldMatrix );

output.Tex = float3(output.Pos.x,output.Pos.y,output.Pos.z);
return output;
}

float4 ps_plane(PS_INPUT input) :   SV_TARGET
{
return CurrentBackbuffer.Sample(SamplText,saturate(input.Tex));
}


I can kind of see where you thought that might be right, placing the texture coordinates according to world space, but I can’t mathematically see how that would work, and it’s certainly more complicated than the way I did it.

//modify PS_INPUT to include another float4 called Pos2

PS_INPUT vs_plane(VS_INPUT input)
{
PS_INPUT output = (PS_INPUT)0;

output.Pos = mul(input.Pos, transpose(WorldMatrix));
output.Pos = mul(output.Pos, transpose(ViewMatrix));
output.Pos = mul(output.Pos, transpose(ProjMatrix));

output.Pos2 = output.Pos;

output.Nor = mul( input.Nor, WorldMatrix );

//this line does nothing important, you can remove it
//output.Tex = float3(output.Pos.x,output.Pos.y,output.Pos.z);
return output;
}

float4 ps_plane(PS_INPUT input) :   SV_TARGET
{
float2 ViewTexC = 0.5 * input.Pos2.xy / input.Pos2.w + float2( 0.5, 0.5 );
ViewTexC.y  =   1.0f - ViewTexC.y;

return CurrentBackbuffer.Sample(SamplText,ViewTexC);
//the "saturate" instruction is not needed because the projective texture coords cannot go past 0-1
}


It’s funny you should mention shadow maps, because both of these techniques use the same concept: projective texturing. The camera is essentially projecting the image from your viewpoint onto the water surface. All it takes to modify it is a += or -=, and there are literally hundreds of ways to do that.

EDIT: by the way, what’s with the “transpose” calls? The raw matrices themselves should correctly transform the geometry.

0
101 Sep 19, 2008 at 17:04

Finally, someone who can help me!
Thank you, first of all!!!

I will now try to explain the problem.

Now, if you read the article, it divides the effect into several passes:

Rendering the Underwater Scene (First Render Pass)
I do this by using the CopyResource function on the back buffer, without using a render target or rendering the same geometry again.

Modifications Dependent on Water Depth
(Second Render Pass)

This is what I do not understand. As it says in the shader:

// VERTEX SHADER (for DX9 hardware and better) VS-1
// FUNCTION: Modifies underwater scene texture
//
// INPUT:
// v0 = position (3 floats)
// c0 - c3 = world/view/proj matrix
// version instruction
vs_2_0
// (Figure 4 in the article: reduction formula exp(-d * ))
// declare registers
dcl_position v0
// transform position into projection space
dp4 oPos.x, v0, c0 // c0 = first row of transposed world/view/proj-matrix.
dp4 oPos.y, v0, c1 // c1 = second row of transposed world/view/proj-matrix.
dp4 oPos.z, v0, c2 // c2 = third row of transposed world/view/proj-matrix.
dp4 oPos.w, v0, c3 // c3 = fourth row of transposed world/view/proj-matrix.
// transfer position to pixel shader
mov oT0, v0 // We pass the vertex position to the pixel shader as tex coord.


So even though I did not understand the shader, I decided to use the transposed version of the matrices.

And that is the problem… for any other information, just ask!

0
101 Sep 19, 2008 at 18:51

well, unfortunately I’m not good at reading assembly language =/

I looked at that article… and this is ****ing ridiculous. I’ve never understood why the people who write those articles have to take the simplest tasks and stretch them out into the most impossible-to-understand gibberish.

Personally, I would just forget the article and start over in your own words, because they are making it WAY more complicated to take in than it needs to be.

Here are some ground rules for rendering water, and this is the way I do it:

1. Render the water plane normally: no weird transformations, no “transposing”, no inverting, etc. Just render it like you do everything else.

2. In the vertex shader, add the semantic that I did (the position2 variable) and make it equal to the final output of position1 (after all matrix transformations).

Now, this should give you just a flat, normal water plane.

3. In the pixel shader, use the position2 parameter in the equation described to calculate the ViewTexC vector. These are the texture coordinates that you will use to sample the refraction image (the copied back buffer).
note: if you’re wondering why you need a second parameter for position2, I actually have no idea. The shader simply won’t compile if you use the output position.

4. Now the water should become “invisible”. If it’s not invisible, you did something wrong. The image will be perfectly projected onto the plane, so it should look like a window. To make the plane visible again, darken the colors a slight bit (or brighten, if you’re doing HDR; totally up to you).

5. There are lots of ways you can make realistic water. Personally, I stretch a normal map across the plane and modify the ViewTexC coordinates by those and fresnel effects.
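The steps above can be sketched as a minimal D3D10-style effect. This is my own rough illustration, not code from the thread: the names WorldViewProj and SceneTexture, and the 0.8 darkening factor, are placeholders.

```hlsl
// Minimal sketch of steps 1-4 (hypothetical names, untested).
float4x4  WorldViewProj;      // combined world * view * projection
Texture2D SceneTexture;       // the copied back buffer (refraction image)
SamplerState LinearSampler { Filter = MIN_MAG_MIP_LINEAR; };

struct VS_OUT
{
    float4 Pos  : SV_POSITION;
    float4 Pos2 : TEXCOORD0;  // step 2: copy of the projected position
};

VS_OUT vs_water(float4 pos : POSITION)
{
    VS_OUT o;
    o.Pos  = mul(pos, WorldViewProj); // step 1: render the plane normally
    o.Pos2 = o.Pos;                   // step 2: duplicate for the pixel shader
    return o;
}

float4 ps_water(VS_OUT i) : SV_TARGET
{
    // step 3: projective texture coordinates from the duplicated position
    float2 uv = 0.5f * i.Pos2.xy / i.Pos2.w + 0.5f;
    uv.y = 1.0f - uv.y;
    // step 4: darken slightly so the "window" reads as water
    return SceneTexture.Sample(LinearSampler, uv) * 0.8f;
}
```

Without the * 0.8f the plane samples exactly what is behind it and disappears, which is the “invisible water” check described in step 4.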

0
101 Sep 19, 2008 at 20:37

@starstutter

well, unfortunately I’m not good at reading assembly language =/

Me neither. I really do not understand anything in that language.

I looked at that article… and this is ****ing ridiculous. I’ve never understood why the people who write those articles have to take the simplest tasks and stretch them out into the most impossible-to-understand gibberish. Personally, I would just forget the article and start over in your own words, because they are making it WAY more complicated to take in than it needs to be.

Yes, I noticed. The same thing happens when reading the GPU Gems articles… they are considered the Bible of graphics programming, but in my opinion people who write articles should choose their words better… anyway…

1. Render the water plane normally: no weird transformations, no “transposing”, no inverting, etc. Just render it like you do everything else.

OK, even if I would really like to understand the reason for all those things…

2. In the vertex shader, add the semantic that I did (the position2 variable) and make it equal to the final output of position1 (after all matrix transformations).

Now, this should give you just a flat, normal water plane.

3. In the pixel shader, use the position2 parameter in the equation described to calculate the ViewTexC vector. These are the texture coordinates that you will use to sample the refraction image (the copied back buffer).
note: if you’re wondering why you need a second parameter for position2, I actually have no idea. The shader simply won’t compile if you use the output position.

Done.
I know about Position being an unusable value… maybe because it’s a system-value semantic (: SV_POSITION).

4. Now the water should become “invisible”. If it’s not invisible, you did something wrong. The image will be perfectly projected onto the plane, so it should look like a window. To make the plane visible again, darken the colors a slight bit (or brighten, if you’re doing HDR; totally up to you).

Yes, it works, but I would like to better understand why I have to use that particular equation:

float2 ViewTexC = 0.5 * input.Pos2.xy / input.Pos2.w + float2( 0.5, 0.5 ); // what, when, why?
ViewTexC.y  =   1.0f - ViewTexC.y; // OK, I understand this one


I understand only the second line (because texture space is different from projection space, OK).
But the first line?

5. There are lots of ways you can make realistic water. Personally, I stretch a normal map across the plane and modify the ViewTexC coordinates by those and fresnel effects.

0
165 Sep 19, 2008 at 22:10

In that code snippet you posted, what’s going on is pos2.xy / pos2.w is the perspective projection: dividing eye space x and y by eye space depth, stored in w, giving you screen space x and y. That gives you x and y both in the range of -1 to 1, but for a texture lookup you want them to be in the range of 0 to 1 instead. So, you multiply by 0.5 and add 0.5 to each component and that remaps them from [-1, 1] to [0, 1].

0
101 Sep 20, 2008 at 02:47

@XVincentX

Well, unfortunately I never did find a resource that didn’t have the same problems as all the other articles (that is, being overly and unnecessarily complicated). Note that I have not quite completed my algorithm yet, so this is simply a rough model to follow:

1. First, just something to keep in mind: the water plane is essentially an object, so treat it as such. You can apply normal mapping to it as you would to any other object; this is the first step.

2. Use the normals to distort the image. I currently use a VERY rough approximation just to get the job done; I plan on modifying this later:

ViewTexC += (normals.xy * (1-VdotN) ) * .001;

or something like that… really, you’ll need to just play around with it.

3. Now for some primitive reflection. First off: lights. Obviously for this we would use a specular component (Phong shading). Raise the power very high and adjust the intensity accordingly. The normal maps should be enough to provide very good lighting detail. As you can probably suspect, water doesn’t really need a diffuse component unless it’s really… nasty water… but that’s kind of a different subject.

4. Now for world reflections. This early on, I would stick to cube maps. Just environment-map it like you would anything else, and add a fresnel term ( pow(1-VdotN, 10) ). Tweak it to your liking.

Now, doing realtime, local reflections is not as advanced a topic as you might think. In essence you just flip the scene upside down and render it, along with some other steps, but it’s surprisingly hard to implement because of the many complications. So local reflections are awesome, but for now just stick to cube mapping and see where you get.
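Steps 3-4 above (high-power specular plus a fresnel-weighted cube-map lookup) could be sketched as one helper function. All names here are my own placeholders; the powers 256 and 10 are the kind of values the post suggests, not fixed constants.

```hlsl
// Rough sketch of the specular + cube-map reflection described above.
TextureCube EnvMap;
SamplerState LinearSampler { Filter = MIN_MAG_MIP_LINEAR; };

// N: unit surface normal, V: unit vector from surface toward the eye,
// L: unit vector from surface toward the light.
float3 water_reflection(float3 N, float3 V, float3 L, float3 lightColor)
{
    // high-power Phong specular (step 3); raise/lower 256 to taste
    float3 R    = reflect(-L, N);
    float  spec = pow(saturate(dot(R, V)), 256.0f);

    // cube-map environment reflection weighted by a fresnel term (step 4)
    float  fresnel = pow(1.0f - saturate(dot(V, N)), 10.0f);
    float3 env     = EnvMap.Sample(LinearSampler, reflect(-V, N)).rgb;

    return lightColor * spec + env * fresnel;
}
```

The fresnel weight makes the reflection strong at grazing angles and weak when looking straight down, which is the behavior real water has.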

0
101 Sep 20, 2008 at 03:03

I figured I would show you what this algorithm gives you. Honestly, I don’t remember entirely if this is what it was, because these pics are like… a year old. The first shows more refraction and the second shows better realtime reflection.

0
101 Sep 20, 2008 at 07:44

@Reedbeta

In that code snippet you posted, what’s going on is pos2.xy / pos2.w is the perspective projection: dividing eye space x and y by eye space depth, stored in w, giving you screen space x and y. That gives you x and y both in the range of -1 to 1, but for a texture lookup you want them to be in the range of 0 to 1 instead. So, you multiply by 0.5 and add 0.5 to each component and that remaps them from [-1, 1] to [0, 1].

1) I still don’t understand… after the three matrix multiplications, shouldn’t I already have the position in perspective space? Why do I have to divide by pos.w?
2) Can’t I use saturate or clamp instead of adding 0.5? (Just to understand.)
3) What a nice rendering! I would like to achieve the same result!!!
4) What can you tell me about water movement?
5) I would like to avoid using a cube map; anyway, today I will try to write some more code. Thank you again!

0
104 Sep 20, 2008 at 09:23

just learn environment-mapped normal maps, then animate ’em, simply said.

that’s how you do water like an old goon. like starstutter before he was old!

but a real goon sets up a grid of springs that only move up and down but pass
energy to each other, then you just stick specular on the vertices!

then you get real ponds when things react upon it, and you don’t need any
silhouette super parallax mapping.

0
101 Sep 20, 2008 at 14:35

@XVincentX

1) I still don’t understand… after the three matrix multiplications, shouldn’t I already have the position in perspective space? Why do I have to divide by pos.w?

I believe it’s because the X and Y coords output by the vertex shader are not divided by W yet, so they still have extremely high values. I think the W divide is done automatically by the system. I could be wrong though; that’s just a guess.

2) Can’t I use saturate or clamp instead of adding 0.5? (Just to understand.)

Saturate or clamp only restricts the values to 0-1; it doesn’t move the coords. Doing that would cut off half the screen and give you a weird “line” effect.

Anyway, really don’t worry about why you need it. I didn’t understand what it did until about a month after I started using it; I just knew that I needed it for things to work right. I think one mistake a lot of people make is assuming they need to understand everything about a method to use it properly. My experience has been that that’s a waste of time that just keeps you from your work. It’s good to understand what things do, but that knowledge will come to you as you continue working on it, and dwelling keeps you from moving on.

3) What a nice rendering! I would like to achieve the same result!!!

Thanks! :)

4) What can you tell me about water movement?

That’s one I haven’t quite gotten yet. A while ago I implemented two techniques that I decided to use in parallel for level of detail.

One method was using an off-screen render target and drawing dynamic ripples onto it in the form of normal maps. With a little simple algebra it was possible to convert objects’ world-space positions into positions over the pool of water. From that I just treated the ripples like a particle system and emitted them from the bodies of the characters. The result was pretty damn cool.

It wasn’t convincing at steep angles, though, so up close I converted the water to a higher-tessellation mesh and used a mass-spring system. I also experimented with using texture reads in the vertex shader to animate it, but this was not very cost-effective, as the tessellation had to be impractically high to get visually pleasing results. I admit, though, I never finished this implementation.

5) I would like to avoid using a cube map; anyway, today I will try to write some more code. Thank you again!

Well, for things like small puddles you don’t have much choice, because realtime reflections would be totally impractical for those. One thing you should consider is combining local reflections and cube maps.

You can use local reflections for moving objects and cube maps for the environment around them. The technique is pretty effective and cheap; it’s currently being used by Mirror’s Edge, I believe.

0
165 Sep 20, 2008 at 18:19

@starstutter

I belive its because the X and Y coords output by the vertex shader are not divided by W yet, so they still have extremley high values. I think the W divide is done automaticly by the system. I could be wrong though. That’s just a guess.

If you use a projective texture function like tex2Dproj (TXP in assembly, I think), it will do the divide by w for you. However, there’s then a minor complication, because you still have to do the [-1, 1] to [0, 1] remapping. tex2Dproj does *not* do this remapping after it divides by w and before looking up in the texture, so you have to tweak the coordinates slightly before you send them to tex2Dproj. It’s not hard, and it can in fact be rolled into the projection matrix so it doesn’t need a shader calculation at all.
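The matrix fold-in mentioned at the end can be written as one extra scale/bias matrix appended after the projection. This is a sketch under the row-vector mul(v, M) convention used elsewhere in the thread; TexScaleBias is my name for it.

```hlsl
// Scale/bias matrix: after the divide by w, x and y land in [0, 1]
// with y flipped, so no per-pixel remap is needed.
static const float4x4 TexScaleBias =
{
    0.5f,  0.0f, 0.0f, 0.0f,
    0.0f, -0.5f, 0.0f, 0.0f,
    0.0f,  0.0f, 1.0f, 0.0f,
    0.5f,  0.5f, 0.0f, 1.0f
};

// In the vertex shader (projPos = position after world/view/proj):
//   output.Pos2 = mul(projPos, TexScaleBias);
// In the pixel shader the whole remap collapses to one divide:
//   float2 uv = input.Pos2.xy / input.Pos2.w;
```

Checking one component: x' = 0.5*x + 0.5*w, so x'/w = 0.5*(x/w) + 0.5, which is exactly the remap from the earlier posts; the -0.5 on y also absorbs the 1 - y flip.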

0
101 Sep 21, 2008 at 10:00

Oh man, I’m very sorry, but I can’t understand all of your sentences, even with a dictionary!! Sorry if I repeat the same things sometimes.

So, let’s make a summary.

1) About the water movement, I found some shader code that maybe can work (I have not tested it yet):

//---------------------------------------------
// Blending States
//---------------------------------------------
BlendState NoBlending
{
BlendEnable[0] = FALSE;
};

//---------------------------------------------
// Buffer Variables
//---------------------------------------------
Texture2D txDiffuse;
SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
};

// this lot are set in application
matrix World;
matrix ViewProjection;

float4 vLightDir;
float4 vLightColour;

float time_0_2PI; // cycle through 2PI using, say, a step size of 0.0025f every frame update
float rippleSpeed; // 20.0f is good
float distortion; // 0.0001f just a little

//----------------------------------------
struct VS_INPUT
{
float4 Pos : POSITION;
float3 Norm : NORMAL;
float2 Tex : TEXCOORD0;

};

struct PS_INPUT
{
float4 Pos : SV_POSITION; //system variable - homogeneous projection space
float3 Norm : TEXCOORD1;
float2 Tex : TEXCOORD0;

};

//--------------------------------------------
//--------------------------------------------
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;

// compute distortion
input.Pos = input.Pos + distortion * sin(dot(input.Pos.xy, input.Pos.xy) + rippleSpeed * time_0_2PI) +
distortion * sin(dot(input.Pos.xz, input.Pos.xz) + rippleSpeed * time_0_2PI) +
distortion * sin(dot(input.Pos.yz, input.Pos.yz) + rippleSpeed * time_0_2PI);

output.Pos = mul( input.Pos, World );
output.Pos = mul( output.Pos, ViewProjection );

//Move the incoming normal into world space
output.Norm = mul( input.Norm, World );

output.Tex = input.Tex;
return output;
}

//--------------------------------------------
//--------------------------------------------
float4 PS( PS_INPUT input ) : SV_Target
{

float4 finalColour = 0;

//do NdotL lighting
finalColour += saturate( dot( (float3)vLightDir,input.Norm) * vLightColour );

finalColour.a = 1;

// modulate against diffuse texture color
return (txDiffuse.Sample( samLinear, input.Tex )) * finalColour;

}

//----------------------------------------------
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
SetBlendState( NoBlending, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF );
}
}


Your solution seems good, but also expensive… or not?

2) I understand the texcoord question now. OK.

3) Leaving the movement aside, let’s continue with the water effect.
Following the article, how can I implement the light absorption?

0
101 Sep 21, 2008 at 14:32

Honestly, I’m not really sure what that shader code is supposed to do… The only thing it’s really doing is making waves in the water mesh; it’s not doing image distortion or anything.

Actually, the method’s really not expensive at all. It’s just a few instructions more than regular lighting. The screenshots you saw (if I remember right) ran at about 240 fps on a GeForce 7900 GS. There’s really nothing expensive about it.

Following the article, how can i make the light absorption?

What do you mean by absorption? Do you mean having the water be cloudy or dirty?

I’m going to assume that’s it for the moment (correct me if I’m wrong), but this involves a few more steps, and it can get more expensive. That said, computing the fog effect itself is very simple. The code is literally:

occlusion = saturate(input.Pos2.z - sceneDepth) * 0.001;
color *= occlusion;

The complicated part here is “sceneDepth”. The only thing you need to know to calculate the fog is the amount of water between your eye and the surface you’re looking at. For this you need two pieces of info:

distance from eye to water (which is ->) input.Pos2.z
distance from eye to surface (which is ->) sceneDepth

Getting the scene depth is a bit complicated, though. What you need to do is draw the scene onto a render target in the D3DFMT_R32F format. You’re going to have to create a new shader as well. The vertex shader for this pass is just a normal world/view/projection transformation; in other words, draw as you normally would, but forget about normals and texture coordinates. You also need to include the Pos2 variable, as with the water.

The pixel shader for it is one line:
color.x = input.Pos2.z;

Now, once you’ve drawn this info onto the render target, use it as a texture in the water shader. The sceneDepth variable is filled by:

sceneDepth = txDiffuse.Sample( DepthTex, ViewTexC )

Now, keep in mind there is a feature new to DirectX 10 that lets you access depth information in the shader, but according to NVIDIA it disables early-z culling, and that could cut your framerate in half.
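Putting the pieces above together, the two passes might look roughly like this. This is my own sketch, not code from the post: the names DepthMap, PointSampler, and apply_water_fog are placeholders, and the sign convention and 0.001 scale will need tuning for your depth range (the snippet quoted earlier uses a slightly different form).

```hlsl
// Pass 1: write view-ray depth of the underwater geometry to an R32F target.
float4 ps_depth(float4 pos : SV_POSITION, float4 pos2 : TEXCOORD0) : SV_TARGET
{
    return pos2.zzzz; // depth stored in the red channel of the R32F target
}

// Water pass: estimate how much water the view ray crosses and darken.
Texture2D DepthMap;
SamplerState PointSampler { Filter = MIN_MAG_MIP_POINT; };

float3 apply_water_fog(float3 color, float4 pos2, float2 viewTexC)
{
    // sceneDepth: eye-to-seabed distance, pos2.z: eye-to-water distance
    float sceneDepth = DepthMap.Sample(PointSampler, viewTexC).r;
    float thickness  = saturate((sceneDepth - pos2.z) * 0.001f);
    return color * (1.0f - thickness); // more water in between -> darker
}
```

The difference of the two depths is the thickness of the water along the view ray, which is exactly the quantity the fog needs.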

0
101 Sep 21, 2008 at 17:17

I’m really getting confused between your ideas (which are very good) and the ShaderX article.

About accessing Z in the shader: in D3D10 the depth-stencil surface is managed as a texture, and if it was created with the D3D10_BIND_SHADER_RESOURCE bind option, it can be passed to a shader like a simple texture and read there.
So I may go that way.

I now have the complete “glass effect” in my scene. Now, can you please make a todo list for proceeding with the effect?

0
101 Sep 21, 2008 at 19:54

hmmm, well, I would worry about basic distortion first.

1. get a normal map with the image of a wave (or ripple) pattern

2. use the texture coordinates from the mesh in the shader (the incoming UVs, not the screen projection). In other words, place the normal map on it just like any ordinary texture on any ordinary object. No special steps here.

3. modify the image (ViewTexC) coordinates so that the refraction in the water becomes wavy. Modify them (of course) before you use them for the image lookup.

4. Animate the water. You do this by simply declaring a new float at the top and using it as a timer. Outside the shader you increment it a little each frame and then call the “SetFloat” function. In the shader, just modify the UV coordinates with it:
normal = tex2D(normalMap, input.uv + modify.xy);

this will give you a slow, steady flow that will make you go “woah…” the first time you see it. :)
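The four steps above could combine into a single pixel shader along these lines. Again this is a hedged sketch with placeholder names (Time, NormalMap, SceneTexture) and an assumed 0.01 distortion scale, written D3D10-style rather than with tex2D:

```hlsl
// Sketch of steps 1-4: scrolled normal map perturbing the refraction lookup.
Texture2D NormalMap;     // step 1: wave/ripple pattern
Texture2D SceneTexture;  // copied back buffer (refraction image)
SamplerState LinearSampler { Filter = MIN_MAG_MIP_LINEAR; };
float Time;              // incremented a little each frame on the CPU

float4 ps_wavy_water(float2 uv   : TEXCOORD0,
                     float4 pos2 : TEXCOORD1) : SV_TARGET
{
    // step 4: scroll the normal map over the mesh UVs (step 2)
    float3 n = NormalMap.Sample(LinearSampler, uv + Time).xyz * 2.0f - 1.0f;

    // projective coordinates as before
    float2 viewTexC = 0.5f * pos2.xy / pos2.w + 0.5f;
    viewTexC.y = 1.0f - viewTexC.y;

    // step 3: perturb the lookup so the refraction becomes wavy
    viewTexC += n.xy * 0.01f; // scale to taste

    return SceneTexture.Sample(LinearSampler, viewTexC);
}
```

The * 2 - 1 unpacks the normal map from [0, 1] storage back to a signed [-1, 1] vector before it is used as an offset.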

0
101 Sep 22, 2008 at 11:15

float4x4    ViewMatrix;
float4x4    WorldMatrix;
float4x4    ProjMatrix;
float4      LightDirection;
float4      MaterialColor;
float3      EyePosition;
texture2D   Texture;
texture2D   CurrentBackbuffer;
texture2D   NormalMap;

SamplerState SamplText
{
Filter = MIN_MAG_MIP_LINEAR;
};

struct VS_INPUT
{
float4  Pos :   POSITION;
float3  Tex :   TEXCOORD;
float3  Nor :   NORMAL;
};

struct PS_INPUT
{
float4  Pos :   SV_POSITION;
float4  WorldPos:   WORLDPOSITION;
float3  Tex :   TEXCOORD;
float3  Nor :   NORMAL;
};

PS_INPUT vs_main(VS_INPUT Input)
{
PS_INPUT Out = (PS_INPUT)0;
Out.Pos = mul(mul(mul(Input.Pos,WorldMatrix),ViewMatrix),ProjMatrix);
Out.WorldPos = mul(Input.Pos,WorldMatrix);
Out.Tex = Input.Tex;
Out.Nor = mul(Input.Nor,WorldMatrix);
return Out;
}

PS_INPUT vs_plane(VS_INPUT input)
{
PS_INPUT output = (PS_INPUT)0;

output.Pos = mul(input.Pos,mul(mul(WorldMatrix,ViewMatrix),ProjMatrix));
output.Nor = mul( input.Nor, WorldMatrix );
output.Tex = input.Tex;
output.WorldPos = output.Pos;

return output;
}

float4 ps_plane(PS_INPUT input) :   SV_TARGET
{
float2 ViewTexC = 0.5 * input.WorldPos.xy / input.WorldPos.w + float2( 0.5, 0.5 );
ViewTexC.y  =   1.0f - ViewTexC.y;
return NormalMap.Sample(SamplText,input.Tex.xy + EyePosition.xy) * CurrentBackbuffer.Sample(SamplText,ViewTexC);

}

float4 ps_main(PS_INPUT In) :   SV_TARGET
{
return Texture.Sample(SamplText,In.Tex);
}

technique10 WaterEffect
{
Pass PositionTextured
{
SetVertexShader( CompileShader( vs_4_0, vs_main() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, ps_main() ) );
}

Pass WaterMovement
{
SetVertexShader( CompileShader( vs_4_0, vs_plane() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, ps_plane() ) );
}
}


And it works! Here is the result.

It’s not bad for a basic water effect, even if the water looks… so… dirty? Maybe it’s a normal-map problem?
Anyway, with your help, I would like to go further and reach better results. The first issue I can see is that not all of the surface is watered… here is an example.

So?
Thank you again

0
101 Sep 22, 2008 at 19:41

The first issue I can see is that not all of the surface is watered

Ummm, how much experience do you have in graphics programming? I think we need to get on the same page here.

It’s not bad for a basic water effect, even if the water looks… so… dirty? Maybe it’s a normal-map problem?

You’re outputting the normal map’s color, not the distorted image ;). So, as the question above suggests: do you know how to do normal mapping (tangent-space normal mapping)? If not, that’s fine, there are other options, but stretching a normal map over a polygon is not normal mapping.

0
101 Sep 22, 2008 at 20:18

Ummm, how much experience do you have in graphics programming? I think we need to get on the same page here.

Not so much! I’ve been using Direct3D 10 for less than a year, and I used D3D9 many years ago, so I’ve forgotten a lot of concepts. What about this one, for example?

You’re outputting the normal map’s color, not the distorted image. So, as the question above suggests: do you know how to do normal mapping (tangent-space normal mapping)? If not, that’s fine, there are other options, but stretching a normal map over a polygon is not normal mapping.

Normal mapping… if I remember well, normal mapping involved a particular multiplication with the tangent, normal and binormal, even if I no longer remember why… I did it some time ago…
So, do we have to stop for now?

0
101 Sep 22, 2008 at 20:26

Well, no, I was just wondering if you got the concept. It would be good if you did, but it’s not completely necessary for creating water. Basically, you just need to make a small modification: don’t output the normal’s color to the screen, but instead use the normal’s colors to shift the ViewTexC coordinates. Then do the texture lookup of the screen image (refraction) and output those colors to the screen.

0
101 Sep 22, 2008 at 20:36

Thank you very much for all your help.
What do you mean by “shift”? The dictionary does not help me! Can you explain it better?

And what about the first problem?

0
101 Sep 22, 2008 at 20:40

right before the screen image texture lookup, do this:
ViewTexC += normalMap.rg * .0001;

scale the .0001 as you need.

As for the first problem, I can’t say anything except that there isn’t one; it’s functioning as it should. Can you tell me what you’re trying to get it to do?

0
101 Sep 22, 2008 at 20:46

@starstutter

As for the first problem, I can’t say anything except that there isn’t one; it’s functioning as it should. Can you tell me what you’re trying to get it to do?

Oh yes, you’re right. To have a complete pool of water I must create decent geometry, like a cube. I’m really going crazy; I should take a break :D

I will try your new code (now I understand what you mean by “shift”, damn dictionary… you meant it like the bit shift in C :) ), but I have not understood why I have to do this… the concept behind it!

0
101 Sep 22, 2008 at 20:53

@XVincentX

I’m really going crazy; I should take a break :D

Probably a good idea. When I get stuck on a problem, I just walk away and think of the answer like 15 minutes later. Kind of a Daoist work ethic; it’s cool :lol:

damn dictionary… you meant it like the bit shift in C :)

HA! That’s dictionaries for ya! XD

0
101 Sep 22, 2008 at 21:01

Yes, I will try it all tomorrow.
Now, if you want, I would like to talk about your two images and the next steps for the water effect.

Image questions:
a) D3D9 or D3D10? (D3D9, I suppose)
b) Are you using antialiasing? Anisotropic texture filtering?
c) How many polygons are there in that scene?
d) Max and average framerate?

Water questions:
I have read a little about the physics that governs water and how it all works, so:
a) Refraction: how do I simulate it? I thought of adding a component computed somehow from the different refraction indices… (1.0003 for air, 1.3333 for water)

b) Reflection: via cube map? I will study it.
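For what it’s worth, the physically based refraction hinted at in question a) is exactly what HLSL’s refract intrinsic computes from the ratio of those two indices. A tiny sketch for comparison only (the thread ends up recommending simple distortion instead):

```hlsl
// Physically based refraction direction via Snell's law.
// eta is the ratio of the indices quoted above (air -> water).
static const float eta = 1.0003f / 1.3333f;

float3 refracted_view(float3 V, float3 N)
{
    // V: unit vector from the eye toward the surface, N: unit surface normal.
    // refract() returns the bent ray entering the water; it returns 0 on
    // total internal reflection, which cannot happen going air -> water.
    return refract(V, N, eta);
}
```

You could trace this bent ray to decide what the water shows instead of offsetting screen coordinates, but that is far more expensive than the distortion trick used so far.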

0
165 Sep 22, 2008 at 21:03

@XVincentX

i must create a decent geometry, like a cube.

Rather than a cube you should just create a flat water surface that extends all the way to where the water meets the ground.

I have not understood why I have to do this… the concept behind it!

The idea is that instead of looking at the geometry under the water at the same point where it would be if the water weren’t there, you look at the geometry at a slightly different point. You move the point by a distance that depends on the water’s normal. It produces a distorted image of what’s under the water. It’s not really refraction, but it can still look rather good.

0
101 Sep 22, 2008 at 21:06

@Reedbeta

Rather than a cube you should just create a flat water surface that extends all the way to where the water meets the ground.

And what if I want to go underwater?

0
165 Sep 22, 2008 at 21:38

Underwater needs a different solution.

Although you can use this same technique to render the water surface as seen from below, distorting the view of objects above the water.

0
101 Sep 22, 2008 at 21:42

OK! I do not need to go underwater for now :)

0
101 Sep 22, 2008 at 23:36

@XVincentX

a) D3D9 or D3D10? (d3d9, i suppose)

D3D9, but understand that besides new features, there’s not an incredible amount of difference between the two. Not nearly as different as Microsoft would have you believe.

b) Are you using antialiasing? Anisotropic texture filter?

Antialiasing, no. My engine is a deferred renderer (well, it wasn’t back then, but still) and that is incompatible with antialiasing. There are, however, post-process solutions that work pretty well on modern hardware, and they are technically free if you program in depth-of-field effects.

Anisotropic, yes. Why wouldn’t you? It makes surfaces look a lot better and it’s a one word command in the shader declarations :)

c) How many polygons are there in that scene?

Not many; the left one probably had about 140 or so, and the right had around 600-700, but that was mostly the teapot mesh.

d) Max and normal framerate?

For the left image, max framerate was when looking straight down into the water. If I remember right it came up to 300 or so, and the average was around 235-240.

The right image was using a higher polycount and an HDR environment so that drained it some. I think it was somewhere around 125-140 fps

Keep in mind, however, that these framerates were recorded in windowed mode, which is substantially slower than fullscreen (in certain cases I see 25% speed increases).

Water questions:
I read a little bit about the physics that governs water and how it all works, so
a) Refraction: how to simulate it? I thought of adding a component, using the different refraction indices in some way… (1.0003 for air, 1.3333 for water)

You’ve pretty much already got refraction. Physically based approaches are impractical for modern hardware, and some of the most recent games (like BioShock) used nothing but a simple distortion, and it’s hard to argue with those guys about how to do water :)

Reflection: Via cubemap? i will study it

That’s the way I would go first.

0
101 Sep 23, 2008 at 06:29

The situation is starting to get better; with your new formula, the water looks much better.

These images use this code:

float4 ps_plane(PS_INPUT input) : SV_TARGET
{
    // project clip-space position into [0, 1] texture space
    float2 ViewTexC = 0.5 * input.WorldPos.xy / input.WorldPos.w + float2(0.5, 0.5);
    ViewTexC.y = 1.0f - ViewTexC.y;
    // distort by the normal map
    ViewTexC += NormalMap.Sample(SamplText, input.Tex).xy * 0.9f;
    return CurrentBackbuffer.Sample(SamplText, ViewTexC + EyePosition.xy);
}


However, the water movement makes the texture constantly drift in a single direction, so I modified the code to stay within a range with small oscillations.

Anyway, the water seems too rich in mini-waves. I’ve tried playing with the .0001 value, but no result satisfies me. Maybe it’s a normal map problem?

EDIT:

This code seems to work much better:

float4 ps_plane(PS_INPUT input) : SV_TARGET
{
    // project clip-space position into [0, 1] texture space
    float2 ViewTexC = 0.5 * input.WorldPos.xy / input.WorldPos.w + float2(0.5, 0.5);
    ViewTexC.y = 1.0f - ViewTexC.y;
    // distort by the normal map, sampled with the projected coords
    ViewTexC += NormalMap.Sample(SamplText, ViewTexC).xy * 0.06f;
    return CurrentBackbuffer.Sample(SamplText, ViewTexC + EyePosition.xy);
}
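For reference, the first two lines of the shader perform a clip-space to texture-space mapping; a CPU-side sketch of the same math (Python, hypothetical helper):

```python
def clip_to_texcoord(x, y, w):
    """Map a clip-space position to [0, 1] texture coordinates:
    perspective-divide by w, rescale [-1, 1] to [0, 1], and flip
    y because texture coordinates grow downward."""
    u = 0.5 * (x / w) + 0.5
    v = 0.5 * (y / w) + 0.5
    return u, 1.0 - v
```

The center of the screen lands on the center of the texture, and the top-left clip corner (-1, 1) lands on texel (0, 0).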

0
101 Sep 23, 2008 at 14:20

Hmmm, ImageShack seems to be down.

0
101 Sep 23, 2008 at 14:47

Doh!

Let’s wait a while, otherwise I will upload it again.

0
101 Sep 23, 2008 at 17:06

Oh nice, that looks good! There’s only one problem, though: the overall image shift is way too powerful. It’s shifting the image incorrectly and making the entire thing move to another position. There should only be a small section rippling for that specific wave.

Make sure you’re only modifying the image using the r and g channels of the texture.

EDIT:
actually, on second thought, try an image bias:

ViewTexC += (NormalMap.Sample(SamplText,ViewTexC).xy * 0.06f) - bias;

I can’t tell you the specific value of the bias, because that depends entirely on your code. I would start at 0.1 and work your way either up or down (depending on which gets it closer) and just guess until the image is aligned properly. Note that the coordinates may not be at the same scale, which means the x and y shifts could react differently to different values. So make the bias a float2(±0.1, ±0.1) and see whether positive or negative numbers for either give you the right outcome. You may not have to do this at all, though. Start with just positive values and see if that does the trick.
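One way to see why a bias helps (not starstutter’s exact code, just an illustration): normal-map channels are stored in [0, 1], so a flat surface encodes (0.5, 0.5) and shifts the whole image by a constant unless that neutral value is removed:

```python
def distortion_offset(nx, ny, strength=0.06, bias=0.5):
    """Subtract the normal map's neutral value (0.5 per channel)
    before scaling, so a perfectly flat surface produces zero
    offset instead of a constant image shift."""
    return ((nx - bias) * strength, (ny - bias) * strength)
```

Subtracting the bias before scaling is equivalent to starstutter’s post-scale subtraction with the bias rescaled by the strength factor.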

EDIT AGAIN:
Wait, what’s with the “+ EyePosition.xy” thing? That’s probably what’s messing the shift up. You should remove it, as this is not yet a view-dependent calculation.

0
101 Sep 23, 2008 at 18:41

No, EyePosition is only a name; it contains the float that I increment each rendered frame.
I was forced to use it because D3D10 has a horrible shader system:

float4x4    ViewMatrix;
float4x4    WorldMatrix;
float4x4    ProjMatrix;
float4      LightDirection;
float4      MaterialColor;
float3      EyePosition;
texture2D   Texture;
texture2D   CurrentBackbuffer;
texture2D   NormalMap;


OK, it works as is. But if I merely add a float, like this:

float4x4    ViewMatrix;
float4x4    WorldMatrix;
float4x4    ProjMatrix;
float4      LightDirection;
float4      MaterialColor;
float3      EyePosition;
texture2D   Texture;
texture2D   CurrentBackbuffer;
texture2D   NormalMap;
float          value;


D3D10 gives me warnings because there are unbound resources… so to make it work I’m forced to shuffle the variable positions looking for the right order, and this is really bad.

Anyway, the D3D10 effect system has GetParameterByIndex and GetParameterByName, but ByName NEVER works… and that’s it.

I will try the bias value soon.

0
101 Sep 23, 2008 at 23:36

Well… I still don’t understand why you need to add a time-based variable to the view coordinates… I could understand adding it to the normal-map texture lookup to animate waves, but not to the view coordinate itself.

0
101 Sep 24, 2008 at 06:17

Anyway, I tried to use the bias value in this way:

float4 ps_plane(PS_INPUT input) : SV_TARGET
{
    float2 ViewTexC = 0.5 * input.WorldPos.xy / input.WorldPos.w + float2(0.5, 0.5);
    ViewTexC.y = 1.0f - ViewTexC.y;
    ViewTexC += ((NormalMap.Sample(SamplText, ViewTexC).xy + EyePosition.xy) * 0.06f) - float2(-0.1, -0.1);
    return CurrentBackbuffer.Sample(SamplText, ViewTexC);
}


And all I can see is a texture movement on the surface. I tried some combinations: ++, +-, -+, -- and also higher values: with 0.5 the texture disappears and the surface becomes blue (which is the clear color of the render target).

0
101 Sep 24, 2008 at 06:19

From my experience, the distortion values should never be higher than 0.04

0
101 Sep 24, 2008 at 06:27

With 0.05 I can’t see any change…

Without Bias:

With

0
101 Sep 25, 2008 at 14:10

Anyone here?

0
101 Sep 26, 2008 at 03:43

err, sorry, but I’m not seeing your problem. The water looks fine to me.

0
101 Sep 26, 2008 at 05:50

Ah, OK.
But what now? Is it finished? What about reflection? How do I give it the typical sea color?

Anyway, the same effect with simple Lambert illumination:

0
101 Sep 30, 2008 at 06:00

ehiiiiiiiiiii

0
101 Sep 30, 2008 at 08:04

Btw … if you hit alt-printscreen it’ll only copy the active window to the clipboard …

0
101 Sep 30, 2008 at 17:05

Thank you, I will remember that.

0
101 Oct 01, 2008 at 17:04

@XVincentX

But now? Is it finished?

Really, it’s finished whenever you want it to be; this is sufficient for now, in my opinion :)

I’d suggest finding a tutorial on cube mapping. Maybe have a look at the SDK sample.

How to give it the tipical sea color?

Well, I mentioned that technique above in another post (talking about depth maps), but a quick and dirty way to do it is to just multiply by the following:
FinalColor *= float4(0.5, 0.5, 1, 1);

0
101 Oct 02, 2008 at 19:35

I already made a cubemap example, but I do not know how to use it in this case with the water!

0
101 Oct 02, 2008 at 19:46

@XVincentX

I already made a cubemap example, but I do not know how to use it in this case with the water!

There’s no difference. You just do normal cube mapping and add the cubemap color to the output color, but scale it down so it doesn’t overpower the water effect: multiply the reflection color by 0.1 or 0.2.
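A sketch of that combination (Python, per-channel colors in [0, 1]; reflect_strength stands in for the 0.1-0.2 factor mentioned above):

```python
def shade_water(refraction, reflection, reflect_strength=0.15):
    """Add a scaled cube-map reflection color on top of the
    distorted refraction color, clamping each channel to 1."""
    return tuple(min(1.0, rf + rl * reflect_strength)
                 for rf, rl in zip(refraction, reflection))
```

Adding (rather than multiplying) keeps the reflection visible even over dark water, which matches how reflected light behaves.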

0
101 Oct 02, 2008 at 21:42

Sorry, but I have not used D3D in a long time, so…
I make a new empty cubemap.
What do I use it on? On the water grid surface?
What do I have to draw on it?

0
101 Oct 02, 2008 at 23:26

@XVincentX

I make a new empty cubemap.

Yeah that works for now.

What do I use it on? On the water grid surface?

yup… what you’ve been doing everything on.

What do I have to draw on it?

You mean the cube map? Well, you can load one in as a DDS file, and you can probably find one of those, as there are plenty in the SDK. Sooner or later, though, you’re going to have to learn how to render cube maps from the scene, which is effectively putting the camera at a point and rendering the scene for each face of the cube. There are entire articles written on that, so I’m not attempting to get into it right now… that’s a feature you’d want to implement much later on anyway.
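For later reference, rendering a cube map from a point means one 90-degree-FOV camera per face. The six orientations can be tabulated like this (Python; this is one common D3D-style convention, so treat the exact up vectors as an assumption to check against your API):

```python
# look direction and up vector for each cube-map face
CUBE_FACES = {
    "+x": ((1, 0, 0), (0, 1, 0)),
    "-x": ((-1, 0, 0), (0, 1, 0)),
    "+y": ((0, 1, 0), (0, 0, -1)),  # looking straight up: up vector must leave the y axis
    "-y": ((0, -1, 0), (0, 0, 1)),
    "+z": ((0, 0, 1), (0, 1, 0)),
    "-z": ((0, 0, -1), (0, 1, 0)),
}
```

Each face is rendered with a square aspect ratio so the six images together cover the full sphere of directions.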

0
101 Oct 04, 2008 at 12:38

Yes, I already made a dynamic cubemap. I only have to refresh my knowledge!

0
101 Oct 04, 2008 at 18:26

Welp, then all you have left to do is draw the scene to a cubemap and then apply that cubemap to the water just like you would to any other object. The cubemap doesn’t have to be dynamic, btw; that’s still too expensive IMHO.

0
101 Oct 05, 2008 at 09:31

I did not understand why I should not use dynamic cube mapping… how can I generate it for my purpose from the surrounding scene??

0
101 Oct 05, 2008 at 15:41

wait… why on earth would a cube-map *need* to be dynamic :confused:

0
101 Oct 05, 2008 at 17:22

I’m very sorry, but my usual bad English does not let me understand your sentences correctly.

So, let’s start from the beginning again.
I have to make reflection on the water; I can do this through a CubeMap.

So I have to create an empty cube texture and fill it with the data dynamically.
By dynamic, I mean that I will fill it ONE TIME, but at runtime, and not by loading it from a file.

Is this what you meant? Or not?

0
101 Oct 05, 2008 at 20:40

@XVincentX

I have to make reflection on the water; I can do this through a CubeMap.

correct

So I have to create an empty cube texture and fill it with the data dynamically.
By dynamic, I mean that I will fill it ONE TIME, but at runtime, and not by loading it from a file.

OH! Ok, I see what you mean. By dynamic you mean filled at runtime. Yes, that’s what you should do, but not have it updated every frame. Usually when developers say dynamic they mean either rebuilt every frame or over the course of several frames.

Sorry, didn’t understand :)

0
101 Oct 06, 2008 at 19:09

It was my fault.
I will update you soon with new results.

0
101 Oct 08, 2008 at 19:22

These days I was just finding the right camera position to start rendering the cubemap, and I talked about the issue with a friend of mine, who suggested using cubemap reflection only when working with non-planar surfaces (so this should not be my case).

So, reading the ShaderX article, the method suggested by the book is to use a particular matrix that…

Can you give me a clearer idea?

0
101 Oct 09, 2008 at 00:37

@XVincentX

So, reading ShaderX article, to use the method suggested by book: use a particular matrix that…

Well, if you’re talking about an article similar to one I read a while back, forget it; don’t even try to decipher it, because it’s an absolute waste of time. Yes, they describe an effective and simple method, but it takes 8 pages to tell you to just flip the scene upside down! I was so pissed off when I finished reading it, because I was literally taking notes trying to crack the cryptic lingo when I realized that all the complex matrices and equations could be replaced with this:

y * -1 + waterLevel * 2

I’m not sure we’re talking about the same article (I found it on gameDev).

So anyway, [/rant], but it’s really pretty simple to render the scene upside down and get a real-time reflection.

What you essentially do is take your camera and invert the “up” vector, so:
camera.up = (0, -1, 0)

and then move the camera’s position to compensate for the water level:
camera.y = camera.y - water.y*2
(but reset the camera’s position afterwards)

Now keep in mind that this only applies to planar surfaces that are perfectly flat and cannot be at an angle. This is generally OK, though, because how often do you see water in real life that is running downhill *and* that you can see a perfect reflection in?

The algorithm above may be a tiny bit off, so play around with it. Keep in mind, however, that it will not look right at first; you need to implement clipping in the shader to cut the geometry off at the water level. Tell me when you get the flip step down.
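Since the post says the algorithm may be a tiny bit off: the standard mirror of a height through a horizontal plane y = water_y is 2*water_y - camera_y. A hypothetical sketch (Python, not the poster’s code):

```python
def mirror_camera_y(camera_y, water_y):
    """Reflect the camera height through the water plane: the
    mirrored camera sits as far below the surface as the real
    camera sits above it."""
    return 2.0 * water_y - camera_y
```

A camera exactly on the water plane maps to itself, which is a quick sanity check for the formula.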

I would go into further detail, but I’m very exhausted right now… and need food… eat good -_-

0
165 Oct 09, 2008 at 04:39

@starstutter

Now keep in mind that this only applies to planar surfaces that are perfectly flat and cannot be at an angle.

You can of course use a similar technique to reflect the scene through any plane, angled or not.
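Reedbeta’s generalization can be sketched as reflecting points through an arbitrary plane n·x + d = 0 with unit normal n (Python, illustrative only):

```python
def reflect_point(p, n, d):
    """Reflect point p through the plane n.x + d = 0 (n unit length):
    p' = p - 2 * (n.p + d) * n."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) + d
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))
```

With the horizontal plane y = 1 (n = (0, 1, 0), d = -1), a camera at height 5 lands at height -3, matching the flat-water special case above.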

0
101 Oct 09, 2008 at 06:28

@Reedbeta

You can of course use a similar technique to reflect the scene through any plane, angled or not.

I would have suggested that, but I never quite figured it out (never really tried, however).

0
101 Oct 09, 2008 at 18:59

I was following the article in the ShaderX2 book (you can find the link somewhere in this topic).
It should be the same.

Wow, it’s amazing how these books make things so complicated.
Also, in my opinion, GPU Gems raises the difficulty level by about 100% more.

So, in your personal opinion, what’s the right way to make real time reflection on water?

0
101 Oct 09, 2008 at 20:06

@XVincentX

So, in your personal opinion, what’s the right way to make real time reflection on water?

I think that reedbeta’s suggestion is the right way, but the technique I was talking about is the simpler way. If you’re going for a learning experience, I generally try to just get something working and then improve upon it later. Then again it might be worth biting the bullet and doing the more correct method.

0
101 Oct 10, 2008 at 03:42

@XVincentX

Also in my opinion, GPU Gems grows difficulty level about 100% more.

From what I’ve read of GPU Gems, they’re OK; the lingo is not as bad, but they have real problems organizing their articles. I was reading an article on fluid dynamics and trying to develop an algorithm (and eventually did), but it took me forever to figure out what order to put the algorithm’s steps in. They just don’t give you a straight answer and, IMO, assume too much of the reader. Most people read the books to learn, but GPU Gems is almost written like a reference.

In all fairness, I guess when you’ve worked with graphics software and hardware your entire life, it’s easy to forget just how unintuitive it is for most people to understand.

0
165 Oct 10, 2008 at 04:55

The GPU Gems chapters are written in a style like an academic paper. They’re not really intended to be tutorials (communicating from a professional to a novice) - they’re intended to communicate from one professional to another, so they assume the reader has the professional level of expertise needed to fill in the details themselves.

Of course, just because you’re a professional doesn’t mean you’re a good writer.

0
101 Oct 10, 2008 at 06:23

Lingo = language?

Anyway, let’s go for a simple reflection and improve it afterwards. So:
Do I have to move the camera ON the water before modifying the up vector and making the camera modification, or do I apply the equation directly?

Here is the first working result with CubeMap reflection:

It’s not the best, but I’m starting to work on it.
What about you? Ideas or suggestions?

I’m using this formula to compute the result

return CurrentBackbuffer.Sample(SamplText, ViewTexC)
       * (Lambert + Phong)
       * CubeMap.Sample(SamplText, reflect(-normalize(EyePosition - input.WorldPos), input.Nor));


Mmm… why does the patch of water seem darker every time?
Anyway, if I move the camera, sometimes the render target surface appears cleared (green)… not rendered.

0
101 Oct 14, 2008 at 07:39

hop

0
101 Oct 15, 2008 at 15:14

hey!

0
101 Oct 15, 2008 at 16:16

Hey, listen man, I like to help, but you have GOT to figure some of this out for yourself. I can’t help you through every step, especially since I’m loaded with tests this week and next. You don’t learn nearly as much when someone just tells you the answers anyway.

Yes, lingo = language / vocabulary

Anyway, let’s go for a simple reflection and after improve it, so
Have i to move camera ON the water before modify up vector and make camera modification, or have i to apply directly the equation?

I really don’t understand what you’re asking here; can you clarify? Is this for realtime reflection or cube mapping?

The only suggestion I have for your shader is that you add the cube-map color instead of multiplying it. It’s more physically accurate; reflected light (like specular) shouldn’t show the surface’s properties.

0
101 Oct 18, 2008 at 17:33

I apologize for my insistence, I was getting carried away…

Following your latest advice, doing the simple sum did not help: the final render target becomes very, very close to white.

So, just to learn better, I was trying to follow the method described in the book.
As I’ve understood it, I have to:

a) Move the camera onto the water.
a.1) Where must the camera look?
b) Flip the camera using the particular matrix.
b.1) How can I obtain the water level?
c) Change the up vector to -1 as the y value.
d) Render the scene to the target, clipping geometry at the water level.
e) Apply the texture to the water.

Is this correct?
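For step d), the shader-side clipping mentioned earlier behaves like this CPU-side sketch (Python; HLSL’s clip() intrinsic discards a pixel when its argument is negative, so the reflection pass keeps only fragments above the water level):

```python
def keep_fragment(world_y, water_y):
    """Reflection-pass clip test: keep only geometry above the
    water plane, mirroring clip(world_y - water_y) in HLSL."""
    return (world_y - water_y) >= 0.0
```

Without this test, geometry below the water would appear in the mirrored image and corrupt the reflection.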