0
101 Jun 27, 2012 at 16:47

Hello. I am currently working on a rendering engine which is meant to render character animations. Right now I am working on vertex blending to make the body parts move with the bones.

For each vertex, the file stores a short representing the first bone’s weight and two shorts indicating which bones affect that vertex. These two shorts also serve as indices into an array of combined transformation matrices (in world space), one per bone.

When the model is loaded from the file, this loop is run for each vertex to build the array of vertices that will be uploaded to the vertex buffer:

    for (int i = 0; i < (int)_numVertex; i++)
    {
        cVertices[i].x = vertex[i].pos[0];
        cVertices[i].y = vertex[i].pos[1];
        cVertices[i].z = vertex[i].pos[2];

        cVertices[i].normal.x = vertex[i].normal[0];
        cVertices[i].normal.y = vertex[i].normal[1];
        cVertices[i].normal.z = vertex[i].normal[2];

        cVertices[i].tu = vertex[i].uv[0];
        cVertices[i].tv = vertex[i].uv[1];

        // Secondary UVs derived from the normal, remapped from [-1,1] to [0,1]
        cVertices[i].su = (float)(vertex[i].normal[0] / 2) + 0.5f;
        cVertices[i].sv = (float)(vertex[i].normal[1] / 2) + 0.5f;

        _bone1List[i]   = vertex[i].boneID[0];
        _bone2List[i]   = vertex[i].boneID[1];
        _boneWeight1[i] = vertex[i].boneWeight1 * 0.01f;   // normalize to [0,1]

        // First bone's weight; D3D derives the second weight as 1 - b1
        cVertices[i].b1 = _boneWeight1[i];

        // Pack the two palette indices into the low bytes of the DWORD
        cVertices[i].matIndices = ((_bone2List[i] & 0xFF) << 8) | (_bone1List[i] & 0xFF);
    }


“b1” is the bone weight, and “matIndices” is the packed DWORD with the matrix palette indices.
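The weight/index packing can be sanity-checked in isolation. As far as I know, `D3DFVF_LASTBETA_UBYTE4` interprets the last beta DWORD as four packed byte indices with the first matrix index in the lowest byte, which matches the shift-and-OR above. A minimal sketch (plain C++, no D3D; the helper names are my own):

```cpp
#include <cassert>
#include <cstdint>

// With D3DFVF_LASTBETA_UBYTE4, the last beta is read as four packed byte
// indices: the first bone's palette index in the lowest byte, the second
// bone's index in the next byte up.
uint32_t packBoneIndices(uint8_t bone1, uint8_t bone2)
{
    return (uint32_t(bone2) << 8) | uint32_t(bone1);
}

uint8_t unpackBone1(uint32_t matIndices) { return matIndices & 0xFF; }
uint8_t unpackBone2(uint32_t matIndices) { return (matIndices >> 8) & 0xFF; }
```

Note that with `D3DFVF_XYZB2` plus `LASTBETA_UBYTE4`, only `b1` is an explicit weight; the fixed-function pipeline derives the second bone’s weight as 1 − b1, so `b1` must already be normalized to [0, 1] (which the `* 0.01f` above appears to handle).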

For each frame, this loop is then called to update the world transformation matrices palette which contains each bone’s combined transformation matrix:

    for (int i = 0; i < boneController._numBones; i++)
    {
    }


The problem is, when the model is loaded, the entire character is overly stretched and goes off the top of the screen. Does anyone know why this could be happening?

Here is my FVF declaration:

    #define MDLFVF (D3DFVF_XYZB2 | D3DFVF_LASTBETA_UBYTE4 | D3DFVF_NORMAL | D3DFVF_TEX1 | D3DFVF_TEX2)

    struct MDLVERTEX
    {
        FLOAT x, y, z;      // position
        FLOAT b1;           // first bone's weight (beta 1)
        DWORD matIndices;   // packed palette indices (last beta, UBYTE4)
        D3DVECTOR normal;
        FLOAT tu, tv;       // diffuse UVs
        FLOAT su, sv;       // secondary UVs
    };


#### 4 Replies

0
167 Jun 27, 2012 at 17:48

Have you stepped through your code in the debugger to see where the bad values are coming in?

0
101 Jun 27, 2012 at 21:21

@Reedbeta

Have you stepped through your code in the debugger to see where the bad values are coming in?

I have checked, and all the bone matrices are correct. Each one is used to draw the bones on screen, and every bone draws in the right place, including the ones that should be affecting the vertices I am currently testing with. The bones are drawn like so:

1. Disable vertex blending
2. Set the world matrix to the current bone’s combined transformation matrix
3. Draw a sphere
4. Restore the world matrix
5. Extract the position vectors from the current bone and its parent: (boneMatrix(3,0), boneMatrix(3,1), boneMatrix(3,2)) and (parentBoneMatrix(3,0), parentBoneMatrix(3,1), parentBoneMatrix(3,2))
6. Draw a line strip between these two positions
7. Re-enable vertex blending
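Step 5 is just reading the translation row out of a row-major, D3D-style matrix. As a tiny illustration with hand-rolled types (no D3D; the names are my own):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Row-major 4x4 in the row-vector convention D3D uses (v' = v * M):
// the translation lives in row 3, columns 0..2.
struct Mat4 { float m[4][4]; };

Mat4 identity()
{
    Mat4 r = {};
    for (int i = 0; i < 4; i++) r.m[i][i] = 1.0f;
    return r;
}

Mat4 translation(float x, float y, float z)
{
    Mat4 r = identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;
    return r;
}

// Equivalent of (boneMatrix(3,0), boneMatrix(3,1), boneMatrix(3,2)) above
Vec3 extractTranslation(const Mat4& b)
{
    return Vec3{ b.m[3][0], b.m[3][1], b.m[3][2] };
}
```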

It looks to me like D3D is offsetting the vertex position by the translation in the matrix; for example, if the bone matrix has a translation with a y value of 10 and the vertex has a y value of 5, the vertex ends up at y = 15. Should the matrices used for blending be relative to the bone’s default position (combinedMatrix - defaultCombinedMatrix), or should they just be the raw combined matrices of the bones (the same ones used to draw the bones)?

0
167 Jun 28, 2012 at 08:02

It depends where your vertices are. The matrix for each bone needs to transform a vertex from the position stored in the vertex buffer to its final position, so if the vertex buffer has the model in some default pose (bind pose), the bone matrices need to include an inverse matrix for the bind pose (including any translation part) composed with the matrix for the desired pose (the one used to visualize the bones, if I understand correctly).
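To put that in code: with D3D’s row-vector convention (v′ = v · M), each palette entry would be inverseBindPose * currentPose, so a bind-pose vertex is first pulled back into the bone’s local space and then pushed out to the new pose. Here is a translation-only sketch with hand-rolled matrices (no D3DX; all names are mine) showing why the raw combined matrix alone over-offsets the vertex:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

Mat4 identity()
{
    Mat4 r = {};
    for (int i = 0; i < 4; i++) r.m[i][i] = 1.0f;
    return r;
}

Mat4 translation(float x, float y, float z)
{
    Mat4 r = identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;
    return r;
}

// Row-vector convention: v * (a * b) applies a first, then b.
Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r = {};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Vec3 transformPoint(const Vec3& v, const Mat4& M)
{
    return Vec3{
        v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0] + M.m[3][0],
        v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1] + M.m[3][1],
        v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] + M.m[3][2] };
}

// Palette entry: undo the bind pose, then apply the current pose. For a
// pure-translation bind matrix, the inverse just negates the translation.
Mat4 skinningMatrix(const Mat4& inverseBindPose, const Mat4& currentPose)
{
    return mul(inverseBindPose, currentPose);
}
```

At bind pose the palette entry collapses to identity, so a vertex at y = 5 under a bone at y = 10 stays at 5 instead of jumping to 15; moving the bone up by 2 carries the vertex to 7.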

0
101 Jun 28, 2012 at 15:29

@Reedbeta

It depends where your vertices are. The matrix for each bone needs to transform a vertex from the position stored in the vertex buffer to its final position, so if the vertex buffer has the model in some default pose (bind pose), the bone matrices need to include an inverse matrix for the bind pose (including any translation part) composed with the matrix for the desired pose (the one used to visualize the bones, if I understand correctly).

So basically, the matrix palette should contain each bone’s inverse bind pose matrix multiplied with the world matrix for the current pose?

EDIT: This is what it does now:

* When the model is first loaded:
  * The bind pose is stored in the combined transformation matrix (used to position and visualize the bones)
  * The bind pose is also stored in a separate bind pose matrix
  * The bind pose matrix is inverted and stored in the inverse bind pose matrix
  * The inverse bind pose matrix is multiplied by the combined transformation matrix and stored in the skinning matrix
* Each frame:
  * If a bone’s position is updated (its combined transformation matrix changes), the inverse bind pose matrix is multiplied by the new combined transformation matrix and stored in the skinning matrix

Now when the model is first loaded, it is displayed correctly (probably because the combined transformation matrix is the same as the bind pose matrix), but whenever a bone’s position changes, the vertices attached to it go all over the place.

This is what it looks like before the bones are moved:

and after:

EDIT AGAIN:
This only seems to happen when the rotation changes. When the bone’s position is the only thing that changes, the vertices move correctly.
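For what it’s worth, that symptom pattern (pure translation works, rotation breaks) is exactly what a wrong multiplication order in the skinning matrix would produce, since translations commute with each other but rotations and translations do not; whether that is the actual cause here is only a guess. A quick check with hand-rolled matrices (row-vector convention, my own helpers):

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

Mat4 identity()
{
    Mat4 r = {};
    for (int i = 0; i < 4; i++) r.m[i][i] = 1.0f;
    return r;
}

Mat4 translation(float x, float y, float z)
{
    Mat4 r = identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;
    return r;
}

Mat4 rotationZ(float radians)
{
    Mat4 r = identity();
    r.m[0][0] =  std::cos(radians); r.m[0][1] = std::sin(radians);
    r.m[1][0] = -std::sin(radians); r.m[1][1] = std::cos(radians);
    return r;
}

Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r = {};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

bool nearlyEqual(const Mat4& a, const Mat4& b)
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            if (std::fabs(a.m[i][j] - b.m[i][j]) > 1e-5f) return false;
    return true;
}
```

Because `mul(T1, T2) == mul(T2, T1)` for translations, a swapped order in `skinning = inverseBindPose * combined` would go unnoticed until a rotation enters the chain.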