# Converting 3x3 matrix to 4x4 matrix

7 replies to this topic

### #1Xcrypt

New Member

• Members
• 144 posts
• Location: Belgium

Posted 22 February 2012 - 08:29 PM

Hi! In my engine I'm working with 3x3 matrices for 2D graphics, but the rendering system is 100% 3D, so everything needs to be converted to 4x4 matrices in the end.

So I have this function:

```cpp
D3DXMATRIX ToD3DXMATRIX() const
{
    D3DXMATRIX m;

    m._11 = _11;  m._12 = _12;  m._13 = _13;  m._14 = 0.0f;
    m._21 = _21;  m._22 = _22;  m._23 = _23;  m._24 = 0.0f;
    m._31 = _31;  m._32 = _32;  m._33 = _33;  m._34 = 0.0f;
    m._41 = 0.0f; m._42 = 0.0f; m._43 = 0.0f; m._44 = 1.0f;

    return m;
}
```

This works fine for scaling and rotation, but it acts really strangely for translation.

So I tried this:
```cpp
D3DXMATRIX ToD3DXMATRIX() const
{
    D3DXMATRIX m;

    m._11 = _11;  m._12 = _12;  m._13 = _13;  m._14 = 0.0f;
    m._21 = _21;  m._22 = _22;  m._23 = _23;  m._24 = 0.0f;
    m._31 = 0.0f; m._32 = 0.0f; m._33 = 1.0f; m._34 = 0.0f;
    m._41 = _31;  m._42 = _32;  m._43 = 0.0f; m._44 = 1.0f;

    return m;
}
```

But it behaves similarly.

Anyone see what I'm doing wrong?

EDIT: It might be worthwhile to mention that both the 3x3 matrix and the 4x4 matrix are row-major matrices, and I'm using the DirectX left-handed coordinate system (x = width (right), y = height (up), z = depth (into the screen)).

### #2Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 22 February 2012 - 08:59 PM

A 3x3 matrix for 2D affine transforms should have 0, 0, 1 in the last column, just as 4x4 matrices have 0, 0, 0, 1 in the last column. Try this?
```cpp
D3DXMATRIX ToD3DXMATRIX() const
{
    D3DXMATRIX m;

    m._11 = _11;  m._12 = _12;  m._13 = 0.0f; m._14 = 0.0f;
    m._21 = _21;  m._22 = _22;  m._23 = 0.0f; m._24 = 0.0f;
    m._31 = 0.0f; m._32 = 0.0f; m._33 = 1.0f; m._34 = 0.0f;
    m._41 = _31;  m._42 = _32;  m._43 = 0.0f; m._44 = 1.0f;

    return m;
}
```


If that doesn't work, I'm not sure what's going on. Translation is in the last row of a matrix (when using row vector convention) so copying row 3 to row 4 should be the right thing to do.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #3Xcrypt


Posted 22 February 2012 - 09:22 PM

You are right. I'm probably doing something wrong in the calculation of my inverse matrix... sigh!

### #4Xcrypt


Posted 22 February 2012 - 09:50 PM

By the way, I'm not sure what Wikipedia means by the transpose of a 3D vector?

This should be an efficient algorithm for computing A^-1 for a 3x3 matrix, so I used that. Here is my calculation:

```cpp
bool GetInverse(Mat3x3& matOUT) const
{
    // Rule of Sarrus
    float det = GetDeterminant();

    if (det == 0) {
        return false; // matrix is not invertible
    }

    D3DXVECTOR3 X0(_11, _21, _31);
    D3DXVECTOR3 X1(_12, _22, _32);
    D3DXVECTOR3 X2(_13, _23, _33);

    D3DXVECTOR3 a; D3DXVec3Cross(&a, &X1, &X2);
    D3DXVECTOR3 b; D3DXVec3Cross(&b, &X2, &X0);
    D3DXVECTOR3 c; D3DXVec3Cross(&c, &X0, &X1);

    matOUT._11 = a.x; matOUT._12 = b.x; matOUT._13 = c.x;
    matOUT._21 = a.y; matOUT._22 = b.y; matOUT._23 = c.y;
    matOUT._31 = a.z; matOUT._32 = b.z; matOUT._33 = c.z;

    matOUT *= (1.0f / det);
    return true;
}
```

Can you see if I did anything wrong?

### #5roel

Senior Member

• Members
• 698 posts

Posted 22 February 2012 - 09:51 PM

For the inverse: it should equal the 3x3 part transposed, if the matrix is affine/orthonormal, IIRC. Just a check. And it's possibly faster to compute.

### #6Xcrypt


Posted 22 February 2012 - 10:03 PM

Well, that explains it... I must be thinking about this wrong, then.

What I am trying to do is making a camera2D class.

The client should inherit from Camera2D.
Camera2D has a 3x3 matrix data member, which represents the camera in 2D world space (= the view matrix inverted).

So in order to go from world space to screen space, I compute the inverse of that data member (matV_inv^-1 = matV).
But it doesn't seem to work.

Now, following your logic that the inverse of a 3x3 affine transformation matrix should equal its transpose, I understand why: the translation part (_31, _32) would go to _13 and _23 respectively, which should both be 0.

But I'm not sure what I should do to fix that?

### #7Reedbeta


Posted 22 February 2012 - 11:30 PM

Transposing a vector just switches between a row vector and a column vector. In that equation I think the vectors are columns, and they're saying to stash three vectors into the rows of a 3x3 matrix, so they're transposing them to turn them into row vectors so that they fit.

The inverse equals the transpose when the matrix is orthogonal, but only for the linear part of the transformation; transposing doesn't handle the translation part correctly. In 2D for an orthogonal transformation you can transpose the 2x2 upper-left submatrix, and in 3D you can transpose the 3x3 upper-left submatrix, but don't transpose the translation part (the last row). To correctly invert an affine transformation using the transpose shortcut, you have to negate the translation and then multiply it by the newly-transposed linear part:

if y = x * L + T (L is the linear part and T is the translation)
then
y - T = x * L
(y - T) * inverse(L) = x
y * inverse(L) + (-T * inverse(L)) = x
so inverse(L) and (-T * inverse(L)) are the linear and translation parts of the inverse affine transformation.

A full-powered matrix inverse will work in any case; just throw it at the whole 3x3 or 4x4 matrix, translation and all. The above is only relevant when attempting to use the transpose shortcut.

### #8Xcrypt


Posted 23 February 2012 - 12:38 AM

Reedbeta: in your equation, what does 'x' stand for? EDIT: never mind, I get it now.

By the way, I found out that the formula from Wikipedia I linked earlier is incorrect (or I was computing it incorrectly). When I use the generic way of calculating the inverse matrix, everything works fine.
