"Parametric trilinear transformation"

Sniffman 101 Nov 10, 2008 at 21:39

I am puzzled by something I just read in a paper and am trying to understand.

It’s called “parametric trilinear transformation” and is basically the author’s way of transforming the coordinates of a unit box into an arbitrarily skewed box. The problem is, other than that, all the author gives me are these three equations:

X(s, t, u) = Ax * s + Bx * t + Cx * u + Dx * st + Ex * tu + Fx * su + Gx * stu + Hx
Y(s, t, u) = Ay * s + By * t + Cy * u + Dy * st + Ey * tu + Fy * su + Gy * stu + Hy
Z(s, t, u) = Az * s + Bz * t + Cz * u + Dz * st + Ez * tu + Fz * su + Gz * stu + Hz

where 0 <= s, t, u <= 1 are the coordinates in the unit cube and x, y, z are the real world coordinates after transforming into the skewed box.

There isn’t even an explanation of what A, B, …, H are, though I strongly suspect they are somehow derived from the cube’s eight points.

It’s rather obvious that the vector P = A * s + B * t + C * u would give the point for any input s, t and u if we had only an unskewed box, as it simply means moving along the sides. The rest of the terms must be taking care of the skewing, but that part I cannot figure out.

Perhaps someone here knows their stuff on this topic? I’m rather confused by the lack of information.


Reedbeta 167 Nov 10, 2008 at 22:07

The A…H vectors are constants based on the desired transformation. You are right that they are derived from the corners of the skewed box.

You can figure them out by solving a system of linear equations. There are 8 unknown vectors with three components each, so 24 unknowns in total. If, for each corner of the box, you substitute the appropriate s, t, u values on the right and x, y, z values on the left of the above equations, you end up with a total of 24 linear equations in the components of A…H. This system can then be solved with Gaussian elimination, or your linear solver of choice.

Linear systems like this are a fairly common occurrence in graphics papers; I guess the author figured most readers would recognize it as linear and already know how to solve such things.

Sniffman 101 Nov 11, 2008 at 14:39

I see. I was starting to get the impression there was something special about this, because I googled the exact term and it came up with nothing but links to the same paper.

But then again, my experience here is limited, so I needn’t wonder. Still, could you please be a bit more detailed about the part where I substitute x, y, z, s, t and u? I didn’t quite get you there. I don’t have these six values all at once, do I? (Other than perhaps for the eight cube points, if that’s what you mean.) What would one such resulting equation look like?

Reedbeta 167 Nov 11, 2008 at 17:18

Yes, I’m talking about the cube points. For each of the 8 corners, you take the three equations above and substitute the corner’s s, t, u into the right side and its x, y, z into the left side. If I number those points 1 through 8, you have for the first point:

x1 = Ax * s1 + Bx * t1 + Cx * u1 + Dx * s1 * t1 + Ex * t1 * u1 + Fx * s1 * u1 + Gx * s1 * t1 * u1 + Hx
y1 = Ay * s1 + By * t1 + Cy * u1 + Dy * s1 * t1 + Ey * t1 * u1 + Fy * s1 * u1 + Gy * s1 * t1 * u1 + Hy
z1 = Az * s1 + Bz * t1 + Cz * u1 + Dz * s1 * t1 + Ez * t1 * u1 + Fz * s1 * u1 + Gz * s1 * t1 * u1 + Hz

… and so on for points 2 through 8. It gives you a total of 24 equations which, as you can see, are linear in the vectors A…H.
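
To make the setup concrete, here is a sketch (my code, not the paper’s) that assembles the full 24x24 system with the unknowns ordered [Ax…Hx, Ay…Hy, Az…Hz] and solves it by Gaussian elimination with partial pivoting:

```cpp
// Sketch: assemble the 24x24 system and solve by Gaussian elimination
// with partial pivoting. Unknown order: [Ax..Hx, Ay..Hy, Az..Hz].
#include <cmath>
#include <utility>

const int N = 24;

// Solves M * coef = b in place; returns false if M is numerically singular.
bool solve24(double M[N][N], double b[N], double coef[N]) {
    for (int col = 0; col < N; ++col) {
        int piv = col;                            // partial pivoting
        for (int r = col + 1; r < N; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        if (std::fabs(M[piv][col]) < 1e-12) return false;
        for (int c = 0; c < N; ++c) std::swap(M[col][c], M[piv][c]);
        std::swap(b[col], b[piv]);
        for (int r = col + 1; r < N; ++r) {       // eliminate below the pivot
            double f = M[r][col] / M[col][col];
            for (int c = col; c < N; ++c) M[r][c] -= f * M[col][c];
            b[r] -= f * b[col];
        }
    }
    for (int r = N - 1; r >= 0; --r) {            // back-substitution
        double sum = b[r];
        for (int c = r + 1; c < N; ++c) sum -= M[r][c] * coef[c];
        coef[r] = sum / M[r][r];
    }
    return true;
}

// stu[p] = (s,t,u) of corner p in the unit cube, xyz[p] = the skewed-box
// corner it must map to. On success, coef = [Ax..Hx, Ay..Hy, Az..Hz].
bool solveCoefficients(const double stu[8][3], const double xyz[8][3],
                       double coef[N]) {
    double M[N][N] = {}, b[N];
    for (int p = 0; p < 8; ++p) {
        double s = stu[p][0], t = stu[p][1], u = stu[p][2];
        double basis[8] = { s, t, u, s*t, t*u, s*u, s*t*u, 1.0 };
        for (int axis = 0; axis < 3; ++axis) {    // one equation per axis
            int row = 3 * p + axis;
            for (int k = 0; k < 8; ++k)
                M[row][8 * axis + k] = basis[k];
            b[row] = xyz[p][axis];
        }
    }
    return solve24(M, b, coef);
}
```

You can see in the assembly loop that each equation only touches the eight coefficients belonging to its own axis, so most of M stays zero.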

Sniffman 101 Nov 12, 2008 at 22:44

Now, unless I’m totally off track here, this means I’m getting a 24 x 24 matrix, right? That sounds pretty bad given that Gaussian elimination is O(n^3). But I don’t suppose there’s any way that’s much faster? The matrix is practically empty, which doesn’t seem very optimal to me.

Also, I’d very much appreciate it if you could point me to a link or something with info about this trilinear transformation business. I really couldn’t find anything.

Reedbeta 167 Nov 13, 2008 at 00:35

Yes, there will be a 24x24 matrix. Actually this is not that bad. It’s true that Gaussian elimination is O(n^3), but 24 is still not that large an ‘n’, so unless you need to do this hundreds or thousands of times per frame, I don’t think it’ll be a problem.

However, it seems likely that with all the zeros in the matrix, you may be able to arrange it in an upper or lower triangular form (perhaps by reordering rows and columns), in which case a straight forward- or backward-substitution pass will get you the results very quickly.

As for the trilinear transformation thing, your guess is as good as mine; I don’t really know anything about it (except that it seems misnamed, since the function is cubic in s,t,u and not linear…)

JarkkoL 102 Nov 13, 2008 at 05:09

Hmh, this is just simple trilinear interpolation, formulated a bit differently from the common form. For simplicity, consider bilinear interpolation, whose more common form is: P(s, t) = (x00*(1-s) + x10*s)*(1-t) + (x01*(1-s) + x11*s)*t, where x00, x01, x10 and x11 are the 4 corner values. Now if you rearrange that into coefficients of s, t and st, you get: P(s,t) = x00 + (x10-x00)*s + (x01-x00)*t + (x00-x10-x01+x11)*st, i.e.
P(s,t)=A+B*s+C*t+D*st, where A=x00, B=(x10-x00), C=(x01-x00) and D=(x00-x10-x01+x11)

If you know what the 8 corner values of the cube should be, you can just carry the above reformulation of bilinear interpolation through to the trilinear case and work out what the coefficients are. There’s no need to actually solve a 24x24 system.
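
Written out, that gives closed-form expressions for the eight coefficient vectors. A sketch (my expansion of the rearrangement above, not code from the paper; P[i][j][k] is the skewed-box corner that the unit-cube point (s, t, u) = (i, j, k) should map to):

```cpp
// Closed-form trilinear coefficients from the 8 corners. P[i][j][k] is
// the skewed-box corner that unit-cube point (s,t,u) = (i,j,k) maps to.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator*(double f)      const { return { x * f, y * f, z * f }; }
};

struct TrilinearCoefs { Vec3 A, B, C, D, E, F, G, H; };

TrilinearCoefs coefsFromCorners(const Vec3 P[2][2][2]) {
    TrilinearCoefs c;
    c.H = P[0][0][0];                                        // constant
    c.A = P[1][0][0] - P[0][0][0];                           // s
    c.B = P[0][1][0] - P[0][0][0];                           // t
    c.C = P[0][0][1] - P[0][0][0];                           // u
    c.D = P[0][0][0] - P[1][0][0] - P[0][1][0] + P[1][1][0]; // s*t
    c.E = P[0][0][0] - P[0][1][0] - P[0][0][1] + P[0][1][1]; // t*u
    c.F = P[0][0][0] - P[1][0][0] - P[0][0][1] + P[1][0][1]; // s*u
    c.G = P[1][0][0] + P[0][1][0] + P[0][0][1] + P[1][1][1]
        - P[0][0][0] - P[1][1][0] - P[0][1][1] - P[1][0][1]; // s*t*u
    return c;
}

// The forward transform from the original post.
Vec3 transform(const TrilinearCoefs& c, double s, double t, double u) {
    return c.A * s + c.B * t + c.C * u
         + c.D * (s * t) + c.E * (t * u) + c.F * (s * u)
         + c.G * (s * t * u) + c.H;
}
```

As a quick sanity check, transform(c, 0, 0, 0) returns P[0][0][0], and transform(c, 1, 1, 1) telescopes down to P[1][1][1].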

Reedbeta 167 Nov 13, 2008 at 05:40

Oh, interesting. I didn’t realize that a bilinear interpolation multiplied out to a quadratic function like that, though it’s obvious in hindsight.

With that in mind, yeah, for the trilinear case it should be straightforward to get closed form expressions for A…H in terms of the corners.

Sniffman 101 Nov 14, 2008 at 20:25

Yes, it does work like this. I just had to work out which points get added and subtracted to form each coefficient vector, and now it’s working fine. Thanks for the help!

Sniffman 101 Nov 18, 2008 at 16:52

Okay, so here’s another problem closely related to the first.

I am now trying to find the point (s, t, u) for a point (x, y, z), i.e. I’m trying to do the inverse transformation. The author states that there is no closed form for this and suggests doing “3D Newton-Raphson iteration”. I only know Newton-Raphson for functions of one variable, and of course the three coordinate equations here are multivariate, so they can’t simply be differentiated and plugged into the single-variable procedure.

What do I do?

Reedbeta 167 Nov 18, 2008 at 17:22

Try this: http://www.math.hmc.edu/funfacts/ffiles/30002.1-3.shtml

The division by the derivative in Newton-Raphson becomes a multiplication by the inverse Jacobian matrix.
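
One step of that iteration might look like the following sketch (my arrangement, not the paper’s; the Jacobian comes from differentiating the three forward equations with respect to s, t and u, and the coefficients are stored as c[8][3] with rows A…H and columns x, y, z):

```cpp
// Hedged sketch of one 3D Newton-Raphson step for inverting the transform.
// f(s,t,u) = F(s,t,u) - target; each step solves J * d = f for the update
// rather than forming J^(-1) explicitly.
#include <cmath>

// Forward transform component 'a' (0 = x, 1 = y, 2 = z) at (s,t,u);
// c rows are A,B,C,D,E,F,G,H.
double fwd(const double c[8][3], double s, double t, double u, int a) {
    return c[0][a]*s + c[1][a]*t + c[2][a]*u
         + c[3][a]*s*t + c[4][a]*t*u + c[5][a]*s*u
         + c[6][a]*s*t*u + c[7][a];
}

// One Newton step on p = (s,t,u); returns false if |det J| is too small.
bool newtonStep(const double c[8][3], const double target[3], double p[3]) {
    double s = p[0], t = p[1], u = p[2];
    double J[3][3], r[3];
    for (int a = 0; a < 3; ++a) {
        J[a][0] = c[0][a] + c[3][a]*t + c[5][a]*u + c[6][a]*t*u; // d/ds
        J[a][1] = c[1][a] + c[3][a]*s + c[4][a]*u + c[6][a]*s*u; // d/dt
        J[a][2] = c[2][a] + c[4][a]*t + c[5][a]*s + c[6][a]*s*t; // d/du
        r[a] = fwd(c, s, t, u, a) - target[a];                   // residual
    }
    double det = J[0][0]*(J[1][1]*J[2][2] - J[1][2]*J[2][1])
               - J[0][1]*(J[1][0]*J[2][2] - J[1][2]*J[2][0])
               + J[0][2]*(J[1][0]*J[2][1] - J[1][1]*J[2][0]);
    if (std::fabs(det) < 1e-12) return false;
    // Cramer's rule: d[k] = det(J with column k replaced by r) / det(J).
    for (int k = 0; k < 3; ++k) {
        double Jk[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                Jk[i][j] = (j == k) ? r[i] : J[i][j];
        double detk = Jk[0][0]*(Jk[1][1]*Jk[2][2] - Jk[1][2]*Jk[2][1])
                    - Jk[0][1]*(Jk[1][0]*Jk[2][2] - Jk[1][2]*Jk[2][0])
                    + Jk[0][2]*(Jk[1][0]*Jk[2][1] - Jk[1][1]*Jk[2][0]);
        p[k] -= detk / det;
    }
    return true;
}
```

Iterate from an initial guess such as (0.5, 0.5, 0.5), the center of the cube, until the residual r becomes small.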

Sniffman 101 Nov 23, 2008 at 01:48

I’ve played around with this for a while, but my implementation is flawed. I start out with parameters s, t and u and then do an iteration. This overshoots, however. For instance, if I know the parameters for a point must be (0.5, 0.5, 1) and I start out with (0.5, 0.5, 0.75), after the first iteration I’ll have (0.5, 0.5, 1.75). Next iteration, the change is in the opposite direction, but much larger. After a few more iterations, all parameters are infinity.

I pretty much followed this site (chapter “Newton’s Method for Nonlinear Systems”) and I really do not see the problem. I only have a few guesses.
One: I work with row vectors, whereas this site used column vectors. Therefore, my Jacobian matrix will have to be transposed (i.e. mirrored along the diagonal), right?
Two: Somewhere on that site, it is mentioned that the sum of certain partial derivatives must not exceed one in order for the iteration to converge. But since the parameters A through H appear in the derivatives, that can’t always be the case (for instance when the skewed box has one dimension larger than one, which automatically makes one parameter larger). I think this is the cause of the very large parameter changes. Though of course that doesn’t explain why the changes get even larger over time.

But perhaps I am wrong on a much more fundamental level. I’m seeking the root of three functions that make up the difference between the point to be parameterized and the point transformed back from the current parameters. This means that if I hit the right parameters, all of these functions are zero, since the points coincide. Is that even right, or am I making a wrong assumption here?

It’s late here, so please forgive any totally dumb questions.

Reedbeta 167 Nov 23, 2008 at 04:55

Hmm, that does sound like the behavior that can happen with Newton’s method when the initial guess isn’t close enough to the solution. It can overshoot and land further from the solution than it was previously, which causes it to overshoot even further on the second iteration, and so on.

A good test is to start it with a guess that is very close to the solution (like 0.01 away) and see if it still has this behavior. If it does, there’s probably something wrong with the math in the implementation.

Another good test is to try it on some affine functions, like f(p) = M*p + c where M is a constant matrix and c a constant vector. Newton’s method should find the root of a function like this in exactly one iteration, no matter what the initial guess is.
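
If it helps, here is a minimal self-contained version of that affine check (hypothetical 2x2 numbers, nothing from your code):

```cpp
// Check of the "one iteration for affine f" property: for f(p) = M*p + c
// the Jacobian is the constant M, so a single Newton step p - M^(-1)*f(p)
// must land exactly on the root (up to rounding).
#include <cstdio>

int main() {
    double M[2][2] = { {2.0, 1.0}, {0.5, 3.0} };   // constant Jacobian
    double c[2]    = { -1.0, 4.0 };
    double p[2]    = { 10.0, -7.0 };               // arbitrary initial guess

    double f[2] = { M[0][0]*p[0] + M[0][1]*p[1] + c[0],
                    M[1][0]*p[0] + M[1][1]*p[1] + c[1] };
    double det = M[0][0]*M[1][1] - M[0][1]*M[1][0];
    // One Newton step: p -= M^(-1) * f, with the 2x2 inverse written out.
    p[0] -= ( M[1][1]*f[0] - M[0][1]*f[1]) / det;
    p[1] -= (-M[1][0]*f[0] + M[0][0]*f[1]) / det;

    // Residual at the new p should be ~0 in both components.
    std::printf("%g %g\n", M[0][0]*p[0] + M[0][1]*p[1] + c[0],
                           M[1][0]*p[0] + M[1][1]*p[1] + c[1]);
}
```

If the residuals don’t come out at (or extremely near) zero after the single step in your version, the Newton math itself is suspect.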

As for your questions -

1. Yes, the row-column convention means your Jacobian should be transposed relative to theirs.

2. For convergence conditions, I don’t think the functions ‘g’ they talk about in the fixed-point iteration section are the same as the functions ‘f’ to which you’re applying Newton’s method in the following section. Rather, g is the function that maps from one guess to the next. So g is the whole expression g(p) = p - J^(-1)(p) * f(p). I haven’t worked out what the derivatives of this are in terms of A…H, but I’d be interested to find out.

3. Yes, you’ve got it right - when f is the difference between the current point and the desired point, it has a root where all three components of the difference are zero and the points coincide.

Sniffman 101 Nov 23, 2008 at 18:54

This whole thing’s a giant mess. Using your proposals, I found the error: I was misusing my matrix class’s inversion method, so I never actually used the inverted matrix.

But now I’m stuck with something else. Whenever I calculate the partial derivatives for the Jacobian, small rounding errors of magnitude 1E-8 remain. To get a decent inverted matrix, I need to round these to zero; otherwise I get giant values in the inverted matrix, and therefore huge leaps in some iteration steps, which makes it impossible to reach any precise solution. On the other hand, if I set the threshold for rounding to zero too low, I seem to destroy essential information (as opposed to just rounding errors) and get the infinity behavior again in some calculations.
For instance, a threshold of 1E-4 causes jumps and imprecision, while 1E-6 gives infinite results. The lower the threshold, the more calculations are affected.

So now I can choose between wrong results and very imprecise results.

Reedbeta 167 Nov 23, 2008 at 21:52

That’s odd. Matrix inversion shouldn’t be so sensitive to rounding errors, unless the matrix is very close to singular (non-invertible). And I don’t see any reason the Jacobian matrix in this problem should be close to non-invertible.

Are you sure your matrix inversion code is working properly?

Sniffman 101 Nov 24, 2008 at 20:49

Well, it looks like being close to singular is exactly what’s happening. I’ve run a few problematic examples through web applets and they all told me the matrix was singular. In hindsight, that is logical, since the determinant I calculate in these cases is also somewhere in the magnitude of 1E-8. Of course I could at this point treat all determinant values near zero as singular and be done with it, but for some reason that just gives me infinity results again.

I’m wondering anyway why cutting off these small values can mess up the iteration. There must be legitimately small values during the course of the approximation that I destroy this way. To be precise, this should happen as we approach the target point, since the differences returned by the functions become very small and blend in with the rounding errors. Double precision might well solve this, but I can afford neither the additional time nor the space it needs.

Reedbeta 167 Nov 26, 2008 at 06:33

Here’s what I got for the Jacobian in terms of A…H (using column-vector convention, so maybe transposed relative to yours):

Ax + t*Dx + u*Fx + tu*Gx    Bx + s*Dx + u*Ex + su*Gx    Cx + t*Ex + s*Fx + st*Gx
Ay + t*Dy + u*Fy + tu*Gy    By + s*Dy + u*Ey + su*Gy    Cy + t*Ey + s*Fy + st*Gy
Az + t*Dz + u*Fz + tu*Gz    Bz + s*Dz + u*Ez + su*Gz    Cz + t*Ez + s*Fz + st*Gz

Can you confirm that’s what you got for your Jacobian as well?

Sniffman 101 Nov 26, 2008 at 16:56

Yes, that should be what I’m doing as well, although transposed: all derivatives with respect to s go into the first row, those with respect to t into the second and those with respect to u into the third.
Perhaps it matters that I store the Jacobian in a 4x4 matrix class designed for 3D transformations. But the determinant is only calculated from the 3x3 part, and the inversion doesn’t pull any information from the extra fields into the 3x3 entries, so in theory that shouldn’t be a problem.
Also of note, just in case it wasn’t clear: my matrix inversion isn’t Gaussian elimination or any other such algorithm. It uses the adjugate approach, with the multiplications written out manually.

Reedbeta 167 Nov 29, 2008 at 05:23

Hmm, well it shouldn’t matter what method of matrix inversion you use as long as it works properly (although Gaussian elimination is likely faster for large matrices). Also, the 4x4 vs 3x3 distinction shouldn’t matter either; as a matter of fact, if the last row and column of the 4x4 are [0, 0, 0, 1] then its determinant and inverse will match those of the 3x3.

I’m about out of ideas, except that I still don’t think the Jacobians for this problem should be coming out singular - just an intuition; I haven’t attempted to prove it. All I can say is, check your math again. Sorry to be so unhelpful. :(
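
For what it’s worth, here’s a reference adjugate-based 3x3 inversion you could compare against (a sketch, not a drop-in for your class). Note the scale-relative singularity test: an absolute cutoff like 1E-8 is scale-dependent, which might be part of the threshold trouble:

```cpp
// Hedged sketch: 3x3 inversion via the adjugate, with a relative (rather
// than absolute) singularity test, since an absolute epsilon can reject
// perfectly usable Jacobians depending on the scale of the entries.
#include <cmath>

// Inverts m into out; returns false if m is numerically singular.
bool invert3x3(const double m[3][3], double out[3][3]) {
    double cof[3][3]; // cofactors: cof[i][j] = (-1)^(i+j) * minor(i,j)
    cof[0][0] =   m[1][1]*m[2][2] - m[1][2]*m[2][1];
    cof[0][1] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]);
    cof[0][2] =   m[1][0]*m[2][1] - m[1][1]*m[2][0];
    cof[1][0] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]);
    cof[1][1] =   m[0][0]*m[2][2] - m[0][2]*m[2][0];
    cof[1][2] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]);
    cof[2][0] =   m[0][1]*m[1][2] - m[0][2]*m[1][1];
    cof[2][1] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]);
    cof[2][2] =   m[0][0]*m[1][1] - m[0][1]*m[1][0];

    // Determinant by cofactor expansion along the first row.
    double det = m[0][0]*cof[0][0] + m[0][1]*cof[0][1] + m[0][2]*cof[0][2];

    // Relative test: compare det against the matrix's overall scale.
    double scale = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            scale = std::fmax(scale, std::fabs(m[i][j]));
    if (std::fabs(det) < 1e-12 * scale * scale * scale) return false;

    // inverse = adjugate / det; the adjugate is the transposed cofactors.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = cof[j][i] / det;
    return true;
}
```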