"Parametric trilinear transformation"
Posted 10 November 2008 - 09:39 PM
It's called "parametric trilinear transformation" and is basically the author's way of transforming the coordinates of a unit box into an arbitrarily skewed box. The problem is, all the author gives me beyond that are these three equations:
X(s, t, u) = Ax * s + Bx * t + Cx * u + Dx * st + Ex * tu + Fx * su + Gx * stu + Hx
Y(s, t, u) = Ay * s + By * t + Cy * u + Dy * st + Ey * tu + Fy * su + Gy * stu + Hy
Z(s, t, u) = Az * s + Bz * t + Cz * u + Dz * st + Ez * tu + Fz * su + Gz * stu + Hz
where 0 <= s, t, u <= 1 are the coordinates in the unit cube and x, y, z are the real world coordinates after transforming into the skewed box.
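For concreteness, here is the forward mapping as I read it, written out as code (the Vec3 type and the coefficient struct are my own invention; the author provides nothing of the sort):

struct Vec3 { float x, y, z; };

// The eight unknown coefficient vectors.
struct Coeffs { Vec3 A, B, C, D, E, F, G, H; };

// Direct transcription of the three equations above: maps a point
// (s, t, u) in the unit cube to a point in the skewed box.
Vec3 transform(const Coeffs& c, float s, float t, float u)
{
    Vec3 p;
    p.x = c.A.x*s + c.B.x*t + c.C.x*u + c.D.x*s*t
        + c.E.x*t*u + c.F.x*s*u + c.G.x*s*t*u + c.H.x;
    p.y = c.A.y*s + c.B.y*t + c.C.y*u + c.D.y*s*t
        + c.E.y*t*u + c.F.y*s*u + c.G.y*s*t*u + c.H.y;
    p.z = c.A.z*s + c.B.z*t + c.C.z*u + c.D.z*s*t
        + c.E.z*t*u + c.F.z*s*u + c.G.z*s*t*u + c.H.z;
    return p;
}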
There isn't even an explanation of what A, B, ..., H are, though I strongly suspect they are derived from the box's eight corner points in some way.
It's fairly obvious that for an unskewed box, P = A * s + B * t + C * u (plus an offset for the box's origin corner) would already give the point for any input s, t and u, as that simply means moving along the sides. So the remaining terms must be taking care of the skewing, but that part I cannot figure out.
Perhaps someone knows his stuff around this topic? I'm rather confused by the lack of information.
Posted 10 November 2008 - 10:07 PM
You can figure them out by solving a system of linear equations. There are 8 unknown vectors, with three components each, so 24 unknowns in total. If, for each corner of the box, you substitute the appropriate s, t, u values on the right-hand side and the x, y, z values on the left in the above equations, you end up with a total of 24 linear equations in the components of A...H. This system can then be solved with Gaussian elimination, or your linear solver of choice.
Linear systems like this are a fairly common occurrence in graphics papers; I guess the author figured most readers would recognize it as linear and already know how to solve such things.
Posted 11 November 2008 - 02:39 PM
But then again, my experience here is limited, so it's no wonder. Still, could you please be a bit more detailed about the part with substituting x, y, z, s, t and u? I didn't quite follow you there. I don't have all six of these values at once, do I? (Other than perhaps for the eight cube corners, if that's what you mean.) What would one such resulting equation look like?
Posted 11 November 2008 - 05:18 PM
x1 = Ax * s1 + Bx * t1 + Cx * u1 + Dx * s1 * t1 + Ex * t1 * u1 + Fx * s1 * u1 + Gx * s1 * t1 * u1 + Hx
y1 = Ay * s1 + By * t1 + Cy * u1 + Dy * s1 * t1 + Ey * t1 * u1 + Fy * s1 * u1 + Gy * s1 * t1 * u1 + Hy
z1 = Az * s1 + Bz * t1 + Cz * u1 + Dz * s1 * t1 + Ez * t1 * u1 + Fz * s1 * u1 + Gz * s1 * t1 * u1 + Hz
... and so on for points 2 through 8. It gives you a total of 24 equations which, as you can see, are linear in the vectors A...H.
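In code, the 24 equations decouple into three 8x8 systems that share the same matrix (the x-equations only involve x-components, and so on), so a single elimination pass handles all three coordinates. A minimal sketch; the corner ordering and the Vec3 type are my own assumptions:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Solve for the eight coefficient vectors given the eight corners of
// the skewed box.  corner[i][j][k] is the world-space corner that the
// unit-cube point (s, t, u) = (i, j, k) should map to, i/j/k in {0, 1}.
// coef[] comes out in the order A, B, C, D, E, F, G, H.
void solveCoeffs(const Vec3 corner[2][2][2], Vec3 coef[8])
{
    float M[8][8];   // monomials s, t, u, st, tu, su, stu, 1 per corner
    Vec3  rhs[8];    // the corresponding world-space positions
    int row = 0;
    for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 2; ++j)
    for (int k = 0; k < 2; ++k, ++row) {
        float s = (float)i, t = (float)j, u = (float)k;
        float b[8] = { s, t, u, s*t, t*u, s*u, s*t*u, 1.0f };
        for (int c = 0; c < 8; ++c) M[row][c] = b[c];
        rhs[row] = corner[i][j][k];
    }

    // Gaussian elimination with partial pivoting.  The same row
    // operations apply to all three coordinates at once through the
    // Vec3 right-hand side.
    for (int p = 0; p < 8; ++p) {
        int best = p;
        for (int r = p + 1; r < 8; ++r)
            if (std::fabs(M[r][p]) > std::fabs(M[best][p])) best = r;
        for (int c = 0; c < 8; ++c) std::swap(M[p][c], M[best][c]);
        std::swap(rhs[p], rhs[best]);
        for (int r = p + 1; r < 8; ++r) {
            float f = M[r][p] / M[p][p];
            for (int c = p; c < 8; ++c) M[r][c] -= f * M[p][c];
            rhs[r].x -= f * rhs[p].x;
            rhs[r].y -= f * rhs[p].y;
            rhs[r].z -= f * rhs[p].z;
        }
    }
    // Back substitution, from the last unknown (H) up to the first (A).
    for (int r = 7; r >= 0; --r) {
        Vec3 v = rhs[r];
        for (int c = r + 1; c < 8; ++c) {
            v.x -= M[r][c] * coef[c].x;
            v.y -= M[r][c] * coef[c].y;
            v.z -= M[r][c] * coef[c].z;
        }
        coef[r].x = v.x / M[r][r];
        coef[r].y = v.y / M[r][r];
        coef[r].z = v.z / M[r][r];
    }
}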
Posted 12 November 2008 - 10:44 PM
Also, I'd very much appreciate it if you could perhaps nudge a link or something my way with info about this trilinear transformation business. I really couldn't find anything.
Posted 13 November 2008 - 12:35 AM
However, it seems likely that with all the zeros in the matrix, you can rearrange it into upper or lower triangular form (by permuting rows and columns, perhaps), and in that case a plain forward or backward substitution gets you the results very quickly.
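For illustration (one possible ordering; I haven't checked what the paper does): take the unknowns in the order H, A, B, C, D, F, E, G, i.e. the monomials 1, s, t, u, st, su, tu, stu, and the corners in the order 000, 100, 010, 001, 110, 101, 011, 111. Evaluating the monomials at those corners gives the system matrix

1 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0
1 0 1 0 0 0 0 0
1 0 0 1 0 0 0 0
1 1 1 0 1 0 0 0
1 1 0 1 0 1 0 0
1 0 1 1 0 0 1 0
1 1 1 1 1 1 1 1

which is already lower triangular with ones on the diagonal, so a single forward-substitution pass solves it exactly (no divisions needed, even).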
As for the trilinear transformation thing, your guess is as good as mine; I don't really know anything about it (except that it seems misnamed, since the function is cubic in s,t,u and not linear...)
Posted 13 November 2008 - 05:09 AM
P(s, t) = A + B*s + C*t + D*s*t, where A = x00, B = (x10 - x00), C = (x01 - x00) and D = (x00 - x10 - x01 + x11)
If you know what the 8 corner values of the cube should be, then you can just extend the above reformulation of bilinear interpolation to the trilinear case and read off what the coefficients must be. No need to actually solve a 24x24 system.
Posted 13 November 2008 - 05:40 AM
With that in mind, yeah, for the trilinear case it should be straightforward to get closed form expressions for A...H in terms of the corners.
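Spelled out (using P[i][j][k] for the corner that (s, t, u) = (i, j, k) maps to; I derived the signs the same way as the bilinear case above, so double-check against your data):

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return Vec3{ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 operator-(Vec3 a, Vec3 b) { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }

// Closed-form coefficients from the eight corners, no solver needed.
void closedFormCoeffs(const Vec3 P[2][2][2],
                      Vec3& A, Vec3& B, Vec3& C, Vec3& D,
                      Vec3& E, Vec3& F, Vec3& G, Vec3& H)
{
    H = P[0][0][0];                                   // constant term
    A = P[1][0][0] - H;                               // s
    B = P[0][1][0] - H;                               // t
    C = P[0][0][1] - H;                               // u
    D = P[1][1][0] - P[1][0][0] - P[0][1][0] + H;     // s*t
    E = P[0][1][1] - P[0][1][0] - P[0][0][1] + H;     // t*u
    F = P[1][0][1] - P[1][0][0] - P[0][0][1] + H;     // s*u
    G = P[1][1][1] - P[1][1][0] - P[1][0][1] - P[0][1][1]
      + P[1][0][0] + P[0][1][0] + P[0][0][1] - H;     // s*t*u
}

Plugging all eight corners back into the forward equations reproduces them exactly, which is a quick way to verify the signs.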
Posted 18 November 2008 - 04:52 PM
I am now trying to find the point (s, t, u) for a given point (x, y, z), i.e. I'm trying to do the inverse transformation. The author states that there is no closed form for this, and suggests doing "3D Newton-Raphson iteration". I only know Newton-Raphson for functions of one variable. Yet of course the three coordinate equations we have here are multivariable and can't simply be differentiated like a one-variable function to fit into the usual Newton-Raphson procedure.
What do I do?
Posted 23 November 2008 - 01:48 AM
I pretty much followed this site (chapter "Newton's Method for Nonlinear Systems") and I really do not see the problem. I only have a few guesses.
One: I work with row vectors, whereas this site used column vectors. Therefore, my Jacobian matrix will have to be transposed (i.e. mirrored along the diagonal), right?
Two: Somewhere on that site it is mentioned that the sum of certain partial derivatives must not exceed one in order for the iteration to converge. But since the parameters A through H appear in the derivatives, that can't always hold (for instance when the skewed box is larger than one unit in some dimension, which automatically makes some parameter larger). I think this is the cause of the very large parameter changes. Though of course that doesn't explain why the changes keep growing over time.
But perhaps I am wrong on a much more fundamental level. I seek the root for three functions making up the difference between the point to be parameterized and the point transformed back using the current parameters. This means that if I hit the right parameters, all of these functions are zero, since the points coincide. Is that even right, or am I making a wrong assumption here?
It's late here, so please forgive any totally dumb questions.
Posted 23 November 2008 - 04:55 AM
A good test is to start it with a guess that is very close to the solution (like 0.01 away) and see if it still has this behavior. If it does, there's probably something wrong with the math in the implementation.
Another good test is to try it on some affine functions, like f(p) = M*p + c where M is a constant matrix and c a constant vector. Newton's method should find the root of a function like this in exactly one iteration, no matter what the initial guess is.
As for your questions -
1. Yes, the row-column convention means your Jacobian should be transposed relative to theirs.
2. For convergence conditions, I don't think the functions 'g' they talk about in the fixed-point iteration section are the same as the functions 'f' to which you're applying Newton's method in the following section. Rather, g is the function that maps from one guess to the next. So g is the whole expression g(p) = p - J^(-1)(p) * f(p). I haven't worked out what the derivatives of this are in terms of A...H, but I'd be interested to find out.
3. Yes, you've got it right - when f is the difference between current point and desired point, it has a root where all three components of the difference are zero and the points coincide.
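For what it's worth, the whole scheme fits in one sketch. Everything below is my own hypothetical naming (Vec3, Coeffs, inverseTransform), single precision like yours; the Jacobian entries are just the partial derivatives of the forward equations, and the 3x3 solve is done by Cramer's rule:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Coeffs { Vec3 A, B, C, D, E, F, G, H; };

// Forward transform: same equations as at the top of the thread.
static Vec3 transform(const Coeffs& c, float s, float t, float u)
{
    Vec3 p;
    p.x = c.A.x*s + c.B.x*t + c.C.x*u + c.D.x*s*t + c.E.x*t*u + c.F.x*s*u + c.G.x*s*t*u + c.H.x;
    p.y = c.A.y*s + c.B.y*t + c.C.y*u + c.D.y*s*t + c.E.y*t*u + c.F.y*s*u + c.G.y*s*t*u + c.H.y;
    p.z = c.A.z*s + c.B.z*t + c.C.z*u + c.D.z*s*t + c.E.z*t*u + c.F.z*s*u + c.G.z*s*t*u + c.H.z;
    return p;
}

// Newton-Raphson in 3D: find (s, t, u) with transform(s, t, u) == target.
// f is the residual (current point minus target); each step solves
// J * delta = f and subtracts delta from the guess.  Returns false if
// the Jacobian goes numerically singular or the loop fails to converge.
static bool inverseTransform(const Coeffs& c, Vec3 target,
                             float& s, float& t, float& u,
                             int maxIter = 20, float eps = 1e-5f)
{
    for (int iter = 0; iter < maxIter; ++iter) {
        Vec3 p = transform(c, s, t, u);
        float fx = p.x - target.x, fy = p.y - target.y, fz = p.z - target.z;
        if (std::fabs(fx) + std::fabs(fy) + std::fabs(fz) < eps)
            return true;

        // Jacobian: rows X, Y, Z; columns d/ds, d/dt, d/du.
        float J[3][3] = {
            { c.A.x + t*c.D.x + u*c.F.x + t*u*c.G.x,
              c.B.x + s*c.D.x + u*c.E.x + s*u*c.G.x,
              c.C.x + t*c.E.x + s*c.F.x + s*t*c.G.x },
            { c.A.y + t*c.D.y + u*c.F.y + t*u*c.G.y,
              c.B.y + s*c.D.y + u*c.E.y + s*u*c.G.y,
              c.C.y + t*c.E.y + s*c.F.y + s*t*c.G.y },
            { c.A.z + t*c.D.z + u*c.F.z + t*u*c.G.z,
              c.B.z + s*c.D.z + u*c.E.z + s*u*c.G.z,
              c.C.z + t*c.E.z + s*c.F.z + s*t*c.G.z } };

        // Solve J * delta = f by Cramer's rule (fine for a 3x3 system).
        float det = J[0][0]*(J[1][1]*J[2][2] - J[1][2]*J[2][1])
                  - J[0][1]*(J[1][0]*J[2][2] - J[1][2]*J[2][0])
                  + J[0][2]*(J[1][0]*J[2][1] - J[1][1]*J[2][0]);
        if (std::fabs(det) < 1e-12f) return false;

        float ds = (fx*(J[1][1]*J[2][2] - J[1][2]*J[2][1])
                  - J[0][1]*(fy*J[2][2] - fz*J[1][2])
                  + J[0][2]*(fy*J[2][1] - fz*J[1][1])) / det;
        float dt = (J[0][0]*(fy*J[2][2] - fz*J[1][2])
                  - fx*(J[1][0]*J[2][2] - J[1][2]*J[2][0])
                  + J[0][2]*(J[1][0]*fz - fy*J[2][0])) / det;
        float du = (J[0][0]*(J[1][1]*fz - fy*J[2][1])
                  - J[0][1]*(J[1][0]*fz - fy*J[2][0])
                  + fx*(J[1][0]*J[2][1] - J[1][1]*J[2][0])) / det;

        s -= ds; t -= dt; u -= du;
    }
    return false;
}

With D, E, F and G all zero the forward map is affine, so this loop should terminate on its second pass from any starting guess; that's the one-iteration test from above.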
Posted 23 November 2008 - 06:54 PM
But now I'm stuck on something else. Whenever I calculate the partial derivatives for the Jacobian, small rounding errors of magnitude 1E-8 remain. If I want a decent inverted matrix, I have to clamp these to zero; otherwise I get giant values in the inverted matrix, hence huge leaps in some iteration steps, and as a result it's impossible to reach any precise solution. But on the other hand, if I set the clamping threshold too low, I seem to destroy essential information (as opposed to mere rounding errors) and run into the infinity problem again in some calculations.
For instance, a threshold of 1E-4 causes jumps and imprecision, while 1E-6 gives infinite results. The lower the threshold, the more calculations are affected by this.
So right now I can choose between wrong results and very imprecise results.
Posted 23 November 2008 - 09:52 PM
Are you sure your matrix inversion code is working properly?
Posted 24 November 2008 - 08:49 PM
I'm wondering anyway why I can mess up the iteration by cutting off these small values. There really must be very small valid values during the course of approximation which I destroy by this. To be precise, this should happen as we approach the nearest point, as the distances returned by the functions can become very small and blend in with the rounding errors. Double precision might very well solve this, but I can afford neither the additional time nor space needed for this.
Posted 26 November 2008 - 06:33 AM
Rows are X, Y, Z; columns are the partial derivatives with respect to s, t, u:

Ax + t*Dx + u*Fx + tu*Gx    Bx + s*Dx + u*Ex + su*Gx    Cx + t*Ex + s*Fx + st*Gx
Ay + t*Dy + u*Fy + tu*Gy    By + s*Dy + u*Ey + su*Gy    Cy + t*Ey + s*Fy + st*Gy
Az + t*Dz + u*Fz + tu*Gz    Bz + s*Dz + u*Ez + su*Gz    Cz + t*Ez + s*Fz + st*Gz
Can you confirm that's what you got for your Jacobian as well?
Posted 26 November 2008 - 04:56 PM
Perhaps it's important that I store the Jacobian in a 4x4 matrix class designed for 3D transformations. But the determinant is computed from the 3x3 part only, and the inversion likewise doesn't mix any of the extra fields into the 3x3 entries, so in theory that shouldn't be a problem.
Also of note, just in case it wasn't clear: my matrix inversion isn't Gaussian elimination or any similar algorithm. It uses the adjugate approach, with all the multiplications written out manually.
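To show the shape of it (a reconstruction of the idea, not my actual class):

#include <cmath>

// 3x3 inversion via the adjugate: inverse = adj(M) / det(M), with
// every cofactor multiplied out by hand.  Returns false when the
// determinant is (numerically) zero.
bool invert3x3(const float m[3][3], float out[3][3])
{
    float c00 =   m[1][1]*m[2][2] - m[1][2]*m[2][1];
    float c01 = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]);
    float c02 =   m[1][0]*m[2][1] - m[1][1]*m[2][0];
    float c10 = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]);
    float c11 =   m[0][0]*m[2][2] - m[0][2]*m[2][0];
    float c12 = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]);
    float c20 =   m[0][1]*m[1][2] - m[0][2]*m[1][1];
    float c21 = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]);
    float c22 =   m[0][0]*m[1][1] - m[0][1]*m[1][0];

    float det = m[0][0]*c00 + m[0][1]*c01 + m[0][2]*c02;
    if (std::fabs(det) < 1e-12f) return false;
    float inv = 1.0f / det;

    // The adjugate is the transpose of the cofactor matrix.
    out[0][0] = c00*inv; out[0][1] = c10*inv; out[0][2] = c20*inv;
    out[1][0] = c01*inv; out[1][1] = c11*inv; out[1][2] = c21*inv;
    out[2][0] = c02*inv; out[2][1] = c12*inv; out[2][2] = c22*inv;
    return true;
}

A sanity check I should probably run: multiply the input by the output and compare against the identity; that would catch a forgotten transpose or sign immediately.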
Posted 29 November 2008 - 05:23 AM
I'm about out of ideas, except that I still don't think the Jacobians for this problem should be coming out singular - just an intuition; I haven't attempted to prove it. All I can say is, check your math again. Sorry to be so unhelpful.