I’m interested in learning the algorithm(s) behind UV unwrapping. I’d
like to bat around generating my own textures for pre-existing 3D models
via code, but, of course, I have to know the UVW info first. What’s the
algorithm, and can one use said algorithm(s) to store the coordinate
points in some sort of array with values attached to the array elements
(so one can write code for the elements)?
First things first, of course, and that’s to find the algorithm(s).
Perhaps the most popular of all UV mapping techniques is planar mapping.
It’s simple to understand and compute. You simply calculate the bounding
box of the model, iterate through each vertex, subtract the minimum
bounding box value from the vertex and divide by the (max - min)
bounding range. This will assign each vertex a UV within the normalized
UVW space. All that remains is deciding which plane you wish to map to:
XY, XZ, or YZ. If you want to split the planar space into two, one for
the front side and another for the back side of the model, you can use
the vertex normal to determine if the vertex is front-facing or
back-facing and adjust the UV coordinates accordingly. You could, for
instance, pack front-facing polygons in the first half of the X-axis and
pack the back-facing polygons in the second half. You may
have some stretching or resolution issues along the silhouette of the
model, but you can correct for this by treating these polygons as
special and giving them additional or dedicated UV space on the map. A
similar technique is used for cube mapping, except you analyze all 6
planes +-X, +-Y, and +-Z. You use the vertex normals to decide which
cube face the polygon should be mapped to and then make sure each cube
face has its own island in the UV space. This is only good for models
where each vertex can map to a unique UV coordinate. Cube mapping would
not work with a torus, for example.
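The planar part of the above can be sketched in a few lines. This is a minimal Python example of my own (the function name `planar_uv_xy` and the choice of the XY plane are assumptions, not from any library): compute the bounding box, then normalize each vertex into [0, 1].

```python
def planar_uv_xy(vertices):
    """Planar-map (x, y, z) vertices onto the XY plane.

    Returns one (u, v) pair per vertex, normalized into [0, 1]
    by the model's bounding box, as described above.
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    # Guard against a degenerate (flat) axis to avoid division by zero.
    range_x = (max_x - min_x) or 1.0
    range_y = (max_y - min_y) or 1.0
    return [((v[0] - min_x) / range_x, (v[1] - min_y) / range_y)
            for v in vertices]

quad = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 4.0, 1.0), (0.0, 4.0, 1.0)]
uvs = planar_uv_xy(quad)  # corners land at (0,0) and (1,1)
```

The front/back split would layer on top of this: test each vertex normal's Z sign and remap `u` into either the first or second half of the axis.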
An extension to the planar mapping algorithm is to give each polygon
its own dedicated, uniformly assigned coordinates. You end up with a UV
map that resembles a brick house, but it’s great for dealing with very
complicated models where you can’t be bothered to micro edit UV
coordinates from the more “smart” algorithms. It’s good in that it
maximizes the UV space. It’s also great for lightmapping.
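A sketch of that "brick house" layout, assuming square cells on a uniform grid (the helper `per_polygon_uvs` is my own invention for illustration): give each of N polygons its own cell in a ceil(sqrt(N))-wide grid.

```python
import math

def per_polygon_uvs(num_polys):
    """Assign each polygon its own square cell in a uniform UV grid.

    Returns one (u, v, width, height) rect per polygon; the polygon's
    vertices would then be packed inside its rect.
    """
    cells = math.ceil(math.sqrt(num_polys))  # grid is cells x cells
    size = 1.0 / cells
    rects = []
    for i in range(num_polys):
        col, row = i % cells, i // cells
        rects.append((col * size, row * size, size, size))
    return rects
```

For lightmapping you would typically also weight each cell by the polygon's world-space area, so big faces get more texel density; this sketch keeps every cell the same size.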
Another algorithm is least squares conformal maps (LSCM). I’ve heard good
things about it, and it’s used in modeling packages like Blender. If
you google it you can find some more info, or someone else here might
shed some light on it.
One thing to keep in mind with UV mapping is that you need to account
for texture bleeding. Linear interpolation in graphics hardware will end
up sampling neighbouring texels, so make sure to add padding in-between
your UV islands to account for that.
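The padding itself is just a texel-to-UV conversion. A minimal sketch, assuming axis-aligned rectangular islands and a square texture (`pad_island` is a hypothetical helper, not any engine's API):

```python
def pad_island(rect, texture_size, padding_texels=4):
    """Shrink a (u, v, width, height) island rect inward so bilinear
    sampling at its edges stays `padding_texels` away from neighbours.
    """
    pad = padding_texels / texture_size  # padding in normalized UV units
    u, v, w, h = rect
    return (u + pad, v + pad, w - 2 * pad, h - 2 * pad)

# A half-texture island on a 256px map, with a 4-texel gutter:
padded = pad_island((0.0, 0.0, 0.5, 0.5), 256, 4)
```

In practice you also dilate the texture colors outward into the gutter, so that any samples that do land there pick up a sensible color; mipmapping pushes the required padding higher than one texel.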
Thanks TheNut; that’s exactly what I was looking for. Excellent.
PS: can UV unwrapping be performed somehow via code/scripting in Unity?