Lookin’ for some relatively inexpensive (…free?) software that will
take a texture and generate bumpmaps, height maps, etc. (Btw, are bump
maps/light maps/alpha maps all the same thing?)
And…can all that be done in Photoshop?
You can do that with an art program like Photoshop, but it won’t give a
true bump or normal map. It will basically just convert the image to
grayscale, and the light or dark areas of the image can be used for
height. This may or may not be true to the actual surface. To make a
true normal map, you need to do a light bake on the model. One program
that does this is xNormal. You can also do it with a modeling package
like Blender.
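That grayscale trick can be sketched in a few lines of plain Python. A minimal illustration (function and variable names are mine), using the standard Rec. 601 luminance weights for brightness:

```python
# Approximate the "convert to grayscale, treat brightness as height" trick.
# rgb_image is a list of rows; each pixel is an (r, g, b) tuple in 0..255.
def height_map_from_texture(rgb_image):
    heights = []
    for row in rgb_image:
        heights.append([
            # Rec. 601 luminance: the perceived brightness of the pixel.
            (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
            for (r, g, b) in row
        ])
    return heights  # values in 0.0 (low) .. 1.0 (high)

# A bright pixel ends up "high", a dark one "low" -- which may or may not
# match the real surface, exactly as noted above.
bumps = height_map_from_texture([[(255, 255, 255), (0, 0, 0)]])
```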
Bump maps are used to define per-pixel normals in the 3D engine, so they
give the illusion of depth.
Light maps give the illusion of shadows, but they aren’t real time.
Alpha maps are used for transparency.
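To make the alpha map case concrete: at render time, the alpha value blends the surface color over whatever is behind it. A minimal single-pixel sketch in Python (names are illustrative, not from any particular engine):

```python
def alpha_blend(src, dst, alpha):
    """Standard "over" compositing: alpha=1.0 shows only the src (surface)
    color, alpha=0.0 shows only the dst (background) color."""
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src, dst))

# Half-transparent red over a white background:
result = alpha_blend((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5)
```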
Thanks fireside, great stuff. That’s really helpful.
I wasn’t aware that light maps aren’t real time. What’s the purpose of
them, then, if one can simply do said shading in the original texture
art?
Light maps are a lot easier on the CPU than doing real-time lighting. I
know Unity still uses them. If you are developing for lower-end
hardware, they are helpful.
It’s useful to separate light maps from the texture maps because then
you can re-use a texture map in many places and apply different lighting on
top of that. If you baked the lighting into the texture, the lighting
would need to be the same everywhere that texture was used. Lightmaps
are typically uniquely UV-unwrapped, so every surface has its own
section of lightmap that’s not used anywhere else. Also, the lightmaps
are lower resolution to save memory and performance, since lighting is
often smooth and does not require sharp details, especially indirect
lighting. It’s not uncommon these days to use lightmaps (or some other
kind of pre-baked representation) for indirect lighting, and use
real-time methods for direct light.
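A sketch of that compositing step, assuming a simple grid-of-floats representation: the low-resolution lightmap is upsampled (nearest-neighbour here for brevity; real renderers filter bilinearly) and multiplied over the material texture:

```python
def apply_lightmap(albedo, lightmap):
    """Modulate a texture by a lower-resolution lightmap.
    albedo: HxW grid of grayscale values; lightmap: a smaller grid."""
    h, w = len(albedo), len(albedo[0])
    lh, lw = len(lightmap), len(lightmap[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Nearest-neighbour lookup into the smaller lightmap.
            light = lightmap[y * lh // h][x * lw // w]
            row.append(albedo[y][x] * light)
        out.append(row)
    return out

# 4x4 texture lit by a 2x2 lightmap: each lightmap texel covers a 2x2 block.
lit = apply_lightmap([[1.0] * 4 for _ in range(4)],
                     [[1.0, 0.5], [0.5, 0.25]])
```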
What’s the difference between “baking the lighting into the texture”
and, how to put it, “regular” light mapping? I know from studying the
Beast (or whatever it’s called) baking system in Unity that light
mapping is something that’s done during development, not real time, but
I don’t quite have a firm grasp of what light mapping really is. Bump
maps and alpha maps are quite obvious, but light mapping is still
something I’m trying to get a firm grip on.
By “baking the lighting into the texture” I meant what (I think) you
said about “do[ing] said shading in the original texture art”. That is,
applying the lighting/shading to the material textures as a
precomputation, so that at runtime there is no separate lightmap and
material color map, just one texture with everything already “baked” in.
With light mapping, typically there are two separate texture layers -
the material texture, and the lighting texture. They’re composited in
real-time even though the lightmaps are precomputed and don’t actually
change at runtime. That’s what I was describing in my previous post.
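In other words, the difference is only in when the multiply happens. A toy sketch (all names mine) showing that baking into the texture and compositing at runtime produce the same pixels, while only the separate-lightmap version lets one albedo texture be reused under different lighting:

```python
albedo = [0.8, 0.6, 0.4]          # one row of a material texture
lightmap_a = [1.0, 0.5, 0.25]     # lighting for one placement
lightmap_b = [0.2, 0.2, 1.0]      # different lighting, same texture

# "Baked" variant: lighting is multiplied in ahead of time, so this
# texture is now tied to lightmap_a's placement only.
baked = [t * l for t, l in zip(albedo, lightmap_a)]

# Separate-lightmap variant: the same multiply happens at runtime,
# so the one albedo texture works under both lightmaps.
runtime_a = [t * l for t, l in zip(albedo, lightmap_a)]
runtime_b = [t * l for t, l in zip(albedo, lightmap_b)]
```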
Yeah, what Beast does in Unity is create a huge light map of the
whole scene. It’s probably not a texture, just pre-computation like
Reedbeta said. In modeling software, it’s done on a specific model and
added to the color texture. You can also make a black and white texture
for light for the model and some engines can use it as a separate
texture and add them together. I don’t think that’s done as much as it
used to be, because it’s better to do the whole scene. Some of it
might be done for high/low poly models, where the light and normals are
baked on a high poly model and then put on a low poly model to give it
the illusion of more detail. I know that’s done for normal maps because
you can make a low poly model appear to have much more detail with a
high poly normal map, so you basically make two models and bake the
normal map on the high poly one. That’s not that hard because you just
stop when you are building the low poly model and save it, and then
start adding detail, sometimes in a different type of modeler. Then you
use that map on the low poly model you saved. A normal map really isn’t
much different than a light map. It just darkens the low areas, but it
uses diffuse light, so it’s not coming from a certain direction; or
something like diffuse light anyway. It might use multiple directions,
or not even use light, but the low areas end up being darker.
Normal maps don’t store lighting; they store information about the
detailed shape of the surface so that arbitrary lighting can be applied
without changing the texture. They’re mainly used for dynamic lighting;
the shader will use the normal map data to calculate better lighting
than it could without the normal map, and the lights can move around and
change arbitrarily at runtime. Normal maps can also help with increasing
detail in static lighting. HL2 had a neat way of baking three different
lightmaps that would capture the lighting from three directions relative
to a surface, then use the normal map to blend those three together
per-pixel. It looks a lot nicer than standard lightmaps.
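That HL2 technique (Valve calls it radiosity normal mapping) stores lightmaps for three fixed tangent-space basis directions and blends them per pixel by how much the normal points toward each basis. A simplified sketch: the basis vectors are Valve’s published ones, but the weighting here is a plain clamped dot product rather than Valve’s exact formula:

```python
import math

# Valve's three tangent-space basis directions from Half-Life 2.
BASIS = [
    (-1 / math.sqrt(6),  1 / math.sqrt(2), 1 / math.sqrt(3)),
    (-1 / math.sqrt(6), -1 / math.sqrt(2), 1 / math.sqrt(3)),
    ( math.sqrt(2 / 3),  0.0,              1 / math.sqrt(3)),
]

def blend_lightmaps(normal, lightmaps):
    """Blend three directional lightmap samples by the per-pixel normal.
    normal: unit tangent-space normal; lightmaps: three scalar samples."""
    weights = [max(0.0, sum(n * b for n, b in zip(normal, basis)))
               for basis in BASIS]
    total = sum(weights) or 1.0  # avoid division by zero for back-facing normals
    return sum(w * l for w, l in zip(weights, lightmaps)) / total

# A flat normal (0, 0, 1) weights all three directions equally,
# so the result is just the average of the three samples.
flat = blend_lightmaps((0.0, 0.0, 1.0), [0.9, 0.3, 0.6])
```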
Oh. I guess because it was being done in Blender, I thought it had
something to do with lighting. I know the normals on the model are just
little direction lines sticking out of the model, but when you bake it
to a texture, it looks like a two-color type texture, which is what Unity
asks for. It’s the same size as the color texture and gets applied in
the same way. I just do simple models so the most I do is use a texture
with an included normal map or make one from the texture so the model
looks like it has a little depth, otherwise it will just look flat. I’ve
seen the tutorials for using a high poly model and baking the normal map
and then using it on a low poly model.
Just remember to wear oven mitts when you’re baking textures :D
You can also use tools like GIMP to produce normal maps. Normal maps are
not created just from multi-resolution models, but also from arbitrary
images to add deformities such as cracks, scratches, and bumps. In some
cases, like relief mapping (parallax bump mapping), brick textures
can appear extruded when viewed at certain angles. None of these
situations calls for an original high poly model. My
TexGen tool can procedurally generate
normal maps if that interests you. There used to be another good tool
called MaPZone (by Allegorithmic), but
it doesn’t look active anymore, at least not the free edition. He’s gone
pro now. There’s also
Genetica, which has been
around a bit longer. Blender can do these as well if you work with its
node editor. If you learn Python, you could even write your own.
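The make-a-normal-map-from-an-image idea boils down to treating brightness as height and taking derivatives. A hedged pure-Python sketch of that (plugins like GIMP’s normal-map filter do essentially this, plus filtering options):

```python
def normals_from_heights(height, strength=1.0):
    """Build tangent-space normals from a grayscale height grid using
    central differences. Returns unit (x, y, z) vectors per pixel."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slopes from neighbouring heights (clamped at the borders).
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # Normal of the surface z = height(x, y): (-dz/dx, -dz/dy, 1), normalized.
            length = (dx * dx + dy * dy + 1.0) ** 0.5
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals

# A perfectly flat height field yields straight-up normals, i.e. the
# uniform blue color you see in "empty" normal maps.
flat = normals_from_heights([[0.5, 0.5], [0.5, 0.5]])
```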
For lightmaps, stick with a pro modeling toolkit. Ideally, use a real
level editor created by game devs (such as Havok) since they simplify
the matter for you. Blender can do this as well, but it’s a lot of
manual labour. When Blender renders an image, by default it renders the
scene to a new image. You can tell the renderer to only render lights
and shadows (or ambient occlusion) and specifically to a texture (one
that is already assigned to an object). However, you need to save that
file afterwards and do this for every model. Best to write a Python
script and go watch a movie while it bakes.