I have a question regarding the NVIDIA Cg shading language (or I guess any
shader language for that matter, but I am using Cg). Is it possible to
create a cubemap for point light shadows in the vertex/pixel shader
itself assuming you have the position of a given light, or must the cube
map be generated outside your shader and passed in? I ask because I
already have all the necessary data available in my shaders; getting that
data back into my C++ code would mean some potentially expensive extra work.
I want to see if there is a way to construct the cube map per light in
the shader and just do my calculations from there, essentially putting
all my cube map code into the shader.
To generate a cube map on the GPU, you need separate draw calls to first
generate the cubemap and then use it. That is, you’d bind each cube map
face as a render target and draw your scene into it, using a standard
vertex shader and a pixel shader that just returns zero (since only depth
matters for shadow mapping, not color). You have to do this six times,
once for each cube face, unless you use a geometry shader to replicate
the triangles into the six render targets on the GPU.
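For reference, each face of the cube map corresponds to a fixed view direction and up vector. Here’s a minimal C++ sketch of those six per-face camera bases, assuming OpenGL’s cube-map face conventions (the struct and function names are just for illustration):

```cpp
#include <array>
#include <cassert>

// Per-face camera basis for rendering a shadow cube map.
// Directions and up vectors follow the usual OpenGL cube-map face
// conventions (GL_TEXTURE_CUBE_MAP_POSITIVE_X ... NEGATIVE_Z).
struct Vec3 { float x, y, z; };

struct FaceCamera {
    Vec3 forward; // direction this face's camera looks
    Vec3 up;      // its up vector
};

// Face order: +X, -X, +Y, -Y, +Z, -Z
inline std::array<FaceCamera, 6> cubeFaceCameras() {
    return {{
        {{ 1, 0, 0}, {0,-1, 0}}, // +X
        {{-1, 0, 0}, {0,-1, 0}}, // -X
        {{ 0, 1, 0}, {0, 0, 1}}, // +Y
        {{ 0,-1, 0}, {0, 0,-1}}, // -Y
        {{ 0, 0, 1}, {0,-1, 0}}, // +Z
        {{ 0, 0,-1}, {0,-1, 0}}, // -Z
    }};
}
```

For each shadow pass, you’d place the camera at the light’s position, aim it along `forward` with the matching `up`, and render with a 90° field of view and 1:1 aspect ratio so the six faces together cover the full sphere around the light.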
Then, you’d bind your main render target and draw your scene again using
a pixel shader that samples the cubemap to determine how much the pixel
is shadowed, and incorporates that into the usual lighting/shading
calculations.
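The shadow test itself boils down to a distance comparison: in Cg you’d sample the cube map with `texCUBE` along the light-to-fragment direction and compare the stored depth against the fragment’s actual distance to the light. Here is that comparison written as plain C++ (function names are illustrative, and the bias value is a typical starting point, not a prescribed constant):

```cpp
#include <cassert>
#include <cmath>

// Distance from the light to the surface point being shaded.
inline float lightDistance(float lx, float ly, float lz,
                           float px, float py, float pz) {
    float dx = px - lx, dy = py - ly, dz = pz - lz;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Shadow test as the pixel shader would perform it: storedDepth is the
// value the cube map returns when sampled along the light-to-fragment
// direction. A small bias avoids self-shadowing ("shadow acne").
inline bool inShadow(float fragDist, float storedDepth, float bias = 0.05f) {
    return fragDist - bias > storedDepth;
}
```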
So you can render the cube map on the GPU, but you do have to do some
extra work on the CPU to set everything up. The cubemap cannot be
generated and used entirely by shaders without any CPU involvement at
all.
I thought it was as simple as binding each of the 6 side textures using
GL_TEXTURE_CUBE_MAP_(POSITIVE/NEGATIVE)_(XYZ) and then enabling
cube-map texturing.
If you already have a cube map image created that you’re loading from a
file or something, then it’s just like binding any other texture. The OP
was asking about dynamically rendering your own cubemap, not loading a
pre-made one.
Thanks for the reply Reed..
That answers my question! :)
Do the cube maps have to be generated from the point of view of a given
light, whether done on CPU or GPU, or can a generic cube map be made and
then applied and transformed to work for a light at any position?
Could you render a “generic” image of your game world and then transform
it to work for any camera position? No, because the world looks
different from different places. :) Shadow maps are just views of the
game world from a camera placed at the same position as the light
source. If you want your shadows to look correct at all, you must render
them from the correct position.
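Concretely, the light’s position gets baked into each face’s view transform, so a cube map rendered for one light position can’t be reused for another. A minimal look-at sketch in C++ (column-major, camera looking down −Z; names are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major 4x4 view matrix placing the camera at `eye`, looking
// along `fwd` with up vector `up`. For cube-map faces, fwd and up are
// unit length and perpendicular, which this sketch assumes.
std::array<float, 16> lookAt(const float eye[3], const float fwd[3],
                             const float up[3]) {
    // Right-handed basis: right = fwd x up; the camera looks down -Z.
    float r[3] = { fwd[1]*up[2] - fwd[2]*up[1],
                   fwd[2]*up[0] - fwd[0]*up[2],
                   fwd[0]*up[1] - fwd[1]*up[0] };
    return {
        r[0], up[0], -fwd[0], 0,
        r[1], up[1], -fwd[1], 0,
        r[2], up[2], -fwd[2], 0,
        // Translation: the (light) position rotated into view space.
        -(r[0]*eye[0] + r[1]*eye[1] + r[2]*eye[2]),
        -(up[0]*eye[0] + up[1]*eye[1] + up[2]*eye[2]),
         (fwd[0]*eye[0] + fwd[1]*eye[1] + fwd[2]*eye[2]),
        1 };
}
```

Placing `eye` at the light position maps the light to the view-space origin; move the light and every face’s matrix (and hence every rendered depth) changes with it.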
Right, got it.
Thanks again Reed!