Render world positions to a texture

sveinn 101 Jun 06, 2009 at 09:40

I am trying to implement an image-space technique (caustics mapping) in OpenGL and Cg, and I am having problems with one of its steps.

What I haven’t figured out is how to store world positions (a 3D vector where each coordinate is an integer in a range such as [-300, 300]) in a texture instead of color values. Specifically, I would output the position of each fragment from the fragment shader instead of its color.

It seems that the storage format of textures (according to the glTexImage2D documentation) is always clamped to the range [0, 1].

Do I perhaps need some fancy extension to implement this?

Thanks for your time,
-Sveinn

4 Replies


Reedbeta 167 Jun 06, 2009 at 17:01

There are two ways you could go about this:

First, if your positions are all within a certain range such as [-300, 300], you could map that range linearly into the [0, 1] interval. Then when you read the texture later on in another shader, execute the opposite mapping to get back the original values.
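For example, a minimal sketch in Cg (the names worldPos, positionMap, and texCoord are hypothetical; substitute whatever your shaders already use):

    // In the fragment shader that writes the position map:
    // map each coordinate from [-300, 300] into [0, 1].
    float3 encoded = (worldPos + 300.0) / 600.0;
    return float4(encoded, 1.0);

    // In the shader that reads it back, invert the mapping:
    float3 worldPos = tex2D(positionMap, texCoord).rgb * 600.0 - 300.0;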

However, this has the problem that with a standard RGBA texture you still only get 8 bits of precision per component, and for representing positions that is often not enough: over a 600-unit range, 8 bits gives steps of about 600/255 ≈ 2.4 units, so you’ll get ugly artifacts due to rounding errors. To get around this, it is possible to use extensions to get a floating-point texture format, which lets you store either a 16-bit half-precision or a 32-bit single-precision floating-point number per component. Besides having more precision, these are also not clamped to [0, 1], so you can just store the positions directly.

Floating-point textures are provided by ARB_texture_float. If you want to render to such a texture you’ll also need ARB_color_buffer_float and perhaps ARB_half_float_pixel. (BTW, all these are native under OpenGL 3.0, so if you have that you don’t need to mess around with extensions at all.)
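Here’s a rough sketch of the setup in C (assuming EXT_framebuffer_object for the render-to-texture part, since you’ll need some way to render into the texture; width and height are whatever your caustics map uses):

    /* Create a 32-bit float RGBA texture (ARB_texture_float). */
    GLuint posTex;
    glGenTextures(1, &posTex);
    glBindTexture(GL_TEXTURE_2D, posTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    /* Attach it as the color buffer of an FBO so the fragment shader
       renders positions straight into it. */
    GLuint fbo;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, posTex, 0);

    /* Disable fragment color clamping so values outside [0, 1]
       survive (ARB_color_buffer_float). */
    glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);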

TheNut 179 Jun 06, 2009 at 19:37

Extensions? What’s wrong with just specifying GL_FLOAT in the glTexImage2D function? Works fine for me.

Reedbeta 167 Jun 07, 2009 at 01:18

If you’re talking about passing GL_FLOAT for the ‘type’ (8th) parameter, that just describes the input data, not the internal format. If the internal format is 8-bit then it will clamp all the pixels to [0, 1], round everything off to 8 bits, etc. as the image is uploaded.

To get a floating-point texture you need to specify GL_RGBA32F_ARB or GL_RGBA16F_ARB (or similar) for the ‘internalFormat’ (3rd) parameter. These enumerants are provided by ARB_texture_float.
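To make the difference concrete (data is your source image, or NULL if you’re only going to render into the texture):

    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA32F_ARB,  /* 3rd: internalFormat - how GL stores it */
                 width, height, 0,
                 GL_RGBA,         /* 7th: format - layout of the input data */
                 GL_FLOAT,        /* 8th: type - component type of the input */
                 data);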

TheNut 179 Jun 07, 2009 at 13:37

Hmm, you’re right. I just did a quick texture copy test and it did truncate my input. I always thought that with the generic GL_RGB / GL_RGBA the driver would just choose the internal format based on your format and type parameters, but it appears it just defaults to GL_RGB8. A little ugly, but defining the internal formats using the ARB extensions you mentioned does seem to do the trick.

So there you have it, sveinn: look up GL_RGB32F_ARB in glext.h and you’ll also see a bunch of other floating-point formats to use should you need them.