Per pixel lighting without register combiners

Vifani 101 Sep 11, 2003 at 16:20

Hi guys,

does anyone have ideas about how to implement a per-pixel lighting model in OpenGL without using nVidia register combiners?

20 Replies

anubis 101 Sep 11, 2003 at 18:11

if you don’t want to use register combiners, you won’t want fragment programs either, right? but first of all… why do you need to do this?

Vifani 101 Sep 11, 2003 at 22:22

I have an ATi Radeon 9800 Pro graphics card, so I can’t use nVidia register combiners.

I want to build a good lighting system for my game engine, and I have decided to use per-pixel lighting instead of per-vertex lighting and lightmaps, which I think are obsolete.

dk 158 Sep 11, 2003 at 22:48

Well, you don’t have to use the nVidia register combiners. You can simply use normal vertex and fragment shaders.

If you are implementing Phong lighting, check this link.

Hope it helps.

anubis 101 Sep 12, 2003 at 00:22

i have several fragment shader implementations ready for different lighting models. i was going to use them in my demo, which i won’t be able to complete. so if you want to see them feel free to contact me…

davepermen 101 Sep 12, 2003 at 09:47

just use ARB_fragment_program. it’s great, and it’s supported on all new Radeons and GeForce FX cards.
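
a minimal ARB_fragment_program doing per-pixel diffuse (N·L) lighting might look roughly like this — an untested sketch; passing the normal and light vector through texcoords 0 and 1 is just one possible convention:

```
!!ARBfp1.0
# assumed inputs: fragment.texcoord[0] = interpolated surface normal,
#                 fragment.texcoord[1] = interpolated light vector
TEMP N, L, NdotL;
# renormalize the interpolated normal
DP3 N.w, fragment.texcoord[0], fragment.texcoord[0];
RSQ N.w, N.w;
MUL N.xyz, fragment.texcoord[0], N.w;
# renormalize the interpolated light vector
DP3 L.w, fragment.texcoord[1], fragment.texcoord[1];
RSQ L.w, L.w;
MUL L.xyz, fragment.texcoord[1], L.w;
# clamped diffuse term, then modulate the material colour
DP3_SAT NdotL, N, L;
MUL result.color, fragment.color, NdotL;
END
```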

anubis 101 Sep 12, 2003 at 21:32

even on gf4 i think

davepermen 101 Sep 13, 2003 at 23:30

no. a gf4 cannot run it.

reason one: it has no floating-point math, only 9-bit (or 10-bit) fixed-point
reason two: it cannot execute that many instructions
reason three: it cannot do programmable texture fetches, only configurable predefined modes, and that’s it

but the main reason: it’s not a dx9 card, and ARB_fragment_program == ps2.0

anubis 101 Sep 13, 2003 at 23:39

oi mate, my per pixel lighting code just ran on a gf4 using the ARB_fragment_program extension. i saw it and for once i believe my eyes

baldurk 101 Sep 14, 2003 at 14:12

from my glxinfo output:

OpenGL renderer string: GeForce4 Ti 4600/AGP/SSE/3DNOW!
OpenGL version string: 1.4.0 NVIDIA 43.49
OpenGL extensions:
  GL_ARB_depth_texture, GL_ARB_imaging, GL_ARB_multisample, 
  GL_ARB_multitexture, GL_ARB_point_parameters, GL_ARB_shadow, 
  GL_ARB_texture_border_clamp, GL_ARB_texture_compression, 
  GL_ARB_texture_cube_map, GL_ARB_texture_env_add, 
  GL_ARB_texture_env_combine, GL_ARB_texture_env_dot3, 
  GL_ARB_texture_mirrored_repeat, GL_ARB_transpose_matrix, 
  GL_ARB_vertex_buffer_object, GL_ARB_vertex_program, GL_ARB_window_pos, 
  GL_S3_s3tc, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, 
  GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array, 
  GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays, 
  GL_EXT_packed_pixels, GL_EXT_paletted_texture, GL_EXT_point_parameters, 
  GL_EXT_rescale_normal, GL_EXT_secondary_color, 
  GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, 
  GL_EXT_shared_texture_palette, GL_EXT_stencil_wrap, GL_EXT_texture3D, 
  GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map, 
  GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, 
  GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, 
  GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod, 
  GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_vertex_array, 
  GL_HP_occlusion_test, GL_IBM_texture_mirrored_repeat, 
  GL_KTX_buffer_region, GL_NV_blend_square, GL_NV_copy_depth_to_color, 
  GL_NV_depth_clamp, GL_NV_fence, GL_NV_fog_distance, 
  GL_NV_light_max_exponent, GL_NV_multisample_filter_hint, 
  GL_NV_occlusion_query, GL_NV_packed_depth_stencil, GL_NV_pixel_data_range, 
  GL_NV_point_sprite, GL_NV_register_combiners, GL_NV_register_combiners2, 
  GL_NV_texgen_reflection, GL_NV_texture_compression_vtc, 
  GL_NV_texture_env_combine4, GL_NV_texture_rectangle, GL_NV_texture_shader, 
  GL_NV_texture_shader2, GL_NV_texture_shader3, GL_NV_vertex_array_range, 
  GL_NV_vertex_array_range2, GL_NV_vertex_program, GL_NV_vertex_program1_1, 
  GL_NVX_ycrcb, GL_SGIS_generate_mipmap, GL_SGIS_multitexture, 
  GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow

So anubis, you must be mistaken. Unless you were running in software mode.

anubis 101 Sep 14, 2003 at 17:03

i swear to god i saw it running on a gf4 smoothly…

Vifani 101 Sep 14, 2003 at 22:56

Hi guys,

I have some news for you. I used Cg for a simple per-pixel lighting system (specular + diffuse) and it works very well.
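
A simplified sketch of that kind of Cg fragment shader (the parameter names and semantics here are illustrative, not my exact code):

```
// per-pixel diffuse + specular (Blinn-Phong), Cg fragment shader sketch
float4 main(float3 normal   : TEXCOORD0,   // interpolated surface normal
            float3 lightVec : TEXCOORD1,   // fragment-to-light vector
            float3 viewVec  : TEXCOORD2,   // fragment-to-eye vector
            uniform float4 diffuseColor,
            uniform float4 specularColor,
            uniform float  shininess) : COLOR
{
    float3 N = normalize(normal);
    float3 L = normalize(lightVec);
    float3 H = normalize(L + normalize(viewVec));  // half vector

    float NdotL = max(dot(N, L), 0);
    float NdotH = max(dot(N, H), 0);

    float4 diffuse  = diffuseColor  * NdotL;
    float4 specular = specularColor * pow(NdotH, shininess);

    return diffuse + specular;
}
```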

anubis 101 Sep 14, 2003 at 23:18

i have tried using cg but dropped it again…
you should rather rely on fragment/vertex programs. you could use the cg compiler to generate those and hand-optimize them. please don’t use the cg runtime. also, writing the assembler programs isn’t much harder once you are used to it, believe me. i won’t touch an hlsl again until glslang is out.

Noor 101 Sep 15, 2003 at 02:13

Why? What’s wrong with Cg?

anubis 101 Sep 15, 2003 at 03:05
  • their runtime library sux ( imo ). it’s way overcomplicated for no good reason
  • it’s still buggy
  • looking at the glslang spec you will notice that its design is much clearer
  • overall cg is just a big PR campaign to promote nvidia’s line of geforce cards

Noor 101 Sep 15, 2003 at 03:28

But Cg supports both OpenGL and DirectX.
glslang is only for OpenGL.

ain’t I right?

anubis 101 Sep 15, 2003 at 03:34

i was being ignorant about DirectX since i don’t use it… you are right, if you want to write shaders for both you could make use of cg… but still, i would use it just as a compiler and run the generated shaders after going through them by hand
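
e.g. something along these lines (from memory — check the cgc docs for the exact flags), compiling a Cg shader offline to ARB_fragment_program assembly you can then hand-edit and load yourself:

```
cgc -profile arbfp1 -o lighting.fp lighting.cg
```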

Vifani 101 Sep 15, 2003 at 07:51

I think Cg is very easy to use. I implemented per-pixel lighting (diffuse + specular, without bump mapping) in only 3 hours, so I haven’t found it overcomplicated.

Where can I find the glslang spec?

baldurk 101 Sep 15, 2003 at 17:34

Unless it’s changed since I last checked, the Cg toolkit for Linux is terrible. The documentation also seemed quite poor, although to be fair I didn’t look into it much after trying the toolkit. Assuming this is still true, it negates the quality of OpenGL I like (second) best - portability. GLSlang will be true OpenGL and won’t require any additional libraries. I never like having too many libraries hanging around.

CyraX 101 Oct 04, 2003 at 07:22

baldurk, you are only right about part of the complaint regarding the Cg toolkit for GNU/Linux. The documentation is bad with respect to getting Cg started on GNU/Linux; the rest is just a matter of reading through the standard Cg Toolkit pdf.

However, you should understand one thing: Cg fragment shaders (pixel shaders) are not supported by ATI. So that puts you pretty much in a fix.
Is Cg elegant? Yes. Does it help you code faster? Not really.
GLSlang - there’s a link to it on the right.

davepermen 101 Oct 05, 2003 at 11:04

cg is dead. nvidia dropped large parts of its support and will only continue it as a small side tool. they can’t compete with HLSL in dx, so they never had a chance to establish it there. and they have no need to continue supporting it in gl either, since glslang will be there.

as i said right from the beginning: it was dead on arrival. no one uses cg. everyone nvidia claims uses cg actually uses hlsl. it was, and still is, just marketing crap. as was 90% of nvidia’s work during the last year anyway. the remaining 10% was about cheating.. :D