Does someone have some ideas on how to realize a per-pixel lighting model in OpenGL without using the nVidia register combiners?
if you don’t want to use register combiners, you’re not up for fragment programs either, right? but first of all… why do you need to do this?
I have an ATi Radeon 9800 Pro graphics card, so I can’t use the nVidia register combiners.
I want to build a good lighting system for my game engine, and I have decided to use per-pixel lighting instead of per-vertex lighting and lightmaps, which I think are obsolete.
Well, you don’t have to use the nVidia register combiners. You can
simply use normal vertex and fragment shaders.
If you are implementing Phong lighting, check this
Hope it helps.
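(For reference, here is a rough sketch of what per-pixel diffuse + specular can look like as a plain ARB_fragment_program. This is not taken from the link above, and the bindings are assumptions: it expects a vertex program to pass the interpolated normal, light vector and half vector in texcoord[0..2], and the material colours / shininess in program.local[0..2].)

!!ARBfp1.0
# assumed inputs: texcoord[0] = N, texcoord[1] = L, texcoord[2] = H (all interpolated)
# assumed params: local[0] = diffuse colour, local[1] = specular colour, local[2].x = shininess
PARAM diffuseCol  = program.local[0];
PARAM specularCol = program.local[1];
PARAM shininess   = program.local[2];
TEMP N, L, H, dots, lit, col;
DP3 N.w, fragment.texcoord[0], fragment.texcoord[0];   # renormalize N
RSQ N.w, N.w;
MUL N.xyz, fragment.texcoord[0], N.w;
DP3 L.w, fragment.texcoord[1], fragment.texcoord[1];   # renormalize L
RSQ L.w, L.w;
MUL L.xyz, fragment.texcoord[1], L.w;
DP3 H.w, fragment.texcoord[2], fragment.texcoord[2];   # renormalize H
RSQ H.w, H.w;
MUL H.xyz, fragment.texcoord[2], H.w;
DP3 dots.x, N, L;                  # N.L
DP3 dots.y, N, H;                  # N.H
MOV dots.w, shininess.x;
LIT lit, dots;                     # lit.y = max(N.L,0), lit.z = max(N.H,0)^shininess (0 if N.L <= 0)
MUL col.xyz, diffuseCol, lit.y;
MAD col.xyz, specularCol, lit.z, col;
MOV col.w, diffuseCol.w;
MOV result.color, col;
END

You would pair it with a vertex program (or whatever setup you use) that actually computes L and H per vertex.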
i have several fragment shader implementations ready for different lighting models. i was going to use them in my demo, which i won’t be able to complete. so if you want to see them, feel free to contact me.
just use ARB_fragment_program. it’s great, and it’s supported on all new Radeons and GeForce FX cards.
even on gf4 i think
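(In case it helps, a minimal, untested sketch of the application side: creating, loading and enabling an ARB fragment program. It assumes the ARB entry points are available, e.g. by defining GL_GLEXT_PROTOTYPES on Linux or fetching them with wglGetProcAddress/glXGetProcAddressARB, and that fpSource holds the !!ARBfp1.0 text.)

#define GL_GLEXT_PROTOTYPES          /* or resolve the ARB functions yourself */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* hypothetical helper: returns the program id, or 0 on a compile error */
static GLuint load_fragment_program(const char *fpSource)
{
    GLuint prog;
    GLint errPos;

    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fpSource), fpSource);

    /* -1 means no error; otherwise it is the offset of the first error in the source */
    glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);
    if (errPos != -1) {
        fprintf(stderr, "fragment program error at %d: %s\n", errPos,
                (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        return 0;
    }

    glEnable(GL_FRAGMENT_PROGRAM_ARB);   /* the program now replaces the fixed-function fragment stage */
    return prog;
}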
no. gf4 cannot run it.
reason one: it does not have floating-point math, only 9-bit fixed-point
reason two: it cannot execute that many instructions
reason three: it cannot do programmable texture fetches, only configurable predefined settings, and that’s it
but the main reason: it’s not a dx9 card, and ARB_fragment_program == dx9-level (ps 2.0) functionality.
oi mate, my per-pixel lighting code just ran on a gf4 using the ARB_fragment_program extension. i saw it, and for once i believe my eyes.
from my glxinfo output:
OpenGL renderer string: GeForce4 Ti 4600/AGP/SSE/3DNOW!
OpenGL version string: 1.4.0 NVIDIA 43.49
GL_ARB_depth_texture, GL_ARB_imaging, GL_ARB_multisample,
GL_ARB_multitexture, GL_ARB_point_parameters, GL_ARB_shadow,
GL_ARB_vertex_buffer_object, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_S3_s3tc, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array,
GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays,
GL_EXT_packed_pixels, GL_EXT_paletted_texture, GL_EXT_point_parameters,
GL_EXT_shared_texture_palette, GL_EXT_stencil_wrap, GL_EXT_texture3D,
GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_vertex_array,
GL_KTX_buffer_region, GL_NV_blend_square, GL_NV_copy_depth_to_color,
GL_NV_depth_clamp, GL_NV_fence, GL_NV_fog_distance,
GL_NV_occlusion_query, GL_NV_packed_depth_stencil, GL_NV_pixel_data_range,
GL_NV_point_sprite, GL_NV_register_combiners, GL_NV_register_combiners2,
GL_NV_texture_env_combine4, GL_NV_texture_rectangle, GL_NV_texture_shader,
GL_NV_texture_shader2, GL_NV_texture_shader3, GL_NV_vertex_array_range,
GL_NV_vertex_array_range2, GL_NV_vertex_program, GL_NV_vertex_program1_1,
GL_NVX_ycrcb, GL_SGIS_generate_mipmap, GL_SGIS_multitexture,
GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow
So anubis, you must be mistaken. Unless you were running in software.
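(Easier to check from code than by grepping glxinfo; a quick sketch of the usual extension-string test. It is crude: a robust version should match whole tokens, not substrings, and it needs a current GL context.)

#include <string.h>
#include <GL/gl.h>

int has_arb_fragment_program(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_ARB_fragment_program") != NULL;
}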
i swear to god i saw it running on a gf4 smoothly…
I have some news for you. I have used Cg for a simple per-pixel lighting system (specular + diffuse) and it works very well.
i have tried using cg but dropped it again…
you should rather rely on fragment/vertex programs. you could use the cg compiler to generate those and hand-optimize them. please don’t use the cg runtime. also, writing the assembler programs isn’t much harder once you are used to it, believe me. i won’t touch a hlsl again until glslang is out.
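(For what it’s worth, the offline workflow meant here looks roughly like this; the file names are made up and the flags are from memory, so double-check against the cgc docs:

cgc -profile arbfp1 -o lighting.fp lighting_fragment.cg
cgc -profile arbvp1 -o lighting.vp lighting_vertex.cg

lighting.fp then contains a readable !!ARBfp1.0 listing that you can hand-optimize and feed straight to glProgramStringARB, with no Cg runtime linked into the engine.)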
Why? What’s wrong with Cg?
But Cg supports both OpenGL and DirectX.
glslang is only for opengl.
ain’t I right?
i was being ignorant about DirectX since i don’t use it… you are right. if you want to write shaders for both, you could make use of cg… but still, i would use it just as a compiler and run the generated shaders after going through them by hand.
I think Cg is very easy to use. I implemented per-pixel lighting (diffuse + specular, without bump mapping) in only 3 hours, so I haven’t found it overcomplicated.
Where can I find the glslang spec?
Unless it’s changed since I checked, the Cg toolkit for Linux is terrible. The documentation also seemed quite poor, although to be fair I didn’t really look into it that much after trying the toolkit. Assuming this is still true, it negates the quality of OpenGL I like (second) best - portability. GLSlang will be true OpenGL and won’t require any additional libraries. I never like having too many libraries hanging around.
baldurk, you are right about only part of the complaint regarding the Cg toolkit for GNU/Linux. The documentation is bad with respect to getting Cg started on GNU/Linux; the rest of it is just a matter of reading through the standard Cg Toolkit PDF.
However, you should understand one thing - Cg fragment shaders (pixel shaders) are not supported by ATI. So that puts you pretty much in a fix.
Cg is elegant, yes. Does it help you code faster? Not really.
GLSlang - you can find it on opengl.org, it’s a link on the right.
cg is dead. nvidia dropped large parts of its support for it and will only continue it as a small side tool. they can’t compete with HLSL in dx, so they never had a chance to establish it there. they have no need to continue support in gl either, as glslang will be there.
as i said right from the beginning: it was stillborn. no one uses cg. everyone nvidia claims is using cg is actually using hlsl. it was just, and still is just, marketing crap. like 90% of nvidia’s work during the last year anyway. the remaining 10% was about cheating.. :D