0
101 Oct 30, 2004 at 18:07

Hi,

We’ve seen some insane leaps of realism in 3D games in the past few years. IMO the biggest recent breakthrough for 3D hardware is programmable shading, and for 3D engines it is real-time lighting/shadows.

We’ve seen Doom3 and the mind-blowing Unreal3 Engine. Normal, Displacement & Parallax mapping, soft-shadows, spherical harmonics, HDR & what not?

What do you guys think is the next big thing in 3D hardware & engines?

Apart from more & more GPU horsepower and longer & more complex shaders, what kind of hardware capabilities do you guys wish for?

If you were to design a 3D engine that gives Unreal3 a run for its money, what techniques would you incorporate (assuming you got hardware powerful enough)?

Peace…. :)
XOR-cist

#### 154 Replies

0
139 Oct 30, 2004 at 19:42

The ability to store arbitrary data on the graphics card and read/write it in shaders using some kind of generalized data stream. This would open up possibilities for GPU-dynamic simulations of water waves, particle systems, other things that can currently only be done in a limited and hacky way, and only on cards that have vertex shader texture lookups.
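Today’s closest workaround for this is the ping-pong render-to-texture trick. Here is a CPU-side toy sketch of it in Python (all names are mine, purely illustrative): two buffers stand in for textures, and the per-element function stands in for a shader pass that reads one buffer and writes the other.

```python
# Sketch of the ping-pong "render-to-texture" trick GPGPU hacks use today:
# state lives in two buffers; each "pass" reads one and writes the other.
# Names here (step, buf_a, buf_b) are illustrative, not any real API.

def step(read_buf, write_buf, dt=0.1, gravity=-9.8):
    """One simulation 'pass': acts like a shader run over every texel."""
    for i, (pos, vel) in enumerate(read_buf):
        vel = vel + gravity * dt          # integrate velocity
        pos = pos + vel * dt              # integrate position
        if pos < 0.0:                     # bounce off the floor
            pos, vel = -pos, -vel * 0.8
        write_buf[i] = (pos, vel)

buf_a = [(10.0, 0.0)] * 4     # (height, velocity) per particle
buf_b = [(0.0, 0.0)] * 4
for frame in range(100):       # swap the buffers every frame
    step(buf_a, buf_b)
    buf_a, buf_b = buf_b, buf_a

assert all(p >= 0.0 for p, v in buf_a)   # every particle stays above the floor
```

A generalized read/write data stream would make this kind of simulation a first-class GPU feature instead of a texture hack.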

0
101 Oct 31, 2004 at 02:01

Dedicated ray tracing hardware.

0
101 Oct 31, 2004 at 08:38

@XORcist

We’ve seen Doom3 and the mind-blowing Unreal3 Engine. Normal, Displacement & Parallax mapping, soft-shadows, spherical harmonics, HDR & what not?

AFAIK Unreal 3 isn’t due to be out for another 2 years because it simply won’t run acceptably on current consumer hardware, so it’s not exactly a good example to take when looking at “recent” graphics developments. That’s what you should be looking at for future developments :).

0
101 Oct 31, 2004 at 14:21

AFAIK Unreal 3 isn’t due to be out for another 2 years because it simply won’t run acceptably on current consumer hardware, so it’s not exactly a good example to take when looking at “recent” graphics developments.

To Baldurk: It is true that games using Unreal3 aren’t going to be around for the next 2 years, but we’ve seen that engine in action. NV40 can run it, but not exactly in real time. We all knew about the Doom3 engine a long time back. The tech had been there for years, but the game came out just a few months ago.

What I wanted to know is the ultimate next-gen thing in hardware & software. Maybe we’ll be re-writing our engines for hardware that does not do rasterization at all, like the real-time ray tracer NomadRock & Anubis were talking about: http://www.saarcor.de

(btw, I am getting into a project to develop an FPS using the Unreal3 engine. We’re gonna get that baby in our hands pretty soon. Boy, I can’t wait to see that beast in action & code for it. :rolleyes: )

0
101 Nov 01, 2004 at 03:48

You don’t need to wait for Unreal 3 to see exactly the same technology used in games. The technology they have is trivial to implement; what’s not trivial is implementing it so that it runs fast, which is essential in games.

If you ask me now, I would say that the next big leap (similar to normal mapping) is this time on the geometry side, to increase the geometric complexity in games. Then again, if you ask me tomorrow, I may have already changed my mind :tongue:

Cheers, Altair

0
101 Nov 01, 2004 at 07:45

Maybe the ability to generate *extra* vertices from inside the programmable pipeline is going to come next?

0
101 Nov 02, 2004 at 08:17

Maybe the ability to generate *extra* vertices from inside the programmable pipeline is going to come next?

Yes, dynamic tessellation is definitely on its way. Very useful stuff :-)

0
101 Nov 02, 2004 at 10:47

In my opinion, the next big thing in computer graphics will be real-time, interactive global illumination. Sure, Unreal Engine 3 has spherical harmonics for global illumination, but it’s rather limited in what it can simulate, and it is only real time for static objects.

So in my opinion, real-time, interactive global illumination will be the next ‘big’ thing in computer graphics; or at least I hope so :)

0
101 Nov 02, 2004 at 11:52

If we are going as far as realtime GI, then I would say the next step is total photon simulation. A scene is “rendered” by a server that constantly bounces photons around, and each client need only record the photons that hit their “film”. In this way we get very scalable network games. This of course would require sending each frame over the network, so the client would not need much of a rendering card at all, but it would scale well with the number of viewers.

Also, quit using polygonal rendering entirely, and specify everything in molecules. This of course requires entirely different modeling techniques, and most likely they would be a combination of evolutionary techniques and realistic techniques. E.g. if you wanted a monster, you might start with your monster catalog and breed, selecting for the large horns and lighter green skin tone you are looking for. Or one could start with a carving tool to literally shave away material from a block of simulated stone or whatnot.

Of course this is not the _next step_ but it will forever be my rendering goal.

0
101 Nov 02, 2004 at 16:12

NomadRock, that sounds dangerous. What if they escape? huh? huh? What then?

0
101 Nov 02, 2004 at 17:47

one of the next big things will be the moment when people officially realize that no matter how many shaders you put on something, there is still no way to have a nice, dynamic, realistic scene all in all. the fact is, rasterizers simply don’t scale in any form well to global illumination (which would solve all the problems we have today..).

this will be the moment where some raytracing hardware has to prove its power. we’ll see when that moment comes, and if the hw will be ready.

i definitely hope so.

0
101 Nov 03, 2004 at 09:18

Ray tracing doesn’t help much in terms of global illumination as opposed to rasterisation. You still have the same limitations; it’s just ‘easier’ in ray tracing, as you have most of the information you need already processed on a per-fragment level.

Oh, and it is currently possible to do real time photon mapping (note I did not say interactive), however the lighting may take a few seconds to catch up to the current state of everything… eg: shadows lagging (this can be a matter of seconds!) behind the object… it all just depends on the photon map density… but anyway, that’s another story for another day :)

P.S. IMO, the only advantage of ray tracing is pixel-perfect rendering of mathematically representable objects (eg: spheres, quadrics, super quadrics [see dev shot], etc…)

0
101 Nov 03, 2004 at 17:01

uhm, every full gi solution uses raytracing as a backend, photon mapping as well. (radiosity is no full solution).

and that hw realtime photonmapper is a full raytracer in shaders, nothing else. you don’t know much about realistic image synthesis and gi, it seems….

with rasterizers you have one gi limitation: it’s virtually impossible (unless you emulate raytracing logic).

with raytracing you have no limitation.

0
101 Nov 03, 2004 at 22:18

Games will not see a major benefit in the next generation of graphics technology.

Sure, there will be advancements that games can take advantage of. Most likely these will be dynamic tessellation and the unification of pixel and vertex shaders.

The real advancement for graphics technology will come in a different form this time around. We will see generalization of the GPU to the point of being used for generic stream processing, GPU multitasking/threading, and a major change in the way applications interface with this hardware.

Microsoft has discussed Direct3D’s future movement towards the “Windows Graphics Foundation”, which will mean the GPU becomes a common application resource used by multiple applications simultaneously. Widespread presence of PCI Express will mean new possibilities in real time rendering, but it will mainly enable superior usage of available RAM for graphical user interfaces.

Microsoft needs to catch up to Apple and their Quartz rendering engine. Microsoft wants to do it better than them and will use DirectX development to drive ATI and nVidia development to enable superior looking desktop applications.

0
101 Nov 03, 2004 at 22:27

the most interesting thing to me, in some sort, will be the ability to multitask on the gpu. that means different rendering “threads”, and not only one big render queue like today..

so you can render at, say, 25fps onto a tv-texture, but still render the game at the fastest possible (rest-)speed. etc..

we will see that. automatic scheduling and dispatching of tasks over the different general purpose pipelines. will be fun :D

0
101 Nov 05, 2004 at 01:10

davepermen, it seems you’ve misunderstood what I said.

What I meant is that in terms of rendering, ray tracing does not give you much of an advantage when dealing with global illumination, as opposed to polygon rasterisation. And I’m well aware that ‘most’ full gi implementations rely on ray tracing.

0
101 Nov 06, 2004 at 20:21

I don’t think there’ll be dedicated raytracing HW that replaces current rasterizing HW; rather, existing rasterizing HW will simply evolve to a generalized point which allows natural computational assistance, for instance for full scene raytracing (we have seen raytracing done in special cases with current rasterizing HW already). There was also some paper about using the GPU for photon mapping. We now have floating point textures, longer shaders, dynamic branching, etc., which makes the rasterizing HW much more general purpose than what it used to be in sm1.1 days.

0
101 Nov 07, 2004 at 10:10

Forget raytracing!

Real-time REYES rasterization!!! :-D Probably easier to accomplish.

0
101 Nov 07, 2004 at 12:51

Reading this, I started thinking of how making use of advanced graphics could help applications be more useful and productive, rather than just making hardware upgrades faster.

I suppose with GPU utilisation we can have true zooming interfaces and more 3D hints, like shadows, which help usability. Any other ideas? I for one want a particle engine in my word processor, so when I hit the delete key the whole freaking world knows what happened to that letter :P

0
101 Nov 07, 2004 at 13:59

@Altair

I don’t think there’ll be dedicated raytracing HW that replaces current rasterizing HW; rather, existing rasterizing HW will simply evolve to a generalized point which allows natural computational assistance, for instance for full scene raytracing (we have seen raytracing done in special cases with current rasterizing HW already). There was also some paper about using the GPU for photon mapping. We now have floating point textures, longer shaders, dynamic branching, etc., which makes the rasterizing HW much more general purpose than what it used to be in sm1.1 days.

those are just toy demos, with no way to scale to anything useful. this will be true for tons of years to come.

a gpu of today is theoretically already general enough to handle the job (but linked together in the wrong way). but it’s definitely not really efficient for the task, just as a cpu isn’t either.

dedicated hw, and i guess you’ve read www.saarcor.de, beats the performance of a gf3 with much less hw, much lower requirements, much lower bandwidth and all. scale this with the technology of today’s gf6 (and the hw scales well :D), and your general purpose hw will never be in question for raytracing. dedicated hw beats out everything, just like the still-dedicated rasterizing, which is in no way general, but static, optimized as hell, and constant.

0
101 Nov 07, 2004 at 14:09

@Smokey

davepermen, it seems you’ve misunderstood what I said.

What I meant is that in terms of rendering, ray tracing does not give you much of an advantage when dealing with global illumination, as opposed to polygon rasterisation. And I’m well aware that ‘most’ full gi implementations rely on ray tracing.

it doesn’t just give you MUCH. it GIVES YOU gi in the first place!

rasterizers don’t have any way to solve gi, except if you use the rasterizer to do ray-intersection tests and implement a full monte carlo raytracer with the rasterizer as the intersection unit. you can even use the hw for shading. but the full recursive logic doesn’t fit at all onto rasterizers; it is, by default, part of any raytracer.

rasterizers are PAINTERS. they PAINT ON AN IMAGE. raytracers are photographers: they CAPTURE A DESCRIBED SCENE.

only if we have both can we handle all sorts of things. but for capturing a realistic image, only raytracing solves the problem (or beam tracing, or some other tracer).

all full implementations rely on raytracing. in the logic. they use rasterizers, or mathematically exact solutions, to estimate intersections. that’s the only place, but that’s not the ‘raytracer’ thing.

rasterizer logic:
project what you want to paint onto your painter’s surface
start painting those projections

raytracer logic:
define some eye-place, camera-place, where you want to capture a scene from.
from there, evaluate backwards what you see.
whenever you see something, evaluate backwards what it sees, and how it got influenced by lighting, other objects, etc.

this recursion is the gi term. whether you have Whitted raytracing with only one specular and one reflected and one shadow ray, or some monte carlo with everything, it’s still raytracing. it evaluates (parts of) gi.

rasterizers can only really handle local illumination (and can reconstruct some simple things quite nicely, like expensive shadows (shadow volumes), or payable ones which are sorta buggy (shadow maps), and some faked reflections).

i know (and i am quite impressed sometimes) how much a rasterizer can do. but the logic used for it is always based on ‘how does a raytracer solve it’, and how we can map this in some form onto some multipass solution for rasterizers.
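that recursive “evaluate backwards” logic is easy to show in toy code. below is a minimal illustrative Python tracer (my own sketch, not from any real engine): a mirror sphere over a checkered floor, where hitting the mirror simply recurses, while a rasterizer would instead need an extra render pass to fake the reflection.

```python
import math

# toy scene: a mirror sphere hanging over a checkered floor plane (y = 0)
SKY, LIGHT_SQ, DARK_SQ = (0.5, 0.7, 1.0), (0.9, 0.9, 0.9), (0.1, 0.1, 0.1)
SPHERE_C, SPHERE_R = (0.0, 1.0, 3.0), 1.0

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(o, d):
    # solve |o + t*d - c|^2 = r^2 for the nearest t in front of the ray
    oc = tuple(o[i] - SPHERE_C[i] for i in range(3))
    b, c = dot(oc, d), dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def hit_floor(o, d):
    if d[1] >= -1e-9:
        return None
    t = -o[1] / d[1]
    return t if t > 1e-4 else None

def trace(o, d, depth=0):
    """raytracer logic: from the eye, evaluate backwards what you see."""
    ts, tf = hit_sphere(o, d), hit_floor(o, d)
    if ts is not None and (tf is None or ts < tf):
        if depth >= 3:
            return SKY                                    # recursion cap
        p = tuple(o[i] + ts * d[i] for i in range(3))
        n = tuple((p[i] - SPHERE_C[i]) / SPHERE_R for i in range(3))
        r = tuple(d[i] - 2.0 * dot(d, n) * n[i] for i in range(3))
        return trace(p, r, depth + 1)                     # mirror: just recurse
    if tf is not None:
        p = tuple(o[i] + tf * d[i] for i in range(3))
        return LIGHT_SQ if (math.floor(p[0]) + math.floor(p[2])) % 2 == 0 else DARK_SQ
    return SKY

eye = (0.0, 1.0, 0.0)
print(trace(eye, (0.0, 0.0, 1.0)))   # dead-center hit: one mirror bounce, then sky
```

the painter would instead loop over triangles, project them and fill pixels; here the loop is over rays, and the reflection costs one recursive call instead of a whole extra pass.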

0
101 Nov 07, 2004 at 15:37

Of course they are just toy demos, but so is the raytracing HW you were referring to. They just demonstrate to you that general purpose computation on the GPU is becoming more and more natural and that GPUs are becoming just number crunching processors instead of dedicated HW for something specific (rasterization or raytracing) - that’s quite obvious. At the end of the day you’ll end up doing a certain number of flops for raytracing, and I don’t see special raytracing HW beating a general purpose rasterizer GPU by an order of magnitude to justify the investment of transistors on that. For sure there is some gain in having specialized raytracing HW, but then raytracers also need general purpose HW for shading, because raytracing alone is a pretty weak measure of realism. Overall, raytracing doesn’t give much bang for the buck in order to increase realism as opposed to shading. Anyway, it sounds like you still have the classic image of specialized DX7 class HW in your head.

0
101 Nov 08, 2004 at 19:24

i have no time. i can just say no.

the hw is in the works, and works great in test projects, and is scalable to games. at 30mhz with one pipeline it’s 100s of times faster than any of those gf6 realtime raytracing demos.

those demos won’t scale to realtime raytracing games in CENTURIES on gf6-designed hw, no matter how much brute force the hw can get to scale faster.

it looks like you haven’t read or viewed anything about what’s going on on the hw raytracing side. but i’m unsure, as you normally are quite knowledgeable about what you say..

0
101 Nov 09, 2004 at 02:37

How is the HW scalable in games exactly? Do you have first hand experience about this? Because the few screens I saw were running some age old games at relatively low framerates. Maybe that’s considered fast for raytracing. Where did you pull that 100x from? I think they mentioned that the prototype was running @ 90MHz (64 pipes) and was only like 3x faster than the raytracing implementation on a Radeon 9700 Pro (that’s what they say in the paper at least), which isn’t even as generic nor as powerful as a GF6, and I don’t know how good the implementation on the 9700 is exactly. To be competitive they would also need generic shading HW, which eats resources from the chip, because plain raytracing HW with few shading features (phong + a few others they mention) can’t even remotely compare to the quality you can achieve with the latest GPUs. I guess it’s time to officially realize that no matter how much raytracing you put on something, there is still no way to have a nice dynamic realistic scene all in all ;)

0
101 Nov 10, 2004 at 18:17

gpu’s are incapable of scaling beyond those demos with 3 or 4 spheres at ALL. they can’t profit (yet) from the logarithmic scaling raytracers in general have, and can only trace a few spheres with everything before fucking up.

the saarcor hw prototype is much less capable than a gf6.. not even the hw features of a gf1 actually; it’s a cheap, dumb piece of hw without much bandwidth at all.

yet anyway, it beats a cpu-network renderfarm in speed. it can play q3 realtime, and similar stuff.. with much more..

but you HAVE to read up!

greetings,
davepermen

0
101 Nov 11, 2004 at 20:08

I actually did read the SaarCOR paper :) You mean logarithmic scaling to find the intersection with a triangle? With rasterizers that’s more like constant time. In case you mean output sensitive rendering, you don’t need raytracing for that. Hybrid Holdings (www.hybrid.fi) has developed an output sensitive visibility system called dPVS which you can use with rasterizers. On top of that you would just need some sort of geometry LOD system, maybe a GPU assisted one. Visibility isn’t only a graphics related thing in games either, so it wouldn’t make sense to implement it in the GPU and do the double work after in the CPU.

Sure, the built-in nature of being output sensitive and recursive is an attractive feature, but as I said, you don’t need raytracing to be output sensitive, and recursion (i.e. interreflections / refractions) doesn’t happen so much in natural environments that the rendering should be driven by that approach, nor so much that it couldn’t just be approximated. Just look around :) Of course when you need recursion it’s nice to have support for it around, I don’t deny that. For realtime GI it sounds like a potential candidate, but I’m sure with generic GPUs you could do the job just fine without the need for specialized raytracing HW. But that’s just speculation, because neither I nor you know how well the raytracing HW would work in games (Q3 running on the HW is more like a counterexample of the usefulness of the HW), so it’s better to leave it at that. Overall it’s a pretty absurd claim to say that it would scale well in games and what not, without actual practical experience with the technology or game development.

BUT, I don’t say it’ll never happen! Maybe when we get quantum computers with infinite processing power, memory and memory bandwidth, we can pick whatever solution we like :wink:

0
101 Nov 11, 2004 at 22:26

you don’t need quantum computers… saarcor proves exactly that.. and once implemented, with some hw support from a big company like nvidia or ati, they could possibly additionally implement the raytracing core and combine it with the shader pipelines.. in addition to the rasterizing core (just think of the marketing possibilities for nvidia or ati! (or anyone else!)).

the trick is, saarcor provides much higher performance than any other implementation today, while being by far the worst piece of hw.. (a some-mhz processor, with limited bandwidth and all, compared to 40-cpu systems of the newest generation, or a gf6 with half a gigahertz and gigabytes per second of bandwidth..)

it shows how much you can get by “just going hw”.

oh, and.. there is about no gi solution that does NOT work with raytracing, because, in the end, you want it per-pixel exact.. at least, that’s the final target. and there, raytracing fits best.

and if you look around, then you see that the big difference between nature and buildings, compared to what we have in doom3, or the unreal games, or whatever, is just correct shading. and for this, we need a lot of gi, softshadows, soft reflections, etc.. nearly every surface looks different depending on view. that’s ALL REFLECTIONS. diffused ones, of course, but still.. specularity is simply reflection of the lightsource. if you want to have specularity in a global way, you need glossy reflections. that’s the only correct way, and due to that, the only way for big lightsources, for light-objects… how to solve specularity for the sky? including “occlusion-mapping”? with raytracing, an easy job..

and of course, you could queue requests for special tracers, which could be used for the physics engine. and, depending on hw, the delay wouldn’t even be that big.. could be quite responsive if not over agp..

possibly the hw could even be sorta multithreaded, so the big scene can trace while the physics engine can have fast intersection queries.. would be an interesting solution, too.. direct feedback.

of course, you can get close with rasterizing. but if you want to evolve from what we have today, you need much faster hw. but if you instead step back and take very bad raytracing hw, you can see that it can already render doom3-style graphics quite fine (even with real displacement mapping). without any requirement for highend hw at all…

just imagine if you push such hw to highend like we have today! a gf6 with 16 pipelines is theoretically 90x faster than the saarcor chip (in terms of mhz*pipes).

just imagine such a scaling of what saarcor can do! this is enough for a high-quality version of the “instant gi” from www.openrt.de… without any doubt.

the world is full of the illumination details only gi can give. even caustics. but gpu vendors and gamedevs made everyone braindead, believing bumpmapped stenciled phong is the thing. and now we will slowly advance to softshadows.. and some faked displacement maps.. the whole thing goes on VERY SLOWLY. raytracers don’t have any issues with all that stuff even now, at their very early state..

it’s like a p4 with all its power getting caught up by its little pentium m, with much less effort, just because of the better design. there, features are equal. raytracers even support superior features, and much better scaling of those. (the only thing dependent on the objects is the intersection test, and it is logarithmic. all other effects, reflections, refractions, and all, are only dependent on screenres (x intersection time). to have, say, everything reflecting on rasterizers, you either have to rerender per pixel more or less (less == worse quality), or have to planar- and cubic-reflect more or less per object/surface… much more work.)

on rasterizers, many ordinary things are “effects” that are costly, or even impossible, to really solve. stencil shadows are hell on fillrate, shadowmaps are still… blurry.. after years..

gpu’s evolve very slowly for new stuff. raytracers would change that. only performance would really matter. features would all be there.
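the “logarithmic” part of that claim is just tree traversal. a tiny illustrative Python sketch (mine, nothing to do with the actual saarcor kd-tree) shows the visited node count growing with log2(n) rather than n:

```python
# miniature of the "logarithmic" intersection claim (illustrative only, not a
# real kd-tree): locating which of n primitives a query falls into via a
# balanced binary split visits ~log2(n) nodes, not n.

def locate(lo, hi, x, visits=0):
    """Binary split over primitive indices [lo, hi); returns (index, nodes visited)."""
    if hi - lo == 1:
        return lo, visits + 1
    mid = (lo + hi) // 2
    if x < mid:                      # descend into the child that contains x
        return locate(lo, mid, x, visits + 1)
    return locate(mid, hi, x, visits + 1)

for n in (1024, 1048576):            # 2**10 and 2**20 primitives
    _, steps = locate(0, n, n - 1)
    print(n, steps)                  # visits = log2(n) + 1
```

so going from a thousand to a million primitives roughly doubles the traversal work per ray instead of multiplying it by a thousand, which is the scaling argument made above.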

0
101 Nov 12, 2004 at 09:40

I have no idea if hw raytracers are the solution to all our problems, but it seems that the argument for using this hardware is that it is the only way we are going to get “realistic” images. I think this is a big misunderstanding: real time graphics have never been about getting realistic images in the physical sense, it has always been about creating images that looked real. This is something entirely different, and it means we will always cut any corner we can to get the image to look as real as we can at real time rates.

I personally think the current rasterisation technology will continue to dominate for quite a while; try putting a new gpu on the market that can’t run the DOOM3 engine !!!

There was a PhD student at DTU www.imm.dtu.dk who did research into using FPGAs as graphics chips. The idea was that using FPGAs you could program the chip to execute your program in hardware; I guess you could make it behave like a hw raytracer as well. But realistically this sort of technology won’t become available as consumer hardware within the next 10 years. (At least I don’t think so :) )

I personally think the next big things in game graphics will be HDR rendering and then participating media, or volumetric effects in general. Like the research done by http://nis-lab.is.s.u-tokyo.ac.jp/~nis/topics.html

my five

Rgs.

0
101 Nov 12, 2004 at 15:44

First of all, neither you nor the developers of the chip know what kind of problems will crop up if/when they actually start to scale the hardware. Sorry for my scepticism, but it’s very common for developers of a piece of technology to be overoptimistic about it, and I don’t see an exception here after reading the paper. It’s also a bad idea to bring non-trivial algorithms and data structures, such as the storage and traversal of the scene, over to the HW, because it’s totally data dependent what kind of structure fits the purpose best.

I didn’t say that you couldn’t use raytracing for GI, but that it’s extremely naive to do so, at least if you mean Monte Carlo raytracing or photon mapping. Also, like alf259 put it, often you don’t even want to do physically correct things, but want to fake things, for instance in lighting and materials, to emphasize certain things and drive a certain art direction. Just go see what they do in movies, because IMHO movies are a very good reference point for where to target with graphics in games. Overall the raytracing HW tries to solve rendering at way too high a level.

It’s just so funny to see how you are so hyped up about the piece of HW and seem to think of it as a silver bullet in rendering :rolleyes:

0
101 Nov 13, 2004 at 14:28

the trick is, in raytracing, using the real logic is often cheaper than finding fakes on current hw.

and if you say a gf6 is faster, then i ask you, is swshader faster? because you can compare swshader against software raytracing solutions, and there, raytracers scale better for advanced “effects”.

anything beyond shadows and faked, wrong reflections is beyond what a gpu can really handle in a fast, scalable and predictable way.

the last 2 are the most important to me. they allow big complex scenes to have the effects, not only small things, and they allow having it in realtime apps, a.k.a. games.

stencil shadows are definitely _not_ really predictable, and thus lead to big performance drops in doom3….

0
101 Nov 13, 2004 at 19:36

@davepermen

the trick is, in raytracing, using the real logic is often cheaper than finding fakes on current hw.

I don’t know what you have been smoking, but no. Let’s take normal mapped environment reflection as an example. It’s WAY faster to do it with fakey cubemaps than with raytracing, without much impact on the feel of realism. I don’t think you even realize how much processing power it would take to find intersection points with triangles for all those rays hitting the normal mapped surface. Think what it does to the coherency of the rays.
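For concreteness, here is roughly what the cheap cubemap path does per pixel, as an illustrative Python sketch (the names and the numbers are mine, not any real API): reflect the view vector about the perturbed normal and index a prefiltered environment by direction, with no scene traversal at all.

```python
# The "fakey cubemap" path, sketched (illustrative, not a real graphics API):
# per pixel, reflect the view vector about the normal-mapped normal and pick
# a cubemap face from the result. No scene traversal happens, which is why
# this is so much cheaper than tracing rays into the scene.

def reflect(d, n):
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return tuple(a - k * b for a, b in zip(d, n))

def cubemap_face(r):
    """Pick the cubemap face from the reflection vector's dominant axis."""
    axis = max(range(3), key=lambda i: abs(r[i]))
    return ("+x" if r[0] > 0 else "-x",
            "+y" if r[1] > 0 else "-y",
            "+z" if r[2] > 0 else "-z")[axis]

view = (0.0, 0.0, 1.0)            # looking down +z
flat_n = (0.0, 0.0, -1.0)         # unperturbed surface normal
bumpy_n = (0.6, 0.0, -0.8)        # normal-map-perturbed normal (unit length)

assert reflect(view, flat_n) == (0.0, 0.0, -1.0)   # bounced straight back
print(cubemap_face(reflect(view, bumpy_n)))        # perturbed ray reads a side face
```

The point Altair makes is visible here: the perturbed normal scatters neighbouring rays to different faces, which merely costs incoherent cubemap fetches on a GPU, but would cost incoherent traversals of the entire scene with a raytracer.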
@davepermen

and if you say a gf6 is faster, then i ask you, is swshader faster? because you can compare swshader against software raytracing solutions, and there, raytracers scale better for advanced “effects”.

gf6 is faster at doing what? swshader is faster than what? What do you mean by vague “advanced effects”? I haven’t used nor read about swshader, so I don’t know what it can and can’t do and what it’s built for.
@davepermen

anything beyond shadows and faked, wrong reflections is beyond what a gpu can really handle in a fast, scalable and predictable way.

What GPUs do well is what needs to be done well in 95% of the cases. Why would you ever want to use HW which does that 95% much worse and only the remaining 5% much better than GPUs?
@davepermen

the last 2 are the most important to me. they allow big complex scenes to have the effects, not only small things, and they allow having it in realtime apps, a.k.a. games.

Read about Hybrids dPVS in my previous post and rethink big complex scenes with GPUs.
@davepermen

stencil shadows are definitely _not_ really predictable, and thus lead to big performance drops in doom3….

It’s the vast majority of game developers who actually don’t believe that stencil shadows are the shadowing technique of the future (me included). Shadow maps are a much more attractive solution for the purpose, but they have problems as well.

0
101 Nov 14, 2004 at 13:21

you don’t know swshader but want to talk in here.. i think nick will be unhappy.. :D

swshader is nick’s software rasterizer. it is, compared to a gf6, very slow (but still one of the fastest cpu implementations around).

now, openrt implementations in sw can raytrace quite some stuff. of course, it is slow, but still faster than some might imagine. compare the gap between swshader and gf6 in performance, and imagine the same jump for raytracing.

suddenly, the 40-cpu raytracing systems could be on one card. of course, rasterizing hw evolved over the last decade, got much money put in, and raytracer hw got about nothing. but you could expect the same evolution if they would get the same support.

what i mean is, if you could choose raytracing as a default way to handle tasks, most tasks you have to fake today on gpus can be done quite correctly in raytracing environments, and scale well compared to the general performance of raytracers.

one example: in the q3 raytraced demo on openrt, they just add displacement maps over the whole scene, increasing the triangle count by a huge factor. performance drops, but not really by much. performance today drops on gpu’s as well if we move to per-pixel lighting with parallax-faked displacement mapping.

raytracers have a high starting bill, but once you get over that, a lot of advanced effects and features are in by default, or addable without much cost. and this is true.

one thing which is about always implemented is per-pixel lighting. only recently on gpu’s, it’s a ‘default setting’ in raytracers. doing bumpmapping, or parallax, or anything, doesn’t cost anything more on raytracing hw than it would on gpu’s. about _NO_ effect would cost more on raytracing than on rasterizers, always relative to the costs of rasterizing something versus raytracing something, of course.

saarcor shows that a quite small and simple chip can get quite far in realtime raytracing. gf6 has 16x the pipelines and nearly 6x the clockspeed. if you could push saarcor that far, i’d guess you would have _no_ problems realtime-raytracing most games at all. on the other hand, only those gpu’s can handle a doom3 adequately, which does very simple per-pixel lighting over the whole scene and simple hard shadows.. a doom3-style graphics engine would normally be the default for any raytracing engine (the most simple raytracing scene at least..). the q3rtrt demo for example has all those features. not dot3 bumpmapping, but that wouldn’t cost much more.

0
101 Nov 14, 2004 at 21:35

@davepermen

you don’t know swshader but want to talk in here.. i think nick will be unhappy.. :D

Yeah, I have heard of it, but don’t really know much about it, sorry Nick ;) IIRC, it’s kind of a replacement for the D3D reference rasterizer, no? It would be really nice if Nick would modify swshader to collect statistics about rendering, i.e. how many texels were accessed during the frame, how many quads/pixels get z-culled, what’s the utilization of the vertex cache, etc. Also PIX-like pixel debugging as on XBox would be neat, i.e. you can pick a pixel on the screen and all pixel shaders and their values executed for that pixel get listed ;)

A hugely important thing from a performance POV in rendering is coherency. With raytracing you totally lose that. Think about it for a second. How much data do you need to process for each ray? You first need to traverse the spatial data structure (SDS for short), which is a KD-tree in the case of SaarCOR, to find the object the ray hits in the scene. Once you have found an object, you need to transform the ray to object space (a cheap vector*matrix operation). Then you need to continue traversal of the ray in the object space SDS to find the triangle in the object the ray hits (we don’t want to test the ray against all 10000 or so triangles in an object, right?). If you are lucky, the ray hits a triangle in the object. If not, you have to go back up to the scene spatial data structure to continue the ray traversal. This is an extremely huge amount of data processing just to find the intersection point alone (as a reference, for rasterizing HW this is orders of magnitude faster for visible pixels).

Now you may think: let's not trace only 1 ray at a time, but a bunch of them, say 2x2 like the SaarCOR chip does IIRC. All goes fine and dandy until the rays start to get further away from the viewpoint. The rays start to hit different scene SDS nodes and require different object space SDS traversals. But this was the case where raytracing was supposed to be powerful, i.e. output sensitive!

And what's even worse, the story doesn't end here. Once you have actually found the intersection point of a ray and a triangle after a huge amount of number and data crunching, you need to figure out lighting for the point (let's forget recursive ray traversal for a sec). For the sake of simplicity, let's also forget GI for a sec and just go with old fashioned dot3 diffuse lighting. You'll need to start the same ray traversal AGAIN for EACH light which might potentially light the surface! If you go for Monte Carlo raytracing, that's like 100 rays / pixel for decent results, and for each of those 100 rays you might want a secondary bunch of N rays for better radiosity, so you might be talking about multiplying the first-hit ray cost by, say, 1000, which would be a conservative estimate!
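A back-of-envelope sketch of that multiplication (a hypothetical helper, just plugging in the rough figures from the post):

```python
def rays_per_frame(width, height, lights, mc_samples=1, bounce_rays=0):
    """Rough ray budget: one primary ray per pixel, then for each
    Monte Carlo sample a shadow ray per light plus secondary rays."""
    primary = width * height
    rays_per_hit = mc_samples * (lights + bounce_rays)
    return primary * (1 + rays_per_hit)

# Plain dot3 lighting with 3 lights, no GI:
simple = rays_per_frame(640, 480, lights=3)
# Monte Carlo, 100 samples/pixel, 10 secondary rays per sample:
monte = rays_per_frame(640, 480, lights=3, mc_samples=100, bounce_rays=10)
```

Even before any recursion, the Monte Carlo case is over 300x the ray count of the plain dot3 case, and each of those rays repeats the full SDS traversal.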

Talking about real world scenarios and "advanced effects", how does raytracing handle, for instance, per-pixel reflections and refractions? You say it's a built-in feature of raytracing. Sure, but at what cost? The normal map texture totally splits up the 2x2 block (or whatever size you choose) of rays. This is bad even with current GPUs because it causes random accesses to the cubemap, but with raytracing you are talking about random accesses to the entire scene! I wonder why none of those SaarCOR chip screenshots actually have reflections or refractions other than a few flat reflections. How about other basic rendering features, like, say, 1-bit alpha textures? 4 fetches to a texture (bilinear) and possibly back to scene traversal.

So what if we parallelized the raytracing HW like crazy and scaled it to the level of GF6? Now that we know that raytracing is an extremely incoherent way to render, think how big the caches per pipeline would need to be. As a reference, I think it was mentioned somewhere (unofficially) that the L2 cache on GF6 is something like 8kb, shared between all 16 pipes. And even if all GF6 resources were available for raytracing speed-ups, the paper itself says that the most prominent lack in the raytracing HW is programmable shaders, which alone would already take all the GF6 resources.

I'm no raytracing nor HW design guru, but it sadly seems that raytracing HW has no future, at least not until we harness quantum processing power ;) But maybe I'm missing something obvious here.

0
101 Nov 15, 2004 at 11:42

Altair, you have some good points there, but it is worth looking into ray tracing a little more. The problem with the current triangle system is the massive amount of overdraw and transformation work that needs to get done. Seeing as you only need to transform triangles that get hit by rays (and the result can be cached, as the pixel next door, or below, is likely from the same triangle), you can start to make real savings. At present, you are right, ray tracing is too slow. In 1996 the Voodoo1 could render 45 million 16-bit pixels a second; the GF6800 Ultra can do ~4 billion 32-bit pixels a second … that's an increase of around 90 times!! Vertex throughput is much the same story. The Voodoo1 was capable of rendering around 2 million triangles a second; the GF6 is capable of transforming 600 million triangles a second (OK … these figures all mean crap in the real world, but they give an idea). Over the next 10 years the number of polys rendered is probably going to increase by another factor of 100 (or more), which would mean we'd be able to render, in real time, scenes of 600 million polys at 100fps!! That's a HELL of a lot of transformation work. The world polygon database is likely to be huge! Can you imagine writing the occlusion system for that? When you start getting to these numbers, efficiently written ray tracers can provide a real win …
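The extrapolation above, spelled out (a hypothetical helper with purely illustrative numbers taken from the post):

```python
def polys_per_frame(tris_per_second_now, growth_factor, target_fps):
    """Goz's back-of-envelope: scale today's triangle throughput by an
    assumed growth factor, then divide by the target framerate."""
    return tris_per_second_now * growth_factor // target_fps

# ~600M triangles/s today, another 100x over a decade, at 100 fps:
future_scene = polys_per_frame(600_000_000, 100, 100)
```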

Well .. thats my thoughts anyway :)

0
101 Nov 15, 2004 at 16:04

Heya Goz, and welcome to the debate ;)

Transformation work is something you need to do anyway, regardless of whether you use raytracing or rasterization. Consider this: since raytracing relies on the SDS to find intersected triangles, how would you update the SDS to allow the flexibility of vertex shaders? I mean, triangles resulting from a vertex shader can be anything. Of course you could set some reasonable limits, say that triangles are always assumed to be within some user defined bounding volume, but even then you would need to update the object space SDS anyway. And you definitely want to update vertices frequently, for example for skinning, and this is of course per-instance work, so each instance would require its own object space SDS.

With a good visibility algorithm the overdraw in rasterization can be controlled. Also, if you have read about GF6, it can reject 16 quads (64 pixels) per clock for overdraw with z-cull, and this number will only go up in future cards. But z-cull alone isn't good enough of course, so rasterization HW will need something better to also skip vertex processing work, and it's fairly easy to see what would follow. They simply need to add logic to check a bounding volume against the hierarchical z-buffer that already exists in HW for z-cull, and use the result to decide if the vertices of the object need to be processed. And if you want to think a step further, you can apply hierarchical z-buffer visibility to scene traversal to get output sensitive rendering (read the "hierarchical z-buffer visibility" paper by Ned Greene et al. for further info). Having this kind of support in the GPU isn't that far-fetched considering what GPUs already do, and support for it would be pretty trivial to add to existing APIs, do you agree?
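In CPU-side terms, the bounding-volume test described here might look something like this (a toy sketch with hypothetical names; real HW works on a coarse z-pyramid, and the depth convention assumed here is larger z = farther away):

```python
def node_visible(coarse_z, tile_rect, node_nearest_z):
    """Hierarchical z-cull sketch: a node can be skipped if its nearest
    possible depth is behind the farthest depth already stored in every
    coarse z-buffer tile it covers."""
    x0, y0, x1, y1 = tile_rect
    for y in range(y0, y1):
        for x in range(x0, x1):
            if node_nearest_z < coarse_z[y][x]:
                return True   # possibly in front somewhere: process vertices
    return False              # fully occluded: skip the object entirely

coarse_z = [[0.5, 0.5], [0.5, 0.5]]          # farthest depth per tile
in_front = node_visible(coarse_z, (0, 0, 2, 2), 0.3)
behind = node_visible(coarse_z, (0, 0, 2, 2), 0.8)
```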

In addition to output sensitive scene traversal you would of course need some kind of geometry LOD system, which I mentioned earlier, to speed up rendering of far-away objects. Upcoming GPUs will definitely give a hand here.

0
101 Nov 15, 2004 at 17:31

Transformation work is something you need to do anyway, regardless of whether you use raytracing or rasterization. Consider this: since raytracing relies on the SDS to find intersected triangles, how would you update the SDS to allow the flexibility of vertex shaders? I mean, triangles resulting from a vertex shader can be anything.

This is a very good point but its worth remembering not everything is dynamic. I will admit, however, that i hadn’t considered dynamic geometry :whistle:

You also have a good point on polys being easily rejected via the hierarchical z-buffer thingy. However, even then, this means you either need to sort your polygons by z or live with the fact that you are going to get some overdraw from certain angles. Sorting 100 million polys is likely to hurt a bit, so the only real option is to live with the overdraw … thanks to limiting where a player can go it's not too bad after all (especially as by this stage the memory bandwidth is likely to amount to terabytes a second :)).

Now in which case all you ever need to do is process the dynamic polygons and run the various vertex shaders on them. But the polygons are only transformed to world space. Once in world space we could re-jig the space partitioning scheme (oct-trees are handy here, for example) and render from world space. We wouldn’t even need to retain the original poly database (admittedly vertex shaders would be a bit different to code for but not beyond 99% of graphics coders i feel).

What can raytracing do better/faster for you that a scan conversion system couldn't? Well, for one it could provide you with much better results than cubemapping for environmental reflection. For one, the scene needs to be rendered a further 6 times for each dynamically cube mapped object. Imagine a field of reflecting, refracting crystals. You would end up rendering the scene thousands (if not millions) of times to get everything to dynamically reflect correctly (due to interreflection I'm not even sure this is possible). I'm not saying this would be particularly fast in a ray tracer either, but you'd not be transforming millions of polys per frame several times. Mind, there is, of course, nothing stopping you from using a cube map in a ray tracer …

Now one of the things that a lot of the hardware does is massive parallelisation. It helps to do multiple things at once (unsurprisingly :lol:). In ray tracing parallelisation is simple: no single ray depends on any other ray. In polygon scan conversion this isn't "quite" so simple. For one, imagine the logic behind having 1000 pixel pipelines. Many triangles are going to be less than 1000 pixels in size, so what does the system do with the other pipelines? Ignore them, or dynamically re-assign them amongst numerous polygons? Either way is a hell of a lot of expensive core logic. In RT this isn't even a problem; it cuts rendering time by a factor of 1000. Massively parallel hardware polygon rendering has always proved to be a pain (plenty of evidence on this one on the net) but with ray tracing this just isn't a problem … in fact … it's ideal.

There are, undoubtedly, other situations that RT can provide a win but, for now, i will have to think a bit further on these :) (Where are you Mr Perman? … you started this!! :D)

Anyway have a good day i wanna get out the office :happy:

0
101 Nov 15, 2004 at 21:41

I totally agree that not everything is dynamic. I would say that you could probably get away with saying that at least 95% of a normal scene is totally static (okay, if you think of a forest with swaying trees and grass, it's prolly 50-50 :)). I was just pointing that out as a counter-example of how raytracing would fit into rendering dynamic scenes, as Mr. Permen previously claimed.

So you mention sorting 100 million polys for rasterizing. Well, I guess you didn't read the paper or know about hierarchical z-buffer visibility, so let me quickly bring you to the same page: you don't even touch those 100 million polys at all if they are not visible (just like with raytracing). The basic algorithm goes that you process octree nodes starting from the eye and either render their content to the screen or proceed to sub-nodes if they are visible. If a node isn't visible you skip the node entirely, and thus don't even touch the polys. You can quickly test if a node is visible by comparing the node planes against the hierarchical z-buffer that already exists in GPUs. Very simple algorithm, and a very natural extension of the current GPU architecture. There would definitely still be some overdraw, but particularly because of early z-cull, processing those extra pixels is negligible.
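The node-skipping loop described here can be sketched like so (a toy version: the visibility query, which in the paper comes from an incrementally updated z-pyramid, is abstracted into a hypothetical callback, and children are assumed pre-sorted front-to-back):

```python
def render_octree(node, is_visible, render_leaf):
    """Hierarchical z-buffer visibility sketch: walk octree nodes from
    the eye outwards; skipping an invisible node skips its whole
    subtree, so occluded polys are never even touched."""
    if not is_visible(node):
        return 0
    children = node.get("children")
    if children:
        return sum(render_octree(c, is_visible, render_leaf) for c in children)
    render_leaf(node)
    return node["polys"]

# One visible leaf and one occluded leaf under a visible root:
scene = {"visible": True, "children": [
    {"visible": True, "polys": 10},
    {"visible": False, "polys": 20},
]}
drawn = []
touched = render_octree(scene, lambda n: n["visible"], drawn.append)
```

Only the 10 visible polys are ever processed; the occluded leaf's 20 polys are never touched.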

"Re-jigging" polygons into an SDS is very expensive, particularly if you have to do it to the world space one, which contains the whole world. That's why I said you would need a separate instance of the object space SDS for each "skinned" object, because then the cost would at least be localized to that object.

With non-planar reflections and refractions raytracing is definitely the more natural way to go. But it's far from a "free" effect in raytracing either, like I explained in my previous posts, because you potentially end up traversing a huge amount of scene data in an incoherent way (a huge bandwidth killer). Mostly you can get away with simple static cubemaps for the environment reflections anyway, because in the real world everything isn't built from semi-reflective and refractive spheres ;)
@Goz

For one imagine the logic behind having 1000 pixel pipe lines. Many triangles are going to be less than 1000 pixels in size so what does the system do with the other pipelines?

Okay, let me clarify the parallelism of GPUs a bit. First of all, those quad pipes don't process pixels from only a single triangle at once. The triangle setup engine splits triangles into quads and feeds them into available quad pipes. Once all quads for a triangle have been dispatched, quads for the next triangle are sent to the quad pipes as they become available. So on nv40, with its 4 quad pipes, pixels from 4 different triangles can be processed simultaneously. I don't know the details of how it works, so don't ask :D
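As a toy model of that dispatch scheme (hypothetical and heavily simplified: a "free pipe" is modeled as plain round-robin, and each quad is just a (triangle, quad index) pair):

```python
def dispatch_quads(quads_per_triangle, num_quad_pipes=4):
    """Setup emits 2x2 quads per triangle; a free quad pipe takes the
    next quad regardless of which triangle produced it, so several
    triangles can be in flight across the pipes at once."""
    pipes = [[] for _ in range(num_quad_pipes)]
    n = 0
    for tri_id, quad_count in enumerate(quads_per_triangle):
        for q in range(quad_count):
            pipes[n % num_quad_pipes].append((tri_id, q))
            n += 1
    return pipes

# Four single-quad triangles land on four different quad pipes:
pipes = dispatch_quads([1, 1, 1, 1])
```

This is why small triangles don't leave the other quad pipes idle the way a one-triangle-at-a-time design would.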

0
101 Nov 16, 2004 at 09:13

@Altair

So you mention sorting 100 million polys for rasterizing. Well, I guess you didn't read the paper or know about hierarchical z-buffer visibility, so let me quickly bring you to the same page: you don't even touch those 100 million polys at all if they are not visible (just like with raytracing). The basic algorithm goes that you process octree nodes starting from the eye and either render their content to the screen or proceed to sub-nodes if they are visible. If a node isn't visible you skip the node entirely, and thus don't even touch the polys. You can quickly test if a node is visible by comparing the node planes against the hierarchical z-buffer that already exists in GPUs. Very simple algorithm, and a very natural extension of the current GPU architecture. There would definitely still be some overdraw, but particularly because of early z-cull, processing those extra pixels is negligible.

You got a link to that paper? Sounds very interesting… Also sounds like the idea was borrowed straight from Ray Tracer architecture :)

"Re-jigging" polygons into an SDS is very expensive, particularly if you have to do it to the world space one, which contains the whole world. That's why I said you would need a separate instance of the object space SDS for each "skinned" object, because then the cost would at least be localized to that object.

Again, though, as computers increase in speed this “expense” will become negligible for both systems.

With non-planar reflections and refractions raytracing is definitely the more natural way to go. But it's far from a "free" effect in raytracing either, like I explained in my previous posts, because you potentially end up traversing a huge amount of scene data in an incoherent way (a huge bandwidth killer). Mostly you can get away with simple static cubemaps for the environment reflections anyway, because in the real world everything isn't built from semi-reflective and refractive spheres ;)

hehehe mind, as i say, there is nothing stopping you using cube maps in RT. In fact its somewhat more logical …

Okay, let me clarify the parallelism of GPUs a bit. First of all, those quad pipes don't process pixels from only a single triangle at once. The triangle setup engine splits triangles into quads and feeds them into available quad pipes. Once all quads for a triangle have been dispatched, quads for the next triangle are sent to the quad pipes as they become available. So on nv40, with its 4 quad pipes, pixels from 4 different triangles can be processed simultaneously. I don't know the details of how it works, so don't ask :D

That's quite strange. Everything I have read says that those pixel pipelines can only be loaded with one shader at a time (though, admittedly, what I've read doesn't cover the current generation … any links?) and only process one triangle at a time. The silicon to do otherwise is just getting horrendously complicated and expensive to produce.

However throughout this entire discussion the only real problem with RT that i can see is ray-triangle intersection being too expensive. Everywhere else we have pretty much stated that both are as capable. In 10 years time do you think a few million ray-triangle intersects are going to be impossible? I don’t … sounds quite likely that what we will end up with is a hybrid system that takes the strengths from both and leaves the user able to decide whether they want to recursively ray trace or scan convert tris … So far i don’t see any argument (on either side) that says one will be all that much better than the other :)

Did you know that one interesting RT optimisation suggested was to use a special Z-Buffer that records the Z intersection depth and the id of the poly. They then scan convert the tris as normal and then can go through the “Z-Buffer” afterwards and spawn secondary rays … Would be an interesting hybrid optimisation that uses the powers of both systems (you could then get away with not ray tracing every pixel, for example).

Allowing you to scan convert some parts and then ray trace others opens up a HELL of a lot of interesting possibilities, IMHO :)

0
101 Nov 16, 2004 at 14:56

@Goz

You got a link to that paper?  Sounds very interesting… Also sounds like the idea was borrowed straight from Ray Tracer architecture :)

I don't know where or if it was "borrowed" from, and that's quite irrelevant anyway. I don't think the octree was invented for raytracing, and the z-buffer is an inherently rasterization construct. Anyway, you can get the paper from here
@Goz

Again, though, as computers increase in speed this “expense” will become negligible for both systems.

Of course. Like I said, once we got quantum computers, it’s irrelevant which algo we choose ;)
@Goz

hehehe mind, as i say, there is nothing stopping you using cube maps in RT.  In fact its somewhat more logical …

Sure, but at that point you already need the programmable shader architecture, which alone takes about the same resources as current GPUs. On top of that you would need the resources for raytracing. Or are you telling me that raytracing logic is somehow simpler than rasterization logic? I don't think so.
@Goz

That's quite strange. Everything I have read says that those pixel pipelines can only be loaded with one shader at a time (though, admittedly, what I've read doesn't cover the current generation … any links?) and only process one triangle at a time. The silicon to do otherwise is just getting horrendously complicated and expensive to produce.

I don't think you can execute different shaders on different quad pipes simultaneously, because investing transistors in that would be pretty useless considering how current games utilize GPUs. Anyway, yeah, your info is outdated, because I don't think rasterizing multiple triangles simultaneously was possible before nv4x. It's only a matter of investing transistors in the logic that gives you the biggest bang for the buck anyway. That's how NVidia/ATI work, i.e. they take existing games and use them to guide where transistors would have the most impact on performance. Again, here is a link.

edit: Oops, sorry, that was the wrong article :blush: Here is the good one (page 7).
@Goz

However throughout this entire discussion the only real problem with RT that i can see is ray-triangle intersection being too expensive. Everywhere else we have pretty much stated that both are as capable.

I don't think computing the ray-triangle intersection is expensive; quite the contrary, it's pretty fast actually. What's expensive is finding the closest triangle to use for the intersection computation, and that's where raytracing is an inherently slow algorithm. Sure you can utilize coherency to a degree, but it won't solve the inherent flaw in the algorithm.
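For reference on the "intersection itself is cheap" point: the standard Möller-Trumbore test is just a handful of cross and dot products (a plain sketch, nothing SaarCOR-specific):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: returns the hit distance t, or None on a miss.
    Cheap per test -- the expensive part is picking which triangle."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                 # ray parallel to triangle plane
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None

tri = ((0, 0, 5), (1, 0, 5), (0, 1, 5))
hit_t = ray_triangle((0.2, 0.2, 0), (0, 0, 1), *tri)    # hits at t = 5
miss = ray_triangle((2, 2, 0), (0, 0, 1), *tri)         # outside the triangle
```

A few dozen multiply-adds per test; the SDS traversal needed to decide *which* triangle to test is where the memory traffic goes.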
@Goz

Did you know that one interesting RT optimisation suggested was to use a special Z-Buffer that records the Z intersection depth and the id of the poly.

Yes, I have heard of it, i.e. you rasterize the first-hit rays and continue with raytracing afterwards. The problem though is that you then have double the logic for rendering (not necessarily a bad idea if they complement each other) and that the raytracing part would still require capturing the whole scene. Also there are many other issues with raytracing which I have brought up, so I'm not quite sure the hybrid solution would be a good idea in practice.

0
101 Nov 16, 2004 at 16:18

@Altair

Of course. Like I said, once we got quantum computers, it’s irrelevant which algo we choose ;)

hehe .. but .. what's so complicated about modifying a structure like an octree? Dynamically rearranging it is fairly simple, as are things like CSG … It's not particularly slow either. Undoubtedly there are better algorithms out there …

Sure, but at that point you already need the programmable shader architecture, which alone takes about the same resources as current GPUs. On top of that you would need the resources for raytracing. Or are you telling me that raytracing logic is somehow simpler than rasterization logic? I don't think so.

In certain circumstances yes but, admittedly, in the majority of cases no… However, the further we push rasterisation logic the more bits and pieces will be borrowed from things such as RT (and other schemes) to provide the necessary results. How does cube mapping work again?

I don't think you can execute different shaders on different quad pipes simultaneously, because investing transistors in that would be pretty useless considering how current games utilize GPUs. Anyway, yeah, your info is outdated, because I don't think rasterizing multiple triangles simultaneously was possible before nv4x. It's only a matter of investing transistors in the logic that gives you the biggest bang for the buck anyway. That's how NVidia/ATI work, i.e. they take existing games and use them to guide where transistors would have the most impact on performance. Again, here is a link.

Ta for that … interesting article. Still not sure where you get your 4 pixels from 4 different triangles, though … that part of the article is a little confusing.
@”Article”

On triangle-borders, the GPU still works with full quads, even if only one pixel is part of the triangle to be computed. In this example, three pipes won’t do any work

Is that referring to the NV40 or previous designs?

I don't think computing the ray-triangle intersection is expensive; quite the contrary, it's pretty fast actually. What's expensive is finding the closest triangle to use for the intersection computation, and that's where raytracing is an inherently slow algorithm. Sure you can utilize coherency to a degree, but it won't solve the inherent flaw in the algorithm.

Gonna have to do a bit of research on this, but I am sure there are scene traversal systems that work in linear time. It's a simple process to iterate an octree to find the nearest tri along a ray path, for example (and there are undoubtedly far more efficient methods of doing this). I don't see how this is so expensive …

Not to mention the fact that if you are doing this sort of thing you already have a perfect database setup for things like collision detection and such like :)

Yes, I have heard of it, i.e. you rasterize the first-hit rays and continue with raytracing afterwards. The problem though is that you then have double the logic for rendering (not necessarily a bad idea if they complement each other) and that the raytracing part would still require capturing the whole scene. Also there are many other issues with raytracing which I have brought up, so I'm not quite sure the hybrid solution would be a good idea in practice.

Well, tbh, this could go on forever (must remember not to get involved in this sort of discussion 2 weeks before beta :lol:). So I'm going to end with … I disagree. As yet you have not managed to convince me, so shall we agree to disagree on this? After all, the only REAL way to tell is to re-visit this subject in 10 years time. You up for it? :D

Edit: Should add to that last bit that i don’t reckon im gonna convince you either …

0
101 Nov 16, 2004 at 16:25

just to inform, i’ll be out of this discussion for a while. gets me too stressy all the time :D

i just prefer to have a full solution, and then step back and take what you need to get an optimal solution for your needs. and not a weak solution where i have to add and tweak and change and fake around till it gets at least close to what i want.. and nowhere near what i could have had with the full solution.

i just wished, years ago, there had never been the invention of the hw rasterizer.. without that, raytracing would have stayed competitive in realtime tasks. but capitalism took over, and pushed one into heaven, and the rest back beyond earth.. and now it's very hard to catch up again.

but anyone stating he would prefer fakes and tweaks over a simple, working, full solution, lies.. at least if he programs by heart.

oh, and altair, the important thing about swshader is just its speed. it's about the fastest dx9 sw implementation possible. and you can compare it, on the highest-end cpu's of today, with the highest-end gpu's.. and then you look at what realtime raytracing is possible on highest-end cpu's (quite something, actually), and scale that by the same amount.

then you get where we, more or less, would be, if rasterizing had not taken over the gaming and marketing world due to its cheaper starting requirements.

0
101 Nov 16, 2004 at 16:28

@davepermen

just to inform, i’ll be out of this discussion for a while. gets me too stressy all the time :D

Wuss :tongue: hehehehe

0
101 Nov 16, 2004 at 17:03

@Goz

hehe .. but .. what's so complicated about modifying a structure like an octree?

Not really complicated, but expensive. Think what it takes to rebuild an octree for an object of, say, 10k polys. At object level it makes more sense. Or did you think of getting away without an object space SDS and testing each ray against all object triangles? Didn't think so ;)
@Goz

How does cube mapping work again?

There is no raytracing involved in cube mapping if that’s what you are implying.
@Goz

Ta for that … interesting article. Still not sure where you get your 4 pixels from 4 different triangles, though … that part of the article is a little confusing.

Yeah, that was the wrong article. I edited my previous post earlier.
@Goz

@”Article”

On triangle-borders, the GPU still works with full quads, even if only one pixel is part of the triangle to be computed. In this example, three pipes won’t do any work

Is that referring to the NV40 or previous designs?

That's the nv40 architecture. It seems you are confusing pixel pipes and quad pipes. Even when, at a triangle border, the other 3 pixel pipes of a quad don't do work, the other 3 quad pipes do.
@Goz

Gonna have to do a bit of research on this, but I am sure there are scene traversal systems that work in linear time. It's a simple process to iterate an octree to find the nearest tri along a ray path, for example (and there are undoubtedly far more efficient methods of doing this). I don't see how this is so expensive …

If you compare the process of finding ray intersection with a triangle from both memory access/coherency and performance POV to simply rasterizing triangle you’ll realize what I mean.
@Goz

Not to mention the fact that if you are doing this sort of thing you already have a perfect database setup for things like collision detection and such like :)

You’ll have that structure with hierarchical z-buffer visibility without raytracing anyway.
@Goz

Well, tbh, this could go on forever (must remember not to get involved in this sort of discussion 2 weeks before beta :lol:). So I'm going to end with … I disagree. As yet you have not managed to convince me, so shall we agree to disagree on this? After all, the only REAL way to tell is to re-visit this subject in 10 years time. You up for it? :D

Man, you are in no hurry with a whole two weeks to beta ;) Anyway, I hope I have at least convinced some of the other people who happen to read this thread and have a clue that raytracing has no future *throws fuel to the fire* :tongue:

0
101 Nov 17, 2004 at 09:51

@davepermen

i just prefer to have a full solution, and then step back and take what you need to get an optimal solution for your needs. and not a weak solution where i have to add and tweak and change and fake around till it gets at least close to what i want.. and nowhere near what i could have had with the full solution.

Sorry .. this comment bothers me. What is so "full" a solution about raytracing? How do you model a truly diffuse surface without firing an infinite number of rays for every surface intersection?

Ray tracing is no perfect solution …

0
101 Nov 17, 2004 at 10:15

@Altair

Not really complicated, but expensive. Think what it takes to rebuild an octree for an object of, say, 10k polys. At object level it makes more sense. Or did you think of getting away without an object space SDS and testing each ray against all object triangles? Didn't think so ;)

Sure it's expensive to rebuild a whole octree … that's why you take advantage of frame-to-frame coherency.

There is no raytracing involved in cube mapping if that’s what you are implying.

No, that's true … but the idea behind it is much the same: trace a beam/ray/normal into the cube map. Admittedly, thanks to the wonders of people with more of a brain than me, this is a HEAVILY optimised solution that requires no beam to actually be traced :)

Yeah, that was the wrong article. I edited my previous post earlier.

Kinda like the old Pixel Planes system …

That's the nv40 architecture. It seems you are confusing pixel pipes and quad pipes. Even when, at a triangle border, the other 3 pixel pipes of a quad don't do work, the other 3 quad pipes do.

Indeed i was …

If you compare the process of finding ray intersection with a triangle from both memory access/coherency and performance POV to simply rasterizing triangle you’ll realize what I mean.

That's actually a good point. Mind, there are always ways to improve cache coherency … Work on the PS2 and you'll see what I mean :lol:

You’ll have that structure with hierarchical z-buffer visibility without raytracing anyway.

If you can access it … the hierarchical Z is kept on the graphics card … Mind, let's hope they allow us to get shit off the card quickly …

Man, you are in no hurry with a whole two weeks to beta ;) Anyway, I hope I have at least convinced some of the other people who happen to read this thread and have a clue that raytracing has no future *throws fuel to the fire* :tongue:

hahahaha :tongue: While I do appreciate your points, I think you are being entirely too dismissive of ray tracing … Many solutions are far more elegant and, indeed, simpler when using raytracing (bullet holes in walls, for example)…

0
101 Nov 17, 2004 at 15:14

@Goz

Sure it's expensive to rebuild a whole octree … that's why you take advantage of frame-to-frame coherency.

If you have the flexibility of current vertex shaders, you can't gain from temporal coherency. That's why I said that you should pose some reasonable limitations on objects (i.e. assume that objects are always within their bounding volumes) in order to gain from temporal coherency when updating the world space SDS, but as I see it, you can't pose those limitations on the object space SDS, or it would at least be extremely difficult (i.e. defeat the purpose of trying to gain from the coherency) and very limiting.
@Goz

That's actually a good point. Mind, there are always ways to improve cache coherency … Work on the PS2 and you'll see what I mean :lol:

I have actually worked on PS2 and GC in the past, so yeah, I see what you mean :D Having coherent memory accesses has always been very important from a performance POV regardless of architecture, and its importance has only increased since caches were first introduced. Now even with GPUs all kinds of coherency are extremely important, and GPU manufacturers take all measures to increase it (swizzled/compressed textures/render targets, mipmaps, etc.)
@Goz

You’ll have that structure with hierarchical z-buffer visibility without raytracing anyway.

If you can access it … the hierarchical Z is kept on the graphics card … Mind, let's hope they allow us to get shit off the card quickly …

That’s what PCI Express is for ;)
@Goz

hahahaha :tongue: While I do appreciate your points, I think you are being entirely too dismissive of ray tracing … Many solutions are far more elegant and, indeed, simpler when using raytracing (bullet holes in walls, for example)…

Sure, there are a lot of uses (other than rendering) for raytracing, and we use raytracing quite a bit in our game (for instance for the bullet holes you mention) because it's very intuitive and simple. But even for those cases it's not very cheap and has a fair amount of problems of its own. I may appear more dismissive of raytracing than I am, though, but that's kind of the nature of a debate :happy:

0
101 Nov 17, 2004 at 16:22

That’s what PCI Express is for ;)

Forgive me if I'm wrong here (spent too much time recently on Xbox and PS2 to pay much attention to the PC happenings), but as I understand it the only real difference between PCIx and AGP (ignoring the bandwidth) is that PCIx is full duplex where AGP was half. There was no AGP-spec reason why you can't download from the card as fast as you can upload to it …

Am i right on that?

If i am i can see whats gonna happen with PCIx :rolleyes:

0
101 Nov 17, 2004 at 21:14

@Goz

That’s what PCI Express is for ;)

Forgive me if I'm wrong here (spent too much time recently on Xbox and PS2 to pay much attention to the PC happenings), but as I understand it the only real difference between PCIx and AGP (ignoring the bandwidth) is that PCIx is full duplex where AGP was half. There was no AGP-spec reason why you can't download from the card as fast as you can upload to it …

Am i right on that?

If i am i can see whats gonna happen with PCIx :rolleyes:

PCI-X != PCI Express. PCI Express (x16) delivers a \~4GB/s transfer rate simultaneously in both directions, while AGP8x does \~2GB/s in a single direction. I don't know much about the details, but it's supposed to be a very cost-efficient solution: it currently delivers \~10x more data per pin than AGP4x (expected to increase by up to another \~4x per pin with advanced silicon technology), has a dedicated link per device so it doesn't create transfer bottlenecks for SLI graphics cards, etc.
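Those figures can be sanity-checked with a quick back-of-the-envelope calculation using the public PCIe 1.x and AGP 8x signalling numbers:

```python
# Back-of-the-envelope check of the bandwidth numbers above.

# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 2.0 Gbit/s of payload
# per lane, per direction (full duplex).
lane_GBps = 2.5e9 * (8 / 10) / 8 / 1e9      # 0.25 GB/s per lane
pcie_x16_GBps = 16 * lane_GBps              # per direction

# AGP 8x: 32-bit bus, 66 MHz base clock, 8 transfers per clock,
# one direction at a time (half duplex).
agp8x_GBps = (32 / 8) * 66e6 * 8 / 1e9

print(pcie_x16_GBps, agp8x_GBps)            # ~4.0 GB/s vs ~2.1 GB/s
```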

0
101 Nov 17, 2004 at 23:31

theoretically agp readback could be quite a bit faster (and is on some highend cards), but yes, they always 'forgot' to add that feature.

now that pcie doesn't bring more performance (agp8x is more than enough in one direction for now, anyway), they need some reason to hype pcie => so they hype its readback speed.

marketing, all marketing :D but no, pcie really is much faster in readback.. still, agp by itself could be much faster, too.. :D

i know of at least one project that will profit greatly from pcie.. and it's one of the most impressive gpu works done..

0
101 Nov 18, 2004 at 03:14

@davepermen

theoretically agp readback could be quite faster (and is on some highend cards), but yes, they always ‘forgot’ to add that feature.

now, that pcie doesn’t bring more performance (as agp8x is more than enough in one direction anyways for now), they now need some reason to hype pcie => they now try to hype it’s readback speed.

marketing, all marketing :D but no, pcie is much faster in readback.. still, agp by itself could be much faster, too.. :D

“Yes, you’re right, but theoretically I could be right as well, it’s just that I’m not.” ;)

0
101 Nov 18, 2004 at 07:37

It’s most likely dumb to step between two dogs going at it, especialy as a booter to the site. But, I don’t feel lighting alone is what it takes to make a game feel real. I would much like to see a smarter colision system. Slid along walls, brush up on things, and actually see persons grasp objects with the skin giving way to the object. I guess that’s not really relative to the video card aspects of things though, huh? :blush:

:D Well then while I’m at it then what about sound cards getting things like dragon speak and reading programs intergrated into it.

Then you could really get into character by making a costom voice. No more 12 year old kid’s voice cutting into my counter strike games. :lol: -YEAH :lol:

And maybe then you could get some really complex infocom program on the computer! Imagine that for mmorpg’s! You would have to act out as your character! Then of course there’s this:

“Hey Computer!” (from other room)
“Yes, Man?”
“Play my ‘Punk n Ska’ mix”
“Sure thing, boss!”
-Music Plays
-Robot Mother F#@ker, by green jelly stops. Dashboard starts.

how cool would that be?!

you could even get it to rejester tone.

0
101 Nov 18, 2004 at 10:19

@pat_mathis

I would much like to see a smarter colision system. Slid along walls, brush up on things, and actually see persons grasp objects with the skin giving way to the object.

Hey man! You trying to steal my Quantum Physics engine idea? :lol:

0
101 Nov 18, 2004 at 12:26

i've played a bit of hl2 yesterday (a quarter of an hour, or so..), and.. well.. the physics, for me, for the first time feel quite real. we can go much further, of course, but it's at least a step.

oh, and, yes, there's much more to the whole "good illusion" thing than lighting. it's just, i'm fighting for raytracing because 'it works'. it can more or less converge to a general, full solution to all such problems => programmers won't have to work that much on it anymore, and can dive more into the artistic part of it (implementing good tools for complex shader generation for artists, etc..).

most important: realistic animation. that involves both physics and great art. so far, hl2 impresses me there, too. zelda did as well. realistic movement of characters: they actually take steps to walk, and don't just "swing their legs" like in quite some shooters. the animations have to be fluid, and behave rather logically. physics as well (physics is just automatic animation, so the same rules apply).

next important: consistency. the whole world, down to the individual grass blade. everything needs some detail, and all the same 'some'. the world has to look like it's one piece, not like, uhm.. a bsp with some skinned characters in it. it should give the feeling of being touchable. this has very much to do with the animation above, but just as much with the artists' work (like characters wearing stuff that exists in the world, too.. say some gold medallion gets the same shader as the gold nuggets you can find..)

third: realism. not important whether you want cartoon shading or the most realistic real-life simulation: graphics have to look logical, to not stress the eye in any way. to have a good backend there, a graphics api should exist that, theoretically, provides a simple interface to fully customisable global illumination. you can still disable what you don't want, but it would be the right "base" to work with.

and that's why i fight for hw raytracing. it would give this possibility. with doom3-level graphics being sort of the lowest a raytracer hw does by default, it would be quite a nice "start" to evolve towards realistic graphics :D

i'm quite aware that graphics are not everything. but they are an important part. they should represent, as accurately as possible, what the artists and designers imagined. just like it is for movies today.. be it nemo, shrek, the incredibles, ice age, or whatever. there, artists don't have to care (much) about technique at all. the result is a consistent, realistic, believable world.

this is even true in some old games. graphics don't have to be top notch to be believable. but the whole mix of the 3 points above has to find a good combination. that's what counts.

of course, ai and sound have the same criteria :D

0
101 Nov 18, 2004 at 12:44

Ok, back to Raytracing vs. Rasterization …
Altair, can you tell me why SaarCOR, as an RTRT chip, performs so incredibly well with such limited resources compared to rasterization hardware?
- 90MHz clock speed (rasterization hardware is at 500MHz, right?)
- 64 MB RAM (rasterization hardware has > 128MB)
- it's good old SDRAM, not DDR
- it's implemented in an FPGA, it's not even "real hardware" => it's still some kind of simulation
- btw, it's a PCI-bus FPGA card they use \^\^

Now look at the performance:
"This small prototype with only one rendering pipeline already achieves realtime frame rates of 15 to 60 fps at 512x384 and 32-bit colour depth, and between 5 to 15 fps at 1024x768, in our benchmark scenes as presented on this page. Thus the prototype at only 90 MHz already achieves the performance of the highly optimized OpenRT software ray tracer on a (virtual) Pentium-4 at 8 to 12 GHz!"

And they didn't use trivial scenes but levels from Quake3 and UT2003:
- Quake3-p: 52,790 triangles, 17 objects
- UT2003 shooter game "Rabbit": 53,722 triangles, 3 objects
- SunCor: 187,145,136 triangles, 5622 objects (ok, fps drops, but still a lot less than on rasterization hw)

So now compare rasterization hardware with those specs: I suppose we'd be at pre-Riva-TNT2 level, right (90 MHz; ok, RAM was less than 32MB back then, so 64MB would've been huge)?

Doesn't that prove that raytracing hardware is AT LEAST as fast/good as rasterization hardware?

By the way, you mentioned the shading problem of SaarCOR, didn't you? Well, look at the pictures at http://www.saarcor.de/ … maybe there are no "wow-they-are-so-cool" effects comparable to pixel shader effects, but compare those pics to SaarCOR's rasterization equivalent (seen from the hardware side), a Voodoo 2 card. Ok, you'd have bilinear filtered textures, but the realtime shadows would be a problem (hey, I mean GPU-supported!), as would that huge number of triangles. I think the quality of the sample pics is quite promising, so don't forget: SaarCOR is the first really usable implementation, and not even in "real hardware". Think about what would be possible if you used the same number of transistors as a GF6 \^_\^ and spent them on the shading pipeline if you want …\^

Well, I really don't want to argue about the theoretical possibilities and algorithms concerning raytracing/rasterization, because I think only the results count in the end. And comparing the raw output (the image quality), I think SaarCOR wins even against a Riva TNT (which had better hardware specs).

0
101 Nov 18, 2004 at 18:08

PsiProvider,

I don't have time to repeat everything that was said in this thread, so read it thoroughly, try to understand it, and then we can discuss more.

0
101 Nov 18, 2004 at 19:45

Well, Altair, it seems like you missed my point:
The only things you've mentioned here in this forum are comparisons of algorithms - rather boring stuff. Concerning image quality, I don't care what algorithm was used as long as it runs in "realtime" (> 15fps). It seems like you're completely ignoring the fact that there is a working realtime raytracing "chip". And its output is IMHO absolutely comparable to the high-end graphics cards of today (GF6, Radeon X800). So why can't you simply accept the fact that a 90MHz RTRT chip produces output comparable to a 500MHz rasterizer? I do NOT say that one algorithm is better than another - that would be very stupid, because it always depends on the use and the exact context you want to use the algorithm in.
By the way, I'm not a newbie concerning raytracing: I've written a working realtime raytracer for school, so I know the biggest problem in raytracing: finding out which primitive is hit by a ray without testing them all against the ray. But SaarCOR's solution seems to work: otherwise they wouldn't have "realtime" framerates at 1024x768.
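For reference, the standard way to avoid testing every primitive is a spatial hierarchy (kd-tree, BVH, grid) whose nodes are culled with a ray/box "slab" test before any primitives inside are visited. A minimal sketch of that test (illustrative only, not SaarCOR's actual traversal):

```python
# Slab test: the workhorse of acceleration-structure traversal. A ray is
# clipped against the three pairs of axis-aligned planes ("slabs") of a
# bounding box; the box is hit iff the clipped interval is non-empty.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """origin: ray origin; inv_dir: precomputed 1/direction per axis.
    Returns True when the ray intersects the box at t >= 0."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        t1 = (box_min[i] - origin[i]) * inv_dir[i]
        t2 = (box_max[i] - origin[i]) * inv_dir[i]
        tmin = max(tmin, min(t1, t2))   # latest entry across the slabs
        tmax = min(tmax, max(t1, t2))   # earliest exit across the slabs
    return tmin <= tmax
```

Because a miss at a node skips the entire subtree beneath it, the number of primitives actually tested per ray is roughly logarithmic in the scene size rather than linear.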
Ok, so what's your problem, in one sentence?
To be fair, I'll put my argument in one sentence too (so you don't have to spend too much of your precious time on my posting):
I prefer the realtime raytracing chip over rasterization hardware because of comparable image quality (often better, IMHO) at comparable speed at MUCH lower hardware cost.

0
101 Nov 18, 2004 at 21:16

@PsiProvider

And its output is IMHO absolutely comparable to the high end grafic cards today (GF6, Radeon X800).

Ok, I guess there is nothing to discuss then :rolleyes: Since there is no raytracing HW that's scaled up to GF6 level, you have to talk about it at the algorithm level and use that as argumentation for what kinds of problems there potentially are, regardless of how boring it is. IMHO, the SaarCOR chip has come nowhere near the rendering quality of the latest GPUs, and I don't think investing the transistors of nv40 in raytracing would get you there either.

0
101 Nov 19, 2004 at 13:59

I would say a particle physics system: say it rains (particle system kicks in), then mud (physics engine kicks in, deforms the mesh). I don't know, just an idea.

0
101 Nov 20, 2004 at 17:46

altair, what does saarcor miss in terms of rendering quality?

0
101 Nov 20, 2004 at 20:20

@davepermen

altair, what does saarcor miss in terms of rendering quality?

Shaders, flexibility and performance are the first that come to mind.

0
101 Nov 21, 2004 at 16:21

performance has nothing to do with rendering quality, and given the hw constraints, the performance is great (and if you have a brain you should finally understand that).

shaders will have the same performance as on a rasterizer gpu (it has essentially the exact same hw logic).

flexibility? rasterizers don't have much flexibility either. they can rasterize triangles, lines and points onto rectangular buffers, sample textures, and have two places with shaders.

the raytracer can raytrace triangles onto rectangular buffers, sample textures, and will soon have one place with shaders. so where's the big difference?

oh, and it works great together with opengl btw..

0
101 Nov 21, 2004 at 17:50

@davepermen

performance has nothing to do with rendering quality

You are wrong right there, boy. It's kind of ironic that you claim otherwise now, when you were all for scalability yesterday :rolleyes: When you don't have performance, you have to cut quality.
@davepermen

shaders will have the same performance as on a rastericer gpu (as it essencially has the exact same hw logic).

I guess you don't understand what coherency means and what it has to do with the performance of shaders. This discussion won't get anywhere until you are able to understand that, and frankly, I find continuing this discussion pretty much a waste of time now that Goz is gone with his beta deadline :glare:
@davepermen

flexibility. rastericers don’t have much flexibility as well. they can rasterice triangles, lines and points onto rectangular buffers, sample textures, and have two places with shaders.

That's naive thinking. Rasterizers abstract only the most low-level part of the rendering, while they leave more complex, non-trivial algorithms to the fully programmable CPU. Raytracers restrict the rendering at a much higher, non-trivial level, and one huge resulting restriction of raytracers is that they require scene capture.
@davepermen

oh, and it works great together with opengl btw..

I'm sure you've got loads of experience with that to back up that comment, like everything else you have stated as a matter of fact.</sarcasm>

0
101 Nov 22, 2004 at 10:25

Wow, that was quite a discussion!
I want to get this topic back to The Next BIG Thingy in Real-Time Graphics, but before that, just a passing thought to the two friends.
Altair… the "wrong and right" reply to "performance has nothing to do with rendering quality" is perfect, as the two factors together will still decide how the chip does (sells) as a whole. And davepermen… I don't think the discussion here is about whether SaarCOR is good. It's hardware that does real-time ray tracing!! It will obviously be a better overall performer (if implemented well) than its software counterparts. I think what Altair wants to argue is whether it's good enough. Good enough for the time & money that will need to be invested in its development. Am I correct?

Well anyway, coming back to The Next BIG Thingy in Real-Time Graphics, I am surprised nobody has said anything about volumetric rendering. (Sorry I joined in late :D ). Let's divide rendering into two parts: one where we do the lighting, shading & effects like shadows, and the other part… modelling. Honestly guys, you've got to admit… we FAKE! I mean, we know that every sphere we render is in fact not a sphere, it is a closed set of 2D polygons. We fake when we say it's 3D when we know it's hollow inside. Probably the next big thing, or the next big thing after the next big thing, will be when we model to the LOD where we have layers inside a primitive. Think of this: there is a wall (ah… no, not a textured rectangle). We have detailed the wall to the level where we have a layer of bricks of porous material (yeah, no more bump maps, let's call it 3D porous material) held together with a layer of thickened cement. And BOOM goes the rocket from the rocket launcher. The wall (made of volumetric elements) breaks into small pieces. There is a real-time physics engine that computes the impact, the breaking of the group of wall elements &, what the hell, it also computes the scatter of each voxel. The possibilities are unimaginable!
We can even have water / blood rendered out of small voxels, where a real-time Computational Fluid Dynamics engine computes the flow.
Ahhh… they say I like to dream :D
But I say: I dream, that's why I can do! :D
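The voxel-wall destruction idea above can be caricatured in a few lines (a toy sketch with made-up helper names; a real engine would hand the carved-out voxels to a physics solver as debris):

```python
# Toy voxel wall: a solid block stored as a set of filled cells, and an
# "explosion" that carves out every voxel within a blast radius.

def build_wall(w, h, d):
    """A w x h x d block of filled voxel cells."""
    return {(x, y, z) for x in range(w) for y in range(h) for z in range(d)}

def explode(voxels, center, radius):
    """Split the wall into what remains standing and what becomes debris."""
    r2 = radius * radius
    debris = {v for v in voxels
              if sum((a - b) ** 2 for a, b in zip(v, center)) <= r2}
    return voxels - debris, debris

wall = build_wall(8, 8, 2)
remaining, debris = explode(wall, (4, 4, 0), 2.0)
```

The obvious catch, which the post itself hints at, is scale: a wall at brick-and-pore resolution is orders of magnitude more cells than this, which is why volumetric destruction stayed a dream on 2004 hardware.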

0
101 Nov 25, 2004 at 07:29

uhm, altair.. openrt (the api for saarcor) works flawlessly together with opengl for display (saarcor is only a renderer.. the display card is still an ordinary gpu for now).

but it's okay. you don't even try to think about what i state, and just call me dumb instead.

0
101 Nov 25, 2004 at 15:16

The problem is that I understand completely what you are trying to say. Sorry, but I just got a bit frustrated about going over the same thing over and over again without anything new on the table, and maybe said things in the wrong tone of voice. I had no intention to claim you are stupid, and I know you are a very clever guy, but you lack the experience in game development to see the whole picture. I'm truly interested in seeking out the potential of raytracing HW, and would like to hear that one deciding factor why raytracing would be the better rendering option, but no one has been able to show it yet (not saying there isn't such a thing). Sure, there are nice natural features in raytracing, like refractions and reflections, but those alone are a quite weak argument when facing all the potential problems there will be, and if you had the experience, you would understand that. You can't simply ignore those problems. Rasterization as it is now also has its fair share of problems, but there are also potential solutions for some of them which stand on the basis of an already existing architecture and which have already been proven to work "on the field".

0
101 Nov 29, 2004 at 00:27

the one definite factor is that, finally, the big hacking that makes up 99% of today's hw and sw part of graphics (and we all get taught how good it is on nvidia's and ati's and even carmack-and-friends' sides) just to get a merely near-to-good-looking image could be over. on the raytracing side, there are rather scalable, fully working solutions for everything imaginable (in newtonian space, that is.. nobody wants quantum effects in his games.. yet :D) in terms of graphics.

just look at the whole shadow issue. there is no simple, 100% working, fast, and scalable solution on gpu's yet.. and gpu's are old today.. isn't that depressing?

raytracing can handle all sorts of effects in direct and indirect illumination situations in a clean and natural manner. it can handle gi well.

rasterisation has not yet proven it can. sure, it can get managed well for movies, but heck, they still render at about 20 minutes per frame. that's FAR away from today's gpu's. if we look at saarcor, that's much closer :D it's at near-to-realtime speed for gigantic meshes, it can illuminate them with nice gi, and shaders are just a bit of hw that you could copy from today's gpu's. the same parallelism would still exist, only coherence effects (texture cache comes to mind) would get issues.

it looks like you have no clue how far away we are from naturally illuminated and rendered scenes.. it's quite far. today's images look good, but to get closer to 'the solution', we have to pile a lot of additional work onto the gpu (nearly one render per pixel.. this is not scalable). they can (and do) do this in movie-rendering scenarios. accurate reflections get more or less raytraced, with a rasterizer for the intersections.

if you read enough about what gi involves, and start to look at the real world again, you notice how much just depends on reflections. every surface gets reflections that change its look. often it's rather diffuse, and only a bit, but it matters for telling what sort of material it is. brdfs are highly complex for all sorts of materials, and they all need accurate input from the whole surroundings to give an accurate shading output. you can fake that more or less. but only raytracing can give the real input values for your shaders to produce the correct output.
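For the record, the reflection ray a tracer spawns at a hit point comes straight from the standard mirror formula r = d - 2(d·n)n, with n a unit surface normal; it is this ray that fetches the "real input values" from the surroundings:

```python
# Classical mirror-reflection direction used when spawning secondary rays:
#   r = d - 2 (d . n) n,   n a unit-length surface normal.

def reflect(d, n):
    """Reflect incoming direction d about unit normal n (3-vectors as tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))
```

A rasterizer has no such ray to follow; it has to substitute an environment map or a rendered-to-texture approximation of the surroundings, which is exactly the faking being criticized above.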

i don't see problems in raytracers. if you finally compare the hw of saarcor to the hw of a gf6 (hw features only), and do the math, you notice that there is a hw-speed difference of a factor of 50 to 90 (depending on how you do the math..).

now scale saarcor by a factor of 50 in speed (raytracers scale well, you know that by now), take the other numbers and images on www.openrt.de, and you can start to imagine what a saarcor with nvidia hw technology behind it would be able to do.

one simple statement: the term "interactive global illumination" would not exist anymore. they could call it "realtime global illumination".

but you fail to, for one time, look at it and _CARE_ABOUT_WHAT_YOU_SEE_. (oh, you should read all the papers..).

i'm tired of saying the same over and over again, but i got trained, as i've talked about this for quite a long time.. i feel quite lonely in my position. people are trained to believe in rasterisation and gpu's, they get trained by their jobs, by the nvidia and ati marketing, and by the fact that "everyone around uses it, too..". they don't even bother to look around and 'restart thinking'.

0
101 Nov 29, 2004 at 00:28

fuck, that got much too long :D sorry..

i've had nearly a week with no pc.. now i have a new one.. athlon 64 :D

0
101 Nov 29, 2004 at 04:43

@davepermen

the one definite factor is, finally, the big hacking that is 99% of the todays hw and sw part of graphics (and we all get teached now good it is on nvidias and atis and even carmacks and friends sides) to get an only near to good looking image could be over. on raytracing sides, rather scalable full working solutions for everything imaginable (in newtonian space, that is.. nobody wants quantum effects in his games.. yet:D) in terms of graphics. [snapback]14188[/snapback]

yeah…complex effects done using rasterization are essentially approximations…granted the results may be arguably acceptable, but the effort, time and learning curve needed to get there just snowball year after year…can this be the right way of doing things? And the novelty of eye-popping graphics wears down year after year…and there are tons of more pressing challenges ahead - full volumetric rendering, volumetric-based physics, smarter AI, biomechanics…but I'm digressing.

just look at the whole shadow issue. there is no simple 100% working, fast, and scalable solution on gpu’s yet.. and gpu’s are old today.. isn’t that depressing?

couldn't agree more on that…tons of elaborate tricks have been invented by enterprising folks just to do some decent shadows…deep shadow maps for translucent shadows, elaborate schemes for soft shadows (and even then only approximations), perspective shadow mapping for antialiased shadows, etc. etc.…then imagine having to pull all of these complicated techniques together into an even more complicated universal shadowing scheme…all while raytracing simply traces out completely natural shadows as it renders, quietly and efficiently…I can't imagine going into the future of 3D rendering without such ease and power.
heck, just getting something as basic as transparency right is such a drag in rasterization…with raytracing, the issue becomes elegantly moot…so too the issue of pixel overdraw…occlusion culling is inherent!

raytracing can handle in a clean and natural manner all sort of effects in direct, and indirect illumination situations. it can handle gi well.

yep…GI is its mainstay alright.

rasterisation has not yet proven it can. sure, it can get managed well for movies, but heck, they still render with about 20minutes per frame. thats FAR away from todays gpu’s. if we look at saarcor, thats much more close:D its in the near-to-realtime speed for gigantic meshes, it can illuminate them with nice gi, and shaders are just a bit of hw that you could copy from gpu’s today. the same parallelity would still exist, only coherence effects (texture cache comes to mind) would get issues.

it looks like you have no clue how far we are away from natural illuminated and rendered scenes.. its quite far. todays images look good, but to get closer to ‘the solution’, we have to add a lot of additional work onto the gpu (near to 1 render per pixel.. this is not scalable). they can (and do) this in scenarios for movie-renderings. accurate reflections get more or less raytraced with a rastericer for the intersections.

if you read enough about what gi involves, and start to look at the real world again, you notice how much just depends on reflections. every surface gets reflections, that change it’s look. often its rather diffuse, and only a bit, but it mathers to detect what sort of material it is. brdfs are highly complex for all sort of materials, and they all need accurate input from the whole surroundings to get an accurate shading output. you can fake that more or less. but only raytracing can give the real input values for your shaders to give the correct output.

ditto man…just take a look at PRT and the numerous caveats that come along with it…that's about the nearest rasterization can get to GI at interactive rates…but the catch is that objects have to stay close to each other for the precomputed "localised global illum transport" (kind of an oxymoron there, really) to hold…again, all that hard work only to end up with crippling limitations. Is this how 3D rendering is to advance into the future?

i don’t see problems in raytracers. if you finally compared the hw of saarcor to the hw of a gf6 (only hw features), and do the math, then you notice that there is a hw-speed difference of a factor 50 to 90 (depending on how you do the math..). now scale saarcor by a factor 50 in speed (raytracers scale well, you know that yet), and take the other numbers and images on www.openrt.de and you can start imagine what a saarcor with nvidia hw technology in the back-ass would be able to do.

performance alone, raytracing as implemented by openrt is *awesome*…the jumbo jet visualization is definitive proof, as is the factory visualization! :wink:

one simple statement: the term “interactive global illumination” would not exist anymore. they could call it “realtime global illumination”.

but you fail to, for one time, look at it and _CARE_ABOUT_WHAT_YOU_SEE_. (oh, you should read all the papers..).

i’m tired of saing the sam over and over again, but i got trained as i talk about it since quite long.. i feel quite lonely at my position, people are trained to believe in rasterisation and gpu’s, they get trained by their jobs, by the nvidia and ati marketing, and the fact that “everyone around uses it, too..”. they don’t even bother to look around and ‘restart thinking’.

exactly…that some inherently flawed rendering paradigm got an early head start to arrive first at the "accomplished" state it's in today is no proof that other ahead-of-their-time alternatives are irrelevant…raytracing might have been untenable when processing power was lacking, but the time is now ripe for a new way of looking at rendering…before we waste more resources working towards a technological dead-end.

analogy: look at the way scripted "physics" has completely given way to interactive physics…I'm sure there must've been naysayers decrying the impracticability of it, but look how far desktop physics has come…bang-for-the-GPU-cycle-wise, raytracing represents a quantum leap over rasterization in performance and quality.

you're not alone, dave…I'm a firm believer that raytracing will rule the day pretty soon.

0
101 Nov 29, 2004 at 19:26

@davepermen

its in the near-to-realtime speed for gigantic meshes, it can illuminate them with nice gi, and shaders are just a bit of hw that you could copy from gpu’s today.

Like I have told you, output-sensitive rendering can be brought to rasterization GPUs, and that's not my idea. From what I have heard, there have even been proposals to MS to extend DX in this direction (from the GPU manufacturer side).

Sure you can illuminate meshes with GI in raytracing, but at what cost? You make it sound like GI (or reflections/refractions for that matter) is almost free in raytracing, so you either have no clue how it's done or are intentionally making totally biased claims. Have you even thought about how much data you need to access per ray to find intersections?

If you had a decent understanding of GPUs, you would realize that shaders take the vast majority of the die size and are not just "a bit of HW". The proportional size of shader HW is only increasing. If you want to have a good idea of how graphics HW should evolve, you need to understand how it currently works. You should spend some time getting into the details of current GPUs so that we could have a meaningful debate.

0
101 Nov 29, 2004 at 22:48

have you actually READ UP ON OPENRT?!

because there are BENCHES and TESTS with GI SOLUTIONS BASED ON RAYTRACING, AND THEIR COST!

sorry that i shout, but i get really annoyed. they have what they call realtime raytracing, and on the same hw configuration they can get what they call interactive gi.

read up on it, read those papers if you really want to discuss with me. there is the cost. yes, gi is slower, but no, not by much. more something of the sort: "uh, i can play the game at 640x480 smooth with raytracing.. let's enable gi.. uhm, about smooth at 320x240.. but what the heck, now i can actually SEE something in 'doom3 openrt edition'".

the numbers are there, those are real, live, tested numbers (some are just old and hw got faster), and they give you the estimate you asked for. gi is slower, sure, but if you design right, you can make it very scalable, from no gi up to perfect monte carlo gi, with a blend factor that scales the quality from the first to the last. this is more or less what openrt provides in the interactive gi demos, and it's set to a quality level that makes it interactive.

what i mean with shaders is: they are the same bit of hw on a rasterizer as on an rt-gpu. _THAT'S_ what i mean. i don't say they don't cost much. but if you have to shade the intersection results of 1024x768 rays with a shader, it will cost as much as rendering one fullscreen image with that shader on a rasterizer. THAT PART OF THE HW IS EQUAL IN LOGIC. so it cancels out of the whole equation.

oh, and i know in detail how today's gpus work. that's why i don't believe in them anymore at all.

0
101 Nov 30, 2004 at 02:02

@davepermen

have you actually READ ON OPENRT?!

Yes I have, but apparently you haven't. Even in the papers they say that to get adequate GI you should have a budget of at least 50 rays / pixel (I see people talking more about \~100). Now, I checked the videos they have, and it looks like they are using more like 10 or so, because what they call GI in the videos looks to me like simple soft shadowing, not GI (i.e. spawning \~10 rays towards the light source), in the factory example for instance. With that technique you get far more coherent memory access than with Monte Carlo type GI, so good luck with your 4 rays / pixel for GI (640x480 -> 320x240).
@davepermen

this is what openrt more or less provides in the interactive gi demos, and its set to a quality level that makes it interactive.

Yeah, the quality level is very poor in the demos, because GI becomes really expensive in RT when you pump it up. I.e. when you move from 50 to 100 rays / pixel you probably get only about a 10% increment in quality.
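The diminishing returns described here follow from Monte Carlo convergence: the standard error of an N-sample estimate falls off as 1/sqrt(N), so doubling the ray budget only shaves roughly 29% off the noise. A quick check:

```python
# Monte Carlo noise falls off as 1/sqrt(N): doubling the rays per pixel
# from 50 to 100 reduces the expected error by only ~29%, which is why
# pumping up GI quality in a ray tracer gets so expensive.

import math

def relative_error(n_rays):
    """Standard error of an N-sample Monte Carlo estimate, relative to
    a single-sample estimate."""
    return 1.0 / math.sqrt(n_rays)

improvement = 1.0 - relative_error(100) / relative_error(50)
print(f"{improvement:.0%}")  # prints "29%"
```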
@davepermen

what i mean with shaders is, they are the same bit of hw on a rasteriser as on a rt-gpu. _THATS_ what i mean.

I guess _THAT'S_ what you mean _NOW_ :rolleyes: Up until this moment you have been talking about reusing the HW resources of GPUs like nv40 to scale the raytracing HW. But I'm glad you did learn something :tongue:
@davepermen

but if you have to shade the intersection result of 1024x768 rays with a shader, it will be as much as rendering one fullscreen image with a shader on a rasteriser. THAT PART OF THE HW IS EQUAL IN LOGIC. so it equals itself out of the whole equation.

Okay, so I guess you haven’t heard of the de facto way of rendering with expensive shaders nowadays. You first lay down the Z for opaque objects (pre Z-pass) and then run another pass for shading, so you end up shading each opaque pixel only once. With color writes disabled for the pre Z-pass, nv40 processes 32 pixels in a single clock.
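The saving from a pre Z-pass can be illustrated with a toy depth-buffer model (Python, hypothetical scene and counts; a real GPU does this in fixed-function hardware, this just tallies shader invocations):

```python
# Toy depth-buffer model: each "triangle" covers every pixel at one depth.
# Without a prepass, a pixel is shaded every time a fragment passes the
# depth test; with a Z-only prepass, each visible pixel is shaded once.

def render(triangles, prepass):
    """triangles: list of depths in submission order (back to front is
    the worst case). Returns total shader invocations for a row of
    100 pixels."""
    width = 100
    shades = 0
    zbuf = [float("inf")] * width
    if prepass:
        # Z-only pass: fill the depth buffer, no shading (color writes off).
        for depth in triangles:
            for x in range(width):
                zbuf[x] = min(zbuf[x], depth)
        # Shading pass with depth test EQUAL: only the nearest fragment shades.
        for depth in triangles:
            shades += sum(1 for x in range(width) if depth == zbuf[x])
    else:
        for depth in triangles:
            for x in range(width):
                if depth < zbuf[x]:
                    zbuf[x] = depth
                    shades += 1   # expensive shader runs here
    return shades

scene = [5.0, 4.0, 3.0, 2.0, 1.0]       # back-to-front: worst case
print(render(scene, prepass=False))      # 500: every layer gets shaded
print(render(scene, prepass=True))       # 100: each pixel shaded once
```

With five overlapping layers the prepass cuts shading work 5x; the cheap Z-only pass is what the doubled-rate (32 pixels/clock) depth path accelerates.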
@davepermen

oh, and i know in detail how today’s gpus work. that’s why i don’t believe in them anymore at all.

According to what you have been saying, that doesn’t seem to be the case. But ignorance is bliss, I guess.

edit: Oh, and I almost forgot. Since you are a big fan of scalability and output-sensitive rendering, check out Hybrid dPVS. It’s an output-sensitive visibility system that you can use with GPUs. It doesn’t rely on GPU visibility support, so it does a bunch of redundant work on the CPU AFAIK.

0
101 Dec 01, 2004 at 02:41

One should also not forget the issue of escalating development complexity inherent in rasterization techniques.
Rasterization techniques at best lend themselves adequately to simplified local lighting models, e.g. per-pixel direct diffuse/specular lighting.
Global effects like reflection, refraction, caustics, inter-reflected diffuse, soft shadows with translucency etc. require elaborate schemes just to obtain approximate results, often with varying degrees of restriction.
Raytracing takes care of these effects in a most intuitive, straightforward manner…freeing developers to concentrate on other, more pressing challenges in an increasingly complex development environment.
One could also ask “at what price?” to the assertion that rasterization can approximate complex global effects, albeit with limitations and substantial computational costs. How much monstrous computing capacity must rasterization consume in order to come close to the interactivity and fidelity of complex effects of which raytracing only exacts a fraction? And how many more research papers must be churned out expounding yet more ways to fake/approximate certain aspects of visual effects in isolation from the full gamut of global factors?

0
101 Dec 01, 2004 at 15:45

Those are good points Melvin.

Probably the largest contributor to the performance of this generation of GPUs is the coherency, or locality, they exploit. Take for instance the huge memory bandwidth GPUs have nowadays: this is possible for the most part because of coherent memory access patterns. IIRC, memory in GPUs is nowadays fetched in 128-byte blocks, and introducing incoherency into the access pattern will bring the bandwidth to its knees. Another benefit of coherency is that the caches for the pipelines can be kept minimal, which in turn makes adding more pipelines economical. If you think how much effort has been put into increasing coherency even in current-generation GPUs, which already utilize an inherently coherent technique, rasterization, it’s amazing. The only place where raytracing inherently beats rasterization in memory coherency is the way it accesses the framebuffer, and even for that GPU designers have gone to great lengths to improve things (z-cull, swizzling, quad pipes, etc.)
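The cost of incoherence against those 128-byte transactions can be sketched by counting the memory blocks a batch of texel fetches touches (Python, illustrative numbers only; the 97-texel stride is an arbitrary stand-in for scattered secondary-ray fetches):

```python
import random

BLOCK = 128  # bytes per memory transaction (the figure quoted above)
TEXEL = 4    # bytes per texel, e.g. RGBA8

def blocks_touched(addresses):
    """Distinct 128-byte blocks hit by a batch of texel fetches."""
    return len({addr // BLOCK for addr in addresses})

n = 32 * 1024  # 32K texel fetches in both cases
coherent = [i * TEXEL for i in range(n)]  # sequential scanline walk
rng = random.Random(0)
# Arbitrary large stride scatters fetches across memory; 97 texels
# apart guarantees no two distinct fetches share a 128-byte block.
scattered = [rng.randrange(n) * TEXEL * 97 for _ in range(n)]

print(blocks_touched(coherent))   # 1024 blocks for 128 KB of texels
print(blocks_touched(scattered))  # roughly 20x as many blocks
```

Same number of texels fetched, roughly 20x the memory traffic, which is the point: throughput collapses long before the arithmetic units are busy.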

Now, on top of memory coherency, there is also coherency in shaders (GPUs can be set up with only a single shader at a time), shading (quad pipes rendering adjacent pixels with similar interpolated shader parameters, which can therefore share computations), textures (TMU setup & memory access), vertices (vertex cache), triangles (triangle setup) and what not, which in turn makes the monstrous throughput of GPUs possible.

With raytracing you sure can exploit some coherency (IIRC, the SaarCOR chip does this by tracing 2x2 blocks of rays at once), but as I see it, it’s WAY more difficult, because raytracing is an inherently incoherent way of rendering. Even the seemingly simple tracing of 2x2 rays has its fair share of problems alone (diverging rays in the traversal of the spatial data structure). I’m sure there are many other ways to improve coherency in raytracing beyond what SaarCOR does, but trying to exploit them in turn makes the hardware much more complex.

I sure find those inherent global effects of raytracing attractive, and it’s quite elegant how they are handled, but faced with all those potential problems, I’m not convinced at all that it’s a good way to go. Most of the effects raytracing solves are also quite subtle in common cases where cheap approximations work just fine, so I don’t find it well justified to invest loads of processing power in them. At the very least it should be optional, but in that case you would need to bring those expensive shaders (in terms of transistors) to the raytracing HW anyway.

You also mentioned how rasterization tries to solve those global effects in ways that are restricted in one way or another, but raytracing is also very specific in how it skins the cat. Raytracing requires higher-level knowledge of the rendered environment, whereas in rasterization its representation is completely free. With rasterization GPUs you can, for instance, generate and render the scene in whatever way is most appropriate.

0
101 Dec 02, 2004 at 11:56

@Altair

@davepermen

what i mean with shaders is, they are the same bit of hw on a rasteriser as on a rt-gpu. _THATS_ what i mean.

I guess _THATS_ what you mean _NOW_ :rolleyes: Up until this moment you have been talking about reusing the HW resources of GPUs like nv40 to scale the raytracing HW. But I’m glad you did learn something :tongue:

i meant that all the time. you just never got it.

0
101 Dec 02, 2004 at 14:21

@davepermen

i meant that all the time. you just never got it.

How should I understand a comment like this:@davepermen

if you finally compared the hw of saarcor to the hw of a gf6 (only hw features), and do the math, then you notice that there is a hw-speed difference of a factor 50 to 90 (depending on how you do the math..). now scale saarcor by a factor 50 in speed (raytracers scale well, you know that yet), and take the other numbers and images on www.openrt.de and you can start imagine what a saarcor with nvidia hw technology in the back-ass would be able to do.

Care to elaborate how you get the factor of 50-90?

0
101 Dec 03, 2004 at 03:06

We’ve been comparing realtime raytracing technology against rasterization as if the two were on a level playing field.
To be fair, SaarCOR is not even out of its prototyping diapers yet…while rasterization has clearly had the benefit of relentless and exclusive development in the mainstream realtime 3D industry all these years. But by any standard, SaarCOR is already showing precocious signs of promise.
We should only really start comparing the two in earnest when SaarCOR debuts…the issues SaarCOR faces now could well become moot in time, who knows?

Anyway, the point I was trying to make isn’t so much on rasterization performance vs raytracing performance but the sheer complexity of development on rasterization platforms vs that of raytracing.
Even for something as basic as transparency, the rasterization approach throws up sorting issues which, despite the standard back-to-front tactic, are generally not robust (e.g. intersecting polys between transparent objects, or intersecting polys within a single transparent object). Then there’s the sorting overhead when you run up to hundreds, even thousands or more objects. There’s depth peeling for order-independent rendering, but that’s one heck of a complicated approach just to render transparent polys, notwithstanding the hefty processing overheads.
Speaking of reflections, if one object requires one cube map, then 100 objects require 100 cube maps…that’s of course assuming we care about “reasonably approximate” (but not accurate) reflections on all 100 objects. Suppose we don’t; then we just use a single cube map to represent the reflections of all 100 objects…which would not look even half decent, since they’d all look the same. So we use 100 cube maps to get “reasonably approximate” (but not accurate) reflections on all 100 objects, and blow our texture budget and frame rate…to make things worse, for serious apps like product visualization, walkthroughs and what not, these reflections might not even pass muster. Oh, and don’t forget refractions, per-pixel lighting, shadows and the few million other things we have to “tack” on. Development complexity and processing overheads quickly go through the roof…and you start racking your brain to decide what to compromise, over what, and by how much.
Precomputed radiance transfer comes to the rescue with fancy tricks like inter-reflection, soft shadows, caustics, scattering…but wait, the caveats list is just as long - only valid if objects don’t stray too far from each other, only valid for distant environmental lighting, lengthy preprocessing of transfer vectors required…in a nutshell, it’s not a truly interactive approach to global effects. Never mind whether one actually understands the complex maths behind it.
Shadows have an impressive literature of their own…suffice to say that soft-edged, translucent, alias-free shadows in realtime still remain elusive.
One can certainly spend an inordinate amount of development effort in cheap cheats for these effects, but this effort just gets bigger and bigger with rising expectations each year, and reusability becomes more and more difficult, resulting in the familiar “this engine’s rendering pipeline has been rewritten from ground up to take advantage of the latest advances in hardware” claim of pride by engine programmers.
In contrast, the raytracing approach to these challenges is as intuitive as can be, simply because it mimics the way light works in real life.
Devoting ever-increasing chunks of developer effort to faking ever more impressive visual effects on rasterization hardware is not unlike faking/scripting elaborate physics behavior back when the CPU wasn’t up to it…now that seems silly to do in this day and age of realtime physics engines.
There comes a point when it makes more sense to do things the correct, though inherently more computationally expensive, way once the hardware is finally there to make it viable, than to doggedly stick with cheaper/limited approximations that just keep getting increasingly complicated, putting ever more pressure on what started out as “economical” rendering hardware to evolve into the gigaflops processing behemoths of today.
Today, it seems a raytracing prototype with just a fraction of the processing capacity of modern day rasterization hardware can easily produce results that rasterization engines will have to resort to tons of approximations/hacks just to come close to.
Going by what the SaarCOR prototype has already shown it is capable of, and will be capable of when it debuts, and the elegant ease with which accurate global effects can be implemented, I’m not at all convinced that rasterization-based engines are the way forward into the future.

0
101 Dec 03, 2004 at 15:45

@Melvin

We’ve been comparing realtime raytracing technology against rasterization as if the 2 are on the same level playing field.
To be fair, SaarCOR is not even out of its prototyping diapers yet…while rasterization clearly has had the benefit of relentless and exclusive development in the mainstream realtime 3D industry all these years. But by any standards, SaarCOR is already showing precocious signs of promise.
We should only really start comparing the 2 in earnest when SaarCOR debuts…the issues SaarCOR faces now could well become moot in time, who knows?

But to be fair to rasterization, the technique has gone through extensive testing in production for over a decade, and many issues with it have already surfaced. The same can’t be said about raytracing since, like you said, it hasn’t even got out of its prototyping diapers yet. The only thing we can do is make well-educated guesses and try to be objective about the known and speculative pros and cons both techniques have, and with the raytracing cons I have brought up I believe I have only scratched the surface.
@Melvin

Anyway, the point I was trying to make isn’t so much on rasterization performance vs raytracing performance but the sheer complexity of development on rasterization platforms vs that of raytracing.

0
101 Dec 06, 2004 at 06:38

@Altair

But to be fair to rasterization, the technique has gone through extensive testing in production for over a decade, and many issues with it have already surfaced. The same can’t be said about raytracing since, like you said, it hasn’t even got out of its prototyping diapers yet. The only thing we can do is make well-educated guesses and try to be objective about the known and speculative pros and cons both techniques have, and with the raytracing cons I have brought up I believe I have only scratched the surface.

>> just as you’ve pointed out how rasterization has had the benefit of dedicated research/development by the mainstream real-time graphics industry to evolve to the state it is in today, do note that raytracing has traditionally only received the niche attention of academics, offline rendering developers and the occasional gung-ho raytracer coder…it’s hardly any wonder then that rasterization is more “established” and has had more of its inner demons “exorcised” compared to raytracing, which is only recently beginning to attract mainstream attention given today’s increasing processing power. Not surprising then, is it, that only the tip of the iceberg has been explored. My argument is: just because rasterization is now looking good (albeit with limitations, and IMHO a heck of a lot of them) with tons of processing power thrown at it plus clever optimization techniques, doesn’t mean we should rest on our laurels and stop looking elsewhere for a potentially better solution because we don’t want to venture outside our comfort zone. Rasterization may look “good enough” (and that is subjective, not forgetting that graphics isn’t just for games but also for other, more serious apps like product visualization, walkthrus etc.) now, but it may not be the best solution to bring us into the future. Should reflections still be done the way they are today, say, 5 years from now? What with all those clumsy memory-hungry reflection cube maps, tedious setup for rendering, expensive multi-pass rendering into not 1 but 6 targets…only to get at best approximate results, at worst completely wrong ones? And let’s not get started on the myriad other effects we also have to consider, which all have to be integrated together somehow to produce the final rendered result.
Surely we should’ve long outgrown that phase of hacking and fudging, and left it up to the renderer (incidentally, naturally a raytracer) to simply and elegantly generate the accurate results that we can all take for granted, while we move on to other, bigger, more deserving challenges?@Altair

@Melvin

Anyway, the point I was trying to make isn’t so much on rasterization performance vs raytracing performance but the sheer complexity of development on rasterization platforms vs that of raytracing.

>> even so, I’d say the SaarCOR prototype makes a pretty good statement about performance, if not the last word…

0
101 Dec 08, 2004 at 15:39

Don’t worry Melvin, I can tell you that many game developers can think outside the box and don’t just rest on their laurels. I think the game developers contributing to this discussion are a fair example of it :) It’s just very common to see people, particularly with an academic or hobbyist background, lull themselves into an over-optimistic dream and totally forget, or not even be aware of, the practical issues with different techniques. When talking about their favourite technique, I constantly see people bring up only the positive things about their beloved pet, and that doesn’t give me the impression they have a comprehensive understanding of it.

About cubemapping: yes, I’m well aware that it’s not a perfect solution, but it’s a good, simple and efficient approximation which fits the purpose in most cases; in other words, it’s practical. If you want to deal with dynamic cubemapping, which actually isn’t needed that often to give a believable impression of interreflectance, you need memory for only a single cubemap, since you can recycle the memory across different objects. There are also solutions coming for the shortcomings of rendering to a cubemap.

0
101 Dec 15, 2004 at 00:43

Okay, I feel I have to butt in here, and I probably shouldn’t because I’ve skipped half of the BS you guys have been talking… but davepermen… FFS dude, ray tracing and rasterization cannot be compared to each other; the ray tracing algorithm in general increases exponentially in cost with scene complexity, as opposed to rasterization, which scales a lot more efficiently with scene complexity. Also, last I checked, OpenRT was realtime on a cluster of a few P3’s or something (I haven’t referenced this recently, this is just off the top of my head, so excuse me if I’m wrong…). And last but not least… Global Illumination has been proven via multiple methods on GPUs via shaders. Most of these require ray tracing; however, it still proves that GI is possible on GPUs.

And once again, I don’t know if you’ve already discussed this, as I am not going to read through all that crap; but if you have, it seems you’ve skipped over it awfully quickly, which means you have not taken these few things into careful consideration… (I mainly speak of the ray tracing algorithm in general here)

0
101 Dec 15, 2004 at 01:59

@Smokey

And last but not least… Global Illumination has been proven via multiple methods on GPUs via shaders. Most of these require ray tracing; however, it still proves that GI is possible on GPUs.

>> more precisely, a precomputed, localized rendition of GI has been demonstrated on the GPU (aka precomputed radiance transfer in DX9)…the caveats list includes no arbitrary movement of objects, no vertex deformation, no high-frequency lighting conditions, no surface color changes etc. among other restrictions…this is a far cry from the truly interactive GI that raytracing is capable of.
@Altair

About cubemapping: yes, I’m well aware that it’s not a perfect solution, but it’s a good, simple and efficient approximation which fits the purpose in most cases; in other words, it’s practical. If you want to deal with dynamic cubemapping, which actually isn’t needed that often to give a believable impression of interreflectance, you need memory for only a single cubemap, since you can recycle the memory across different objects. There are also solutions coming for the shortcomings of rendering to a cubemap.

>> not really…there has to be a unique cubemap for every reflective object for the final render pass…you can’t reuse the same cubemap from object to object or else they’d all have the same reflections…and so the number of cubemaps quickly escalates
>> interreflections can be approximated with cubemaps only if you break objects into convex components…just imagine how complex the content generation and technical implementation process becomes…not to mention when objects deform
>> and don’t forget about other global effects like diffuse interreflectance, caustics interreflectance, color transfer through translucency, shadowed reflected caustics etc etc…simple reflections are only the *tip* of the iceberg which I highlighted to make a point
>> more serious apps like visualizations, product showcases, training/simulations etc typically have more stringent requirements on visual fidelity than games do…just because games can get by with visual inconsistencies doesn’t mean other kinds of apps can

0
101 Dec 15, 2004 at 02:18

@Melvin

>> not really…there has to be a unique cubemap for every reflective object for the final render pass…you can’t reuse the same cubemap from object to object or else they’d all have the same reflections…and so the number of cubemap quickly escalates

It seems you don’t have basic knowledge of dynamic cubemapping, so let me explain. You simply render the scene to a cubemap for each object before you render the object, reusing the same memory. Anyway, this kind of dynamic cubemapping is overkill most of the time, just like raytraced interreflections would be.

0
101 Dec 15, 2004 at 06:43

>> suppose you “simply render the scene to a cubemap for each object before you render the object, reusing the same memory”…consider 10 reflective objects…if you render 10 times to the *same* cubemap from different viewpoints, you’re effectively overwriting the contents 10 times, so all 10 objects will reference the same cubemap contents as of the last render when you finally render all 10 objects into the user’s camera…so you get the *same* reflections off the 10 objects…bottom line is, you need 10 cubemaps to hold 10 unique reflection contents at the same time
>> “this kind of dynamic cubemapping is overkill most of the time, just like raytracing interreflections would be”…it may well be overkill for games *now*…but like I said, take the big picture of the 3D industry as a whole…in serious apps like architectural walkthroughs, product visualization and training simulations where correct visual cues are paramount, it may not be “overkill” but an actual requirement…which is probably why large companies like Boeing, automobile corporations etc look into rendering solutions like OpenRT to provide accurate visualization of large datasets
>> & one more thing…expectations evolve over time…why, elaborate self-shadowing, bumpy surfaces, full-scene glare and what not might have been yesterday’s “overkill” effects, but witness how they’ve become today’s indispensable features in any cutting-edge 3D app. Fast forward 5 years from now…it’s a fair guess that global effects will be expected to be commonplace, what with emerging genres like “interactive cinematic gameplay” or even “realtime movie experience”…it may no longer be acceptable *not* to see the spout’s reflection on the teapot’s shiny body by that time…by then, I don’t fancy the prospect of spending loads of time writing tons of complicated rendering code to fudge some semblance of global effects…such tasks are best left to raytracing while I move on to other more pressing challenges

0
101 Dec 15, 2004 at 15:34

@Melvin

>> suppose you “simply render the scene to a cubemap for each object before you render the object, reusing the same memory”…consider 10 reflective objects…if you render 10 times to the *same* cubemap from different viewpoints, you’re effectively overwriting the contents 10 times, so all 10 objects will reference the same cubemap contents as of the last render when you finally render all 10 objects into the user’s camera…so you get the *same* reflections off the 10 objects…bottom line is, you need 10 cubemaps to hold 10 unique reflection contents at the same time

The point is, you don’t need to hold 10 unique reflection contents (for 10 objects) at the same time :rolleyes: It would be downright stupid to do things like that. Even the way I described is an extremely naive (though straightforward) implementation of dynamic cubemapping. To translate this to coder language:

for each object with dynamic cubemap
{
    render scene to a cubemap from the pivot point of the object
    render the object by using the cubemap
}


That’s pretty trivial, don’t you think?
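As a sketch of that loop with the memory reuse made explicit (Python, with a hypothetical toy Renderer class standing in for the actual graphics API; alloc_cubemap, render_scene_to_cubemap and render_object are made-up names):

```python
class Renderer:
    """Toy stand-in for a GPU renderer, tracking cubemap memory use."""
    def __init__(self):
        self.live_cubemaps = 0
        self.peak_cubemaps = 0

    def alloc_cubemap(self):
        self.live_cubemaps += 1
        self.peak_cubemaps = max(self.peak_cubemaps, self.live_cubemaps)
        return object()   # placeholder texture handle

    def render_scene_to_cubemap(self, cubemap, pivot):
        pass              # six render-to-texture passes would go here

    def render_object(self, obj, cubemap):
        pass              # final pass samples the cubemap for reflections

r = Renderer()
cubemap = r.alloc_cubemap()   # one allocation, recycled for every object
reflective_objects = [("teapot", (0, 0, 0)), ("sphere", (3, 0, 0))]
for obj, pivot in reflective_objects:
    r.render_scene_to_cubemap(cubemap, pivot)  # overwrite previous contents
    r.render_object(obj, cubemap)              # consume it before moving on
print(r.peak_cubemaps)   # 1: memory for a single cubemap suffices
```

Because each object is rendered immediately after its cubemap pass, the contents are consumed before the next object overwrites them, so peak memory stays at one cubemap regardless of object count.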
@Melvin

take the big picture of the 3D industry as a whole…in serious apps like architectural walkthroughs, product visualization and training simulations where correct visual cues are paramount, it may not be “overkill” but an actual requirement

Let’s face it, GPU technology is driven by games, not architectural walkthroughs, nor the rendering of reflective & refractive spheres, which, not surprisingly, is what the raytracing demos advocating the technology are all about. Even though I agree that once you get used to a certain level of quality you start to notice what’s lacking in certain subtle areas, I don’t see raytracing taking over, simply because it would be a major leap backwards in performance, quality and flexibility. Also, many of those subtle effects are very far from cheap to compute using raytracing (GI, smooth reflections, etc.) and fall into the “naive implementation” category, so even if you had extremely fast raytracing HW in your hands, you probably wouldn’t use it to compute those effects in practice but would need to seek alternative solutions anyway.

0
101 Dec 16, 2004 at 08:00

@Altair

The point is, you don’t need to hold 10 unique reflection contents (for 10 objects) at the same time :rolleyes: It would be downright stupid to do things like that. Even the way I described is an extremely naive (though straightforward) implementation of dynamic cubemapping. To translate this to coder language:

for each object with dynamic cubemap
{
    render scene to a cubemap from the pivot point of the object
    render the object by using the cubemap
}


That’s pretty trivial, don’t you think?

>> yes, I’d thought of that initially…but this would imply the need to switch between more than one depth buffer that (i) stores the depth info as you render the scene into the cubemap for the reflective object you’re currently rendering and (ii) stores the depth info as you render the reflective object into the final render target…and as if this complication in the render pipeline isn’t bad enough, this approach just plain doesn’t work…coz the moment you render the scene into the cubemap for reflective object A, reflective object B that’s being reflected by A, and which also references the *same* cubemap, basically references the same reflection content as A…and that’s wrong…plus, what’s more, B can’t read the same cubemap that’s currently being written to…unless of course it has its own reflection cubemap (duh)…in other words, N reflective objects require N cubemaps
>> trivial? not at all…like all the other effects we try to fudge with rasterization, it only seems trivial on the surface, but compounds exponentially in implementation complexity the more effects we combine together…unlike raytracing which elegantly does away with these messy multipass, multitexture complexity
>> this is an example of how convoluted and unintuitive rasterization approaches are…and we haven’t even considered anything beyond simple reflections yet
@Altair

Lets face it, GPU technology is driven by games, not architectural walkthroughs, nor rendering of reflective & refractive spheres, which not surprisingly raytracing demos are all about to advocate the technology.

>> in truth, the graphics industry creates the demand for its products(isn’t it the same for business everywhere from slimming pills to golf clubs to automobile makers?)…the games industry drives it as much as it drives the former…remember when bump-mapping was first trotted out by hardware vendors, and how *long* it took for developers to really embrace it? and the gazillion other spanking new features that have been steadily rolling out of chip foundries even before developers everywhere could breathe a collective sigh of relief that their engines supported the “latest hardware features”(notwithstanding how much of a moving target that is)…so let’s face this, the graphics business creates fresh demand for new features to stay competitive and “relevant”…and developers as consumers lap it up, while clamouring for feature X, which vendors readily oblige…so what’s wrong with raytracing joining in the fray? coz it’s a “little too fancy” for gaming tastes? I still remember how ridiculously glitzy early hardware bumpmapping demos looked…and now bumpmapping’s on every darn wall, box and creature.
>> the Boeing showcase serves to highlight visualization of massive datasets…and “reflective & refractive spheres” are not all there is to raytracing (they’re but a small aspect of GI)…full GI takes care of all visual complexity while steering clear of unnecessary implementation complexity…and such simplification of the 3D development process is a major step in the right direction
>> one shouldn’t dismiss the fact that 3D technology development is also driven by other industries eg. large scale image generators by Evans & Sutherland for training/simulations, massively parallel graphics servers by SGI for supercomputing visualization needs etc…
>> it is easy to hold a dim view of raytracing, that understandably stems in no small part from the “alpha version look” of raytracing technology at its current state…however, that’d be making up one’s mind before the race has even started in earnest
@Altair

Even though I agree that once you get used to certain level of quality, you start to notice lacks in certain subtle areas, I don’t see raytracing taking over simply because it would be major leap backwards in performance, quality and flexibility.

>> I’d hardly call global effects subtle any more than I’d call soft shadows “subtle”…various kinds of effects can appear glaring or subtle depending on myriad conditions…what’s “subtle” can look really obvious the next moment and what’s “glaring” can seem inconsequential the next
>> given the obvious visual quality that raytracing has historically been lauded for, I don’t quite get your constant allusions to raytracing producing “inferior quality”…as for performance, that still remains to be seen, though the massive sunflower field demo running on the prototype shows a glimmer of things to come
@Altair

Also, many of those subtle effects are very far from cheap to compute using raytracing (GI, smooth reflections, etc.) and fall into the “naive implementation” category, so even if you had extremely fast raytracing HW in your hands, you probably wouldn’t use it to compute those effects in practice but would need to seek alternative solutions anyway.

>> tradeoffs between quality and performance can also be applied to raytracing, like LOD, number of ray bounces etc…if I had extremely fast raytracing hardware, I’d save tons of development time by not having to develop elaborate graphics hacks, compared to making do with extremely fast rasterization hardware…and there are many more interesting challenges besides mere rendering

0
101 Dec 16, 2004 at 12:00

I’ve recently started reading a lot of Ingo Wald’s publications (see: http://graphics.cs.uni-sb.de/~wald/Publications/index.html), and also a lot on SaarCOR… and believe it or not, I think davepermen may be right… This actually does look like a viable solution. There have already been tests implementing games with a realtime ray tracing renderer, and they came out with realtime frame rates (5-20), both using 30GHz of virtual CPU (over a cluster) and the SaarCOR chip, and they did not have any problems… I’m not entirely sure what I’m saying here, but I think I agree with davepermen on a lot of things now…

0
101 Dec 16, 2004 at 15:51

@Melvin

>> yes, I’d thought of that initially…but this would imply the need to switch between >1 depth buffers that (i)store the depth info as you render the scene into the cubemap for the reflective object you’re currently rendering (ii)store the depth info as you render the reflective object into the final render target…and as if this complication in the render pipeline isn’t bad enough, this approach just plain doesn’t work…

What does it matter if you have to switch between depth buffers? You seem to make a big deal out of trivial things. And yes, the approach does work, since you don’t need dynamic cubemaps when rendering the dynamic cubemap itself, unless you do recursive reflections. In the approach where you need only a single cubemap, the depth of the recursion is 1. Even in raytracing you need to put a cap on the depth of your recursion and eventually revert to a static cubemap or something similar.
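That recursion cap with a fallback might look like this as a minimal sketch (Python; MAX_DEPTH, the shading labels and the static-cubemap fallback are all hypothetical placeholders, not any real raytracer’s API):

```python
MAX_DEPTH = 2   # cap on reflection bounces

def trace(ray_depth):
    """Toy recursive reflection trace: each hit on a mirror surface
    spawns one reflection ray until the depth cap is reached, at which
    point we fall back to a cheap approximation (e.g. a static cubemap)."""
    if ray_depth >= MAX_DEPTH:
        return "static-cubemap-fallback"
    # ...intersect the scene, shade locally, then recurse for the reflection
    return ["local-shading", trace(ray_depth + 1)]

print(trace(0))
# ['local-shading', ['local-shading', 'static-cubemap-fallback']]
```

Both camps end up doing the same thing at the cap; the difference is only how many bounces are affordable before the fallback kicks in.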
@Melvin

>> in truth, the graphics industry creates the demand for its products(isn’t it the same for business everywhere from slimming pills to golf clubs to automobile makers?)…the games industry drives it as much as it drives the former… remember when bump-mapping was first trotted out by hardware vendors, and how *long* it took for developers to really embrace it?

I don’t know where you came up with that “truth”, but GPU manufacturers want to spend transistors where it matters. Why do you think they have the quad pipeline architecture? Why do you think NVidia has their UltraShadow technology? Those technologies exist because they happen to give the biggest bang for the buck for current games and games released in the near future. Of course GPU manufacturers also have their visions and they promote those visions to game developers, but in the end what matters is how that technology is exposed in games. That’s why GPU manufacturers have developer relationships: to be aware of exactly what and how game developers are doing things on the rendering side, so that they can focus on the things that matter and evangelize the use of their GPUs.

There are also things like production issues which prevent using certain technologies in games immediately when they are exposed in GPUs. It takes time to learn to use new technology and to have good tools available for its utilization. In the case of normalmapping it takes significant production effort from game developers to make extensive use of the technique, and that’s partly the reason why our production team has more than doubled in size. Not that many game developers are yet ready to invest that amount of money to make extensive use of normalmapping, and would rather wait for tools to mature and knowledge to spread to make the effort, and thus the investment, smaller.
@Melvin

so what’s wrong with raytracing joining in the fray? coz it’s a “little too fancy” for gaming tastes? I still remember how ridiculously glitzy early hardware bumpmapping demos looked…and now bumpmapping’s on every darn wall, box and creature.

If you provide raytracing as an additional feature on top of rasterization which doesn’t interfere with the rest of the architecture, I don’t see much wrong with it. The worst thing that could happen is that GPU manufacturers simply waste die size & money on some technology that no one uses (not that it has never happened before) and that could have been spent on improving things that matter. Anyway, as I see it, introducing raytracing to current GPUs and exposing its functionality to the level you are talking about would not only change the whole architecture of the chip, with all the potential problems I brought up, but also the way applications (games) deal with it and APIs expose it. Now, consider the implications of the change to the whole picture versus the subtle gains you would have from raytracing, and suddenly sticking with rasterization and finding solutions by using it starts to appear a much more appealing alternative - at least if you see the big picture, that is.

I know it’s easy to propose new ideas without thinking of the implications or being responsible for them, and you hear this particularly from new developers. How many (particularly new) developers want to just throw the engine they are using out of the window and rewrite the whole thing from scratch, because they “know” how it should be done, wasting tons of valuable time put into debugging, learning, optimizing, etc. of the old engine :rolleyes:
@Melvin

the Boeing showcase serves to highlight visualization of massive datasets…

And as I have told you many times already, you don’t need raytracing to visualize massive datasets.
@Melvin

>> given the obvious visual quality that raytracing has historically been lauded for, I don’t quite get your constantly alluding to raytracing for producing “inferior quality”…as for performance, that still remains to be seen, though the massive sunflower field demo running on the prototype shows a glimmer of things to come

It’s the quality:performance ratio I’m talking about, not plain quality. Of course if you have infinite processing power the quality of raytracing will outshine rasterization, but that’s not the case in the real world. Even for non-realtime movie CG shots you can’t forget performance.
@Melvin

>> tradeoffs between quality and performance can also be applied to raytracing, like LOD, number of ray bounces, etc… if I had extremely fast raytracing hardware, I’d save tons of development time by not having to develop the elaborate graphics hacks I’d need if I had to make do with extremely fast rasterization hardware… and there are many more interesting challenges besides mere rendering

I don’t know how much experience you have in developing “elaborate graphics hacks” in games, but most of the “hacks” we have to do are about achieving adequate performance and getting around the limitations of shaders, particularly sm1.1 shaders. I don’t see raytracing helping to save us any development time in either of these cases.

Anyway, it has been a pleasure to discuss this with you guys even though it did heat up a bit on a few occasions :) It definitely made me think about the future of gfx technology more than I probably would have just by myself, but now I need to focus on the more “pressing challenges” of finishing the game we are working on.

Cheers, Altair

0
101 Dec 17, 2004 at 10:33

@Altair

for each object with dynamic cubemap
{
    render scene to a cubemap from the pivot point of the object
    render the object by using the cubemap
}


That’s pretty trivial, don’t you think?

Well, I’m busy doing a last-minute rewrite of the PS2 rendering system so I will keep this brief … but Altair, you do appreciate that this is only trivial for convex objects? Non-convex objects reflect other bits of themselves, and for arbitrarily complex objects cube mapping can no longer cut it without breaking the object up into convex parts…

I know this is a pretty cheesy point, but do remember that cube mapping is not a be-all and end-all solution :) It was introduced (for raytracing, quaintly enough) because tracing all the reflected rays was just too expensive :)

Anyway another tuppence from myself :)

0
101 Jan 13, 2005 at 12:41

Okay, after a few weeks of intense research (which has also led to a new field of study for myself), I fear I must deeply apologize for what I’ve said earlier in this thread. My statements were more or less completely incorrect in almost every way possible. I have read a lot of theses, articles, tutorials and the like on ray tracing and scene traversal algorithms for various spatial partitioning/sub-division schemes, and only now realise the potential ray tracing has.

I still believe, as in my first post in this thread, that global illumination (probably via photon mapping) will be the next big thing in computer graphics (which will use ray tracing; however, photon density estimation for irradiance will be done in rasterization hardware, via shaders or textures… as I do not see rasterization subsiding in the near future). However, ray tracing has absolutely impeccable potential in relation to computer graphics, as well as to computer simulations in general, more specifically acoustics and physics.

In relation to hardware and ray tracing… I do not see nVidia nor ATI going down this track at all, and if I’m not mistaken nVidia have said they will not be touching ray tracing in their hardware. So if anything is to happen here, SaarCOR, if successful, will be what we should expect to see once ray tracing becomes more viable for mainstream graphics… (which is yet another lengthy discussion in itself, which I don’t think I’ll get into here)

Anyway, I just thought this thread required a bit of a kick… It’s definitely still worth discussing in my opinion, and one of the most interesting threads I’ve read on a forum in quite some time. (It has apparently even caught the attention of Jacco Bikker… and therefore most probably Thierry Berger-Perrin… people who I’m sure could give a lot of feedback on ray tracing…)

Oh and Davepermen, sorry for my ignorance earlier. I should have done my research before even thinking about responding.

Can’t wait to hear everyone’s thoughts! :D

P.S. I’m currently nearing completion of the first version of my graphics engine… So I should be around the forum a lot more. :D

0
101 Aug 29, 2005 at 04:57

I believe that there will eventually be a limit to GPUs in terms of realism. Hardware is being driven more and more towards how nature tends to behave. The fact is, the way current renderers work is somewhat unnatural physically. Raytracing is the way to go since they completely simulate reality. This is the next big step: creating a mainstream hardware-accelerated raytracer.

0
101 Aug 29, 2005 at 06:26

Raytracing is the way to go since they completely simulate reality.

How do _they_ do that ?
I’m a huge fan of ray tracing and all its descendants, but let’s not overestimate what you can do with it.

0
139 Sep 02, 2005 at 19:21

Hehe, they don’t completely simulate reality by any means. However, he has a point that raytracing is a bit more physically based than rasterization. This doesn’t mean that rasterization necessarily produces physically incorrect images, though.

0
101 Sep 04, 2005 at 23:02

I think we need the best out of both worlds…

GPU vendors could add a raytracing unit that you can call from a pixel shader, or something like that. But there are many problems, like holding and updating the necessary scene graph, especially with dynamic scenes.

0
101 Sep 05, 2005 at 17:19

@anubis

Raytracing is the way to go since they completely simulate reality.

How do _they_ do that ?
I’m a huge fan of ray tracing and all its descendants, but let’s not overestimate what you can do with it.

[snapback]20443[/snapback]

well, raytracers give the possibility to scale towards completely correct physical simulation, as close to it as we know how.

they also give the possibility to scale down to about the quality of doom3 (which is Whitted raytracing + some fake effects for pseudo-caustics and similar stuff).

every cheat today’s gpus use to fake something can be used to fake the same thing in a raytracer. but there, we have the choice to do it correctly as well. that’s what altair somehow never wants; he wants to stay with the fakes and thinks they are good enough.. :D (but the performance drop per fake gets bigger and bigger.. integrating the fakes to make a completely faked engine gets harder and harder..)

well, i’m getting off-topic. fact is, i miss you on msn!! :D

0
101 Sep 06, 2005 at 12:04

@davepermen

well, raytracers give the possibility to scale towards complete correct physical simulation as close as we know about it.

And how does a ray tracer scale toward complete diffuse lighting interaction? You’d have to generate an infinite number of rays for every ray that hits a diffuse surface…

But I suppose it’s THEORETICALLY possible … :rolleyes:
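For what it’s worth, the usual answer to the “infinite rays” objection is Monte Carlo integration: a finite, randomly chosen set of rays gives an unbiased estimate of the diffuse integral, and the error shrinks as samples are added. A minimal sketch (function names are mine, not from any engine discussed here):

```python
import math, random

def sample_cosine_hemisphere(rng):
    # Cosine-weighted direction about the surface normal (0, 0, 1):
    # pdf(w) = cos(theta) / pi, which exactly cancels the cosine term
    # of the rendering equation for a Lambertian surface.
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_diffuse(incoming, albedo, n_samples, seed=1):
    # Monte Carlo estimate of outgoing radiance: with cosine-weighted
    # sampling the estimator reduces to albedo * mean(incoming radiance).
    rng = random.Random(seed)
    total = sum(incoming(sample_cosine_hemisphere(rng))
                for _ in range(n_samples))
    return albedo * total / n_samples

# Under a uniform sky of radiance 1.0 the exact answer is the albedo
# itself, and any finite sample count already returns it.
print(estimate_diffuse(lambda d: 1.0, 0.5, 64))  # -> 0.5
```

With a non-uniform environment the estimate is noisy rather than wrong, which is exactly the quality/performance dial the thread keeps arguing about.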

0
101 Sep 06, 2005 at 13:58

I think dave is right. With more advanced lighting models in realtime, rasterizing will get more and more hacky, while raytracing could provide a clean solution for many problems we currently face.

Absolutely physically correct lighting is not a goal that we should have for game programming. You won’t notice the difference from photon mapping anyway :)

I think after WGF 2, IHVs will perhaps add raytracing units to their HW. The shaders will be unified by then, so they “only” have to add a raytracer in addition to the rasterizer unit.

The main problem will be dynamic geometry. I can’t imagine a solution to the problem that vertex shaders can move a vertex to any position. Raytracers without a hierarchical scene structure are slow, which means the structure would have to be updated after every position change.
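One common workaround for deformation (sketched below; the names are my own, not from any shipping raytracer) is to keep the hierarchy’s topology fixed and only refit its bounding boxes bottom-up after the geometry moves. That is O(n) per frame, though tree quality degrades under large deformations, so an occasional full rebuild is still needed:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    lo: list                      # bounding box min corner
    hi: list                      # bounding box max corner
    tri: int = -1                 # leaf nodes store a triangle index
    children: list = field(default_factory=list)

def refit(node, tri_bounds):
    # One bottom-up pass recomputes every box in O(n) - far cheaper
    # than rebuilding the whole tree after each position change.
    if node.tri >= 0:
        node.lo, node.hi = tri_bounds[node.tri]
    else:
        for c in node.children:
            refit(c, tri_bounds)
        node.lo = [min(c.lo[i] for c in node.children) for i in range(3)]
        node.hi = [max(c.hi[i] for c in node.children) for i in range(3)]

# Triangle 1 moves (e.g. after skinning); the root box follows, no rebuild.
leaf0 = Node([0, 0, 0], [1, 1, 1], tri=0)
leaf1 = Node([2, 0, 0], [3, 1, 1], tri=1)
root = Node([0, 0, 0], [3, 1, 1], children=[leaf0, leaf1])
refit(root, {0: ([0, 0, 0], [1, 1, 1]), 1: ([4, 0, 0], [5, 1, 1])})
print(root.hi)  # -> [5, 1, 1]
```

This doesn’t solve the vertex-shader case (the host never sees the displaced positions), which is precisely the open problem being pointed out here.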

0
101 Sep 06, 2005 at 15:07

@davepermen

every cheat today’s gpus use to fake something can be used to fake the same thing in a raytracer. but there, we have the choice to do it correctly as well. that’s what altair somehow never wants; he wants to stay with the fakes and thinks they are good enough.. :D (but the performance drop per fake gets bigger and bigger.. integrating the fakes to make a completely faked engine gets harder and harder..)

well, i’m getting offtopic. fact is, i miss you in msn!! :D

[snapback]20865[/snapback]

After reading this long discussion I must say I’m thinking more along davepermen’s lines, and am somewhat surprised by the tough stance Altair is taking here.

Although the points he presents are valid as far as the current game industry is concerned - such as development and training costs, and the time it takes to develop new APIs and a final product - they are not good enough reasons to stop progress and stop finding new ways of doing things.
@Altair

..
Now, consider the implications of the change to the whole picture versus the subtle gains you would have from raytracing, and suddenly sticking with rasterization and finding solutions by using it starts to appear a much more appealing alternative - at least if you see the big picture, that is.

I don’t think the gains would necessarily be “subtle” if we had raytracing hardware which had seen at least 10 years of intense development and research. The first rasterizing GPUs were not really that impressive and lacked both the speed and the majority of features we can now expect to find in consumer-class hardware. I don’t think sticking to rasterization *only* is a good approach in the long run.

We have been seeing somewhat incremental progress with current graphics hardware, and I don’t know if there can be an incremental path for future GPUs to provide feasible raytracing support alongside the rasterizer. Maybe it turns out to be easier than we think, or maybe the road ahead is bumpy for the pioneers of this technology and for companies which are willing to take risks, but after all, I see it as inevitable that we will be seeing a game making use of raytracing technology sooner or later. After all, game developers are (or should be) keen - and the competition keeps us so - on finding new ways to provide the ultimate game experience for players. If raytracing can help us bring interesting visuals to games, I’m all for it. And there surely are people who are willing to take the risks. One can always say there will be problems ahead, but there always are. If one is afraid of possible problems and therefore does nothing, he/she’s going nowhere.

I have seen raytracing produce better images than any rasterizer so far; that alone is a good reason to keep on researching and trying it.

Juhani

0
101 Sep 06, 2005 at 18:45

i know of tons of scenarios which simply don’t exist in current games graphically, which would ROCK to play in, would look awesome, but are simply impossible to visualise on current hw..

in raytracing hw, it would be doable, and would definitely add a lot to gaming immersion/experience.. blabla :D (just throwing buzzwords)

well, anyways.. once we get far enough with realtime raytracing i hope i can set up some nice example with my friends, hehe.

another important thing: any form of precalculation will become useless the more dynamic games get. and as we know, that’s the trend: PPU, PhysX, just type it into google :D
with complex, fully dynamically changeable scenes, stuff like precalculated radiance transfer simply doesn’t work anymore. and you have to stay with what you can do in realtime, dynamically. suddenly, a lot of the hacks altair loves so much won’t work anymore..

0
101 Sep 06, 2005 at 21:02

And how do you want to update the scene tree for the raytracer in those cases?

0
101 Sep 11, 2005 at 15:43

@Axel

And how do you want to update the scenetree for the raytracer in that cases? [snapback]20930[/snapback]

Every time someone throws the word “dynamic” into a raytracing discussion it has a devastating effect :D I agree 100% with davepermen that games are by nature dynamic and will become more and more dynamic in the future. However, it’s raytracing that suffers from this dynamic nature, because it relies heavily on precomputation at the fundamental level of the algorithm. Rasterizers are actually moving away from precomputation (e.g. lightmaps) and can choose to use precomputation depending on the application.

I attended a few raytracing courses at Siggraph this year and people seem to be struggling with the same fundamental problems as years ago (e.g. efficiently building & updating a KD-tree for raytracing). Also, very basic things in games, like skinning, are conveniently pushed into the background in raytracing discussions. Anyway, once we get those consumer-class quantum computers on the market, the raytracing problems are solved, but current 6-qubit test-lab quantum computers don’t quite cut it yet :wink:

Juhnu et al, don’t take me wrong, I totally wish we could have those raytracing units which could process rays as cheaply as butter, but when the algorithm has fundamental performance flaws my hope is running quite thin.
@juhnu

The first rasterizing GPUs were not really that impressive and lacked both the speed and the majority of features we can now expect to find in consumer-class hardware.

Btw, you are dead wrong here. I don’t know where you pulled that argument from or what the first 3D HW you ever saw was, but for me it was Pyramid 3D by BitBoys (-97) and I was totally impressed by the capabilities of that chip at the time. I would argue that rasterizing HW ~10 years ago was more capable than raytracing HW is today :rolleyes:

Cheers, Altair

0
101 Sep 12, 2005 at 11:12

I would argue that rasterizing HW ~10 years ago was more capable than raytracing HW today

Some would argue that if the same amount of money were spent on research and production of ray tracing, we would be some place else by now. You remind me of all those right-wingers who claim that alternative energy is not viable because it does not generate enough output, but fail to see that the amount of money spent on the development of nuclear power is incomparably larger (note: I’m not calling you a right-winger :)). To conclude: just because there are no solutions right now to certain problems in ray tracing doesn’t mean that they don’t exist.

0
101 Sep 12, 2005 at 13:42

@anubis

To conclude: just because there are no solutions right now to certain problems in ray tracing doesn’t mean that they don’t exist.

With all due respect … it doesn’t mean they do either …

0
101 Sep 12, 2005 at 14:38

@anubis

Some would argue that if the same amount of money were spent on research and production of ray tracing we would be some place else now.

Some people do argue that indeed, but it sounds more like a convenient excuse for raytracing not taking off. Pyramid 3D was developed on a low budget comparable to the SaarCOR chip’s AFAIK, AND ~10 years ago. I don’t know much about ASIC engineering, but I would imagine it’s easier/cheaper now than 10 years ago. Big companies like nVidia and Intel have also done research in the area, so it’s not unexplored territory for HW manufacturers. For sure more money has been invested in raytracing HW research by now than was invested in the first shipped commercial 3D rasterizer chip. Raytracing isn’t new science either, and the movie business also provides direct commercial motivation for it.

So why has no commercial HW raytracing chip emerged? I don’t think it’s because HW manufacturers haven’t wanted to or haven’t invested in it, but because there are too many fundamental issues to begin with.
@anubis

To conclude : Just because there are no solutions right now to certain problems in ray tracing doesn’t mean that they don’t exist.

That’s right, but you need somewhat more solid arguments to back that up if you want a constructive conversation :wink: You can’t just shy away from the fact that raytracing is a fundamentally complex algorithm compared to rasterization, and that complexity doesn’t just fade away.

Cheers, Altair

0
101 Sep 12, 2005 at 19:02

You can’t just shy away from the fact that raytracing is fundamentally complex algorithm

I am not shying away from the fact that dynamic geometry is currently one of the biggest issues in ray tracing, but so were other things for raster graphics a few years back.

when compared to rasterization, and that complexity doesn’t just fade away.

By that argument I could say that the whole move to programmable hardware is a bad one because it introduces so much more complexity into the pipeline. Of course you take on more complex problems with increasing hardware capabilities; it’s the nature of computer science. But what you are claiming is that no one should try to optimize ray tracing because rasterizers are superior anyway (at least you seem to feel superior to “the poor souls” who don’t follow the “enlightened” path). The fact of the matter, though, is that ray tracing provides some pretty elegant solutions to problems that are addressed in a pretty hacked way right now.

I attended a few raytracing courses at Siggraph this year and people seem to be struggling with the same fundamental problems as years ago

I found this year’s Siggraph courses interesting in that exactly the problems of dynamic scenes and animation were addressed, without a big cover-up of the problems at hand. Mind you, I can only judge from the keynote presentations, since I don’t have the money to fly over to America just for Siggraph.

0
101 Sep 12, 2005 at 21:04

@anubis

I am not shying away from the fact that dynamic geometry is currently one of the biggest issues in ray tracing, but so were other things for raster graphics a few years back

Could you elaborate on which “other things” you are referring to?
@anubis

By that arguement i could say that the whole move to programmable hardware is a bad one because it introcduces so much more complexity to the pipeline.

That would be pretty martyr-like, but at least you would have some kind of point there. Introducing more complex programmability to the pipeline comes at the expense of performance (transistors allocated to features vs. performance), but at least this complexity is optional at the fundamental level of the rasterizer.
@anubis

But what you are claiming is that noone should try to optimize ray tracing because rasterizers are superior anyway.

I don’t think I ever said that; I merely pointed out the fundamental issues in raytracing for its advocates, who don’t seem to have a good grasp of the practical issues in computer graphics and who for some ridiculous reason seem to take it personally.
@anubis

I found this years Siggraph courses interesting in the way that exactly the problems of dynamic scenes and animation were addressed, without creating a big cover up for the problems at hand.

Could you point me to the references? Because in the courses I attended they were “addressed” in a very VERY vague/impractical way (i.e. test the ray against each triangle in a skinned mesh, etc.)
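For context, the brute-force fallback mentioned here really is one intersection test per triangle per ray, which is exactly why the acceleration-structure problem matters so much for skinned meshes. A minimal sketch of the standard Möller-Trumbore ray/triangle test (a textbook transcription in plain Python, not code from any course notes):

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Möller-Trumbore: solve orig + t*d = v0 + u*e1 + v*e2 with Cramer's
    # rule; (u, v) are barycentric coordinates, t is the hit distance.
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                    # ray parallel to triangle plane
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None      # distance along d, or a miss

# A ray fired straight down hits the unit triangle at distance 1.
print(ray_triangle([0.25, 0.25, 1.0], [0, 0, -1],
                   [0, 0, 0], [1, 0, 0], [0, 1, 0]))  # -> 1.0
```

Cheap per test, but without a hierarchy it runs once per triangle per ray, which is the “vague/impractical” part being complained about.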

Cheers, Altair

0
101 Sep 13, 2005 at 05:16

One can argue back and forth forever about what problems might or might not occur with hardware raytracing, but one thing is sure: in areas where even the highest-end Nvidia card is not fast enough and people have to use general-purpose processors, ray tracing is the method of choice for high-quality graphics. This is evidenced by every 3D animated film.

Hardware makers in all areas, including graphics, are looking for every way to parallelize the work, because we need more performance than we can get out of the silicon with just serial instructions. There have been some pretty amazing feats in parallel processing on board high-end graphics cards, but it remains a fact that ray tracing can leverage parallel hardware much better than rasterization.

It is true that rasterization is currently the way to go for real-time graphics, and the fact that the hardware is already in place plays no small part in that conclusion. Ray tracing scales much better, but it is true that rasterization is ridiculously cheap for scenes with lower complexity in geometry and effects. It may be true that at the current state of the art, rasterization hardware would still be faster than ray tracing hardware given the benefit of modern designs and budgets, but as we progress in demanded complexity and hardware ability, rasterization as an algorithm just cannot compete with ray tracing.

This is not to say that ray tracing is the holy grail, however. I believe that even ray tracing will eventually need to be replaced with a more accurate model of light interaction within the world. I think we will need to move to full-on photon tracing. This would have the added benefit of allowing an unlimited number of views from each wave of photons. This may turn out to be ideal for massively multiplayer games of the future, where all the photon tracing could be done on giant hardware behind the servers. But that is all just drooling on my part, and there is a good chance this will not come while I am young enough to care :)

0
101 Sep 13, 2005 at 06:16

Could you elaborate on which “other things” you are referring to?

Come on now… this is rhetoric. What do you want me to do? Tell you about the algorithmic developments of the last few years?

I don’t think I ever said that, but merely pointed out the fundamental issues in raytracing for its advocates, who doesn’t seem to have good grasp of practical issues in computer graphics and who for some ridiculous reason seem to take it personally.

I’m not sure what you are saying though… Rasterizing is currently more powerful? Has wider support? I agree. But what is the point of saying that? What do you want to tell the world?

Excuse me if I misinterpreted you, but it just sounds like you want to prove that ray tracing is futile. If that’s not the case and you just want to point us to the problems at hand, that’s nice. Rest assured, it’s being worked on.

Btw, addressing others as foolish just because they don’t share your point of view doesn’t prove your point one bit.

Could you point me the references, because in the courses I attended they were “addressed” in very VERY vague/impractical way (i.e. test ray against each triangle in a skinned mesh, etc.)

The sixth talk on that page is about dynamic scenes and animation.

0
101 Sep 13, 2005 at 09:45

One can argue back and forth forever about what problems might or might not occur with hardware raytracing, but one thing is sure. In areas where even the highest end Nvidia card is not high enough and people have to use general purpose processors, ray tracing is the method of choice for high quality graphics. This is evidenced by every 3D animated film.

I’m sorry .. are you saying that 3D animated films are done using ray tracing?

REYES is a triangle-subdivision rasteriser. I thought RenderMan was too …

0
101 Sep 13, 2005 at 13:23

@anubis

Come on now… this is rethoric. What do you want me to do ? Tell you about the algorithmic developments of the last years ?

I want you to realize that rasterization has never had such fundamental problems. It seems you don’t realize what kind of advantage the simplicity and coherency of the rasterization algorithm gives to a HW implementation. If you take a stab and claim there have been “other things”, then for God’s sake back yourself up if you want to claim some bit of credibility.
@anubis

I’m not sure what you are saying though… Rasterizing is more powerful currently ? Has a wider support ? I agree. But what is the point of saying that ? What do you want to tell the world ?
Excuse me if i misinterpreted you but it just sounds like you want to prove that rt tracing is futile. If that’s not the case and you just want to point us to the problems at hand that’s nice. Rest assured it’s being worked on.

I never said RT is futile; it’s just you taking it to a personal level and making the conversation black and white. I never said that rasterization was the ultimate rendering algorithm either. We both should realize at this point what the real advantages and disadvantages of both algorithms are, AND then work out how to address those problems in both. THAT’S what mature and constructive conversation is about, and I honestly thought we could have that conversation here. There has been some work on modifying the raytracing algorithm to be more suitable for stream processors (GPU Gems 2), and I think if you ever want to have well-performing raytracing HW, that’s what you need to do - modify the fundamental algorithm.
@anubis

Btw, addressing others as foolish just because they don’t share your point of view isn’t proving your point one bit.

What are you talking about? If you can’t have a constructive conversation and get offended because I don’t agree with you, don’t start pulling low personal tricks on me.
@anubis

http://www.openrt.de/Siggraph05/UpdatedCourseNotes/course.php
The sixth talk on that page is about dynamic scenes and animation.

I’ll check that out. I think it was actually one of the RT courses I attended.

Cheers, Altair

0
101 Sep 13, 2005 at 13:24

RenderMan actually doesn’t do rasterization or ray tracing; it’s just an interface. Pixar’s implementation uses Reyes, which was originally made to “overcome the speed and memory limitations of photorealistic algorithms, such as ray tracing, in use at the time.” This is from Wikipedia; they also say that nowadays “versions also include ray tracing and global illumination features”.

0
101 Sep 13, 2005 at 18:56

RenderMan actually doesn’t do rasterization or ray tracing; it’s just an interface. Pixar’s implementation uses Reyes, which was originally made to “overcome the speed and memory limitations of photorealistic algorithms, such as ray tracing, in use at the time.” This is from Wikipedia; they also say that nowadays “versions also include ray tracing and global illumination features”.

So it does :) Surprises me, tbh … most of the rendering looks very rasterised (though I openly admit RT and rasterisation can look identical - such a shame so few people make them do so … it’s like the plastic look of all the specularly bump-mapped games out there!)

TBH, though, any decent ray tracer does its first pass using rasterisation anyway … you pass the geometry through a rasteriser and store an ID buffer identifying which polygon is hit at each pixel. You then read back that ID buffer and perform the secondary passes over the geometry once the rasteriser has done the visibility detection … far, far cheaper with modern (and for that matter much older) technology :)

0
101 Sep 13, 2005 at 20:03

actually, if you do GI and such with your raytracer, primary rays are only a small part (and a constant one, while all the others can scale depending on the quality you want).. thus, in any raytracer other than Whitted-style, using rasterizing for first hits doesn’t give much, and is often not worth the additional effort (especially if a lot of the geometry isn’t simple triangular meshes, but volumetric stuff, point samples, etc.. or higher-order surfaces (yes, you could tessellate..)).

anyways, for realtime raytracers that don’t have much reflecting/refracting stuff, it definitely takes away a bit.. but if you think of a scene with, say, 5 lights, you have one primary and 5 shadow rays per pixel.. so only one sixth of the whole can get sped up. yes, that’s a bit, but it only gives some percent.. not a “wow, i sped up by 2x or 4x or something..”.
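The arithmetic behind this is just Amdahl’s law applied to the per-pixel ray budget; a tiny sanity check (the function name is mine):

```python
def speedup(total_rays, rays_eliminated):
    # Amdahl's law on the ray budget: if rasterizing the first hit makes
    # the primary rays free, only their share of the work goes away.
    return total_rays / (total_rays - rays_eliminated)

# Five lights: 1 primary + 5 shadow rays per pixel, primary made free.
print(speedup(6, 1))  # -> 1.2
# Only a single shadow ray per pixel would give a dramatic 2x.
print(speedup(2, 1))  # -> 2.0
```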

0
101 Sep 23, 2005 at 15:33

Real-time physically based caustics running completely on the GPU, with light dispersion. For more pictures, visit the project page: http://graphics.cs.ucf.edu/caustics

0
101 Oct 02, 2005 at 18:03

Very fancy! *_*

0
101 Oct 03, 2005 at 14:14

The original question is quite interesting. Sadly the thread is flooded with a big and rather pointless discussion in which a few members debate their knowledge (or the absence thereof) of raytracing. I stopped somewhere on page 3. Is there anything interesting after page 3? Can the pointless discussion be moved to another topic, please?

I agree that in terms of image quality, GI (or a better approximation of it) will be an important step forward. The bigger problem I see coming is the amount of data an engine needs to handle to create an even more realistic world. We will need ever more detailed objects with more detailed surface descriptions (among other properties, for physics etc). Already, artists outnumber programmers in most game development teams.
The next important thing (IMHO) is to start using procedural geometry and surface descriptions that can be expanded into full geometry + surface data by the hardware, to save storing all the tris or whatever in vidram. Objects could probably be rotated and moved around as procedurals rather than as tessellated objects with many thousands of verts.
I mean, how many gigs of RAM do we want to put on our gfx cards before we realize it’s never gonna be enough?

Alex

0
101 Oct 04, 2005 at 06:57

If you check the dates you will see that the thread got resurrected several times and is in fact a year old, so maybe that explains why there are strange topic jumps.

0
101 Oct 04, 2005 at 13:34

Indeed. Still I’d be interested to hear more opinions on this topic (as originally started in the first post)..so anyone?

Alex

0
101 Oct 04, 2005 at 17:18

@Alex

Indeed. Still I’d be interested to hear more opinions on this topic (as originally started in the first post)..so anyone? Alex

I read an article on ATI’s site the other day, on the subject you mentioned:

0
101 Oct 04, 2005 at 17:23

@Alex

Indeed. Still I’d be interested to hear more opinions on this topic (as originally started in the first post)..so anyone?

Just a little point. :-)
Ken Perlin has patented a hardware approach to his famous noise. So this could be another step toward real-time raytracing.

0
101 Oct 04, 2005 at 22:59

@zavie

Just a little point. :-)
Ken Perlin has patented a hardware approach to his famous noise. So this could be another step toward real-time raytracing.

I’d see it as a step away from that.

0
101 Oct 05, 2005 at 01:02

@Altair

Anyway, once we get those consumer class quantum computers in the market, raytracing problems are solved, but current 6 qubit test-lab quantum computers doesn’t quite cut it yet

I don’t think quantum computers are especially suited for raytracing. Do you know that only certain special algorithms work on them?

Or is there some theoretic work on quantum-raytracing-algorithms that I never heard of?

0
101 Oct 05, 2005 at 14:03

@Axel

I don’t think quantum computers are especially suited for raytracing. Do you know that only certain special algorithms work on them? Or is there some theoretic work on quantum-raytracing-algorithms that I never heard of?

I was half-joking ;) Anyway, what I know is that quantum computers can reduce the big-O complexity of some algorithms, so for example a search for an element in an unordered set of data can run in O(sqrt(n)) instead of the O(n) of classical computers. The way this would help with raytracing is that you wouldn't need an aux data structure (e.g. a kd-tree), but could just check the ray intersection against an unordered set of polygons in O(sqrt(n)) time. Not quite an O(1) test yet, but at least a step towards the real solution :)
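The query-count claim is easy to make concrete. This is only arithmetic about Grover's search, not a quantum implementation; the helper names are mine:

```python
import math

def classical_search_queries(n):
    # A classical search over an unordered set inspects up to n elements.
    return n

def grover_search_queries(n):
    # Grover's algorithm needs on the order of sqrt(n) oracle queries.
    return math.ceil(math.sqrt(n))

n = 1_000_000  # e.g. a million unsorted polygons to test a ray against
print(classical_search_queries(n), grover_search_queries(n))
```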

I don't claim to be an expert on the subject, but it at least sounds like an interesting area of research. They had some wacky problems with quantum computers, such as not being able to duplicate data (the no-cloning theorem), but they figured out some workaround to do this reliably using probabilities or something (don't ask me the details ;)). Maybe that's why you think you can only run certain algorithms on a quantum machine? There was a full-day course about quantum computing at SIGGRAPH, but unfortunately I didn't attend more than the introductory part. It's a bit difficult to justify to your employer how this would be useful for your next game ;)

Cheers, Altair

0
101 Oct 16, 2005 at 11:20

Okay - this kind of thread always gets hijacked by the raytracer people who are desperate for anything that will free them from the endless wait for a render…

But everyone has missed what the REAL need is for graphics to progress, whether real-time or raytraced. What really is keeping computer graphics from looking as good as, say, a motion picture is monitor technology.

If you go outside on a clear night and look at the sky, you will see the stars and the moon shining brighter than any monitor can reproduce. High-luminance objects like stars, high-albedo planets, car headlights, hell, even Christmas tree lights just can't be done on current monitors.

We need monitors that can reproduce the high luminance seen in reality. On current hardware such objects just go to a white washout far too soon.

-tAE-

0
101 Oct 17, 2005 at 14:19

@theAntiELVIS

We need monitors that can reproduce the high luminance seen in reality. On current hardware such objects just go to a white washout far too soon.

Actually, there has been progress in this area: HDR monitors were demoed at SIGGRAPH with still pictures and real-time 3D. I didn't find the technology that convincing, but that might be because of the limited and not especially well authored demo material. What was shown even appeared to be tailored to look bad on regular monitors to hype the technology, and even then it didn't really add THAT much to the experience, IMHO. Then again, with this type of quality increase you don't really notice that big a difference at first, but once you get used to it, it's really painful to go back. I guess it's the same deal with HDR monitors as when we transitioned from 8-bit sound to 16-bit :)

Cheers, Altair

0
101 Oct 17, 2005 at 16:08

This is definitely the most reanimated thread ever on DevMaster. We should finally close it, or whatever… :D

Hm, Altair, you should've stayed longer for the quantum part at SIGGRAPH. Definitely a cool thingy.

Hm… if an O(n) algorithm goes down to O(sqrt(n)), what happens to an O(ln(n)) algorithm? Well, an O(sqrt(ln(n))) algorithm or something similar would definitely be cool :D

I myself am now waiting for a nice low-level GPGPU interface for the new ATI cards. If that becomes available, I'll have to think about writing a rendering backend for my distributed ray tracer for it. It would definitely boost rendering times by some margin, I'd guess :D

0
101 Oct 17, 2005 at 16:38

What really is keeping computer graphics from looking as good as, say, a motion picture is monitor technology.

It is true that better monitors would let you display what you have in a better way. Still, they only let you display what you already have. I think the next big thing should do more than that: it should give you the possibility to go further and bring up pictures which could not be produced in real time before. So the raytracing adepts do not really hijack the thread here, but rather point out that raytracing brings you closer to reality.

0
101 Oct 25, 2005 at 09:51

Hi Everyone,

Looks like this thread is gonna be closed. :sad: I never thought I would learn so much from it when I started it almost a year ago. I have learnt A LOT of new things from this thread, and I want to thank all the people who have posted some cool things.

There were a lot of pointless 'post-count++' arguments: it's like one nice thought with a good point, followed by 5 to 10 arguments. BIG thanks to everyone for sharing your knowledge with us. It's not ATI, nVidia or EA that propel this industry, it's the community. I am a self-taught game coder; I never attended a class. Whatever I learned is what the community taught me.

I am pretty much convinced real-time ray tracing is the WAY TO GO, and I have no interest in further arguments about that statement.

Argument is essential to any research area; pointless arguments slow things down. There's got to be a balance. It's just that we always have a lot more issues coming up to think (or argue) about. Someone was arguing about how we can do inter-reflections and render 10 reflective objects using 10 render-to-texture cube maps.
10 cube maps??? 60 render-to-texture operations???? :wallbash:

Mature argument?? Yeah, sure…

Okay - this kind of thread always gets hijacked by the raytracer people who are desperate for anything that will free them from the endless wait for a render

It's not about ray-tracer people or rasterizer people. It's about real-time people. It's all about reducing that endless wait for a render, about getting close to reality at real-time rates.

We need monitors that can reproduce the high luminance seen in reality

Yeah, those HDR monitors aren't all that great. A higher range adds to the realism factor, but there are many other things. For now, we are faking it pretty well.

Sweeney said that some years down the line we would go video-like: we'd have that kind of processing power, and monitors would be the bottleneck from then on.

For the next few years, we will be working on getting the colour (brightness, whatever…) of pixels right and going photo-real. Right now the gap is wide enough to keep us busy (and thread-bloating :wallbash: ) for some time. Then we will hit the limitations of resolution.

From then on it's not about 'photo-realistic' but 'eye-realistic'. It's not easy to convince a human's sense of vision.

But for now, we need to live with GPUs for a while, and there is a lot to be desired on the GPU side.

I am not comfortable with the stream-based GPU architecture of programmable shaders. We are inside a for-loop iterating over vertices and pixels, and you can only write code inside the inner loop. Our existing rendering techniques could be simplified and improved significantly if we could deal with render data as a set, not per-element.

Currently we are just hacking around this problem: render-to-texture and, more recently, render-to-vertex-buffer (WTF). Current engine development is like stacking more and more hacks on top of each other and making them work together.

Hacks = code entropy + art-pipe entropy

In one of those GDC videos, ATI's RenderMonkey chick was talking about something like **"Dynamic Image-Space Per-Pixel Displacement Mapping with Silhouette Antialiasing via Parallax Occlusion Mapping."**

That's an SM 3.0 shader (nvidia only), and a single quad brought my 6800 Ultra to its knees (in a shader visualizer, not a game). All this for some shadowing on normal-mapped stuff (plastic-mapped). Gimme a more decent hack (and with a shorter name that I can remember).

Most of those presentations are trying to push PRT. PRT does not interest me, for now. And those Ruby demos aren't all that great either; Dawn's skin looks much better than Ruby's.

Those caustic-mapping screens looked good; the numbers don't. I read their paper and I liked it a lot. Best (and also fastest) shader-based caustics I've seen. :cool2: But 30 FPS on a 7800 GTX for a few bunches of polys? Maybe for my next engine.

Rasterizer GPU development may hit fabrication limitations soon, and then they will go parallel. I won't be surprised if they tell us to code for 2D 'Cell' GPUs, and then 3D volume GPU matrices.

More flexible dynamic control flow is desired (especially in the pixel shader). The lock-step execution of GPUs is a bit irritating. They said it's better to avoid checkered (high-frequency) textures for good speed. I won't try telling that to our artists.

And of course, memory allocation on the GPU. I am not keen on general-purpose stuff on GPUs; physics hardware and such is coming anyway.

Currently I am researching procedural data generation. Natalya put a good paper on the topic on the ATI developer site.

Once again, so many thanks to all of you guys for sharing your thoughts, and also for the arguments.

Happy hacking, people… :yes:

C ya, people… Peace

0
101 Oct 27, 2005 at 16:08

Okay - this kind of thread always gets hijacked by the raytracer people who are desperate for anything that will free them from the endless wait for a render

Ok… I vowed not to respond to this thread anymore, but there are a few words that need to be said. The topic of the thread clearly was "The next big thing in RT graphics". Since real-time ray tracing has gained quite some momentum over the past few years, IMO it's only fair to mention it. But as soon as you do, a lot of people will jump in and say that it can't do this, that and the other thing. So what? It handles huge static scenes quite gracefully and with a high degree of realism. In that respect it certainly is becoming more and more important to the industry.

I totally agree: right now real-time ray tracing has no place in gaming. But, at least looking at the title, that was neither the intent of the question, nor does it mean ray tracing will never be able to handle dynamic scenes.

I can't speak for other people, but I certainly didn't attack anybody for thinking that rasterization is the way to go. So if people try to shout me down (in the fashion of the above quote) when I talk about something that I love to spend my time on and that is part of my work, I get agitated, without a doubt. I'm sorry if that is distracting from the original topic (granted, can we still consider a thread on topic after a whole year anyway?).

To conclude: a lot of people who invest their time in current mainstream algorithms shouldn't be so smug about what those algorithms handle well. Ray tracing can score in areas that are difficult for other algorithms and, of course, has shortcomings in others. I don't see any need to pit these two worlds against each other.

PS: Rereading the first post, I have to admit it was more geared towards games, but as this raster-vs-trace discussion pops up once in a while in different places, I think my points are still valid.

0
101 Nov 08, 2005 at 01:31

Uh - the line about ray-tracer people hijacking the thread was a joke. Jeez, lighten up, you guys. I started ray tracing on the old 486. Now THOSE were the days of "go away, have kids, put them through college, and when you come back maybe it will be done".

What I meant was: if you're ray tracing, you've got time on your hands, so why not make a forum post!

It was no slight on ray tracing in general, although I COULD make comments on how some people turn their graphics technologies into a religious thing.

0
101 Nov 14, 2005 at 04:44

@Altair

Btw, you are dead wrong here. I don’t know where you pulled that argument and what was the first 3D HW you ever saw, but I think it was Pyramid 3D by BitBoys (-97) and I was totally impressed of the capabilities of that chip at the time. I would argue that rasterizing HW \~10 years ago was more capable than raytracing HW today :rolleyes:

The first accelerator I saw with my own eyes was the Pyramid3D as well, and the time and place might very well have been the same too ;)

However, I think there were some consumer-affordable 3Dlabs GLINT-based products before that. So what I had in mind was the GLINT, not the Pyramid, and I still don't think the first cards were that impressive feature-wise. Keep in mind that 10 years ago we were also 10 years younger and more easily impressed.

Anyway, it's a matter of taste, so I don't think I'm "dead wrong" here ;)

Juhani

0
101 Nov 14, 2005 at 08:22

Were those cards capable of per-pixel lighting and shadows just the way Doom3 looks? If not, then no, I'd guess the hardware back then was not as capable as the ray tracing hardware of today.

0
101 Nov 14, 2005 at 15:10

@juhnu

However I think there were some consumer affordable 3DLabs Glint based products before that. So what I had in mind was Glint, not Pyramid so I still don’t think the first cards were that impressive feature-wise. Keep in mind that 10 years ago we were also 10 years younger and more easily impressed.

Glint and Pyramid 3D were developed around the same time, the difference being that Pyramid 3D never shipped. If you just check the specs of the chip, it was quite impressive for something developed 10 years ago. 10 years ago I was probably around the age of davepermen ;)

@davepermen

where those cards capable of perpixellighting and shadows just the way doom3 looks? if not, then, no, i’d guess that hardware of back then was not as capable as the raytracing hardware of today.

Is raytracing HW today capable of per-pixel lighting the way Doom3 looks today? Last time I checked, SaarCOR wasn't even capable of doing normal/bump mapping. On the other hand, Pyramid 3D supported bump mapping + stencil, so it seems you lost your bet right there.

Cheers, Altair

0
101 Nov 14, 2005 at 16:08

Every ray tracer DOES per-pixel lighting by default. If you know how they work, you know it's actually easier to implement than vertex lighting.
The rest is the shading part, and yes, the RPU2 provides shaders, so it allows for anything (well… at least a dot3 :D).

Doom3-style graphics are about the default for any ray tracer: once texturing is in, bump mapping is just a very small thing to add, because everything except the dot3 is already there.
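The "dot3" in question is just the per-pixel Lambert term, N·L. A minimal sketch in plain Python (no shader hardware; the names are illustrative):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot3_lambert(normal, to_light):
    """Per-pixel diffuse lighting: the clamped dot product of the surface
    normal and the direction to the light. A ray tracer has the exact hit
    point and normal for every pixel anyway, so this is the whole cost of
    'per-pixel lighting'; normal mapping just fetches the normal from a
    texture instead of interpolating it."""
    n = normalize(normal)
    l = normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

print(dot3_lambert((0, 0, 1), (0, 0, 1)))   # light head-on -> 1.0
print(dot3_lambert((0, 0, 1), (0, 0, -1)))  # light behind -> 0.0
```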

And was the Pyramid 3D capable of running Doom3 graphics more or less smoothly, with all the stencil shadows and that?

0
101 Nov 14, 2005 at 21:58

@davepermen

every raytracer by default DOES per pixel lighting. if you know how they work, you know that it’s actually easier to implement than vertexlighting.

You do realize I said "per-pixel lighting like Doom3", don't you? That means it involves normal mapping, which I haven't seen featured in any of the SaarCOR screens/videos. Where can I find the RPU2 specs, and have they actually implemented the HW?
@davepermen

and was the pyramid 3d capable of running doom3 graphics more or less smooth? with all the stencil shadows and that?

I'm sure it doesn't run Doom3 smoothly, if at all, but neither does SaarCOR, so that argument is pretty much pointless ;) Anyway, it's just kind of sad that we have to compare SaarCOR against 10-year-old prototype hardware.

Cheers, Altair

0
101 Nov 15, 2005 at 02:12

Available on the same page, as far as I remember. And even if it's per-pixel lighting (and per-pixel shadowing, about equal to stencil but with more features) WITHOUT normal mapping, that's still more than a lot of GPUs were capable of doing for any useful scenes. Sure, my GF2 was capable of faking some Phong (but with horrible precision), but trying to get it to handle per-pixel lighting and stencil shadows at two-digit frames per second… forget it.

This is doable on a SaarCOR; it doesn't need to be an RPU2 for this. The RPU2 adds shaders and thus allows for Doom3+ graphics.

0
101 Nov 15, 2005 at 05:08

I don't quite call a simple scene consisting of a table and 4 chairs, running at ~20 fps in low resolution, Doom3 just yet ;) For sure, even your GF2 was able to do better than that. Some Pyramid3D specs I found:
- Environment mapping
- Bump mapping
- Stencil operations
- Specular highlights
- 2-32 Mbytes of SDRAM, SGRAM or EDO DRAM supported
- Memory bandwidth up to 800 MBytes/sec with 64 bit bus
- 100 MHz operation
- 1 000 000 randomly rotated Gouraud shaded 25 pixel triangles per second
- 800 000 randomly rotated textured Gouraud shaded 25 pixel triangles per second
- Pixel fill rate 50 000 000 pixels per second

Cheers, Altair

0
101 Nov 24, 2005 at 14:55

Hey Altair, looks like you're back in business.

I get your point crystal clear, and I am sure most others who read this thread did too. :worthy:

Let me list what I perceived from all your posts:
- Ray-tracing hardware sucks.
- Rasterizer algos, techniques and hardware ruled the planet 10 years ago, rule it now, and will forever.
- Rasterization is the way of life.

If there is anything else, just tell us, man. This post/counter-post saga ain't gonna stop if we go on this way.

But one more thing I asked you guys in the first post is…

If u were to design a 3D-Engine that gives Unreal3 a run for it’s money, what techniques would you incorporate (assuming you got the hardware powerful enough)

After Unreal 3, not many engines have surfaced (or at least been announced), except a few indie ones with nothing much better than UE3's features. Reality Engine sold out. Project Offset is kinda close (their object-based motion blur is one cool thing UE3 does not have). Unigine is cool but not much new. Serious Engine 2 sucks.

The RenderMonkey chick's parallax occlusion mapping was actually very impressive in ATI's ToyShop demo, especially the pavement thingy. But of course, it's just a highly improved offset mapping (more displacement, occlusion, shadows).

Btw, the caustic-mapping thing was cool, though. :yes: Something new that we lacked for years.

More stuff from you guys?? (Not just data-amplification thingies.)

And what's with Shader Model 4.0 and DX10? Any ideas of new stuff possible?

BTW, worlds made out of diffuse + specular normal-mapped stuff SUCK big time (Doom3, Quake4), and so do simple shadow volumes. One guy titled Quake4 'Plastic Arena'.

0
101 Nov 24, 2005 at 19:31

All I can say is :wallbash:

0
101 Nov 25, 2005 at 04:59

Does that mean you are gonna stop your stuff?? If that's what you meant, then GREAT :yes: . No more posts from me, dude. :surrender

0
101 Nov 25, 2005 at 14:54

@XORcist

Does that mean you are gonna stop your stuff ?? If thats what you meant, then GREAT:yes: . No more posts from me dude. :surrender

What it means is that even after a year, you don’t have a clue what I’m saying. I just find it frustratingly unbelievable.

0
101 Nov 25, 2005 at 17:40

Altair : One of the first principles of communication theory is: the message is created by the receiver… Meaning: since some people here apparently constantly misinterpret your posts, you should start to think about whether you are broadcasting on the right frequency. Just my two cents.

0
101 Nov 25, 2005 at 18:56

@anubis

Altair : One of the first principles of information technology is : The message is created by the receiver… Meaning : Since some people here apparently constantly misinterpret your posts, you should start to think about whether you are broadcasting at the right frequency. Just my two cents.

You need a reasonable amount of background information to be able to understand the message, and I was relying on that to get my message across. Apparently I was radically wrong. It's like trying to teach linear algebra to someone who doesn't even know how to multiply numbers together.

That's a problem in discussion forums in general, because there are vastly different levels of experience involved. Btw, no offence intended against inexperienced people. I definitely like to have chats with them as well, because they often have a fresh view on things and sometimes bring some nice thoughts to the table.

Cheers, Altair

0
101 Nov 25, 2005 at 21:34

Altair:

How do you propose to handle complex reflection/refraction/transparency without raytracing? Sure, we have cube mapping and depth peeling and other hacks for coarse effects, but it's impractical to scale these up to movie-quality levels. I don't find cache coherency and other performance arguments compelling if there's a show-stopper like that looming ahead.

Also, it seems to me that a hardware raytracer capable of rendering modern games at real-time speeds is already halfway to real-time photon mapping, soft shadows and so on – all of which benefit from the tree you calculate per frame (the performance hit of which is minimal when you start talking about GI and such).

Thoughts?

0
101 Nov 26, 2005 at 18:38

@Nameless

How do you propose to handle complex reflection/refraction/transparency without raytracing? Sure we have cubemapping and depth peeling and other hacks for coarse effects, but it’s impractical to scale this up to movie quality levels.

I wish I had answers to those questions. That doesn't mean raytracing is the way to solve those problems, though. Rasterization has taken huge leaps recently, and people are actively researching how its possibilities can be utilized to their best effect. Not only that, but there are also huge leaps ahead, so there's no reason to despair that the specific features you are talking about won't be solvable (at least to a degree) with the rasterization approach. Note, I'm not saying that raytracing doesn't solve some of the issues elegantly, but rather that it has a huge pile of issues of its own, which in my opinion are the “show-stopper”.

@Nameless

I don’t find cache coherency and other arguments for performance compelling if there’s a show-stopper like that looming ahead.

Coherency has huge importance in realtime computer graphics, as in any realtime application. GPU vendors have made huge efforts to maximize it, in order to keep memory access from becoming the bottleneck in rendering.

@Nameless

Also, it seems to me that a hardware raytracer capable of rendering modern games at rt speeds, is already halfway to real time photon mapping, soft shadows and so on

There's a LONG way to go before a HW raytracer is even close to the performance of modern GPUs, or rather renders similar scenes at comparable framerates. And even if we assumed there were a HW raytracer capable of it, GI would totally bring it to its knees.
@Nameless

all of which benefit from the tree you calculate per frame (the performance hit of which is minimal when you start talking about gi and such).

The kd-tree isn't computed per frame (it's a very expensive process), since then you would end up processing all the geometry every frame, and one of the inherent advantages of a raytracer is the hierarchical scene traversal, which keeps you from touching all the data. However, not rebuilding the kd-tree per frame has its own problems, which have been discussed in this thread.

It's not really a good supporting argument for raytracing to say that building the kd-tree is nothing compared to computing GI. It makes you sound more like you're taking my stance: building the kd-tree is very slow, but hey, look, we've got something even more radically slow we can do! ;)
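The rebuild-versus-reuse tradeoff can be put into rough numbers. Here is a crude cost model in Python, purely illustrative (real costs depend heavily on the implementation and the tree quality):

```python
import math

def kdtree_frame_cost(num_triangles, rebuild):
    """Crude cost model: rebuilding a kd-tree touches every triangle, an
    O(n log n) process, while reusing the tree lets a single ray walk only
    about log2(n) nodes thanks to hierarchical traversal."""
    if rebuild:
        return num_triangles * math.log2(num_triangles)
    return math.log2(num_triangles)

n = 1_000_000
print(kdtree_frame_cost(n, rebuild=True))   # ~2e7 units of work per frame
print(kdtree_frame_cost(n, rebuild=False))  # ~20 units per ray
```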

Cheers, Altair

0
101 Nov 26, 2005 at 19:02

A simple "photon-map everything" is "just another" raytracing pass; it doesn't really cost much more. Say you SLI'd your RPUs: you could have one doing photon mapping and one doing raytracing, and continue at the same speed and resolution with (more or less) full GI.

0
101 Nov 26, 2005 at 19:19

@davepermen

a simple “photonmap everything” is “just another” raytracing pass.. doesn’t really cost much more.. say if you would SLI your rpu’s, you could have one photonmapping, one raytracing, and continue at the same speed and res with (more or less) full gi.

Do you realize how awfully wrong that statement is? Now I can understand why you think HW raytracing is such a great idea ;) Just think of a completely diffuse white wall with a red box next to it. You don't get reddish GI effects on the wall with "just another" raytracing pass.

Cheers, Altair

0
101 Nov 28, 2005 at 19:09

>>It’s not really good supporting argument for raytracing to say that building kd-tree is nothing in comparison in computing to GI. It makes you sound more like you are taking my stance by saying that building kd-tree is very slow but hey look we got something even much more radically slower we can do!

I agree; I don't doubt that realistically we're years away from raytracing what modern rasterizers do effortlessly. On the other hand, it does provide *elegant* solutions to ugly, ugly hacks (and after all, we're talking about the next *BIG* thing in RT graphics). Dynamic cube maps, parallax mapping and so on are great for making things look 95% realistic, but I think that last 5% is what will be the next *BIG* thing.

I think the next *BIG* thing is definitely light transport. Raytracing provides a natural solution to a problem as simple as reflections of reflections (a simple case is just looking at a concave object from the right angle). Something fancier, like dispersion, is trivial: just have your glass shader re-emit more refracted rays to cover a nice variety of wavelengths. I know there are hacks for all of this that look 'okay', but again, it's the last 5%.
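The per-wavelength trick can be sketched with Snell's law plus a wavelength-dependent index of refraction (Cauchy's approximation). A toy Python sketch; the coefficients are roughly those of crown glass, and the function names are mine:

```python
import math

def cauchy_ior(wavelength_nm, a=1.5046, b=4200.0):
    """Cauchy's approximation: the index of refraction rises toward shorter
    (bluer) wavelengths, which is what splits white light into a spectrum."""
    return a + b / (wavelength_nm ** 2)

def refracted_angle_deg(incident_deg, ior):
    # Snell's law, entering the glass from air (IOR ~1).
    return math.degrees(math.asin(math.sin(math.radians(incident_deg)) / ior))

# "Re-emit more refracted rays": one ray per sampled wavelength, each bent
# by its own IOR. Blue bends more than red, hence dispersion.
for wavelength in (450, 550, 650):  # blue, green, red (nm)
    print(wavelength, refracted_angle_deg(45.0, cauchy_ior(wavelength)))
```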

Here's another problem: how do you stencil-shadow translucent objects, or even just something with a 1-bit alpha mask on it, like leaves on a tree (without modelling each leaf with a huge number of polygons, which would eliminate your performance advantage anyway)?

0
101 Nov 30, 2005 at 23:17

I'm not really an advocate of stencil shadows, but rather lean towards shadow mapping. Both have their advantages and issues, but there have been some proposals for solving the issues in shadow mapping (the irregular z-buffer, alias-free shadow maps). It's not that pixel-perfect shadows or translucency aren't doable with the rasterization technique, but rather that current GPU implementations don't quite support them, for practical reasons.

Cheers, Altair

0
101 Dec 01, 2005 at 05:32

@Altair

Do you realize how awfully wrong that statement is? Now I can understand why you think HW raytracing is such a great idea ;) Just think of a completely diffuse white wall and red box next to it. You don’t get reddish GI effects on the wall with “just another” raytracing pass. Cheers, Altair

Uhm… how old is photon mapping now, that you don't know how it works? It's a two-pass algorithm which works roughly that way: one pass generates the map, the second reads and uses it. And yes, photon mapping generates about all the GI effects possible (depending on the implementation, of course). And it doesn't have to be slow in doing so.

0
139 Dec 01, 2005 at 07:53

Altair has a point. You have to emit photons from each light source, sampling across all points on the surface and all directions in the hemisphere above each point, and then trace their interactions with surfaces to whatever depth you want. Getting a good distribution requires shooting a LOT more photons than there are primary rays in a typical rendered scene. So generating the photon map is quite nontrivial: it reuses basically the same algorithm, but you need a lot more horsepower.
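A back-of-envelope photon budget makes the point concrete. This is just density-estimation arithmetic; the helper is mine, not from any photon-mapping implementation:

```python
import math

def photons_needed(photons_per_estimate, surface_area_m2, gather_radius_m):
    """A radiance estimate at a point averages the photons that landed inside
    a disc of the gather radius. For the estimate to see photons_per_estimate
    photons on average, the stored map must cover the scene's surfaces at
    that density."""
    disc_area = math.pi * gather_radius_m ** 2
    return math.ceil(photons_per_estimate * surface_area_m2 / disc_area)

# 50 photons per estimate over 100 m^2 of surface, 5 cm gather radius:
print(photons_needed(50, 100.0, 0.05))  # -> 636620 stored photons
```

That is already about twice the number of primary rays in a 640x480 frame, before counting the photons that get absorbed or miss the scene entirely, which is exactly the "lot more horsepower" point above.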

0
101 Dec 01, 2005 at 22:07

You need reasonable amount of background information to be able to understand the message and I was relying on that to get my message across. Apparently I was radically wrong. It’s like trying to teach linear algebra to someone who doesn’t even know how to multiply numbers together.

Altair : It must be really lonely up there, huh? If you think that diminishing people lets you appear any wiser… well… you have my pity.

0
101 Dec 01, 2005 at 23:00

@anubis

Altair : It must be really lonely up there, huh ?

There are a lot of people way more experienced than I am who are or have been working in the game/HW industry, and with whom I keep talking frequently, so no, not really. There's a lot you can learn as a hobbyist developer, and there are some really talented ones, but industry experience is hard to replace. No offense intended, and I'll try to be less blunt in the future ;)

Cheers, Altair

0
101 Dec 02, 2005 at 01:35

Could the moderators possibly close this thread? It has long since gotten too personal and occasionally quite insulting.

Maybe it's time for people to step back and chill out a bit.

0
101 Dec 02, 2005 at 08:50

There are a lot of way more experienced people than I am who are/have been working in game/hw industry and with whom I keep talking frequently, so no, not really. There’s a lot you can learn as hobbyist developer and there are some really talented ones, but industry experience is hard to replace. No offense intended and I try to be less blunt in the future Cheers, Altair

That's fine then… I'm really just trying to act as a moderator here. Personally, I don't have any preferences in this discussion.

Could it be possible for the moderators to close this thread? It has long gone too personal and occasionally quite insulting. Maybe time for people to step back and chill out a bit.