The Next BIG Thingy in Real-Time Graphics

154 replies to this topic

#81 Smokey

New Member

• Members
• 25 posts

Posted 15 December 2004 - 12:43 AM

Okay, I feel I have to butt in here, and I probably shouldn't, because I've skipped half of the BS you guys have been talking... but davepermen... FFS dude, ray tracing and rasterization cannot be compared to each other: the ray tracing algorithm in general increases exponentially with scene complexity, as opposed to rasterization, which scales a lot more efficiently with scene complexity. Also, last I checked, OpenRT was realtime on a cluster of a few P3s or something (I haven't referenced this recently, it's just off the top of my head, so excuse me if I'm wrong...). And last but not least... Global Illumination has been demonstrated via multiple methods on GPUs via shaders. Most of these methods involve ray tracing, but they still prove that GI is possible on GPUs.

And once again, I don't know if you've already discussed this, as I am not going to read through all that crap; but if you have, it seems you've skipped over it awfully quickly, which means you haven't taken these few things under careful consideration... (I mainly speak of the ray tracing algorithm in general here.)

#82 Melvin

New Member

• Members
• 8 posts

Posted 15 December 2004 - 01:59 AM

Smokey said:

And last but not least... Global Illumination has been demonstrated via multiple methods on GPUs via shaders. Most of these methods involve ray tracing, but they still prove that GI is possible on GPUs.

>> more precisely, a precomputed localized rendition of GI has been demonstrated on the GPU (aka precomputed radiance transfer in DX9)...the caveats list includes no arbitrary movement of objects, no vertex deformation of objects, no high-frequency lighting conditions, no surface color changes, etc., among other restrictions...this is a far cry from the truly interactive GI that raytracing is capable of
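To make the tradeoff concrete, here's a minimal toy sketch of why PRT relighting is so cheap (and why the baked geometry can't move): exit radiance at a vertex reduces to a dot product between a precomputed transfer vector and the light's projected coefficients. All numbers below are made up for illustration; real PRT projects transfer and lighting into spherical harmonics, typically with 9-25 coefficients.

```python
# Toy sketch of PRT relighting (made-up 4-coefficient vectors; real PRT
# uses spherical-harmonics projections). Relighting a vertex is just a
# dot product, which is why PRT is fast, and why the scene geometry
# baked into the transfer vector must not move or deform.

def relight(transfer_coeffs, light_coeffs):
    """Exit radiance = dot(precomputed transfer, projected lighting)."""
    return sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))

# Hypothetical transfer vector precomputed for one vertex:
transfer = [0.8, 0.1, -0.05, 0.02]

# Changing the *lighting* is cheap: just supply new coefficients.
light_a = [1.0, 0.0, 0.0, 0.0]
light_b = [0.5, 0.3, 0.0, 0.1]
print(relight(transfer, light_a))  # 0.8
print(relight(transfer, light_b))
```

The caveat list above follows directly from this structure: moving or deforming an object invalidates its precomputed transfer vectors, so only the lighting may change freely.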

Altair said:

About cubemapping: yes, I'm well aware that it's not a perfect solution, but it's a good, simple and efficient approximation which fits the purpose in most cases; in other words, it's practical. If you want to deal with dynamic cubemapping, which isn't actually needed that often to give a believable impression of interreflectance, you need memory only for a single cubemap, since you can recycle the memory for different objects. There are also solutions coming for the shortcomings of rendering to a cubemap.

>> not really...there has to be a unique cubemap for every reflective object for the final render pass...you can't reuse the same cubemap from object to object or else they'd all have the same reflections...and so the number of cubemaps quickly escalates
>> interreflections can be approximated with cubemaps only if you break objects into convex components...just imagine how complex the content generation and technical implementation process becomes...not to mention when objects deform
>> and don't forget about other global effects like diffuse interreflectance, caustics interreflectance, color transfer through translucency, shadowed reflected caustics, etc...simple reflections are only the *tip* of the iceberg, which I highlighted to make a point
>> more serious apps like visualizations, product showcases, training/simulations, etc. typically have more stringent requirements on visual fidelity than games do...just because games can get by with visual inconsistencies doesn't mean other kinds of apps can

#83 Altair

Valued Member

• Members
• 151 posts

Posted 15 December 2004 - 02:18 AM

Melvin said:

>> not really...there has to be a unique cubemap for every reflective object for the final render pass...you can't reuse the same cubemap from object to object or else they'd all have the same reflections...and so the number of cubemaps quickly escalates
It seems you don't have basic knowledge of dynamic cubemapping, so let me explain. You simply render the scene to a cubemap for each object before you render that object, reusing the same memory. Anyway, this kind of dynamic cubemapping is overkill most of the time, just like raytracing interreflections would be.
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein

#84 Melvin

New Member

• Members
• 8 posts

Posted 15 December 2004 - 06:43 AM

>> suppose you "simply render the scene to a cubemap for each object before you render the object, reusing the same memory"...consider 10 reflective objects...if you render 10 times to the *same* cubemap from different viewpoints, you're effectively overwriting the contents 10 times, so all 10 objects will reference the same cubemap contents as of the last render when you finally render all 10 objects into the user's camera...so you get the *same* reflections off the 10 objects...bottom line is, you need 10 cubemaps to hold 10 unique reflection contents at the same time
>> "this kind of dynamic cubemapping is overkill most of the time, just like raytracing interreflections would be"...it may well be overkill for games *now*...but like I said, take the big picture of the 3D industry as a whole...in serious apps like architectural walkthroughs, product visualization and training simulations, where correct visual cues are paramount, it may not be "overkill" but an actual requirement...which is probably why large companies like Boeing, automobile corporations, etc. look into rendering solutions like OpenRT to provide accurate visualization of large datasets
>> & 1 more thing...expectations evolve over time...why, elaborate self-shadowing, bumpy surfaces, full-scene glare and whatnot might have been yesterday's "overkill" effects, but witness how they've become today's indispensable features in any cutting-edge 3D app. Fast forward 5 years from now...it's a fair guess that global effects will be expected to be commonplace, what with emerging genres like "interactive cinematic gameplay" or even "realtime movie experience"...it may no longer be acceptable *not* to see the spout's reflection on the teapot's shiny body by that time...by then, I don't fancy the prospect of spending loads of time writing tons of complicated rendering code to fudge some semblance of global effects...such tasks are best left to raytracing while I move on to other more pressing challenges

#85 Altair

Valued Member

• Members
• 151 posts

Posted 15 December 2004 - 03:34 PM

Melvin said:

>> suppose you "simply render the scene to a cubemap for each object before you render the object, reusing the same memory"...consider 10 reflective objects...if you render 10 times to the *same* cubemap from different viewpoints, you're effectively overwriting the contents 10 times, so all 10 objects will reference the same cubemap contents as of the last render when you finally render all 10 objects into the user's camera...so you get the *same* reflections off the 10 objects...bottom line is, you need 10 cubemaps to hold 10 unique reflection contents at the same time
The point is, you don't need to hold 10 unique reflection contents (for 10 objects) at the same time :rolleyes: It would be downright stupid to do things like that; that would be an extremely naive (though straightforward) way to implement dynamic cubemapping, even the way I described it. To translate this into coder language:
for each object with dynamic cubemap
{
    render scene to a cubemap from the pivot point of the object
    render the object using the cubemap
}
That's pretty trivial, don't you think?
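Since the disagreement here is really about ordering, a tiny toy simulation may help (plain strings standing in for cubemap contents; nothing GPU-specific):

```python
# Toy simulation of the single-cubemap argument above. "Capturing" a
# cubemap is modelled as writing a string into one shared buffer, and
# "drawing" an object records which capture it ended up using. Ordering
# is the whole point: interleave capture and draw, and one buffer serves
# every object; capture everything first, and all objects share the
# last capture.

objects = ["A", "B", "C"]

# Interleaved, as in the loop above: capture, then immediately draw.
shared_buffer = None
drawn_with = {}
for obj in objects:
    shared_buffer = f"scene as seen from {obj}"  # render scene to cubemap
    drawn_with[obj] = shared_buffer              # render object using it

# Each object got its own viewpoint's capture:
print(drawn_with["A"])  # scene as seen from A
print(drawn_with["B"])  # scene as seen from B

# Naive ordering: capture all viewpoints first, draw afterwards. Every
# object then references whatever the last capture left in the buffer.
for obj in objects:
    shared_buffer = f"scene as seen from {obj}"
naive = {obj: shared_buffer for obj in objects}
print(naive["A"])  # scene as seen from C
```

The interleaved ordering is what makes one reusable cubemap sufficient at recursion depth 1; the naive ordering (or recursive interreflection between reflective objects) is what would force one cubemap per object.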

Melvin said:

take the big picture of the 3D industry as a whole...in serious apps like architectural walkthroughs, product visualization and training simulations where correct visual cues are paramount, it may not be "overkill" but an actual requirement
Let's face it, GPU technology is driven by games, not architectural walkthroughs, nor rendering of reflective & refractive spheres, which, not surprisingly, is what raytracing demos advocating the technology are all about. Even though I agree that once you get used to a certain level of quality, you start to notice what's lacking in certain subtle areas, I don't see raytracing taking over, simply because it would be a major leap backwards in performance, quality and flexibility. Also, many of those subtle effects are very far from cheap to compute using raytracing (GI, smooth reflections, etc.) and fall into the "naive implementation" category, so even if you had extremely fast raytracing HW in your hands, you probably wouldn't use it to compute those effects in practice but would need to seek alternative solutions anyway.
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein

#86 Melvin

New Member

• Members
• 8 posts

Posted 16 December 2004 - 08:00 AM

Altair said:

The point is, you don't need to hold 10 unique reflection contents (for 10 objects) at the same time :rolleyes: It would be downright stupid to do things like that; that would be an extremely naive (though straightforward) way to implement dynamic cubemapping, even the way I described it. To translate this into coder language:
for each object with dynamic cubemap
{
    render scene to a cubemap from the pivot point of the object
    render the object using the cubemap
}
That's pretty trivial, don't you think?
>> yes, I'd thought of that initially...but this would imply the need to switch between more than one depth buffer: (i) one to store the depth info as you render the scene into the cubemap for the reflective object you're currently rendering, (ii) one to store the depth info as you render the reflective object into the final render target...and as if this complication in the render pipeline weren't bad enough, this approach just plain doesn't work...because the moment you render the scene into the cubemap for reflective object A, reflective object B, which is being reflected by A and which also references the *same* cubemap, basically references the same reflection content as A...and that's wrong...plus, what's more, B can't read the same cubemap that's currently being written to...unless of course it has its own reflection cubemap (duh)...in other words, N reflective objects require N cubemaps
>> trivial? not at all...like all the other effects we try to fudge with rasterization, it only seems trivial on the surface, but it compounds exponentially in implementation complexity the more effects we combine...unlike raytracing, which elegantly does away with this messy multipass, multitexture complexity
>> this is an example of how convoluted and unintuitive rasterization approaches are...and we haven't even considered anything beyond simple reflections yet

Altair said:

Let's face it, GPU technology is driven by games, not architectural walkthroughs, nor rendering of reflective & refractive spheres, which, not surprisingly, is what raytracing demos advocating the technology are all about.
>> in truth, the graphics industry creates the demand for its products (isn't it the same for businesses everywhere, from slimming pills to golf clubs to automobile makers?)...the games industry drives it as much as it is driven by it...remember when bump-mapping was first trotted out by hardware vendors, and how *long* it took for developers to really embrace it? and the gazillion other spanking new features that have been steadily rolling out of chip foundries even before developers everywhere could breathe a collective sigh of relief that their engines supported the "latest hardware features" (notwithstanding how much of a moving target that is)...so let's face this: the graphics business creates fresh demand for new features to stay competitive and "relevant"...and developers as consumers lap it up while clamouring for feature X, which vendors readily oblige...so what's wrong with raytracing joining the fray? because it's a "little too fancy" for gaming tastes? I still remember how ridiculously glitzy early hardware bumpmapping demos looked...and now bumpmapping's on every darn wall, box and creature.
>> the Boeing showcase serves to highlight visualization of massive datasets...and "reflective & refractive spheres" are not all there is to raytracing (they're but a small aspect of GI)...full GI takes care of all visual complexity while steering clear of unnecessary implementation complexity...and such simplification of the 3D development process is a major step in the right direction
>> one shouldn't dismiss the fact that 3D technology development is also driven by other industries, e.g. large-scale image generators by Evans & Sutherland for training/simulations, massively parallel graphics servers by SGI for supercomputing visualization needs, etc.
>> it is easy to hold a dim view of raytracing, which understandably stems in no small part from the "alpha version look" of raytracing technology in its current state...however, that'd be making up one's mind before the race has even started in earnest

Altair said:

Even though I agree that once you get used to a certain level of quality, you start to notice what's lacking in certain subtle areas, I don't see raytracing taking over, simply because it would be a major leap backwards in performance, quality and flexibility.
>> I'd hardly call global effects subtle any more than I'd call soft shadows "subtle"...various kinds of effects can appear glaring or subtle depending on myriad conditions...what's "subtle" can look really obvious the next moment, and what's "glaring" can seem inconsequential next
>> given the obvious visual quality that raytracing has historically been lauded for, I don't quite get your constantly alluding to raytracing producing "inferior quality"...as for performance, that still remains to be seen, though the massive sunflower field demo running on the prototype shows a glimmer of things to come

Altair said:

Also, many of those subtle effects are very far from cheap to compute using raytracing (GI, smooth reflections, etc.) and fall into the "naive implementation" category, so even if you had extremely fast raytracing HW in your hands, you probably wouldn't use it to compute those effects in practice but would need to seek alternative solutions anyway.

>> tradeoffs between quality and performance can also be applied to raytracing: LOD, number of ray bounces, etc...if I had extremely fast raytracing hardware, I'd save tons of development time by not having to develop elaborate graphics hacks, compared to making do with extremely fast rasterization hardware...and there are many more interesting challenges besides mere rendering

#87 Smokey

New Member

• Members
• 25 posts

Posted 16 December 2004 - 12:00 PM

I've recently started reading a lot of Ingo Wald's publications (see: http://graphics.cs.u...ons/index.html), and also a lot on SaarCOR... and believe it or not, I think davepermen may be right... This actually does look like a viable solution. There have already been tests implementing games with a realtime ray tracing renderer, and they came out with realtime frame rates (5-20 fps), both using a virtual 30 GHz CPU (over a cluster) and the SaarCOR chip, and they did not have any problems... I'm not entirely sure what I'm saying here, but I think I agree with davepermen on a lot of things now...

#88 Altair

Valued Member

• Members
• 151 posts

Posted 16 December 2004 - 03:51 PM

Melvin said:

>> yes, I'd thought of that initially...but this would imply the need to switch between more than one depth buffer: (i) one to store the depth info as you render the scene into the cubemap for the reflective object you're currently rendering, (ii) one to store the depth info as you render the reflective object into the final render target...and as if this complication in the render pipeline weren't bad enough, this approach just plain doesn't work...
What does it matter if you have to switch between depth buffers? You seem to make a big deal out of trivial things. And yes, the approach does work, since you don't need dynamic cubemaps for rendering the dynamic cubemap, unless you do recursive reflections. In the approach where you need only a single cubemap, the depth of the recursion is 1. Even in raytracing you need to cap the depth of your recursion and finally revert to a static cubemap or something similar.
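The recursion cap being described can be sketched in a few lines. This is a toy model with made-up numbers, not renderer code: a reflected sample recurses until a maximum depth, then falls back on a cheap approximation standing in for a static cubemap lookup.

```python
# Toy sketch (hypothetical numbers) of capping reflection recursion:
# recurse until max_depth, then fall back on a flat "environment" value
# standing in for a static cubemap sample.

ENV_COLOR = 0.2  # stand-in for a static environment/cubemap sample

def trace(surface_color, reflectivity, depth, max_depth):
    """Toy shading: local colour blended with a recursively reflected sample."""
    if depth >= max_depth:
        return ENV_COLOR  # recursion cap reached: fake the rest
    reflected = trace(surface_color, reflectivity, depth + 1, max_depth)
    return (1.0 - reflectivity) * surface_color + reflectivity * reflected

# Two half-mirrors facing each other: raising the cap converges on the
# exact answer (0.9 here), with quickly diminishing returns per bounce.
shallow = trace(0.9, 0.5, 0, 1)  # cap after 1 bounce
deep = trace(0.9, 0.5, 0, 8)     # cap after 8 bounces
print(shallow, deep)
```

Because each extra bounce contributes geometrically less, a small cap plus a cheap fallback gets close to the converged result, which is the point being made for both raytracing and dynamic cubemaps.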

Melvin said:

>> in truth, the graphics industry creates the demand for its products (isn't it the same for businesses everywhere, from slimming pills to golf clubs to automobile makers?)...the games industry drives it as much as it is driven by it...remember when bump-mapping was first trotted out by hardware vendors, and how *long* it took for developers to really embrace it?
I don't know where you have come up with that "truth", but GPU manufacturers want to spend transistors where it matters. Why do you think they have the quad pipeline architecture? Why do you think NVidia has their UltraShadow technology? Those technologies exist just because they happen to give the biggest bang for the buck for current games and games released in the near future. Of course GPU manufacturers also have their visions, and they promote those visions to game developers, but in the end what matters is how that technology is exposed in games. That's why GPU manufacturers have developer relations: to be aware of what game developers are doing on the rendering side and exactly how, so that they can focus on things that matter and evangelize the use of their GPUs.

There are also things like production issues which prevent using certain technologies in games immediately when they are exposed in GPUs. It takes time to learn to use new technology and to have good tools available for its utilization. In the case of normalmapping, it takes significant production effort from game developers to make extensive use of the technique, and that's partly the reason why our production team has more than doubled in size. Not that many game developers are yet ready to invest that amount of money to make extensive use of normalmapping, and they'd rather wait for tools to mature and knowledge to spread to make the effort, and thus the investment, smaller.

Melvin said:

so what's wrong with raytracing joining the fray? because it's a "little too fancy" for gaming tastes? I still remember how ridiculously glitzy early hardware bumpmapping demos looked...and now bumpmapping's on every darn wall, box and creature.
If you provide raytracing as an additional feature on top of rasterization which doesn't interfere with the rest of the architecture, I don't see much wrong with it. The worst thing that could happen is that GPU manufacturers simply waste die size & money on some technology that no one uses (not that it has never happened before) and that could have been spent on improving things that matter. Anyway, as I see it, introducing raytracing to current GPUs and exposing its functionality to the level you are talking about wouldn't only change the whole architecture of the chip, with all the potential problems I brought up, but also the way applications (games) deal with it and APIs expose it. Now, consider the implications of that change to the whole picture and the subtle gains you would get from raytracing, and suddenly sticking with rasterization and finding solutions using it starts to appear a much more appealing alternative - at least if you see the big picture, that is.

I know it's easy to propose new ideas without thinking of the implications or being responsible for them, and you hear this particularly from new developers. How many (particularly new) developers want to just throw the engine they are using out of the window and rewrite the whole crap from scratch, because they "know" how it should be done, while wasting the tons of valuable time already put into debugging, learning, optimizing, etc. of the old engine :rolleyes:

Melvin said:

the Boeing showcase serves to highlight visualization of massive datasets...
And as I have told you many times already, you don't need raytracing to visualize massive datasets.

Melvin said:

>> given the obvious visual quality that raytracing has historically been lauded for, I don't quite get your constantly alluding to raytracing producing "inferior quality"...as for performance, that still remains to be seen, though the massive sunflower field demo running on the prototype shows a glimmer of things to come
It's the quality:performance ratio I'm talking about, not plain quality. Of course, if you have infinite processing power, the quality of raytracing will outshine rasterization, but that's not the case in the real world. Even for non-realtime movie CG shots you can't ignore performance.

Melvin said:

>> tradeoffs between quality and performance can also be applied to raytracing: LOD, number of ray bounces, etc...if I had extremely fast raytracing hardware, I'd save tons of development time by not having to develop elaborate graphics hacks, compared to making do with extremely fast rasterization hardware...and there are many more interesting challenges besides mere rendering
I don't know how much experience you have in developing "elaborate graphics hacks" in games, but most of the "hacks" we have to do are there to achieve adequate performance and to get around the limitations of shaders, particularly SM1.1 shaders. I don't see raytracing helping in either of these cases to save us any development time.

Anyway, it has been a pleasure to discuss this with you guys, even though it did heat up a bit on a few occasions :) It definitely made me think about the future of gfx technology more than I would probably have done just by myself, but now I need to focus on the more "pressing challenges" of finishing the game we are working on.

Cheers, Altair
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein

#89 Goz

Senior Member

• Members
• 575 posts

Posted 17 December 2004 - 10:33 AM

Altair said:

for each object with dynamic cubemap
{
    render scene to a cubemap from the pivot point of the object
    render the object using the cubemap
}
That's pretty trivial, don't you think?

Well, I'm busy doing a last-minute rewrite of the PS2 rendering system, so I will keep this brief... but Altair, you do appreciate that this is only trivial for convex objects? Non-convex objects reflect bits of themselves, and for arbitrarily complex objects cube mapping can no longer cut it without breaking the object up into convex parts...

I know this is a pretty cheesy point, but do remember that cube mapping is not a be-all and end-all solution :) It was introduced (for ray tracing, quaintly enough) because tracing all the reflected rays was just too expensive :)

Anyway, another tuppence from me :)

#90 Smokey

New Member

• Members
• 25 posts

Posted 13 January 2005 - 12:41 PM

Okay, after a few weeks of intense research (which has also led to a new field of study for myself), I fear I must deeply apologize for what I said earlier in this thread. My statements are more or less completely incorrect in almost every way possible. I have read a lot of theses, articles, tutorials and the like on ray tracing and on scene traversal algorithms for various spatial partitioning/subdivision schemes, and only now realise the potential ray tracing has.

I still believe, as in my first post in this thread, that global illumination (probably via photon mapping) will be the next big thing in computer graphics (which will use ray tracing; however, photon density estimation for irradiance will be done in rasterization hardware, via shaders or textures... as I do not see rasterization subsiding in the near future). However, ray tracing has absolutely impeccable potential in relation to computer graphics, as well as to computer simulations in general, more specifically acoustics and physics.
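The density-estimation step mentioned here is simple enough to sketch. This is a hypothetical toy, not a real photon map: irradiance at a point is approximated by summing the power of the photons that landed near it and dividing by the area of the disc they cover.

```python
import math
import random

# Toy sketch of the radiance-estimate step in photon mapping
# (hypothetical data; a real photon map traces photons from the lights).
# 200k photons carrying a total power of 1.0 are scattered uniformly
# over a unit square, so the true irradiance everywhere is 1.0 and the
# estimate should land close to it.

def density_estimate(photons, power_per_photon, point, radius):
    px, py = point
    r2 = radius * radius
    inside = sum(1 for (x, y) in photons
                 if (x - px) ** 2 + (y - py) ** 2 <= r2)
    return inside * power_per_photon / (math.pi * r2)

rng = random.Random(0)
n = 200_000
photons = [(rng.random(), rng.random()) for _ in range(n)]
est = density_estimate(photons, 1.0 / n, (0.5, 0.5), 0.05)
print(est)  # close to 1.0
```

Because this step is a local gather over stored samples, it is the part that plausibly maps onto rasterization hardware (shaders or textures), as suggested above.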

In relation to hardware and ray tracing... I do not see nVidia nor ATI going down this track at all, and if I'm not mistaken, nVidia have said they will not be touching ray tracing in their hardware. So if anything is to happen here, SaarCOR, if successful, will be what we should expect to see once ray tracing becomes more viable for mainstream graphics... (which is yet another lengthy discussion in itself, which I don't think I'll get into here)

Anyway, I just thought this thread required a bit of a kick... It's definitely still worth discussing in my opinion, and one of the most interesting threads I've read on a forum in quite some time. (It has apparently even caught the attention of Jacco Bikker... and therefore most probably Thierry Berger-Perrin... people who I'm sure could give a lot of feedback on ray tracing...)

Oh, and davepermen, sorry for my ignorance earlier. I should have done my research before even thinking about responding.

Can't wait to hear everyone's thoughts! :D

P.S. I'm currently nearing completion of the first version of my graphics engine... so I should be around the forum a lot more. :D

#91 cdgray

New Member

• Members
• 21 posts

Posted 29 August 2005 - 04:57 AM

I believe that there will eventually be a limit to GPUs in terms of realism. Hardware is being driven more and more towards how nature tends to behave. The fact is, the way current renderers work is somewhat unnatural, physically. Raytracing is the way to go since they completely simulate reality. This is the next big step: creating a mainstream hardware-accelerated raytracer.

#92 anubis

Senior Member

• Members
• 2225 posts

Posted 29 August 2005 - 06:26 AM

Quote

Raytracing is the way to go since they completely simulate reality.

How do _they_ do that?
I'm a huge fan of ray tracing and all its descendants, but let's not overestimate what you can do with it.
If Prolog is the answer, what is the question?

#93 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 02 September 2005 - 07:21 PM

Hehe, they don't completely simulate reality by any means. However, he has a point that raytracing is a bit more physically based than rasterization. This doesn't mean that rasterization necessarily produces physically incorrect images, though.
reedbeta.com - developer blog, OpenGL demos, and other projects

#94 Axel

Valued Member

• Members
• 119 posts

Posted 04 September 2005 - 11:02 PM

I think we need the best of both worlds...

GPU vendors could add a raytracing unit that you can call from a pixel shader, or something like that. But there are many problems, like holding and updating the necessary scene graph, especially with dynamic scenes.

#95 davepermen

Senior Member

• Members
• 1306 posts

Posted 05 September 2005 - 05:19 PM

anubis said:

Quote

Raytracing is the way to go since they completely simulate reality.

How do _they_ do that?
I'm a huge fan of ray tracing and all its descendants, but let's not overestimate what you can do with it.

Well, raytracers give the possibility to scale towards completely correct physical simulation, as close as we know how to get.

They also give the possibility to scale down to about the quality of Doom 3 (which is Whitted raytracing + some fake effects for pseudo-caustics and similar stuff).

Every cheat today's GPUs use to fake something can be used to fake the same thing in a raytracer. But there, we have the choice to do it correctly as well. That's what Altair somehow never wants; he wants to stay with the fakes and thinks they are good enough.. :D (but the performance drop per fake gets bigger and bigger.. and integrating the fakes into a completely faked engine gets harder and harder..)

Well, I'm getting off-topic. Fact is, I miss you on MSN!! :D
davepermen.net
-Loving a Person is having the wish to see this Person happy, no matter what that means to yourself.
-No matter what it means to myself....

#96 Goz

Senior Member

• Members
• 575 posts

Posted 06 September 2005 - 12:04 PM

davepermen said:

Well, raytracers give the possibility to scale towards completely correct physical simulation, as close as we know how to get.

And how does a raytracer scale toward complete diffuse lighting interaction? You'd have to generate an infinite number of rays for every ray that hits a diffuse surface...

But I suppose it's THEORETICALLY possible... :rolleyes:
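In practice the infinite integral is approximated with a finite number of sample rays (Monte Carlo). Here is a toy sketch, not from any real renderer: estimating the hemisphere irradiance integral of cos(theta), whose exact value is pi, with uniformly sampled directions.

```python
import math
import random

# A finite number of sample rays approximates the hemisphere integral.
# Toy Monte Carlo sketch: estimate the integral of cos(theta) over the
# hemisphere (exact value: pi) with uniform direction samples. More
# rays, lower error; no infinite ray count needed.

def estimate_irradiance(n_samples, rng):
    total = 0.0
    for _ in range(n_samples):
        # For directions uniform over the hemisphere, z = cos(theta)
        # is itself uniform on [0, 1), so sampling z suffices here.
        z = rng.random()
        total += z * 2.0 * math.pi  # weight by 1/pdf, pdf = 1/(2*pi)
    return total / n_samples

rng = random.Random(1)
est_small = estimate_irradiance(100, rng)
est_big = estimate_irradiance(100_000, rng)
print(est_small, est_big, math.pi)
```

So Goz's objection is about exactness, not feasibility: the estimate converges as the ray budget grows, which is exactly the quality/performance dial discussed earlier in the thread.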

#97 Axel

Valued Member

• Members
• 119 posts

Posted 06 September 2005 - 01:58 PM

I think dave is right. With more advanced lighting models in realtime, rasterization will get more and more hacky, while raytracing could provide a clean solution for many problems we currently face.

Absolutely physically correct lighting is not a goal we should have for game programming. You won't notice the difference from photon mapping anyway :)

I think after WGF 2, IHVs will perhaps add raytracing units to their HW. The shaders will be unified by then, so they "only" have to add a raytracer in addition to the rasterizer unit.

The main problem will be dynamic geometry. I can't imagine a solution to the problem that vertex shaders can move a vertex to any position. Raytracers without a hierarchical scene structure are slow, which means the structure would have to be updated after every position change.
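One commonly cited answer to the update problem is to refit the hierarchy rather than rebuild it. Here is a hypothetical toy with 2D points standing in for triangles, not any particular engine's code:

```python
# Toy sketch: keep the bounding hierarchy's *topology* fixed and only
# "refit" the boxes bottom-up after vertices move. Refitting is O(n)
# and much cheaper than a full rebuild, at the cost of looser boxes
# when the deformation is large. 2D points stand in for triangles.

def aabb(points):
    """Axis-aligned bounding box (minx, miny, maxx, maxy) of 2D points."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def refit(node):
    """node is ('leaf', points) or ('inner', left, right); returns its box."""
    if node[0] == 'leaf':
        return aabb(node[1])
    return union(refit(node[1]), refit(node[2]))

leaf_a = ('leaf', [(0.0, 0.0), (1.0, 1.0)])
leaf_b = ('leaf', [(2.0, 2.0), (3.0, 3.0)])
tree = ('inner', leaf_a, leaf_b)
print(refit(tree))          # (0.0, 0.0, 3.0, 3.0)

leaf_a[1][0] = (-1.0, 0.5)  # "deform" a vertex in place...
print(refit(tree))          # ...and refit: (-1.0, 0.5, 3.0, 3.0)
```

Refitting sidesteps a full rebuild per frame, but it does not help when a vertex shader can teleport vertices arbitrarily far, since the boxes then become uselessly loose; that is precisely the open problem being raised here.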

#98 juhnu

Valued Member

• Members
• 292 posts

Posted 06 September 2005 - 03:07 PM

davepermen said:

...

Every cheat today's GPUs use to fake something can be used to fake the same thing in a raytracer. But there, we have the choice to do it correctly as well. That's what Altair somehow never wants; he wants to stay with the fakes and thinks they are good enough.. :D (but the performance drop per fake gets bigger and bigger.. and integrating the fakes into a completely faked engine gets harder and harder..)

Well, I'm getting off-topic. Fact is, I miss you on MSN!! :D

After reading this long discussion, I must say I'm thinking more along davepermen's lines, and am somewhat surprised by the tough stance Altair is taking here.

Although the points he makes are valid as far as the current game industry is concerned, such as development and training costs and the time it takes to develop new APIs and a final product, they are not good enough reasons to stop progress and stop finding new ways of doing things.

Altair said:

..
Now, consider the implications of that change to the whole picture and the subtle gains you would get from raytracing, and suddenly sticking with rasterization and finding solutions using it starts to appear a much more appealing alternative - at least if you see the big picture, that is.

I don't think the gains would necessarily be "subtle" if we had raytracing hardware that had seen at least 10 years of intense development and research. The first rasterizing GPUs were not really that impressive and lacked both speed and the majority of features we can now expect to find in consumer-class hardware. I don't think sticking to rasterization *only* is a good approach in the long run.

We have been seeing somewhat incremental progress with current graphics hardware, and I don't know if there can be an incremental path for future GPUs to provide feasible raytracing support alongside the rasterizer. Maybe it turns out to be easier than we think, or maybe the road ahead is bumpy for the pioneers of this technology and for the companies willing to take risks, but after all, I see it as inevitable that we will be seeing a game making use of raytracing technology sooner or later. After all, game developers are (or should be) keen on finding new ways to provide the ultimate game experience for players, and the competition keeps us that way too. If raytracing can help us bring interesting visuals to games, I'm all for it. And there surely are people who are willing to take the risks. One can always say there will be problems ahead, but there always are. If one is afraid of possible problems and therefore does nothing, he/she is going nowhere.

I have seen raytracing produce better images than any rasterizer so far; that alone is a good reason to keep up the research and keep trying.

Juhani

#99 davepermen

Senior Member

• Members
• 1306 posts

Posted 06 September 2005 - 06:45 PM

I know of tons of scenarios which simply don't exist in current games graphically, which would ROCK to play in and would look awesome, but are simply impossible to visualize on current HW..

On raytracing HW it would be doable, and it would definitely add a lot to gaming immersion/experience.. blabla :D (just throwing buzzwords)

Well, anyway.. once we get far enough with realtime raytracing, I hope I can set up some nice examples with my friends, hehe.

Another important thing: any form of precalculation will become useless the more dynamic games get. And as we know, that's the trend: PPU, PhysX, just type them into Google :D
With complex, fully dynamically changeable scenes, stuff like precomputed radiance transfer simply doesn't work anymore, and you have to stick with what you can do in realtime, dynamically. Suddenly, a lot of the hacks Altair loves so much won't work anymore..
davepermen.net
-Loving a Person is having the wish to see this Person happy, no matter what that means to yourself.
-No matter what it means to myself....

#100 Axel

Valued Member

• Members
• 119 posts

Posted 06 September 2005 - 09:02 PM

And how do you want to update the scene tree for the raytracer in those cases?
