Real-time Ray Tracing of 'Sponza'

phantom 101 Feb 21, 2006 at 15:00

06-02-21.jpg

Description
A few weeks ago I decided to pick up my old real-time ray tracing project.

The scene shown in these images is ‘Sponza’; more info on this model is available from http://hdri.cgtechniques.com/~sponza/ . The lower half of the IOTD shows a visualization of the kd-tree: The brightness of a pixel is directly linked to the number of traversal steps at that location. This is used to pinpoint problematic areas.

Some statistics: Sponza consists of 76,000 textured triangles. For the screenshots, the scene is rendered using pure ray tracing: No 3D accelerator was used to generate these images. The ray tracer does not use any ‘realtime ray tracing tricks’: It doesn’t approximate pixels or skip rays; therefore it does not show any of the artifacts usually associated with real-time ray tracing.

You can download a demo of the ray tracer here:
http://www.bik5.com/rtNexGen.rar (16 MB)

If you run the ‘lq’ (low quality) version of this ray tracer, you will see a fly-through of the Sponza scene, rendered at interactive frame rates (even though the demo is not interactive, sorry). On my machine, a Pentium-M running at 1.7 GHz, I typically get 4-8 fps, depending on the viewpoint.

Couple of tech notes:

The ray tracer uses an optimized kd-tree to determine which geometry must be tested against a particular ray; the average ray tests 5 triangles in the demo. Also, rays usually don’t travel alone: They are grouped in packets of 4 rays, and traverse the kd-tree together. Using SIMD instructions, traversing the tree and intersecting rays takes far less time this way; the average gain is 2.5x (there’s some overhead when some rays in a packet miss a triangle and others don’t).
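In code, the packet idea looks roughly like this (a scalar sketch: the post says the real tracer uses SSE intrinsics, and its actual data layout isn’t shown in the thread; the structure-of-arrays packet is what lets the 4 lanes map onto SIMD registers):

```cpp
// A packet of 4 rays in structure-of-arrays layout, as described in the post.
// Scalar stand-in for the SSE version: the loop over the 4 lanes is what the
// real tracer does with SIMD instructions.
struct RayPacket {
    float ox[4], oy[4], oz[4];   // origins
    float dx[4], dy[4], dz[4];   // directions
    float t[4];                  // current nearest hit distance per ray
};

struct Triangle { float v0[3], e1[3], e2[3]; };  // vertex + two edge vectors

// Moller-Trumbore intersection of all 4 rays in a packet against one triangle.
// Returns a hit mask; lanes that hit get their t[] updated.
int intersectPacket(RayPacket& p, const Triangle& tri) {
    int mask = 0;
    for (int i = 0; i < 4; ++i) {
        // pvec = dir x e2
        float px = p.dy[i]*tri.e2[2] - p.dz[i]*tri.e2[1];
        float py = p.dz[i]*tri.e2[0] - p.dx[i]*tri.e2[2];
        float pz = p.dx[i]*tri.e2[1] - p.dy[i]*tri.e2[0];
        float det = tri.e1[0]*px + tri.e1[1]*py + tri.e1[2]*pz;
        if (det > -1e-8f && det < 1e-8f) continue;       // ray parallel to triangle
        float inv = 1.0f / det;
        float tx = p.ox[i]-tri.v0[0], ty = p.oy[i]-tri.v0[1], tz = p.oz[i]-tri.v0[2];
        float u = (tx*px + ty*py + tz*pz) * inv;
        if (u < 0.0f || u > 1.0f) continue;
        // qvec = tvec x e1
        float qx = ty*tri.e1[2] - tz*tri.e1[1];
        float qy = tz*tri.e1[0] - tx*tri.e1[2];
        float qz = tx*tri.e1[1] - ty*tri.e1[0];
        float v = (p.dx[i]*qx + p.dy[i]*qy + p.dz[i]*qz) * inv;
        if (v < 0.0f || u + v > 1.0f) continue;
        float t = (tri.e2[0]*qx + tri.e2[1]*qy + tri.e2[2]*qz) * inv;
        if (t > 1e-4f && t < p.t[i]) { p.t[i] = t; mask |= 1 << i; }
    }
    return mask;
}
```

The overhead phantom mentions shows up here as divergence: when lanes `continue` at different points, a SIMD version still pays for the full computation on all 4 lanes.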

If you have a fast machine, please post some timings in the comments section.

More information about fast ray tracing can be found on a forum we recently started:
http://www.ompf.org/forum

Greets
Jacco.

44 Replies


tbp 101 Feb 21, 2006 at 15:59

There is no spoon, err bunny.
That can’t be right :blink:

Nautilus 103 Feb 21, 2006 at 17:33

I’ll risk posting the 2 most stupid questions of the year…

What’s the purpose of a real-time ray tracer that gives you an average of 6 fps when rendering 76k triangles on a 1.7 GHz machine (though I understand it’s all work done in software and no special optimizations are in place)?

If you had to make a gross estimate, how far would you say you can boost performance by applying all possible optimizations?

Please don’t get mad. Mine is innocent curiosity :surrender

Best regards,
Ciao ciao :)

tbp 101 Feb 21, 2006 at 18:05

Say your scene has 76k triangles, and you’re rendering @ 512x384 with one point light.
You will have to shoot (and shade) 512*384 = 196608 primary rays and 196608 shadow rays for that light, assuming 100% coverage.
If you sustain that at 8 fps, that’s 3.1M rays/s, or put another way, ~540 cycles per ray on a P-M 1.7 GHz.
Such performance requires quite some optimization ;)
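That arithmetic can be checked directly (a throwaway sketch; the resolution, fps, and clock figures are the ones from the posts):

```cpp
// Rays per frame at 512x384 with one point light: one primary plus one shadow
// ray per pixel. Sustained at 8 fps on a 1.7 GHz Pentium-M.
constexpr long   raysPerFrame  = 512L * 384 * 2;          // 393216 rays
constexpr double raysPerSecond = raysPerFrame * 8.0;      // ~3.1M rays/s
constexpr double cyclesPerRay  = 1.7e9 / raysPerSecond;   // ~540 cycles per ray
```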

Raytracing primary & shadow rays isn’t very useful by itself, but it paves the way for more complex shading. Think global illumination.

phantom 101 Feb 21, 2006 at 18:09

These are perfectly valid questions. :)

There are several reasons why I consider fast ray tracing interesting:

First of all, ray tracing can do things that a rasterizer can’t do, e.g. real recursive reflections & refractions on arbitrary surfaces. And, it does these ‘effects’ in a very natural manner, as it simulates natural light transport. A ray tracer is therefore relatively simple: Physical effects don’t need to be hacked, they can be simulated. A 3D engine based on hardware or software rasterizing, on the other hand, is usually a mixed bag of various tricks, and some things remain difficult to approximate correctly. Also, visibility determination comes for free: A ray tracer already detects what’s visible and what’s not; no need for special constructs.

Secondly, ray tracing scales much better with scene complexity. I use a BSP (the kd-tree variant) for spatial subdivision. Doubling the number of polygons adds a level to the tree. Going from 512 to 1024 triangles therefore has only a 10% impact on performance. This can be taken to extremes: I can render a model of several million triangles with accurate self-shadowing and illumination at several frames per second.
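The scaling claim can be made concrete: with a roughly balanced tree, traversal cost grows with tree depth, i.e. logarithmically in triangle count (a back-of-the-envelope sketch; real kd-trees built with the surface area heuristic are not perfectly balanced):

```cpp
#include <cmath>

// Depth of a roughly balanced kd-tree over n triangles: each doubling of the
// triangle count adds one level, so traversal cost grows logarithmically.
int treeDepth(int n) { return (int)std::ceil(std::log2((double)n)); }
```

Going from 512 to 1024 triangles moves the depth from 9 to 10 levels, i.e. roughly the 10% traversal-cost impact quoted above.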

Third, ray tracing scales almost perfectly with processing power. Individual rays are completely independent of each other (well, there’s the cache of course…), and so parts of the screen can be rendered by different threads without complex constructs. Also, rendering using multiple machines on a network is quite easy to do. Performance scales linearly with the number of available processors.
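The "parts of the screen rendered by different threads" point can be sketched in a few lines (the `shade` stand-in is hypothetical; in a real tracer it would shoot a ray through the pixel):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Stand-in for tracing a ray through pixel (x, y); deterministic for testing.
static uint32_t shade(int x, int y) { return (uint32_t)(x ^ y); }

// Each thread renders an independent horizontal band of the frame buffer.
// No locks are needed: rays (and therefore pixels) are independent.
void renderParallel(std::vector<uint32_t>& frame, int w, int h, int numThreads) {
    std::vector<std::thread> workers;
    int band = (h + numThreads - 1) / numThreads;
    for (int t = 0; t < numThreads; ++t) {
        int y0 = t * band, y1 = std::min(h, y0 + band);
        workers.emplace_back([&frame, w, y0, y1] {
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < w; ++x)
                    frame[y * (size_t)w + x] = shade(x, y);
        });
    }
    for (auto& th : workers) th.join();
}
```

Because the bands are disjoint, the multithreaded result is bit-identical to the single-threaded one, which is what makes the linear scaling phantom describes possible.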

And finally, it’s challenging. :) Ray tracing is easy to grasp at first, but hard to get (really) fast. I am using SIMD to trace 4 rays in parallel, a kd-tree built with the latest insights (there are people who write 200+ pages on this problem), and the code is optimized to the bone: Things like caching, hot and cold data, const correctness and compiler hints really do matter, more than you perhaps think. It’s cool to see that in real life.

About room for improvement: There is room for improvement. What I currently consider the ‘King of the Hill’ is a program that serves as a plug-in for the released Q2 source code. It completely replaces the Q2 rasterizer by ray tracing; it gathers the scene polygons, builds a kd-tree and renders them, with reflections and all. Resolution is 512x384, frame rate varies between 5 and 10fps, on a 1.7Ghz laptop like mine. That’s just hardcore. :)

Some German guys experimented with ray tracing hardware. They built a 90 MHz prototype FPGA chip that traces rays at double the rate I can do on my CPU right now. The chip uses only a fraction of the transistors that modern accelerators use. If NVidia built a chip that does that at today’s clock speeds, and placed it in parallel to match today’s transistor counts, we would have a dream machine.

So there’s some potential. But frankly, for me as a coder it’s just the ultimate optimization / theory project. You should try it. :)

Axel 101 Feb 21, 2006 at 18:16

It would be nice if you could multithread the tracer, this would give us dual core users a nice speedup :)

Impressive work nonetheless!

davepermen 101 Feb 21, 2006 at 19:05

yeah, on my (single-core) cpu i currently get more than 2x the speed due to multithreading compared to what i had before with singlethreading (but i haven’t changed much else actually).

that would have resulted in a 4x speedup on a dualcore system out of the box :D

Axel 101 Feb 21, 2006 at 19:08

yeah, on my (single-core) cpu i currently get more than 2x the speed due to multithreading compared to what i had before with singlethreading (but i haven’t changed much else actually).

WTF? :blink:

That sounds quite… eeh… impossible? Why should a program run faster when you add multithreading overhead, when the CPU can only run one active thread at a time?

davepermen 101 Feb 21, 2006 at 20:29

because it can better handle stalls, as it can just switch to another thread?

similar to what hyperthreading does. but i was impressed to see that it works even on non-hyperthreading cpus, too :D i’m just happy that it _DOES_ work :D i don’t care why :D

best is, due to the threading, i got an age-old p4 2ghz notebook to run nearly as fast as an athlon64 at 2ghz. before, it was at about half the speed of the athlon. but with 64 threads, both got about equal, both much faster.

this just showed how bad the pipes are for the p4 :D

Nautilus 103 Feb 21, 2006 at 21:08

Thanks to tbp and phantom for the exhaustive explanations.

@ phantom:
Now I better understand your impressive achievement.
Hats off!

Ciao ciao :)

moe 101 Feb 21, 2006 at 21:51

That’s really cool!

I have been following your realtime raytracers since they showed up back on flipcode. Impressive to see what you’ve achieved. My favourite pet project is ray tracing on the GPU; I’d be interested to hear what you think about that. Did you consider using the GPU in your projects, or are there specific reasons for you to keep it in software?

Btw, I changed the scene number in the scene.txt file, but most of the scenes just crash. The necessary files seem to be present. Any hints as to why it crashes?

I got ~4 fps running rtNexGen_512_lq with my notebook (Dell Inspiron 8200; 2.4 GHz).

davepermen 101 Feb 21, 2006 at 21:58

well, i’d guess with the newest ati hw (x1900: 3x the computational power of the x1800) which features excellent branching behaviour, one could set up very fast raytracing over rather complex scenes.

it would definitely be the best (gpu based) chip ever for raytracing, i think that’s at least true.

TheNut 179 Feb 22, 2006 at 02:24

Nice stuff, I like some of the model sets you have.

Here are my results for my P4 2.8GHz.

1) (default) In the room I got anywhere from 5-7 fps on LQ and 1 fps on HQ.

2) The Lego car I got 10fps on LQ and 3fps with HQ

3) The cloister (hey! I have that model too =) I got 1 fps on LQ and 0.2 fps on HQ. There were quite a few back-face culling issues with some of the columns. The blooming on this scene is marvelous though. I would love to be able to get a full blown screenshot of that.

4) For the school scene, I got 10fps on LQ and 4 fps on HQ

Unfortunately, none of the other scenes ran (program error).

Tease: I got 30+ fps with rasterization via my engine. Didn’t look as nice though =)

917a 101 Feb 22, 2006 at 03:21

When I try to run it, it crashes with an access violation. The only thing it manages to display in the window is “Timings:” in the top left corner.

Surrealix 101 Feb 22, 2006 at 05:51

Similar problem here: it displays “Timings:” in the top left, and comes up with the error: “The instruction at 0x004031d6 referenced memory at 0x82f082f8. The memory could not be read.”

1.9 GHz (AMD Athlon)

Those images look very nice, I always get excited about realtime raytracing. Well done!

AticAtac 101 Feb 22, 2006 at 07:33

Another great contribution from the Phantom.
It’s always fun and refreshing to read and follow what this guy does.
I also think that the future of 3D computer graphics belongs to realtime raytracing. It’s just like the old days, when 3dfx brought out a 3D card and everything started to catch up with 3D hardware acceleration. The same will happen with the first productive 3D raytracing card, and what comes after that you can just guess …

phantom 101 Feb 22, 2006 at 08:12

I am not so sure about the future of ray tracing. What we really need is a way to mix ray tracing with regular rendering, so that a gradual conversion becomes a possibility. Let’s face it: Even if NVidia released a chip that does realtime ray tracing, it wouldn’t be used in games, as it would break all existing code. There are no APIs, there’s little knowledge, and there are some fundamental problems with ray tracing that I conveniently ignored so far, the most problematic being the poor support for dynamic scenes:

Right now, I precalculate a kd-tree. This is a rather expensive process, and the resulting tree is static and hard to modify at run-time. You can still move lights and the camera, but that’s it. That’s why I thought the Q2 demo is so cool: It calculates a full kd-tree in 11ms for 20k triangles, which makes it possible to render a completely dynamic scene of 20k triangles at interactive rates. Still, that’s just 20k triangles.

Another problem is the explosion of the number of rays. For 512x384, you need 512x384 rays to have a minimal visualization of the scene. Add a light, and you need another 512x384 rays. And that’s just a point light; for area lights you need to send out several rays to approximate shadowing. You do get soft shadows in that case, and the code barely becomes more complex. Same for reflections: Each reflection adds a ray per pixel, refraction adds two (assuming the ray also leaves the object it enters). So a glass sphere with some reflection and some refraction, lit by two lights, requires 6 rays per pixel. Make those area lights, and the ray count explodes to infinity.
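The count above can be written down directly (using the post’s simplified per-pixel bookkeeping; a real tracer would also shoot shadow rays from every secondary hit point, making the growth even steeper):

```cpp
// Per-pixel ray count for the glass-sphere example in the post:
// 1 primary + 1 reflection + 2 refraction rays (entering and leaving the
// object), plus shadow rays per light (1 for a point light, many for an
// area light).
int raysPerPixel(int lights, int shadowRaysPerLight) {
    const int visibilityRays = 1 + 1 + 2;   // primary, reflected, refracted x2
    return visibilityRays + lights * shadowRaysPerLight;
}
```

Two point lights give the 6 rays per pixel from the post; sampling each light as an area light with, say, 16 shadow rays already pushes it to 36.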

If you could render just some special effects using ray tracing, you would solve several problems: First of all, the number of pixels to ray trace is reduced; secondly, the new functionality can easily be approximated by classic rasterizing. The only remaining problem is that the ray tracer needs a full scene representation, especially for reflections. And, it needs to duplicate all shading. Some guys are working on Cg shaders for real time ray tracing, so that may not be too much of an issue.

So basically realtime ray tracing is ‘fundamental research’, or ‘pure coding fun’ if you’re not too academic. :)

phantom 101 Feb 22, 2006 at 08:18

By the way, one of the things I wanted to do for my ray tracer (basically the reason for picking it up after several months of inactivity) is from a recent paper published by Intel. It describes a technique named ‘MLRTA’, which is used to determine an entry point in the kd-tree for all rays of a tile. If you can guarantee that the first couple of branches in the tree are not taken for any ray in the tile, you can substantially speed up rendering. They claim a 2-3x speedup on indoor scenes; this figure increases when you see a smaller portion of a larger scene. Right now, this code is not working in my tracer (some bugs), but as soon as it works and delivers the expected performance boost I’ll post here. :)

I’m just mentioning this to show that research on fast ray tracing is still alive and kicking; there are significant improvements to be made. Initially I started on ray tracing because another scientist (Ingo Wald) showed that low level optimizations can make a real difference: He was probably the first to push ray tracing without tricks and approximations to ‘interactive’ levels. Optimization is my thing, so I got severely interested. I would like to invite anyone who could use a little training in optimization skills and graphics theory to do the same, it’s an awesome joyride.

vanbirk 101 Feb 22, 2006 at 11:04

[UPDATE] Sorry, skipped over the last post. Some of the points are already discussed. [/UPDATE]
@phantom

I am not so sure about the future of ray tracing. What we really need is a way to mix ray tracing with regular rendering, so that a gradual conversion becomes a possibility. Let’s face it: Even if NVidia would release a chip that does realtime ray tracing, it won’t be used in games, as it would break all existing code. There are no API’s, there’s little knowledge, and there are some fundamental problems with ray tracing that I conveniently ignored so far, the most problematic is the poor support of dynamic scenes:

That’s not entirely true -> http://www.openrt.de/
They have specified an API that is very similar to OpenGL (indeed, most of the calls just have to be switched from gl… to rt…)
@phantom

Another problem is the explosion of the number of rays. For 512x384, you need 512x384 rays to have a minimal visualization of the scene. Add a light, and you need another 512x384 rays. And that’s just a point light, for area lights you need to send out several rays to approximate shadowing. You do get soft shadows in that case, and the code barely becomes more complex. Same for reflections: Each reflection adds a ray per pixel, refraction adds two (assuming the ray also leaves the object you enter). So a glass sphere with some reflection and some refraction, lit by two lights, requires 6 rays per pixel. Make that area lights, and the ray count explodes to infinity. If you could render just some special effects using ray tracing, you solve several problems: First of all the number of pixels to ray trace is reduced, secondly, the new functionality can easily be approximated by classic rasterizing. Only remaining problem is that the ray tracer needs a full scene representation, especially for reflections. And, it needs to duplicate all shading. Some guys are working on Cg shaders for real time ray tracing, so that may not be too much of an issue.

That’s where SaarCor or similar processors jump in (see http://www.saarcor.de/ ).
And I’ve read some amazing figures about what can be done with the CELL processor.
No one would have believed 10 years ago the triangle counts and shading workloads that can be handled nowadays.
@phantom

So basically realtime ray tracing is ‘fundamental research’, or ‘pure coding fun’ if you’re not too academic. :)

You should really read some papers from this group on this topic; it’s a very active research area.
A friend of mine implemented a kd-traversal for realtime-raytracing on a gpu and that’s just the beginning (his diploma thesis starts soon … )

Greets,
vb

phantom 101 Feb 22, 2006 at 11:20

I have seen the OpenRT / InTrace stuff. Wald’s research is groundbreaking.

About the API: Technically, the OpenRT API is an API, but what I meant is that there is no industry standard; so far there are no ray tracers besides OpenRT using that API. Would be interesting if someone else implemented it on top of a different tracer, especially if that tracer is faster (several tracers beat OpenRT atm).

The SaarCor processor is nice, but they don’t even use it themselves. Right now their InTrace product simply uses fast machines linked through a fast network to do the rendering. But that isn’t the point: The problem is that ray tracing quickly starts requiring excessive amounts of processing power once you use rt-specific features like reflection/refraction. A fast system can render 4M or 5M rays per second (perhaps a bit more); a dualcore machine might top out at 10M. But it’s easy to come up with a scene that requires 100M or more rays for a single frame. Doubling processing power would only allow you to add 1 or 2 lights. The really cool stuff, like area lights, is just far too expensive at the moment.

I know OpenRT does realtime global illumination, but sadly that’s also where they break their own rules and introduce approximations to get it realtime. That’s fine with me, but from there on, it isn’t ‘no compromises’ ray tracing anymore.

Perhaps you can invite your friend to take a look at ompf.org/forum ; we have some nice discussions going on, and discussing GPU based rendering would definitely be interesting.

TheNut 179 Feb 22, 2006 at 12:02

I think real-time raytracing in its current state would be an excellent tool for taking in-game screenshots. It would benefit games like The Sims, where you have a photo album of your sims’ lives. Since it’s only a single frame, this sort of thing would be fast. It would be a good way to falsely promote your game too =)

All this talk about raytracing is starting to get me interested =P

phantom 101 Feb 22, 2006 at 12:35

Give it a try dude. The nice thing about ray tracing is that it’s already rewarding way before you get it real time. Check out the ray tracing tutorials, right here at devmaster:

http://www.devmaster.net/articles/raytracing_series/part1.php

You’ll be up and running in no-time. After that, go for the advanced stuff, you could specialize in speed or realism (GI is very cool).

bramz 101 Feb 22, 2006 at 12:53

impressive! =)

davepermen 101 Feb 22, 2006 at 17:11

http://davepermen.net/FaultyLine.png

this line in there, is this an error? (frame 13..)

phantom 101 Feb 22, 2006 at 18:07

Looks like a bug to me.

Reedbeta 167 Feb 22, 2006 at 18:22

@phantom

The real cool stuff, like area lights, is just far too expensive at the moment.

I wonder if you’ve seen this paper from SIGGRAPH last year. Their technique achieves a factor of 10 to 100 speedup on area lights, and it’s not an approximation or a cheat. :worthy:

Mihail121 102 Feb 22, 2006 at 22:03

Getting an exception here! Something’s wrong I guess :)

phantom 101 Feb 22, 2006 at 22:42

You need an SSE2-capable processor for the demo. I keep forgetting to mention that, sorry…

conman 101 Feb 22, 2006 at 22:48

Nice work!

But phantom, I am not sure about one thing you wrote:
@phantom

I know OpenRT does realtime global illumination, but sadly that’s also where they break their own rules and introduce approximations

I don’t know anything about how the global illumination in OpenRT is implemented, but there are ways (papers) to get unbiased global illumination.
I’m pretty sure the guys from OpenRT know about that, so I don’t think they are breaking any rules here.

…but maybe I am wrong.

phantom 101 Feb 22, 2006 at 22:54

Reedbeta: I didn’t spot that one, that certainly looks cool. They even use Sponza, looks like that mesh is a new standard mesh. I need to read the doc in more detail to see if it’s doable… I’m still struggling with that other 2005 paper, on MLRTA. :(

phantom 101 Feb 22, 2006 at 23:02

Conman: I suppose they are still doing it like Wald described in his thesis (‘Realtime ray tracing and global illumination’): Basically, he proposes to sample a more or less random small subset out of a large array of lights to approximate global illumination. The ‘large array of lights’ is generated by bouncing some random rays from area lights into the scene. It works quite well, but it’s hardly physically correct.
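The scheme as summarized here can be sketched in a few lines (the `VPL` type and `shadeWithVPLSubset` name are hypothetical, and the real thing weighs each VPL by geometry and BRDF terms; the point is just that each pixel sums a small random subset of the VPLs, scaled up to stand in for the whole set):

```cpp
#include <random>
#include <vector>

// A virtual point light, placed by bouncing random rays from the area lights.
struct VPL { float intensity; };

// Estimate the total contribution of all N VPLs by sampling k of them and
// scaling by N/k. The subset sampling itself is an unbiased estimator of the
// sum; the approximation phantom objects to lies in the fixed, small VPL set
// standing in for the true light transport.
float shadeWithVPLSubset(const std::vector<VPL>& vpls, int k, std::mt19937& rng) {
    std::uniform_int_distribution<size_t> pick(0, vpls.size() - 1);
    float sum = 0.0f;
    for (int i = 0; i < k; ++i) sum += vpls[pick(rng)].intensity;
    return sum * (float)vpls.size() / (float)k;
}
```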

vanbirk 101 Feb 23, 2006 at 08:36

@conman

I don’t know anything about how the global illumination in OpenRT is implemented, but there are ways (papers) to get global illumination unbiased.
I’m pretty sure the guys from OpenRT know about that, so I don’t think they are breaking any rules here.

Nope, just like phantom said, they are also ‘faking’. In theory you have to discretize an integral over all incoming photons in space, but you can trick the human eye with some good heuristics and probabilistics (sorry about my math English). And that’s basically what they are doing.

Razor 101 Feb 24, 2006 at 03:48

@vanbirk

Nope, just like phantom said, they are also ‘faking’. In theory you have to discretize an integral over all incoming photons in space, but you can trick the human eye with some good heuristics and probabilistics (sorry about my math English). And that’s basically what they are doing.

By that reckoning, all raytracing is an approximation, or “fake”. You have to trace the whole area of each pixel through the scene to get it right, at which point it’s not really ray tracing, is it?

Reedbeta 167 Feb 24, 2006 at 05:29

The distinction is that physically based raytracing converges to the real solution as you increase the number of samples (rays). The OpenRT people use approximations that don’t have that property, since they aren’t physically based.
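The convergence property Reedbeta describes is exactly the Monte Carlo one; a minimal illustration on a 1-D integral (nothing ray-tracing-specific, just the estimator):

```cpp
#include <random>

// Unbiased Monte Carlo estimate of the integral of x^2 over [0,1]
// (true value 1/3). As the sample count grows, the estimate converges to the
// exact answer -- the property physically based ray tracing has and the
// OpenRT-style approximations lack.
double estimate(int samples, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < samples; ++i) {
        double x = uniform(rng);
        sum += x * x;
    }
    return sum / samples;
}
```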

bitshit 101 Feb 24, 2006 at 23:57

@phantom

That’s why I thought the Q2 demo is so cool: It calculates a full kd-tree in 11ms for 20k triangles, which makes it possible to render a completely dynamic scene of 20k triangles at interactive rates. Still, that’s just 20k triangles.

Sounds cool indeed, is this Q2 raytraced demo available for download somewhere? Couldn’t find anything about it on google….

Pon 101 Feb 25, 2006 at 02:07

Phantom, you MUST do some new tutorials on how you got it to this point. I love your older ones, they’re very insightful. If only you could pick up where you left off :(

Great developments on the tracer, by the way. It looks to me like you’ve implemented HDR, or is my eyesight just going? :D

davepermen 101 Feb 26, 2006 at 08:03

hdr is (just as per pixel lighting) nearly a default for a raytracer implementation (as most just begin with a struct Color { float Red; float Green; float Blue; } to do the math).

but yes, he has some blooming and such stuff in, as modern “hdr-enabled” engines do, too.
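A minimal sketch of what davepermen describes, using the struct from his post plus a Reinhard-style operator c/(1+c) (one common tone map; the thread doesn’t say which operator phantom actually uses):

```cpp
// Float-per-channel HDR color: values above 1.0 stay meaningful through the
// shading math and are only compressed into displayable range at the end.
struct Color { float Red, Green, Blue; };

// Reinhard-style tone map: maps [0, inf) into [0, 1).
float tonemap(float c) { return c / (1.0f + c); }

Color tonemap(const Color& c) {
    return Color{ tonemap(c.Red), tonemap(c.Green), tonemap(c.Blue) };
}
```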

davepermen 101 Feb 26, 2006 at 16:18

tonemapping was the word i was looking for yesterday night. he has some tonemapping in, with blooming and such included, i’d guess.

couldn’t find the word, too many people smoking marihuana around me (goa party :D)

phantom 101 Feb 26, 2006 at 21:37

It’s a bloom filter. Nothing special. Makes it a bit ‘dreamy’.

zukko 101 Feb 27, 2006 at 04:06

hi, got the following times on a p4 3.4 with HT and 1280 MB of pc3200:
scene 3 :: 3.5 fps
scene 5 :: 0.5 fps
scene 7 :: 4 fps
scene 8 :: 6 fps

ran it at HQ, the other scenes crashed :(

Pon 101 Feb 27, 2006 at 06:30

Heh, well I had my eyes checked just to be sure, as it was rather blurry for me :D

bitshit 101 Jun 05, 2006 at 21:13

bump :glare:

Originally Posted by phantom
That’s why I thought the Q2 demo is so cool: It calculates a full kd-tree in 11ms for 20k triangles, which makes it possible to render a completely dynamic scene of 20k triangles at interactive rates. Still, that’s just 20k triangles.

Sounds cool indeed, is this Q2 raytraced demo available for download somewhere? Couldn’t find anything about it on google….

Ace 101 Sep 20, 2006 at 11:19

@Reedbeta

The distinction is that physically based raytracing converges to the real solution as you increase the number of samples (rays). The OpenRT people use approximations that don’t have that property, since they aren’t physically based.

I’m not sure you’re right about that. If you’re referring to the use of Virtual Point Lights (VPLs), as also used in Instant Radiosity by Alexander Keller, there is a way to see this as just an approximation of the physical reality. Instead of shooting new random secondary rays from the hit point as an approximation of sampling ALL directions, you just use the same approximation for all pixels per frame (the random walk that made the VPLs). If you increase the total number of VPLs and the number of VPLs you sample, the result would therefore (I think) still converge toward the real solution. The same arguments go for the area lights.

As for the Interleaved Sampling and Continuity Buffering they also use in their Instant Global Illumination system, these are analogous to the normal sampling and filtering techniques you use for pixel sampling.

So all in all I don’t agree that this is in any way a “cheat”, though it might be a coarser approximation. But I might be wrong…

I’m doing a master’s project on real-time ray tracing, but my ray tracer is not that impressive… And I’m handing in my work before the 1/10, so I can’t get much improvement in…

S.

P.S.: I do think ray tracing is the future, especially if the graphics-card producers will take a chance on rt-hardware.

_oisyn 101 Sep 20, 2006 at 18:18

@phantom

const correctness […] really do matter

Do you have specific examples? Because most people tend to think constness helps performance, while in fact it usually doesn’t :)

.edit: oops, didn’t see this is an old topic.

stealther 101 Jan 17, 2007 at 22:49

This project is really great! Thanks, phantom, for your raytracing tutorials.
I think everybody here would like to see new tutorials on that topic. =) That’s very interesting!!!
& also… your link http://www.bik5.com/rtNexGen.rar is broken =|