Discuss link: dim3 Ray Traced Shooter (www.klinksoftware.com)
My name is Brian Barnes and I’ve created what I believe to be the first completely ray-traced shooter game. Every pixel is ray-traced, and it also does what are normally rasterization effects like bump, specular, and glow mapping. The HUD and UI are OpenGL (there’s no reason to re-invent the 2D wheel). It’s a complete 9-level game, made mostly to show off what you can do with ray tracing in a live light environment.
Here’s a demonstration video: http://www.youtube.com/watch?v=m6t-qHjE1gU
The code has two pieces, dim3RT, which is a development environment for developing 3D games, and dim3RTL, which is a high-speed API for creating ray-traced applications.
Both of these components are free and open-source. My goal would be to have my ray-tracing API become an open standard. If anybody has any ideas about how to start this process, please help me. I do understand that I’m not the first and won’t be the last, but I’d like to give it a try.
The applications themselves:
OS X: http://www.klinksoftware.com/RT/dim3RayShooter_OSX.zip
Note that while this system is very fast, ray tracing is inherently slow and so requires a modern multi-processor or multi-core computer. The video card is irrelevant, it’s all CPU bound at this point. One nice thing about this API is the implementation itself is small but removes so much code from engine developers – lighting, shadows, bump-mapping, etc. You just need to be concerned with vertex and UV data.
If you want to cheat to see all the levels, then in Data/Settings/, swap the contents of the files SinglePlayer.xml and SinglePlayer_cheat.xml (renaming them is fine).
I’m making a couple of stops at forums like this to hopefully get this in front of as many people as possible. Any suggestions on what else I can do would be very helpful. There’s no money involved, it’s all free and open-source, I’m in it for my love of creating and coding! All comments appreciated!
What’s the point of using raytracing for a game of this type? It doesn’t look any different from what Doom 3 did with rasterization and stencil shadows a decade ago.
If you’re going to make raytracing the central feature, I expect to see reflection/refraction, caustics, stuff that can’t reasonably be done with rasterization. After all, why would anyone use your API if all they can do is rehash graphics techniques from 2003?
Plus, you need better art in your demo. Nothing personal, but programmer art isn’t very appealing. If you’re trying to impress people, I’d invest in hiring an artist or two to make you a better level, and better character models/animations. And upload the youtube video in at least 720p - that 360p video is awful to look at.
You might have some good tech, but you need to do a much better job of presentation if you want people to pay you any attention.
The difference between this and Doom 3 is the amount of code engine users have to write. Obviously, without increases in CPUs or specialized GPUs, it’s sort of relegated to tech demo status, but getting a little ahead of the game is how APIs become standards.
As for things like reflections and refraction, you need to get further in the demo, but I set up another video showcasing that level: http://www.youtube.com/watch?v=1ME7fH_toC4
The problem with the video is that, if you’ve run the actual app, I’m running it in a relatively low resolution for the speed, so the video is actually in low res because the game is in low res. I still think it’s very impressive for what it is, but that’s the state of live ray tracing at this point.
Thanks for your reply! The art is actually something I hired out for. Again, it’s ray tracing, so I’m kind of stuck with relatively low-poly maps, and I spent most of the poly budget on the models.
I put up one more video, just a demonstration of a level that has a row of lights that cycle back and forth, it’s not as impressive as the other demos: http://www.youtube.com/watch?v=ei-DmV43Uqs
It’s nice to see others working in the field of ray tracing, although, like Reed says, your graphics are dated. You need to emphasize the strong points of ray tracing such as area lights, shadows with penumbras, reflections, refractions, or anything solvable by an equation. Things that would otherwise require clever hacks in a rasterizer. Naturally, when you enter these topics, you’ll start to see your frame rate drop considerably. The number of rays you need to cast increases exponentially with image quality.
There have been other projects that have attempted this, but keep in mind this sort of stuff won’t take off with today’s tech. It’s more hobby related. Quake 3 was ray traced quite some time ago using the OpenRT library. There’s also NVIDIA’s OptiX, which will use your GPU for better render times. They have some demos rendering high-quality art at slow frame rates, but it’s enough to be interactive. So the field is progressing, but we need more time for the hardware to catch up.
But I’ve gotten there with today’s tech. Grab the application and play it; it can do 10-20 fps on a good machine at an admittedly smaller, pixel-doubled resolution (like old PC Doom resolution, or higher if you have the hardware for it). My videos were played at 480x300.
The Quake 3 thing is interesting, as is OptiX, but mine is real time. To do the Quake 3 one, you had to have a virtual 36 GHz machine. Mine is playable on what’s available today.
There’s other videos with reflections and refractions, but grab the code and run it, I think you’ll be a bit more impressed with what I’m doing. Again, this other stuff is simulated. I’m real time, right now. There are certainly limitations and the API is very sensitive to how it’s used, and I’m keeping the maps a bit simpler for speed. It could certainly run Quake/Quake 2 maps up to speed.
I’m out of the hobby realm at this point.
First of all, I admire that you’ve entered the realm of ray tracing. This is good, although your renderer is still slow compared to other ray tracers. I bet you know Arauna - it runs on current CPUs and I can get like 30-70 fps (depending on how much reflective surface is visible) on a Core i5 with the default Sponza scene in there, a little less on my laptop Core i3.
You’re rendering 10-20 fps at 480x300 pixels, which means some 1.44 - 2.88 MRays/s … which is quite a low number.
Anyways can you tell some details about ray tracer? I don’t have too much time to go through code, but I’d like to know something about acceleration structure, whether it’s mono-ray tracer, etc.
Right now I’m also working on a ray tracing library, though achieving many more MRays/s … note that I use the GPU for ray tracing (the wonderful world of OpenCL), although using the CPU is also possible; I don’t know what numbers I would get there, though.
I didn’t know about Arauna, that’s interesting, but it seems to be a CUDA nVidia-only tracer (I have an older ATI (now AMD) card.) The demo youtube videos don’t seem anywhere near 30-70 fps (more like 10 or so) and, as far as I can tell from the docs, it requires multiple nVidia cards (again, at least that’s what the demo says) to achieve the fps they have. Not at all trying to tear anybody down, but 30-70 fps would be an incredible achievement. I could easily be wrong, but it seems to be the case from the demos.
I do need to get into parallelizing with the GPU, and OpenCL will be my next step.
Here’s the basis of what I’m doing. It’s a scene API. It performs best when you pre-load everything you can though you are free to alter anything within the scene between renders. Written all in C, the scenes are a collection of meshes and lights. It handles materials with transparency, bumps, speculars, reflection, and angle of refraction. It’s designed to fit the kind of data structures that you’d see in games.
It’s multi-threaded and the API itself is asynchronous (it’s designed so you can start a render on a scene and then go back and run the game physics, for instance). It currently renders to an OpenGL texture. The API also reflects a C++ binding in naming and parameter passing, and is object based. You can have multiple scenes and render to all simultaneously.
The scene analyzes all the meshes and builds hit lists through a bunch of methods, including any bounce backs to lights, which is where it gets most of its speed. In a simple scene, it can achieve huge frame rates, but the more complex the scene, the slower the frame rate. That’s basically the gist of my ray tracer: an attempt to minimize the number of things I have to check for any one ray.
Also, it breaks the frame up into a number of squares and starts a stack that the threads feed from. I do this because some areas can be in darkness, and a thread per square means a lot of threads end up finishing before the others, which isn’t efficient. Each square has its own frustum eliminations of items in the scene.
It does have limits, not in the API, but in this implementation. It won’t bounce a ray more than 8 times (this is settable), and right now it doesn’t have reflections and refraction when tracing back to lights.
I am super interested in anybody with a playable game in a ray tracer, especially so I can compare it with mine. So if you have anything to show, I’d love to see it. If you’re achieving much higher fps than me, then it will tell me what direction to head off in next.
Just to note - Arauna is CPU only, Brigade (from the same author) is CUDA only.
Now as for me - I run my own ray tracer (working on both CPU and GPU platforms), but I guess it still needs a bit of work (multi level BVHs for example and HLBVH are on my TODO list for quite some time).
I could give you a few hints in terms of performance, although I can’t show you any game with my ray tracer (only some performance tests), as none exists. I could show you some samples, etc., though.
If you feel like talking about optimizations and possibly exchanging some code, spam me at my mail (firstname.lastname@example.org), but I won’t be that available for a few days (tons of real work -.-).
Thanks for the correction!
I should note that when I originally started this, I was getting really good fps until it became part of an actual game. The physics, any stalling required, and just the daily grind of mesh updates, etc., can have a real effect. All my simulations turned out not to be realistic. I’ve been bitten by this before. It might well not be the case for you, and hopefully it isn’t, but it’s the kind of thing that always comes as a surprise.
I’d try to work it into something – even like an older engine like Quake – and see what happens.
Mine actually already has a BVH; that, plus normals and frustums (which start the bounding volumes), are its main areas of optimization.
FYI, thanks a lot for the suggestions. I see where my approach to explaining this thing has been bad in places, so I’ve added some docs for anybody interested:
The best practices document:
The API header:
And, again, you can download and play the game up above!
Hi, I am the author of Arauna, Brigade, and Brigade 2 (which is now property of OTOY and being maintained by Jeroen van Schijndel). Couple of notes:
Just setting a benchmark here; I applaud experiments in ray tracing / games, but I think you should not be satisfied until you can match state-of-the-art (unless you have some fantastic feature that justifies lower performance). Arauna is freely available including source code, so you might want to check it out.
Yeah, I understand the problems; the horsepower just doesn’t exist. Still, I think there might be a market for older-style games; there are a lot of lower-poly games which are still fun. But, obviously, so much work and research has gone into rasterization that it seems the only good attributes of ray tracing are that it removes a ton of code the engine people would normally have to write, along with the obvious reflections and such.
My goal was always to prove the API by using it in an actual game. As I mentioned earlier, the minute you put a ray tracer in a game, you’re slowed down by the game logic itself and the stalls it forces with updates, which makes the situation worse. Worse still, it scales badly with screen resolution rather than poly count, when in fact modern games tend to optimize for the opposite.
One imagines a game whose mechanics are light/shadow would be the killer app to make up for the lower performance.
Regardless, once I’d proved the API works, I was hoping to get enough interest to find an organization that might be willing to try a massively multiprocessing card version of it, with smaller, inexpensive chips like ARM. There are problems in the design in my head, but that was always my next goal.
And yes, I understand the hubris involved, but taking a shot is worth it. Getting over the hump to get a little press seems to be hard, especially in light of how high-end rasterized games are now.
One other thing my engine does is consider the entire collection of meshes in a scene to be dynamic, as you’d expect modern games to be (modern games are full of movement). This is one design consideration that is kind of technical but is necessary to eventually make modern games. It’s one thing I have that’s different from other APIs I’ve seen, and it’s an absolute killer for performance, but modern games demand this.
Ray tracing makes game development easier. We tested Arauna on 6 or so student games over the course of 8 years, and the teams working on these games consistently performed better (in terms of project completion). This was acknowledged by James McCombe of Imagination (they do the Caustic ray tracing hardware, as well as some mobile ray tracing tech). Imagination wants ray tracing on mobile to ease game development: if you make $1 on an Android game, development can’t take as much resources as console / PC games demand. I thought this was a pretty interesting point of view.
Regarding dynamic geometry: Ingo Wald once explained to me an approach that I still use today, where each mesh has its own BVH, and a top-level BVH links these. The top-level BVH is constructed per frame, taking into account object positions and orientations, making rigid motion virtually free. Individual BVHs are then either constructed once using a high-quality BVH builder (check out SBVH for state-of-the-art performance), refitted if animation doesn’t change object topology (think ocean waves, waving trees), or rebuilt quickly (balancing tree quality and build time) if the animation is ‘chaotic’ (explosions). Animations that sit in between (skinned meshes) may be rebuilt once every N frames and refitted in between, or they could use some other scheme.
The top-level BVH exploits the fact that most games are not fully dynamic at all (I disagree with you there); typically large chunks of geometry are static or at least rigid. Not exploiting that means disregarding an important possible advantage. Not being fully dynamic is not just a technical thing; fully dynamic game levels tend to be taxing on game design, since it’s hard to guarantee that the player cannot get into a situation where progress is impossible.
As for Whitted ray tracing quality in games and not-fully-dynamic stuff: currently I’m working on a small game project that uses the standard deferred shading approach, with tons of other tricks to make the game look good (not photorealistic, but good).
Now there are reflective surfaces. The standard approach would be to use a cubemap (or cubemaps), but that’s a problem since the lighting in my game is dynamic. So basically I perform ray tracing against the game scene (only static meshes, not dynamic ones; I can’t afford re-building kd-trees at runtime, as both CPU and GPU are already quite busy), generate another GBuffer with this, and compute shading from there. Of course I can’t use all the optimizations for deferred rendering this way! For performance reasons I also use a half-resolution buffer.
Of course the rendering pipeline is rather complicated (as opposed to path tracing, like in Brigade), but it works, and reflections do make a game-like scene look better.
I can’t show any images right now, as I’m not at home where I have them (posting from my beloved notebook), but as soon as I get to my PC I’ll try to post at least some screenshots (I can’t show it on game scenes though, just on the test ones).
You are right about game meshes being dynamic; I was thinking more forward, to where realism in games is going to depend on destructible or more live environments. If something like ray tracing is going to catch on, ease of development can’t be the sole motivating factor; it’s going to need to be as forward-thinking as possible.
My scheme to get around mesh elimination as quickly as possible while assuming a completely dynamic environment is to split the scene (which I do anyway, as the scene is rendered by a series of threads feeding off a stack of screen chunks), frustum cull for the entire view, and then frustum cull within that reduced list for each mini view, of which there are hundreds. It pretty much immediately chops off large chunks of possible hit targets.
Reflections/Refractions throw that out, though, so that’s the one problematic part, but they have their own optimization schemes.
Actually, getting back to the lights: the dim3 maps have 10-20 lights in them, not counting all the dynamic lights from the weapons, and that’s a bit more of a killer for me.
And that’s one thing that hasn’t been mentioned yet. People mention polys per scene, but lights per scene are more intensive for me. A single light scene is very fast.
I haven’t really looked at anybody else’s code because I’m trying to “clean room” approach it, in hopes that I might come up with something interesting. I might fail, but it’s one way to find approaches that might not have been thought of.
Though, this discussion has given me some new ideas!
I want to give out a couple of real-life numbers, basically in-engine gameplay. This is on a regular single-chip i7.
Running at 640x400, with a scene of about 75K polygons. Note the engine is polygon based, not triangle based, so if you’re counting triangles you’ll need to basically double that, since most polys are quads: 70K for regular stuff (map, models) and another 5K for particles and projectiles, temporary things. The vast bulk of these polys are in movement (in models, etc.) In triangles, 150K.
12 lights in the scene, most meshes have at least 2-3 lights falling on them. The vast bulk of these lights are in movement, also (usually tied to a swinging light.)
This gets a relatively poor 6-8 fps, CPU only. Most of my speed is lost in tracing back to lights, and I have some ideas with likely hit lists and other thoughts, so this will be kind of a baseline. My biggest worry is that what is pretty simple code will eventually become a spaghetti mess for speed.
I think on the game I need to start out with some textures with alpha transparency to show off a bit more, but first I want to see if I can get clever with the speed.
”… strong points of ray tracing such as area lights, shadows with penumbras, reflections, refractions, or anything solvable by an equation…that would otherwise require clever hacks in a rasterizer.”
The strongest point of tracing is that it sees no overdraw and can render massively occlusive geometry. How would one code a Sierpinski cube flythrough using a rasterizer? For a tracer it is almost trivial. Dim3 isn’t that sort of tracer, judging from the video, but at least Mr. Barnes spends much of his time coding instead of making snarky remarks about other people’s work.
That’s a good point, especially in resolution in a z-buffer (though you’d still have problems if the geometry was so large that you started running into math resolution errors, but that would be getting silly.)
To me, all this comes down to the fact that somebody writing an engine gets to avoid a boat-load of work. No shadow code. No fancy trees or other occlusion code. Treat it as a scene, pre-load as much geometry as possible, and worry about the parts of engines that don’t get as much work (physics realism, for one) instead of hassling with tech.
Of course, everybody’s done so much work on rasters that it’s hard to justify ray-tracing at this point. Some of them are just absolutely gorgeous. But I like the challenge so I’ll keep pounding away at it.
One interesting thing: my biggest problem isn’t tracing the view into the scene, it’s tracing the massive number of lights (I’m always thinking of Quake-level games, where if you remove the light mapping you’ve got a large number of live lights in the scene). I’ve been tweaking my “likely block” code: I keep a list of likely blockers for a slice of rendering, with the thought that if I hit something in the slice, a trace back to a light is “likely” to be blocked by the same mesh polygon. It does speed up rendering a good deal, but it also falls victim to some false hits that basically waste time. It’s also at the slice level; I need to re-arrange a lot of stuff to change how that works, but that’s my next attempt at improving frame rates.
My biggest concern is getting a code base that’s a large collection of hacks instead of a clean more math like implementation.
I’m really intrigued by the idea of mixing ray-tracing and voxels.
I’ve seen some really nice demos of very large voxel terrains with ray traced lighting.
Talking to the authors didn’t reveal much, except that the final pixel rendering is actually done with point sprites, which I found really surprising.
As far as I can work out, they do ray tracing to detect intersections with geometry, use the distance to work out level of detail and point sprite size, ray trace the lights for the point sprite, then mark the pixels covered by the point sprite as done to reduce ray tracing.
Well that’s my guess at least.