Hello and welcome to a new series of articles (or column, if you wish) on the topic of raytracing.
For those that do not know me: my name is Jacco Bikker, also known as 'Phantom'. I work as '3D tech guy' at Overloaded, a company that develops and distributes games for mobile phones. I specialize in 3D Symbian games, which require highly optimized fixed-point, non-hardware-accelerated 3D engines, crammed into 250Kb installers. So basically I'm having fun.
As software rendering used to be my spare time activity, I was looking for something else. I tried some AI, which was great fun, and recently I dove into a huge pile of research papers on raytracing and related topics, such as global illumination, image based lighting, photon maps and so on.
One document especially grabbed my attention. It's titled "State-of-the-Art in Interactive Ray Tracing", and was written by Wald & Slusallek. I highly recommend this paper. Basically, it summarizes recent efforts to improve the speed of raytracing, and adds a couple of tricks too. But it starts with a list of benefits of raytracing over rasterization-based algorithms. And one of those benefits is that when you go to extremes, raytracing is actually faster than rasterizing. And they prove it: Imagine a huge scene, consisting of, say, 50 million triangles. Toss it at a recent GeForce with enough memory to store all those triangles, and write down the frame rate. It will be in the vicinity of 2-5. If it isn't, double the triangle count. Now, raytrace the same scene. These guys report 8 frames per second on a dual PIII/800. Make that a quad PIII/800 and the speed doubles. Raytracing scales linearly with processing power, but only logarithmically with scene complexity.
Now that I got your attention, I would like to move on to the intended contents of this crash course in raytracing.
Series Outline
Over the next couple of articles I would like to introduce you to the beauty of raytracing. I would like to start with a really simple raytracer (spheres, planes, reflections) to get you familiar with the basic concepts.
After that it's probably a good idea to add things like refraction, area lights (resulting in soft shadows) and antialiasing to improve the quality of the graphics.
By that time, the raytracer will be painfully slow, so we'll add a simple spatial subdivision to speed it up.
And finally, I would like to introduce you to the wonderful world of Global Illumination, using photon maps. And believe me, you haven't really lived until you see your first colors bleeding from one surface to another due to diffuse photon scattering…
Disclaimer
I would like to point out that I am pretty new to this. I've got some pretty decent results, and I'm quite sure my maths are OK, but in some cases I will undoubtedly make incredibly stupid mistakes. When that happens, don't forget to bash my sorry ass on the forum. I might learn from it.
Acknowledgements
I would like to thank Bram de Greve for proofreading these articles and providing very useful insights and corrections.
Basics
Raytracing is basically an attempt to imitate nature: The colors that you see are rays of light cast by the sun (probably), bouncing around the detailed scenery of nature, and finally hitting your eye. If we forget special relativity for a moment, all those rays are straight lines.
Consider the following illustration:
Figure 1: Rays from sun to observer 
I have drawn a couple of rays in this picture. The yellow one goes directly from the sun to the camera. The red ones reach the camera only after bouncing off scenery, and the blue one is bent by the glass sphere before hitting the camera.
What is missing in this picture are all the rays that never hit the observer. Those rays are the reason that a raytracer does not trace rays from a light source to a camera, but vice versa. If you look closely at the above picture, you can see that this is a fine idea, since the direction of a ray doesn't matter.
That means that we can have it our way: Instead of waiting for the sun to shoot a ray through that one pixel that is still black, we simply shoot rays from the camera through each pixel of the screen, to see what they hit.
Coding Time
At the bottom of this article you'll find a link to a file containing a small raytracer project (VC6.0 project files included). It contains some basic stuff that I'm not going to explain here (winmain to get something on the screen and a surface class for easier pixel buffer handling and font rendering), and the raytracer, which resides in raytracer.cpp/.h and scene.cpp/.h. Vector math, pi and screen resolution #defines are in the file common.h.
Spawning rays
In raytracer.h, you will find the following class definition for a ray:
```cpp
class Ray
{
public:
    Ray() : m_Origin( vector3( 0, 0, 0 ) ), m_Direction( vector3( 0, 0, 0 ) ) {};
    Ray( vector3& a_Origin, vector3& a_Dir );
    …
private:
    vector3 m_Origin;
    vector3 m_Direction;
};
```
A ray has an origin and a direction. When starting rays from the camera, the origin is usually one fixed point, and the rays shoot through the pixels of the screen plane. In 2D this looks like this:
Figure 2: Spawning rays from the camera through the screen plane 
Have a look at the ray spawning code from the Render method in raytracer.cpp:
```cpp
vector3 o( 0, 0, -5 );
vector3 dir = vector3( m_SX, m_SY, 0 ) - o;
NORMALIZE( dir );
Ray r( o, dir );
```
In this code, a ray is started at the origin ('o'), and directed to a location on the screen plane. The direction is normalized, and the ray is constructed.
A note about this 'screen plane': This is simply a rectangle floating in the virtual world, representing the screen. In the sample raytracer, it's centered at the origin, 8 world units wide and 6 world units high, which fits nicely for an 800x600 screen resolution. You can do all sorts of nice things with this plane: If you move it away from the camera, the beam of rays becomes narrower, and so the objects will appear bigger on the screen (use fig. 2 to visualize this). If you rotate the plane (and the camera origin with it), you get a different view on the virtual world. It's rather nice that things like perspective and field of view are just a logical byproduct.
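To make the mapping concrete, here is a minimal sketch of how a pixel coordinate could be converted to a point on that screen plane. The function name and the spelled-out constants are my own illustrations, not the sample's actual code; the sample raytracer steps m_SX and m_SY incrementally across the plane, but the arithmetic amounts to the same thing.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical mapping from a pixel (x, y) to screen-plane world coordinates,
// assuming an 800x600 screen and an 8x6 world-unit plane centered on the
// origin: x runs from -4 to 4, y from 3 down to -3 (screen y grows downward).
void PixelToPlane( int a_X, int a_Y, float& a_SX, float& a_SY )
{
    const float WX1 = -4.0f, WY1 = 3.0f;    // top-left corner of the plane
    const float DX = 8.0f / 800;            // world units per pixel, horizontally
    const float DY = -6.0f / 600;           // world units per pixel, vertically
    a_SX = WX1 + a_X * DX;
    a_SY = WY1 + a_Y * DY;
}
```

With this mapping, pixel (0, 0) lands on the top-left corner (-4, 3) and pixel (400, 300) on the center of the plane, exactly where you would expect the middle of the screen to be.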
Building a scene
Next, we need a scene to raytrace. A scene consists of primitives: Geometric objects like spheres and planes. You could also decide to use triangles, and build all other primitives using those.
Take a look at the class definitions in scene.h. The primitives 'Sphere' and 'PlanePrim' are derived from 'Primitive'. Each primitive has a 'Material', and implements methods such as Intersect and GetNormal.
The scene itself is stored in a class named 'Scene'. Have a look at the InitScene method:
```cpp
void Scene::InitScene()
{
    m_Primitive = new Primitive*[100];
    // ground plane
    m_Primitive[0] = new PlanePrim( vector3( 0, 1, 0 ), 4.4f );
    m_Primitive[0]->SetName( "plane" );
    m_Primitive[0]->GetMaterial()->SetReflection( 0 );
    m_Primitive[0]->GetMaterial()->SetDiffuse( 1.0f );
    m_Primitive[0]->GetMaterial()->SetColor( Color( 0.4f, 0.3f, 0.3f ) );
    // big sphere
    m_Primitive[1] = new Sphere( vector3( 1, -0.8f, 3 ), 2.5f );
    m_Primitive[1]->SetName( "big sphere" );
    m_Primitive[1]->GetMaterial()->SetReflection( 0.6f );
    m_Primitive[1]->GetMaterial()->SetColor( Color( 0.7f, 0.7f, 0.7f ) );
    // small sphere
    m_Primitive[2] = new Sphere( vector3( -5.5f, -0.5, 7 ), 2 );
    m_Primitive[2]->SetName( "small sphere" );
    m_Primitive[2]->GetMaterial()->SetReflection( 1.0f );
    m_Primitive[2]->GetMaterial()->SetDiffuse( 0.1f );
    m_Primitive[2]->GetMaterial()->SetColor( Color( 0.7f, 0.7f, 1.0f ) );
    // light source 1
    m_Primitive[3] = new Sphere( vector3( 0, 5, 5 ), 0.1f );
    m_Primitive[3]->Light( true );
    m_Primitive[3]->GetMaterial()->SetColor( Color( 0.6f, 0.6f, 0.6f ) );
    // light source 2
    m_Primitive[4] = new Sphere( vector3( 2, 5, 1 ), 0.1f );
    m_Primitive[4]->Light( true );
    m_Primitive[4]->GetMaterial()->SetColor( Color( 0.7f, 0.7f, 0.9f ) );
    // set number of primitives
    m_Primitives = 5;
}
```
This method adds a ground plane and two spheres to the scene, and of course a light source (two, in fact). A light source is simply a sphere that is flagged as 'light'.
Raytracing
Now all is set up to start tracing the rays. First, let's have a look at some pseudocode for the process:
```
For each pixel
{
    Construct ray from camera through pixel
    Find first primitive hit by ray
    Determine color at intersection point
    Draw color
}
```
To determine the closest intersection with a primitive for a ray, we have to test them all. This is done by the Raytrace method in raytracer.cpp.
Intersection code
After some initializations, the following code is executed:
```cpp
// find the nearest intersection
for ( int s = 0; s < m_Scene->GetNrPrimitives(); s++ )
{
    Primitive* pr = m_Scene->GetPrimitive( s );
    int res;
    if (res = pr->Intersect( a_Ray, a_Dist ))
    {
        prim = pr;
        result = res; // 0 = miss, 1 = hit, -1 = hit from inside primitive
    }
}
```
This loop processes all the primitives in the scene, and calls the Intersect method for each primitive. 'Intersect' takes a ray, and returns an integer that indicates a hit or a miss, plus the distance along the ray to the intersection (via a_Dist). The loop keeps track of the closest intersection found so far.
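The Intersect implementations themselves live in scene.cpp. To illustrate what such a method does, here is a self-contained sketch of a ray/sphere intersection that follows the same convention (0 = miss, 1 = hit, -1 = hit from inside, and a_Dist shrinks to the nearest hit). The function name and the plain float arrays are my own; the sample uses its vector3 class instead.

```cpp
#include <cmath>
#include <cassert>

// Ray/sphere intersection sketch. a_Dir is assumed to be normalized.
// Solves |origin + t*dir - centre|^2 = radius^2 for the nearest positive t.
int IntersectSphere( const float a_Origin[3], const float a_Dir[3],
                     const float a_Centre[3], float a_Radius, float& a_Dist )
{
    // vector from sphere centre to ray origin
    float v[3] = { a_Origin[0] - a_Centre[0],
                   a_Origin[1] - a_Centre[1],
                   a_Origin[2] - a_Centre[2] };
    float b = -(v[0] * a_Dir[0] + v[1] * a_Dir[1] + v[2] * a_Dir[2]);
    float det = b * b - (v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
              + a_Radius * a_Radius;
    if (det <= 0) return 0;                 // ray misses the sphere
    det = sqrtf( det );
    float i1 = b - det, i2 = b + det;       // the two candidate distances
    if (i2 <= 0) return 0;                  // sphere is entirely behind the ray
    if (i1 < 0)                             // origin is inside the sphere
    {
        if (i2 < a_Dist) { a_Dist = i2; return -1; }
        return 0;
    }
    if (i1 < a_Dist) { a_Dist = i1; return 1; }
    return 0;
}
```

Note that the function only updates a_Dist when it finds something closer than the current value, which is exactly what lets the loop above keep track of the nearest primitive.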
Colors
Once we know what primitive was hit by the ray, the color for the ray can be calculated. Simply using the material color of the primitive is too easy; this would result in boring colors without any gradient. Instead, the sample raytracer calculates a diffuse shading using the two lights. Since each light contributes to the color of the primitives, this happens inside a loop:
```cpp
// determine color at point of intersection
pi = a_Ray.GetOrigin() + a_Ray.GetDirection() * a_Dist;
// trace lights
for ( int l = 0; l < m_Scene->GetNrPrimitives(); l++ )
{
    Primitive* p = m_Scene->GetPrimitive( l );
    if (p->IsLight())
    {
        Primitive* light = p;
        // calculate diffuse shading
        vector3 L = ((Sphere*)light)->GetCentre() - pi;
        NORMALIZE( L );
        vector3 N = prim->GetNormal( pi );
        if (prim->GetMaterial()->GetDiffuse() > 0)
        {
            float dot = DOT( N, L );
            if (dot > 0)
            {
                float diff = dot * prim->GetMaterial()->GetDiffuse();
                // add diffuse component to ray color
                a_Acc += diff * prim->GetMaterial()->GetColor() * light->GetMaterial()->GetColor();
            }
        }
    }
}
```
This code calculates a vector 'L' from the intersection point 'pi' to the light source, and determines the illumination by the light source by taking the dot product between this vector and the primitive's normal at the intersection point. The result is that a point on the primitive that is facing the light source is brightly illuminated, while points that are lit at an angle are darker. The test for 'dot > 0' prevents faces that are turned away from the light source from being lit.
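For reference, here is that Lambertian term in isolation, as a tiny standalone sketch (plain float arrays and my own function name instead of the sample's vector3 and material classes):

```cpp
#include <cassert>

// Diffuse (Lambertian) contribution: dot(N, L) clamped to zero, scaled by
// the material's diffuse factor. N and L are assumed to be unit vectors.
float DiffuseTerm( const float N[3], const float L[3], float a_Diffuse )
{
    float dot = N[0] * L[0] + N[1] * L[1] + N[2] * L[2];
    return (dot > 0) ? dot * a_Diffuse : 0.0f;   // back-facing points stay dark
}
```

A surface facing the light head-on (dot = 1) receives the full diffuse factor; a surface lit at a grazing angle receives proportionally less; a surface turned away receives nothing at all.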
Last words
That's all for this article. In the next article I will explain how to add more interesting lighting and how to add shadows. Here is a shot from the sample raytracer, and a preview of things to come.
See you next time, Jacco Bikker, a.k.a. "The Phantom"
Further Reading

"State-of-the-Art in Interactive Ray Tracing", Wald & Slusallek, pdf
Source Code
Articles in the Series
- Raytracing: Theory & Implementation Part 1, Introduction
- Raytracing: Theory & Implementation Part 2, Phong, Mirrors and Shadows
- Raytracing: Theory & Implementation Part 3, Refractions and Beer's Law
- Raytracing: Theory & Implementation Part 4, Spatial Subdivisions
- Raytracing: Theory & Implementation Part 5, Soft Shadows
- Raytracing: Theory & Implementation Part 6, Textures, Cameras and Speed
- Raytracing: Theory & Implementation Part 7, Kd-Trees and More Speed