101 Sep 23, 2006 at 23:47

Hello, I’ve been trying to implement a photon map in Java, but I’ve run into some problems. Unlike Jensen’s implementation, I store references to each node explicitly. My implementation is based on his pseudo-code.

I usually get a StackOverflowError in the sort method when using 40,000 photons, but not always; the probability of crashing increases with the number of photons I’m using. I’m guessing this is because the sorting method allocates too many stack frames when it calls itself, but I can’t really debug an error like this and I don’t know the particulars of the stack.

The algorithm’s pretty simple: I find the splitting plane along which the bounding box’s extent is greatest, sort the photons along that axis, and then pick the median out of the middle.

At the bottom you’ll see two internal classes dealing with the collection of photons. I was originally going to change the gathering shape from a sphere to a cylinder, but Jensen’s code uses the search radius to find potential photons, and I don’t know how that would work with a cylinder. I use the distance to the plane defined by the normal, together with the distance to the point, to see whether a photon is contained within the cylinder. None of this is actually hooked up yet, mind you. Jensen also mentions using an ellipsoid to collect photons, but I have no idea how one would do that.

So I basically have two questions: is the balancing done correctly (and is there a faster sorting algorithm out there than quicksort), and how can I implement other photon-gathering algorithms?

Also, the whole check of the incident direction is messed up somehow. I haven’t tried to debug that though, so I don’t expect anyone to know what’s wrong with it.

Oh, and I’m lost as far as BRDFs go. Does a BRDF return a scalar, a color, or a direction?

The code should be self-explanatory:

public class PhotonMap
{
private Photon[] photons;
private int last;
private static float[] ct, st, cp, sp;
private PhotonGroup pg;
private Node root;
public PhotonMap(int max)
{
photons = new Photon[max];
last = -1;
if (ct == null)
{
ct = new float[256];
st = new float[256];
cp = new float[256];
sp = new float[256];
double t = (Math.PI/256.0);
for (int i=0; i<256; ++i)
{
double a = i*t;
ct[i] = (float) Math.cos(a);
st[i] = (float) Math.sin(a);
cp[i] = (float) Math.cos(2.0 * a);
sp[i] = (float) Math.sin(2.0 * a);
}
}
}
public void store(Photon p)
{
++last;
if (last < photons.length)
{
photons[last] = p;
}
else
{
Photon[] t = new Photon[photons.length * 2];
for (int i=0; i<photons.length; ++i)
{
t[i] = photons[i];
}
t[last] = p;
photons = t;
}
}
private void find(Node n)
{
if (n == null)
{
return;
}
Photon p = n.photon;
float d0 = pg.position.X[p.plane] - p.position[p.plane];
if (d0 < 0F)
{
find(n.left);
if (d0*d0 < pg.maximum_distance2)
{
find(n.right);
}
}
else
{
find(n.right);
if (d0*d0 < pg.maximum_distance2)
{
find(n.left);
}
}
//test the photon itself against the search radius
pg.add(p);
}
public void balance()
{
//resize the array
Photon[] t = new Photon[last+1];
for (int i=0; i<=last; ++i)
{
t[i] = photons[i];
}
photons = t;
//begin balancing
root = balance(0,last);
}
private Node balance(int low, int high)
{
int axis;
//note: low == high is a single photon and must still produce a node
if (low > high)
{
return null;
}
else
{
//find bbox
float[] min = new float[3];
float[] max = new float[3];
for (int i=0; i<3; ++i)
{
min[i] = max[i] = photons[low].position[i];
}
for (int i0=low+1; i0<=high; ++i0)
{
float[] t = photons[i0].position;
for (int i1=0; i1<3; ++i1)
{
if (t[i1] < min[i1])
{
min[i1] = t[i1];
}
else if (t[i1] > max[i1])
{
max[i1] = t[i1];
}
}
}
axis = 0;
float t0 = max[0] - min[0];
for (int i=1; i<3; ++i)
{
float t1 = max[i] - min[i];
if (t1 > t0)
{
t0 = t1;
axis = i;
}
}
}
sort(low, high, axis);
int median = (low+high+1)/2;
photons[median].plane = axis;
return new Node(photons[median], balance(low,median-1), balance(median+1,high));
}
private void sort(int low, int high, int axis)
{
if (low >= high)
{
return;
}
else if (low == high - 1)
{
if (photons[low].position[axis] > photons[high].position[axis])
{
swap(low,high);
}
return;
}
int lo = low;
int hi = high;
Photon pivot = photons[(lo+hi)/2];
photons[(lo+hi)/2] = photons[hi];
photons[hi] = pivot;
while (lo < hi)
{
while (photons[lo].position[axis] <= pivot.position[axis] && lo < hi)
{
++lo;
}
while (pivot.position[axis] <= photons[hi].position[axis] && lo < hi)
{
--hi;
}
if (lo < hi)
{
swap(lo,hi);
}
}
photons[high] = photons[hi];
photons[hi] = pivot;
sort(low, lo-1, axis);
sort(hi+1, high, axis);
}
private void swap(int a, int b)
{
Photon t = photons[a];
photons[a] = photons[b];
photons[b] = t;
}
private static Vector direction(Photon p)
{
//random website implementation
//return new Vector(cp[p.phi]*ct[p.theta], cp[p.phi]*st[p.theta], sp[p.phi]);
//jensen implementation
return new Vector(st[p.theta]*cp[p.phi], st[p.theta]*sp[p.phi], ct[p.theta]);
}
public AColor getColor(Point p, Normal n, float d2)
{
pg = new PhotonGroup(p, 100, d2);
find(root);
Photon[] phot = pg.photons;
AColor r = new AColor();
for (int i=0; i <= pg.last; ++i)
{
//if (n.dot(direction(phot[i])) < 0F)
//{
//}
}
r.scale(AMath.OOPI / pg.longest_distance2);
pg = null;
return r;
}
public void clean(int max)
{
photons = new Photon[max];
last = -1;
}
private class Node
{
public Photon photon;
public Node left, right;
public Node(Photon p, Node l, Node r)
{
photon = p;
left = l;
right = r;
}
}
public class Photon
{
public float[] position;
public short theta, phi;
public float[] color;
public int plane;
public Photon(Point p, Vector d, AColor a)
{
position = p.X;
color = a.X;
plane = 0;
//encode the direction as two byte-sized table indices (Jensen's scheme)
int t = (int)(256.0 * Math.acos(d.X[2]) / AMath.PI);
theta = (short)(t > 255 ? 255 : t);
int f = (int)(256.0 * Math.atan2(d.X[1], d.X[0]) / AMath.PI2);
if (f > 255) { f = 255; }
if (f < 0) { f += 256; }
phi = (short)f;
}
}
private class PhotonGroup
{
public int last;
public Photon[] photons;
public float maximum_distance2;
public float longest_distance2;
public Point position;
public PhotonGroup(Point p, int max, float d2)
{
maximum_distance2 = d2;
position = p;
photons = new Photon[max];
last = -1;
}
public void add(Photon p)
{
float d2 = position.distance2(new Point(p.position));
if (d2 <= maximum_distance2)
{
if (d2 > longest_distance2)
{
longest_distance2 = d2;
}
++last;
if (last < photons.length)
{
photons[last] = p;
}
else
{
Photon[] t = new Photon[last*2];
for (int i=0; i<last; ++i)
{
t[i] = photons[i];
}
t[last] = p;
photons = t;
}
}
}
}
private abstract class Conditional
{
protected Point origin;
protected Normal normal;
protected Conditional(Point o, Normal n)
{
origin = o;
normal = n;
}
public abstract boolean fits(Photon p);
public abstract float area();
}
private class Sphere extends Conditional
{
private float radius2;
public Sphere(Point o, Normal n, float dis)
{
super(o,n);
radius2 = dis;
}
public boolean fits(Photon p)
{
float d2 = origin.distance2(new Point(p.position));
return d2 <= radius2;
}
public float area()
{
//projected area of the gather sphere (a disc of the same radius)
return AMath.PI * radius2;
}
}
private class Cylinder extends Conditional
{
private float a, b, c, d;
private float height, maximum_radius2, longest_height;
protected Cylinder(Point o, Normal n, float h, float r2)
{
super(o, n);
height = h;
maximum_radius2 = r2;
a = n.X[0];
b = n.X[1];
c = n.X[2];
d = -(o.X[0]*n.X[0] + o.X[1]*n.X[1] + o.X[2]*n.X[2]);
}
public float area()
{
//projected area of the gather cylinder (the disc of its cross-section)
return AMath.PI * maximum_radius2;
}
public boolean fits(Photon p)
{
//distance from the photon to the plane through the origin with the given normal
float ht = Math.abs(a*p.position[0] + b*p.position[1] + c*p.position[2] + d);
float d2 = origin.distance2(new Point(p.position));
if ((ht < height) && (d2 < maximum_radius2))
{
if (ht > longest_height)
{
longest_height = ht;
}
return true;
}
return false;
}
}
}


Any help would be greatly appreciated.

33 Replies

165 Sep 24, 2006 at 00:16

I see that your sort function is using a recursive implementation of Quicksort to order the photons. With 40,000 photons you’re going to have recursion 16 levels deep in the best case, and probably more than that in real life; too much recursion will cause the stack overflow. There may be a way to increase the stack size in Java, but I’m not familiar with how to do that. The best solution would probably be to rewrite the function in an iterative way (google for “iterative quicksort”). You may also want to do this with your other recursive functions, like find.
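In Java, the explicit-stack version might look something like this (a rough sketch of the idea, with names of my own choosing; the partition scheme is the standard Hoare style, not necessarily what you have now):

```java
import java.util.ArrayDeque;

class IterativeQuicksort
{
    // Quicksort driven by an explicit stack of {low, high} ranges,
    // so deep partitions consume heap instead of call-stack frames.
    public static void sort(double[] d)
    {
        ArrayDeque<int[]> stack = new ArrayDeque<int[]>();
        stack.push(new int[]{0, d.length - 1});
        while (!stack.isEmpty())
        {
            int[] range = stack.pop();
            int low = range[0], high = range[1];
            if (low >= high) continue;
            double pivot = d[(low + high) / 2];
            int lo = low, hi = high;
            while (lo <= hi)
            {
                while (d[lo] < pivot) ++lo;
                while (d[hi] > pivot) --hi;
                if (lo <= hi)
                {
                    double t = d[lo]; d[lo] = d[hi]; d[hi] = t;
                    ++lo; --hi;
                }
            }
            // instead of recursing, push the two subranges for later
            stack.push(new int[]{low, hi});
            stack.push(new int[]{lo, high});
        }
    }
}
```

The stack entries live on the heap, so the depth limit becomes available memory rather than the JVM thread stack.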

Also, just for reference, there are actually faster algorithms for computing medians than sorting the list and picking the middle element. See here for a simple one that runs in ‘expected’ O(n) time (it’s randomized), and here for discussion of some more algorithms, including ones that execute in worst-case O(n) time. (Most of these are also written recursively, but you can convert them to iterative implementations if you still have stack problems.)
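To sketch the randomized one (“quickselect”) on a plain double array — class and method names are mine, and for photons you’d compare `position[axis]` instead of the values themselves:

```java
import java.util.Random;

class Quickselect
{
    private static final Random rng = new Random();

    // Returns the k-th smallest element (k is zero-based) in expected O(n) time.
    // As a side effect it partially reorders the array, which is exactly what
    // kd-tree balancing needs: afterwards everything left of index k is <= d[k]
    // and everything right of it is >= d[k].
    public static double select(double[] d, int k)
    {
        int low = 0, high = d.length - 1;
        while (low < high)
        {
            // a random pivot keeps the expected running time linear
            int p = low + rng.nextInt(high - low + 1);
            double pivot = d[p];
            swap(d, p, high);
            int store = low;
            for (int i = low; i < high; ++i)
            {
                if (d[i] < pivot)
                {
                    swap(d, i, store++);
                }
            }
            swap(d, store, high); // pivot lands in its final sorted position
            if (k == store) return d[k];
            if (k < store) high = store - 1;
            else low = store + 1;
        }
        return d[k];
    }

    private static void swap(double[] d, int a, int b)
    {
        double t = d[a]; d[a] = d[b]; d[b] = t;
    }
}
```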

Regarding the use of alternative shapes like ellipsoids or cylinders, one easy way to do it would be to draw points out of the kd-tree in a sphere large enough to encompass the shape you actually want, and then perform a second pass on just those results to throw out the extras. However, you should be able to extract points in any shape that you can easily intersect with a plane, as that will let you find out which branches of the kd-tree to descend into. For cylinders, I assume you’re thinking of capped (not infinite) ones; you can intersect these with a kd-tree plane by using CSG techniques (let the cylinder be the CSG intersection of an infinite cylinder and two capping planes). Ellipsoids are just scaled spheres, so you can define a coordinate system in which the ellipsoid is a sphere and transform the kd-tree plane into that coordinate system to do the intersection.
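The two-pass idea is simple enough to sketch. Assuming positions and the normal are plain `float[3]` (my own minimal types, not your classes), the second pass over the sphere results for a capped cylinder could look like:

```java
import java.util.ArrayList;
import java.util.List;

class TwoPassGather
{
    // Second pass: take the photon positions found inside a bounding sphere
    // and keep only those inside a capped cylinder whose axis passes through
    // 'origin' along the unit vector 'normal'.
    public static List<float[]> filterCylinder(List<float[]> candidates,
                                               float[] origin, float[] normal,
                                               float height, float radius2)
    {
        List<float[]> kept = new ArrayList<float[]>();
        for (float[] p : candidates)
        {
            float[] v = {p[0]-origin[0], p[1]-origin[1], p[2]-origin[2]};
            // axial component = signed distance along the cylinder axis
            float h = v[0]*normal[0] + v[1]*normal[1] + v[2]*normal[2];
            // radial distance squared = |v|^2 minus the axial part squared
            float r2 = v[0]*v[0] + v[1]*v[1] + v[2]*v[2] - h*h;
            if (Math.abs(h) <= height && r2 <= radius2)
            {
                kept.add(p);
            }
        }
        return kept;
    }
}
```

The enclosing sphere for the first pass just needs radius² ≥ radius2 + height².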

BRDFs are tricky. What they return is a reflectance (ok…technically, a reflectance density), which is a wavelength-dependent scalar (i.e. a function from wavelength to reflectance). However, most rendering systems just bin wavelength into ‘red’, ‘green’, and ‘blue’. So, the BRDF would return a 3-vector giving the average reflectances over the red, green, and blue wavelength ranges.

101 Sep 24, 2006 at 01:48

Hmm, as far as sorting goes: even if I find the median, I still have to find all the photons below that median, so I just thought it’d be fastest to sort them in order and pick out the middle, then balance the photons below and above the median.

Also, what would a BRDF take as parameters?

165 Sep 24, 2006 at 05:14

@flux00

Hmm, as far as sorting, even if I find the median, I still have to find all photons below that median

True, but you can do that partition in a single additional O(n) pass through the data, and in fact some of the median-finding algorithms perform this partition as part of their operations. Of course, your mileage may vary, and asymptotic performance isn’t always a great indicator of actual performance - but in this case, I’m guessing that the linear-time median algorithms would give you a speedup.
@flux00

Also, what would a BRDF take as parameters?

A BRDF takes two parameters, which are directions (unit vectors). The result returned is a measure of how much of the light coming from one direction is reflected into the other direction. (BRDFs are symmetric, so it doesn’t matter which direction is taken as incoming or outgoing.) So, for photon mapping, you would evaluate the BRDF using the direction that the photon came from (which is stored in the photon data structure) and the direction for which you want to find the outgoing light - e.g. the direction toward the camera. Oh, and some people also have a BRDF take a third parameter, which is the point on the surface. I usually just think of it as each point having its own BRDF (for instance, you can store BRDF components in a texture image).
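In code, the shape of the interface could be as small as this (a sketch with names of my own; the RGB binning is the convention from the previous reply):

```java
// Both directions are unit vectors pointing away from the surface point;
// the result is the RGB-binned reflectance density for that pair.
interface BRDF
{
    float[] evaluate(float[] incoming, float[] outgoing);
}

class Lambertian implements BRDF
{
    private final float[] albedo;

    Lambertian(float r, float g, float b)
    {
        albedo = new float[]{r, g, b};
    }

    // A perfectly diffuse BRDF is constant: albedo / pi, whatever the directions.
    public float[] evaluate(float[] incoming, float[] outgoing)
    {
        float oopi = (float)(1.0 / Math.PI);
        return new float[]{albedo[0]*oopi, albedo[1]*oopi, albedo[2]*oopi};
    }
}
```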

101 Sep 24, 2006 at 21:55

A) Would it be completely wrong to sum over the irradiance and then scale by the BRDF of the surface? Wouldn’t scaling each photon by the BRDF and then summing them over result in blurry textures?

B) For area lights, would I have to evaluate the BRDF at each sample?

C) As far as gathering goes: if I defined a coordinate system in which the surface normal pointed in the positive Y direction, how would I define the X and Z directions and the transformation matrix for this? (I’m using a left-handed coordinate system and post-multiplying vectors/points by the transformation matrix.) Also, wouldn’t transforming each photon into this coordinate space be extremely slow?

D) BRDFs are, what, at least 5-dimensional? How would one interpolate across a table for an arbitrary BRDF?

I really, really appreciate your help.

165 Sep 24, 2006 at 22:25

@flux00

A) Would it be completely wrong to sum over the irradiance and then scale by the BRDF of the surface? Wouldn’t scaling each photon by the BRDF and then summing them over result in blurry textures?

Yeah, it would be wrong to do that. :) The BRDF is direction-dependent - if you sum the irradiance and then evaluate the BRDF once, you’re essentially throwing away all the directional information about where light came from. The BRDF really does need to be evaluated for each sample (photon) based on its individual direction. That shouldn’t result in blurry textures - remember, the BRDF varies from point to point depending on the texture, so you use the right BRDF for that point on the surface. The fact that you’re using photons from a small but nonzero area will result in some blurriness in the lighting (not the texture) - but that’s inherent to the photon mapping algorithm.

B) For area lights, would I have to evaluate the BRDF at each sample?

Yes; as noted above, BRDFs are very direction sensitive, and the different samples on an area light will lie in different directions.

C) As far as gathering, If I defined a coordinate system in which the surface normal pointed in the positive Y direction, how would I define the X and Z directions and the transformation matrix for this? (I’m using a left-handed coordinate system and post-multiplying the vectors/points by the transformation matrix) Also, wouldn’t transforming each photon into this coordinate space be extremely slow?

What you’re asking is how to define a tangent basis for a surface, which is an interesting problem with no good general solution (you can’t comb the hair on a sphere, etc.). Using the texture UV axes (as they are mapped onto the surface) is a good start. However, there’s no reason to transform all the photons into this space; you only need to transform the ones that you intend to use for the radiance estimate. The process of actually finding those photons can be done in world space, while tangent space is convenient for defining the BRDF.
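When you don’t have UVs, a common trick is to build the basis from the normal alone. A minimal sketch (my own names, plain `float[3]` vectors):

```java
class TangentBasis
{
    // Builds two unit tangent vectors orthogonal to a unit normal n.
    // This is the "pick the fixed axis least aligned with n" trick; any
    // such rule works locally even though no smooth global choice exists.
    public static float[][] fromNormal(float[] n)
    {
        // choose a helper axis that is guaranteed not parallel to n
        float[] a = Math.abs(n[0]) < 0.9f ? new float[]{1f, 0f, 0f}
                                          : new float[]{0f, 1f, 0f};
        float[] t = cross(a, n);
        normalize(t);
        float[] b = cross(n, t); // already unit length: n and t are orthonormal
        return new float[][]{t, b};
    }

    static float[] cross(float[] u, float[] v)
    {
        return new float[]{u[1]*v[2] - u[2]*v[1],
                           u[2]*v[0] - u[0]*v[2],
                           u[0]*v[1] - u[1]*v[0]};
    }

    static void normalize(float[] v)
    {
        float len = (float)Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        v[0] /= len; v[1] /= len; v[2] /= len;
    }
}
```

Note the basis will rotate abruptly where the normal crosses the 0.9 threshold, which is harmless for a radiance estimate but matters if you need a continuous parameterization.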

D) Since BRDF’s are, what, 5 dimensional at least? How would one interpolate across a table for an arbitrary BRDF?

Instead of using lookup tables, most people use analytic BRDFs, in which the BRDF for any pair of directions is given by some mathematical equation with a few user-defined constants that set things like roughness or specular power. You just need to store those constants in a texture (if you want them to vary across the surface), then evaluate the equation whenever you want to compute the BRDF. Also, remember that there are effectively 3 separate BRDFs for red, green, and blue. Some examples of these analytic reflection models are the Lambertian, Phong, Torrance-Sparrow, Oren-Nayar, Lafortune, and Schlick BRDFs (google them, or look them up in a book).
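As an illustration of the “equation plus a couple of constants” idea, here’s a normalized Phong-style specular lobe (my own sketch, not from any particular book; both directions point away from the surface, and ks and the exponent are the per-surface constants you’d fetch from a texture):

```java
class PhongBRDF
{
    // Normalized modified-Phong lobe: ks * (n+2)/(2*pi) * cos^n(alpha),
    // where alpha is the angle between the outgoing direction and the
    // mirror reflection of the incoming direction.
    public static float evaluate(float[] in, float[] out, float[] normal,
                                 float ks, float exponent)
    {
        float[] r = reflect(in, normal);
        float cosAlpha = Math.max(0f, dot(r, out));
        float norm = (float)((exponent + 2.0) / (2.0 * Math.PI));
        return (float)(ks * norm * Math.pow(cosAlpha, exponent));
    }

    // mirror reflection of d about the unit normal n: 2(d.n)n - d
    static float[] reflect(float[] d, float[] n)
    {
        float k = 2f * dot(d, n);
        return new float[]{k*n[0] - d[0], k*n[1] - d[1], k*n[2] - d[2]};
    }

    static float dot(float[] a, float[] b)
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
}
```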

101 Sep 25, 2006 at 00:57

Since the BRDF has to be evaluated at each photon, and can only be computed when the radiance estimate is done, wouldn’t it make sense to have each photon contain a reference to the primitive that it hit? I mean, the radiance estimate function -could- take a primitive as an argument, but it’s not guaranteed that all the photons gathered would come from the same primitive.

Haha, wow, I’m just thinking of how slowly this is going to run. Perhaps it’s time to implement a BSP tree…

165 Sep 25, 2006 at 03:50

That would indeed make sense and would probably help in spots like corners where the range search would gather some incorrect photons. However, it would also increase the memory requirement, which could be problematic when using large numbers of photons.

And yeah, it will be slow as hell if you don’t have some space-partitioning system. :) I thought you had a kd-tree though, wasn’t that what the median finding from above was about?

101 Sep 25, 2006 at 07:56

Yeah, but that’s for the ray-primitive intersection routine. From what I gather, building and traversing a kd-tree of primitives is considerably more complicated. All I have now is a linear search for the intersection.

Hahaha, when I first tried to implement this I used a linear search. That was fun.

101 Sep 25, 2006 at 19:16

Quickly building a good kd-tree - in n·log(n) time or below, whatever your metric - is a bit involved. Traversing isn’t so much.
Relevant discussions:

http://ompf.org/forum/viewtopic.php?t=252

Much of the complexity regarding how to get a kd-tree built fast has to do with the efficient handling of massive amounts of clipped primitives, a highly probable case if you’re dealing with triangles.

There are many possible approaches, like deciding that n·log²(n) is good enough and re-sorting at every step, approximating the cost function, or going for a BVH/kd-tree hybrid like the BIH.

101 Sep 26, 2006 at 01:49

Thank you sir.

I tried to convert my recursive quicksort to an iterative one, but I ended up shifting the array of ranges around so often that I think it seriously compromised my speed. If this is completely idiotic, please say so:

public static void qsort(double[] d, int low0, int high0)
{
int[][] range = new int[10000][];
range[0] = new int[]{low0,high0};
int last = 0;
while (last > -1)
{
int low = range[0][0];
int high = range[0][1];
if (low >= high)
{
for (int i=1; i<=last+1; ++i)
{
range[i-1] = range[i];
}
--last;
continue;
}
if (low == high-1)
{
for (int i=1; i<=last+1; ++i)
{
range[i-1] = range[i];
}
--last;
swap(d,low,high);
continue;
}
int lo = low;
int hi = high;
double pivot = d[(lo+hi)/2];
d[(lo+hi)/2] = d[hi];
d[hi] = pivot;
while (lo < hi)
{
while (d[lo] <= pivot && lo < hi)
{
++lo;
}
while (pivot <= d[hi] && lo < hi)
{
--hi;
}
if (lo < hi)
{
swap(d,lo,hi);
}
}
d[high] = d[hi];
d[hi] = pivot;
for (int i0=last+1; i0 > 0; --i0)
{
range[i0] = range[i0-1];
}
range[0] = new int[]{low,lo-1};
range[1] = new int[]{hi+1,high};
++last;
}
}
private static void swap(double[] d, int a, int b)
{
double t = d[a];
d[a] = d[b];
d[b] = t;
}


I’ve tested it to make sure it works, but I haven’t done any comparison against the recursive one. I don’t care all that much if it’s a little slower; I just don’t want it to crash :happy:

101 Sep 26, 2006 at 02:00

Just forget about quicksort entirely and use a radix sort; it’s linear, with up-front bounded allocations.

If that’s for finding the median, then go for what reedbeta linked to directly.

101 Sep 27, 2006 at 02:53

Radix sort seems intimidating. The median search seems simple enough, and as Reed said I could just do a second pass, but it’s recursive, and I fear a stack overflow. Sometimes at night I can hear the stack crashing. Do you think I could make the median search iterative? Would that beat an iterative quicksort?

I have a new question if you don’t mind entertaining it; if my ray tracer is built to handle an arbitrary BRDF on a surface, and a photon mapper is only supposed to add photons to a diffuse surface, how do I know whether or not a surface is diffuse? Would I have to add a flag into each primitive indicating that?

101 Sep 27, 2006 at 03:39

@flux00

I’m pretty sure that if you were asked to sort cards in the Real World you’d naturally do a distribution/radix sort. It’s as simple as that. Now it’s a tad trickier when handling floating points, because negatives need to be bit-flipped; that can - and should - be done on the fly. Expressing that efficiently in a high-level language is left as an exercise for the reader ;)
See http://ompf.org/forum/viewtopic.php?t=266 for a more robust approach.
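The bit-flip mentioned above can be sketched like this (my own helper; it maps float bits to keys whose unsigned order matches the floats’ numeric order, which is what a byte-wise radix sort needs):

```java
class FloatKey
{
    // Maps a float's IEEE 754 bits to a radix-sortable key:
    // positive floats get their sign bit flipped; negative floats get
    // all bits flipped (which also reverses their order, as required).
    // Returned widened to long so a plain '<' compares it as unsigned.
    public static long key(float f)
    {
        int bits = Float.floatToIntBits(f);
        int k = (bits >= 0) ? (bits ^ 0x80000000) : ~bits;
        return k & 0xFFFFFFFFL;
    }
}
```

A radix sort then buckets these keys byte by byte, four passes for 32 bits.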
@flux00

The median search seems simple enough, and as reed said I could just do a second pass, but it’s recursive, and I fear a stack overflow. Sometimes at night I can hear the stack crashing. Do you think I could make the median search iterative? Would that beat out an iterative quicksort?

You can turn any recursive algorithm into an iterative form with an explicit stack, but then you’re only trading an implicit stack for an explicit one; that doesn’t change the memory footprint or the nature of the algorithm (read: performance).
@flux00

I have a new question if you don’t mind entertaining it; if my ray tracer is built to handle an arbitrary BRDF on a surface, and a photon mapper is only supposed to add photons to a diffuse surface, how do I know whether or not a surface is diffuse? Would I have to add a flag to each primitive indicating that?

There’s no One True answer to your question, but generally primitives reference some kind of shader, which should be able to answer such a query.

101 Sep 27, 2006 at 03:53

When I make the sorting iterative, though, doesn’t that improve performance, or at least require less memory, because I have control over the memory? For instance, in a recursive quicksort the parameters are kept in memory because they don’t go out of scope immediately, but doing it iteratively I can discard the data as soon as I’m done using it. Then again, with my iterative quicksort I spend most of my time adjusting the array.

I really haven’t worked on sorting that much, so I’m just wondering. My AB class is going so slow.

Also, after improving my iterative quicksort considerably (no more shifting the stupid array), it now only takes 5 seconds to sort 5,000 photons. Woo hoo. Needless to say, I’m going to try the randomized selection. During the second pass, should I just create a scratch array, add the median to the middle, and then begin adding photons to either side?

165 Sep 27, 2006 at 04:45

@flux00

When I make the sorting iterative though, doesn’t that improve performance or at least requires less memory because I have control over the memory?

An iterative implementation might use a bit less memory if you don’t need to pay the cost of a whole stack footprint. However, the main benefit here is that you allocate your memory from the heap rather than the stack, which saves you from the stack overflow caused by too deep recursion.

During the second pass, should I just create a scratch array, add the median to the middle, and just begin adding photons to either side?

You can do it that way, but it’s also possible to partition the array in place; just use the same code as you did for the partition in quicksort, but replace the pivot with the median.

101 Sep 27, 2006 at 07:25

@flux00

Hmm, as far as sorting, even if I find the median, I still have to find all photons below that median, so I just thought it’d be fastest to sort them in order and pick out the middle, then balance the photons below and above the median.

You’re not using C++, but what you’re looking for is something like nth_element from the STL.

101 Sep 28, 2006 at 00:02

Reedbeta: wouldn’t using a quicksort to partition the array after I found the median sort of… defeat the purpose of looking for the median in the first place? Or does finding the median of the entire set speed it up considerably?

165 Sep 28, 2006 at 01:06

@flux00

Reedbeta: wouldn’t using a quicksort to partition the array after I found the median sort of… destroy the purpose of looking for the median in the first place? Or does finding the median of the entire set speed it up considerably?

You’re not doing a full quicksort, you just need to do a partition. I’m saying use the same partitioning algorithm as you would use for quicksort, since it does the partition in place.

101 Sep 28, 2006 at 02:20

Sorry, I got it. Thank you so much. This should be much faster than my previous implementation, not to mention not crashing.

Edit: I put a median search together with the quicksort partition, just like you said :O

public int sort(int low, int high, int axis)
{
int hi = high;
int lo = low;
int median = (lo+hi)/2;
while (true)
{
if (hi <= lo)
{
break;
}
if (lo+1 == hi)
{
if (photons[lo].position[axis] > photons[hi].position[axis])
{
swap(lo,hi);
}
break;
}
int middle = (lo+hi)/2;
if (photons[middle].position[axis] > photons[hi].position[axis])
{
swap(middle,hi);
}
if (photons[lo].position[axis] > photons[hi].position[axis])
{
swap(lo,hi);
}
if (photons[middle].position[axis] > photons[lo].position[axis])
{
swap(middle,lo);
}
swap(middle,lo+1);
int l = lo+1;
int h = hi;
while (true)
{
do
{
++l;
}
while (photons[lo].position[axis] > photons[l].position[axis]);
do
{
--h;
}
while (photons[h].position[axis] > photons[lo].position[axis]);
if (h < l)
{
break;
}
swap(l,h);
}
swap(lo,h);
if (h <= median)
{
lo = l;
}
if (h >= median)
{
hi = h-1;
}
}
lo = low;
hi = high;
if (lo >= hi)
{
return median;
}
else if (lo == hi - 1)
{
if (photons[lo].position[axis] > photons[hi].position[axis])
{
swap(lo,hi);
}
return median;
}
Photon pivot = photons[median];
photons[median] = photons[hi];
photons[hi] = pivot;
while (lo < hi)
{
while (photons[lo].position[axis] <= pivot.position[axis] && lo < hi)
{
lo++;
}
while (pivot.position[axis] <= photons[hi].position[axis] && lo < hi)
{
hi--;
}
if (lo < hi)
{
swap(lo,hi);
}
}
photons[high] = photons[hi];
photons[hi] = pivot;
return median;
}


This is so fast now, it only took a second to balance 500,000 photons.

101 Sep 29, 2006 at 22:37

One more question: it seems to me that there are two kinds of shaders - those that need to be summed over the lights, i.e. BRDFs, and recursive ones, like reflections. Obviously a perfectly reflective sphere doesn’t require visibility and contribution from lights, but a perfectly diffuse ball does. Basically what I’m saying is that the computation is excessive in such situations. Would it make sense to have two kinds of shaders per material?

165 Sep 29, 2006 at 23:40

Yes, it would make sense, and it’s a common design choice. Mathematically speaking, all BRDFs are of a kind - but as you’ve pointed out, different sampling strategies are appropriate for different BRDFs.

Some BRDFs are characterized by what are called ‘delta distributions’, like your perfect specular reflection example; such a BRDF would be nonzero only when the input and output directions happened to be perfectly aligned by the reflection law, and zero in all other cases. If a BRDF like this were sampled by light sources, it would be essentially impossible for a light source direction to happen to be perfectly aligned with the viewing direction, so you’d get zero light reflected. A similar case is BRDFs that are highly glossy (not quite perfect specular reflection, but close to it). In these cases, the only sane sampling strategy is to sample by the BRDF and look for light incoming from the directions in which we know the BRDF will be nonzero.

On the other hand, some BRDFs, like the perfectly diffuse one, essentially don’t care which direction the light is coming from. If we sampled by the BRDF in that case, we would end up shooting rays in totally random directions - when in fact it’s likely that the majority of the reflected light is coming from the light sources in the scene. So in this case, it makes more sense to sample by the light sources, although one should also fire off a ray or two in completely random directions to gather indirect illumination (or use the photon map for this task).

Of course, there’s going to be some gray area in between. There’s a continuum of BRDFs between perfectly specular and perfectly diffuse, so you’ll need to decide where to switch over sampling strategies (or try to use multiple importance sampling, to sample by both the BRDF and the light sources simultaneously). Also, BRDFs can be composed by weighted sums of other BRDFs (for example, a shiny plastic surface that has a mixture of a perfectly diffuse BRDF and a highly glossy BRDF). In this case you can break the BRDF into components and apply a different sampling strategy to each one.

101 Sep 30, 2006 at 05:39

Hmm, I understand, mostly. Should photons traced through the scene be stored at every non-recursive intersection, or just diffuse ones?

Of all the explanations of BRDFs, this is definitely the clearest.

165 Sep 30, 2006 at 06:07

Photons probably ought to be stored at every surface where you want to use the light-sources sampling strategy - diffuse surfaces, and other surfaces that aren’t extremely picky about directions (dull reflection/transmission, etc). That’s because you won’t use the photon map to compute the reflected light through a specular or highly glossy BRDF - so there’s no reason to store photons on purely specular surfaces.

101 Oct 01, 2006 at 13:32

For surfaces with multiple weighted BRDFs, when gathering irradiance, should I factor in each BRDF for each photon? I suppose I could maintain a reference to the random BRDF chosen when I added the photon to the map. Hmm, calculating each BRDF for the surface and then weighting them accordingly will be really slow, so I’m guessing I should just choose a random BRDF per photon, which should amount to the same result if the number of photons in the estimate is high enough.

165 Oct 01, 2006 at 15:31

Yes, you can do that. Just be sure to use Russian roulette to scale the results. It will make things go a bit faster; of course the tradeoff is that the variance in the image will be higher and you might have to use more photons in the estimate. But if your system only uses the photon map for indirect illumination, then the extra variance might not even be noticeable.

101 Oct 04, 2006 at 21:05

If I weighted the random choice by the RGB weight, I wouldn’t have to scale the result, right?

http://img424.imageshack.us/img424/806/render2uh1.png

Sorry, but I have more questions. I really did read Jensen’s book, but I got lost on some of the specifics, particularly color bleeding and the caustics map.

Jensen spoke as if the irradiance were gathered based on the number of photons wanted for the estimate, not on the search radius - contrary to his implementation. I think that would result in better estimates of concentrated photons, like in the caustics map. I know that in order to build a caustics map one should concentrate photons toward objects that generate caustics, but how do you render it without blurring? Would using a filter have that much of a dramatic effect on the irradiance estimate?

As far as re-emitting photons after they strike a diffuse surface goes, should standard ray-tracing checks be done so that neither the recursion goes too deep nor the weight gets too small?

165 Oct 04, 2006 at 22:35

@flux00

If I weighted the random choice by the RGB weight, I wouldn’t have to scale the result right?

Sorry, I assumed that weighting the random choice by the weights of the BRDFs went without saying. :) You should definitely do that, and no, it doesn’t mean you can get out of scaling the result. The result has to be divided by the weight to compensate for the fact that you’re only computing one of the BRDFs. (Just to clarify: this division would cancel out the multiplication by the weight that you would ordinarily do if you were going to compute with all the BRDFs. So you end up not multiplying the result by anything. But from a theory standpoint that division is important.)

Jensen spoke as if the irradiance was gathered based on the number of photons wanted for the estimate, not on the search radius, contrary to his implementation. I think that would result in better estimates of concentrated photons, like in the caustics map. I know in order to build a caustics map, one should concentrate photons toward objects that generate caustics, but how do you render it without blurring? Would using a filter have that much of a dramatic effect on the irradiance estimate?

Yes, the book talks about extracting a particular number of photons in the radiance estimate. However, a kd-tree doesn’t let you directly extract the n nearest photons to a particular point. In order to get this, you have to make an estimate of how large a search radius you want and look for photons within a sphere (or other shape). Then you modify the search radius so as to get approximately the correct number of photons (density estimation).

Rendering a photon map involves “blurring” the photons; that’s just how it works. After all, a photon map stores individual, discrete photons, but that’s not what you want to see when you render something - you want to see the impression of continuous light. The idea with a caustic map is that by concentrating a lot of photons toward the specular objects you’ll generate a map that has a lot of concentrated photons in the areas where the caustics are, so you’ll get a lot of photons to work with in even a small radius. As for filters, they do have a visible effect on the image, and choosing the right one can make the image look a lot better. Jensen’s book talks about this on pages 80-83.

As far as re-emitting photons after they strike a diffuse surface, should standard ray-tracing checks be done so neither the recursion goes to deep, nor the weight gets too small?

Generally speaking you shouldn’t modify the weights of the photons, but just randomly choose whether to re-emit the photon or not based on the reflectance of the surface. It’s better for variance to have the photons all have the same energy as much as possible. Of course, this means every photon will eventually stop by chance (unless you have all 100% albedo surfaces :)).
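The survival decision is a one-liner. A sketch, assuming RGB reflectance and using the average as the survival probability (a common choice, though you could also use the luminance):

```java
import java.util.Random;

class PhotonRoulette
{
    // Russian roulette: the photon is re-emitted with probability equal to
    // the surface's average reflectance, and keeps its full energy when it
    // survives - no weight scaling, so all photons stay at the same energy.
    public static boolean survives(float[] reflectance, Random rng)
    {
        float avg = (reflectance[0] + reflectance[1] + reflectance[2]) / 3f;
        return rng.nextFloat() < avg;
    }
}
```

On average this absorbs exactly the energy the surface would absorb, without ever producing dim photons.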

101 Oct 05, 2006 at 21:20

If I don’t scale the BRDF according to the weight, though, how would a red wall be red? I have it set up so the BRDF is independent of the color.

And if I don’t modify the weight at a diffusely reflective surface, how will I simulate things like color bleeding?

165 Oct 06, 2006 at 00:22

Let me clarify what I’m talking about. For a surface with multiple BRDFs you might have, say three BRDFs called f1, f2, and f3, and the BRDF of the surface could be 0.7*f1 + 0.2*f2 + 0.1*f3. When you choose which one of those BRDFs to sample, you make the choice so that there’s a 70% probability of choosing f1, then a 20% probability of choosing f2, then a 10% probability of choosing f3. If you were going to compute the whole compound BRDF, you would then compute the results for all three, then weight them and add them. But if you’re doing Russian roulette, you don’t weight them. That doesn’t have anything to do with the color content of the BRDF, as far as I can see.
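The weighted choice itself is just a cumulative-probability walk. A sketch (my own names; for the 0.7/0.2/0.1 example you’d pass those three weights, and the chosen BRDF’s result is then used unweighted, since the division by the probability cancels the weight):

```java
import java.util.Random;

class BRDFRoulette
{
    // Picks index i with probability weights[i]; the weights are assumed
    // to be non-negative and to sum to 1.
    public static int pick(float[] weights, Random rng)
    {
        float x = rng.nextFloat();
        float cumulative = 0f;
        for (int i = 0; i < weights.length; ++i)
        {
            cumulative += weights[i];
            if (x < cumulative) return i;
        }
        return weights.length - 1; // guard against float rounding
    }
}
```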

101 Oct 06, 2006 at 00:39

I actually have my BRDFs return a scalar, and then have three weights per BRDF, in the red, green, and blue spectrum respectively. The weight is detached from the BRDF, though. For Russian roulette, I take the average weight of each BRDF and then pick a random BRDF based on that.

It just seemed odd to have a diffuse BRDF with three floats for color, and then an additional float for its contribution.

165 Oct 06, 2006 at 04:35

@flux00

For russian-roulette, I take the average weight of each BRDF, and then pick a random BRDF based on that.

In that case, you should multiply the BRDF output by the red, green and blue weights, then divide all three components by that average weight.

101 Oct 09, 2006 at 00:42

Should that be done for both the radiance estimate and the re-emission of photons due to diffuse reflection?

165 Oct 09, 2006 at 01:15

Yes. That’s because when doing the radiance estimate, you’ll use the incoming photons at a surface, which haven’t had the BRDF of that surface applied to them.