Photon gathering

Alienizer 109 Mar 12, 2012 at 01:04

When gathering all the photons within a specific radius, do we use NumbPhotons/(PI*radius*radius) or do we use 1/(PI*radius*radius)?

56 Replies


Reedbeta 168 Mar 12, 2012 at 01:26

Typically you’re trying to get an estimate of the irradiance - the light power per unit area. Each photon represents a chunk of power, so you’d want to add up the power carried by all the photons, and divide by the area.
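In pseudocode, that basic estimate is just the following (a rough sketch; the field names are only for illustration):

flux = 0;
for each photon within radius r of the query point {
    flux += photon.flux;               // each photon carries a chunk of power (flux)
}
irradiance = flux / (PI * r * r);      // total power divided by the disc area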

Alienizer 109 Mar 12, 2012 at 01:57

@Reedbeta

Typically you’re trying to get an estimate of the irradiance - the light power per unit area. Each photon represents a chunk of power, so you’d want to add up the power carried by all the photons, and divide by the area.

So then I need to use TotalFlux/(PI*Radius*Radius) instead?

Reedbeta 168 Mar 12, 2012 at 03:25

Yes, that sounds right.

Alienizer 109 Mar 12, 2012 at 16:09

Thanks Reedbeta.

One more thing!

Do I also have to save the “light” direction and the normal for each photon, then when gathering them, take into consideration the normal, the “light” direction, and the distance? If so, does this look right?…

for each photon in radius do {
a = dot(photon.ray.dir, photon.object.normal) ;
b = dot(photon.ray.dir, surfaceNormal) ;
c = a/b;
d = photon.dist * radius;
flux += photon.flux * c * d;
}
flux = flux/(PI*radius*radius);

Reedbeta 168 Mar 12, 2012 at 16:52

Each photon, when it strikes a surface, should have a record of which direction it came from. When you gather the photons, you must push them through the surface’s BRDF to see how much output light they contribute to the direction you’re gathering from.

However, the N . L factor is NOT in the BRDF and should not be included. It will be taken care of by the geometry - fewer photons per unit area will naturally land on the surface when it is at a greater angle to the light source. Likewise, you do not need to attenuate the photons based on distance from the light source. That is also taken care of by the geometry, since fewer photons per unit area will naturally land on an object far from the light.

You should pick up Henrik Wann Jensen’s book on photon mapping if you’re interested in this - it explains everything in much greater detail.
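As a rough sketch of what the gather can look like (pseudocode; photonsInRadius and brdf are placeholders for whatever your own code provides, not a real API):

Lo = 0;
for each photon in photonsInRadius(hitPoint, r) {
    wi = -photon.incomingDir;                        // direction the photon arrived from
    // weight by the BRDF only - no extra N.L factor and no distance-to-light falloff,
    // since the photon density on the surface already accounts for both
    Lo += brdf(hitPoint, wi, outDir) * photon.flux;
}
Lo /= PI * r * r;                                    // flux -> irradiance over the search disc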

Alienizer 109 Mar 12, 2012 at 17:22

Thanks, I’ve looked at his paper but I’m still a bit confused, and I’m not that good at maths with all those weird symbols!

What I meant by distance was the distance of the photon within the radius. For example, when I shoot a ray from the camera into the scene and land on a surface, I take all the photons within a radius from that hit point. So I was wondering if the photons within this radius need to be attenuated the farther away they are from the hit point? (not from the light source)

Alienizer 109 Mar 12, 2012 at 18:18

Here is the problem I’m facing. If we pull all the photons within a radius, how do we handle the photons with the same normal as the hit point but under the hit point? Here is a screenshot of what I mean. The green dot is the point to query, so it’s at the center of the sphere, and all the black dots are the photons within the sphere radius. As you can see, the photons on the blue surface should not contribute to the green point on the red surface, correct? And what about those with opposite normals, facing the hit point? Should they contribute?

[screenshot: 70635416.jpg]

__________Smile_ 101 Mar 12, 2012 at 20:15

There is no easy way to distinguish between photons on nearby thin layers (usually you attenuate by N*V, thus skipping photons with opposite orientation, N*V < 0). Photon mapping is an approximate technique and you have to deal with approximation errors. A smaller sphere radius and a larger photon count give a more correct image. You can use the M nearest photons instead of a fixed sphere radius, or combine the approaches for a better-looking picture.

Alienizer 109 Mar 12, 2012 at 20:45

@}:+()___ (Smile)

There is no easy way to distinguish between photons on nearby thin layers (usually you attenuate by N*V, thus skipping photons with opposite orientation, N*V < 0). Photon mapping is an approximate technique and you have to deal with approximation errors. A smaller sphere radius and a larger photon count give a more correct image. You can use the M nearest photons instead of a fixed sphere radius, or combine the approaches for a better-looking picture.

oh! I see what you mean. Thanks!

Reedbeta 168 Mar 12, 2012 at 21:26

I think people usually use an ellipsoid rather than a sphere, too, so you can flatten the ellipsoid along the surface normal to try not to get so many photons from other surfaces.

As for attenuating photons by distance to the center of the search, that would help you get smoother results, so it could indeed be a good idea. I don’t remember off the top of my head whether HWJ does this sort of thing or not. Anyway, if you do attenuate them, just make sure you normalize by the sum of the weights, as usual with any weighted average. You could use any attenuation function you choose, whatever looks best.

Alienizer 109 Mar 12, 2012 at 21:42

@Reedbeta

I think people usually use an ellipsoid rather than a sphere, too, so you can flatten the ellipsoid along the surface normal to try not to get so many photons from other surfaces. As for attenuating photons by distance to the center of the search, that would help you get smoother results, so it could indeed be a good idea. I don’t remember off the top of my head whether HWJ does this sort of thing or not. Anyway, if you do attenuate them, just make sure you normalize by the sum of the weights, as usual with any weighted average. You could use any attenuation function you choose, whatever looks best.

Good idea about the ellipsoid! thanks!

What do you mean by “normalize by the sum of the weights”?

Reedbeta 168 Mar 12, 2012 at 21:44

I mean that you should keep track of the weights you apply to each photon and divide the total flux by the total weight, so that the unequal weights don’t cause you to make the whole thing overall brighter or darker.

Alienizer 109 Mar 12, 2012 at 22:10

@Reedbeta

I mean that you should keep track of the weights you apply to each photon and divide the total flux by the total weight, so that the unequal weights don’t cause you to make the whole thing overall brighter or darker.

oh ok thanks

roel 101 Mar 13, 2012 at 10:04

What I did, if I remember correctly, was to devise a function that measured how far photons were from the plane of the hit point (i.e. your red plane). You can do that with a dot product or so. Farther away => less weight; some parameter controlled it. And indeed, if you apply such ad hoc hacks, do something like:

totalweight = 0
flux = 0
for each photon
{
    weight = computeWeight(photon);      // e.g. falls off with distance from the hit point's plane
    flux += computeFlux(photon) * weight;
    totalweight += weight;
}
flux /= totalweight
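One concrete, distance-based choice for computeWeight is the cone filter from Jensen’s book, if I remember it right (a sketch; the constants are worth double-checking there, and photonsInRadius is just a placeholder):

k = 1.1;                                     // filter constant, k >= 1
weightedFlux = 0;
for each photon in photonsInRadius(hitPoint, r) {
    w = 1 - photon.dist / (k * r);           // linear falloff toward the edge of the search disc
    weightedFlux += w * computeFlux(photon);
}
// note: Jensen normalizes by the filter's integral over the disc, (1 - 2/(3k)) * PI * r^2,
// rather than by the weight sum used above
flux = weightedFlux / ((1 - 2/(3*k)) * PI * r * r);
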
__________Smile_ 101 Mar 13, 2012 at 13:03

As far as I understand, there is no sense in smoothing the photon field. It only worsens performance. The photon field is not designed for direct viewing, and smoothing will be done at the gathering stage.

Alienizer 109 Mar 13, 2012 at 21:25

@}:+()___ (Smile)

As far as I understand, there is no sense in smoothing the photon field. It only worsens performance. The photon field is not designed for direct viewing, and smoothing will be done at the gathering stage.

What do you suggest? I’m using the photon map for direct viewing; I do not use raytracing for direct illumination, and do not use separate caustic/indirect illumination photon maps.

__________Smile_ 101 Mar 14, 2012 at 18:02

Well… photon mapping is a technique for indirect illumination, so calculate the usual 1-bounce direct illumination and then add the indirect contribution. The photon map is sampled in the “final gathering” stage at the second bounce position, and the result will be good even with a small number of photons. In theory it’s possible to throw many photons and visualize the photon map directly, but that’s not photon mapping, it’s forward ray tracing with a 3D blur (much less effective and with more noticeable errors).

Alienizer 109 Mar 14, 2012 at 23:39

I see what you mean. But one problem I have is “final gathering”. I don’t know how to make this work. What I do now is take all photons within a radius to get an approximation. But it’s very grainy; I have to let it run for hours to look better! Or maybe I have the logic for “final gathering” all wrong?

Reedbeta 168 Mar 15, 2012 at 02:01

Final gathering refers to the process of using raytracing to compute the final image. One typically uses raytracing to do direct lighting and photon mapping for indirect lighting. That is to say, you trace from the eye into the scene and at each hit point, calculate indirect lighting by spawning a large number of rays in all directions (distributed/weighted according to the BRDF). At each of these secondary intersections you use the photon map to estimate light reflected back toward the primary hit point. So the photon map is not seen directly, only used on the second bounce back from the eye. Effectively any noise in the photon map is averaged out over the whole hemisphere of the primary ray hit point, quite a lot of photons.
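Very roughly, the structure is this (pseudocode; radianceEstimate is the photon-map gather described earlier - sum of photon flux times the BRDF, divided by the search area - and the other names are just for illustration):

for each pixel {
    hit = trace(ray from eye through pixel);
    direct = sampleLightsDirectly(hit);                   // raytraced direct lighting
    indirect = 0;
    for i = 1 to N {                                      // e.g. N = 128 or more gather rays
        dir  = sampleDirectionByBRDF(hit);                // importance-sample the BRDF
        hit2 = trace(ray from hit.position along dir);
        // photon map lookup at the SECONDARY hit, evaluated back toward the primary hit
        Li = radianceEstimate(hit2, directionToward(hit));
        indirect += Li * brdf(hit, dir, eyeDir) * dot(hit.normal, dir) / pdf(dir);
    }
    indirect /= N;
    pixel = direct + indirect;
}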

Alienizer 109 Mar 15, 2012 at 14:51

I get it, and did it, thanks. But now I get a very dull output, no shading or anything?

Isn’t this like path tracing, except we query the photon map instead? So we have to shoot a huge amount of secondary rays?

Reedbeta 168 Mar 15, 2012 at 18:12

Yeah, it is like path tracing for the first bounce back from the eye. But it should still be faster than path tracing since you use the photon map for all further bounces. BTW, I think the secondary ray samples can be co-stratified with pixel area, lens area etc. to reduce the total number of rays. I don’t know why you’d get a “dull” output; if everything’s working right, it should look the same as visualizing the photon map directly except for being a lot smoother.

Alienizer 109 Mar 15, 2012 at 21:55

Well, isn’t it supposed to be dull, since we average indirect lighting over secondary rays? It’s like every eye pixel is averaging, say, 128 rays from the photon map, so all of them will be about the same, right? This is what I do (in somewhat pseudocode)…

for each eye pixel cast ray to scene and get closest hit {
for x = 1 to 128 {
from hit point, cast ray to scene and get photon at hit point
add photon color to total
}
total /= 128
add total to direct illumination ray and set eye pixel to it
}

Reedbeta 168 Mar 15, 2012 at 23:19

“Get photon at hit point” sounds suspicious to me. Photons are incoming light, remember, so at the secondary hit points you would need to search for photons and process them through the surface’s BRDF to get the reflected light back toward the original hit point. Don’t use photons as reflected light, which they aren’t.

Alienizer 109 Mar 15, 2012 at 23:53

“get photon at hit point” means get all the photons in a radius at the hit point and average them. So I’m doing this wrong. What you are saying is, instead of averaging all the photons in the radius, I should treat each photon as if it were hitting the surface that the secondary rays came from?

Alienizer 109 Mar 15, 2012 at 23:57

Let me correct what I said: you mean that for each secondary ray, I get all the photons in the radius at the hit point, but for each photon I evaluate the BRDF to see how much light it reflects (mirror etc.) or whether it’s totally absorbed (Russian roulette)? And then I add all those up and average them?

Reedbeta 168 Mar 16, 2012 at 03:56

Yes, treat each photon as if it’s hitting that secondary surface and reflecting back toward the primary hit point. Therefore, multiply each photon by the BRDF evaluated using the direction from which the photon is coming and the direction toward the primary hitpoint. Then add those all up and divide by the area used in the photon search (converts the photons’ flux to irradiance on the secondary surface, so the units will work out right and the returned value will be a radiance).

Alienizer 109 Mar 16, 2012 at 16:58

Works beautifully Reedbeta. Thank you so much, and thanks everyone else as well for helping me out and taking the time to explain. I really appreciate it.

Alienizer 109 Mar 16, 2012 at 17:02

Just a tech question. Is it correct to say that using the photon map directly (direct & indirect lighting, caustics and all) with billions and billions of photons would produce the perfect output, as in real life?

Reedbeta 168 Mar 16, 2012 at 17:30

Yes, in the limit of very many photons it should produce a mathematically correct result. For it to be true to life of course requires also perfectly correct materials and scene modeling, etc. :)

Plus, real life has other things like dispersion, polarization, phosphorescence, fluorescence, nonlinear optics, and quantum phenomena that we hardly ever bother to model in computer graphics, because (in most conditions) we can’t see them. :)

Alienizer 109 Mar 16, 2012 at 21:20

Yes of course, besides, a pixel can only display 256 shades per RGB channel, so that alone is not even close to real life. But I should have asked it another way: can we use this as a reference image, meaning, use it to compare other styles of rendering, like ray tracing, path tracing and whatever? So for example, when making a raytracer, use the photon image to know how good the raytracer is?

__________Smile_ 101 Mar 16, 2012 at 21:29

It’s not a good idea to use a biased algorithm for a reference image. And photon mapping, even without final gathering, is a biased algorithm due to the 3D blur of the photon field. Better to use something like basic ray tracing with an insanely large ray count instead.

Reedbeta 168 Mar 16, 2012 at 21:52

Smile - if you use so many photons that the blur radius is ~1 pixel, it should be ok for practical purposes.

Alternatively, use this. :)

Alienizer 109 Mar 16, 2012 at 22:00

@}:+()___ (Smile)

It’s not a good idea to use a biased algorithm for a reference image. And photon mapping, even without final gathering, is a biased algorithm due to the 3D blur of the photon field. Better to use something like basic ray tracing with an insanely large ray count instead.

But raytracing doesn’t do caustics and color blending. Path tracing does not work well with very small emitters because the chance of hitting one is small, but photon mapping should simulate real-life light transport; it should even work for laser lights. And I was technically thinking that if I have trillions of photons (of course in a progressive manner) with a radius <1, it should be like the real thing. Materials have a lot to do with it as well, like Reedbeta said, but I would imagine that photon mapping is the way to go for a reference, no? I mean, if you let it run for a week, you should have all the dots blended in by then!

Alienizer 109 Mar 16, 2012 at 22:01

@Reedbeta

Smile - if you use so many photons that the blur radius is ~1 pixel, it should be ok for practical purposes. Alternatively, use this. :)

Yes I’ve looked at the progressive photon mapping. Works great, but slowwwwww.

But what do you mean Reedbeta by “it should be ok for practical purposes”? It’s not that good?

Reedbeta 168 Mar 16, 2012 at 22:12

I mean for practical use as a reference image to compare other renderers with. Even though technically any photon mapping technique with a finite number of photons is “biased” - well, real-world path traced images do not have an infinite number of samples either. :) The error can be made as low as you desire with either photon mapping or path tracing, by running with enough photons or enough samples, so I do not really see the point in calling out photon mapping as being biased, although by some particular technical definition, it is.

Alienizer 109 Mar 16, 2012 at 22:42

I see. But what exactly is “biased” and “unbiased” as far as a renderer goes? Is it just a saying, or are there actual facts behind the terms?

Reedbeta 168 Mar 16, 2012 at 23:09

Unbiased means that as you increase the number of samples taken, it will converge toward the mathematically correct answer. Biased means it converges toward something other than the mathematically correct answer, so some error is unavoidable no matter the number of samples taken. The terms derive from statistics where people speak of biased and unbiased “estimators”, i.e. formulas for estimating some quantity based on limited data. Being biased or unbiased is a provable mathematical property of the estimator.

The trouble, IMO, is that when people speak about photon mapping renderers they count only the raytracing phase as being part of the estimator when they talk about bias or lack thereof. Once you create a photon map with a finite number of photons, the error (due to having only finitely many photons) is “baked in”, and then raytracing an image using that photon map will converge not to the mathematically correct illumination in the scene, but to the illumination represented by the photon map, which has some error. That’s why they say it is biased - even infinitely many samples in the raytracing part will not eliminate the error from the photon mapping part.

It seems silly to me to neglect that you can also turn up the number of photons as well as the number of samples, and make it converge to the correct answer that way. In other words, consider the estimator as including both the photon mapping phase and the raytracing phase. Then it’s unbiased.

It’s true that in the traditional form of photon mapping you must store the photons all in a data structure, so memory consumption grows with the number of photons, so you will eventually run out of space, but IMO that is not a good reason to leave the photon mapping part out of the estimator. In any case, with the progressive photon mapping technique you do not need to store all the photons so it is really unbiased by any reasonable definition and even some unreasonable ones. :)

Alienizer 109 Mar 16, 2012 at 23:56

oh I see! So then, a pure raytracer (no photons) is biased because of the incorrect result, even if it ‘looks’ good. A path tracer will most of the time be unbiased if it can reach the lights (if they are not point lights), and a path tracer with direct light sampling is half raytracer and half path tracer, so it’s also biased, and the method you told me about, photon mapping for indirect illumination and raytracing for direct illumination, is also biased.

So in other words, only photon mapping with a radius <1 per pixel is truly unbiased, and path tracing (without point lights) is also unbiased; the rest are all biased. So most raytracers’ claims to be unbiased (and real fast, no dots) are not correct because they use approximations.

Am I getting this right?

Reedbeta 168 Mar 17, 2012 at 00:29

Just because it has direct light raytracing doesn’t mean it’s biased. As long as you are careful not to double-count some lighting components or leave some lighting components out, it’s perfectly legitimate to use different evaluation methods for different sectors of illumination, e.g. direct light sampling for light sources (even area lights), and path tracing or photon mapping for indirect light. This is just a form of importance sampling - distributing your samples in a way that puts more samples in areas likely to be important. This is compatible with being unbiased as long as you are careful about the math and make sure everything is correctly weighted and each lighting component is included exactly once.

A raytracer with no indirect illumination would be biased because it leaves out a lighting component. Path tracers are typically designed to be unbiased in all cases, whether they use direct light sampling or not (and it’s better that they do, because it decreases variance and therefore improves quality for a given amount of render time!). Photon mapping with or without final gathering is unbiased if you increase the number of photons as well as the number of samples. It’s biased if you consider only a fixed photon count while increasing the number of samples in the raytracing (& final gathering) part. Most raytracers’ claims to be unbiased are probably on the level (excepting bugs or unintentional design flaws, and really extreme issues like floating point precision etc.)

BTW, I guess I should clarify that with regard to the photon search radius, I’m expecting it to get smaller as the number of photons increases, so that it goes to zero in the limit.

Alienizer 109 Mar 17, 2012 at 01:15

ok, I see what you mean. As long as the renderer can do the ‘rendering equation’, it’s unbiased, no matter what approach it uses to get there.

As for the photon search radius, do you mean that I should start with a big radius, like x10, and bring it down a bit on every iteration till it’s zero and then stop? If I use something like this…

radius = 10    // for example
beta = 0.8

for each iteration {
    alpha = (radius*beta + beta) / (radius*beta + 1);
    radius *= alpha;
}

Reedbeta 168 Mar 17, 2012 at 01:51

Every iteration of what? The photon radius is usually sized to contain a constant number of photons, such as 100. Thus at each photon search you expand it as much as you need to contain the correct number (at least approximately; it doesn’t need to be exact). When you use more photons total they will be closer together and therefore the radius will get smaller.
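A sketch of that fixed-count gather (pseudocode; kNearestPhotons is a placeholder for a kd-tree search or similar):

photons = kNearestPhotons(photonMap, hitPoint, 100);     // e.g. the 100 nearest photons
r = distanceToFarthest(hitPoint, photons);               // the radius adapts to the local photon density
Lo = 0;
for each photon in photons {
    Lo += brdf(hitPoint, -photon.incomingDir, outDir) * photon.flux;
}
Lo /= PI * r * r;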

Alienizer 109 Mar 17, 2012 at 02:10

you got me here! So you mean to shoot x number of photons, then on the render pass you try to get 100 photons around that hit point and divide by the radius they are in? Then when does the radius get smaller if no more photons are shot?

Reedbeta 168 Mar 17, 2012 at 02:41

If you do the render with more photons you get lower radii. So if you did one render with 1 million photons and another with 2 million, most of the radii would be smaller in the second one. That’s all I meant.

The progressive photon mapping paper I linked earlier outlines an algorithm that continuously shoots more photons and tightens the radius within one render, but it operates with a bit of a different data model. Read the paper for details.
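For reference, if I remember the paper right, the per-hit-point update after each photon pass looks roughly like this (alpha is a constant in (0,1), e.g. 0.7; a sketch, not a substitute for reading the paper):

// N = photons accumulated at this hit point so far, R = its current radius,
// M = new photons found within R during this pass, tauM = their flux contribution
Nnew   = N + alpha * M;
Rnew   = R * sqrt(Nnew / (N + M));           // the radius shrinks as photons accumulate
tauNew = (tau + tauM) * Nnew / (N + M);      // scale the accumulated flux to match the smaller disc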

Alienizer 109 Mar 17, 2012 at 03:00

I get it, thanks Reedbeta for all your help. I appreciate it.

__________Smile_ 101 Mar 17, 2012 at 03:18

Well… the biasedness of photon mapping is more practical than mathematical. You don’t know the required number of photons beforehand and cannot simply estimate the error of the rendered image. And even if you can determine that the resulting image is not good enough, you must restart the whole rendering process with an increased number of photons. With path tracing, on the other hand, you can estimate the noisiness simply by looking and continue rendering longer if needed.

roel 101 Mar 17, 2012 at 11:23

@Reedbeta

Photon mapping with or without final gathering is unbiased if you increase the number of photons as well as the number of samples.

Aren’t you confusing unbiased with consistent, or did I misunderstand you?

Unbiased: the correct answer is computed on average;
Consistent: converges towards the correct solution when given more samples.

Reedbeta 168 Mar 17, 2012 at 18:10

But what does it mean that “the correct answer is computed on average”? As far as I can tell, this means that the expectation value of the distribution from which samples are drawn is at the mathematically-correct value, which just means that as more samples are taken, the average of the samples converges to the mathematically-correct value.

macnihilist 101 Mar 17, 2012 at 19:02

Well, it’s been a while since I posted here, but this is an interesting topic, so I thought I’d throw in my two cents:

Unbiased: Expected value of estimator is what you want to compute.
Consistent: lim_{n->inf}(estimator) = what you want to compute (with probability one, for whatever n is, e.g. the number of photons).
In other words: maybe the expected value of your estimator is off, but you can get arbitrarily close to what you want by increasing n.

For unbiased estimators, the only error you have is the variance of the estimator.
For biased estimators, however, there is (usually a systematic) source of additional error (the bias).
Mostly this has theoretical consequences (e.g. for calculating convergence rates), and often (e.g. in computer graphics) there are biased estimators that converge more quickly than unbiased ones (much, much, much more quickly).
Just not to the exact result.
As long as you have some idea of what your bias is, it is safe to use biased estimators.

But there are also some practical consequences.
For example you can take n independent runs of an unbiased estimator and average them and can expect the error to decrease.
With a biased estimator you can average as many runs as you like; if there is a systematic error in each run, it will stay in the final result.

As was pointed out earlier in this thread, this IS of practical consequence for photon mapping.
The problem with (classical, not progressive) photon mapping is that the bias vanishes only as the number of photons (in ONE PASS) approaches infinity.
But the number of photons is bounded relatively tightly, because you’ll run out of storage quickly.
So you can average 10 path tracing runs (or just let it run longer), but averaging 10 photon mapping runs with N photons each probably does not work (not that I’ve tried it).
This was the primary motivation for progressive photon mapping.
(I actually ran into this problem when I worked on cluster rendering. Let each machine compute the whole image and average? Beep, not so easy for biased algorithms.)

Also, I’d like to point out that _estimators_ are biased or unbiased.
You state what you want to compute, and then you come up with an estimator for that.
It is perfectly valid to have an estimator that only computes direct illumination or only diffuse interreflection.
It is still unbiased, as long as the expected value is the direct illumination or diffuse interreflections.
Otherwise, as was already pointed out, nothing we do in computer graphics would be really unbiased, e.g. because of machine precision. (Not that it would matter in practice.)

EDIT: Little Add-on:
Since I used path tracing as an example for “unbiased” above, I should probably add that the variants of path tracing that are usually implemented technically aren’t really unbiased.
And not just because of floats and pseudorandom numbers.
Almost all tricks that you have to use to make the original estimator remotely practical introduce bias.
Examples are: limiting path length, clamping low throughput paths, next event estimation, adaptive sampling (although I’m not 100% sure with that one, it’s probably unbiased if you do it right)
But the bias is in general considered negligible.

roel 101 Mar 18, 2012 at 15:53

Excellent explanation, macnihilist. I learned the distinction between consistent and unbiased from this paper. I’ll quote the part about photon mapping from it, as it is intuitively easy to understand:

There are several sources of bias in photon mapping, but to see that it is biased simply consider what happens when a large number of images generated by a photon mapper are averaged. For example, if we have too few photons in the caustic map, caustics appear blurry due to interpolation. Averaging a large number of blurry caustics will not result in a sharp caustic – in other words, we don’t expect to get the correct answer on average. On the other hand, as we increase the number of photons in the photon map, the region used for each density estimate shrinks to a point. In the limit, a photon used to estimate illumination at a point will correspond to the end of a light subpath at that point. Therefore, as long as the photon map contains a proper distribution of paths, photon mapping is consistent

(Keenan Crane, “Bias in Rendering”, http://multires.calt…keenan/bias.pdf).

And the above is also my answer to Reedbeta’s post: @Reedbeta

But what does it mean that “the correct answer is computed on average”? As far as I can tell, this means that the expectation value of the distribution from which samples are drawn is at the mathematically-correct value, which just means that as more samples are taken, the average of the samples converges to the mathematically-correct value.

By the way: @macnihilist

Almost all tricks that you have to use to make the original estimator remotely practical introduce bias.
Examples are: limiting path length, clamping low throughput paths, next event estimation, adaptive sampling (although I’m not 100% sure with that one, it’s probably unbiased if you do it right)

I’ve never created a serious unbiased ray tracer, but I believe that your statement is not (or should not be) true. For example, one can use stochastic path lengths to limit the path length, and compensate for the change in the expected value with a constant factor (e.g. see here, page 116). There are also countless carefully constructed math tricks to maintain unbiasedness while improving sampling performance. Like Metropolis Light Transport (Veach), but I have to admit that I never understood that algorithm.

macnihilist 101 Mar 18, 2012 at 21:27

You are right that there are implementations that can really be called unbiased and practical at the same time. I just wanted to point out that a lot of seemingly harmless things can bias an estimator – if your nit-pick level is high enough.

Take Russian roulette for example, since you brought it up. The way I see it, it does not really limit path lengths, it just makes long paths less likely. So in the end you still have to cut them somewhere. Of course, you can make the probability of long paths so ridiculously low that this is merely a technicality. Still, if you take the biased-unbiased thing _really_ seriously, you can regard any practical implementation of path tracing with Russian roulette as biased. (But that is more my personal opinion, not a well established fact, as far as I know, so you people should probably take it with a grain of salt.)

Alienizer 109 Mar 18, 2012 at 21:45

Isn’t the simplest explanation that a biased renderer is one that does not solve the rendering equation, while an unbiased renderer does?

roel 101 Mar 18, 2012 at 21:53

No, a biased but consistent renderer solves it too in the end. That’s the entire point.

Reedbeta 168 Mar 18, 2012 at 22:35

Yes, if you want to be really nitpicky, you must use arbitrary-precision arithmetic to have an unbiased renderer - else you only converge to the answer rounded off to floating-point precision. :) Though I doubt anyone is really worried about this.

OK, it’s true that averaging a large number of photon-mapping renders still leaves error on the table, due to the photon search radius in each component image. On the other hand, after thinking about it more, I think progressive photon mapping suffers from an inverse issue: while it allows arbitrarily many photons to be accumulated and gradually decreases the search radius to zero, it has a fixed, finite number of “hit points” at which it accumulates those photons.

So:
classical photon mapping: finite photon count, unbounded sample count
progressive photon mapping: finite sample count, unbounded photon count

Therefore PPM is biased and inconsistent too, under the conventional definitions, since there is error “baked in” to the chosen set of hit points. Now, if you averaged a large number of renders using PPM, you’d get different hit points for each, reducing the error associated with them - but then you must either cut off each render at a finite number of photons, reintroducing bias, or you return to having an unbounded amount of memory to store an unbounded number of hitpoints onto which you accumulate an unbounded number of photons.

Conclusion: truly unbiased rendering is not possible without an unbounded amount of memory. If you ignore numerical precision limits, then path-tracing and friends do not need unbounded memory, but photon-mapping algorithms still do.

(At least, the photon-mapping algorithms currently known still do. I can’t rule out the possibility that there is some clever way of recycling hitpoints in the PPM algorithm that would solve the issue…)

macnihilist 101 Mar 19, 2012 at 06:55

@Reedbeta

Therefore PPM is biased and inconsistent too, under the conventional definitions, since there is error “baked in” to the chosen set of hit points.

I haven’t looked at PPM to an extent that I’d _really_ understand it, but I’m pretty sure it is biased and consistent. Maybe it’s just per definition (“I only want to estimate the radiance leaving that point in that direction” instead of “I want to estimate the averaged (ir)radiance that reaches a virtual sensor in this camera”); I don’t know for sure atm. Also, at least stochastic PPM should be able to overcome the fixed shading samples and stay consistent.

Reedbeta 168 Mar 19, 2012 at 16:13

Ahh, I hadn’t heard of stochastic PPM before! I’ll have to read up on that.

Reedbeta 168 Mar 21, 2012 at 06:37

@macnihilist

Maybe it’s just per definition (“I only want to estimate the radiance leaving that point in that direction” instead of “I want to estimate the averaged (ir)radiance that reaches a virtual sensor in this camera”)

Yes, I think that’s the case. PPM correctly estimates incoming radiance at a set of points, but doesn’t support integrating incoming radiance over a domain - which isn’t just for “distribution ray tracing” stuff like defocus, motion blur, and glossy reflections; it’s also needed for just plain old antialiasing if you want to do it properly (with stochastic subpixel sampling and a good reconstruction filter).

Anyway, the SPPM technique is very nice and seems to solve this problem. I didn’t follow all the details of their derivation of why it works, but the test images are quite nice.