# Photon gathering

56 replies to this topic

### #41 Alienizer

Member

• Members
• 435 posts

Posted 17 March 2012 - 01:15 AM

OK, I see what you mean. As long as the renderer can solve the 'rendering equation', it's unbiased, no matter what approach it uses to get there.

As for the photon search radius, do you mean that I should start with a big radius, like 10x, and bring it down a bit on every iteration until it reaches zero, and then stop? If I use something like this...

```
beta = 0.8

for each iteration {
    radius = radius * beta
}
```

### #42 Reedbeta

DevMaster Staff

• 5307 posts
• Location: Bellevue, WA

Posted 17 March 2012 - 01:51 AM

Every iteration of what? The photon radius is usually sized to contain a constant number of photons, such as 100. Thus at each photon search you expand it as much as you need to contain the correct number (at least approximately; it doesn't need to be exact). When you use more photons total they will be closer together and therefore the radius will get smaller.
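A minimal sketch of that idea (hypothetical names; a real photon map would use a kd-tree rather than sorting the whole list):

```python
import math

def photon_density(photons, x, k=100):
    """Estimate photon power density at point x from the k nearest
    photons. `photons` is a list of (position, power) pairs, with
    positions as 3-tuples. The search radius is whatever it takes to
    enclose k photons, so more photons overall means a smaller radius.
    (A real renderer would use a kd-tree, not a full sort.)"""
    def dist2(photon):
        return sum((a - b) ** 2 for a, b in zip(photon[0], x))
    nearest = sorted(photons, key=dist2)[:k]
    r2 = dist2(nearest[-1])                  # squared radius enclosing k photons
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r2)      # density over the enclosing disc
```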
reedbeta.com - developer blog, OpenGL demos, and other projects

### #43 Alienizer

Member

• Members
• 435 posts

Posted 17 March 2012 - 02:10 AM

you got me here! So you mean to shoot x number of photons, then on the render pass, try to gather the 100 nearest photons around the hit point and divide by the area they cover? Then how does the radius get any smaller if no more photons are shot??

### #44 Reedbeta

DevMaster Staff

• 5307 posts
• Location: Bellevue, WA

Posted 17 March 2012 - 02:41 AM

If you do the render with more photons you get lower radii. So if you did one render with 1 million photons and another with 2 million, most of the radii would be smaller in the second one. That's all I meant.

The progressive photon mapping paper I linked earlier outlines an algorithm that continuously shoots more photons and tightens the radius within one render, but it operates with a bit of a different data model. Read the paper for details.

### #45 Alienizer

Member

• Members
• 435 posts

Posted 17 March 2012 - 03:00 AM

I get it, thanks Reedbeta for all your help. I appreciate it.

### #46 }:+()___ (Smile)

Member

• Members
• 169 posts

Posted 17 March 2012 - 03:18 AM

Well... the biasedness of photon mapping is more practical than mathematical. You don't know the required number of photons beforehand and cannot simply estimate the error of the rendered image. And even if you can determine that the resulting image is not good enough, you must restart the whole rendering process with an increased number of photons. With path tracing, on the other hand, you can estimate the noisiness simply by looking, and continue rendering longer if needed.
Sorry my broken english!

### #47 roel

Senior Member

• Members
• 698 posts

Posted 17 March 2012 - 11:23 AM

Reedbeta, on 17 March 2012 - 12:29 AM, said:

Photon mapping with or without final gathering is unbiased if you increase the number of photons as well as the number of samples.
Aren't you confusing unbiased with consistent, or did I misunderstand you?

Unbiased: the correct answer is computed on average;
Consistent: converges towards the correct solution when given more samples.

### #48 Reedbeta

DevMaster Staff

• 5307 posts
• Location: Bellevue, WA

Posted 17 March 2012 - 06:10 PM

But what does it mean that "the correct answer is computed on average"? As far as I can tell, this means that the expectation value of the distribution from which samples are drawn is at the mathematically-correct value, which just means that as more samples are taken, the average of the samples converges to the mathematically-correct value.

### #49 macnihilist

New Member

• Members
• 19 posts

Posted 17 March 2012 - 07:02 PM

Well, it's been a while since I posted here, but this is an interesting topic, so I thought I'd throw in my two cents:

Unbiased: Expected value of estimator is what you want to compute.
Consistent: lim_{n->inf}(estimator) = what you want to compute (with probability one, where n is e.g. the number of photons).
In other words: maybe the expected value of your estimator is off, but you can get arbitrarily close to what you want by increasing n.

For unbiased estimators, the only error you have is the variance of the estimator.
For biased estimators, however, there is (usually a systematic) source of additional error (the bias).
Mostly this has theoretical consequences (e.g. for calculating convergence rates), and often (e.g. in computer graphics) there are biased estimators that converge more quickly than unbiased ones (much, much, much more quickly).
Just not to the exact result.
As long as you have some idea of what your bias is, it is safe to use biased estimators.

But there are also some practical consequences.
For example, you can take n independent runs of an unbiased estimator, average them, and expect the error to decrease.
With a biased estimator you can average as many runs as you like; if there is a systematic error in each run, it will stay in the final result.
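A toy numerical illustration of this point (nothing renderer-specific; clamping here just stands in for any systematic error):

```python
import random

random.seed(1)
TRUE_MEAN = 0.5                      # mean of Uniform(0, 1)

def unbiased_run(n=100):
    # Plain sample mean: its expected value is exactly TRUE_MEAN.
    return sum(random.random() for _ in range(n)) / n

def biased_run(n=100, clamp=0.5):
    # Clamping samples introduces a systematic error:
    # E[min(U, 0.5)] = 0.375, not 0.5.
    return sum(min(random.random(), clamp) for _ in range(n)) / n

runs = 5000
# Averaging many runs drives the unbiased estimate to 0.5,
# while the biased one stays stuck near 0.375.
avg_unbiased = sum(unbiased_run() for _ in range(runs)) / runs
avg_biased = sum(biased_run() for _ in range(runs)) / runs
```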

As was pointed out earlier in this thread, this IS of practical consequence for photon mapping.
The problem with (classical, not progressive) photon mapping is that the bias vanishes only as the number of photons (in ONE PASS) approaches infinity.
But the number of photons is bounded relatively tightly, because you'll run out of storage quickly.
So you can average 10 path-tracing runs (or just let one run longer), but averaging 10 photon-mapping runs with N photons each probably does not work (not that I've tried it).
This was the primary motivation for progressive photon mapping.
(I actually ran into this problem when I worked on cluster rendering. Let each machine compute the whole image and average? Beep, not so easy for biased algorithms.)

Also, I'd like to point out that _estimators_ are biased or unbiased.
You state what you want to compute, and then you come up with an estimator for that.
It is perfectly valid to have an estimator that only computes direct illumination or only diffuse interreflection.
It is still unbiased, as long as the expected value is the direct illumination or the diffuse interreflection.
Otherwise, as was already pointed out, nothing we do in computer graphics would be really unbiased, e.g. because of machine precision. (Not that it would matter in practice.)

Since I used path tracing as an example for "unbiased" above, I should probably add that the variants of path tracing that are usually implemented technically aren't really unbiased.
And not just because of floats and pseudorandom numbers.
Almost all tricks that you have to use to make the original estimator remotely practical introduce bias.
Examples are: limiting path length, clamping low-throughput paths, next event estimation, and adaptive sampling (although I'm not 100% sure about that one; it's probably unbiased if you do it right).
But the bias is in general considered negligible.

### #50 roel

Senior Member

• Members
• 698 posts

Posted 18 March 2012 - 03:53 PM

Excellent explanation, macnihilist. I learned the distinction between consistent and unbiased from this paper. I'll quote the part about photon mapping from it, as it is intuitively easy to understand:

Quote

There are several sources of bias in photon mapping, but to see that it is biased simply consider what happens when a large number of images generated by a photon mapper are averaged. For example, if we have too few photons in the caustic map, caustics appear blurry due to interpolation. Averaging a large number of blurry caustics will not result in a sharp caustic – in other words, we don’t expect to get the correct answer on average. On the other hand, as we increase the number of photons in the photon map, the region used for each density estimate shrinks to a point. In the limit, a photon used to estimate illumination at a point will correspond to the end of a light subpath at that point. Therefore, as long as the photon map contains a proper distribution of paths, photon mapping is consistent
(Keenan Crane, "Bias in Rendering", http://multires.calt...keenan/bias.pdf).
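The blurry-caustic argument can be reproduced with a one-dimensional toy (a hypothetical example, not from the paper): estimate the density of a triangular distribution at its peak with a fixed-radius window. The fixed radius "blurs" the peak no matter how many independent runs are averaged, while a single run with many samples and a small radius gets close to the true value.

```python
import random

random.seed(2)

def peak_estimate(n, radius):
    """Fixed-radius density estimate at x = 0 for the triangular
    distribution on [-1, 1] with mode 0; the true peak density is 1.0.
    The window averages the density over [-radius, radius], so the
    estimate is pulled toward 1 - radius/2: the 'blurry caustic'."""
    pts = (random.triangular(-1.0, 1.0, 0.0) for _ in range(n))
    inside = sum(1 for p in pts if abs(p) < radius)
    return inside / (n * 2 * radius)

# Averaging 200 fixed-radius runs: the blur (bias) stays, ~0.75.
fixed = sum(peak_estimate(1000, 0.5) for _ in range(200)) / 200
# One run with many samples and a shrunken radius: ~1.0 (consistency).
shrunk = peak_estimate(200000, 0.02)
```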

And the above is also my answer to Reedbeta's post:

Reedbeta, on 17 March 2012 - 06:10 PM, said:

But what does it mean that "the correct answer is computed on average"? As far as I can tell, this means that the expectation value of the distribution from which samples are drawn is at the mathematically-correct value, which just means that as more samples are taken, the average of the samples converges to the mathematically-correct value.

By the way:

macnihilist, on 17 March 2012 - 07:02 PM, said:

Almost all tricks that you have to use to make the original estimator remotely practical introduce bias.
Examples are: limiting path length, clamping low throughput paths, next event estimation, adaptive sampling (although I'm not 100% sure with that one, it's probably unbiased if you do it right)
I've never created a serious unbiased ray tracer, but I believe that your statement is not (or should not be) true. For example, one can use stochastic path lengths to limit the path length and compensate for the change in the expected value with a constant factor (e.g. see here, page 116). There are also countless carefully constructed math tricks that maintain unbiasedness while improving sampling performance, like Metropolis Light Transport (Veach), though I have to admit I never understood that algorithm.
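A sketch of that compensation idea on a toy problem (hypothetical, not the construction in the linked slides): summing the geometric series sum_{k>=0} 0.5^k = 2 with Russian-roulette-style stochastic termination. Every path is finite, yet dividing the surviving throughput by the continuation probability keeps the expected value exact.

```python
import random

random.seed(3)

def rr_estimate(albedo=0.5, p_continue=0.8):
    """Unbiased estimate of sum_{k>=0} albedo**k = 2.0.
    The path is cut off stochastically, but on survival the
    throughput is divided by p_continue, so the expected
    contribution at depth k stays exactly albedo**k."""
    total, throughput = 0.0, 1.0
    while True:
        total += throughput
        if random.random() > p_continue:
            return total                     # path terminated
        throughput *= albedo / p_continue    # survived: compensate

# Average many finite paths: converges to the infinite sum, 2.0.
mean = sum(rr_estimate() for _ in range(200000)) / 200000
```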

### #51 macnihilist

New Member

• Members
• 19 posts

Posted 18 March 2012 - 09:27 PM

You are right that there are implementations that can really be called unbiased and practical at the same time. I just wanted to point out that a lot of seemingly harmless things can bias an estimator -- if your nit-pick level is high enough.

Take Russian roulette for example, since you brought it up. The way I see it, it does not really limit path lengths, it just makes long paths less likely. So in the end you still have to cut them somewhere. Of course, you can make the probability of long paths so ridiculously low that this is merely a technicality. Still, if you take the biased-unbiased thing _really_ seriously, you can regard any practical implementation of path tracing with Russian roulette as biased. (But that is more my personal opinion, not a well established fact, as far as I know, so you people should probably take it with a grain of salt.)

### #52 Alienizer

Member

• Members
• 435 posts

Posted 18 March 2012 - 09:45 PM

Isn't the simplest explanation that a biased renderer is one that does not solve the rendering equation, while an unbiased renderer does?

### #53 roel

Senior Member

• Members
• 698 posts

Posted 18 March 2012 - 09:53 PM

No, a biased but consistent renderer solves it too in the end. That's the entire point.

### #54 Reedbeta

DevMaster Staff

• 5307 posts
• Location: Bellevue, WA

Posted 18 March 2012 - 10:35 PM

Yes, if you want to be really nitpicky, you must use arbitrary-precision arithmetic to have an unbiased renderer - else you only converge to the answer rounded off to floating-point precision. Though I doubt anyone is really worried about this.

OK, it's true that averaging a large number of photon-mapping renders still leaves error on the table, due to the photon search radius in each component image. On the other hand, after thinking about it more, I think progressive photon mapping suffers from an inverse issue: while it allows arbitrarily many photons to be accumulated and gradually decreases the search radius to zero, it has a fixed, finite number of "hit points" at which it accumulates those photons.

So:
classical photon mapping: finite photon count, unbounded sample count
progressive photon mapping: finite sample count, unbounded photon count

Therefore PPM is biased and inconsistent too, under the conventional definitions, since there is error "baked in" to the chosen set of hit points. Now, if you averaged a large number of renders using PPM, you'd get different hit points for each, reducing the error associated with them - but then you must either cut off each render at a finite number of photons, reintroducing bias, or you need an unbounded amount of memory to store an unbounded number of hit points onto which you accumulate an unbounded number of photons.

Conclusion: truly unbiased rendering is not possible without an unbounded amount of memory. If you ignore numerical precision limits, then path-tracing and friends do not need unbounded memory, but photon-mapping algorithms still do.

(At least, the photon-mapping algorithms currently known still do. I can't rule out the possibility that there is some clever way of recycling hitpoints in the PPM algorithm that would solve the issue...)

### #55 macnihilist

New Member

• Members
• 19 posts

Posted 19 March 2012 - 06:55 AM

Reedbeta, on 18 March 2012 - 10:35 PM, said:

Therefore PPM is biased and inconsistent too, under the conventional definitions, since there is error "baked in" to the chosen set of hit points.

I haven't looked at PPM to an extent that I'd _really_ understand it, but I'm pretty sure it is biased and consistent. Maybe it's just by definition ("I only want to estimate the radiance leaving that point in that direction" instead of "I want to estimate the averaged (ir)radiance that reaches a virtual sensor in this camera"); I don't know for sure at the moment. Also, at least stochastic PPM should be able to overcome the fixed shading samples and stay consistent.

### #56 Reedbeta

DevMaster Staff

• 5307 posts
• Location: Bellevue, WA

Posted 19 March 2012 - 04:13 PM

Ahh, I hadn't heard of stochastic PPM before! I'll have to read up on that.

### #57 Reedbeta

DevMaster Staff

• 5307 posts
• Location: Bellevue, WA

Posted 21 March 2012 - 06:37 AM

macnihilist, on 19 March 2012 - 06:55 AM, said:

Maybe it's just per definition ("I only want to estimate the radiance leaving that point in that direction" instead of "I want to estimate the averaged (ir)radiance that reaches a virtual sensor in this camera")

Yes, I think that's the case. PPM correctly estimates incoming radiance at a set of points, but doesn't support integrating incoming radiance over a domain - which isn't just for "distribution ray tracing" stuff like defocus, motion blur, and glossy reflections; it's also needed for just plain old antialiasing if you want to do it properly (with stochastic subpixel sampling and a good reconstruction filter).

Anyway, the SPPM technique is very nice and seems to solve this problem. I didn't follow all the details of their derivation why it works, but the test images are quite nice.
