Typically you’re trying to get an estimate of the irradiance - the light power per unit area. Each photon represents a chunk of power, so you’d want to add up the power carried by all the photons, and divide by the area.
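For example, if each stored photon carries a power value (an illustrative representation, not a fixed API), the estimate over a gather disc of radius r is just the summed power over π·r²:

```python
import math

def estimate_irradiance(photon_powers, radius):
    """Irradiance estimate from gathered photons: total power carried by
    the photons, divided by the area of the gather disc (units: W / m^2)."""
    total_power = sum(photon_powers)
    return total_power / (math.pi * radius * radius)
```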


@Reedbeta

So then I need to use TotalFlux/(PI*Radius*Radius) instead?

Thanks Reedbeta.

One more thing!

Do I also have to save the “light” direction and the normal for each photon, then take the normal, the “light” direction, and the distance into consideration when gathering them? If so, does this look right?…

```
for each photon in radius do {
    a = dot(photon.ray.dir, photon.object.normal);
    b = dot(photon.ray.dir, surfaceNormal);
    c = a / b;
    d = photon.dist * radius;
    flux += photon.flux * c * d;
}
flux = flux / (PI * radius * radius);
```

Each photon, when it strikes a surface, should have a record of which direction it came from. When you gather the photons, you must push them through the surface’s BRDF to see how much output light they contribute to the direction you’re gathering from.

However, the N . L factor is NOT in the BRDF and should not be included. It will be taken care of by the geometry - fewer photons per unit area will naturally land on the surface when it is at a greater angle to the light source. Likewise, you do not need to attenuate the photons based on distance from the light source. That is also taken care of by the geometry, since fewer photons per unit area will naturally land on an object far from the light.
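As a sketch of that gathering step for the simplest case - a Lambertian surface, whose BRDF is albedo/π and is independent of direction - note there is deliberately no N·L factor and no distance attenuation in the sum (names here are illustrative):

```python
import math

def radiance_estimate(photons, albedo, radius):
    """Photon-map radiance estimate at a gather point, for a Lambertian
    surface whose BRDF is albedo / pi (direction-independent).
    `photons` is a list of (power, incoming_direction) records; for a
    more general BRDF you would evaluate brdf(incoming, outgoing) per
    photon.  Deliberately NO N.L factor and NO distance falloff: the
    photon density on the surface already accounts for both."""
    brdf = albedo / math.pi
    total = sum(power * brdf for power, _direction in photons)
    return total / (math.pi * radius * radius)  # divide by gather-disc area
```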

You should pick up Henrik Wann Jensen’s book on photon mapping if you’re interested in this - it explains everything in much greater detail.

Thanks, I’ve looked at his paper but I’m still a bit confused, and I’m not that good at maths with all those weird symbols!

What I meant by distance was the distance of the photon within the radius. For example, when I shoot a ray from the camera into the scene and it lands on a surface, I take all the photons within a radius of that hit point. So I was wondering whether the photons within this radius need to be attenuated the farther away they are from the hit point (not from the light source)?

@}:+()___ (Smile)

oh! I see what you mean. Thanks!

I think people usually use an ellipsoid rather than a sphere, too, so you can flatten the ellipsoid along the surface normal to try not to get so many photons from other surfaces.

As for attenuating photons by distance to the center of the search, that would help you get smoother results, so it could indeed be a good idea. I don’t remember off the top of my head whether HWJ does this sort of thing or not. Anyway, if you do attenuate them, just make sure you normalize by the sum of the weights, as usual with any weighted average. You can use whatever attenuation function looks best.

@Reedbeta

Good idea about the ellipsoid! thanks!

What do you mean by “normalize by the sum of the weights”?

@Reedbeta

oh ok thanks

What I did, if I remember correctly, was to devise a function that measured how far photons were from the plane of the hit point (i.e. your red plane). You can do that with a dot product or so. Farther away => less weight; a parameter controlled the falloff. And indeed, if you apply such ad hoc hacks, do something like:

```
totalweight = 0
for each photon
{
    weight = computeWeight(photon);
    flux += computeFlux(photon) * weight;
    totalweight += weight;
}
flux /= totalweight
```

@}:+()___ (Smile)

What do you suggest? I’m viewing the photon map directly - I do not use raytracing for direct illumination, and do not use separate caustic/indirect illumination photon maps.

At *secondary* intersections you use the photon map to estimate light reflected back toward the primary hit point. So the photon map is not seen directly, only used on the second bounce back from the eye. Effectively, any noise in the photon map is averaged out over the whole hemisphere of the primary ray’s hit point - quite a lot of photons.

I get it, and did it, thanks. But now I get a very dull output, no shading or anything?

Isn’t this like path tracing, except we query the photon map instead? So we have to shoot a huge number of secondary rays?

Well, isn’t it supposed to be dull, since we average indirect lighting over secondary rays? It’s like every eye pixel is averaging, say, 128 rays from the photon map, so all of them will be about the same, right? This is what I do (in rough pseudocode)…

```
for each eye pixel, cast ray to scene and get closest hit {
    for x = 1 to 128 {
        from hit point, cast ray to scene and get photon at hit point
        add photon color to total
    }
    total /= 128
    add total to direct illumination ray and set eye pixel to it
}
```
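A runnable sketch of that per-pixel gather loop, under the assumption of cosine-weighted hemisphere sampling and a Lambertian surface (so the usual Monte Carlo weight BRDF·cosθ/pdf collapses to the albedo); `trace_to_photon_map` stands in for the scene query plus photon-map radiance estimate, and all names here are illustrative:

```python
import math
import random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cosine_sample_hemisphere(normal):
    """Sample a direction about `normal` with pdf = cos(theta) / pi."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
    # Build an orthonormal frame (t, b, normal) around the surface normal.
    helper = (0.0, 1.0, 0.0) if abs(normal[0]) > 0.9 else (1.0, 0.0, 0.0)
    t = normalize(cross(helper, normal))
    b = cross(normal, t)
    return tuple(local[0] * t[i] + local[1] * b[i] + local[2] * normal[i]
                 for i in range(3))

def final_gather(hit_point, normal, albedo, trace_to_photon_map, n_samples=128):
    """One final-gather estimate at hit_point: average photon-map radiance
    over the hemisphere. With cosine-weighted sampling and a Lambertian
    BRDF (albedo / pi), the Monte Carlo weight brdf * cos / pdf collapses
    to just `albedo`."""
    total = 0.0
    for _ in range(n_samples):
        direction = cosine_sample_hemisphere(normal)
        # In a real renderer this traces a secondary ray and does the
        # photon-map radiance estimate at whatever surface it hits.
        total += trace_to_photon_map(hit_point, direction)
    return albedo * total / n_samples
```

A quick sanity check of the weighting: with constant incoming radiance L the loop returns albedo·L exactly, matching the analytic answer (albedo/π)·L·π.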

Yes, in the limit of very many photons it should produce a mathematically correct result. For it to be true to life of course requires also perfectly correct materials and scene modeling, etc. :)

Plus, real life has other things like dispersion, polarization, phosphorescence, fluorescence, nonlinear optics, and quantum phenomena that we hardly ever bother to model in computer graphics, because (in most conditions) we can’t see them. :)

Smile - if you use so many photons that the blur radius is ~1 pixel, it should be ok for practical purposes.

Alternatively, use this. :)

@}:+()___ (Smile)

But raytracing doesn’t do caustics and color bleeding. Path tracing does not work well with very small emitters, because the chance of hitting one is small, but photon mapping should simulate real-life light transport; it should even work for laser lights. And I was thinking that if I have trillions of photons (accumulated progressively, of course) with a radius < 1, it should look like the real thing. Materials have a lot to do with it as well, like Reedbeta said, but I would imagine photon mapping is the way to go for a reference, no? I mean, if you let it run for a week, all the dots should have blended in by then!

@Reedbeta

Yes I’ve looked at the progressive photon mapping. Works great, but slowwwwww.

But what do you mean Reedbeta by “it should be ok for practical purposes”? It’s not that good?

*Technically* any photon mapping technique with a finite number of photons is “biased” - but then, real-world path-traced images do not have an infinite number of samples either. :) The error can be made as low as you desire with either photon mapping or path tracing, by running with enough photons or enough samples, so I do not really see the point in calling out photon mapping as being biased, although by some particular technical definition, it is.

Unbiased means that as you increase the number of samples taken, it will converge toward the mathematically correct answer. Biased means it converges toward something other than the mathematically correct answer, so some error is unavoidable no matter the number of samples taken. The terms derive from statistics where people speak of biased and unbiased “estimators”, i.e. formulas for estimating some quantity based on limited data. Being biased or unbiased is a provable mathematical property of the estimator.
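That statistical distinction can be made concrete with a classic example outside rendering (a sketch, not renderer code): the sample variance with divisor n is a biased estimator of the true variance, while the divisor n−1 version is unbiased. Averaging many independent runs drives the unbiased one to the true value, while the biased one converges to a systematically wrong value:

```python
import random

random.seed(0)

def var_biased(xs):
    """Sample variance with divisor n. Its expected value is
    (n-1)/n * sigma^2, so the error never averages away."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_unbiased(xs):
    """Sample variance with divisor n-1 (Bessel's correction).
    Its expected value is exactly sigma^2."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# True variance of Uniform(0, 1) is 1/12 ~= 0.0833.
runs = [[random.random() for _ in range(4)] for _ in range(100000)]
mean_biased = sum(var_biased(xs) for xs in runs) / len(runs)
mean_unbiased = sum(var_unbiased(xs) for xs in runs) / len(runs)
```

No matter how many runs are averaged, `mean_biased` stays near 3/4 of the true variance; that persistent offset is exactly what “bias” means.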

The trouble, IMO, is that when people speak about photon mapping renderers they count only the raytracing phase as being part of the estimator when they talk about bias or lack thereof. Once you create a photon map with a finite number of photons, the error (due to having only finitely many photons) is “baked in”, and then raytracing an image using that photon map will converge not to the mathematically correct illumination in the scene, but to the illumination represented by the photon map, which has some error. That’s why they say it is biased - even infinitely many samples in the raytracing part will not eliminate the error from the photon mapping part.

It seems silly to me to neglect that you can also turn up the number of photons as well as the number of samples, and make it converge to the correct answer that way. In other words, consider the estimator as including both the photon mapping phase and the raytracing phase. Then it’s unbiased.

It’s true that in the traditional form of photon mapping you must store the photons all in a data structure, so memory consumption grows with the number of photons, so you will eventually run out of space, but IMO that is not a good reason to leave the photon mapping part out of the estimator. In any case, with the progressive photon mapping technique you do not need to store all the photons so it is really unbiased by any reasonable definition and even some unreasonable ones. :)

oh I see! So then, a pure raytracer (no photons) is biased because of the incorrect result, even if it ‘looks’ good. A path tracer will most of the time be unbiased if it can reach the lights (if they are not point lights), and a path tracer with direct light sampling is half raytracer and half path tracer, so it’s also biased, and the method you told me about - photon mapping for indirect illumination and raytracing for direct illumination - is also biased.

So in other words, only photon mapping with a radius < 1 per pixel is truly unbiased, and path tracing (without point lights) is also unbiased; the rest are all biased. So most raytracers’ claims to be unbiased (and real fast, no dots) are not correct, because they use approximations.

Am I getting this right?

Just because it has direct light raytracing doesn’t mean it’s biased. As long as you are careful not to double-count some lighting components or leave some lighting components out, it’s perfectly legitimate to use different evaluation methods for different sectors of illumination, e.g. direct light sampling for light sources (even area lights), and path tracing or photon mapping for indirect light. This is just a form of importance sampling - distributing your samples in a way that puts more samples in areas likely to be important. This is compatible with being unbiased as long as you are careful about the math and make sure everything is correctly weighted and each lighting component is included exactly once.

A raytracer with no indirect illumination would be biased because it leaves out a lighting component. Path tracers are typically designed to be unbiased in all cases, whether they use direct light sampling or not (and it’s better that they do, because it decreases variance and therefore improves quality for a given amount of render time!). Photon mapping with or without final gathering is unbiased if you increase the number of photons as well as the number of samples. It’s biased if you consider only a fixed photon count while increasing the number of samples in the raytracing (& final gathering) part. Most raytracers’ claims to be unbiased are probably on the level (excepting bugs or unintentional design flaws, and really extreme issues like floating point precision etc.)

BTW, I guess I should clarify that with regard to the photon search radius, I’m expecting it to get smaller as the number of photons increases, so that it goes to zero in the limit.

ok, I see what you mean. As long as the renderer can solve the ‘rendering equation’, it’s unbiased, no matter what approach it uses to get there.

As for the photon search radius, do you mean that I should start with a big radius, like 10x, and bring it down a bit on every iteration until it’s zero, and then stop? Like if I use something like this…

```
radius = 10    // for example
beta = 0.8
for each iteration {
    alpha = (radius * beta + beta) / (radius * beta + 1);
    radius *= alpha;
}
```

If you do the render with more photons you get lower radii. So if you did one render with 1 million photons and another with 2 million, most of the radii would be smaller in the second one. That’s all I meant.

The progressive photon mapping paper I linked earlier outlines an algorithm that *continuously* shoots more photons and tightens the radius within one render, but it operates with a bit of a different data model. Read the paper for details.
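From memory, the per-hit-point radius update in that paper looks roughly like the following sketch (α in (0,1) controls how strongly older photons are discounted; treat the details as an approximation and check the paper):

```python
def ppm_radius_update(radius, n_accum, m_new, alpha=0.7):
    """One progressive-photon-mapping pass for a single hit point (sketch).
    After a pass finds m_new photons inside the current radius, keep a
    fraction alpha of them and shrink the radius so the local photon
    density goes up:  R'^2 = R^2 * (N + alpha * M) / (N + M).
    Returns (new_radius, new_accumulated_count)."""
    if m_new == 0:
        return radius, n_accum  # nothing gathered this pass
    ratio = (n_accum + alpha * m_new) / (n_accum + m_new)
    return radius * ratio ** 0.5, n_accum + alpha * m_new
```

Each pass shrinks the radius a little; over many passes the radius tends toward zero while the accumulated photon count grows without bound, which is how PPM trades the fixed photon budget of classical photon mapping for a fixed set of hit points.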

@Reedbeta

Photon mapping with or without final gathering is unbiased if you increase the number of photons as well as the number of samples.

Aren’t you confusing unbiased with consistent, or did I misunderstand you?

Unbiased: the correct answer is computed on average;

Consistent: converges towards the correct solution when given more samples.

Well, it’s been a while since I posted here, but this is an interesting topic, so I thought I’d throw in my two cents:

Unbiased: the expected value of the estimator is what you want to compute.

Consistent: lim_{n->inf}(estimator) = what you want to compute (with probability one, and for whatever n is, e.g. the number of photons).

In other words: maybe the expected value of your estimator is off, but you can get arbitrarily close to what you want by increasing n.

For unbiased estimators, the only error you have is the variance of the estimator.

For biased estimators, however, there is an additional (usually systematic) source of error: the bias.

Mostly this has theoretical consequences (e.g. for calculating convergence rates), and often (e.g. in computer graphics) there are biased estimators that converge much, much, much more quickly than unbiased ones.

Just not to the exact result.

As long as you have some idea of what your bias is, it is safe to use biased estimators.

But there are also some practical consequences.

For example, you can take n independent runs of an unbiased estimator, average them, and expect the error to decrease.

With a biased estimator you can average as many runs as you like; if there is a systematic error in each run, it will stay in the final result.

As was pointed out earlier in this thread, this IS of practical consequence for photon mapping.

The problem with (classical, not progressive) photon mapping is that the bias vanishes only as the number of photons (in ONE PASS) approaches infinity.

But the number of photons is bounded relatively tightly, because you’ll run out of storage quickly.

So you can average 10 path tracing runs (or just let one run longer), but averaging 10 photon mapping runs with N photons each probably does not work (not that I’ve tried it).

This was the primary motivation for progressive photon mapping.

(I actually ran into this problem when I worked on cluster rendering. Let each machine compute the whole image and average? Beep, not so easy for biased algorithms.)

Also, I’d like to point out that _estimators_ are biased or unbiased.

You state what you want to compute, and then you come up with an estimator for that.

It is perfectly valid to have an estimator that only computes direct illumination or only diffuse interreflection.

It is still unbiased, as long as the expected value is the direct illumination or the diffuse interreflection.

Otherwise, as was already pointed out, nothing we do in computer graphics would be really unbiased, e.g. because of machine precision. (Not that it matters in practice.)

EDIT: Little add-on:

Since I used path tracing as an example for “unbiased” above, I should probably add that the variants of path tracing that are usually implemented technically aren’t really unbiased.

And not just because of floats and pseudorandom numbers.

Almost all of the tricks you have to use to make the original estimator remotely practical introduce bias.

Examples: limiting path length, clamping low-throughput paths, next event estimation, adaptive sampling (although I’m not 100% sure about that one; it’s probably unbiased if you do it right).

But the bias is in general considered negligible.

Excellent explanation, macnihilist. I learned the distinction between consistent and unbiased from this paper. I’ll quote the part about photon mapping from it, as it is intuitively easy to understand:

There are several sources of bias in photon mapping, but to see that it is biased simply consider what happens when a large number of images generated by a photon mapper are averaged. For example, if we have too few photons in the caustic map, caustics appear blurry due to interpolation. Averaging a large number of blurry caustics will not result in a sharp caustic – in other words, we don’t expect to get the correct answer on average. On the other hand, as we increase the number of photons in the photon map, the region used for each density estimate shrinks to a point. In the limit, a photon used to estimate illumination at a point will correspond to the end of a light subpath at that point. Therefore, as long as the photon map contains a proper distribution of paths, photon mapping is consistent.

(Keenan Crane, “Bias in Rendering”, http://multires.calt…keenan/bias.pdf).

And the above is also my answer to Reedbeta’s post.

By the way:

@macnihilist

Almost all tricks that you have to use to make the original estimator remotely practical introduce bias.

Examples are: limiting path length, clamping low throughput paths, next event estimation, adaptive sampling (although I’m not 100% sure with that one, it’s probably unbiased if you do it right)

I’ve never created a serious unbiased ray tracer, but I believe that your statement is not (or should not be) true. For example, one can use stochastic path lengths (Russian roulette) to limit the path length, and compensate for the change in the expected value with a constant factor (e.g. see here, page 116). There are also countless carefully constructed math tricks that maintain unbiasedness while improving sampling performance, like Metropolis Light Transport (Veach), though I have to admit I never understood that algorithm.

You are right that there are implementations that can really be called unbiased and practical at the same time. I just wanted to point out that a lot of seemingly harmless things can bias an estimator – if your nit-pick level is high enough.

Take Russian roulette for example, since you brought it up. The way I see it, it does not really limit path lengths, it just makes long paths less likely. So in the end you still have to cut them somewhere. Of course, you can make the probability of long paths so ridiculously low that this is merely a technicality. Still, if you take the biased-unbiased thing _really_ seriously, you can regard any practical implementation of path tracing with Russian roulette as biased. (But that is more my personal opinion, not a well established fact, as far as I know, so you people should probably take it with a grain of salt.)
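The compensation being discussed is easy to check numerically: if a path survives roulette with probability q and its contribution is divided by q, the expected value is unchanged - termination adds variance, not bias. A small sketch (not a renderer; the numbers are illustrative):

```python
import random

def rr_estimate(value, q):
    """Russian roulette with continuation probability q: with probability
    1 - q the path is killed (contributes 0); otherwise the contribution
    is divided by q.  E[result] = q * (value / q) = value, so the
    expected value is preserved."""
    if random.random() >= q:
        return 0.0
    return value / q

random.seed(2)
n = 200000
mean = sum(rr_estimate(3.0, 0.25) for _ in range(n)) / n  # ~ 3.0
```

Three quarters of the samples are zero, yet the average still converges to the true value; only the variance (the “dots”) goes up as q shrinks.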

Yes, if you want to be *really* nitpicky, you must use arbitrary-precision arithmetic to have an unbiased renderer - else you only converge to the answer rounded off to floating-point precision. :) Though I doubt anyone is really worried about this.

OK, it’s true that averaging a large number of photon-mapping renders still leaves error on the table, due to the photon search radius in each component image. On the other hand, after thinking about it more, I think progressive photon mapping suffers from an inverse issue: while it allows arbitrarily many photons to be accumulated and gradually decreases the search radius to zero, it has a fixed, finite number of “hit points” at which it accumulates those photons.

So:

classical photon mapping: finite photon count, unbounded sample count

progressive photon mapping: finite sample count, unbounded photon count

Therefore PPM is biased and inconsistent too, under the conventional definitions, since there is error “baked in” to the chosen set of hit points. Now, if you averaged a large number of renders using PPM, you’d get different hit points for each, reducing the error associated with them - but then you must either cut off each render at a finite number of photons, reintroducing bias, or you return to having an unbounded amount of memory to store an unbounded number of hitpoints onto which you accumulate an unbounded number of photons.

Conclusion: truly unbiased rendering is not possible without an unbounded amount of memory. If you ignore numerical precision limits, then path-tracing and friends do not need unbounded memory, but photon-mapping algorithms still do.

(At least, the photon-mapping algorithms *currently known* still do. I can’t rule out the possibility that there is some clever way of recycling hitpoints in the PPM algorithm that would solve the issue…)

@Reedbeta

Therefore PPM is biased and inconsistent too, under the conventional definitions, since there is error “baked in” to the chosen set of hit points.

I haven’t looked at PPM to an extent that I’d _really_ understand it, but I’m pretty sure it is biased and consistent. Maybe it’s just per definition (“I only want to estimate the radiance leaving that point in that direction” instead of “I want to estimate the averaged (ir)radiance that reaches a virtual sensor in this camera”); I don’t know for sure atm. Also, at least stochastic PPM should be able to overcome the fixed shading samples and stay consistent.

@macnihilist

Maybe it’s just per definition (“I only want to estimate the radiance leaving that point in that direction” instead of “I want to estimate the averaged (ir)radiance that reaches a virtual sensor in this camera”)

Yes, I think that’s the case. PPM correctly estimates incoming radiance at a set of points, but doesn’t support integrating incoming radiance over a domain - which isn’t just for “distribution ray tracing” stuff like defocus, motion blur, and glossy reflections; it’s also needed for just plain old antialiasing if you want to do it properly (with stochastic subpixel sampling and a good reconstruction filter).

Anyway, the SPPM technique is very nice and seems to solve this problem. I didn’t follow all the details of their derivation why it works, but the test images are quite nice.


When gathering all the photons within a given radius, do we use NumPhotons/(PI*radius*radius) or do we use 1/(PI*radius*radius)?