# Photon gathering

56 replies to this topic

### #1 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 01:04 AM

### #2 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 12 March 2012 - 01:26 AM

Typically you're trying to get an estimate of the irradiance - the light power per unit area. Each photon represents a chunk of power, so you'd want to add up the power carried by all the photons, and divide by the area.
reedbeta.com - developer blog, OpenGL demos, and other projects
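The estimate above can be sketched in a few lines: sum the flux of the gathered photons and divide by the area of the gather disc. This is a minimal illustration, not the full radiance estimate; the function name and the scalar (rather than RGB) flux are assumptions for brevity.

```python
import math

def estimate_irradiance(photon_fluxes, search_radius):
    # Irradiance = total power landing on the region / area of the region.
    # The gather region is treated as a flat disc of radius search_radius,
    # so its area is pi * r^2.
    total_flux = sum(photon_fluxes)
    area = math.pi * search_radius ** 2
    return total_flux / area

# e.g. 100 photons each carrying 0.002 W, gathered in a 0.1-unit radius
print(estimate_irradiance([0.002] * 100, 0.1))
```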

### #3 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 01:57 AM

Reedbeta, on 12 March 2012 - 01:26 AM, said:

Typically you're trying to get an estimate of the irradiance - the light power per unit area. Each photon represents a chunk of power, so you'd want to add up the power carried by all the photons, and divide by the area.

### #4 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 12 March 2012 - 03:25 AM

Yes, that sounds right.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #5 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 04:09 PM

Thanks Reedbeta.

One more thing!

Do I also have to save the "light" direction and the normal for each photon, then when gathering them, take into consideration the normal, the "light" direction, and the distance? If so, does this look right?...

for each photon in radius {
    a = dot(photon.ray.dir, photon.object.normal);
    b = dot(photon.ray.dir, surfaceNormal);
    c = a / b;
    flux += photon.flux * c * d;   // d = distance attenuation?
}

### #6 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 12 March 2012 - 04:52 PM

Each photon, when it strikes a surface, should have a record of which direction it came from. When you gather the photons, you must push them through the surface's BRDF to see how much output light they contribute to the direction you're gathering from.

However, the N . L factor is NOT in the BRDF and should not be included. It will be taken care of by the geometry - fewer photons per unit area will naturally land on the surface when it is at a greater angle to the light source. Likewise, you do not need to attenuate the photons based on distance from the light source. That is also taken care of by the geometry, since fewer photons per unit area will naturally land on an object far from the light.

You should pick up Henrik Wann Jensen's book on photon mapping if you're interested in this - it explains everything in much greater detail.
reedbeta.com - developer blog, OpenGL demos, and other projects
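Pushing the photons through a Lambertian BRDF (with no extra N·L or distance factor, per the explanation above) might look like the sketch below. The photon record layout, scalar flux, and function names are illustrative assumptions; a real renderer would store RGB flux and gather via a kd-tree.

```python
import math

def gather_radiance(photons, albedo, search_radius):
    # For a Lambertian (perfectly diffuse) surface the BRDF is a constant,
    # albedo / pi, so every gathered photon contributes the same fraction of
    # its flux regardless of its incoming direction. No N.L and no distance
    # falloff are applied: the photon density already accounts for both.
    brdf = albedo / math.pi
    total_flux = sum(p["flux"] for p in photons)
    area = math.pi * search_radius ** 2
    return brdf * total_flux / area

photons = [{"flux": 0.001} for _ in range(50)]
print(gather_radiance(photons, albedo=0.8, search_radius=0.05))
```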

### #7 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 05:22 PM

Thanks, I've looked at his paper but I'm still a bit confused, and I'm not that good at maths with all those weird symbols!

What I meant by distance was the distance of the photon within the radius. For example, when I shoot a ray from the camera into the scene and land on a surface, I take all the photons within a radius of that hit point. So I was wondering whether the photons within this radius need to be attenuated the farther away they are from the hit point (not from the light source)?

### #8 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 06:18 PM

Here is the problem I'm facing. If we pull all the photons within a radius, how do we handle the photons with the same normal as the hit point but under the hit point? Here is a screenshot of what I mean. The green dot is the point to query, so it's at the center of the sphere, and all the black dots are the photons within the sphere radius. As you see, the photons on the blue surface should not contribute to the green point on the red surface, correct? And what about those with opposite normals, facing the hit point? Should they contribute?

### #9 }:+()___ (Smile)

Member

• Members
• 169 posts

Posted 12 March 2012 - 08:15 PM

There is no easy way to distinguish photons on nearby thin layers (usually you attenuate by N·V, thus skipping photons with the opposite orientation, N·V < 0). Photon mapping is an approximate technique, and you have to deal with approximation errors. A smaller sphere radius and a larger photon count give a more correct image. You can use the M nearest photons instead of a fixed sphere radius, or combine the approaches for a better-looking picture.
Sorry my broken english!
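The orientation test described above can be sketched as a simple filter that rejects photons whose stored normal disagrees with the gather point's normal. The function name and the 0.5 cutoff are illustrative assumptions; a strict N·V test would use a threshold of 0.

```python
def keep_photon(photon_normal, surface_normal, threshold=0.5):
    # Discard photons from back-facing or differently-oriented nearby
    # surfaces: keep a photon only if the dot product of its stored
    # surface normal with the gather point's normal exceeds the threshold.
    dot = sum(a * b for a, b in zip(photon_normal, surface_normal))
    return dot > threshold

print(keep_photon((0, 1, 0), (0, 1, 0)))    # same orientation: kept
print(keep_photon((0, -1, 0), (0, 1, 0)))   # opposite orientation: skipped
```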

### #10 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 08:45 PM

}:+()___ (Smile), on 12 March 2012 - 08:15 PM, said:

There is no easy way to distinguish photons on nearby thin layers (usually you attenuate by N·V, thus skipping photons with the opposite orientation, N·V < 0). Photon mapping is an approximate technique, and you have to deal with approximation errors. A smaller sphere radius and a larger photon count give a more correct image. You can use the M nearest photons instead of a fixed sphere radius, or combine the approaches for a better-looking picture.

oh! I see what you mean. Thanks!

### #11 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 12 March 2012 - 09:26 PM

I think people usually use an ellipsoid rather than a sphere, too, so you can flatten the ellipsoid along the surface normal to try not to get so many photons from other surfaces.

As for attenuating photons by distance to the center of the search, that would help you get smoother results, so it could indeed be a good idea. I don't remember off the top of my head whether HWJ does this sort of thing or not. Anyway, if you do attenuate them, just make sure you normalize by the sum of the weights, as usual with any weighted average. You could use any attenuation function you choose, whatever looks best.
reedbeta.com - developer blog, OpenGL demos, and other projects
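A distance-weighted gather normalized by the sum of the weights might look like the sketch below. The linear "cone" falloff is just one possible attenuation function, and the photon record layout is an illustrative assumption.

```python
def weighted_flux(photons, search_radius):
    # Photons near the centre of the gather region count more; dividing by
    # the total weight keeps the overall brightness unchanged, exactly as
    # in any weighted average.
    total_flux = 0.0
    total_weight = 0.0
    for p in photons:
        # Linear falloff from 1 at the centre to 0 at the search radius
        # ("dist" is the photon's distance to the gather point).
        w = max(0.0, 1.0 - p["dist"] / search_radius)
        total_flux += p["flux"] * w
        total_weight += w
    return total_flux / total_weight if total_weight > 0.0 else 0.0

photons = [{"flux": 2.0, "dist": 0.0}, {"flux": 2.0, "dist": 0.05}]
print(weighted_flux(photons, 0.1))
```

Note that with equal per-photon flux the result equals that flux, confirming that the normalization leaves the brightness unbiased.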

### #12 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 09:42 PM

Reedbeta, on 12 March 2012 - 09:26 PM, said:

I think people usually use an ellipsoid rather than a sphere, too, so you can flatten the ellipsoid along the surface normal to try not to get so many photons from other surfaces.

As for attenuating photons by distance to the center of the search, that would help you get smoother results, so it could indeed be a good idea. I don't remember off the top of my head whether HWJ does this sort of thing or not. Anyway, if you do attenuate them, just make sure you normalize by the sum of the weights, as usual with any weighted average. You could use any attenuation function you choose, whatever looks best.

Good idea about the ellipsoid! thanks!

What do you mean by "normalize by the sum of the weights"?

### #13 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 12 March 2012 - 09:44 PM

I mean that you should keep track of the weights you apply to each photon and divide the total flux by the total weight, so that the unequal weights don't cause you to make the whole thing overall brighter or darker.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #14 Alienizer

Member

• Members
• 435 posts

Posted 12 March 2012 - 10:10 PM

Reedbeta, on 12 March 2012 - 09:44 PM, said:

I mean that you should keep track of the weights you apply to each photon and divide the total flux by the total weight, so that the unequal weights don't cause you to make the whole thing overall brighter or darker.

oh ok thanks

### #15 roel

Senior Member

• Members
• 698 posts

Posted 13 March 2012 - 10:04 AM

What I did, if I remember correctly, was to devise a function that measured how far photons were from the plane of the hit point (i.e. your red plane). You can do that with a dot product or so. Farther away => less weight; some parameter controlled it. And indeed, if you apply such ad hoc hacks, do something like:
flux = 0
totalweight = 0
for each photon
{
    weight = computeWeight(photon);
    flux += computeFlux(photon) * weight;
    totalweight += weight;
}
flux /= totalweight


### #16 }:+()___ (Smile)

Member

• Members
• 169 posts

Posted 13 March 2012 - 01:03 PM

As far as I understand, there is no sense in smoothing the photon field; it only worsens performance. The photon field is not designed for direct viewing, and the smoothing will be done at the gathering stage.
Sorry my broken english!

### #17 Alienizer

Member

• Members
• 435 posts

Posted 13 March 2012 - 09:25 PM

}:+()___ (Smile), on 13 March 2012 - 01:03 PM, said:

As far as I understand, there is no sense in smoothing the photon field; it only worsens performance. The photon field is not designed for direct viewing, and the smoothing will be done at the gathering stage.

What do you suggest? I'm using the photon map for direct viewing; I don't use ray tracing for direct illumination, and I don't use separate caustic/indirect illumination photon maps.

### #18 }:+()___ (Smile)

Member

• Members
• 169 posts

Posted 14 March 2012 - 06:02 PM

Well... photon mapping is a technique for indirect illumination, so calculate the usual 1-bounce direct illumination and then add the indirect contribution. The photon map is sampled in the "final gathering" stage at the second-bounce position, and the result will be good even with a small number of photons. In theory it's possible to throw many photons and visualize the photon map directly, but that's not photon mapping; it's forward ray tracing with a 3D blur (much less effective and with more noticeable errors).
Sorry my broken english!

### #19 Alienizer

Member

• Members
• 435 posts

Posted 14 March 2012 - 11:39 PM

I see what you mean. But one problem I have is "final gathering"; I don't know how to make it work. What I do now is take all the photons within a radius to get an approximation, but it's very grainy. I have to let it run for hours to look better! Or maybe I have the logic for "final gathering" all wrong?

### #20 Reedbeta

DevMaster Staff

• 5311 posts
• Location: Santa Clara, CA

Posted 15 March 2012 - 02:01 AM

Final gathering refers to the process of using raytracing to compute the final image. One typically uses raytracing to do direct lighting and photon mapping for indirect lighting. That is to say, you trace from the eye into the scene and at each hit point, calculate indirect lighting by spawning a large number of rays in all directions (distributed/weighted according to the BRDF). At each of these secondary intersections you use the photon map to estimate light reflected back toward the primary hit point. So the photon map is not seen directly, only used on the second bounce back from the eye. Effectively any noise in the photon map is averaged out over the whole hemisphere of the primary ray hit point, quite a lot of photons.
reedbeta.com - developer blog, OpenGL demos, and other projects
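That final-gathering loop can be sketched as below. The `trace` and `radiance_estimate` callbacks are hypothetical stand-ins for the renderer's ray tracer and photon-map lookup, and the hemisphere sampler works in a local frame for brevity; everything here is an illustrative assumption, not a full implementation.

```python
import math
import random

def cosine_sample_hemisphere():
    # Malley's method: sample a point on the unit disc, then project it up
    # to the hemisphere. For brevity this works in a local frame where the
    # surface normal is (0, 0, 1).
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def final_gather(hit_point, num_rays, trace, radiance_estimate):
    # Spawn many cosine-weighted rays from the primary hit point; at each
    # secondary hit, query the photon map (radiance_estimate) instead of
    # viewing the map directly, so its noise averages out over the
    # hemisphere.
    total = 0.0
    for _ in range(num_rays):
        direction = cosine_sample_hemisphere()
        secondary_hit = trace(hit_point, direction)
        if secondary_hit is not None:
            # With cosine-weighted sampling of a diffuse surface, the
            # cosine term and the sampling pdf cancel, leaving a plain
            # average of the secondary-hit radiances.
            total += radiance_estimate(secondary_hit)
    return total / num_rays
```

A quick check with stub callbacks that always hit and always return a constant radiance shows the estimator returning that constant, as expected of an unbiased average.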
