0
109 Apr 06, 2012 at 03:39

I’ve read many things, all different and kind of confusing. When light hits a surface, it is either reflected in part or in whole, or absorbed, so that the light energy is preserved and distributed without anything being added or removed (conservation of energy). But is the following correct and true?… if frand(0..1) < material.color.max(0..1) then reflect else absorb, and if reflect, then reflected.light = incoming.light * material.color?

#### 61 Replies

0
167 Apr 06, 2012 at 04:00

The part that’s not reflected is absorbed. (Or, in the case of a transparent surface, the part that’s not reflected is transmitted.)

So if you calculate reflected light = incoming * material color, then that’s the reflected light, period. No need to do any randomization type of stuff.

This tends to create photons with a wide distribution of values. Some photons come directly from a light source and will be very bright, while others have been bounced a few times and are dim. This is not wrong - you’ll still get a correct rendering result - but the trouble is that it tends to worsen the noise. You’ll need more photons and/or more rendering time to get a high-quality image.

So in photon mapping, people often take some steps to try to keep all the photons at about the same power. One way to do this is, instead of multiplying each photon by the material color when it gets reflected, you randomly either absorb the photon or reflect it. If you forget about colors and think about monochrome rendering for a minute, you can see that if you have a surface that reflects 20% of the light, that it’s equivalent to either reflect all the photons and multiply each one’s power by 0.2, or reflect only 20% of the photons but keep the same power (the rest are absorbed, i.e. thrown away).

In full-color rendering you have three different color channels (or more), but you can only pick one probability to reflect the photon, so people usually pick the max of the three colors or the average of them. Then you have to multiply the photon by the material color divided by whatever probability you used. That ensures the photon gets colored properly while the total amount of reflected light stays correct.
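
A minimal Python sketch of this reflect-or-absorb scheme (the name `reflect_photon` is made up, and `u` stands in for a frand(0..1) draw):

```python
def reflect_photon(photon_rgb, material_rgb, u):
    """Russian-roulette reflection: survive with probability p = max(material),
    and scale the surviving photon by material/p, so the expected reflected
    light still equals photon * material."""
    p = max(material_rgb)
    if u < p:  # reflect: pick up material color, compensated by 1/p
        return tuple(c * m / p for c, m in zip(photon_rgb, material_rgb))
    return None  # absorbed (photon thrown away)
```

For a white photon and material (0.8, 0.4, 0.2), the photon reflects 80% of the time carrying (1.0, 0.5, 0.25); averaged over many photons that is exactly (0.8, 0.4, 0.2), matching the deterministic multiply.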

0
109 Apr 06, 2012 at 13:46

@Reedbeta

Then you have to multiply the photon by material color divided by whatever probability you used

What do you mean by “divided by whatever probability you used”?

0
167 Apr 06, 2012 at 17:15

Um - the arithmetic operation of division. :) photon power * material color / probability.

0
109 Apr 06, 2012 at 18:16

:lol: Yes, but what probability? I mean, if I absorb it when frand(0..1) > color.max, and bounce it when not, isn’t that a probability of its own?

0
167 Apr 06, 2012 at 19:26

I’m talking about the probability you’d use to choose whether to reflect it. As in “you can only pick one probability to reflect the photon, so people usually pick the max of the three colors or the average of them. Then you have to multiply the photon by material color divided by whatever probability you used”.

0
109 Apr 06, 2012 at 19:40

ok, so if I have a probability of frand(0..1) say 0.8 and 0.8 > color.max then the reflectedColor = color * material / 0.8 ?

0
167 Apr 06, 2012 at 20:07

If you meant “reflect if frand(0, 1) < 0.8”, then yes.

0
109 Apr 06, 2012 at 21:22

oh! Do you mean that 0.8 is the reflection factor? In this case 80%, as in a slightly smoked mirror? If not, then I’m still confused! I thought that color.max is used with frand to decide whether it is to be reflected or not!

0
167 Apr 06, 2012 at 21:48

The material color is the reflection factor; they’re the same thing. A material looks like a certain color because it reflects different fractions of R, G, and B light.

But you don’t have separate R, G, and B photons (well, you could, but this isn’t the way it’s typically done); you just have a photon, and you must decide whether to reflect or absorb it. You must somehow condense three numbers, the RGB material color, down to one number, the probability for reflecting or absorbing.

A reasonable way to do this is to choose the max of the RGB components (although personally I would probably choose the average weighted by eye sensitivities to R, G, and B, rather than the max).

But when you reflect the photon it must pick up the material color. So you multiply the photon color by the material color. But now you’ve incorporated the material color twice - once to calculate the probability to reflect a photon, and once to modify the photon color. That’s wrong, so you must compensate. You can do this by dividing the photon color by the probability. That way the photons still pick up the material color but the total amount of reflected light is correct.

In other words, you’re splitting the material color into an overall intensity - which tells you the probability to reflect a photon - and the actual color part, which tells you how to change the color of a photon when it reflects. These two when multiplied together must give the original material color, so it follows that the “color part” = material color / probability.

0
109 Apr 06, 2012 at 22:09

ok, that make sense. So I have a function…

float Luminance(const vec3 clr) {
    return 0.212671f * clr.r + 0.715160f * clr.g + 0.072169f * clr.b;
}

So the probability of reflection is…

bool p = frand(0..1) < Luminance(materialColor);

if (p) {
    reflectedColor = photon * materialColor / Luminance(materialColor);
}

Is that right?

0
167 Apr 07, 2012 at 00:40

Yep, that looks right.

0
109 Apr 07, 2012 at 01:05

Looking on the net for renders of the Cornell box (red and blue walls), I don’t see any with strong color bleeding. Is that correct? I mean, if I were to paint my bedroom with a red and a blue wall and turn on the light, wouldn’t the white walls appear much more red and blue than what we see in renders? Maybe I ought to try that :lol:

0
167 Apr 07, 2012 at 02:23

Color bleeding is kinda the whole point of the Cornell box. :) But color bleeding isn’t always very strong; it depends on how saturated and intense the colors are.

0
109 Apr 07, 2012 at 02:28

True! But have you ever seen a photo of a room painted (red/blue) just like the Cornell box, to see how the colors bleed? I know they made a small model, but it’s not red/blue, and it’s a model, not a real big room like a bedroom or bigger.

0
167 Apr 07, 2012 at 02:31

No, I haven’t seen such a photo. But the size doesn’t matter as long as everything is to scale, including the light source. And the colors don’t matter either as long as the side walls are two different colors.

0
109 Apr 07, 2012 at 02:46

ok, thanks Reedbeta. But let me bug you some more :wub:

I’ve been trying to make my lights look nice and I’m having a hard time. What I mean is, take the Cornell box for example: the light on the ceiling is square. When I render that, the light is not anti-aliased because it is so bright. So what I did is store the photons coming out of the light right where they leave the light. Looks ok on large lights, but again, no anti-aliasing on small lights, because the area is so small and packed with photons. So I dumped that and instead I read the light color at trace time, but the light is almost black! So I don’t know how to handle this. And for glass, for example, I like to have the light shining on it, but it’s either very faint or so bright that it looks stupid! Any idea? You are the genius :)

0
167 Apr 07, 2012 at 03:25

I’m not sure I understand what the problem is. Of course, if you are raytracing and you hit a light source, it should return the color of the light. The light should get antialiased the same way everything else does, i.e. by shooting more rays per pixel. I didn’t get what you were trying to do with storing the emitted photons at the light - that doesn’t make sense because the photon map represents incoming light at a surface, not outgoing (reflected or emitted) light.

One thing to check is that the total photon power is scaled correctly. If you set the radiance of the light - that is, the color that rays return when they hit it - then you can calculate the total power as light color * pi * the area of the light’s geometry. (The pi is to convert from radiance to irradiance, i.e. it’s the integral of cos(theta) over the hemisphere.) This total power should equal the total power of all photons emitted from that light.

0
109 Apr 07, 2012 at 04:49

I was trying something new but, like you said, it makes no sense. So I did what you said, and it works perfectly for one light only. But when I have more than one light, each with a different power and size, they come out dark gray!?

0
167 Apr 07, 2012 at 05:32

When you have multiple light sources, are you keeping the photon power scaled correctly, so the total power of all the photons emitted from each light equals light color * pi * area of that light? You’ll have to increase the number of photons when you add more lights, or else increase the power on each photon, if you keep the same total number of photons.

0
109 Apr 07, 2012 at 14:17

Well, that’s the part I’m not sure about. What I do is loop 100 times, and for each loop, I loop through all the lights and shoot a photon = lightColor * lightPower, that’s all. So I’m obviously doing this all wrong?

0
167 Apr 07, 2012 at 23:32

Try this: for each light, calculate lightPower = lightColor * pi * area. (lightPower is an RGB color, in case it wasn’t clear.) Then for each light, calculate photonPower = lightPower / numPhotonsPerLight, then shoot numPhotonsPerLight photons with that power. You can decide what numPhotonsPerLight should be.
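
As a throwaway Python sketch of that recipe (`photon_power` is an illustrative name, not from any library):

```python
import math

def photon_power(light_color, area, num_photons):
    """Total emitted power per channel is radiance * pi * area;
    split it evenly over the photons shot from this light."""
    total = [c * math.pi * area for c in light_color]
    return [t / num_photons for t in total]
```

For a white light of area 2, shooting 100 photons gives each photon power 2*pi/100 per channel, and the photon powers sum back to the light’s total power.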

This should work correctly, but if you have a variety of lights of different powers in your scene, it will produce photons with a variety of different powers. As mentioned earlier in the discussion about reflection, having photons with a variety of different powers increases the noise in the rendered result and requires you to shoot more photons or render for longer to get a smooth result. So, it’s better to distribute photons proportional to the power of the lights, so that high-power lights shoot more photons than low-power ones; then the photons can all be (roughly) the same power. This is very similar to the reflection probability method we just discussed in this thread, as well as the triangle area probability we talked about in another thread, so you should be able to figure it out if you feel like taking this on… :)
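
The proportional distribution can be sketched like this (assuming scalar per-light powers; `allocate_photons` is a made-up helper):

```python
def allocate_photons(light_powers, total_photons):
    """Give each light a photon budget proportional to its power,
    so every photon ends up carrying (roughly) the same power."""
    total = sum(light_powers)
    counts = [round(total_photons * p / total) for p in light_powers]
    per_photon_power = [p / n for p, n in zip(light_powers, counts)]
    return counts, per_photon_power
```

With powers 10, 35 and 60 and a budget of 105 photons, the lights get 10, 35 and 60 photons respectively, each photon carrying power 1.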

0
109 Apr 08, 2012 at 00:13

I see what you mean! But what I’m doing is shooting photons until I fill up the memory storing them, so I never know how many photons a light will shoot until it’s all done! So is that also the wrong way to go about it? Or can I still do it that way using your second suggestion?

So if I have 3 lights of different powers, say 10, 35 and 60, the idea is to shoot 10 photons for the first light, 35 for the next and 60 for the last, each with identical ‘power’? As opposed to shooting the same number of photons for all lights and adjusting the photon power for each light?

0
167 Apr 08, 2012 at 02:19

Yeah, that’s the idea. You can still shoot an unknown number of photons, if for each photon you randomly pick a light to emit it from, with probability proportional to power.
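
That random pick can be sketched as a CDF walk (hypothetical helper; `u` stands in for a frand(0..1) draw so the behavior is easy to check):

```python
def pick_light(powers, u):
    """Pick a light index with probability proportional to power,
    given a uniform random u in [0, 1)."""
    total = sum(powers)
    acc = 0.0
    for i, p in enumerate(powers):
        acc += p / total
        if u < acc:
            return i
    return len(powers) - 1  # guard against float round-off
```

Each emitted photon then picks its light this way, so the expected photon counts match the lights’ powers without fixing the total count in advance.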

0
109 Apr 08, 2012 at 02:44

You’re a genius, works perfect thank you :D

When I trace after shooting all the photons, if I hit a light, how does this work now with lights of different power and area?? Do I simply return lightColor * PI * lightArea?

0
167 Apr 08, 2012 at 04:46

No, lightColor is the color to return from a ray that hits the light. That’s why we calculated lightColor * pi * area, because that’s the formula to transform between radiance (what a ray carries) and power (what photons carry).

0
109 Apr 08, 2012 at 16:34

ok, is ‘area’ the light area or the search radius?

0
167 Apr 08, 2012 at 21:42

The surface area of the light’s geometry.

0
109 Apr 09, 2012 at 03:44

You said no when I asked you “Do I simply return lightColor * PI * lightArea?” :blink: and you said to use “lightColor * pi * area” :wacko: where area is the area of the light geometry.

I’m all confused now Reedbeta :D :D :D

Hope you have a great Easter, and thanks for your patience with me!

0
167 Apr 09, 2012 at 05:59

Rays that hit a light return lightColor.

The photons from the light must add up to lightColor * pi * the surface area of the light’s geometry.

0
109 Apr 09, 2012 at 13:52

But this would only apply if I know how many photons per light I shoot, as you described in #22?

In your reply #24, that’s how I do it. But at trace time, if I hit a light, do I also return the light color?

0
167 Apr 09, 2012 at 16:42

I’m not sure where the confusion is. It’s really simple: when raytracing, and the ray hits a light source, you just return the light color. When emitting photons, you must ensure that the total of all photons emitted from a light equals (at least approximately) the light’s total power, which as mentioned is lightColor * pi * lightArea. It doesn’t matter how you shoot your photons as long as you do something to ensure this is true. If you don’t know ahead of time how many photons you will shoot, then you must incorporate the number of photons in some other way, or the photon map won’t match the direct lighting.

0
109 Apr 09, 2012 at 17:40

oh I see! So lightColor * pi * lightArea is not for each photon, but for all of them (for that one light source), so each photon shot out is (lightColor * pi * lightArea) / numPhotons, which is what you already said in #30.

When you trace back, rays that hit a light return lightColor, but aren’t they all going to look the same intensity? Say, if one light source shoots 50 photons and the other one shoots 100 photons, it’s like a 50w and a 100w, so shouldn’t the 50w be reduced so that it doesn’t look as bright as the 100w?

0
167 Apr 09, 2012 at 17:51

If one light source is 50 watts and another is 100 watts, then

a. They might be the same color (equal intensity) but the first one has 1/2 the surface area of the other
b. They might be the same area but the first one has 1/2 the intensity of the other
c. Some other possibility (e.g. the first one is 2x the area and 1/4 the intensity, or whatever).

Power is a combination of intensity and area, but when you hit a light source with a ray, it cares only about intensity, not about area. Photons care about power, so they care about both intensity and area.
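
The a/b/c cases above can be checked with one line of arithmetic; `radiance` here is just the inverse of the power formula from earlier in the thread (power = color * pi * area), used for illustration:

```python
import math

def radiance(power, area):
    """What a ray sees when it hits the light: power / (pi * area)."""
    return power / (math.pi * area)
```

Case (a): a 50 W light with half the area of a 100 W light has the same radiance, so rays see them as equally bright. Case (b): same area, so the 50 W light has half the radiance.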

0
109 Apr 09, 2012 at 18:06

Makes sense, I understand that. So if the brightest light in my scene is 300w, then when a ray hits the 100w light, it returns lightColor*100/300?

0
167 Apr 09, 2012 at 18:12

No…like I’ve been saying all along, the color when a ray hits it is JUST lightColor. NOTHING ELSE.

0
109 Apr 09, 2012 at 19:38

ok, so then how can one light bulb look less bright than another if they all return lightColor???

0
101 Apr 09, 2012 at 19:45

Because each light has a lightColor…?

0
109 Apr 10, 2012 at 01:55

yes, they all have the same color vec3(1,1,1), so at trace time if we return that for each light, they will all look the same intensity, right?

So if you’re saying that a 100w should be vec3(1,1,1) and the 50w should be vec3(0.5,0.5,0.5), then it’s not going to work, because we would be shooting 100 photons at vec3(1,1,1) and 50 photons at vec3(0.5,0.5,0.5) when we should shoot all photons at vec3(1,1,1).

0
101 Apr 10, 2012 at 07:38

Just some examples. If a light has an “intensity” lightColor of vec3(1,1,1), another light with the same area but a lightColor of vec3(.5,.5,.5) should emit half the number of photons. Yet another light, with a lightColor of vec3(.2,.2,.2) and half the area, should emit a tenth of the number of photons the first light does. And a last light, with a lightColor of vec3(.25,.25,.25) and four times the area of the first light, should emit the same number of photons as the first light. (Select the hidden part of the answer if my trick works for you, but think first.) (And excuse me for any miscalculations.)
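
The four examples above can be verified numerically: the photon budget is proportional to intensity × area (the constant pi cancels in ratios). A throwaway sketch with a made-up helper name:

```python
def relative_photon_count(intensity_gray, area):
    """Photon budget is proportional to (scalar) emitted power,
    i.e. intensity * area; pi cancels when comparing lights."""
    return intensity_gray * area
```

Light 2 (intensity 0.5, same area) gets half the budget of light 1; light 3 (0.2, half area) gets a tenth; light 4 (0.25, four times the area) gets the same budget.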

0
109 Apr 10, 2012 at 14:30

“the same number of” is the answer to your hidden part :D I understand all that, it’s all about proportion.

I understand about shooting photons proportional to light power. But the color is what I’m not sure about.

Say, Light1 color(1,1,1) power 100, and Light2 color(1,1,1) power 50.

Light1 shoot 100 photons of color(1,1,1)
Light2 shoot 50 photons of color(1,1,1)
is that correct so far?

Now at trace time, when a ray hits a light, it’s not correct to return color(1,1,1) for both lights, since one is 50% dimmer than the other.

So do I have to scale the color of Light2 to (0.5,0.5,0.5) and shoot the same amount of photons as Light1, and at trace time return color(0.5,0.5,0.5) for Light2 and color(1,1,1) for Light1?

0
101 Apr 10, 2012 at 21:27

Lights cannot have both color and power independently. They must have either color and area, or power (3 components) and area:

power = color * area * pi

Lights shoot photons proportional to their power and return a color equal to their color.

Light1 color(1,1,1) power 100 must have double the surface area of Light2 color(1,1,1) power 50. It is right to return color(1,1,1) for both lights, and since the second light is smaller it will look dimmer than the first.

0
109 Apr 10, 2012 at 22:32

@}:+()___ (Smile)

Lights cannot have color and power. It must be either color and area or power (3 components) and area

oh! I didn’t know that!! Now that makes a big difference.
@}:+()___ (Smile)

It is right to return color(1,1,1) for both lights and since the second light is smaller it’ll look dimmer than first.

How can this be? If we raytrace back and hit a light, (1,1,1) is returned, so both lights will be just as bright, except one is smaller than the other, which makes no difference on brightness as far as pixels go???

0
101 Apr 11, 2012 at 21:18

@Alienizer

How can this be? if we raytrace back and hit a light, (1,1,1) is returned, so both lights will be as bright exept one is smaller then the other which makes no difference on brightness as far as pixels goes ???

This is correct. You are confusing the brightness of the lamp surface with how much power it lights up the room with. A large window will light a room more than a tiny one, even though the brightness outside is the same. How much the room is lit depends on both the brightness of the light source and its area. Multiply them and you get the power.

0
109 Apr 11, 2012 at 22:12

ok you pinned it, this is where I think I’m getting confused: brightness and power. So if I have a light of power 100w and an area of 2, then the brightness is 50, and I shoot 100 photons for that light. So where do I use brightness? At trace time? Return brightness * lightColor if the ray hits the light?

0
109 Apr 11, 2012 at 22:45

Here’s what I do and it seems to look ok…

I have 3 lights set to color(1,1,1) , power 100w, 66w and 33w.

for each light…
shoot power photons (shoot 100 photons for first light, 66 for second and 33 for third light)

at trace time…
if ray hit light return color * power / lightArea

Is that the right way to do it?

0
101 Apr 12, 2012 at 11:27

@Alienizer

ok you pined it, this is where I think I’m getting confused. brightness and power. So if I have a light of power 100w and an area of 2, then the brightness is 50, and I shoot 100 photons for that light. So where do I use brightness at? At trace time? return brightness * lightColor if ray hit the light?

It really doesn’t make sense for a lamp to have a “color”. What would a grey light mean? A tiny LED looks white in a dark room, but so does the sun, even though it is orders of magnitude brighter.

No. Don’t do it. It might seem intuitive to you, but it messes up your thinking. You simply should not think in terms of white = (1,1,1) until you are actually showing the image on screen. That happens AFTER you run it through an exposure function and gamma correction. Before that, all light is simply a value between zero and infinity.

Instead, store EITHER power OR brightness. Either way, you need to store them in RGB. If you count the power in watts, you would get (33, 33, 33) for a completely white 100 W lamp. You can calculate the other value from either one if you know the area. (Point lights would need to store the power, since they have no area. But point lights are a hack anyway.)
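
A tiny sketch of the RGB-wattage bookkeeping described here (`power_rgb` is a hypothetical helper, assuming the total wattage is split across channels in proportion to the color):

```python
def power_rgb(total_watts, color):
    """Split a lamp's total wattage across RGB in proportion to its color;
    a pure-white 100 W lamp comes out as roughly (33, 33, 33)."""
    s = sum(color)
    return [total_watts * c / s for c in color]
```

The channels always sum back to the lamp’s total wattage, whatever the color.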

0
109 Apr 13, 2012 at 04:20

ok I get it now, works perfect :)

Sorry guys for being so hard-headed at understanding, but I’m not a genius like you all are, and I like to learn. You’ve all been a great help and I really, really appreciate the effort you put into making me understand.

0
109 Apr 19, 2012 at 01:44

ok, me again, hard headed at understanding how it works for raytracing this time, not photons.

Say I have a light of color/power vec3(60,60,50) over a 2x4 rectangle (area=8). Here is what I do…

for each screen pixel (set all black first)…

if hit something other than the light {
    dist = get distance to light
    return vec3(60,60,50) / (dist*dist);
} else {
    return vec3(60,60,50) / lightArea(8)
}

doesn’t look right. What am I missing?

0
101 Apr 19, 2012 at 10:21

@Alienizer

ok, me again, hard headed at understanding how it works for raytracing this time, not photons.

No problem! It’s not easy to get it right.
@Alienizer

dist = get distance to light

(…)

return vec3(60,60,50) / (dist*dist);

Nope! Since you have an area light you should not care about the distance. Each ray (photon) still has the same brightness, no matter how far it travels. The distance squared is just a hack to make point lights dimmer with distance. With an area light, you will automatically get the right brightness depending on the distance, since the light source will appear smaller at a larger distance, and thus get hit by fewer (global illumination) rays.
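
Geon’s point that the 1/dist^2 falloff emerges automatically can be checked with a little solid-angle arithmetic; `sphere_solid_angle` below is the standard formula for a sphere, used purely as an illustration:

```python
import math

def sphere_solid_angle(r, d):
    """Solid angle subtended by a sphere of radius r seen from distance d (d > r).
    For d >> r this is approximately pi * r^2 / d^2."""
    return 2.0 * math.pi * (1.0 - math.sqrt(1.0 - (r / d) ** 2))
```

Doubling the distance shrinks the subtended solid angle by very nearly 4x, which is exactly the inverse-square behavior, with no explicit 1/dist^2 term anywhere.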

You could emulate a true area light by splitting it into a grid of pointlights. Then your distance formula is correct, but it isn’t clear from your code sample how you have implemented it.

@Alienizer

Hmm. This sounds like you are working with a point light. All area lights have a penumbra, so you can’t just check whether a point is in shadow or not. Again, you could sample a number of point lights spread over the surface of the area light, but you’d have to average a number of samples.

You need to decide if you are going to work with the area light as an actual area, or emulate it with a number of point lights.

0
109 Apr 19, 2012 at 12:13

Thanks Geon.

I’m using area lights by sampling it randomly, not with a grid. This gives me nice shadows when running it for a long time.

I thought that direct illumination must use 1/(dist^2), because if it doesn’t, everything looks the same intensity!

0
101 Apr 19, 2012 at 22:44

For naive backward ray tracing you don’t need to know the light’s integrated power at all; only the color matters. When your ray hits a light directly, return the light’s surface color. If your ray hits something other than a light, spawn hundreds of secondary rays according to the surface BRDF. If a secondary ray hits a light, return the light’s surface color multiplied by the correct coefficient (the normalized surface reflection color).

Something like light power only arises when you try to optimize the secondary ray shooting: shoot rays not randomly over the whole hemisphere but aimed at the lights. Usually the aiming is done not at the light itself but at an enclosing sphere, and that’s where the 1/dist^2 coefficient arises: you replace the whole hemisphere with a small circle whose area is approximately proportional to 1/dist^2.

0
109 Apr 20, 2012 at 00:42

ok, I’m just trying a simple ray tracer to get the area lights to look correct. I want it to work with lights of different intensity and size, and the code below is the basic (very basic) loop I use, and what I’m not sure about is #1 and #2

for each screen pixel {
    find hit
    if hit = light {
        return light color #1
    } else {
        color = black
        for each light {
            find hitLight
            if hitLight {
                color += light color #2
            }
        }
        return color
    }
}

0
101 Apr 20, 2012 at 08:28

The “for each light” part isn’t for a basic ray tracer (it’s only simple with point lights). Try something like

for each screen pixel {
    find hit
    if hit = light {
        return light color #1
    } else {
        color = black
        for 1..LargeNumber {
            (hit2, color2) = spawn secondary ray
            if hit2 = light {
                color += color2 * light color #2
            } else {
                spawn ternary ray, etc...
            }
        }
        return color / LargeNumber
    }
}

0
109 Apr 20, 2012 at 12:57

oh I see, but isn’t that a path tracer?

0
101 Apr 20, 2012 at 14:52

Yes, it is a path tracer if you don’t stop after the second hit. But it’s important to know that light sampling is an optimization over simple path tracing, and it’s challenging to do it right (probability-theory calculations). The main problem is that light sampling works well only with point lights (possibly anisotropic). One solution is to first randomly choose points on the light surfaces, then sample those points in the ray tracer. For a good image without banding you must choose different points for different screen pixels.

for each screen pixel {
    find hit
    if hit = light {
        return light color #1
    } else {
        color = black
        for 1..LargeNumber {
            (point, power, normal) = generate random point on light surface
            if point visible from hit {
                color += BRDF * max(0, normal * dir) * power / dist^2
            }
        }
        return color / LargeNumber
    }
}
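
A per-sample version of the accumulation line above, as a hedged Python sketch (`light_sample_contribution` is a made-up name; constant factors such as pi and the sample pdf are assumed folded into `power`, as in the pseudocode):

```python
def light_sample_contribution(brdf, n_dot_dir, power, dist):
    """One visible light-point sample: BRDF * max(0, cos) * power / dist^2.
    Back-facing samples (negative cosine) contribute nothing."""
    return brdf * max(0.0, n_dot_dir) * power / (dist * dist)
```

Summing this over LargeNumber samples and dividing by LargeNumber gives the loop’s estimate.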

0
109 Apr 20, 2012 at 15:46

I see what you mean, that works better now, thanks! But when the eye ray hits the light, it always returns the same color/power for the light hit, which is ok; the problem is that in the 1..LargeNumber loop, the color returned is very, very dark, and I have to multiply it by 10000 to see it. Is it because dist^2 attenuates the light so rapidly? Or maybe I’m doing this wrong? When the eye ray hits the light, I return the color/power vec3(60,60,60) as opposed to just the color vec3(1,1,1). Is that correct?

0
101 Apr 20, 2012 at 17:21

This is the trickiest part: getting the coefficients right.

For each light you have a color and an area.
Also you have an RGB → grayscale conversion function norm().
Calculate the total normalized power: total = sum_i( area_i * norm(color_i) ).
Then choose a light with probability area * norm(color) / total.
And then use power = total * color / norm(color) in the formula.

Somewhere a pi or a small integer coefficient may be missing, but it’s hard for me to guess without calculating the integrals. Also, create a basic path tracer first for reference images before moving to more complicated algorithms.
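
That recipe can be sketched directly (hypothetical helpers; norm() is taken to be max(), one of the choices suggested later in the thread, and any missing pi factor is deliberately left out, as above):

```python
def norm(color):
    """Scalar weight of an RGB color; max() is one common choice."""
    return max(color)

def light_pick_probs(lights):
    """lights: list of (color, area) pairs.
    Pick probability is proportional to area * norm(color); the reweighted
    power for a picked light is total * color / norm(color)."""
    total = sum(area * norm(color) for color, area in lights)
    probs = [area * norm(color) / total for color, area in lights]
    powers = [[total * c / norm(color) for c in color] for color, area in lights]
    return probs, powers
```

For a white light of area 2 and a half-intensity light of area 2, the pick probabilities come out 2/3 and 1/3, and both reweighted powers are (3, 3, 3), so every sample carries the same magnitude.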

0
109 Apr 20, 2012 at 23:13

That’s a perfect explanation THANK YOU. Now I understand it. My head got it this time, and it works :D

Thanks again for the details, I realy appreciate it.

0
109 Apr 21, 2012 at 00:07

One more thing: can I substitute Luminance() for norm(), or does it have to be a plain grayscale conversion?

0
101 Apr 21, 2012 at 20:47

norm() here is a function for converting an RGB color to a probability. For example 0.18*R + 0.81*G + 0.01*B, or (R+G+B)/3, or max(R,G,B), or you can process each channel independently.

0
109 Apr 21, 2012 at 23:35

got it. Thanks again :)