Hey all, just found your forums here and I’m hoping someone out there
can help me with a problem I’ve been pondering for a while now.
If I have an array of values whose elements sum to 1.0, is there an
efficient way to calculate the array index where x% of the total sum lies
before that index and the remaining (100 - x)% lies after it? Ideally I
would be able to calculate this value within a shader.
The practical use of this is to find the index within a histogram where
90% of all values are below that index, and the other 10% are above.
This will allow me to reduce the effects of the top 10% of pixels in
some techniques. Thanks in advance for any help you can give!
interesting question. i take it the data isn't sorted?
the most obvious way is to walk the array with a running sum and stop when
the sum reaches the percentage you want. but you're after something more
efficient than that, right?
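for what it's worth, that brute-force scan is only a few lines; here's a
minimal sketch in plain C++ (assuming a non-empty array whose values already
sum to 1.0), and the same loop maps directly onto shader code:

```cpp
#include <vector>
#include <cstddef>

// Return the first index at which the running sum of values reaches
// 'target' (e.g. target = 0.9 for the 90% point). The values are assumed
// to be non-negative and to sum to 1.0.
std::size_t percentileIndex(const std::vector<float>& values, float target)
{
    float accum = 0.0f;
    for (std::size_t i = 0; i < values.size(); ++i)
    {
        accum += values[i];
        if (accum >= target)
            return i;
    }
    return values.size() - 1; // target never reached due to rounding
}
```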
I’m not a math guru, but depending on whether the data is dynamic or
static, one way i could think of is to fit a curve to the data beforehand
and integrate the resulting equation. you’d solve the integral for x, given
the sum (which represents the percentage). So if the data is static, you’d
do the curve fitting offline, and in the shader it would just be a matter
of plugging the percentage into the equation. This is probably not
practical for dynamic data though.
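as a rough sketch of the static-data idea in plain C++ (the cubic cdf here
is just a stand-in for whatever curve your offline fit actually produces,
and the bisection maps onto a short fixed-iteration loop in a shader):

```cpp
#include <cstdio>

// Stand-in for the offline fit: a monotonic cubic approximating the
// normalized cumulative histogram, with cdf(0) = 0 and cdf(1) = 1.
// Real coefficients would come from the offline curve-fitting step.
float cdf(float x)
{
    return x * x * (3.0f - 2.0f * x);
}

// Solve cdf(x) = target by bisection. In a shader this would be a small
// fixed-count loop (or a closed-form inverse if the fit allows one).
float invertCdf(float target)
{
    float lo = 0.0f, hi = 1.0f;
    for (int i = 0; i < 16; ++i)
    {
        float mid = 0.5f * (lo + hi);
        if (cdf(mid) < target)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5f * (lo + hi);
}

int main()
{
    // Position below which 90% of the fitted distribution lies.
    std::printf("90%% point: %f\n", invertCdf(0.9f));
    return 0;
}
```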
i’ll give it more thought.
Specifically, this is intended to drive tonemapping in a real-time
application. When the scene is bright, the luminance histogram will be
weighted toward the upper end, and when the scene is dark it will be
weighted toward the low end. The idea behind all this is to
find the point where 5% of the histogram is above and the rest is below,
since this value will be more stable and affected less by fluctuations
in the number of bright pixels. I’m primarily trying to find an
efficient way of calculating this on the GPU, so I have correct results
per-frame and don’t need to stall the CPU waiting for the GPU to catch
up for a read-back. Thanks for any more help you can give.
One way I can think of to do this would be to write a shader that takes
each ‘bin’ in your brightness histogram, and operates upon the rendered
image to output a 1 for each pixel in that bin and a 0 for each pixel
not in the bin. The resulting image could be repeatedly downsampled on
the GPU until it was 1x1, using a box filter; this would give a pixel
value that shows the fraction of total image pixels that fell in that
bin. Do this for each bin, and read back the 1x1 pixel values; then you
have your histogram and you can determine the 90% point on the CPU. It’s
probably possible to do the whole thing on the GPU, but would require
some hackery since the GPU isn’t set up to loop through pixels
one-by-one (which you would need to do to find the 90% point). On the
other hand, the time wasted in a pipeline stall is the same whether
you’re reading back a 1x1 image or a 1280x1024 one; so maybe it’s worth
it to do the whole thing on the GPU.
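A quick CPU mock-up of the per-bin idea, where the mask-and-box-filter
reduction collapses to a simple per-bin average (the 64-bin count and the
[0,1] luminance range are placeholder assumptions):

```cpp
#include <vector>
#include <cstddef>

// For each histogram bin, the "render a 0/1 mask and downsample to 1x1"
// pass amounts to averaging the mask, i.e. computing the fraction of
// pixels whose luminance falls in that bin.
std::vector<float> binFractions(const std::vector<float>& luminance, int numBins = 64)
{
    std::vector<float> fractions(numBins, 0.0f);
    if (luminance.empty())
        return fractions;

    for (float lum : luminance)               // luminance assumed in [0, 1]
    {
        int bin = static_cast<int>(lum * numBins);
        if (bin < 0) bin = 0;
        if (bin >= numBins) bin = numBins - 1;
        fractions[bin] += 1.0f;
    }
    for (float& f : fractions)
        f /= static_cast<float>(luminance.size());
    return fractions;
}
```

Once those per-bin fractions are read back, finding the 90% point on the
CPU is just the running-sum scan described earlier in the thread.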
My best idea so far is to render the luminance surface, then use the
vertex shader to read those luminance values and accumulate each pixel
into a histogram (prescaling the data so the resulting histogram is
normalized, i.e. the sum of all values is 1.0). This histogram would get
downsampled repeatedly using an additive operation, so each smaller level
holds the sum of the two pixels it was sampled from. Later, to find the
final position in the shader, I would start at the smallest mip level and
determine whether the target value falls in the left or right half,
subtracting the left half's sum from the target when I go right. At the
next level up the mip chain I do the same, each time offsetting my UV
coordinate appropriately (there's a quick CPU sketch of this descent at
the end of this post). Eventually I reach the top level, where my UV x
coordinate is the final position. I could then output that
single value into a 1x1 texture that I can read later. None of this ever
has to hit the CPU, though it does require a way to get the luminance
data into the vertex shader. Thanks for the ideas.
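For reference, here's a quick CPU mock-up of that mip-chain descent just to
sanity-check the logic (histogram length assumed to be a power of two; in
the shader, each level would be one mip of the histogram texture and each
step a single fetch with an adjusted UV):

```cpp
#include <vector>
#include <cstddef>

// Build the additive "mip chain": level 0 is the normalized histogram, and
// each coarser level stores the sum of two adjacent texels of the level
// below. The histogram length is assumed to be a power of two.
std::vector<std::vector<float>> buildSumChain(const std::vector<float>& histogram)
{
    std::vector<std::vector<float>> levels{histogram};
    while (levels.back().size() > 1)
    {
        const std::vector<float>& fine = levels.back();
        std::vector<float> coarse(fine.size() / 2);
        for (std::size_t i = 0; i < coarse.size(); ++i)
            coarse[i] = fine[2 * i] + fine[2 * i + 1];
        levels.push_back(coarse);
    }
    return levels;
}

// Walk from the coarsest level down to level 0. At each step, if the
// target falls within the left child's sum, descend left; otherwise
// subtract the left child's sum and descend right. The final index is the
// bin where the cumulative sum reaches the target.
std::size_t findPercentile(const std::vector<std::vector<float>>& levels, float target)
{
    std::size_t index = 0;
    for (std::size_t level = levels.size() - 1; level > 0; --level)
    {
        const std::vector<float>& fine = levels[level - 1];
        float leftSum = fine[2 * index];
        if (target <= leftSum)
            index = 2 * index;      // target lies in the left half
        else
        {
            target -= leftSum;      // remainder lies in the right half
            index = 2 * index + 1;
        }
    }
    return index;
}
```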