Selection painter, performance issues?

Wernaeh 101 Mar 28, 2008 at 15:06

Hello there, everyone.

Here’s just a small question in regards to base algorithms.

For my geometry editing tool, I now need to implement a selection painter in addition to normal point-and-click selection.

In detail, the selection painter works similarly to the brushes in a basic painting program: a circle is shown around the cursor, every object inside the circle is selected, and the artist can then basically “paint” over the objects he wishes to select.

However, I have slight problems with the implementation:

Obviously, only visible (i.e. non-occluded) objects should be selected in this process, similar to normal selection.
Since I need this to be independent of any rendering facilities, I can’t use the simple selection buffer rendering available within OpenGL.

Now, the only alternative I see is to cast a ray for each pixel within the selection brush and run the “normal” selection code for each of them. However, for a brush with a diameter of, say, 100 pixels (about the maximum useful size ;) ), I need to cast almost 8000 rays, which I fear will seriously hurt interactivity.
I’m also unsure about the precision of this approach, i.e. whether all of these rays will give a reliable selection. In particular, it should never happen that “invisible” objects get selected because a ray misses something due to FP precision or similar.
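
Just to make the idea concrete, the loop I have in mind looks roughly like this (pure sketch: Ray, ObjectId, screenRayThroughPixel() and pickNearestObject() are only placeholders for whatever the framework actually provides):

    #include <set>

    void paintSelect(int centerX, int centerY, int radius, std::set<ObjectId>& selection)
    {
        // For radius = 50 this visits roughly pi * 50^2 ~= 7850 pixels.
        for (int dy = -radius; dy <= radius; ++dy)
        {
            for (int dx = -radius; dx <= radius; ++dx)
            {
                if (dx * dx + dy * dy > radius * radius)
                    continue;                              // outside the circular brush

                Ray ray = screenRayThroughPixel(centerX + dx, centerY + dy);

                ObjectId hit;
                if (pickNearestObject(ray, hit))           // the usual point-and-click selection
                    selection.insert(hit);                 // only the frontmost object is taken
            }
        }
    }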

Anybody out there have some ideas or experiences they could share? :)

Thank you for your time,
Cheers,
- Wernaeh

4 Replies

Nils_Pipenbrinck 101 Mar 28, 2008 at 20:31

I think the raytracing approach is a good one.

The user needs some time to use the brush tool. I think we can assume that an average selection stroke takes roughly a second (point at the object, press the button, move the mouse around), so you have plenty of time to raytrace your scene in the background.

Just don’t raytrace a pixel twice.
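
Something along these lines, for example (just a sketch; castRayForPixel() stands in for the existing per-pixel selection code):

    #include <unordered_set>

    std::unordered_set<long long> tracedPixels;   // clear this whenever a new stroke starts

    void traceBrush(int centerX, int centerY, int radius)
    {
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx)
            {
                if (dx * dx + dy * dy > radius * radius)
                    continue;                                            // outside the brush circle

                long long key = ((long long)(centerY + dy) << 32) | (unsigned)(centerX + dx);
                if (!tracedPixels.insert(key).second)
                    continue;                                            // already traced earlier in this stroke

                castRayForPixel(centerX + dx, centerY + dy);             // your existing per-pixel selection
            }
    }

That way a slow drag over the same area only pays for each pixel once.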

Wernaeh 101 Apr 01, 2008 at 09:05

Okay, I now have a somewhat weird “cone”-tracing process in place… just thought I’d ask for opinions again before the actual implementation.

Basically, I had some problems with “normal” raytracing, especially concerning 0- and 1-dimensional primitives, i.e. points and edges: one has to make sure they have an appropriate radius so that they actually cover a pixel; otherwise they may fall between two rays and never get hit.

I now finally settled on the following scheme:

- Cast a set of cones through the entire selection brush shape. These need not be pixel-sized, but may be spread arbitrarily across the brush, which improves speed with large brushes.
- For each cone, find the polys it intersects.
- If there are no such polys: select all points and edges inside the cone.
- If there are such polys: select the nearest poly, plus all points and edges that are inside the cone and lie (at least partially) on the camera side of all polys inside the cone.

I see two problems with this approach: first, cone/poly intersection may be a bit costly; second, there are certain precision issues when multiple polys fall inside a single cone. However, this method ensures there are no false positives (i.e. selecting elements that are not visible). A rough sketch of the per-cone logic follows below.
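
To spell it out, the per-cone decision would look roughly like this (pure pseudo-C++: Cone, Poly, Element, Selection and the intersection helpers are only placeholders, none of this is real code yet):

    #include <vector>

    void selectWithCone(const Cone& cone, Scene& scene, Selection& selection)
    {
        // Gather all polys the cone intersects.
        std::vector<const Poly*> hitPolys;
        for (const Poly& p : scene.polys)
            if (coneIntersectsPoly(cone, p))
                hitPolys.push_back(&p);

        if (hitPolys.empty())
        {
            // Nothing blocks the cone, so every point/edge inside it is visible.
            for (const Element& e : scene.pointsAndEdges)
                if (elementInsideCone(cone, e))
                    selection.add(e);
            return;
        }

        // Select the poly nearest to the camera ...
        selection.add(nearestToCamera(cone, hitPolys));

        // ... plus every point/edge that is inside the cone and lies (at least
        // partially) on the camera side of all polys the cone intersects.
        for (const Element& e : scene.pointsAndEdges)
            if (elementInsideCone(cone, e) && inFrontOfAllPolys(e, hitPolys))
                selection.add(e);
    }

The "in front of all hit polys" test is what keeps occluded geometry out of the selection.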

Any opinions or improvement ideas?

Thank you for your time,
Cheers,
- Wernaeh

rouncer 104 Apr 01, 2008 at 12:09

You can actually hit the individual triangles and gather UV coordinates from the hits.

But that’s a crazy technique only some kid who didn’t understand shaders properly would do.

Why can’t you use the selection buffer like you would ordinarily?

Me, I do everything in image space now - even though pulling it back off the card takes a little while, you can do it in little hits (and slot speeds are improving all the time).

Wernaeh 101 Apr 01, 2008 at 16:03

You can actually hit the individual triangles and gather UV coordinates from the hits. But that’s a crazy technique only some kid who didn’t understand shaders properly would do.

I’m not exactly sure what you mean by UV coordinates? There is no texturing involved here; the topic is simply about the selection of polygons, edges, and points. I’m not interested in where an object is hit, I’m interested in whether or not it is occluded on screen :)

Why can’t you use the selection buffer like you would ordinarily?

The rendering subsystem of the entire editing application features hot-pluggable rendering modules, not all of which support some sort of selection buffer. Writing a software rasterizer for these just to fill selection buffers also seems like overkill to me. Also, AFAIK the GL selection buffer is limited to 256 objects at most (correct me if I’m wrong there, I remember having read this on this very board somewhere), whereas my typical selection situation may contain up to several thousand faces, points, and edges.

Thank you for your input :)

Cheers,
- Wernaeh