Modern scene management techniques.

Albertone 101 Dec 21, 2012 at 17:37

Hello,
graphics programming, for me, is only a spare-time hobby. In the last few years I’ve managed to keep myself up to date as far as rendering techniques go (the “deferred sensation”, the many kinds of screen-space processing, etc.), but I’ve realized my knowledge of scene management techniques is really obsolete - let’s say it’s 10+ years old, from when BSP+PVS was a good bet and dinosaurs ruled the earth. Even in those days I remember articles suggesting that even a very good BSP tree was still too cache-unfriendly for the then-modern GPUs. I also spent some time experimenting with occlusion queries & co. Now, what are the current trends? It’s also heavily related to level editing, right? If I’m right, these days very few engines rely on brush & patch methods, except for some FPSs built on top of some old idTech x-derived engine.

8 Replies


xenobrain 101 Dec 21, 2012 at 19:34

I’m on my way to work now so I can’t elaborate much–

CryEngine uses a dynamically reparenting scenegraph (I think based on the current zone) with octrees for spatial sorting, and a very fancy software culling solution called a “Coverage Buffer” that uses simplified occlusion volumes and object bounding boxes to do a pass before moving on to hardware culling solutions (adapted to the host platform). It does have a CSG tool, but I think it’s mostly used for whiteboxing by most developers these days.
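
In case it helps to picture the “Coverage Buffer” idea: it’s essentially a tiny software depth buffer that the simplified occluder volumes get rasterized into, after which each object’s screen-space bounding rectangle is tested against it. A minimal sketch of the test side (the resolution and the CoverageBuffer layout are my own illustration, not CryEngine’s actual code):

```cpp
#include <vector>
#include <cfloat>
#include <algorithm>

// Tiny software depth buffer ("coverage buffer") sketch. Occluders are
// rasterized first; objects are then tested by comparing the nearest depth
// of their screen-space bounding rectangle against the buffer.
struct CoverageBuffer {
    int width = 256, height = 128;          // low resolution is enough for conservative tests
    std::vector<float> depth;

    void clear() { depth.assign(width * height, FLT_MAX); }

    // Rasterization of the simplified occluder meshes would go here
    // (triangle fill writing the nearest depth per pixel); omitted for brevity.

    // Conservative visibility test for an object's screen-space rectangle.
    bool isRectVisible(int x0, int y0, int x1, int y1, float nearestDepth) const {
        x0 = std::max(x0, 0);            y0 = std::max(y0, 0);
        x1 = std::min(x1, width - 1);    y1 = std::min(y1, height - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if (nearestDepth < depth[y * width + x])
                    return true;         // at least one pixel where the object is closer
        return false;                    // fully behind the rasterized occluders
    }
};
```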

I don’t know as much about Unreal Engine, except that it’s also using a dynamically reparenting scenegraph with octree spatial sorting, that it has “cull distance volumes”, and that it’s still pretty heavily dependent on PVS and portals/antiportals. Again, CSG is available in the toolset - I’ve seen it used more often than in CryEngine - but more and more games seem to be focusing on static meshes for scene geometry, so CSG is getting used only at the coarsest level, if at all.
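
For the “cull distance volumes”: the idea is a region that maps object size to a maximum draw distance, so small objects stop being drawn much sooner than large ones. A rough sketch of that mapping, with made-up names and thresholds rather than the real UE API:

```cpp
#include <vector>

// Sketch of a "cull distance volume": a region that maps object size to a
// maximum visible distance. Struct names and values are illustrative only.
struct CullDistancePair { float objectSize; float maxDrawDistance; };

struct CullDistanceVolume {
    // Sorted by objectSize, e.g. { {1.f, 500.f}, {5.f, 2000.f}, {50.f, 0.f /* never cull */} }
    std::vector<CullDistancePair> pairs;

    // Returns false if the object should be culled at this camera distance.
    bool isVisible(float boundsRadius, float distanceToCamera) const {
        for (const CullDistancePair& p : pairs) {
            if (boundsRadius <= p.objectSize) {
                // A max distance of 0 conventionally means "never cull".
                return p.maxDrawDistance <= 0.f || distanceToCamera <= p.maxDrawDistance;
            }
        }
        return true; // larger than every threshold: always keep it
    }
};
```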

PhyreEngine is interesting since, as far as I can tell, it pretty much ditches the scenegraph and uses nodes only for things like animation, where they are convenient for transform hierarchies. Instead it relies very heavily on occlusion volumes and SPU/thread-pool jobs testing against those volumes, using a simple front-to-back scene traversal that it performs by getting world transform matrices from the active objects list.
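
That “no scenegraph, just a flat active-object list traversed front to back” approach is easy to picture: sort the list by camera distance, then hand each entry to the occlusion-test jobs. A simplified illustration of the sort step (my own sketch, not PhyreEngine code):

```cpp
#include <vector>
#include <algorithm>

// Minimal flat-list scene: no scenegraph, just active objects with world
// transforms, sorted front to back before culling/occlusion jobs are issued.
struct Vec3 { float x, y, z; };

struct ActiveObject {
    Vec3 worldPos;      // pulled from the object's world transform matrix
    // ... mesh handle, bounds, etc.
};

inline float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

void buildRenderOrder(std::vector<ActiveObject*>& activeList, const Vec3& cameraPos) {
    // Front-to-back order helps both the occlusion tests and early-z on the GPU.
    std::sort(activeList.begin(), activeList.end(),
              [&](const ActiveObject* a, const ActiveObject* b) {
                  return distSq(a->worldPos, cameraPos) < distSq(b->worldPos, cameraPos);
              });
    // Each object (or batch) would then go to a worker job that tests its
    // bounds against the occlusion volumes before it gets drawn.
}
```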

Finally, Unigine is using the usual dynamically reparenting scenegraph (I think it’s based on asset “clusters” or “groups” in the world), but with what they are calling an adaptive (frequently regenerated) axis-aligned BSP tree.
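
An adaptive axis-aligned BSP in that sense is essentially a kd-tree over the scene contents that gets rebuilt whenever things have moved enough. A bare-bones sketch of such a rebuild, splitting on the longest axis at the median (purely illustrative, not Unigine’s implementation):

```cpp
#include <vector>
#include <algorithm>
#include <memory>

// Bare-bones axis-aligned BSP (i.e. kd-tree) over object centers, rebuilt from
// scratch whenever the scene has changed enough. Illustrative only.
struct AABB { float min[3], max[3]; };
struct Item { float center[3]; AABB bounds; /* payload... */ };

struct BspNode {
    AABB bounds;
    int axis = -1;                 // -1 marks a leaf
    float split = 0.f;
    std::unique_ptr<BspNode> child[2];
    std::vector<Item*> items;      // only filled in leaves
};

std::unique_ptr<BspNode> build(std::vector<Item*> items, const AABB& bounds, int depth) {
    auto node = std::make_unique<BspNode>();
    node->bounds = bounds;
    if (items.size() <= 4 || depth > 16) {   // small or deep enough: make a leaf
        node->items = std::move(items);
        return node;
    }
    // Split on the longest axis at the median item center.
    int axis = 0;
    for (int a = 1; a < 3; ++a)
        if (bounds.max[a] - bounds.min[a] > bounds.max[axis] - bounds.min[axis]) axis = a;
    std::nth_element(items.begin(), items.begin() + items.size() / 2, items.end(),
                     [axis](const Item* a, const Item* b) { return a->center[axis] < b->center[axis]; });
    float split = items[items.size() / 2]->center[axis];

    std::vector<Item*> left, right;
    for (Item* it : items) (it->center[axis] < split ? left : right).push_back(it);

    node->axis = axis; node->split = split;
    AABB lb = bounds, rb = bounds;
    lb.max[axis] = split; rb.min[axis] = split;
    node->child[0] = build(std::move(left), lb, depth + 1);
    node->child[1] = build(std::move(right), rb, depth + 1);
    return node;
}
```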

All this is based on my memory which isn’t perfect, of course.

Albertone 101 Dec 21, 2012 at 19:51

Thank you, xenobrain. It seems that scene graphs are still healthy, despite Tom Forsyth’s rants :)
I suppose the static meshes are instanced via GPU extensions. Also, from what you write it seems that proprietary level editors are still used a lot - years ago a lot of people thought that full-blown 3D packages like MAX/Maya, plus some plugins/scripting, were going to replace in-house editors.

Reedbeta 167 Dec 21, 2012 at 20:10

Just for another perspective: we use static AABB trees for the majority of our scene (the static environment), built by our level compiler tools - one AABB tree per streamed world chunk. For dynamic objects, we just keep a flat list. Each dynamic object has an AABB or a bounding sphere, but they aren’t in a tree at all. (The physics system might be doing its own tree-building; I’m not sure. But rendering doesn’t use a tree for dynamic stuff.)
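
To make that split concrete, here is a rough sketch (my own simplification, not the engine’s actual code) of a precomputed AABB tree per streamed chunk for statics next to a flat array of bounds for dynamics, both culled with the same plane test:

```cpp
#include <vector>

// Sketch of "static AABB tree per streamed chunk + flat list for dynamics".
// The node layout and the plane-based visibility test are illustrative.
struct AABB  { float min[3], max[3]; };
struct Plane { float nx, ny, nz, d; };          // n.p + d >= 0 means "inside"
struct Frustum { Plane planes[6]; };

// Conservative AABB vs. frustum test: reject only if the box is fully outside one plane.
bool testAABB(const Frustum& f, const AABB& box) {
    for (const Plane& p : f.planes) {
        // Box corner furthest along the plane normal.
        float x = p.nx >= 0 ? box.max[0] : box.min[0];
        float y = p.ny >= 0 ? box.max[1] : box.min[1];
        float z = p.nz >= 0 ? box.max[2] : box.min[2];
        if (p.nx * x + p.ny * y + p.nz * z + p.d < 0) return false;
    }
    return true;
}

// Static geometry: one precomputed tree per streamed chunk, walked top-down.
struct AabbTreeNode {
    AABB bounds;
    int firstChild = -1;                // index of the first of two children, -1 for a leaf
    int firstMesh = 0, meshCount = 0;   // leaf payload: a range of meshes to draw
};

void cullChunk(const std::vector<AabbTreeNode>& tree, int nodeIdx,
               const Frustum& frustum, std::vector<int>& visibleLeaves) {
    const AabbTreeNode& node = tree[nodeIdx];
    if (!testAABB(frustum, node.bounds)) return;                  // reject the whole subtree
    if (node.firstChild < 0) { visibleLeaves.push_back(nodeIdx); return; }
    cullChunk(tree, node.firstChild,     frustum, visibleLeaves);
    cullChunk(tree, node.firstChild + 1, frustum, visibleLeaves);
}

// Dynamic objects: no tree at all, just test every bound in a flat list.
struct DynamicObject { AABB bounds; /* ... */ };

void cullDynamics(const std::vector<DynamicObject>& objects, const Frustum& frustum,
                  std::vector<const DynamicObject*>& visible) {
    for (const DynamicObject& obj : objects)
        if (testAABB(frustum, obj.bounds))
            visible.push_back(&obj);
}
```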

We don’t use HW occlusion queries, but we build a BSP of the camera frustum minus occlusion frusta for nearby occluders (which are just really simplified meshes of buildings etc., authored by the environment team) each frame, and cull all our AABBs/spheres against it (multithreaded, on the CPU).
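
The test side of that can be pictured as: an object is culled if it lies outside the view frustum or fully inside any occluder frustum. The much-simplified sketch below uses plain plane sets instead of the actual BSP, which is my shortcut rather than how the engine stores it:

```cpp
#include <vector>

// Simplified stand-in for "frustum minus occlusion frusta" culling: visible if
// inside the view frustum and not fully contained in any occluder frustum.
struct Plane  { float nx, ny, nz, d; };              // signed distance = n.p + d
struct Sphere { float x, y, z, radius; };

static float signedDistance(const Plane& p, const Sphere& s) {
    return p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
}

// Convex volume described by a set of inward-facing planes.
struct ConvexVolume { std::vector<Plane> planes; };

bool fullyOutside(const ConvexVolume& v, const Sphere& s) {
    for (const Plane& p : v.planes)
        if (signedDistance(p, s) < -s.radius) return true;   // outside one plane => outside the volume
    return false;
}

bool fullyInside(const ConvexVolume& v, const Sphere& s) {
    for (const Plane& p : v.planes)
        if (signedDistance(p, s) < s.radius) return false;   // touches or crosses a boundary plane
    return true;
}

// Per-object test: culled if outside the view frustum, or hidden by any occluder frustum.
bool isVisible(const Sphere& bounds, const ConvexVolume& viewFrustum,
               const std::vector<ConvexVolume>& occluderFrusta) {
    if (fullyOutside(viewFrustum, bounds)) return false;
    for (const ConvexVolume& occ : occluderFrusta)
        if (fullyInside(occ, bounds)) return false;
    return true;
}
```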

BTW we actually do use Maya for world building. We have a custom in-engine editor for making missions, but the environment is all done in Maya. The geom is just arbitrary poly meshes - not watertight, not made of brushes or anything like that.

Albertone 101 Dec 21, 2012 at 20:27

Thank you, Reedbeta.
By the way, do you use some form of baked lighting for the environments? If so, are the maps also rendered in Maya?

Reedbeta 167 Dec 21, 2012 at 21:42

No, all baking is done in the engine. It’s important to use the real game shaders and lighting code, which of course all run in the engine, and we don’t want to have to maintain a Maya copy of all that stuff as well. :)

v71 105 Dec 22, 2012 at 09:16

Speaking for myself, I use an octree with a precomputed distance map; the player position is obtained relative to the current octree node, and a kind of quick SIMD z-buffer is used to compute the occluded light paths.
This runs very nicely; I’m now working on the bottleneck, which seems to be the vertex transformation.
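
As a rough illustration of the first step described there - walking the octree down to the leaf that contains the player - here is a minimal sketch (the node layout is made up for illustration):

```cpp
// Locate the octree leaf containing the player position. Assumes a node is
// either a leaf or has all eight children allocated; illustrative only.
struct OctreeNode {
    float center[3];
    float halfSize;
    OctreeNode* children[8];   // all null for a leaf
};

const OctreeNode* findLeaf(const OctreeNode* node, const float pos[3]) {
    while (node->children[0]) {
        // Child index from which side of the center the position lies on each axis.
        int idx = (pos[0] >= node->center[0] ? 1 : 0)
                | (pos[1] >= node->center[1] ? 2 : 0)
                | (pos[2] >= node->center[2] ? 4 : 0);
        node = node->children[idx];
    }
    return node;   // leaf whose precomputed data (e.g. the distance map) applies to the player
}
```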

Vilem_Otte 117 Dec 22, 2012 at 10:30

And our management is quite a bit more complicated. For standard culling we hold multiple “scenegraphs” - for example:
Lights are handled using bounding volumes (we use AABBs for all lights right now - whether “point”, “spot” or “directional” - note that we actually only use area lights, so they have real size). This BVH is dynamically recomputed on the fly.
Basically the same goes for dynamic objects (we don’t hold a single BVH for all of them, but rather one for characters, one for other dynamic objects, etc.). These are also dynamically recomputed on the fly and hold OBBs.
Static meshes are divided into high detail (the cells you are currently in) and low detail (distant meshes). High detail is fairly simple: we hold a precomputed tree of the whole cell for culling. Low-detail meshes are very tricky. We basically use them for the distant view and for other effects (realtime reflections using ray tracing, dynamic global illumination, etc.), so we have a precomputed kd-tree for ray tracing. For distant-view rendering, we use just a simple quadtree to cull out invisible cells.

Note that all cells have the same dimensions, and also that most things in our world(s) are dynamic. Basically this complicates stuff a lot, because we can’t use things like SVO and cone tracing; we have to stick to what are, in my opinion, more accurate and better solutions.

As for culling - we perform simple frustum culling; occlusion culling is done just for static objects right now (there weren’t any real gains for dynamic objects, as they are visible only in surrounding cells and within a specified radius - and they’re mostly LODded out anyway).
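
Putting the above together, a toy sketch of the overall pass - several independent hierarchies culled against the same frustum, with the extra occlusion pass applied only to the static sets (the interface and names are illustrative, not the actual engine code):

```cpp
#include <vector>

// Toy version of "several independent hierarchies, one frustum pass over all
// of them, occlusion culling only for the static sets".
struct Frustum { /* planes, etc. */ };

struct CullableSet {
    virtual ~CullableSet() = default;
    virtual void frustumCull(const Frustum& f, std::vector<int>& outVisible) const = 0;
    virtual bool isStatic() const = 0;
};

// Placeholder for the extra occlusion pass that only static sets receive.
void occlusionCull(std::vector<int>& /*candidates*/) { /* ... */ }

// e.g. sets = { lightBVH, characterBVH, propBVH, staticCellTree, distantQuadtree }
void cullScene(const Frustum& frustum, const std::vector<const CullableSet*>& sets,
               std::vector<int>& visibleIds) {
    for (const CullableSet* set : sets) {
        std::vector<int> passed;
        set->frustumCull(frustum, passed);
        if (set->isStatic())
            occlusionCull(passed);          // occlusion culling only for static objects
        visibleIds.insert(visibleIds.end(), passed.begin(), passed.end());
    }
}
```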

The current bottleneck is editing (apart from ray tracing, which eats a lot of performance - but reflections are reflections!). We have an editor that runs interactively “in game”, although after editing we still have the ugly “precompute” button there to precompute distant meshes & static stuff.

Albertone 101 Dec 25, 2012 at 20:41

Thanks for the answers. Merry Christmas to everybody!