Render System and Component Hierarchy

Naros 101 Nov 20, 2011 at 20:04

I have done a significant amount of trial and error as well as reading about component-based design. I am looking to tackle this using multiple subsystems for the various component family types, but what stumps me is the aspects that tie directly to the rendering engine. Most articles tend to refer to a single “renderable” component; however, they allude to there being some level of hierarchy behind that component. In fact, my rendering engine API behaves similarly: there is an IRenderableObject interface that each mesh, light, particle, etc. derives from.

At first, I took this approach: I stuck a name plate component and a mesh component for my avatar on my entity, those two components self-registered with the render system, and kaboom! The subsystems were designed so that, for a specific entity, only one component of a given type (in this case the base type, renderable) could be associated with that entity. I made a few changes to the container API, and now the subsystems can manage multiple components of the same family within a particular subsystem, such as the render subsystem. But this feels wrong to me, because if I need to update name plates after meshes, etc., I lose that ability: the render system sees all the components it manages as a common base type, “renderable”, so the notion of order between the various renderables is non-existent.
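For concreteness, the container change I made looks roughly like this (a minimal sketch with hypothetical names, not my actual code):

```cpp
#include <cstdint>
#include <map>

// Hypothetical base type the render subsystem manages.
struct Renderable {
    virtual ~Renderable() = default;
    virtual void update(float dt) = 0;
};

class RenderSystem {
public:
    // A multimap (instead of a map) lets one entity own several
    // renderables: a mesh, a name plate, and so on.
    void addComponent(std::uint32_t entityId, Renderable* r) {
        components_.insert({entityId, r});
    }

    void update(float dt) {
        // Iteration runs in entity-ID order, not component-type order --
        // which is exactly the ordering problem described above.
        for (auto& entry : components_) entry.second->update(dt);
    }

private:
    std::multimap<std::uint32_t, Renderable*> components_;
};
```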

So take a simple example: a mesh system, a name plate system, and a lighting system. Each system handles a single component type, but they all require some level of access to the render engine to do their job. Additionally, some of them require a scene node, which is what my existing render component provides. Is it common practice to have multiple renderables per game object, or should my render engine expose its API via an interface, with that interface provided to the mesh, name plate, and lighting systems?

3 Replies


Reedbeta 167 Nov 20, 2011 at 20:27

I suppose it depends on how much of an abstraction you want a “renderable” to be. It seems to me you could go a low-level route and have a renderable represent more or less a single graphics draw call - basically consisting of a transform, a reference to a mesh, a reference to a shader, etc. In that case, each object could have multiple (possibly many) renderables - representing not just different components but also mesh parts with different shaders applied, attached to different bones in the skeleton, etc.
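As a sketch of the low-level route (the field and type names here are mine, just stand-ins for whatever your engine uses), a renderable could be little more than a plain record:

```cpp
#include <cstdint>

// Placeholder types standing in for the engine's real ones.
using MeshHandle   = std::uint32_t;
using ShaderHandle = std::uint32_t;
struct Matrix4 { float m[16]; };

// Hypothetical low-level renderable: roughly one draw call's worth of state.
struct Renderable {
    Matrix4       worldTransform; // where to draw it
    MeshHandle    mesh;           // what geometry to draw
    ShaderHandle  shader;         // how to shade it
    std::uint64_t sortKey;        // ordering hint (see the sort-key note below)
};
```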

On the other hand, you could make a renderable a higher-level object that represents multiple draw calls and knows how to do each of them. In that case it might make sense to have a single renderable per object, or perhaps per component.
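That higher-level version might look something like this (again a hypothetical sketch, not a prescribed design):

```cpp
// Minimal stub for whatever state a draw pass needs.
struct RenderContext {};

// Hypothetical higher-level renderable: it represents several draw calls
// and knows how to issue all of them itself.
struct IRenderable {
    virtual ~IRenderable() = default;
    virtual void render(RenderContext& ctx) = 0; // may issue many draw calls
};

struct NamePlateRenderable : IRenderable {
    void render(RenderContext& /*ctx*/) override {
        // e.g. draw the billboard quad, then the 2D text on top of it
    }
};
```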

As for order control, one way to do it is to associate sort keys with renderables, and update / render them in the order implied by the keys. For instance, the name plate update key could come after the mesh one. But the render keys might be in a different order than the update keys, based on the order things need to be drawn in for correct appearance. For efficiency, so you don't have to totally re-sort everything each frame, you could carry over last frame's sorted lists and only update them when new objects are spawned or sort keys change (if you allow them to change).
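A minimal sketch of that scheme (the key values and names are invented for illustration):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Renderable; // the draw-call record from the earlier sketch

struct SortedEntry {
    std::uint32_t updateKey;  // e.g. meshes = 100, name plates = 200
    std::uint32_t renderKey;  // drawing order may differ from update order
    Renderable*   renderable;
};

// Carry the sorted list across frames; only re-sort when something
// spawned or a key changed.
void resortIfDirty(std::vector<SortedEntry>& updateList, bool dirty) {
    if (!dirty) return;
    std::stable_sort(updateList.begin(), updateList.end(),
                     [](const SortedEntry& a, const SortedEntry& b) {
                         return a.updateKey < b.updateKey;
                     });
}
```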

Naros 101 Nov 20, 2011 at 20:58

The great thing about the rendering engine library I am using (OGRE) is that it abstracts away a lot of the complexity behind actually loading a mesh from disk and preparing it to be rendered in the next render call to the target window. My job is basically to call a createEntity() method on my scene manager with a reference to the mesh; the mesh is loaded, and then all I must do is initialize any additional options I wish on the scene entity object and attach it to a scene node, which carries the transform’s orientation, position, and scale. I can also use OGRE’s material manager to load textures and apply them to a mesh or a plane, call createLight() to create lighting, or use draw commands to project 2D text for name plates and billboards, etc.
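For reference, that flow looks roughly like this, assuming OGRE 1.x (the mesh and node names are placeholders):

```cpp
#include <Ogre.h>

void createAvatar(Ogre::SceneManager* sceneMgr) {
    // Load the mesh and wrap it in a renderable entity.
    Ogre::Entity* avatar = sceneMgr->createEntity("Avatar", "avatar.mesh");

    // Attach it to a scene node, which carries the transform.
    Ogre::SceneNode* node =
        sceneMgr->getRootSceneNode()->createChildSceneNode("AvatarNode");
    node->attachObject(avatar);
    node->setPosition(Ogre::Vector3(0, 0, 0));
    node->setScale(Ogre::Vector3(1, 1, 1));

    // Lights are created through the same scene manager.
    Ogre::Light* light = sceneMgr->createLight("KeyLight");
    light->setPosition(Ogre::Vector3(20, 80, 50));
}
```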

From a separation-of-concerns perspective, it seems bad to have a high-level render component that knows how to do ALL these things, but maybe that is how it is done in commercial software. The concept of a “sort key” is very new to me, and one I haven’t seen introduced in any of the articles or design documents about this particular architecture. Typically, I see that functionality should be broken down into simple components and subsystems, which would suggest one system for meshes, another for lights, one for name plates, etc. But each of these requires the ability to invoke things on the render engine.

My first thought was that all these various subsystems (mesh/lights/name plates) would be provided an IRenderer interface during their construction, and during the update() call those subsystems could use this IRenderer interface to invoke drawing methods. Another solution, to be totally decoupled without an interface, would be to take the interface API and turn those calls into events, and simply have these various subsystems dispatch events to create lights or meshes, or to fade or destroy a light source, etc.
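A sketch of the first option (the interface methods here are hypothetical, just to show the shape of it): each subsystem receives the interface at construction and never sees OGRE directly.

```cpp
#include <cstdint>
#include <string>

using MeshId  = std::uint32_t;
using LightId = std::uint32_t;

// Hypothetical engine-agnostic facade; the OGRE-specific implementation
// would live in exactly one place behind it.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual MeshId  createMesh(const std::string& meshFile) = 0;
    virtual LightId createLight() = 0;
    virtual void    drawText2D(const std::string& text, float x, float y) = 0;
};

// A subsystem sees only the interface, never OGRE itself.
class NamePlateSystem {
public:
    explicit NamePlateSystem(IRenderer& renderer) : renderer_(renderer) {}
    void update(float /*dt*/) {
        renderer_.drawText2D("PlayerName", 0.5f, 0.1f); // placeholder text
    }
private:
    IRenderer& renderer_;
};
```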

Ideally I would prefer to avoid having OGRE referenced in so many component and system files, because I believe it should be encapsulated so that, should I change engines down the road, I could do so with minimal effort. But maybe I am either over-engineering or over-generalizing, idk.

Reedbeta 167 Nov 20, 2011 at 21:51

I think trying to design to make it “easy” to switch from OGRE to a different renderer later on is probably a bridge too far. It’s reasonable to expect that making a major change in the project like that would touch a lot of code.