Tips for a better opengl GUI architecture

Kaelb 101 Jun 28, 2012 at 09:11

Hello, this week I had to reimplement the picking system for the GUI library I’m working on (I’ll explain why later in the post). I’d like some feedback on what I’m doing, to see if there are better ways to implement the GUI.

My GUI system follows the composite pattern: elements can be added to other elements, and the transformations applied to a parent element are also applied to its child elements. Rendering is therefore done like this:

->parent-element translates to its position

->parent-element renders

->child-element translates to its position, on top of the parent’s translation

->child-element renders

->child-element untranslates

->parent-element untranslates

With this pattern I can move the parent without having to update any of the child elements. I think this is pretty good, since I can create independent infrastructure elements without them needing to know about each other when they are composed.
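A minimal sketch of this composite traversal (names are hypothetical; the “translate”/“untranslate” pair is modelled as an accumulated offset instead of actual glPushMatrix/glTranslatef calls, so it runs without a GL context):

```cpp
#include <utility>
#include <vector>

struct Element {
    float x = 0, y = 0;                        // position relative to parent
    std::vector<Element*> children;

    // ox, oy = translation accumulated from all ancestors
    void render(float ox, float oy,
                std::vector<std::pair<float, float>>& drawn) const {
        float ax = ox + x, ay = oy + y;        // "translate to its position"
        drawn.push_back({ax, ay});             // "render": record absolute position
        for (const Element* c : children)
            c->render(ax, ay, drawn);          // child inherits the parent translation
        // returning is the "untranslate": the parent's offset is untouched
    }
};
```

Moving the parent only changes its own `x`/`y`; every child’s absolute position follows automatically on the next traversal, which is the property described above.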

Now comes the part I’m skeptical about. In my last GUI version I was using the OpenGL picking system. The problem arose when I wanted to add an element to a panel and the element was wider than the panel, like this:

[screenshot: a yellow debug panel containing a red button; the button is wider than the panel and is clipped on the right]

I know it’s hard to make out, since this is just a debug view: the yellow square is a panel and the red one is a button. The button is wider than the panel, so it gets clipped on the right side; you can see that on the right the red square has no black contour line.

With OpenGL picking this was a problem: even when the button was clipped, a hit was still generated, and I could click it outside the panel. That behavior wasn’t desired, so I implemented a color picking system for the GUI. So far it’s working flawlessly, although creating a new GUI element now needs some more code.

Here is the interface of the object that performs the color picking:

class PGIIColorPicker{
public:
    // draws the component flat-shaded in the given color and tests for a hit
    virtual bool testComponent(PGIComponent* component, PGIColor3ub color)=0;
    virtual int getPickCount()=0;
    virtual int getCurrentIndex()=0;
    virtual bool next()=0;
    virtual PGIColor3f current()=0;
    virtual void resetIndex()=0;
    virtual void clear()=0;
    virtual void setPickCoordinates(int x, int y)=0;
};

Here is the interface for my GUI elements:

class dllexport PGIComponent {
public:
    
    virtual void draw()=0;
    virtual void draw(PGIColor3ub color)=0;
    virtual void pickElement(PGIIColorPicker* picker)=0;
    virtual void activate(GLuint* idx) =0;
    virtual void drag(GLuint* idx, float draggedX, float draggedY) = 0;
    virtual float getWidth()=0;
    virtual float getHeight()=0;
    virtual float getXPosition()=0;
    virtual float getYPosition()=0;
    virtual void setCoordinates(float topLeftX, float topLeftY, float width, float height)=0;
    virtual void setState(State state)=0;
    virtual void assignParent(PGIComponent* parent)=0;
    virtual PGIComponent* getParent()=0;
    virtual int getChildCount()=0;
    virtual PGIComponent* getChild(int idx)=0;
};

With this in place, my GUI manager iterates through all the GUI root elements, calling the picker’s testComponent function. That function basically calls the component’s draw function with the desired color, and does some setup as well.

If an element is hit, the color picker instance stores the element’s index based on its color, and the manager calls PGIComponent’s pickElement function, passing it the picker. The whole process then repeats inside the parent component (in this case the panel). There are no problems with clipping, since the process is based on the rendered colors, and the parent’s transformations can still be applied to the child elements.
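The core of a colour picker like this is a reversible mapping between component indices and flat colours. A sketch of that mapping (the `PGIColor3ub` definition here is an assumption mirroring the type name in the interface above):

```cpp
#include <cstdint>

struct PGIColor3ub { uint8_t r, g, b; };       // assumed layout: 8 bits per channel

// With an 8-bit-per-channel back buffer, 24 bits give ~16.7M unique ids.
PGIColor3ub indexToColor(uint32_t idx) {
    return { uint8_t(idx & 0xFF),
             uint8_t((idx >> 8) & 0xFF),
             uint8_t((idx >> 16) & 0xFF) };
}

uint32_t colorToIndex(PGIColor3ub c) {
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}

// At pick time (sketch): draw every component flat-shaded with its id colour,
// then read back the pixel under the cursor, e.g.
//   unsigned char px[3];
//   glReadPixels(x, viewportHeight - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, px);
//   uint32_t hit = colorToIndex({px[0], px[1], px[2]});
```

Because the hit comes from what was actually rasterised, anything scissored or clipped away simply never produces its id colour, which is exactly why this scheme handles the clipped-button case.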

This design decision seems a bit weird, but I couldn’t find any better ideas. With it I can emulate the OpenGL picking mechanism without passing through every element in the chain, only those that are actually picked, and it still looks flexible. I would like to hear some feedback about it, since I’m not too sure.

5 Replies


TheNut 179 Jun 28, 2012 at 11:16

Personally I don’t use picking. It’s not portable and has its limitations. In my system, I use the mouse coordinates and navigate down the visual tree to determine which UI element was selected, and in some cases I do alpha checking (for a round button, for example). For 3D UIs, I unproject the mouse coordinates and perform the same lookup. You should never do selection by colour, though. What are you going to do when you replace your button with a texture image? Or add text to it? Or what if you have a panel with a bunch of controls inside and you want to detect when the panel was clicked, not its children?

If you’re interested in a solid UI system, check out WPF from Microsoft. If you know Win32, you know MFC was built on top of it as an improvement, and .NET Forms was the next advancement in UI libraries. WPF is the latest from Microsoft, and it is very good. There’s a bit to learn, but it covers everything you could want in a UI system. I’ve ported the important parts of WPF over to both C++ and JavaScript and, just by the structure of the UI system, it’s really easy to work with. If you play around with it for a bit, learn how the measurement and layout system works (similar to what you do, but a bit more advanced, as it supports alignments, margins, padding, etc.), and write your own framework to mimic it, you’ll pretty much have an awesome time developing sophisticated user interfaces.

Some tips:
1. Don’t do OpenGL drawing inside the GUI. Abstract drawing routines into a GraphicsContext class (create one for OpenGL in this case) and pass that into the “Render” method of the UI component. Let each user control decide how to draw itself using the drawing features available in the graphics context class.

2. Don’t do picking at your component level. You should have a user input system hidden from the application layer, which feeds mouse or touch events to your root visual layout (a panel in most cases). Let the panel listen for this event and automatically forward the events to its children or its currently focused element (for example, keyboard events are routed to the focused element).

3. Never manually assign a parent to a component. It’s bad practice. All containers (panels, stack panels, grids, etc.) should have a “Children” array. When you assign a component to that array, its parent member property should automatically be updated. One less thing the application developer has to worry about, and it avoids the temptation to just “float” your elements from one parent to another (which can cause internal problems with focused elements and event routing).
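The last tip can be sketched like this (hypothetical names, not the poster’s actual API): the container’s add method is the only place the parent pointer is ever written.

```cpp
#include <cstddef>
#include <vector>

class Component {
public:
    Component* parent() const { return parent_; }
private:
    Component* parent_ = nullptr;
    friend class Container;                    // only containers may re-parent
};

class Container : public Component {
public:
    void addChild(Component* child) {
        child->parent_ = this;                 // parent set automatically on insert
        children_.push_back(child);
    }
    std::size_t childCount() const { return children_.size(); }
private:
    std::vector<Component*> children_;
};
```

Since `parent_` is private and only `Container` is a friend, application code physically cannot “float” a component between parents by hand.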

Kaelb 101 Jun 28, 2012 at 13:02

@TheNut

  1. Don’t do OpenGL drawing inside the GUI. Abstract drawing routines to a GraphicsContext class (create one for OpenGL in this case) and pass that into the “Render” method of the UI component. Let each user control decide how to draw itself using the drawing features available in the graphics context class.

I have a class hierarchy that does the rendering. Each GUI element holds an instance of one of these classes and uses it to render itself, so the GUI doesn’t do the rendering directly. Passing a GraphicsContext class to the Render function seems like a good idea for maintaining a certain “theme” across my GUI; still, that GraphicsContext would have all the drawing methods, whereas in my implementation each GUI element has a specific object from the hierarchy that draws elements as squares/rounded shapes, etc.
Do you consider this bad, or just a different approach to your solution?
@TheNut

  2. Don’t do picking at your component level. You should have a user input system hidden from the application layer, which feeds mouse or touch events to your root visual layout (a panel in most cases). Let the panel listen for this event and automatically forward the events to its children or its currently focused element (for example, keyboard events are routed to the focused element).

Well, the picking isn’t really done by the GUI components; it’s my picker that does it with the colors. After drawing the GUI element in a single color to a buffer, it uses the (x, y) coordinates given by the user to check whether the GUI element was drawn there. Each component has a function to draw itself in a flat color (without textures or text, like you said), and this color rendering is usually done by the object belonging to the class hierarchy I referred to.

But as far as I understood, you’re telling me I should ask my GUI components whether they are hit by the (x, y) coordinates from the user input. Since the transformations are applied in a parent->child interaction as I navigate through my visual tree, I would need to maintain state about the accumulated parent->child transformations across the calls, and use them as an extra parameter for the intersection test. This seems good, since I won’t have to use OpenGL calls, and it leaves room to develop or learn a new 2D collision/intersection library.
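That tree walk might look like this (a sketch with hypothetical names): the parent’s rectangle check doubles as clipping, so a child that sticks out past its parent cannot be hit there, which is the same behaviour the colour picker was built to get.

```cpp
#include <vector>

struct Widget {
    float x, y, w, h;                          // rectangle relative to parent
    std::vector<Widget*> children;

    // Deepest widget under (px, py); ox/oy accumulate the parent translations.
    Widget* hitTest(float px, float py, float ox = 0, float oy = 0) {
        float ax = ox + x, ay = oy + y;        // absolute top-left corner
        if (px < ax || px >= ax + w || py < ay || py >= ay + h)
            return nullptr;                    // outside: children are clipped out too
        for (auto it = children.rbegin(); it != children.rend(); ++it)
            if (Widget* hit = (*it)->hitTest(px, py, ax, ay))
                return hit;                    // topmost (last-added) child wins
        return this;
    }
};
```

This is the “point in rectangle” approach TheNut describes: no rendering pass, just a recursive descent with the parent offset carried along.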

@TheNut

  3. Never manually assign a parent to a component. It’s bad practice. All containers (panels, stack panels, grids, etc.) should have a “Children” array. When you assign a component to that array, its parent member property should automatically be updated. One less thing the application developer has to worry about, and it avoids the temptation to just “float” your elements from one parent to another (which can cause internal problems with focused elements and event routing).

When I add a child component to a parent, the parent assigns itself as the parent of the child; that way I don’t have to do it in my “GUI init” function. It’s all done in the parent’s “add” function, so the application developer doesn’t need to worry about it :)

Thanks for the help, this gave me some cool and fresh ideas to work with.

TheNut 179 Jun 28, 2012 at 14:06

@Kaelb

Do you consider this bad, or just a different approach to your solution?

I’m not sure I follow this 100%. It sounds similar to the graphics context approach, but different in design. BTW, the graphics context isn’t designed to theme your GUI. Take a look here and here for example APIs. The graphics context provides an API for drawing stuff to the screen, like circles, rectangles, images, etc. If you want to theme your controls, you should provide them with a different texture/sprite sheet or set their brushes (check WPF for some examples). The graphics context is a convenient way to abstract the underlying drawing routines. If you wanted to render graphics using an OpenGL renderer, you would pass an OpenGL based graphics context. If you wanted a software based renderer, you would pass in a software graphics context. The key here is that there’s only one graphics context that you initialize and use everywhere. Every user control has access to the same drawing routines in order to perform complex renders. In my particular engine, I support a software graphics context as well as OpenGL 1.X (fixed function) and 2.X (programmable shader) graphics contexts. Very convenient to support multiple platforms.
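A sketch of that separation (names are illustrative, not WPF’s or TheNut’s actual API): controls draw through an abstract context, and backends are swapped by passing a different concrete context.

```cpp
#include <string>
#include <vector>

// One abstract drawing API; each backend (OpenGL, software, ...) subclasses it.
class GraphicsContext {
public:
    virtual ~GraphicsContext() = default;
    virtual void drawRect(float x, float y, float w, float h) = 0;
    virtual void drawCircle(float cx, float cy, float r) = 0;
};

// A control never touches GL directly; it only talks to the context it is given.
struct Button {
    float x, y, w, h;
    void render(GraphicsContext& gc) { gc.drawRect(x, y, w, h); }
};

// Stand-in backend (an OpenGL or software context would override the same way);
// this one just records calls, which also makes controls easy to unit test.
class RecordingContext : public GraphicsContext {
public:
    std::vector<std::string> calls;
    void drawRect(float, float, float, float) override { calls.push_back("rect"); }
    void drawCircle(float, float, float) override { calls.push_back("circle"); }
};
```

The key point from the post holds here: there is one context instance, initialised once and handed to every control, so all controls share the same drawing routines.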
@Kaelb

this seems good, since I won’t have to use openGL calls and leaves space to develop or learn a new 2D collision/intersection library.

I would say the biggest advantage comes from performance. You don’t have to waste time rendering another pass just for the UI. Doing simple point in rectangle checks is extremely fast in comparison.
@Kaelb

When I add a child component to a parent the parent assigns itself as a parent to the child,

Ah, ok. Your API exposes the method “assignParent”, which led me to believe there was some manual labour involved. You should perhaps “protect” that method so only containers are able to receive children.

Stainless 151 Jun 29, 2012 at 09:28

I use signals. You can write a simple signal class very easily, and it is so flexible that it makes writing GUIs a pleasure.

The basic structure I use is…

main context gets mouse event
sends to list of mouse consumers
each consumer has a list of children
checks children first, then checks itself
if a consumer eats the event, signal any connected code and flag consumed

This is a very simple system, but works really well. The signal class becomes really useful as you can use it to easily abstract events.

A button can have a single signal, or multiple depending on what you need. So if you only need a button pressed event, just connect to the button pressed signal. If you want to know that the button is held down, connect to the button held signal.
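A minimal signal class along those lines (a sketch, not any particular library’s API) might look like:

```cpp
#include <functional>
#include <vector>

template <typename... Args>
class Signal {
public:
    // connect any callable matching the signal's signature
    void connect(std::function<void(Args...)> slot) {
        slots_.push_back(std::move(slot));
    }
    // fire the signal: every connected slot runs in connection order
    void emit(Args... args) {
        for (auto& s : slots_) s(args...);
    }
private:
    std::vector<std::function<void(Args...)>> slots_;
};

// A button exposes one signal per event of interest, as described above:
struct Button {
    Signal<> pressed;
    Signal<float, float> dragged;
    void onMouseDown() { pressed.emit(); }     // fired once the event is consumed
};
```

Application code then just connects to the signals it cares about, e.g. `button.pressed.connect([]{ /* ... */ });`, without the button knowing anything about its listeners.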

You do need a container class to hold all the GUI elements, but I want one of those anyway, as that is the level that handles the layout of the GUI elements within the container.

geon 101 Jun 29, 2012 at 11:08

Wouldn’t it make more sense to just not have the button possibly be larger than the window? You can implement that constraint in your layout code, and it simplifies your picking.