I have still not implemented a robust constrained physics system in my
game engine (right now the physics is entirely unconstrained).
My engine progress has been VERY SLOW lately (for the past year :().
I am a bit worried about how the hierarchical scene graph is going to
tie in with the Physics engine. Currently, if I have a scene node and if
I rotate or move it, the child nodes rotate around it as if they were
connected by a fixed constraint.
Is this the correct way to tie these two systems together? Or should I
make these fixed constraints explicit?
I am even still doubting the usefulness of the scene graph altogether.
I have a separate spatial partitioning system in place for rendering and
collision detection, but have not really utilized the scene graph yet. If
I create bone trees for animation, it seems like they wouldn't have to go
into the scene graph and would be better off as a specialized tree,
but then I wouldn't know how to go about making each of the body parts
separate RigidBody entities. So I am a bit confused.
Maybe someone can help me sort this out, thanks!
I would vote for putting the bones in the scene graph. If you need to
access them at a future time store the references on those bones in a
separate place. Make your animation reference this place.
As I see it, a node will be moved either by the animation or by the
physics engine. I would probably duplicate the hierarchy in the
physics engine. The hierarchy in the physics engine is more complex, with
degrees of freedom, elastic relations, etc… This has no place in the
graphics scene graph. I would reference my graphics node from my physics
node and keep them synchronized (get and set on the graphics node from
the physics node)… I'm confusing myself too.
I would second everything MJeannig said with some additional comments.
Combining physics with your graphics can indeed raise some confusing
questions and the answers are often made difficult by how much time we
have spent on the graphics side of the issue; we tend to view the world
from that technological perspective. As MJeannig suggested, keep the
graphics separate from the physics but realize they are aspects of the
same thing. The question immediately arises as to how they are
coordinated. The physics aspect shares elements of the graphics (for
example geometry) but has additional defining information not present in
the graphics (for example mass, stiffness, etc.). I would suggest a third
principal concept (a class, if you are in C++) that incorporates
(creates/destroys/manages) both the graphical and physics aspects and
coordinates their interaction. For example, the mining of geometric
information from the scenegraph for building the physics counterpart and
connecting the physics transforms to their corresponding graphic
transforms in the scenegraph (as MJeannig suggested) to name a couple of
tasks. This third concept also provides a unified view of the entity as
viewed by the rest of the application.
You mentioned a specific concern about the child nodes rotating in
response to physics control of a joint. This is fine provided the
context of the transform is the same in both the graphics and physics
systems. For example, if the transformation at a scenegraph node is
interpreted as local (in the coordinate system of the proximal body or
graphic), then the transform as provided by the physics system should be
local as well. It all works out. I have a real-time physics engine that
I license and this is how I have approached building the demos and
incorporating the engine into third party applications such as CGI
training simulations and human/vehicle models.
Hope this helps. Would be glad to answer any other questions.
(Mr.) Kristen Overmyer