I am going to present the engine I have developed for all my experiments during my PhD thesis in virtual reality. My subject is:
“Using Gaze Tracking Systems and Visual Attention Models to Improve
Interaction and Rendering of Virtual Environments”.
My PhD is about virtual reality, so I needed an engine to render my Virtual Environments (VE) and conduct my experiments. I like to do things on the GPU, so almost everything is computed on the GPU. I wanted the engine to display point and spot lights that could be static or dynamic. Also, I wanted the models inside the environment to be simulated physically. Every experiment was going to be very different, so I needed the engine to be easily scriptable. I also needed to replay all experiment sessions, so I wanted to be able to play back recorded navigations and interactions in order to apply costly algorithms and study users’ gaze behavior. Finally, I wanted to be able to create my own VEs very easily. I developed this engine over two months in the summer of 2008, from scratch.
Here are the features:
Virtual environments are created in Maya. Mental Ray computes a lightmap containing global illumination only. I have developed my own exporter and file format for the meshes, Phong materials, and point and spot lights.
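To give an idea, a minimal chunk-based layout for such a format could look like the sketch below. The chunk names and fields here are purely illustrative, not my actual format:

```cpp
#include <cstdint>

// Illustrative sketch only: a minimal chunk-based scene file layout.
struct ChunkHeader {
    uint32_t type;      // e.g. a tag for mesh, material or light data
    uint32_t sizeBytes; // payload size, so unknown chunks can be skipped
};

struct PhongMaterial {
    float diffuse[3];
    float specular[3];
    float shininess;
    char  lightmapName[64]; // baked GI lightmap computed by Mental Ray
};

struct Light {
    uint8_t type;          // 0 = point, 1 = spot
    float   position[3];
    float   direction[3];  // used by spot lights only
    float   color[3];
    float   range;
    float   spotCutoffDeg; // spot cone half-angle
};
```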
The renderer is an OpenGL/GLSL-based zPrePass renderer. Static lights come from the exported VE, and dynamic lights can then be added from the script. Concerning shadows, I use a simple depth map for spot lights and a virtual depth cube map for point lights. I only use native hardware shadow map filtering. Not very eye candy, but it is enough for the experiments I needed to conduct. Luminance adaptation is also simulated.
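For readers not familiar with the technique: a zPrePass renderer first lays down depth only, then shades with an equal depth test so each visible pixel is shaded exactly once. Here is a minimal OpenGL sketch of the idea (drawScene, bindLightingShader and spotShadowMapTex are placeholders, not my engine's actual interface), including the texture state for native hardware shadow map filtering:

```cpp
#include <GL/gl.h>

// Placeholders for illustration:
void drawScene();          // submits the VE geometry
void bindLightingShader(); // GLSL shading: lightmap GI + dynamic lights
extern GLuint spotShadowMapTex;

void renderFrame()
{
    // Pass 1: depth only, color writes disabled.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawScene();

    // Pass 2: full shading. The depth buffer is already final, so
    // only the visible fragment of each pixel passes the test.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    bindLightingShader();
    drawScene();
}

// Native hardware shadow map filtering (GL 1.4 / ARB_shadow): the GPU
// performs the depth comparison, and most hardware adds 2x2 PCF for free.
void setupShadowCompare()
{
    glBindTexture(GL_TEXTURE_2D, spotShadowMapTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // In GLSL, sample through a sampler2DShadow with shadow2DProj().
}
```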
Physics simulation is done using the PhysX API (rigid bodies only).
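For reference, setting up a dynamic rigid body with the PhysX 2.x API of that era looks roughly like this. This is a sketch from memory, not my engine's actual integration:

```cpp
#include <NxPhysics.h> // PhysX 2.x SDK header

void physicsSketch()
{
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    NxScene* scene = sdk->createScene(sceneDesc);

    // A dynamic 1 m box dropped from 5 m.
    NxBoxShapeDesc boxDesc;
    boxDesc.dimensions = NxVec3(0.5f, 0.5f, 0.5f); // half extents
    NxBodyDesc bodyDesc;
    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&boxDesc);
    actorDesc.body    = &bodyDesc; // non-null body => dynamic actor
    actorDesc.density = 10.0f;
    actorDesc.globalPose.t = NxVec3(0.0f, 5.0f, 0.0f);
    scene->createActor(actorDesc);

    // Per frame:
    scene->simulate(1.0f / 60.0f);
    scene->flushStream();
    scene->fetchResults(NX_RIGID_BODY_FINISHED, true); // block until done
}
```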
Scripting is supported using Lua together with LuaBind. Lua is a very powerful scripting language, and I did a lot of things with it and my engine interface: simple navigation, a shooting game, objects following waypoints, etc.
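To give an idea of how little glue this needs, exposing a C++ function to the session scripts with LuaBind looks roughly like this (addPointLight is a made-up example, not my engine's real interface):

```cpp
#include <lua.hpp>
#include <luabind/luabind.hpp>

// Hypothetical engine function we want scripts to call.
void addPointLight(float x, float y, float z, float r, float g, float b);

void initScripting()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    luabind::open(L); // install LuaBind into this state

    luabind::module(L)
    [
        luabind::def("addPointLight", &addPointLight)
    ];

    // A session script can now call addPointLight(0, 2, 0, 1, 1, 1).
    luaL_dofile(L, "experiment.lua");
}
```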
The engine is able to record and replay any session using an event system.
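The idea is the classic one: every input or interaction is timestamped and appended to a log, and replay just feeds the same events back at the recorded times. A minimal sketch, where the Event payload is only a placeholder:

```cpp
#include <vector>
#include <cstddef>

// Placeholder payload; a real log would record navigation,
// interaction and gaze events with their own data.
struct Event {
    double time;    // seconds since session start
    int    type;    // e.g. key press, camera move, gaze sample...
    float  data[4];
};

struct EventLog {
    std::vector<Event> events;
    std::size_t cursor = 0;

    void record(const Event& e) { events.push_back(e); }

    // During replay, emit every event whose timestamp has been reached.
    template <class Handler>
    void replay(double now, Handler handle) {
        while (cursor < events.size() && events[cursor].time <= now)
            handle(events[cursor++]);
    }
};
```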
Because my PhD is about gaze tracking, the engine currently supports the Tobii X50 gaze tracking hardware.
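I will not paste the Tobii SDK calls here, but conceptually the engine just receives 2D gaze samples and smooths them before using them, something like this hypothetical wrapper (the filter constant is an arbitrary choice):

```cpp
// Hypothetical wrapper; the real Tobii X50 API is not shown here.
// Gaze samples arrive in normalized screen coordinates.
struct GazeSample { float x, y; bool valid; };

struct GazeFilter {
    float fx = 0.5f, fy = 0.5f; // filtered gaze point, starts at center
    float alpha = 0.3f;         // smoothing factor (assumption)

    void push(const GazeSample& s) {
        if (!s.valid) return;     // e.g. during blinks
        fx += alpha * (s.x - fx); // simple exponential smoothing
        fy += alpha * (s.y - fy);
    }
};
```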
I hope that was not too long to read. There is a simple video on my website (not very demonstrative, I have to admit).
I am not releasing the source code right now, but if you still want it, or just some pieces of code, contact me.
There are more details and screenshots about the engine on my blog.
To conclude, I had a lot of fun doing this, and it was very interesting to create a simple engine that uses all these technologies together! I am currently changing the renderer to a light pre-pass renderer, and the performance is really impressive.
My website: sebastien.hillaire.free.fr
Your lighting is very well done. The gaze tracking sounds neat too, and I’m certain your paper will shed light on the technology’s potential for use in future televisions and monitors. Perhaps the next best thing after 3D television calms down, who knows :)
Thank you very much!
Yeah, I like to think that gaze tracking could be the next big thing after consumer stereo TV (which still isn’t here). Maybe the next interface after multi-touch screens. Moreover, Tobii gaze tracking systems are really accurate and robust to head movements. It’s a beautiful piece of hardware! However, it is still too expensive. If it were cheaper, I am pretty sure it would be a nice interface to have on a PC.
Great coincidence! Last week, I mused about integrating gaze tracking technology into Virtual Environment rendering; think of something like the helmet-mounted displays used by Apache helicopter pilots. When the gaze drifts more than 5% from the display’s center, the VE slowly shifts the scene rendering in that direction, accelerating as the gaze moves further from the center. Three approaches to gaze tracking:
Vision-based tracking, or a bundled headset with shoulder sensors, would be good for portable displays such as helmets and eyeglasses.
Instead of making people move their heads and eyeballs, mobile-phone and tablet type devices could pan scenes by analysing internal gyroscopes or other sensors. We already see this a little with motion-sensing mobile phones.
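That dead-zone panning rule is easy to prototype; here is a sketch of the idea. Only the 5% dead zone comes from my description above, and the quadratic response curve is just one plausible choice:

```cpp
#include <cmath>

// Gaze is in normalized coordinates; (0.5, 0.5) is the screen center.
// Returns a pan velocity that grows as the gaze moves off-center.
void gazePanVelocity(float gazeX, float gazeY,
                     float& panVelX, float& panVelY)
{
    const float deadZone = 0.05f; // 5% of the display: no panning inside
    const float maxSpeed = 1.0f;  // scene units per second (tune to taste)

    float dx = gazeX - 0.5f;
    float dy = gazeY - 0.5f;
    float dist = std::sqrt(dx * dx + dy * dy);

    if (dist <= deadZone) { panVelX = panVelY = 0.0f; return; }

    // Speed accelerates (here quadratically) toward the screen edge.
    float t = (dist - deadZone) / (0.5f - deadZone);
    float speed = maxSpeed * t * t;
    panVelX = speed * dx / dist;
    panVelY = speed * dy / dist;
}
```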
You mean something like this:
And concerning gaze tracking systems, there are existing ones that match your description: ASL or FaceLAB. In that case, you need extra hardware to track head position (as compared to Tobii). I am not saying this is bad; in my opinion, the need will depend on the use case.
Then there is the question of intrusiveness. Maybe you are right: people would accept wearing something, like they do with headsets.
Yes, that’s a cool example, sebh!