My PhD 3D engine

sebh 101 Jan 23, 2010 at 15:00

[Screenshot: 10-01-22.jpg]

Description
I am going to present the engine I have developed for all my experiments during my PhD thesis in virtual reality. My subject is: “Using Gaze Tracking Systems and Visual Attention Models to Improve Interaction and Rendering of Virtual Environments”.

My PhD is about virtual reality, so I needed an engine to render my Virtual Environments (VE) and conduct my experiments. I like to do things on the GPU, so almost everything is computed on the GPU. I wanted the engine to display point and spot lights that could be static or dynamic, and I wanted the models inside the environment to be physically simulated. Every experiment I was going to conduct would be very different, so the engine had to be easily scriptable. I also needed to replay all experiment sessions, so I wanted to be able to replay recorded navigations and interactions in order to apply costly algorithms and study users’ gaze behavior. Finally, I wanted to be able to create my own VEs very easily. I developed this engine from scratch over two months in the summer of 2008.

Here are the features:

  • Virtual environments are created in Maya. Mental Ray computes a lightmap containing global illumination only. I have developed my own exporter/file format for the meshes, Phong materials, and point and spot lights (a rough sketch of what such a format could look like follows this list).

  • The renderer is an OpenGL/GLSL-based z-prepass renderer (the pass structure is sketched after this list). Static lights come from the exported VE, and dynamic lights can then be added from the script. Concerning shadows, I use a simple depth map for spot lights and a virtual depth cube map for point lights, with only native hardware shadow map filtering. Not very eye-candy, but it is enough for the experiments I needed to conduct. Luminance adaptation is also implemented.

  • Physics simulation is done using the PhysX API (rigid bodies only).

  • Scripting is supported using Lua together with LuaBind (a binding sketch follows this list). Lua is a very powerful scripting language, and I did a lot of things with it and my engine interface: simple navigation, a shooting game, objects following waypoints, etc.

  • The engine is able to record and replay any session using an event-based process (see the record/replay sketch after this list).

  • Because my PhD is about gaze tracking, the engine currently supports the Tobii X50 gaze-tracking hardware.

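The post does not describe the exported file format itself, so here is only a minimal sketch of what such a binary mesh/material/light format could look like. All struct names, fields, and the two-UV-set layout are my own assumptions for illustration, not the actual format:

```cpp
// Hypothetical on-disk layout for an exported scene chunk (illustrative only).
#include <cstdint>
#include <cstdio>
#include <vector>

struct PhongMaterial {
    float diffuse[3];
    float specular[3];
    float shininess;
    char  lightmapPath[128];   // path to the Mental Ray-baked lightmap
};

struct PointLight { float position[3]; float color[3]; float radius; };

struct MeshHeader {
    uint32_t vertexCount;
    uint32_t indexCount;
    uint32_t materialIndex;
};

// A vertex with two UV sets: one for the diffuse texture, one for the lightmap.
struct Vertex {
    float position[3];
    float normal[3];
    float uvDiffuse[2];
    float uvLightmap[2];
};

bool writeMesh(FILE* f, const MeshHeader& h,
               const std::vector<Vertex>& verts,
               const std::vector<uint32_t>& indices)
{
    return fwrite(&h, sizeof h, 1, f) == 1
        && fwrite(verts.data(),   sizeof(Vertex),   verts.size(),   f) == verts.size()
        && fwrite(indices.data(), sizeof(uint32_t), indices.size(), f) == indices.size();
}
```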
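For the renderer, the general z-prepass structure (not sebh's actual code, and the two draw functions are placeholders I made up) typically looks like this in OpenGL:

```cpp
// Minimal z-prepass outline (a sketch, not the engine's actual renderer).
// Pass 1 lays down depth only; pass 2 shades with GL_EQUAL so only visible
// fragments run the expensive GLSL lighting/shadowing shaders.
#include <GL/gl.h>

// Placeholders for the engine's real draw calls (assumptions, not real functions).
void renderGeometryDepthOnly() { /* draw all opaque geometry, depth only */ }
void renderGeometryShaded()    { /* bind lighting shaders and shadow maps, then draw */ }

void renderFrame()
{
    // --- Depth pre-pass: no color writes, depth writes on ---
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    renderGeometryDepthOnly();

    // --- Shading pass: color writes on, depth already resolved ---
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);   // only fragments that "won" the pre-pass get shaded
    renderGeometryShaded();

    // Restore default state for subsequent passes.
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}
```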
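The Lua/LuaBind integration is not shown in the post either; a typical binding looks roughly like the sketch below. The `Entity` class, its method, and the script name `experiment.lua` are invented for the example:

```cpp
// Exposing a C++ class to Lua with LuaBind (illustrative sketch).
#include <lua.hpp>
#include <luabind/luabind.hpp>

class Entity {
public:
    void setPosition(float x, float y, float z) { px = x; py = y; pz = z; }
private:
    float px, py, pz;
};

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    luabind::open(L);

    luabind::module(L)[
        luabind::class_<Entity>("Entity")
            .def(luabind::constructor<>())
            .def("setPosition", &Entity::setPosition)
    ];

    // experiment.lua (assumed name) could then do:
    //   e = Entity()
    //   e:setPosition(1.0, 2.0, 0.5)
    luaL_dofile(L, "experiment.lua");

    lua_close(L);
    return 0;
}
```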
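Finally, an event-based record/replay log can be as simple as time-stamped events written to a file and fed back to the engine later, so costly gaze-analysis algorithms can run offline on the same session. Again, the event types and fields below are assumptions, not the engine's actual format:

```cpp
// Sketch of an event-based record/replay log (names and fields are assumptions).
#include <cstdint>
#include <fstream>
#include <vector>

enum class EventType : uint8_t { CameraMove, KeyPress, GazeSample, Interaction };

struct Event {
    double    time;      // seconds since session start
    EventType type;
    float     data[4];   // payload, e.g. a position or a gaze point
};

// Record: append every event as it happens.
void record(std::ofstream& out, const Event& e) {
    out.write(reinterpret_cast<const char*>(&e), sizeof e);
}

// Replay: load the log and feed events back to the engine in time order.
std::vector<Event> load(std::ifstream& in) {
    std::vector<Event> events;
    Event e;
    while (in.read(reinterpret_cast<char*>(&e), sizeof e))
        events.push_back(e);
    return events;
}
```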
I hope that was not too long to read. There is a simple video on my YouTube channel: http://www.youtube.com/user/hillairesebastien (not very demonstrative, I have to admit).

I am not releasing the source code right now, but if you still want it, or just some pieces of code, contact me.
There are more details and screenshots about the engine on my blog: http://sebh-blog.blogspot.com/.

To conclude, I had a lot of fun doing this, and it was very interesting to create a simple engine that uses all these technologies together! I am currently changing the renderer to a light pre-pass renderer, and the performance is really impressive.

My website: sebastien.hillaire.free.fr

6 Replies


TheNut 179 Jan 23, 2010 at 17:45

Your lighting is very well done. The gaze tracking sounds neat too, and I’m certain your paper will shed light on the technology’s potential for use in future televisions and monitors. Perhaps the next best thing after 3D television calms down, who knows :)

sebh 101 Jan 23, 2010 at 20:08

Thank you very much!
Yeah, I like to think that gaze tracking could be the next big thing after consumer stereo TV (which still isn’t here). Maybe the next interface after multi-touch screens. Moreover, Tobii gaze-tracking systems are really accurate and robust to head movements. It’s a beautiful piece of hardware! However, it is still too expensive. If it were cheaper, I am pretty sure it would be a nice interface to have on a PC.

TaggM 101 Jan 23, 2010 at 20:10

Great coincidence! Last week, I mused about integrating gaze-tracking technology into Virtual Environment rendering – think something like this http://www.stormingmedia.us/13/1341/A134115.html for Apache helicopter pilots. When the gaze drifts more than 5% from the display’s center, the VE slowly shifts the scene rendering in that direction, accelerating as the gaze moves further from the center (a small sketch of this rule follows the list below). There are three approaches to gaze tracking:

  • A camera capturing eye movement is the preferred option because it does not require head movement. For optimal performance, it would need a 5-point look-and-blink calibration (each corner, and the center of the display).
  • Head tracking from left and right sensors on the chair or shoulders. This is perfect for people who already use headsets (headphones with an integrated boom microphone). It would also be good for merchandising, because the sensors could be bundled with modified headsets.
  • Display-mounted tracking, using targets worn on the shoulders or head.
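The dead-zone-plus-acceleration panning rule described above could be sketched as follows; the 5% threshold comes from the post, while the function, parameter names, and the linear speed ramp are my own assumptions:

```cpp
// Sketch of gaze-drift panning: if the gaze point leaves a dead zone around
// the screen center, pan the view toward it, faster the further out it is.
#include <cmath>

// gazeX/gazeY are normalized to [-1, 1] relative to the display center.
void updatePan(float gazeX, float gazeY, float dt, float& yaw, float& pitch)
{
    const float deadZone = 0.05f;   // "more than 5% from the display's center"
    const float maxSpeed = 1.0f;    // radians per second near the screen edge (assumed)

    float dist = std::sqrt(gazeX * gazeX + gazeY * gazeY);
    if (dist <= deadZone) return;

    // Speed grows with distance beyond the dead zone (simple linear ramp).
    float speed = maxSpeed * (dist - deadZone) / (1.0f - deadZone);
    yaw   += (gazeX / dist) * speed * dt;
    pitch += (gazeY / dist) * speed * dt;
}
```
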
TaggM 101 Jan 23, 2010 at 20:30

Vision tracking or a bundled headset with shoulder sensors would be good for portable displays such as helmets and eyeglasses.

Instead of making people move their heads and eyeballs, mobile-phone and tablet-type devices could pan scenes by analysing internal gyroscopes or other sensors. We already see a little of this with motion-sensing mobile phones.

sebh 101 Jan 23, 2010 at 20:34

Hi!
You mean something like this: http://www.youtube.com/watch?v=3pRWYE2LRhk ?
:)

And concerning gaze-tracking systems, there are existing ones that match your description: ASL or FaceLAB. In that case, you need extra hardware to track the head position (as compared to Tobii). I am not saying this is bad; in my opinion, the need will depend on the use case.
Then there is the question of intrusiveness. Maybe you are right: people would accept wearing something, as they do with a headset.

TaggM 101 Feb 12, 2010 at 00:35

Yes, that’s a cool example, sebh!