Remote offscreen hardware rendering

Reedbeta 167 Aug 18, 2009 at 21:38

A few weeks ago, I posted about getting an offscreen, hardware-accelerated OpenGL context to do some offline GPGPU stuff. I’ve got the app working on my own desktop, but now I’m trying to get it running on a headless server. Let’s assume the server has a powerful enough video card for what I want. However, the app now has to run as a service, with no logged-in user, or occasionally be run by someone logged in over Windows Remote Desktop.

It’s well known that GL acceleration doesn’t work over Remote Desktop. But I don’t care about showing the rendered output in a window; I just want to render to memory and then process the result programmatically. A few Google searches turned up no one who seems to know, one way or the other, whether this can be done.

I messed around with EnumDisplayDevices a bit, but in Remote Desktop you can see only the remote mirroring device, not the real video card.
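
(In case it helps anyone reproduce this, the check is just a loop over the DISPLAY_DEVICE entries - a minimal sketch, not my exact code:)

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    DISPLAY_DEVICEA dd;
    dd.cb = sizeof(dd);
    // Walk the system's display devices; over Remote Desktop this
    // reports only the RDP mirror driver, not the physical card.
    for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); ++i)
    {
        printf("%s: %s (flags 0x%08lx)\n",
               dd.DeviceName, dd.DeviceString, dd.StateFlags);
    }
    return 0;
}
```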

I found VirtualGL, which seems to do what I want, although I don’t care about getting the image back to the client. But it’s for *nix only, and all our servers run Windows.

There’s also CUDA, which I don’t know much about, but it seems like overkill for what I’m doing. It looks tricky to get set up and I’m not sure if it will solve my problem in the first place.

Finally, as a fallback I could probably use Mesa, which looks like it’s software-only on Windows, but probably still better than Microsoft’s software renderer (I hope). I haven’t used Mesa before; is it pretty much a drop-in replacement for OpenGL?

The stuff I’m doing requires only OpenGL 1.1 features, with no extensions except possibly EXT_framebuffer_object to get an offscreen buffer. Does anyone have any experience with or advice about this sort of thing?
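
For concreteness, the kind of setup I mean is roughly this - a dummy, never-shown window just to get a WGL context, then render offscreen and read the pixels back. A minimal sketch; the EXT_framebuffer_object calls are elided, since those entry points have to be fetched with wglGetProcAddress first:

```cpp
// Link against opengl32, gdi32, and user32.
#include <windows.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main()
{
    // Dummy invisible window, used only to obtain a device context.
    HWND wnd = CreateWindowA("STATIC", "gl", WS_POPUP,
                             0, 0, 256, 256, NULL, NULL, NULL, NULL);
    HDC dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);

    // If this prints "GDI Generic", we got Microsoft's software renderer,
    // which is exactly what happens over Remote Desktop or in a service.
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));

    // ... create the FBO and renderbuffers here, draw, then read back:
    std::vector<unsigned char> pixels(256 * 256 * 4);
    glReadPixels(0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(wnd, dc);
    DestroyWindow(wnd);
    return 0;
}
```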

9 Replies


alphadog 101 Aug 19, 2009 at 14:15

Are you looking to roll your own solution that’s under your control, or do you want a commercial/OSS server-based rendering system like RealityServer? Is this for real-time rendering, like offloading a game’s rendering to another system, or for static images, like offloading a CAD render to a server?

Wernaeh 101 Aug 19, 2009 at 15:08

> There’s also CUDA, which I don’t know much about, but it seems like overkill for what I’m doing. It looks tricky to get set up and I’m not sure if it will solve my problem in the first place.

Actually, I think CUDA isn’t too difficult - if you can program in C, you can program in CUDA as well.

Setup merely consists of installing a driver with CUDA support and adding the specialized CUDA compiler, nvcc, to your build process.

I once gave an introductory talk on CUDA for a university assignment. If you want, I can send you the slides I had back then, along with the accompanying write-up.

I’m not sure, though, whether CUDA works from a program running as a service. You should also consider that CUDA ties you to a single graphics hardware vendor.
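
You can at least probe that quickly, though: the CUDA runtime API can report whether a device is visible from whatever context the process runs in. A minimal check, assuming the toolkit is installed and you link against the runtime library:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Quick probe: does this process (e.g. running as a service) see a
// CUDA-capable device at all?
int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0)
    {
        printf("No CUDA device visible: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("Found %d device(s); device 0 is a %s\n", count, prop.name);
    return 0;
}
```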

Maybe you should also have a look at OpenCL: http://www.khronos.org/opencl
To the best of my knowledge, though, widespread OpenCL support is still lacking.

Cheers,
- Wernaeh

Reedbeta 167 Aug 19, 2009 at 17:23

RealityServer is definitely way overkill for my needs. :) This is just a couple of in-house servers for building art assets. What I’m actually doing is computing ambient occlusion. I use OpenGL to rasterize a cubemap around each sample point and count the occluded pixels. So there are no textures, lighting, or shading - nothing but the geometry. I had been hoping to get hardware rasterization, but I could live with a decently fast software rasterizer - actually, even the Windows software OpenGL driver isn’t that bad for this. I don’t want to spend a lot of time setting up or maintaining this, so simple and minimal is best. I’m going to continue looking into Mesa.
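
The per-sample loop is about as simple as rendering gets; roughly this, where setFaceCamera and drawOccluderGeometry are placeholders for my camera setup and geometry submission:

```cpp
#include <GL/gl.h>
#include <vector>

// Placeholders for the app's own camera setup and geometry submission.
void setFaceCamera(const float pos[3], int face);
void drawOccluderGeometry();

// Render six 90-degree views around 'pos' and return the fraction of
// pixels covered by geometry -- the occlusion estimate for that sample.
float computeOcclusion(const float pos[3], int N /* face resolution */)
{
    std::vector<unsigned char> buf(N * N * 4);
    int occluded = 0, total = 0;

    for (int face = 0; face < 6; ++face)
    {
        glClearColor(0, 0, 0, 0);          // background = unoccluded
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        setFaceCamera(pos, face);          // 90-deg frustum along face axis
        glColor3f(1, 1, 1);                // no textures/lighting, geometry only
        drawOccluderGeometry();
        glReadPixels(0, 0, N, N, GL_RGBA, GL_UNSIGNED_BYTE, &buf[0]);

        for (int i = 0; i < N * N; ++i, ++total)
            if (buf[i * 4] != 0)           // red channel set => covered
                ++occluded;
    }
    return float(occluded) / float(total);
}
```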

alphadog 101 Aug 19, 2009 at 19:54

Okay, I get it now.

Why does the app have to run as a Windows service? Services run in non-interactive desktops by default, so they have access to nothing but GDI on a generic VGA device. You can get around that on anything older than Vista by setting “Allow Service to Interact with Desktop” to yes on the service and keeping the server permanently logged in as some user and left unlocked. Basically, if you don’t want to log on manually all the time, a nasty script can force the logon/unlock. At that point, the service will theoretically have access to a full desktop.

The downside, obviously, is that it’s a security hazard, but that may be acceptable depending on your circumstances.
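
For what it’s worth, that checkbox is just the SERVICE_INTERACTIVE_PROCESS flag on the service type, so if you install the service programmatically it’s one extra flag at CreateService time. A sketch, with the name and path left up to you (the flag is only honored for services running as LocalSystem):

```cpp
#include <windows.h>

// Install a service with "interact with desktop" enabled; this is the
// programmatic equivalent of ticking the checkbox in the services MMC.
// Only valid for services that run as LocalSystem.
bool InstallInteractiveService(const char* name, const char* exePath)
{
    SC_HANDLE scm = OpenSCManagerA(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
    if (!scm) return false;

    SC_HANDLE svc = CreateServiceA(
        scm, name, name, SERVICE_ALL_ACCESS,
        SERVICE_WIN32_OWN_PROCESS | SERVICE_INTERACTIVE_PROCESS,
        SERVICE_AUTO_START, SERVICE_ERROR_NORMAL,
        exePath, NULL, NULL, NULL,
        NULL, NULL);   // NULL account => LocalSystem

    if (svc) CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return svc != NULL;
}
```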

Next step up may be something like:

http://www.pipelinefx.com/products/features.php

http://www.liquiddreamsolutions.com/web5/index.php

http://www.renderpal.com/features.php

With the above, you still write the renderer, but the software lets you manage render jobs across many nodes.

SmokingRope 101 Aug 20, 2009 at 02:50

As a somewhat convoluted approach, perhaps you could write a screensaver that an application run over Remote Desktop can communicate with. The screensaver will still have access to your video card, even when no one is logged in. You may have some tricky work making the two interact (I have no experience writing screensavers, so there may be hoops to jump through).

TheNut 179 Aug 20, 2009 at 03:12

MesaGL is interchangeable with Windows OpenGL, although last I tried, it was flaky to compile and work with under Visual Studio. It can perhaps be made to work there with a bit of grunt work, but it works off the bat with MinGW.

CUDA isn’t bad, but I’m not fond of the development paradigm. It tries to open up the hardware to standard C, but I find myself still resorting to graphics rendering techniques to get things done; I’d rather just work with shaders to do the same thing. The CUDA footprint, like Cg’s, is also way too large for my tastes.

If you have the framework in place, I would suggest you write a socket layer where clients can poll the render servers for updates and pull the data across when you need it. If you are setting up a render farm, it’s the way to go; otherwise, micromanaging each server will drive you nuts.
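
The polling side can be dead simple. A toy Winsock sketch - the one-line “STATUS” protocol, port, and address here are made up and stand in for whatever job protocol you define:

```cpp
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Toy poll: connect to a render node, ask for its status, print the reply.
int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);                      // arbitrary port
    addr.sin_addr.s_addr = inet_addr("192.168.0.10"); // a render node

    if (connect(s, (sockaddr*)&addr, sizeof(addr)) == 0)
    {
        send(s, "STATUS\n", 7, 0);
        char reply[512];
        int n = recv(s, reply, sizeof(reply) - 1, 0);
        if (n > 0) { reply[n] = '\0'; printf("node says: %s", reply); }
    }
    closesocket(s);
    WSACleanup();
    return 0;
}
```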

alphadog 101 Aug 20, 2009 at 12:03

@TheNut

> If you have the framework in place, I would suggest you write a socket layer where clients can poll the render servers for updates and pull the data across when you need it. If you are setting up a render farm, it’s the way to go; otherwise, micromanaging each server will drive you nuts.

Agreed, although I think that’s what he’s doing. His problem was that he was trying to set up the server side as a Windows service, which by default will not give you access to the accelerated graphics hardware; it uses a logical VGA device.

What I did once was futz with the service settings and force an autostart/user logon so that the server is logged in at all times - no sleeping, no screensaver, just a full login. Needless to say, it’s a big security risk that many sysadmins won’t allow or will dislike.

Another problem is that MS is phasing this workaround out: you can’t do it in Vista. I suspect the same is true of Server 2008, although I don’t know for sure.

You know, Ring 0 and Shatter attacks and all… :)

alphadog 101 Aug 20, 2009 at 12:20

If you want to know more about why Vista can’t do this, read:

http://www.microsoft.com/whdc/system/vista/services.mspx

Got that from a buddy of mine, who also suggested that you could do it with a pair of apps. Basically, it involves calls to native library functions, in particular CreateProcessAsUser() from advapi32.dll. Your service would run in the background in Session 0 as SYSTEM and use that function to monitor/spawn/re-spawn another app in the logged-in user’s session.
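
The skeleton would look roughly like this - the SYSTEM service grabs the console user’s token and launches a worker into that session. SpawnInUserSession and the worker path are placeholders, and error handling is omitted:

```cpp
#include <windows.h>
#include <wtsapi32.h>
#pragma comment(lib, "wtsapi32.lib")
#pragma comment(lib, "advapi32.lib")

// From inside a SYSTEM service: spawn a helper app in the console user's
// session, where it can see the real display driver.
bool SpawnInUserSession(const char* exePath)
{
    DWORD session = WTSGetActiveConsoleSessionId();
    HANDLE token = NULL;
    if (!WTSQueryUserToken(session, &token))
        return false;   // no one is logged on at the console

    STARTUPINFOA si = { sizeof(si) };
    si.lpDesktop = (LPSTR)"winsta0\\default";   // the interactive desktop
    PROCESS_INFORMATION pi = {};

    BOOL ok = CreateProcessAsUserA(
        token, exePath, NULL, NULL, NULL, FALSE,
        CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);

    if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
    CloseHandle(token);
    return ok == TRUE;
}
```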

Googling that function name gave me more reading than I have time for, but it’s all yours, Reedbeta…

Wernaeh 101 Aug 20, 2009 at 13:05

> What I’m actually doing is computing ambient occlusion. I use OpenGL to rasterize a cubemap around each sample point and count the occluded pixels. So there are no textures, lighting, or shading - nothing but the geometry.

In this case, I’d strongly advise against CUDA; using a premade rasterizer such as OpenGL is the easier approach here.

Cheers,
- Wernaeh