Remote offscreen hardware rendering
Posted 18 August 2009 - 09:38 PM
It's well known that GL acceleration doesn't work over Remote Desktop. But I don't care about showing the rendering output in a window; I just want to render to memory and then process the result programmatically. A few Google searches turned up no one who seems to know, one way or the other, whether this can be done.
I messed around with EnumDisplayDevices a bit, but in Remote Desktop you can see only the remote mirroring device, not the real video card.
I found VirtualGL, which seems to do what I want, although I don't care about getting the image back to the client. But it's for *nix only, and all our servers run Windows.
There's also CUDA, which I don't know much about, but it seems like overkill for what I'm doing. It looks tricky to get set up and I'm not sure if it will solve my problem in the first place.
Finally, as a fallback I could probably use Mesa, which looks like it's software-only in Windows, but probably still better than Microsoft's software renderer (I hope). I haven't used Mesa before; is it pretty much just a drop-in replacement for OpenGL?
The stuff I'm doing requires only OpenGL 1.1 features, with no extensions except possibly EXT_framebuffer_object to get an offscreen buffer. Does anyone have any experience with or advice about this sort of thing?
Posted 19 August 2009 - 02:15 PM
Posted 19 August 2009 - 03:08 PM
Actually, I don't think CUDA is too difficult: if you can program in C, you can program in CUDA as well.
Setup merely consists of installing a driver with CUDA support and adding the specialized CUDA compiler, nvcc, to your build setup.
I once gave an introductory talk on CUDA for a university assignment. If you want, I can provide you with the slides I had back then, along with the accompanying write-up.
I'm not sure, though, whether CUDA works within a program running as a service. You should also consider that CUDA limits you to a single graphics hardware vendor.
Maybe you should also have a look at OpenCL: http://www.khronos.org/opencl
To the best of my knowledge, though, widespread OpenCL support is still lacking.
Posted 19 August 2009 - 05:23 PM
Posted 19 August 2009 - 07:54 PM
Why does it have to run as a service? Services run in non-interactive desktops by default, so they don't have access to anything but GDI on a generic VGA device. On anything earlier than Vista you can get around that by setting "Allow service to interact with desktop" to yes on the service and keeping the server logged in as some user, unlocked. For a system where you don't want to log on manually all the time, a nasty script can be used to force the logon/unlock. At that point, the service will theoretically have access to a full desktop.
The downside, obviously, is that it's a security hazard, but that may be acceptable depending on your circumstances.
Next step up may be something like:
For the above, you still write the renderer, but this software allows you to manage render jobs over many nodes.
Posted 20 August 2009 - 02:50 AM
Posted 20 August 2009 - 03:12 AM
CUDA isn't bad, but I'm not fond of that development paradigm. They try to open up the hardware to standard C, but I find myself still resorting to graphics rendering techniques to get things done. I'd rather just work with shaders to do the same thing. The CUDA footprint, like Cg's, is also way too large for my tastes.
If you have the framework in place, I'd suggest writing a socket implementation where clients can poll the render servers for updates and pull the data across when you need it. If you're setting up a render farm, it's the way to go; otherwise, micromanaging each server will drive you nuts.
Posted 20 August 2009 - 12:03 PM
Agreed, although I think that's what he's doing. His problem was that he was trying to set up the server side as a Windows service, which under default behavior will not give you access to the accelerated graphics hardware; it uses a logical VGA device.
What I did once was futz with the service settings and force an autostart/user logon so that the server is logged in at all times: no sleeping, no screensaver, just a full login. Needless to say, it's a big security risk that many sysadmins won't allow or will dislike.
Another problem is that MS is phasing this workaround out. You can't do it in Vista, and I suspect the same is true of Server 2008, though I don't know for sure.
You know, Ring 0 and Shatter attacks and all...
Posted 20 August 2009 - 12:20 PM
Got that from a buddy of mine, who suggested you could also do it with a couple of apps. Basically, it involves calls to native library functions, in particular CreateProcessAsUser() from advapi32.dll. Your service would run in the background in session 0 as SYSTEM, and use that function to monitor/spawn/respawn another app in user space.
Googling that function name gave me more reading than I have time for, but it's all yours, Reedbeta...
Posted 20 August 2009 - 01:05 PM
In this case, I'd strongly advise against CUDA; using a premade rasterizer such as OpenGL is the easier approach here.