GPU vs. CPU
Posted 15 January 2012 - 08:18 PM
Obviously, the CPU and GPU handle different parts of the whole gaming pipeline. My question is: do you explicitly control the parceling-off of the engine's functions to the CPU and GPU, or are you SUPPOSED to? Or do the various coding interfaces (the language itself, like C++, the various APIs, etc.) know what to do? It seems an awfully confusing task, specifying which parts of the code use which processor at what time. I'm researching all this and would love input.
Posted 15 January 2012 - 11:55 PM
It wasn't until just a few years ago that you could write "general purpose" code for the GPU at all, rather than just graphics rendering.
The GPU still isn't suitable for all kinds of processing. It is very different from the CPU: it behaves like a huge number of parallel processors that all do exactly the same work on different data. They all work in lock-step, so if your code has conditions, it has to evaluate all data on all branches and just throw away the results it doesn't want.
(Correct me if I'm wrong. I have no actual experience of GPU programming.)
Posted 16 January 2012 - 12:02 AM
Perhaps this is simplistically put, but basically, when you employ API code, it goes to the GPU, and all other (say C++) code goes to the CPU? I realize there are probably nuances to this, but is that the general gist? What I'm curious about is whether you literally have to manhandle portions of code and tell each portion which processor type to be processed on. I mean, if API code is automatically geared to use the GPU, then I don't have to get very "hardware low level"-ish, something I'd like to avoid.
Hmm... I see what you mean. I guess my next questions, if I have any, would be based on the answer to the Q I just asked above.
Thanks for the input you all, great stuff as always.
Posted 16 January 2012 - 01:04 AM
The API is the C++-visible part of the graphics driver. The graphics driver is kind of like a wormhole that tunnels through from the C++ universe to the GPU / shader universe. Bit of an overblown analogy, but basically the C++ application talks to the graphics driver by making API calls. Most API calls constitute a command for the GPU to execute. The driver takes those commands in the same order you issue them and transfers them to the GPU, which executes them. Some of those commands have data attached, such as a chunk of compiled shader code, which can't do anything in the CPU universe, but when sent through the wormhole, makes sense and does something in the GPU universe. Other data that goes through includes textures and models.
Posted 16 January 2012 - 05:11 AM
Also, about shaders:
How would you staff guys here define the scope of "shader"? Would "everything done to a vertex or pixel beyond the raw coordinates and texture loading" qualify these days as shader code? As well, is it fairly common practice for a programmer to develop a bunch of his own shader code?
Posted 16 January 2012 - 10:46 AM
I write shaders for all sorts of things, the obvious ones like rendering and animation, but also for simulation stuff. Like cloud simulation physics, planet generation, mesh generation, etc.
There are two approaches to the rendering pipeline.
1) Uber shader
2) Shader library
The uber shader approach packs everything you want to do in a single shader and then uses flags to define the actual results.
So you could have coloured-Gouraud, textured-Gouraud, textured-Phong, coloured-Phong, bump mapping, and cel shading all in the same shader.
The other approach sees you writing a separate shader for each display mode.
Which approach is used depends partly on the coder and partly on the platform.
There are limitations on what you can do with shaders though.
Which shader model the target platform supports is the main one; it's no good using an uber shader if the platform cannot load it.
Also, we don't have read/write textures or random-access textures, so everything has to be carefully worked out.
It's a part of the modern coding environment I really like, but can be as easy and painful as knocking down a skyscraper with your forehead.
Posted 17 January 2012 - 09:45 AM
It is designed for graphics manipulation, so inputs and outputs are in forms that are useful for graphics.
However as long as you format your inputs and outputs correctly, the code you actually run can be anything.
For example if you had an array of floating point numbers and you wanted to do some calculations on each value in the array, you could create a floating point texture. Write the array to the texture. Feed the texture to a shader. Get a new texture back.
The new texture would contain all the outputs from the calculation.
I do this all the time for things like fluid simulation and feature recognition.
As long as you can work with the limitations of the input/output mechanism, you can do pretty much anything with a shader.
A good (and amazing) practice piece is to code up Conway's Game of Life in a shader. The speed is incredible.
Posted 17 January 2012 - 08:42 PM
All the popular GPU languages are somewhat C-based, whether it's OpenCL or GLSL or whatever. But in any case, they are separate languages for which you must use a separate compiler (which is built into the API/driver, since different GPUs have different shader compilers - it's not like the CPU, where you compile once and use the binary on any machine of that architecture).