Really fundamental Q here…
Obviously, the CPU and GPU handle different parts of the whole gaming
pipeline. My Q is, do you explicitly control the parceling-off of the
engine’s functions to the CPU and GPU, or are you SUPPOSED to? Or do the
various coding interfaces (the language itself, like C++, the various
APIs, etc.) know what to do? It seems an awfully confusing task, telling
which parts of the code to use which processor at what time. I'm
researching all this and would love input.
Yes, you explicitly control it. Shader code runs on the GPU and C++ code
runs on the CPU. You interact with the GPU by writing C++ code that calls
an API such as Direct3D or OpenGL; through the API you tell the GPU
what to do - essentially submitting a list of commands for the GPU to
execute. Many of those commands will be of the form “load this shader”
and then “draw this stuff”, which is how you get code running on the GPU.
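The command-list idea can be sketched in plain C++. This is a hypothetical illustration, not a real graphics API: the application records named commands in order, and a stand-in "device" executes them later, the way a driver feeds the real GPU.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the command-list model (not a real API):
// the C++ side records commands such as "load this shader" and
// "draw this stuff"; the driver hands them to the GPU in order.
struct FakeDevice {
    std::vector<std::string> executed;  // stands in for GPU-side work

    void submit(const std::vector<std::string>& commandList) {
        // A real driver transfers the buffer to the GPU; here we just
        // "execute" each command by logging it, preserving order.
        for (const auto& cmd : commandList) executed.push_back(cmd);
    }
};
```

Something like `dev.submit({"LoadShader:phong", "BindTexture:brick", "Draw:mesh0"})` mirrors the shape of a frame's worth of API calls: order matters, and the shader load travels as just another command.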
Making code automatically run on multiple types of processors as needed,
or even spreading the load over several processors, is the bleeding edge
of research today. So far it just isn’t possible.
It wasn’t until a few years ago that you could write “general purpose”
code for the GPU at all, rather than just graphics rendering.
The GPU still isn’t suitable for all kinds of processing. It is very
different from the CPU, since it behaves like a million parallel
processors, that all do exactly the same work on different data. They
all work in lock-step, so if your code has conditions, you need to
evaluate all data on all branches, and just throw away the results you
don’t need.
(Correct me if I’m wrong. I have no actual experience of GPU
programming.)
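The lock-step behaviour described above can be sketched on the CPU. This is an illustration of the idea, not real GPU code: both branches are computed for every element, and only afterwards is one result kept.

```cpp
#include <vector>

// Sketch of lock-step branch handling: when a GPU's lanes diverge on a
// condition, the hardware effectively evaluates both branches for every
// element and keeps only the result each element actually needs.
// (CPU simulation for illustration only.)
std::vector<float> lockstepSelect(const std::vector<float>& data) {
    std::vector<float> out;
    out.reserve(data.size());
    for (float x : data) {
        float ifBranch   = x * 2.0f;    // computed for every element...
        float elseBranch = x + 100.0f;  // ...and so is this one
        // Only now is one result kept and the other thrown away.
        out.push_back(x > 0.0f ? ifBranch : elseBranch);
    }
    return out;
}
```

On real hardware the wasted branch is why heavily divergent code can run much slower on a GPU than the raw core count suggests.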
Perhaps this is simplistically put, but basically, when you employ API
code, it goes to the GPU, and all other (say C++) code goes to the CPU?
I realize there are probably nuances to this, but is that a general
gist? What I’m curious about is if you literally have to manhandle code
portions and tell said code portions to be processed on which processor
type. I mean, if API code is automatically geared to use the GPU, then I
don’t have to get very “hardware low level”-ish, something I’d like to
avoid.
Hmm… I see what you mean. I guess my next questions, if I have any,
would be based on the answer to the Q I just asked above.
Thanks for the input you all, great stuff as always.
Yes, you’re telling portions of code which processor type to run on - in
a manner of speaking. Really, you’re telling it which processor to run
on by deciding which language to write it in. If something is written
in C++, it’s going to run on the CPU; if something’s written in HLSL or
GLSL or one of the other shading languages, it’s going to run on the GPU.
The API is the C++-visible part of the graphics driver. The graphics
driver is kind of like a wormhole that tunnels through from the C++
universe to the GPU / shader universe. :D Bit of an overblown analogy,
but basically the C++ application will talk to the graphics driver by
making API calls. Most API calls constitute a command for the GPU to
execute. The driver will take those commands in the same order you issue
them and transfer them to the GPU, where it will execute them. Some of
those commands have data attached, such as a chunk of compiled shader
code, which can’t do anything in the CPU universe, but when sent through
the wormhole, makes sense and does something in the GPU universe. Other
data that goes through includes textures and models.
Thanks, that really clarifies things. So basically…GPUs do a heck of a lot
of the total workload…
Also, about shaders:
How would you staff guys here define the scope of “shader”? Would
grouping “everything done to a vertex or pixel beyond the raw
coordinates and texture loading” qualify these days as shader code? As
well, is it fairly common practice for a programmer to develop a bunch
of their own shader code?
Shaders are incredibly useful, and a pain in the ar*e at the same time.
I write shaders for all sorts of things, the obvious ones like rendering
and animation, but also for simulation stuff. Like cloud simulation
physics, planet generation, mesh generation, etc.
There are two approaches to the rendering pipeline.
1) Uber shader
2) Shader library
The uber shader approach packs everything you want to do in a single
shader and then uses flags to define the actual results.
So you could have coloured-gouraud, textured-gouraud, textured-phong,
coloured-phong, bump mapped, and cel shading all in the same shader.
The other approach sees you writing a separate shader for each display
effect.
Which approach is used depends partly on the coder, and partly on the
platform.
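The flag-driven uber shader idea can be sketched in C++ terms. This is a hypothetical illustration (real ubershaders use shader-language conditionals or preprocessor flags): one routine contains every technique, and flags select which combination is actually produced.

```cpp
#include <string>

// Sketch of the "uber shader" approach: every technique lives in one
// place, and bit flags pick the combination. Names are made up for
// illustration; a real ubershader would branch in HLSL/GLSL instead.
enum ShadeFlags {
    TEXTURED = 1 << 0,
    PHONG    = 1 << 1,
    BUMPMAP  = 1 << 2,
};

std::string describeShading(unsigned flags) {
    std::string result = (flags & TEXTURED) ? "textured" : "coloured";
    result += (flags & PHONG) ? "-phong" : "-gouraud";
    if (flags & BUMPMAP) result += "+bump";  // optional extra path
    return result;
}
```

The shader-library approach would instead ship one small, fixed shader per combination, trading compile-time bloat for runtime simplicity.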
There are limitations on what you can do with shaders though.
Which shader model the target platform supports is the main one; no good
using an ubershader if the platform cannot load it.
Also we don’t have read/write textures, or random access textures, so
everything has to be carefully worked out.
It’s a part of the modern coding environment I really like, but can be
as easy and painful as knocking down a skyscraper with your forehead.
Can there be, how to put this, dynamic shaders that can be called if
programmer-produced code logic dictates it?
If you mean can you switch shaders on and off, then yes, you can do this
at will. You have complete control of the polygons, shaders and textures
used in each rendered frame. Typically a game would have hundreds of
shaders, but only some of them will be used in any given frame, since
you typically only render the objects that are visible from the current
camera viewpoint.
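Per-frame shader selection can be sketched like this. All names are hypothetical: each visible object declares which shader it needs, so even though the game owns hundreds of shaders, only the ones needed by visible objects get used in a given frame.

```cpp
#include <set>
#include <string>
#include <vector>

// Sketch (hypothetical names) of per-frame shader selection: each
// visible object names the shader it needs, and only those shaders are
// bound this frame, however many the game owns in total.
struct Object {
    std::string shaderName;
    bool visible;
};

std::set<std::string> shadersUsedThisFrame(const std::vector<Object>& scene) {
    std::set<std::string> used;
    for (const auto& obj : scene)
        if (obj.visible)          // culled objects never bind their shader
            used.insert(obj.shaderName);
    return used;
}
```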
I’m interested in this particular topic too, if I’m allowed to ask a
question: Is it possible to do stuff with the GPU via C++? How and why
would I do that? (apart from Graphics-API calls)
The GPU is basically a very fast RISC chip.
It is designed for graphics manipulation, so inputs and outputs are in
forms that are useful for graphics.
However as long as you format your inputs and outputs correctly, the
code you actually run can be anything.
For example if you had an array of floating point numbers and you wanted
to do some calculations on each value in the array, you could create a
floating point texture. Write the array to the texture. Feed the texture
to a shader. Get a new texture back.
The new texture would contain all the outputs from the calculation.
I do this all the time for things like fluid simulation and feature
generation.
As long as you can work with the limitations of the input/output
mechanism, you can do pretty much anything with a shader.
A good, and amazing, practice piece is to code up Conway’s game of life
in a shader. The speed is incredible.
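A sketch of the Game of Life written kernel-style: each cell's next state is computed from its neighbours in the previous grid, exactly the way a shader computes each output pixel from the previous frame's texture. The CPU double loop here stands in for the GPU running every cell in parallel.

```cpp
#include <vector>

// Conway's Game of Life as a per-cell "kernel". On a GPU, cellKernel
// would be the shader and the grids would be ping-ponged textures.
using Grid = std::vector<std::vector<int>>;  // 1 = alive, 0 = dead

int cellKernel(const Grid& g, int y, int x) {
    int h = (int)g.size(), w = (int)g[0].size(), n = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            if (dy || dx)
                n += g[(y + dy + h) % h][(x + dx + w) % w];  // wrap at edges
    // Alive next step with exactly 3 neighbours, or 2 if already alive.
    return (n == 3 || (n == 2 && g[y][x])) ? 1 : 0;
}

Grid step(const Grid& g) {
    Grid out = g;
    for (int y = 0; y < (int)g.size(); ++y)
        for (int x = 0; x < (int)g[0].size(); ++x)
            out[y][x] = cellKernel(g, y, x);
    return out;
}
```

Every cell reads only the previous grid and writes only its own output, which is why the rule maps so cleanly onto a shader with no read/write textures.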
I’ll try that someday :) But is it also possible without using shaders?
You can program the GPU with OpenCL, a C-based language.
So…there ARE specific, non-API function calls that do that stuff?
Like, erm, “GPUcall()” blah blah blah? I think that’s what I’m curious
about. It seems that OpenCL is also a quasi-API. Just wondering if there
are times when, in no API at all, you directly talk to the GPU.
No, all communication with the GPU is done through some API or other.
This is no different from any other piece of hardware; in modern
operating systems all communication with hardware is mediated by the OS
in some way. You never get to talk to any hardware directly because then
your program could run amok and screw things up for all the other
programs, which is one of the things that modern OSes try very hard to
prevent.
All the popular GPU languages are somewhat C-based, whether it’s
OpenCL or GLSL or whatever. But in any case, they are separate languages
for which you must use a separate compiler (which is built into the
API/driver, since different GPUs will have different shader compilers -
it’s not like the CPU, where you compile once and use the binary on any
CPU).
Thanks Reedbeta, that clarifies a lot. Good explanation there.