You know what?!? I’m SICK!!!! Why? Because of all the idiots lately. The people who actually know something about computers are getting fewer every day! Nobody cares about assembler anymore, or software rendering, ray tracing or ray casting. There is only hardware, per-pixel and vertex shaders and that kind of bullshit. And not only that… somebody is constantly deleting all of the great articles, tutorials and so on from the net and replacing them with cheap D3D or OGL imitations. People are using ultra-high-level graphics APIs and writing per-pixel shader programs without even knowing that 1/z is linear in screen space and z is not. The so-called programmers of the new times think that it’s enough to learn C++ (not even C =/) and start programming. They don’t care about assembler optimizations, and they don’t even care how the computer actually works, because it’s enough for them that Microsoft’s compiler will do everything for them. Pretty sucky, eh? I myself think that this sux and that in 10 to 15 years programmers will be brainless idiots who use the Microsoft standards for everything. Of course there are people who still keep the old art of programming alive and are trying to improve it. So my last words are: LET’S FIGHT FOR THE OLD ART OF PROGRAMMING! DO NOT LET IT DIE!
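(For what it’s worth, the 1/z claim above is easy to check numerically. A minimal sketch with made-up coordinates: project two endpoints of a segment, find the true depth at the screen-space midpoint, and compare it against linearly interpolating z versus linearly interpolating 1/z.)

```python
# Two endpoints of a segment in view space: (x, z).  Coordinates are
# arbitrary, chosen only to make the numbers easy to follow.
x0, z0 = 0.0, 1.0
x1, z1 = 2.0, 3.0

# Perspective projection: screen x = x / z.
s0, s1 = x0 / z0, x1 / z1
smid = 0.5 * (s0 + s1)          # screen-space midpoint of the segment

# Solve for the 3D parameter s whose projection lands exactly on smid:
#   (x0 + s*(x1-x0)) / (z0 + s*(z1-z0)) = smid
s = (smid * z0 - x0) / ((x1 - x0) - smid * (z1 - z0))
z_true = z0 + s * (z1 - z0)     # actual depth at the screen midpoint

z_naive = 0.5 * (z0 + z1)                         # lerp z directly: wrong
z_persp = 1.0 / (0.5 * (1.0 / z0 + 1.0 / z1))     # lerp 1/z, invert: right

print(z_true, z_naive, z_persp)
```

Interpolating 1/z reproduces the true depth exactly, while interpolating z in screen space overshoots; this is why perspective-correct rasterizers carry 1/z (and attribute/z) across the span.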
Don’t panic yet. Why do you think the operating system exists? It is there so that the programmer can concentrate on the job at hand without having to worry about lower-level stuff. So people are doing the right thing if they are using as high-level an API as possible. However, it is not right if they don’t know the basics of how a computer works (and this, too, only matters if their job requires it of them, e.g. game programmers). And as for using only Microsoft interfaces and being idiots: I think that Microsoft would not even be the last entity that a sane person would trust. I am sure the folks who produce good products are not idiots and won’t be in the future either. The art of programming is independent of the language or the level at which you are doing it (of course the style would be greatly affected by the language).
I am sure others have more to say about this (and I am sure there is a lot I missed here, which they would be kind enough to add).
heh, everything decays… even the mona lisa :)
on a serious note, there are just more people trying to write games. naturally, since computers have grown much in popularity and possibilities over the last decade. also these people are getting younger and younger (not saying that there aren’t any smart young people out there). the same can be seen in other communities like, well, counter-strike. it used to be fun to play cs, but if you enter a public server today it’s havoc. still, many of my friends keep playing the game.
so i don’t think that there are actually fewer “real” people in gamedev, they are just harder to find in the crowd today.
I think that you’re expecting too much. In today’s world, although it would be nice for people to learn about assembler optimisation and software rendering, people simply don’t want to.
I believe that’s OK for 2 reasons:
1) many more people take up programming as a HOBBY. Yes, it’s a big one, in some cases the only one, but it’s not intended to be a profession. School takes up so much time that people don’t have the time to learn such advanced things. Even if they did, they’d rarely find a use for it (by this I mean a situation where they would *need* to know it).
2) it’s becoming less necessary. I’m not saying that’s a good thing, but when people programmed in DOS, they talked directly to the graphics card. OK, people knew more and were on the whole better coders, but the code was convoluted. Ask anyone who programmed in DOS, and I think they’ll say that today’s code is cleaner.
What I think we do need to look out for are people who don’t even bother
to learn a language. They copy-paste nehe code, change the pictures and
call themselves 1337 programmers. Now *this* is a bad trend.
btw, I think it’s actually a bad idea for newbies to learn both C and C++. The two languages (when used properly) encourage two different styles of coding. It’s better to pick one and, sadly in my opinion, that is most often C++.
P.S. Paragraphs are good :)
I do not agree with Mihail121 on one thing. I think there are just as many brilliant people as there were 10 years ago. The difference between now and then is that nowadays there are simply more normal people. You must have the will to learn something, and Microsoft does everything to hide what we want from us. Most people don’t even want to learn, they want to produce (anyone looked at the staff board on Mud Connector?). People are interested in producing things, not necessarily in learning.
The problem that I see today (from my point of view) is that programming is exactly what baldurk mentioned: a hobby for most people. Most people don’t need to learn assembly, and hence they won’t learn it, or if they do, it will be as a hobby. Back then, there weren’t any top-of-the-line graphics cards, ultra-fast memory, etc… if programmers wanted to make games, they had to use their brains and everything available to get the best performance out of their box, simply to display a handful of polygons on a screen. But as I said, today’s optimisation resides on the hardware side of the computer, not on the software (or programmer) side. Even though it’s good to have faster hardware, I still find it sad that some people buy Michael Abrash’s Graphics Programming Black Book just to appear flashy, and let dust cover it. Unfortunately, there are more and more people with that kind of attitude, and I think that’s what Mihail121 tried to explain (correct me if I’m wrong). As was once said: attitude is no excuse for incompetence.
Kids have it easy these days…
I predict that in the far future (maybe 10+ years) graphics cards will become obsolete. The CPU will be able to handle everything, and then software renderers will be back!
Just think about it. CPUs are getting more powerful every day; eventually they’ll be powerful enough to handle everything possible *today*. If you just follow that trend into the future a bit, you end up with the CPU doing everything. When that happens, a software renderer will be way more useful than a D3D renderer, because it will be 100% compatible and fully supported, since everything runs on the CPU.
The only limit would then be the user’s CPU power. Actually, the CPU will probably end up with loads of built-in 3D functions; I mean, we have 3DNow and similar things now that handle mediocre stuff. Eventually processor capabilities will increase dramatically, and a whole new world will open up.
i cant wait i cant wait i cant wait.
That may not happen, because the work done by the graphics subsystem has been increasing constantly. I am sure the CPU will have more physics, AI and other stuff to take care of, while the graphics card will be churning out more polys with all those complex shaders…
consider the PlayStation 3, with an announced 1 teraflop of processing power… you could pack everything, including gfx, into that :)
1) CPU power cannot keep increasing exponentially, or even linearly. It is simply not possible to keep piling power on; eventually you will reach physical barriers. Perhaps not insurmountable ones, but to keep costs down they might as well be.
2) I’ve been playing FF7 recently. It’s running in software mode, but everything is fine. Could they have said in 1997 that by 2003 we wouldn’t need graphics cards, and been correct? No. The reason is that as graphics improved, so did most other aspects: AI, physics, etc. Look at Half-Life 2. I’m sure that on max detail the physics would be very taxing for the current average CPU, and that’s STILL not perfectly 100% lifelike. Imagine what CPU we’ll need when it is.
Stupid smileys >:(.
the main reason they weren’t correct is that marketing pushed everything toward the “we need gpu’s for everything, software rendering is out” mindset. today’s cpu’s could do everything we _would_ need, but they are not really designed for it anymore, and gpu’s got pushed to fit just today’s needs. they are not really usable for anything general, and still not at all programmable. but as long as it looks good, people pay for all sorts of things.
if gpu’s hadn’t been pushed that much just for money, cpu’s would have evolved in a better way: SSE would be like vs3.0 or better, and running on one or several hyperthreads.
fully cpu-optimized rendering tasks are very fast, and very flexible. and i see the trend starting to dive much more into software rendering again. gpu’s are drifting away from being really usable; they are getting too split off from the rest of the hw. that will lead to more and more need to solve partial tasks completely in software..
in 2-3 years we’ll get the first cpu’s with several parallel cores in them. this will be the end of gpu’s. local, unified rendering performs much more efficiently than rendering on a separate gpu. resource- and bandwidth-efficient, that is. and the rest will be a cpu designed for exactly that.
rasterizing is dying right now.. it will be dead in a few years.
10x all for not creating a flame war and discussing this nicely. Here is my reply.
It is quite true that programming nowadays is more like a hobby. It was the same with me some years ago, but I realized this is what I have to do for a living. We aren’t discussing the people who do it as a hobby, because they are free to do it however they like. The people who do it for a living, though, should be discussed. I don’t deny that there are still many people who try to keep the old art alive, but I think they are getting fewer every day. For example, we have high-level GFX APIs like OpenGL and D3D. I know thousands of people who use those APIs without even knowing how they work. Now just don’t tell me they don’t need to know that. I also partially agree about the Microsoft standards. Of course there is no need to re-invent the wheel each time an application is being written. But people should have a general idea of how those things work, and I think they should be able to write them too.
About that CPU/GPU war: I agree with those of you who think that in the future GPUs will be obsolete, since CPUs will become faster/better/stronger. Right now dual-processor machines are taking over, and I don’t see a problem with using the second one as a GPU or something like that. When I think about it now, software rendering was never dead (just look at the Mesa drivers), so there is no need to revive it. BUT, as you pointed out, some people just don’t think it’s important to learn how it works.
My final word is about all of those newly-born n00bish programmers that we know, who are getting more numerous recently. I’ve always wondered why such people don’t make some effort and learn something more useful than C++ and OGL, for example. I do realize how much is expected from programmers nowadays, but if you can’t hold it, then better not start it!
10x for reading
hmm, when i review my own path of learning, it becomes obvious that i learned about software rendering first. i started out at a time when software rendering was still being used. when you looked for game programming tutorials you would still stumble over good old mode 13h, so it somehow was clear that you had to know about software rendering techniques. today the people who know about software rendering don’t use it anymore most of the time, so there is less talk and less information about it. when you are new to gfx/game programming, i doubt that you’ll find much that points you in the “learn software first” direction. much more likely you start out because someone told you about nehe. maybe if there were a more present and active community around software rendering, more people would realize that it still exists.
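(For readers who never met mode 13h: its appeal was that the whole screen was just a 320×200 array of bytes at a fixed address, one palette index per pixel, and you wrote to it directly. A rough sketch of that programming model, with a plain byte array standing in for VGA memory and helper names of my own invention:)

```python
W, H = 320, 200
fb = bytearray(W * H)   # stand-in for VGA memory at segment A000h:
                        # one byte (palette index) per pixel

def putpixel(x, y, color):
    """Plot a single pixel; offset = y*320 + x, exactly as in mode 13h."""
    if 0 <= x < W and 0 <= y < H:
        fb[y * W + x] = color

def hline(x0, x1, y, color):
    """Fill a horizontal span -- the basic building block of a
    scanline rasterizer."""
    for x in range(x0, x1 + 1):
        putpixel(x, y, color)

putpixel(10, 5, 15)     # a single white-ish pixel (palette index 15)
hline(0, W - 1, 100, 4) # a full-width red-ish line (palette index 4)
```

Everything a software renderer does, from lines to textured triangles, ultimately reduces to computing which offsets in that array to write and with what values; that is the “know how it actually works” layer the thread is talking about.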