Could you list the maths needed (vectors, matrices, etc.)?
As Stainless mentioned, engine development never really stops. Depending on the complexity of your project and your goals, you could get something up and running in as little as a couple of weeks to a couple of months. Either way, your engine will keep improving as your understanding of the APIs and your engineering skills improve.
I often avoid 3rd-party libraries unless they come with a flexible license and provide functionality simple enough to pose minimal risk and add minimal size. If feasible, I’ll write my own portable code to provide the functionality. For example, I ported most of the important parts of the .NET BCL and WPF over to C++. I now enjoy the ease and productivity benefits of working with the .NET API in a multi-platform C++ engine. Some of the 3rd-party libraries I use solve other complex problems I didn’t want to spend time on:
* Bullet physics
* Various DB connectors
* Xiph stuff
For all other stuff like supporting BMP, PNG, TGA, HDR, WAV, SVG, TTF, OBJ, ZIP, HTTP, etc. I wrote my own code. It was both an educational exercise and a way to get small, lightweight, portable code to use. This is over a span of many years, of course. Each new year brings in new features and more productive frameworks. It’s an evolving process.
@vilem: I will probably release it for linux. Thanks.
I’ll answer Reed in this reply too - yup, I’m looking for some example of your point 1 - e.g. do you know of one (or better, a full implementation - but I doubt that exists)? Because this is what I’ve been looking for.
Just a little note on Direct3D vs. OpenGL: once you learn one of them to some extent, it’s very easy to switch to the other. I also recommend trying OpenGL once you are finished with Direct3D; it might be viable for the future - e.g. you might decide to release your engine for Linux.
Yes, but there are ways of managing this.
1) Single shader and translation layer
Write your shaders in your own macro language and translate them into the correct version of GLSL before compilation.
Write an include file for each GLSL version and write the shaders using only these subroutines. Change the include file on the fly.
Or you can use one of the freeware translation libraries that are out there. I can’t remember the name of the one I have used, but it converts OpenGL into Direct3D on the fly. This is particularly useful on Windows, as you can use PIX to debug.
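A minimal sketch of the macro-translation idea from the first option (ATTR/VARY are made-up macro names here, not part of any real tool - the point is just that one shader source expands differently per GLSL version):

```cpp
#include <string>

// Replace every occurrence of 'from' with 'to' in 's'.
std::string replaceAll(std::string s, const std::string& from, const std::string& to) {
    for (size_t pos = 0; (pos = s.find(from, pos)) != std::string::npos; pos += to.size())
        s.replace(pos, from.size(), to);
    return s;
}

// Expand the hypothetical ATTR/VARY macros for the target GLSL version
// and prepend the matching #version directive.
std::string translate(const std::string& src, int glslVersion) {
    std::string out = "#version " + std::to_string(glslVersion) + "\n" + src;
    if (glslVersion >= 130) {            // modern GLSL uses in/out qualifiers
        out = replaceAll(out, "ATTR", "in");
        out = replaceAll(out, "VARY", "out");
    } else {                             // legacy GLSL uses attribute/varying
        out = replaceAll(out, "ATTR", "attribute");
        out = replaceAll(out, "VARY", "varying");
    }
    return out;
}
```

So `translate("ATTR vec3 pos;", 110)` yields an `attribute` declaration, while the same source targeted at 330 yields an `in` declaration - one shader, many GLSL versions.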
@stainless: I meant ‘2 years’ as a good thing.
I’ll be using DirectX (everything I listed in the original post).
OK, if you want to look up/down then you’ll need the pseudo-pitch I mentioned.
When you construct your view matrix for a normal camera, you’d usually do it by combining some rotations for yaw and pitch (and maybe roll). The shear matrix I’m talking about would replace the pitch rotation.
If your camera starts out in standard position, facing along the -Z axis, you’d normally do an X-rotation to pitch it. Instead, you’re going to do a YZ shear, meaning there will be an offset along the Y axis proportional to position along the Z axis. The matrix for it goes like:
            [1 0 0]
[x y z]  *  [0 1 0]  =  [x  y+az  z]
            [0 a 1]
The value ‘a’ there is the shear amount, which is equal to the tangent of the pitch angle. The matrix above is written for row-vector math; transpose it if you’re using column vectors in your app. Move the ‘a’ to a different component of the matrix if you need to shear along different axes.
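A small sketch of that shear in code (row-vector convention, as above; Mat3 and transform are illustrative helpers, not from any particular math library):

```cpp
#include <cmath>

struct Mat3 { float m[3][3]; };

// YZ shear: (x, y, z) -> (x, y + a*z, z), with a = tan(pitchAngle).
// This replaces the pitch rotation so verticals stay vertical.
Mat3 shearYZ(float pitchRadians) {
    float a = std::tan(pitchRadians);
    Mat3 s = {{{1, 0, 0},
               {0, 1, 0},
               {0, a, 1}}};
    return s;
}

// Row vector times matrix.
void transform(const float v[3], const Mat3& m, float out[3]) {
    for (int c = 0; c < 3; ++c)
        out[c] = v[0]*m.m[0][c] + v[1]*m.m[1][c] + v[2]*m.m[2][c];
}
```

With a pitch of 45° (a = 1), the point (1, 2, 3) maps to (1, 5, 3): the Y offset grows with depth, but X and Z are untouched.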
That’s exactly what I did, Reed, but the problem is, I don’t want to display my model only horizontally. For example, my camera is on a street at human height, looking at tall buildings. I want to look up a bit, but I want verticals to stay vertical. It’s the same with an ortho projection: you can still look up/down, rotate, etc. and verticals stay vertical, except there is no depth. So I’m trying to build a matrix that works kind of like ortho, except with depth. Do I make sense?
You’re making it way more complicated than it needs to be. Just get a regular camera view and projection matrix working for your raytracer. You can look up the docs for glOrtho, glFrustum, and gluPerspective and they give you the exact formulas for the matrices. So just use those and get regular cameras working.
Then, for two-point perspective just use a regular camera with no pitch or roll, just yaw, so it stays horizontal. That’s all there is to it. Forget the idea of an extra “two-point matrix”. There’s no such thing. It’s just the regular view and projection matrix, with the camera horizontal.
(Optionally add back in a pseudo-pitch by shearing the camera up and down instead of rotating it. This is the only part that would be different from a standard camera matrix stack.)
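Putting those pieces together, here is a rough sketch of the whole two-point camera stack (row-vector convention again; the yaw sign convention and helper names are my own assumptions, not from any specific engine):

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 identity() {
    Mat4 m{};                              // zero-initialized
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// View matrix with yaw only (no pitch, no roll): translate by -eye,
// then rotate about Y. The camera stays horizontal.
Mat4 yawView(float yaw, float ex, float ey, float ez) {
    float c = std::cos(yaw), s = std::sin(yaw);
    Mat4 rot = identity();
    rot[0][0] =  c; rot[0][2] = s;
    rot[2][0] = -s; rot[2][2] = c;
    Mat4 trans = identity();
    trans[3][0] = -ex; trans[3][1] = -ey; trans[3][2] = -ez;
    return multiply(trans, rot);           // row vectors: translate, then rotate
}

// Optional pseudo-pitch: shear Y by tan(pitch) along Z.
Mat4 pseudoPitch(float pitch) {
    Mat4 sh = identity();
    sh[2][1] = std::tan(pitch);
    return sh;
}
```

The final stack is then view * shear * (ordinary perspective projection) - only the shear differs from a standard camera.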
I’m not sure what you’re looking for then. Obviously for the things that differ between GLSL versions you’re going to have to have multiple versions of the code somewhere, somehow. Nothing can save you from that.
The best you can do is segregate all the platform-specific stuff, so its impact is limited and you can write the rest of your shaders to a platform-independent API as much as possible. Whether you do it with the C preprocessor, or some preprocessor of your own, is just an implementation detail.
I finally got the ortho projection to work in OpenGL without using glOrtho. Thanks for the links.
For the two-point perspective, I have found lots of documentation, but I just can’t get it to work in OpenGL. Either I’m not understanding it or I’m not using it right. What I want to do is use it with glLoadMatrixf after calling glMatrixMode(GL_PROJECTION) or glMatrixMode(GL_MODELVIEW), and I don’t know which of the two to use, but I assume it’s the modelview matrix. Also, do I have to build the normal view matrix first and then multiply it with the two-point matrix? I’m lost!!!
Then you are not zooming the camera; you are moving it.
For ortho you have, well, Google.
For parallel you have
Yes, several. I won’t give a list - instead I’ll describe my newest (a work-in-progress).
Creating the core functionality took around a week (part-time work). Adding further stuff - 6 months of part-time work so far, and it will take more. Don’t be scared by the number; it really has a ton of features, naming a few of them:
- Cloth physics
- Ray-traced reflections (through a custom OpenCL ray tracer)
- Custom full-featured GUI
- etc.
Implementing each feature takes time; it’s up to you whether you need it for a game (or other software) or not.
I’ve used some libraries so far - namely Alure and Open Dynamics Engine - and apart from them I also use the standard and platform libraries.
True, but you asked how long till it was “useable”
A 3D engine is never finished; you can always add features until you die.
Yeah, well, in the end that’s a similar system to what I have now (I decide on the GLSL version based on the hardware; of course, some really complex shaders might not compile on some hardware -> a lower-GLSL-version variant is supplied).
Still, they have to (as do I) write multiple versions of shaders.
Yup, I know I could use the preprocessor, thanks for mentioning it. Although we basically handle the same thing by loading different shaders (they are at different locations -> example -> data/shaders/v330/ vs. data/shaders/v420/ … this way, in case some shader compilation fails, the system tries to use the one with the older version).
We also wanted to avoid preprocessor directives in shaders, as we use a somewhat modified GLSL that is already pre-processed by our own preprocessor (e.g. we have all shaders inside a single file).
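For what it’s worth, that fallback loading scheme could be sketched like this (loadWithFallback and tryCompile are made-up names; in the real path, tryCompile would wrap glCompileShader plus a compile-status check):

```cpp
#include <functional>
#include <string>
#include <vector>

// Shader variants live in per-version folders (e.g. data/shaders/v420/,
// data/shaders/v330/). Try the newest first; if it fails to compile,
// fall back to the next-older version.
std::string loadWithFallback(const std::string& name,
                             const std::vector<std::string>& versionDirs,
                             const std::function<bool(const std::string&)>& tryCompile) {
    for (const std::string& dir : versionDirs) {   // ordered newest -> oldest
        std::string path = dir + "/" + name;
        if (tryCompile(path))
            return path;                            // first variant that compiles
    }
    return "";                                      // nothing compiled - caller must handle
}
```

If the v420 variant fails on older hardware, the loader silently picks up the v330 copy of the same shader.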
That’s less than 2 years
It’s not for a game but for my raytracer!
My zoom is done by moving the camera forward/back.
I do not use gluPerspective, glOrtho, or gluLookAt. I’m trying to make my own matrices for them, which I did for the normal projection and view matrices, but I’m having problems with the ortho and parallel projections.
For me, I have written many 3D engines.
Probably easiest to list by machine if I can get this awful formatting system to do what I want….
C64                       3 months                  6502
Atari ST                  4 months                  68000
Amiga                     2 days (ported from ST)   68000
PC (CGA)                  3 weeks                   8086
PC (VGA)                  3 weeks                   8086
OpenGL (no shaders)       3 weeks                   C++
OpenGL (with shaders)     2 months                  C++
OpenGLES (no shaders)     2 days                    C++
OpenGLES (with shaders)   2 weeks                   C++
XNA                       1 month                   C#
I never use libraries, usually because of license or stability issues.
A lot of commercial games use a shader bank.
They all call it different things, but in general they create a few flags to cover everything they need to know about the client platform - things like whether the device supports context-less surfaces, how many instructions per shader are allowed, etc.
Then they load shaders based on those flags.
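A bare-bones sketch of that flag-based selection (the struct fields and variant names here are illustrative, not from any particular engine):

```cpp
#include <string>

// A few capability flags describing the client platform,
// typically queried once at startup.
struct DeviceCaps {
    bool supportsShaders;
    int  maxInstructionsPerShader;
};

// Pick which entry of the shader bank to load for this device.
std::string pickShaderVariant(const DeviceCaps& caps) {
    if (!caps.supportsShaders)
        return "fixed_function";              // no shader support at all
    if (caps.maxInstructionsPerShader < 64)
        return "lighting_cheap";              // short-shader hardware
    return "lighting_full";                   // full-featured variant
}
```

The engine then loads the named variant from its bank; adding a new platform mostly means adding a flag and a branch here, not rewriting every shader.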
So you don’t want to pitch the camera up and down? You want the camera to rise and fall instead?
Zoom is easy: just increase or decrease the field of view.
Rotating around the vertical is easy: take a forward vector and rotate it by the current heading.
Then just create a look-at matrix from the camera position towards the rotated forward vector.
Create a perspective matrix with the desired field of view and you are away.
When you have it working, a neat trick is to zoom in and move the camera at the same time to get the famous horror movie camera trick.
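Those steps could be sketched like this (the handedness and sign conventions are assumptions; dollyZoomFov is just the standard frustum-width relation, not from any particular engine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a forward vector about the vertical (Y) axis by the heading.
Vec3 rotateByHeading(const Vec3& forward, float heading) {
    float c = std::cos(heading), s = std::sin(heading);
    return { forward.x * c + forward.z * s,
             forward.y,
            -forward.x * s + forward.z * c };
}

// The point to aim the look-at matrix at: camera position plus the
// rotated forward vector (camera starts out facing -Z).
Vec3 lookTarget(const Vec3& camPos, float heading) {
    Vec3 fwd = rotateByHeading({0.0f, 0.0f, -1.0f}, heading);
    return { camPos.x + fwd.x, camPos.y + fwd.y, camPos.z + fwd.z };
}

// Dolly-zoom ("horror movie") helper: the FOV that keeps a subject of
// the given width filling the frame at the given distance. Move the
// camera and feed the new distance here each frame.
float dollyZoomFov(float subjectWidth, float distance) {
    return 2.0f * std::atan(subjectWidth / (2.0f * distance));
}
```

Feed lookTarget into your look-at matrix builder, build the perspective matrix from the (possibly animated) FOV, and the dolly-zoom falls out of moving the camera while updating the distance.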
Yes, but I do want to rotate, pan and zoom, just like in ortho view!
It’s not something you do with a matrix; that’s down to the controls. You just don’t let the user tilt it up or down.
So how do you set up such a matrix to keep the camera horizontal?