OK, this is just thinking aloud, but…
d = A * sin(alpha); // d is the third side of the triangle formed by A and C, alpha is the angle between A and C
e = B * sin(beta); // e is the third side of the triangle formed by B and C, beta is the angle between B and C
c1 = 10 / tan(alpha); // c1 is x coordinate of the intersection point
C - c1 = 10 / tan(beta);
C = 10/tan(beta) + 10/tan(alpha);
That should be enough to get you started
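The thinking-aloud math above can be sketched in code. This is only a sketch of my reading of it: it assumes the "10" in the formulas is the height of a triangle standing on base C, with base angles alpha and beta; the function name is mine:

```cpp
#include <cassert>
#include <cmath>

// Base length of a triangle of height h whose base angles are
// alpha and beta (in radians): C = h/tan(alpha) + h/tan(beta).
double baseLength(double h, double alpha, double beta)
{
    double c1 = h / std::tan(alpha); // x coordinate of the apex (the c1 above)
    double c2 = h / std::tan(beta);  // remaining part of the base (C - c1)
    return c1 + c2;
}
```

With alpha = beta = 45 degrees and h = 10, both halves are 10, so C comes out to 20, which matches the last formula.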
Is there any game-math-related concept not taught at Khan Academy?
I don’t know anything about Khan Academy, so I can’t say. A quick glance at their topics indicates they do cover the basics, so I suppose you could start there. You might have questions, and it helps to have a professor / teaching assistant for that.
@TheNut: Khan Academy is a good place to start, right? They have everything from maths to physics, and it’s free to use.
Generally you should know linear algebra. Any first-year university course (and book) will cover enough material for you to get by. Vectors and matrices are broad topics, and their complexity is determined by what you do with them. For example: vector dot, cross, magnitude, or matrix determinant, quaternions, Euler angles, etc. It’s not enough to simply know these operations; you need to know what they’re for and why you do them. That can take a bit of time to absorb, because the fields of mathematics and 3D graphics (or gameplay, or physics, etc.) are not often intertwined. Alternatively, there are many frameworks out there like XNA and DirectXTK (the successor to XNA) that include all these math routines for you. This reduces the problem to simply understanding 3D space.
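As a minimal sketch of the basic vector operations mentioned above (dot, cross, magnitude); the Vec3 type here is my own, not from any particular framework:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Dot product: measures how much two vectors point the same way.
double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cross product: a vector perpendicular to both inputs.
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Magnitude (length) of a vector.
double magnitude(const Vec3& v)
{
    return std::sqrt(dot(v, v));
}
```

For example, the cross product of the X and Y axes gives the Z axis, which is exactly how the camera basis vectors are built later in this thread.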
I would say calculus is not as important as algebra. You could probably get by without knowing any calculus. It’s certainly nice to know and if you ever plan on writing a scientific module like any type of simulator (particle, physics, cloth), then you will definitely need some calculus background (in addition to algebra).
Once you get a handle on the above, the rest will come on its own. You will dive into topics that are very specific and require their own research. For example, Bezier curves, colour space, HDR, noise, random number generators, and intersection testing (which uses algebra) will be topics you run into at some point.
It’s no big deal; it only does that when I tilt a lot, like > 45 deg. I’m still learning about all that matrix stuff in OpenGL.
Thanks Reed for your help. Have a Happy New Year….
Hmm. It shouldn’t be affecting the depth value of objects. You can see for yourself that the matrix doesn’t change Z values; it only affects Y values. I haven’t tried this myself but as far as I can envision in my head, the pseudo-pitch should only slide things around, not make them expand/contract on screen.
Your code looks correct though, so I’m not sure what’s going on.
About C4 Engine?
Never mind the shearing getting stronger the more I pitch up/down; that’s expected.
But when I pan up/down, it works, but it also affects the Z, meaning it’s also moving away or closer. Is that normal too?
Oh, I see what you mean! It’s working, but it’s not quite right. Here is what I do. My matrices are in row major, but I transpose them before sending them to OpenGL using glLoadMatrixf…
My view matrix M1…
Z = CamDir; // CamDir is vec3(0,0,-1) rotated by CamRot then normalized.
X = Normalize(Cross(Vec3(0,1,0),Z));
Y = Cross(Z,X);
M1 = (
X.x, X.y, X.z, 0,
Y.x, Y.y, Y.z, 0,
Z.x, Z.y, Z.z, 0,
0, 0, 0, 1
);
Then your shear matrix M2…
a = tan(CamRot.x); // CamRot is already Deg->Rad
M2 = (
1, 0, 0, 0,
0, 1, 0, 0,
0, a, 1, 0,
0, 0, 0, 1
);
Then the cam pos matrix M3…
M3 = (
1, 0, 0, -CamPos.x,
0, 1, 0, -CamPos.y,
0, 0, 1, -CamPos.z,
0, 0, 0, 1
);
Then I build the view matrix by doing…
ModelViewMatrix = M2 * M1 * M3;
It looks good, the verticals stay vertical, and I have depth too, but it shears too much the more I pitch up/down.
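For reference, the M2 * M1 * M3 composition can be sketched with a plain row-major 4x4 multiply plus the shear builder. The Mat4 type and function names here are mine, not from any particular engine, and the shear amount a = tan(pitch) is taken straight from the post:

```cpp
#include <cmath>

// Row-major 4x4 matrix, as in the post (transpose before glLoadMatrixf).
struct Mat4 { double m[16]; };

// Standard 4x4 matrix product: r = a * b.
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{}; // zero-initialized
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

// The M2 shear matrix from the post: a = tan(pitch), placed so that
// Y picks up a * Z (row 2, column 1 in row-major order).
Mat4 shearMatrix(double pitchRadians)
{
    double a = std::tan(pitchRadians);
    Mat4 s = {{ 1, 0, 0, 0,
                0, 1, 0, 0,
                0, a, 1, 0,
                0, 0, 0, 1 }};
    return s;
}
```

With pitch = 0 the shear matrix degenerates to the identity, which is a quick sanity check that straight-ahead viewing is unaffected.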
Could you list the maths needed (vectors, matrices, etc.)?
As Stainless mentioned, engine development never really stops. Depending on the complexity of your project and your goals, you could get something up and running within as little as a couple weeks to a couple months. Either way, your engine will undergo improvements as your understanding of the APIs and engineering skills improve.
I often avoid 3rd-party libraries unless they come with a flexible license and provide simple enough functionality that they impose minimal risk and size. If feasible, I’ll write my own portable code to provide the functionality. For example, I ported most of the important parts of the .NET BCL and WPF over to C++. I now enjoy the ease and productivity benefits of working with the .NET API in a multi-platform C++ engine. Some of the 3rd-party libraries I use solve other complex problems I didn’t want to spend time on:
* Bullet physics
* Various DB connectors
* Xiph stuff
For all other stuff, like supporting BMP, PNG, TGA, HDR, WAV, SVG, TTF, OBJ, ZIP, HTTP, etc., I wrote my own code. It was both an educational exercise and a way to provide small, lightweight, portable code to use. This was over a span of many years, of course. Each new year brings in new features and more productive frameworks. It’s an evolving process.
@vilem: I will probably release it for linux. Thanks.
I’ll answer Reed in this reply too - yup, I’m looking for an example of your point 1. Do you know of one (or better, a full implementation, though I doubt that exists)? Because this is what I’ve been looking for.
Just a little note on Direct3D vs. OpenGL: once you learn one of them to some extent, it’s very easy to switch to the other. I also recommend trying OpenGL once you are finished with Direct3D; it might be viable in the future, e.g. if you decide to release your engine for Linux.
Yes, but there are ways of managing this.
1) Single shader and translation layer
Write your shaders in your own macro language and translate it into the correct version of GLSL before compilation
Write an include file for each GLSL version and write the shaders using only these subroutines. Change the include file on the fly.
Or you can use one of the freeware translation libraries that are out there. I can’t remember the name of the one I have used, but it converts OpenGL into Direct3D on the fly. This is particularly useful on Windows, as you can use PIXWin to debug.
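One way to sketch the translation-layer idea from option 1 is to prepend a version-specific prologue of macros to a shared shader body before compilation. This is only an illustration; the function name and the ATTRIBUTE/VARYING macro names are hypothetical:

```cpp
#include <string>

// Prepend a version-specific prologue so one shader body compiles
// under different GLSL versions (a sketch of the "include file" idea).
std::string buildShaderSource(int glslVersion, const std::string& body)
{
    std::string prologue;
    if (glslVersion >= 330) {
        // Modern GLSL: attribute/varying were replaced by in/out.
        prologue = "#version 330 core\n"
                   "#define ATTRIBUTE in\n"
                   "#define VARYING out\n";
    } else {
        // Legacy GLSL still uses the old storage qualifiers.
        prologue = "#version 120\n"
                   "#define ATTRIBUTE attribute\n"
                   "#define VARYING varying\n";
    }
    return prologue + body;
}
```

The shader body itself then only ever uses ATTRIBUTE and VARYING, and the right prologue is picked at load time depending on the context version.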
@stainless: I meant ‘2 years’ as a good thing.
I’ll be using DirectX (everything I listed in the original post).
OK, if you want to look up/down then you’ll need the pseudo-pitch I mentioned.
When you construct your view matrix for a normal camera, you’d usually do it by combining some rotations for yaw and pitch (and maybe roll). The shear matrix I’m talking about would replace the pitch rotation.
If your camera starts out in standard position, facing along the -Z axis, you’d normally do an X-rotation to pitch it. Instead, you’re going to do a YZ shear, meaning there will be an offset along the Y axis proportional to position along the Z axis. The matrix for it will go like:
[1 0 0]
[x y z] [0 1 0] = [x y+az z]
[0 a 1]
The value ‘a’ there is the shear amount, which is equal to the tangent of the pitch angle. The matrix above is written for row-vector math; transpose it if you’re using column vectors in your app. Move the ‘a’ to a different component of the matrix if you need to shear along different axes.
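That row-vector math can be checked numerically. A minimal sketch, with the Vec3 type and function name being mine: note that Z passes through unchanged, which is why the shear doesn’t affect depth:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Apply the YZ shear above to a row vector:
// [x y z] -> [x, y + a*z, z], with a = tan(pitch).
Vec3 applyShear(const Vec3& p, double pitchRadians)
{
    double a = std::tan(pitchRadians);
    return { p.x, p.y + a * p.z, p.z };
}
```

At a pitch of 45 degrees, a = 1, so a point one unit into the scene slides up by exactly one unit on the Y axis while keeping its depth.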
That’s exactly what I did, Reed, but the problem is, I don’t want to only display my model horizontally. For example, my camera is at human height on a street, looking up at tall buildings. I want to look up a bit, but I want the verticals to stay vertical. It’s the same with an ortho projection: you can still look up/down, rotate, etc., and the verticals still stay vertical, except there is no depth. So I’m trying to build a matrix that behaves kind of like ortho, except with depth. Do I make sense?
You’re making it way more complicated than it needs to be. Just get a regular camera view and projection matrix working for your raytracer. You can look up the docs for glOrtho, glFrustum, and gluPerspective and they give you the exact formulas for the matrices. So just use those and get regular cameras working.
Then, for two-point perspective just use a regular camera with no pitch or roll, just yaw, so it stays horizontal. That’s all there is to it. Forget the idea of an extra “two-point matrix”. There’s no such thing. It’s just the regular view and projection matrix, with the camera horizontal.
(Optionally add back in a pseudo-pitch by shearing the camera up and down instead of rotating it. This is the only part that would be different from a standard camera matrix stack.)
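As a sketch of the yaw-only view rotation described above (row-major, with the Mat4 type and sign convention being my own assumptions): the middle row stays (0, 1, 0, 0), which is exactly why vertical edges stay vertical under two-point perspective:

```cpp
#include <cmath>

// Row-major 4x4 matrix.
struct Mat4 { double m[16]; };

// View rotation with yaw only (rotation about the world Y axis):
// no pitch or roll, so the camera stays horizontal and the Y row
// of the matrix is untouched.
Mat4 yawViewMatrix(double yawRadians)
{
    double c = std::cos(yawRadians);
    double s = std::sin(yawRadians);
    Mat4 v = {{  c, 0, s, 0,
                 0, 1, 0, 0,
                -s, 0, c, 0,
                 0, 0, 0, 1 }};
    return v;
}
```

A row vector (0, 1, 0, 0) (the world up direction) multiplied by this matrix comes out unchanged for any yaw angle, so vertical lines in the scene remain vertical on screen.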
I’m not sure what you’re looking for then. Obviously for the things that differ between GLSL versions you’re going to have to have multiple versions of the code somewhere, somehow. Nothing can save you from that.
The best you can do is segregate all the platform-specific stuff, so its impact is limited and you can write the rest of your shaders to a platform-independent API as much as possible. Whether you do it with the C preprocessor, or some preprocessor of your own, is just an implementation detail.
I finally got the ortho to work in OpenGL without using glOrtho. Thanks for the links.
For the two-point perspective, I have found lots of documentation, but I just can’t get it to work in OpenGL. Either I’m not understanding it or I’m not using it right. What I want to do is use it with glLoadMatrixf after using glMatrixMode(GL_PROJECTION); or glMatrixMode(GL_MODELVIEW); and I don’t know which of the two to use, but I assume it’s the modelview matrix. Also, do I have to build the normal view matrix first and then multiply it by the two-point matrix? I’m lost!!!
Then you are not zooming the camera, you are moving it.
For ortho you have, well, Google.
For parallel you have
Yes, several. I won’t give a list; rather, I’ll describe my newest (work-in-progress) one.
Creating the core functionality took around a week (part-time work). Adding further stuff has taken 6 months of part-time work so far, and it will take more. Don’t be scared by the number; it really has a ton of features. Naming a few of them:
- Cloth physics; Ray tracing reflections (through custom OpenCL ray tracer); Custom full-featured GUI; etc.
Implementing each feature takes time; it’s up to you whether you need it for a game (or software) or not.
I’ve used some libraries so far - namely Alure, Open Dynamics Engine - apart from them I also use standard and platform libraries.
True, but you asked how long till it was “usable”.
A 3D engine is never finished; you can always add features until you die.