Yes, recalculate it as the drone moves. You’d have to do that anyway, as presumably the plane is also moving and not just staying fixed in one spot relative to the drone…
oh ok. I get it now. It’s basically the same as a real camera zoom, the camera doesn’t get closer to the object but the zoom lens changes the focal length.
But because my drones are not fixed in the sky, as the drone gets closer to the airplane, all that will change? I guess I just have to re-calculate the frustum as the drone moves?
The bounding rectangle will get smaller and larger as the plane moves farther and closer away, so the plane will stay the same size on the image. It’s basically a 2D operation. BTW, “zoom” means to change the FOV, not to move the camera closer or farther.
I see what you mean. But will this also account for the FOV? I mean, if the airplane is 1/3 the size of the view plane, will rendering as you suggest only magnify it, or will it zoom as in moving the camera closer? If it zooms, then the airplane will no longer look the same, since it gets more skewed the closer it is to the camera, and hence is no longer centered!
Putting the bounding rectangle (assuming it’s been calculated in the correct coordinate system) into glFrustum (or a similar D3D function) gives you a projection matrix. You use that to render the scene. The image rendered by that will contain the contents of that rectangle. So if the rectangle is already centered on the object then the image will be centered.
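For what it’s worth, here’s a rough sketch (helper names are mine, not a real API) of the matrix glFrustum builds from those four numbers, plus a projection helper to check that the center of the rectangle really lands at the center of the image:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Off-center ("sheared") perspective projection, same layout as glFrustum.
// l, r, b, t are the bounding rectangle on the near plane, in camera space.
// Returned row-major; only the nonzero entries are set.
std::array<std::array<double, 4>, 4> frustumMatrix(double l, double r,
                                                   double b, double t,
                                                   double n, double f) {
    std::array<std::array<double, 4>, 4> m{};
    m[0][0] = 2.0 * n / (r - l);  m[0][2] = (r + l) / (r - l);
    m[1][1] = 2.0 * n / (t - b);  m[1][2] = (t + b) / (t - b);
    m[2][2] = -(f + n) / (f - n); m[2][3] = -2.0 * f * n / (f - n);
    m[3][2] = -1.0;
    return m;
}

// Project a camera-space point and perspective-divide to NDC x/y.
void projectToNDC(const std::array<std::array<double, 4>, 4>& m,
                  double x, double y, double z,
                  double& ndcX, double& ndcY) {
    double cx = m[0][0] * x + m[0][2] * z;
    double cy = m[1][1] * y + m[1][2] * z;
    double w  = -z;  // m[3][2] * z with m[3][2] == -1
    ndcX = cx / w;
    ndcY = cy / w;
}
```

With this, a point at the center of the rectangle on the near plane projects to NDC (0, 0), i.e. the center of the rendered image, which is the whole point of feeding the object’s bounding rectangle in.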
LOL, OK, seems too simple. You’re saying to project the object onto the view plane and get its 2D bounding rectangle. That part is easy, provided the object fits in the view entirely. But how does the frustum help me with centering it? And how does this deal with “zoom extents”?
Uh…you render it?
ah ok. But what do I do next after I’ve set the frustum??
OK, well, if pointing at the center of the BB isn’t centered well enough then the sheared-frustum method should take care of it.
That was only an example. I do rotate the drone’s camera to the center of the airplane’s AABB so it is somewhat centered in the view. The problem I have at that point is to “zoom extents”.
Say for example you have the airplane perfectly centered on the screen. It is also rotated so that one of the wings comes at you diagonally (bottom right), and the FOV is 65+. If you move the camera away from the airplane so that the airplane appears half the size in the view, and draw a rectangle around the airplane, you’ll see that the rectangle is no longer centered: for one, that wing looks shorter as you move away, and as you move away, the whole airplane’s perspective changes, which affects how centered it was at first.
So if you do it the reverse way (the airplane is in the view but far away, so it looks, say, half the size), it’s easy to center it and leave it, but as you zoom in, it’s no longer centered. So it seems we have to zoom/center, zoom/center, zoom/center as many times as needed until it’s right, but that’s crazy, isn’t it?
So the drone camera isn’t already pointing directly at the airplane? (I.e. the airplane is in the top left corner of the drone’s view)
In that case, you can still zoom to it, using the second method I posted in my last comment. It requires setting non-balanced values for the “left, right, bottom, top” parameters in the projection matrix. Basically it creates a skewed view frustum that corresponds to cropping/zooming a rectangle out of the original image.
But if the object is far from the center of the view it’ll still appear perspective-distorted when zoomed this way. And if you then animate the camera to turn to face the target over some amount of time, you’ll see the distortion changing, which seems like it could be weird.
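To make the cropping idea concrete, here’s a small sketch (helper names are mine) that takes the full symmetric near-plane rectangle for a given FOV and remaps a crop region, given in NDC, into the asymmetric left/right/bottom/top values you’d pass to glFrustum:

```cpp
#include <cassert>
#include <cmath>

struct FrustumRect { double left, right, bottom, top; };

// Symmetric near-plane rectangle for a camera with the given vertical FOV
// (radians) and aspect ratio (width / height).
FrustumRect symmetricRect(double fovY, double aspect, double zNear) {
    double top = zNear * std::tan(fovY * 0.5);
    double right = top * aspect;
    return { -right, right, -top, top };
}

// Crop a sub-rectangle (given in NDC, x/y in [-1, 1]) out of the full view.
// Feeding the result to glFrustum zooms into that region of the image
// without moving the camera: the "sheared frustum" trick.
FrustumRect cropRect(const FrustumRect& full,
                     double ndcLeft, double ndcRight,
                     double ndcBottom, double ndcTop) {
    double w = full.right - full.left;
    double h = full.top - full.bottom;
    return {
        full.left   + (ndcLeft   + 1.0) * 0.5 * w,
        full.left   + (ndcRight  + 1.0) * 0.5 * w,
        full.bottom + (ndcBottom + 1.0) * 0.5 * h,
        full.bottom + (ndcTop    + 1.0) * 0.5 * h,
    };
}
```

Cropping the bottom-left quadrant, for instance, means passing the NDC rect (-1, 0, -1, 0); the result is an off-center frustum whose image is exactly that quadrant blown up to full size.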
I see what you mean, but the problem remains. Say for example my airplane is halfway visible in the top left corner of my view plane. If I rotate the plane’s normal (which is the camera) toward the center of the AABB (the airplane), the shape of the airplane will change (not the actual geometry, but the perspective of it). At this point, I have to re-calculate how to center it in the view plane, and each time I rotate/move the view plane, the airplane’s perspective changes, offsetting how centered it is.
I guess it’s something as simple as a “zoom extents” in a CAD software. So in other words, how do you “zoom extents” an object to the screen or a view plane?
You know the centre of the object in its local space: it’s (0,0,0)
You know the location of the object
Therefore the centre you need for displaying it is (0,0,0) + position
To find the correct FOV you use the bounding sphere of the object. Take the radius of the bounding sphere and the position of the camera to calculate the FOV. Think of it as a triangle with one point being the camera location, one point the position of the object, and one point the intersection with the bounding sphere.
The FOV you need is then obvious: tan(angle) = sphere_radius / length(camera_position - object_position); FOV = 2 * angle.
Personally I like to increase the FOV slightly to allow a bit more of the background, but it works perfectly.
I use it all the time for racing games.
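A minimal C++ sketch of that idea (names are mine). One nuance: asin(radius / distance) is the exact half-angle to the sphere’s silhouette, since the grazing sight line is tangent to the sphere; the atan form implied by the formula above is a close approximation when the sphere is well away from the camera:

```cpp
#include <cassert>
#include <cmath>

// "Zoom extents" via the bounding sphere: the FOV (radians) that makes a
// sphere of the given radius just fill the view from a camera at the given
// distance.  Uses asin() for the exact silhouette angle; atan(r / d) is the
// approximation from the triangle with the right angle at the object center.
double fitSphereFov(double sphereRadius, double cameraDistance) {
    double halfAngle = std::asin(sphereRadius / cameraDistance);
    return 2.0 * halfAngle;
}
```

As noted above, padding the result a little (multiplying by, say, 1.05) leaves some background visible around the object.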
True, but it’s fast enough to go through the vertices since there aren’t that many.
I see. I didn’t exactly try that; I was simply zooming in bit by bit, re-calculating the object’s fit in the plane, and re-centering it, because each time I zoom in, the layout changes a bit. What I mean is, when the airplane is far from the drone it looks more ortho than when the drone is very close. Say the drone is close to the tip of a wing with a FOV of 65: that wing looks very close, and when you zoom just a little, that wing goes out of the view.
The other problem is the “center of what?” question: if I set my camera direction to the AABB center, it’s not always centered, especially when the airplane is rotated.
So what I’ve done is project the object onto the plane, zoom in/out little by little until the object fits within the plane (not perfectly), then move my camera so that the object is centered on the plane. At that point, the object no longer fits in completely because the FOV is so wide, so I redo all the above steps again until it’s good enough. It may take 20+ passes, so obviously that’s not the right way to do it.
So what you propose should take care of fitting the object within that view plane? But what about it being centered as well?
So you want to have the drone’s camera zoom in so the target object just fits in the display? Well, I suppose you can look at this as a problem of calculating the FOV. The drone would rotate its camera so it’s pointing at (the center of?) the object, then zooming is done by adjusting the FOV.
So for each point on the target object, you could calculate the FOV needed to enclose that point, by looking at the angle between the camera-to-point vector and the camera’s view vector. Then find the maximum FOV out of those. (You can simplify it to avoid too many trig functions by finding the minimum dot product instead of the maximum angle, etc.)
Another approach would be to project each point to the near plane (basically a ray-plane intersection) and find their bounding rect in the 2D coordinate system implied by the camera (camera at the origin, xy axes pointing to the camera’s right and up). Then you can plug this straight into your projection matrix function as the “left, right, bottom, top” inputs they usually have (in glFrustum for instance).
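Here’s a rough sketch of the first approach (max angle per point, done via the minimum dot product); all names are mine. Strictly it yields a circular cone angle, so for a rectangular viewport you’d run it on the vertical and horizontal components separately, but the idea is the same:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static double length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// For each vertex, the angle between the camera-to-vertex vector and the
// view direction; the FOV enclosing every vertex is twice the largest such
// angle.  Minimizing the normalized dot product is the same as maximizing
// the angle, so only one acos is needed at the end.
double fovToEnclose(const Vec3& camPos, const Vec3& viewDir,
                    const std::vector<Vec3>& points) {
    double minCos = 1.0;
    for (const Vec3& p : points) {
        Vec3 d{ p.x - camPos.x, p.y - camPos.y, p.z - camPos.z };
        double c = dot(d, viewDir) / (length(d) * length(viewDir));
        if (c < minCos) minCos = c;
    }
    return 2.0 * std::acos(minCos);
}
```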
Also, you can use a low-poly version (e.g. the last LOD level) of your planes with the same technique proposed by Reedbeta.
An AABB has 8 points; the model has more. How is it not trading accuracy for speed?
The way I see it, C is no better than Pascal. It’s all about the compiler.
Besides, many beginners learn Delphi/Pascal to become programmers. Starting OpenGL with C++ is discouraging for a beginner. But Basic/Pascal? Why not, more people would make games. I started with Pascal, and now I do C++, but I’m no expert at it because it’s so complicated. I still use both languages, depending on what I want to do.
So in my opinion, go for it. DarkBasic uses Basic coding for making games, so why not Pascal too? And maybe DOS later (just kidding LOL)
Yes, exactly, but I still have the problem of how to “zoom extents” so that the object fits the entire cockpit display! I know how to do that now for the screen as you explained in your other post, but I’m not sure how to do that on a plane instead of the screen!
It sounds like maybe you want to render the drone’s camera to an offscreen buffer and then map it as a texture when rendering the cockpit display? That would probably give you the most flexibility.
You’re right. Thanks!
If you don’t mind going through all the vertices, just project them to screen space and calculate the AABB in 2D. That’ll be the most accurate way. There’s no reason to mess around with re-creating a 3D AABB and trying to map it to the near plane, which will be just as expensive but less accurate.
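A sketch of that vertex loop, assuming the vertices are already transformed into camera space (looking down -Z) and a symmetric perspective; names are mine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };
struct Rect2D { double minX, maxX, minY, maxY; };

// Project every vertex through a symmetric perspective and take the min/max
// of the results in NDC.  Going through all the vertices gives the tightest
// axis-aligned 2D fit, unlike projecting the 8 corners of a 3D AABB.
Rect2D screenSpaceBounds(const std::vector<Vec3>& verts,
                         double fovY, double aspect) {
    double f = 1.0 / std::tan(fovY * 0.5);
    Rect2D r{ std::numeric_limits<double>::max(),
              std::numeric_limits<double>::lowest(),
              std::numeric_limits<double>::max(),
              std::numeric_limits<double>::lowest() };
    for (const Vec3& v : verts) {
        double w  = -v.z;  // perspective divide; camera looks along -Z
        double nx = (f / aspect) * v.x / w;
        double ny = f * v.y / w;
        r.minX = std::min(r.minX, nx); r.maxX = std::max(r.maxX, nx);
        r.minY = std::min(r.minY, ny); r.maxY = std::max(r.maxY, ny);
    }
    return r;
}
```

The resulting rect can then be compared against the viewport, or remapped onto the near plane and fed back into glFrustum as described earlier in the thread.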
The problem with AABB and Bounding Sphere is that they don’t guarantee the object is centered in the rectangle.
I need something like a “zoom extents” function to find the 2D shape and get the extents from it to get a perfect fit rectangle.
What about re-creating the AABB of the object based on its current rotation, projecting the object onto the AABB’s near plane, and then shrinking that plane to fit, which would automatically give the 2D rectangle?
Or is there a better way? I don’t mind going through all the vertices for the object to get it done.
That’s what Vilem wanted too. The update I made essentially turns all render passes into “run once”, since pass N can now access the textures generated from N-2, N-3, N-4, etc. So in your case, go ahead and generate your water, grass, and rock textures individually over 3 passes. Sample0 = water, Sample1 = grass, Sample2 = rock. On the 4th pass, you can sample from all those textures.
Actually, there was an error in my documentation of how the textures work. I updated the zip file with new comments in the shaders, but it’s essentially what I described above. Sample0 = texture from first pass, Sample1 = texture from second pass, etc.