# Vector as linear comb. of vectors from same plane?

24 replies to this topic

### #21Mihail121

Senior Member

• Members
• 1059 posts

Posted 14 January 2010 - 09:10 AM

Nick said:

I'm curious why you have to do this in the first place. If v is known to be in the plane, it was constructed as a linear combination of v1 and v2 in the first place. So instead of working in world xyz coordinates, why not work in the plane's coordinate system instead (ab coordinates)?

Can you reveal anything more about the actual application you need this for?

Sure, I have nothing to hide. The project behind it is rather silly and useless, but I got so involved in the maths it made me blind as usual :) A friend of mine got a quadcore, so we wanted to brute-force... ehm, test whether they REALLY gave us a quadcore :) . We came up with a battery of sub-projects. One deals with breaking ciphers based on shift registers, another with analysing data, and yet another we decided should be a software rasterizer, since I have some knowledge of the topic and need one for Silverlight anyway. I remember discussing an idea here with you (Nick et al.) that I never found the time to test in practice, about how to crudely divide the rasterizer workload over multiple units. I can't find the post now, but it went like this: you write a small rasterizer that fills only an ID-Z-buffer, i.e. it stores z-values as a normal z-buffer does, but also stores the ID of the polygon covering that pixel. After you've pushed all the geometry through it, you do this:

for each Element of the ID-Z-buffer {
    reverse-project (Element.screen_x, Element.screen_y) using Element.z
        into (point.x, point.y, point.z)
    triangle = Triangles[Element.ID]
    v1 = triangle.reference_vector1
    v2 = triangle.reference_vector2
    v3 = triangle.reference_vector3
    v  = vector formed by (point.x, point.y, point.z) in the same plane
         as v1, v2, using the same reference point
    // v1, v2, v3 are precomputed and orthonormal
    //
    // now comes the funny part
    //
    calculate alpha, beta such that v = alpha*v1 + beta*v2
    //
    // OK, now comes the even funnier part I'm not sure about at all :D
    //
    use alpha & beta with two precomputed vectors on the texture plane
        covering the triangle plane to calculate (u1,v1)...(un,vn), the
        texture coordinates of all available textures for this point
    use alpha & beta to calculate other shading inputs
    use the available vectors for lighting & the rest
    use the shading information to shade the corresponding pixel on screen
}
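By the way, since v1 and v2 are orthonormal (as the comment above notes), the "funny part" reduces to two dot products: alpha = v·v1 and beta = v·v2. A minimal Python sketch, with function names of my own choosing:

```python
# Sketch: recover (alpha, beta) with v = alpha*v1 + beta*v2, assuming
# v1, v2 are orthonormal and v lies in their plane. Names are illustrative.

def dot(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

def plane_coords(v, v1, v2):
    """Return (alpha, beta) such that v = alpha*v1 + beta*v2.
    Only valid when v1 and v2 are orthonormal and v is in their plane."""
    return dot(v, v1), dot(v, v2)

# Example: v1, v2 span the xy-plane
v1 = (1.0, 0.0, 0.0)
v2 = (0.0, 1.0, 0.0)
v = (3.0, -2.0, 0.0)
alpha, beta = plane_coords(v, v1, v2)
print(alpha, beta)  # 3.0 -2.0
```

For a non-orthonormal basis you would instead solve the 2x2 normal-equations system, but with the precomputed orthonormal vectors the dot products are all you need.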


Now this is absurd, I know that; it's perfectly clear to me that it's a waste. It's just meant to test the new hardware, and we want to put it together fast. Why can we test the hardware with it? Easy: we divide the screen into four almost equal squares and use each core to rasterize one square. It should work, in my eyes. The only thing I'm not sure about in the pseudocode above is using alpha and beta to extract (u,v), but it seems OK on paper. As I said, I'm so excited and blind I cannot think straight about alternatives right now; surely there are many.

### #22Reedbeta

DevMaster Staff

• 5340 posts
• LocationSanta Clara, CA

Posted 14 January 2010 - 06:39 PM

Why not just go for full deferred shading? That seems to be what you're heading toward, only in your system you have to do a bunch of extra per-pixel work to figure out triangle IDs, texture coordinates and crap...you could just rasterize out all the information for your shading equation and read it back in for the lighting pass.

Potentially even simpler, just write a normal forward shaded renderer with the screen split into four quadrants. This doesn't load balance among the cores (unless you do something tricky like adjusting the size of the quadrants) but since this is just a benchmark anyway, you can choose the scene you render to be nicely balanced.
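The quadrant scheme can be sketched in a few lines; `render_quadrant` here is a hypothetical stand-in for a real forward rasterizer, not code from the thread:

```python
# Sketch of the quadrant split: one worker per quarter of the screen.
# render_quadrant is a placeholder for the actual rasterization work.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 640, 480

def quadrants(width, height):
    """Split the screen into four (x, y, w, h) rectangles."""
    hw, hh = width // 2, height // 2
    return [(x, y, hw, hh) for y in (0, hh) for x in (0, hw)]

def render_quadrant(rect):
    x, y, w, h = rect
    # a real renderer would rasterize only pixels inside this rectangle
    return (x, y, w, h)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_quadrant, quadrants(WIDTH, HEIGHT)))
```

As noted, fixed quadrants don't load-balance: if all the geometry lands in one quadrant, one core does all the work while three idle, which is why adjusting quadrant sizes (or choosing a balanced benchmark scene) matters.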
reedbeta.com - developer blog, OpenGL demos, and other projects

### #23Mihail121

Senior Member

• Members
• 1059 posts

Posted 14 January 2010 - 07:27 PM

Reedbeta said:

Why not just go for full deferred shading? That seems to be what you're heading toward, only in your system you have to do a bunch of extra per-pixel work to figure out triangle IDs, texture coordinates and crap...you could just rasterize out all the information for your shading equation and read it back in for the lighting pass. :)

Potentially even simpler, just write a normal forward shaded renderer with the screen split into four quadrants. This doesn't load balance among the cores (unless you do something tricky like adjusting the size of the quadrants) but since this is just a benchmark anyway, you can choose the scene you render to be nicely balanced. ;)

I want ALL of the information to be calculated in the very last loop, i.e. to be very easily deployable on a separate unit. I definitely do not want to calculate texture coordinates and do other computations for pixels that are later going to be covered by other pixels. Besides, this mode of operation allows for much simpler scanline clipping than Sutherland-Hodgman & v-caches. Lighting is not my absolute priority anyhow.

### #24Reedbeta

DevMaster Staff

• 5340 posts
• LocationSanta Clara, CA

Posted 14 January 2010 - 07:46 PM

What about storing UVs and material IDs at each pixel as you rasterize, instead of triangle IDs? You'd save yourself solving the plane equation later on, but would still be sampling the textures and doing all the lighting/shading work in the final pass.
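The per-pixel record being described might look something like this; the field names and the material table are my own illustration, not an API from the thread:

```python
# Sketch of a per-pixel record storing UV + material ID at rasterization
# time, so the shading pass never solves the plane equation.
from dataclasses import dataclass

@dataclass
class GBufferPixel:
    z: float          # depth, as in a normal z-buffer
    u: float          # interpolated texture coordinate
    v: float
    material_id: int  # index into a material/texture table

def shade(px, materials):
    """Shading pass: uses only the stored record, no plane equation."""
    material = materials[px.material_id]
    return material["sample"](px.u, px.v)

# toy material whose "texture" just echoes the UV as a color
materials = [{"sample": lambda u, v: (u, v, 0.0)}]
color = shade(GBufferPixel(z=0.5, u=0.25, v=0.75, material_id=0), materials)
```

The trade-off versus the triangle-ID approach: a few more bytes per pixel in exchange for skipping the reverse projection and the alpha/beta solve in the final loop.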

Maybe I've misunderstood, but it seems like rasterizing the ID/Z-buffer is a preprocess before starting the actual benchmark? In which case, I'm not sure why you're worried that much about computing pixels that will later be covered by other pixels (in that phase)?
reedbeta.com - developer blog, OpenGL demos, and other projects

### #25Mihail121

Senior Member

• Members
• 1059 posts

Posted 14 January 2010 - 07:50 PM

Reedbeta said:

...
Maybe I've misunderstood, but it seems like rasterizing the ID/Z-buffer is a preprocess before starting the actual benchmark? In which case, I'm not sure why you're worried that much about computing pixels that will later be covered by other pixels (in that phase)?

Actually you're right, but then again, not quite :) I am indeed benchmarking the last phase, but in that case it doesn't matter what I put in the buffers prior to it, does it? In fact, going with the ID/Z-buffer is more an act of laziness than a decision supported by justified claims and arguments. :)
