I’m new at all this, and while I understand everything quite well in theory, I’m not too good with the verbiage yet. I.e., conceptually, I really understand all the bits and pieces. However, if I use such labels as “pipeline” a bit out of place, forgive me. Also, I don’t know if the following question will sound odd, but I really do know what I’m doing with it, and I’d like to be able to pull it off. So…
I’m tinkering around with polygons and such in the rendering pipeline, and I wanted to know how you could go about manipulating the individual vectors, from coordinate to coordinate, that form the polygon faces. What I’d like to do is have the polygon matrices all lined up (i.e., from the world all the way to the projection, the final “2D” image if you will) and then mess around with the polygons’ vectors in between the vertices. Actually, I would like to mess around with the vector lines before that stage in the pipeline, if that’s possible.
Basically, I would like to find out how to interface with the lines
drawn between vertices on any stage of the pipeline, do my own math on
them real-time, and also manipulate the art styles of the
vertex-to-vertex vectors as well.
Thanks for any help out there!
I’m having a hard time understanding what you’re talking about. Maybe
some kind of a mocked-up screenshot would help communicate what you’re
trying to do?
Perhaps this is a better explanation…
Say you take the vertices (4,3) and, say, (2,1) of a square and the vector line drawn between them. Is there a way to say to the engine, “take these vertices, place one more in between them that I will give to you via this mathematical function, and draw two vectors in lieu of one”?
And, is there a way to add artistic “adjustments” to vectors drawn
between coordinates, not just using the polygon edges as placeholders,
if you will, for the spaces they describe?
Hmm. It sounds like what you want to do is along the lines of
subdividing a mesh - cutting up its polygons (or in your case,
apparently, cutting up lines) into smaller ones and then applying
displacements to the new vertices generated by this. There are a lot of
ways to do this; which is the best way will depend on what you’re trying
to do. One way is to do it all on the CPU and include all the
subdivided, displaced vertices in the vertex buffer you submit to the
GPU. If the geometry must be animated, you would use a dynamic vertex
buffer that’s refilled each frame. Alternatively, you might use a
geometry shader on the GPU, or even the hardware tessellation shaders if you have quite recent video hardware.
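As a rough illustration of the CPU-side approach, here is a minimal sketch of subdividing an edge and displacing the new midpoints. The function names and the sine “bump” displacement are made up for the example; a real engine would feed the resulting vertex list into its vertex buffer.

```python
import math

def subdivide_edge(v0, v1, displace, levels=1):
    """Recursively split the segment v0-v1, displacing each new midpoint.

    v0, v1   -- (x, y) tuples, the original edge endpoints
    displace -- a function mapping a midpoint to its displaced position
    levels   -- how many rounds of subdivision to apply
    Returns the full list of vertices, endpoints included.
    """
    if levels == 0:
        return [v0, v1]
    mid = ((v0[0] + v1[0]) / 2.0, (v0[1] + v1[1]) / 2.0)
    mid = displace(mid)
    left = subdivide_edge(v0, mid, displace, levels - 1)
    right = subdivide_edge(mid, v1, displace, levels - 1)
    return left + right[1:]   # drop the duplicated midpoint

# Example: push each new midpoint by a sine "bump" along y
bump = lambda p: (p[0], p[1] + 0.25 * math.sin(p[0]))
points = subdivide_edge((4.0, 3.0), (2.0, 1.0), bump, levels=2)
# two levels turn 1 segment into 4, i.e. 5 vertices
```

For animated geometry you would rerun this each frame (with a time-varying `displace`) and refill a dynamic vertex buffer with the result.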
I still don’t know what you mean by “artistic adjustments to vectors” (I
think by vectors you really mean line segments) or “placeholders for the
spaces they describe”.
Thanks Reedbeta, def going in the right direction. Excellent. Really
appreciate your great help and advice. I’m getting the gist of what
you’re saying, and I think that’s pretty much what I’m getting at. Half
of my problem is figuring out the exact terminology and classifications
of what I’m seeking to do, which is very defined and thought-out
conceptually in my mind and on paper.
I’m looking to do quite a lot with what must be called “displaced
vertices” and such; basically a lot of math somewhere directly prior to
the actual frame display. The thing is, I want as few polygons as
possible, but I’d like to do adjustments to the polygons somewhere along
the render pipeline (and where is something I’m trying to figure out).
One of the things I’m struggling with is dealing with what are inherently “structural” things and not actual display things. I.e., usually, the line segments (sorry! That’s what I meant by “vectors”; learning the correct lingo here) of meshes exist in the undisplayed mesh/matrix world of the engine, but not in the actual frame on the screen, where they’re “colored over” and not necessarily used as lines.
Firstly, I’d like to use certain line segments (“outline” ones, basically) to draw comic-book style “ink” lines. This would obviously
vary frame to frame and require (I’m assuming) algorithms to determine
which polygon vertices and line segments are actually the current
outline ones. It would also mean, of course, that I’m seeking to
directly use the usually non-displayed polygon vertices as art tools
themselves (black lines), and that I’m seeking to have artistic control
over how they’re displayed (what I meant by artistic adjustment to
vectors). I’d also have to develop algorithms of some sort to “color in”
the spaces made by the outlines (what I meant by placeholders
basically), and I’d like to use the polygon points in between the
“outline” line segments to determine the shading and such, as those
points would obviously contain info about how far various portions of
the in-between-the-lines segments protrude inwards or outwards in the
mesh in that particular frame. I’m also seeking to figure out how to
dynamically create polygon “points” that would generate temporary
polygons over a certain area of a mesh. I.e., if for a brief moment a
mesh needs to bend or morph in a certain section of its body, I want to
be able to create said “points” and have either shading data and/or
literal polygons and vertices be created, if only temporarily.
Hope that all makes sense…? If it doesn’t, let me know and I’ll go at
it again; I know exactly what I’m going at, but I don’t know if it’s
coming across very clearly…
Thanks so much for your help!
I’m really appreciating devmaster.net and the forums; by far the best
dev forum I’ve been on. Thanks again. Great stuff here.
Ahh, outlining. Something you may want to google is “non-photorealistic
rendering” or NPR. There’s actually a good amount of research out there
about rendering in artistic styles, making images look painted, inked
and so forth.
One pretty simple way to do outlining is to draw two copies of the mesh: one is drawn normally, with full color, lighting, and shading, and the second
copy is drawn all in black, with backface culling reversed (so that it
draws back faces but not front faces) and is slightly expanded by
pushing the vertices outward along their normal vectors. This offsetting
can be done dynamically in the vertex shader quite easily. The result is
that since the front faces of the second draw are missing, it doesn’t
occlude the surfaces from the first draw, but at the edges you can see a
bit of the back side of the expanded mesh.
Doing it this way, the outline would automatically update based on
camera position and movement of the object. You could alter the color
and thickness of the outline per-vertex as well, if that’s what you
wanted to do. I think you would need pretty highly tessellated meshes to make this look good from all angles, though.
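The offset at the heart of that second, all-black pass is just one line of math per vertex. Here is a minimal CPU-side sketch of it (the function name is invented for the example; in practice this single line would live in the vertex shader):

```python
def expand_along_normals(positions, normals, thickness):
    """Offset each vertex outward along its unit normal -- the
    'expanded black copy' used for the inverted-hull outline pass."""
    return [(p[0] + n[0] * thickness,
             p[1] + n[1] * thickness,
             p[2] + n[2] * thickness)
            for p, n in zip(positions, normals)]

# A single vertex on a unit sphere: its normal equals its position,
# so expanding by 0.05 just pushes it 5% farther from the origin.
verts = [(0.0, 1.0, 0.0)]
norms = [(0.0, 1.0, 0.0)]
outline_verts = expand_along_normals(verts, norms, 0.05)
```

Varying `thickness` per vertex (via a vertex attribute) is how you would get the per-vertex outline width control mentioned above.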
Another option for outlines is to run an image-space postprocess that
detects edges using a Sobel filter and darkens them. That can also be
done in a pixel shader using what’s called a full-screen pass.
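For concreteness, here is the Sobel operator written out in plain Python on a tiny grayscale image; a pixel-shader version computes the same gx/gy from neighboring texture samples in the full-screen pass. The function name is made up for the example.

```python
def sobel_magnitude(img):
    """Apply the 3x3 Sobel operator to a 2D grayscale image (a list
    of rows) and return the gradient magnitude at each interior
    pixel. Darkening pixels where this value is high draws the edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical gradients from the 3x3 neighborhood
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical black/white boundary: strong response along the edge,
# zero response in the flat regions.
img = [[0, 0, 1, 1]] * 4
edges = sobel_magnitude(img)
```

In the postprocess version, `img` would be the rendered frame (or its depth/normal buffer, which often gives cleaner outlines than color alone).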
Ah ha, thanks a bunch Reedbeta. That’s terrific stuff to chew on. Def
going to track all that stuff down. I hadn’t thought about approaching
my aims with those techniques. I’m still curious: is there a way to “draw” line segments themselves? Use them as art lines, if you will, and manipulate the art? I gather that’s not necessarily a common aim, but I do figure that there’s a way to do it.
PS: in reference to your above post, I’m assuming that those are
essentially the core techniques of the art of a game like Borderlands?
That game is fairly close to the look I want. I just want a few
additional abilities regarding line segments and art control.
Yes, the GPU can draw line segments directly and even run pixel shaders
on the lines. I don’t know specifically how Borderlands works but after
doing a google search and reading some forum posts, it seems that people
think it is being done as a postprocess. I did not find anyone from the
company explaining how it’s done, though, so people could be mistaken.
Thanks for the research Reedbeta, appreciate it. It seems that you are indeed affirming what I’ve hoped for: that I can use the line segments as art lines (like “brush strokes”), and that I can do some art on them too. That, combined with the other techniques you’ve mentioned, gives me a lot to work with. Do you have any idea which sorts of functions in APIs deal directly with line segments? Or do you just take vertex and line segment data from polygon data and… I dunno, write your own?
Thanks again for all your help. This place is great.
Well, making lines look like brush strokes is going to be challenging.
When I say GPUs can draw lines, I mean that they basically give you what
the line tool does in MSPaint. They’re a fixed width and you have to
give a list of vertices which the lines will traverse, connect-the-dots
style, and each line is basically a long, thin rectangle. While you can
run a pixel shader to generate different colors along the length of the
line, it doesn’t sound like that’s enough to do what you want - although
it’s hard to judge without seeing something concrete, like a Photoshop
mockup of the kind of image you hope to create. You might have to do
your brush-strokes by drawing quad strips and mapping a texture along
the strip, or something like that. Basically, literal points and lines
are second-class citizens in the GPU world, and you get the greatest
flexibility by using polygons, which is what it’s really designed for.
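A rough sketch of that quad-strip idea: turn a 2D polyline into pairs of vertices offset perpendicular to each segment, which the GPU can then render as a triangle strip with a brush texture mapped along it. All names here are invented for illustration, and the corner handling is deliberately naive (no mitering).

```python
import math

def polyline_to_quad_strip(points, width):
    """Turn a 2D polyline into quad-strip vertices: for each point,
    emit a pair offset +/- half the width along the perpendicular of
    the adjoining segment. Texturing across the strip fakes a stroke."""
    half = width / 2.0
    strip = []
    for i, (x, y) in enumerate(points):
        # direction of the adjoining segment (last point reuses the
        # previous segment's direction)
        j = min(i, len(points) - 2)
        dx = points[j + 1][0] - points[j][0]
        dy = points[j + 1][1] - points[j][1]
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length   # unit perpendicular
        strip.append((x + nx * half, y + ny * half))
        strip.append((x - nx * half, y - ny * half))
    return strip

# A horizontal three-point stroke, 0.2 units wide: each input point
# becomes a top/bottom vertex pair.
strip = polyline_to_quad_strip([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], 0.2)
```

Varying `width` along the stroke, or jittering the offsets slightly, is one way to get the variable-width, imperfect pen-line look discussed later in this thread.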
Before going too much further with your project, I’d strongly suggest
that you roll up your sleeves and dive into the NeHe
tutorials and work through those until you
have a more concrete idea about how 3D graphics APIs work. There are just so many details and concepts that I won’t be able to explain in the
space of a forum post, and which you’ll learn much faster by trying out
and doing yourself than by reading about them in generalities.
It’s been a while since I touched cel shading techniques, and I felt like whipping up a quick demo illustrating Sobel edge detection. You can view the shader source (nothing special, really). Both rendering techniques, post-process edge detection and back-face rendering, have flaws though. Accurate edge detection requires an image with high contrast. If your image lacks contrast, you
can end up with some less than desirable results. In my particular demo,
I rendered the first pass using full ambient lighting, which is why only
the silhouette edges are rendered and not the interior edges. Had I included lighting in the first pass, some of the 3D models with low polygon counts would have ended up looking like wireframes once the edges were rendered. It just requires some careful maths to get around. Rendering
the edges with back-face polygons is another technique that works well
all-around, but you can run into problems with low detail models or
viewing models at certain angles.
I think it’s possible to do clean looking brush strokes if you can
render the edges as lines instead of reverse engineering that from
models or images. You can view a couple neat brush algorithms on Mr.
Doob’s Harmony web app.
I see. Yeah, that’s basically what I’m trying to do. I simply want to
have total access to line segments so I can take the data and write a
heck of a lot of my own shaders to do a bunch of things to them. And
yes, I’m looking into those NeHe tutorials. They look awesome, planning
to rip through most of them shortly.
Wowzers, thanks! That’s a lot of great info to take in. Ideally, I’d
like to do what you said, take some algorithms a la Mr. Doob’s example
there and apply them to outline line segments in order to achieve a
comic pen/ink/brush look with slight imperfections and variable width.
PS: how do I upload images to a post here from my comp? I’m trying to
load a few images from my comics… In the meantime, my reference
styles, very close to my own, are the B&W linework from Calvin and
Hobbes (i.e., what I’d like to develop line segment shader algorithms
for) and, as regards the coloration, both the watercolor images Bill
Watterson did in his books and the matte coloration found in the Tintin
comics. Sort of a medium between the two.
You can upload your images to ImageShack or
another free image host, and then insert them in a forum post using that
URL (there’s an image button on the toolbar here).
I’d recommend to use imgur.com. It works better than the others, with