Thanks for your tip, I’m new to WebGL and OpenGL in general. I’ve been stuck in DirectX land for too long. It never occurred to me to check the OpenGL ES 2.0 specs. I checked the code, and I already display the terrain in chunks of 8x8 mesh objects. Each mesh therefore uses a sub-grid of (128+1)x(128+1) vertices, well below the 65,536 limit (by pure coincidence). Splitting into multiple meshes was done for culling reasons. Otherwise, for now I’ve skipped advanced terrain rendering techniques (like the Geo Clipmaps you mentioned). PCs should probably be able to handle at least a 512x512 grid (~0.5M triangles). Not sure about handhelds.
But I will read up on the OpenGL ES 2.0 spec.
A 1024x1024 heightmap will create a VBO with more than 1 million vertices. The OpenGL ES 2.0 specification (which is what WebGL is based on) only allows unsigned short indices, so a single draw call can only address 65,536 vertices (index values 0–65535). Since you’re going well beyond that limit, you’re likely to run into issues. 512x512 shouldn’t work either, but you may see the first 65,536 vertices before the rest gets cut off. If you’re seeing more, then there’s probably a driver bug that’s allowing it (FYI, driver bugs are a dime a dozen).
An optimized heightmap engine will produce (width x height) vertices. This means the largest heightmap you can draw in one go on OpenGL ES 2.0 is 256x256 = 65,536 vertices. Even on desktop OpenGL, where 32-bit indices are available, you generally don’t want to use them because they consume twice as much video memory as shorts. In these cases, just render the terrain as a set of 256x256 patches. This is also better for performance, because you can quickly cull non-visible patches. A single large heightmap is difficult to cull because it has a larger view radius, and thus wastes GPU cycles processing non-visible polygons.
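If it helps, here’s a rough sketch (plain C++; the patch size and winding order are just example choices) of generating a 16-bit index buffer for one patch, which is really why 256x256 vertices is the ceiling:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a triangle-list index buffer for one terrain patch of
// `size` x `size` vertices. With 16-bit indices (the only wide index
// type OpenGL ES 2.0 guarantees), `size` can be at most 256, because
// 256 * 256 = 65536 vertices is the largest range a GLushort can address.
std::vector<std::uint16_t> buildPatchIndices(std::uint32_t size)
{
    assert(size >= 2 && size * size <= 65536);

    std::vector<std::uint16_t> indices;
    indices.reserve((size - 1) * (size - 1) * 6);

    for (std::uint32_t z = 0; z + 1 < size; ++z) {
        for (std::uint32_t x = 0; x + 1 < size; ++x) {
            // Two triangles per grid cell.
            const std::uint16_t i0 = static_cast<std::uint16_t>( z      * size + x);
            const std::uint16_t i1 = static_cast<std::uint16_t>( z      * size + x + 1);
            const std::uint16_t i2 = static_cast<std::uint16_t>((z + 1) * size + x);
            const std::uint16_t i3 = static_cast<std::uint16_t>((z + 1) * size + x + 1);

            indices.insert(indices.end(), { i0, i2, i1,   i1, i2, i3 });
        }
    }
    return indices;
}
```

Note the limit is on the index *values*, not the index count: a 256x256 patch has 65,536 vertices but 255 x 255 x 2 = 130,050 triangles, and a draw call can happily submit all of those indices as long as every value fits in an unsigned short.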
You may also want to look into Geo Clipmaps. It’s a fairly easy technique to implement and gives you the quality and performance for rendering highly detailed terrain.
About loading everything in one pass, I think I handled that one.
Building the terrain (grid size 1024x1024) freezes the browser, in both Firefox and Chrome. The terrain is loaded in chunks, but it still freezes. There were no problems when I decreased the terrain resolution to 512x512.
Looks like both browsers have to be restarted after failing to load the 2M triangles. I wonder whether these are browser limits or whether three.js has a lot of allocation overhead. Are there absolute memory allocation limits in browsers?
Easy-to-remember syntax? Well, QBASIC? Just kidding :-)
If you’ve already started with C++, then stick with that. Otherwise, you could learn Pascal, whose syntax is easy to remember, but there is a lot more to remember than syntax (function calls, etc.).
I didn’t have time to leave a “like” on your Facebook page, nor the time to visit it.
The more combinations your NN has to deal with, the more hidden layers you need (at an inverse-exponential rate), and the more complicated your sigmoid needs to be: the bigger the network, the more you need to reduce your sigmoid’s error margin or make it more sensitive. On a very small network, a general sigmoid works well. But on a much larger one, you need to tighten your sigmoid function into more than a simple curve with a cutoff. It’ll learn much faster and use fewer hidden layers.
Like humans, it’s not the size of the brain that matters the most, it’s how the data is processed.
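Roughly what I mean by “tighten”: give the curve a steepness parameter and tune it to the problem. A toy C++ sketch (the parameter name k is just for illustration):

```cpp
#include <cmath>

// Plain logistic sigmoid: a gentle curve, fine for small networks.
double sigmoid(double x)
{
    return 1.0 / (1.0 + std::exp(-x));
}

// "Tightened" sigmoid: the steepness k squeezes the transition band
// around 0, so the neuron commits to ~0 or ~1 for smaller |x|.
// Larger networks tend to want a sharper k, tuned per problem.
double sigmoidSteep(double x, double k)
{
    return 1.0 / (1.0 + std::exp(-k * x));
}
```

With k = 1 you have the plain logistic curve; bump it up to k = 4 and the output at x = 0.5 jumps from about 0.62 to about 0.88, so the neuron commits to a firm 0 or 1 much sooner, which is what a big network needs.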
Thanks for the reply…
I was thinking it wouldn’t be able to store enough with just one hidden layer… it would overuse its connection weights.
I never really even thought the sigmoid shape would matter at all, as long as you evolved the threshold along with the connection weights… especially if it was harsh bit input to harsh bit output.
That might be one of those ink blot things.
Uh, was that a question, an observation, or a complaint? :-)
I’ve worked with NNs and GAs for a few years, and what I can say is that bigger is not always better (ha ha). What matters is your fitness function for the GA and your sigmoid for the NN.
If your fitness function is not that good, growing your population will not work.
If your sigmoid is only the general one (http://en.wikipedia.org/wiki/Sigmoid_function), not tailored to the problem it solves or learns, making your network larger will not help.
It’s very hard to make a fitness or sigmoid function that is near perfect for a particular problem, but the closer they get to perfect, the smaller the network can be and the faster you get the expected results.
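To make the fitness point concrete, here’s a toy sketch of one GA generation (all names, and the fitness function itself, are made up for the example): if the fitness ranks candidates poorly, the selection step below just breeds from noise, no matter how large the population is.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Toy genome: a flat vector of weights.
using Genome = std::vector<double>;

// Hypothetical fitness function; in practice this is the part you
// spend most of your time shaping for the specific problem.
double fitness(const Genome& g)
{
    double score = 0.0;
    for (double w : g)
        score -= (w - 0.5) * (w - 0.5);   // toy target: all weights near 0.5
    return score;
}

// One generation of a plain GA: rank by fitness, keep the top half,
// refill the bottom half with mutated copies of the survivors.
void step(std::vector<Genome>& population, std::mt19937& rng)
{
    std::sort(population.begin(), population.end(),
              [](const Genome& a, const Genome& b) { return fitness(a) > fitness(b); });

    std::normal_distribution<double> mutate(0.0, 0.05);
    const std::size_t half = population.size() / 2;
    for (std::size_t i = half; i < population.size(); ++i) {
        population[i] = population[i - half];          // clone a survivor
        for (double& w : population[i])
            w += mutate(rng);                          // small random mutation
    }
}
```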
Got some great software upgrades on the way, just in time for Christmas projects! I’m feeling generous, so send me a message if you want a free demo! :)
great, very relaxing
Looks really nice! I tried to zoom in, hoping to see snowflake fractals ;) but I did see an interesting effect nevertheless.
Being winter and all, I did a shader on snow. I was going to do Koch flakes, but I settled for a quick two-minute job instead :)
Neat - I like the art style, and how you play with the scale to keep things manageable; looks like fun.
I better have another look at it
Bravo sir, bravo! I posted them all except the Mars one because it was your fav… :) Actually, I didn’t post it because the landscape exhibits… interestingly adult-oriented patterns (o¿O).
And to finish off the sequence … rocky planets
and gas giants
No, they were declared, but I wasn’t using Sample0.
So the compiler optimised it out.
Since Sample0 was not found in the init method, it never looked at Sample1, etc.
Using a dummy read is the safest way of dealing with it; in the end I used this value as a detail texture.
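For anyone hitting the same thing: the init code also needs to tolerate a missing uniform instead of stopping at the first one. Something along these lines (the Sample0/Sample1 names match my shaders, the rest is just a sketch):

```cpp
#include <GLES2/gl2.h>   // or the desktop GL headers/loader, depending on the target
#include <cstdio>

// Bind up to `count` samplers named Sample0, Sample1, ... to texture
// units 0, 1, ... A uniform the compiler optimised away returns -1
// from glGetUniformLocation, so skip it instead of bailing out.
void bindSamplers(GLuint program, int count)
{
    for (int i = 0; i < count; ++i) {
        char name[16];
        std::snprintf(name, sizeof(name), "Sample%d", i);

        GLint location = glGetUniformLocation(program, name);
        if (location == -1)
            continue;                 // declared but unused -> optimised out

        glUniform1i(location, i);     // program must be current (glUseProgram)
    }
}
```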
And of course I had to do a 6 pass one with a cloud layer
The first one I really like
A five-pass shader :>
Pass one generates a random-number texture
Pass two generates a Mercator map for the planet
Pass three generates a palette texture
Pass four generates a star field
Pass five draws the planet by ray tracing a sphere and mapping the Mercator onto it
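The heart of pass five is just ray/sphere intersection plus turning the hit normal into longitude/latitude UVs for the map from pass two. The real thing lives in the shader; this is only a C++ sketch of the math:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin ro, normalised direction rd) with a sphere of
// radius r centred at the origin; returns the nearest positive hit distance.
std::optional<double> raySphere(const Vec3& ro, const Vec3& rd, double r)
{
    const double b = dot(ro, rd);
    const double c = dot(ro, ro) - r * r;
    const double disc = b * b - c;
    if (disc < 0.0)
        return std::nullopt;           // ray misses the sphere
    const double t = -b - std::sqrt(disc);
    if (t <= 0.0)
        return std::nullopt;           // sphere is behind the ray origin
    return t;
}

// Turn the unit normal at the hit point into longitude/latitude UVs,
// which is how the planet map from pass two gets wrapped onto the sphere.
void normalToUV(const Vec3& n, double& u, double& v)
{
    const double pi = 3.14159265358979323846;
    u = 0.5 + std::atan2(n.z, n.x) / (2.0 * pi);   // longitude -> [0,1]
    v = 0.5 - std::asin(n.y) / pi;                 // latitude  -> [0,1]
}
```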
Yes, I’m sure. I do not call glUseProgram at all; I made sure of it. When I do call it, only my red light (from the shader code) shades the model; if I don’t, my red light shades the model plus the glLights and everything else. The way I test both is to compile with glUseProgram, and then compile again with glUseProgram commented out, to make sure it’s not residue from the last display call.
On the ATI, however, it won’t work that way at all. If I call glUseProgram, then only my red light works. If I don’t call glUseProgram, then my red light isn’t working at all, but the glLights will.
Makes me wonder: since NVidia is capable of doing this, they should actually make it a feature. It’s nice to be able to do both rather than one or the other. Some stuff is simpler and easier to do in fixed-function GL and some in shaders, but it seems that ATI only lets you do one but not both, unlike NVidia.
Maybe it was unintentional on NVidia’s part, but sometimes a bug can be the discovery of a new feature!
If it’s using your shader without ever calling glUseProgram, that’s definitely not supposed to happen. And if the objects rendered with your shader are really getting lit (without you having coded that into the program), that’s borderline impossible to believe.
Are you really sure it’s using your shader, but neither you, nor any framework code you might be using, is calling glUseProgram (or any other alias for it, like glUseProgramObjectARB)? Note that if you use the shader for a little while and then un-set it with glUseProgram(0), that could be undefined behavior depending on what version/profile of GL you’re using, so it might continue using the previous shader if you do further rendering after that. Basically, trying to mix fixed-function rendering and shader rendering in the same scene is an iffy proposition.
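The portable approach is to decide per draw call: bind your program for the objects that need it, then drop back to fixed-function with glUseProgram(0) before drawing the glLight-lit geometry. A rough sketch, assuming a compatibility-profile context and with placeholder draw functions:

```cpp
#include <GL/glew.h>   // or however you load GL 2.0 entry points

// drawLitModel()/drawOtherGeometry() are placeholders for whatever
// the app actually renders; they are not real library calls.
void drawLitModel();
void drawOtherGeometry();

void drawScene(GLuint program)
{
    // Objects lit by the shader: only the program's own lighting applies
    // (unless the shader reads the gl_LightSource built-ins itself).
    glUseProgram(program);
    drawLitModel();

    // Drop back to the fixed-function pipeline for the glLight-lit stuff.
    glUseProgram(0);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    drawOtherGeometry();
}
```

That’s the only combination you can rely on across vendors; anything that works without an explicit glUseProgram is driver-specific behavior.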