I’m not following. Can you give a pseudocode example?
“Merry Christmas” is anti-christian, real heathendom.
But pure reflection is still not the same. Reflection reflects everything (except, of course, point lights), even the darkest of things; it's simply a mirror. You cannot get a specular highlight with area lights using pure reflection; you need to use shininess derived from the reflection part of the refraction algorithm and drop the refraction part.
Take, for example, a white bathtub. You don't want to make it reflective. But if you set an IOR value on it and use only the reflective part of the refraction algorithm, you'll get a proper specular highlight from area lights, bright areas of the scene, and even windows, and you don't even have to be concerned with lights or the sun at all. It's purely natural.
Yeah, so do stupid headlines.
Physically they’re the same effect. The specular color and reflection color should physically be the same, but historically they’ve been separate in order to give more artistic control in non-physically-based renderers, and in scenes that didn’t have proper HDR light intensities.
Specular highlights are needed for point-lights. You can’t directly reflect point lights because a reflected ray will never hit them - they have no surface that can be intersected. So you have to add them in explicitly.
However, if you use area lights then the reflected rays will hit them and naturally produce a highlight. So you don’t need (and shouldn’t add) explicit specular highlights for area lights.
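A minimal sketch of the distinction above (Python; function names are mine, directions assumed normalized). The Phong-style term below is the "explicit" highlight you add by hand for point lights; for an area light you would instead trace the reflected ray and check whether it hits the light's geometry, which needs a full scene and is only noted in a comment here.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # reflect direction d about surface normal n
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def phong_specular(light_dir, view_dir, normal, shininess):
    # explicit highlight term for a point light: a reflected ray has zero
    # probability of hitting a dimensionless light, so this is added by hand.
    # (for an area light, you'd skip this and let the traced reflection ray
    # intersect the light's actual surface instead)
    r = reflect(tuple(-c for c in light_dir), normal)
    return max(0.0, dot(r, view_dir)) ** shininess
```

With the light, viewer, and normal all aligned, the term peaks at 1; with the light grazing at 90 degrees to the normal, it drops to 0.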
Thanks, but is it necessary to add it if you already have reflection?
Reflection reflects everything; a specular highlight only works for lights such as a light bulb or the sun. It won't work for the sky or ambient lighting, and it also won't work for light reflecting off a mirror unless you explicitly program your raytracer to handle it.
Have a hippy barfday…. no wait that’s wrong.
xmas merry have you must
antipsychotics cause permanent brain damage.
Nothing new, I already know that from experience :D
And merry Xmas.
I didn’t look :-) merry xmas
Are you trying to make it learn something, or find a solution to a problem?
If you have 2 inputs and one output, all as bytes, then the number of possible outcomes is only 256 from 65536 combinations of inputs. So the sigmoid can be very flexible; just about anything will work.
BUT, if you have 2 inputs and 2 outputs, this changes dramatically. It depends, though. If every input combination needs a different output, then your sigmoid will have to be 100% perfect (very, very, very sensitive) and you'll need more hidden layers. If two input combinations can map to either of the 2 outputs, then your sigmoid can be more flexible and more layers may not be necessary.
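To make the "combinations" point concrete, here is a hand-wired sketch (Python; the weights are hypothetical and hand-picked, not learned): XOR of two binary inputs cannot be separated by a single sigmoid unit, but falls out of one small hidden layer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(x1, x2):
    # hand-picked weights: hidden unit 1 acts like OR, hidden unit 2 like
    # NAND, and the output unit ANDs them together, which yields XOR
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # ~OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # ~NAND
    return sigmoid(20 * h1 + 20 * h2 - 30)  # ~AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b)))  # prints the XOR truth table
```

Rounding the outputs gives 0, 1, 1, 0 for the four input pairs; no single-layer choice of weights can do this, which is the sense in which "all inputs needing different outputs" forces extra layers.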
What are you trying to make it learn? or do?
ok, what about if it's just a snap on/off at the threshold, a step? I like thinking about that, like a kind of more complex all-or-nothing condition… wouldn't that suffice, as long as it was just true/false, only 1-bit input as well?
so as there are more inputs, you need more layers to then convert to output, so it doesn't overlearn itself to death.
I don't know what the hell I'm doing, you seem to know more than me, but I'd just like to add perceptrons to my animation methods, and I've got a crazy idea for a real robot too… I'll have to keep thinking about it, but I've got something more down to earth to finish first.
Thanks for your tip, I'm new to WebGL and OpenGL in general. I've been stuck in DirectX land for too long. It never occurred to me to check the OpenGL ES 2.0 specs. I checked the code, and I already display terrain in chunks of 8x8 mesh objects. Therefore each mesh uses a sub-grid of (128+1)x(128+1) vertices, well below 65536 indices (by pure coincidence). Cutting into multiple meshes was done for culling reasons. Otherwise, for now I skipped advanced terrain rendering (like the Geo Clipmaps you mentioned). PCs should probably be able to handle at least a 512x512 grid (~0.5M triangles). Not sure about handhelds.
But I will read on OpenGL ES 2.0 specs.
A 1024x1024 heightmap will create a VBO with more than 1 million vertices. The OpenGL ES 2.0 specification (which is what WebGL is based on) only allows unsigned short indices, i.e. at most 65536 addressable vertices. Since you're going well beyond that limit, you're likely to run into issues. 512x512 shouldn't work either, but you may see the first 65536 vertices before the rest gets cut off. If you're seeing more, then there's probably a driver bug that's allowing it (FYI, driver bugs are a dime a dozen).
An optimized heightmap engine will produce (width x height) vertices. This means the maximum heightmap you can support on OpenGL ES 2.0 is 256x256 = 65536 vertices. Even if you were using desktop OpenGL, you generally don't want to use integer indices because they consume twice as much video memory as shorts. In these cases, just render a set of 256x256 patches. This is also better for performance because you can quickly cull non-visible patches. A single large heightmap would be difficult to cull because it has a larger view radius and thus wastes GPU cycles processing non-visible polygons.
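The arithmetic behind those limits can be sketched as follows (Python; the helper names are mine, not any GL API). A (w x h)-vertex grid has (w-1) x (h-1) quads, two triangles each, and every vertex must be addressable by a 16-bit index.

```python
USHORT_LIMIT = 65536  # max addressable vertices with GL_UNSIGNED_SHORT indices

def vertex_count(w, h):
    # one vertex per heightmap sample
    return w * h

def index_count(w, h):
    # 6 indices per quad when drawing an indexed triangle list
    return (w - 1) * (h - 1) * 6

def fits_ushort(w, h):
    return vertex_count(w, h) <= USHORT_LIMIT

print(vertex_count(256, 256), fits_ushort(256, 256))      # 65536 True
print(vertex_count(1024, 1024), fits_ushort(1024, 1024))  # 1048576 False
```

So 256x256 lands exactly on the limit, while 1024x1024 overshoots it sixteenfold, which is why splitting into 256x256 patches is the practical route.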
You may also want to look into Geo Clipmaps. It’s a fairly easy technique to implement and gives you the quality and performance for rendering highly detailed terrain.
About loading everything in one pass, I think I handled that one.
Building terrain (grid size 1024x1024) freezes browsers - Firefox & Chrome. Terrain is loaded in chunks but still. There were no problems when I decreased terrain resolution to 512x512.
Looks like both browsers have to be restarted after failing to load 2M triangles. I wonder if these are browser limits or whether three.js has a lot of allocation overhead. Are there absolute memory allocation limits in browsers?
Easy to remember syntax? well, QBASIC? just kidding :-)
If you already started with C++, then stick with that. Otherwise, you'll need to learn Pascal, whose syntax is easy to remember, but there is a lot more to remember than syntax (function calls, etc.).
Didn’t have time to leave a “like” on your facebook, nor had the time to visit it.
The more combinations your NN has to deal with, the more hidden layers you need (at a reversed exponential rate), and the more complicated your sigmoid needs to be, because the bigger the network, the more you need to reduce your sigmoid's error margin or make it more sensitive. On a very small network, a general sigmoid works well. But on a much larger one, you need to tighten your sigmoid function to something more than a simple curve with a cutoff. It'll learn much faster and use fewer hidden layers.
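One simple way to "tighten" a sigmoid, as described above, is to give it a steepness parameter (Python sketch; the knob name `k` is mine). As `k` grows, the curve approaches the hard on/off step discussed elsewhere in this thread.

```python
import math

def sigmoid(x, k=1.0):
    # k is a steepness knob: small k gives a soft, forgiving curve;
    # large k tightens it toward a hard on/off step at x = 0
    return 1.0 / (1.0 + math.exp(-k * x))

for k in (1.0, 4.0, 16.0):
    print(k, sigmoid(0.5, k))  # same input, progressively sharper response
```

The same input of 0.5 moves from a soft ~0.62 toward nearly 1.0 as the curve is tightened, i.e. the unit's decision boundary becomes more sensitive without touching the weights.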
Like humans, it’s not the size of the brain that matters the most, it’s how the data is processed.
thanks for the reply…
I was thinking it wouldn't be able to store enough with just one hidden layer… it would overuse its connection weights.
I never really even thought the sigmoid shape would matter at all, as long as you evolved the threshold with the weights of the connections… especially if it was harsh bit input to harsh bit output.
That might be one of those ink blot things.
uh, was that a question, an observation, a complaint? :-)
I've worked with NNs and GAs for a few years, and what I can say is that bigger is not always better (ha ha). What matters is your fitness function for the GA and your sigmoid for the NN.
If your fitness function is not that good, growing your population will not work.
If your sigmoid is just the general one (http://en.wikipedia.org/wiki/Sigmoid_function), not tailored to the problem it solves or learns, making your network larger will not help.
It's very hard to make a fitness function or sigmoid that is near-perfect for a particular problem, but the closer to perfect they are, the smaller the network can be and the faster you get the expected results.
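To illustrate how much the fitness function drives a GA, here is a toy sketch (Python; everything here is hypothetical, not from any library): evolving a 16-bit string toward all ones, where the fitness function alone defines what "better" means.

```python
import random

random.seed(0)  # deterministic toy run

TARGET = [1] * 16  # toy goal: an all-ones bitstring ("OneMax")

def fitness(genome):
    # the fitness function is the whole game: it defines what "better" means;
    # a poorly chosen one stalls the search no matter the population size
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # keep the better half
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

best = evolve()
```

With a fitness signal this informative, a tiny population converges quickly; replace `fitness` with a flat or deceptive function and growing the population buys almost nothing, which is the point made above.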
great, very relaxing