evolving neural network

rouncer 104 Dec 12, 2013 at 14:03

The strange thing is, the bigger the network the worse it is, most of the time (my idea to speed up evolution is to multi-instance it and get a whole generation per frame), which is basically what you do…

It will be a GPU program, because you can take the connections into the tower and the connections in the tower itself as a single draw call, for every instance all at once.
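A rough CPU-side sketch of that idea, using NumPy batched matrix multiplies as a stand-in for the instanced draw call (population size, layer sizes, and names are made up for illustration, not taken from the actual program):

```python
import numpy as np

# Hypothetical sizes, purely for illustration.
pop_size, n_in, n_hidden, n_out = 64, 16, 32, 4

rng = np.random.default_rng(0)

# One weight set per instance in the generation.
w_in  = rng.normal(size=(pop_size, n_hidden, n_in))   # connections into the tower
w_out = rng.normal(size=(pop_size, n_out, n_hidden))  # connections within the tower, out to the result

def step(x):
    # Hard on/off threshold at zero.
    return (x > 0.0).astype(np.float32)

def evaluate_generation(inputs):
    """Evaluate every instance of the generation on the same input at once.

    inputs: shape (n_in,), shared by all instances (one frame of input).
    Returns shape (pop_size, n_out): one output vector per instance.
    """
    hidden = step(np.einsum('phi,i->ph', w_in, inputs))
    return step(np.einsum('poh,ph->po', w_out, hidden))

outputs = evaluate_generation(rng.integers(0, 2, n_in).astype(np.float32))
print(outputs.shape)  # (64, 4): the whole generation scored in one batched pass
```

On a GPU the same thing falls out of one instanced dispatch, since each instance only differs by its weight buffer.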

Would a network learn quicker if the input was just true and false, and variables only added one to a set of neurons representing the variable?

The question there is: do significant bits hurt you? Because if you use bit significance for your variables, there are fewer cells to represent them, but you can increase the size and make it less complex by having one cell per increment.

So then when you're randomizing your instances, there would be more results that obey the scoring system… Actually, keeping all the input encoding the same for the entire activity is probably all you have to do, and significance is actually allowed, but would summing be good, even though it increases input size?

Because then the motor side could just be summing cells, and if it's better, you should do it! That's all.
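A small sketch of the two input encodings being compared here, bit significance versus one cell per increment, plus a summing "motor" cell on the output side (the function names are made up for illustration):

```python
def encode_binary(value, n_bits):
    # "Significance" encoding: fewer cells, but each bit carries a different weight.
    return [(value >> i) & 1 for i in range(n_bits)]

def encode_unary(value, n_cells):
    # One cell per increment: more cells, but every cell means the same thing.
    return [1 if i < value else 0 for i in range(n_cells)]

def decode_by_summing(cells):
    # The summing motor cell: the output value is just how many cells are on.
    return sum(cells)

print(encode_binary(5, 8))                    # [1, 0, 1, 0, 0, 0, 0, 0]: 8 cells cover 0..255
print(encode_unary(5, 8))                     # [1, 1, 1, 1, 1, 0, 0, 0]: 8 cells cover 0..8
print(decode_by_summing(encode_unary(5, 8)))  # 5
```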


Alienizer 109 Dec 17, 2013 at 00:26

Uh, was that a question, an observation, or a complaint? :-)

I've worked with NNs and GAs for a few years, and what I can say is that bigger is not always better (ha ha). What matters is your fitness function for the GA and your sigmoid for the NN.

If your fitness function is not that good, growing your population will not work.

If your sigmoid is only the general one (http://en.wikipedia.org/wiki/Sigmoid_function), not tailored to the problem it solves or learns, making your network larger will not help.

It's very hard to make a fitness function or sigmoid that is near perfect for a particular problem, but the closer to perfect they are, the smaller the network can be and the faster you get the expected results.
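To make that concrete, here is a bare-bones sketch of a GA driven by a fitness function and a plain logistic sigmoid; the OR toy problem and the truncation-style selection are stand-ins for illustration, not anything from the posts above:

```python
import math
import random

def sigmoid(x):
    # The "general" logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, inputs):
    # Single neuron: weighted sum plus bias, squashed by the sigmoid.
    *w, bias = weights
    return sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + bias)

def fitness(weights, cases):
    # The fitness function drives the whole GA: here, negative squared error.
    return -sum((forward(weights, x) - y) ** 2 for x, y in cases)

# Toy problem: OR of two bits (a single sigmoid neuron can represent this).
cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(1)
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]

for generation in range(200):
    population.sort(key=lambda w: fitness(w, cases), reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of the best individuals.
    population = survivors + [
        [g + random.gauss(0, 0.3) for g in random.choice(survivors)]
        for _ in range(20)
    ]

best = max(population, key=lambda w: fitness(w, cases))
print([round(forward(best, x)) for x, _ in cases])  # should approach [0, 1, 1, 1]
```

Swapping in a sloppier fitness function (say, one that only scores one of the four cases) is the quickest way to see the point about a bigger population not saving you.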

rouncer 104 Dec 17, 2013 at 06:10

Thanks for the reply…

I was thinking it wouldn't be able to store enough with just one hidden layer… it would overuse its connection weights.

I never really thought the sigmoid shape would matter at all, as long as you evolved the threshold along with the connection weights… especially if it was hard bit input to hard bit output.

Alienizer 109 Dec 17, 2013 at 17:22

The more combinations your NN has to deal with, the more hidden layers you need (at an inverse exponential rate), and the more complicated your sigmoid needs to be, because the bigger the network, the more you need to reduce your sigmoid's error margin or make it more sensitive. On a very small network, a general sigmoid works fine. But on a much larger one, you need to tighten your sigmoid function into more than a simple curve with a cutoff. It'll learn much faster and use fewer hidden layers.
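One common way to "tighten" a sigmoid is to give it a gain (steepness) parameter; a minimal sketch, with the gain values purely illustrative:

```python
import math

def sigmoid(x, gain=1.0):
    # gain = 1 is the general logistic curve; larger gain sharpens the
    # transition, approaching a hard cutoff as gain grows.
    return 1.0 / (1.0 + math.exp(-gain * x))

for x in (-1.0, -0.1, 0.1, 1.0):
    print(x, round(sigmoid(x, gain=1.0), 3), round(sigmoid(x, gain=10.0), 3))
# With gain=1 the outputs hover near 0.5; with gain=10 the same inputs are
# pushed much closer to 0 or 1, so small differences in the weighted sum matter more.
```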

Like humans, it’s not the size of the brain that matters the most, it’s how the data is processed.

rouncer 104 Dec 22, 2013 at 12:45

OK. What about if it's just a snap on/off at the threshold, a step? Because I like thinking about it that way, as a kind of more complex condition, all pass or nothing… Wouldn't that suffice, as long as the input was also just true/false, 1-bit only?
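A minimal sketch of that snap on/off unit, with the threshold evolved alongside the weights rather than trained (all names and numbers are made up for illustration):

```python
import random

def step_neuron(weights, threshold, bits):
    # All or nothing: fire only if the weighted sum of the 1-bit inputs
    # reaches the (evolved) threshold.
    return 1 if sum(w * b for w, b in zip(weights, bits)) >= threshold else 0

def mutate(genome, rate=0.1):
    # Jitter the weights and the threshold together; they evolve as one genome.
    weights, threshold = genome
    return ([w + random.gauss(0, rate) for w in weights],
            threshold + random.gauss(0, rate))

random.seed(0)
genome = ([random.uniform(-1, 1) for _ in range(4)], random.uniform(-1, 1))
child = mutate(genome)
print(step_neuron(*child, bits=[1, 0, 1, 1]))  # prints 0 or 1, nothing in between
```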

So as there are more inputs, you need more layers to then convert to the output, so it doesn't overlearn itself to death.

I don't know what the hell I'm doing, you seem to know more than me, but I'd just like to add perceptrons to my animation methods, and I've got a crazy idea for a real robot too… I'll have to keep thinking about it, but I've got something more down-to-earth to finish first.

Alienizer 109 Dec 22, 2013 at 16:41

Are you trying to make it learn something, or find a solution to a problem?

If you have 2 inputs and one output, all as bytes, then the number of possible outcomes is only 256, from 65536 combinations of inputs. So the sigmoid can be very flexible; just about anything will work.

BUT, if you have 2 inputs and 2 outputs, this changes dramatically. It depends, though: if every input combination needs to produce a different output, then your sigmoid will have to be 100% perfect (very, very sensitive) and you'll need more hidden layers. If 2 combinations of the 2 inputs can map to any one of the 2 outputs, then your sigmoid can be more flexible and more layers may not be necessary.
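For the counting behind those two paragraphs, assuming 8-bit bytes:

```python
# Two byte inputs: 256 * 256 = 65,536 distinct input combinations.
input_combinations = 256 * 256

# One byte output: only 256 distinct values those combinations can map to.
single_output_values = 256

# Two byte outputs: 256 * 256 = 65,536 possible output pairs, so a mapping
# where every input pair needs its own output leaves no slack at all.
paired_output_values = 256 * 256

print(input_combinations, single_output_values, paired_output_values)
```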

What are you trying to make it learn, or do?