Dec 22, 2005 at 13:11

I implemented Evolutionary Neural Networks, hoping they will learn to play Othello reasonably well.

Now, I think my code is pretty bug-free :unsure: and I am currently running it to see what will happen.

Does anyone have experience with this kind of thing? What would be a reasonable population size? How many generations do I need to run before I can expect better-than-random play from my little brains?

Also, I am unsure about my method for sorting the brains by IQ. Currently I implemented the < operator and do a sort() on a list<> of brains. I realize the bool returned by the < operator is pretty random for any given pair of brains, since the matches are not deterministic. Should this pose a problem? (Playing each pair against each other is n². No good.)
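One common way around a non-deterministic < operator is to score each brain once per generation (e.g. total wins over a few matches) and sort by that cached number, so the comparison is a proper strict weak ordering. A minimal sketch, with a hypothetical `Brain` struct (the `wins` field and `rankByCachedWins` name are illustrative, not from the post):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical Brain with a cached score, filled in once per generation
// by playing a fixed set of matches before sorting.
struct Brain {
    int id = 0;
    int wins = 0;
};

// Sorting by the cached score is deterministic even though individual
// matches are noisy, so std::sort (or list::sort) is safe to use.
void rankByCachedWins(std::vector<Brain>& pop) {
    std::sort(pop.begin(), pop.end(),
              [](const Brain& a, const Brain& b) { return a.wins > b.wins; });
}
```

This also avoids the n² pairing problem: each brain plays a small fixed number of matches, and the sort itself never triggers new games.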

Thanks.

#### 5 Replies

Dec 22, 2005 at 21:34

First of all, you left out a lot of details, so the following is based on the assumption that you have an EA that changes the weights and structure of the NN according to some fitness function.

First, about training: it is rarely a good idea to train anything against random input, since you will actually push the network toward randomness. In this case good training input is hard to come by, so I would recommend that you, as a human, play the computer several times to get good training data. As for population sizes, entire books have been written on the subject and many strategies are known. The problem with EAs is that their analytical properties are very complex, and there rarely exists any good guidance other than trial and error, so my advice would be to try many settings. Perhaps try many populations in parallel and allow mutation between them. As for running time, training can take anything from 1000 to MANY generations, depending on the complexity of the network required. You might get an acceptable player within a few generations!

As a side note, why use an EA for your NN? It is common in robotics, but for player intelligence I would have assumed a gradient approach (or any other standard NN training algorithm) would be more suitable.

Dec 23, 2005 at 13:43

I could give some implementation details, I assume…

The NN is simply a map<pair<int, int>, float>, representing the connections between nodes in the network. (The template parameters being From, To and Weight.)
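For concreteness, a minimal sketch of that representation with one propagation step over the connection map; the `propagate` name and the separate activation map are my assumptions, not from the post (a single pass also works for networks with loops, since each step just pushes current activations one edge forward):

```cpp
#include <map>
#include <utility>

// (from, to) -> weight, as described above.
using Connections = std::map<std::pair<int, int>, float>;
// node id -> activation value (illustrative companion structure).
using Activations = std::map<int, float>;

// One propagation step: each active node pushes its value along its
// outgoing connections, accumulating a weighted sum at the target node.
Activations propagate(const Connections& net, const Activations& in) {
    Activations out;
    for (const auto& c : net) {
        int from = c.first.first;
        int to = c.first.second;
        auto it = in.find(from);
        if (it != in.end())
            out[to] += it->second * c.second;
    }
    return out;
}
```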

Currently I start from zero and randomly add connections and nodes, modify weights, etc. in the mutation phase.

The reason I wrote this is because it seemed like fun.

I don’t see how I could train the network manually, since I have no idea what the topology of the network should look like. I believe I need at least a few thousand connections and hundreds of middle-layer nodes in the network. Wouldn’t this take a looong time to train manually?

Currently my NNs play against each other, randomly paired. But since the same rules always apply, shouldn’t they learn from that?

I have run the evolution for approx. 1000 generations now (on my slow 266 MHz laptop) with a population of 50 NNs. They seem to play the opening game pretty well now, but they are still weak toward the end of the matches. (I play against them regularly to try them out.)

Dec 23, 2005 at 14:40

First of all, mutation that grows the network might be good, but I would apply it during the crossover phase, meaning the new part should perhaps not be completely new.

As for the size of the NN, I would say that 100 middle layers is VERY large. In practice you rarely need more than a handful of layers. As for the number of neurons, it all depends on the function; a hint is that you should strive for as small a network as possible. What is your fitness function? I would use something along the lines of fitness = error + 1/(10 + size) and let my EA minimize it.
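A minimal sketch of such a combined fitness, assuming the intent is that minimizing fitness should favor both accurate and small networks (note the size term here grows with size so that smaller networks score better; the 0.1 weight and function name are illustrative, not from the post):

```cpp
// Hypothetical fitness: playing error plus a mild penalty per connection.
// Lower is better; the EA would select individuals with minimal fitness.
float fitness(float error, int numConnections) {
    return error + 0.1f * static_cast<float>(numConnections);
}
```

Tuning the penalty weight trades accuracy against network size: too high and the EA prunes useful structure, too low and networks bloat.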

As for your data structure, wouldn’t a tree structure be better, since the network is naturally represented that way? Though it depends on whether you have a recurrent network or not.

The training set could be some live games where it plays you with the current network; after finishing, it goes over all the training matches and creates a new generation. The time consumption is huge. You could of course run random sessions, but that wouldn’t guarantee your AI player would be any good; rather it would give an expectation of being slightly better than random. The only way to train it well is to train it against someone good.

Dec 26, 2005 at 12:56

The data structure can’t be a tree, since I need 60 output nodes. (I’m selecting the legal move with the highest output in the respective output node. Maybe stupid?) I was also hoping the possible feedback created by loops in the graph would create some “memory”.
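The move selection described above can be sketched as an argmax over the legal moves only; the function name and the boolean legality mask are my assumptions (60 outputs, one per board square minus the four center squares):

```cpp
#include <vector>

// Pick the legal move whose output node fires strongest.
// 'legal' marks which output indices correspond to legal moves this turn.
// Returns -1 when there is no legal move (i.e. the player must pass).
int pickMove(const std::vector<float>& outputs, const std::vector<bool>& legal) {
    int best = -1;
    for (int i = 0; i < (int)outputs.size(); ++i) {
        if (legal[i] && (best < 0 || outputs[i] > outputs[best]))
            best = i;
    }
    return best;
}
```

Restricting the argmax to legal moves means the network never has to learn the rules themselves, only move preferences, which is a reasonable division of labor.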

The game of Othello is not really suited for a heuristic approach, as it would be if I had a simpler NN, mostly because you cannot easily determine whether a given board position favors the player or the opponent.

I have some fairly good MiniMax-based AI code for Othello. Would it be better to play my brains against that?

Lastly, would it be a better idea to play the same brain pair against each other several times (~10) to get a fairer decision of which one is better?

Dec 27, 2005 at 12:33

I do not know the application of your NN, so I will not comment on that; I will only give input about the use of complex data structures with an EA. Consider the standard EA approach:

1. selection(elitism, tournament selection … )
2. crossover
3. mutation
(4. population crossover & mutation)

With that approach, a complex data structure might not be that good, since you will be inserting elements in a way that might not be deterministic. That is, over a number of generations your inserts/removes in the structure might amount to a non-deterministic function, which in the end removes any chance of converging. How you define your chromosome is important, since in an EA the ordering matters; if you use a structure that reorders itself on every insert/remove, you may not get the result you want, and the result will be distorted.
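The point about ordering can be illustrated with a toy sketch of the crossover and mutation steps above over fixed-length float genomes, where gene positions never move, so the operators stay structurally deterministic (all names and numbers here are illustrative, not from the thread):

```cpp
#include <cstdlib>
#include <vector>

// A genome with a fixed gene ordering: index i always means the same gene,
// so crossover recombines matching genes rather than arbitrary elements.
using Genome = std::vector<float>;

// One-point crossover: first half from parent a, second half from parent b.
Genome crossover(const Genome& a, const Genome& b) {
    Genome child(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        child[i] = (i < a.size() / 2) ? a[i] : b[i];
    return child;
}

// Mutation: perturb each gene in place with probability 'rate'.
void mutate(Genome& g, float rate) {
    for (float& w : g)
        if ((float)std::rand() / RAND_MAX < rate)
            w += ((float)std::rand() / RAND_MAX - 0.5f);
}
```

A self-reordering container (like a map rekeyed on mutated values) would break the "index i means gene i" invariant that crossover relies on, which is exactly the distortion being warned about.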

Having said that, I also want to encourage the use of a data structure, but make sure it has the correct metric. And since you will be accessing elements at random, there is no search time involved, so the indexing might be redundant?!

Don’t know if it helps!
Good luck!