Funny idea for a game utilizing ANNs and backpropagation.

rouncer 103 Mar 02, 2013 at 18:05

Say your ANN is set up with its outputs controlling game mechanics (like camera movement and character animation). I'm not sure what the inputs would be, but choosing them well would be important in making it smart.

So the idea is: you sort of hand-animate a 3D game movie and backpropagate it through the neural network, to achieve some kind of natural-looking, hand-placed action.

Has anyone ever made a game this way before? It's a new kind of game-creator system (even non-programmers could use it).

You'd have to sit there hand-making the animation for ages, and train it for a while, I imagine, but it really could be worth the effort; there would be no games like it.

I could use some help deciding what outputs (and more importantly, inputs) would be important to include.
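What's being described is essentially supervised imitation learning: record (input, control) pairs from the hand-made movie and fit a network to them with backprop. A minimal sketch, with toy data standing in for the hand animation (all names and numbers here are made up):

```python
# Sketch of the idea above: train a tiny network with backprop to imitate
# hand-placed animation. The toy data below stands in for the real movie.
import numpy as np

rng = np.random.default_rng(0)

# Hand-animated "movie": input = game state (here, just a phase in [0, 1]),
# output = controls (here, two fake joint angles traced out by the animator).
t = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
targets = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

# One-hidden-layer MLP: 1 input -> 16 hidden (tanh) -> 2 outputs.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)

lr = 0.1
for step in range(2000):
    # Forward pass.
    h = np.tanh(t @ W1 + b1)
    out = h @ W2 + b2
    err = out - targets
    loss = (err ** 2).mean()          # mean-squared imitation error

    # Backpropagation of the error through both layers.
    g_out = 2 * err / len(t)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # derivative of tanh
    gW1 = t.T @ g_h; gb1 = g_h.sum(0)

    # Gradient-descent update.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final imitation loss: {loss:.4f}")
```

In a real game, the input vector would be whatever state the animator was reacting to, and the trained network would then produce controls for states the animator never placed by hand.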

4 Replies


Stainless 151 Mar 03, 2013 at 11:40

I did some research into a game creator that worked on AI.

I used genetic algorithms and hand-drawn graphics; there were two layers to the AI. The evaluation function played the game.

The trouble was that it was so slow: a single generation took 2 hours.
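For readers who haven't seen one, a generation loop in a genetic algorithm looks roughly like this (a toy sketch with a stand-in fitness function; in the game-creator case the expensive part was the evaluation function actually playing the game):

```python
# A minimal genetic-algorithm generation loop. The genome and fitness here
# are toy stand-ins; a real evaluation would play the game with each genome
# and score the result, which is why a generation could take hours.
import random

random.seed(1)
GENOME_LEN, POP, GENS = 20, 30, 40

def fitness(genome):
    # Toy fitness: count of 1-bits ("OneMax").
    return sum(genome)

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                     # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENOME_LEN)    # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(GENOME_LEN)         # one point mutation
        child[i] ^= 1
        children.append(child)
    pop = parents + children                     # parents survive (elitism)

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```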

rouncer 103 Mar 03, 2013 at 19:19

Backprop would be faster than evolution… but you'd still have to hand-place all the animation, so that's what would take a while.

Thanks for the respectful reply. I got little from gamedev.net; if you come up with something new there, you are often put down for it.

__________Smile_ 101 Mar 03, 2013 at 20:46

Backprop (and any other gradient-descent method) only works for a smooth fitness function with a single global minimum (otherwise it can get stuck in a local minimum), while genetic algorithms can be used in the more general case. I think motion control falls into the second category, so backprop can't help much.
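A toy illustration of this point (my own example, not from the thread): plain gradient descent on a non-convex function just rolls into whichever minimum is downhill from its starting point.

```python
# Gradient descent on a non-convex 1-D function: two different starting
# points converge to two different minima, only one of which is global.
def f(x):
    return x**4 - 3 * x**2 + x        # two basins: global min near x ≈ -1.3,
                                      # shallower local min near x ≈ +1.1

def grad(x):
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)    # rolls into the deeper, global minimum
right = descend(+2.0)   # rolls into the shallower, local minimum
print(left, f(left), right, f(right))
```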

TheHermit 101 Aug 13, 2013 at 06:54

You can help with multiple minima and non-smooth functions by using annealing: basically, you dynamically tune the learning rate over time, so you're not taking infinitesimal steps but instead jumping around on a sort of coarse-grained landscape to start with.
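A minimal sketch of the annealing idea, using a made-up decay schedule and a toy bumpy objective (large, noisy steps early, settling as the rate decays):

```python
# Annealed gradient descent: the step size and an added jitter both shrink
# over time, so early iterations can hop between basins and late iterations
# settle into one. Schedule and objective are toy choices for illustration.
import math, random

random.seed(3)

def f(x):
    return x**2 + 2 * math.sin(5 * x)   # bumpy bowl: several local minima

def grad(x):
    return 2 * x + 10 * math.cos(5 * x)

x = 3.0
for step in range(5000):
    lr = 0.2 * 0.999 ** step            # annealed learning rate
    noise = random.gauss(0, 1) * lr     # jitter shrinks along with lr
    x -= lr * grad(x) + noise

print(x, f(x))
```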

There's a fast evolutionary method called CMA-ES that uses the statistics of the fitness/variable correlations to get huge speedups on low-dimensional problems (N = 100 or so), which might be good enough to do this kind of thing on reasonable timescales, but you're really going to have to make sure that your problem fits into that N = 100.
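CMA-ES itself adapts a full covariance matrix and is fiddly to implement; as a hedged sketch of the same family, here is its much simpler ancestor, a (1+1) evolution strategy with 1/5th-success-rule step-size control, minimizing a toy 100-dimensional sphere function:

```python
# (1+1) evolution strategy with step-size adaptation: one parent, one
# mutated child per iteration; widen the mutation on success, narrow it on
# failure (roughly the 1/5th success rule). CMA-ES generalizes this idea by
# also adapting the full covariance of the mutation distribution.
import random

random.seed(7)
N = 100                              # the "N = 100 or so" regime from above

def fitness(x):                      # toy objective: squared distance to 0
    return sum(v * v for v in x)

x = [1.0] * N
fx = fitness(x)
sigma = 0.3                          # mutation step size, adapted online

for it in range(20000):
    child = [v + random.gauss(0, sigma) for v in x]
    fc = fitness(child)
    if fc < fx:                      # keep the child only if it improves
        x, fx = child, fc
        sigma *= 1.1                 # success: widen the search
    else:
        sigma *= 0.98                # failure: narrow it

print("final fitness:", fx)
```

A real game-creator fitness (playing the game) would dominate the cost here, exactly as in the 2-hours-per-generation experience above, so the per-iteration bookkeeping of the strategy barely matters.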