I'm thinking about the system behind the next generation of robotic learning of motor control, which I suspect will still essentially be randomization of the parts of the system that are deemed inefficient.
As I see it, the main problem with ANNs is that they don't successfully use prior experience with each new randomization.
Basically, any portion of the ANN gets rerandomized to get a different weighting of outputs, and that overwrites successful connections just as much as unsuccessful ones; the evolution only slowly weeds out the bad results.
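Roughly what I mean, as a sketch of that blind rerandomization (this is just a (1+1)-style hill climber I made up for illustration; the `fitness` function is a toy quadratic target standing in for "did the bot stay upright"):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights):
    # made-up stand-in for "how well did the bot do"
    target = np.linspace(-1.0, 1.0, weights.size)
    return -np.sum((weights - target) ** 2)

weights = rng.normal(size=20)   # the whole ANN flattened into one vector
init_score = best = fitness(weights)

for step in range(2000):
    # rerandomize everywhere: noise hits successful and unsuccessful
    # connections alike, and selection only slowly weeds out the bad tries
    candidate = weights + rng.normal(scale=0.1, size=weights.shape)
    score = fitness(candidate)
    if score > best:
        weights, best = candidate, score
```

Note how the perturbation on every step touches every weight, including the ones that are already doing their job.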
There's got to be some other system that doesn't overwrite its successes so much, and that would be more useful in a practical application with a single instance (without generations and generations of worthless instances).
That way you could have a single bot that learns on the spot how to stand, then walk, probably in that order. The supervision program would have to run more in real time to deal with the lack of virtual instancing.
I swear the secret is reuse of successful activity, through some grouping system, some activity segmentizer or something, so that only the unsuccessful segments need to be rerandomized.
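One way the segmentizer idea could look, as a hedged sketch. The weights are split into segments, only one segment gets rerandomized at a time, and a segment that stops producing improvements gets frozen. The freeze rule (`FREEZE_AFTER` consecutive failures) is a crude placeholder I invented, not a real "success" signal, and the `fitness` function is the same made-up toy target:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(weights):
    # same made-up toy target as before
    target = np.linspace(-1.0, 1.0, weights.size)
    return -np.sum((weights - target) ** 2)

n_segments, seg_len = 5, 4
weights = rng.normal(size=n_segments * seg_len)
init_score = best = fitness(weights)
fails = np.zeros(n_segments, dtype=int)  # consecutive failed tries per segment
FREEZE_AFTER = 50                        # made-up "this segment is done" rule

for step in range(2000):
    open_segs = np.flatnonzero(fails < FREEZE_AFTER)
    if open_segs.size == 0:
        break                            # every segment judged successful
    seg = rng.choice(open_segs)
    lo, hi = seg * seg_len, (seg + 1) * seg_len
    candidate = weights.copy()
    candidate[lo:hi] += rng.normal(scale=0.1, size=seg_len)  # only this slice
    score = fitness(candidate)
    if score > best:
        weights, best = candidate, score
        fails[seg] = 0                   # this segment is still improvable
    else:
        fails[seg] += 1                  # strike against this segment
```

The successful segments are never touched again, so the randomization budget all goes to the parts that are still failing.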
So the learning is still essentially random, but it's more efficient.
So, are evolved ANNs today the most successful use of randomization as a tool for learning motor control?
thinking out loud about ANNs
Posted 25 January 2013 - 01:59 PM