Neural network motion video (implementation with video)

rouncer 103 Jan 19, 2014 at 20:31

http://www.youtube.com/watch?v=3VNuPeArB9Q

I finally got my neural network to work (with a lot of noise). It's actually not as hard as subsurface scattering; I still can't even do integrals… Well, you get the initial version after a bit of mathematical theft, but it's a little plus for your intellect that you typed it in right even though you weren't sure. It's a bit of a computational nightmare, but when they (as in the Google crowd, who apparently like keeping secrets) show you stills from a neural network, the truth is they're really more purposed for animation! All you have to do is animate the numbers right, like your teacher told you at uni when you completely didn't understand. And the amazing thing is that the neurons will depart from each other during feedback, and come together during feedforward.
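The post doesn't include code, but the "come together during feedforward, depart during feedback" picture can be sketched with a single weight matrix used in both directions. Everything here (sizes, tanh activation, the transposed-weights feedback) is my assumption for illustration, not the author's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 16))  # 64 input "pixels", 16 hidden cells

def feedforward(x):
    # activity converges: many pixels come together onto fewer hidden cells
    return np.tanh(x @ W)

def feedback(h):
    # activity departs: hidden cells spread back out over all the pixels
    return np.tanh(h @ W.T)

x = rng.random(64)            # one frame of input "pixels"
h = feedforward(x)            # compressed hidden representation
x_back = feedback(h)          # noisy reconstruction of the frame
print(h.shape, x_back.shape)  # (16,) (64,)
```

Animating `x_back` over successive frames, rather than looking at stills, is the kind of playback the post is describing.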

6 Replies


Alienizer 109 Jan 19, 2014 at 23:52

Just how did you manage to do that with a NN?

rouncer 103 Jan 21, 2014 at 09:21

https://www.youtube.com/watch?v=JhyHg1BpRiE&feature=youtu.be

This video kind of explains it; you can see it working better. It just has a feature for every cell, and it similarity-matches to it by itself… so the end product is a subset of the total features.
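A rough guess at what "a feature for every cell, it similarity matches to it by itself" could mean: each cell stores one prototype feature, and an input patch activates whichever cell's feature it resembles most, so the output only ever uses a subset of the stored features. The sizes, names, and choice of cosine similarity below are all my assumptions, not taken from the video:

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.random((8, 16))   # 8 cells, each storing a 16-value feature

def best_match(patch):
    # cosine similarity between the patch and every cell's stored feature
    sims = features @ patch / (np.linalg.norm(features, axis=1)
                               * np.linalg.norm(patch) + 1e-9)
    return int(np.argmax(sims))  # index of the winning cell

patch = features[3] + rng.normal(scale=0.01, size=16)  # noisy copy of feature 3
print(best_match(patch))  # prints 3
```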

But… it's not really working properly :) I'm still working on it.

Alienizer 109 Jan 21, 2014 at 19:36

OK, it sounds and looks really cool, but are the movements ones that the NN learned, or is it learning as it goes? What prevents it from moving off screen? I guess I'm still not clear on how you get a NN to work with what's on the screen. Is every pixel an output from your NN?

rouncer 103 Jan 28, 2014 at 16:16

It's not that exciting; I'm just forming pixel coincidences, then spitting them back out. I'm hoping I get some strange interactive playback. I tell it everything it knows, but what it does with it could be the funny thing. :)

rouncer 103 Jan 28, 2014 at 16:14

(HTM, as a form of spatial compression and frame search engine)

My reason for developing this is that I'm trying to work out a way of storing a video game in synapse weights.

Take the eye, and form coincident pixel groups. Sometimes you're not going to get more than 2, because there's just no reoccurrence, but limiting it to a max of 2 is wrong as well, because later groups can only use the groups you've made here, and if there was a 3-coincidence and you didn't get it, the only way you could approximate it is by collecting the groups of 2 which included it, so there's an extra pixel taken in for the group, and it's not exclusive, as I call it at the moment. The count of possible groups of 2 is n*(n-1)/2, where n = eyex * eyey (make it roughly a cube if you want groups of 3; insane, isn't it). So I don't think we'd be collapsing in any hurry; in fact only by losing virtually all of the whole could you record any of it… especially since it would expand the same way each time.
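The group counts above can be checked directly: groups of 2 over n = eyex * eyey pixels number C(n, 2) = n*(n-1)/2, and groups of 3 number C(n, 3). A quick sketch (the 16x16 eye size is made up) shows why the post calls it a computational nightmare:

```python
from math import comb

def group_count(eye_x: int, eye_y: int, group_size: int) -> int:
    """Number of possible pixel coincidence groups of the given size."""
    n = eye_x * eye_y          # total pixels in the "eye"
    return comb(n, group_size)

# Even a tiny 16x16 eye gives an enormous number of candidate groups.
print(group_count(16, 16, 2))  # 32640 pairs
print(group_count(16, 16, 3))  # 2763520 triples
```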

Hawkins may be right when he shows you the pyramid scheme, because in it the later levels are all sparsely activated, and are all on or off, nothing in between, and he's implementing temporal compression (pooling). If it had all those things, then maybe it could reduce; but against that is the fact that however many cells are in the final region is basically how many wholes you have, in a purely spatial system with no stochastic nature (mag-tech).

Sometimes you won't get a group, but it's still important to at least grab the one; every cell is needed to get the reconstruction working, even if it's just one. (But hopefully this doesn't come up that much, because a 1-group isn't a group, it's just a pass-up.)

It's insane trying to do it; it's really hard. Trying to use a threshold activation to do it is kind of hard: all I've got now is the motion coming back with lots of false positives, and I'm not sure what's causing them. But the frames are storing separately, and are all retrievable, in the mag-tech network.
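One guess at where the false positives could come from with a plain threshold activation: in a co-occurrence (Hebbian-style) weight matrix, partial overlaps between stored frames can also clear the threshold on recall. This is only a sketch of that failure mode with made-up sizes, not the author's mag-tech network:

```python
import numpy as np

rng = np.random.default_rng(2)
frames = (rng.random((5, 32)) > 0.7).astype(float)  # 5 sparse binary "frames"
W = frames.T @ frames      # pixel co-occurrence ("coincidence") weights
np.fill_diagonal(W, 0)     # no self-connections

def recall(cue, threshold=2.0):
    # hard threshold: a pixel fires if enough co-occurring pixels support it
    return (cue @ W >= threshold).astype(float)

out = recall(frames[0])
# pixels that fire on recall but were never part of frame 0
false_pos = int(np.sum((out == 1) & (frames[0] == 0)))
print("pixels recalled:", int(out.sum()), "false positives:", false_pos)
```

The frame's own pixels reinforce each other strongly enough to come back, but pixels shared with other stored frames can drag extra activity in with them, which matches the noisy playback described above.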

I imagine once I get it working (dynamic allotting of synapse connections is MANDATORY for a realtime system; being on GPU is no exception), you'd be able to have the eye on one side and motor and functional detections on the other side, then it forms coincidences of them, and this, I guess, will give you the novel playback of the internals.

If it doesn't do anything interesting, well, you could try adding stochasticity (which is kind of the same as greyscale all the way up, checking frequencies of fires), and then I'm stuck. But I can't even get 1 hidden layer working yet, and you wouldn't want to even attempt another level when there is noise in it, because it would totally destroy it coming back out if there were enough collections.

So, as I see it, you either probabilistically compress time (Hinton) or you record sequence (Hawkins). And what's better? Hmm, yeah.

rouncer 103 Jan 28, 2014 at 17:17

https://www.youtube.com/watch?v=Lmr9xrRPX-Q