quad tree recording

rouncer 103 Jul 26, 2012 at 18:31

Just imagine storing a movie where, for every pixel, you store the next pixel that follows the current context, by looking back through time so many steps and actually starting playback at a later point.
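The recording idea above can be sketched as a plain lookup table from (position, current pixel) to the pixel that followed it in the next frame. The frame layout, the context being a single pixel, and the dict-based store are all assumptions for illustration, not any real codec:

```python
def record(frames):
    """Map (x, y, current pixel) -> next frame's pixel at (x, y).
    If the same context recurs with a different successor, the last
    write wins -- which is exactly the ambiguity discussed below."""
    table = {}
    for t in range(len(frames) - 1):
        cur, nxt = frames[t], frames[t + 1]
        for y in range(len(cur)):
            for x in range(len(cur[0])):
                table[(x, y, cur[y][x])] = nxt[y][x]
    return table

def playback(first_frame, table, steps):
    """Re-generate frames by repeatedly looking up the recorded successor."""
    frames = [first_frame]
    for _ in range(steps):
        cur = frames[-1]
        nxt = [[table.get((x, y, cur[y][x]), cur[y][x])  # unseen context: hold still
                for x in range(len(cur[0]))]
               for y in range(len(cur))]
        frames.append(nxt)
    return frames
```

With two frames recorded, `playback` reproduces the second frame from the first; the trouble starts once two different movies share a context.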

You'd have to get past the problem that only the exact context of those pixels is a valid key for restoring the previous layer of the quad tree.

That would cause the same pointer combination to come up in different movies, so any number of records could come out of that quad tree level, and playback would split into segments depending on which movie last overwrote that exact context.
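The collision can be shown with a hashed quadtree: each level-n node is identified by its 2x2 tuple of level-(n-1) node ids, so two different movies that produce the same tuple share one node, and every successor recorded against that node belongs to both of them. The class and names here are illustrative assumptions only:

```python
class HashedQuadTree:
    """Deduplicating quadtree: identical 2x2 child combinations at any
    level collapse to a single node id."""

    def __init__(self):
        self.nodes = {}    # (tl, tr, bl, br) child ids -> node id
        self.next_of = {}  # node id -> list of recorded successor node ids

    def intern(self, tl, tr, bl, br):
        """Return the shared id for this 2x2 combination, creating it once."""
        key = (tl, tr, bl, br)
        if key not in self.nodes:
            self.nodes[key] = len(self.nodes)
        return self.nodes[key]

    def record_transition(self, node, successor):
        """Append a temporal successor; a node recorded by two movies
        ends up with two successors, and playback has to choose."""
        self.next_of.setdefault(node, []).append(successor)
```

If movie A and movie B both intern the same combination and each records its own successor, `next_of` holds two entries for one node: that is the point where playback splits into segments.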

You'd need some kind of still-pixel removal, but that complicates the pointers: they'd no longer be simple 2x2 references into the previous level. They'd need extra information to make sure the still pixels still get emitted by other combinations, because the same still pixels appear in lots of different total contexts.

I guess sending moving pixels up the tree would change the total context and bring movement in from the earlier layers, but you'd have to send every ordered pixel in the combination at once just to get past the spatial destruction of the previous level… and even then it still wouldn't touch the contexts of virtually all the other pixels.

In short, it's a complete waste of time. You can get some ambiguity in the smaller levels and a kind of fuzzy 2D recognition, but the more I read about this insanely over-hyped idea, the more I think some people really can't understand theory to save their lives… yet they seem to be awfully good at putting poor theory into practice and making boastful claims about the results.

"Every possible concept is a superset of smaller concepts": that is fundamentally wrong, or at least way over-simplified, and it's just what you get when you apply the idea to pixels in a 2D space. Maybe if you gave it a voxel eye it could use the same technique for 3D recognition, but there's no way it could ever say a 2D object it saw was the same as a 3D object it saw, short of a hand-written combiner that's practically impossible to write. If you got a dot response from the 3D car and a dot response from the 2D car, there could be some extra predictions stored to make each a more solid set of digits, but then how would you ever associate them together? They are two sets of unknown data that you ID'd, and that's how they remain.

Even if you knew the exact semantics of the regions, you could have a direct pattern for the word "car", ID it, then get the 3D car, ID it, then the 2D car, ID it, and maybe get it to make one group of IDs out of the three senses with dot detection. But that's only because you told it to. So to actually teach one of these things, you'd show it a million million records, then label them all with words, and if you label two senses with the same word it gets a "concept" joining its senses. After all that, what is it going to do with this if all it is is a fuzzy ambiguity algorithm? You'd be labelling everything, because it can't even use basic intelligence to work out that two things it sees at once are the same.
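The labelling step described above amounts to nothing more than a table keyed by word: the ids from the 2D sense and the 3D sense only get associated because a human attaches the same word to both. The structure and the id strings below are assumptions, purely for illustration:

```python
from collections import defaultdict

# word -> set of sense ids that a human has labelled with that word
label_to_ids = defaultdict(set)

def label(word, sense_id):
    """Associate a recognizer's opaque id with a human-supplied word."""
    label_to_ids[word].add(sense_id)

label("car", "2d:487")   # hypothetical id from the 2D recognizer
label("car", "3d:112")   # hypothetical id from the 3D recognizer

# The two ids are grouped only because we told it to:
# label_to_ids["car"] == {"2d:487", "3d:112"}
```

Nothing in the system itself connects the two ids; remove the shared label and the "concept" disappears.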

It could be programmed that way! But that's not intelligence; it's nowhere near it. Even after all the semantics are there, you still need the common situation in time: if two things happen at once, it automatically puts them together in a group because they happened at the same time. That's robotic, and it would fail completely.

Because, you know, sometimes unrelated things happen at the same time, so it couldn't combine its senses at all. And all of this is just analysing perception, which has nothing to do with the object-oriented-like concept building that we do without even trying.

It ends up as just a recognition database with joint senses, after all the sense connections have been completely programmed into it, and then it has this behaviour playback where it imitates what it copied exactly, never deviating once. Without extra additions to the algorithm to stop multi-connections, since it can't specify which branch to take when paths diverge, it would have no choice but to follow whatever looping predictions come out of its recorded behaviour.

Without divergence protection and convergence protection, it would end up doing two incompatible things at once with two different parts of its body, like hopping on one foot whilst tapping itself on the head.
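The "divergence protection" mentioned above can be sketched as a playback loop that halts whenever a recorded context has more than one successor, instead of trying to follow both branches at once. The data shapes are assumptions for illustration:

```python
def safe_playback(start, next_of, max_steps):
    """Follow recorded transitions from `start`, stopping at any point
    where the record diverges (several successors) or dead-ends (none)."""
    path = [start]
    for _ in range(max_steps):
        successors = next_of.get(path[-1], [])
        if len(successors) != 1:  # divergence or dead end: halt playback
            break
        path.append(successors[0])
    return path
```

Given `next_of = {1: [2], 2: [3, 4]}`, playback from node 1 stops at node 2 rather than splitting into two behaviours.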

So running back on playback is very disastrous when it comes to picking a specific subset out of a superset.

And if it's that sensitive to splitting apart completely, then I don't think we run on playback, and the brain is far superior to a quad tree recorder.
