0
103 Sep 30, 2006 at 05:11

I'm thinking about writing a thinking entity. It's got to have a memory, and something that uses its memory… I think that's the main bit of it.
Because think about it, all we do is repeat back what we already know… but
there's some kind of logical linking between objects which lets us draw conclusions
from what we know… which I'm having trouble with.

Like this: it learns that meowing gets it attention.
It knows humans can open cupboards.
It knows there's food behind a cupboard door.
And it's told there's a human nearby and it's in hungry mode.
So it combines the four: "I shall meow to get attention from the human, so the door will be opened for me…"

what do you think?

#### 19 Replies

0
101 Sep 30, 2006 at 09:57

Could be implemented as a graph where each node represents a situation, and each connection is an action.

If different actions are available from the beginning, they could be tried out to see which situation each would produce, building the graph piece by piece.

Then the AI would just be a pathfinding algorithm.
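A minimal sketch of that situation-graph idea, using breadth-first search as the "pathfinding algorithm". All situation and action names here are invented for the cat scenario from the original post:

```python
from collections import deque

# Situation graph: each node is a situation, each edge is
# (action, resulting situation). Names are invented for the cat example.
GRAPH = {
    "hungry": [("meow", "has_attention")],
    "has_attention": [("wait", "cupboard_open")],
    "cupboard_open": [("eat", "fed")],
}

def plan(start, goal):
    """Breadth-first search: return the list of actions leading from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in GRAPH.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None  # goal unreachable

print(plan("hungry", "fed"))  # ['meow', 'wait', 'eat']
```

Once the graph is learned, "thinking" in this scheme really is just a path query from the current situation to a desired one.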

0
101 Oct 01, 2006 at 05:23

@geon

Could be implemented as a graph where each node represents a situation, and each connection is an action.

If different actions are available from the beginning, they could be tried out to see which situation each would produce, building the graph piece by piece.

Then the AI would just be a pathfinding algorithm.

That's called a Nondeterministic Finite Automaton (NFA). These are not Turing-complete by themselves, and are computationally rather "weak". They can be used to encode regular expressions, but certainly not thinking.

Anyhow, if you're interested in "thinking" in relational terms, using logic, you might want to look up knowledge bases such as Cyc (http://en.wikipedia.org/wiki/Cyc).

You may also want to look into automated theorem proving (http://en.wikipedia.org/wiki/Automated_theorem_proving), which is a form of "automated reasoning". It starts from a set of axioms (pre-existing knowledge) and attempts to establish the validity of a statement from what it knows. This can be used to prove things like mathematical identities, but it could also prove that a door can be opened using your hands, provided that a door can be opened by turning its handle and that your hands can turn a door handle.
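The door-and-handle example can be sketched as naive forward chaining over a small fact base. This is a hedged, minimal illustration, not a real theorem prover; all fact names are invented for this post:

```python
# Naive forward chaining over the door/handle facts described above.
# Every fact and rule here is illustrative.
facts = {"hands_can_turn_handle", "turning_handle_opens_door"}
rules = [
    # (set of premises, conclusion)
    ({"hands_can_turn_handle", "turning_handle_opens_door"},
     "hands_can_open_door"),
]

def forward_chain(known, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print("hands_can_open_door" in forward_chain(facts, rules))  # True
```

Real provers use far smarter search than this exhaustive loop, which is exactly the brute-force cost discussed below.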

The limitation of such systems, however, is that in real life you obviously don't know everything you need to know… you need to guess and try. And once a knowledge base contains a million items, it's very hard to prove things by brute force (very time-consuming).

It's interesting that our brain, because of its biological (neural) nature, functions in a very variable way. We don't really rely on hard logic; we often rely on "gut feelings", intuition, and educated guesses. My personal guess is that if "thinking" computers are ever invented, they will have to imitate some of this. I would propose that it's actually our innate capability to learn that makes us able to think the way we do… and that comes from the way our brain is constructed. However, artificial neural networks, while they have proven effective at "learning" simple tasks, have never really been used for artificial "reasoning" per se.

0
101 Oct 01, 2006 at 20:08

"Thinking" is an interesting concept, but very hard to pin down with mere logic. The cupboard-and-food example is indeed an interesting problem, but as replied above, it can be solved in this context (game AI) with simple methods. The real difficulty arises when you observe that "intelligent" behaviour is in no way optimal. The problem with intelligent machines is that you can't use optimal methods, yet the fact that we use non-optimal methods when solving problems gives us resistance to outside disturbances (flexibility vs. optimality). If we try to imitate biological systems, a second problem occurs, as in many other areas of AI: even if you have a method that applies, the actual computation becomes a heavy burden. Neural networks, for example, would probably solve some problems, but we still don't have computers powerful enough to match the human brain! A hard nut to crack… =) Sorry for the rambling… interesting topic!

0
103 Oct 02, 2006 at 11:54

I'm going to write it, for sure. I feel that if I implement something, however useless it ends up being, it will give me more ideas and bring me closer to the real thing.
Like Nyx said, it needs some kind of optimization beyond simple brute force once it starts getting more usefully knowledgeable.
In my mind, its flaws are things like this: at night a door is used for these things, and during the day a door is used for those things. You can see from that simple example that a single object in its mind can have really huge amounts of data attached to it, due to variables. You start with "door" and its next step is night and day, and when you think about writing it for real, it won't just be two things. :)
I swear, how you write the database determines how powerful it will end up being… it's a really hard one. It's not just a hierarchical system… it's
something else. If you want it to be a continuous thing, then when it learns new data it can change its memory banks completely, so the
structure must be written with learning in mind especially.
What I wonder is, at what point will something like this become useful, even though it can't compare to real thinking (us) yet?
In my imagining, if it were a continuous machine, most of its memory would
be of a temporary nature, continually reshuffling, perhaps its "environmental"
data.
But what Nyx said about understanding new concepts by guessing definitely seems difficult to systemize. Although
I'm quite happy without it, I'd love to see it in action.

0
102 Oct 02, 2006 at 18:00

There are people at universities all over the world who have dedicated their careers to such problems. I'd suggest reading through the academic literature on the subject before either reinventing the wheel or treading a path that is a proven dead end. Your posts make it sound like you suddenly had this idea to make something called "artificial intelligence" with a computer, and you are going to pursue it as if no one else has been doing so for the last 50 years.

0
101 Oct 02, 2006 at 18:43

@monjardin

There are people at universities all over the world who have dedicated their careers to such problems. I'd suggest reading through the academic literature on the subject before either reinventing the wheel or treading a path that is a proven dead end. Your posts make it sound like you suddenly had this idea to make something called "artificial intelligence" with a computer, and you are going to pursue it as if no one else has been doing so for the last 50 years.

I already suggested some reading to him. It's obvious the guy is very new to this… but I have a rule: if someone wants to try something, let them try. If they are going to fail, they will, but at least they will learn something in the process. There's nothing wrong with reinventing the wheel if you can learn something from it.

And who knows, new ideas don't always fail, so stop being so pessimistic. People in academia spend long stretches of time looking at a specific problem, but they have one major flaw: they usually consider very similar approaches to that problem. They don't try anything very novel most of the time… They are often afraid to, because if they failed, they wouldn't be able to publish an interesting paper (or so they think) and could lose some support (e.g. financial support, research grants). This is why many new technologies are invented by people in their garage… and programming is ideal for independent research: it only costs time and a computer!

0
101 Oct 02, 2006 at 20:05

There is a big difference between being negative and pointing out that these ambitions aren't new. And the fact still stands that this is NOT a programming issue; it's an issue concerning the nature of high-complexity problems. They seem to require brute-force solutions, and even though, for example, NP-complete problems can't yet be proven to require brute force, there is at least the empirical evidence of some 200 years of mathematicians trying to find a way around it. This is not negativity; it only calls for a change of focus from the attacker. You can never get around the fact that a problem is NP-complete or NP-hard, but you can always modify the requirements on the answer and devise an algorithm that fits your needs. And as you point out, engineering often comes naturally, and some great inventions are made by innocent minds! But I would not recommend that approach for AI, due to the above fact of its basic complexity. Not negative, rather rational…

0
101 Oct 03, 2006 at 04:19

@GroundKeeper

There is a big difference between being negative and pointing out that these ambitions aren't new. And the fact still stands that this is NOT a programming issue; it's an issue concerning the nature of high-complexity problems. They seem to require brute-force solutions, and even though, for example, NP-complete problems can't yet be proven to require brute force, there is at least the empirical evidence of some 200 years of mathematicians trying to find a way around it. This is not negativity; it only calls for a change of focus from the attacker. You can never get around the fact that a problem is NP-complete or NP-hard, but you can always modify the requirements on the answer and devise an algorithm that fits your needs. And as you point out, engineering often comes naturally, and some great inventions are made by innocent minds! But I would not recommend that approach for AI, due to the above fact of its basic complexity. Not negative, rather rational…

Well, this has nothing to do with NP-completeness either. Automated theorem proving can require a lot of computational time, but that's still irrelevant. What poses a problem is that automated theorem proving only works within a closed system: a logical system in which you know all the axioms, and those axioms entail all that is true. In such a system you can always prove or disprove a statement from the elementary axioms, given enough time.

Fully automated reasoning is possible within such a system. Do take a look at SHRDLU, for example.

However, the real world is not a closed system. We don't really know the "axioms" of real life. We can't know everything that exists in the world at once with perfect accuracy. In the 70s, AI researchers tried to make a computer program that would understand stories directed at infants (e.g. "Mary went to play at her friend's house", yadda yadda yadda). These stories seem very simple to us all, but their efforts failed because the program had no notion of common sense.

If you "tell" your program that Mary ate an apple, it won't know what an apple is, and it won't know what eating means. You'll have to define those. However, they can only be defined in terms of other things. What's an apple? It's a fruit. What's a fruit? It's something that grows from a plant. What's a plant? A plant is a vegetal life form. This sequence never ends, and the computer never gets a full grasp of the absolute meaning of anything being discussed.
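That definitional regress can be shown with a toy dictionary in which every word is defined only by another word; following the chain just loops and never reaches anything with intrinsic meaning. All entries are invented for illustration:

```python
# A toy dictionary: each word is defined only in terms of another word.
# Following the chain never "grounds out"; here it closes on itself.
DEFINITIONS = {
    "apple": "fruit",
    "fruit": "plant",
    "plant": "life form",
    "life form": "thing",
    "thing": "entity",
    "entity": "thing",  # the chain loops back
}

def expand(word, steps=6):
    """Follow definitions for a fixed number of steps, recording the chain."""
    chain = [word]
    for _ in range(steps):
        word = DEFINITIONS.get(word)
        if word is None:
            break
        chain.append(word)
    return chain

print(expand("apple"))
# ['apple', 'fruit', 'plant', 'life form', 'thing', 'entity', 'thing']
```

No matter how many entries you add, every definition only points at more symbols, which is the grounding problem in miniature.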

Yet, I do think this is somewhat of a programming issue. We can sit there and philosophize on the issues involved for hours… but why not sit down and try to program it? And when we run into a problem, try to find a solution… try to refactor to take new possibilities into account.

I would personally say there are four major things we need:
- Algorithms to "learn" structured information
- Algorithms to infer meaning from language and map it onto acquired structured information
- Algorithms to "reason" over the existing acquired knowledge (and potentially learn new things)
- Some kind of definition of structured information: a data structure for knowledge
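The "data structure for knowledge" item can be sketched minimally as a store of (subject, relation, object) triples with a pattern-matching query. This is only one possible representation, and every entry below is illustrative:

```python
# A tiny knowledge store of (subject, relation, object) triples,
# one possible "data structure for knowledge". Entries are illustrative.
KB = {
    ("apple", "is_a", "fruit"),
    ("fruit", "grows_on", "plant"),
    ("mary", "ate", "apple"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the fields that are not None."""
    return [
        (s, r, o)
        for (s, r, o) in KB
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

print(query(subject="apple"))  # [('apple', 'is_a', 'fruit')]
```

The learning, language, and reasoning algorithms in the list would then read and write a store like this one.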

0
101 Oct 04, 2006 at 08:20

I want to make one thing clear. My point wasn't that there is no method for reasoning. My point was that AI is limited by chaos theory and the laws of dynamical systems. To avoid ending up in a state of chaos, any system meant to simulate a "human" intelligence must overcome a much larger problem, namely the laws of computation. Imitating human behaviour is what the field of AI research actually does, and that is applicable. But expecting to build a simulation out of the algorithms mentioned would probably yield a dynamical system that gives you rubbish compared with a biological system, due to the problems with chaos in complex systems. That aside, I'm not calling AI futile; I'm merely pointing out that hard problems don't get any easier by applying a dynamic that imitates the actual system at large scale. That doesn't mean you can't use it in a limited space. State machines and the like are widely used at small scale and can provide very strong mathematical expressions. But the fact still stands that this is NOT a cellular system, which implies a simplification, which in itself implies chaos at some point.

The difference of perspectives makes this discussion very interesting! =)

0
101 Oct 04, 2006 at 18:30

There is no point in imitating a cellular system if that cellular system is only used as a platform for a higher-level reasoning system. Not that simulating the brain would be a bad idea. I just find that people commonly make the mistake of thinking that the only reason we don't have virtual brains yet is CPU power limitations. The fundamental problem is that we don't really have detailed plans of all the connections present in an infant's brain. Babies aren't born with their neurons connected in random ways; there is a structure to their brain that is genetically defined and allows it to function.

The purpose of working with symbolic systems is to avoid going down to that low level of complexity and instead focus on the cognitive aspect of reasoning. I honestly don't think simulating a human brain is going to be possible for a while. The only ways we could get one are by simulating the evolution of life on Earth, or by perfectly mimicking the way neurons work and building some sort of perfectly accurate brain-scanning device (not likely).

I would instead advocate using neural networks and other machine-learning systems inside a symbolic system. That way we could get the best of both worlds: an intelligible representation of knowledge and data, as well as learning capability.

0
101 Oct 04, 2006 at 20:52

There is one big flaw in your reasoning, and that is that given the current state of machine-learning algorithms (GAs, ANNs, decision trees and whatnot), there is a limitation inherent in the very nature of the algorithms.

I can give you an example:

Consider the dynamics of three balls (elastic or not) in a closed 2D simulation of gravity acting upon them. Given that the energy and mass can differ for each ball, this simple dynamical system will create chaos and in fact lose all meaning. It is a mathematically simple problem, but when you try to create a simulation of it you get total nonsense.

The same applies to machine-learning algorithms. You can apply them in their limited context (which in many cases is enough), but for the kind of intelligent reasoning needed for "human" thinking, machine learning is nothing more than trivia. We need tools far more "advanced" than those of today!

0
101 Oct 04, 2006 at 22:46

You've been bringing up "chaos theory" a few times in this discussion, but I still don't see what your point is. Reality isn't 100% predictable? So? That's indeed what we've been discussing. Neural networks are meant to work with partial information; that's what they do, and they do it very well…

As for simulating bouncing balls and gravity, I've done it before, and I didn't get "total nonsense" out of it… I got a simulation that visually made a lot of sense. You'll have to make your point clearer for those of us who don't read minds :P

0
167 Oct 04, 2006 at 22:49

GK, I don't think anyone is arguing that we will be able to build a human-level AI using just ANNs and the like (algorithms that exist today). Of course we will need more advanced tools. But the tools we have today, like ANNs, will become the basic components of whatever advanced tools we develop in the future.

0
101 Oct 06, 2006 at 20:38

I was suggesting that the problem is not the algorithmic approach, which I myself find very interesting; the problem in fact lies in the dynamical properties of the problem. My "spurting" of chaos theory refers to these properties. They are simply too complex to attack with the resolution of a modern computer (32, 64, 53535256152625 (random high number) bits). I would even bet it could be argued that a problem like human intelligence will require some new branch of mathematics (not a new algorithm, but a new paradigm).

0
101 Oct 06, 2006 at 21:38

@rouncer

I'm thinking about writing a thinking entity. It's got to have a memory, and something that uses its memory… I think that's the main bit of it.

Sounds like a flip-flop to me :P

0
101 Dec 08, 2006 at 00:21

I'm not getting deeply involved in this discussion, so just one thing for now. I often think about "life, the universe, time and everything" when daydreaming. Perhaps our ability to dream, or to let our mind wander without getting bogged down, is very important, and is why we are able to be artistic and creative. If we are ever to create a unique creature that can think for itself, it may need to be free of its "workings", able to wander about in mind, and with "joy". To me, mathematics is a very complex way of describing some seemingly simple resulting behaviour, but I admit it's a necessary tool. To me, creativity is sort of a random act of educated guesswork, perhaps around a domain of perceived generalizations or classifications or some other high-level ideals; it's more of a "ride" than a "task" of computation. The mechanics of how it's done is a world away from the seat of the mind, which is at the helm of the body and has control of the ride. Nevertheless, I think no piece of this puzzle should be disregarded.

I like this quote Nyx wrote:

The purpose of working with symbolic systems is to avoid going down to this low level of complexity and instead focus on the cognitive aspect of reasoning.

0
101 Dec 10, 2006 at 07:07

Anyone read the articles on the AI built for that robotic car race?

It sounds very similar in the way they approached teaching the car how to learn terrain.

0
101 Dec 10, 2006 at 10:26

The problem is not the mathematical machinery. In most biological systems shaped by evolution, the system itself is not advanced; it is complex. The word "complexity" points not to how advanced a system is but to its dynamic behaviour and properties. So if you want to simulate a complex system, you will need to deal with chaos head-on, because a complex system like intelligence, in the context of thinking entities, requires you to handle not advanced but complex dynamics. Trying to copy such a system is in practice impossible; at best you can make a system roughly the same (exponential divergence over time). A better approach is then to reduce the complexity and somehow make the simulation less complex. This can of course be done, but at this point there are only very basic attempts at describing such dynamics, and even fewer at analysing them. As the problem was posed in this thread, you would be better off trying to imitate such a system using a state machine or fuzzy logic, which doesn't spin off into chaos (random states).
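A state machine of the kind suggested here can be sketched in a few lines: explicit states and deterministic transitions, with no possibility of spinning off into random states. All state and event names below are invented for the cat scenario from the original post:

```python
# Minimal finite-state machine: explicit states and deterministic transitions.
# States, events, and transitions are invented for the cat example.
TRANSITIONS = {
    ("idle", "sees_food"): "hungry",
    ("hungry", "gets_attention"): "begging",
    ("begging", "door_opens"): "eating",
    ("eating", "food_gone"): "idle",
}

def step(state, event):
    """Advance the machine; an unknown (state, event) pair leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["sees_food", "gets_attention", "door_opens"]:
    state = step(state, event)
print(state)  # eating
```

Because every transition is enumerated up front, the behaviour is fully predictable, which is exactly the trade-off being argued for: no chaos, but also no behaviour beyond what was written in.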

0
101 Dec 16, 2006 at 00:56

Events (perceptions and actions), a way to acquire, change and remove relationships between them, and algorithms to create (search, plan, guess) chains of them. There, the "thinking AI" problem is solved. As long as you have an infinitely capable machine, that is. ;)