Thursday, October 7, 2010

Strong AI Mentioned in The Big Bang Theory & Some Personal Takes on AI

Watched the new episode of the popular American sitcom The Big Bang Theory today. Caltech engineer Howard built a robot arm that can follow instructions to unpack dinner for the group of scientist friends. After the robot arm finally managed to unpack the entire dinner in 28 minutes, Dr. Sheldon Cooper commented, "Impressive, but we must be cautious."

"Why?" Asked Howard.

"Today, it's a Chinese food retrieval robot. Tomorrow, it travels back in time and tries to kill Sarah Conner (a character in the Terminator Series)."

Then Leonard said, "I don't think that's gonna happen, Sheldon."

"No one ever does. That's why it happens."

It is interesting to see the dilemma in our outlook on intelligent robots. We want them to become real, because they can help us do so many things; they are great extensions of human capability. But we fear that if the robots become too intelligent, we will become the inferior race and be dominated.

My interest in artificial intelligence was also, to a large extent, sparked by the many fictional and cinematic incarnations of intelligent robots. I still remember the funny and sometimes emotional R2-D2 and C-3PO in Star Wars, the terrifying Terminators, and of course the Matrix. And indeed, no one knows whether such intelligent robots will one day become real and live alongside us.

In fact, my own research has rather convinced me that this dream is still far away. In my paper I need to be objective, but here I can write my own feelings. I feel it is far too bold to assert that intelligence can be achieved in just a few decades. The progress of technology is indeed explosive and exponential, but the problem is that we do not even have a clear definition of the word "intelligence". We do not even have an idea of the scale of the question. There are many unsolved problems today. For example, we still have no cure for cancer or AIDS, despite the fact that they have been known to us for so many years and have cost millions of lives. Doctors and AI researchers share the same goal: to unravel the mystery of life. But apparently the doctors are much more prudent and responsible when making predictions.

Besides, all our technological advances today are realized on top of some platform or architecture. We may be seeing exponential growth in our technical capabilities simply because we are still far from exhausting the potential of the current architecture. But if the problem we are aiming to solve is beyond the inherent capabilities of this architecture, we cannot use growth within the architecture to measure how far we are from the goal. In fact, the architecture itself may one day become the limiting factor.

The architecture I'm referring to here is the algorithmic machine architecture, and I believe intelligence is very unlikely to be algorithmic. It is like driving a car: no matter how fast I drive, if I am not on the right road, I will never reach my destination.

But I still believe Strong AI is possible, maybe because I am a reductionist. The idea I want to convey in my paper is essentially that no matter how much we improve what we have right now, it will not get us to Strong AI. A revolutionary breakthrough is needed for Strong AI to be achieved. When this breakthrough will happen, no one knows: it may be a decade, it may be a century, or it may be never.

That is why I do not favor irresponsible predictions about the future of Strong AI. It may be better to leave the idea of Strong AI to sci-fi writers and film producers, at least for now.

Tuesday, October 5, 2010

Thoughts On the Chinese Room II

I was contemplating the possibility of a person who previously knows nothing about Chinese learning Chinese just by observing the algorithmic manipulation of Chinese characters. That is, is it possible for this person to derive any meaning from the symbols if he has sufficiently long exposure to the symbol system?

This thought was in fact motivated by the question of how one learns Chinese in real life. A child is born without any language ability, but somehow learns Chinese later. Could the execution of the algorithm be a simulation of this actual learning process? If that is possible to a certain extent, then one can argue that after enough executions, the man in the room does in fact understand Chinese.

However, now I see the flaw in this assumption. A child learns Chinese by interacting with the world, not with an abstract symbol system. For example, to learn the word "apple", he not only needs to remember the physical shape of the word, but more importantly he needs to map this shape onto a whole range of concepts: the taste, the smell, the color, and so on of an actual apple. All these concepts together constitute the meaning of the word "apple". This process is missing from the Chinese Room algorithm. No matter how many times the word "apple" is shown to the man, and no matter how many examples of its use in context he is given, the link between the word and its meaning is never established, so it is not possible for him to really grasp the meaning of any symbol.
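
To make the contrast concrete, here is a toy sketch in Python (the symbols and attributes are made up purely for illustration, not taken from any actual system). In the ungrounded table, chasing the rules only ever leads to more symbols; in the grounded one, the word is tied to perceptual attributes:

    # Ungrounded, as in the Chinese Room: symbols map only to other symbols.
    symbol_rules = {
        "苹果": "水果",  # "apple" -> "fruit": just another symbol
        "水果": "食物",  # "fruit" -> "food": still inside the symbol system
    }

    # Grounded, as in a child's learning: the word is linked to experiences.
    grounded_lexicon = {
        "apple": {"taste": "sweet", "smell": "fragrant", "color": "red or green"},
    }

    def follow_rules(symbol):
        # Chasing rules never steps outside the symbol system.
        while symbol in symbol_rules:
            symbol = symbol_rules[symbol]
        return symbol

    print(follow_rules("苹果"))       # output: yet another symbol
    print(grounded_lexicon["apple"])  # attributes tied to the world

Of course, the "grounded" entries here are still just strings; the point is what they are meant to stand for: sensory experiences of real apples, which the man in the room has no access to.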

This is also why the "systems reply" is wrong. This reply argues that the entire system, consisting of the room, the man, and the algorithm, understands Chinese. Searle responded with a new thought experiment in which the man memorizes the algorithm, so that the entire system becomes the man himself. But we still cannot say that the man, as the system, understands Chinese, because again the symbols are not linked to meanings.

Wednesday, September 29, 2010

Thoughts on the Chinese Room

John Searle's Chinese Room Thought Experiment can be summarized as follows:
  1. Assume that an algorithm which perfectly imitates a native Chinese speaker does exist.
  2. In an isolated room, Searle himself manually executes the algorithm to conduct a seemingly intelligent conversation with the outside world.
  3. Since neither Searle, the room, nor the algorithm understands Chinese, we conclude that no matter how closely a symbol manipulator imitates human intelligence, it cannot have a mind.
Many replies to this thought experiment have been proposed, but none is satisfactory. Indeed, Searle has responded to most of the replies and shown that the conclusion still holds.
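
To picture step 2 concretely, here is a toy sketch in Python of what "manually executing the algorithm" amounts to (the rule book and phrases are invented for illustration; the algorithm Searle assumes would be unimaginably larger). Every step is pure symbol-to-symbol lookup; no step consults meaning:

    # A toy "Chinese Room": match the incoming symbols against the rule
    # book and copy out the prescribed reply. Nothing in this procedure
    # requires knowing what any symbol means.
    RULE_BOOK = {
        "你好": "你好！",                    # greeting -> greeting
        "你会说中文吗": "会，当然。",          # "Do you speak Chinese?" -> "Yes, of course."
        "今天天气怎么样": "今天天气很好。",    # weather question -> canned answer
    }

    def chinese_room(incoming):
        # Searle in the room: look up the rule, hand the answer back out.
        return RULE_BOOK.get(incoming, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好"))  # a fluent-looking reply, produced with zero understanding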

It just occurred to me: could the problem lie in the word "understand"? How does one learn Chinese? Could the process of executing the algorithm be, for Searle, a process of learning Chinese?

Hmm...

Thursday, September 9, 2010

Hello World!

This is my blog for thought :)

And apparently Blogger's user interface has become increasingly complicated during my absence.