Watched the new episode of the popular American sitcom The Big Bang Theory today. Caltech engineer Howard built a robot arm that can follow instructions to unpack dinner for the group of scientist friends. After the robot arm finally managed to unpack the entire dinner in 28 minutes, Dr. Sheldon Cooper commented, "Impressive, but we must be cautious."
"Why?" Asked Howard.
"Today, it's a Chinese food retrieval robot. Tomorrow, it travels back in time and tries to kill Sarah Conner (a character in the Terminator Series)."
Then Leonard said, "I don't think that's gonna happen, Sheldon."
"No one ever does. That's why it happens."
It is interesting to see the dilemma in our outlook on intelligent robots. We want them to become real, because they can help us do so many things; they are great extensions of human capability. But we fear that if the robots become too intelligent, we will become the inferior race and be dominated.
My interest in artificial intelligence was also, to a large extent, aroused by the many fictional and cinematic incarnations of intelligent robots. I still remember the funny and sometimes touching R2-D2 and C-3PO in Star Wars, the terrifying Terminators, and of course the Matrix. And indeed, no one knows whether such intelligent robots will one day become real and live alongside us.
In fact, my own research has rather convinced me that this dream is still far away. In my paper I need to be objective, but here I can write my own feelings. I feel it is far too bold to assert that machine intelligence can be achieved in just a few decades. The progress of technology is indeed explosive and exponential, but the problem is that we do not even have a clear definition of the word "intelligence". We do not even have an idea of the scale of the question. There are many unsolved problems today. For example, we still have no cure for cancer or AIDS, even though these diseases have been known to us for so many years and have cost millions of lives. Doctors and AI researchers share the same goal, to unravel the mystery of life, yet doctors are apparently much more prudent and responsible when it comes to making predictions.
Besides, all our technological advances today are realized on top of some platform or architecture. We may see exponential growth in our technical capabilities simply because we are still far from exhausting the full potential of the current architecture. But if the problem we are trying to solve lies beyond the inherent capabilities of that architecture, we cannot use the growth within it to measure how far we are from the goal. In fact, the architecture itself may one day become the limiting factor.
The architecture I am referring to here is the algorithmic machine, and intelligence is very unlikely to be algorithmic. It is like driving a car: no matter how fast I go, if I am not on the right road, I will never reach my destination.
But I still believe Strong AI is possible, maybe because I am a reductionist. The idea I want to convey in my paper is essentially this: with whatever we have right now, no matter how much we improve it, Strong AI is not going to be achieved. A revolutionary breakthrough is needed, and no one knows when that breakthrough will happen. It may be a decade, it may be a century, or it may be never.
That is why I do not favor people making irresponsible predictions about the future of Strong AI. It may be better to leave Strong AI to sci-fi writers and film producers, at least for now.
Thursday, October 7, 2010