A Few Thoughts on Artificial Intelligence

More than 60 years ago, Alan Turing started the debate about artificial intelligence when he published a paper titled "Computing Machinery and Intelligence." In it, he suggested that machines could be tested for intelligence, in a process that is now known as the Turing Test. The term "artificial intelligence," or "AI," was coined by John McCarthy a short time later and defined as "the science and engineering of making intelligent machines," or "intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity." By 1956, AI had become a distinct field of scientific study.

Predictions at that time that AI would become a reality by the year 2000 were a bit optimistic. Computers can now beat the best chess masters on the planet and do many other amazing things, but they have little of what we call "comprehension." The best computer still cannot handle a simple conversation with us in a way that makes it appear intelligent to the average human. That, by the way, is more or less the basis of the Turing test.

Turing started out with the question, "Can machines think?" "Thinking" is difficult to define in a way that all scientists could agree upon, so he changed the question to, "Are there imaginable digital computers which would do well in the [Turing test]?" One test he proposed was to have a human judge carry on a natural conversation with a computer and another human without being able to see them, while both try to "convince" the judge that he or it is human. Since this is a test of intelligence and not of the ability to sound human, today we would do this by keyboard and computer screen, perhaps in an online chat room. The computer passes the test if the judge can't reliably and consistently distinguish it from the real human.

So far, no luck. In the few tests that computers have "passed," they have done so only because the judges have been limited in topics or the type of questions they could ask, or there were other deviations from a true conversation test.
A program called ELIZA, and another called PARRY, have passed the Turing Test according to some, but this is disputed by most scientists. In the tests with PARRY, for example, 33 psychiatrists were able to identify the human from the program only about half of the time - the same as random chance would yield - but they were judging from transcripts of conversations with the program, not from their own conversations in which they could ask whatever they wanted.
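To see why a program with no comprehension at all could still fool some judges, here is a minimal ELIZA-style responder. This is a rough sketch in the spirit of Weizenbaum's program, not his original script: the rules and word lists are my own invented examples. It simply matches keywords and reflects the speaker's own words back as a question.

```python
import re

# Hypothetical first-person to second-person swaps (not ELIZA's real table).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few invented pattern/response rules; real ELIZA scripts had many more.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when nothing matches

print(respond("I feel anxious about my future"))
# -> Why do you feel anxious about your future?
```

The program never models meaning at all, which is exactly why transcripts of its output can look eerily human while a probing, free-form conversation quickly exposes it.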

As far as the evidence I have seen goes, no machine or program has been able to pass a true natural-conversation test consistently. Now even the optimistic scientists have pushed back their predictions for true AI to 2029 and beyond. Many skeptics, and those with a dualist view of the mind (believers in the idea that the mind is at least partly non-physical and separate from the brain), doubt it will happen at all.

Here's a short video I found on an anthropomimetic robot, which its creator thinks is a step toward AI:

I'm not sure what is meant by the statement that "without a body artificial intelligence cannot exist." It seems that any intelligence has to be exercised through some physical form, but there is no particular reason it has to be a body like ours (although I agree that the body itself shapes the way we think).

My own opinion is that we might see artificial intelligence in this century. But I wonder about just how consistently any computer will be able to pass that basic test of "humanness." I can easily imagine a computer that can carry on a conversation and fool people into believing it is another human. The problem is, I can also easily imagine finding questions that will reveal it as a computer.

Specifically, I would ask my conversation partners about matters that involve the balancing of values. Classic moral dilemmas come to mind. If you have time to do only one thing, do you save your own child, who is lying on the train tracks in front of an oncoming train, or save four strangers' children? A human usually has some doubts about which is the right course of action, or at least about which he or she would really take, and that would show in the conversation. Would the computer be convincing enough in its moral struggling with an issue like this? Of course it could be programmed for the classic dilemmas, but there are many other ways to approach this questioning.

How would a computer "value" things? We consider our pain, our future benefit, our stated moral beliefs, and all sorts of other factors when thinking about what we value or what we would like to see or do. These factors could not mean much to a computer, and that lack of significance seems likely to show. I suppose a computer that is designed for self-preservation at least would have that as a basis for valuing this or that idea or course of action, but that isn't quite the same. In fact, one crucial difference is obvious here: a human sometimes values some outcome more than his or her own life.

This is where we have to start questioning the validity of the Turing Test as a test of artificial intelligence. A computer, after all, even if it has what we would recognize as consciousness, would not necessarily be indistinguishable in its thinking from a human. Consider the fact that in a conversation in a chat room we might quickly tell the difference between a conservative and a liberal, or a shy and an extroverted person. Intelligence is there in people of all of these categories, even though we can create and recognize the categories. Certainly, then, a computer that has become intelligent might still be very different from humans.

At this point in history, we are still facing the original problem that Turing faced when designing his test, which is that of defining intelligence. After all, we are not testing to see if computers can do amazing things - they have passed that test many times in many ways. We are asking whether they can truly "think" about what they are doing - or about what they "should" be doing, or "could" be doing.

These questions of machine intelligence are still seriously debated in the scientific community. In 2010 the "Towards a Comprehensive Intelligence Test" symposium was held at De Montfort University in the United Kingdom to address some of them.

Here's another question to consider about AI: Is there a related possibility we might refer to as "artificial wisdom"?
