The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator judge natural-language conversations between a human and a machine designed to generate human-like responses.
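The setup can be sketched as a toy simulation. Everything below (the prompt, the canned replies, the trial count) is a hypothetical illustration, not Turing's protocol in any detail; the point is only that when the machine's responses are indistinguishable from the human's, the evaluator can do no better than chance.

```python
import random

def human_reply(prompt: str) -> str:
    return "I had toast this morning, why do you ask?"

def machine_reply(prompt: str) -> str:
    # A machine designed, as Turing proposed, to give human-like responses.
    return "I had toast this morning, why do you ask?"

def run_trial() -> bool:
    """One round: the evaluator reads two anonymous replies and guesses."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # the evaluator never sees which is which
    replies = [fn("What did you have for breakfast?") for _, fn in respondents]
    # Identical, human-like replies leave the evaluator with a coin flip.
    guess = random.choice([0, 1]) if replies[0] == replies[1] else 0
    return respondents[guess][0] == "machine"

# Over many trials the evaluator identifies the machine about half the
# time, i.e. no better than chance: the machine "passes" this toy test.
hits = sum(run_trial() for _ in range(10_000))
```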
Let’s say we create an AI; let’s say we even put it into an android shell. We program it to say “ouch!” every time it is pinched and to ooze redness when it is pricked. Is it conscious, or is it all dark inside? It could certainly be living the life of a mindless zombie. The problem of other minds is that I can’t see another’s consciousness. All each of us can do is infer, from our own conscious life, that other people are conscious too. But what if we program something so well that it perfectly mimics the life of a human being, yet remains mindless of the slings and arrows of pain and sorrow? Would it have the spark of consciousness? Would it actually feel pain when pricked? Would it attach itself to life so fiercely that it would never want it to end, much as we do? As of right now, much like Deep Blue, all that artificial intelligence is capable of creating is zombies that act on the reflexes of their programming. They may be able to quantify the beauty of Schubert in mathematical formulae, but can they feel the depths of his masterpiece Death and the Maiden?
Another problem for hard AI is Searle’s Chinese Room thought experiment. I’ll try to illustrate it with a short, concise example. Imagine a man named John enters a room decorated entirely in Chinese regalia: paintings, furniture, and books, all of which are alien to him. On the floor he sees spaces marked 1 through 6. On a table nearby there is a deck of cards, each numbered on one side from 1 to 6 and bearing a Chinese symbol on the other. He matches the cards to their corresponding numbers on the floor with the symbols facing upward. When he is done he steps back, looks at the symbols in their order, and scratches his head. The task is complete. Meanwhile, the Chinese scientists watching through a one-way mirror are laughing, because, unbeknownst to John, the symbols tell a joke that he could not decipher, since he does not understand Chinese. Here, John represents artificial intelligence’s failure to understand what the words actually mean, let alone the joke they tell. Acting mechanically, John just followed instructions and stared blankly at the cards, which is all a machine is capable of doing.
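John's rule-following can be sketched as a trivial program: symbols go where the instructions say, with no access to what they mean. The symbols below are arbitrary placeholders (not an actual joke); any opaque tokens would serve the same purpose.

```python
# John's instruction sheet: card number -> symbol. To John (and to the
# program) the right-hand side is pure syntax; the symbols are placeholders.
cards = {3: "為", 1: "馬", 6: "跑", 4: "什", 2: "路", 5: "麼"}

def arrange(cards: dict[int, str]) -> list[str]:
    """Place each symbol on its numbered floor space, in order."""
    return [cards[n] for n in sorted(cards)]

# John "completes the task" without grasping the sentence he has formed.
sentence = "".join(arrange(cards))
```

The program manipulates the symbols flawlessly, yet nothing in it could ever get the joke; that gap between symbol manipulation and understanding is Searle's point.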
The gap between object and subject, between mind and matter, creates a problem that I believe will leave us questioning and erecting clever bulwarks in order to try to explain what consciousness is. We’re like fish in water, never knowing what water is like. And when we do leave consciousness, alas, we can’t be aware of what that is like, for the very tools and measurements we judge by are silenced in the dark of night. The Turing test is a common-sense way to test an AI’s ability to think and feel as we do; however, like common sense, it has limitations. My last word on the subject: if we ever do create consciousness, I believe we will sit back and ask ourselves, “How did we ever do that?”