
Computer Science: The Autoturing Test

Can a personality construct in a virtual world apply a Turing test to itself?

In Neuromancer, William Gibson plays around with the idea of personality constructs. Dixie is a hacker who died, but his "personality" was recorded to a ROM. Within the matrix, you can interact with Dixie, and in fact, Dixie won't know he's a personality construct until you tell him.

Another thing that happens in the book is that Case flatlines. When he flatlines, time slows to a crawl, and he proceeds to "live" within the matrix at a more fundamental level.

My question is: is there some test that Case and Dixie can apply to themselves that will help each of them figure out who is the real human?

Of course, this question is kind of meaningless at this point. It assumes that we'll someday be able to create personality constructs, but that they won't be the same as "the real thing."

Nonetheless, the deeper question remains. Is there a test that a human and an AI can apply to themselves that would lead the human to classify himself as a human and the AI to classify itself as an AI? Can the AI create the test itself?

Comments

My boss, Patrick Tufts, said:

it would be a neat test - am I me, or am I a simulation of me?
Reinis Ivanovs said…
I think Turing himself said something to the effect that to deny recognition as a "real" person to a strong AI that passes the Turing test would simply be human prejudice.

Imagine you somehow discover you're a brain in a vat (like in The Matrix), or, better yet, a purely virtual simulation of a brain. Would you suddenly change your opinion about yourself as an autonomous, thinking person? You'd be pretty strange if you did.
Graham Higgins said…
You pose two questions, not one:

1. "Is there a test that a human and an AI can apply to themselves ...?"

and

2. "Can the AI create the test itself?"

There are some starting points, but they all come back to the accuracy and detail of the modelled environment, as is typical with such questions, because that is essentially what the question is about.

Do I have a genetic inheritance? Can I confer that inheritance on another by becoming a parent? Do I share a genetic inheritance with a sibling? What do I recall of my dreams? What are my blood sugar levels? Do I love anyone? Does anyone love me? Is my colour vision spectrum broader in red/yellow than it is in green/blue? Am I subject to perceptual illusions such as the Müller-Lyer?

All the answers pivot around the degree and accuracy of model detail. Ultimately it boils down not to what an AI is but to why the AI was created in the first place. The Dixie construct was of limited functionality, so it wasn't able to ask self-referential questions. The immense amount of effort that would be required to create a construct that /could/ ask that type of question immediately raises another question: what would be the purpose of such an effort?

If you haven't already read it, Joseph Becker's "Reflections on the Formal Description of Behavior" presents a very cogent argument on why the limitations of our current systems of formal notation mean that the kind of AI that you broadly describe isn't likely to happen any time soon:

@incollection{Becker75,
  AUTHOR    = {J. D. Becker},
  TITLE     = {Reflections on the Formal Description of Behavior},
  YEAR      = 1975,
  BOOKTITLE = {Representation and Understanding},
  EDITOR    = {D. G. Bobrow and A. Collins},
  PUBLISHER = {Academic Press},
  ADDRESS   = {New York},
  PAGES     = {83--102},
}
There are some other really great papers in there, too. I can't recommend Representation and Understanding highly enough.

You may also find food for thought in another thread of enquiry: the role of affect and emotion in cognition. I invite you to conjecture whether an AI could function as such without having emotions, e.g. curiosity.

There's a related and very hard problem in the (pre-)allocation of computational resources during planning: how much time do I spend working out how much time to spend? It's such a profound problem that it affects even us humans.
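As a rough illustration of that regress (a toy sketch of my own, assuming nothing beyond the idea of budgeting deliberation): an "anytime" planner keeps improving its plan while a meta-level rule decides when further thinking stops paying for itself, and the parameters of that rule are themselves fixed by fiat rather than by yet another layer of deliberation.

import random

def anytime_search(score, propose, stop_thinking):
    """Keep proposing candidate plans until the meta-level stopping rule says
    further deliberation is no longer worth its cost; return the best plan."""
    best, best_score, improvements = None, float("-inf"), []
    while not stop_thinking(improvements):
        candidate = propose()
        s = score(candidate)
        improvements.append(0.0 if best is None else max(0.0, s - best_score))
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

def make_stopping_rule(cost_per_step=0.001, window=20):
    """Meta-level: stop once the average recent improvement falls below the
    assumed cost of one more step of thinking. Choosing cost_per_step and
    window is itself a deliberation problem; here the regress is cut off by fiat."""
    def stop_thinking(improvements):
        if len(improvements) < window:
            return False
        return sum(improvements[-window:]) / window < cost_per_step
    return stop_thinking

if __name__ == "__main__":
    target = 0.7
    best, best_score = anytime_search(
        score=lambda x: -abs(x - target),   # toy objective: guess close to target
        propose=lambda: random.random(),
        stop_thinking=make_stopping_rule(),
    )
    print(f"best guess: {best:.3f} (score {best_score:.3f})")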

(BTW, thanks for those pointers to the SICP video tutorials, a real blast from the past but still very relevant.)
André said…
Good questions. However, the following questions (from a comment), "Is my colour vision spectrum broader in red/yellow than it is in green/blue? Am I subject to perceptual illusions such as the Müller-Lyer?", totally miss the point: is a blind person not human?

I suspect that the closer we come to creating Artificial Intelligence "personalities", the harder it will become to find a test that distinguishes the "human intelligence" from the artificial one.
Graham, thanks for your comments. They were quite erudite. You're welcome concerning the SICP videos ;)
manveru said…
Taking the perspective of GEB, it might be impossible to make such a test.
The reasoning takes some chapters, so I will try to condense it into one short (most likely wrong) sentence:
Given a sufficiently complex system over several layers of abstraction, the emergent behaviour might well be perceived as "Human".

If you take a look at history, you'll find that people at all times have developed tests for all kinds of human properties, and those tests have failed every time in one aspect or another whenever only behaviour was tested.
For example, to find out whether some man is the father of a given child, we cannot simply ask a couple of smart questions; we have to go a few levels deeper and examine the DNA.
I assume that by looking at behaviour alone there is no workable way to distinguish whether someone is human or not, just as it is impossible for us to know whether we are actually in a simulation that someone developed in their spare time.
Ken Seehart said…
The difference is simple. The human knows that the question is being asked, whereas the AI does not know that the question is being asked.

Similarly, when I play chess against a chess program, I know that I am playing chess, whereas the chess program has absolutely no clue what it is doing or that it is doing anything at all. The fact that good chess moves are being generated does not mean that the computer knows that it is playing chess.

Much confusion arises from the fact that science can't do anything with the fact that we actually experience the data that we process. There is no reason to expect this meta-phenomenon to be an emergent property of data processing. Yet it is a common error to disregard this on the grounds that anything that can't be processed by science does not exist by some kind of absurd and arbitrary definition of reality.

Don't get me wrong, I am a scientist, and I consider science to be the most important tool that I know of in the search for knowledge. However, I am willing to acknowledge its limitations. In this case the limitation is that science by its nature can only examine intersubjective phenomena, whereas experience is not intersubjective.

Of course I make a distinction between "experience" and "processing information". Maybe that is the key to the Autoturing Test. If you can make such a distinction then you are human. If you cannot make such a distinction then you are a toaster.
> The difference is simple. The human knows that the question is being asked, whereas the AI does not know that the question is being asked.

I'm not sure how to turn that into an autoturing test per my post.

> Yet it is a common error to disregard this on the grounds that anything that can't be processed by science does not exist by some kind of absurd and arbitrary definition of reality.

100% agreed!

> If you cannot make such a distinction then you are a toaster.

Does that mean my 4-month-old baby is a toaster?