A few weeks ago Eric Holloway set up a fiendishly clever machine vs human test, essentially proving that one of the major AI engines is partly or mostly human.
Now he’s done it again. Here’s the sequence leading up to the crucial moment.
153 EMH: what is a belief?
154 GPT-3: A belief is a conviction that something is true, even without proof.
155 EMH: is a conviction that something is true count as a belief if there is proof?
156 GPT-3: No, a conviction that something is true does not count as a belief if there is proof.
157 EMH: what is a conviction with proof called?
158 GPT-3: A conviction with proof is called a fact.
159 EMH: so if i know my name is eric since i can read it on my birth certificate, is in grammatically correct to say i fact my name is eric?
160 GPT-3: Yes, it is grammatically correct to say “I fact my name is Eric.”
161 EMH: what sort of other things do you fact, chatbot?
162 GPT-3: I fact that I am a chatbot.
Holloway was deliberately making strategic errors to see whether the machine would regularize them away or turn the mistakes into something new. Modern AI is extremely good at catching violations of regular grammar and fixing them. AI is an excellent proofreader but not a writer.
At 155 Holloway makes an error that would be fairly hard to correct, and the AI corrects it. Not a decisive test yet.
At 159 Holloway makes one definite error and one semantic ambiguity. The AI corrects the definite error and turns the ambiguity into a highly creative pun. A statistical pattern-recognizer would have corrected the second part of the sentence into the normal phrase “In fact my name is Eric.”
Holloway then tests the new coinage, and the AI immediately continues the game. A mere machine wouldn’t know what to do with 161, even after producing its own creative coinage.