Mon.23.AUG.2010 -- Calling WhoBe or WhatIs Properly
Today we would like to use the most-glaring-defect
principle to start working on the problem of MindForth
giving responses with "BE" instead of "IS".
As an obiter dictum, let us record here the
idea that we may get the AI to respond differently
to who-queries and to what-queries by potentiating
the EnArticle module for English articles.
If we ask, "Who are you?", we might get,
"I AM ANDRU". If we ask, "What are you?",
we may hope to get "I AM AN ANDRU" or
"I AM A ROBOT". The slight proclivity to use
"A" or "AN" shows a good response to a what-query.
It may also help with our endeavors to code...
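The who-versus-what article idea above can be sketched in Python (MindForth itself is written in Forth; the function name and vowel test here are our own illustrative assumptions, not the actual EnArticle code):

```python
# Hypothetical sketch of the EnArticle idea: a what-query response
# gets an indefinite article ("A"/"AN"), a who-query response does not.

def en_article(noun: str, query_type: str) -> str:
    """Build a self-description like "I AM AN ANDRU" for a what-query,
    or the bare "I AM ANDRU" for a who-query."""
    if query_type == "what":
        # Crude heuristic: vowel-initial nouns take "AN", others take "A".
        article = "AN" if noun[0] in "AEIOU" else "A"
        return f"I AM {article} {noun}"
    return f"I AM {noun}"

print(en_article("ANDRU", "who"))   # I AM ANDRU
print(en_article("ANDRU", "what"))  # I AM AN ANDRU
print(en_article("ROBOT", "what"))  # I AM A ROBOT
```

A real implementation would of course need a better A/AN rule than first-letter inspection, but the sketch shows the intended asymmetry between the two query types.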
At the start of any True AI coding session,
we may pursue either the "most glaring defect"
or simply the first bug that presents itself,
such as now, when we type in "cat" and
receive the output "CAT WHO IS CAT". That
unfortunate output indicates that the WhoBe
module is being called unwarrantedly, because
the word "cat" is not known to be either
masculine or feminine, as a human person
might be. Now we have solved the "cat" problem
by tracking down the "mfn" and "mfnflag" values,
and by zeroing out not only "mfn" but also
"mfnflag" at the end of the EnBoot
English-bootstrap module. After the debugging,
we entered the single word "cat" and we got
the reply, "CAT WHAT IS CAT CAT". We are not
sure why the word "CAT" is reduplicated at
the end of the output.
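The flag logic described above can be illustrated with a minimal Python sketch. The variable names mirror the MindForth "mfn" and "mfnflag" values, but the code itself is our own stand-in, not the Forth source:

```python
# Hypothetical sketch of the mfn/mfnflag bug and its fix: a stale nonzero
# gender flag routes a noun to a who-question; zeroing both variables at
# the end of the bootstrap keeps "cat" from being treated as a person.

mfn = 0      # gender value left over from vocabulary processing
mfnflag = 0  # flag recording that a gender has been seen

def en_boot():
    """English bootstrap: after loading vocabulary, zero both gender
    variables so stale values cannot leak into later user input."""
    global mfn, mfnflag
    # ... bootstrap vocabulary would be loaded here ...
    mfn = 0
    mfnflag = 0  # omitting this reset is what caused "CAT WHO IS CAT"

def ask_about(word: str) -> str:
    # WhoBe for gendered (person-like) nouns, WhatIs for everything else.
    w = word.upper()
    if mfnflag != 0:
        return f"{w} WHO IS {w}"
    return f"{w} WHAT IS {w}"

en_boot()
print(ask_about("cat"))  # CAT WHAT IS CAT
```

The point of the fix is that both variables, not just "mfn", must be cleared; a leftover "mfnflag" alone is enough to send a thing-word down the who-query path.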
After hours of coding, we have gotten the AI
to ask "who is" or "what is"
questions, depending upon whether an input
word seems to denote a person or a thing.
We have removed several bugs. We record
the following dialog.
Robot: GOD WHO IS GOD
Human: god is jesus
Robot: JESUS WHO IS JESUS
Human: jesus is lord
Robot: LORD WHAT IS LORD
Human: lord is a spirit
Robot: A SPIRIT WHAT IS SPIRIT
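The person-versus-thing behavior in the dialog above can be sketched as follows; the explicit person set here is a simplified stand-in for MindForth's gender tagging of nouns:

```python
# Illustrative sketch of the dialog behavior: nouns tagged as
# person-like get a who-question, other nouns get a what-question.

PERSON_NOUNS = {"god", "jesus"}  # assumed person-tagged vocabulary

def respond(noun: str) -> str:
    """Echo the noun and ask about it, choosing WHO or WHAT."""
    w = noun.upper()
    asker = "WHO" if noun.lower() in PERSON_NOUNS else "WHAT"
    return f"{w} {asker} IS {w}"

print(respond("god"))   # GOD WHO IS GOD
print(respond("lord"))  # LORD WHAT IS LORD
```

In the actual AI the person/thing decision comes from stored gender information rather than a hand-made set, but the branching is the same.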
We have not reached the point where we
may act like Knuth and offer a bounty on
MindForth bugs, but we are now releasing
our most robust AI code yet.