MindForth Programming Journal (MFPJ) 2010 August 25


Wed.25.AUG.2010 -- Coordinating InStantiate and WhoBe
A problem exists in the following man-machine
dialogue, where "a spirit" is wrongly
treated as a "who".
Robot:
Human: god
Robot: GOD WHO IS GOD
Human: god is a spirit
Robot: A SPIRIT WHO IS SPIRIT
Human:
The following report from the "En" English
lexical array does not reveal why the
word "WHO" was used at time point "t=433",
but it does show the "mfn" gender flag set
to one ("1"), for masculine, on the entries
for GOD.
t   nen act num mfn fex pos fin aud
389 100 10  0   1   100 5   100 386 to GOD
393 100 10  0   1   100 5   100 390 to GOD
397 55  6   0   0   55  7   55  394 to WHO
400 66  0   2   0   66  8   58  398 to IS
404 100 10  0   1   100 5   100 401 to GOD
408 100 12  0   1   100 5   100 405 to GOD
411 66  0   2   0   66  8   58  409 to IS
413 1   15  0   0   1   1   1   412 to A
420 109 31  1   0   109 5   109 414 to SPIRIT
422 1   15  1   0   1   1   1   421 to A
429 109 36  1   0   109 5   109 423 to SPIRIT
433 55  0   1   0   55  7   55  430 to WHO
436 66  0   2   0   66  8   58  434 to IS
443 109 0   1   0   109 5   109 437 to SPIRIT
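For reference, here is a minimal Forth sketch (not
MindForth source code) that decodes the "mfn" field
of such a report. The value one ("1") for masculine
is stated above; two ("2") for feminine follows from
the "masc or fem" comment in the code below, and
zero ("0") for neuter/ungendered is implied by the
bugfix line further down.
: .mfn ( n -- )    \ print the meaning of an mfn gender value
  DUP 0 = IF ." neuter "    THEN
  DUP 1 = IF ." masculine " THEN
      2 = IF ." feminine "  THEN ;
1 .mfn             \ prints "masculine", as on the GOD entries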
By forcing diagnostic reports on the values
of "mfn" and "mfnflag", we tracked down a bugfix
for the problem of WhoBe saying "WHO IS SPIRIT"
instead of "WHAT IS SPIRIT". In the InStantiate
module, we added one line of code, indicated
below by the "test; 25aug2010" comment.
mfn @ 1 = mfn @ 2 = OR IF                   \ masc or fem; 17aug2010
  CR ." InSt: setting mfnflag to " mfn @ .  \ diagnostic; 25aug2010
  mfn @ mfnflag !                           \ 17aug2010
THEN                                        \ 17aug2010
mfn @ 0 = IF 0 mfnflag ! THEN               \ test; 25aug2010
0 mfn !                 \ Test code applies only to En array.
0 preset !
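The line tagged "test; 25aug2010" is the one-line
bugfix: when a word such as SPIRIT carries an "mfn"
of zero, any "mfnflag" left over from a gendered
word such as GOD is cleared, so that WhoBe no
longer sees a stale masculine flag.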
The AI properly began saying "WHAT IS SPIRIT",
but it also began to leave out the word "IS"
from the output of the WhoBe module, as shown
in the dialog reproduced below.
Robot:
Human: god
Robot: GOD WHO GOD
Human: god is a spirit
Robot: SPIRIT WHAT IS SPIRIT
Human:
It turns out that the WhoBe module was
testing for a positive value of "mfn"
or "mfnflag" as a pre-condition for
saying "IS" after "WHO". Once we began
resetting the "mfnflag" to zero in the
InStantiate module, WhoBe
stopped saying "IS" -- until we
removed the IF-THEN precondition.
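Although we have not reproduced the WhoBe code
here, the removed pre-condition presumably looked
something like the following fragment inside the
WhoBe module; this is our reconstruction for
illustration, not the actual MindForth source.
mfnflag @ 0 > IF   \ conjectural: say IS only when a gender flag is set
  ."  IS "         \ speak the verb of being after WHO
THEN               \ removing this IF-THEN makes IS unconditional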

Wed.25.AUG.2010 -- Tweaking the Codebase
Now we have a problem: the user can
answer a who-query with a sentence of input,
but the AI does not regurgitate the knowledge
when the same who-query is posed again.

Thurs.26.AUG.2010 -- Changelog Entry
The changelog entry on yesterday's upload of
the MindForth AI source code to the Web reads,
"25aug10A.F retrieves knowledge gained from
human users." The AI is becoming better able
to ask a question about a new concept,
comprehend the answer by storing it with
the proper associations recorded among
concepts, and respond with the assimilated
knowledge when queried about the original
concept. Curiously, the evolving albeit
buggy AI responds differently when asked
about a new concept with "who" than with
"what", even when the same information
supplies the content of the response.
To illustrate this point, we run the AI
right now and obtain the following dialog.
Robot:
Human: god
Robot: GOD WHO IS GOD
Human: god is spirit
Robot: SPIRIT WHAT IS SPIRIT
Human: who is god
Robot: GOD IS A SPIRIT
Human: what is god
Robot: A GOD BES SPIRIT
Human:
The answers recorded above are different because
different parts of the AI Mind software respond
differently to who-queries and to what-queries.
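A minimal sketch of that divergence, in plain
Forth: the concept number 55 for WHO comes from
the En report above, while the "qword" variable,
the stub definitions, and the concept number 54
for WHAT are our own illustrative assumptions,
not MindForth identifiers.
: WhoBe  ( -- ) ." WHO-path response" ;   \ stub standing in for the real module
: WhatBe ( -- ) ." WHAT-path response" ;  \ stub; counterpart assumed here
VARIABLE qword                \ assumed: concept number of the query word
: QueryDispatch ( -- )
  qword @ 55 = IF WhoBe  THEN \ 55 = WHO, per the En report above
  qword @ 54 = IF WhatBe THEN \ 54 for WHAT is an assumed value
;
55 qword ! QueryDispatch      \ routes to the WhoBe stub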
Meanwhile we inch closer to our next AI
milestone of demonstrating self-referential
thought.
Mentifex

Reply from Albert van der Horst:
Maybe you are building a counterexample to the theorem of Archimedes in the AI field.
P.S. Paraphrased, the theorem would be: for each given goal and a minimum step size, there is a maximum to the number of steps it takes to reach the goal.
And wouldn't you rather teach your mind how to count to 5 before talking about god/GOD? Do you yourself know what/who god/GOD is?
Greetings, Albert
