The MindForth AI-Complete milestone is near as we troubleshoot one bug after another in the free AI source code written in the Win32Forth programming language. Our JavaScript Mind.html Tutorial AI lags far behind as we race to the finish line.
Most recently we solved a mystifying bug that was causing the AI to ask unnecessary questions about things that it already knew. It turned out that activation from deep concepts was not passing properly upwards to the shallow English lexicon during the generation of a sentence of thought. As a result, the AI Mind was rejecting, and detouring away from, any verb that was not sufficiently activated to serve as a valid component of a meandering chain of thought.
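The mechanism can be sketched in JavaScript, the language of our Mind.html Tutorial AI. The function names and the threshold constant here are illustrative assumptions, not actual MindForth identifiers: the point is simply that a verb whose lexical activation never receives the deep-concept activation falls below the selection threshold and gets detoured around.

```javascript
// Hypothetical sketch of activation passing from a deep Psi concept up to
// the shallow English lexicon. VERB_THRESHOLD, spreadToLexicon and
// verbIsSelectable are our own illustrative names, not MindForth code.
const VERB_THRESHOLD = 10; // assumed minimum activation for a "thinkable" verb

function spreadToLexicon(psiConcept, lexicon) {
  // Copy the deep activation onto the English lexical node for the concept.
  const lexNode = lexicon[psiConcept.id];
  lexNode.activation = psiConcept.activation;
  return lexNode;
}

function verbIsSelectable(lexNode) {
  // The bug: when activation never arrived, this test fails and the
  // chain of thought detours away from the verb.
  return lexNode.activation >= VERB_THRESHOLD;
}

// Demonstration with a hypothetical lexical entry for EAT:
const lexicon = { 58: { word: "EAT", activation: 0 } };
spreadToLexicon({ id: 58, activation: 40 }, lexicon);
```

With the upward spread in place, the verb clears the threshold; without it, the lexical node sits at zero activation and is skipped.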
The solution to the extra-questions bug has left us facing a totally different bug. The chain of AI thought is being interrupted because more than one concept has an activation high enough to win selection as the subject of a thought, with the result that thoughts are jumping inexplicably from one topic to another. At Microsoft, such behavior might be called not a bug but a feature. We won't quibble about the nomenclature but we do want to know why any given mental behavior is happening inside the MindForth artificial mind.
Oh, gee, what an easy bug! (Dare we say such a thing?) The following (edited) diagnostic output shows that the nounPhrase module, after the thinking of "CATS EAT FISH", which should be followed by "FISH EAT BUGS", is urging the selection of both FISH and GERMS. GERMS wins because of higher activation, and the AI says, "GERMS KILL CATS".
nounPhrase: aud = 340 urging psi concept #78 FISH activation = 19
nounPhrase: aud = 285 urging psi concept #80 GERMS activation = 20
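The competition visible in those two diagnostic lines can be sketched as follows (the function and field names are our own illustrative assumptions, not MindForth identifiers): whichever urged concept carries the highest activation wins selection as the subject, even when the margin is a single point.

```javascript
// Illustrative sketch: nounPhrase "urges" every candidate concept and the
// one with the highest activation is selected as the subject of thought.
function selectSubject(candidates) {
  return candidates.reduce((best, c) =>
    c.activation > best.activation ? c : best);
}

// The two candidates from the diagnostic output above:
const urged = [
  { psi: 78, word: "FISH",  activation: 19 },
  { psi: 80, word: "GERMS", activation: 20 },
];

// GERMS beats FISH by one point of activation, so the chain of
// thought jumps topics and the AI says "GERMS KILL CATS".
const winner = selectSubject(urged);
```

Because the two activations sit only one point apart, trivial differences in damping decide the topic of the next thought, which is exactly the jumpiness described above.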
To the experienced AI-Mind-attendant eye, it is fishy that FISH and GERMS are both near the psiDamp "residuum" value of twenty (20) that is left on a concept immediately after the word of the concept has served as a component in thought-generation. Obviously, the activation of these two concepts has just been lowered by the operation of the psiDamp mind-module. It may be possible to remove the bug simply by removing an unwarranted call to psiDamp. Let's comment out the call in nounPhrase. No, that's not where the problem is. We need to thoroughly analyze the whole picture of calls to psiDamp and how the AI can keep only one concept highly active at a time.
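A minimal sketch of the psiDamp behavior inferred from the diagnostics (the residuum constant and the function shape are assumptions drawn from the text, not the actual Win32Forth code): after a concept has just been used in a thought, psiDamp knocks its activation down to a fixed residuum rather than to zero, which is why two freshly-damped concepts end up nearly tied at 19 and 20.

```javascript
// Hypothetical JavaScript sketch of psiDamp: leave a residuum of 20 on a
// concept whose word has just served as a component in thought-generation.
const RESIDUUM = 20;

function psiDamp(concept) {
  concept.activation = RESIDUUM;
  return concept;
}

// A concept that was highly active during thought drops to the residuum:
const germs = psiDamp({ psi: 80, word: "GERMS", activation: 63 });
```

The residuum keeps a recently-thought concept faintly warm instead of dead, but it also manufactures the near-ties that let the wrong subject win.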
-------------- Second Session
Our Diagnostic display mode shows that too many calls are being made to the psiDamp module.
Robot: GERMS WHAT DO GERMS DO
Human: g
psiDamp called for urpsi = 59 by module ID #104 Audition
erms k
psiDamp called for urpsi = 80 by module ID #104 Audition
ill c
psiDamp called for urpsi = 81 by module ID #104 Audition
ats
psiDamp called for urpsi = 81 by module ID #104 Audition

Robot: CATS
psiDamp called for urpsi = 76 by module ID #62 verbPhrase
E
psiDamp called for urpsi = 76 by module ID #104 Audition
AT F
psiDamp called for urpsi = 77 by module ID #104 Audition
ISH
psiDamp called for urpsi = 78 by module ID #104 Audition

Robot: CATS EAT FISH
Human:
Robot: GERMS
psiDamp called for urpsi = 80 by module ID #62 verbPhrase
K
psiDamp called for urpsi = 80 by module ID #104 Audition
ILL C
psiDamp called for urpsi = 81 by module ID #104 Audition
ATS
psiDamp called for urpsi = 76 by module ID #104 Audition
Robot: GERMS KILL CATS
We may set up a general principle here that we do not want a known, reentrant concept to go to psiDamp from the generative module (i.e., nounPhrase or verbPhrase) that selects the concept, but rather from Audition when the concept is laid back down again at the forefront of memory. There are several reasons for adopting this principle. First of all, Audition is a catch-all venue through which all words thought by the AI mind must pass. If we let the generative modules just think the words and let Audition psi-damp them, then we do not have to code a psiDamp-call into each and every generative module. Secondly, this principle may be helpful in those situations where the AI "detours" into asking a question, but no human user provides the answer. The AI might then repeat the question-triggering word several times, each time spreading activation both backwards and sideways to other concepts, so that eventually some other thought is generated. It would be like human beings repeating a word several times to themselves, in order to remember something about the word.
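The principle above can be sketched as a single-call-site design. The module names follow the text (nounPhrase, Audition, and module ID #104 from the diagnostics), but the dispatch code itself is a hypothetical illustration, not the actual MindForth implementation: the generative module selects a concept without damping it, and Audition, the catch-all venue, performs the one and only psiDamp call.

```javascript
// Hypothetical sketch of the proposed principle: only Audition damps.
const RESIDUUM = 20;

function psiDamp(concept, callerId) {
  // Mimics the diagnostic line format seen in the session log above.
  console.log(`psiDamp called for urpsi = ${concept.psi} by module ID #${callerId}`);
  concept.activation = RESIDUUM;
}

function nounPhrase(concept) {
  // Selects the subject concept but deliberately does NOT call psiDamp.
  return concept;
}

function audition(concept) {
  // Catch-all venue: every word the AI thinks passes through Audition,
  // so damping here covers all generative modules with one call site.
  psiDamp(concept, 104); // #104 = Audition in the diagnostic output
  return concept;
}

// A selected concept is damped only when its word is re-deposited in memory:
const cats = audition(nounPhrase({ psi: 76, word: "CATS", activation: 63 }));
```

One call site also leaves room for the question-detour behavior described above: a repeated, undamped question word keeps spreading activation until some other thought can form.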
ATM/Mentifex
--