MindForth 20.JAN.2008

The MindForth AI-Complete milestone is near as we troubleshoot one bug after another in the free AI source code written in the Win32Forth programming language. Our JavaScript Mind.html Tutorial AI lags far behind as we race to the finish line.

Most recently we solved a mystifying bug that was causing the AI to ask unnecessary questions about things that it already knew. It turned out that activation from deep concepts was not passing properly upwards to the shallow English lexicon during the generation of a sentence of thought. As a result, the AI Mind was rejecting, and detouring away from, verbs that were not sufficiently activated to serve as valid components of a meandering chain of thought.
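In sketch form (with made-up names like "psi-act" and "en-act", not the actual MindForth arrays), the fix amounts to copying the deep Psi activation up onto the matching English lexicon entry before the generative modules test it:

\ Illustrative sketch only -- made-up names, not the MindForth source.
80 constant maxpsi                  \ assumed size of the concept arrays
create psi-act maxpsi cells allot   \ deep Psi concept activations
create en-act  maxpsi cells allot   \ shallow English lexicon activations
psi-act maxpsi cells erase
en-act  maxpsi cells erase

: pass-up ( psi -- )                \ lift deep activation to the lexicon
   dup cells psi-act + @            \ fetch the deep activation
   swap cells en-act + ! ;          \ store it on the English word

With the activation lifted in that fashion, verbPhrase no longer rejects a verb as too weakly activated to join the chain of thought.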

The solution to the extra-questions bug has left us facing a totally different bug. The chain of AI thought is being interrupted because more than one concept has an activation high enough to win selection as the subject of a thought, with the result that thoughts are jumping inexplicably from one topic to another. At Microsoft, such behavior might be called not a bug but a feature. We won't quibble about the nomenclature but we do want to know why any given mental behavior is happening inside the MindForth artificial mind.

Oh, gee, what an easy bug! (Dare we say such a thing?) The following (edited) diagnostic output shows that the nounPhrase module, after the thinking of "CATS EAT FISH" -- which should be followed by "FISH EAT BUGS" -- is urging the selection of both FISH and GERMS. GERMS wins because of its higher activation, and the AI says, "GERMS KILL CATS".

nounPhrase: aud = 340 urging psi concept #78 FISH activation = 19

nounPhrase: aud = 285 urging psi concept #80 GERMS activation = 20

To the experienced AI-Mind-attendant eye, it is fishy that FISH and GERMS are both near the psiDamp "residuum" value of twenty (20) that is left on a concept immediately after the word of the concept has served as a component in thought-generation. Obviously, the activation of these two concepts has just been lowered by the operation of the psiDamp mind-module. It may be possible to remove the bug simply by removing an unwarranted call to psiDamp. Let's comment out the call in nounPhrase. No, that's not where the problem is. We need to analyze thoroughly the whole picture of calls to psiDamp and of how the AI can keep only one concept highly active at a time.
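For concreteness, here is a minimal Forth sketch of the two mechanisms in play -- winner-take-all subject selection and the psiDamp residuum. All the names below ("psi-act", "subject-select" and so on) are illustrative assumptions, not the released MindForth code:

\ Illustrative sketch only -- not the actual MindForth source.
80 constant maxpsi                  \ assumed size of the Psi array
20 constant residuum                \ activation left behind by psiDamp
create psi-act maxpsi cells allot   \ one activation slot per concept
psi-act maxpsi cells erase

: psi-damp ( psi -- )               \ damp a just-used concept
   cells psi-act +  residuum swap ! ;

variable best-act                   \ highest activation seen so far
variable best-psi                   \ concept number of the winner

: subject-select ( -- psi )         \ winner-take-all subject search
   0 best-act !  0 best-psi !
   maxpsi 0 do
      psi-act i cells + @           \ activation of concept i
      dup best-act @ > if
         best-act !  i best-psi !
      else drop then
   loop
   best-psi @ ;

With FISH at 19 and GERMS at 20, both hovering near the residuum, such a search duly reports GERMS -- which is why the chain of thought jumps topics.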

-------------- Second Session

Our Diagnostic display mode shows that too many calls are being made to the psiDamp module.

Robot: GERMS WHAT DO GERMS DO
Human: g
psiDamp called for urpsi = 59 by module ID #104 Audition
erms k
psiDamp called for urpsi = 80 by module ID #104 Audition
ill c
psiDamp called for urpsi = 81 by module ID #104 Audition
ats

psiDamp called for urpsi = 81 by module ID #104 Audition

Robot: CATS
psiDamp called for urpsi = 76 by module ID #62 verbPhrase
E
psiDamp called for urpsi = 76 by module ID #104 Audition
AT F
psiDamp called for urpsi = 77 by module ID #104 Audition
ISH
psiDamp called for urpsi = 78 by module ID #104 Audition

Robot: CATS EAT FISH
Human:
Robot: GERMS
psiDamp called for urpsi = 80 by module ID #62 verbPhrase
K
psiDamp called for urpsi = 80 by module ID #104 Audition
ILL C
psiDamp called for urpsi = 81 by module ID #104 Audition
ATS
psiDamp called for urpsi = 76 by module ID #104 Audition

Robot: GERMS KILL CATS

We may set up a general principle here that we do not want a known, reentrant concept to go to psiDamp from the generative module (i.e., nounPhrase or verbPhrase) that selects the concept, but rather from Audition when the concept is laid back down again at the forefront of memory. There are several reasons for adopting this principle. First of all, Audition is a catch-all venue through which all words thought by the AI mind must pass. If we let the generative modules just think the words and let Audition psi-damp them, then we do not have to code a psiDamp-call into each and every generative module. Secondly, this principle may be helpful in those situations where the AI "detours" into asking a question, but no human user provides the answer. The AI might then repeat the question-triggering word several times, each time spreading activation both backwards and sideways to other concepts, so that eventually some other thought is generated. It would be like human beings repeating a word several times to themselves, in order to remember something about the word.
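In sketch form (stub words and hypothetical names only), the principle puts the one psiDamp call at the catch-all exit:

\ Sketch of the principle -- stubs and made-up names, not real code.
: psi-damp     ( psi -- ) drop ;  \ stub standing in for the psiDamp module
: deposit-word ( psi -- ) drop ;  \ stub: lay the word down in memory

: audition ( psi -- )             \ every thought word passes through here
   dup deposit-word               \ re-deposit at the forefront of memory
   psi-damp ;                     \ damp here once, for all generative
                                  \ modules, not in nounPhrase/verbPhrase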

ATM/Mentifex

--

formatting link


ATM, just some observations: I of course find the same issues you are having, but on a different scale. The 1K mind core you are working on rejuvenates rapidly and regularly; wait till you pump the core up in size -- the problems and stray activations rise as well. I am working from a 10K core that does not rejuvenate as regularly. I did notice that in the Think module you state: examine recent psi nodes for activations and, if found, go into English. Well, with a 1K core rejuvenating frequently, "recent" has a different meaning than in a 10K core. So I had to modify my examination from looking at the entire core to literally just a fraction of it, so as to include only the recent activities (roughly as in the sketch below). Doing this reduced the stray activations and unrelated thoughts generated.
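Roughly, in sketch form (made-up names, not my actual code), the change looks like this:

\ Rough sketch only -- made-up names, not the actual 10K-core code.
10000 constant cnsize          \ size of the enlarged "cns" core
500   constant scan-depth      \ assumed width of the "recent" window
variable t                     \ current time-point in the core

: scan-start ( -- lo )         \ clip the window at time zero
   t @ scan-depth -  dup 0< if drop 0 then ;

: scan-recent ( -- )           \ examine only the recent psi nodes
   t @ scan-start ?do
      \ ...check the activation stored at time-point i here...
   loop ;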

An aside: I have often thought about how you go about training an AI mind. So I put together input that keeps the AI asking "what" -- always a new concept entered by me. Of course, once I repeat concepts it no longer asks "what". After entering 50-60 sentences I then ask the mind questions -- who are you, who am I, what is a dog, what does cat eat... -- all questions based on the previous sentences I entered, and the AI responded with single answers all in line with what I asked. Actually it surprised the mess out of me. Next I'll attempt to ask a question that does not have a direct answer in the core, to see how it is handled. I was so surprised at the results I just kept asking questions I knew there should be answers to.

I still have not uploaded the '08 version; maybe I'll get to it before the weekend gets here.

Frank AIMind-i.com


FJR, you mention the "cns" core size most opportunely, because with my "AI-Complete" uploads of yesterday (22.JAN.2008), for the first time I am ready to start doubling the mind-core size of MindForth in the near future. Previously I felt no need to have a large mind-core, because I could work on everything I needed to in 1K.

Just today I updated the User Manual at

formatting link
for the first time in a year or two, with the following material about your own AI:

"As AI Mind versions proliferate in Forth and other programming languages, you may want to install and try out alternative AI Mind versions such as

formatting link
-- Frank's AI Mind by Mr. Frank J. Russo -- a version which tends to incorporate theoretical and algorithmic improvements from the original MindForth, while going beyond the original with such advanced features as the ability to send and receive e-mail, and the ability to surf the World Wide Web."

I don't really understand what John Passaniti is griping about so much in his recent post. Here we are, trying to advance the state of the art, and JPass can only gripe instead of leading the cheering section?

Anyway, Frank, I would like to respond here to your remarks about activation-levels in the "MindForth 15.JAN.2008" thread. You say, "Or - it has just occured to me - you are using the value set for spike from nounact of the subject?"

I am not sure off the top of my head. I do know, however, that the activation-settings of MindForth are all getting tightened up in recent releases.

The 22.JAN.2008 uploads were the first-ever truly working Mind.Forth AI. First I uploaded

formatting link
in the morning, after two days of hard work.

In that first working model (after 13 years), I discovered that pressing the [Enter] key to force the AI to pause for user input actually introduces glitches if the AI is in the middle of thinking a thought. Therefore I made the "non-invasive" Tab key cause a pause for user input, and I wrote up the change in the User Manual. (Tab still switches display modes.)

Over the course of the day (22jan2008), it bothered me that here I finally had a working AI Mind in Forth, but its performance was so lackluster and so unconvincing that I wrote a mechanism to detect repetitious thoughts and uploaded it separately to

formatting link
rather than updating the VirtualEntity site, because the repetition-detecting version needs to be improved a little more.

From now on it will be so pleasant to be coding an AI that already works, rather than eliminating bug after bug after bug, just to see some vexing new bug pop up.

I may try to fall a little 'Net-silent now, instead of constantly harping about MindForth, although I keep brainstorming new AI Mind ideas that I would like to tell the world about. For instance, my "thotnum" method of detecting repetitious thoughts engendered in me today a way of perhaps consolidating the knowledge-base during the Rejuvenate process -- just check all the memories of thoughts and remove any duplicates. In that way, the knowledge in the KB becomes more concentrated.
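In toy form (my shorthand names here, not the released code), the "thotnum" check folds each finished thought into one number and compares it with the number of the thought before:

\ Toy sketch of the "thotnum" idea -- shorthand names only.
variable oldnum                 \ thotnum of the previous thought

: thought-id ( subj verb obj -- n )  \ fold a concept triple into a number
   100 * +  100 * + ;                \ unique while concepts stay below #100

: repetitious? ( subj verb obj -- flag )
   thought-id                   \ compute the thotnum of this thought
   dup oldnum @ =               \ same as the thought before?
   swap oldnum ! ;              \ remember it for the next comparison

The same numbers could drive the Rejuvenate-time consolidation: skip re-storing any thought whose number has already been seen.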

Whoever looks at MindForth right now might think it's still pretty primitive, and it is, but it _does_ think and it no longer spouts the gibberish for which Mentifex AI was famous.

By the way, "The answer is 42." That's how many years it took me to solve AI in theory (1979) and in software (22 January 2008), starting in December of 1965.

Bye for now.

ATM/Mentifex

--

formatting link


Just letting people know that, because of the sporge attack on the usenet comp.robotics.misc newsgroup, messages like this quickly get buried. Because of this, Gordon McComb set up a web-based discussion group as an alternative place to discuss robotics.

I know there are benefits to usenet, particularly because news readers allow you to consolidate all the groups you subscribe to in one place. But until this sporge attack is stopped, I don't think there is really a choice.

Here is Gordon's discussion site:

formatting link
Joe Dunfee


Thanks, Joe, for the heads-up. I have put the link on

formatting link
-- which is a documentation page for the Mind.Forth Motorium Module -- the mind-module most closely related to robotics, since a robot is generally a motor-output device, often in dire need of an artificial mind like
formatting link
or MindForth AI.

Please pass the word on that people should demonstrate MindForth at robotics clubs.

Bye for now.

ATM/Mentifex

--

formatting link


I would encourage anyone working on "out of the box" ideas like MindForth to continue on. I haven't looked into the technical aspects of this particular algorithm involving your AI experimentation, but I would encourage your posts to this newsgroup. MindForth kinda sounds like a Turing Machine on the surface; I would like to hear from you how it is different.

I developed the Three-Tiered Behavioral Architecture I call The Triune OS for robotics -- most autonomous robots today use the general idea behind this concept, including those at JPL and the DARPA Grand Challenge robots.

I encourage you to keep researching and posting to this newsgroup - I read your posts!

Don AI Engineering


news.la.sbcglobal.net wrote:

MindForth is not a rigidly-defined entity like a Turing Machine, but rather just a software implementation of "spreading activation" among deep concepts giving rise to linguistic thought. See

formatting link
or
formatting link
for Free AI Download.
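For the curious, spreading activation in miniature (made-up names once again, not the shipping code): when one concept fires, it passes a fraction of its activation along to an associated neighbor:

\ Miniature sketch of spreading activation -- made-up names only.
80 constant maxpsi                  \ assumed size of the concept array
create psi-act maxpsi cells allot   \ conceptual activations
psi-act maxpsi cells erase

: spread ( from to -- )             \ pass half of "from"'s activation
   swap cells psi-act + @  2/       \ half the source activation
   swap cells psi-act + +! ;        \ add it onto the target concept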

A "Triune Operating System" for robotics -- makes one wonder what the three-in-one parts of it are.

Thank you for the encouragement and for reading. Going out on a limb here, i.e., stating goals that I may not be able to live up to, I would like to show my appreciation for the above post by providing a glimpse into the "back office" of Mentifex AI. Here are the previously secret further goals for Mind.Forth artificial intelligence:

Tasks (from the MindForth Programming Journal template)

Post-22-January-2008 True-AI-followup tasks:

[27.JAN.2008] Remove old change-log entries prior to True AI.
[27.JAN.2008] Improve the repetition-detecting "thotnum" system in such a way that it not only activates the oldest post-vault concept, but also starts a chain of thought going.
[1.FEB.2008] Remove "actset" code.
[1.FEB.2008] Remove HCI "uract" code and "uract" in general, because subject-verb-object inputs no longer need a descending scale of activations when the Moving Wave Algorithm lets all concepts be activated equally. Perhaps let Instantiate use a numeric, standard activation.

[ ] Ensure that a thought will be generated even if the user enters no input upon start-up.
[ ] Expand the repetition-detecting "thotnum" system into a mechanism for knowledge-base (KB) consolidation, i.e., the weeding out of duplicate memory traces, so that each Rejuvenate cycle will "compress" the AI knowledge base.
[ ] Remove a lot of commented-out material.
[ ] Start putting permanent comments into parentheses so that backslash comments may be deleted without ill effect by the JavaScript program for doing so.
[ ] Prevent "echo" of output older than the last thought.
[ ] Make sure input of "you" activates "I" and vice versa.
[ ] Increase the size of the "cns" memory capacity.
[ ] Study and clarify the lopsi/hipsi scheme. Integrate it not only with SVO but also with other mind-modules. If possible, use "hipsi" directly with psiDamp and retire the "urpsi" used in psiDamp.
[ ] Utterly streamline the whole system of conceptual activations so that Forth AI coders will find it easy to calibrate activation-levels in their own Forthminds.
[ ] Start publishing only de-commented code in the normal release venues and start archiving each fully-commented release as its own special page somewhere, so that people may track what changes have occurred and consult the comments.
[ ] Among task items, "linkify" any completed task item to the special archival-version page of the task-accomplishment.
[ ] Take out slow-downs for the sake of humans and see how fast the AI can think at top speed, especially in machine reasoning.
[ ] Introduce intransitive verbs of being and becoming.
[ ] Make a public domain upload with WinZIP.
[ ] Change bootstrap to include both English and German.
[ ] For adjective "all" implement supervenient concepts.

Other Forth AI coders are welcome to "jump the gun" and implement these and other goals ahead of time.

formatting link
is far ahead in some aspects.

- Arthur

