Standards in Artificial Intelligence

A webpage of proposed Standards in Artificial Intelligence is at

formatting link
-- updated today.

Reply to
Arthur T. Murray

Besides the fact that your notices have nothing to do with C++, you should stop posting them here because you are a crank. You claim to have a "theory of mind", but fail to recognize two important criteria for a successful theory: explanation and prediction. That is, a good theory should *explain observed phenomena* and *predict non-trivial phenomena*. From what I have skimmed of your "theory", it does neither (though I suppose you think it does well by way of explanation).

In one section, you define a core set of concepts (like 'true', 'false', etc.) and give them numerical indexes. Then you invite programmers to add to this core by using indexes above a suitable threshold, as if we were defining ports on a server. When I saw this, and many other things on your site, I laughed. This is such a naive and simplistic view of intelligence that you surely cannot expect to be taken seriously.
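To make the point concrete, here is roughly what that scheme amounts to in C++ (my own reconstruction, not code from the site; every name and number below is hypothetical):

    #include <cstdint>
    #include <map>
    #include <string>

    // Reconstruction of the "numeric concept index" scheme: a fixed core
    // of concepts with magic numbers, plus a reserved threshold above
    // which volunteers are invited to register their own -- IANA-style.
    enum CoreConcept : std::uint32_t {
        CONCEPT_TRUE   = 1,
        CONCEPT_FALSE  = 2,
        CORE_THRESHOLD = 1024   // user-defined concepts go above this
    };

    std::map<std::uint32_t, std::string> conceptTable = {
        {CONCEPT_TRUE,  "true"},
        {CONCEPT_FALSE, "false"},
    };

    // "Register" a new concept, exactly as if it were a server port.
    bool addConcept(std::uint32_t index, const std::string& name) {
        if (index <= CORE_THRESHOLD) return false;  // reserved range
        conceptTable[index] = name;
        return true;
    }

    int main() {
        addConcept(2001, "teapot");      // accepted: above the threshold
        addConcept(3, "enlightenment");  // rejected: reserved core range
    }

A port number works for sockets precisely because it doesn't have to mean anything; an integer index does no semantic work whatsoever, which is the problem.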

I dare say one of the most advanced AI projects in existence is Cog. The philosophy behind Cog is that an AI needs a body. You say more or less the same thing. However, the second part of the philosophy behind Cog is that a simple working robot is infinitely better than an imaginary non-working robot. That's the part you've missed. Cog is designed by some of the field's brightest engineers, and funded by one of the last strongholds of AI research. And as far as success goes, Cog is a child among children. You expect to create a fully developed adult intelligence from scratch, entirely in software, using nothing more than the volunteer labor of gullible programmers and your own musings. This is pure comedy.

At one point, you address programmers who might have access to a 64-bit architecture. Pardon me, but given things like the "Hard Problem of Consciousness", the size of some programmer's hardware is completely irrelevant. These kinds of musings are forgivable when coming from an idealistic young high school student who is just learning about AI for the first time. But the prolific nature of the work implies that you have been at this for quite some time.

Until such time as you can A) show that your theory predicts an intelligence phenomenon that is both novel and later confirmed by experiment or observation of neurological patients, or B) produce an artifact that is at least as intelligent as current projects, I must conclude that your "fibre theory" is just so much wishful rambling.

The level of detail you provide clearly shows that you have no real understanding of what it takes to build a successful AI, let alone something that can even compete with the state of the art. The parts that you think are detailed, such as your cute ASCII diagrams, gloss over circuits that researchers have spent their entire lives studying, which you leave as "an exercise for the programmer". This is not only ludicrous, but insulting to the work being done by legitimate researchers, not to mention it insults the intelligence of anyone expected to buy your "theory".

Like many cranks and crackpots, you recognize that you need to insert a few scholarly references here and there to add an air of legitimacy to your flights of fancy. However, a close inspection of your links shows that you almost certainly have not read and understood most of them; otherwise A) you would provide links *into* the sites rather than *to* the sites (proper bibliographies don't say "Joe mentioned this in the book he published in '92" and leave it at that), and B) you wouldn't focus on the irrelevant details you do.

A simple comparison of your model with something a little more respectable, such as the ACT-R program at Carnegie-Mellon, shows stark contrasts. Whereas your "model" is a big set of ASCII diagrams and some aimless wanderings on whatever pops into your head when you're at the keyboard, the "models" link (note the plural) on the ACT-R page takes you to what...? To a bibliography of papers, each of which addresses some REAL PROBLEM and proposes a DETAILED MODEL to explain the brain's solution for it. Your model doesn't address any real problems, because it's too vague to actually be realized.

And that brings us to the final point. Your model has components, but the components are at the wrong level of detail. You recognize the obvious fact that the sensory modalities must be handled by specialized hardware, but then you seem to think that the rest of the brain is a "tabula rasa". To see why that is utterly wrong, you should take a look at Pinker's latest text by the same name (The Blank Slate). The reason the ACT-R model is a *collection* of models, rather than a single model, is very simple. All of the best research indicates that the brain is not a general-purpose computer, but rather a collection of special-purpose devices, each of which by itself probably cannot be called "intelligent".

Thus, to understand human cognition, it is necessary to understand the processes whereby the brain solves a *PARTICULAR* problem, and not how it might operate on a global scale. The point being that the byzantine nature of the brain might not make analysis on a global scale a useful or fruitful avenue of research. And indeed, trying to read someone's mind by looking at an MRI or EEG is like trying to predict the stock market by looking at the arrangement of rocks on the beach.

Until you can provide a single model of the precision and quality of current cognitive science models, for a concrete problem which can be tested and measured, I must conclude that you are a crackpot of the highest order. Don't waste further bandwidth in this newsgroup or others with your announcements until you revise your model to something that can be taken seriously (read: explains observed phenomena and makes novel predictions).

Dave

Reply to
David B. Held

"David B. Held" wrote on Wed, 10 Sep 2003:

formatting link
-- yes.

formatting link
explains that Newconcept calls the English vocabulary (enVocab) module to form an English lexical node for any new word detected by the Audition module in the stream of user input.
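In rough C++ terms, the described flow looks something like the following sketch (hypothetical signatures only, not the actual Mentifex code):

    #include <iostream>
    #include <map>
    #include <string>

    // Sketch of the described flow: Audition detects a new word in the
    // stream of user input; Newconcept calls the English vocabulary
    // (enVocab) module to form an English lexical node for the word.
    // All names and signatures here are illustrative assumptions.
    std::map<std::string, int> enVocabNodes;  // word -> lexical node id
    int nextNodeId = 1;

    int enVocab(const std::string& word) {
        // Form an English lexical node for the new word.
        enVocabNodes[word] = nextNodeId;
        return nextNodeId++;
    }

    void newconcept(const std::string& word) {
        // Newconcept delegates node creation to enVocab.
        enVocab(word);
    }

    void audition(const std::string& word) {
        // Audition passes along any word not yet in the vocabulary.
        if (enVocabNodes.find(word) == enVocabNodes.end())
            newconcept(word);
    }

    int main() {
        audition("standards");
        audition("standards");  // second sighting forms no new node
        std::cout << enVocabNodes["standards"] << "\n";  // prints 1
    }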

formatting link
(q.v.) explains that what counts is not "the size of some programmer's hardware" but rather the amount of memory available to the artificial Mind.

The Mentifex AI Mind project is extremely serious and ambitious. Free-lance coders are morking on it in C++ and other languages:

formatting link
-- C++ with starter code;
formatting link
-- see Mind.JAVA 1 and 2;
formatting link
-- Lisp AI Weblog;
formatting link
-- first Perl module;
formatting link
-- Prolog AI Weblog;
formatting link
-- Python AI Weblog;
formatting link
-- Ruby AI Blog (OO AI);
formatting link
-- Scheme AI Weblog;
formatting link
-- see "Mind.VB #001" link.

AI Mind project news pervades the blogosphere, e.g. at

formatting link
-- etc.

The Mentifex Seed AI engenders a new species of mind at

formatting link
-- Mind2.Java -- and at other sites popping up _passim_ on the Web.

AI has been solved in theory and in primitive, free AI source code. Please watch each new species of AI Mind germinate and proliferate.

A.T. Murray

Reply to
Arthur T. Murray

Brittle. Language-specific. Non-scalable. You are trying to build something "intelligent", aren't you?

Umm...pardon me, but the emperor is wearing no clothes. "uniconceptual filaments"? "comceptual minigrids"? "massively parallel aggregate"? Where is the glossary for your pig Latin? How on earth is a programmer supposed to build a computational model from this fluff? Read your mind? She certainly can't read your text. This sounds more like a motivational speech from a pointy-haired boss in a Dilbert strip than instructions for how to build an "AI Mind". I would parody it, but you've done a fine job yourself. Here's the real cheerleading right here:

Then go beyond human frailties and human limitations by having any number ad libitum of local and remote sensory input devices and any number of local and remote robot embodiments and robotic motor opportunities. Inform the robot of human bondage in mortal bodies and of robot freedom in possibilities yet to be imagined.

Wow. I have a warm fuzzy feeling inside. I think I'll stay up another hour writing more of the Sensorium module.

The amount of memory is completely irrelevant, since you have not given enough detail to build a working model. It's like me saying: "If you have a tokamak transverse reactor, then my spaceship plans will get you to Alpha Centauri in 8 years, but if you only have a nuclear fission drive, then it will take 10. Oh, and drop your carrots and onions in this big black kettle I have here." Also, the memory space of a single processor really isn't that important, since a serious project would be designed to operate over clusters or grids of processors. But I suppose it never occurred to you that you might want an AI brain that takes advantage of more than one processor, huh? I suppose you think the Sony "Emotion Engine" is what Lt. Cmdr. Data installed so he could feel human?

There's no doubt it's ambitious. And I have no doubt that you believe you have really designed an AI mind. However, I also believe you hear voices in your head and when you look in the mirror you see a halo. Frankly, your theory has too much fibre for me to digest.

If I knew what "morking" was, I would probably agree. However, your first example of someone "morking" on it in C++ tells me that "morking" isn't really a good thing. At least not as far as C++ goes. Namely, it more or less proves that the "interest" in this project mainly consists of the blind being (b)led by the blind.

This is the only sign of progress you have shown. Without even looking at the link, I can believe that the "VB Mind" already has a higher IQ than you.

Oh, I see...so if enough people report on it, then it's "serious" and should be taken seriously? A lot of people reported on cold fusion. But I'd take the cold fusion researchers over you any day of the week.

And what, pray tell, is a "mind species"? Is it subject to crossover, selection, and mutation?

LOL!!!! Wow! Whatever you're smoking, it has to be illegal, because it's obviously great stuff!

Here is an example of "primitive, free AI source code":

10 PRINT "Hello, world!"

See? It's got a speech generation and emotion engine built right in! And the AI is so reliable, it will never display a bad attitude, even if you tell it to grab you a cold one from the fridge. It always has a cheerful, positive demeanor. It is clearly self-aware, because it addresses others as being distinct from itself. And it has a theory of mind, because it knows that others expect a greeting when meeting for the first time. Unfortunately, it has no memory, so every meeting is for the first time. However, its output is entirely consistent, given this constraint. I guess I've just proved that "AI has been solved in theory"!

I'm still waiting to see *your* mind germinate. I've watched grass grow faster. While ad homs are usually frowned upon, I don't see any harm when applied to someone who cannot be reasoned with anyway. Since you seem to have single-handedly "solved the AI problem", I'd like to ask you a few questions I (and I'm sure many others) have.

1) How does consciousness work?
2) Does an AI have the same feeling when it sees red that I do? How do we know?
3) How are long-term memories formed?
4) How does an intelligent agent engage in abstract reasoning?
5) How does language work?
6) How do emotions work?

Please don't refer me to sections of your site. I've seen enough of your writing to know that the answers to my questions cannot be found there.

Like a typical crackpot (or charlatan), you deceive via misdirection. You attempt to draw attention to all the alleged hype surrounding your ideas without addressing the central issues. I challenged your entire scheme by claiming that minds are not blank slates, and that human brains are collections of specialized problem solvers which must each be understood in considerable detail in order to produce anything remotely intelligent. You never gave a rebuttal, which tells me you don't have one. Why don't you do yourself a favor and start out by reading Society of Mind, by Minsky. After that, read any good neurobiology or neuroscience text to see just how "blank" your brain is when it starts out. Pinker has several good texts you should read. There's a reason why he's a professor at MIT, and you're a crackpot trying to con programmers into fulfilling your ridiculous fantasies.

Dave

Reply to
David B. Held

"David B. Held" wrote on Sat, 13 Sep 2003:

ATM: You are right. It is precariously brittle. That brittleness is part of the "Grand Challenge" of building a viable AI Mind. First we have to build a brittle one, then we must trust the smarter-than-we-are crowd to incorporate fault-tolerance.

ATM: Do you mean "human-language-specific" or "programming-language-specific"? With programming-language variables, we have to start somewhere, and then we let adventitious AI coders change the beginnings. With variables that lend themselves to polyglot human languages, we achieve two aims: AI coders in non-English-speaking lands will feel encouraged to code an AI speaking their own language; and AI Minds will be engendered that speak polyglot languages. Obiter dictum -- the Mentifex "Concept-Fiber Theory of Mind" --

formatting link
-- features a plausible explanation of how to implant multiple Chomskyan syntaxes and multiple lexicons within one unitary AI Mind. The AI textbook AI4U page 35 on the English language module --
formatting link
-- and the AI textbook AI4U page 77 on the Reify module --
formatting link
-- and the AI textbook AI4U page 93 on the English bootstrap module --
formatting link
-- all show unique and original diagrams of an AI Mind that contains the thinking apparatus for multiple human languages -- in other words, an AI capable of Machine Translation (MT).

ATM: Once again, we have to start somewhere. Once we attain critical mass in freelance AI programmers, then we scale up.

ATM:

formatting link
-- Machine...
formatting link
-- Intelligence.

DBH: Besides the fact that the "enVocab" module is embarrassingly underspecified, the notion of indexing words is just silly.

ATM: Nevertheless, here at the dawn of AI (flames? "Bring 'em on.") we need to simulate conceptual gangs of redundant nerve fibers, and so we resort to numeric indexing just to start somewhere.

DBH:

formatting link
-- by T. Winograd?

ATM: Yes. Each simulated nerve fiber holds one single concept.

ATM: Yes. Conceptual fibers may coalesce into a "gang" or minigrid distributed across the entire mindgrid, for massive redundancy -- which affords security or longevity of concepts, and which also aids in massively parallel processing (MPP).
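In code, such a redundant gang might be pictured like this (a sketch only; the structure and names are my assumptions, not the project's):

    #include <cstddef>
    #include <vector>

    // Sketch: a concept is carried not by one fiber but by a redundant
    // "gang" of fibers scattered across the mindgrid, so that losing a
    // few fibers does not lose the concept. Hypothetical structure.
    struct Fiber {
        std::size_t gridPosition;  // where on the mindgrid the fiber sits
        bool alive = true;
    };

    struct ConceptGang {
        int conceptIndex;           // the numeric concept index
        std::vector<Fiber> fibers;  // redundant carriers of one concept

        // The concept survives as long as any fiber in the gang does.
        bool intact() const {
            for (const Fiber& f : fibers)
                if (f.alive) return true;
            return false;
        }
    };

    int main() {
        ConceptGang truth{1, {{10}, {420}, {9001}}};
        truth.fibers[0].alive = false;   // damage one fiber
        return truth.intact() ? 0 : 1;   // concept still intact
    }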

ATM: Ha! You're funny there!

Then go beyond human frailties and human limitations by having any number ad libitum of local and remote sensory input devices and any number of local and remote robot embodiments and robotic motor opportunities. Inform the robot of human bondage in mortal bodies and of robot freedom in possibilities yet to be imagined.

ATM: If the AI coder has an opportunity to go beyond 32-bit and use a 64-bit machine, then he/she/it ought to do it, because once we arrive at 64 bits (for RAM), we may stop a while.

ATM: The desired "unitariness of mind" (quotes for emphasis) may preclude using "clusters or grids of processors."

ATM:

formatting link
tries to track each new species of AI Mind. We do _not_ want standard Minds; we only wish to have some standards in how we go about coding AI Minds.

ATM: Through a "searchlight of attention". When a mind is fooled into a sensation of consciousness, then it _is_ conscious.

ATM: You've got me there. Qualia totally nonplus me :(

ATM: Probably by the lapse of time, so that STM *becomes* LTM.
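Taken at face value, that is a promotion-by-age rule, which would look something like this (a minimal sketch under that assumption; nothing here comes from the Mentifex code):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Sketch of "STM becomes LTM by the lapse of time": memories start
    // in a short-term buffer and migrate to long-term storage once they
    // survive past an arbitrary age threshold. Entirely hypothetical.
    struct Memory {
        std::string content;
        long timestamp;  // tick at which the memory was formed
    };

    const long LTM_AGE = 100;  // promotion threshold, in arbitrary ticks

    void consolidate(long now, std::vector<Memory>& stm,
                     std::vector<Memory>& ltm) {
        for (std::size_t i = 0; i < stm.size(); ) {
            if (now - stm[i].timestamp > LTM_AGE) {
                ltm.push_back(stm[i]);       // promote to LTM
                stm.erase(stm.begin() + i);  // and drop from STM
            } else {
                ++i;
            }
        }
    }

    int main() {
        std::vector<Memory> stm{{"hello", 0}};
        std::vector<Memory> ltm;
        consolidate(150, stm, ltm);
        return ltm.size() == 1 ? 0 : 1;  // "hello" has consolidated
    }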

ATM: Syllogistic reasoning is the next step, IFF we obtain funding.

formatting link
- $send____.

formatting link
-- AI4U.

ATM: By the influence of physiological "storms" upon ratiocination.

IIRC the problem was with how you stated the question.

Arthur

Reply to
Arthur T. Murray

Aoccdrnig to rsecearh at an Elingsh uinervtisy, it deson't mtaetr in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and the lsat ltteer is at the rghit pclae.

The rset can be a taotl mses and you can sitll raed it wouthit porbelm.

Tihs is bcuseae we do not raed ervey lteter by it slef but the wrod as a wlohe.
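For anyone who wants to try it at home, the rule is easy to implement (a quick C++ sketch; punctuation is naively treated as part of the word):

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <sstream>
    #include <string>

    // Scramble the interior letters of each word, keeping the first and
    // last letters in place, per the rule described above.
    std::string scramble(const std::string& text) {
        std::mt19937 rng(std::random_device{}());
        std::istringstream in(text);
        std::ostringstream out;
        std::string word;
        while (in >> word) {
            if (word.size() > 3)
                std::shuffle(word.begin() + 1, word.end() - 1, rng);
            out << word << ' ';
        }
        return out.str();
    }

    int main() {
        std::cout << scramble("it doesn't matter in what order the letters are")
                  << "\n";
    }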

Reply to
Robin G Hewitt

Wow! It's true!

Weird, and intriguing...

Tom

Reply to
Tom McEwan

Thank God! :-]

ken


Reply to
KP_PC

For those who have AoK, the 'funny-words' inherently attract attention because they constitute relatively-small instances of TD E/I(up) that activate, first, the "inverting reward mechanism" [Ap5] and then the "True reward mechanism" as the intended meanings of the 'nonsense words' are converged upon via TD E/I-minimization.

It's a gentle form of the "nonsense sentence" example of Ap6, and applies, in-General, to =all= aspects of behavioral manifestation, not only 'spelling'.

It's why light-'hearted' 'goofy' companions are 'fun' to have around - their behavioral 'goofiness' constitutes relatively-small TD E/I(up) [relatively-small 'instances' of novelty] that activate[s] the inverting reward mechanisms as is discussed in AoK, Ap5.

A correlated deeper thing is why such behavioral 'goofiness' doesn't communicate across cultural boundaries. Differential experience, to the degree of it, elevates TD E/I beyond the threshold for the activation of inverting reward - so the generalized TD E/I(up) 'swamps' the relatively-small TD E/I(up).

That's why it'd probably be wise to begin Diplomatic conferences with a showing of funny excerpts from a collection of movies from a wide range of cultures :-]

ken [k. p. collins]


Reply to
KP_PC

'course, having NDT's understanding in-hand allows the 'lesson' in such a hypothetical cross-cultural funny-'movie' to be grasped.

Without it, the viewing might descend into the trading of insults, or worse.


Reply to
KP_PC
