Re: How Robots Will Steal Your Job

Here's my stab at defining intelligence.
Intelligence: The ability to choose in every commonly encountered type of situation an executable action which produces
a desirable (satisfactory, useful) consequence.
Intelligent computer programs work on a domain of situation types to produce behaviors that are desirable (satisfactory, useful).
(I would appreciate criticisms and improvements of this definition.)
George W. Cherry http://sdm.book.home.comcast.net
"George W. Cherry" wrote:

What if you s/desirable/optimal/ ? That resolves conflicts between two strategies in a fashion that makes 'em well-ordered.
-- Les Cargill

Les, I couldn't parse the above sentence (on which the next sentence seems to depend).
George Cherry

"George W. Cherry" wrote:

Sorry. I keep thinking everybody knows vi.
So substituted, it would then read: Intelligence: The ability to choose in every commonly encountered type of situation an executable action which produces an optimal consequence.
All it does is use a slightly more measurable version of the word "desirable".

-- Les Cargill
On Fri, 12 Dec 2003 19:26:29 GMT, Les Cargill wrote:

that's a bit strong. We consider humans intelligent and we humans rarely hit on the optimal solution to a problem.
In a Darwinian sense, it means finding solutions that are good enough for survival within the given time constraint.
-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See http://mindprod.com/jgloss/jgloss.html for The Java Glossary.
Roedy Green wrote:

You have to consider "optimal with respect to what". If thing A hits on a better solution than thing B, I'd say A is more intelligent than B.
Intelligence means going beyond the "good enough" level of solution, in other words. If you don't then the definition doesn't dovetail with Markov models of evolution.

Actually, not so much survival as the ability to procreate.

-- Les Cargill
wrote or quoted :

It's not a "one time" matter. Intelligence is demonstrated over an extended period in a number of situation types.

"Markov models of evolution" ????

Well, Les, survival is a prerequisite for procreation, n'est-ce pas?

"George W. Cherry" wrote:

Right, but intelligence would be a measure from results of tests iterated over all such "comparisons".
If "good enough" is the only standard, then the state space of the thing being measured has a bound at one end. I suspect that's a problem.
Given person A and person B, if A consistently chooses a closer to optimum performance than B, it seems B is less intelligent than A. And we're kind of conflating intelligence and evolutionary fitness, which I'm not sure is correct.
I thought this was a Stephen Jay Gould hypothesis? Can't remember where from.
If intelligence is modelable as a continuous distribution, I believe that the measures it's distributing have to be well ordered. Intelligence isn't the same thing as evolutionary fitness.

I don't mean strictly Markov, but each mutation can be analogous to a "game move", moving the animal closer to or farther from optimum. I'm using the term "optimum" because things like sharks haven't changed in millennia - looks like a stable species to me.

Not always. There's hysteresis there - sometimes the procreation is at the expense of the survival of the parent(s). This is especially true with bacteria and very small animals.

-- Les Cargill
"George W. Cherry" wrote:

I think we can take up the concept and generalize: define intelligence as the capacity to store mental (virtual) images* and the faculty (power) to manipulate them in order to react to environmental stimuli.
* A virtual representation of reality.
message wrote or quoted :

Right. As Nobelist Herbert Simon famously said: we aim to "satisfice", to settle for a "good enough" solution.

Exactly. One must act "in time": the "optimum" action-- executed too late--is worthless.
George W. Cherry


Intelligent behavior--given an agent's bounded rationality and imperfect information--aims at "good enough" solutions, not optimum solutions. (Although, sometimes, optimum solutions are both possible and essential, as in the Apollo Lunar Module's guidance and control system. I designed the ascent and descent algorithms and the LM's digital autopilot, and they were on the money optimum; but such solutions are often impractical and/or unnecessary.) I could not conceive or design an optimum investment strategy. (I wish I could!)
George W. Cherry
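Simon's satisficing idea--stop when a solution clears an aspiration level, and act "in time" with the best you have--can be sketched in a few lines. This is a toy illustration with hypothetical names and thresholds, not anyone's actual algorithm:

```python
import time

def satisfice(candidates, score, aspiration, deadline_s=0.01):
    """Return the first candidate whose score meets the aspiration level,
    or the best one seen so far if the deadline expires.
    'Good enough', not guaranteed optimal."""
    start = time.monotonic()
    best, best_score = None, float("-inf")
    for c in candidates:
        s = score(c)
        if s >= aspiration:          # good enough: stop searching
            return c
        if s > best_score:
            best, best_score = c, s
        if time.monotonic() - start > deadline_s:
            break                    # act "in time" with what we have
    return best

# A satisficer stops at 7 even though 9 appears later in the search.
print(satisfice([3, 7, 2, 9], score=lambda x: x, aspiration=5))  # -> 7
```

Note that an exhaustive optimizer would scan all four candidates and return 9; the satisficer trades that away for a bounded search.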

On Sat, 13 Dec 2003 04:33:48 GMT, "George W. Cherry" wrote:

We have something vaguely in common. JPL used my 32-bit BBL Forth interpreter to write the code for some unmanned missions. How did you get such an interesting job? One thing many people don't realize is that space missions use hardware of considerably older design than the rest of us.
-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See http://mindprod.com/jgloss/jgloss.html for The Java Glossary.
message wrote or quoted :

I applied for it! The MIT Instrumentation Laboratory (now the Draper Laboratory) was the prime contractor for the Apollo Navigation, Guidance, and Control System. I joined the Laboratory in 1961 to work on the ANGCS and became the project manager of the Lunar Module NGCS. The computer was a very interesting one, but the algorithms and programs were what made the project a success. I designed the Lunar Module digital autopilot. It was the first digital autopilot ever. It was far superior to Grumman Aircraft and Engineering's back-up analog autopilot.
Apollo was a moondoggle. It is a mark of Bush league's strange "intelligence" that he wants to "establish a presence on the moon". The purpose of Apollo was to one-up the Soviet Union in space. I didn't care about that, though: I just wanted to do aerospace engineering on a challenging project that was not military.
George

The Apollo spacecraft computer was really quite innovative.


If you're looking to maximize satisfaction over the long term, then choosing the optimal action in every state won't always be your best bet...there's tonnes of stuff about this in the reinforcement learning literature...
Fred.
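The point Fred raises--that greedily taking the immediately best action in each state can lose to a policy that maximizes long-term return--shows up even in a two-state toy problem. The numbers and state names below are made up for illustration:

```python
# Toy decision problem: in state "start", action "greedy" pays 1 now and
# ends; action "patient" pays 0 now but leads to a state paying 10.
REWARD = {("start", "greedy"): 1, ("start", "patient"): 0,
          ("later", "collect"): 10}
NEXT = {("start", "greedy"): None, ("start", "patient"): "later",
        ("later", "collect"): None}
ACTIONS = {"start": ["greedy", "patient"], "later": ["collect"]}

def value(state):
    """Optimal long-term return via exhaustive lookahead (no discounting)."""
    if state is None:
        return 0
    return max(REWARD[(state, a)] + value(NEXT[(state, a)])
               for a in ACTIONS[state])

def greedy_return(state):
    """Return earned by always taking the immediately best-paying action."""
    total = 0
    while state is not None:
        a = max(ACTIONS[state], key=lambda a: REWARD[(state, a)])
        total += REWARD[(state, a)]
        state = NEXT[(state, a)]
    return total

print(greedy_return("start"), value("start"))  # greedy: 1, far-sighted: 10
```

The myopic policy locks in the reward of 1; the far-sighted policy forgoes it and collects 10.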
Fred Mailhot wrote:

That's one thing I like about my version - it doesn't specify the domain over which the choice is optimized :)
-- Les Cargill
George W. Cherry wrote:

Here's mine:
The ability to obey the Three Laws of Robotics.
This would necessitate some rewriting if we were discussing, say, dolphin intelligence ... "nor by inaction allow a dolphin to come to harm", etc, but its original form would be good enough to cover both humans and robots/computers built by humans.
Note that I am associating intelligence with the *ability* to obey the three laws; that doesn't necessarily imply a *willingness* to obey them, nor the *necessity* of obeying them. For example, I am (reluctantly) ready to concede that the UK Prime Minister, Mr Blair, qualifies as "intelligent", even though he has undoubtedly broken at least two of Asimov's three laws during the course of his political career.
--
Richard Heathfield : snipped-for-privacy@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.

A computer program can test an entity for these conditions to see if they are broken. Some time needs to be spent in defining the actors and environment, but this is plausible.

But these rich, contextually informed assessments are not so conducive to a program. You could probably emulate it via a huge list of conditionals and definitions...perhaps at some point that simply stops being emulation and, in every way that's meaningful, becomes intelligence.

Can anyone obey the three laws of robotics? It appears to me that, even if we could agree on the definitions of "harm" and "human" (and we can't) situations would arise in which any action or inaction would change which individuals were harmed. In which case some individual could always claim to be harmed by the action or inaction.
- Gerry Quinn
Gerry Quinn wrote:

Probably not, in which case we must all be dense. :-)

Positronic robots dealt with that by going for "least harm" - which, you could argue with plenty of justification, is not the same as "no harm".
In at least one story, Asimov played games with the definition of "human" too. (Was it "The Bicentennial Man"?)
--
Richard Heathfield : snipped-for-privacy@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.

Certainly has more scope for debate, given that such things tend to be unpredictable!

Don't know - I've never actually been a big fan of Asimov.
- Gerry Quinn

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.