Re: How Robots Will Steal Your Job

On Wed, 10 Dec 2003 13:57:54 -0600, Programmer Dude wrote or quoted :

When I was working on this in 1979, we were doing simple Fourier transforms and looking at them.
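In modern terms, that first analysis step might look something like this — a sketch only, with a synthetic tone standing in for a real recording (the sample rate and frequency here are invented; actual dolphin clicks reach well past 100 kHz):

```python
import numpy as np

np.random.seed(0)  # reproducible noise for the synthetic example

# Synthetic stand-in for a recorded dolphin sound: a 30 kHz tone plus noise,
# sampled at 96 kHz.
fs = 96_000
t = np.arange(0, 0.01, 1 / fs)
signal = np.sin(2 * np.pi * 30_000 * t) + 0.1 * np.random.randn(t.size)

# The "simple Fourier transform" step: magnitude spectrum of the clip.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant frequency ~ {peak_hz:.0f} Hz")
```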

People speculated about phase, and also that the language could even be holographic -- painting a sonic picture of sorts.

Funding was pretty sparse. The people doing most of the research were the Navy, who used dolphins to deliver explosives.

-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See

formatting link
for The Java Glossary.

Reply to
Roedy Green

On Wed, 10 Dec 2003 13:57:54 -0600, Programmer Dude wrote or quoted :

Have you read Pinker's The Language Instinct?

formatting link
It argues our grammar is hard-wired. There are just a few tweakable configuration parameters. It would be a bit much to expect some other species to deal with it without the hardware.

The communication schemes of cetacea may be equally inaccessible to us because we lack the wetware.


Reply to
Roedy Green

in article snipped-for-privacy@4ax.com, Roedy Green at snipped-for-privacy@mindprod.com wrote on 12/10/03 4:16 PM:

"Barn owls, which are especially good at localizing prey by acoustic cues alone, have timing difference thresholds as low as one microsecond. There is no question that the temporal information essential to these discrimination tasks is carried in the phase locking of the auditory nerve, and in the case of the barn owl it has been possible to identify the neural circuits responsible for making the precise temporal comparison between phase locked spikes coming from the two ears. For a review of the work, see Carr and Konishi (1990)."

from "Spikes: Exploring The Neural Code", Rieke, Warland, de Ruyter van Steveninck, Bialek.

The paper referenced is:

Carr, C. E., and M. Konishi (1990). A circuit for detection of interaural time differences in the brain stem of the barn owl. J. Neurosci. 10: 3227-3246.

Rieke et al. also discuss echolocation in bats (and give plenty of references), but nothing on dolphins.

-- Michael

Reply to
Michael Olea

in article snipped-for-privacy@4ax.com, Roedy Green at snipped-for-privacy@mindprod.com wrote on 12/10/03 4:19 PM:

This summary of a 400 page book, itself a popular summary of much more voluminous technical work, might overstate a wee bit what Pinker argues. "Hard wired" is a bit much, but aside from such quibbles I'm sure you'll find universal agreement on usenet with Pinker's thesis. ;-)

I'm waiting for the collected works of Nim Chimpsky, but in the meantime "Wild Minds: What Animals Really Think", M. Hauser is a strong alternative to reruns of "The Simpsons".

-- Michael

Reply to
Michael Olea

Here's my stab at defining intelligence.

Intelligence: The ability to choose in every commonly encountered type of situation an executable action which produces a desirable (satisfactory, useful) consequence.

Intelligent computer programs work on a domain of situation types to produce behaviors that are desirable (satisfactory, useful).

(I would appreciate criticisms and improvements of this definition.)

George W. Cherry

formatting link

Reply to
George W. Cherry

What if you s/desirable/optimal/ ? That resolves conflicts between two strategies in a fashion that makes 'em well-ordered.

-- Les Cargill

Reply to
Les Cargill

Les, I couldn't parse the above sentence (on which the next sentence seems to depend).

George Cherry

Reply to
George W. Cherry

Here's mine:

The ability to obey the Three Laws of Robotics.

This would necessitate some rewriting if we were discussing, say, dolphin intelligence ... "nor by inaction allow a dolphin to come to harm", etc, but its original form would be good enough to cover both humans and robots/computers built by humans.

Note that I am associating intelligence with the *ability* to obey the three laws; that doesn't necessarily imply a *willingness* to obey them, nor the *necessity* of obeying them. For example, I am (reluctantly) ready to concede that the UK Prime Minister, Mr Blair, qualifies as "intelligent", even though he has undoubtedly broken at least two of Asimov's three laws during the course of his political career.

Reply to
Richard Heathfield

If you're looking to maximize satisfaction over the long term, then choosing the locally optimal action in every state won't always be your best bet... there's tonnes of stuff about this in the reinforcement learning literature...
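A tiny worked example of that point (the two-state world and its payoffs are invented for illustration): a "greedy" policy that takes the best immediate reward every step loses to a "patient" policy that gives up one step's reward to reach a state that pays more from then on.

```python
# Toy sequential decision problem: from "start", +1 per step is the
# locally optimal action, but moving to "rich" (reward 0 that step)
# pays +2 on every later step.

def run(policy: str, horizon: int = 10) -> int:
    state, total = "start", 0
    for _ in range(horizon):
        if state == "start":
            if policy == "greedy":
                total += 1       # locally optimal: +1 now
            else:
                state = "rich"   # 0 now, but a better state from here on
        else:
            total += 2
    return total

print(run("greedy"), run("patient"))  # greedy earns 10, patient earns 18
```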

Fred.

Reply to
Fred Mailhot

I would define intelligence as the ability to solve new problems. By this definition, computers are not intelligent because they cannot program themselves.

Isn't intelligence something gradual? I mean, since humans (by definition intelligent) evolved from amoebae (not intelligent), at some point along the way, a threshold of "intelligence" was crossed. But I don't think it would be like flicking a switch. Rather, I think consciousness, like intelligence, is on a greyscale.

Animals don't realize what we are doing, because nobody can tell them, and they are probably insufficiently intelligent. And given that "intelligent" humans can't be bothered to rectify the planet and create a just society, why should animals bother? Animals just look out for themselves, just like humans. No, human intelligence still has a way to go, at least judging by the chimp the Americans chose as their president.

Reply to
Calum

A computer program can test an entity for these conditions to see if they are broken. Some time needs to be spent in defining the actors and environment, but this is plausible.

But these rich, contextually informed assessments are not so conducive to a program. You could probably emulate it via a huge list of conditionals and definitions... perhaps at some point that simply stops being emulation and in every way that's meaningful becomes intelligence.
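A minimal sketch of such a test, with stub predicates (the `Action` fields and the checks are invented for illustration). The stubs hide exactly the definitional work the post mentions — deciding what counts as harm, an order, or a human:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool
    inaction_harms_human: bool
    obeys_order: bool
    order_conflicts_first_law: bool
    self_destructive: bool

def violated_laws(a: Action) -> list[int]:
    """Return which of the Three Laws a proposed action breaks."""
    broken = []
    if a.harms_human or a.inaction_harms_human:
        broken.append(1)
    if not a.obeys_order and not a.order_conflicts_first_law:
        broken.append(2)
    if a.self_destructive and not broken:  # third law yields to the first two
        broken.append(3)
    return broken

print(violated_laws(Action(False, False, True, False, False)))  # []
```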

Reply to
gswork

Can anyone obey the three laws of robotics? It appears to me that, even if we could agree on the definitions of "harm" and "human" (and we can't), situations would arise in which any action or inaction would change which individuals were harmed. In that case some individual could always claim to be harmed by the action or inaction.

- Gerry Quinn

Reply to
Gerry Quinn

Well, they can, but someone has to make the program that programs them.

I am sure you can make a self-modifying program that is sufficiently complex that you cannot predict how it is going to end up. In this case, will it be correct to say that it was _you_ who created the end product?

Cheers Bent D

Reply to
Bent C Dalager

Those programs could be considered part of the initial program. And they wouldn't be solving any _new_ problems, just ones the original programmer anticipated.

I suppose you could have genetic programming - whereby you have a pool of programs, each recombining, getting randomly mutated, trying to solve a problem. The problem is that you need some kind of fitness function, and being one bit away from a solution could render a program worthless. The chances of getting a working solution out of such a system are quite small.
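A bare-bones version of that loop, on bit strings rather than real program trees (all parameters invented; a sketch, not a serious GP system). Note the graded fitness function: an all-or-nothing fitness of the kind described, where being one bit off scores zero, would give the search nothing to climb.

```python
import random

random.seed(0)  # deterministic run for the example
TARGET_LEN, POP, GENS = 32, 40, 200

def fitness(bits):
    # Graded fitness: count of correct bits (here, bits set to 1).
    return sum(bits)

def mutate(bits):
    # Flip each bit independently with 2% probability.
    return [b ^ (random.random() < 0.02) for b in bits]

def crossover(a, b):
    cut = random.randrange(TARGET_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    parents = pop[:POP // 2]               # keep the fitter half intact
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("generation", gen, "best fitness", fitness(pop[0]))
```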

You may as well just start at the number 0 and test every possible program you can generate to see if it solves your problem. But there isn't enough time to do that.

I think in the future computers may be able to exceed their original programming, but I can't imagine how! The question is as much _what_ to solve as how to solve it. With an animal it's easy: how to locate/catch/disable/open nutrition, how to escape a cage, how to woo a mate, how to kill a rival, how to locate/build a home. There is no logical reason to solve those problems, but without those urges the animal would die out. What problems would a computer decide to solve?

Calum

Reply to
Calum

Probably not, in which case we must all be dense. :-)

Positronic robots dealt with that by going for "least harm" - which, you could argue with plenty of justification, is not the same as "no harm".

In at least one story, Asimov played games with the definition of "human" too. (Was it "The Bicentennial Man"?)

Reply to
Richard Heathfield

Sorry. I keep thinking everybody knows vi.

So substituted, it would then read: Intelligence: The ability to choose in every commonly encountered type of situation an executable action which produces an optimal consequence.

All it does is use a slightly more measurable version of the word "desirable".

-- Les Cargill

Reply to
Les Cargill

That's one thing I like about my version - it doesn't specify the domain over which the choice is optimized :)

-- Les Cargill

Reply to
Les Cargill

Dunno. As I read Asimov, the Three Laws were intended as constraints on already intelligent "beings". They're an ethical construct, not a description of "how to" or "is it".

-- Les Cargill

Reply to
Les Cargill

On Thu, 11 Dec 2003 22:47:34 GMT, "George W. Cherry" wrote or quoted :

You need something to measure the degree of intelligence in your definition. By your definition an electric eye to open a door might be considered intelligent since it decides whether to open the door or not.
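The electric-eye counterexample in code form: trivially, the device "chooses an executable action which produces a desirable consequence" in its one situation type, yet nobody would call it intelligent (the function and its strings are, of course, invented for illustration).

```python
# An "electric eye" door opener: one situation type, one decision.
def electric_eye(beam_interrupted: bool) -> str:
    return "open door" if beam_interrupted else "keep closed"

print(electric_eye(True))   # open door
print(electric_eye(False))  # keep closed
```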


Reply to
Roedy Green

On Fri, 12 Dec 2003 19:26:29 GMT, Les Cargill wrote or quoted :

That's a bit strong. We consider humans intelligent, and we humans rarely hit on the optimal solution to a problem.

In a Darwinian sense, it means finding solutions that are good enough for survival within the given time constraint.


Reply to
Roedy Green
