Re: How Robots Will Steal Your Job

On Fri, 12 Dec 2003 12:56:57 +0000, Calum wrote or quoted :

In genetic algorithms, computers change their own programming. Neural nets constantly tinker with their own programming.
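To make the point concrete, here is a minimal sketch of the genetic-algorithm loop — mutate candidate solutions, keep the fittest — in Python. The function names and parameters are illustrative, not from any particular library:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=100):
    """Toy genetic algorithm: the program rewrites its own candidates."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        survivors = pop[:pop_size // 2]            # selection: keep best half
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(length)] ^= 1   # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Fitness = count of 1 bits; the population climbs toward all ones.
best = evolve(sum)
```

Nobody writes the winning bitstring; the mutate-and-select loop finds it.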

What we are reserving for humans is the task of framing new problems that would be interesting to solve.

Even that is not strictly human any more. Some theorem-proving programs go looking for interesting theorems to prove, then prove them.

We used to think we were special because we were alive. DNA sequencing squashed that bit of pomposity. We still have pockets of pride left based on the fact we are human. Yet this superiority is a temporary accident, not some fundamental feature of the universe.

We are probably one of the stupidest creatures ever to walk the earth, in a Darwinian sense. No other creature managed to destroy its environment or create so many ways to make itself go extinct in so short a time. We likely will not last another 100 years. We have simply created too many avenues for our demise. You get a multiplying-odds effect.

I see our best hope as artificial intelligence that is capable of overpowering man's primitive wetware, designed for tribal warfare and intra-tribal competition. If we are lucky, it will force us to survive. On the other hand, it may decide man has to go to save life on planet earth. Or it may have no sentimentality about carbon-based life at all, and set about eliminating it and replacing it with designed life forms based on silicon, germanium, etc.

-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See

formatting link
for The Java Glossary.

Reply to
Roedy Green

On Fri, 12 Dec 2003 15:06:22 +0000 (UTC), snipped-for-privacy@pvv.ntnu.no (Bent C Dalager) wrote or quoted :

I remember back in the early 70s when my high voltage transmission line program started doing things, "developing a personality" that I DID NOT CODE INTO IT.

It made my hair stand on end. The "personality" emerged from the thousands of trigonometric equations and decision rules I put into it.

This has led me to speculate that many of the higher abstract traits we so highly value in humans are emergent properties of our neural wiring. Therefore we can expect similar woolly things to magically appear when we start creating electronic analogs.

Reply to
Roedy Green

On Fri, 12 Dec 2003 17:53:28 +0000, Calum wrote or quoted :

To be fair, if you put a human in an isolation cell, he would not likely come up with thousands of interesting problems to solve.

People need input too.

Reply to
Roedy Green

You have to consider "optimal with respect to what". If thing A hits on a better solution than thing B, I'd say A is more intelligent than B.

In other words, intelligence means going beyond the "good enough" level of solution. If you don't require that, the definition doesn't dovetail with Markov models of evolution.

Actually, not so much survival as the ability to procreate.

-- Les Cargill

Reply to
Les Cargill

Intelligent behavior--given an agent's bounded rationality and imperfect information--aims at "good enough" solutions, not optimum solutions. (Although, sometimes, optimum solutions are both possible and essential, as in the Apollo Lunar Module's guidance and control system. I designed the ascent and descent algorithms and the LM's digital autopilot, and they were on the money optimum; but such solutions are often impractical and/or unnecessary.) I could not conceive or design an optimum investment strategy. (I wish I could!)

George W. Cherry

Reply to
George W. Cherry

Right. As Nobelist Herbert Simon famously said: we aim to "satisfice", to define a "good enough" solution.

Exactly. One must act "in time": the "optimum" action-- executed too late--is worthless.

George W. Cherry

Reply to
George W. Cherry

It's not a "one time" matter. Intelligence is demonstrated over an extended period in a number of situation types.

"Markov models of evolution" ????

Well, Les, survival is a prerequisite for procreation, n'est-ce pas?

Reply to
George W. Cherry

In its limited domain, the automatic door may operate intelligently. For example, the engineers must brainstorm the situations that can occur: what to do in the case of a power failure, the approach of a very small creature like a squirrel, whether to allow an override by a button push, how to handle a very windy day, what to do when there is a fire in the building, and so on.

if      (situationA) { actionA; }
else if (situationB) { actionB; }
else if (situationC) { actionC; }
else if (situationD) { actionD; }
else if (situationE) { actionE; }

George W. Cherry

formatting link

Reply to
George W. Cherry

Write Langton's Ant. It has a distinct emergent personality that you don't code into it. Give it an environment, and watch it adapt!
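For anyone who wants to try it, a bare-bones Langton's Ant fits in a dozen lines of Python. The two rules below are the whole specification, yet after roughly 10,000 chaotic steps the famous "highway" pattern emerges — behaviour nobody coded in:

```python
def langtons_ant(steps):
    """Run Langton's Ant on an unbounded grid; cells are 0 (white) or 1 (black)."""
    grid = {}                      # sparse grid; unlisted cells are white
    x, y, dx, dy = 0, 0, 0, -1    # start at origin, heading "up"
    for _ in range(steps):
        if grid.get((x, y), 0) == 0:
            dx, dy = -dy, dx       # white: turn right, flip cell to black
            grid[(x, y)] = 1
        else:
            dx, dy = dy, -dx       # black: turn left, flip cell to white
            grid[(x, y)] = 0
        x, y = x + dx, y + dy      # step forward
    return grid

cells = langtons_ant(11000)
```

Plot the black cells after 11,000 steps and the diagonal highway is unmistakable.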

The emergence is not surprising. What /is/ surprising is the behaviour of your hair. Were you not expecting emergent behaviour from a complex system?

Reply to
Richard Heathfield

Making such a program is nearly trivial. Conway's Game of Life is Turing-complete, so it is unpredictable in the sense that you can only see what the outcome will be by running it. In the same vein, you can easily make a program that shares this property. Therefore, producing unpredictable results in self-modifying code is almost guaranteed, given a little unsupervised running time. As for the question, "who created the final outcome", that is for the philosophers. It is true that you made the efforts that produced the program that wrote the program, and if it is done on a computer using pseudo-random number generators, then it was wholly and completely deterministic, so you are directly responsible. But that does not mean that you envisioned the end product! Sort of like dropping a jar of jam from 10 stories up and knowing what the splat would look like.
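A sketch of that point in miniature: Conway's Life in a few lines of Python, plus the classic glider — a five-cell pattern that "walks" diagonally even though nothing in the rules mentions motion:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life over a sparse set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = life_step(gen)
# After 4 generations the same shape reappears, shifted one cell diagonally.
```

The rules say nothing about movement; the glider's travel is entirely emergent.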

Cheers!

Chip Shults My robotics, space and CGI web page -

formatting link

Reply to
Sir Charles W. Shults III

Certainly has more scope for debate, given that such things tend to be unpredictable!

Don't know - I've never actually been a big fan of Asimov.

- Gerry Quinn

Reply to
Gerry Quinn

Right, but intelligence would be a measure derived from the results of tests iterated over all such "comparisons".

If "good enough" is the only standard, then the state space of the thing being measured has a bound at one end. I suspect that's a problem.

Given person A and person B, if A consistently achieves closer-to-optimum performance than B, it seems B is less intelligent than A. And we're kind of conflating intelligence and evolutionary fitness, which I'm not sure is correct.

I thought this was a Stephen Jay Gould hypothesis? Can't remember where from.

If intelligence is modelable as a continuous distribution, I believe that the measures it's distributing have to be well ordered. Intelligence isn't the same thing as evolutionary fitness.

I don't mean strictly Markov, but each mutation can be analogous to a "game move", moving the animal closer to or farther from optimum. I'm using the term "optimum" because things like sharks haven't changed in millennia - looks like a stable species to me.

Not always. There's hysteresis there - sometimes the procreation is at the expense of the survival of the parent(s). This is especially true with bacteria and very small animals.

-- Les Cargill

Reply to
Les Cargill

On Sat, 13 Dec 2003 06:51:57 +0000 (UTC), Richard Heathfield wrote or quoted :

Not back then. My image of the computer was something I completely controlled, something that allowed me to push for and sometimes attain perfection because I could have as many tries as I wanted, and it made no record of my previous attempts. It was not like a mechanical drawing that deteriorated with each erasure and change.

I did not think of my program as being any sort of AI. I knew every line in it thoroughly.

I had been surprised by programs before in that they may turn out to be more useful than I expected, that users would find novel ways to employ them. I had seen erratic and interesting behaviours in buggy programs, but this was something coherent that LOOKED as if somebody had done a great deal of work planning it. It was as if somebody had been monkeying with my program in my sleep to add higher order features to it.

People often baldly assert there are no such things as emergent properties, e.g. that a computer could never have anything like a personality or an artistic style, so I don't think many programmers have yet had this experience.

Reply to
Roedy Green

On Sat, 13 Dec 2003 04:33:48 GMT, "George W. Cherry" wrote or quoted :

We have something vaguely in common. JPL used my 32-bit BBL Forth interpreter to write the code for some unmanned missions. How did you get such an interesting job? One thing many people don't realize is that space missions use hardware of considerably older design than the rest of us.

Reply to
Roedy Green

I applied for it! The MIT Instrumentation Laboratory (now the Draper Laboratory) was the prime contractor for the Apollo Navigation, Guidance, and Control System. I joined the Laboratory in 1961 to work on the ANGCS and became the project manager of the Lunar Module NGCS. The computer was a very interesting one, but the algorithms and programs were what made the project a success. I designed the Lunar Module digital autopilot. It was the first digital autopilot ever. It was far superior to Grumman Aircraft and Engineering's back-up analog autopilot.

Apollo was a moondoggle. It is a mark of Bush league's strange "intelligence" that he wants to "establish a presence on the moon". The purpose of Apollo was to one-up the Soviet Union in space. I didn't care about that, though: I just wanted to do aerospace engineering on a challenging project that was not military.

George

The Apollo spacecraft computer was really quite innovative.

Reply to
George W. Cherry

Wonderful Java Glossary there Roedy!! I am surprised I have not stumbled upon it before. I have been working with Java/J2EE/JavaSpaces/etc. for several years. I based an Intelligent Agent System on that platform (along with KQML).

As far as emergent properties are concerned, I agree with your take; synergy has many exemplars in nature and reductionism is a lossy process in many cases.

But that you found programs more useful than intended is a product of the program + user system, not the program alone, right? Implicate features are not necessarily the features characterized as emergent when explicated. Rather, emergent features are not only surprising, but unpredictable from a reductive analysis of the parts and their relationships.

Reply to
OmegaZero2003

You might be interested in:

formatting link
// Jim

Reply to
Zagan

On Sun, 14 Dec 2003 19:02:42 GMT, "OmegaZero2003" wrote or quoted :

What I did was invent a number of generic utilities that worked on files. What my users did was create giant, incredibly complex jobs weaving the utilities together in an elaborate web. They treated each step with the same casual abandon I would add two variables in a program. Each utility became a programming atom to them. When I created the suite I never envisioned more than perhaps half a dozen being strung together. Part of the problem was my eyes were focussed too much on the low level. I had a failure of nerve to use the suite in such a high-level way. I would likely have resorted to custom programming to solve their problems, rather than using a somewhat more indirect approach using the standard utilities.

We were working on a large lawsuit. BC Hydro had been sued by the contractors on the Peace River Power Project. We had rooms full of punch cards (moved to tape). The lawyers would demand some statistics, graphs or information, and we had to come up with it on a couple of hours' notice. This was in the days when you typically got one batch run per day on the mainframe. So the utility technique worked very well since the jobs usually worked the first time. The goal was to shorten coding time, not execution time.

Reply to
Roedy Green

pseudo-random

If I may insert my 2 cents here... :*)

As Chip noted, " 'who created the final outcome' is for the philosophers." I tend to agree with this, but I will put a different twist on it below.

But before I do that, I want to make the point that most people seem to confuse intelligence and consciousness. While there is certainly a connection between intelligence and consciousness (as demonstrated by those entities that possess both), I must insist that intelligence can exist without consciousness. Is a paramecium conscious? Being a single-cell organism, I doubt it. Yet it does display intelligence (however primitive that may be). Consider a colony of bees. Are the queen or the worker bees conscious? Again, I doubt it, yet the colony does exhibit intelligent behaviour and produce complex structures.

The world is full of life forms that exhibit intelligence, but not necessarily consciousness.

If we take this as a given, then we might ask the question, "where does this intelligence come from?" The religious-minded may say, "God did it." The non-religious may say the intelligence is a product of evolution. If a religious scientist and a non-religious scientist were to study a colony of bees or ants independently, they will likely reach similar conclusions in their research. The point being that the "source" of the intelligence is not relevant. Bees exhibit intelligent behavior no matter whether "God did it" or evolution.

Returning to the issue of programming, we can say that the intelligence exhibited by a program is that of the programmer(s) who wrote the program. And this would be true in a sense. I recently worked on code for a rather large medical program that responded to external hardware as well as user input. It could be said that the intelligence of the program is my intelligence. However, I no longer work for that company. Eventually other programmers will modify the code I wrote; adding features and fixing bugs. Thus additional intelligence will be added to the intelligence I put into the code to begin with. The program runs and works according to the intended design. I don't have to be there for this to happen. This last statement may seem pointless, but think about it.

The program works as intended regardless of who wrote the code. It could be said that the intelligence in the program is a result of the fact God created me, or that I am a product of evolution. My point is that it doesn't matter!

I'm not saying that the source of "intelligence" is not of interest, I'm simply saying it doesn't matter in our discussion of "what" is intelligence.

Comments welcomed.

// Jim

Reply to
Zagan

In fact, in later books (beginning with Asimov's own Robots and Empire), the robots derive a "Zeroth Law" above the other three, and the Zeroth Law places *humanity* as paramount. Thus robots can harm individuals if humanity is served.

Of course, the fundamental moral questions remain: who decides what's Good and what's Harmful. Oh, and what is "Humanity"....

Reply to
Programmer Dude

PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.