Re: How Robots Will Steal Your Job

I think the real 'problem' is that we're not going to all wake up one morning, drive to work, and find that we've been replaced by an army of robots. All of these changes will happen gradually, which will make the case for a 'counter-revolution' much harder to present to the undecided fraction of the populace. The fast-food workers mentioned in these postings will not be replaced en masse, but over a period of years or even decades.

This isn't to say that I am against automation; in fact, I welcome it. Thousands upon thousands of assembly-line workers have been slowly replaced by industrial robots, which perform the same jobs better, faster, and cheaper. But over the last 50 years, the US unemployment rate hasn't skyrocketed as naysayers would have us believe. If the shift towards greater automation is gradual, then the number of displaced workers is kept minimal and the labor force self-regulates as workers are redistributed to other sectors.

If at some point in the future our society reaches such a level of sophistication that robots are repairing robots and human labor has become obsolete, then the face of the economy will have changed so much that any predictions we dare to make now about the eventual impact on society are meaningless.

Cheers,

Sina Tootoonian

Reply to
Sina Tootoonian

Is there a robot who can listen to my boss say what he thinks he wants, translate that into VB.net, and build an application the boss will then find workable? Good Luck... Al

Reply to
Al Gerharter

snipped-for-privacy@victoria.tc.ca (Arthur T. Murray) nattered:

Sure it has. 25 years ago I played chess on a computer by talking to it. Today, 25 freakin' years later, the general state of the art isn't much better, outside of some specialty academia projects. It sure hasn't shown up anywhere *I* frequent, and microcontrollers running a gas pump aren't truly robotic, they're just a cheap replacement for relay logic and the dude with the cash. There's not a lot of smarts required from a pump, nor for the dude that can't figure out how to make change.

When it can understand what I _mean_ when I tell it to run out and grab me a Coke, it'll be useful. Until then, it's a freakin' toy for the idle few. Yeah, a $1500 lawnmower is cute, but it's still a toy. Get real. Only Detroit and a few high-tech manufacturing centers in Japan are running any significant robotics: Detroit 'cos of the overpaid bumper-hangers, and Japan 'cos of their love of robot cartoons. Other than that, it's a dead issue to 99% of manufacturing. Robots don't go on strike, and they don't charge $45K plus medical plus retirement for someone that can barely spell, so they're replacing the (brain)deadwood in Motown. Tough, the UAW forced themselves out of a job by being greedy.

Based on the progress over the last 25 years, 2030 is WAY too optimistic. The only people actually USING voice control are the severely handicapped, and that's not out of any real choice. I've yet to see any stunning advances in AI that even come close to the learning ability of a retarded 3 year old, so I won't worry about losing my job to that same retarded 3 year old anytime soon.

< exit soapbox, stage left >
Reply to
Harry Chestwig

You will know there is trouble when they start serving gear oil at your local pub!

Cheers,

Ned

Reply to
Ned Flanders

We may not have to wait until 2020, as regards job loss. An article in the paper last week mentioned that one of the major factors slowing economic recovery in the US after the dot.com.bomb is that the major corporations are increasingly moving their technical jobs out of the US. Just like the manufacturers did 15-20 years ago: maquiladoras for engineers, not just chips.

Reply to
dan michaels

Just an opinion: pattern recognition (of any kind) is basically a very simple task of matching against an internal database. After recognition, whatever action matched may be started, etc ...

"Patternisation" seems to be the problem.
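The "matching against an internal database" step the post describes can be sketched in a few lines. Everything here (the bit-string patterns, the Hamming distance metric, the associated actions) is an illustrative assumption, not a real recognizer:

```python
# Minimal sketch of recognition as nearest-match lookup in an internal
# database of known patterns. Patterns, actions, and the distance metric
# are all toy assumptions for illustration.
def hamming(a, b):
    """Count of positions where two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

database = {
    "1010": "open door",
    "1100": "close door",
}

def recognize(signal):
    # Find the stored pattern closest to the incoming signal ...
    best = min(database, key=lambda p: hamming(p, signal))
    # ... and return the action associated with the matched pattern.
    return database[best]

print(recognize("1011"))  # 'open door'
```

As the post says, the matching itself is simple; the hard part ("patternisation") is building the database of patterns in the first place.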

Reply to
gabor salai

I am aware of no fundamental law that says a system must be incapable of understanding itself. A system based on surprisingly simple concepts can become mindbogglingly complex once the number of components starts growing large. And yet, all you need in order to describe the system are those simple concepts and a very powerful computer to simulate large-scale integration.

Do you have a reference to a proof for that assertion?

Cheers Bent D

Reply to
Bent C Dalager

There's a theory that what made us develop our obscenely large brains was the need to socialize with other humans. There were contenders to the title, of course, but we killed them. Eradicate the humans and give chimps another few million years, and perhaps their desire to make friends will have put them on the moon as well :-)

Cheers Bent D

Reply to
Bent C Dalager

How about the board game "Diplomacy"? :-)

In any game with more than two players, evaluating the correct strategy becomes much, much more complicated than just understanding the game rules. An AI that was really down on its luck would have to be a master of human group behaviour and other fun subjects before it would see much progress.

Cheers Bent D

Reply to
Bent C Dalager

On Mon, 25 Aug 2003 12:25:32 +0200, "gabor salai" wrote or quoted :

We seem to notice patterns in things quite unrelated to each other. You don't even know what the pattern is to start. You just have a universe of examples. We get a lot of ideas by thinking by analogy.

Perhaps it works as a massively parallel problem: what is most similar to anything I already know that I have not noticed already?

-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See

formatting link
for The Java Glossary.

Reply to
Roedy Green

Signal theory has a principle that an array of regions cannot ALWAYS be represented by fewer identified regions in a signal. I think that does it. But you're right, in a way: if the pattern is repetitive, it COULD be represented with less detail. That's the principle of compression, but, as I said, you can't ALWAYS do that.
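The "you can't ALWAYS do that" point is the standard counting argument behind lossless compression, and it can be checked directly. A quick illustrative sketch:

```python
# Counting argument: there are 2^n bit strings of length n, but only
# 2^n - 1 bit strings that are strictly shorter (lengths 0 .. n-1).
# So no lossless scheme can map every length-n input to a shorter output;
# at least one input must stay the same length or grow.
n = 8
inputs = 2 ** n                                   # all length-n strings
shorter_outputs = sum(2 ** k for k in range(n))   # all strictly shorter strings

print(inputs, shorter_outputs)  # 256 255
assert shorter_outputs < inputs  # pigeonhole: compression can't be universal
```

Repetitive (low-entropy) inputs can still compress well; the argument only rules out compressing *everything*.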

But that assumes repetition in the complexity. The human brain uses some things over and over, but the complexity of the design may indeed exceed the complexity of the function of the design, that is, mentality.

There is no proof of any sort until the human brain is totally understood, in some fashion, but it may be possible, given repetition, and if the maximum complexity of thought exceeds the complexity of the largest strategic brain structure.


-Steve

Reply to
R. Steve Walz

Understanding the brain may not require total knowledge of it. By analogy, it is possible for a computer to contain a complete schematic diagram of every circuit it possesses, which is analogous to 'knowing' how it works, without also having to keep a record of the current state of all of those millions of semiconductor components.

Although analogy is always suspect - especially when it comes from me :>

- I think that one is fairly good. At the very least we have the capacity to understand a lot more about the brain than we do currently... after all, at this point we know very little.

I think it is axiomatic that it is impossible for an intelligence to have absolute knowledge of itself. There's a lovely little recursive 'proof' of that :)

Reply to
Corey Murtagh

You can certainly use random development in order to achieve a desirable direction. You just need to do some culling of the random results.

When it comes to evolution, you get random development from random mutations and you have the environment providing the culling of the unsuitable mutations. Overall, this provides progress towards the desirable goal of "succeeding in producing offspring".
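The mutate-and-cull loop just described can be sketched as a toy hill climber. The target string, alphabet, and acceptance rule are illustrative assumptions, not a model of real evolution:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def evolve(target, generations=10_000):
    """Random mutation plus culling: keep a mutation only if it is
    at least as fit as the current candidate."""
    candidate = [random.choice(ALPHABET) for _ in target]

    def fitness(c):
        # Fitness = number of positions matching the target.
        return sum(x == y for x, y in zip(c, target))

    for _ in range(generations):
        mutant = list(candidate)
        mutant[random.randrange(len(mutant))] = random.choice(ALPHABET)
        if fitness(mutant) >= fitness(candidate):  # cull worse mutations
            candidate = mutant
    return "".join(candidate)

print(evolve("hello"))  # converges to 'hello' with overwhelming probability
```

The randomness supplies the variation; the fitness comparison supplies the culling, so the population drifts toward the "desirable goal" without any step ever being directed.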

Consider a naive sort algorithm:

1) Rearrange the data in a random order
2) Are they now in sorted order? If not, go to 1
3) Success

Randomness harnessed to solve a problem.
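The naive sort above (sometimes called bogosort) is short enough to write out in full. A minimal sketch:

```python
import random

def bogosort(data):
    """Shuffle until sorted: step 1 rearranges at random, step 2 checks
    sorted order, step 3 is reaching the return statement."""
    data = list(data)
    while any(a > b for a, b in zip(data, data[1:])):  # step 2: sorted yet?
        random.shuffle(data)                           # step 1: random order
    return data                                        # step 3: success

print(bogosort([3, 1, 2]))  # [1, 2, 3]
```

It does terminate with probability 1, but the expected number of shuffles grows factorially, which is exactly why it is "naive": the randomness is harnessed, just not efficiently.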

Cheers Bent D

Reply to
Bent C Dalager

On Tue, 26 Aug 2003 11:42:55 +1200, Corey Murtagh wrote or quoted :

Some of the most interesting brain research is on vision. We learn how frog and human eyes summarise low level information, finding edges, movement etc. and pass that on.

There may be equivalent summarising techniques for what we consider extremely woolly information, such as interpersonal relations. It may be that they are so close, so obvious, that we can't see them.

What seems to hang computers up is getting nice clear raw data. Humans seem to pick it out of the air just by existing, functioning all the time without complete information.

"We don't know who discovered water, but we're certain it wasn't a fish."

~ John Culkin

-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See

formatting link
for The Java Glossary.

Reply to
Roedy Green

It's impossible to compare it anyway since we don't really have the first clue what we mean by the term. The only thing we can know for sure about intelligence is that whatever it is, humans have it :-)

Cheers Bent D

Reply to
Bent C Dalager

Let me guess, you're posting from the philosophy group right? :>

Attempting to paraphrase your statement in (slightly) less ambiguous terms:

"You can use random input plus selection to produce, or increase progress toward, a desired outcome."

Is that close to what you intended to say?

In that case I have no problem with it. It's a simple 'chaos can produce order' statement. Infinite monkeys and all that :>

What I have a problem with is the sophistry exercised by Roedy - specifically his mixture of two different meanings of the word 'direction.'

Reply to
Corey Murtagh

Vision is a fascinating field. Unfortunately it's something I can only hover on the edge of and go "ooh, ahh" with the rest of the mob. Interesting stuff, but well above my head :)

The day someone figures that one out, I'll be first in line for the answer. Interpersonal relationships are incredibly complex things involving the reactions of someone else. At best you can guess at what's going to happen, and hope you had enough information to guess right.

Fortunately we collect information on people at a subconscious level whenever we're around them. If we're lucky that information can help us guess more accurately :)

I think it's more the case that computers are, for the most part, designed and programmed around the concept of absolutes. Digital devices deal with analog data poorly at the best of times. When we're trying to use them to model something that even we don't really understand, it's pretty much doomed to fail.

Something I heard a few years ago: "If we do manage to create a true artificial intelligence, there's every possibility that we won't recognize it."

IMO, we have too narrow a view of 'intelligence,' and probably will until we encounter an intelligence that is equal to, but significantly different from, our own. But I'm not placing any bets on whether or not we'll ever recognize it as such :>

Reply to
Corey Murtagh

No, a programming one. Or is that "yes, a programming one"?

Yes.

OK. On the face of it, it seems like a disagreement over semantics, but I must admit to jumping in without having read the post of contention, so I can't really comment intelligently :-)

Cheers Bent D

Reply to
Bent C Dalager

I think this is much too optimistic. Even humans have trouble understanding written human language. There's far too much room for poetic improvisation within it.

I think a process in which you first develop a fledgling AI (which has the necessary complexity to become intelligent, but not a lot of knowledge yet) and then parametrize human knowledge in a strict, formal language has a much greater chance of success. The AI could then learn a lot by assimilating Encyclopædia Britannica (in the formal language) without having to battle with all the inaccuracies of human language.

Programmers could learn to communicate in this formal language in order to converse with the computer. The start of human-cyborg relations :-)

Of course, this means you'd have to start by getting a bunch of people to translate the most essential human works into the formal language. This is probably less work than what you'd have to do to enable the AI to understand normal human language.
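A toy illustration of what knowledge in a "strict, formal language" might look like. The subject-relation-object triple representation and the example facts are assumptions for illustration, not a real knowledge-base format:

```python
# Facts encoded as unambiguous subject-relation-object triples, so a
# program can query them without wrestling with natural-language prose.
facts = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Oslo", "capital_of", "Norway"),
]

def query(relation, obj):
    """Return every subject standing in `relation` to `obj`."""
    return [s for (s, r, o) in facts if r == relation and o == obj]

print(query("capital_of", "France"))  # ['Paris']
```

The translation effort the post mentions is exactly the cost of getting human knowledge into such a rigid form; once it's there, the lookup side is trivial.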

Cheers Bent D

Reply to
Bent C Dalager

If we are foresightful enough to ensure that this is a machine-based superior race, chances are they'll let us have the biosphere and go out to explore the universe for their essentials: metals and energy.

They might keep us around for much the same reasons we have museums today, perhaps even seeding us to new biospheres they come across just for the novelty value :-)

"Upload to the new Tau Ceti #3113-B full service server farm, with cutting edge sensor pods to rent and a human zoo planet within 2.3543 clicks"

Cheers Bent D

Reply to
Bent C Dalager
