Re: How Robots Will Steal Your Job

Any serious reading of "Continental" philosophy causes these sorts of questions to seem absurd, and what is interesting is that a PROGRAMMER, a hero computer scientist, thought them ungrammatical, ill-formed and absurd.

"Can ai be conscious?" would be an interesting question if ai met one tenth of its technical promises, indeed, if it worked.

But what it proves is something of which Kant and Hegel were aware. This is that in fact the ability to reason-with-symbols is (1) mechanizable, (2) not definitional of intelligence.

The "reasonintg" that may be definitional is that which subsumes itself (in, for example, Kant's transcendental analytic) in a benigh-rather-than-vicious circle and which participates in a collective and a social process of reasoning: not mathematical calculation at all.

What's amusing is that Hegel "knew" how to frame and answer what happens to be a question that exceeds the topos of natural science: for he called the mechanist explainers of his time, along with various quacks, "phrenologists" who pretended to crudely divide the brain in an operation which presumes that social organization is secondary without argument.

Words, my lord: words, words, words. 17th century philosophers knew all about the structure of the real world which consists, with an apparent irreducibility, of material extension in space and time, and an antiworld consisting of sensory qualia, dreams and reflections.

Reducing the one to the other is folly, and a philosopher's stone. Apprehending instead why it is we are even able to know the existence of the two worlds, and find ways to relate them, is what life is all about.

To believe in AI is dehumanizing for it presents the logical possibility of an impossible reduction. This is used by totalitarians of all stripes (including "libertarians") to excuse dehumanization.

But what is most germane to this ng is that belief in "ai" is positively correlated with inability to do math or program computers. "AI" consists merely of ill-understood software and it is a programmer's responsibility to understand his software. Sure, an expert system or neural net can "learn" and create a data base of useful facts. But this data base's utility depends STRICTLY on continued expert oversight, and legally so in the case of mission critical software.

The human element is necessary, and this happens not to be a scientific assertion at all. Instead it is philosophical and synthetic a priori: necessarily true. Transcendental arguments will be provided on request.

Reply to
Edward G. Nilges

I keep up with general progress in the latter and general lack of progress in the former.

Gerry Quinn

Reply to
Gerry Quinn

Which ones? That is a pretty curious statement. I think I know a few of those algorithms, but not as many as 26 :)

Cheers,

__ Eray

Reply to
Eray Ozkural exa

Off topic I know, but I don't know of any Libertarians who excuse dehumanisation. I certainly don't.

Having seen how dehumanised those dumped in Camp X-ray are, perhaps the real test for AI is, if given the chance, will the AI kill us or enslave us? If it can, but chooses not to, then it is intelligent. If it does wipe us out because it can, then all we have done is create our own stupid replacement, and hence we failed to create real intelligence.

Reply to
Nigel Tolley

Conduct a thought-experiment. Ask yourself whether the "libertarian", commonly understood, is PERMITTED, REQUIRED, or FORBIDDEN to go altruistically to the aid of another.

  • Clearly, libertarianism would not REQUIRE someone to aid another
  • It might seem clear that the libertarian is PERMITTED to do so
  • But in fact the permission CANNOT be one of the rights enumerated in libertarianism. This is because none of the rights enumerated speak to the relations of the libertarian with his fellow human beings except to demand that the libertarian must be "left alone." The permission is a privilege revocable at will and this is seen in the way that "libertarians" tend systematically to discourage social organization for altruistic ends.

In fact the consistent libertarian has a right of revolution against the state, found in Hobbes, while completely lacking any right to come altruistically to the aid of another, for the above reasons.

Libertarians have excessive self esteem but in fact seek no more than the right of revolution and the right to step over homeless people in the street.

Reply to
Edward G. Nilges

The problem here is that you haven't really created machine emotion, you've merely transferred human emotion to a different platform. Without a human to duplicate, the machine would have no emotion.

Just because a printing press can create high quality reproductions of famous paintings, do not mistake the printing machine (nor indeed the camera which captured the image for print) as an artist!

-FISH- >

Reply to
FISH

The question raised is serious, and it is whether intelligence would be "benign" in your sense.

Arthur C. Clarke is highly overrated as a deep thinker on these issues, and his "laws" of robotics are jejune, for the bot has to be told, as an axiom, not to harm a human being. Genuine philosophy has to ask why that must be a separate axiom, and whether or not willingness to "kill or enslave" can be derived from intelligence.

Furthermore, it may be that there is no "libertarian" intermediate zone at all between unwillingness to kill or enslave, and a more generalized altruism INCLUDING active solidarity. Libertarianism is probably untenable because its central axiom encodes all efforts, no matter how grassroots, to act in solidarity (such as union organization) as nascent tyranny which needs to be nipped in the bud, and libertarianism actually reduces to the thought of Hobbes, who gave the individual a right of revolution but no right to come to the aid of another unless in self-interest.

In fact, because of the puzzles generated by any attempt to reduce altruism to self interest or metaphysics, Emmanuel Levinas reduces the possibility of thought itself to ethics and asks if we cannot derive our metaphysics from ethics.

Arthur C. Clarke is a participant in the absurd...nearly insane...proposal to build a space elevator on an Earth where the 3,000-year-old Ross ice shelf is collapsing. What is needed instead is responsible philosophy and not science fiction. You did not raise science fiction issues, I did, and so it's my bad. I wanted to exorcise a form of thinking with a demonic potential.

Reply to
Edward G. Nilges

On 24 Sep 2003 10:34:37 -0700, snipped-for-privacy@yahoo.com (Edward G. Nilges) wrote or quoted :

I think you have A.C. Clarke and Isaac Asimov conflated.

-- Canadian Mind Products, Roedy Green. Coaching, problem solving, economical contract programming. See

formatting link
for The Java Glossary.

Reply to
Roedy Green

------------ The Shit at camp X-ray DESERVE IT!

-Steve

Reply to
R. Steve Walz

Ok... so if you created a hardware or software simulation of a human mind, and that simulation was able to perform all of the intellectual and emotional feats of a real human... would that be artificial intelligence?

Flawed analogy. An artist is something (conventionally, some/one/) which/who creates works of art. Duplication of works of art is not a function of an artist. If I was to create a machine capable of /creating/ (rather than duplicating) works of fine art, then that machine by definition would /be/ an artist.

Similarly, if I create a machine capable of independent intelligence, then that machine by definition would be artificially intelligent. It doesn't matter how I create the machine, does it? Whether it's a copy of an existing human intelligence or a whole new intelligence created from scratch is irrelevant.

Reply to
Corey Murtagh

I know of a few, but Libertarians are just people. Some are more altruistic than others.

An interesting test, but one I'd prefer to have tested in a simulation rather than reality. :)

The laws of robotics come from Asimov, not Clarke, but your point is valid.

A space elevator could have many benefits, both economic and for research.

-- D. Jay Newman

Reply to
D. Jay Newman

After posting my reply to this message, it dawned on me where you (maybe) had misinterpreted the discussion.

When we were talking about duplicating a human mind, we were not implying reverse-engineering the workings of the mind and building a machine version. (I agree, that would indeed qualify as machine intelligence and emotion.) If you read the postings leading up to the message you followed up you'll see that I was responding to a suggestion that a specific living human mind (actually Bill Gates' - although why he was chosen as the candidate is beyond me!) be transferred from 'wetware' to a machine. In my mind this is just Bill's mind running on different hardware - it does not qualify as independent machine intelligence.

Hopefully that will make things a lot clearer for you... ;-)

-FISH- >

Reply to
FISH

Arthur C. Clarke is the guy who came up with the idea of the geo-synchronous satellite (now known simply as a "satellite": the same thing you use for GPS, television, and numerous other things).

You are talking about Asimov's laws of robotics. They are kind of absurd, but do provide a good way to generate fiction. Fiction requires "suspension of disbelief". The idea is to enjoy yourself.

I don't think he was a "participant" -- I think it was simply another one of his bright ideas. It may or may not be practical from an engineering point of view, but sci-fi authors don't have to be restricted by engineering concerns. Who knows, there may be a breakthrough waiting in the wings, and C-60 chains strong enough for a space elevator may just be around the corner (or C-120, whatever.)

Reply to
soft-eng

ACC did not invent the "satellite". He came up with the idea to use a geosynchronous satellite for communications. And GPS uses a set of beeping low-orbit satellites; Doppler shifts in their beeps reveal their relative location.

But, general points taken.

-- Phlip

Reply to
Phlip

Only to the extent that human emotion is a computational process which is implemented entirely in neurones and their "electrical" and biochemical connections. If, as seems not unreasonable, at least some parts of human emotion involve the dynamics of the endocrine system, which in turn involves the dynamics of general human physiology and biochemistry, then simply transferring the "contents" of a human brain won't give you human emotion. And if, as seems not implausible, emotions are a powerful cognitive filter which stops thinking from becoming both computationally intractable and silly, then doing this "bottled brain" stunt might not give you a usefully functional mind at all.

Of course, if what you wanted this brain to do was to run Micros**t, that might not matter.

-- Chris Malcolm snipped-for-privacy@infirmatics.ed.ac.uk +44 (0)131 651 3445 DoD #205 IPAB, Informatics, JCMB, King's Buildings, Edinburgh, EH9 3JZ, UK


Reply to
Chris Malcolm

On Thu, 25 Sep 2003 14:36:05 GMT, "Phlip" wrote or quoted :

That does not make sense. Doppler shift is a way of measuring velocity and if the satellites are geosynchronous, they are not moving relative to the ground stations.

I think they work by comparing time signals from a number of satellites. The amount they differ is a measure of how long it took the signal to travel to the ground station. The GPS unit can then work out the relative distance to the various satellites and triangulate to get a fix.

The civilian system encrypted the low order bits of the time, but now I believe that scheme has been made public, and the military uses a new more accurate system. They would just shut down the civilian one entirely in a war.
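
Roedy's description of the time-signal scheme is essentially how it works: the receiver compares timestamps from several satellites and solves for its own position plus its own clock error, which is why four satellites rather than three are needed. A minimal numerical sketch of that solve, using invented satellite positions and a plain Gauss-Newton iteration (this is not the actual GPS signal format or any real ephemeris data):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def _solve4(A, g):
    """Solve the 4x4 linear system A d = g by Gaussian elimination."""
    M = [row[:] + [g[i]] for i, row in enumerate(A)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 4):
            f = M[r][col] / M[col][col]
            for c in range(col, 5):
                M[r][c] -= f * M[col][c]
    d = [0.0] * 4
    for r in range(3, -1, -1):
        d[r] = (M[r][4] - sum(M[r][c] * d[c] for c in range(r + 1, 4))) / M[r][r]
    return d

def gps_fix(sats, delays, iters=20):
    """Estimate receiver position (x, y, z) in metres and clock bias in
    seconds from satellite positions and measured signal travel times.
    Solves the pseudorange equations  c*t_i = |sat_i - p| + bb  by
    Gauss-Newton, carrying the clock bias bb in metres for conditioning."""
    x = y = z = bb = 0.0
    for _ in range(iters):
        J, r = [], []
        for (sx, sy, sz), t in zip(sats, delays):
            dx, dy, dz = x - sx, y - sy, z - sz
            rho = math.sqrt(dx * dx + dy * dy + dz * dz)
            r.append(C * t - (rho + bb))            # measured minus modelled
            J.append([dx / rho, dy / rho, dz / rho, 1.0])
        # normal equations (J^T J) d = J^T r, then step the parameters
        A = [[sum(Jk[i] * Jk[j] for Jk in J) for j in range(4)] for i in range(4)]
        g = [sum(Jk[i] * rk for Jk, rk in zip(J, r)) for i in range(4)]
        d = _solve4(A, g)
        x, y, z, bb = x + d[0], y + d[1], z + d[2], bb + d[3]
    return x, y, z, bb / C
```

With four satellites in general position the unknown clock error simply becomes a fourth coordinate in the solve, which is the cleanest way to see why "comparing time signals" alone is enough for a fix.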


Reply to
Roedy Green

Thanks for the correction.

Unfortunately, this mere entertainment drives policy when policymakers are "above" engineering literacy and critical thought. Ronald Reagan, in the 1930s, was a big fan of Amazing Stories, and his attitude of technical possibility (unleavened by mathematical sophistication or critical thought) resulted in the death of Christa McAuliffe, for it was on the Gipper's watch that technical criticism (known as "pushback") became, at NASA, confused with goofing off.

Who knows, the construction of this idiocy may discover new natural laws such as "don't construct space towers lest you set the Van Allen belt on fire and destroy all life on Earth, Bozo." At a minimum it diverts resources away from needed environmental cleanup INCLUDING software correctness and what I call "forensic" software (discovery of hidden law, usually to the disadvantage of the least well off, in software.)

Reply to
Edward G. Nilges

Yes, I do. I used to read science fiction: I do not read "classic" science fiction any more. Thanks for the correction.

Reply to
Edward G. Nilges

On Thu, 25 Sep 2003 16:56:29 +0000 (UTC), snipped-for-privacy@holyrood.ed.ac.uk (Chris Malcolm) wrote or quoted :

What are emotions?

They are a way of categorising a situation onto a finite number of general responses.

They are also affective in that once you have made the computation, your body reacts -- blood pressure goes up or down, adrenalin goes up or down, tears appear or dry up, body posture changes, hair stands up or lies down, and you feel motivated to fight, flee, sleep, mate, eat, or skulk away.

The computational part of emotion is hard. The affective part may be quite a bit easier -- mostly handled by a subcutaneous adrenalin pump.

Back in the 1960s Delgado was able to stop a charging bull with brain implants.


Reply to
Roedy Green

First use of the geo-synchronous concept. Once he did that, using it for other purposes seems a little obvious.

That's very interesting. I would have guessed the satellites would have IDs based on their location -- I guess using a Doppler shift is less prone to drift etc. But how do you get an actual absolute position (on earth) out of a Doppler shift? Given that the basic information you get is just that something is moving so fast with respect to you.
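
For what it's worth, the pre-GPS Transit system did recover an absolute fix from nothing but the Doppler curve: the received shift falls through zero at the moment of closest approach (which fixes your along-track position, since the satellite's track is known), and the steepness of that zero crossing tells you how far away the closest approach was. A toy two-dimensional sketch with invented numbers (flat Earth, straight-line satellite track; the real system works on a rotating sphere and combines several passes):

```python
import math

F0 = 400e6   # assumed transmit frequency, Hz (Transit used roughly 400 MHz)
C = 3.0e8    # speed of light, m/s

def doppler_shift(t, x_recv, y_recv=0.0, v=7.3e3, h=1.1e6):
    """Received Doppler shift for a satellite flying along the x-axis at
    speed v and altitude h, crossing x=0 at t=0; receiver at (x_recv, y_recv)."""
    dx = v * t - x_recv
    rng = math.sqrt(dx * dx + y_recv * y_recv + h * h)
    range_rate = dx * v / rng           # d|range|/dt
    return -F0 * range_rate / C

def locate(samples, v=7.3e3, h=1.1e6):
    """Recover the receiver's along-track position and (unsigned) cross-track
    offset from a sampled Doppler curve [(t, shift), ...].  The shift crosses
    zero at closest approach, and its slope there is -F0*v^2/(C*d), where d
    is the slant distance at closest approach."""
    for (t1, f1), (t2, f2) in zip(samples, samples[1:]):
        if f1 > 0.0 >= f2:                         # the single zero crossing
            t0 = t1 + (t2 - t1) * f1 / (f1 - f2)   # time of closest approach
            slope = (f2 - f1) / (t2 - t1)
            d = F0 * v * v / (C * -slope)          # slant distance
            return v * t0, math.sqrt(max(d * d - h * h, 0.0))
    return None
```

So the raw measurement really is just a relative velocity, but watching how that velocity evolves over a whole pass pins down a position. The sign of the cross-track offset stays ambiguous from one pass, which is one reason Transit needed multiple passes for an unambiguous fix.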

Reply to
soft-eng
