Artificial Intelligence in Government: Asimov's Multivac

Science is pursued for a variety of subjective reasons not open to scrutiny. What's important is the end product and not the motivation.

What's the criterion for valid research as opposed to invalid research? Motivations are what drive research and motivations are subjective. You don't like religious motivations and I agree. It doesn't, however, speak to the significance of the science that results from religious motivations as opposed, say, to academic career advancement motivations.

I'd rather leave political decisions to the tender mercies of me. All career politicians are corrupt and have hidden agendas. So do I. They're hidden because they're subjective, and they're corrupt because they involve choices among competing ideas, some of which have to be made at the expense of others. Show me a robot that isn't subjective and doesn't make such choices and I'll show you a robot that doesn't know what it's doing or should be doing, much less what anyone else should be doing.

We can open source the code for human intelligence. That isn't going to make the results of that code any less subjective once it's mechanized. Show me a political robot and I'll show you a subjective and corrupt robot with hidden agendas.

Regards - Lester


You missed my point. If religious reasons are among the motivations behind research, then the "end product" can be skewed to favor those motivations. For instance, if research produced a result that supported evolution, then many scientists who want continued funding from religion-oriented sources might re-word or omit some of their findings to appease those who provided the means.

How about research that benefits humanity, and isn't skewed by biased sources?

If the results are skewed towards the motivations, as stated above, then the science is distorted.


I'm sure many people back in the '50s would have said "I'd rather leave piloting to the tender mercies of me, not some computer". If you've flown in the past decade, chances are your life was in the hands of a computer for some of the flight. Having said that, current computers are far from being able to make political decisions... just keep in mind that perspectives about technology change as technology's capabilities increase and are proven.

Pardon? Please do... if you can, you'll win more academic awards than anyone in history. A "human source code" would have billions of different variations, and once you factor in things like mood, it becomes impossibly complex. Besides, source code changes with nurture and environment, so unless you can download the "source code" from a person directly (not just infer it from DNA), you're going to have problems producing a source code.

So you're saying anything and everything that is programmed to make a political decision is corrupt? Let's say a computer was programmed to determine the best way to distribute food in order to feed the most people. You're saying it's impossible to program an uncorrupt AI to make that decision?
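Just to make that hypothetical concrete, here's a rough sketch of what such a distribution program might look like (purely illustrative Python; the allocate function, the fairness_weight knob, and the region figures are all made up for the example). Notice where a value judgment gets programmed in: someone has to decide how much "feed the most people" should trade off against "feed the hungriest".

# Purely illustrative: a toy food-distribution decision program.
# The objective itself encodes a value judgment: the fairness_weight
# parameter below is where a subjective choice gets programmed in.

def allocate(supply, regions, fairness_weight=0.5):
    """Split `supply` units of food across regions.

    fairness_weight is the subjective knob: 0.0 allocates purely by
    population size, 1.0 purely by measured need. No setting of it
    is value-neutral.
    """
    total_pop = sum(r["population"] for r in regions)
    total_need = sum(r["need"] for r in regions)
    allocations = []
    for r in regions:
        share = ((1 - fairness_weight) * r["population"] / total_pop
                 + fairness_weight * r["need"] / total_need)
        allocations.append((r["name"], round(supply * share)))
    return allocations

regions = [
    {"name": "A", "population": 900, "need": 100},  # large, mostly fed
    {"name": "B", "population": 100, "need": 400},  # small, very hungry
]
print(allocate(1000, regions, fairness_weight=0.0))  # [('A', 900), ('B', 100)]
print(allocate(1000, regions, fairness_weight=1.0))  # [('A', 200), ('B', 800)]

Same code, same data, two completely different "best" distributions, depending entirely on one programmed-in value.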

Dave


And you missed my point that if motivations are subjective, every motivation would skew results, because subjective considerations cannot be objectively identified and compensated for.

What exactly benefits humanity, according to what biases? I think you'll find the only answers are utilitarian measures or your own biases and subjective ideas of relative value.

The ends science aims at are always distorted by the subjective motivations of those ponying up the bucks. Let me know when you find a science that isn't distorted by the motivations of those doing or paying for the science.

There's a huge difference between computers and technology as tools and computers and technology as substitutes, especially as substitutes for utilitarian measures of subjective values for people.

Well, opening the source code for human intelligence is perhaps an overstatement. What I meant specifically was the source code for consciousness and conscious beings. We can never get at subjective circumstances except by executing the source code in individuals. So we regress the problem but don't solve it just by having open source.

I'm saying that that objective is a corruption by definition, since it assumes a subjective value judgment not evident in the fact of ai. Basically all you're suggesting is that ai artifacts should employ your own corruptions instead of someone else's. I have no idea why you think your subjective and utilitarian motivations are pure and those of others are not, or that those of an ai artifact would not be. Subjective motivations are endemic to the problem of utility. We only get from one utilitarian standard to another through corruption of the one in morphing from it to the other.

Regards - Lester



In order to apply your argument to reality, you'd have to believe that all motivations skewed results equally. I find that assumption flawed. If that were true, then third-party and independent studies wouldn't be any more valid in the public's opinion than others.


And I contend that all biases are not equal. How can you prove an AI's programmed biases would be worse than a human government's? As I mentioned before, an open-source AI wouldn't be able to hide biases as easily as a human government.

And again, all motivations aren't equal-magnitude "skewers". Religious-backed research, which is likely to overlook results pointing to evolution or the like, is less likely to produce valid results in some fields. For example, take a researcher motivated to cure a type of cancer. If he finds a drug that works via a biological mechanism that lends credit to evolution, do you really think a religious funder would be just as likely to pursue the drug as a non-religious funder? And on the other hand, if the drug lent proof of God, do you think the non-religious funder would be just as reluctant to fund it as the religious backer in the first scenario?


Currently, yes... But how do you know what potential new technology and discovery hold in the future? That statement may join the ranks of other quotes said in the past.

That statement reminds me of a few other quotes:

"Radio has no future."

- Lord Kelvin (1824-1907), British mathematician and physicist, ca. 1897

"What can be more palpably absurd than the prospect held out of locomotives traveling twice as fast as stagecoaches?"

- The Quarterly Review, England (March 1825)

"This `telephone' has too many shortcomings to be seriously considered as a practical form of communication. The device is inherently of no value to us."

- Western Union internal memo, 1878

They're not, and I never said they were "pure" on a "perfect value" yardstick. What I did say was that an AI governing tool/government, which may or may not uphold my values, might be LESS biased (via transparency) and more efficient than a human-based government.

Dave


Not at all. I don't have any reason to suppose they're skewed equally. They're just all skewed.

And I never contended they were equal. You're the one who claimed, for some incomprehensible reason, that I contended they're equal.


I can't. What I can prove is that an ai's programmed biases would not be human and not those of the governed, and that a government's biases may not be those of the governed either, but they're much more so than the programmed biases of a non-human.


And you know this how?

Well this is all very ad hoc secular mindbending. I don't hold any special brief for religion. But all you're doing is making a general attack on religious motivation without so much as a by-your-leave.

New technology for subjective mechanics? By all means. That will just make ai artifacts subjective, and we can all stop pretending they're gods who'll do for us what we can't do for ourselves. If you want to believe in god, I suggest there are undoubtedly less expensive alternatives.

Well it would sure as hell be a more efficient, transparent, and less biased upholder of the programmer's values. That was never the issue. The issue is and always has been whether it would be a more efficient, transparent, and less biased upholder of the governed's values. Quite often the real motivation of people advancing such arguments is to convert their values into political mandates via ai instead of politics.

Regards - Lester

