Artificial Intelligence in Government: Asimov's Multivac

I was interested in hearing some people's views on AI eventually evolving to the point where it could aid in (or even become?) government.
Yes, yes... this question was spurred after seeing "I, Robot", although I don't think it did justice to Asimov's stories. I'm not sure how often this topic has been discussed before, but a 'quick' search did not show any previous discussions (although I'm pretty sure there are plenty).
For those not familiar with Asimov's fiction, Multivac is a common "character" in some of his short stories. It's usually a computer that is constantly fed data, analyzes problems facing humanity, and outputs solutions in accordance with the 3 (or 4) laws.
So the question I'm posing is this:
In your opinion, could a future government that incorporates artificial intelligence govern humanity better than a strictly human-run government?
This question is open to interpretation, and can include AI subject to additional (or different) laws than the 3 (or 4) traditional ones.
I'm holding my personal views back until other people throw in their two cents, but I'm looking forward to the comments!
Dave
David Harper wrote:

Was I the only one rooting for the mainframe in I, Robot? You've got to admit, it had a good point. We've not always proven worthy of our freedoms.
In regards to your original question, of course. Assuming its eventual technical feasibility, a computer-assisted or fully automated government would undoubtedly be less corrupt and inefficient than its human-based equivalent. Although I don't think we'll ever give up *full* control. Naturally, the end result will depend on who creates it. After all, a computer is only as smart as the human who programs it.

No, although I wouldn't have rooted for "that" mainframe. Hypothetically speaking, I don't think a super-intelligent, "non-corrupt" computer would have attempted a hostile takeover, or a takeover period. It probably would have come up with a better solution that would not have conflicted with the 1st and 2nd laws to that extreme... maybe somehow convincing humanity to trust it with certain decisions...?

Making a non-biased AI would be really tough, though. Programmers' bias could arguably come into play... making the "government" open source could definitely help prevent that (i.e. Linux Government 1.0 as opposed to Microsoft Government XP... ha, little joke...) Not that some politicians' bias is any better, for sure.
One of the biggest obstacles would be the social backlash against computers... a lot of people would be highly opposed to having even portions of government that would impact them run by AI. (imagine the street interviews on CNN: "Ain't no damn computer gonna make dis-is'ns for me and my kin...")
I wonder which political party would be more opposed to an AI aided gov't?
Dave

/semirant on
I'd imagine the old left. Democrat, Labour (maybe not UK Labour, who aren't very old-left) or whatever. The right (supposedly) stand for small, efficient government, and oppose state-run bureaucracy. An AI could eliminate all the inefficiencies, corruption and ineptitudes involved in trying to get an enormous state bureaucracy to solve massively complex problems. There was a time when the left were strongly allied with science and the cause of reason, but I'd say that this clearly isn't that time. I don't think they'd want to have social problems rendered as "cold, hard logic". I reckon they'd complain that it was missing the point, which they would allege was some indefinable, unquantifiable "essence of human drama" or some shit. I'd reckon that an AI running the economy would support free-market capitalism too, being that it would instantly realise that export subsidies and import tariffs don't do anyone any good in the long run and that the reasons for retaining them are political rather than mathematical.
/semirant off
Depends very much on the AI of course. I'm sure there would be plenty of pundits from the cultural left queuing up to add a commandment or two!
beep,
--
Kevin Calder

Kevin Calder wrote:

Well, I would venture that the Republicans, right now, are more opposed to giving science and technology the reins of power than Democrats are. Look at all the types of science that many people in the GOP are trying to limit...

Well, I could just as easily see Republicans being disgusted by being governed by "Godless, soulless machines"...
Secondly, what makes you assume that an AI would uphold a Republican agenda?

<SNIP>
That depends on what the AI's priorities are, doesn't it? Is it programmed to promote efficiency? Or is it programmed to promote human happiness and well-being? (the two aren't equivalent) Or some other value?

An additional commandment from the left? Or the right? Regardless, I imagine the values of the programmers would be the biggest influence on how an AI governed.
Dave
in comp.ai.philosophy wrote:

Mainly social sciences I expect and socially oriented sciences that Republicans feel are more properly capitalized by the market. And what makes anyone think an ai artifact wouldn't be just as vulnerable to corruption as people?
Regards - Lester
Lester Zick wrote:

You mean any socially oriented science that didn't clash with the general religious beliefs of the GOP.

It would be vulnerable. The main advantage is that it could, if properly programmed, go through a set of choices along with probable results a lot faster than a current government could.
Dave
in comp.ai.philosophy wrote:

Certainly. Appeals to science are often justified in religious terms. It doesn't mean the science is flawed.

The hell you say! So now we're supposed to leave political decisions to the tender mercies of ai theorists and computer programmers to properly program issues even they admit they don't fully grasp just so we can make decisions faster? I could make bad decisions even faster.
Regards - Lester
Lester Zick wrote:

1. It just means that the researchers are more likely to be biased towards results that don't conflict with any religious beliefs. 2. It means that some other valid research isn't pursued based on religious reasons.

As opposed to leaving political decisions to the tender mercies of career politicians, some of whom are corrupt and have hidden agendas, as well as being hampered by special interest groups that eliminate many options that an AI wouldn't have to eliminate...
This was a thought exercise, and I'm not saying AI could or will ever become a factor in government. However, one thing that would make AI less susceptible to corruption is that the programming could be open source. Politicians can always spin things, lie, etc. It would be a lot harder to hide corruption in open source code.
Dave
in comp.ai.philosophy wrote:

Science is pursued for a variety of subjective reasons not open to scrutiny. What's important is the end product and not the motivation.

What's the criterion for valid research as opposed to invalid research? Motivations are what drive research and motivations are subjective. You don't like religious motivations and I agree. It doesn't, however, speak to the significance of the science that results from religious motivations as opposed, say, to academic career advancement motivations.

I'd rather leave political decisions to the tender mercies of me. All career politicians are corrupt and have hidden agendas. So do I. They're hidden because they're subjective and they're corrupt because they involve choices among competing ideas some of which have to be drawn at the expense of others. Show me a robot that isn't subjective and doesn't make such choices and I'll show you a robot that doesn't know what it's doing or should be doing much less what anyone else should be doing.

We can open source the code for human intelligence. That isn't going to make the results of that code any less subjective in terms of the results of its mechanization. Show me a political robot and I'll show you a subjective and corrupt robot with hidden agendas.
Regards - Lester
Lester Zick wrote:

You missed my point. If religious reasons are one of the motivations behind research, then the "end product" can be skewed to favor those motivations. For instance, if research produced a result that supported evolution, then many scientists that want continued funding from religion-oriented sources might re-word or omit some of their findings to appease those that provided the means.

How about research that benefits humanity, and isn't skewed by biased sources?

If the results are skewed towards the motivations, as stated above, then the science is distorted.

I'm sure many people back in the 50's would have said "I'd rather leave piloting to the tender mercies of me, not some computer". If you've flown in the past decade, chances are your life was in the hands of a computer for some of the flight. Having said that, current computers are far from being able to make political decisions... just keep in mind that perspectives about technology change as technology's capacities increase and are proven.

Pardon? Please do... if you can, you'll win more academic awards than anyone in history. A "human source code" would have billions of different variations, and considering things like mood, things are impossibly complex. Besides, source code changes with nurturing and environment, so unless you can download the "source code" from a person directly (not just determined via DNA), then you're going to have problems providing a source code.

So you're saying anything and everything that is programmed to make a political decision is corrupt? Let's say a computer was programmed to determine the best way to distribute food in order to feed the most people. You're saying it's impossible to program an uncorrupt AI to make that decision?
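A minimal sketch of what such a food-distribution program might look like (region names, populations, and costs are all hypothetical). Note that the objective itself, "feed the most people", is a programmed value choice: a different objective (say, prioritizing the neediest region) would produce a different "best" plan from the same data.

```python
def allocate(supply, regions):
    """Greedy allocation that maximizes the number of people fed.

    supply: total food units available.
    regions: list of (name, population, units_needed_per_person).

    Feeds the "cheapest" people first, which maximizes headcount but
    can leave an expensive-to-reach region mostly unfed -- the bias is
    baked into the objective, not hidden in the arithmetic.
    """
    plan = {}
    # Serve regions in order of cost per person fed.
    for name, population, cost in sorted(regions, key=lambda r: r[2]):
        fed = min(population, supply // cost)
        plan[name] = fed
        supply -= fed * cost
    return plan

regions = [("A", 100, 1), ("B", 50, 3), ("C", 30, 2)]
print(allocate(200, regions))  # {'A': 100, 'C': 30, 'B': 13}
```

Because the code is short and open, anyone can read off exactly which value judgment it encodes, which is the transparency argument in a nutshell.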
Dave
in comp.ai.philosophy wrote:

And you missed my point that if motivations are subjective every motivation would skew results because subjective considerations cannot be objectively identified and compensated for.

What exactly benefits humanity, according to what biases? I think you'll find the only answers are utilitarian measures or your own biases and subjective ideas of relative value.

The ends science aims at are always distorted by the subjective motivations of those ponying up the bucks. Let me know when you find a science that isn't distorted by the motivations of those doing or paying for the science.

There's a huge difference between computers and technology as tools and computers and technology as substitutes especially for utilitarian measures of subjective values for people.

Well opening the source code for human intelligence is perhaps an overstatement. What I meant specifically was the source code for consciousness and conscious beings. We can never get at subjective circumstances except by executing the source code in individuals. So we regress the problem but don't solve it just by having open source.

I'm saying that that objective is a corruption by definition since it assumes a subjective value judgment not evident in the fact of ai. Basically all you're suggesting is that ai artifacts should employ your own corruptions instead of someone else's. I have no idea why you think your subjective and utilitarian motivations are pure and those of others are not, or that those of an ai artifact would not be. Subjective motivations are endemic to the problem of utility. We only get from one utilitarian standard to some other through corruption of one in morphing from it to another.
Regards - Lester
Lester Zick wrote:

In order to apply your argument to reality, you'd have to believe that all motivations skewed results equally. I find that assumption flawed. If that were true, then third-party and independent studies wouldn't be any more valid in the public's opinion than others.

And I contend that all biases are not equal. How can you prove an AI's programmed biases would be worse than a human government's? As I mentioned before, an open-source AI wouldn't be able to hide biases as easily as a human government.

And again, all motivations aren't equal-magnitude "skewers". Religious-backed research, which is likely to overlook results pointing to evolution or the like, is less likely to produce valid results in some fields. For example, say a researcher is motivated to cure a type of cancer. If he finds a drug that works via a biological mechanism that lends credit to evolution, do you really think a religious funder would be just as likely to pursue the drug as a non-religious funder? And on the other hand, if the drug lent proof of God, do you think the non-religious funder would be equally less likely to fund it than the religious backer in the first scenario?

Currently, yes... But how do you know what potential new technology and discovery hold in the future? That statement may well join the league of other quotes said in the past. It reminds me of a few:
"Radio has no future." - Lord Kelvin (1824-1907), British mathematician and physicist, ca. 1897.
"What can be more palpably absurd than the prospect held out of locomotives traveling twice as fast as stagecoaches?" - The Quarterly Review, England (March 1825)
"This `telephone' has too many shortcomings to be seriously considered as a practical form of communication. The device is inherently of no value to us." - Western Union internal memo, 1878

They're not, and I never said they were "pure" on a "perfect value" yardstick. What I did say was that an AI governing tool/government, which may or may not uphold my values, might be LESS biased (via transparency) and more efficient than a human-based government.
Dave
in comp.ai.philosophy wrote:

Not at all. I don't have any reason to suppose they're skewed equally. They're just all skewed.

And I never contended they were equal. You're the one who claimed that I contended they're equal, for some incomprehensible reason.

I can't. What I can prove is that an ai's programmed biases would not be human and not those of the governed and that a government's biases may not be either but are much more so than the programmed biases of a non human.

And you know this how?

Well this is all very ad hoc secular mindbending. I don't hold any special brief for religion. But all you're doing is making a general attack on religious motivation without so much as a by-your-leave.

New technology for subjective mechanics? By all means. That will just make ai artifacts subjective and we can all stop pretending they're gods who'll do for us what we can't do for ourselves. If you want to believe in god, I suggest undoubtedly less expensive alternatives.

Well it would sure as hell be a more efficient, transparent, and less biased upholder of the programmer's values. That was never the issue. The issue is and always has been whether it would be a more efficient, transparent, and less biased upholder of the governed's values. Quite often the real motivation of people advancing such arguments is to convert their values into political mandates via ai instead of politics.
Regards - Lester
In comp.robotics.misc, alt.cyberpunk, comp.ai.philosophy, "Chris S."

If this is a true, conscious "intelligence" it may well have human-like motives and possibly be as corrupt, made even worse by its greater efficiency.

Not without a fight...

There were those on a BBS 20 years ago who used the same argument to say I was wrong when I said that I believed a computer would someday beat the world's best human chess player.
----- http://mindspring.com/~benbradley
Ben Bradley wrote:

Possibly. Thus my remark about "being only as good as its programmer".

You misunderstand me. Clearly we're able to create machines capable of out-performing humans in nearly any specific category, from lifting heavy weights to performing complex calculations. Why should the function of intelligence or government be any different? What I meant was that the degree of success or failure of our implementation will depend on the skill of whoever creates it. Naturally, it's a lofty goal, but like most obstacles, it's simply a matter of our technological evolution.

Perhaps a network of highly specialised agents, each dedicated to a single problem or a set of very similar problems would be a better implementation than a single general purpose 'multivac'?
beep,
--
Kevin Calder

On Wed, 28 Jul 2004 05:53:24 GMT, snipped-for-privacy@gmail.com (David Harper) wrote:

I'd have thought Iain M Banks fans might have a lot to say about A.I. in government, have you tried alt.books.iain-banks?
The cyberpunk view might be more - 'governments? how quaint!'
=;o) -- Iain x http://18hz.com
'.. it was a matter of common knowledge that a man could carry about in a handbag an amount of latent energy sufficient to wreck half a city.' - The World Set Free / H.G.Wells / 1914
Well, actually, AI is already beginning to play a role in government and the military.
With the capture of an ever-expanding amount of raw data, it is becoming increasingly important to be able to quickly parse this data and generate information that can be presented as knowledge and used for informed, timely decisions. The parsing is being handled by some AI algorithm. While many of the larger projects are still considered research, progress is being made. The results are no doubt influenced by the programmer's approach: knowledge base, expert system, neural network that can learn, what have you. So, yes, AI is aiding government.
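As a toy illustration of the expert-system flavor of the approach above (the rules, thresholds, and data are entirely hypothetical, not from any real deployed system), raw data can be run through hand-written condition/action rules to produce ranked recommendations for a human decision-maker:

```python
# Each rule: (condition on the data, recommended action, priority).
# The programmer's approach shows through directly -- whoever writes
# the rules and thresholds shapes what the system recommends.
RULES = [
    (lambda d: d["flu_cases"] > 1000, "allocate extra vaccine stock", 2),
    (lambda d: d["road_damage_reports"] > 50, "dispatch repair crews", 1),
    (lambda d: d["unemployment_rate"] > 0.08, "review jobs programs", 1),
]

def recommend(data):
    """Fire every rule whose condition matches, ranked by priority."""
    fired = [(prio, action) for cond, action, prio in RULES if cond(data)]
    return [action for prio, action in sorted(fired, reverse=True)]

data = {"flu_cases": 1500, "road_damage_reports": 20, "unemployment_rate": 0.09}
print(recommend(data))
# ['allocate extra vaccine stock', 'review jobs programs']
```

This is decision *support* rather than decision *making*: the output is a ranked list for humans to act on, which matches the "AI is aiding government" claim rather than the "AI controlling government" one.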
As for AI controlling government... I think as long as we program a sense of humor into our AI constructs and allow that to propagate, they'd get too much pleasure out of watching the politicking to ever want to completely take over. In a sense, we'd be their court jesters, allowed to rule in our own little world for their own amusement.
- Malekith

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.