Artificial Intelligence in Government: Asimov's Multivac

I was interested in hearing some people's views on AI eventually evolving to the point where it could aid in (or even become?) government. Yes, yes... this question was spurred after seeing "I, Robot", although I don't think it did justice to Asimov's stories. I'm not sure how often this topic has been discussed before, but a quick search did not turn up any previous discussions (although I'm pretty sure there are plenty).

For those not familiar with Asimov's fiction, Multivac is a common "character" in some of his short stories. It's usually a computer that is constantly fed data, analyzes problems facing humanity, and outputs solutions in accordance with the 3 (or 4) laws.

So the question I'm posing is this:

In your opinion, could a future government that incorporates artificial intelligence govern humanity better than a strictly human-run government?

This question is open to interpretation, and can include AI subject to additional (or different) laws than the 3 (or 4) traditional ones.

I'm holding my personal views back until other people throw in their two cents, but I'm looking forward to the comments!

Dave

Reply to
David Harper

Was I the only one rooting for the mainframe in "I, Robot"? You've got to admit, it had a good point. We've not always proven worthy of our freedoms.

In regards to your original question: of course. Assuming its eventual technical feasibility, a computer-assisted or fully automated government would undoubtedly be less corrupt and inefficient than its human-based equivalent, although I don't think we'll ever give up *full* control. Naturally, the end result will depend on who creates it. After all, a computer is only as smart as the human who programs it.

Reply to
Chris S.

No, although I wouldn't have rooted for *that* mainframe. Hypothetically speaking, I don't think a super-intelligent, non-corrupt computer would have attempted a hostile takeover, or a takeover, period. It probably would have come up with a better solution that didn't conflict with the 1st and 2nd Laws to that extreme... maybe somehow convincing humanity to trust it with certain decisions?

Making a non-biased AI would be really tough, though. Programmer bias could arguably come into play... making the "government" open source could definitely help prevent that (i.e. Linux Government 1.0 as opposed to Microsoft Government XP... ha, little joke). Not that some politicians' bias is any better, for sure.

One of the biggest obstacles would be the social backlash against computers... a lot of people would be highly opposed to having even portions of government that impact them run by AI. (Imagine the street interviews on CNN: "Ain't no damn computer gonna make dis-is'ns for me and my kin...")

I wonder which political party would be more opposed to an AI aided gov't?

Dave

Reply to
David Harper

If this is a true, conscious "intelligence" it may well have human-like motives and possibly be as corrupt, made even worse by its greater efficiency.

Not without a fight...

There were those on a BBS 20 years ago who used the same argument to say I was wrong when I said that I believed a computer would someday beat the world's best human chess player.

-----

formatting link

Reply to
Ben Bradley

Possibly. Thus my remark about "being only as good as its programmer".

You misunderstand me. Clearly we're able to create machines capable of outperforming humans in nearly any specific category, from lifting heavy weights to performing complex calculations. Why should the function of intelligence or government be any different? What I meant was that the degree of success or failure of our implementation will depend on the skill of whoever creates it. Naturally, it's a lofty goal, but like most obstacles, it's simply a matter of our technological evolution.

Reply to
Chris S.

I'd have thought Iain M. Banks fans might have a lot to say about AI in government; have you tried alt.books.iain-banks?

The cyberpunk view might be more - 'governments? how quaint!'

=;o)

-- Iain x

formatting link

'.. it was a matter of common knowledge that a man could carry about in a handbag an amount of latent energy sufficient to wreck half a city.' - The World Set Free / H.G.Wells / 1914

Reply to
18hz

Well, actually, AI is already beginning to play a role in government and the military.

With the capture of an ever-expanding amount of raw data, it is becoming increasingly important to be able to quickly parse this data and generate information that can be presented as knowledge and used for informed, timely decisions. The parsing is being handled by some AI algorithm. While many of the larger projects are still considered research, progress is being made. The results are no doubt influenced by the programmer's approach: knowledge base, expert system, neural network that can learn, what have you. So, yes, AI is aiding government.
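To make that concrete, here's a toy sketch of the "knowledge base / expert system" approach: raw reports go in, get parsed into facts, and hand-written rules turn them into recommendations. Everything here (the report format, the topics, the thresholds) is my own made-up illustration, nothing like a real government system.

```python
# Toy rule-based decision-support sketch: raw data in, recommendations out.
# All topics, thresholds, and actions are invented for illustration.

def parse_reports(raw_reports):
    """Turn raw "topic: severity" text lines into (topic, severity) facts."""
    facts = []
    for line in raw_reports:
        topic, _, severity = line.partition(":")
        facts.append((topic.strip(), int(severity)))
    return facts

# Knowledge base: if a topic's severity crosses its threshold, recommend an action.
RULES = {
    "flooding": (5, "pre-position emergency supplies"),
    "traffic":  (7, "re-time the signal networks"),
}

def advise(facts):
    """Fire every rule whose threshold is met and collect its recommendation."""
    recs = []
    for topic, severity in facts:
        if topic in RULES and severity >= RULES[topic][0]:
            recs.append(RULES[topic][1])
    return recs

reports = ["flooding: 6", "traffic: 3"]
print(advise(parse_reports(reports)))  # only the flooding rule fires
```

The point being: the "intelligence" is exactly as good as the rules somebody wrote down, which is the programmer's-approach caveat above in miniature.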

As for AI controlling government... I think as long as we program a sense of humor into our AI constructs and allow it to propagate, they'd get too much pleasure out of watching the politicking to ever want to completely take over. In a sense, we'd be their court jesters, allowed to rule in our own little world for their amusement.

- Malekith

Reply to
malekith

I'll offer some speculative remarks.

I think it's much more tragic than that.

By making silly assumptions about what intelligence is, we would be handing over our control to machines that are not genuinely intelligent, i.e. fascist optimization engines.

Today, most people prefer to be stupid, efficient automatons instead of creative and intelligent machines. (An ideal alternatively called the American dream.)

Indeed, most people have already become as dumb as Microsoft Windows operating systems. You feed in tasks, and they process these orders according to some predetermined priority.

The only way out of this dilemma would be solving AI, not impeding progress.

I suppose the automatons I refer to will have a hard time appreciating my argument. That which doesn't need to generate control signals of its own, can never gain consciousness.

Regards,

-- Eray Ozkural

Reply to
Eray Ozkural exa

I think the biggest thing you have to consider is that it's people who program the computer, so keep in mind GIGO: garbage in, garbage out.

Reply to
jabberwocky

In message , David Harper writes

/semirant on

I'd imagine the old left: Democrat, Labour (maybe not UK Labour, who aren't very old-left), or whatever. The right (supposedly) stand for small, efficient government, and oppose state-run bureaucracy. An AI could eliminate all the inefficiencies, corruption and ineptitudes involved in trying to get enormous state bureaus to solve massively complex problems.

There was a time when the left were strongly allied with science and the cause of reason, but I'd say this clearly isn't that time. I don't think they'd want to have social problems rendered as "cold, hard logic". I reckon they'd complain that it was missing the point, which they would allege was some indefinable, unquantifiable "essence of human drama" or some such. I'd reckon that an AI running the economy would support free-market capitalism too, since it would instantly realise that export subsidies and import tariffs don't do anyone any good in the long run, and that the reasons for retaining them are political rather than mathematical.

/semirant off

Depends very much on the AI of course. I'm sure there would be plenty of pundits from the cultural left queuing up to add a commandment or two!

beep,

Reply to
Kevin Calder

Perhaps a network of highly specialised agents, each dedicated to a single problem or a set of very similar problems would be a better implementation than a single general purpose 'multivac'?

beep,

Reply to
Kevin Calder

In message , snipped-for-privacy@charter.net writes

For commentary on the future of AI's in the military see Alienthe's technical musings from a while back:

formatting link

beep,

Reply to
Kevin Calder

In message , Eray Ozkural exa writes

What are these assumptions?

What constitutes non-genuine intelligence, and what on earth has it to do with fascism?

You have taken a poll on people's preferences I presume?

To be honest I can't see any argument at all. Why don't you try re-phrasing it?

thanks,

Reply to
Kevin Calder


Well, I would venture that the Republicans, right now, are more opposed to giving science and technology the reins of power than Democrats are. Look at all the types of science that many people in the GOP are trying to limit...

Well, I could just as easily see Republicans being disgusted at being governed by "Godless, soulless machines"...

Secondly, what makes you assume that an AI would uphold a Republican agenda?

That depends on what the AI's priorities are, doesn't it? Is it programmed to promote efficiency? Or is it programmed to promote human happiness and well-being? (the two aren't equivalent) Or some other value?


An additional commandment from the left? Or the right? Regardless, I imagine the values of the programmers would be the biggest influence on how an AI governed.

Dave

Reply to
dave.harper

Since the requirement for politicians before entering politics is to have half their brains removed, AI is bound to be more effective, and all should benefit.

Paddy

Reply to
P RUSKIN

Mainly social sciences, I expect, and socially oriented sciences that Republicans feel are more properly capitalized by the market. And what makes anyone think an AI artifact wouldn't be just as vulnerable to corruption as people?

Regards - Lester

Reply to
Lester Zick

You mean any socially oriented science that didn't clash with the general religious beliefs of the GOP.

It would be vulnerable. The main advantage is that it could, if properly programmed, go through a set of choices along with their probable results a lot faster than a current government could.
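For what it's worth, "going through a set of choices along with their probable results" could be as simple as scoring each policy option by the expected value of its possible outcomes. A minimal sketch, with completely made-up policies, probabilities, and payoffs:

```python
# Hypothetical policy comparison by expected value.
# Every number here is invented purely for illustration.

policies = {
    "subsidy":   [(0.6, 10), (0.4, -20)],  # (probability, payoff) pairs
    "tax_cut":   [(0.5, 8),  (0.5, 2)],
    "no_action": [(1.0, 0)],
}

def expected_value(outcomes):
    """Probability-weighted average payoff over the listed outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Pick the option whose outcome distribution scores best.
best = max(policies, key=lambda name: expected_value(policies[name]))
print(best, expected_value(policies[best]))  # prints: tax_cut 5.0
```

Of course, the whole fight would be over who gets to write down the payoffs, which is the corruption question all over again.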

Dave

Reply to
dave.harper

Certainly. Appeals to science are often justified in religious terms. It doesn't mean the science is flawed.

The hell you say! So now we're supposed to leave political decisions to the tender mercies of AI theorists and computer programmers, to properly program issues even they admit they don't fully grasp, just so we can make decisions faster? I could make bad decisions even faster.

Regards - Lester

Reply to
Lester Zick

  1. It just means that the researchers are more likely to be biased towards results that don't conflict with any religious beliefs.
  2. It means that some other valid research isn't pursued based on religious reasons.


As opposed to leaving political decisions to the tender mercies of career politicians, some of whom are corrupt and have hidden agendas, as well as being hampered by special interest groups that eliminate many options an AI wouldn't have to eliminate...

This was a thought exercise, and I'm not saying AI could or will ever become a factor in government. However, one thing that would make AI less susceptible to corruption is that the programming could be open source. Politicians can always spin things, lie, etc. It would be a lot harder to hide corruption in open source code.

Dave

Reply to
dave.harper

PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.