The Ethics of Autonomous Robots - A Way to Blow Off Human Responsibility?

via BBC NEWS

Robot future poses hard questions

Scientists have expressed concern about the use of autonomous decision-making robots, particularly for military use.

As they become more common, these machines could also have negative impacts on areas such as surveillance and elderly care, the roboticists warn.

The researchers were speaking ahead of a public debate at the Dana Centre, part of London's Science Museum.

Discussions about the future use of robots in society had been largely ill-informed so far, they argued.

Autonomous robots are able to make decisions without human intervention. At a simple level, these can include robot vacuum cleaners that "decide" for themselves when to move from room to room or to head back to a base station to recharge.
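At that level, the whole of the robot's "autonomy" can amount to a threshold rule. A minimal sketch in Python - the names and thresholds here are hypothetical illustrations, not any vendor's actual control code:

# A robot vacuum's "decision" loop, reduced to its simplest form.
# All names and thresholds are hypothetical.

RETURN_TO_DOCK_AT = 0.20  # assumed: head for the charger below 20% battery

def next_action(battery_level, room_is_clean):
    """Pick the next action with no human in the loop."""
    if battery_level < RETURN_TO_DOCK_AT:
        return "return_to_dock"    # recharging takes priority over cleaning
    if room_is_clean:
        return "move_to_next_room"
    return "keep_cleaning"

# A few states the robot might find itself in:
for battery, clean in [(0.90, False), (0.90, True), (0.10, False)]:
    print(f"battery={battery:.0%}, clean={clean} -> {next_action(battery, clean)}")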

Military forces

Increasingly, autonomous machines are being used in military applications, too.

Samsung, for example, has developed a robotic sentry to guard the border between North and South Korea.

It is equipped with two cameras and a machine gun.

The development and eventual deployment of autonomous robots raised difficult questions, said Professor Alan Winfield of the University of the West of England.

"If an autonomous robot kills someone, whose fault is it?" said Professor Winfield.

"Right now, that's not an issue because the responsibility lies with the designer or operator of that robot; but as robots become more autonomous that line or responsibility becomes blurred."

Professor Noel Sharkey, of the University of Sheffield, said there could be more problems when robots moved from military to civil duties.

"Imagine the miners strike with robots armed with water cannons," he said. "These things are coming, definitely."

The researchers criticised recent research commissioned by the UK Office of Science and Innovation's Horizon Scanning Centre and released in December 2006.

Robot rights

The discussion paper was titled Utopian Dream or Rise of the Machines? It addressed issues such as the "rights" of robots, and examined developments in artificial intelligence and how this might impact on law and politics.

In particular, it predicted that robots could one day demand the same citizen's rights as humans, including housing and even "robo-healthcare".

"It's poorly informed, poorly supported by science and it is sensationalist," said Professor Owen Holland of the University of Essex.

"My concern is that we should have an informed debate and it should be an informed debate about the right issues."

The robo-rights scan was one of 246 papers commissioned by the UK government and compiled by a group of futures researchers: the Outsights-Ipsos Mori partnership and the US-based Institute for the Future (IFTF).

At the time, Sir David King, the government's chief scientific adviser, said: "The scans are aimed at stimulating debate and critical discussion to enhance government's short and long-term policy and strategy."

Other scans examined the future of space flight and developments in nanotechnology.

Raised questions

The Dana Centre event will pick up some of these issues.

"I think that concerns about robot rights are just a distraction," said Professor Winfield.

"The more pressing and serious problem is the extent to which society is prepared to trust autonomous robots and entrust others into the care of autonomous robots."

Caring for an ageing population also raised questions, he said.

Robots were already being used in countries like Japan to take simple measurements, such as heart rate, from elderly patients.

Professor Sharkey, who worked in geriatric nursing in his youth, said he could envisage a future when it was "much cheaper to dump a lot of old people" in a large hospital, where they could be cared for by machines.

Scenarios like these meant that proper debate about robotics was imperative, he added.

"In the same way as we have an informed nuclear debate, we need to tell the public about what is going on in robotics and ask them what they want."

Story from BBC NEWS:

2007/04/24 03:29:27 GMT

*

"Robo-rights" are a bit far off at the moment, our best robots barely show the gleam of a single grain of intelligence or "personality". The time WILL come, but not for 50+ years at the very least. We don't even know why humans are "beings" yet, much less how to re-create that from scratch. "Robo-COPS" however, robo-soldiers, robo-docs, robo-nurses, robo-laborers ... those will come sooner and will be used to the limits of the technology. If it's CHEAPER to throw grandpaw into a robo-monitored old folks home, many will. If the government can save a buck using robots, it will. If anyone can avoid responsibility by delegating it to autonomous machinery, they will.

Last year's (successful) DARPA Grand Challenge, where fully autonomous vehicles had to navigate a 132-mile trek through the badlands, wasn't just for fun; it was to develop the technology for fully autonomous WEAPONS systems. Imagine a small tank-like rover you can send off to find its way into an enemy camp or town - equipped with automatic guns that can target and shoot so quickly and accurately that it could literally have several bullets in the air at the same time, heading towards different "combatants". Will it be able to tell a combatant from a non-combatant from a 4-year-old? How reliably? What level of reliability will we EXPECT? What's "good enough" for a machine? A lower standard than for a human soldier? You betcha!
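To put the reliability question in numbers - a back-of-the-envelope sketch in Python, where the accuracy figures and encounter count are made up for illustration, not measurements of any real system:

# Rough arithmetic on "what's good enough for a machine".
# The accuracy figures and encounter count below are hypothetical.

encounters = 10_000  # assumed identification decisions over one deployment

for accuracy in (0.99, 0.999, 0.9999):
    expected_errors = encounters * (1.0 - accuracy)
    print(f"{accuracy:.2%} accurate -> ~{expected_errors:.0f} wrong calls")

Even at an (optimistic) 99.99%, that's still about one misidentified person per deployment - and whatever threshold gets accepted, a human picked it.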

Expect the police to use more and more sophisticated robots every year. They too will be equipped to kill. They too will be held to a lower standard, so humans can dodge responsibility.

Reply to
Luminoso

Agreed, and they're heading the pack of the ill-informers.

Automation in lethal weapons has been used for the last century. Is this the same debate as the one about influence mines that detonate automatically (sometimes decades after they are planted), cruise missiles that use artificial intelligence to home in on any of several targets hundreds of miles away, or radar-controlled Gatling guns that can automatically spray 3,000 rounds a minute at an approaching "enemy"? Or how about a nuclear arsenal that can carry on even after every living soul on the planet is fried? Answer THESE questions first, and then you can tackle robotics.

The notion that this is new with "robotics" is laughable. So is the notion that there is anything debatable in a lethal robot made to patrol the Korean DMZ. Aren't they aware that standing orders (on both sides) to human soldiers are to shoot to kill? These researchers need to first confront the political and military doctrines that apply to orders given to humans, and to the systems that are already in place. Who takes responsibility there? Why would it be any different with a robot, and why is that different from any guided missile operated by any nuclear nation?

Worrying about things like robo-rights, for which there is no supporting technology currently in sight, is premature. I'll worry about it when I can fly to work in my own personal heli-car.

-- Gordon

Reply to
Gordon McComb

They will answer, and are answering, "those questions", but each level of automation will create different and more complex questions, and probably less and less satisfactory answers.

Take a pseudo-tech example: the common "police dog". Their main use seems to be to torture 'black' people the police dislike. Thing is, the dog is semi-intelligent: it has a certain amount of will, certain internal goals, a certain amount of unpredictability. When the dog chews the hell out of someone, the cop can offload some of the responsibility onto the dog and the dog/'suspect' dynamic. If the cop had inflicted identical damage himself, he'd be in big trouble.

We will be able to build robo-dogs long before we'll be able to build robo-humans. A robo-dog is a few levels above a 'dumb' device like a mine, a couple of levels above an automated gun or cruise missile. As the weapons get "smarter", though, the question is how much the humans will defer responsibility for undesirable outcomes to the "smart" device.

During the recent invasion of Iraq, there were numerous instances where "smart" technology failed. Cruise missiles went off into civvie neighborhoods, "smart" bombs didn't follow the beam, automated guns didn't separate babies from armed opponents. The standard excuse was that the technology isn't perfect and many more would have died if older techniques had been used. In short, THEY were not responsible; the MACHINE was responsible. A massacre on the scale Lt. Calley was imprisoned for is totally ignored because the MACHINE screwed up.

Yes, but there IS still that element of judgement you get with a human. People can discern subtle details about a situation no current machine can discern. They can even disobey orders if it's the right thing to do.

The lowest-ranking officer who can't come up with a good excuse as to why he isn't responsible. The Abu Ghraib prison scandal comes to mind. Now it's Tillman, the football player who didn't keep his head down, and the subsequent information management scheme.

No matter HOW smart the robot - at least until it's clearly at the human level both in cognitive abilities and emotive responses - it can't be prosecuted or persecuted. As time goes on, the robots will become smart enough to stalk and kill and enjoy it in their own special way, just like any human, and that just makes it easier for the humans who are supposedly in charge to escape responsibility, to blame the robot. Won't do any good to execute a robo-dog or whatever though; no 'justice' there.

When I was a kid they promised us personal ATOMIC POWERED heli-cars. I'm still waiting.

"Robot rights" is indeed a bit premature. However, noting how far the law lags behind the technology these days, it might not hurt to write a FEW laws, some basic ground-rules, that can be used when the the robots really DO become smart, or smarter, than ourselves. S.Korea actually did this a few months ago.

Reply to
Luminoso

There are ethical questions related to the use of all technology for warfare. Concentrating on robotics as the next frontier in the ways people can be unkind to each other misses the forest for the trees.

The American government has never officially acknowledged responsibility for the high civilian loss in the firebombing of Dresden, and that was a deliberate, planned attack that, with hindsight, was unnecessary in bringing down Nazi Germany. War brings out the worst in us, and terrible things happen. While it rages, no one can accurately foretell the outcome. But when the dust clears, if governments can't accept responsibility for the things they do intentionally, there's no chance they'll accept it for the acts of machines. In the end, this is a circular argument that has nothing to do with technology or machines. Framing it around "robotics" is a cynical attempt at appearing cogent.

-- Gordon

Reply to
Gordon McComb
