
Robot ethics: Can we program consciousness?

As artificially intelligent robots, devices and cars begin running our lives, the question naturally arises: Will they work in our best interests? And how can we tell?


Image: 3D visual of a humanoid robot overthinking something.

Any new technology inevitably brings not only technical benefits, but worries about its cultural and ethical impact. Videogames sparked worries of violence. Experts worry that the Internet is making us all stupid. And of course, phones and tablets are killing our ability to interact with each other. But there’s a far bigger bogeyman on the horizon: artificially intelligent robots.

As smart devices, self-driving cars and home robots begin running our lives for us, the question naturally arises: Will they work in our best interests? And how can we tell?

Isaac Asimov famously set out his Three Laws of Robotics in his 1942 short story Runaround, later collected in I, Robot. A robot may not harm a human being or, through inaction, allow a human to come to harm. It must obey orders given by humans unless they contravene the first law, and it must protect its own existence, so long as this doesn’t conflict with the first or second laws.

He wrote this before robots were a thing, but now these rules are more than hypothetical. Self-driving cars are already roaming our roads, and concerns are cropping up with them. Suppose your self-driving car finds itself in a situation where it must either total an expensive car and incur a huge insurance claim, or slightly injure a jaywalking pedestrian. How can we trust it to make a choice that we’d find morally acceptable?

In the movie Eye in the Sky, a military drone trains its sights on a house where a terrorist is plotting to kill hundreds, but a strike would endanger the life of a young girl. Should it take the shot? A utilitarian philosopher might say yes. A deontological philosopher would likely refuse. If an autonomous robot takes that decision away from a human operator — for reasons of expediency, say — then which philosophy should it use? And how should the person who programmed it decide?

Technologists are working on it, albeit slowly. The British Standards Institution, sufficiently alarmed at the prospect of malevolent robot overlords, has issued a set of guidelines highlighting ethical hazards in robot design. BS 8611 goes beyond physical dangers, looking at the potential to ‘de-humanize’ people or to foster over-reliance on robots.

The BSI document warns against humans forming emotional bonds with robots. The automated care assistant looking after a lonely and confused older person could easily persuade them to buy products and services, for example. A robotic ‘play pet’ could become a surrogate for irresponsible parents, providing emotional support for the child. But should it?

The BSI ethics guide says that humans, rather than robots, should be the responsible agents, and that it should be possible to find out who is responsible for a robot’s behaviour. That may be a more difficult task than it seems, though, given many robots’ reliance on machine learning.

In this subset of broader artificial intelligence research, computers use historical data to determine their actions in ways that aren’t immediately obvious to human operators. In short, they begin making decisions that we don’t understand.

The worry here is that algorithms will contain intrinsic bias reflecting the design choices of the people who build them. Those people are typically young, white and male, evoking what researcher Margaret Mitchell has called the ‘sea of dudes’ problem.

We’re already seeing worrying accusations of algorithmic bias in some areas. One study found discrimination in online ad delivery: searches on names commonly associated with black people were more likely to return ads mentioning arrests, it said. Bias in online ad delivery may be little more than irritating and insulting, but bias in decision-making algorithms can have far worse results.

Investigative journalism project ProPublica analyzed scores from a software program designed to predict recidivism in felons, used by judges during sentencing. Despite a protest from the software vendor, Northpointe, the project stood by its claim that black defendants were 45% more likely to be misclassified as future re-offenders when they did not reoffend. Misclassification of black defendants was far higher than for white defendants, it said.
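
To see what a claim like that means in practice, here is a rough sketch, in Python, of how one might compare misclassification rates between groups. The field names and toy records are invented for illustration; this is not ProPublica’s methodology or code.

# Minimal sketch (illustrative only): how often does a risk score wrongly flag
# people who did NOT reoffend, split by group?

def false_positive_rate(records, group):
    """Share of people in `group` who did not reoffend but were labelled high risk."""
    non_reoffenders = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    wrongly_flagged = [r for r in non_reoffenders if r["predicted_high_risk"]]
    return len(wrongly_flagged) / len(non_reoffenders)

# Toy data standing in for real case outcomes.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

print(f"False-positive rate, group A: {false_positive_rate(records, 'A'):.0%}")
print(f"False-positive rate, group B: {false_positive_rate(records, 'B'):.0%}")
# A large gap between these two numbers is the kind of disparity the analysis flagged.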

One way to help protect against algorithmic bias might be transparency. Companies could publish the algorithms themselves for review, and be clear about the data feeding those algorithms: where it comes from, how it is qualified and how it is weighted.
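
As an illustration of what that kind of disclosure could look like, here is a hypothetical sketch of a data provenance record. The fields and values are invented for the example and don’t follow any established standard.

# Illustrative sketch only: documenting where an algorithm's data comes from,
# how it was qualified and how heavily it is weighted.

from dataclasses import dataclass

@dataclass
class DataSourceRecord:
    name: str            # human-readable name of the data set
    origin: str          # where the data comes from
    qualification: str   # how it was vetted or cleaned
    weight: float        # relative influence on the model's decisions

sources = [
    DataSourceRecord(
        name="arrest_records_2010_2015",
        origin="County court filings (public records request)",
        qualification="De-duplicated; records missing a disposition removed",
        weight=0.4,
    ),
    DataSourceRecord(
        name="employment_history",
        origin="Self-reported intake questionnaire",
        qualification="Unverified; flagged as self-reported",
        weight=0.1,
    ),
]

for s in sources:
    print(f"{s.name}: origin={s.origin!r}, qualified={s.qualification!r}, weight={s.weight}")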

The other way is to introduce technical measures that detect and weed out bias. Researchers are already tackling this with programs that test whether an algorithm can distinguish demographic attributes in a data set and deliver biased results based on them; the software can then blur those data points to smooth out any discrimination, they claim.
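
The underlying idea can be sketched roughly like this: measure how much a model’s output differs between groups, then ‘blur’ the feature carrying the group signal and measure again. This is a toy illustration of that general approach, not any specific research tool, and the feature names and numbers are made up.

# Toy sketch of bias detection and blurring (illustrative only).

import statistics

def outcome_gap(predictions, groups):
    """Difference in average predicted score between group 0 and group 1."""
    group0 = [p for p, g in zip(predictions, groups) if g == 0]
    group1 = [p for p, g in zip(predictions, groups) if g == 1]
    return abs(statistics.mean(group0) - statistics.mean(group1))

def blur(values, strength=0.8):
    """Pull each value toward the overall mean, weakening group-specific signal."""
    m = statistics.mean(values)
    return [v + strength * (m - v) for v in values]

def score(xs):
    """Stand-in for a trained model: a simple transform of the feature."""
    return [x * 0.5 for x in xs]

# A proxy feature that differs by group drives the "model's" score.
groups = [0, 0, 0, 1, 1, 1]
proxy_feature = [1.0, 1.2, 0.9, 2.0, 2.1, 1.9]   # correlated with group membership

print("Outcome gap before blurring:", round(outcome_gap(score(proxy_feature), groups), 3))
print("Outcome gap after blurring: ", round(outcome_gap(score(blur(proxy_feature)), groups), 3))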

These are complex issues that will only become more arcane as algorithms get more sophisticated and the machines that act on them become more capable. Some are calling for a federal regulator to oversee algorithmic neutrality. With the U.S. Army already considering military robots that can identify and kill targets themselves, the stakes are rising by the minute.

Image: iStock
