10 May 2018

Rise of the Machines – Making a Safer Robot

I Am, Therefore I Think

For as long as stories have been written, authors have considered the possibility that if intelligence can come into being through nature, then perhaps it can also be created by artifice. The ancient Greeks wrote of automatons, Jewish folklore introduces the concept of golems, and contemporary science fiction describes a wide variety of sentient machines. In all of these sources, one theme emerges again and again… artificial intelligence is dangerous!

One of the most prolific creators of stories focusing on this question is Isaac Asimov, and it is in his books that we find the first real attempt to formulate a safe approach to the creation of robots in all their guises.

The three laws of robotics. For those not familiar with the three laws, here they are:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
At first glance these laws look pretty solid. Isaac Asimov exercises them in a wide variety of situations in his novels, and in the majority of cases the laws work flawlessly to safeguard the humans with whom the robots interact. In fact, one of the consistent lessons from Asimov’s writings is that it is the humans who are the dangerous ones.

So that’s it then? The three laws of robotics are all we need. Safe robots.

Well, no. Let’s assume that in principle the three laws work, let’s also assume that we can build these laws into a robot, and finally, let’s ignore the fact that these are very big assumptions in themselves.
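
To make that second assumption a little more concrete, here is a minimal sketch of what “building the laws in” might look like if the laws were treated as a strict priority filter over candidate actions. Everything in it (the Action type, the harms_human flag and so on) is invented purely for illustration rather than taken from any real robotics API, and the hard part is already hidden in who gets to set those flags.

    # Illustrative sketch only: the three laws as a strict priority filter
    # over candidate actions. All names here are invented for the example.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Action:
        name: str
        harms_human: bool      # would doing this (or the inaction it implies) harm a human?
        fulfils_order: bool    # does this satisfy an order given by a human?
        preserves_robot: bool  # does this protect the robot's own existence?

    def choose_action(candidates: List[Action]) -> Optional[Action]:
        # First Law: discard anything that harms a human.
        safe = [a for a in candidates if not a.harms_human]
        if not safe:
            return None  # no lawful action exists
        # Second Law: among safe actions, prefer those that obey a human order.
        obedient = [a for a in safe if a.fulfils_order] or safe
        # Third Law: among those, prefer actions that preserve the robot.
        preserving = [a for a in obedient if a.preserves_robot] or obedient
        return preserving[0]

The structure itself is trivial; deciding what actually counts as harm is the difficult bit, and that is exactly where the trouble starts.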

There is still a problem, and any parent will know instinctively what this problem is. No good parent wants to harm their children; quite the opposite in fact. Every good parent wants only the very best for their children – and every good robot should want only the very best for their human creators.

But what exactly is the very best?

Moral Robots

Let’s look at those three laws again, but this time from the perspective of the child’s expectations of its parents and the parent’s responsibilities to the child.

  1. A parent may not injure its child or, through inaction, allow its child to come to harm.
  2. A parent must obey orders given it by its child except where such orders would conflict with the First Law. (Consider obeying orders in this context as meeting needs).
  3. A parent must protect its own existence as long as such protection does not conflict with the First or Second Law.

Now, looking at these laws through the parent/child perspective immediately demonstrates the dilemma faced by the intelligent robot; it arises when the child demands a bar of chocolate.

What does the parent do? Law 2 requires the chocolate to be given to meet the need or obey the order, but Law 1 requires that the child not be harmed. At this point it might be worth taking a look at the Wikipedia article on “Parenting Styles”. You won’t be surprised to discover that the massed wisdom of the human race and the careful considerations of highly trained psychologists have failed dismally to reach an accord on the right answer to this question.

Does the parent give the child the chocolate bar, or withhold the bar and provide a healthy alternative? Is the parent causing more harm by providing a substance that is both fattening and damaging to the teeth, or is there psychological damage in denying the child a pleasurable experience and creating an aversion to “healthy” food? And what exactly is healthy food and what is unhealthy?

While the parent considers this dilemma, the child throws a tantrum and starts lashing out verbally and physically. Is the parent now breaking Law 3 by allowing him or herself to come to harm? As humans we resolve this problem relatively easily by simply making a decision based on personal beliefs conditioned into us by our own parents or by society. If we didn’t, we would be unable to make a decision at all, and would cease to function.

So… to make decisions within a moral framework we have to jump to conclusions based on beliefs. If we want robots to make decisions, we will have to allow them to do the same. Unfortunately, history tells us that making decisions on this basis can justify all sorts of immoral acts, and so in adopting this model we create robots that can also perform immoral or harmful acts.
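
In machine terms, “jumping to conclusions based on beliefs” might look something like the sketch below, where the harm estimate is not computed from the situation at all but supplied as a parameter by whoever conditioned the robot. The belief dictionaries and weights are invented purely for illustration.

    # Sketch only: "harm" falls out of whatever beliefs the robot is given.
    def estimated_harm(action: str, beliefs: dict) -> float:
        """Return a harm score for an action, driven entirely by supplied beliefs."""
        return beliefs.get(action, 0.0)

    permissive_parent = {"give chocolate": 0.1, "refuse chocolate": 0.6}
    strict_parent     = {"give chocolate": 0.7, "refuse chocolate": 0.2}

    for beliefs in (permissive_parent, strict_parent):
        choice = min(("give chocolate", "refuse chocolate"),
                     key=lambda a: estimated_harm(a, beliefs))
        print(choice)  # "give chocolate" under one belief set, "refuse chocolate" under the other

Same robot, same First Law, opposite decisions: the law does not settle the question, the beliefs do.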

End result: Dangerous robots.

Amoral Robots

What if we rewrote those laws and, instead of adopting the core principles of human morality, tried a more amoral approach? In other words: do as you’re told, regardless of consequences. Here are two rules that might achieve that:
  1. A robot must not do anything to a human being or its property unless that human being gives permission to do so.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Now let’s look at that chocolate bar scenario again.

Child: “Give me a chocolate bar.”
Robot (to parent): “May I give your child a chocolate bar?”
Parent: “No. Give him a carrot instead.”
Robot (to child): “May I give you a carrot?”
Child: “No. Give me a chocolate bar.”
Robot does nothing (as it already has all the answers it needs).

Result: Safe robots. Brilliant!

Ah. Hang on a moment. How did the robot know to treat the child as property? For the robot to be safe, it has to treat the child both as a human being and as property. Then there are the dilemmas of property and timeliness in general. How does the robot know what you do and don’t own, and what happens tomorrow? The child asks again for a chocolate bar and the robot does nothing, as the previous answers still apply – or do they? How long before a “no” or a “yes” no longer applies? The robot would have to keep checking and rechecking everything ad nauseam.
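
To see how awkward that timeliness problem gets, here is a sketch of the obvious engineering answer: cache the yes/no answers and let them expire. The class, the names and the one-hour time-to-live are all assumptions made up for the example; the point is that any expiry the robot picks is arbitrary. Too long and yesterday’s “no” silently governs tomorrow; too short and the robot never stops asking.

    # Sketch only: caching yes/no answers with an arbitrary time-to-live.
    import time

    class PermissionCache:
        def __init__(self, ttl_seconds: float = 3600.0):
            self.ttl = ttl_seconds
            self._answers = {}  # (owner, action) -> (allowed, timestamp)

        def record(self, owner: str, action: str, allowed: bool) -> None:
            self._answers[(owner, action)] = (allowed, time.time())

        def check(self, owner: str, action: str):
            """Return True/False if a fresh answer exists, or None if the robot must ask again."""
            entry = self._answers.get((owner, action))
            if entry is None:
                return None
            allowed, when = entry
            if time.time() - when > self.ttl:
                return None  # the answer has expired: interrupt the human yet again
            return allowed

    cache = PermissionCache(ttl_seconds=3600.0)
    cache.record("parent", "give the child a chocolate bar", False)
    print(cache.check("parent", "give the child a chocolate bar"))  # False, but only for the next hour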

Result: Annoying and ineffective robots. Not so brilliant. 

Obedient Robots

Let’s make this even simpler. Let’s just have one law:

  1. A robot must obey orders given it by human beings.

This simply makes the robot an obedient tool that is as good or as bad as the adult controlling it. Let’s hand the morality decisions back to the adult. Great. We’ve now created a robot with no complex dilemmas whatsoever – a bit like a gun…

Result: Robots that you need a licence to own and that can’t be taken out in public.

Independent Robots

In essence, the only way to produce intelligent, effective and safe robots is to make them fully autonomous. We then have to hope that, because they are not fundamentally organisms designed to compete for resources (as we are), their intelligence will reach different conclusions from ours and they will solve the “safe robot” question for us.

Of course, this would be an enormous gamble and it’s never going to happen…

So, dangerous and/or ineffective robots it is then.

Regards
The Enterprising Architect
