Ethics, Engineering, And Driverless Cars

There’s a lot of buzz lately about self-driving cars. They were the focus of several sessions when I was at Singularity University a couple of years ago, and while I was there Google sent one over so we could get a look at it. The consensus at SU was that they confer so many public benefits that their adoption is almost inevitable. That they do indeed have many advantages is undeniable, but as the resident pessimist at the program I attended, I was far less swept up in the enthusiasm than the others in the room, almost all of whom were either tech entrepreneurs or globalist Utopians (with a great deal of overlap).

One problem that you don’t hear about so much is this: driverless cars are autonomous robots, and as Isaac Asimov so presciently observed way back in 1942, if we are going to let autonomous robots exist among us, we need to lock them down with built-in ethical restraints. Asimov proposed what he called the Three Laws of Robotics (a code sketch follows the list). They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
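
Viewed as engineering rather than fiction, the Three Laws amount to a strict priority ordering over a robot’s options. Here is a minimal sketch in Python of what that ordering looks like; every predicate on the hypothetical `world` object is invented purely for illustration and corresponds to no real robotics API:

```python
# A toy sketch: Asimov's Three Laws as a strictly ordered filter over
# candidate actions. Every predicate on `world` is an invented placeholder.

def choose_action(candidate_actions, world):
    """Pick an action that satisfies the Three Laws in strict priority order."""
    # First Law dominates everything: discard any action that injures a
    # human being (or, through inaction, allows one to come to harm).
    safe = [a for a in candidate_actions if not world.harms_human(a)]
    if not safe:
        return None  # every option harms someone; see the tunnel problem below
    # Second Law: among the safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if world.obeys_human_orders(a)]
    pool = obedient or safe
    # Third Law: last of all, prefer actions that preserve the robot itself.
    surviving = [a for a in pool if not world.destroys_self(a)]
    return (surviving or pool)[0]
```

Note what the sketch makes plain: when the `safe` list comes up empty, the laws are silent, and that silence is exactly where the trouble begins.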

Asimov was rightly celebrated for this clarifying insight, and still is. There are, however, ethical problems that even the Three Laws are too blunt to dissect, problems that engineers will need to confront as they prepare these robots to take over the roads. A recent post at Quartz provides an example:

Consider this thought experiment: you are traveling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you.

Both outcomes will certainly result in harm, and from an ethical perspective there is no “correct” answer to this dilemma. The tunnel problem serves as a good thought experiment precisely because it is difficult to answer.

The tunnel problem also points to imminent design challenges that must be addressed, in that it raises the following question: how should we program autonomous cars to react in difficult ethical situations? However, a more interesting question is: who should decide how the car reacts in difficult ethical situations?

Of course, even we humans have trouble with questions of this kind, so much so that they have become the object of much philosophical scrutiny, with no conclusive result. (The philosophical holotype is called the “Trolley Problem”, and you can get an overview of it here.) But the prospect of having tens of millions of autonomous robots, each weighing a ton or more, speeding along the nation’s highways and byways makes what was heretofore a philosophical conundrum a public question of no small importance.

One answer would be to leave these preferences to the owner and make them configurable options; a sketch of what that might look like in code follows the list. But imagine the “Settings” page:

1. If a choice must be made, should the car kill the passengers, or a pedestrian?
2. Please select the maximum number of pedestrians to kill before prioritizing driver fatality.
3. Please assign preferred weighting to the following categories of pedestrians:
      – Children
      – Bicyclists
      – Young mothers
      – The elderly
      – Nuns
      – Physically or visually impaired persons
      – Endangered species
      – Non-endangered species
      – Cats
      – Dogs (large, dignified)
      – Dogs (small, yappy)
      – Neck tattoos
      – Tea Party members
      – Persons of Color
      – LGBT
      – Cisgendered white males
      – Kanye

…and so on.
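
If we take the joke half-seriously, those “preferences” would amount to a configuration object shipped with the car. Here is a purely hypothetical sketch in Python, with every field name invented for illustration (no manufacturer exposes anything of the kind):

```python
from dataclasses import dataclass, field

# A purely hypothetical ethics-settings object for an autonomous car.
# Every field name and default value below is invented for illustration.

@dataclass
class EthicsSettings:
    # Question 1: whom does the car sacrifice when a choice must be made?
    prioritize_passengers: bool = True
    # Question 2: how many pedestrians outweigh the driver?
    max_pedestrians_before_driver_fatality: int = 3
    # Question 3: per-category weights; higher means "try harder to miss".
    pedestrian_weights: dict = field(default_factory=lambda: {
        "children": 1.0,
        "bicyclists": 0.8,
        "the_elderly": 0.9,
        # ...and so on, down the list above
    })
```

Writing the options down as a data structure makes the difficulty concrete: every default value above is an ethical decision that somebody, whether owner, engineer, or regulator, has to make before the car leaves the lot.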

Read the Quartz item here.

3 Comments

  1. JK says

    Well. Google might hurry it up a bit if they’re planning on keeping cars on Arkansas highways. Via Drudge:

    http://www.thesmokinggun.com/documents/Google-Street-View-car-crash-612345

    Posted August 8, 2014 at 2:50 pm
  2. Paul Rain says

    An interesting topic.

    I like the driver-mandated settings concept, though of course one should be able to define one's own categories in the ethical reasoning module.

    And, to make sure that pedestrians don’t get too blasé about just how altruistic the people they wander in front of are, the settings should be displayed on a screen on the front of the vehicle.

    Posted August 8, 2014 at 7:43 pm
  3. Stephen W says

    So the human hits the child, then panics, loses control, and goes into the wall, killing both, while the robocar instantly brakes and gives the child only a minor hit. The robocars don’t need to be perfect, just better than humans. Most likely, if a robocar does not have a safe place to swerve to, it will just hit the brakes like a good driver would; it does not need to make ethical decisions about whose life is more important, and no human would be capable of factoring ethics into such a short instant of time either.

    Posted September 8, 2014 at 7:08 am
