There’s a lot of buzz lately about self-driving cars. They were the focus of several sessions when I attended Singularity University a couple of years ago, and while I was there Google sent one over so we could get a look at it. The consensus at SU was that self-driving cars confer so many public benefits that their adoption is almost inevitable. That they do indeed have many advantages is undeniable, but as the resident pessimist of the program, I was far less swept away with enthusiasm than the others in the room, almost all of whom were either tech entrepreneurs or globalist Utopians (with a great deal of overlap).
One problem that you don’t hear about so much is this: driverless cars are autonomous robots, and as Isaac Asimov so presciently observed way back in 1942, if we are going to let autonomous robots exist among us, we need to lock them down with built-in ethical restraints. Asimov proposed what he called the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
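Note that the Three Laws form a strict priority ordering: each law yields only to the ones above it. As a sketch of that structure (and nothing more), one could imagine a robot ranking its options lexicographically, with hypothetical predicates standing in for moral judgments no real system can actually make:

```python
# A toy encoding of Asimov's Three Laws as a strict priority ordering.
# The predicate functions are hypothetical stand-ins for judgments
# far beyond any real machine's capability.

def choose_action(candidates, harms_human, disobeys_order, endangers_self):
    """Pick the candidate action that best satisfies the Three Laws.

    Python compares tuples lexicographically, so a First Law violation
    (harms_human) outweighs any Second or Third Law violation, and a
    Second Law violation outweighs any Third Law violation.
    """
    return min(
        candidates,
        key=lambda a: (harms_human(a), disobeys_order(a), endangers_self(a)),
    )

# Example: braking harms no one; swerving disobeys an order;
# proceeding harms a human.
action = choose_action(
    ["brake", "swerve", "proceed"],
    harms_human=lambda a: a == "proceed",
    disobeys_order=lambda a: a == "swerve",
    endangers_self=lambda a: False,
)
print(action)  # brake
```

The point of the sketch is only that the precedence is mechanical; everything hard is hidden inside the predicates.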
Asimov was rightly celebrated for this clarifying insight, and still is. There are, however, ethical problems that even the Three Laws are too blunt to dissect, and engineers will need to confront them as they prepare these robots to take over the roads. A recent post at Quartz provides an example:
Consider this thought experiment: you are traveling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you.
Both outcomes will certainly result in harm, and from an ethical perspective there is no “correct” answer to this dilemma. The tunnel problem serves as a good thought experiment precisely because it is difficult to answer.
The tunnel problem also points to imminent design challenges that must be addressed, in that it raises the following question: how should we program autonomous cars to react in difficult ethical situations? However, a more interesting question is: who should decide how the car reacts in difficult ethical situations?
Of course, even we humans have trouble with questions of this kind, so much so that they have become the object of much philosophical scrutiny, with no conclusive result. (The philosophical holotype is called the “Trolley Problem”, and you can get an overview of it here.) But the prospect of having tens of millions of autonomous robots, each weighing a ton or more, speeding along the nation’s highways and byways makes what was heretofore a philosophical conundrum a public question of no small importance.
One answer would be to leave these preferences to the owner and make them configurable options. But imagine the “Settings” page:
1. If a choice must be made, should the car kill the passengers, or a pedestrian?
2. Please select the maximum number of pedestrians to kill before prioritizing driver fatality.
3. Please assign preferred weighting to the following categories of pedestrians:
– Young mothers
– The elderly
– Physically or visually impaired persons
– Endangered species
– Non-endangered species
– Dogs (large, dignified)
– Dogs (small, yappy)
– Neck tattoos
– Tea Party members
– Persons of Color
– Cisgendered white males
…and so on.
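The absurdity comes through even more clearly if you imagine that settings page rendered as code. A minimal sketch, with entirely invented field names and weights (no real vehicle exposes, or should expose, options like these):

```python
# A hypothetical "ethics settings" object for an autonomous car.
# Every field name and default value here is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class EthicsSettings:
    # 1. If a choice must be made, spare the passengers rather than a pedestrian?
    prefer_passengers: bool = True
    # 2. Maximum number of pedestrians to endanger before prioritizing
    #    driver fatality.
    max_pedestrians_before_driver: int = 1
    # 3. Preferred weighting per pedestrian category (higher = protect more).
    category_weights: dict = field(default_factory=lambda: {
        "young_mothers": 1.0,
        "elderly": 1.0,
        "impaired_persons": 1.0,
        "endangered_species": 0.5,
        "dogs_large_dignified": 0.2,
        "dogs_small_yappy": 0.1,
    })

settings = EthicsSettings()
print(settings.prefer_passengers)  # True
```

That such a data structure is trivial to write, and monstrous to contemplate, is exactly the problem.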
Read the Quartz item here.