*This is some top-notch speculative thinking. This is a guy who knows robots and also understands people. This long essay has an air of plausibility that one rarely sees in these matters.*

*I especially like the extensive, indignant part where he tells people to stop beguiling themselves with metaphysical puzzles when thinking about real-life human-machine interactions.*
(…)
There is a serious question about how safe is safe. About 35,000 people are killed in motor vehicle accidents in the US per year, and about 1.25 million worldwide. Right now all of these deaths involve human drivers. Both are horribly large numbers. Over the last 120 years we, the human race, have decided that such high numbers of deaths are acceptable in exchange for the usefulness that automobiles provide.
My guess is that we will never see anything close to such high numbers of deaths involving driverless cars. We simply will not find them acceptable; instead we will delay adopting levels 4 and 5 autonomy, at the cost of more overall lives lost, rather than let autonomous driving systems cause many deaths at all. Rather than tolerating 35,000 annual deaths in the US, we will not accept autonomy unless it produces a relatively tiny number. Ten deaths per year may be deemed too many, even though that could be viewed as minus 34,990 deaths, a very significant improvement over the current state of affairs.
It won’t be rational. But that is how it is going to unfold.
Meanwhile, there has been a cottage industry of academics and journalists looking for clickbait (remember, their whole business model got disrupted by the Internet; they are truly desperate, and have been driven a little mad), asking whether we will trust our cars to make moral decisions when they are faced with horrible choices.
You can go here to a website at M.I.T. to see the sorts of moral decisions people say autonomous cars will need to make. When the brakes suddenly fail, should the car swerve to miss a bunch of babies in strollers and instead hit a gaggle of little old ladies? Which group should the car decide to kill and which to save, and who is responsible for writing the code that makes these life-and-death decisions?
Here’s a question to ask yourself. How many times, when you have been driving, have you had to make a forced decision about which group of people to drive into and kill? You know, the five nuns or the single child? Or the ten robbers or the single little old lady? For every time that you have faced such a decision, do you feel you made the right choice in the heat of the moment? Oh, you have never had to make that decision yourself? What about all your friends and relatives? Surely they have faced this issue?
And that is my point. This is a made-up question that will have no practical impact on any automobile or person for the foreseeable future. Just as these questions never come up for human drivers, they won’t come up for self-driving cars….