Automaker Ford announced plans to have self-driving cars in the hands of consumers by 2021. These would not be the first self-driving, or autonomous, cars on the road, but Ford is the first to announce a definitive date for mass production. However, the tricky aspect of automotive autonomy has always been the ethical dilemma it poses.
In short, the problem harks back to a classic philosophical thought experiment, the Trolley Problem. Is the ethical or moral choice to save the most people? Or is it more ethical to do nothing, since the initial peril is not of your doing? What should a car do?
What if a car has to do something illegal, say speed or crash, in order to avoid a more calamitous end? Would your car kill you in order to save more strangers? Should a machine be programmed to kill, period? What happens when two or more autonomous cars are about to crash and each is programmed to save its own occupants, even though doing so would kill more people overall?
If a car is programmed to drive within all legal limits and to guarantee the safety of its occupants, in what situations could that programming be overridden? Would we trust many humans to make those decisions, much less machines?
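To make that override question concrete, here is a purely hypothetical sketch, not any automaker's actual logic: a utilitarian decision rule that picks the maneuver expected to harm the fewest people, with a single flag controlling whether illegal maneuvers may ever be considered. The `Action` class, the `expected_casualties` estimates, and the `allow_illegal` flag are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical model of the dilemma: each candidate maneuver carries an
# estimate of how many people it would harm, and whether it is legal.
@dataclass
class Action:
    name: str
    expected_casualties: int
    legal: bool

def choose_action(actions, allow_illegal=False):
    """Pick the maneuver with the fewest expected casualties.

    If allow_illegal is False, illegal maneuvers (speeding, crossing a
    solid line) are never considered, even when one would save more
    lives. That one flag encodes the ethical choice described above.
    """
    candidates = [a for a in actions if allow_illegal or a.legal]
    return min(candidates, key=lambda a: a.expected_casualties)

dilemma = [
    Action("brake in lane", expected_casualties=3, legal=True),
    Action("swerve onto shoulder (illegal)", expected_casualties=0, legal=False),
]

print(choose_action(dilemma).name)                      # law-abiding car
print(choose_action(dilemma, allow_illegal=True).name)  # utilitarian car
```

The point of the sketch is how much moral weight hides in one boolean: flipping `allow_illegal` changes whose lives the car prioritizes, and someone has to decide its value in advance.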
And if a car is not only pre-programmed but also able to learn, then the outcome depends on its environment. Do we want a Nazi car?