By Soheil Sohrabi

Driverless Car Ethics

Updated: Nov 12, 2018

Despite the significant expected impact of Autonomous Vehicles (AVs) on traffic safety, removing drivers' responsibility for crashes may create a new set of problems. Picture an AV boxed in on all sides by other cars. Suddenly, a large, heavy object falls from the truck in front of the AV. There is too little time to avoid a collision. The AV's options are to strike the object, killing its own passengers, or to swerve left or right, hitting a neighboring car and killing its passengers (watch the following video).

But what should the AV do?

Responses to this question fall into three groups. The first group simply points to how a human driver would instinctively react in such a situation and suggests the AV react accordingly. A driver is most likely to choose the second option, sacrificing others' lives to save his or her own. This idea is condemned by the second group, who argue against killing many people to save the AV's passengers; their reasoning is that the total damage in the system should be minimized. This position is called consequentialism. Consequentialist reasoning cannot resolve the problem in every case, since it can discriminate against particular groups of road users. Returning to the example: if the right-hand car is equipped with more airbags than the left-hand car, the AV will tend to hit the safer car, where the chance of a fatality is lower. Likewise, if one car's passengers are not wearing seat belts, the AV will choose the other, safer car, which does not seem fair. Hence, a third position is based on ethics: the AV should pick the option that human morality endorses, overcoming the shortcomings of consequentialism. Although an absolute good cannot be defined in ethics, society's agreement on moral decisions can be taken as a working substitute.
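The discrimination problem with the "minimize total damage" rule can be made concrete with a small sketch. Everything below is illustrative: the maneuver names and fatality probabilities are invented for this example, not taken from any real AV system.

```python
# Naive consequentialist rule for the boxed-in scenario:
# choose the maneuver with the fewest expected fatalities.
# All names and probabilities here are hypothetical.

def expected_fatalities(option):
    """Expected deaths = sum of each exposed person's fatality probability."""
    return sum(option["fatality_probs"])

def choose(options):
    """Pick the maneuver that minimizes expected fatalities."""
    return min(options, key=expected_fatalities)

options = [
    # Strike the fallen object: the AV's two occupants at high risk.
    {"name": "straight", "fatality_probs": [0.9, 0.9]},
    # Swerve left: car with fewer airbags, unbelted passenger.
    {"name": "left", "fatality_probs": [0.8]},
    # Swerve right: better-equipped car, belted passenger.
    {"name": "right", "fatality_probs": [0.3]},
]

print(choose(options)["name"])  # prints "right"
```

Note what the rule does: precisely because the right-hand car is safer, it becomes the target. Minimizing expected deaths systematically penalizes road users who invested in their own safety, which is the unfairness the paragraph above describes.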

Picture: Iyad Rahwan/MIT Moral Machine Group

Among the numerous benefits of Autonomous Vehicles (AVs), improving traffic safety is expected to be one of the most significant. This improvement derives mainly from eliminating driver error in Level 4 and Level 5 AVs (per the NHTSA definitions), which take full control of the vehicle and remove the human's responsibility. Although many studies have projected a tremendous reduction in crash frequency and severity once self-driving cars operate on roads, a controversial concern has been raised: in real-world operation, the possibility of having to choose between two (or more) evils in an unavoidable crash, especially one with fatal outcomes, cannot be ruled out. The reaction of AVs in unavoidable crashes has a high profile these days, and many researchers have addressed it. Three major groups of solutions have been discussed: (1) the AV should act on a human driver's instinct, (2) the AV should minimize the total damage in the system, and (3) the AV's decision should be compatible with ethics.

Based on these three opinions, researchers have proposed a few decision-making approaches for AVs in unavoidable crashes. In a study by Bonnefon et al. (2016) with 1,900 participants, several hypothetical unavoidable-crash scenarios were designed to elicit participants' decisions in the AV's social dilemma. The results confirm earlier findings on the Trolley Problem: respondents chose to sacrifice one life to save more. Using data collected from the Moral Machine website, comprising about 1.3 million participants, Noothigattu et al. (2017) aggregated people's opinions on the AV ethical dilemma and proposed a machine learning approach for making machines choose morally. Despite the algorithm's accurate performance, machine learning approaches cannot explain their errors; in some cases the algorithm's decision is not traceable. Moreover, the learning process rests on human ethical choices in designed scenarios, and given the unlimited number of alternative scenarios, the algorithm may produce biased results for some of them. Goodall and Nyholm et al. argued that the Trolley Problem is an oversimplified version of AV decision making in an ethical dilemma (Goodall 2016; Nyholm and Smids 2016): once the uncertainty in each scenario's outcome is taken into account, the AV dilemma cannot be treated as a trolley problem, and translating it into a risk management problem is more realistic (Goodall 2016). Still, risk management can be seen as the consequentialist idea with a new layer of complexity, the likelihood of each outcome, and it does not account for ethics.
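The risk-management reframing can be sketched as expected-harm minimization: each maneuver has several possible outcomes, each with a probability and a severity, and the AV picks the maneuver with the lowest probability-weighted severity. This is my reading of the idea, not Goodall's code, and all numbers below are invented for illustration.

```python
# Hedged sketch of the risk-management framing: minimize expected
# severity over uncertain outcomes. Maneuvers, probabilities, and
# severity scores are hypothetical examples.

def expected_risk(maneuver):
    """Sum of probability * severity over the maneuver's possible outcomes."""
    return sum(p * severity for p, severity in maneuver["outcomes"])

maneuvers = [
    {"name": "brake hard",
     "outcomes": [(0.7, 2.0),    # 70%: moderate-injury collision
                  (0.3, 0.0)]},  # 30%: object bounces clear, no harm
    {"name": "swerve",
     "outcomes": [(0.5, 1.0),    # 50%: minor side impact
                  (0.5, 5.0)]},  # 50%: severe multi-car crash
]

best = min(maneuvers, key=expected_risk)
print(best["name"])  # prints "brake hard" (risk 1.4 vs 3.0)
```

The sketch also shows why this framing inherits the consequentialist critique: the severity scores and their weighting still encode a judgment about whose harm counts for how much, which is exactly where ethics re-enters.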

Overall, self-driving cars' ethical decision making is often compared with so-called Machine Ethics. Machine ethics was first discussed by Mitchell Waldrop in the context of robotics (Waldrop 1987), and several attempts have since been made to make ethics computable, or at least formal. Framing the AV's decision as the still-unsolved machine ethics problem has been criticized by transportation experts, who argue that such a dilemma has a negligible chance of occurring in reality; critics warn against losing sight of the bigger good, saving many lives (Zhao et al. 2016). Valid as this comment is, AVs still need to be programmed to deal with such an unpleasant situation, even if it happens very rarely.


Bonnefon, J.-F., Shariff, A., and Rahwan, I. (2016). "The social dilemma of autonomous vehicles." Science, 352(6293), 1573-1576.

Noothigattu, R., Gaikwad, S. N. S., Awad, E., Dsouza, S., Rahwan, I., Ravikumar, P., and Procaccia, A. D. (2017). "A voting-based system for ethical decision making." arXiv preprint arXiv:1709.06692.

Nyholm, S., and Smids, J. (2016). "The ethics of accident-algorithms for self-driving cars: an applied trolley problem?" Ethical theory and moral practice, 19(5), 1275-1289.

Goodall, N. J. (2016). "Away from trolley problems and toward risk management." Applied Artificial Intelligence, 30(8), 810-821.

Waldrop, M. M. (1987). "A question of responsibility." AI Magazine, 8(1), 28.

Zhao, H., Dimovitz, K., Staveland, B., and Medsker, L. "Responding to challenges in the design of moral autonomous vehicles." Proc., The 2016 AAAI Fall Symposium Series: Cognitive Assistance in Government and Public Sector Applications, Technical Report FS-16-02, 169-173.
