Abstract
The rise of self-driving cars raises numerous ethical conundrums, but none has attracted so much public attention as the question of how to programme AVs in crash scenarios. How should a car respond when difficult, life-and-death choices must be made? The most popular approach to answering this question is to employ trolley problems (trolleyology). Trolleyology, employed within the context of AVs, pits one human life against another on the basis of their distinctive characteristics: old vs young, individual vs group member, innocent vs guilty, etc. In doing so, it attempts to create value hierarchies from human particularity. This article argues that this approach fails, by considering the distinctive manner in which trolley problems are employed within crash scenarios involving AVs. It identifies the evaluative error of establishing human value on human particularity through a process of abstract individualisation. Properly speaking, individual humans are valued not on the basis of their particularities, but on the basis of their innate unsubstitutability.
1 Introduction
The prospect of autonomous vehicles (AVs) has captured the public’s imagination. The ability to forgo parking, traffic jams, and time wasted commuting is seductive, not to mention increases in the speed limit and road capacity; ecological advantages; and even a reduction in state expenses on signage, accident clean-up, and police investigations.[1] The greatest advantage is a radical reduction in road deaths, currently 1.3 million per annum – costing the global economy 2.8 trillion dollars.[2] This loss of human life is tragic, and on this basis alone the advantages of AVs are so overwhelming that there is a coherent argument to be made for a moral imperative to implement self-driving cars as soon as possible. Indeed, even a reduction in road deaths by one person would make any delay unethical.[3]
Yet there are reasons to pause. Not only is it questionable to justify AV implementation based on the “highest moral imperative” implied in death reduction,[4] but a failure to implement widespread autonomous use in an ethical manner may backfire. After all, it is society that will either accept or reject the use of AVs. If society perceives AVs to be amoral, or to be implemented unethically, it could reject the technology altogether.[5] If society is at all sceptical about the ethical consequences of AVs, the result would be slow adoption, opportunity costs, and the underuse of AVs.[6] Consequently, the ethics of implementing AVs on public roads are of paramount importance.[7]
In this regard, no other ethical dilemma involving AVs has captured the public’s imagination more than that of crash scenarios. This is no surprise. With millions of accidents occurring each year, the moral dilemmas of collisions are impressed upon society daily. While some may argue that AVs will never face these dilemmas,[8] many argue that they are inevitable.[9] A solution to programming decision-making algorithms must be found; we cannot ignore this problem and optimistically hope that AVs will be omniscient, omnipotent beings capable of avoiding all dilemma scenarios.[10]
Arguably the most common approach for debating these ethical quagmires is that of the trolley problem – a thought experiment first introduced by the philosopher Philippa Foot.[11] This particular philosophical approach to ethics, sometimes referred to as trolleyology,[12] remains immensely popular: from popular books[13] to Facebook pages with hundreds of thousands of followers,[14] there seems to be no stopping society’s fascination with no-win dilemmas. This fascination has spilled over into the area of self-driving cars, with numerous works published on the topic of AVs and trolley problems.[15] Even presidents and Google (the owner of Waymo – a company that develops AVs) have responded to the phenomenon.[16]
There is substantial criticism as to the suitability of trolley problems as an approach to programming AVs in crash scenarios.[17] This article, however, will explore one particular criticism that hitherto has not been noted in these debates: that human value is based on human unsubstitutability and not on human particularity.[18] Therefore, one cannot, and should not, relativise human value on the basis of peripheral human characteristics.
To establish this point, we will begin with a very brief discussion of the most popular presentation of trolleyology within the context of AVs and crash scenarios: The Moral Machine Experiment. We will then proceed to discuss our particular criticism of trolleyology, before discussing an apparently obvious connection between trolley problems and triage.
2 The (im)Moral Machine Experiment
That the public is highly interested in trolley problems involving AVs can hardly be disputed. Consider, for example, the immense public response to the on-going Moral Machine Experiment (MME). The experiment comprises a multilingual online “serious game” that presents respondents with numerous trolley problem scenarios, asking them to assist an AV in deciding who to kill in a dilemma: human or animal, child or elderly person, more people or fewer, or between people with certain characteristics (able-bodied or disabled). The number of respondents is impressive. Over its first eighteen months, more than 10 million people engaged with the experiment, producing almost 40 million responses from 233 countries and territories.[19] [20] This immense dataset reveals interesting trends within its sample group – even exposing cultural predictors of preference. On the whole, those sampled preferred saving humans over animals; more lives over fewer; younger people over older; and showed a slight preference for those with “social value”, such as saving doctors rather than bankers.[21]
Using these findings, the authors argue that AVs – and the MME itself – present “a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences.”[22] The MME’s team is adamant that their approach to solving moral dilemmas like these is to be preferred to other approaches generally (such as philosophical/normative approaches). This applies not only to crash scenarios involving AVs, but also to general moral dilemmas such as triage in other crises.[23] Two months after their article in Nature, the same team published a follow-up article.[24] In it, they berate ethicists and moral philosophers for spending centuries in debate yet failing to produce a formal specification of ground-truth ethical principles. The authors argue (following Dwork et al.) that without such ground-truth principles, an approximation of society’s agreed ground truth must be used. They believe that ethicists will be unable to agree on this approximation and that a better approach is to create an algorithm for aggregating individual preferences. They propose “a voting rule to aggregate [society’s] preferences into a collective decision.”[25]
One may interpret their findings as promulgating a democratic approach to solving moral dilemmas founded on the belief that the community may decide what is right and wrong through a simplistic public vote. This vote may then be used to program machines, which will be required to implement these preferences “unerringly.” This “unerring” application suggests a uniform approach that does not waver but implements the community’s preferences blindly. While their confidence is laudable, there are serious challenges with uncritically adopting the MME’s suggested approach. Beyond the challenge of collecting unconsidered, initial intuitions from vast numbers of people in ways that avoid poorly formed moral theories, public opinion is rarely synonymous with good ethical thinking. Consider how slavery was publicly accepted for hundreds of years.[26] It is no wonder that the MME has received serious criticism for its ethical proposals.[27]
3 The Problem with Trolley Problems
Beyond questions about the lack of reflection, the biased data, and the challenges surrounding ethics by public vote in the MME, there are further questions about the suitability of trolley problems themselves for addressing moral dilemmas involving AVs and crash scenarios.[28] Objections include the disanalogies in the allotted time to make a choice; the role of moral and legal responsibility implied in programming AVs; the certainty explicit in trolley problems that is not present in the real world; and the distinction between individual and collective decision-making, which differs between trolley problems and AV programming.[29] Some go so far as to question the suitability of trolley problems for addressing moral dilemmas at all,[30] although this view is hardly widely accepted.[31]
While numerous concerns have been raised about trolley problem gambits, what has often gone unremarked is the inherent challenge of establishing a relative value hierarchy between human beings based on individual characteristics. The question of who lives and dies in crash scenarios involving an AI speaks directly to the question of human rights (particularly the right to life). Rights, and in particular the concept of universal rights, are based on the notion of human value often presented in the idea of human dignity.[32] [33] Here, unqualified human value is the foundation for universal human rights. That is to say: it is because humans have a distinctive value that they have inherent rights. Posed as they are within the context of AVs, trolley problems place one human being’s right to life against another’s on the basis of their unique features or characteristics. Since one’s right to life is intimately linked with one’s value, the trolley problem, employed this way, attempts to draw out the relative value of different human beings based on characteristics such as age, gender, number/group association, and social status – in other words, on the particularities of the individuals involved.
At first, this may appear coherent. That is to say, it is individual human beings (at least in the West) that have human rights. One thinks, for example, of the UN’s Universal Declaration of Human Rights,[34] or the statements contained in Vatican II’s declaration Dignitatis Humanae.[35] These documents affirm the individual locus of human value. While they are challenged in some circles for their emphasis on individualism at the expense of collectivism,[36] there is general agreement that it is the individual in whom rights are invested. Indeed, it seems almost natural to argue that one’s individuality is the basis for one’s value. An ordinary person may say: “that I am unique makes me special, and because I am special I am valuable.” Therefore, it makes sense to argue that the individuality of one human being is significant in determining their respective value. However, arguing that absolute value is inherent in individual human beings is different from arguing that human value is, or should be, established through human particularity.
4 Individuality, not Particularity, as the Basis of Human Value
The question of human value is ultimately a question of human identity. Who and what humans are should be considered the basis for their unqualified universal value. Yet one should be careful with the way human identity is presented in debates about human value. The question of who someone is (their identity) may be answered in a number of ways, the most common of which is through a process of distinction.[37] This process involves producing a list of characteristics that are common to all human beings (age, gender, weight, height, etc.) and then comparing how instances of human beings differ, or coincide, in these characteristics. For example, one may ask: “Who is the host of the party?” Here one may answer by pointing to the individual whose descriptive identity matches that of the host – the individual who is 35 years old, 1.65 m tall, blond, female, and named Jane. This process of comparing and contrasting Jane’s characteristics with those of the other members of the party predicates human individuality on an ontologically prior list of attributes generally shared by other instances of Homo sapiens:[38] [39] all human beings have height, age, gender, race, and name. One’s particularity (identity), in this case, rests in the unique manner in which one exhibits these characteristics.
Yet this process of identification is markedly different from the way human characteristics are deployed within the context of trolley problems involving AVs. In trolley problem cases, these characteristics are directly linked to human value. In these instances, the manner in which someone exhibits elements of commonly shared attributes determines that individual’s relative value. The particularities of one individual are the determining factors as to whether she lives, is seriously injured, or dies. Consequently, characteristics such as one’s age, gender, social status (criminal, rich, poor, etc.), and even one’s connection to a group of people (being one of many) may increase or decrease one’s value relative to others. We can note three profound consequences of this construction of human value.
First, the challenge of internal fluctuation. Individual characteristics are subject to fluctuation throughout a human being’s life. For example, age, height, weight, and even gender can change radically over the course of a human lifespan. If we were to say that a human being’s value is based on their age, gender, membership of a group, or social status, we would risk destabilising the inherent absolute value of each human being. In such a construction, one’s value would fluctuate over the course of one’s life, rising and falling in relation to others and to oneself. A person’s value, for example, may be low at the beginning of their life, rise in the middle, and drop off as they come to the end of it.
Second, the challenge of external fluctuation. Not only do individuals’ characteristics fluctuate throughout their lives, but they also change in relation to others. For example, a young person would have a value equal to that of another young person on the characteristic of age, but a higher value in relation to an older person. As the young person’s life progresses, their value would change relative to others whose age was similarly fluctuating. Consequently, their relative value changes throughout their life in relation to others. Establishing individual human value on such fluctuating characteristics (internal and external) would contradict the well-established notion of universality when it comes to human rights, i.e. that all human beings, everywhere, and at all times, have the same value.
Third, contrastive construction would challenge the notion of inalienable human value: that is to say, that one’s value cannot be removed from one’s self. This is particularly true when we consider the degree of distinction in models based on comparative particularity. One could ask: is it merely the fact that one is distinct – that one is a particularity at all – that makes one valuable? Or does the degree of distinction make a difference? Posing trolley problems as they have often been employed within AV crash scenarios presupposes the latter – that degrees of distinction are relevant. One of the findings of the MME is that, on the whole, the young were valued more than the elderly. As the distinction between young and old wanes, so too does the likelihood of choosing one over the other. When one is asked to choose between a 5-year-old and a 95-year-old, the choice (according to the MME) is quite clear. However, when comparing a 40-year-old with a 45-year-old, the choice is – presumably – not so clear. Consequently, one’s value is directly linked to the degree to which one diverges from other human beings in one or many commonly shared characteristics. This predicates inalienable human value on relative human distinctions and results in a relative value hierarchy based on degrees of distinction. As these distinctions increase and decrease, as individuals become similar or dissimilar to others, their value rises and falls.
This would have practical implications not only for crash scenarios involving AVs but in general. For example, people of radically distinct races (such as the Inuit or the Murri) may be individually so distinct that their value increases relative to each other, while identical twins (whose individuality is far less pronounced) may have little relative value distinction simply because they appear alike. The practical consequences would be problematic. One might, perhaps, argue that were an AV to choose between an Inuit and a Murri, the moral dilemma would be great, but the dilemma in choosing between identical twins would not be – a simple random decision may suffice. Indeed, it is telling that this latter choice is very rarely presented.
The challenges of establishing a universal, inalienable basis for human value through contrastive characterisation are insurmountable. Not only is individual human particularity practically infinite – especially when degrees of distinction are combined with fluctuating internal and external characteristics in comparative models – but these models imply contingent human value. In them, human value is contingent on non-fundamental human characteristics such as age, gender, and race rather than on more ontologically fundamental properties of the human being, such as personhood. Such contingent value threatens the universality of human rights, as individual members of Homo sapiens display, or fail to display, superficially valued characteristics to varying degrees. Particular individuals could find their value affirmed or denied in different contexts. Consequently, moral decisions in these cases are not simplified but complexified, with the danger that numerous sets of individuals could have their rights denigrated.
Therefore, in the context of questions about human value, identity should be predicated not on human individual particularity, but rather on human individual unsubstitutability.[40] Put simply, when one speaks about the value of a human being, one is not speaking about the ways one human being differs from another. Rather, one is seeking descriptions that can account for the unique status individual human beings have compared to other classes of beings (rocks, trees, bees, etc.). It can be noted that, on the whole, the MME establishes this fact in its acceptance that human beings should be spared above animals, whatever the animal’s characteristics (its age, gender, social status, or number). While some object to speciesism (a tangent we will refrain from exploring here), it is widely accepted that human beings have an unqualified, universal, inalienable value that animals do not. This is because, strictly speaking, individual human beings are valued not on the basis of their relative particularity, but on their identity as personal beings apart from their fluctuating characteristics. It is this category (personal beings) that is the foundation of their unqualified, universal, inalienable value.[41]
One’s value cannot be described by merely attributing a list of uniquely arranged properties to a subject that happens to bear a person’s proper name. The list of properties that may describe the subject is indefinitely long, constantly changing, and relative to other subjects. Ascribing one’s value to this list assumes that the subject in question is an otherwise featureless subsisting individual entity that exists, somehow, behind or beneath the ascribed properties – a formless “ghost in the machine,” to use Ryle’s phrase.[42] To be sure, it may be more accurate to claim that the subject’s personal identity sits in the unique way they concretely exemplify these characteristics across time and change – i.e. that a subject defines what each of these attributes concretely is in their own case.[43] Yet it is inaccurate to claim that one’s value is describable merely by appealing to a list of attributes that happen to apply to them at a point in time.
Let us give an example. Take the descriptive phrase: Sara is young, law-abiding, and single. Here Sara is the subject who continues across time and space, the actual person in whom unqualified, universal, inalienable value rests. Her properties – that she is young, law-abiding, and single – are abstracted from a prior list of properties associated with human beings elsewhere. These properties are abstracted and attributed to the subsistent personal identity (Sara) that, in this construction, is otherwise featureless. What has value is not the features of Sara, but Sara herself. Sara concretely defines what these features are as her personal identity inhabits them at this moment in time. Yet they are not the basis upon which her value rests. Here we touch on, and perhaps even reject, Hume’s “bundle of qualities” theory[44] and affirm a more Kantian approach to understanding human beings as all equally irreplaceable.[45] However, it should be noted that we do not affirm this on the same basis as Kant (i.e. the rational nature of human beings) but on the basis of their interrelated nature. We have dealt with this in depth elsewhere.[46] In brief, a person qua person is more than the sum of their qualities and as such is entirely irreplaceable.[47]
One’s value rests solely in the fact that one is included in the class of beings (termed personal beings) who are evaluated to have such dignity. This is the categorical and evaluative force of the term person.[48] It is individual persons who have value, not individuals who are particularities. Exactly what the ontologically primary basis for human personal identity is – one sufficient to ground human dignity and value – is beyond the scope of our project here. It may be species membership, historical progeny, or DNA. It may be metaphysical, as in Scotus’ theory of haecceity.[49] All that is being said in our current project is that one’s value is not a result of one’s distinctive set of characteristics.
What is important about the category person is that it does not lend itself to degrees.[50] All members of this category of being carry equal value. In this way, human personal identity is ontologically prior to one’s individual characteristics. Human beings are more than simply the sum of their parts, especially when these parts fluctuate and are shared by others.[51]
5 Trolley Problems as Crisis Triage
At this point, it is necessary to provide a very brief note on the seemingly obvious connection between AV trolley problems and triage in emergency situations, such as those that occur in hospital accident and emergency/casualty wards. One might argue that the AV trolley problem is similar to the question of emergency triage – where one must choose whom to treat and whom to let die – and that in triage the very nature of the emergency necessitates a judgment on the relative value of one person to another. Apart from the normative distinction present in medical triage between killing and letting die – which is not present in AV trolley problems – the analogy between AV trolley problems and medical triage fails in a significant way.
The discussions around triage are extensive,[52] but the most widely adopted triage protocol (at least in the US) is START – Simple Triage and Rapid Treatment.[53] Within this framework, patients are judged not on their relative value (who is more valuable than whom) but on their prospects of survival. For example, a child with a severe head injury who is bleeding profusely and has little chance of survival should not be given emergency treatment ahead of a senior citizen who has an abdominal injury and would survive if given urgent treatment. Likewise, COVID-19 triage often focused on survivability, not on any particular characteristic. The Swiss Academy of Medical Sciences’ guidelines, for example, state explicitly: “the short-term survival prognosis is the primary decision criterion for purposes of triage.”[54] On the whole, triage deals not with the question of relative human value, but with the question of chances of survival. Indeed, any discrimination based on personal characteristics would be against the policy of most healthcare providers.
Thus, if we are to take triage as our model for trolley problems involving AVs (and we are not here arguing that this should be the case), the question posed in AV crash scenarios should not be who should be saved – a child or a 45-year-old athlete, as the MME would have us do – but who is most likely to survive a collision: a child or a 45-year-old athlete.[55] In this case, it may be possible to argue that AVs should be programmed to strike larger road users rather than smaller ones if it can be shown that larger road users are more likely to survive. This may generally mean striking adults rather than children. Yet it should not be argued that younger people per se should be saved above older people.
6 Conclusion: Individual Persons, not Particular Individuals
As our introductory remarks have noted, it is understandable that questions arise about crash scenarios and AVs. However, trolleyology is not the most suitable approach to answering these questions. These thought experiments force us to place human beings on a value scale based on non-ontologically-primary human characteristics that are abstracted from a wider list of commonly shared attributes. This is contrary to the foundation of inalienable human value. Human value is not predicated on human particularity. Rather, it is predicated on the inclusion of a being in the category of person. This category is composed of members who are unsubstitutable, one for another, in terms of their personhood, not their individuality. Consequently, members of this category of beings are universally evaluated to have equal, inalienable, and unqualified value. Trolley problems that force choices between human beings on the basis of their divergent characteristics force participants to create a false value hierarchy and to place human beings – whose value is independent of these characteristics – into categories alien to their inherent value. While such choices at first appear coherent when they deal with superficial characteristics (such as age), their immorality becomes clear with more significant characteristics. Etienne, for example, mentions students’ abhorrence of the trolley problem when presented with fundamental characteristics (such as race or religion). He notes that students were happy to choose between the young and the old, but refused to choose between black and white, or Muslim and Christian.[56]
That trolley problems are inappropriate ways of establishing relative human value for crash scenarios involving AVs, and that human value is distinct from human particularity, raises numerous questions. What would a better approach be? What are the implications of our conclusions for other practices? How does one make the right decision when saving everyone is impossible? Questions like these are endless and beyond the scope of a single discussion. We, therefore, reluctantly refrain from discussing them here. For now, it must suffice to state that pragmatism often necessitates actions that are difficult to justify on reflection. Actions taken by drivers, doctors, and politicians in resource-sensitive scenarios are afforded a certain leeway. When a driver swerves to avoid a child and kills a senior citizen, no one argues that they made their decision by valuing the senior citizen less than the child. We understand that in that moment the driver did not reflect on their actions but simply reacted, almost by instinct. One may even coherently argue that they have diminished moral responsibility for the devaluation of the senior citizen in relation to the child, in that their response was reactive rather than thoughtful. Even if they had taken the time prior to the situation to reflect and decide, it is questionable whether their prior decision would be congruent with their actions at the time of the crash. Human instinctive reactions often override previously thought-through decisions.
Crash scenarios involving AVs, however, are different. One can reasonably predict that such scenarios are inevitable. AVs will need to choose between pedestrians and occupants, horse riders and cyclists, large vans and passenger cars. While these scenarios are time-sensitive as they occur, the decision as to how to programme the AV may be taken with a great deal of foresight and thought. Consequently, as we continue to reflect on these scenarios, the implications of our conclusion here may also be afforded greater reflection. In due course, for example, we may find a more suitable approach to evaluating these scenarios. That we are not currently at that point, however, does not imply that the conclusions presented in this project are illegitimate.
It is interesting that the world’s first ethical guidelines for driverless cars (implemented in Germany) expressly prohibit any programming of AVs that would discriminate between persons on the grounds of personal characteristics such as age, sex, or physical or mental constitution.[57] What exactly this means for the ethics involved in crash scenarios is still unclear. How do we, for example, protect vulnerable road users if we are not able to appeal to personal characteristics, or even to numbers? The recent UK Highway Code makes it very clear that vulnerable road users are to be given a certain preferential treatment based on certain characteristics, such as being a child, a pedestrian, a cyclist, or a horse rider – it does this on the basis of their vulnerability and thereby implied survivability.[58] It may be that grounds other than personal characteristics need to be sought when programming cars: for example, the roles played by road users (pedestrian, cyclist, etc.) rather than their characteristics. Much further discussion is necessary. Nevertheless, what is to be welcomed is the German government’s refusal to base human value on particular individuals rather than on individual persons.
Funding information: Funding has been received to support this research work through the NCCR (Automation – Grant ID: 180545), a research activity supported by the Swiss National Science Foundation.
Author contributions: S. R. M.: Conceptualisation, Writing – Original Draft; B. S. E.: Writing – Review and Editing; D. M. S.: Writing – Review and Editing.
Conflict of interest: Authors state no conflict of interest.
References
“Algiers Charter: Universal Declaration of the Rights of Peoples,” 1976. http://permanentpeoplestribunal.org/wp-content/uploads/2016/06/Carta-di-algeri-EN-2.pdf.
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. “The Moral Machine Experiment.” Nature 563, no. 7729 (2018), 59–64. 10.1038/s41586-018-0637-6.
Bauman, Christopher W., A. Peter McGraw, Daniel M. Bartels, and Caleb Warren. “Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology.” Social & Personality Psychology Compass 8, no. 9 (2014), 536–54. 10.1111/spc3.12131.
Bauman, Melissa and Alyson Youngblood. “Why Waiting for Perfect Autonomous Vehicles May Cost Lives.” Rand Corporation (blog), 2017. https://www.rand.org/blog/articles/2017/11/why-waiting-for-perfect-autonomous-vehicles-may-cost-lives.html.
Bostyn, Dries H., Sybren Sevenhant, and Arne Roets. “Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas.” Psychological Science 29, no. 7 (2018), 1084–93. 10.1177/0956797617752640.
Cathcart, Thomas. The Trolley Problem, or, Would You Throw the Fat Man off the Bridge? New York: Workman, 2013.
Cova, Florian. “What Happened to the Trolley Problem?” Journal of the Indian Council of Philosophical Research 34 (2017), 543–64. 10.1007/s40961-017-0114-x.
Davnall, Rebecca. “Solving the Single-Vehicle Self-Driving Car Trolley Problem Using Risk Theory and Vehicle Dynamics.” Science and Engineering Ethics 26, no. 1 (2020), 431–49. 10.1007/s11948-019-00102-6.
Department for Transport. “The Highway Code,” 2022.
Dewitt, Barry, Baruch Fischhoff, and Nils-Eric Sahlin. “‘Moral Machine’ Experiment Is No Basis for Policymaking.” Nature 567, no. 7746 (2019), 31. 10.1038/d41586-019-00766-x.
“Dignitatis Humanae – The Case for Religious Freedom.” Vatican II, 1965. https://www.vatican.va/archive/hist_councils/ii_vatican_council/documents/vat-ii_decl_19651207_dignitatis-humanae_en.html.
Engler, Mark. “Toward the ‘Rights of the Poor’: Human Rights in Liberation Theology.” Journal of Religious Ethics 28, no. 3 (2000), 339–65. 10.1111/0384-9694.00053.
Ethik-Kommission. “Ethik-Kommission: Automatisiertes und vernetztes Fahren.” Bundesminister für Verkehr und digitale Infrastruktur, 2017. https://www.bmvi.de/SharedDocs/DE/Publikationen/DG/bericht-der-ethik-kommission.pdf?__blob=publicationFile.
Etienne, Hubert. “The Dark Side of the ‘Moral Machine’ and the Fallacy of Computational Ethical Decision-Making for Autonomous Vehicles.” Law, Innovation and Technology 13, no. 1 (2021), 85–107. 10.1080/17579961.2021.1898310.
Evans, Katherine, Nelson de Moura, Stéphane Chauvier, Raja Chatila, and Ebru Dogan. “Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project.” Science and Engineering Ethics 26, no. 6 (2020), 3285–312. 10.1007/s11948-020-00272-8.
Facebook. “Trolley Problem Memes.” Accessed December 29, 2018. www.facebook.com/TrolleyProblemMemes/.
Fleetwood, Janet. “Public Health, Ethics, and Autonomous Vehicles.” American Journal of Public Health 107, no. 4 (2017), 532–37. 10.2105/AJPH.2016.303628.
Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, M. V. Dignum, Christoph Luetge, Robert Madelin, and Ugo Pagallo. “AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Sciences 28, no. 4 (2018), 689–707. 10.1007/s11023-018-9482-5.
Foot, Philippa. “The Problem of Abortion and the Doctrine of Double Effect.” Oxford Review 5 (1967), 5–15.
Francis, Kathryn B., Charles Howard, Ian S. Howard, Michaela Gummerum, Giorgio Ganis, Grace Anderson, and Sylvia Terbeck. “Virtual Morality: Transitioning from Moral Judgment to Moral Action?” PLoS ONE 11, no. 10 (2016), 1–22. 10.1371/journal.pone.0164374.
Fried, Barbara H. “What Does Matter? The Case for Killing the Trolley Problem (or Letting It Die).” The Philosophical Quarterly 62, no. 248 (2012), 505–29. 10.1111/j.1467-9213.2012.00061.x.
Furey, Heidi and Scott Hill. “MIT’s Moral Machine Project Is a Psychological Roadblock to Self-Driving Cars.” AI and Ethics 1, no. 2 (2020), 151–5. 10.1007/s43681-020-00018-z.
Geisslinger, Maximilian, Franziska Poszler, Johannes Betz, Christoph Lütge, and Markus Lienkamp. “Autonomous Driving Ethics: From Trolley Problem to Ethics of Risk.” Philosophy & Technology 34, no. 4 (2021), 1033–55. 10.1007/s13347-021-00449-4.
Ghanbari, Vahid, Ali Ardalan, Armin Zareiyan, Amir Nejati, Dan Hanfling, Alireza Bagheri, and Leili Rostamnia. “Fair Prioritization of Casualties in Disaster Triage: A Qualitative Study.” BMC Emergency Medicine 21, no. 1 (2021), 119. 10.1186/s12873-021-00515-2.
Gill, Tripat. “Ethical Dilemmas Are Really Important to Potential Adopters of Autonomous Vehicles.” Ethics & Information Technology 23, no. 4 (2021), 657–73. 10.1007/s10676-021-09605-y.
Gogoll, Jan and Julian F. Müller. “Autonomous Cars: In Favor of a Mandatory Ethics Setting.” Science and Engineering Ethics 23, no. 3 (2017), 681–700. 10.1007/s11948-016-9806-x.
Goodall, Noah J. “Away from Trolley Problems and Toward Risk Management.” Applied Artificial Intelligence 30, no. 8 (2016), 810–21. 10.1080/08839514.2016.1229922.
Hansson, Sven Ove, Matts-Åke Belin, and Björn Lundgren. “Self-Driving Vehicles – an Ethical Overview.” Philosophy & Technology 34, no. 4 (2021), 1383–408. 10.1007/s13347-021-00464-5.
Heinze, Eric. “The Myth of Flexible Universality: Human Rights and the Limits of Comparative Naturalism.” Oxford Journal of Legal Studies 39, no. 3 (2019), 624–53. 10.1093/ojls/gqz019.
Himmelreich, Johannes. “Ethics of Technology Needs More Political Philosophy.” Communications of the ACM 63, no. 1 (2019), 33–5. 10.1145/3339905.
Holstein, Tobias and Gordana Dodig-Crnkovic. “Avoiding the Intrinsic Unfairness of the Trolley Problem.” In Proceedings of the International Workshop on Software Fairness, 32–7. FairWare ’18. New York: Association for Computing Machinery, 2018. 10.1145/3194770.3194772.
Hume, David. A Treatise of Human Nature, edited by Lewis Selby-Bigge and P. H. Nidditch. 2nd ed. Oxford: Oxford University Press, 1985.
JafariNaimi, Nassim. “Our Bodies in the Trolley’s Path, or Why Self-Driving Cars Must *Not* Be Programmed to Kill.” Science, Technology, & Human Values 43, no. 2 (2018), 302–23. 10.1177/0162243917718942.
Jenkins, Ryan, David Cerný, and Tomás Hríbek. Autonomous Vehicle Ethics: The Trolley Problem and Beyond. Oxford: Oxford University Press, 2022. 10.1093/oso/9780197639191.001.0001.
Kalra, Nidhi and David G. Groves. The Enemy of Good: Estimating the Cost of Waiting for Nearly Perfect Automated Vehicles. Rand Corporation, 2017. https://www.rand.org/pubs/research_reports/RR2150.html. 10.7249/RR2150.
Kamm, F. M. “The Use and Abuse of the Trolley Problem: Self-Driving Cars, Medical Treatments, and the Distribution of Harm.” In Rights and Their Limits: In Theory, Cases, and Pandemics, edited by F. M. Kamm. Oxford: Oxford University Press, 2022. 10.1093/oso/9780197567739.003.0010.
Kant, Immanuel. “Groundwork of the Metaphysics of Morals.” In Practical Philosophy, edited by Mary J. Gregor. Cambridge: Cambridge University Press, 1996.
Kelsey, David H. Eccentric Existence: A Theological Anthropology. Louisville: Westminster John Knox, 2009.
Kochupillai, Mrinalini, Christoph Lütge, and Franziska Poszler. “Programming Away Human Rights and Responsibilities? ‘The Moral Machine Experiment’ and the Need for a More ‘Humane’ AV Future.” Nanoethics 14, no. 3 (2020), 285–99. 10.1007/s11569-020-00374-4.
Königs, Peter. “Of Trolleys and Self-Driving Cars: What Machine Ethicists Can and Cannot Learn from Trolleyology.” Utilitas 35, no. 1 (2023), 70–87. 10.1017/S0953820822000395.
Kuschner, Ware G., John B. Pollard, and Stephen C. Ezeji-Okoye. “Ethical Triage and Scarce Resource Allocation during Public Health Emergencies: Tenets and Procedures.” Hospital Topics 85, no. 3 (2007), 16–25. 10.3200/HTPS.85.3.16-25.
Lerner, E. Brooke, Richard B. Schwartz, Phillip L. Coule, Eric S. Weinstein, David C. Cone, Richard C. Hunt, Scott M. Sasser, et al. “Mass Casualty Triage: An Evaluation of the Data and Development of a Proposed National Guideline.” Disaster Medicine and Public Health Preparedness 2, no. Suppl 1 (2008), 25–34. 10.1097/DMP.0b013e318182194e.
Liljamo, Timo, Heikki Liimatainen, and Markus Pöllänen. “Attitudes and Concerns on Automated Vehicles.” Transportation Research Part F: Traffic Psychology and Behaviour 59 (2018), 24–44. 10.1016/j.trf.2018.08.010.
Liu, Hin-Yan. “Irresponsibilities, Inequalities and Injustice for Autonomous Vehicles.” Ethics and Information Technology 19, no. 3 (2017), 193–207. 10.1007/s10676-017-9436-2.
Martinho, Andreia, Nils Herber, Maarten Kroesen, and Caspar Chorus. “Ethical Issues in Focus by the Autonomous Vehicles Industry.” Transport Reviews 41, no. 5 (2021), 556–77. 10.1080/01441647.2020.1862355.
McFarland, Matt. “Google’s Chief of Self-Driving Cars Downplays ‘The Trolley Problem.’” The Washington Post, 2015.
Milford, Stephen R. “Animals or Not-Animals: Reflections on the Postliberal Move from Particularity to Unsubstitutability.” Pharos Journal of Theology 101 (2020), 1–19.
Milford, Stephen R. Eccentricity in Anthropology: David H. Kelsey’s Anthropological Formula as a Way Out of the Substantive-Relational Imago Dei Debate. Eugene: Pickwick, 2019.
Milford, Stephen R. “The Problem with Sandra: Addressing the Unfortunate Consequences of Relational Ontological Personhood.” Religion and Theology 27, no. 3–4 (2020), 275–98. 10.1163/15743012-02703004.
Milford, Stephen R., Bernice S. Elger, and David M. Shaw. “Various Vulnerabilities in Highway Hierarchies: Applying the UK Highway Code’s Hierarchy of Road Users to Autonomous Vehicle Decision-Making.” International Journal of Technoethics (IJT) 15, no. 1 (2024), 1–12. 10.4018/IJT.342604.
Milford, Stephen R. and David Shaw. “Schrödinger’s Foetus and Relational Ontology: Reconciling Three Contradictory Intuitions in Abortion Debates.” Ethical Theory and Moral Practice (2023). 10.1007/s10677-023-10422-z.
Mills, Alex F., Nilay Tanık Argon, and Serhan Ziya. “Resource-Based Patient Prioritization in Mass-Casualty Incidents.” Manufacturing & Service Operations Management 15, no. 3 (2013), 361–77. 10.1287/msom.1120.0426.
Morley, Jessica, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo, and Luciano Floridi. “The Ethics of AI in Health Care: A Mapping Review.” Social Science & Medicine 260 (2020), 113172. 10.1016/j.socscimed.2020.113172.
Nastjuk, Ilja, Bernd Herrenkind, Mauricio Marrone, Alfred Benedikt Brendel, and Lutz M. Kolbe. “What Drives the Acceptance of Autonomous Driving? An Investigation of Acceptance Factors from an End-User’s Perspective.” Technological Forecasting and Social Change 161 (2020), 120319. 10.1016/j.techfore.2020.120319.
Noothigattu, Ritesh, Snehalkumar “Neil” S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel D. Procaccia. “A Voting-Based System for Ethical Decision Making.” Preprint, arXiv:1709.06692v2 (2018). https://arxiv.org/abs/1709.06692.
Nyholm, Sven and Jilles Smids. “The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?” Ethical Theory and Moral Practice: An International Forum 19, no. 5 (2016), 1275–89. 10.1007/s10677-016-9745-2.
Othman, Kareem. “Public Acceptance and Perception of Autonomous Vehicles: A Comprehensive Review.” AI and Ethics 1, no. 3 (2021), 355–87. 10.1007/s43681-021-00041-8.
Perring, Christian. “Degrees of Personhood.” Journal of Medicine and Philosophy 22, no. 2 (1997), 173–97. 10.1093/jmp/22.2.173.
Robinson, Jonathan, Joseph Smyth, Roger Woodman, and Valentina Donzella. “Ethical Considerations and Moral Implications of Autonomous Vehicles and Unavoidable Collisions.” Theoretical Issues in Ergonomics Science 23, no. 4 (2022), 1–18. 10.1080/1463922X.2021.1978013.
Rowthorn, Michael. “How Should Autonomous Vehicles Make Moral Decisions? Machine Ethics, Artificial Driving Intelligence, and Crash Algorithms.” Contemporary Readings in Law and Social Justice 11, no. 1 (2019), 9–14. 10.22381/CRLSJ11120191.
Ryle, Gilbert. The Concept of Mind. New ed. Chicago: University of Chicago Press, 2002.
SAMW. “Intensive Care Triage under Exceptional Resource Scarcity: Guidance on the Application of Section 9.3 of the SAMS Guidelines «Intensive-Care Interventions» (2013).” Swiss Academy of Medical Sciences, 2021. https://www.sams.ch.
Schäffner, Vanessa. “Between Real World and Thought Experiment: Framing Moral Decision-Making in Self-Driving Car Dilemmas.” Humanistic Management Journal 6, no. 2 (2021), 249–72. 10.1007/s41463-020-00101-x.
Scotus, John Duns. Early Oxford Lecture on Individuation, translated by Allan B. Wolter. St. Bonaventure: The Franciscan Institute, 2005.
Sparrow, Robert. “Why Machines Cannot Be Moral.” AI & Society 36, no. 3 (2021), 685–93. 10.1007/s00146-020-01132-6.
Umbrello, Steven and Roman V. Yampolskiy. “Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.” International Journal of Social Robotics 14, no. 2 (2022), 313–22. 10.1007/s12369-021-00790-w.
United Nations. “Universal Declaration of Human Rights,” 1948. https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf.
Wang, Qingfan, Qing Zhou, Miao Lin, and Bingbing Nie. “Human Injury-Based Safety Decision of Automated Vehicles.” iScience 25, no. 8 (2022), 104703. 10.1016/j.isci.2022.104703.
Wang, Yutian, Xuepeng Hu, Lingfang Yang, and Zhi Huang. “Ethics Dilemmas and Autonomous Vehicles: Ethics Preference Modelling and Implementation of Personal Ethics Setting for Autonomous Vehicles in Dilemmas.” IEEE Intelligent Transportation Systems Magazine 15, no. 2 (2023), 177–89. 10.1109/MITS.2022.3197689.
WHO. “Road Traffic Injuries.” World Health Organization, 2022. https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries.
Williams, Thomas. “John Duns Scotus.” Stanford Encyclopedia of Philosophy, 2015. http://plato.stanford.edu/entries/duns-scotus/.
Wired. “Barack Obama Talks AI, Robo Cars, and the Future of the World.” 2016. https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/.
Wolkenstein, Andreas. “What Has the Trolley Dilemma Ever Done for Us (and What Will It Do in the Future)? On Some Recent Debates About the Ethics of Self-Driving Cars.” Ethics and Information Technology 20, no. 3 (2018), 163–73. 10.1007/s10676-018-9456-6.
World Bank. “GDP (Current US$).” The World Bank, 2022. https://data.worldbank.org/indicator/NY.GDP.MKTP.CD?end=2021&start=1960.
Zhu, Anrun, Shuangqing Yang, Yunjiao Chen, and Cai Xing. “A Moral Decision-Making Study of Autonomous Vehicles: Expertise Predicts a Preference for Algorithms in Dilemmas.” Personality and Individual Differences 186 (2022), 111356. 10.1016/j.paid.2021.111356.
© 2025 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.