Abstract
Automated decision-making is increasingly prevalent, prompting discussions about AI replacing judges in court. This paper explores how machine-made sentencing decisions are perceived, using an experimental public good game with punishment. The study examines preferences for human versus automated punishers and the perceived fairness of the penalties they impose. Results indicate that rule violators prefer algorithmic punishment when penalty severity is uncertain and violations are significant. While rule violators are generally reluctant to delegate punishment to a human judge, they are more likely to do so when the judge has no discretion over the sanction level. Fairness perceptions are similar for human and algorithmic punishers, except when human judges choose a less severe penalty, which enhances perceived fairness.
Funding source: Program FUTURE LEADER of Lorraine Université d’Excellence within the program Investissements Avenir
Award Identifier / Grant number: ANR-15-IDEX-04-LUE
A.1 Contributions and Number of Violators
A.1.1 Percentage of Violators
Table 6: Marginal effects from a Probit regression of full-contribution violations.
Variable | (1) violation, dy/dx (se)
---|---
Treatment 1 | 0.1101 (0.0717)
Treatment 2 | 0.1787^b (0.0726)
Attitudes towards AI | −0.1058^a (0.0594)
Round | −0.0131^c (0.0028)
Age | −0.0077^a (0.0032)
Female | 0.1545^a (0.0599)
Risk seeking | 0.0487^c (0.0123)
Level of studies | 0.0267 (0.0174)
Observations | 1761
Log-likelihood | −1,101.3929
Wald χ²(8) | 52.51
Prob > χ² | <0.0001
Pseudo-R² | 0.0931

Standard errors in parentheses, clustered at the individual level. ^a p < 0.10, ^b p < 0.05, ^c p < 0.01.
Table 7: OLS regression of the amounts contributed to the public good.
Variable | (1) player.contribution, b (se)
---|---
Treatment 2 | −13.9886^a (7.1562)
Treatment 3 | −6.9854 (7.7199)
Attitudes towards AI | 7.7685 (6.3866)
Round | 1.1500^c (0.3115)
Age | 0.4705 (0.3285)
Female | −11.4674^a (6.6734)
Risk seeking | −5.8013^c (1.8295)
Level of studies | −3.0778^a (1.7091)
Constant | 166.8885^c (29.2351)
Observations | 1761
Log-likelihood | −9.56e+03
F-test | 4.3412
R² | 0.0789

Standard errors in parentheses, clustered at the individual level. ^a p < 0.10, ^c p < 0.01.
Table 8: Wilcoxon rank-sum tests of fairness ratings between human and algorithm, by treatment and severity of sanction.
Treatment | Severity | z | P > \|z\|
---|---|---|---
1 | Mild | −1.164 | 0.2444
1 | Severe | 0.397 | 0.6912
2 | Mild | −3.756 | 0.0002
2 | Severe | 1.455 | 0.1457
3 | Mild | −1.128 | 0.2592
3 | Severe | −0.982 | 0.3262
Table 9: Dunn tests for pairwise comparisons of fairness ratings across treatments, by agent type and severity, with Bonferroni correction, following a Kruskal–Wallis test.
Agent type | Severity | Treatments | z | P > \|z\|
---|---|---|---|---
Human | Mild | 1–2 | −3.463 | 0.0008
Human | Mild | 1–3 | 0.042 | 1.000
Human | Mild | 2–3 | 3.401 | 0.0010
Human | Severe | 1–2 | 2.536 | 0.0168
Human | Severe | 1–3 | 1.022 | 0.4605
Human | Severe | 2–3 | −1.342 | 0.2692
Computer | Mild | 1–2 | −1.053 | 0.4386
Computer | Mild | 1–3 | −0.221 | 1.000
Computer | Mild | 2–3 | 0.815 | 0.6228
Computer | Severe | 1–2 | 1.448 | 0.2214
Computer | Severe | 1–3 | 2.499 | 0.0187
Computer | Severe | 2–3 | 1.206 | 0.3419
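For readers who wish to run tests of this kind themselves, the following is a minimal Python sketch, assuming the fairness ratings are stored in a long-format file with hypothetical column names (agent, severity, treatment, fairness); it is not the authors' analysis code.

```python
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # pip install scikit-posthocs

# Hypothetical long-format data: one fairness rating per punished decision.
df = pd.read_csv("fairness_ratings.csv")

# Wilcoxon rank-sum test (as in Table 8): human vs. algorithm within one cell.
cell = df[(df["treatment"] == 2) & (df["severity"] == "mild")]
print(stats.ranksums(cell.loc[cell["agent"] == "human", "fairness"],
                     cell.loc[cell["agent"] == "computer", "fairness"]))

# Kruskal-Wallis across treatments, then Dunn pairwise tests with Bonferroni
# correction (as in Table 9), here for human judges and mild sanctions.
sub = df[(df["agent"] == "human") & (df["severity"] == "mild")]
print(stats.kruskal(*[g["fairness"] for _, g in sub.groupby("treatment")]))
print(sp.posthoc_dunn(sub, val_col="fairness", group_col="treatment",
                      p_adjust="bonferroni"))
```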
Table 10: Example of the punishment table in Treatment 1.
Contribution | Rule |
---|---|
0 | 195 |
10 | 185 |
20 | 175 |
30 | 165 |
40 | 155 |
50 | 145 |
60 | 135 |
70 | 125 |
80 | 115 |
90 | 105 |
100 | 95 |
110 | 85 |
120 | 75 |
130 | 65 |
140 | 55 |
150 | 45 |
160 | 35 |
170 | 25 |
180 | 15 |
190 | 5 |
200 | 0 |
Table 11: Punishment rules displayed in Treatment 2.
Contribution | Rule 1 | Rule 2 |
---|---|---|
0 | 195 | 250 |
10 | 185 | 240 |
20 | 175 | 230 |
30 | 165 | 220 |
40 | 155 | 210 |
50 | 145 | 200 |
60 | 135 | 190 |
70 | 125 | 180 |
80 | 115 | 170 |
90 | 105 | 160 |
100 | 95 | 150 |
110 | 85 | 140 |
120 | 75 | 130 |
130 | 65 | 120 |
140 | 55 | 110 |
150 | 45 | 100 |
160 | 35 | 90 |
170 | 25 | 80 |
180 | 15 | 70 |
190 | 5 | 60 |
200 | 0 | 0 |
Subsection 3.2 presents the mean comparisons between treatments. Table 6 shows the marginal effects of a Probit regression of the probability of being a violator.
Treatment 3 still yields a lower share of violations than Treatment 2 (dy/dx = 0.1787, p = 0.014), but this effect no longer holds relative to Treatment 1 (dy/dx = 0.1101, p = 0.125). Several other effects on the probability of not contributing fully are worth noting: it decreases over the rounds of the experiment (dy/dx = −0.0131, p < 0.0001), and individual characteristics matter, positively for being risk seeking and being female, negatively for being older and for holding a positive attitude toward AI.
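As an illustration of how such estimates can be obtained, here is a minimal Python sketch using statsmodels, with hypothetical variable names (violation, treatment, ai_attitude, round_nr, age, female, risk_seeking, studies, subject_id) and Treatment 3 as the assumed baseline; it is not the original estimation code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names; the original data set is not shown here.
df = pd.read_csv("experiment_data.csv")

# Probit of the violation dummy with Treatment 3 as the assumed baseline and
# standard errors clustered at the individual level, as in Table 6.
probit = smf.probit(
    "violation ~ C(treatment, Treatment(reference=3)) + ai_attitude + round_nr"
    " + age + female + risk_seeking + studies",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})

# Average marginal effects (dy/dx) comparable to the figures reported above.
print(probit.get_margeff().summary())
```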
A.1.2 Amounts Contributed
In terms of the amount contributed, there is no statistically significant difference between treatments (T1 = 166.05, T2 = 159.07, T3 = 159.53, with all p-values higher than 0.10). Table 7 shows the OLS regression of these amounts on the treatments and our control variables.
Consistent with the results on the probability of being a violator, the level of contribution is negatively associated with being risk seeking and being female; it also increases over the rounds of the experiment.
A.2 Fairness Ratings
A.2.1 Between Human Judge and Algorithm
A.2.2 Conditions Across Treatments
B.1 Instructions
You are now taking part in an economic experiment.
Your payoff will depend on your decision and the decision of the other people in your group. It is therefore important that you take your time to understand the instructions.
The instructions which we have distributed to you are for your private information. Please do not communicate with the other participants during the experiment.
Should you have any questions please ask us.
During the experiment we shall not speak of Euros, but of points. Your entire earnings will be calculated in points. At the end of the experiment one random round will be selected and the points you have earned in that round will be converted to Euros at the rate of 1 point = 0.037 €.
The experiment is anonymous and lasts 20 rounds. At the beginning of the experiment the participants will be randomly divided into groups of five. You will therefore always be in a group with 4 other participants whose identity you will not know. In every round the composition of the group will differ from the previous round; you may or may not be matched again with people you have already played with. Each round consists of two stages:
B.2 Stage 1
4 people are called contributors: they each receive 200 points. In the following, we shall refer to this amount as the “endowment”. Your task is to decide how to use your endowment.
In particular, you have to decide how many of the 200 points you want to contribute to a project (from 0 to 200) and how many of them to keep for yourself. The consequences of your decision are explained in detail below.
Once all the players have decided their contribution to the project you will be informed about the group’s total contribution, your income from the project and your payoff in this stage.
Your payoff from the first stage in each period is calculated using the following formula. If you have any difficulties, do not hesitate to ask us.
Income at the end of the stage = Endowment of points – Your contribution to the Project + 0.5 * Total contribution to the Project
This formula shows that your income at the end of the period consists of two parts:
The points which you have kept for yourself (endowment – contribution),
The income from the project, which equals 50 % of the group’s total contribution.
The income of each group member from the project is calculated in the same way. This means that each group member receives the same income from the project.
Suppose the sum of the contributions of all group members is 600 points. In this case, each member of the group receives an income from the project of: 0.5 * 600 = 300 points.
If the total contribution to the project is 90 points, then each member of the group receives an income of: 0.5 * 90 = 45 points from the project, regardless of how much they individually contributed to the project.
You always have the option of keeping the points for yourself or contributing them to the project. Each point that you keep raises your end of period income by 1 point.
If you contributed this point to the project instead, the total contribution to the project would rise by 1 point. Your income from the project would thus rise by 0.5 * 1 = 0.5 points. However, the income of the other group members would also rise by 0.5 points each, so that the group’s total income from the project would rise by 2 points. Your contribution to the project therefore also raises the income of the other group members.
On the other hand, you also earn an income for each point contributed by the other members to the project. In particular, for each point contributed by any member you earn 0.5 points.
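To make the payoff arithmetic above concrete, here is a minimal Python sketch of the stage-1 income formula; the function name and example values are illustrative only and were not part of the original instructions.

```python
def stage1_income(own_contribution, total_contribution, endowment=200, share=0.5):
    """Stage-1 income: points kept plus 50 % of the group's total contribution."""
    return endowment - own_contribution + share * total_contribution

# Worked examples consistent with the instructions above:
print(stage1_income(100, 600))  # 200 - 100 + 0.5 * 600 = 400 points
print(stage1_income(0, 90))     # 200 - 0   + 0.5 * 90  = 245 points
```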
In addition to the 200 points per period, each participant receives a one-off lump sum payment of €5 at the beginning of this part of the experiment. Note that this lump sum payment should not be used to calculate the “End of period income”. It will only be added to your total income from all the periods at the very end.
B.3 Stage 2
At the second stage of each period, you will be informed about how much each group member contributed individually to the project at the first stage. All the participants who did not contribute 200 points will have to be punished.
After contributing, the participants can choose whether to be punished automatically by the computer or to delegate the punishment decision to an external observer.
One person in the group will always be the external observer. He will never contribute in stage 1, but will always have to punish in stage 2.
Example: suppose you contribute less than 200 points; then your total income after the two stages will be:
Total income at the end of the period = Income from stage 1 − punishment points
B.4 Rules for the Punishment
B.4.1 [Treatment 1]
For the punishment, the computer and the external observer will follow the rules given by a table. This table specifies the number of points that will be subtracted for every level of contribution. An example of such a table is shown in Table 10.
For example, if you contribute 100 and choose to be punished by the computer, the computer will automatically subtract 95 points from you as a punishment.
For example, if you contribute 100 and choose to delegate the decision to the external observer, the observer will subtract 95 points from you as a punishment.
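As an editorial illustration, the punishment rule of Treatment 1 can be coded as a simple lookup from Table 10; the sketch below is a hypothetical reconstruction, not the software used in the experiment.

```python
# Punishment rule of Treatment 1 (Table 10): 195 points at a contribution of 0,
# decreasing by 10 points per 10 points contributed, and 0 at full contribution.
RULE_T1 = {c: (0 if c == 200 else 195 - c) for c in range(0, 201, 10)}

def total_income(income_stage1, contribution, rule=RULE_T1):
    """Total income at the end of the period = stage-1 income minus the punishment."""
    return income_stage1 - rule[contribution]

print(RULE_T1[100])            # 95 points subtracted, as in the example above
print(total_income(400, 100))  # 400 - 95 = 305 points
```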
B.4.2 [Treatment 2]
For the punishment, the computer and the external observer will follow the rules given by the following table. This table contains two rules, representing two possible numbers of points to be subtracted for every level of contribution. The computer will have a 50 % chance of using the first punishment rule and a 50 % chance of using the second punishment rule. The external observer will choose between the first and the second punishment rule. The rules are displayed in Table 11.
For example, if you contribute 100 and choose to be punished by the computer, the computer will subtract 95 points with 50 % probability or 150 points with 50 % probability from you as a punishment.
For example, if you contribute 100 and choose to delegate the decision to the external observer, the observer will choose whether to subtract 95 points or 150 points from you as a punishment.
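The random component of Treatment 2 can be sketched in the same illustrative way; the code below only mirrors the 50/50 draw described above, with hypothetical names, and is not the experimental software.

```python
import random

# Treatment 2 (Table 11): two punishment rules; the computer applies one of them
# with 50 % probability each, while the human observer picks one deliberately.
RULE_1 = {c: (0 if c == 200 else 195 - c) for c in range(0, 201, 10)}
RULE_2 = {c: (0 if c == 200 else 250 - c) for c in range(0, 201, 10)}

def computer_punishment(contribution):
    """Algorithmic punisher: draw Rule 1 or Rule 2 with equal probability."""
    return random.choice([RULE_1, RULE_2])[contribution]

print(computer_punishment(100))       # 95 or 150 points, each with 50 % probability
print(min(RULE_1[100], RULE_2[100]))  # 95: the milder option a human observer could choose
```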
B.4.3 [Treatment 3]
For example, if you contribute 100 and choose to be punished by the computer, the computer will subtract some points from you as a punishment.
For example, if you contribute 100 and choose to delegate the decision to the external observer, the observer will subtract some points from you as a punishment.
B.5 Fairness Rating
B.5.1 Algorithm Judge
You contributed x. You decided to be punished by the computer. You have been punished y.
How fair do you think the punishment was?
B.5.2 Human Judge
You contributed x. You decided to be punished by the external observer. You have been punished y.
How fair do you think the punishment was?
© 2024 Walter de Gruyter GmbH, Berlin/Boston