Tanisha Chivate is a Fourth Year Law student at Maharashtra National Law University Mumbai.
I. Introduction
In an era where Artificial Intelligence (AI) rapidly reshapes our world, Elon Musk’s prediction that AI might be the catalyst for World War III highlights the urgent need for its ethical and legal regulation.[i] Without oversight, AI can produce biased and unfair outcomes. While embedding fairness into AI algorithms is theoretically possible, the absence of a universal fairness definition complicates this. Additionally, designing AI to optimize legal outcomes and handle moral dilemmas, like the trolley problem, raises the question: To what extent should morality intertwine with the law? Addressing this is crucial for establishing AI regulatory mechanisms.
II. Law and Morality: Natural Law School v. Analytical Legal Positivism
The Natural Law School is premised on the simple statement that law is what it “ought to be.” It is regarded as the intersection between law and morals. Analytical Legal Positivism arose as a reaction to this; its theorists hold that there is no necessary connection between law and morality. Analytical Legal Positivists believe that one can say what the law is without making moral judgments about what it should be. Thus, the key point of contention is the role of morals in law and legal interpretation.
A philosophical approach to differentiating between legal and moral norms is Kant’s framework, which regards law as governing external conduct and morality as prescribing internal conduct, specifically subjective factors such as motive.[ii] By this reasoning, law is concerned only with external manifestations, a principle also applied in criminal law, where mens rea alone cannot constitute a crime.
With respect to AI, it has been argued that technological machines intrinsically have a moral character, as the objectives humans pursue in creating a device cannot be separated from the characteristics of the device itself. Additionally, since technology affects the way humans perceive and interact with the world, no technology can be considered morally neutral.[iii] The integration of AI into legal systems also challenges several fundamental legal principles, such as equal treatment before the law, fairness in the design and application of law, and adequate justice for all.[iv]
III. The Contemporary AI Perspective on the Trolley Example
As attempts are made to resolve the dilemmas caused by AI, we are simultaneously confronted with age-old questions: what role should the law play in regulating our behavior, do we have a moral duty to follow the law, and what role does morality play in determining the law’s content? Autonomous cars, the 2020s version of the classic trolley problem, provide a compelling example, because programming autonomous vehicles would involve not just technical automobile knowledge but also moral philosophy.[v]
Consider an autonomous vehicle on a collision course with two pedestrians, where the risk to the sole passenger is minimized. The alternative is a swerve off a cliff, sparing the pedestrians but killing the passenger. This scenario poses two primary challenges. The first is technological: the vehicle must evaluate the outcomes of its potential actions while adhering to traffic laws. We assume that manufacturers can devise a system to assess these adverse scenarios. The second challenge is moral: what decision should the car make in such a situation? This depends on who is deemed morally responsible for any resulting harm. Car manufacturers concerned with legal liability must program their vehicles accordingly. If there’s a moral obligation to follow the law, how should lawyers advise manufacturers about legal constraints and permissions? We must then assume the car’s algorithm is sophisticated enough to make such ethical choices.
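To make the two challenges concrete, here is a minimal Python sketch of how a vehicle’s software might separate the technological step (screening maneuvers against traffic law) from the moral step (ranking whatever survives the screen). Every name and number is hypothetical, and the fatality-minimizing rule at the end is an assumption, one contested answer to the trolley question rather than a claim about how any manufacturer actually programs its cars.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Projected result of one candidate maneuver (all fields hypothetical)."""
    action: str
    pedestrian_fatalities: int
    passenger_fatalities: int
    violates_traffic_law: bool

def choose_action(outcomes: list) -> Outcome:
    # Technological step: discard maneuvers that break traffic law,
    # unless every option does, in which case all remain on the table.
    lawful = [o for o in outcomes if not o.violates_traffic_law] or outcomes
    # Moral step: some ranking must be chosen here; minimizing total
    # fatalities is an assumption, not the "right" answer.
    return min(lawful, key=lambda o: o.pedestrian_fatalities + o.passenger_fatalities)

dilemma = [
    Outcome("stay_course", 2, 0, violates_traffic_law=False),
    Outcome("swerve_off_cliff", 0, 1, violates_traffic_law=True),
]
print(choose_action(dilemma).action)  # -> stay_course: the legal screen removed the swerve
```

Note how the legal screen alone dictates the result here: the swerve that saves more lives is excluded before the moral ranking ever runs, which is precisely why the question of a moral obligation to follow the law matters.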
Discriminatory Biases in AI
A critical issue with AI is its inability to eliminate immoral or unethical considerations.[vi] A car’s algorithm, tasked with reducing legal liability, might inadvertently reinforce societal biases. If the algorithm assesses other vehicles’ values, harm probabilities, and potential liability from lost income, it might favor drivers of more expensive cars or individuals in their prime earning years. This approach could lead to decisions that perpetuate class, age, gender, race, and caste inequalities.
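A hedged sketch of the kind of liability-minimizing cost function described above makes the bias visible. The damages multiplier, the figures, and the candidate labels are all invented for illustration:

```python
def expected_liability(victim_annual_income: float,
                       other_vehicle_value: float,
                       harm_probability: float) -> float:
    # Hypothetical estimate of the kind described above: lost-income
    # damages plus property damage, weighted by the probability of harm.
    lost_income_damages = victim_annual_income * 20  # assumed damages multiplier
    return harm_probability * (lost_income_damages + other_vehicle_value)

# The "cheapest" collision target is whoever the damages calculus values
# least, so the algorithm steers harm toward lower earners and older cars.
candidates = {
    "executive_in_luxury_car": expected_liability(300_000, 90_000, 0.5),
    "retiree_in_old_hatchback": expected_liability(15_000, 3_000, 0.5),
}
print(min(candidates, key=candidates.get))  # -> retiree_in_old_hatchback
```

Nothing in the function mentions class, age, or caste, yet because earnings and property values correlate with those attributes, the “neutral” objective reproduces the inequalities the paragraph describes.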
Key Approaches to Natural Law Regulating AI
While it is a complex question whether AI can or should render moral judgments, there are three approaches through which natural law and morality can regulate AI.
Top-Down
The top-down approach would involve humans identifying general moral principles for AI which, once in place, can direct or restrict the goals of AI.[vii] A drawback of this approach is that no consensus on the right moral principles has ever been reached. More importantly, moral judgments may need to be sensitive to context and setting, which means a top-down approach to natural law is likely to fail.
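A toy illustration of the top-down idea, assuming hand-written moral principles encoded as predicates, shows both the mechanism and the drawback: fixed rules veto candidate plans wholesale, with no sensitivity to context.

```python
# Hypothetical top-down filter: hand-written moral principles veto candidate
# plans before any optimization objective is consulted.
PRINCIPLES = [
    lambda plan: not plan.get("deceives_human", False),    # a Kantian-style rule
    lambda plan: plan.get("expected_fatalities", 0) == 0,  # a sanctity-of-life rule
]

def permitted(plan: dict) -> bool:
    return all(rule(plan) for rule in PRINCIPLES)

# The drawback noted above: the rules are fixed in advance, so a plan that
# may be morally defensible in context is still vetoed wholesale.
print(permitted({"expected_fatalities": 1, "lives_saved": 5}))  # -> False
```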
Bottom-Up
In this approach, AI is given data about diverse situations and the moral (or otherwise desired) acts to be taken in them. Once moral decision-making patterns are identified, AI can use them as a guide. In particular cases, AI would have to receive input about the right moral outcome or action, or develop some capacity to make the relevant moral judgments; without this, AI would lack the information required to find patterns. This approach is criticized because it conflates moral judgments with prudential considerations and biases. It is also unclear, even in principle, how an AI system could weigh the various reasons for supporting or rejecting a particular outcome. If AI were to develop its own moral judgment, it would need to acquire emotions and empathy.
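As a sketch of the bottom-up idea, assuming a hypothetical dataset of human-labelled scenarios, a system can decide new cases purely by analogy to the most similar labelled precedent. Notably, it does so without weighing any reasons, which is precisely the criticism above:

```python
# Toy bottom-up learner; every scenario and label below is invented.
# Each case is (pedestrians at risk, passengers at risk) -> a human verdict.
labelled_scenarios = [
    ((2, 0), "swerve"),
    ((0, 1), "stay"),
    ((3, 1), "swerve"),
    ((1, 4), "stay"),
]

def decide(case):
    # Decide by analogy: copy the verdict of the most similar precedent.
    def distance(scenario):
        features, _ = scenario
        return sum((a - b) ** 2 for a, b in zip(features, case))
    _, verdict = min(labelled_scenarios, key=distance)
    return verdict

# The system reproduces a pattern; it cannot articulate a reason for it.
print(decide((2, 1)))  # -> swerve
```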
Predictive Approach
This approach would involve AI imitating how human beings would make those judgments. However, it runs up against Hart’s argument that a judge would not do well to predict how she herself would rule in a case, as that inquiry would be circular.[viii]
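A minimal sketch of the predictive approach, with invented precedents and fact labels, shows the mechanics: the system forecasts a ruling by tallying how courts ruled in factually similar past cases, a stand-in for far richer statistical models. Hart’s circularity arises only when the predicted judge consults the prediction herself.

```python
# Invented precedents: each is (salient facts of the case, how the court ruled).
past_cases = [
    ({"profit_over_safety", "punitive_damages_sought"}, "liable"),
    ({"profit_over_safety"}, "liable"),
    ({"unavoidable_accident"}, "not_liable"),
]

def predict_ruling(facts: set) -> str:
    # Weight each past ruling by its factual overlap with the new case.
    votes = {}
    for precedent_facts, ruling in past_cases:
        votes[ruling] = votes.get(ruling, 0) + len(precedent_facts & facts)
    return max(votes, key=votes.get)

print(predict_ruling({"profit_over_safety"}))  # -> liable
```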
IV. Legal Dualism
Legal dualism might reconcile the debate between the Natural Law School and Analytical Legal Positivism over the role of morality in law. This theory posits that, in determining the law’s content, morality is not necessary for mere description or prediction but is essential for moral guidance.[ix] In some cases, making moral judgments is necessary to render the law determinate and to say what it is for the purpose of guiding conduct. Legal positivism explains the essence of law in descriptive scenarios, whereas natural law provides better explanations when one seeks moral direction from the law. Thus, legal dualism highlights the constraints on the role AI can play in legal interpretation: AI cannot replace human beings when law serves as a source of moral guidance, such as when programming autonomous cars.[x]

However, one may argue that, even accepting legal dualism, AI may be able to predict the judicial rulings of courts better than human beings by studying the data of past court precedents. AI can do this by analyzing the ethics incorporated in past judgments, thus eliminating the role of human beings in interpreting the law. The same reasoning applies to autonomous cars, which could then decide based on what the law would favor in light of past court decisions. If a car manufacturer programs its autonomous cars in a way that treats human injury and death in violation of people’s legal rights as a mere cost of doing business and earning quick profits, courts could impose penalties, including punitive and compensatory damages.

Legal dualism, however, has its own limitations. An autonomous car’s choice of course of action may be obscure and difficult to assess in hindsight. Lawyers advising car manufacturers and programmers of autonomous cars would have a legal obligation to exercise independent judgment about what the law requires, not merely to lay out the potential financial consequences of different courses of conduct. As per legal dualism, this independent judgment also includes ethical and moral considerations.[xi]
Analytical Legal Positivism may disagree with legal dualism, but AI eases the tension because it can assess the facts and apply the law exactly as written. Where moral judgments are required to a certain extent, AI can leave that portion for human beings to provide their input, as the sketch below illustrates.
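One hedged way to picture this division of labour in code, with all function names and case fields hypothetical:

```python
def apply_law(case: dict) -> str:
    # Descriptive portion: determinate rules the system applies itself.
    if case.get("requires_moral_judgment"):
        return escalate_to_human(case)  # hand the open question to a person
    return "liable" if case.get("statute_breached") else "not_liable"

def escalate_to_human(case: dict) -> str:
    # Stand-in for a real review queue; here we simply ask on the console.
    return input(f"Human ruling needed for case {case['id']}: ")

print(apply_law({"id": "C-1", "statute_breached": True,
                 "requires_moral_judgment": False}))  # -> liable
```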
V. Conclusion
In the context of AI and law, it is vital to acknowledge the significance of moral judgment in legal interpretation. This article concludes that (1) moral judgments are sometimes necessary to determine the law, (2) only humans are currently capable of making such judgments, and (3) while AI might not make moral judgments, it can predict the variety of moral decisions humans might make. Thus, natural law could potentially regulate AI. Despite technological advances in pattern recognition, AI has yet to master identifying and prioritizing goals; should it do so, it might become capable of moral judgment. Such an evolution in AI might also offer new grounds for accepting Legal Dualism, benefiting from technological progress in a broader sense.[xii]
[i]Seth Fiegerman, Elon Musk predicts World War III, September 3, 2017, available at https://money.cnn.com/2017/09/04/technology/culture/elon-musk-ai-world-war/index.html (last visited August 21, 2023).
[ii] Michael Freeman, Lloyd’s Introduction to Jurisprudence (9th edn, Sweet & Maxwell 2018).
[iii] E. Magrani, ‘New Perspectives on Ethics and the Laws of Artificial Intelligence’ (2019) 8(3) Internet Policy Review, available at https://policyreview.info/articles/analysis/new-perspectives-ethics-and-laws-artificial-intelligence (last visited February 23, 2024).
[iv] Harry Surden, ‘Ethics of AI in Law: Basic Questions’, in Markus D. Dubber, Frank Pasquale, and Sunit Das (eds), The Oxford Handbook of Ethics of AI (Oxford University Press 2020), https://doi.org/10.1093/oxfordhb/9780190067397.013.46 (last visited February 25, 2024).
[v] Joshua P. Davis, Law without Mind: AI, Ethics, and Jurisprudence, 55 CAL. W. L. REV. 165 (2018).
[vi] Id.
[vii] Joshua P. Davis, Artificial Wisdom? A Potential Limit on AI in Law (and Elsewhere), 72 OKLA. L. REV. 51 (2019).
[viii] H.L.A. Hart, The Concept of Law (3rd edn, Oxford University Press 2012).
[ix] See supra note 5.
[x] See supra note 5.
[xi] See supra note 5.
[xii] See supra note 5.
