Should legal disputes be determined by artificial, rather than human, means?

Kate – Year 12 Student

Editor’s Note: Talented Year 12 student Kate has written this insightful essay in response to the Robert Walker Prize for Essays in Law competition, organised by Trinity College, Cambridge. Launched in 2013, the Robert Walker Prize has three objectives: to encourage students with an interest in Law to explore that interest by researching, considering and developing an argument about a legal topic of importance to modern society; to encourage those interested in Law to apply for a university course in Law; and to recognise the achievements of high-calibre students, from whatever background they may come. You can read more excellent essays written by Kate in The GSAL Journal. CPD

“Should legal disputes be determined by artificial, rather than human, means?” 

For some, to determine legal disputes by artificial means would be to declare human judgement inadequate, or, alternatively, to label certain legal cases as “bootless errands”: projects deemed too tiresome for the human mind. As the likes of Silicon Valley stride closer to full automation, the long-standing question remains: need we be afraid of AI? Such an equivocal evaluation of what humanity stands to gain from AI and other aspects of robotics is what makes this deliberation so well known. The term ‘Artificial Intelligence’, coined in 1956 by John McCarthy1, is undeniably a contemporary, modernist issue. The outward concerns of AI apply to many an industry, but its moral implications within wider society are what make it consistently relevant to the focus of law. As detailed in the scholarly article “Social and juristic challenges of artificial intelligence”, “Artificial intelligence inherently touches upon a full spectrum of legal fields”2, and this, in turn, makes law the area of expertise at the greatest ethical risk from computerisation. Changes in the way we justify and regulate societal actions therefore ultimately influence the wellbeing of an entire population. 

Resolving legal disputes via “artificial” methods, where this particular lexeme is defined by the Cambridge Dictionary as “made by people, as a copy of something natural”3, is a system that various countries have already implemented, or have attempted to implement. One notable example would be China’s large-scale introduction of the ‘robot judge’ “Xiaofa”.4 China currently handles 19 million cases a year with just 120,000 judges5, making it little wonder that such an aide was introduced, particularly considering China’s technology market is valued at $256 billion.6 Yet China is neither the only country, nor the first, to merge the law with mechanisation: in 28 states of the United States at present, algorithms assist in recommending criminal sentences. In the case of Virginia, they were introduced to minimise prison populations following the abolition of discretionary parole. However, this introduces one of the more immediate problems of computerisation: the system failed to account for racial disparities, demonstrating that it is entirely possible for racial bias to be embedded in such programming. Use of the software depended on the decision of the judge determining the ruling, so it also relied on an absence of human discrimination, a trait clearly not prevalent enough in the Virginia legal system to counteract such poorly influenced decisions. Moreover, the presence of one form of discrimination suggests vulnerability to any other (be it sexism, ageism or homophobia), and that other systems carry the potential for the same defects. Unfortunately, only time will tell whether this proves true of China as well. In a society already built on exploitation, is it not entirely fair to say that anything created against such a background has the potential to be as biased as its creators, if not simply to expose their biases more plainly? 

Another concept that must be considered is the possibility of error in the use of such machinery. Whilst we are culturally conditioned to assume that machinery and programmed equipment are without flaw, it is undeniable that, eventually, an error will be made. Estonia, much like China, is another country implementing a robotic version of a judge.7 The most vital question to ask, should an error be made, is whom do we blame? In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC), based in the UK, jointly published five ethical “Principles” for “use by designers, builders, and users of robots”.8 The second principle explicitly declares: “Humans, not robots, are responsible agents.”9 So, in the case of an incorrect or falsely automated ruling, a possibility which may soon extend to more ethically demanding cases such as aggravated assault or rape, how could machinery possibly determine an ethical outcome? Not only is the infrastructure for such an idea lacking, but, as the principles detail, a human must still be held responsible in case of error. Of course, error would still apply in the case of a human ruling, as in a miscarriage of justice; one particularly famous example is South Carolina v. Stinney10. Yet, unlike in that case, where an all-white jury and the violation of the defendant’s constitutional rights were to blame, it remains open to question whether a single individual should bear the backlash. Whom we settle upon, be it lawyers, developers or simply those performing maintenance, is something that will define the very future of robots and legality. Morally, we as a society would be making ourselves incredibly vulnerable in the face of legal complications, with little justification other than a set of robotic ethics deemed appropriate in 2011. 

Of course, it is also vital to consider the wider legal system, including that which lies outside the courtroom: ultimately, the majority of legal disputes begin with the police. In the U.S., 10 million people are arrested every year, which helps to demonstrate the immediate importance of law enforcement agencies. This work is only aided by technology, not just in increasing the arrest rate, but also in reducing the racial disparities found to be a risk in mechanised judges. In a study by the University of Cambridge’s Institute of Criminology, police equipped with body-worn cameras received 93% fewer public complaints11. Whilst law often results from moral panic12, such an informative tool would clearly provide massive assistance in identifying cases of racial profiling, as well as in criminal rulings later on. In a different study, it was found that 93% of officers believed body cameras help with evidence gathering13. Whilst not mandatory for all police members, it is undeniable that this is one of the more widely used “artificial” means of helping to resolve legal disputes, and it demonstrates how such means can be incredibly effective in the typical court of law. 

Yet, is there truly a “natural” way of resolving legal disputes in the first place? How can we determine what is considered an “artificial” means of reaching a ruling? First documented by Aristotle, whose remarks concerning law and natural justice are considered to be “of the greatest significance”14, “Natural Law” is the theory in ethics and philosophy that human beings possess intrinsic values which determine our reasoning and behaviour. It maintains that, regardless of society or court judges, these rules of right and wrong are inherent in the nature of a human being15, something that stands inherently against positive law (statutes laid down by a legislature). In following this ethical and philosophical context, we not only follow Sir Edward Coke in regarding human nature as something determining the purpose of law, but we also acknowledge that, in this context, some ideas have no ‘reserved’ place in the law. Even ignoring certain aspects of such an ideology, Natural Law encourages us to believe that it is, to some extent, impossible for us to judge the place of such systems within the law, because we lack the human instinct to determine so justly. Our evolutionary background has no representative for these “artificial” means, and the decision as to whether what we deem “artificial” has a place in the law, or in our legal system as a whole, is a matter of normative jurisprudence. 

Along similar lines, should you remove the need for such legal positions, you not only remove the need for the court, but you risk removing the opportunity for emotional conflict in court, and could go as far as removing the potential for settlement. By removing all bias, and simply examining the available information without emotional interpretation, you remove the opportunity for the defendant to explain why they made those choices. Ultimately, opening statements, depositions and closing statements all help to sway a jury in favour of a client. This concerns not just the presentation of evidence, but convincing a group of jurors to, at the very least, understand the actions of a defendant. Current research has already demonstrated that certain factors affect the stringency of a jury: gender (Golding, Bradshaw, Dunlap and Hodell (2007))16 and ethnicity (Mazzella and Feingold (1994))17 are just two important factors in jural persuasion. Should we, theoretically, introduce an entirely mechanised legal system, would we remove the need for barristers, solicitors, judge and jury? Would the same precedents still apply in such a situation? The multitude of newly posed questions makes such a determination difficult, and we may risk undermining our legal systems as a simpler compromise for robotics. 

The removal of a jury, while still applicable currently to certain cases, inherently removes bias that some may deem necessary to find common ground within convictions. For example, in People v. Young, the defendant, Young, was arguably justified in intervening in an apparent arrest, as he had simply mistaken it for a potential mugging.18 Unless the machinery replacing those involved in legal proceedings and disputes is sentient, which is incredibly unlikely, as it is currently impossible to detect sentience in mechanical beings, then it is perfectly arguable that such a process would be unjust. Yet, even if you were to disregard the need for some bias within a court, and were to push for either an entirely just jury or none at all, you would remove the potential for public education. The jury system offers many advantages to wider society, as members of the public come into the courts, interact with the justice system, and take what they have learnt and experienced back into the community19. Overall, it is already estimated that automation will “take 800 million jobs by 2030”20, with paralegals cited as particularly vulnerable. So, should we choose to keep the jury and remove any additional human legal advisors, we expose ourselves to large economic, and moral, losses. 

In turn, what would prevent the manipulation of such technology for personal gain? We need only look at the case study of North Korea to see such an instance, even without its introduction into law. According to one study, North Korean engineers created file-watermarking software that essentially tags and monitors any media file opened on a device, whether mobile or PC. What is to say the same affordances would not exist for any computerised member or aspect of a court? Of course, North Korea is far from a perfect legal system, but tampering is recognised as a crime in many countries for a reason. 

To conclude, whilst there are many affordances and limitations to both legal systems and artificial methods (whatever we deem those to be), the combination of these two individualities makes for an incredibly difficult decision for either side. Ultimately, before we develop an entirely mechanised legal system, we must seek further advancement in the sentience of robotics, to a level of effectiveness fit for legal cases, to ensure the same just and, occasionally, merciful outcomes we would expect from the human courtroom.

Kate 033155

Bibliography

  1. Dictionary.cambridge.org. 2020. ARTIFICIAL | Meaning In The Cambridge English Dictionary. [online] Available at: <https://dictionary.cambridge.org/dictionary/english/artificial?q=Artificial> [Accessed 5 April 2020]. 
  2. Chappelow, J., 2019. Natural Law Definition. [online] Investopedia. Available at: <https://www.investopedia.com/terms/n/natural-law.asp> [Accessed 5 April 2020]. 
  3. Soccia, D., n.d. LEGAL POSITIVISM Vs. NATURAL LAW THEORY. [online] Web.nmsu.edu. Available at: <https://web.nmsu.edu/~dscoccia/376web/376lpaust.pdf> [Accessed 6 April 2020]. 
  4. Vasdani, T., 2019. From Estonian AI Judges To Robot Mediators In Canada, U.K. – The Lawyer’s Daily. [online] Thelawyersdaily.ca. Available at: <https://www.thelawyersdaily.ca/articles/12997/from-estonian-ai-judges-to-robot-mediators-in-canada-u-k-> [Accessed 6 April 2020]. 
  5. Vasdani, T., 2019. Estonia Set To Introduce ‘AI Judge’ In Small Claims Court To Clear Court Backlog – The Lawyer’s Daily. [online] Thelawyersdaily.ca. Available at: <https://www.thelawyersdaily.ca/articles/11582/estonia-set-to-introduce-ai-judge-in-small-claims-court-to-clear-court-backlog-> [Accessed 6 April 2020]. 
  6. Epsrc.ukri.org. 2008. Principles Of Robotics – EPSRC Website. [online] Available at: <https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/> [Accessed 7 April 2020]. 
  7. World-information.org. 2020. World-Information.Org. [online] Available at: <http://world-information.org/wio/infostructure/100437611663/100438659360> [Accessed 7 April 2020]. 
  8. Davis, J., 2018. Law Without Mind: AI, Ethics, and Jurisprudence. SSRN Electronic Journal, [online] 55(1). Available at: <https://scholarlycommons.law.cwsl.edu/cgi/viewcontent.cgi?article=1666&context=cwlr> [Accessed 7 April 2020]. 
  9. Perc, M., Ozer, M. and Hojnik, J., 2019. Social and juristic challenges of artificial intelligence. Palgrave Communications, [online] 5(1), p.3. Available at: <https://www.nature.com/articles/s41599-019-0278-x#citeas> [Accessed 7 April 2020]. 
  10. Death Penalty Information Center. n.d. South Carolina Vacates The Conviction Of 14-Year-Old Executed In 1944. [online] Available at: <https://deathpenaltyinfo.org/news/south-carolina-vacates-the-conviction-of-14-year-old-executed-in-1944> [Accessed 7 April 2020]. 
  11. BBC News. 2017. Robots To ‘Take 800 Million Jobs By 2030’. [online] Available at: <https://www.bbc.co.uk/news/world-us-canada-42170100> [Accessed 8 April 2020]. 
  12. F. Schopp, R., 1993. Justification defenses and just convictions. Pacific Law Journal, p.1247. [Accessed 8 April 2020] 
  13. Harris, B., 2018. Articles. [online] World Government Summit – Articles. Available at: <https://www.worldgovernmentsummit.org/observer/articles/could-an-ai-ever-replace-a-judge-in-court> [Accessed 8 April 2020]. 
  14. Van Dam, A., 2019. Algorithms Were Supposed To Make Virginia Judges Fairer. What Happened Was Far More Complicated. [online] The Washington Post. Available at: <https://www.washingtonpost.com/business/2019/11/19/algorithms-were-supposed-make-virginia-judges-more-fair-what-actually-happened-was-far-more-complicated/> [Accessed 8 April 2020]. 
  15. Dai, C., 2019. China Tech Market Outlook, 2019 To 2020. [online] Forrester.com. Available at: <https://www.forrester.com/report/China+Tech+Market+Outlook+2019+To+2020/-/E-RES141552> [Accessed 8 April 2020]. 
  16. Shellens, M., 2020. Aristotle On Natural Law. [online] NDLScholarship. Available at: <http://scholarship.law.nd.edu/nd_naturallaw_forum/40?utm_source=scholarship.law.nd.edu%2Fnd_naturallaw_forum%2F40&utm_medium=PDF&utm_campaign=PDFCoverPages> [Accessed 9 April 2020]. 
  17. Sjöberg, M., 2015. The Relationship Between Empathy and Stringency of Punishment in Mock Jurors. Journal of European Psychology Students, 6(1), pp.37-44. 
  18. Speri, A., 2019. Police Make More Than 10 Million Arrests A Year, But That Doesn’t Mean They’re Solving Crimes. [online] The Intercept. Available at: <https://theintercept.com/2019/01/31/arrests-policing-vera-institute-of-justice/> [Accessed 9 April 2020]. 
  19. Barak, A. and Sutherland, A., 2020. Body-Worn Cameras Associated With Increased Assaults Against Police, And Increase In Use-Of-Force If Officers Choose When To Activate Cameras. [online] University of Cambridge. Available at: <https://www.cam.ac.uk/research/news/body-worn-cameras-associated-with-increased-assaults-against-police-and-increase-in-use-of-force-if> [Accessed 10 April 2020]. 
