Algorithmic Decision Making in the Legal Sector

Written by Maryam Khan

Blogger


Through the advancement of technology, law firms are becoming more efficient, better organised and more effective. One such technology is the algorithm, hardly a recent invention, yet increasingly used to support decision-making processes in both the public and private sectors. The incorporation of algorithms into law firm decision-making has transformed lawyers’ roles and changed the way they perform their duties. Algorithmic decision making enables law firms to make better decisions more quickly and changes the overall approach to how data is gathered and used to reach a conclusion.


What is algorithmic decision making and how does it work?


An “algorithm” is a mathematical term for a set of instructions: a list of rules that are followed automatically, step by step, to solve a problem or make a decision [1]. Certain algorithms focus specifically on decision-making, including artificial intelligence and machine learning systems, which analyse data in different ways [2].


Algorithms essentially work by analysing large amounts of gathered data to identify patterns and help make objective decisions. Some algorithmic systems are categorised as ‘semi-automatic’ because they only assist: the final decision-making power remains with humans. Others are categorised as ‘fully automatic’ and therefore require no human input to analyse the data and reach a decision [3].
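
To make the distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the features, weights and threshold are invented for illustration and are not drawn from any real system.

```python
# Hypothetical sketch of semi- vs fully automatic decision systems.
# All features, weights and thresholds are invented for illustration.

def risk_score(case: dict) -> float:
    """Toy pattern 'learned' from data: a weighted sum of two made-up features."""
    return 0.6 * case["prior_claims"] / 10 + 0.4 * case["dispute_value_band"] / 5

def fully_automatic(case: dict) -> str:
    """Fully automatic: the system decides end to end, with no human input."""
    return "approve" if risk_score(case) < 0.5 else "reject"

def semi_automatic(case: dict, human_review) -> str:
    """Semi-automatic: the system only recommends; a person makes the final call."""
    recommendation = "approve" if risk_score(case) < 0.5 else "reject"
    return human_review(case, recommendation)

case = {"prior_claims": 3, "dispute_value_band": 2}
print(fully_automatic(case))                 # decided entirely by the system
print(semi_automatic(case, lambda c, r: r))  # here the human simply accepts the advice
```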


It is important to note that algorithms make decisions based on the data they are given: if implicit biases are present in the data, they will also be present in the algorithm’s decision-making process. So while algorithms can be a powerful tool to assist decision making, it is equally important to ensure that the data used to build them is reliable and unbiased. The strength and effectiveness of an algorithm rest primarily on the data available.


Algorithmic Decision Making in Practice


1)  Legal Recruitment

Algorithms are useful for recruitment in most industries, including law firms. By analysing data sets of candidates’ skills and their subsequent success in their jobs, algorithms can help identify who would be best suited to a specific role, and can help avoid explicit biases in the decision-making process. However, evidence suggests a significant risk of algorithmic bias when AI tools are used in hiring, largely because the algorithms are not trained on sufficiently diverse data, which creates blind spots and produces inaccurate and biased decisions [4].


2)  Predictive Algorithms in Criminal Proceedings


The use of algorithms is prevalent across public policy, deployed by various government and regulatory bodies. Recent examples include algorithms that generate risk scores for the likelihood that a convicted criminal will re-offend, which judges or parole officers then use to make bail or parole decisions [5]. Algorithmic decision-making in predictive policing and risk assessment also increases law enforcement efficiency, particularly in criminal proceedings, by eliminating delays and cutting costs. In the UK, it is currently used to map crime and to assist in prosecuting those arrested [6].


How is algorithmic decision-making benefiting the legal sector?


1)  Better Decision Making


The ability to make the right decision quickly is an important skill for a lawyer, especially when advising clients. Examples of such decisions include advising a client whether to go to trial or settle, or whether to agree to a term in a contract. Lawyers must conduct a cost-benefit analysis and weigh the consequences of each available option before advising their clients. This process involves considering the implications of the law, drawing on the lawyer’s previous experience in the area, and looking at earlier cases with similar facts. Although this experience is valuable in some cases, it can be detrimental in others, as a lawyer’s own experiences can bias their decision-making.


Moreover, algorithms take the decision-making process one step further by quantifying the likelihood of a particular outcome. For instance, an algorithm can predict, using a database of previous cases, how likely a client is to succeed at trial based on the circumstances and facts of the case. It does this by comparing the specific circumstances of the case at hand with previous cases and their outcomes. By utilising algorithms, lawyers gain a better understanding of the consequences of their decisions, allowing them to make better, more informed decisions quickly and to help clients understand the risks of a given approach.
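
One common way to implement this kind of comparison is a nearest-neighbours approach. The sketch below assumes, purely for illustration, that each case can be reduced to a couple of numeric features; the training data is invented, and real systems would use far richer case representations.

```python
# Hypothetical sketch: estimate the chance of success by analogy to the most
# similar past cases (a nearest-neighbours approach). All data is invented.
from math import dist

# (features, outcome) pairs, e.g. (normalised claim strength, delay in years)
past_cases = [
    ((0.9, 0.2), "won"), ((0.8, 0.4), "won"), ((0.5, 0.5), "won"),
    ((0.3, 0.7), "lost"), ((0.2, 0.9), "lost"),
]

def predict_success(new_case, k=3):
    """Share of 'won' outcomes among the k past cases most similar to this one."""
    neighbours = sorted(past_cases, key=lambda c: dist(c[0], new_case))[:k]
    return sum(outcome == "won" for _, outcome in neighbours) / k

print(f"Estimated chance of success: {predict_success((0.7, 0.3)):.0%}")
```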


2)  Extensive Research


Legal research is essential for helping lawyers make decisions, but it is a taxing, time-consuming process. Algorithms can save considerable time by finding the most relevant information for specific case types, helping to form the basis of the defence strategy for a case. This more efficient approach to case preparation also improves the firm’s overall productivity.
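
At its simplest, such a research tool is a relevance ranking over a document collection. The sketch below is a deliberately naive illustration over an invented mini-corpus: it scores each document by how many query terms it contains, whereas production research platforms use far more sophisticated retrieval.

```python
# Naive relevance ranking over an invented mini-corpus of case summaries.
documents = {
    "Smith v Jones":     "contract breach damages settlement commercial lease",
    "R v Brown":         "criminal assault sentencing appeal",
    "Re Estate of Khan": "probate will inheritance dispute family",
}

def rank(query: str):
    """Score each document by the number of query terms it contains."""
    terms = set(query.lower().split())
    scores = {name: len(terms & set(text.lower().split()))
              for name, text in documents.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank("commercial contract dispute"):
    print(f"{score}  {name}")
```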


3)  Eliminating Human Bias


One of the greatest benefits of algorithmic decision making is the possibility of eliminating the subconscious bias that inevitably exists in human decision making. According to recent studies, outcomes provided by algorithms are considered fairer and more accurate. In one study in particular, a machine-learning algorithm trained on a dataset of bail decisions made in New York City between 2008 and 2013 outperformed judges in crime prediction [7]. Such evidence suggests huge potential for algorithms to eliminate human bias and contribute to a fairer justice system.


Are there any concerns with algorithmic decision making?


1)  Lack of Transparency


A major issue with algorithmic decision making is the lack of transparency in the process. This is also known as the ‘black box’ effect, because it is difficult to understand exactly how an algorithm is programmed and how it reaches a decision [8]. Transparency is extremely important because of its relationship with the right to a fair trial. Just as a judgment can be contested through an appeal or judicial review, those affected by algorithmic decision making should have the right to contest the outcome and to ask for more information about how the conclusion was reached. Currently, there is very limited scope for obtaining feedback on, or contesting, algorithmic decisions, which makes the entire process unpredictable and difficult to understand.
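
To show what transparency could look like in practice, here is a minimal sketch. It assumes a hypothetical linear scoring model, whose output can be itemised per feature; ‘black box’ models are precisely those for which no such simple breakdown exists.

```python
# Hypothetical sketch: for a simple linear model, the reasons behind a decision
# can be itemised, giving the affected person something concrete to contest.
# Weights and features are invented for illustration.
weights   = {"prior_offences": 0.5, "age_under_25": 0.3, "employed": -0.4}
applicant = {"prior_offences": 2,   "age_under_25": 1,   "employed": 1}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"Risk score: {score:.2f} -> {'refer to judge' if score > 0.5 else 'release'}")
for feature, value in contributions.items():
    print(f"  {feature}: {value:+.2f}")  # the itemised reasoning behind the score
```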


2)  Unfairness, Discrimination and Bias


There are two main types of bias in algorithmic decision making. First, selection bias, which refers to drawing conclusions from a limited data set, such as gathering data only from offenders who were apprehended rather than from all offenders [9]. Second, reporting bias, where offenders who self-report tend to under-report information such as their likelihood of re-offending [10]. There is also a margin of human error when inputting data. If an algorithm is built on a data sample that does not accurately represent the population, and conclusions are drawn from that flawed data set, the resulting calculations are neither correct nor representative, and the decisions based on them will be biased.
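
The selection-bias point can be made concrete with a short simulation. Every number below is invented: a population with a 30% true re-offending rate, and an assumed tendency for re-offenders to be apprehended more often. Measuring the rate only among those apprehended noticeably overstates it.

```python
# Toy simulation of selection bias: estimating a re-offending rate from
# apprehended offenders only. All numbers are invented for illustration.
import random

random.seed(0)

population = [random.random() < 0.3 for _ in range(100_000)]  # 30% truly re-offend

def apprehended(reoffends: bool) -> bool:
    # Assumption of this toy model: re-offenders are caught more often,
    # so they are over-represented in the sample the algorithm sees.
    return random.random() < (0.8 if reoffends else 0.4)

sample = [r for r in population if apprehended(r)]

print(f"True re-offending rate:     {sum(population) / len(population):.1%}")
print(f"Rate in apprehended sample: {sum(sample) / len(sample):.1%}")  # biased upward
```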


Moreover, although algorithms have been shown to produce faster and more accurate decisions in standardised circumstances, a great deal of evidence suggests that they may not be as unbiased as one would expect [11]. The reason is that algorithms are only as fair as the people who create and program them. A case in point is the racial bias embedded in leading facial recognition technologies: because of the lack of racial and gender diversity in the initial data sets, the algorithms incorrectly recognised women with darker skin tones as men. This inaccuracy highlights the under-representation of minorities in data sets, which further contributes to the inaccuracy and bias of the whole decision-making process [12].

3)  Profiling


As defined in the General Data Protection Regulation, ‘profiling’ refers to any form of automated processing of personal data that uses personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements [13]. Through algorithmic decision making, personal information about clients and employees is collected from various sources and analysed to classify data subjects into particular groups, identifying correlations between behaviours and characteristics in order to build profiles [14].
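
A minimal sketch of that grouping step, with entirely invented attributes: everyone sharing a key of characteristics is lumped into one profile, and a group-level statistic is then applied to every member individually.

```python
# Hypothetical sketch of profiling: data subjects sharing certain attributes
# are grouped, and group-level inferences are applied to each individual.
from collections import defaultdict

people = [
    {"id": 1, "age_band": "18-25", "postcode_area": "E1",  "missed_payments": 2},
    {"id": 2, "age_band": "18-25", "postcode_area": "E1",  "missed_payments": 0},
    {"id": 3, "age_band": "40-55", "postcode_area": "SW3", "missed_payments": 0},
]

profiles = defaultdict(list)
for person in people:
    profiles[(person["age_band"], person["postcode_area"])].append(person)

# The group average then drives individual decisions -- including for person 2,
# who has no missed payments of their own.
for key, members in profiles.items():
    avg = sum(m["missed_payments"] for m in members) / len(members)
    ids = [m["id"] for m in members]
    print(f"Profile {key}: average missed payments {avg:.1f}, applied to ids {ids}")
```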


The concern here is the lack of transparency in profiling: people may not expect their personal information to be used in this way, and may not understand how the process works or how it can affect them. Automated decisions based on profiles can have a huge impact on people’s lives, whether in granting bail, awarding housing benefits or making employment offers [15].


How can accountability be increased?


Transparency and accountability in algorithmic decision-making technologies can only be increased if proper due diligence is carried out and the technologies are repeatedly tested and audited before being used in the legal sector. Private companies and start-ups bear a huge responsibility for building this transparency into their technologies, as they play a crucial role in how these life-changing decisions are reached. Without such safeguards, opaque systems are not only deeply problematic in themselves; they also create a significant barrier to access to justice and to the right to a fair trial, a basic human right.


The biases that exist within these technologies are often a reflection of their creators. There therefore needs to be more regulation of the training of those who create such technologies, as well as greater compliance requirements and oversight for algorithms in both the public and private legal sectors. Lastly, those affected by algorithmic decision-making within the legal sector and the judicial system should be given a platform to challenge such decisions, and further assistance to understand exactly how a particular decision was reached.


Final words


Decision making is one of the most crucial parts of being a lawyer, and making effective decisions quickly today increasingly depends on legal technology such as algorithms. Algorithms not only help lawyers make decisions by way of legal research but can be actively involved in the decision-making process itself, even serving as predictive tools that help lawyers make informed decisions. Their main limitation is that they are only as good as the data used to build them. Ultimately, provided they are thoroughly tested and regulated before being deployed, algorithms help lawyers and government bodies make better, less biased decisions more efficiently.

References


[1] S. Olhede, P. Wolfe, ‘Can Algorithms Ever be Fair’, The Law Society (2018) at https://www.lawsociety.org.uk/en/topics/blogs/can-algorithms-ever-be-fair

[2] Kingsley Napley, ‘AI and Algorithmic Decision-Making in the Public Sector and Criminal Justice System’, Govtech (2020) at https://www.kingsleynapley.co.uk/insights/blogs/public-law-blog/ai-and-algorithmic-decision-making-in-the-public-sector-and-criminal-justice-system

[3] Ibid

[4] P. Bradley-Schmeig, R. Cage, R. Collier-Wright, ‘Algorithmic Bias in Employment – What You Need to Know’ Bird and Bird (2021) at https://www.twobirds.com/en/news/articles/2021/uk/algorithmic-bias-in-employment–what-you-need-to-know

[5] M. Anna Wojcik, ‘Machine-learnt Bias? Algorithmic Decision Making and Access to Criminal Justice’, Legal Cheek (2020) at https://www.legalcheek.com/lc-journal-posts/machine-learnt-bias-algorithmic-decision-making-and-access-to-criminal-justice/

[6] Ibid

[7] Ibid

[8] A. Zavrsnik, ‘Criminal justice, artificial intelligence systems, and human rights’, ERA Forum (2020) 567–583, at 568

[9] Ibid (1)

[10] Ibid

[11] Ibid (5)

[12] Ibid

[13] Article 4, Regulation (EU) 2016/679 of the European Parliament and of the Council at https://www.legislation.gov.uk/eur/2016/679/contents

[14] Information Commissioner’s Office, ‘What is Automated Individual Decision-Making and Profiling’ at https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/

[15] Ibid
