The risks and regulation of Artificial Intelligence

Shrisha Sapkota

Blogger


The Risks

Simulations show that by 2030 about 70 percent of companies will have adopted some sort of AI technology[1]. The reason is simple: whether modelling climate change, screening job candidates or predicting whether someone will commit a crime, AI can replace humans and make more decisions, faster and more cheaply[2]. However, the potential power of AI also carries risks. Its speed, complexity and scalability mean that it vastly outperforms human beings at certain tasks[3]. The potential inscrutability of self-generated algorithms means that the method and reasoning employed to produce a particular output may be unknowable, even to the AI's developer[4]. It has been suggested that "AI/AS will be performing tasks that are far more complex and impactful than prior generations of technology, particularly with systems that interact with the physical world, thus raising the potential level of harm that such a system could cause"[5].

 

More seriously, former Google CEO Eric Schmidt recently joined Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human entered the decision-making process[6]. Autonomous AI-powered weapons systems are already on sale and may have been used[7]. Additionally, tech billionaire Elon Musk, long an advocate for the regulation of artificial intelligence, recently called AI more dangerous than nuclear weapons[8]. A year earlier, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled[9].

 

Subordination of human judgement and incorrect output

 

Some algorithms make or affect decisions with direct and important consequences for people's lives[10]. They diagnose medical conditions, for instance, screen candidates for jobs, approve home loans or recommend jail sentences[11]. In such circumstances, it may be wise to avoid using AI, or at least to subordinate it to human judgement[12]. However, suppose a judge granted early release to an offender against an AI recommendation and that person then committed a violent crime. The judge would be under pressure to explain why she ignored the AI[13]. Using AI could therefore increase human decision-makers' accountability, which might make people defer to the algorithms more often than they should[14]. Moreover, testing for all scenarios, permutations and combinations of available data may not be possible, leading to potential gaps in coverage[15]. The severity of these gaps may vary with each system and its applications[16].
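To see why exhaustive testing is rarely feasible, consider a toy calculation (the feature counts are hypothetical, purely for illustration): the input space grows exponentially with the number of inputs, so even a modest system quickly outruns any practical test suite.

```python
# Illustrative only: how many test cases exhaustive coverage would need
# for a model whose inputs are n independent binary features.
def exhaustive_test_cases(n_binary_features: int) -> int:
    # Each binary feature doubles the input space: 2**n combinations.
    return 2 ** n_binary_features

for n in (10, 30, 50):
    print(f"{n} features -> {exhaustive_test_cases(n):,} test cases")
# 10 features -> 1,024 test cases
# 30 features -> 1,073,741,824 test cases
# 50 features -> 1,125,899,906,842,624 test cases
```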

 

 

Job Automation

Job automation is generally viewed as the most immediate risk of AI applications[17], and experts broadly agree on this point[18]. According to a 2019 study by the Brookings Institution, automation threatens about 25 percent of American jobs[19]. The institution found that 36 million people work in jobs with "high exposure" to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labour, will be done using AI[20].

 

The study found that automation would hit low-wage earners hardest, especially those in food service, office management and administration[21]. Jobs with repetitive tasks are the most vulnerable, but as machine learning algorithms become more sophisticated, jobs requiring degrees could be at risk as well[22]. It is no longer a question of whether AI will replace certain types of jobs, but to what degree[23]. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway[24].

 

 

Biased AI

AI systems that produce biased results have been making headlines[25]. One well-known example is Apple's credit card algorithm, which has been accused of discriminating against women, triggering an investigation by New York's Department of Financial Services[26]. Another is Amazon's automated résumé screener, which filtered out female candidates[27].

 

Algorithmic bias can perpetuate existing structures of inequality in our societies and lead to the discrimination and alienation of minorities[28]. Hiring algorithms, for example, are likely to prefer men over women and white candidates over Black candidates because the data they are fed tells them that 'successful candidates' have typically been white men[29].

 

The problem crops up in many other guises: ubiquitous online advertisement algorithms, for instance, may target viewers by race, religion or gender, and a recent study published in Science showed that risk prediction tools used in health care, which affect millions of people in the United States every year, exhibit significant racial bias[30]. Another study, published in the Journal of General Internal Medicine, found that the software used by leading hospitals to prioritise recipients of kidney transplants discriminated against Black patients[31]. In most cases, the problem stems from the data used to train the AI: if that data is biased, the AI will acquire and may even amplify the bias[32].
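A minimal sketch of this mechanism, using synthetic data and scikit-learn (a toy model under assumed data, not a description of any real hiring system), shows how a naively trained classifier picks up a historical bias:

```python
# Toy illustration only: a classifier trained on historically biased
# labels acquires the bias. Synthetic data; not any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1_000
skill = rng.normal(size=n)          # genuinely job-relevant feature
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Historical labels favoured group 0 regardless of skill: that is the bias.
hired = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The learned weight on the protected attribute is strongly negative:
# the model now penalises group 1, reproducing the historical pattern.
print("weight on protected attribute:", model.coef_[0][1])
```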

 

Moreover, AI systems are threatening our fundamental rights[33]. For example, algorithms that moderate content on social media platforms can unfairly restrict free speech and influence public debate[34]. Biometric mass surveillance technologies violate our right to privacy and discourage democratic participation[35].

 

 

Overall

Algorithms rely on massive sets of personal data, the collection, processing and storage of which frequently violate our data protection rights[36]. Overall, the automation of jobs, the spread of fake news and a dangerous arms race in AI-powered weaponry have been proposed as a few of the biggest dangers posed by AI[37].

 

With no clear protocol in place for identifying, evaluating and mitigating the risks, teams end up either overlooking risks, scrambling to solve issues as they come up, or crossing their fingers in the hope that the problem will resolve itself[38]. Companies need a plan for mitigating risk: a strategy for using data and developing AI products without falling into ethical pitfalls along the way. Like other risk-management strategies, an operationalised approach to data and AI ethics must systematically and exhaustively identify ethical risks throughout the organisation, from IT to HR to marketing to product and beyond[39].
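One way such a protocol could be made concrete is a lightweight risk register that every AI project populates before launch. The sketch below is only an illustration; the field names, severity scale and sign-off gate are assumptions rather than an established standard.

```python
# Hypothetical sketch of an AI ethics risk register; the field names,
# severity scale and sign-off rule are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class EthicsRisk:
    system: str           # which AI product or model
    owner: str            # accountable team: IT, HR, marketing, product...
    description: str      # the ethical risk being tracked
    severity: Severity
    mitigation: str       # planned control, or "none yet"
    signed_off: bool = False

register = [
    EthicsRisk("resume-screener", "HR", "gender bias in training labels",
               Severity.HIGH, "audit labels; retrain on balanced sample"),
    EthicsRisk("ad-targeting", "marketing", "exclusion by inferred religion",
               Severity.MEDIUM, "none yet"),
]

# A simple gate: high-severity risks without sign-off block the launch.
for risk in register:
    if risk.severity is Severity.HIGH and not risk.signed_off:
        print(f"BLOCK LAUNCH: {risk.system} - {risk.description}")
```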

 

 

The Regulation of AI

As companies increasingly embed artificial intelligence in their products, services, processes and decision-making, attention is shifting to how data is used by the software, particularly by complex, evolving algorithms that might diagnose cancer, drive a car or approve a loan[40]. AI's game-changing promise to improve efficiency, bring down costs and accelerate research and development has been tempered of late by worries that these complex, opaque systems may do more societal harm than economic good[41].

 

A huge amount has been written on whether, when and how to regulate artificial intelligence[42]. Some commentators consider that regulation is undesirable, since it would stifle innovation or induce 'ethics shopping' by AI companies; premature, since the technology is still evolving; and even impossible, due to the intrinsic nature of AI[43]. Two leading researchers in the field have noted that "we know that there is no 'formula' for building trust, but we know from experience that technology is, in general, trusted if it brings benefits and is safe and well-regulated"[44]. Unless AI is to be unwittingly or forcibly imposed on the general public, in other words, if it is to be introduced with the public's consent, then effective, proportionate regulation is a necessary, although not sufficient, condition[45]. A primary reason that "good" humans don't take extreme steps (like blowing up a polluting factory) is that we have laws that prohibit such actions and impose negative consequences for disobeying them[46]. However, as AI systems are not humans, it is not clear how they could be kept in check without laws written for them.

 

There is currently no legislation specifically designed to regulate the use of AI[47]. Rather, AI systems are regulated by other existing frameworks, including data protection, consumer protection and market competition laws[48]. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness and even criminal justice, without having to answer for how they ensure that programs aren't encoded, consciously or unconsciously, with structural biases[49]. Given its power and expected ubiquity, some argue that the use of AI should be tightly regulated, but there is little consensus on how that should be done and who should make the rules[50]. Bills have been passed to regulate certain specific AI systems[51]. In New York, companies may soon have to disclose when they use algorithms to choose their employees, and several cities in the US have already banned the use of facial recognition technologies[52]. In the EU, the planned Digital Services Act will have a significant impact on online platforms' use of algorithms that rank and moderate online content, predict our personal preferences and ultimately decide what we read and watch[53]. The EU, which is again leading the way (in its 2020 white paper "On Artificial Intelligence: A European Approach to Excellence and Trust" and its 2021 proposal for an AI legal framework), considers regulation essential to the development of AI tools that consumers can trust[54].

 

In dealing with biased outcomes, regulators have mostly fallen back on standard anti-discrimination legislation. Yet AI increases the potential scale of bias: any flaw could affect millions of people, exposing companies to class-action lawsuits of historic proportions and putting their reputations at risk. Regulation must therefore prohibit technologies that violate our fundamental rights, such as biometric mass surveillance or predictive policing systems[55]. The prohibition should not contain exceptions that allow corporations or public authorities to use them "under certain conditions"[56].

 

Also, there must be clear rules setting out exactly what companies have to make public about their products[57]. Companies must provide a detailed description of the AI system itself, including information on the data it uses, the development process, the system's purpose, and where and by whom it is used[58]. It is also key that individuals exposed to AI are informed about it, for example in the case of hiring algorithms[59]. Systems that can have a significant impact on people's lives should face extra scrutiny and feature in a publicly accessible database[60].
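These disclosure requirements map naturally onto a structured record, similar in spirit to a 'model card'. The following is a hypothetical schema for illustration; none of the field names or values come from an actual regulation.

```python
# Hypothetical disclosure record covering the items listed above;
# the schema is an illustration, not any regulator's required format.
import json

disclosure = {
    "system_name": "loan-approval-model",  # assumed example system
    "purpose": "rank consumer loan applications by predicted default risk",
    "data_used": ["credit history", "income", "employment records"],
    "development_process": "gradient-boosted trees, retrained quarterly",
    "deployed_where": "online loan applications, EU market",
    "operated_by": "Example Bank plc",
    "individuals_informed": True,          # notice shown at application
    "listed_in_public_database": True,     # extra scrutiny: high impact
}

print(json.dumps(disclosure, indent=2))
```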

 

Thus far, companies that develop or use AI systems largely self-police, relying on existing laws and market forces, such as negative reactions from consumers and shareholders or the demands of highly prized AI technical talent, to keep them in line[61]. "There's no business person on the planet at an enterprise of any size that isn't concerned about this and trying to reflect on what's going to be politically, legally, regulatorily, [or] ethically acceptable," said Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies, in part, the development and implementation of AI, including machine learning, robotics, sensors and industrial automation, in business and the work world[62]. "The regulatory bodies are not equipped with the expertise in artificial intelligence to engage in [oversight] without some real focus and investment," said Fuller, noting that the rapid rate of technological change means even the most informed legislators can't keep pace[63]. Requiring every new product using AI to be prescreened for potential social harms is not only impractical but would create a huge drag on innovation[64]. Nor do humans apply laws uniformly: the laws in New York are different from the laws in California, and both are very different from the laws in Thailand[65]. It would therefore be difficult to apply AI regulations worldwide.

 

Nevertheless, national and local governments have been adopting strategies and working on new laws for several years, although no comprehensive AI legislation has yet been passed[66]. China, for example, unveiled a strategy in 2017 to become the world's leader in AI by 2030[67]. In the US, the White House issued ten principles for the regulation of AI, which include the promotion of "reliable, robust and trustworthy AI applications", public participation and scientific integrity[68]. International bodies that advise governments, such as the OECD and the World Economic Forum, have developed ethical guidelines[69].

 

The Council of Europe created a Committee dedicated to helping develop a legal framework on AI[70].

 

[1] https://hbr.org/2021/09/ai-regulation-is-coming

[2] https://hbr.org/2021/09/ai-regulation-is-coming

[3] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[4] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[5] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[6] https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607

[7] https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607

[8] https://builtin.com/artificial-intelligence/examples-ai-in-industry

[9] https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

[10] https://hbr.org/2021/09/ai-regulation-is-coming

[11] https://hbr.org/2021/09/ai-regulation-is-coming

[12] https://hbr.org/2021/09/ai-regulation-is-coming

[13] https://hbr.org/2021/09/ai-regulation-is-coming

[14] https://hbr.org/2021/09/ai-regulation-is-coming

[15] https://ai.wharton.upenn.edu/artificial-intelligence-risk-governance/

[16] https://ai.wharton.upenn.edu/artificial-intelligence-risk-governance/

[17] https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

[18] https://www.rev.com/blog/what-are-the-potential-risks-of-artificial-intelligence

[19] https://www.rev.com/blog/what-are-the-potential-risks-of-artificial-intelligence

[20] https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

[21] https://www.rev.com/blog/what-are-the-potential-risks-of-artificial-intelligence

[22] https://www.rev.com/blog/what-are-the-potential-risks-of-artificial-intelligence

[23] https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

[24] https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

[25] https://hbr.org/2021/09/ai-regulation-is-coming

[26] https://hbr.org/2021/09/ai-regulation-is-coming

[27] https://hbr.org/2021/09/ai-regulation-is-coming

[28] https://www.liberties.eu/en/stories/ai-regulation/43740

[29] https://www.liberties.eu/en/stories/ai-regulation/43740

[30] https://hbr.org/2021/09/ai-regulation-is-coming

[31] https://hbr.org/2021/09/ai-regulation-is-coming

[32] https://hbr.org/2021/09/ai-regulation-is-coming

[33] https://www.liberties.eu/en/stories/ai-regulation/43740

[34] https://www.liberties.eu/en/stories/ai-regulation/43740

[35] https://www.liberties.eu/en/stories/ai-regulation/43740

[36] https://www.liberties.eu/en/stories/ai-regulation/43740

[37] https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

[38] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai

[39] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai

[40] https://hbr.org/2021/09/ai-regulation-is-coming

[41] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[42] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[43] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[44] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[45] https://www.eerstekamer.nl/bijlage/20201105/justice_by_algorithm_the_role_of/document3/f=/vldiex7de4rs.pdf

[46] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/

[47] https://www.liberties.eu/en/stories/ai-regulation/43740

[48] https://www.liberties.eu/en/stories/ai-regulation/43740

[49] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[50] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[51] https://www.liberties.eu/en/stories/ai-regulation/43740

[52] https://www.liberties.eu/en/stories/ai-regulation/43740

[53] https://www.liberties.eu/en/stories/ai-regulation/43740

[54] https://hbr.org/2021/09/ai-regulation-is-coming

[55] https://www.liberties.eu/en/stories/ai-regulation/43740

[56] https://www.liberties.eu/en/stories/ai-regulation/43740

[57] https://www.liberties.eu/en/stories/ai-regulation/43740

[58] https://www.liberties.eu/en/stories/ai-regulation/43740

[59] https://www.liberties.eu/en/stories/ai-regulation/43740

[60] https://www.liberties.eu/en/stories/ai-regulation/43740

[61] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[62] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[63] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[64] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[65] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/

[66] https://www.liberties.eu/en/stories/ai-regulation/43740

[67] https://www.liberties.eu/en/stories/ai-regulation/43740

[68] https://www.liberties.eu/en/stories/ai-regulation/43740

[69] https://www.liberties.eu/en/stories/ai-regulation/43740

[70] https://www.liberties.eu/en/stories/ai-regulation/43740


Which system should I choose?

This is the key question when picking among the variety of management systems available on the market. You need to consider a few key aspects of each system: on-cloud vs on-premise deployment, payment methods, maintenance costs, the firm's usage, compatibility with current systems and the training required to use each available system.

1)  On-Cloud vs On-Premise Software

Legal practice management software is available both as on-cloud and as on-premise (server-based) software. There are advantages and disadvantages to each kind, and the utility of each will vary based on the firm's needs[11].

From a monetary standpoint, cloud-based platforms are usually paid for monthly or annually, whereas a local or on-premise system requires high upfront costs. While there are no regular payments for this kind of system, there are maintenance costs, and the firm has to make sure the software stays secure and up to date.
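As a back-of-the-envelope comparison, the trade-off between recurring fees and upfront costs can be sketched as follows (all figures are invented for illustration and will vary by vendor and firm size):

```python
# Hypothetical figures only: cumulative cost of a cloud subscription
# versus an on-premise installation over several years.
def cloud_cost(years: int, monthly_fee: float = 150.0) -> float:
    return years * 12 * monthly_fee

def on_premise_cost(years: int, upfront: float = 10_000.0,
                    annual_maintenance: float = 1_500.0) -> float:
    return upfront + years * annual_maintenance

for years in (1, 3, 5, 10):
    print(f"year {years:>2}: cloud {cloud_cost(years):>9,.0f} "
          f"vs on-premise {on_premise_cost(years):>9,.0f}")
# With these assumed numbers the cloud option stays cheaper for a decade;
# real pricing (per-user fees, hardware refreshes) can shift the balance.
```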

With cloud-based software, on the other hand, this responsibility falls to the service provider. Cloud-based solutions have comparatively more advantages, such as automatic software upgrades, which save time and money by removing the cost of hardware maintenance, and the ability to access the firm's database from any location[12]. One major issue with local or on-premise software is that it must be installed on multiple computers, which leads to troubleshooting issues[13].

2)  Firm Usage

Beyond the differences between cloud-based and on-premise management systems, it is important to consider the firm's usage. For example, a cloud-based system would be more practical if employees often work from home or need access to important documents in courtrooms, because these systems can be accessed easily from any location with a stable internet connection.

3)  Training

Furthermore, firms should also consider how much training is required to use each available system and what system is most compatible with the existing software. Investigating these areas will allow a firm to decide what legal practice management system is best for them.

4)  Data Protection Compliance

Every law firm's first priority is to keep client information secure and confidential. It is therefore extremely important to invest in a practice management system that is as secure as possible. You will need to ensure that the technology used is not outdated, that all system data is encrypted and that the system has been audited by a third party for additional security[14]. Lastly, it is also worth asking the manufacturers what security measures they take to prevent third parties from hacking into the system.

Conclusion

More law firms are moving toward adopting different forms of software to become more efficient and compete in the market. Adopting a legal practice management system is a change that law firms should welcome, as it enables lawyers to perform their jobs more efficiently and work more effectively. It is important to keep in mind, however, that not all software is the same. Each system has its advantages, and before deciding to make the jump, a firm must make an informed decision about whether it needs a legal practice management system and, if so, which system is best for the firm.

References

  1. Ritu Kaushal, ‘Importance of Case Management Software’, Cogneesol (2021) at https://www.cogneesol.com/blog/legal-case-management-system-for-law-firms/                 

  2. Nerino Petro, ‘7 Reasons Why Small Law Firms Need Law Practice Management Software’ (2018), Thompson Reuters at https://store.legal.thomsonreuters.com/law-products/solutions/firm-central/resources/7-reasons-for-law-practice-management-software
  3. Insight Legal Software, 'Legal Practice Management Software' at https://insightlegal.co.uk/solicitors-software/practice-management-system/?
  4. HSBC UK, ‘Legal Tech Analysis: Investment and Growth Strategies in Law Firms’ (2019) at https://www.business.hsbc.uk/corporate/-/media/library/business-uk/pdfs/hsbc-2019-legal-tech-report.pdf
  5. Chelsea Huss, ‘7 Benefits of Legal Practice Management Software in a Law Firm’, Centerbase (2020) at https://centerbase.com/blog/7-benefits-of-legal-practice-management-software-in-a-law-firm/
  6. Ibid
  7. Ibid 
  8. Nicole Black, ‘2020 in Review: Legal Software For Working Remotely’, Abajournal (2020) at https://www.abajournal.com/columns/article/2020-in-review-legal-software-for-working-remotely
  9. Tim Baran, 'Lawyers Working Remotely: Using Practice Management Software', Rocket Matter (2014) at https://www.rocketmatter.com/featured/lawyers-working-remotely-using-practice-management-software/
  10. Ritu Kaushal, ‘Importance of Case Management Software’, Cogneesol (2021) at https://www.cogneesol.com/blog/legal-case-management-system-for-law-firms/ 
  11. Legal Futures Associate, LEAP Legal Software, ‘Key Considerations for Law Firms When Choosing Legal Software’, Legal Futures, (2021) at https://www.legalfutures.co.uk/associate-news/key-considerations-for-law-firms-when-choosing-legal-software 
  12. Clio, ‘Legal Practice Management Software’ at https://www.clio.com/law-practice-management-software/
  13. Ibid
  14. Teresa Maitch, ’10 Things to Consider Before Choosing Case Management Software’, Clio at https://www.clio.com/uk/blog/choosing-case-management-software/
