Artificial Intelligence and Ethics
Written by Srisha Sapkota
Blogger
Our baseline empathy sets our definition of “good” or “evil”[1]. For example, most of us know that we should value human life over material objects without needing anyone to tell us so explicitly[2]. Someone who sacrifices a baby to get a new car would automatically be branded “evil”; these macro laws are hardwired into us as human beings[3]. But why should human or animal life be valuable to AI? A dog has no greater intrinsic value to a machine than, say, a sandwich, unless we program our values into our AI systems[4].
Just a few years ago, discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics[5]. Today the biggest tech companies in the world, such as Microsoft, Facebook, Twitter, and Google, are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, also known as AI[6].
Importance of ethics in artificial intelligence
Artificial intelligence systems use machine learning to figure out patterns within data and make decisions, often without a human giving them any moral basis for how to do so[7]. There have been numerous cases in which advanced or AI-powered algorithms were abused, went awry, or caused damage[8]. It was revealed that the British political consulting firm Cambridge Analytica harvested the data of millions of Facebook users without their consent to influence the US elections, raising questions about how algorithms can be abused to influence and manipulate the public sphere on a large scale[9]. Google decided not to renew a contract with the Pentagon to develop AI that would identify potential drone targets in satellite images, after large-scale protests by employees who were concerned that their technology would be used for lethal purposes[10].
Countless news reports, from faulty and discriminatory facial recognition to privacy violations to black-box algorithms with life-altering consequences, have put AI ethics on the agendas of boards, CEOs, and Chief Data and Analytics Officers[11]. What most leaders don’t understand, however, is that addressing these risks requires raising awareness of them across their entire organization; those that do understand this often don’t know how to proceed[12]. Over 50% of executives report “major” or “extreme” concern about the ethical and reputational risks of AI in their organization, given its current level of preparedness for identifying and mitigating those risks. That means building an AI ethical risk program that everyone is bought into is necessary for deploying AI at all[13].
AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and, perhaps the deepest and most difficult philosophical question of the era, the role of human judgment[14].
As AI systems proliferate, they’ll frequently face lose-lose Cornelian dilemmas in real-life scenarios: say, a self-driving car must choose between turning left and hitting a child or turning right and hitting two adults[15]. We’re essentially trusting the programmers of these systems to make the right decision, a tall task considering that we’d be hard-pressed to make the decision ourselves[16].
Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input[17]. As the training phase cannot cover all possible examples that a system may deal with in the real world, these systems can be fooled in ways that humans wouldn’t be[18]. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned and that people can’t overpower it to use it for their own ends[19].
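To make that concrete, here is a minimal sketch of the idea behind adversarial examples, one well-documented way AI systems can be fooled where humans would not be. Everything below is invented for illustration: a toy linear classifier with random weights stands in for a real model, and real attacks (such as the fast gradient sign method) apply the same push-against-the-weights trick to deep networks.

```python
# A toy adversarial perturbation against a linear classifier.
# All weights and inputs are random stand-ins for a real model.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)            # classifier weights: label = sign(w . x)
x = rng.normal(size=100)            # an input the model currently classifies
score = w @ x

# Nudge every feature by a tiny epsilon in the direction that pushes the
# score the other way. No single change is noticeable on its own, but
# across 100 features the nudges add up and can flip the decision.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print(f"original score:     {score:+.2f}")
print(f"perturbed score:    {w @ x_adv:+.2f}")
print(f"max feature change: {np.abs(x_adv - x).max():.2f}")  # exactly epsilon
```

The asymmetry is the point: a human inspecting the input would see nothing unusual, because no feature moved by more than a rounding error, yet the model’s decision can swing to the opposite class.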
Sidewalk Labs, a subsidiary of Google, faced massive backlash from citizens and local government officials over its plans to build an IoT-fueled “smart city” within Toronto, due to a lack of clear ethical standards for the project’s data handling[20]. The company ultimately scrapped the project at a loss of two years of work and $50 million[21]. Meanwhile, jobs that require human interaction and empathy, and that require applying judgment to what the machine is creating, are expected to remain robust[22].
With no clear protocol in place on how to identify, evaluate, and mitigate the risks, teams end up either overlooking risks, scrambling to solve issues as they come up, or crossing their fingers in the hope that the problem will resolve itself[23].
Discrimination by AI
Many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale[24]. Though artificial intelligence is capable of a speed and processing capacity far beyond that of humans, it cannot always be trusted to be fair and neutral[25]. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects, and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people[26].
“Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said political philosopher Michael Sandel, Anne T. and Robert M. Bass Professor of Government. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing, replicate and embed the biases that already exist in our society.” As machines learn from data sets they’re fed, chances are “pretty high” they may replicate many of the banking industry’s past failings that resulted in systematic disparate treatment of African Americans and other marginalized consumers[27].
The business world and the workplace, rife with human decision-making, have always been riddled with “all sorts” of biases that prevent people from making deals or landing contracts and jobs[28]. AI not only replicates human biases; it confers on those biases a kind of scientific credibility, making it seem that its predictions and judgments have an objective status[29]. Classics of the genre are the credit cards accused of awarding bigger loans to men than to women, based simply on which gender got the best credit terms in the past[30].
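The mechanics behind such cases are easy to demonstrate. The sketch below is fully synthetic, with the dataset, the protected “group” attribute, and the stricter historical approval bar all invented: a model fit to historically skewed loan decisions faithfully reproduces the skew.

```python
# Hypothetical illustration: a model trained on historically biased
# approvals inherits the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

group = rng.integers(0, 2, n)       # 0/1 protected attribute (e.g. gender)
income = rng.normal(50, 10, n)      # identically distributed in both groups

# Historical decisions used the same income rule, but held group 1 to a
# stricter bar -- this is the bias we want the model to inherit.
approved = (income + rng.normal(0, 5, n) > 50 + 8 * group).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical rate {approved[group == g].mean():.2f}, "
          f"model rate {pred[group == g].mean():.2f}")
# The model's approval rates mirror the historical disparity. Note that
# simply deleting the 'group' column would not help if other features
# acted as proxies for it.
```

Measuring exactly this kind of gap between groups, often called demographic parity, is one of the simplest audits a team can run before deploying a model.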
Amazon engineers reportedly spent years working on AI hiring software but eventually scrapped the program because they couldn’t figure out how to create a model that didn’t systematically discriminate against women[31].
Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”[32].
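One way such proxies come to light is by auditing a model’s learned weights. The sketch below is entirely hypothetical, with a four-document CV corpus invented for illustration; the point is only that inspecting coefficients can reveal a screening model keying on a phrase like “field hockey” rather than on job-relevant skills.

```python
# Auditing a hypothetical CV-screening model for proxy features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "python sql field hockey captain",   # hired
    "java field hockey team lead",       # hired
    "python sql data analysis",          # rejected
    "java statistics netball team",      # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Rank words by learned weight: "field" and "hockey" float to the top,
# because they perfectly separate hired from rejected in this toy data --
# the model has learned a proxy, not a skill.
for word, coef in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                         key=lambda p: -p[1]):
    print(f"{word:12s} {coef:+.3f}")
```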
Issues with incorporating ethics in AI
Among highly cited AI papers published at top machine learning conferences, values like performance, building on past work, generalization, efficiency, quantitative evidence, novelty, and understanding are prevalent and prioritized, while values such as societal needs, justice, diversity, critique, and other ethical principles are covered extremely seldom, if at all[33]. The prioritized values may look like purely technical matters, but they carry sociopolitical implications that revolve around centralizing power, benefiting already wealthy industries, and disregarding the interests of underprivileged social groups[34]. Furthermore, the papers hardly mention risks and expose significant blindness to potential harms, even when socially contentious applications in areas like surveillance or misinformation are being researched[35]. And as a growing body of work on AI metaethics shows, many approaches in AI ethics, among them the prevalent principled, deontological approach, fail in many regards, and the technical solutions offered for fairness, explainability, and privacy must still evolve[36].
Typical AI ethics approaches fail in several recurring ways: they have no reinforcement mechanisms; they are often used for mere marketing purposes; they are not sensitive to different contexts and situations; they are naïve from a moral-psychology perspective, ignoring the effects of bounded ethicality; they have hardly any influence on the behavioural routines of practitioners; they fail to address the technical complexity of AI, for instance by focusing only on supervised machine learning and disregarding the ethical implications of deep reinforcement learning, while at the same time being technologically deterministic; and they use terms and concepts that are often too abstract to be put into practice[37]. There is also a lot of handwringing about how machines will behave when faced with ethical scenarios, yet no consistency in how humans behave, or even in how they are supposed to act[38]. That makes it difficult to impose consistent ethics on machines.
Possible solutions for incorporating ethics in AI
Many senior leaders describe ethics in general, and data and AI ethics in particular, as “squishy” or “fuzzy,” and argue it is not sufficiently “concrete” to be actionable[39]. Leaders should take inspiration from health care, an industry that has been systematically focused on ethical risk mitigation since at least the 1970s[40]. Key concerns about what constitutes privacy, self-determination, and informed consent, for example, have been explored deeply by medical ethicists, health care practitioners, regulators, and lawyers, and those insights can be transferred to many ethical dilemmas around consumer data privacy and control[41].
Some people believe that, as with children, our approach should be to expose AI to the broad principles of good behaviour: not causing unnecessary harm, not discriminating, acting for the betterment of society as a whole (with the understanding that society may be a mix of humans and AI), and, above all, balancing competing and sometimes contradictory pulls of good behaviour[42].
Anyone who deals with data or AI products, whether in HR, marketing, or operations, should understand the company’s data and AI ethics framework[43]. Creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained requires educating and upskilling employees, and empowering them to raise important questions at crucial junctures and escalate key concerns to the appropriate deliberative body[44]. Throughout this process, it’s important to clearly articulate why data and AI ethics matter to the organization, in a way that demonstrates the commitment is not merely part of a public relations campaign[45]. Rewarding people for their efforts in promoting a data ethics program is also essential[46]. Overall, creating organizational awareness, ethics committees, and informed product managers, owners, engineers, and data collectors is all part of the development and, ideally, procurement process needed to infuse ethics into AI[47]. Done well, raising awareness can both mitigate risks at the tactical level and lend itself to the successful implementation of a more general AI ethical risk program[48].

One barrier organizations face is that people outside of IT can be intimidated by the topic: “artificial intelligence,” “machine learning,” and “discriminatory algorithms” can seem like daunting concepts, which leads people to shy away from them altogether[49]. It’s crucial for building organizational awareness that people become familiar and comfortable with the concepts, if not the technical underpinnings[50].

AI also doesn’t have to be as opaque as it may seem; models need to be more transparent about how they reach a decision[51]. Transparency lets humans see whether the models have been thoroughly tested and make sense, and lets them understand why particular decisions are made[52]. AI is smart, but only in one way[53]. So when an AI model makes a mistake, human judgment is needed to gauge the context in which the algorithm operates and understand the implications of its outcomes[54].
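As a small illustration of what that transparency can look like, the sketch below trains an inherently interpretable model whose complete decision logic can be printed and reviewed; the bundled scikit-learn dataset is used purely as a stand-in for any high-stakes tabular decision.

```python
# An inherently interpretable model: a shallow decision tree whose full
# decision logic can be printed and checked against domain knowledge.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Every decision path, as human-readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

For genuinely black-box models, post-hoc explanation techniques such as SHAP or LIME aim to provide similar insight, though their outputs are approximations of the model’s behaviour rather than its actual logic.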
“Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications not only to decide what the regulations should be but also to decide what role we want big tech and social media to play in our lives,” said political philosopher Michael Sandel[55]. Overall, the ethical principles that matter most in artificial intelligence are: accountability, transparency, privacy, inclusiveness, bias awareness, informed consent, proportionality, and individual data control.
Nevertheless, if careful consideration is given to ethics, it is possible that robots could be more ethical than humans. If robots were choosing whom to hire for a company or approve for a bank loan, they could be programmed to avoid the biases that humans might feel, said Francesca Rossi, AI ethics global leader at IBM Research[56].
[1] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[2] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[3] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[4] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[5] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[6] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[7] https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607
[8] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[9] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[10] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[11] https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics?ab=at_art_art_1x1
[12] https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics?ab=at_art_art_1x1
[13] https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics?ab=at_art_art_1x1
[14] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[15] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[16] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[17] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
[18] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
[19] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
[20] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[21] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[22] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[23] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[24] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
[25] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
[26] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
[27] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[28] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[29] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[30] https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607
[31] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[32] https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607
[33] https://link.springer.com/content/pdf/10.1007/s43681-021-00122-8.pdf
[34] https://link.springer.com/content/pdf/10.1007/s43681-021-00122-8.pdf
[35] https://link.springer.com/content/pdf/10.1007/s43681-021-00122-8.pdf
[36] https://link.springer.com/content/pdf/10.1007/s43681-021-00122-8.pdf
[37] https://link.springer.com/content/pdf/10.1007/s43681-021-00122-8.pdf
[38] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[39] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[40] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[41] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[42] https://www.forbes.com/sites/forbestechcouncil/2021/08/30/why-the-ethics-of-ai-are-complicated/
[43] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[44] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[45] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[46] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[47] https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
[48] https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics?ab=at_art_art_1x1
[49] https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics?ab=at_art_art_1x1
[50] https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics?ab=at_art_art_1x1
[51] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[52] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[53] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[54] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/bringing-transparency-and-ethics-into-ai.html
[55] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[56] https://news.harvard.edu/gazette/story/2017/09/as-ai-rises-youll-likely-have-a-job-analysts-say-but-it-may-be-different/
Which system should I choose?
This is the key question when picking among the variety of management systems available on the market. You need to consider a few key aspects of each system: whether it is cloud-based or on-premise, payment methods, maintenance costs, the firm’s usage, compatibility with current systems, and the training required to use it.
1) On-Cloud vs On-Premise Software
Legal practice management software is available both as cloud-based and as on-premise (server-based) software. There are advantages and disadvantages to each kind, and the utility of each will vary based on the firm’s needs [11].
From a monetary standpoint, cloud-based platforms are usually paid for annually or monthly, whereas a local or on-premise system will require high upfront costs. While there will be no regular payments for this kind of system, there will be maintenance costs, and the firm will have to make sure it is secure and updated.
On the other hand, with cloud-based software, this responsibility falls to the service provider. Cloud-based solutions have comparatively more advantages, such as automatic software upgrades, which save time and money by removing the cost of hardware maintenance, and the ability to access the firm’s database from any location [12]. One major issue with local or on-premise software is that it must be installed on multiple computers, leading to troubleshooting issues [13].
2) Firm Usage
Beyond some of the differences between cloud-based and on-premise management systems, it is important to consider the firm’s usage. For example, a cloud-based system would be more practical if employees often work from home or need access to important documents in courtrooms, because these systems can be accessed easily from any location with a stable internet connection.
3) Training
Furthermore, firms should also consider how much training is required to use each available system and what system is most compatible with the existing software. Investigating these areas will allow a firm to decide what legal practice management system is best for them.
4) Data Protection Compliance
Every law firm’s first priority is to keep client information secure and confidential. It is therefore extremely important to invest in a practice management system with strong security. You will need to ensure that the technology used is not outdated, that all system data is encrypted, and that the system has been audited by a third party for additional security [14]. Lastly, inquire about the security measures the vendor takes to prevent third parties from hacking into the system.
Conclusion
More law firms are moving toward adopting different forms of software to become more efficient and compete in the market. Adopting a legal practice management system is a change law firms should welcome, as it enables lawyers to perform their jobs more efficiently and work more effectively. It is important, however, to keep in mind that not all software is the same. Each has its advantages, and before making the jump, a firm must make an informed decision about whether it needs a legal practice management system and, if so, which system is best for the firm.
References
[1] Ritu Kaushal, ‘Importance of Case Management Software’, Cogneesol (2021) at https://www.cogneesol.com/blog/legal-case-management-system-for-law-firms/
[2] Nerino Petro, ‘7 Reasons Why Small Law Firms Need Law Practice Management Software’, Thomson Reuters (2018) at https://store.legal.thomsonreuters.com/law-products/solutions/firm-central/resources/7-reasons-for-law-practice-management-software
[3] Insight Legal Software, ‘Legal Practice Management Software’ at https://insightlegal.co.uk/solicitors-software/practice-management-system/
[4] HSBC UK, ‘Legal Tech Analysis: Investment and Growth Strategies in Law Firms’ (2019) at https://www.business.hsbc.uk/corporate/-/media/library/business-uk/pdfs/hsbc-2019-legal-tech-report.pdf
[5] Chelsea Huss, ‘7 Benefits of Legal Practice Management Software in a Law Firm’, Centerbase (2020) at https://centerbase.com/blog/7-benefits-of-legal-practice-management-software-in-a-law-firm/
[6] Ibid.
[7] Ibid.
[8] Nicole Black, ‘2020 in Review: Legal Software For Working Remotely’, ABA Journal (2020) at https://www.abajournal.com/columns/article/2020-in-review-legal-software-for-working-remotely
[9] Tim Baran, ‘Lawyers Working Remotely: Using Practice Management Software’, Rocket Matter (2014) at https://www.rocketmatter.com/featured/lawyers-working-remotely-using-practice-management-software/
[10] Ritu Kaushal, ‘Importance of Case Management Software’, Cogneesol (2021) at https://www.cogneesol.com/blog/legal-case-management-system-for-law-firms/
[11] LEAP Legal Software, ‘Key Considerations for Law Firms When Choosing Legal Software’, Legal Futures (2021) at https://www.legalfutures.co.uk/associate-news/key-considerations-for-law-firms-when-choosing-legal-software
[12] Clio, ‘Legal Practice Management Software’ at https://www.clio.com/law-practice-management-software/
[13] Ibid.
[14] Teresa Maitch, ‘10 Things to Consider Before Choosing Case Management Software’, Clio at https://www.clio.com/uk/blog/choosing-case-management-software/