
Artificial Intelligence and Ethics

by  Shrisha Sapkota


Our baseline empathy sets our definition of “good” or “evil”. For example, most of us know that we should value human life over material objects without needing anyone to tell us so explicitly; someone who sacrifices a baby to get a new car would automatically be branded “evil”. These macro laws/rules are hardwired into us as human beings. But why should human or animal life be valuable to an AI? A dog has no greater intrinsic value to a machine than, say, a sandwich, unless we program our values into our AI systems.

Just a few years ago, discussions of “data ethics” and “AI ethics” were reserved for nonprofit organisations and academics. Today the biggest tech companies in the world, such as Microsoft, Facebook, Twitter, and Google, are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train the machine learning models at the heart of modern AI.

Importance of Ethics in Artificial Intelligence

Artificial intelligence systems use machine learning to find patterns within data and make decisions, often without a human giving them any moral basis for how to do so. There have been numerous cases in which advanced or AI-powered algorithms were abused, went awry, or caused damage. The British political consulting firm Cambridge Analytica was revealed to have harvested the data of millions of Facebook users without their consent to influence the US elections, raising questions about how algorithms can be abused to influence and manipulate the public sphere at scale. Google decided not to renew a contract with the Pentagon to develop AI that would identify potential drone targets in satellite images, after large-scale protests by employees concerned that their technology would be used for lethal purposes.

Countless news reports, from faulty and discriminatory facial recognition to privacy violations to black-box algorithms with life-altering consequences, have put AI ethics on the agendas of boards, CEOs, and Chief Data and Analytics Officers. What most leaders don’t understand, however, is that addressing these risks requires raising awareness of them across the entire organisation, and those who do understand this often don’t know how to proceed. Over 50% of executives report “major” or “extreme” concern about the ethical and reputational risks of AI in their organisation, given its current level of preparedness for identifying and mitigating those risks. Building an AI ethical risk program that everyone buys into is therefore necessary for deploying AI at all.

AI presents three major areas of ethical concern for society: privacy and surveillance; bias and discrimination; and, perhaps the deepest and most difficult philosophical question of the era, the role of human judgment.

As AI systems proliferate, they’ll frequently face lose-lose Cornelian dilemmas in real-life scenarios: say, a self-driving car has to choose between turning left and hitting a child or turning right and hitting two adults. We’re essentially trusting the programmers of these systems to make the right decision, a tall task considering that we’d be hard-pressed to make the decision ourselves.

Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. As the training phase cannot cover all possible examples that a system may deal with in the real world, these systems can be fooled in ways that humans wouldn’t be. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned and that people can’t overpower it to use it for their own ends.
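This limitation can be illustrated with a deliberately simple sketch. The data and the nearest-neighbour “model” below are invented for illustration: a classifier trained only on the examples it has seen will still confidently assign a label to an input unlike anything in its training data, rather than flagging that it is out of its depth.

```python
# Toy illustration: a 1-nearest-neighbour classifier trained on a narrow
# range of inputs still produces a confident label for an input far outside
# that range, instead of saying "I don't know".

def nearest_neighbour_label(training_data, x):
    """Return the label of the training point closest to x."""
    closest = min(training_data, key=lambda point: abs(point[0] - x))
    return closest[1]

# The "training phase" covers only small, familiar inputs: (feature, label).
training_data = [(1.0, "safe"), (2.0, "safe"), (8.0, "unsafe"), (9.0, "unsafe")]

print(nearest_neighbour_label(training_data, 1.5))     # a familiar input
print(nearest_neighbour_label(training_data, 1000.0))  # far outside the training
                                                       # data, yet labelled with
                                                       # the same confidence
```

Real systems are vastly more complex, but the failure mode is the same: the model’s behaviour is only grounded where its training data was.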

Sidewalk Labs, a subsidiary of Google, faced massive backlash from citizens and local government officials over its plans to build an IoT-fuelled “smart city” in Toronto, due to a lack of clear ethical standards for the project’s data handling. The company ultimately scrapped the project at a loss of two years of work and $50 million. With no clear protocol in place for how to identify, evaluate, and mitigate risks, teams end up overlooking risks, scrambling to solve issues as they come up, or crossing their fingers in the hope that problems will resolve themselves. At the same time, jobs that require human interaction and empathy, and that require applying judgment to what the machine is creating, are likely to remain robust.

Discrimination by AI

Many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale. Though artificial intelligence is capable of a speed and processing capacity far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects, and scenes. But it can go wrong, such as when the camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.

“Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said political philosopher Michael Sandel, Anne T. and Robert M. Bass Professor of Government. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing, replicate and embed the biases that already exist in our society.” As machines learn from data sets they’re fed, chances are “pretty high” they may replicate many of the banking industry’s past failings that resulted in systematic disparate treatment of African Americans and other marginalised consumers.

The business world and the workplace, rife with human decision-making, have always been riddled with “all sorts” of biases that prevent people from making deals or landing contracts and jobs. AI not only replicates human biases; it confers on these biases a kind of scientific credibility, making such predictions and judgments seem to have an objective status. Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past.
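How historical bias propagates can be made concrete with a toy sketch. The loan records and the frequency-based “model” here are entirely hypothetical: a system that simply learns approval rates from past decisions turns whatever disparity those decisions contained into policy.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved?).
# The disparity between groups is invented purely for illustration.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def learn_approval_rates(records):
    """'Train' by computing the historical approval rate for each group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        approved[group] += int(outcome)
    return {g: approved[g] / total[g] for g in total}

def predict(rates, group):
    """Approve whenever the group's historical approval rate exceeds 50%."""
    return rates[group] > 0.5

rates = learn_approval_rates(history)
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(predict(rates, "group_a"))  # True  -- the past disparity is now policy
print(predict(rates, "group_b"))  # False
```

Nothing in the code mentions gender or race, which is exactly the point: the bias lives in the data, and an uncritical learner faithfully reproduces it.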

Amazon engineers reportedly spent years working on AI hiring software but eventually scrapped the program because they couldn’t figure out how to create a model that doesn’t systematically discriminate against women.

Or consider the recruitment AIs that discovered the most “accurate” signals for candidate selection were CVs containing the phrase “field hockey” or the first name “Jared”.

Issues with Incorporating Ethics in AI

Among highly cited AI papers published at top machine learning conferences, values like performance, building on past work, generalisation, efficiency, quantitative evidence, novelty, and understanding are prevalent and prioritised, in stark disfavour of societal needs, justice, diversity, critique, and other ethical principles, which are covered seldom, if at all. The prioritised values appear to be mere technical issues but are indirectly laden with sociopolitical implications: they revolve around the centralisation of power, benefit already wealthy industries, and disregard the interests of underprivileged social groups. Furthermore, the papers hardly mention risks and expose significant blindness to potential harms, even when socially contentious applications in areas like surveillance or misinformation are being researched. As a growing body of work on AI metaethics shows, many approaches in AI ethics, among them the prevalent principled, deontological approach, fail in many regards, and technical solutions in fairness, explainability, and privacy must evolve.

Typically, AI ethics approaches have no enforcement mechanisms and are often used for mere marketing purposes. They are not sensitive to different contexts and situations, and they are naïve from a moral-psychology perspective in ignoring the effects of bounded ethicality. They have little influence on the behavioural routines of practitioners, and they fail to address the technical complexity of AI, for instance by focusing only on supervised machine learning and disregarding the ethical implications of deep reinforcement learning, while at the same time being technologically deterministic. Finally, they use terms and concepts that are often too abstract to be put into practice. There is a lot of handwringing about how machines will behave when faced with ethical scenarios, yet there is no consistency in how humans behave, or even in how they are supposed to act; it is therefore difficult to impose such standards on machines.

Possible Solutions to Incorporate Ethics in AI

Many senior leaders describe ethics in general, and data and AI ethics in particular, as “squishy” or “fuzzy,” and argue it is not sufficiently “concrete” to be actionable. Leaders should take inspiration from health care, an industry that has been systematically focused on ethical risk mitigation since at least the 1970s. Key concerns about what constitutes privacy, self-determination, and informed consent, for example, have been explored deeply by medical ethicists, health care practitioners, regulators, and lawyers, and those insights can be transferred to many ethical dilemmas around consumer data privacy and control.

Some people believe that, as with children, our approach should be to expose AI to broad principles of good behaviour: not to cause unnecessary harm, not to discriminate, to act for the betterment of society as a whole (with the understanding that society may be a mix of humans and AI), and, above all, to be able to balance competing and sometimes contradictory pulls of good behaviour.

Anyone who deals with data or AI products, whether in HR, marketing, or operations, should understand the company’s data and AI ethics framework. Creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained requires educating and upskilling employees, and empowering them to raise important questions at crucial junctures and escalate key concerns to the appropriate deliberative body. Throughout this process, it’s important to clearly articulate why data and AI ethics matter to the organisation, in a way that demonstrates the commitment is not merely part of a public relations campaign. Rewarding people for their efforts in promoting a data ethics program is also essential.

Overall, creating organisational awareness, ethics committees, and informed product managers, owners, engineers, and data collectors is all part of the development and, ideally, procurement process needed to infuse ethics into AI. Done well, raising awareness can both mitigate risks at the tactical level and lend itself to the successful implementation of a more general AI ethical risk program. One barrier organisations face is that people outside of IT can be intimidated by the topic: “artificial intelligence,” “machine learning,” and “discriminatory algorithms” can seem like daunting concepts, which leads people to shy away from the topic altogether. It’s crucial for building organisational awareness that people become familiar and comfortable with the concepts, if not the technical underpinnings.

AI doesn’t have to be as opaque as it may seem, and it needs to become more transparent about how its models reach a decision. Transparency allows humans to see whether the models have been thoroughly tested and make sense, and to understand why particular decisions are made. AI is smart, but only in one way; when an AI model makes a mistake, human judgment is needed to gauge the context in which the algorithm operates and understand the implications of the outcomes.
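One form this transparency can take is preferring models whose decisions decompose into inspectable parts. As a hypothetical sketch (the weights, threshold, and feature names below are invented), a linear scoring model lets a human reviewer see exactly how much each input contributed to a decision:

```python
# Hypothetical linear credit-scoring model. Because the score is a weighted
# sum, each feature's contribution can be shown to a human reviewer.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return (decision, per-feature contributions) for an applicant."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
)
print(approved)  # True
# List the contributions from most to least influential.
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {contribution:+.2f}")
```

Modern deep models are not this legible, which is why explainability tooling exists at all; but the principle a reviewer needs is the same: a decision should be traceable back to the inputs that drove it.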

“Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications, not only to decide what the regulations should be but also to decide what role we want big tech and social media to play in our lives,” said political philosopher Michael Sandel. Overall, the ethical principles that matter most in artificial intelligence are accountability, transparency, privacy, inclusiveness, bias awareness, informed consent, proportionality, and individual data control.

Nevertheless, if careful consideration is given to ethics, it is also possible for robots to be more ethical than humans. If robots were choosing whom to hire for a company or whom to approve for a bank loan, they could be programmed to avoid the biases that humans might feel, said Francesca Rossi, AI ethics global leader at IBM Research.


