AI Ethics: What are the key issues?
The field of artificial intelligence has developed largely in isolation from ethical concerns. For decades, AI applications were so narrow in scope that their impact was limited, and the sole focus was on developing the technology itself. As AI has advanced and its applications have proliferated, the need for ethics has become increasingly obvious. Time and again, AI has resulted in unnecessary harms that could have been prevented had the technology been designed and deployed with ethics in mind.
Bias and discrimination
One of the most important areas of ethical concern in the context of AI is bias and discrimination, though not all bias and discrimination is wrongful.
By way of example, discriminating in favor of the most qualified candidates in the labor market is perfectly acceptable.
However, when we talk about bias and discrimination in the context of AI, we are typically referring to algorithms that treat people unfairly. Most notably, some algorithms have been shown to have sexist and racist tendencies.
Discrimination by proxy
One of the key challenges in overcoming bias is that an algorithm can pick out sensitive categories such as gender through proxies, even if programmers tell it to disregard gender. In the case of one organization’s hiring algorithm, for instance, the model might have noticed that no past employee had played on a women’s football team and, on that basis, unintentionally discriminated against women. Such proxies are difficult to spot because they are so plentiful, ranging from the toiletries people buy to the music they listen to and the movies they watch. The sketch below illustrates the mechanism.
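To make the mechanism concrete, here is a minimal sketch in Python using entirely hypothetical data: the proxy feature, the bias built into the historical hiring labels, and the strength of the proxy's correlation with gender are all invented for illustration. The model never sees the gender column, yet it can still score men and women differently because the proxy partly reconstructs gender.

```python
# Minimal sketch (hypothetical data and feature names) of discrimination by proxy:
# the gender column is dropped, but a correlated proxy feature remains visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

gender = rng.integers(0, 2, n)                 # 0 = male, 1 = female (hypothetical)
# Proxy feature strongly correlated with gender, e.g. "played on a women's football team"
proxy = (gender == 1) & (rng.random(n) < 0.6)
skill = rng.normal(0, 1, n)                    # genuinely job-relevant signal

# Biased historical labels: past hiring favoured men regardless of skill
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train WITHOUT the gender column -- only skill and the proxy are visible to the model
X = np.column_stack([skill, proxy.astype(float)])
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, men:  ", scores[gender == 0].mean())
print("mean score, women:", scores[gender == 1].mean())
# A gap persists because the proxy lets the model partly reconstruct gender.
```

Run as written, the sketch should typically show a noticeable gap between the two mean scores, even though gender was never an input to the model.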
Data privacy and security
Even though privacy and security are separate issues, they are connected. To protect privacy, companies and other organizations must keep safe all the personal data they collect through AI systems. However, personal data is notoriously difficult to secure. In the online world, attackers have an advantage over defenders: while attackers can choose the moment and method of attack, defenders must guard against every type of attack at all times.
The need for AI regulation
As drafting and passing legislation takes a long time, very few legal structures exist that regulate AI specifically. There have been a few relevant rulings in Europe, but they do not constitute AI legislation as such. For example, in 2020, a Dutch court declared that an algorithm used to identify fraud among recipients of public benefits amounted to a violation of human rights.
Given the number of AI-related scandals, legislation is inevitable.
AI ethics codes
In recent times, AI ethics codes have mushroomed.
Supranational organizations such as the OECD as well as national organizations, academic and research institutions, non-profit organizations, and many technology companies have all come up with codes of their own.
Despite the diversity of codes, some commonalities have been identified.
These suggest that the classic principles of bioethics (beneficence, nonmaleficence, autonomy, and justice), along with principles related to explicability and privacy, are likely to be the key elements of a good AI ethics code.
Such codes are important for several reasons:
- They can establish benchmarks for good practices
- They can change the ethos of AI development
- They can help prevent unnecessary harms
***
This article was taken from Intuition Know-How’s AI-centered learning content. Our extensive Know-How content library is trusted by the world’s largest investment and commercial banks, leading asset managers, insurance firms, regulatory bodies, and professional services firms.