Ethical aspects of Artificial Intelligence, part 1/2: Algorithmic bias

Damir Valput

2020-07-09 13:14:45
Reading Time: 7 minutes

As we rely on Artificial Intelligence (AI) to handle an increasing number of areas of our lives, the issues of privacy and the fairness of AI algorithms are being discussed more and more widely. I have therefore decided to dedicate my next two posts here to those topics. Today, I delve into the question of algorithmic fairness and algorithmic bias.

What? Algorithms can be… bigots?

When I mention algorithmic bias in this post, I am not referring to bias as the statistical measure commonly used by Machine Learning practitioners, but as a “prejudice against an individual or a group”. Yes, as it turns out, algorithms can be prejudiced. Sort of…

A number of fields, and methods within those fields, such as medical diagnosis, human resources and hiring, and transportation, already rely on algorithms for automated decision-making. The advantage of letting artificial intelligence make decisions for us, beyond saving time and sparing us decision fatigue, is that algorithmic decision-making can process a much larger quantity of data than a human ever could and might therefore be considered less subjective than human judgement.

The reality seems to be slightly different. A number of studies have shown that ML algorithms, too, can be prone to delivering unfair and biased outputs. Biases in the data fed to the algorithms are inevitable, and a well-performing model that is good at picking up statistical patterns in historical data is also going to pick up those biases. Furthermore, once such biases are learnt, a prediction model will propagate them throughout its future decision-making, effectively committing the same mistakes as humans. Yet the predictive algorithms we develop should be in accordance with the Treaty on the Functioning of the European Union (TFEU), which prohibits discrimination on grounds of nationality and discourages discrimination on the grounds of sex, racial or ethnic origin, religion or belief, and other protected characteristics.

One of the more famous examples of bias learnt by a machine learning algorithm was discovered by Bolukbasi et al. and described in the paper titled Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Word embeddings are vectors that represent a word or a common phrase and are used in the computations of algorithms that operate over text. Bolukbasi et al. discovered that such embeddings can learn gender biases and thus encapsulate them in the vector representations used by subsequent learning algorithms.

The title of the paper was motivated by a particular example of sexism implicitly contained in the embeddings. Using simple algebraic manipulations of the embedding vectors, they noticed that the system, when asked to complete the analogy “man is to computer programmer as woman is to x”, gives the answer x = homemaker. Needless to say, most people today would not agree with that analogy. Yet, based on the historical data fed to the algorithm, it was incorporated into the learnt language model.
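To give a feel for how such an analogy query is actually computed, here is a minimal sketch assuming the pre-trained GoogleNews word2vec vectors available through gensim’s downloader (and assuming the phrase appears as the token computer_programmer in that vocabulary). The exact neighbours returned depend on the embedding used; this only illustrates the mechanics, it does not claim to reproduce the paper’s result.

```python
# Sketch of the analogy query described above, assuming the pre-trained
# GoogleNews word2vec vectors provided by gensim's downloader
# (~1.6 GB download on first use).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "man is to computer_programmer as woman is to x" is answered by finding
# the words closest to  vec(computer_programmer) - vec(man) + vec(woman)
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"],
                           topn=5))
```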

Figure 1. Illustration of gender bias in a 2D vector space. The distance between the vectors representing the terms “woman” and “homemaker” is considerably shorter than the distance between “man” and “homemaker”. A similar bias is visible for the term “computer programmer”.

Where do algorithms learn biases from?

Unfairness can stem from a variety of sources. In the above example of biased word embeddings, gender bias was captured in the vector representations because it was present in the historical data used to train the model; in other words, it was contained in the ground truth fed to the algorithm. That is one of the most common sources of algorithmic bias: the datasets used to train ML models contain historically biased human decisions, and the models do an excellent job of picking up on those decisions and eventually replicating them in their outputs.

However, that is not the only source of algorithmic bias. Biases can also be caused by missing values in the data (e.g. a part of the population being underrepresented in the dataset), erroneous values, or even the objective functions that the algorithms aim to minimise, which can favour majority groups over minorities. Even with perfectly labelled data, a model can produce discriminatory forecasts; a fairly recent example of that is Google Photos’ algorithm, which occasionally labelled black people as gorillas.
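As a small, purely illustrative example of how such gaps can be spotted early (the dataset and column names below are hypothetical), a first sanity check is simply to look at how groups and missing values are distributed in the training data before any model is fitted:

```python
# Minimal sketch: inspect group representation and missingness in the
# training data (the file and column names are hypothetical).
import pandas as pd

df = pd.read_csv("training_data.csv")

# Share of each group defined by a protected attribute, e.g. "gender":
# a heavily skewed distribution hints at underrepresentation.
print(df["gender"].value_counts(normalize=True))

# Fraction of missing values per column, which can also hide a skewed sample
print(df.isna().mean().sort_values(ascending=False).head(10))
```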

This is becoming increasingly problematic as more algorithms are used in fairness-critical applications such as criminal justice, background checks, and hiring. Ideally, we would like algorithmic decision-making to be free of the biases we consider harmful, even when they are present in the data used to train the model. For example, imagine a model used to perform background checks on air travellers in order to detect potentially dangerous individuals and flag them for additional screening at the airport. As has been observed before, such screening algorithms can pick up on racial biases and discriminate against certain minority groups.

How to decide what is fair?

While most of us would likely agree that the example above is harmful and unfair, it is not always easy to define what constitutes a bias in algorithmic decision-making. There is no universal technique that can ensure fairness in every situation. Moreover, we need to make sure that we are removing only the harmful, unwanted biases, and not information that is legitimately relevant to the accurate performance of the model (depending on its purpose).

Researchers describe algorithmic fairness in general terms as the absence of any prejudice or favouritism towards an individual or a group based on their inherent or acquired characteristics. Algorithmic fairness should therefore aim to eliminate biases both against individuals and against groups of people.

To assess group fairness, we partition the population of interest into groups and make sure that the statistical measures we use to capture fairness are equalised across those groups. Individual fairness, on the other hand, looks at similar individuals and ensures that they are treated similarly by the algorithm and its output.

Fairness as a mathematical concept

In order to model fairness, detect its absence in algorithmic outputs, measure it and ultimately remove unfairness from a model, we need to be able to express it using tangible mathematical concepts. This is where a universally accepted measure of the degree of fairness would really help; however, as already mentioned, no such measure exists (yet), and we need to consider each specific problem on its own terms.
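To give a flavour of what such tangible concepts look like, here are a few formalisations commonly used in the fairness literature (the notation below is mine, introduced purely for illustration): for a binary predictor Ŷ, a protected attribute A with groups a and b, and the true outcome Y,

```latex
% Demographic (statistical) parity: equal positive-prediction rates across groups
P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = b)

% Equalised odds: equal true- and false-positive rates across groups
P(\hat{Y} = 1 \mid Y = y,\, A = a) \;=\; P(\hat{Y} = 1 \mid Y = y,\, A = b),
\qquad y \in \{0, 1\}

% Individual fairness (Dwork et al.): similar individuals receive similar outputs,
% where d measures similarity between individuals and D between the
% corresponding outputs of the model M
D\big(M(x), M(x')\big) \;\le\; d(x, x')
```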

One proposal for identifying and quantifying fairness in a specific problem consists of asking stakeholders about their beliefs about fairness and eventually reaching a consensus on a definition relevant to the application under development. While translating that consensus back into the algorithm is a challenge of its own, the work of Bolukbasi et al. gives us hope that feasible solutions exist and that, oftentimes, the same techniques used to detect an undesirable bias can also be used to eliminate it.
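To illustrate the flavour of such a technique, the sketch below shows only the core geometric idea behind debiasing word embeddings, heavily simplified and with toy vectors rather than the full procedure from the paper: once a bias direction has been identified in the embedding space, its component can be projected out of the vectors of words that should be neutral with respect to it.

```python
# Core geometric idea: identify a bias direction and remove its component
# from vectors that should be neutral with respect to it.
# (Simplified sketch with toy 3-d vectors, not the complete method.)
import numpy as np

def bias_direction(v_she: np.ndarray, v_he: np.ndarray) -> np.ndarray:
    """A crude one-pair estimate of the gender direction."""
    g = v_she - v_he
    return g / np.linalg.norm(g)

def remove_component(v: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Project out the (unit-length) bias direction g from vector v."""
    return v - np.dot(v, g) * g

# Toy vectors purely for illustration
v_he = np.array([1.0, 0.2, 0.1])
v_she = np.array([-1.0, 0.2, 0.1])
v_programmer = np.array([0.4, 0.9, 0.3])  # hypothetical "biased" vector

g = bias_direction(v_she, v_he)
v_debiased = remove_component(v_programmer, g)
print(np.dot(v_debiased, g))  # ~0: no remaining component along the bias direction
```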

However, in doing so, we need to make sure that removing the bias does not seriously dampen the forecasting or decision-making power of the algorithm.

Fairness and accuracy don’t go hand in hand

As it turns out, there is a trade-off between the accuracy of a predictive algorithm and bias suppression, i.e. fairness. It seems difficult to remove harmful biases from ML models without diminishing the accuracy of the algorithm. However, there is also research suggesting that the trade-off may not be inevitable. In their paper Unlocking Fairness: a Trade-off Revisited, M. Wick, S. Panda and J.B. Tristan found, through experiments under controlled conditions, that fairness and accuracy can work together. Whether fairness can be improved without sacrificing accuracy in practice is still an open question.

Moreover, incorporating fairness metrics into the formulation of existing ML models is not easily done without increasing their complexity. Lastly, there are also trade-offs among different definitions of fairness that cannot all be satisfied at once, even when each of them is commonly accepted and valid. It is therefore important to understand that fairness comes both with an upper bound and with a tax on accuracy.
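A schematic way to picture where that tax comes from (this formulation is illustrative and not taken from the papers cited above): a fairness term is added to the usual training objective, and the weight λ given to it directly trades predictive accuracy for fairness.

```latex
% Schematic fairness-regularised objective: standard empirical loss plus a
% penalty on a chosen unfairness measure (here, the demographic-parity gap),
% with \lambda steering the accuracy/fairness trade-off.
\min_{\theta} \;
\underbrace{\frac{1}{n} \sum_{i=1}^{n} \ell\big(f_\theta(x_i), y_i\big)}_{\text{predictive loss}}
\;+\;
\lambda \,
\underbrace{\Big| P\big(\hat{Y}=1 \mid A=a\big) - P\big(\hat{Y}=1 \mid A=b\big) \Big|}_{\text{unfairness penalty}}
```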

Figure 2. There seems to be an unavoidable trade-off between fairness and accuracy (similar to the one between precision and recall). Image borrowed from the paper Unlocking Fairness: a Trade-off Revisited.

What is being done to increase algorithmic fairness?

A significant amount of work has recently been done in the data science community to address the problem of algorithmic fairness in predictive models.

One idea relies on continuously retraining models with fresh data, assuming that, as the data comes to reflect (positive) changes in society, the historical biases it contains will gradually be eradicated. Obviously, this is more of a long-term solution. Relatedly, there is also the idea of continuously testing models for the presence of bias, as sketched below.
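Such continuous testing does not have to be sophisticated; a recurring check that recomputes a chosen fairness metric on fresh predictions and flags it when it drifts past a threshold already goes a long way. The metric, names and threshold below are placeholders chosen for illustration, not a standard.

```python
# Sketch of a recurring bias check that could run after each retraining or
# on each fresh batch of predictions.
import numpy as np

def selection_rate(y_pred: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of positive predictions within one group."""
    return float(y_pred[mask].mean())

def parity_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in selection rates between any two groups."""
    rates = [selection_rate(y_pred, groups == g) for g in np.unique(groups)]
    return max(rates) - min(rates)

def check_bias(y_pred: np.ndarray, groups: np.ndarray, threshold: float = 0.05) -> bool:
    """Return True if the demographic-parity gap stays below the threshold."""
    return parity_gap(y_pred, groups) <= threshold

# Toy example: predictions and group membership for eight individuals
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(parity_gap(y_pred, groups))   # 0.0 for this toy data
print(check_bias(y_pred, groups))   # True: the gap is within the threshold
```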

Another data science technique relies on removing features that are highly correlated with the protected feature (the one we wish to control for bias, e.g. gender), since ML algorithms could still pick up biases through those correlated features even after the protected feature itself has been removed; a minimal sketch of that screening step is shown below. However, there is still a need for a more holistic approach. Along that line, a group of researchers from the University of Toronto recently developed an approach to learning fair representations, which can help achieve both group and individual fairness.
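Here is that feature-screening step in its simplest form (the dataset, column names and cut-off are hypothetical, and categorical features would need a different association measure than plain correlation):

```python
# Sketch: flag and drop numeric features that are highly correlated with a
# protected attribute, since a model could otherwise reconstruct that
# attribute from them even after it has been removed.
# The dataset, column names and the 0.8 cut-off are hypothetical.
import pandas as pd

df = pd.read_csv("applicants.csv")
protected = "gender"

# Encode the protected attribute numerically to compute correlations
protected_codes = df[protected].astype("category").cat.codes

numeric_features = df.select_dtypes("number")
correlations = numeric_features.corrwith(protected_codes).abs()

proxies = correlations[correlations > 0.8].index.tolist()
print("Potential proxy features:", proxies)

df_clean = df.drop(columns=[protected] + proxies)
```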

While data science techniques can help deal with this issue, algorithmic bias is not just a data science problem. From discussing who decides what is fair and how stakeholders should be involved in those discussions, to enabling mechanisms for ensuring fairness, work on algorithmic fairness needs to involve experts from various disciplines. For example, the IEEE has been aiming to develop algorithmic fairness standards since 2017. As we move towards a future driven more by intelligent machines’ decision-making, we can expect to see algorithm audits performed across various industries, including aviation. However, we are still far from established methodologies for performing such audits.

Algorithmic fairness in (aviation) industry

With AI technologies increasingly finding their place in the aviation industry, we need to be aware of the potential risks associated with algorithmic biases and of how easily they can creep into ML models without us realising it. In aviation, we can expect to have to deal with algorithmic bias in many applications: from general ones such as computer vision (e.g. automatic face identification of travellers) and hiring processes (e.g. screening or testing job candidates), to more industry-specific applications such as the security threat detection mentioned earlier.

While academia and the scientific community are now fairly conscious of the problem of algorithmic fairness, the aviation industry still lags behind. Many of the findings that are almost considered common knowledge in academia are not being successfully transferred to industry; oftentimes, data scientists working in industry do not think about algorithmic fairness.

Algorithmic fairness analysis should be added as a step in the data science workflow of every project. By raising awareness of this problem in the aviation industry, we will be one step closer to making the aviation community more accepting and welcoming of AI.

***

Machine Learning algorithms have proven time and time again to be excellent at picking up statistical patterns in historical data. It’s just that sometimes we don’t want them to repeat history. Ultimately, we could even leverage the potential of AI to detect and correct conscious and unconscious biases and bring about positive change.

Author: Damir Valput

© datascience.aero