3 Causes of Algorithm Bias in AI

Artificial intelligence Mar 29, 2022

Automation exists all around us. From self-driving cars to smart assistants, we are constantly surrounded by new and innovative AI applications. Through automation, these tools are continuously augmenting our lives by removing many of the repetitive and mundane tasks we encounter on a regular basis, improving our safety and wellbeing at work and in our personal lives. Despite the benefits these AI applications bring, there are cases where this technology does not act as objectively and impartially as we would expect and desire of AI tools.

There are several examples. Within criminal justice, the recidivism risk calculator COMPAS has reportedly shown racial bias, assigning disproportionately higher risk scores to defendants from certain ethnic backgrounds. Similar problems have been observed in finance: the data analytics company FICO's algorithm has reportedly allocated credit scores based on race rather than a person's individual circumstances, disadvantaging people from minority backgrounds. Problems have also been found in employment, such as Amazon's AI hiring system, which had to be scrapped for its 'sexist' practices after it was found to have learnt that being female was a negative trait on a CV. These problems have even famously occurred on social media, as with Microsoft's Twitter bot 'Tay', which had to be shut down after it made "racist, inflammatory, and political statements".

Cases of AI applications making prejudiced decisions are drawing increased attention. As AI applications take on increasingly important roles, any element of bias in these tools becomes a serious cause for concern, as the mistakes they make can have devastating impacts on people's lives: imagine applying for a job, or a mortgage, only to find out a machine turned you down because it was prejudiced!

As AI is used in increasingly important decision making, it is critical that we understand how bias within AI occurs, so that we can work towards mitigating it and making AI better and fairer for everyone.

What is Algorithm Bias?

To understand algorithm bias, it is first important to mention how two domains of machine learning (ML), unsupervised learning (USL) and reinforcement learning (RL), work. USL and RL identify patterns within a set of data and generate outputs, or 'predictions', that they were not explicitly programmed to make. Unlike rule-based AI (which produces pre-defined outcomes based on pre-set rules), these ML methods can derive their own original results from a data set. This enables them to reach conclusions without the need for human intervention. However, ML algorithms can generate output that could be understood as 'biased': output that disadvantages a particular group without any relevant difference to justify it. These systems are not aware of the nature or types of bias reflected in the data, nor do they understand the wider context and consequences of their decisions. This means that an AI tool can, unknowingly, generate a biased outcome in the real world that impacts human lives.
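As a minimal illustration of the difference, here is a hedged sketch in Python (assuming scikit-learn is available; the spending figures and segment names are invented for this example), contrasting a pre-set rule with an unsupervised model that derives its own groupings from data:

```python
from sklearn.cluster import KMeans

# Rule-based AI: a human writes the grouping logic explicitly.
def rule_based_segment(annual_spend: float) -> str:
    return "high_value" if annual_spend >= 10_000 else "standard"

# Unsupervised learning: the algorithm finds its own groupings in the
# data; nobody defined what the segments should be.
spend_data = [[1_200], [900], [1_500], [11_000], [12_500], [10_800]]
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spend_data)

print(model.labels_)  # segments the algorithm derived by itself
# If the data carries a bias, the derived segments will carry it too,
# and the algorithm has no way of knowing that.
```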

How does algorithm bias occur?

When algorithm bias occurs, the results can have a devastating impact on the people involved. But there are ways to minimise, mitigate and prevent instances of bias by understanding how it arises. Bias typically derives from one of three causes. By recognising where algorithm bias originates, and how each cause leads to a biased outcome, we can work towards mitigating instances of bias in decision making:

Cause 1: Bias in data

Data is an integral part of any business and plays a fundamental role in the use of AI algorithms. AI systems are only as good as the data fed into them, meaning that the bias in an algorithm’s decision making can often derive from the data that is used.

AI and machine learning models are created using a set of training data to find patterns and establish relationships that can be generalised to new data sets to produce useful outputs. Training is a critical step in an algorithm's ability to generate useful conclusions from a data set and validate its performance (i.e. precision, accuracy and consistency) before it is implemented within important decision making. This process can be compromised if the model is trained on 'bad data': data that, for whatever reason, is not fit for purpose. This could be because it was poorly compiled, is missing key elements, has not been cleaned, or is simply inappropriate for the algorithm.

AI systems are only as good as the data fed into them
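As a rough sketch of what that training and validation step can look like, here is a minimal example, assuming scikit-learn and using synthetic data in place of a real business data set:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training data set.
X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)

# Hold data back so performance is validated before deployment.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
predictions = model.predict(X_test)

print("accuracy: ", accuracy_score(y_test, predictions))
print("precision:", precision_score(y_test, predictions))
# Caveat: strong headline metrics say nothing about bias if the
# underlying data is itself skewed or unrepresentative.
```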

Bad data can manifest in many forms, some of which may seem irrelevant or even ridiculous but have significant impacts. For example, formatting inconsistencies, though recognisable as the same thing to humans, can make identical data appear different to a computer. This can lead to AI solutions treating the same data differently and identifying unintentional, non-existent relationships. In a hypothetical example, an algorithm might flag a relationship between the phone number format '+99999 999999' and higher income, and issue lower credit scores to users with different formats like '99999999999' or '99,999,999,999'. The solutions generated by the algorithm could then discriminate based on this relationship, leading to a biased outcome.
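A short, purely illustrative sketch of this hypothetical shows how the same number in different formats becomes three distinct values to a machine unless it is normalised first:

```python
import re

raw_numbers = ["+99999 999999", "99999999999", "99,999,999,999"]

# To a human these are obviously the same number; to a model treating
# each string as a distinct category, they are three different values,
# each free to pick up its own spurious correlation.
print(len(set(raw_numbers)))  # 3 distinct values

def normalise(number: str) -> str:
    """Strip everything except digits so equivalent formats collide."""
    return re.sub(r"\D", "", number)

print({normalise(n) for n in raw_numbers})  # one canonical value
```

Cleaning steps like this are mundane, but skipping them is exactly how 'bad data' of this kind enters a model.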

Cause 2: Bias in design

Another cause of bias derives from the design of the ML model. Even with the same data set, there are different ways an algorithm can be designed, and these design choices determine how that data will be treated. This means models can produce different outcomes based on their design, and if the design contains an element of bias, it is likely that the outcome produced will be biased.

An algorithm may be designed with a particular objective in mind, leading to a bias in the way that objective is met. If the algorithm is designed to optimise for an overall objective but neglects the smaller demographics within the overall group, the misrepresentation within the data may lead to disproportionate results for different demographics. For example, a solution designed to maximise successful mortgage applications may favour applicants with certain career paths or backgrounds, indirectly and unfairly favouring certain demographics over others (see the sketch below). It's important to be conscious of the objectives we set for AI to achieve, and also to be aware of how AI sets out to achieve them: AI seeks the most optimal solution, not the fairest. It lacks knowledge of the greater context, and the human values and ethics that would temper its judgements. In the context of applying for a job, loan or house, it is easy to see how unconscious bias can become embedded in the technology that helps make the decisions governing our daily lives.

It's important to be aware of how AI goes about achieving its set objectives or goals
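Here is a toy sketch of that mortgage scenario (the applicants, groups and scores are entirely made up), showing how optimising the stated objective alone can produce a skewed selection:

```python
# Hypothetical applicant pool: (id, demographic group, model score).
# The score stands in for a predicted chance of a successful application.
applicants = [
    ("a", "group_x", 0.91), ("b", "group_x", 0.88), ("c", "group_x", 0.86),
    ("d", "group_y", 0.84), ("e", "group_x", 0.83), ("f", "group_y", 0.79),
]

# Objective: maximise successful applications, so take the top scores.
approved = sorted(applicants, key=lambda a: a[2], reverse=True)[:3]
print([applicant_id for applicant_id, _, _ in approved])

# The 'optimal' selection approves group_x exclusively, even though
# group_y applicants were close behind: the objective says nothing
# about fairness, so the outcome doesn't either.
```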

There can also be issues in attributing value to different aspects of a data set, known as weighting. Different factors within a data set (such as age, gender, etc.) can be weighted to carry different values, which becomes problematic if they are weighted incorrectly. For car insurance providers, factors such as the applicant's age and how long they have held their licence are relevant to the value of the cover, and are weighted heavily as a result. However, if a factor like hair colour were weighted as heavily despite having no relevance to the situation, it would lead to biased outcomes. This and the previous example may both seem outlandish and preposterous scenarios, but their intention is to highlight the current paradigm, where businesses are increasingly hungry for any and all data they can create, capture and use to improve their performance. Lacking an understanding of how AI works, and of the potential for seemingly innocuous data points to result in bias, is a real-world concern for businesses seeking to adopt AI – and for the consumers who may be misjudged by it.
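A small sketch (with invented weights and features) illustrates how an irrelevant but heavily weighted factor can move the outcome for otherwise identical applicants:

```python
# Hypothetical insurance weights. Age and years holding a licence are
# genuinely relevant; hair colour is not, but here it has been given a
# comparable weight by mistake.
weights = {"age": 0.5, "years_licensed": 0.4, "hair_colour": 0.4}

def risk_score(features: dict) -> float:
    """Weighted sum of (already normalised) applicant features."""
    return sum(weights[name] * value for name, value in features.items())

driver_a = {"age": 0.3, "years_licensed": 0.2, "hair_colour": 1.0}
driver_b = {"age": 0.3, "years_licensed": 0.2, "hair_colour": 0.0}

# Identical in every relevant respect, yet hair colour alone moves the
# score: 0.63 versus 0.23, a biased outcome from a mis-weighted feature.
print(risk_score(driver_a), risk_score(driver_b))
```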

Cause 3: Human bias within use

Discriminatory outcomes can also arise from the human side of AI solutions. Humans are, regardless of our best intentions, competency, knowledge and wealth of experience, fundamentally limited and flawed. These limitations and flaws can sometimes be reflected in the way AI solutions operate or are used:

Humans can be guilty of interpretation bias when evaluating the output generated by an algorithm. Whether intentionally or not, a conclusion made by an ML solution can be misinterpreted and misused because of a human's bias, so that the decisions or actions taken using this output are biased. This could be a human lacking the context of the outcome, or someone framing an ambiguous outcome with suggestive language. A 30% recidivism risk can be made to sound positive when phrased as "only 30%", and negative when phrased as "a full 30%". In this case, the language used could determine a convict's release chances more than the solution's output, as humans can be heavily influenced by the way data is framed rather than by objective consideration of the data itself.

Humans may also unknowingly embed their own bias within the data an algorithm uses, leading to biased outcomes. Unlike other types of bad data, which may be 'incorrect' due to formatting or missing elements, human data bias appears in data that is 'true' to its purpose but reflects a human bias within the set. This can be anything from historic inequalities to unconscious bias present within data sets. If a data set reflects a human bias, an algorithm will learn from the data to reflect a similar bias, systematically embedding it in its judgement and decision-making process.
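As a minimal sketch of this effect (using scikit-learn and a fabricated set of 'historical' hiring decisions), a model trained on biased past outcomes will reproduce them for otherwise identical candidates:

```python
from sklearn.linear_model import LogisticRegression

# Toy historical data: [years_experience, group] -> hired (1) or not (0).
# The records are 'true' to what happened, but the past decisions
# systematically disadvantaged group 1 despite identical experience.
X = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two candidates with the same experience, differing only by group:
print(model.predict([[5, 0], [5, 1]]))  # the learnt bias resurfaces
```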

Final word:

Although cases of its occurrence are rare compared to the many AI applications improving the decision-making process, the risk of an AI tool creating a biased outcome is still an important consideration. It's important to recognise that humans are ultimately responsible for the creation of artificial intelligence: we design and build the models, and we are responsible for creating the data they are trained on to learn how to perform a task. Our intelligence, perspective and understanding of the world are imparted into these machines, which means that although they can be powerful and transformative, they can also magnify and systemise our ignorance, flaws and bias. In recognising this, we can work towards mitigating instances of this bias, ensuring that AI algorithms work fairly for everyone involved.

Written by Joseph Myler and Clayton Black

Brainpool AI

Brainpool is an artificial intelligence consultancy specialising in developing bespoke AI solutions for business.