AI Ethics: How to Avoid Bias

October 2021

AI is everywhere

From image recognition tools to online recommendations, AI automates repetitive and monotonous tasks to augment our lives. But this technology does not always act in our favour.

Algorithmic biases within AI systems that lead to unfair outcomes are a growing concern as the technology spreads across industries. AI has the potential to help humans make fairer decisions, but only if we carefully work toward fairness in AI systems and in the data that is fed into them.


Machine learning can be used to detect bias and spot anomalies, but machines cannot do this without humans. Data scientists choose the training data that goes into the models, and wherever a human makes that choice, bias can creep in.
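As a toy illustration of the kind of statistical screen a data scientist might run before training, the snippet below (on made-up numbers) flags values that sit far from the rest of a sample:

```python
# Toy sketch: flag anomalous values with a z-score screen.
# The numbers are invented; one value clearly sits apart from the rest.
import statistics

values = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0, 10.2]
mu = statistics.mean(values)
sigma = statistics.stdev(values)

# Anything more than two standard deviations from the mean is flagged.
anomalies = [v for v in values if abs(v - mu) / sigma > 2]
print(anomalies)  # [25.0]
```

A human still has to decide whether a flagged value is an error, a rare but valid case, or a sign of biased data collection.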


In recent years, bias in ML and AI has hit the headlines: Amazon's hiring algorithm favoured men, and Facebook was charged with housing discrimination over its targeted ads. So how do we tackle such bias?

Can AI be more biased than humans?

Human decisions are difficult to probe or review; people may lie or not understand the factors that influence their thinking, leaving room for unconscious bias. Humans are also prone to misapplying information.


For example, employers may review prospective employees’ credit histories in ways that can hurt minority groups. Employers have also been shown to grant interviews at different rates to candidates with identical resumes but with names considered to reflect different racial groups.


AI, on the other hand, can reduce humans' subjective interpretation of data: machine learning algorithms learn to consider only the variables that improve their predictive accuracy on their training data. Thanks to this simplicity and lack of hidden agendas, an ML algorithm can be fairer in its process than human decision-making. To quote Andrew McAfee of MIT, "If you want the bias out, get the algorithms in." However, the fairness of any ML model depends entirely on the statistical distribution of the data that is fed into it.

Data is the source of bias

ML models may be trained on data that contains human decisions, and so inherits their biases, or on data that reflects the effects of societal or historical inequities.

For example, word embeddings, a family of natural language processing techniques, may be trained on news articles and thereby absorb the gender stereotypes or racial inequalities present in that text.
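To make this concrete, here is a toy sketch of how stereotyped associations can be measured in an embedding space. The vectors below are invented for illustration; real embeddings such as word2vec or GloVe are learned from large text corpora:

```python
# Toy illustration: word vectors invented to mimic stereotyped training text.
# Real embeddings (e.g. word2vec, GloVe) are learned from large corpora.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

emb = {
    "he":       [0.9, 0.1],
    "she":      [0.1, 0.9],
    "engineer": [0.8, 0.2],  # skewed toward "he" in this made-up space
    "nurse":    [0.2, 0.8],  # skewed toward "she"
}

# A crude gender direction: "he" minus "she".
g = [a - b for a, b in zip(emb["he"], emb["she"])]

for word in ("engineer", "nurse"):
    print(word, round(cosine(emb[word], g), 2))  # engineer 0.51, nurse -0.51
```

A positive score leans toward "he", a negative one toward "she"; in a real embedding, a gap like this between occupation words is a measurable trace of bias in the training text.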

The collection of data itself also introduces bias. Within criminal justice, for example, oversampling neighbourhoods that are already overpoliced records more crime there, which in turn attracts more policing. An algorithm can also be trained on data that is not fit for purpose, irrelevant to the task the machine must perform, which likewise leads to bias.

Managing bias when building an AI model

There are ways to reduce bias and strengthen ethics in artificial intelligence
systems; here are three examples.

  1. Choose the right learning model for the problem.
    Each problem requires a bespoke solution that matches its data and objectives. There is no single right or wrong approach, but there are parameters that can inform your team as it chooses the most suitable AI strategy. Unsupervised models that cluster data to find patterns, for example, can learn bias from their data set. Supervised models allow more control over bias through data selection, but can introduce human bias in the process. It is therefore necessary to understand the role of human decision-making in developing algorithmic models.
  2. Choose a representative training data set.
    Making sure that the training data is statistically representative of the total population you want to analyse is essential. When there is insufficient data for one group or segment, you can use weighting methods to increase its importance in training; however, this should be done with extreme caution, as it can introduce unexpected new biases. A thorough statistical analysis in the data pre-processing stage is an essential step in removing bias from the results.
  3. Monitor performance using real data.
    No company knowingly creates biased AI, and models are likely to work as expected in controlled environments at first. That is why real-world conditions should be simulated as much as possible when building algorithms. Understanding which statistical methods hold up against real-world data can raise output quality and help companies identify and remove bias before launch.
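As a minimal sketch of the weighting idea in step 2, assuming made-up group labels, the snippet below gives each group the same total weight so the smaller one is not drowned out during training:

```python
# Minimal sketch of group reweighting on made-up labels: give each group
# equal total weight so the under-represented one still influences training.
from collections import Counter

groups = ["A"] * 8 + ["B"] * 2  # group B is under-represented
counts = Counter(groups)
n, k = len(groups), len(counts)

# weight = n / (k * group_size): each group's weights then sum to n / k.
weights = [n / (k * counts[g]) for g in groups]
print(weights[0], weights[-1])  # 0.625 2.5
```

As the article warns, weights like these should be applied with caution and validated statistically, since aggressive reweighting can introduce new distortions of its own.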


By applying statistical methods and sound modelling principles, bias can be greatly reduced or even eliminated. It is therefore crucial that companies make these steps mandatory for teams designing and building AI solutions.

Bias Mitigation

Human judgment is still needed to ensure that AI-supported decision making is fair. Minimising bias in AI is an important prerequisite for trust in the system, which in turn is critical to driving real business benefits, ensuring continuous productivity growth, and tackling pressing societal issues.

Firstly, be aware of the contexts in which AI can help correct bias, and of those where it cannot. Organisations need to anticipate the domains most prone to unfair bias, such as those with skewed data sets. From there, operational strategies can include improving data collection through more aware sampling and using third parties to audit data and models.
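One simple check an auditor might run, sketched here on hypothetical predictions and group labels, is to compare a model's selection rates across groups:

```python
# Hypothetical audit: compare a model's positive-prediction ("selection")
# rates across two groups. All data below is invented for illustration.

def selection_rate(preds, groups, target):
    """Share of positive predictions within one group."""
    sel = [p for p, g in zip(preds, groups) if g == target]
    return sum(sel) / len(sel)

preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(preds, groups, "A")  # 0.8
rate_b = selection_rate(preds, groups, "B")  # 0.2
print(f"demographic parity difference: {rate_a - rate_b:.2f}")  # 0.60
```

A large gap does not prove discrimination on its own, but it tells the auditor exactly where to look next in the data and the model.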


AI models are often trained on recent human decisions or behaviour, so leaders should consider whether the proxies used in the past are adequate, and how AI can help by surfacing long-standing biases that may have gone unnoticed. Diversifying the AI field itself is part of the answer: the people working in AI do not yet reflect society's diversity in gender, race, geography, class, and physical ability. A more diverse AI community will be better equipped to anticipate, spot, and review issues of unfair bias, and better able to engage the communities likely to be affected. Understanding this makes it easier to see how humans and machines can work together.


Finally, while significant progress has been made in recent years in technical and multidisciplinary research, more investment will be needed in bias research and in making relevant data available. A key part of the multidisciplinary approach will be to continually evaluate the role of AI in decision making as the field progresses and practical experience with real applications grows.

Ethics and AI: A Necessary Change

As machine learning models grow more capable, their behaviour also becomes harder for humans to understand.

But since the models are trained on data gathered by humans, they inherit human prejudices. Ensuring that AI is fair is a fundamental challenge of automation, and it is our ethical and legal obligation to make sure it acts fairly. Using machine learning to detect and combat unwanted bias, analysing input data thoroughly, building sufficiently diverse teams, and maintaining a shared sense of empathy for the users and targets of a given problem space will all help manage these human prejudices and work toward eliminating bias.
