AI is everywhere
From image recognition tools to online recommendations, AI automates repetitive and monotonous tasks to augment our lives. But this technology does not always act in our favour.
Algorithmic biases within AI systems that lead to unfair outcomes are a growing concern as the technology spreads across industries. AI has the potential to help humans make fairer decisions—but only if we deliberately build fairness into AI systems and the data that is fed into them.
Machine learning can be used to detect bias and spot anomalies. However, machines can't do this without humans: data scientists choose the training data that goes into the models, and wherever a human makes that choice, bias can creep in.
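One simple way to put numbers behind this is to audit outcome rates across demographic groups in the training data before any model is fit. The sketch below is a minimal, hypothetical example of a demographic-parity check (the group names, records, and labels are invented for illustration; real audits use richer metrics and dedicated tooling):

```python
# A minimal sketch of one way to surface bias in training data:
# compare positive-outcome rates across groups (demographic parity).
# All data here is hypothetical, purely for illustration.

def positive_rate(records, group):
    """Share of records in `group` with a positive label."""
    in_group = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in in_group) / len(in_group)

# Hypothetical hiring data: label 1 = "advanced to interview".
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

rate_a = positive_rate(records, "A")  # 0.75
rate_b = positive_rate(records, "B")  # 0.25
gap = abs(rate_a - rate_b)            # 0.50 -- a disparity worth investigating

print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, gap: {gap:.2f}")
```

A large gap does not prove discrimination on its own, but it flags exactly the kind of skew a human reviewer should investigate before the data trains a model.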
In recent years, bias in ML and AI has hit the headlines, from Amazon's hiring algorithm that favoured men to the housing-discrimination charges against Facebook over targeted ads. So how do we tackle such bias?