
How to mitigate bias in artificial intelligence models

Posted August 1, 2021

Bias in artificial intelligence models is the subject of a growing field of research. Machine bias researchers study the ways that algorithms exhibit the biases present in their training datasets. When we expose human data to machine learning algorithms, we also expose our implicit biases. Building models that follow human logic and values while avoiding human bias is quite a challenge, but it is an important one.

In most cases, we don't build machine learning algorithms to be used in a vacuum; we build them to make predictions that drive real-world decisions. If your training data contains implicit bias, your model will learn, and may even amplify, those biases in its output.

For example, in the legal system, some courts are beginning to base their criminal sentencing recommendations on machine learning predictions, and it is still debated whether this is a fair practice. If the models are trained on historical records of past sentencing decisions without human oversight, they will learn past patterns of discrimination and apply them when making new predictions. We shouldn't train machine learning models on biased datasets that contain unfair outcomes.

The problem described above is an example of historical bias. In this article, we'll explain four more types of bias in artificial intelligence models and how to address them.

Sample bias in AI systems

Sample bias occurs when the training dataset doesn't accurately represent the intended real-world application. Imagine that you're building a computer vision model for autonomous vehicles, and you want the vehicle to be able to navigate the roads at any time of day or night. If you only use image and video training data captured during the daytime, you've introduced sample bias into your model. To mitigate sample bias, you'll need to build a training dataset that is both large enough and representative of every situation the model will face.
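A quick sanity check is to audit how evenly your capture conditions are represented before training. Here's a minimal sketch in Python; the metadata file name and the `time_of_day` field are hypothetical stand-ins for whatever labels your own dataset carries.

```python
# A minimal sketch: count how often each capture condition appears in a
# dataset's metadata and flag conditions that are underrepresented.
# "driving_metadata.json" and the "time_of_day" key are illustrative.
from collections import Counter
import json

def audit_condition_balance(metadata_path, condition_key="time_of_day"):
    with open(metadata_path) as f:
        records = json.load(f)  # expected: a list of dicts, one per image

    counts = Counter(record.get(condition_key, "unknown") for record in records)
    total = sum(counts.values())

    for condition, count in counts.most_common():
        share = count / total
        flag = "  <-- underrepresented" if share < 0.10 else ""
        print(f"{condition:>10}: {count:6d} images ({share:5.1%}){flag}")

    return counts

audit_condition_balance("driving_metadata.json")
```

If "night" accounts for only a sliver of the records, that's your cue to collect more nighttime footage before training rather than after deployment.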

Exclusion bias in AI systems

Exclusion bias in artificial intelligence occurs when you leave features out of the training dataset, often because someone mistakenly assumed they were irrelevant. It is important to perform sufficient analysis before discarding a feature, rather than relying on gut feeling.

Machine learning can help prevent discrimination in processes such as employee recruitment or college admissions, provided that the model itself is not biased. Colleges often sort applicants by standardized test scores, and taking ZIP codes into account may seem discriminatory at first. But the quality of preparatory resources available in a given area affects test scores, so excluding ZIP codes removes information the model could use to account for that unequal access and can actually increase bias.
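One simple form of that analysis is to measure what a candidate feature actually contributes before dropping it. The sketch below uses scikit-learn to compare cross-validated accuracy with and without a feature; the file and column names (`applicants.csv`, `zip_code`, `test_score`, `admitted`) are hypothetical, and predictive value is only part of the picture. A full review should also examine outcomes across groups.

```python
# A minimal sketch: compare cross-validated performance with and without a
# candidate feature before deciding to exclude it from training.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicants.csv")  # hypothetical dataset
target = df["admitted"]

# One-hot encode the categorical ZIP code column for the "with" variant.
full_features = pd.get_dummies(df[["test_score", "zip_code"]], columns=["zip_code"])
reduced_features = df[["test_score"]]

model = GradientBoostingClassifier(random_state=0)
score_with = cross_val_score(model, full_features, target, cv=5).mean()
score_without = cross_val_score(model, reduced_features, target, cv=5).mean()

print(f"Accuracy with zip_code:    {score_with:.3f}")
print(f"Accuracy without zip_code: {score_without:.3f}")
# A large drop suggests the feature carries real signal; investigate it
# further instead of discarding it on instinct.
```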


Observer bias in AI systems

Machine learning algorithms are only as good as their developers. Observer bias, also called experimenter bias, is the tendency to see what we expect or want to see. It arises when a data scientist approaches a machine learning project with conscious or unconscious personal prejudices. Common examples of personal prejudice include racism, sexism, homophobia, religious prejudice, ageism and nationalism.

Most people carry some underlying personal prejudices, but observer bias can be mitigated through awareness. You and your machine learning team should be well-trained on AI bias. Depending on the project at hand, screening participants for potential bias and establishing clear guidelines are also effective measures.

Systemic value distortion in AI systems

Systemic value distortion occurs when there's an issue with the device used to observe or measure data, and it tends to skew the data in one direction. For example, you might be using an image dataset of people's faces to train a facial recognition model. If all of the photos were taken in a room with poor lighting, the dataset will be skewed and the model will inherit that distortion. You can prevent systemic value distortion by using multiple measuring devices and by working with experienced data scientists who will recognize the red flags of distorted data.
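One way to catch this kind of skew is to compare a simple measurement, such as average image brightness, across capture sources before training. Below is a minimal sketch that assumes a hypothetical folder layout with one directory per camera or room; the paths are illustrative.

```python
# A minimal sketch: compare mean brightness across images grouped by capture
# source to spot a device or room that systematically skews the data.
# The "face_photos/<source>/*.jpg" layout is an assumption for illustration.
from pathlib import Path
import numpy as np
from PIL import Image

def mean_brightness(image_path):
    """Average grayscale pixel value (0 = black, 255 = white)."""
    with Image.open(image_path) as img:
        return float(np.asarray(img.convert("L")).mean())

def brightness_by_source(root="face_photos"):
    for source_dir in sorted(Path(root).iterdir()):
        if not source_dir.is_dir():
            continue
        values = [mean_brightness(p) for p in source_dir.glob("*.jpg")]
        if values:
            # A source whose mean sits far below the others suggests poor lighting.
            print(f"{source_dir.name}: mean={np.mean(values):.1f}, "
                  f"std={np.std(values):.1f}, n={len(values)}")

brightness_by_source()
```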

It can be difficult to tell whether you've successfully removed all traces of bias from your training dataset before feeding it to your model. That's where TELUS Digital comes in. We have extensive experience providing AI training data to the world's leading tech companies. If you need a clean, unbiased dataset for machine learning, we're your one-stop shop for data collection, annotation and cleansing.

