Algorithmic Bias in Language Models

We now know that Language Models unintentionally learn and propagate biases inherent to our society through the data used to train them. Worse, runaway feedback loops amplify these biases, leading to Algorithmic Bias in AI systems that can reinforce injustices and inequities already prevalent in society.

Below are some ways to mitigate biases in Language Models and the systems that utilise them.

Debiasing Data

The data used to pre-train a Language Model must be representative of all groups. Oversampling underrepresented groups can be applied to create a more balanced training dataset.
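As a rough sketch of what oversampling might look like in practice, the Python snippet below duplicates examples from smaller groups until every group is equally represented; the helper names and the grouping function are illustrative assumptions, not part of the original article.

```python
import random

def oversample_to_balance(examples, group_of, seed=0):
    """Duplicate examples from underrepresented groups until every group
    appears as often as the largest one. `group_of(example)` returns the
    group label used for balancing (illustrative helper, not a real API)."""
    rng = random.Random(seed)
    by_group = {}
    for ex in examples:
        by_group.setdefault(group_of(ex), []).append(ex)

    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Randomly duplicate examples until this group reaches the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(balanced)
    return balanced
```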

Debiasing Word Embeddings

Going back to our gender-biased word embeddings example: the fact that “female” is more closely related to “homemaker” than to “computer programmer” reflects gender bias, whereas the close association between “female” and “queen” reflects an appropriate semantic relationship. Researchers used a separate algorithm to mitigate biased gender associations between word pairs before feeding the de-biased data into the Word2Vec embedding algorithm.
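The approach described here debiases the data before training. A closely related and widely cited alternative adjusts the trained vectors themselves: the "neutralize" step of Bolukbasi et al.'s hard-debiasing method projects the gender component out of the vectors of gender-neutral words. The sketch below illustrates only that idea; the variable names are ours, and the full method involves additional steps.

```python
import numpy as np

def gender_direction(embeddings, pairs=(("she", "he"), ("woman", "man"), ("her", "him"))):
    """Estimate a gender direction from definitional word pairs.
    `embeddings` maps words to NumPy vectors."""
    return np.mean([embeddings[a] - embeddings[b] for a, b in pairs], axis=0)

def neutralize(vector, direction):
    """Remove the component of a word vector that lies along the gender
    direction, so gender-neutral words such as "homemaker" and
    "computer programmer" no longer lean towards "she" or "he"."""
    g = direction / np.linalg.norm(direction)
    return vector - np.dot(vector, g) * g
```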

Understanding the Context of Data

In addition to debiasing data, it is important to interpret data within the context in which it was created and collected. Procedures should be put in place to increase dataset transparency and facilitate better communication between dataset creators and dataset consumers (e.g., those using datasets to train machine learning models). Frameworks such as Datasheets for Datasets provide the full context of a dataset: who collected it, how and why it was collected, and an account of any skews and correlations within it.
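One lightweight way to keep that context attached to the data is to store a structured "datasheet" alongside it. The sketch below is loosely modelled on the questions posed in Datasheets for Datasets; the field names and example values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Minimal dataset documentation, loosely following the questions in
    "Datasheets for Datasets". Field names here are illustrative only."""
    name: str
    creators: list
    motivation: str                 # why the dataset was created
    collection_process: str         # how, when, and from where the data was gathered
    known_skews: list = field(default_factory=list)          # known gaps, skews, correlations
    sensitive_attributes: list = field(default_factory=list)
    recommended_uses: str = ""
    discouraged_uses: str = ""

# Hypothetical example entry:
sheet = Datasheet(
    name="news-comments-2020",
    creators=["Example Research Group"],
    motivation="Study moderation of abusive language.",
    collection_process="Public comments scraped from three news sites, Jan-Jun 2020.",
    known_skews=["English only", "US-centric, urban readership"],
    sensitive_attributes=["inferred commenter gender"],
)
```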

Curated Datasets

Language Model behaviour can be improved with respect to specific behavioural values by fine-tuning on a small, curated dataset. This way, data scientists can better predict the model's behaviour, similar to running a controlled experiment in a laboratory.
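A minimal sketch of what such fine-tuning might look like with the Hugging Face transformers library is shown below; the model name ("gpt2"), the curated example texts, and the hyperparameters are placeholders rather than the setup of any particular study.

```python
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

class CuratedTextDataset(Dataset):
    """Wraps a small, hand-reviewed list of texts exemplifying the desired behaviour."""
    def __init__(self, texts, tokenizer, max_length=128):
        self.items = [
            tokenizer(t, truncation=True, max_length=max_length,
                      return_tensors="pt")["input_ids"].squeeze(0)
            for t in texts
        ]
    def __len__(self):
        return len(self.items)
    def __getitem__(self, idx):
        ids = self.items[idx]
        return {"input_ids": ids, "labels": ids.clone()}  # standard causal LM objective

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

curated_texts = [
    "Example of the tone and values the fine-tuned model should exhibit ...",
    # ... a small, carefully reviewed set of such examples
]

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="curated-finetune",
        num_train_epochs=3,
        per_device_train_batch_size=1,  # batch size 1 avoids padding in this tiny sketch
    ),
    train_dataset=CuratedTextDataset(curated_texts, tokenizer),
)
trainer.train()
```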


Fairness in AI

Beyond tackling bias directly in the data and/or the Language Model algorithm, large technology companies and other organisations are establishing responsible or ethical AI guidelines, in which minimising bias is part of their overarching AI objectives.

Examples: Google's Responsible AI, Microsoft's Responsible AI, Stanford Ethics Board.

Mitigating Bias

There are several ways we can tackle biases in Language Models:

  • Debiasing Data
  • Debiasing Algorithms
  • Debiasing Runaway Feedback Loops
  • Guidelines for Fair and Inclusive AI

Each of these approaches is covered in more detail throughout this piece.


Debiasing Algorithms

Interventions can be applied to a Language Model to directly offset its tendency to amplify bias. One way to do this is to specify how predictions generated by the model should be fed back into it, in order to constrain the model. Tools such as LIT (the Language Interpretability Tool) and AllenNLP can be used to explore model predictions, features, and data points and to look out for biases.

Fair Predictive Policing

Going back to our predictive policing example: instead of letting every newly observed crime instance be fed back into the algorithm, a sampling rule can be applied so that the more likely police are to be sent to a particular precinct, the less likely data observed from those assignments is to be incorporated into the algorithm. This helps prevent the model from making predictions that disproportionately fall on one particular group.
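The sketch below captures the spirit of that rule: an observed incident is fed back into the training history with a probability that shrinks as the model's dispatch probability for that precinct grows. The particular keep-probability used here (one minus the dispatch probability) is an illustrative choice, not the exact rule from the predictive-policing literature.

```python
import random

def maybe_incorporate(observation, dispatch_prob, history, rng=random):
    """Feed an observed incident back into the training history only sometimes.

    `dispatch_prob` is the probability the model assigned to sending police to
    the precinct where the incident was observed. The higher that probability,
    the less likely the observation is kept, which dampens the runaway
    feedback loop. Names and the keep-probability are illustrative."""
    keep_prob = 1.0 - dispatch_prob
    if rng.random() < keep_prob:
        history.append(observation)
    return history
```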

Evaluate for Fairness and Inclusion

In addition to constraining predictions made by Language Models, a model's performance on a task should be evaluated across different groups and sub-groups (e.g. males, females, white males, white females), to catch any biases in the model's behaviour that disproportionately affect one particular group.
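Such disaggregated evaluation can be as simple as computing the metric separately for each group, as sketched below; the helper functions (`predict`, `group_of`, `label_of`) are illustrative assumptions rather than part of any particular library.

```python
from collections import defaultdict

def accuracy_by_group(examples, predict, group_of, label_of):
    """Compute accuracy separately per (sub)group, so a model that performs
    well on average cannot hide poor performance on a particular group."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        group = group_of(ex)                    # e.g. "white female", "black male", ...
        total[group] += 1
        correct[group] += int(predict(ex) == label_of(ex))
    return {group: correct[group] / total[group] for group in total}
```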


Ethical Artificial Intelligence

These are only a few ways that biases in AI systems and machine learning models can be mitigated. Though these practices may generate less biased results or predictions, they will not fix the underlying social injustices and inequities. It is our responsibility to create AI systems that produce positive outcomes for all groups of people and our environment.
