Machine learning software helps agencies make important decisions, such as who gets a bank loan or which areas police should patrol. But if these systems contain biases, even small ones, they can cause real harm. A specific group of people might be underrepresented in a training dataset, for example, and as the machine learning (ML) model learns, that bias can compound and lead to unfair outcomes, such as loan denials or higher risk scores in prescription management systems.
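
To make the mechanism concrete, here is a minimal sketch, not drawn from any real lending system, of how underrepresentation in training data can translate into unequal outcomes. It builds a synthetic loan dataset in which one group is heavily underrepresented, so the model's single decision boundary ends up tuned to the majority group and creditworthy applicants from the minority group are denied far more often. The group names, repayment thresholds, and the use of scikit-learn's logistic regression are illustrative assumptions, not a description of any deployed system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, repay_threshold):
    """Synthetic applicants: one score feature and a true 'would repay' label."""
    score = rng.normal(size=n)
    repays = (score > repay_threshold).astype(int)
    return score.reshape(-1, 1), repays

def denial_rate_for_creditworthy(model, X, y):
    """Share of applicants who would repay but are still denied by the model."""
    predictions = model.predict(X)
    return 1.0 - predictions[y == 1].mean()

# Group B repays at lower scores than group A, but is heavily underrepresented
# in training, so the learned decision boundary is dominated by group A.
X_a, y_a = make_group(5000, repay_threshold=0.0)
X_b, y_b = make_group(100, repay_threshold=-1.0)

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh, equally sized samples from each group.
X_a_test, y_a_test = make_group(2000, repay_threshold=0.0)
X_b_test, y_b_test = make_group(2000, repay_threshold=-1.0)

print("Underrepresented training data:")
print(f"  Group A creditworthy denial rate: {denial_rate_for_creditworthy(model, X_a_test, y_a_test):.1%}")
print(f"  Group B creditworthy denial rate: {denial_rate_for_creditworthy(model, X_b_test, y_b_test):.1%}")

# Retraining with balanced representation shifts the boundary and narrows the gap.
X_b_full, y_b_full = make_group(5000, repay_threshold=-1.0)
balanced = LogisticRegression().fit(np.vstack([X_a, X_b_full]), np.concatenate([y_a, y_b_full]))

print("Balanced training data:")
print(f"  Group A creditworthy denial rate: {denial_rate_for_creditworthy(balanced, X_a_test, y_a_test):.1%}")
print(f"  Group B creditworthy denial rate: {denial_rate_for_creditworthy(balanced, X_b_test, y_b_test):.1%}")
```

In the sketch, adding more data from the underrepresented group shifts the learned boundary and shrinks the disparity in denial rates, illustrating how representation in the training set directly shapes who gets denied.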