This document discusses testing AI systems for bias. It begins by defining bias and explaining how it can arise in machine learning models through choices made in training data and in how success is defined. The document then gives examples of organizational values, such as equality, customer satisfaction, and environmental protection, that AI systems could be designed to reflect. It suggests testing systems by defining hypotheses based on these values, collecting proxy data to measure them, establishing test scenarios, and comparing results against the data and goals to surface unintended biases. The goal is for ML models to make decisions aligned with an organization's values rather than with business metrics alone.
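
As a rough illustration of the "compare results to data and goals" step, the sketch below checks whether a model's positive-outcome rate differs across groups by more than an agreed tolerance (a demographic-parity gap). All names, the sample data, and the 0.05 tolerance are illustrative assumptions, not taken from the document; the tolerance itself is exactly the kind of value-based hypothesis an organization would define up front.

```python
# Minimal sketch of a value-based bias check: compare per-group positive-
# outcome rates and report whether the gap stays within a chosen tolerance.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rate between groups.

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical test scenario: approval decisions for two groups "a" and "b".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, grps)
# Hypothesis (assumed): approval rates should differ by at most 5 points.
print(f"parity gap = {gap:.2f}; "
      f"{'within' if gap <= 0.05 else 'exceeds'} the 0.05 tolerance")
```

A real test suite would repeat this comparison across each value-derived hypothesis and proxy metric, flagging any scenario where outcomes drift from the organization's stated goals.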