Major
Computer Science
Project title
“Interactions Between Bias Models and Fairness Definitions”
Project description
Jacob researched bias and fairness in machine learning models, focusing on how outside factors, mainly societal ones, shape a model's training data. When societal biases are present in that data, a model learns and reproduces them in applications such as risk assessment or loan allocation. He trained models with different fairness-aware classifiers, then observed and analyzed the results. The goal of this research was to identify which classifier techniques best mitigate bias in machine learning models.
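A minimal sketch of the kind of measurement involved, not Jacob's actual code: train a classifier on synthetic data that encodes a group-correlated (societal) bias, then quantify the resulting unfairness with the demographic parity difference, the gap in positive-prediction rates between groups. The data, features, and threshold here are all invented for illustration.

```python
# Hypothetical illustration: a biased dataset produces a biased classifier,
# and a fairness metric (demographic parity difference) makes that visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                        # sensitive attribute (0 or 1)
x = rng.normal(size=(n, 2)) + group[:, None] * 0.8   # feature shift encodes societal bias
y = (x[:, 0] + x[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

X = np.column_stack([x, group])                      # model can see the sensitive attribute
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

# Demographic parity difference: gap in positive-prediction rates between groups.
rates = [pred[group == g].mean() for g in (0, 1)]
dpd = abs(rates[0] - rates[1])
print(f"positive rate by group: {rates}, demographic parity difference: {dpd:.2f}")
```

Fairness-aware training techniques, such as those Jacob compared, aim to drive a metric like this toward zero while preserving accuracy; libraries such as Fairlearn and AIF360 implement several fairness definitions and mitigation methods.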
Faculty Mentor
Dr. Sarah Brown, Department of Computer Science and Statistics
“In the data used to create machine learning models, societal biases are often present. When using biased data, the resulting model used for any sort of prediction will have those underlying biases. I wanted to research this because I believe this is one of the largest areas of machine learning that makes people skeptical of its effectiveness. It is also important for the future of equality of all groups of people as the use of machine learning continues to grow.”