When: Tuesday, 3/4, 11 am – 12 pm. Where: Tyler Hall 055.
Abstract
Artificial intelligence is increasingly performing high-stakes work traditionally reserved for skilled professionals, and AI systems now often surpass human expert performance on specific tasks. Despite these advances, the “black box” (i.e., uninterpretable) nature of many machine learning algorithms poses significant challenges. Opaque models resist troubleshooting, cannot justify their decisions, and lack accountability; these limitations have slowed their adoption in critical workflows. In response, the European Union (through the General Data Protection Regulation) and the US Food & Drug Administration have published guidelines calling for interpretability and explainability in AI systems that have a major impact on individuals’ lives.
My research addresses this challenge by developing interpretable machine learning models for both general tasks and specific clinical decisions in mammography and neurology. Through novel neural network architectures, objective functions, and training regimes, I create models that achieve accuracy comparable to conventional black-box systems while remaining inherently interpretable. These models are constrained to provide faithful explanations for their predictions, functioning not merely as decision-makers but as decision aids that communicate their reasoning in human-understandable terms. This human-centered design enables expert users to scrutinize the model’s logic, appropriately calibrate their trust, and intervene when necessary. The result is a more collaborative human-AI partnership that maintains both high performance and meaningful human oversight.
Bio
Alina Jade Barnett is a postdoctoral research associate at Duke University, advised by Cynthia Rudin. She researches interpretable deep learning for computer vision with applications in clinical medicine. She initiated and now leads a collaboration between the Duke Departments of Computer Science and Radiology, the University of Maine, and Brigham and Women’s Hospital that develops interpretable machine learning models for mammography. Her work has appeared in NeurIPS (spotlight), New England Journal of Medicine AI, CVPR (the IEEE/CVF Conference on Computer Vision and Pattern Recognition), Radiology, and Nature Machine Intelligence. Outside of research, she is a classical musician, an active community volunteer, and a former varsity coxswain.