When: Thursday, February 12, 12:00 PM
Where: Bliss 190
Abstract
In practice, there is rarely a single “golden” answer. In this talk, I argue that trustworthy ML should be set-valued: instead of validating a single model, we should reason over the Rashomon set (the set of models that meet a performance criterion). I’ll present a perspective that treats multiplicity not only as uncertainty to be acknowledged, but as a resource for personalized alignment and control. We can select or constrain models within the set to satisfy stakeholder preferences and operational requirements (e.g., interpretability, fairness, privacy, stability) while preserving task performance.
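The selection idea in the abstract can be sketched in a few lines. This is a toy illustration under assumed details (the candidate models, the tolerance `eps`, and the complexity proxy are all hypothetical), not the speaker's actual method: keep every model whose accuracy is within `eps` of the best, then choose within that Rashomon set by a secondary criterion such as simplicity.

```python
# Toy sketch of Rashomon-set selection (hypothetical setup):
# keep models within eps of the best accuracy, then pick the
# simplest one in the set.

# Toy 1-D dataset: (feature, binary label) pairs.
data = [(0.1, 0), (0.4, 0), (0.35, 0), (0.6, 1), (0.8, 1), (0.55, 1)]

# Candidate models: (name, predict function, complexity proxy).
candidates = [
    ("thr_0.5",  lambda x: int(x > 0.5),       1),
    ("thr_0.45", lambda x: int(x > 0.45),      1),
    ("thr_0.3",  lambda x: int(x > 0.3),       1),
    ("two_rules", lambda x: int(0.5 < x < 0.9), 2),
]

def accuracy(predict):
    return sum(predict(x) == y for x, y in data) / len(data)

scores = {name: accuracy(f) for name, f, _ in candidates}
best = max(scores.values())
eps = 0.01  # performance tolerance defining the Rashomon set

# Rashomon set: all models whose score is within eps of the best.
rashomon = [(n, c) for n, f, c in candidates if scores[n] >= best - eps]

# Secondary criterion: prefer the simplest model in the set.
chosen = min(rashomon, key=lambda nc: nc[1])[0]
print(sorted(n for n, _ in rashomon), chosen)
```

The same template accommodates other secondary criteria from the abstract (fairness, privacy, stability): only the `key` used in the final selection changes, while task performance is guaranteed by membership in the set.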
Bio
Lesia Semenova is an Assistant Professor of Computer Science at Rutgers University. She works on safe and interpretable AI, developing multiplicity-aware foundations, tools, and pipelines for trustworthy decision support. Previously, she was a postdoctoral researcher at Microsoft Research (NYC) and earned her PhD in Computer Science from Duke University. She was selected as a 2024 Rising Star in Computational and Data Science, and student teams she coached have twice won first place in the ASA Data Challenge Expo.
