LLMs, Bias, and the Implications for Equitable Healthcare

Speakers

Irene Y. Chen, Assistant Professor, UC Berkeley and UCSF in Computational Precision Health (CPH), Electrical Engineering and Computer Science (EECS), and Berkeley AI Research (BAIR)

Dr. Chen is an assistant professor at UC Berkeley and UCSF. She studies how to make machine learning systems for healthcare more robust, impactful, and equitable, with a focus on women's health. Her work has been published in machine learning conferences (NeurIPS, AAAI) and medical journals (Nature Medicine, Lancet Digital Health), and has been covered by media outlets including MIT Tech Review, NPR/WGBH, and Stat News. She has been named a Rising Star in EECS, Machine Learning, and Data Science. Irene received her PhD in Electrical Engineering and Computer Science from MIT and her joint AB/SM in Applied Math from Harvard.

Session Description
Large language models (LLMs) have demonstrated impressive capabilities but also exhibit concerning biases that can perpetuate harm. This is especially problematic in high-stakes domains like healthcare, where bias can severely affect equitable access to and quality of care. In this talk, I will discuss two recent projects examining the implications of LLMs for equitable healthcare. First, I will present work designing guiding principles for LLMs in maternal health through participatory design with healthcare workers, women, and birthing people. Next, I will show how LLMs can provide insight into contraceptive medication switching using clinical notes from UCSF. The talk concludes with a discussion of the implications of LLMs within the existing landscape of bias in medical AI.