“Machine Unlearning for Generative AI”
Friday, June 6 at 12:45pm
MALA 5050
Pizza will be provided!
Abstract
As generative AI continues to advance, the ability to selectively remove information from well-trained models (known as unlearning) is becoming increasingly vital for regulatory compliance, ethical safeguards, and the mitigation of harmful content retention. This talk highlights recent advances in generative model unlearning, focusing on two key directions. First, we explore the advantages and potential of unlearning beyond standard model alignment. We demonstrate how unlearning can enhance the safety fine-tuning of vision-language models (VLMs), offering new opportunities to mitigate spurious correlations and adversarial vulnerabilities. Second, from an optimization perspective, we address the challenge of robust unlearning in large language models (LLMs), particularly under relearning attacks, where previously removed knowledge is recovered from a small subset of the forgotten data. Drawing on insights from adversarial robustness, we establish a novel connection between robust unlearning and sharpness-aware minimization (SAM), showing how smoothness optimization can improve resilience against such attacks. The talk concludes with open challenges and future directions for integrating unlearning into the AI lifecycle, ensuring long-term trustworthiness, safety, and compliance from data, model, and optimization perspectives.
Biography
Sijia Liu, Ph.D., is an assistant professor at Michigan State University. He received his Ph.D. (with the All-University Doctoral Prize) in electrical and computer engineering from Syracuse University, NY, in 2016. He was a postdoctoral research fellow at the University of Michigan, Ann Arbor, from 2016 to 2017, and a research staff member at the MIT-IBM Watson AI Lab from 2018 to 2020. Since 2021, he has also served as an affiliated professor at the MIT-IBM Watson AI Lab, IBM Research.
Dr. Liu’s research spans machine learning, optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory for trustworthy artificial intelligence (AI). These research themes provide a solid foundation for his long-term objective: making AI systems safe and scalable. He received the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’17) and the Best Paper Runner-Up Award at the 38th Conference on Uncertainty in Artificial Intelligence (UAI’22). He has published over 70 papers at top-tier machine learning and computer vision conferences, including NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AISTATS, and AAAI. He has also organized a series of Trustworthy and Scalable Machine Learning workshops and tutorials at ICML, NeurIPS, KDD, CVPR, and ICASSP.