Jieyu Zhao, UCLA: “Understanding and Intervening on Societal Biases in Natural Language Processing”

Position: PhD Candidate

Current Institution: UCLA

Abstract:

Over the past few years, Natural Language Processing (NLP) technologies have become truly ubiquitous in real-world applications, including education, healthcare, policy, and social welfare. Despite many successes, there is a growing awareness that these techniques can adversely impact people's lives, for example by capturing and generalizing the societal biases in the data they are trained on. Automatic resume filtering systems, for instance, may implicitly select candidates based on their gender or race, perpetuating and even amplifying disparities in society. This is exacerbated by the black-box nature of NLP tools, which makes it difficult to detect and mitigate those biases. My research plan is to build interpretation tools to understand biases in NLP models and to provide intervention methods to reduce those biases. The broader impact of my research aligns well with the goal of the Rising Stars Workshop in recognizing the value of diversity and of under-represented groups.


Jieyu is a PhD candidate in the Department of Computer Science at UCLA, advised by Prof. Kai-Wei Chang. Her research interest lies in the fairness of ML and NLP models. Her previous paper was awarded the EMNLP Best Long Paper Award (2017). She also received the 2020 Microsoft PhD Fellowship. More details can be found at https://jyzhao.net/