Position: PhD Candidate
Current Institution: University of Virginia
Abstract: Fostering Trustworthiness in Machine Learning Algorithms
Today we are witnessing an explosion of work that develops and applies machine learning algorithms to build intelligent learning systems (e.g., self-driving cars, intelligent recommendation systems, and clinical decision support systems). However, traditional machine learning algorithms mainly focus on optimizing accuracy and fail to consider trustworthiness in their design. Trustworthiness reflects the degree of the user's confidence that the deployed machine learning system will operate as the user expects in the face of various circumstances, e.g., human errors, system faults, adversarial attacks, poisoning attacks, and threats related to environmental disturbances. The essential characteristics at the core of trustworthiness include model transparency, robustness against malicious attacks, and privacy preservation. Without fully studying the trustworthiness of deployed real-world intelligent learning systems, we will face a variety of devastating societal and environmental consequences. For example, motivated attackers may actively manipulate the perception systems of autonomous vehicles, which can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to a large-scale interruption of transportation services that rely on autonomous vehicles. In my research, I take steps to study and address trustworthiness issues in the design of machine learning algorithms. Specifically, I propose several model interpretation methods that provide insights into how machine learning models work and hence help increase trust in model decisions. Then, I investigate the security vulnerabilities of some machine learning algorithms to malicious attacks by designing both attack and defense strategies. In addition, I design several privacy-preserving mechanisms that allow information providers to privately share their data for training machine learning systems.
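To make the first thread (model interpretation) concrete, the sketch below uses permutation feature importance, one standard, model-agnostic interpretation technique; it is an illustrative toy example, not necessarily one of the methods proposed in the talk. The data, the linear model, and all numbers are invented for this sketch.

```python
import numpy as np

# Toy setup: three features, where feature 1 is irrelevant by construction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, 0.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Fit a simple least-squares model to interpret.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: shuffle one feature at a time and measure
# how much the model's error grows when that feature is scrambled.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importance.append(mse(Xp, y, w) - baseline)
```

A feature whose permutation barely changes the error (here, feature 1) contributes little to the model's decisions, which is the kind of insight that can increase trust in those decisions.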
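For the second thread, a minimal sketch of an attack strategy is the classic fast-gradient-sign idea: perturb the input in the direction that increases the model's loss. The logistic model, weights, input, and the deliberately large perturbation budget below are all hypothetical, chosen only so the prediction flip is visible; this is a generic illustration, not the attack designs from the talk.

```python
import numpy as np

# Hypothetical toy logistic-regression "victim" model.
w = np.array([1.0, -2.0, 0.5])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

x = np.array([0.5, -0.5, 1.0])   # clean input, confidently predicted positive
y = 1.0                          # true label

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# Fast-gradient-sign step: move each feature by eps in the loss-increasing
# direction. eps is unrealistically large here, purely for illustration.
eps = 1.0
x_adv = x + eps * np.sign(grad_x)
```

After the perturbation, `predict(x_adv)` drops below 0.5, i.e. the classifier's decision flips even though the input changed by a bounded amount per feature. Defenses typically try to make models robust to exactly this kind of bounded perturbation.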
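For the third thread, one standard way to let data providers share information privately is the Laplace mechanism from differential privacy: add calibrated noise to a released statistic. The bounded data, the `private_mean` helper, and the epsilon value below are assumptions for this sketch, not the specific mechanisms designed in the talk.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.uniform(0.0, 1.0, size=1000)  # each record assumed bounded in [0, 1]

def private_mean(data, epsilon, lo=0.0, hi=1.0, rng=rng):
    """Release an epsilon-differentially-private estimate of the mean.

    Changing any single record moves the true mean by at most
    (hi - lo) / n, so Laplace noise with scale sensitivity / epsilon
    suffices for epsilon-DP.
    """
    n = len(data)
    sensitivity = (hi - lo) / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.mean(data)) + noise

est = private_mean(data, epsilon=1.0)
```

With many records, the noise scale shrinks as 1/n, so the released mean stays useful for training downstream models while each individual contribution remains protected.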
Mengdi Huai is a Ph.D. candidate in the Department of Computer Science at the University of Virginia, advised by Professor Aidong Zhang. Her research interests lie in the general area of data mining and machine learning, with an emphasis on model transparency, security, privacy, and algorithm design. Mengdi's research has been published in top venues such as KDD, AAAI, IJCAI, NeurIPS, WWW, ICDM, SDM, and TKDD. She has received multiple awards, including the John A. Stankovic Research Award, the Sture G. Olsson Fellowship in Engineering at the University of Virginia, and the Best Paper Runner-up Award at KDD 2020.