Position: Postdoctoral Researcher
Current Institution: Tel-Aviv University
Abstract: Foundations of Explainable Machine Learning
Machine learning is increasingly used in high-stakes fields such as healthcare and transportation. In these fields, high-accuracy models alone are not sufficient: explanations of the models' predictions are also required. Developing the foundations of explainable machine learning is one of my main research objectives. I develop new explanation methods across several areas of machine learning: unsupervised, supervised, and reinforcement learning. In one research work, the method consisted of (i) constructing a clustering defined by the smallest decision tree, (ii) proving guarantees on its k-means value, and (iii) empirically showing that it outperforms competing methods on real datasets. As another example, we developed the first algorithm with provable guarantees on robustness, interpretability, and accuracy in the context of decision trees. Experiments confirm that our algorithm yields classifiers that are interpretable, robust, and highly accurate.
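The idea of a clustering defined by a small decision tree can be illustrated with a minimal sketch: search over axis-aligned thresholds and pick the single split whose two-leaf tree minimizes the k-means cost. This is an illustrative toy only, not the algorithm from the work above (which builds a full tree with provable approximation guarantees); all names and the synthetic data here are assumptions for the example.

```python
import numpy as np

def kmeans_cost(X, labels):
    """Sum of squared distances from each point to its cluster's mean."""
    cost = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        cost += ((pts - pts.mean(axis=0)) ** 2).sum()
    return cost

def best_single_split(X):
    """Exhaustively search axis-aligned thresholds; return the
    (feature, threshold, cost) of the two-leaf tree with lowest k-means cost."""
    best_feat, best_thr, best_cost = None, None, np.inf
    for j in range(X.shape[1]):
        # Exclude the maximum value so both leaves are non-empty.
        for t in np.unique(X[:, j])[:-1]:
            labels = (X[:, j] <= t).astype(int)
            c = kmeans_cost(X, labels)
            if c < best_cost:
                best_feat, best_thr, best_cost = j, t, c
    return best_feat, best_thr, best_cost

# Two well-separated synthetic blobs: a single threshold on one feature
# recovers the 2-means clustering, and the split itself *is* the explanation.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
feat, thr, cost = best_single_split(X)
print(f"split on feature {feat} at {thr:.2f}, k-means cost {cost:.3f}")
```

The resulting clustering is explainable by construction: membership is decided by one human-readable rule ("feature j <= t") rather than by distances to learned centers.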
Michal has recently joined Yishay Mansour's group at Tel-Aviv University. Prior to joining TAU, she was a postdoctoral fellow at the Qualcomm Institute of the University of California San Diego. Her interests lie in the foundations of AI. She studies the effect of different constraints (e.g., bounded memory and online irrevocable decisions) on learning. Recently she has focused on explainability constraints and the foundations of explainable machine learning. She holds a Ph.D. in neuroscience from the Hebrew University and an M.Sc. in computer science from Tel-Aviv University. During her Ph.D., Michal interned with the Machine Learning for Healthcare and Life Sciences group at IBM Research and the Foundations of Machine Learning group at Google. Michal received the Anita Borg scholarship from Google and the Hoffman scholarship from the Hebrew University.