Position: PhD Candidate
Current Institution: Tel Aviv University
Abstract: Robust and Interpretable Machine Reasoning Over Text
My research aims to develop natural language understanding systems that can reason over text in a robust and explainable manner. Towards this goal, my research has focused on four targets. First, developing new methods for endowing models with additional reasoning skills, such as date comparison and simple arithmetic, which current models often fail to capture when trained with standard language modeling objectives. Second, improving the robustness of a model's reasoning skills to the variance of natural language; for example, if the model correctly answers the question "Who was born first, Aristotle or Caesar?", then we would also expect it to correctly answer "Who was born later, Aristotle or Caesar?". Third, devising new evaluation methods that comprehensively test the reasoning abilities of models and allow a fine-grained performance analysis that reveals the strengths and weaknesses of different models. Last, understanding how reasoning processes are encoded and executed by models, in order to facilitate interpretability methods that allow debugging model predictions and developing new model capabilities.
Mor Geva is a PhD candidate (direct track) at Tel Aviv University and a research intern at the Allen Institute for AI, advised by Prof. Jonathan Berant. Her research focuses on developing systems that can reason over text in a robust and interpretable manner. During her PhD, Mor interned at Google AI and Microsoft Media AI. She was recently awarded the Dan David Prize for graduate students in the field of AI and the Deutsch Prize for excellence in PhD studies. She is also a laureate of the Sephora Berrebi Scholarship in Computer Science.