My research
My research is dedicated to understanding the inner workings of artificial neural networks, with the goal of deriving more principled and controlled ML methods that enable the safe and effective deployment of AI models. In my work, I seek suitable frameworks (e.g., from information theory or statistical physics) that can capture the essence of an observed effect in AI models, but I prioritize the connection to practical real-world setups over theoretical rigor.
My motivation originates from my early attempts to apply ML as a computational method for problems in condensed-matter physics. That project revealed to me that ML lacks the essential characteristics of a scientific method, such as a specified range of applicability, known failure modes, and reliable error estimates. I also learned that these limitations of ML extend beyond scientific research. This led me to study the adversarial examples phenomenon, model robustness, generalization, and neural network sparsity.
My background
My background is in Theoretical Physics, with a focus on condensed matter and computational methods for quantum many-body systems. I did my PhD at the University of Waterloo and the Perimeter Institute for Theoretical Physics in Canada, working with Roger Melko at the PIQuIL (🗣️pickle; or formally, the Perimeter Institute Quantum Intelligence Lab). I also spent some time at Google doing research on sparse neural networks with Guy Gur-Ari and Behnam Neyshabur on the Blueshift team. Before my PhD, I participated in the Perimeter Scholars International program (also known as the “theoretical physics bootcamp”). And before-before that, I studied Physics (MSc) and Biophysics (BSc) at Goethe University in Frankfurt, Germany.
Geographically, my roots are in Moldova, but also in Germany, where I lived from my teenage years until the “physics bootcamp” lured me to North America.