About me
I am a Senior Researcher at Microsoft Research New England, based in Cambridge, MA. I work on generative AI models (diffusion and flow models, language models) and related topics at the intersection of machine learning, statistics, and AI for science. My work includes Adjoint Matching, a reward fine-tuning framework for flow models that has been extended to chemistry and robotics, and Energy-Based Fine-Tuning (EBFT), a language model fine-tuning algorithm based on matching feature moments, which outperforms SFT in both perplexity and downstream performance and matches RLVR in downstream performance.
I received my PhD in Computer Science from NYU, where I was advised by Joan Bruna.
During my PhD, I was a Visiting Researcher at Meta FAIR Labs for two years, and prior to that, I interned at Microsoft Research and IBM Research. I obtained a B.S. in Mathematics and a B.S. in Engineering Physics from the Polytechnic University of Catalonia (UPC).
My email address is cd2754 (at) nyu (dot) edu.
Join our weekly Generative Modeling & Sampling Seminar at MSR New England, with the option to attend in person. Fill out the form and check upcoming talks on the seminar website, and watch recorded talks on our YouTube channel.
Selected works
- Matching Features, Not Tokens: Energy-Based Fine-Tuning of Language Models
Samy Jelassi*, Mujin Kwun*, Rosie Zhao*, Yuanzhi Li, Nicolo Fusi, Yilun Du, Sham M. Kakade, Carles Domingo-Enrich* (*Equal contribution). arXiv preprint, March 2026. Project website and code available.
- Adjoint Matching: Fine-Tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control
Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, Ricky T. Q. Chen. ICLR 2025, Spotlight. Code: https://github.com/microsoft/soc-fine-tuning-sd
