Aditya Tadimeti

About me

I’m a master’s student in Computer Science at Stanford University, where I also earned my bachelor’s. My research interests include foundation model training, efficiency, inference, and alignment.

Currently, I’m a Machine Learning Research Intern at Liquid AI.

At Stanford, I worked with Prof. Noah Goodman in the Computation and Cognition lab on efficient distillation of reasoning traces (ICML 2025 workshop) and with Prof. Jure Leskovec in the SNAP group on pretraining large-scale foundation models. I’ve also trained reward models for diffusion alignment at Adobe Firefly, worked on language model reasoning at Cohere, and built engineering tools during internships at Amazon and Oracle.

Feel free to reach out at [my last name] [at] stanford [dot] edu.

Publications

Simple, Scalable Reasoning via Iterated Summarization

Vivek Vajipey*, Aditya Tadimeti*, Justin Shen*, Ben Prystawski, Michael Y. Li, Noah Goodman

ICML 2025 Workshop on Long Context Foundation Models

* Equal contribution