About me

I’m a joint undergraduate and master’s student at Stanford University studying Computer Science with a focus on AI. I’m interested in foundation model training, efficiency, and alignment.

Currently, I’m working on pretraining large-scale foundation models in the Stanford SNAP group under Prof. Jure Leskovec. I’m also working in the CoCo Lab under Prof. Noah Goodman on optimizing reasoning depth in language models via Early Readout and Reinforcement Learning.

Previously, I trained reward models and worked on aligning text-to-image diffusion generation as a Research Engineering Intern at Adobe Firefly. Before that, I worked on language model reasoning, evaluation, and alignment as an Intern of Technical Staff at Cohere. I’ve also built high-impact engineering tools as a Software Engineering Intern at Amazon and Oracle.

Feel free to reach out at tadimeti [at] stanford [dot] edu.