I am a PhD student at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Research

I research neuro-symbolic Transformers, and I am privileged to work under the supervision of Nada Amin.

My most recent work, co-authored with Brandon Amos, Steve Brunton, and Shuran Song, was published at the Differentiable Almost Everything Workshop at ICML 2023.

Current interests

My primary research goal is a language model architecture capable of sample-efficient, continual learning. Applications I am interested in include natural language processing, robotics, program synthesis, and biology.

Previous work

My ICML paper began as my MS thesis. Before that, I researched cognitive science in the Kriegeskorte Visual Inference Lab, and computer vision and NLP at Philips Research North America. For additional details on my research experience, please see my CV.