Rachit Bansal

I'm a PhD student at Harvard University. Previously, I was a pre-doctoral researcher at Google Research. I work on making language models more capable and efficient.

Publications

LLM Augmented LLMs: Expanding Capabilities through Composition

Rachit Bansal, Bidisha Samanta, Siddharth Dalmia, Nitish Gupta, Shikhar Vashishth, Sriram Ganapathy, Abhishek Bapna, Prateek Jain, Partha Talukdar

International Conference on Learning Representations (ICLR) 2024

Measures of Information Reflect Memorization Patterns

Rachit Bansal, Danish Pruthi, Yonatan Belinkov

Neural Information Processing Systems (NeurIPS) 2022

Linear Connectivity Reveals Generalization Strategies

Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra

International Conference on Learning Representations (ICLR) 2023

CoSe-Co: Text Conditioned Generative CommonSense Contextualizer

Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy

North American Chapter of the Association for Computational Linguistics (NAACL) 2022

LM-CORE: Language Models with Contextually Relevant External Knowledge

Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy

North American Chapter of the Association for Computational Linguistics (NAACL) 2022

How Low is Too Low? A Computational Perspective on Extremely Low-Resource Languages

Rachit Bansal, Himanshu Choudhary, Ravneet Punia, Niko Schenk, Jacob L. Dahl, Émilie Pagé-Perron

Annual Meeting of the Association for Computational Linguistics (ACL) 2021