Anwoy Chatterjee

I'm a second-year PhD student at IIT Delhi, advised by Prof. Tanmoy Chakraborty. I aspire to make AI systems robust, reliable, safe, and transparent. My current work focuses on large language models, aiming to make them more robust, reliable, and interpretable. My research is supported by a Google PhD Fellowship.

I also work closely with Dr. Sumit Bhatia and have interned at the Media and Data Science Research (MDSR) Lab of Adobe Inc.

Before starting my PhD, I earned a Bachelor's degree in Computer Science from IIT (BHU), Varanasi, where I worked on Computer Vision and Graph Neural Networks with Rajeev Srivastava and Pratik Chattopadhyay.

Email  |  CV  |  DBLP  |  Google Scholar  |  LinkedIn  |  X  |  Bluesky  |  GitHub


Recent Updates

  • November 2024: Delivered an invited talk at Google DeepMind, Bangalore.
  • November 2024: Presented our work at Amazon Research Days 2024.
  • September 2024: Our paper on a novel prompt sensitivity index was accepted to EMNLP (Findings) 2024. This work was done during my internship at Adobe.
  • August 2024: Awarded the Google PhD Fellowship 2024 in NLP.
  • May 2024: Our paper on cross-task in-context learning was accepted to ACL 2024.
  • January 2024: Attended Google Research Week 2024.

Education

Doctor of Philosophy in Artificial Intelligence
Indian Institute of Technology, Delhi
2022 – Present
Bachelor of Technology in Computer Science and Engineering
Indian Institute of Technology (BHU), Varanasi
2018 – 2022

Experience

Research Intern
Media and Data Science Research (MDSR) Lab, Adobe Inc.
Mentor: Dr. Sumit Bhatia
January 2025 – July 2025
Research Intern
Media and Data Science Research (MDSR) Lab, Adobe Inc.
Mentor: Dr. Sumit Bhatia
May 2024 – August 2024

Research

I'm interested in natural language processing, deep learning, generative AI, and model interpretability. My current research revolves around understanding the workings of large language models, making them more robust and reliable, and enabling their effective in-context adaptation in low-resource settings.

Publications

POSIX: A Prompt Sensitivity Index For Large Language Models
Anwoy Chatterjee*, H S V N S Kowndinya Renduchintala*, Sumit Bhatia, Tanmoy Chakraborty
EMNLP (Findings), 2024
paper  /  code  /  video  /  bibtex

We propose POSIX, a novel PrOmpt Sensitivity IndeX, as a reliable measure of prompt sensitivity, offering a more comprehensive evaluation of LLM performance.

Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks
Anwoy Chatterjee*, Eshaan Tanwar*, Subhabrata Dutta, Tanmoy Chakraborty
ACL (Main), 2024
paper  /  code  /  video  /  bibtex

In this paper, we offer a first-of-its-kind exploration of LLMs' ability to solve novel tasks using contextual signals from in-context examples of different tasks.

Miscellanea

Recorded Lectures/Tutorials/Talks

Panel discussion on the current state of LLM research in academia and industry.
Lecture on "Interpretability: Demystifying the Black-Box LMs" as part of ELL881/AIL821 at IIT Delhi.
Virtual presentation of our EMNLP'24 paper - "POSIX: A Prompt Sensitivity Index For Large Language Models".
Virtual presentation of our ACL'24 paper - "Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks".


Teaching

Teaching Assistant, Introduction to Large Language Models, NPTEL 2025
Graduate Teaching Assistant, ELL884 (Deep Learning for NLP), Spring 2025
Graduate Teaching Assistant, ELL881/AIL821 (Large Language Models: Introduction and Recent Advances), Fall 2024
Graduate Teaching Assistant, ELL880 (Social Network Analysis), Fall 2023

Talks/Posters

Invited talk at Google DeepMind, Bangalore, India, Nov 22, 2024.
Poster presentation at Amazon Research Days 2024, Bangalore, India.
Poster presentation at ACL 2024, Bangkok, Thailand.

Design and source code from Jon Barron's website