Allyson Ettinger


About

I am a computational linguist, addressing research problems on language in humans and machines. I have training in linguistics, natural language processing (NLP), and cognitive neuroscience of language, which I use to bridge research in these domains.

In my research I do a combination of NLP and computational psycholinguistic modeling. My cross-disciplinary training allows me to bring theoretical and analytical insights from linguistics and cognitive neuroscience to the development of NLP systems, and to bring computational tools and methods from NLP to the modeling of human language processing.

I am committed to building closer connections between linguistics, NLP, and cognitive neuroscience of language so that advances in each can benefit the others. To this end, I collaborate broadly across these domains, teach interdisciplinary courses for students in these fields, and have an active organizational role in interdisciplinary communities such as the Society for Computation in Linguistics and the Workshop on Building Linguistically Generalizable NLP Systems that bring together linguists and computer scientists.

I am now at AI2! I am no longer at the University of Chicago, so please note that I will not be taking new PhD advisees.


Bio

I did my PhD work at the University of Maryland with Colin Phillips and Philip Resnik, collaborating along the way with numerous faculty and students across the linguistics and computer science departments. I was an active member of the Maryland Language Science Center.

Starting with a background in linguistics and cognitive neuroscience, I undertook a rapid computational conversion when I became fascinated by the problem of language processing in machines, and how this enterprise can connect with research on language in humans.

Before joining the community at the University of Maryland, I spent two years conducting neurolinguistic MEG research with Alec Marantz in the Neuroscience of Language Lab at NYU.

I have lived in China twice, and am fluent in Mandarin. My second time in China consisted of a year of graduate-level study, in Mandarin, of topics in political science, law, and cultural studies at the Hopkins-Nanjing Center.

I graduated from Brandeis University in 2010 with bachelor's degrees in linguistics and psychology.

News

May 2023. We received a Best Paper Award at EACL 2023 for our COMPS paper -- congratulations, Kanishka!

May 2023. Paper by Kanishka, "COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models", to be presented at EACL 2023!

March 2023. Gave a talk at the Transdisciplinary Institute in Applied Data Science (TRIADS) seminar series at Washington University in St. Louis.

December 2022. Gave a keynote talk at CoNLL (virtually) in Abu Dhabi, UAE.

December 2022. Paper by Jiaxuan on "Heuristic interpretation as rational inference: A computational model of the N400 and P600 in language processing", accepted to Cognition!

December 2022. Paper by Jiaxuan and Lang on counterfactual reasoning in pre-trained LMs, presented at the Workshop on neuro Causal and Symbolic AI (nCSI) at NeurIPS.

November 2022. Gave a talk at the ILFC monthly online seminar.

October 2022. Paper by Sanghee and Lang presented at COLING 2022, on knowledge of dialogue response dynamics in pre-trained LMs!

September 2022. Gave a talk at UT Austin.

September 2022. Gave a talk at the McGill Linguistics Colloquium.

August 2022. Gave an invited talk at the UC Irvine Cognitive Modeling Summer School.

July 2022. Paper presentation by Kanishka at CogSci 2022, on property induction in neural LMs!

July 2022. Gave a keynote talk at the *SEM conference.

June 2022. Invited speaker and panelist, "The Challenge of Compositionality for AI" workshop.

June 2022. Gave an invited talk at Microsoft Cognitive Services Research Group Distinguished Talk Series.

May 2022. Gave a keynote talk at the DeeLIO workshop at ACL.

April 2022. Invited talks at UMD CLIP lab, Notre Dame NLP seminar, and UPenn CLunch.

March 2022. Gave an invited presentation at the Mini-workshop on Linguistic Ambiguity and Deep Learning.

February 2022. Invited talks at the Stanford NLP Seminar and the CMU brAIn Seminar.

November 2021. Three paper presentations coming up at EMNLP 2021: 1) Lalchand on testing robustness of meaning representations in pre-trained LMs in the main conference, 2) Lalchand and Yan on pragmatic competence in pre-trained LMs at CoNLL, and 3) Qinxuan on encoding of syntactic anomaly information in pre-trained sentence embeddings at BlackBoxNLP.

September 2021. Invited talks at the OSU Department of Linguistics Colloquium and the van Schijndel research group at Cornell.

August 2021. Paper presentation by Lang on the impact of fine-tuning on semantic composition in transformers, published in Findings of ACL and presented at the Rep4NLP workshop.

July 2021. Paper presentation by Kanishka on whether language models learn typicality, presented at CogSci 2021.

May 2021. Gave a talk for the UChicago MACSS Computational Social Science Workshop.

May 2021. Gave an interview with the TWIML podcast.

May 2021. Served as a panelist for the ICLR Brain2AI workshop panel, "How can findings about the brain improve AI systems?".

April 2021. Gave a talk for the NYU NLP/Text-as-Data speaker series.

April 2021. PhD student Lang Yu has successfully defended his dissertation, Analyzing and Improving Compositionality in Neural Language Models!

February 2021. SCiL 2021 (Meeting of the Society for Computation in Linguistics) was a success! Thank you to my fellow organizers, and to PC members, authors, and the many who attended the virtual conference!

February 2021. Gave a talk for the English Literature and Language Department of Dongguk University.

November 2020. Three papers at EMNLP 2020 (Conference on Empirical Methods in Natural Language Processing): 1) assessing phrase representation and composition in transformers, 2) applying semantic priming to examine lexical sensitivity in BERT (Findings/BlackBoxNLP), and 3) long document coreference resolution.

October 2020. Gave a talk at the MIT CompLang discussion group.

September 2020. Gave a talk at the MIT Computational Psycholinguistics Lab.

September 2020. Gave a talk for the Georgia Tech Workshop on Language, Technology, & Society.

July 2020. Three papers at ACL 2020 (Association for Computational Linguistics annual meeting): 1) probing contextual embeddings, 2) diagnostics for BERT (TACL paper), and 3) tracking entities with memory-augmented neural networks.

May 2020. Gave a talk in the Northwestern University Linguistics Department colloquium series.

January 2020. Paper now out in TACL (Transactions of the Association for Computational Linguistics): What BERT is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models.