My research consists of two primary threads:
Assessing linguistic capacities of NLP models: controlled tests of meaning competence
In the first thread of research I design controlled tests of the linguistic capacities of state-of-the-art NLP models, with a particular focus on assessing models' compositional meaning capabilities and teasing these capabilities apart from shallower heuristic strategies. This work draws significantly on methods and insights from linguistics and cognitive science.
Computational psycholinguistic modeling: meaning extraction and prediction
In the second thread I use computational psycholinguistic modeling methods, including tools from NLP, to shed light on mechanisms of real-time language processing in humans. This work focuses on understanding the interaction between systematic compositional semantic mechanisms and probabilistic predictive processes during real-time comprehension of linguistic input.
Publications
Misra, K., Taylor Rayz, J., Ettinger, A. (2023). COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models. Proceedings of The 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Recipient of Best Paper Award. [PDF]
Li, J., Ettinger, A. (2023). Heuristic interpretation as rational inference: A computational model of the N400 and P600 in language processing. Cognition 233. [PDF]
Li, J., Yu, L., Ettinger, A. (2022). Counterfactual reasoning: Do language models need world knowledge for causal understanding? Proceedings of the Workshop on neuro Causal and Symbolic AI (nCSI) at NeurIPS. [PDF]
Kim, S., Yu, L., Ettinger, A. (2022). “No, they did not”: Dialogue response dynamics in pre-trained language models. Proceedings of the 29th International Conference on Computational Linguistics (COLING). [PDF]
Misra, K., Taylor Rayz, J., Ettinger, A. (2022). A Property Induction Framework for Neural Language Models. Proceedings of the 44th Annual Conference of the Cognitive Science Society. [PDF]
Pandia, L., Ettinger, A. (2021). Sorting through the noise: Testing robustness of information processing in pre-trained language models. Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing. [PDF]
Pandia, L., Cong, Y., Ettinger, A. (2021). Pragmatic competence of pre-trained language models through the lens of discourse connectives. Proceedings of the 2021 SIGNLL Conference on Computational Natural Language Learning. [PDF]
Wu, Q., Ettinger, A. (2021). Variation and generality in encoding of syntactic anomaly information in sentence embeddings. Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. [PDF]
Yu, L., Ettinger, A. (2021). On the Interplay Between Fine-tuning and Composition in Transformers. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics: Findings. [PDF]
Misra, K., Ettinger, A., Taylor Rayz, J. (2021). Do language models learn typicality judgments from text? Proceedings of the 43rd Annual Meeting of the Cognitive Science Society. [PDF]
Yu, L., Ettinger, A. (2020). Assessing Phrasal Representation and Composition in Transformers. Proceedings of The 2020 Conference on Empirical Methods in Natural Language Processing. [PDF]
Misra, K., Ettinger, A., Taylor Rayz, J. (2020). Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming. Findings of the Association for Computational Linguistics: EMNLP 2020. [PDF]
Toshniwal, S., Wiseman, S., Ettinger, A., Gimpel, K., Livescu, K. (2020). Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks. Proceedings of The 2020 Conference on Empirical Methods in Natural Language Processing. [PDF]
Klafka, J., Ettinger, A. (2020). Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [PDF] [Probing datasets and code]
Toshniwal, S., Ettinger, A., Gimpel, K., Livescu, K. (2020). PeTra: A Sparsely Supervised Memory Model for People Tracking. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [PDF] [Colab notebook]
Ettinger, A. (2020). What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics. [PDF] [Diagnostic tests and code]
Ettinger, A., Elgohary, A., Phillips, C., Resnik, P. (2018). Assessing Composition in Sentence Vector Representations. Proceedings of the 27th International Conference on Computational Linguistics. [PDF] [Supplementary material]
Ettinger, A., Rao, S., Daumé III, H., Bender, E. M. (2017). Towards Linguistically Generalizable NLP Systems: A Workshop and Shared Task. Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems. [PDF]
Ettinger, A., Elgohary, A., Resnik, P. (2016). Probing for semantic evidence of composition by means of simple classification tasks. Proceedings of the First Workshop on Evaluating Vector Space Representations for NLP, ACL 2016. Recipient of Best Proposal Award. [PDF]
Ettinger, A., Feldman, N.H., Resnik, P., Phillips, C. (2016). Modeling N400 amplitude using vector space models of word representation. Proceedings of the 38th Annual Conference of the Cognitive Science Society. [PDF]
Ettinger, A., Linzen, T. (2016). Evaluating vector space models using human semantic priming results. Proceedings of the First Workshop on Evaluating Vector Space Representations for NLP, ACL 2016. [PDF]
Ettinger, A., Resnik, P., Carpuat, M. (2016). Retrofitting sense-specific word vectors using parallel text. Proceedings of NAACL HLT 2016. [PDF] [Supplementary material]
Rao, S., Ettinger, A., Daumé III, H., Resnik, P. (2015). Dialogue focus tracking for zero pronoun resolution. Proceedings of NAACL HLT 2015. [PDF]
Ettinger, A., Linzen, T., Marantz, A. (2014). The role of morphology in phoneme prediction: Evidence from MEG. Brain and Language 129, 14-23. [PDF]
Ettinger, A., Malamud, S. (2014). Mandarin utterance-final particle ba in the conversational scoreboard. Proceedings of Sinn und Bedeutung. [PDF]
Presentations
Ettinger, A., Elgohary, A., Resnik, P. (2016). Probing for semantic evidence of composition by means of simple classification tasks. Talk presented at the First Workshop on Evaluating Vector Space Representations for NLP, ACL 2016, Berlin, Germany.
Ettinger, A., Linzen, T. (2016). Evaluating vector space models using human semantic priming results. Poster presented at the First Workshop on Evaluating Vector Space Representations for NLP, ACL 2016, Berlin, Germany.
Ettinger, A., Feldman, N.H., Resnik, P., Phillips, C. (2016). Modeling N400 amplitude using vector space models of word representation. Poster presented at Annual Conference of the Cognitive Science Society, Philadelphia, PA.
Ettinger, A., Resnik, P., Carpuat, M. (2016). Retrofitting sense-specific word vectors using parallel text. Talk presented at NAACL HLT 2016, San Diego, CA.
Rao, S., Ettinger, A., Daumé III, H., Resnik, P. (2015). Dialogue focus tracking for zero pronoun resolution. Poster presented at NAACL HLT 2015, Denver, CO.
Ettinger, A., Malamud, S. (2013). Mandarin utterance-final particle ba in the conversational scoreboard. Talk presented at the Linguistic Society of America Annual Meeting, Boston, MA, and the 19th International Congress of Linguists, Geneva, Switzerland.
Ettinger, A., Linzen, T., Marantz, A. (2013). The role of morphological structure in phoneme prediction: Evidence from MEG. Poster presented at the Cognitive Neuroscience Society Annual Meeting and the CUNY Conference on Human Sentence Processing.
Ettinger, A., Malamud, S. (2012). Mandarin utterance-final particle ba and conversational goals. Talk presented at the NYU Semantics Discussion Group Meeting, New York, NY.
Ettinger, A., Moua, M.Y., Stanford, J. (2010). Linguistic construction of gender and generations in Hmong-American communities. Talk presented at Linguistic Society of America Annual Meeting, Baltimore, MD.