Yanai Elazar
Postdoctoral Researcher at AI2 & UW
Evaluating models' local decision boundaries via contrast sets
M Gardner, Y Artzi, V Basmova, J Berant, B Bogin, S Chen, P Dasigi, ...
arXiv preprint arXiv:2004.02709, 2020
Adversarial removal of demographic attributes from text data
Y Elazar, Y Goldberg
arXiv preprint arXiv:1808.06640, 2018
Null it out: Guarding protected attributes by iterative nullspace projection
S Ravfogel, Y Elazar, H Gonen, M Twiton, Y Goldberg
arXiv preprint arXiv:2004.07667, 2020
Measuring and improving consistency in pretrained language models
Y Elazar, N Kassner, S Ravfogel, A Ravichander, E Hovy, H Schütze, ...
Transactions of the Association for Computational Linguistics 9, 1012-1031, 2021
oLMpics-on what language model pre-training captures
A Talmor, Y Elazar, Y Goldberg, J Berant
Transactions of the Association for Computational Linguistics 8, 743-758, 2020
Amnesic probing: Behavioral explanation with amnesic counterfactuals
Y Elazar, S Ravfogel, A Jacovi, Y Goldberg
Transactions of the Association for Computational Linguistics 9, 160-175, 2021
Contrastive explanations for model interpretability
A Jacovi, S Swayamdipta, S Ravfogel, Y Elazar, Y Choi, Y Goldberg
arXiv preprint arXiv:2103.01378, 2021
Do language embeddings capture scales?
X Zhang, D Ramachandran, I Tenney, Y Elazar, D Roth
arXiv preprint arXiv:2010.05345, 2020
A taxonomy and review of generalization research in NLP
D Hupkes, M Giulianelli, V Dankers, M Artetxe, Y Elazar, T Pimentel, ...
Nature Machine Intelligence 5 (10), 1161-1174, 2023
How large are lions? Inducing distributions over quantitative attributes
Y Elazar, A Mahabal, D Ramachandran, T Bedrax-Weiss, D Roth
arXiv preprint arXiv:1906.01327, 2019
Adversarial removal of demographic attributes revisited
M Barrett, Y Kementchedjhieva, Y Elazar, D Elliott, A Søgaard
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
First align, then predict: Understanding the cross-lingual ability of multilingual BERT
B Muller, Y Elazar, B Sagot, D Seddah
arXiv preprint arXiv:2101.11109, 2021
Back to square one: Artifact detection, training and commonsense disentanglement in the Winograd schema
Y Elazar, H Zhang, Y Goldberg, D Roth
arXiv preprint arXiv:2104.08161, 2021
Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation
M Mosbach, T Pimentel, S Ravfogel, D Klakow, Y Elazar
arXiv preprint arXiv:2305.16938, 2023
It's not Greek to mBERT: inducing word-level translations from multilingual BERT
H Gonen, S Ravfogel, Y Elazar, Y Goldberg
arXiv preprint arXiv:2010.08275, 2020
Measuring causal effects of data statistics on language model's 'factual' predictions
Y Elazar, N Kassner, S Ravfogel, A Feder, A Ravichander, M Mosbach, ...
arXiv preprint arXiv:2207.14251, 2022
Revisiting few-shot relation classification: Evaluation data and classification schemes
O Sabo, Y Elazar, Y Goldberg, I Dagan
Transactions of the Association for Computational Linguistics 9, 691-706, 2021
Privacy and fairness in recommender systems via adversarial training of user representations
YS Resheff, Y Elazar, M Shahar, OS Shalom
arXiv preprint arXiv:1807.03521, 2018
OLMo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024
What's In My Big Data?
Y Elazar, A Bhagia, I Magnusson, A Ravichander, D Schwenk, A Suhr, ...
arXiv preprint arXiv:2310.20707, 2023