Najoung Kim
Title · Cited by · Year
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
arXiv preprint arXiv:2211.05100, 2022
Cited by 1124 · 2022
What do you learn from context? Probing for sentence structure in contextualized word representations
I Tenney, P Xia, B Chen, A Wang, A Poliak, RT McCoy, N Kim, ...
Cited by 814 · 2018
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation
N Kim, T Linzen
arXiv preprint arXiv:2010.05465, 2020
Cited by 227 · 2020
Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling
A Wang, J Hula, P Xia, R Pappagari, RT McCoy, R Patel, N Kim, I Tenney, ...
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 117 · 2019
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
N Kim, R Patel, A Poliak, A Wang, P Xia, RT McCoy, I Tenney, A Ross, ...
arXiv preprint arXiv:1904.11544, 2019
Cited by 103 · 2019
jiant 1.1: A software toolkit for research on general-purpose text understanding models
A Wang, IF Tenney, Y Pruksachatkun, K Yu, J Hula, P Xia, R Pappagari, ...
Cited by 51 · 2019
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
Z Wu, L Qiu, A Ross, E Akyürek, B Chen, B Wang, N Kim, J Andreas, ...
arXiv preprint arXiv:2307.02477, 2023
Cited by 48 · 2023
Implicit Discourse Relation Classification: We Need to Talk about Evaluation
N Kim, S Feng, C Gunasekara, L Lastras
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 43 · 2020
LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
SM Kazemi, N Kim, D Bhatia, X Xu, D Ramachandran
arXiv preprint arXiv:2212.13894, 2022
Cited by 42 · 2022
Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering
N Kim, E Pavlick, BK Ayan, D Ramachandran
arXiv preprint arXiv:2101.00391, 2021
Cited by 36 · 2021
What do you learn from context? Probing for sentence structure in contextualized word representations
I Tenney, P Xia, B Chen, A Wang, A Poliak, RT McCoy, N Kim, ...
In …, 2019
Cited by 36 · 2019
Inverse Scaling: When Bigger Isn't Better
IR McKenzie, A Lyzhov, M Pieler, A Parrish, A Mueller, A Prabhu, ...
arXiv preprint arXiv:2306.09479, 2023
Cited by 35 · 2023
Inverse scaling can become U-shaped
J Wei, N Kim, Y Tay, QV Le
arXiv preprint arXiv:2211.02011, 2022
Cited by 30 · 2022
Automatic scoring of semantic fluency
N Kim, JH Kim, MK Wolters, SE MacPherson, JC Park
Frontiers in Psychology 10, 1020, 2019
Cited by 29 · 2019
Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling
SR Bowman, E Pavlick, E Grave, B Van Durme, A Wang, J Hula, P Xia, ...
arXiv preprint arXiv:1812.10860, 2018
Cited by 26 · 2018
Testing the general deductive reasoning capacity of large language models using OOD examples
A Saparov, RY Pang, V Padmakumar, N Joshi, M Kazemi, N Kim, H He
Advances in Neural Information Processing Systems 36, 2024
Cited by 20 · 2024
Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models
N Kim, T Linzen, P Smolensky
arXiv preprint arXiv:2212.10769, 2022
Cited by 19 · 2022
Testing for Grammatical Category Abstraction in Neural Language Models
N Kim, P Smolensky
Proceedings of the Society for Computation in Linguistics 4 (1), 467-470, 2021
Cited by 19 · 2021
(QA)²: Question Answering with Questionable Assumptions
N Kim, PM Htut, SR Bowman, J Petty
arXiv preprint arXiv:2212.10003, 2022
Cited by 16 · 2022
Entity Tracking in Language Models
N Kim, S Schuster
arXiv preprint arXiv:2305.02363, 2023
Cited by 11 · 2023