Jiuhai Chen
Verified email at umd.edu
Title · Cited by · Year
From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning
M Li, Y Zhang, Z Li, J Chen, L Chen, N Cheng, J Wang, T Zhou, J Xiao
arXiv preprint arXiv:2308.12032 & NAACL 2024, 2023
42 · 2023
InstructZero: Efficient instruction optimization for black-box large language models
L Chen*, J Chen*, T Goldstein, H Huang, T Zhou
arXiv preprint arXiv:2306.03082 & ICML 2024, 2023
37 · 2023
When do you need chain-of-thought prompting for ChatGPT?
J Chen, L Chen, H Huang, T Zhou
arXiv preprint arXiv:2304.03262, 2023
34 · 2023
Quantifying uncertainty in answers from any language model via intrinsic and extrinsic confidence assessment
J Chen, J Mueller
arXiv preprint arXiv:2308.16175, 2023
23* · 2023
GOAT: A Global Transformer on Large-scale Graphs
K Kong, J Chen, J Kirchenbauer, R Ni, CB Bruss, T Goldstein
International Conference on Machine Learning 2023, 2023
23 · 2023
Gaussian process assisted active learning of physical laws
J Chen, L Kang, G Lin
Technometrics 63 (3), 329-342, 2021
20 · 2021
Why propagate alone? Parallel use of labels and features on graphs
Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ...
2020
19* · 2020
How Many Demonstrations Do You Need for In-context Learning?
J Chen, L Chen, C Zhu, T Zhou
Empirical Methods in Natural Language Processing 2023, 2023
18* · 2023
Particle-based energetic variational inference
Y Wang, J Chen, C Liu, L Kang
Statistics and Computing 31, 1-17, 2021
17 · 2021
A closer look at distribution shifts and out-of-distribution generalization on graphs
M Ding*, K Kong*, J Chen*, J Kirchenbauer, M Goldblum, D Wipf, ...
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
16 · 2021
Does your graph need a confidence boost? Convergent boosted smoothing on graphs with tabular node features
J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf
International Conference on Learning Representations (ICLR) 2022, 2021
14 · 2021
Reflection-tuning: Recycling data for better instruction-tuning
M Li, L Chen, J Chen, S He, T Zhou
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
9* · 2023
Why propagate alone? parallel use of labels and features on graphs
Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ...
arXiv preprint arXiv:2110.07190, 2021
6 · 2021
ODIN: Disentangled Reward Mitigates Hacking in RLHF
L Chen, C Zhu, J Chen, D Soselia, T Zhou, T Goldstein, H Huang, ...
arXiv preprint arXiv:2402.07319 & ICML 2024, 2024
5 · 2024
Convergent boosted smoothing for modeling graph data with tabular node features
J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf
International Conference on Learning Representations (ICLR) 2022, 2021
5 · 2021
Selective reflection-tuning: Student-selected data recycling for LLM instruction-tuning
M Li, L Chen, J Chen, S He, J Gu, T Zhou
arXiv preprint arXiv:2402.10110, 2024
4 · 2024
Understanding the role of self-supervised learning in out-of-distribution detection task
J Chen, C Zhu, B Dai
arXiv preprint arXiv:2110.13435, 2021
4 · 2021
Automated data curation for robust language model fine-tuning
J Chen, J Mueller
arXiv preprint arXiv:2403.12776, 2024
3 · 2024
Can LLMs speak for diverse people? Tuning LLMs via debate to generate controllable controversial statements
M Li, J Chen, L Chen, T Zhou
arXiv preprint arXiv:2402.10614 & ACL 2024, 2024
3 · 2024
Articles 1–20