Patterns and rates of exonic de novo mutations in autism spectrum disorders. BM Neale, Y Kou, L Liu, A Ma'ayan, KE Samocha, A Sabo, CF Lin, et al. Nature 485 (7397), 242-245, 2012. Cited by: 2083.
The huge package for high-dimensional undirected graph estimation in R. T Zhao, H Liu, K Roeder, J Lafferty, L Wasserman. Journal of Machine Learning Research 13 (1), 1059-1062, 2012. Cited by: 724*.
SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. H Jiang, P He, W Chen, X Liu, J Gao, T Zhao. arXiv preprint arXiv:1911.03437 (ACL), 2019. Cited by: 497.
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. Q Zhang, M Chen, A Bukharin, P He, Y Cheng, W Chen, T Zhao. arXiv preprint arXiv:2303.10512 (ICLR), 2023. Cited by: 379.
Transformer Hawkes Process. S Zuo, H Jiang, Z Li, T Zhao, H Zha. International Conference on Machine Learning, 11692-11702, 2020. Cited by: 331.
BOND: BERT-assisted open-domain named entity recognition with distant supervision. C Liang, Y Yu, H Jiang, S Er, R Wang, T Zhao, C Zhang. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020. Cited by: 276.
A nonconvex optimization framework for low rank matrix estimation. T Zhao, Z Wang, H Liu. Advances in Neural Information Processing Systems 28, 2015. Cited by: 196*.
Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging. A Eloyan, J Muschelli, MB Nebel, H Liu, F Han, T Zhao, AD Barber, S Joel, et al. Frontiers in Systems Neuroscience 6, 61, 2012. Cited by: 168.
Deep hyperspherical learning. W Liu, YM Zhang, X Li, Z Yu, B Dai, T Zhao, L Song. Advances in Neural Information Processing Systems 30, 2017. Cited by: 155.
Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach. Y Yu, S Zuo, H Jiang, W Ren, T Zhao, C Zhang. arXiv preprint arXiv:2010.07835 (NAACL), 2020. Cited by: 131.
The FLARE package for high dimensional linear regression and precision matrix estimation in R. X Li, T Zhao, X Yuan, H Liu. Journal of Machine Learning Research, 2015. Cited by: 130*.
Differentiable top-k with optimal transport. Y Xie, H Dai, M Chen, B Dai, T Zhao, H Zha, W Wei, T Pfister. Advances in Neural Information Processing Systems 33, 20520-20531, 2020. Cited by: 122.
Efficient approximation of deep ReLU networks for functions on low dimensional manifolds. M Chen, H Jiang, W Liao, T Zhao. Advances in Neural Information Processing Systems 32, 2019. Cited by: 121.
Symmetry, saddle points, and global optimization landscape of nonconvex matrix factorization. X Li, J Lu, R Arora, J Haupt, H Liu, Z Wang, T Zhao. IEEE Transactions on Information Theory 65 (6), 3489-3514, 2019. Cited by: 118*.
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models. Y Li, Y Yu, C Liang, P He, N Karampatziakis, W Chen, T Zhao. arXiv preprint arXiv:2310.08659 (ICLR), 2023. Cited by: 117.
Nonparametric regression on low-dimensional manifolds using deep ReLU networks: Function approximation and statistical recovery. M Chen, H Jiang, W Liao, T Zhao. arXiv preprint arXiv:1908.01842 (IMA Information and Inference), 2019. Cited by: 111.
Nonconvex sparse learning via stochastic optimization with progressive variance reduction. X Li, R Arora, H Liu, J Haupt, T Zhao. arXiv preprint arXiv:1605.02711 (ICML), 2016. Cited by: 111*.
Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? A Neural Tangent Kernel Perspective. K Huang, Y Wang, M Tao, T Zhao. Advances in Neural Information Processing Systems 33, 2698-2709, 2020. Cited by: 109.
Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data. M Chen, K Huang, T Zhao, M Wang. arXiv preprint arXiv:2302.07194 (ICML), 2023. Cited by: 106.
Taming sparsely activated transformer with stochastic experts. S Zuo, X Liu, J Jiao, YJ Kim, H Hassan, R Zhang, T Zhao, J Gao. arXiv preprint arXiv:2110.04260 (ICLR), 2021. Cited by: 101.