Preetum Nakkiran
Apple ML Research
Verified email at cs.harvard.edu
Title · Cited by · Year
Deep double descent: Where bigger models and more data hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
International Conference on Learning Representations (ICLR) 2020
Cited by 906 · 2019
SGD on Neural Networks Learns Functions of Increasing Complexity
P Nakkiran, G Kaplun, D Kalimeris, T Yang, B Edelman, H Zhang, B Barak
Advances in Neural Information Processing Systems, 3491-3501, 2019
Cited by 216* · 2019
Having Your Cake and Eating It Too: Jointly Optimal Erasure Codes for I/O, Storage, and Network-bandwidth
KV Rashmi, P Nakkiran, J Wang, NB Shah, K Ramchandran
13th USENIX Conference on File and Storage Technologies (FAST 15), 81-94, 2015
Cited by 166 · 2015
Adversarial robustness may be at odds with simplicity
P Nakkiran
arXiv preprint arXiv:1901.00532, 2019
Cited by 130 · 2019
Optimal regularization can mitigate double descent
P Nakkiran, P Venkat, S Kakade, T Ma
arXiv preprint arXiv:2003.01897, 2020
Cited by 118 · 2020
Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks
R Prabhavalkar, R Alvarez, C Parada, P Nakkiran, TN Sainath
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
Cited by 102 · 2015
Compressing deep neural networks using a rank-constrained topology
P Nakkiran, R Alvarez, R Prabhavalkar, C Parada
Sixteenth Annual Conference of the International Speech Communication …, 2015
Cited by 94 · 2015
Revisiting model stitching to compare neural representations
Y Bansal, P Nakkiran, B Barak
Advances in neural information processing systems 34, 225-236, 2021
Cited by 70 · 2021
More data can hurt for linear regression: Sample-wise double descent
P Nakkiran
arXiv preprint arXiv:1912.07242, 2019
Cited by 64 · 2019
The deep bootstrap framework: Good online learners are good offline generalizers
P Nakkiran, B Neyshabur, H Sedghi
International Conference on Learning Representations (ICLR) 2021, 2020
Cited by 63* · 2020
Limitations of neural collapse for understanding generalization in deep learning
L Hui, M Belkin, P Nakkiran
arXiv preprint arXiv:2202.08384, 2022
Cited by 45 · 2022
Benign, tempered, or catastrophic: Toward a refined taxonomy of overfitting
N Mallinar, J Simon, A Abedsoltan, P Pandit, M Belkin, P Nakkiran
Advances in Neural Information Processing Systems 35, 1182-1195, 2022
Cited by 40* · 2022
Computational Limitations in Robust Classification and Win-Win Results
A Degwekar, P Nakkiran, V Vaikuntanathan
Proceedings of the Thirty-Second Conference on Learning Theory 99, 994-1028, 2019
Cited by 37 · 2019
Distributional generalization: A new kind of generalization
P Nakkiran, Y Bansal
arXiv preprint arXiv:2009.08092, 2020
Cited by 34 · 2020
General strong polarization
J Błasiok, V Guruswami, P Nakkiran, A Rudra, M Sudan
Journal of the ACM (JACM) 69 (2), 1-67, 2022
Cited by 27 · 2022
Rank-constrained neural networks
RA Guevara, P Nakkiran
US Patent 9,767,410, 2017
Cited by 26 · 2017
What algorithms can transformers learn? A study in length generalization
H Zhou, A Bradley, E Littwin, N Razin, O Saremi, J Susskind, S Bengio, ...
arXiv preprint arXiv:2310.16028, 2023
Cited by 21 · 2023
Limitations of the NTK for understanding generalization in deep learning
N Vyas, Y Bansal, P Nakkiran
arXiv preprint arXiv:2206.10012, 2022
Cited by 21 · 2022
Learning rate annealing can provably help generalization, even for convex problems
P Nakkiran
arXiv preprint arXiv:2005.07360, 2020
Cited by 20 · 2020
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples are Just Bugs, Too
P Nakkiran
Distill 4 (8), e00019.5, 2019
Cited by 20 · 2019