Yoav Levine
Verified email at mail.huji.ac.il - Homepage
Title · Cited by · Year
Quantum entanglement in deep learning architectures
Y Levine, O Sharir, N Cohen, A Shashua
Physical review letters 122 (6), 065301, 2019
Cited by 144 · 2019
SenseBERT: Driving some sense into BERT
Y Levine, B Lenz, O Dagan, D Padnos, O Sharir, S Shalev-Shwartz, ...
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 113 · 2020
Deep autoregressive models for the efficient variational simulation of many-body quantum systems
O Sharir, Y Levine, N Wies, G Carleo, A Shashua
Physical review letters 124 (2), 020503, 2020
Cited by 112 · 2020
Deep learning and quantum entanglement: Fundamental connections with implications to network design
Y Levine, D Yakira, N Cohen, A Shashua
6th International Conference on Learning Representations (ICLR), 2018
Cited by 103* · 2018
Analysis and design of convolutional networks via hierarchical tensor decompositions
N Cohen, O Sharir, Y Levine, R Tamari, D Yakira, A Shashua
arXiv preprint arXiv:1705.02302, 2017
Cited by 30 · 2017
Benefits of depth for long-term memory of recurrent networks
Y Levine, O Sharir, A Shashua
6th International Conference on Learning Representations (ICLR) workshop, 2018
Cited by 23* · 2018
PMI-Masking: Principled masking of correlated spans
Y Levine, B Lenz, O Lieber, O Abend, K Leyton-Brown, M Tennenholtz, ...
9th International Conference on Learning Representations (ICLR), 2021
Cited by 16 · 2021
Realizing topological superconductivity with superlattices
Y Levine, A Haim, Y Oreg
Physical Review B 96 (16), 165147, 2017
Cited by 16 · 2017
Limits to Depth Efficiencies of Self-Attention
Y Levine, N Wies, O Sharir, H Bata, A Shashua
Advances in Neural Information Processing Systems 33 (NeurIPS), 2020
Cited by 14* · 2020
Which transformer architecture fits my data? A vocabulary bottleneck in self-attention
N Wies, Y Levine, D Jannai, A Shashua
International Conference on Machine Learning, 11170-11181, 2021
Cited by 4 · 2021
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design
Y Levine, N Wies, D Jannai, D Navon, Y Hoshen, A Shashua
10th International Conference on Learning Representations (ICLR), 2022
Cited by 3 · 2022
MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning
E Karpas, O Abend, Y Belinkov, B Lenz, O Lieber, N Ratner, Y Shoham, ...
arXiv preprint arXiv:2205.00445, 2022
2022
Standing on the Shoulders of Giant Frozen Language Models
Y Levine, I Dalmedigos, O Ram, Y Zeldes, D Jannai, D Muhlgay, Y Osin, ...
arXiv preprint arXiv:2204.10019, 2022
2022
Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks
N Wies, Y Levine, A Shashua
arXiv preprint arXiv:2204.02892, 2022
2022
Tensors for deep learning theory: Analyzing deep learning architectures via tensorization
Y Levine, N Wies, O Sharir, N Cohen, A Shashua
Tensors for Data Processing, 215-248, 2022
2022
Articles 1–15