Dara Bahri
Research Scientist, Google Research
Verified email at google.com - Homepage
Title | Cited by | Year
Efficient Transformers: A Survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys 55 (6), 1-28, 2022
988* | 2022
Long range arena: A benchmark for efficient transformers
Y Tay, M Dehghani, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
449 | 2020
Berkeley advanced reconstruction toolbox
M Uecker, F Ong, JI Tamir, D Bahri, P Virtue, JY Cheng, T Zhang, M Lustig
Proc. Intl. Soc. Mag. Reson. Med 23 (2486), 2015
404 | 2015
Synthesizer: Rethinking self-attention for transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
International conference on machine learning, 10183-10192, 2021
300 | 2021
Sparse sinkhorn attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
International Conference on Machine Learning, 9438-9447, 2020
275 | 2020
ExT5: Towards extreme multi-task scaling for transfer learning
V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng, SV Mehta, H Zhuang, ...
arXiv preprint arXiv:2111.10952, 2021
155 | 2021
Transformer memory as a differentiable search index
Y Tay, V Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
Advances in Neural Information Processing Systems 35, 21831-21843, 2022
126 | 2022
UL2: Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
The Eleventh International Conference on Learning Representations, 2022
116 | 2022
Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ...
arXiv preprint arXiv:2205.05131, 2022
112 | 2022
Charformer: Fast character transformers via gradient-based subword tokenization
Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri, Z Qin, ...
arXiv preprint arXiv:2106.12672, 2021
104 | 2021
SCARF: Self-supervised contrastive learning using random feature corruption
D Bahri, H Jiang, Y Tay, D Metzler
arXiv preprint arXiv:2106.15147, 2021
99 | 2021
Rethinking search: making domain experts out of dilettantes
D Metzler, Y Tay, D Bahri, M Najork
ACM SIGIR Forum 55 (1), 1-27, 2021
90 | 2021
Confident adaptive language modeling
T Schuster, A Fisch, J Gupta, M Dehghani, D Bahri, V Tran, Y Tay, ...
Advances in Neural Information Processing Systems 35, 17456-17472, 2022
77 | 2022
Are pre-trained convolutions better than pre-trained transformers?
Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler
arXiv preprint arXiv:2105.03322, 2021
74 | 2021
Deep k-NN for noisy labels
D Bahri, H Jiang, M Gupta
International Conference on Machine Learning, 540-550, 2020
71 | 2020
Sharpness-aware minimization improves language model generalization
D Bahri, H Mobahi, Y Tay
arXiv preprint arXiv:2110.08529, 2021
60 | 2021
StructFormer: Joint unsupervised induction of dependency and constituency structure from masked language modeling
Y Shen, Y Tay, C Zheng, D Bahri, D Metzler, A Courville
arXiv preprint arXiv:2012.00857, 2020
39 | 2020
Hypergrid transformers: Towards a single model for multiple tasks
Y Tay, Z Zhao, D Bahri, D Metzler, DC Juan
31 | 2021
OmniNet: Omnidirectional representations from transformers
Y Tay, M Dehghani, V Aribandi, J Gupta, PM Pham, Z Qin, D Bahri, ...
International Conference on Machine Learning, 10193-10202, 2021
30 | 2021
Diminishing returns shape constraints for interpretability and regularization
M Gupta, D Bahri, A Cotter, K Canini
Advances in neural information processing systems 31, 2018
26 | 2018