Vamsi Aribandi
ASAPP
Title · Cited by · Year
Are Pre-trained Convolutions Better than Pre-trained Transformers?
Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler
ACL 2021, 2021
Cited by 37 · 2021
ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning
V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng, SV Mehta, H Zhuang, ...
ICLR 2022, 2022
Cited by 31 · 2022
OmniNet: Omnidirectional Representations from Transformers
Y Tay, M Dehghani, V Aribandi, J Gupta, P Pham, Z Qin, D Bahri, DC Juan, ...
ICML 2021, 2021
Cited by 13 · 2021
Characterization of Time-variant and Time-invariant Assessment of Suicidality on Reddit using C-SSRS
M Gaur, V Aribandi, A Alambo, U Kursuncu, K Thirunarayan, J Beich, ...
PLoS ONE 16 (5), e0250448, 2021
Cited by 11 · 2021
Knowledge-infused abstractive summarization of clinical diagnostic interviews: Framework development study
G Manas, V Aribandi, U Kursuncu, A Alambo, VL Shalin, K Thirunarayan, ...
JMIR Mental Health 8 (5), e20865, 2021
Cited by 8 · 2021
HyperPrompt: Prompt-based Task-Conditioning of Transformers
Y He, HS Zheng, Y Tay, J Gupta, Y Du, V Aribandi, Z Zhao, YG Li, Z Chen, ...
ICML 2022, 2022
Cited by 4 · 2022
How Reliable are Model Diagnostics?
V Aribandi, Y Tay, D Metzler
ACL Findings 2021, 2021
Cited by 4 · 2021
Prediction of Refactoring-Prone Classes Using Ensemble Learning
VK Aribandi, L Kumar, L Bhanu Murthy Neti, A Krishna
International Conference on Neural Information Processing, 242-250, 2019
Cited by 1 · 2019
Machine-Learned Attention Models Featuring Omnidirectional Processing
Y Tay, D Juan, D Bahri, DA Metzler Jr, JP Gupta, M Dehghani, P Pham, ...
US Patent App. 17/592,796, 2022
2022