Rahul Ramesh
Verified email at seas.upenn.edu

Title · Cited by · Year
Model Zoo: A Growing "Brain" That Learns Continually
R Ramesh, P Chaudhari
International Conference on Learning Representations (ICLR 22), 2022
Cited by 75 · 2022
Successor Options: An Option Discovery Framework for Reinforcement Learning
R Ramesh*, M Tomar*, B Ravindran
International Joint Conference on Artificial Intelligence (IJCAI 19), 2019
Cited by 40 · 2019
Learning to factor policies and action-value functions: Factored action space representations for deep reinforcement learning
S Sharma, A Suresh*, R Ramesh*, B Ravindran
arXiv preprint arXiv:1705.07269, 2017
Cited by 38 · 2017
FigureNet: A Deep Learning model for Question-Answering on Scientific Plots
R Reddy, R Ramesh, A Deshpande, MM Khapra
International Joint Conference on Neural Networks (IJCNN 19), 2019
Cited by 35* · 2019
The training process of many deep networks explores the same low-dimensional manifold
J Mao, I Griniasty, HK Teoh, R Ramesh, R Yang, MK Transtrum, ...
Proceedings of the National Academy of Sciences 121 (12), e2310002121, 2024
Cited by 14 · 2024
Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks
R Ramesh, ES Lubana, M Khona, RP Dick, H Tanaka
Forty-first International Conference on Machine Learning (ICML 24), 2024
Cited by 13* · 2024
The Value of Out-of-Distribution Data
A De Silva, R Ramesh*, CE Priebe, P Chaudhari, JT Vogelstein
International Conference on Machine Learning (ICML 23), 2022
Cited by 12 · 2022
Prospective Learning: Principled Extrapolation to the Future
A De Silva*, R Ramesh*, L Ungar, MH Shuler, NJ Cowan, M Platt, C Li, ...
Conference on Lifelong Learning Agents, 347-357, 2023
Cited by 10* · 2023
A picture of the space of typical learnable tasks
R Ramesh, J Mao, I Griniasty, R Yang, HK Teoh, M Transtrum, JP Sethna, ...
International Conference on Machine Learning (ICML 23), 2022
Cited by 6 · 2022
Deep Reference Priors: What is the best way to pretrain a model?
Y Gao*, R Ramesh*, P Chaudhari
International Conference on Machine Learning (ICML 22), 2022
Cited by 5 · 2022
Option Encoder: A Framework for Discovering a Policy Basis in Reinforcement Learning
R Ramesh*, A Manoharan*, B Ravindran
The European Conference on Machine Learning and Principles and Practice of …, 2020
Cited by 4* · 2020
Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model
M Khona, M Okawa, J Hula, R Ramesh, K Nishi, R Dick, ES Lubana, ...
arXiv preprint arXiv:2402.07757, 2024
Cited by 2 · 2024
Prospective Learning: Learning for a Dynamic Future
A De Silva*, R Ramesh*, R Yang*, S Yu, JT Vogelstein, P Chaudhari
arXiv preprint arXiv:2411.00109, 2024
2024
Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing
K Nishi, M Okawa, R Ramesh, M Khona, ES Lubana, H Tanaka
arXiv preprint arXiv:2410.17194, 2024
2024
Many Perception Tasks are Highly Redundant Functions of their Input Data
R Ramesh*, A Bisulco*, RW DiTullio, L Wei, V Balasubramanian, ...
arXiv preprint arXiv:2407.13841, 2024
2024
AUPCR Maximizing Matchings: Towards a Pragmatic Notion of Optimality for One-Sided Preference Matchings
G Raguvir J*, R Ramesh*, S Sridhar*, V Manoharan*
Multidisciplinary Workshop on Advances in Preference Handling (AAAI 2018), 2017
2017