Haihao (Sean) Lu
Assistant Professor, University of Chicago
Relatively smooth convex optimization by first-order methods, and applications
H Lu, RM Freund, Y Nesterov
SIAM Journal on Optimization 28 (1), 333-354, 2018
The best of many worlds: Dual mirror descent for online allocation problems
SR Balseiro, H Lu, V Mirrokni
Operations Research 71 (1), 101-119, 2023
Depth creates no bad local minima
H Lu, K Kawaguchi
arXiv preprint arXiv:1702.08580, 2017
“Relative continuity” for non-Lipschitz nonsmooth convex optimization using stochastic (or deterministic) mirror descent
H Lu
INFORMS Journal on Optimization 1 (4), 288-303, 2019
Ordered SGD: A new stochastic optimization framework for empirical risk minimization
K Kawaguchi, H Lu
International Conference on Artificial Intelligence and Statistics, 669-679, 2020
Practical large-scale linear programming using primal-dual hybrid gradient
D Applegate, M Díaz, O Hinder, H Lu, M Lubin, B O'Donoghue, W Schudy
Advances in Neural Information Processing Systems 34, 20243-20257, 2021
Regularized online allocation problems: Fairness and beyond
S Balseiro, H Lu, V Mirrokni
International Conference on Machine Learning, 630-639, 2021
Accelerating gradient boosting machines
H Lu, SP Karimireddy, N Ponomareva, V Mirrokni
International Conference on Artificial Intelligence and Statistics, 516-526, 2020
Faster first-order primal-dual methods for linear programming using restarts and sharpness
D Applegate, O Hinder, H Lu, M Lubin
Mathematical Programming 201 (1), 133-184, 2023
New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
RM Freund, H Lu
Mathematical Programming 170, 445-477, 2018
The landscape of the proximal point method for nonconvex–nonconcave minimax optimization
B Grimmer, H Lu, P Worah, V Mirrokni
Mathematical Programming 201 (1), 373-407, 2023
Randomized gradient boosting machine
H Lu, R Mazumder
SIAM Journal on Optimization 30 (4), 2780-2808, 2020
Accelerating Greedy Coordinate Descent Methods
H Lu, R Freund, V Mirrokni
International Conference on Machine Learning, 3257-3266, 2018
Generalized stochastic Frank–Wolfe algorithm with stochastic “substitute” gradient for structured convex optimization
H Lu, RM Freund
Mathematical Programming 187 (1), 317-349, 2021
An O(s^r)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to the Linear Convergence of Minimax Problems
H Lu
Mathematical Programming 194, 1061-1112, 2022
Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
S Wang, W Zhou, H Lu, A Maleki, V Mirrokni
International Conference on Machine Learning, 5228-5237, 2018
Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming
D Applegate, M Díaz, H Lu, M Lubin
SIAM Journal on Optimization 34 (1), 459-484, 2024
Approximate leave-one-out for high-dimensional non-differentiable learning problems
S Wang, W Zhou, A Maleki, H Lu, V Mirrokni
arXiv preprint arXiv:1810.02716, 2018
Limiting behaviors of nonconvex-nonconcave minimax optimization via continuous-time systems
B Grimmer, H Lu, P Worah, V Mirrokni
International Conference on Algorithmic Learning Theory, 465-487, 2022
Renormalized dispersion relations of β-Fermi-Pasta-Ulam chains in equilibrium and nonequilibrium states
SW Jiang, H Lu, D Zhou, D Cai
Physical Review E 90 (3), 032925, 2014