Jinliang Wei
Verified email at alumni.cmu.edu - Homepage
Title · Cited by · Year
Petuum: A new platform for distributed machine learning on big data
EP Xing, Q Ho, W Dai, JK Kim, J Wei, S Lee, X Zheng, P Xie, A Kumar, ...
IEEE Transactions on Big Data 1 (2), 49-67, 2015
436 · 2015
Poseidon: An efficient communication architecture for distributed deep learning on GPU clusters
H Zhang, Z Zheng, S Xu, W Dai, Q Ho, X Liang, Z Hu, J Wei, P Xie, ...
2017 USENIX Annual Technical Conference (USENIX ATC 17), 181-193, 2017
232* · 2017
LightLDA: Big topic models on modest computer clusters
J Yuan, F Gao, Q Ho, W Dai, J Wei, X Zheng, EP Xing, TY Liu, WY Ma
Proceedings of the 24th International Conference on World Wide Web, 1351-1361, 2015
192 · 2015
Exploiting Bounded Staleness to Speed Up Big Data Analytics.
H Cui, J Cipar, Q Ho, JK Kim, S Lee, A Kumar, J Wei, W Dai, GR Ganger, ...
USENIX Annual Technical Conference, 37-48, 2014
144 · 2014
Managed communication and consistency for fast data-parallel iterative analytics
J Wei, W Dai, A Qiao, Q Ho, H Cui, GR Ganger, PB Gibbons, GA Gibson, ...
Proceedings of the Sixth ACM Symposium on Cloud Computing, 381-394, 2015
108 · 2015
High-Performance Distributed ML at Scale through Parameter Server Consistency Models.
W Dai, A Kumar, J Wei, Q Ho, GA Gibson, EP Xing
AAAI, 79-87, 2015
102 · 2015
Addressing the straggler problem for iterative convergent parallel ML
A Harlap, H Cui, W Dai, J Wei, GR Ganger, PB Gibbons, GA Gibson, ...
Proceedings of the Seventh ACM Symposium on Cloud Computing, 98-111, 2016
84 · 2016
Priority-based parameter propagation for distributed DNN training
A Jayarajan, J Wei, G Gibson, A Fedorova, G Pekhimenko
Proceedings of the 2nd SysML Conference, Palo Alto, CA, USA, 2019
58 · 2019
Exploiting iterative-ness for parallel ML computations
H Cui, A Tumanov, J Wei, L Xu, W Dai, J Haber-Kucharsky, Q Ho, ...
Proceedings of the ACM Symposium on Cloud Computing, 1-14, 2014
38 · 2014
Petuum: A framework for iterative-convergent distributed ML
W Dai, J Wei, JK Kim, S Lee, J Yin, Q Ho, EP Xing
37 · 2013
Consistent bounded-asynchronous parameter servers for distributed ML
J Wei, W Dai, A Kumar, X Zheng, Q Ho, EP Xing
arXiv preprint arXiv:1312.7869, 2013
14 · 2013
Automating Dependence-Aware Parallelization of Machine Learning Training on Distributed Shared Memory
J Wei, G Gibson, P Gibbons, EP Xing
EuroSys, 2019
7 · 2019
Benchmarking Apache Spark with Machine Learning Applications
J Wei, JK Kim, GA Gibson
Carnegie Mellon University, Pittsburgh, 2016
7 · 2016
Parallel Implementation of Expectation-Maximization for Fast Convergence
H Cui, J Wei, W Dai
6*
A software toolkit for visualizing enterprise routing design
X Sun, J Wei, SG Rao, GG Xie
2011 4th Symposium on Configuration Analytics and Automation (SAFECONFIG), 1-8, 2011
3 · 2011
Addressing the Long-Lineage Bottleneck in Apache Spark
H Wang, J Wei, G Gibson
1 · 2018
Scheduling for Efficient Large-Scale Machine Learning Training
J Wei
Intel, 2019
2019
Efficient and Programmable Distributed Shared Memory Systems for Machine Learning Training
J Wei
Google, 2018
2018
Solving the straggler problem for iterative convergent parallel ML
A Harlap, H Cui, W Dai, J Wei, GR Ganger, PB Gibbons, GA Gibson, ...
2015
Training Larger Models on TensorFlow without Additional GPU
J Wei, A Qiao, A Jayarajan, G Gibson, V Vasudevan, E Xing