Welcome to Wei-Lin Chiang’s page!
- I am a CS PhD student at UC Berkeley's Sky Computing Lab (previously RISElab), working with Prof. Ion Stoica.
- I obtained my bachelor's and master's degrees from National Taiwan University under the supervision of Prof. Chih-Jen Lin.
- My research interests include AI systems, Cloud ML, optimization for ML, and scalable ML algorithms. I am currently building an intercloud broker system, SkyPilot, to bring them all together.
- I enjoy developing open-source ML software and am always happy to learn how it is being used! Email me if you have questions or find our projects useful.
- More details can be found in my CV.
Internships
- Intern@Amazon, Seattle (May 2021 - Aug. 2021)
Contrastive learning for information extraction on semi-structured webpages
- Intern@Google Research, Mountain View (Dec. 2018 - Mar. 2019)
Efficient algorithms for training large and deep GCN models.
Cluster-GCN paper, code
- Intern@Alibaba Group, Hangzhou (July 2017 - Sept. 2017)
Distributed ML algorithms on Alibaba’s parameter server (KunPeng)
- Intern@Microsoft Research Asia, Beijing (Dec. 2016 - Feb. 2017)
Distributed training for deep learning frameworks
- Intern@Microsoft, Redmond (July 2016 - Oct. 2016)
Large-scale ML algorithms on Microsoft’s distributed platform (REEF)
Projects
- SkyPilot (Project)
SkyPilot is an intercloud broker system for easily and cost-effectively deploying ML workloads on any cloud
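SkyPilot workloads are declared as task YAMLs and launched with the `sky` CLI. A minimal, illustrative sketch (the file name, accelerator type, and training script below are assumptions for illustration, not taken from this page):

```yaml
# task.yaml - a hypothetical SkyPilot task specification
resources:
  accelerators: V100:1   # request one V100 GPU on whichever cloud can provide it

setup: |
  pip install torch      # one-time environment setup on the provisioned VM

run: |
  python train.py        # the command SkyPilot runs on the cluster
```

Launching is then something like `sky launch -c mycluster task.yaml`; the broker picks a cloud and region subject to cost and availability.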
- Balsa (Project | Paper)
Balsa is an ML-based query optimizer that learns to optimize SQL queries by trial and error, using deep RL and sim-to-real learning
- Cluster-GCN (Project | Paper)
One of the first scalable methods for training large (million-scale) and deep GCNs
Achieved state-of-the-art performance on public datasets (e.g., PPI, Reddit)
- Distributed LIBLINEAR (Project | Paper)
Distributed extension of LIBLINEAR, a widely used linear classification package
Developed an L1-regularized logistic regression solver for large (billion-scale) tasks.
- Multi-core LIBLINEAR (Project | Paper)
Multi-core extension of LIBLINEAR, a widely used linear classification package
Developed efficient parallel algorithms for primal and dual solvers
Publications (Google Scholar Profile)
- Balsa: Learning a Query Optimizer Without Expert Demonstrations
Zongheng Yang, Wei-Lin Chiang+, Sifei Luan+, Gautam Mittal, Michael Luo, Ion Stoica. (+ equal contribution)
ACM SIGMOD 2022
- Manifold Identification for Ultimately Communication-Efficient Distributed Optimization
Yu-Sheng Li, Wei-Lin Chiang, and Ching-pei Lee.
International Conference on Machine Learning (ICML), 2020
- Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks [code, dataset (Amazon2M)]
Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh.
ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), 2019 (Oral) slides, poster
- Preconditioned Conjugate Gradient Methods in Truncated Newton Frameworks for Large-scale Linear Classification [supplement & code. Implementation available in LIBLINEAR after version 2.20.]
Chih-Yang Hsia, Wei-Lin Chiang, and Chih-Jen Lin.
Asian Conference on Machine Learning (ACML), 2018 (Best paper award) slides, poster
- Limited-memory Common-directions Method for Distributed L1-regularized Linear Classification [supplement & code. Implementation available in Distributed LIBLINEAR.]
Wei-Lin Chiang, Yu-Sheng Li, Ching-pei Lee, and Chih-Jen Lin.
SIAM International Conference on Data Mining (SDM), 2018 slides, poster
- Parallel Dual Coordinate Descent Method for Large-scale Linear Classification in Multi-core Environments [supplement, code. Implementation available in Multi-core LIBLINEAR.]
Wei-Lin Chiang, Mu-Chu Lee, and Chih-Jen Lin.
ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), 2016 poster
- Fast Matrix-vector Multiplications for Large-scale Logistic Regression on Shared-memory Systems [supplement, code. Implementation available in Multi-core LIBLINEAR.]
Mu-Chu Lee, Wei-Lin Chiang, and Chih-Jen Lin.
IEEE International Conference on Data Mining (ICDM), 2015 slides