TY - GEN
T1 - Alternating Direction Method of Multipliers for Convex Optimization in Machine Learning-Interpretation and Implementation
AU - Huang, Kuan Min
AU - Samani, Hooman
AU - Yang, Chan Yun
AU - Chen, Jie Sheng
N1 - © 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The alternating direction method of multipliers (ADMM) is an important method for solving convex optimization problems. As optimization tasks have multiplied with the variety of machine learning applications, ADMM has recently gained much attention. ADMM solves a problem by breaking it into smaller pieces, specifically to limit the problem dimension. Each piece is then easier to handle, which accordingly reduces the total computation time needed to reach the optimum. Owing to this speed-up, it has been widely adopted for optimization in a number of areas. In this paper, we begin the explanation with constrained convex optimization and the relation between the primal problem and the dual problem. Building on this preliminary explanation, two optimization algorithms are introduced: the dual ascent and the dual decomposition approaches. An introduction to the augmented Lagrangian, the key to the success of ADMM, follows for elaboration. Finally, the main topic of ADMM is explained algorithmically based on these fundamentals, and example code is outlined for implementation.
AB - The alternating direction method of multipliers (ADMM) is an important method for solving convex optimization problems. As optimization tasks have multiplied with the variety of machine learning applications, ADMM has recently gained much attention. ADMM solves a problem by breaking it into smaller pieces, specifically to limit the problem dimension. Each piece is then easier to handle, which accordingly reduces the total computation time needed to reach the optimum. Owing to this speed-up, it has been widely adopted for optimization in a number of areas. In this paper, we begin the explanation with constrained convex optimization and the relation between the primal problem and the dual problem. Building on this preliminary explanation, two optimization algorithms are introduced: the dual ascent and the dual decomposition approaches. An introduction to the augmented Lagrangian, the key to the success of ADMM, follows for elaboration. Finally, the main topic of ADMM is explained algorithmically based on these fundamentals, and example code is outlined for implementation.
KW - convex optimization
KW - dual ascent
KW - dual problem
UR - http://www.scopus.com/inward/record.url?scp=85134020744&partnerID=8YFLogxK
U2 - 10.1109/ICIPRob54042.2022.9798720
DO - 10.1109/ICIPRob54042.2022.9798720
M3 - Conference contribution
AN - SCOPUS:85134020744
T3 - 2022 2nd International Conference on Image Processing and Robotics, ICIPRob 2022
BT - 2022 2nd International Conference on Image Processing and Robotics, ICIPRob 2022
PB - Institute of Electrical and Electronics Engineers (IEEE)
T2 - 2nd International Conference on Image Processing and Robotics, ICIPRob 2022
Y2 - 12 March 2022 through 13 March 2022
ER -