School of Mathematical Sciences, Dalian University of Technology

[East China Normal University] Implementing the ADMM for big datasets: a case study of LASSO

Posted: October 26, 2018, 14:32

Title: Implementing the ADMM for big datasets: a case study of LASSO

Time: 14:30-15:30, Friday, November 9, 2018

Venue: Room A1101, Innovation Park Building (创新园大厦)

Speaker: Xiangfeng Wang (王祥丰), East China Normal University

On-campus contact: Yongzhao Liu (刘永朝)    Tel: 84708351-8141

 

Abstract: The alternating direction method of multipliers (ADMM) has been used extensively in a wide variety of applications. When large datasets with high-dimensional variables are considered, the subproblems arising in the ADMM must be solved inexactly even though they may theoretically have closed-form solutions. This scenario immediately raises mathematical questions such as how accurately these subproblems should be solved and whether convergence can still be guaranteed. Although the ADMM is well known, these issues deserve careful investigation. In this paper, we study the mathematics of implementing the ADMM in large-dataset scenarios. More specifically, we focus on the convex programming case in which the objective function contains a quadratic term with extremely high-dimensional variables, so that a huge-scale system of linear equations must be solved at each ADMM iteration. It is shown that there is no need, and indeed it is impossible, to solve this linear system exactly, and we propose an adjustable inexactness criterion under which it is solved automatically and inexactly. We further identify a safe-guard number of internally nested iterations that suffices to ensure this inexactness criterion when the linear system is solved by a standard numerical linear algebra solver. The convergence, together with the worst-case convergence rate measured by the iteration complexity, is rigorously established for the ADMM with inexactly solved subproblems. Numerical experiments on large datasets for the least absolute shrinkage and selection operator (LASSO), involving millions of variables, are reported to demonstrate the efficiency of this inexact implementation of the ADMM.
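
To make the computational setting concrete, the following is a minimal sketch (not the speaker's code) of ADMM applied to LASSO, min_x 0.5*||Ax - b||^2 + lambda*||x||_1, in which the x-update linear system (A^T A + rho*I) x = A^T b + rho*(z - u) is solved inexactly by a few conjugate-gradient steps. The fixed relative-residual tolerance cg_tol and the iteration cap cg_cap below are simplified stand-ins for the adjustable inexactness criterion and the safe-guard inner-iteration number mentioned in the abstract; all function and variable names are illustrative.

# Minimal sketch: ADMM for LASSO with an inexact x-update (illustrative only).
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (the z-update)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def cg(matvec, rhs, x0, max_iter, tol):
    """Plain conjugate gradients for the SPD system matvec(x) = rhs, stopped by
    a relative-residual tolerance or by the safe-guard iteration cap max_iter."""
    x = x0.copy()
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(rhs):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def admm_lasso(A, b, lam, rho=1.0, outer_iters=100, cg_cap=20, cg_tol=1e-4):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    x-update: (A^T A + rho*I) x = A^T b + rho*(z - u), solved inexactly by CG."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Atb = A.T @ b
    matvec = lambda v: A.T @ (A @ v) + rho * v   # never form A^T A explicitly
    for _ in range(outer_iters):
        x = cg(matvec, Atb + rho * (z - u), x, cg_cap, cg_tol)  # warm-started
        z = soft_threshold(x + u, lam / rho)
        u += x - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 1000))
    x_true = np.zeros(1000); x_true[:10] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = admm_lasso(A, b, lam=0.1)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))

Note that A^T A is never formed: each inner CG step only needs matrix-vector products with A and A^T, which is what makes an inexact x-update practical when x has millions of entries.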

 

Speaker bio: Xiangfeng Wang graduated from the Department of Mathematics at Nanjing University in 2009 and received his Ph.D. from the same department in 2014 (advisor: Prof. Bingsheng He). During his doctoral studies, he was funded by the China Scholarship Council as a joint Ph.D. student at the University of Minnesota (advisor: Prof. Zhi-Quan Luo) and visited Hong Kong Baptist University (advisor: Prof. Xiaoming Yuan). After graduation, he joined the School of Computer Science and Software Engineering at East China Normal University. His main research interests are the design and theoretical analysis of large-scale optimization algorithms and their applications in machine learning (machine-learning-driven nonconvex optimization and optimal transport). He has published more than twenty papers in mathematical programming and computational mathematics journals such as Mathematical Programming (MP), SIAM Journal on Scientific Computing (SISC), and Mathematics of Operations Research (MOR), in machine learning and signal processing journals such as IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE Transactions on Signal Processing (TSP), and Neurocomputing, and at conferences including IJCAI, AAAI, ICMR, ICASSP, and INTERSPEECH.

 

