Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term.
We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
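To make the setting concrete, below is a minimal sketch (not the paper's code) of an inexact proximal-gradient iteration for the lasso problem, where the smooth term is a least-squares loss and the non-smooth term is an L1 penalty. The error model used here, Gaussian noise of magnitude O(1/k^2) added to the gradient, is an illustrative assumption chosen to satisfy the paper's "errors decrease at appropriate rates" condition; all function and variable names are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_grad(A, b, lam, n_iter=500, seed=0):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with an inexact gradient.
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        grad = A.T @ (A @ x - b)       # exact gradient of the smooth term
        # Inexact gradient: noise shrinking like O(1/k^2), fast enough
        # (per the paper's conditions) to preserve the error-free rate.
        grad += rng.normal(scale=1.0 / k**2, size=x.shape)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Despite the perturbed gradients, the iterates still drive the composite objective down, because the summed error is finite.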
Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
In recent years, the importance of taking advantage of the structure of convex optimization problems has become a topic of intense research in the machine learning community.
……
Download the original paper and related source code:
http://page2.dfpan.com/fs/7ldcejf202617259160/