
AdaBB: A Parameter-Free Gradient Method for Convex Optimization

Source: 11-13

Organizer:

Chenglong Bao 包承龙

Speaker:

Shiqian Ma (Rice University)

Time:

Thu., 11:00 am - 12:00 pm, Nov. 14, 2024

Online:

Tencent Meeting: 127-784-846

Title:

AdaBB: A Parameter-Free Gradient Method for Convex Optimization

Abstract:

We propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially provides a convergent variant of the Barzilai-Borwein method for general unconstrained convex optimization. We analyze the ergodic convergence of the objective function value and the convergence of the iterates for this problem class. Compared with existing works along this line of research, our algorithm gives the best lower bounds on the stepsize and on the average of the stepsizes. Moreover, we present an extension of the proposed algorithm to composite optimization, where the objective function is the sum of a smooth function and a nonsmooth function. Our numerical results also demonstrate the promising potential of the proposed algorithms on representative examples.
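
The abstract does not spell out AdaBB's stepsize rule, so the sketch below only illustrates the general shape of a line-search-free gradient loop driven by a Barzilai-Borwein stepsize. The function name bb_adaptive_gradient, the choice of the BB2 formula, the fallback when the curvature estimate is nonpositive, and the quadratic test problem are all assumptions made for illustration, not the algorithm analyzed in the talk; the ergodic average of the iterates is returned alongside the last iterate only to echo the ergodic analysis mentioned above.

```python
import numpy as np

def bb_adaptive_gradient(grad, x0, step0=1e-3, max_iter=500, tol=1e-8):
    """Gradient descent with a Barzilai-Borwein (BB2) stepsize.

    Illustrative sketch only: the safeguards that make AdaBB provably
    convergent and parameter-free are in the paper, not reproduced here.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - step0 * g_prev              # one plain gradient step to start
    step = step0
    iterates = [x_prev.copy(), x.copy()]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = x - x_prev                       # iterate difference
        y = g - g_prev                       # gradient difference
        sy = float(s @ y)
        if sy > 0:                           # curvature estimate is usable
            step = sy / float(y @ y)         # BB2 stepsize (s'y) / (y'y)
        # otherwise keep the previous stepsize (a crude fallback)
        x_prev, g_prev = x, g
        x = x - step * g
        iterates.append(x.copy())
    return x, np.mean(iterates, axis=0)      # last iterate, ergodic average

# Usage on a strongly convex quadratic f(x) = 0.5 x'Ax - b'x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    A = M.T @ M + np.eye(20)
    b = rng.standard_normal(20)
    x_last, x_avg = bb_adaptive_gradient(lambda x: A @ x - b, np.zeros(20))
    print(np.linalg.norm(A @ x_last - b))    # residual of the last iterate
```

In this sketch the BB2 quantity (s'y)/(y'y) serves as a rough inverse estimate of the local curvature; per the abstract, AdaBB's contribution is an adaptive safeguard that makes such BB-type stepsizes provably convergent without a line search or any problem-dependent parameters.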


About the speaker:

Shiqian Ma is a professor in the Department of Computational Applied Mathematics and Operations Research and the Department of Electrical and Computer Engineering at Rice University. He received his PhD in Industrial Engineering and Operations Research from Columbia University. His main research areas are optimization and machine learning. His research is currently supported by ONR and by NSF grants from the DMS, CCF, and ECCS programs. Shiqian received the 2024 INFORMS Computing Society Prize and the 2024 SIAM Review SIGEST Award, among many other awards from both academia and industry. He is an Associate Editor of the Journal of Machine Learning Research, Journal of Scientific Computing, Journal of Optimization Theory and Applications, Pacific Journal of Optimization, and IISE Transactions, a Senior Area Chair of NeurIPS, an Area Chair of ICML, ICLR, and AISTATS, and a Senior Program Committee member of AAAI. He was a plenary speaker at the Texas Colloquium on Distributed Learning in 2023 and a semi-plenary speaker at the International Conference on Stochastic Programming in 2023.
