Annals of Mathematical Sciences and Applications

Volume 9 (2024)

Number 2

Primal-dual damping algorithms for optimization

Pages: 467 – 504

DOI: https://dx.doi.org/10.4310/AMSA.2024.v9.n2.a7

Authors

Xinzhe Zuo (Department of Mathematics, University of California, Los Angeles, CA, USA)

Stanley Osher (Department of Mathematics, University of California, Los Angeles, CA, USA)

Wuchen Li (Department of Mathematics, University of South Carolina, Columbia, SC, USA)

Abstract

We propose an unconstrained optimization method based on the well-known primal-dual hybrid gradient (PDHG) algorithm. We first formulate the optimality condition of the unconstrained optimization problem as a saddle point problem. We then compute the minimizer by applying generalized primal-dual hybrid gradient algorithms. Theoretically, we demonstrate that the continuous-time limit of the proposed algorithm forms a class of second-order differential equations, which contains and extends the heavy-ball ODE and Hessian-driven damping dynamics. Using a Lyapunov analysis of the ODE system, we prove linear convergence of the algorithm for strongly convex functions. Experimentally, we showcase the advantages of the proposed algorithms on several convex and nonconvex optimization problems by comparing their performance with that of other well-known algorithms, such as Nesterov's accelerated gradient methods. In particular, we demonstrate that our algorithm is efficient in training two-layer and convolutional neural networks in supervised learning problems.
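For context, the heavy-ball and Hessian-driven damping dynamics mentioned above are the standard second-order ODEs x''(t) + a x'(t) + ∇f(x) = 0 and x''(t) + a x'(t) + b ∇²f(x) x'(t) + ∇f(x) = 0, respectively. The sketch below illustrates one way a PDHG-style iteration can be applied to the first-order optimality condition ∇f(x) = 0. The saddle-point form L(x, p) = ⟨p, ∇f(x)⟩ − (ε/2)‖p‖², the step sizes tau and sigma, the dual regularization eps, and the finite-difference Hessian-vector product are all illustrative assumptions for this sketch, not necessarily the exact scheme of the paper.

```python
import numpy as np

def pdhg_damping(grad_f, x0, tau=0.05, sigma=0.05, eps=1.0,
                 delta=1e-6, iters=1000):
    """Hedged sketch of a PDHG-style primal-dual iteration for min_x f(x).

    Assumed saddle-point reformulation (illustrative choice):
        min_x max_p  <p, grad f(x)> - (eps/2) * ||p||^2
    For strongly convex f, the saddle point satisfies p = 0 and
    grad f(x) = 0, i.e. x is the minimizer.
    """
    x = np.asarray(x0, dtype=float).copy()
    p = np.zeros_like(x)
    for _ in range(iters):
        g = grad_f(x)
        # dual proximal-ascent step in p
        p_next = (p + sigma * g) / (1.0 + sigma * eps)
        # standard PDHG extrapolation of the dual variable
        p_bar = 2.0 * p_next - p
        # primal descent step; the Hessian-vector product
        # grad^2 f(x) p_bar is approximated by a gradient difference
        hvp = (grad_f(x + delta * p_bar) - g) / delta
        x = x - tau * hvp
        p = p_next
    return x

# usage on a strongly convex quadratic f(x) = 0.5 * x^T A x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_star = pdhg_damping(lambda x: A @ x, x0=np.array([5.0, -3.0]))
print(x_star)  # approaches the minimizer [0, 0]
```

In the small-step limit, this iteration behaves like a damped second-order system in x (eliminating p yields an equation of heavy-ball type), which is consistent with the continuous-time limit described in the abstract.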

Keywords

optimization, primal-dual hybrid gradient algorithms, primal-dual damping dynamics

Received 30 April 2023

Accepted 8 June 2023

Published 15 August 2024