References

Xin-Feng He, Rong Hu, Yaping Fang (2020): Convergence Rates of Inertial Primal-Dual Dynamical Methods for Separable Convex Optimization Problems. SIAM J. Control Optim., 59
A. Cabot, H. Engler, S. Gadat (2007): On the long time behavior of second order differential equations with asymptotically small dissipation. Transactions of the American Mathematical Society, 361
H. Attouch, Z. Chbani, H. Riahi (2019): Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics. SIAM J. Optim., 29
H. Attouch, Z. Chbani (2016): Combining fast inertial dynamics for convex optimization with Tikhonov regularization. arXiv: Optimization and Control
Weijie Su, Stephen Boyd, E. Candès (2014): A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights. J. Mach. Learn. Res., 17
Xianlin Zeng, J. Lei, Jie Chen (2022): Dynamical Primal-Dual Accelerated Method with Applications to Network Optimization. IEEE Trans. Autom. Control
R. Rockafellar (1976): Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming. Math. Oper. Res., 1
Xin-Feng He, Rong Hu, Yaping Fang (2021): Perturbed inertial primal-dual dynamics with damping and scaling terms for linearly constrained convex optimization problems
A. Chambolle, C. Dossal (2015): On the Convergence of the Iterates of the "Fast Iterative Shrinkage/Thresholding Algorithm". Journal of Optimization Theory and Applications, 166
H. Attouch, Z. Chbani, H. Riahi (2020): Fast convex optimization via a third-order in time evolution equation. Optimization, 71
B. Abbas, H. Attouch, B. Svaiter (2014): Newton-Like Dynamics and Forward-Backward Methods for Structured Monotone Inclusions in Hilbert Spaces. Journal of Optimization Theory and Applications, 161
A. Beck, M. Teboulle (2009): A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci., 2
Xin-Feng He, Rong Hu, Yaping Fang (2021): Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem. Autom., 146
R. Boț (2010): Conjugate Duality in Convex Optimization. Lecture Notes in Economics and Mathematical Systems, vol. 637. Springer, Berlin
Guodong Shi, K. Johansson (2011): Randomized optimal consensus of multi-agent systems. Autom., 48
Stephen Boyd, Neal Parikh, E. Chu, B. Peleato, Jonathan Eckstein (2011): Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn., 3
Ritesh Madan, S. Lall (2006): Distributed algorithms for maximum lifetime routing in wireless sensor networks. IEEE Transactions on Wireless Communications, 5
R. Boț, E. Csetnek, Dang-Khoa Nguyen (2022): Fast OGDA in continuous and discrete time
Xianlin Zeng, Peng Yi, Yiguang Hong, Lihua Xie (2018): Distributed Continuous-Time Algorithms for Nonsmooth Extended Monotropic Optimization Problems. SIAM J. Control Optim., 56
R. Boț, E. Csetnek, S. László (2019): A primal-dual dynamical approach to structured convex minimization problems. arXiv: Optimization and Control
H. Attouch, Z. Chbani, J. Fadili, H. Riahi (2021): Fast Convergence of Dynamical ADMM via Time Scaling of Damped Inertial Dynamics. Journal of Optimization Theory and Applications, 193
Boris Polyak (1964): Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4
H. Attouch (2020): Fast inertial proximal ADMM algorithms for convex structured optimization with linear constraint
H. Attouch, J. Peypouquet (2017): Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators. Mathematical Programming, 174
Ramzi May (2015): Asymptotic for a second order evolution equation with convex potential and vanishing damping term. arXiv: Optimization and Control
R. Boț, E. Csetnek, Dang-Khoa Nguyen (2021): Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates. Mathematical Programming, 200
Y. Nesterov (2004): Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Springer, Boston
H. Attouch, Z. Chbani, H. Riahi (2019): Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3. ESAIM: Control, Optimisation and Calculus of Variations
F. Alvarez, H. Attouch, J. Bolte, P. Redont (2002): A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics. Journal de Mathématiques Pures et Appliquées, 81
Y. Nesterov (1983): A method for solving the convex programming problem with convergence rate O(1/k²). Proceedings of the USSR Academy of Sciences, 269
B. Polyak (1987): Introduction to Optimization. Translations Series in Mathematics and Engineering. Optimization Software, Publications Division, New York
D. Gabay, B. Mercier (1976): A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2
H. Attouch, X. Goudou, P. Redont (2000): The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Communications in Contemporary Mathematics, 2
Osman Güler (1992): New Proximal Point Algorithms for Convex Minimization. SIAM J. Optim., 2
T. Goldstein, Brendan O'Donoghue, S. Setzer, Richard Baraniuk (2014): Fast Alternating Direction Optimization Methods. SIAM J. Imaging Sci., 7
H. Attouch, Z. Chbani, M. Fadili, H. Riahi (2019): First-order optimization algorithms via inertial systems with Hessian driven damping. Mathematical Programming, 193
R. Boț, Dang-Khoa Nguyen (2021): Improved convergence rates and trajectory convergence for primal-dual dynamical systems with vanishing damping. Journal of Differential Equations
Xin-Feng He, Rong Hu, Yaping Fang (2022): Inertial primal-dual dynamics with damping and scaling for linearly constrained convex optimization problems. Applicable Analysis, 102
Xin-Feng He, Rong Hu, Yaping Fang (2021): Inertial accelerated primal-dual methods for linear equality constrained convex optimization problems. Numerical Algorithms, 90
F. Alvarez (2000): On the Minimizing Property of a Second Order Dissipative System in Hilbert Spaces. SIAM J. Control Optim., 38
A. Cabot, H. Engler, S. Gadat (2009): Second-order differential equations with asymptotically small dissipation and piecewise flat potentials. Electronic Journal of Differential Equations, 17
Heinz Bauschke, P. Combettes (2011): Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York
Peng Yi, Yiguang Hong, Feng Liu (2015): Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and application to economic dispatch of power systems. Autom., 74
H. Attouch, J. Peypouquet (2015): The Rate of Convergence of Nesterov's Accelerated Forward-Backward Method is Actually Faster Than 1/k². SIAM J. Optim., 26
H. Attouch, Z. Chbani, J. Peypouquet, P. Redont (2018): Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Mathematical Programming, 168
Z. Opial (1967): Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society, 73
Zhouchen Lin, Huan Li, Cong Fang (2020): Accelerated Optimization for Machine Learning: First-Order Algorithms
Peng Yi, Yiguang Hong, Feng Liu (2015): Distributed gradient algorithm for constrained optimization with application to load sharing in power systems. Syst. Control Lett., 83
H. Attouch, Z. Chbani, H. Riahi (2020): Fast convex optimization via time scaling of damped inertial gradient dynamics
In this work, we approach the minimization of a continuously differentiable convex function under linear equality constraints by a second-order dynamical system with an asymptotically vanishing damping term. The system under consideration is a time rescaled version of another system previously found in the literature. We show fast convergence of the primal-dual gap, the feasibility measure, and the objective function value along the generated trajectories. These convergence rates now depend on the rescaling parameter, and thus can be improved by choosing said parameter appropriately. When the objective function has a Lipschitz continuous gradient, we show that the primal-dual trajectory asymptotically converges weakly to a primal-dual optimal solution of the underlying minimization problem. We also exhibit improved rates of convergence of the gradient along the primal trajectories and of the adjoint of the corresponding linear operator along the dual trajectories. We illustrate the theoretical outcomes and carry out a comparison with other classes of dynamical systems through numerical experiments.

Keywords: Augmented Lagrangian method · Primal-dual dynamical system · Damped inertial dynamics · Nesterov's accelerated gradient method · Lyapunov analysis · Time rescaling · Convergence rate · Trajectory convergence

Mathematics Subject Classification: 37N40 · 46N10 · 65K10 · 90C25

Dang-Khoa Nguyen (dang-khoa.nguyen@univie.ac.at) and David Alexander Hulett (david.alexander.hulett@univie.ac.at), Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria. Applied Mathematics & Optimization (2023) 88:27.

1 Introduction

1.1 Problem Statement and Motivation

In this paper we consider the optimization problem

    min f(x)   subject to   Ax = b,                                           (1.1)

where X, Y are real Hilbert spaces; f : X → R is a continuously differentiable convex function; A : X → Y is a continuous linear operator and b ∈ Y; the set S of primal-dual optimal solutions of (1.1) is assumed to be nonempty. (1.2)

This model formulation underlies many important applications in various areas, such as image recovery [25], machine learning [20, 31], the energy dispatch of power grids [42, 43], distributed optimization [32, 44] and network optimization [40, 45].

In recent years, there has been a flurry of research on the relationship between continuous time dynamical systems and the numerical algorithms that arise from their discretizations. For unconstrained optimization problems, it has long been known that inertial systems with damped velocities enjoy good convergence properties. For a convex, smooth function f : X → R, Polyak was the first to consider the heavy ball with friction dynamics [37, 38]

    ẍ(t) + γ ẋ(t) + ∇f(x(t)) = 0.                                             (HBF)

Alvarez and Attouch continued this line of study, focusing on inertial dynamics with a fixed viscous damping coefficient [2–4]. Later on, Cabot et al. [21, 22] considered systems in which γ is replaced by a time dependent damping coefficient γ(t). In [41], Su, Boyd, and Candès showed that fast convergence rates can be achieved by introducing a time dependent damping coefficient α/t which vanishes in a controlled manner, neither too fast nor too slowly, as t goes to infinity:

    ẍ(t) + (α/t) ẋ(t) + ∇f(x(t)) = 0.                                         (AVD)

For α ≥ 3, the authors showed that a solution x : [t₀, +∞) → X of (AVD) satisfies f(x(t)) − f(x∗) = O(1/t²) as t → +∞.
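To see this decay concretely, one can integrate (AVD) for a toy problem with an off-the-shelf ODE solver. The following sketch is not taken from the paper; the quadratic objective, the choice α = 3, the time horizon and the solver tolerances are illustrative assumptions.

```python
# Minimal sketch (not from the paper): integrate (AVD),
#   x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0,
# for a toy quadratic objective and watch the O(1/t^2) decay of f(x(t)) - f*.
import numpy as np
from scipy.integrate import solve_ivp

Q = np.diag([1.0, 10.0])                 # f(x) = 0.5 * x^T Q x, minimized at x* = 0
f = lambda x: 0.5 * x @ Q @ x            # so f* = f(x*) = 0
grad_f = lambda x: Q @ x
alpha, t0, T = 3.0, 1.0, 1000.0

def rhs(t, z):
    x, v = z[:2], z[2:]                  # state z = (x, x')
    return np.concatenate([v, -(alpha / t) * v - grad_f(x)])

z0 = np.array([1.0, 1.0, 0.0, 0.0])      # start at rest, away from the minimizer
sol = solve_ivp(rhs, (t0, T), z0, rtol=1e-8, atol=1e-10, dense_output=True)

for t in (10.0, 100.0, 1000.0):
    gap = f(sol.sol(t)[:2])
    print(f"t = {t:7.1f}   f(x(t)) - f* = {gap:.3e}   t^2 * gap = {gap * t**2:.3e}")
```

The printed quantity t² · (f(x(t)) − f*) staying bounded as t grows is the numerical signature of the O(1/t²) rate.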
In fact, the choice α = 3 provides a ∗ 2 continuous limit counterpart to Nesterov’s celebrated accelerated gradient algorithm [15, 34, 35]. Weak convergence of the trajectories to minimizers of f when α> 3 has been shown by Attouch et al. in [6]and Mayin[33], together with the improved rates of convergence f (x (t )) − f (x ) = o as t →+∞. In the meantime, similar 123 Applied Mathematics & Optimization (2023) 88:27 Page 3 of 43 27 results for the discrete counterpart were also reported by Chambolle and Dossal in [23], and by Attouch and Peypouquet in [13]. In [7], Attouch, Chbani, and Riahi proposed an inertial proximal type algorithm, which results from a discretization of the time rescaled (AVD)system x ¨(t ) + x˙ (t ) + δ(t )∇ f (x (t )) = 0, where δ : [t , +∞) → R is a time scaling function satisfying a certain growth condi- 0 + tion, which enters the convergence statement by way of f (x (t ))− f (x ) = O t δ(t ) as t →+∞. The resulting algorithm obtained by the authors is considerably simpler than the founding proximal point algorithm proposed by Güler in [26], while providing comparable convergence rates for the functional values. In order to approach constrained optimization problems, Augmented Lagrangian Method (ALM) [39] (for linearly constrained problems) and Alternating Direction Method of Multipliers (ADMM) [20, 24] (for problems with separable objectives and block variables linearly coupled in the constraints) and some of their variants have been shown to enjoy substantial success. Continuous-time approaches for structured convex minimization problems formulated in the spirit of the full splitting paradigm have been recently addressed in [18] and, closely connected to our approach, in [10, 17, 27, 45], to which we will have a closer look in Subsection 2.2. The temporal discretization resulting from these dynamics gives rise to the numerical algorithm with fast convergence rates [28, 29] and with a convergence guarantee for the generated iterate [19], without additional assumptions such as strong convexity. In this paper, we will investigate a second-order dynamical system with asymptotic vanishing damping and time rescaling term, which is associated with the optimization problem (1.1) and formulated in terms of its augmented Lagrangian. The case when the time rescaling term does not appear has been established in [17]. We show that by introducing this time rescaling function, we are able to derive faster convergence rates for the primal-dual gap, the feasibility measure, and the objective function value along the generated trajectories while still maintaining the asymptotic behaviour of the trajectories towards a primal-dual optimal solution. On the other hand, this work can also be viewed as an extension of the time rescaling technique derived in [7, 9] for the constrained case. To our knowledge, the trajectory convergence for dynamics with time scaling seems to be new in the constrained case. 1.2 Notations and a Preliminary Result For both Hilbert spaces X and Y, the Euclidean inner product and the associated norm will be denoted by ·, · and · , respectively. The Cartesian product X × Y will be endowed with the inner product and the associated norm defined for (x,λ) , (z,μ) ∈ X × Y as 2 2 (x,λ) , (z,μ) = x , z + λ, μ and (x,λ) = x + λ , 123 27 Page 4 of 43 Applied Mathematics & Optimization (2023) 88:27 respectively. Let f : X → R be a continuously differentiable convex function such that ∇ f is −Lipschitz continuous. 
For every x , y ∈ X it holds (see [35, Theorem 2.1.5]) 2 2 0 ∇ f (x ) −∇ f (y) f (x ) − f (y) − ∇ f (y) , x − y x − y . 2 2 (1.3) 2 The Primal-Dual Dynamical Approach 2.1 Augmented Lagrangian Formulation Consider the saddle point problem min max L (x,λ) (2.1) x ∈X λ∈Y associated to problem (1.1), where L : X × Y → R denotes the Lagrangian function L (x,λ) := f (x ) + λ, Ax − b . Under the assumptions (1.2), L is convex with respect to x ∈ X and affine with respect to λ ∈ Y. A pair (x ,λ ) ∈ X × Y is said to be a saddle point of the Lagrangian ∗ ∗ function L if for every (x,λ) ∈ X × Y L (x ,λ) L (x ,λ ) L (x,λ ) . (2.2) ∗ ∗ ∗ ∗ If (x ,λ ) ∈ X × Y is a saddle point of L then x ∈ X is an optimal solution of (1.1), ∗ ∗ ∗ and λ ∈ Y is an optimal solution of its Lagrange dual problem. If x ∈ X is an optimal ∗ ∗ solution of (1.1) and a suitable constraint qualification is fulfilled, then there exists an optimal solution λ ∈ Y of the Lagrange dual problem such that (x ,λ ) ∈ X × Y is ∗ ∗ ∗ a saddle point of L. For details and insights into the topic of constraint qualifications for convex duality we refer to [14, 16]. The set of saddle points of L, called also primal-dual optimal solutions of (1.1), will be denoted by S and, as stated in the assumptions, it will be assumed to be nonempty. The set of feasible points of (1.1) will be denoted by F := {x ∈ X : Ax = b} and the optimal objective value of (1.1)by f . The system of primal-dual optimality conditions for (1.1) reads ∇ L x ,λ = 0 ∇ f x + A λ = 0 ( ) ( ) x ∗ ∗ ∗ ∗ (x ,λ ) ∈ S ⇔ ⇔ , (2.3) ∗ ∗ ∇ L (x ,λ ) = 0 Ax − b = 0 λ ∗ ∗ ∗ where A : Y → X denotes the adjoint operator of A. 123 Applied Mathematics & Optimization (2023) 88:27 Page 5 of 43 27 For β 0, we consider also the augmented Lagrangian L : X ×Y → R associated with (1.1) β β 2 2 L (x,λ) := L (x,λ)+ Ax − b = f (x )+λ, Ax − b+ Ax − b . (2.4) 2 2 For every (x,λ) ∈ F × Y it holds f (x ) = L (x,λ) = L (x,λ) . (2.5) If (x ,λ ) ∈ S, then we have for every (x,λ) ∈ X × Y ∗ ∗ L (x ,λ) = L (x ,λ) L (x ,λ ) = L (x ,λ ) L (x,λ ) L (x,λ ) . ∗ β ∗ ∗ ∗ β ∗ ∗ ∗ β ∗ In addition, from (2.3)wehave ∗ ∗ ∗ ∇ f (x ) + A λ = 0 ∇ f (x ) + A λ + β A (Ax − b) = 0 ∗ ∗ ∗ ∗ ∗ (x ,λ ) ∈ S ⇔ ⇔ ∗ ∗ Ax − b = 0 Ax − b = 0 ∗ ∗ ∇ L (x ,λ ) = 0 x β ∗ ∗ ∇ L (x ,λ ) = 0. λ β ∗ ∗ In other words, for any β 0 the sets of saddle points of L and L are identical. 2.2 The Primal-Dual Asymptotic Vanishing Damping Dynamical System with Time Rescaling In this subsection we present the system under study, and we include a brief discussion regarding the existence and uniqueness of solutions. The dynamical system which we associate to (1.1) and investigate in this paper reads ⎪ x ¨ (t ) + x˙ (t ) + δ (t ) ∇ L x (t ) ,λ (t ) + θt λ (t ) = 0 x β ¨ ˙ λ (t ) + λ (t ) − δ (t ) ∇ L x (t ) + θt x˙ (t ) ,λ (t ) = 0 , (2.6) λ β ⎪ t ˙ ˙ x (t ) ,λ (t ) = x ,λ and x˙ (t ) , λ (t ) = x˙ , λ 0 0 0 0 0 0 0 0 where t > 0, α> 0, θ> 0, δ : [t , +∞) → R is a nonnegative continuously 0 0 differentiable function and (x ,λ ) , x˙ , λ ∈ X × Y are the initial conditions. 0 0 0 0 Replacing the expressions of the partial gradients of L into the system leads to the following formulation for (2.6): 123 27 Page 6 of 43 Applied Mathematics & Optimization (2023) 88:27 ∗ ∗ ⎪ x ¨ (t ) + x˙ (t ) + δ (t ) ∇ f (x (t )) + δ (t ) A λ (t ) + θt λ (t ) + δ (t ) β A Ax (t ) − b = 0 ¨ ˙ λ (t ) + λ (t ) − δ (t ) A x (t ) + θt x˙ (t ) − b = 0 ˙ ˙ x (t ) ,λ (t ) = x ,λ and x˙ (t ) , λ (t ) = x˙ , λ . 
0 0 0 0 0 0 0 0 The case (2.6) in which there is no time rescaling, i.e., when δ(t ) ≡ 1, was studied by Zeng et al. in [45], and by Bo¸t and Nguyen in [17]. The system with more general damping, extrapolation and time rescaling coefficients was addressed by He et al. in [27, 30] and by Attouch et al. in [10]. We mention that extending the results in this paper to the multi-block case is possible. For further details, we refer the readers to [17, Sect. 2.4]. It is well known that the viscous damping term has a vital role in achieving fast convergence in unconstrained minimization [6, 8, 33]. The role of the extrapolation θt is to induce more flexibility in the dynamical system and in the associated discrete schemes, as it has been recently noticed in [10, 12, 27, 45]. The time scaling function δ (·) has the role to further improve the rates of convergence of the objective function value along the trajectory, as it was noticed in the context of unconstrained minimiza- tion problems in [7, 9, 11] and of linearly constrained minimization problems in [10, 30]. It is straightforward to show the existence of local solutions to (2.6), under the additional assumption that ∇ f is Lipschitz continuous on every bounded subset of X . First, notice that (2.6) can be rewritten as a first-order dynamical sys- tem. Indeed, (x,λ) : [t , +∞) → X × Y is a solution to (2.6) if and only if (x,λ, y,ν) : [t , +∞) → X × Y × X × Y is a solution to x˙ (t ), λ(t ), y ˙(t ), ν( ˙ t ) = F (t , x (t ), λ(t ), y(t ), ν(t )) (x (t ), λ(t ), y(t ), ν(t )) = x ,λ , x˙ , λ , 0 0 0 0 0 0 0 0 where F : [t , +∞) × X × Y × X × Y → X × Y × X × Y is given by F (t , z,μ,w,η) ∗ ∗ := w, η, − w − δ(t ) ∇ f (z) + A μ + θt η + β A Az − b , − η + δ(t ) A z + θt w − b . where F is evidently continuous in t, and F (t , ·) is Lipschitz continuous on every bounded subset, provided that the same property holds for ∇ f . We can then employ a theorem such as that by Cauchy-Lipschitz to obtain the existence of a unique solution to the previous system, and thus a unique solution to (2.6), defined on a maximal interval [t , T ). To go further and show the existence and uniqueness of a global 0 max solution (that is, T =+∞) we will need some energy estimates derived in the next max section in a similar way as in [11, 17]. For this reason, the existence and uniqueness of a global solution is postponed to a later stage. 123 Applied Mathematics & Optimization (2023) 88:27 Page 7 of 43 27 3 Faster Convergence Rates via Time Rescaling In this section we will derive fast convergence rates for the primal-dual gap, the feasibility measure, and the objective function value along the trajectories generated by the dynamical system (2.6). We will make the following assumptions on the parameters α, θ, β and the function δ throughout this section. Assumption 1 In (2.6), assume that δ : [t , +∞) → (0, +∞) is continuously differentiable. Moreover, suppose that the parameters α, β, θ and the function δ satisfy 1 1 tδ(t ) 1 − 2θ α 3,β 0, θ and sup . (3.1) 2 α − 1 δ(t ) θ t t Besides the first three conditions that are known previously in [17], it is worth pointing out that we can deduce from the last one the following inequality for every t t : t δ (t ) 1 − 2θ 1 = − 2 α − 3. (3.2) δ (t ) θ θ This gives a connection to the condition which appears in [7]. A few more comments regarding the function δ will come later, after the convergence rates statements. 3.1 The Energy Function Let (x,λ) : [t , +∞) → X × Y beasolutionof (2.6). 
Let (x ,λ ) ∈ S be fixed, we 0 ∗ ∗ define the energy function E : [t , +∞) → R 2 2 2 E (t ) := θ t δ(t ) L (x (t ) ,λ ) − L (x ,λ (t )) + v (t ) β ∗ β ∗ + x (t ) ,λ (t ) − (x ,λ ) , (3.3) ∗ ∗ where v t := x t ,λ t − (x ,λ ) + θt x˙ t , λ t , (3.4) ( ) ( ) ( ) ( ) ( ) ∗ ∗ ξ := αθ − θ − 1 0. (3.5) Notice that, according to (2.4) and (2.5), we have for every t t L (x (t ) ,λ ) − L (x ,λ (t )) = L (x (t ) ,λ ) − L (x ,λ (t )) + Ax (t ) − b β ∗ β ∗ ∗ ∗ (3.6) = L (x (t ) ,λ ) − f + Ax (t ) − b ∗ ∗ = f (x (t )) − f 123 27 Page 8 of 43 Applied Mathematics & Optimization (2023) 88:27 + λ , Ax (t ) − b + Ax (t ) − b 0, (3.7) where f denotes the optimal objective value of (1.1). In addition, due to (3.7), we have E (t ) 0 ∀t t . (3.8) The construction of E is inspired by [17]. However, one can notice that we only consider E defined with respect to a fixed primal-dual solution (x ,λ ) ∈ S rather ∗ ∗ than a family of energy functions, each defined with respect to a point (z,μ) ∈ F × Y. This gives simpler proofs for some results when compared to those in [17]. Assumption 1 implies the nonnegativity of following quantity, which will appear many times in our analysis: 1 − 2θ σ : [t , +∞) → R ,σ(t ) := δ(t ) − tδ(t ). (3.9) 0 + The following lemma gives us the decreasing property of the energy function. As a consequence of this lemma, we obtain some integrability results which will be needed later. The proofs are postponed to the Appendix. Lemma 3.1 Let (x,λ) : [t , +∞) → X × Y be a solution of (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ For every t t it holds d 1 2 2 E (t ) −θ tσ(t ) L (x (t ) ,λ ) − L (x ,λ (t )) − βθt δ (t ) Ax (t ) − b β ∗ β ∗ dt 2 − ξθt x˙ (t ) , λ (t ) . Proof See “Proof of Lemma 3.1” in Appendix B. Theorem 3.2 Let (x,λ) : [t , +∞) → X × Y be a solution of (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ The following statements are true (i) it holds +∞ tσ(t ) L (x (t ), λ ) − L(x ,λ(t )) dt E (t )< +∞, (3.10) ∗ ∗ 0 +∞ 2E (t ) β tδ(t ) Ax t − b dt < +∞, (3.11) ( ) +∞ E (t ) 2 0 ξ t x˙ (t ) , λ (t ) < +∞; (3.12) 1 1 (ii) if, in addition, α> 3 and θ> , then the trajectory (x (t ), λ(t )) is t t 2 α−1 bounded and the convergence rate of its velocity is x˙ (t ) , λ (t ) = O as t →+∞. 123 Applied Mathematics & Optimization (2023) 88:27 Page 9 of 43 27 Proof See “Proof of Theorem 3.2” in Appendix B. 3.2 Fast Convergence Rates for the Primal-Dual Gap, the Feasibility Measure and the Objective Function Value The following are the main convergence rates results of the paper. Theorem 3.3 Let (x,λ) : [t , +∞) → X × Y be a solution of (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ The following statements are true (i) for every t t it holds E (t ) 0 L (x (t ), λ ) − L(x ,λ(t )) ; (3.13) ∗ ∗ 2 2 θ t δ(t ) (ii) for every t t it holds 2C Ax (t ) − b , (3.14) t δ(t ) where C := sup t λ (t ) + (α − 1) sup λ (t ) − λ 1 ∗ t t t t 0 0 + t δ(t ) Ax (t ) − b + t λ (t ) ; 0 0 0 0 (iii) for every t t it holds E (t ) 1 | f x t − f | + 2C λ . (3.15) ( ( )) ∗ 1 ∗ 2 2 θ t δ(t ) Proof (i) We have already established that E is nonincreasing on [t , +∞). Therefore, from the expression for E and relation (3.6) we deduce 2 2 θ t δ(t ) L (x (t ), λ ) − L(x ,λ(t )) E (t ) ∀t t , (3.16) ∗ ∗ 0 0 and the first claim follows. (ii) From the second line of (2.6), for every t t we have ¨ ˙ t λ(t ) + αλ(t ) = tδ(t ) A x (t ) + θt x˙ (t ) − b = tδ(t ) Ax (t ) − b + θt δ(t )Ax˙(t ). (3.17) Fix t t . 
On the one hand, integration by parts yields t t t ¨ ˙ ˙ ˙ ˙ ˙ sλ(s) + αλ(s) ds = t λ(t ) − t λ(t ) − λ(s)ds + α λ(s)ds 0 0 (3.18) t t t 0 0 0 ˙ ˙ = t λ(t ) − t λ(t ) + (α − 1)(λ(t ) − λ(t )). 0 0 0 123 27 Page 10 of 43 Applied Mathematics & Optimization (2023) 88:27 On the other hand, again integrating by parts leads to 2 2 2 s δ(s)Ax˙(s)ds = t δ(t )(Ax (t ) − b) − t δ(t )(Ax (t ) − b) 0 0 (3.19) − 2sδ(s) + s δ(s) (Ax (s) − b)ds. Now, integrating (3.17) from t to t and using (3.18) and (3.19) gives us ˙ ˙ t λ(t ) − t λ(t ) + (α − 1)(λ(t ) − λ(t )) 0 0 0 t t = sδ(s)(Ax (s) − b)ds + θ s δ(s)Ax˙ (s)ds t t 0 0 2 2 = t δ(t )(Ax (t ) − b) − t δ(t )(Ax (t ) − b) 0 0 + s (1 − 2θ)δ(s) − θsδ(s) (Ax (s) − b)ds 2 2 = t δ(t )(Ax (t ) − b) − t δ(t )(Ax (t ) − b) 0 0 (1 − 2θ)δ(s) − θsδ(s) + s δ(s)(Ax (s) − b)ds. (3.20) sδ(s) It follows that, for every t t ,wehave (1 − 2θ)δ(s) − θsδ(s) 2 2 t δ(t )(Ax (t ) − b) + s δ(s)(Ax (s) − b)ds C , sδ(s) (3.21) where C = sup t λ (t ) + (α − 1) sup λ(t ) − λ (t ) + t δ(t ) Ax (t ) − b 1 0 0 0 t t t t 0 0 + t λ (t ) < +∞, 0 0 and this quantity is finite in light of (B.7) and (B.5). Now, we set (1 − 2θ)δ(t ) − θtδ(t ) g(t ) := t δ(t ) Ax (t ) − b , a(t ) := ∀t t tδ(t ) and we apply Lemma A.1 to deduce that t δ(t ) Ax (t ) − b 2C ∀t t . (3.22) 1 0 (iii) For a fixed t t ,wehave L (x (t ), λ ) − L(x ,λ(t )) = f (x (t )) − f (x ) +λ , Ax (t ) − b. ∗ ∗ ∗ ∗ 123 Applied Mathematics & Optimization (2023) 88:27 Page 11 of 43 27 Therefore, from using (3.22) and (3.16) we obtain, for every t t , | f (x (t )) − f | L (x (t ), λ ) − L(x ,λ(t )) + λ Ax (t ) − b ∗ ∗ ∗ ∗ E (t ) 1 + 2C λ , 1 ∗ 2 2 θ t δ(t ) which leads to the last statement. Some comments regarding the previous proof and results are in order. Remark 3.4 The proof we provided here is significantly shorter than the one derived in [17] thanks to Lemma A.1. This Lemma is the one used in [28] for showing the fast convergence to zero of the feasibility measure, although the authors study a different dynamical system. On the other hand, when δ (t ) ≡ 1, the result in [17] is more robust than the one we obtain here, as it gives the O rates for the sum of primal-dual gap and feasibility measure, rather than each one individually. It also allows us to focus only on the energy function defined with respect to a primal-dual optimal solution (x ,λ ) ∈ S, rather than on an arbitrary feasible point (z,μ) ∈ F × Y as in [17]. ∗ ∗ Remark 3.5 Here are some remarks comparing our rates of convergence to those in [10, 30]. • Primal-dual gap: According to (3.13), the following rate of convergence for the primal-dual is exhibited: L x (t ), λ − L x ,λ(t ) = O as t →+∞, ∗ ∗ t δ(t ) which coincides with the findings of [10, 30]. • Feasibility measure: According to (3.14), we have Ax (t ) − b = O as t →+∞, t δ(t ) which improves the rate O reported in [10, 30]. t δ(t ) • Functional values: Relation (3.15) tells us that | f (x (t )) − f | = O as t →+∞. t δ(t ) In [10], only the upper bound presents this order of convergence. The lower bound obtained is of order O as t →+∞.In[30], there are no comments on t δ(t ) the rate attained by the functional values in the case of a general time rescaling parameter. 123 27 Page 12 of 43 Applied Mathematics & Optimization (2023) 88:27 • Further comparisons with [30]: in [30, Theorem 2.16], unlike the preceding result [30, Theorem 2.15], the authors produce a rate of O as t →+∞ 1/θ for Ax (t ) − b and | f (x (t )) − f (x )|, provided the time rescaling parameter is −2 chosen to be δ(t ) = δ t ,for some δ > 0. 
This choice comes from the solution 0 0 to the differential equation tδ(t ) 1 − 2θ = ∀t t , δ(t ) θ and thus is covered by our results when the growth condition (3.1) holds with equality. The rates are consequently 1 1 O = O as t →+∞. 1 1 −2 θ θ t · δ t t In this setting, if we wish to obtain fast convergence rates, we need to choose a 1 1 small θ. In light of Assumption 1, where we have θ , this can be 2 α−1 achieved by taking α large enough. Such behaviour can also be seen in [10] and in the unconstrained case [7, 11]. 4 Weak Convergence of the Trajectory to a Primal-Dual Solution In this section we will show that the solutions to (2.6) weakly converge to an element of S. The fact that δ (t ) enters the convergence rate statement suggests that one can benefit from this time rescaling function when it is at least nondecreasing on [t , +∞). We are, in fact, going to need this condition when showing trajectory convergence. Assumption 2 In (2.6), assume that ∇ f is -Lipschitz continuous for some > 0 and that δ :[t , +∞) → (0, +∞) is continuously differentiable and nondecreas- ing. Moreover, suppose that the parameters α, β, θ and the function δ satisfy 1 1 tδ(t ) 1 − 2θ α> 3,β 0, >θ > , sup < . 2 α − 1 δ(t ) θ t t Assumption 2 entails the existence of C > 0 such that tδ(t ) 1 − 2θ + C ∀t t . (4.1) 2 0 δ(t ) θ and therefore it follows further from the nondecreasing property of δ that 0 < C δ(t ) C δ(t ) (1 − 2θ)δ(t ) − θtδ(t ) ∀t t . (4.2) 2 0 2 0 123 Applied Mathematics & Optimization (2023) 88:27 Page 13 of 43 27 Moreover, from (4.1), for every t t ,wehave 1 − 2θ tδ(t ) σ(t ) 0 < C − = , θ δ(t ) δ(t ) which gives σ(t ) δ(t ) ∀t t . (4.3) We can now formally state the existence and uniqueness result of the trajectory. The proof follows the same argument as in [17, Theorem 4.1], therefore we omit the details. Theorem 4.1 In the setting of Assumption 2, for every choice of initial conditions ˙ ˙ x (t ) = x ,λ(t ) = λ , x˙ (t ) =˙x , and λ(t ) = λ 0 0 0 0 0 0 0 0 the system (2.6) has a unique global twice continuously differentiable solution (x,λ) : [t , +∞) → X × Y. The additional Lipschitz continuity condition of ∇ f and the fact that δ is nonde- creasing give rise to the following two essential integrability statements. Proposition 4.2 Let (x,λ) : [t , +∞) → X × Y be a solution of (2.6) and (x ,λ ) ∈ 0 ∗ ∗ S. Then it holds +∞ tδ(t ) ∇ f (x (t )) −∇ f (x ) dt < +∞ (4.4) and +∞ tδ(t ) Ax (t ) − b dt < +∞. (4.5) Proof See “Proof of Proposition 4.2” in Appendix B. Now, for a given primal-dual solution (x ,λ ) ∈ S, we define the following map- ∗ ∗ pings on [t , +∞) W (t ) := δ(t ) L (x (t ), λ ) − L (x ,λ(t )) + x˙ (t ) , λ (t ) 0, (4.6) β ∗ β ∗ ϕ(t ) := x (t ), λ(t ) − (x ,λ ) 0. (4.7) ∗ ∗ The following are three technical lemmas that we will need in this section. Lemma 4.4 guarantess that the first condition of Opial’s Lemma is met. Lemma 4.3 Let (x,λ) : [t , +∞ → X × Y asolutionof (2.6) and (x ,λ ) ∈ S. The 0 ∗ ∗ following inequality holds for every t t : α δ(t ) βδ(t ) 2 2 ϕ( ¨ t ) + ϕ( ˙ t ) + θt W (t ) + ∇ f (x (t )) −∇ f (x ) + Ax (t ) − b 0. t 2 2 (4.8) 123 27 Page 14 of 43 Applied Mathematics & Optimization (2023) 88:27 Proof See “Proof of Lemma 4.3” in Appendix B. Lemma 4.4 Let (x,λ) : [t , +∞) → X × Y be a solution to (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ Then the positive part [˙ ϕ] of ϕ ˙ belongs to L [t , +∞) and the limit lim ϕ(t ) + 0 t →+∞ exists. Proof For any t t , we multiply (4.8)by t and drop the last two norm squared terms to obtain tϕ( ¨ t ) + αϕ( ˙ t ) + θt W (t ) 0. 
Recall from (4.6)that forevery t t we have tW (t ) = tδ(t ) L (x (t ), λ ) − L (x ,λ(t )) + x˙ (t ) , λ (t ) . (4.9) β ∗ β ∗ On the one hand, according to (3.12), the second summand of the previous expression belongs to L [t , +∞). On the other hand, using (4.3) and (3.10), we assert that +∞ tδ(t ) L (x (t ), λ ) − L (x ,λ(t )) dt β ∗ β ∗ +∞ tσ(t ) L (x (t ), λ ) − L (x ,λ(t )) dt < +∞. β ∗ β ∗ Hence, the first summand of (4.9) also belongs to L [t , +∞), which implies that the mapping t → tW (t ) belongs to L [t , +∞) as well. For achieving the desired conclusion, we make use of Lemma A.4 with φ := ϕ and w := θ W . Lemma 4.5 Let (x,λ) : [t , +∞) → X × Y be a solution to (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ The following inequality holds for every t t α d α x˙ (t ) , λ (t ) + 2 x ¨(t ) + x˙ (t ), A (λ(t ) − λ ) tδ(t ) dt t 2 2 ∗ ∗ + θ tδ(t ) A (λ(t ) − λ ) + (1 − θ)δ(t ) − θtδ(t ) A (λ(t ) − λ ) ∗ ∗ dt 2 2 2 2 δ(t ) 2 ∇ f (x (t )) −∇ f (x ) + 2β A + 1 Ax (t ) − b . Proof See “Proof of Lemma 4.5” in Appendix B. Lemma 4.6 Let (x,λ) :[t , +∞) → X × Y be a solution to (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ Then, for every t t it holds α+1 ∗ θt δ(t ) A (λ (t ) − λ ) α α −t ϕ( ˙ t ) + s V (s)ds 123 Applied Mathematics & Optimization (2023) 88:27 Page 15 of 43 27 α ∗ + s θ(α + 1) − 1 δ(t ) + θsδ(s) A (λ(s) − λ ) ds α ∗ − 2t x˙ (t ), A (λ(t ) − λ ) + C , (4.10) ∗ 5 where, for s t , α(α − 1) V (s) := θ(α + 1)W (s) + + A x˙ (s), λ(s) t δ( ) 2 2 + C δ(s) ∇ f (x (s)) −∇ f (x ) + C δ(s) Ax (s) − b , 3 ∗ 4 for certain nonnegative constants C , C and C . 3 4 5 Proof See “Proof of Lemma 4.6” in Appendix B. The following proposition provides us with the main integrability result that will be used for verifying the second condition of Opial’s Lemma. Proposition 4.7 Let (x,λ) : [t , +∞) → X × Y be a solution to (2.6) and (x ,λ ) ∈ 0 ∗ ∗ S. Then it holds +∞ tδ(t ) A (λ (t ) − λ ) dt < +∞. (4.11) Proof We divide (4.10)by t , thus obtaining ∗ α θtδ(t ) A (λ (t ) − λ ) −˙ ϕ(t ) + s V (s)ds + s θ(α + 1) − 1 δ(s) + θsδ(s) A (λ(s) − λ ) ds − 2 x˙ (t ), A (λ(t ) − λ ) + . Now, we integrate this inequality from t to r. We get θ tδ(t ) A (λ (t ) − λ ) dt r t ϕ(t ) − ϕ(r ) + s V (s)ds dt t t 0 0 r t α ∗ + s θ(α + 1) − 1 δ(s) + θsδ(s) A (λ(s) − λ ) ds dt t t 0 0 r r − 2 Ax˙ (t ), λ(t ) − λ dt + C t dt . (4.12) ∗ 5 t t 0 0 123 27 Page 16 of 43 Applied Mathematics & Optimization (2023) 88:27 We now recall some important facts. First of all, we have 1 1 dt . (4.13) α α−1 (α − 1)t In addition, according to Lemma A.2, it holds r t r 1 1 s V (s)ds dt tV (t )dt , (4.14) t α − 1 t t t 0 0 0 and r t α ∗ s θ(α + 1) − 1 δ(s) + θsδ(s) A (λ(s) − λ ) ds dt t t 0 0 θ(α + 1) − 1 δ(t ) + θtδ(t ) A (λ(t ) − λ ) dt , (4.15) α − 1 respectively. Finally, integrating by parts leads to − Ax˙ (t ), λ(t ) − λ dt =− Ax (r ) − b,λ(r ) − λ + Ax (t ) − b,λ(t ) − λ + Ax (t ) − b, λ(t ) dt ∗ 0 0 ∗ Ax (r ) − b λ(r ) − λ + Ax (t ) − b λ(t ) − λ + Ax (t ) − b, λ(t ) dt ∗ 0 0 ∗ sup{ Ax (t ) − b λ (t ) − λ }+ Ax (t ) − b λ(t − λ ) ∗ 0 0 ∗ t t + Ax (t ) − b + λ (t ) dt . (4.16) The supremum term is finite due to the boundedness of the trajectory. Now, by using the nonnegativity of ϕ and the facts (4.13), (4.14), (4.15) and (4.16)on (4.12), we come to tσ(t ) A (λ (t ) − λ ) dt α − 1 θ(α + 1) − 1 δ(t ) + θtδ(t ) = θδ(t ) − t A (λ (t ) − λ ) dt α − 1 r r tV (t )dt + t Ax (t ) − b + λ (t ) dt + C , (4.17) α − 1 t t 0 0 123 Applied Mathematics & Optimization (2023) 88:27 Page 17 of 43 27 where C := ϕ(t ) + 2sup { Ax (t ) − b λ (t ) − λ } 6 0 ∗ t t + 2 Ax (t ) − b λ(t − λ ) + . 
0 0 ∗ α−1 (α − 1)t According to (3.11) and (3.12) in Theorem 3.2,aswellasLemma 4.4, we know that the 2 1 mappings t → tV (t ) and t → t Ax (t ) − b + λ (t ) belong to L [t , +∞). Therefore, by taking the limit as r →+∞ in (4.17) we obtain +∞ tσ(t ) A (λ (t ) − λ ) dt < +∞. Again, from (4.3) we conclude that +∞ +∞ 2 2 ∗ ∗ tδ(t ) A (λ (t ) − λ ) dt tσ(t ) A (λ (t ) − λ ) dt < +∞, ∗ ∗ t 2 t 0 0 which completes the proof. The following result is the final step towards the second condition of Opial’s Lemma. Theorem 4.8 Let (x,λ) : [t , +∞) → X × Y be a solution to (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ Then it holds ∇ f (x (t )) −∇ f (x ) = o √ √ and t δ(t ) A (λ (t ) − λ ) = o √ √ as t →+∞. (4.18) t δ(t ) Consequently, ∇ L x (t ) ,λ (t ) = ∇ f (x (t )) + A λ (t ) = o √ √ as t →+∞, t δ(t ) while, as seen earlier, ∇ L x (t ) ,λ (t ) = Ax (t ) − b = O as t →+∞. t δ(t ) Proof We first show the gradient rate. For t t , it holds t δ(t ) ∇ f (x (t )) −∇ f (x ) dt tδ(t ) = δ(t ) + √ ∇ f (x (t )) −∇ f (x ) 2 δ(t ) 123 27 Page 18 of 43 Applied Mathematics & Optimization (2023) 88:27 + 2t δ(t ) ∇ f (x (t )) −∇ f (x ), ∇ f (x (t )) . (4.19) dt On the one hand, by Assumption 2, we can write ˙ ˙ tδ(t ) δ(t ) tδ(t ) 1 − 2θ δ(t ) + √ = δ(t ) + · 1 + δ(t ) 2 δ(t ) 2θ 2 δ(t ) = δ(t ). (4.20) 2θ √ √ Since δ is nondecreasing, for t t we have δ(t ) δ(t )> 0. Set t := 0 0 1 max t , . Therefore, for t t it holds 0 1 1 1 √ √ = t t δ(t ) δ(t ) and thus δ(t ) tδ(t ). (4.21) On the other hand, for every t t we deduce 2t δ(t ) ∇ f (x (t )) −∇ f (x ), ∇ f (x (t )) dt = 2t δ(t ) ∇ f (x (t )) −∇ f (x ) , ∇ f (x (t )) dt tδ(t ) ∇ f (x (t )) −∇ f (x ) + t ∇ f (x (t )) dt 2 2 tδ(t ) ∇ f (x (t )) −∇ f (x ) + t x˙ (t ) , (4.22) where the last inequality is a consequence of the -Lipschitz continuity of ∇ f .By combining (4.20), (4.21) and (4.22), from (4.19) we assert that for every t t t δ(t ) ∇ f (x (t )) −∇ f (x ) dt 2 2 1 + tδ(t ) ∇ f (x (t )) −∇ f (x ) + t x˙ (t ) . 2θ The right hand side of the previous inequality belongs to L [t , +∞), according to (3.12) and (4.4). Since δ is nondecreasing, for every t t we have δ(t ) δ(t ) δ(t ) = δ(t ) · √ √ , δ(t ) δ(t ) 123 Applied Mathematics & Optimization (2023) 88:27 Page 19 of 43 27 and thus +∞ t δ(t ) ∇ f (x (t )) −∇ f (x ) dt +∞ √ tδ(t ) ∇ f (x (t )) −∇ f (x ) dt < +∞. (4.23) δ(t ) 0 t This means that the function being differentiated also belongs to L [t , +∞). There- fore, Lemma A.3 gives us t δ(t ) ∇ f (x (t )) −∇ f (x ) →0as t →+∞. Proceeding in the exact same way, for every t t we have t δ(t ) A (λ(t ) − λ ) dt tδ(t ) ∗ ∗ = δ(t ) + √ A (λ(t ) − λ ) + 2t δ(t ) AA (λ(t ) − λ ), λ(t ) ∗ ∗ 2 δ(t ) 2 2 2 ∗ + A tδ(t ) A (λ(t ) − λ ) + t λ (t ) . 2θ According to (3.12) and (4.11), the right hand side of the previous inequality belongs to L [t , +∞). Arguing as in (4.23), we deduce that the function being differentiated also belongs to L [t , +∞). Again applying Lemma A.3, we come to t δ(t ) A (λ(t ) − λ ) →0as t →+∞. Finally, recalling that A λ =−∇ f (x ), we deduce from the triangle inequality ∗ ∗ that ∇ L x (t ) ,λ (t ) = ∇ f (x (t )) + A λ (t ) ∇ f (x (t )) −∇ f (x ) + A (λ(t ) − λ ) ∗ ∗ = o √ √ as t →+∞, t δ(t ) and the third claim follows. Remark 4.9 The previous theorem also has its own interest. It tells us that the time rescaling parameter also plays a role in accelerating the rates of convergence for ∇ f (x (t )) −∇ f (x ) and A (λ(t ) − λ ) as t →+∞. 
Moreover, we deduce ∗ ∗ from (4.18) that the mapping (x,λ) → (∇ f (x ), A λ) is constant along S, as reported in [17, Proposition A.4]. We now come to the final step and show weak convergence of the trajectories of (2.6) to elements of S. 123 27 Page 20 of 43 Applied Mathematics & Optimization (2023) 88:27 Theorem 4.10 Let (x,λ) : [t , +∞) → X ×Y be a solution to (2.6) and (x ,λ ) ∈ S. 0 ∗ ∗ Then x (t ), λ(t ) converges weakly to a primal-dual solution of (1.1) as t →+∞. Proof For proving this theorem, we make use of Opial’s Lemma (see Lemma A.5). Lemma 4.4 tells us that lim x (t ), λ(t ) − (x ,λ ) exists for every (x ,λ ) ∈ t →+∞ ∗ ∗ ∗ ∗ S, which proves condition (i) of Opial’s Lemma. Now let x˜ , λ be any weak sequential cluster point of x (t ), λ(t ) as t →+∞, which means there exists a strictly increasing sequence (t ) ⊆[t , +∞) such that n n∈N 0 x (t ), λ(t ) x˜ , λ as n →+∞. n n We want to show the remaining condition of Opial’s Lemma, which asks us to check that x˜ , λ ∈ S. In other words, we must show that ˜ ˜ L x˜,λ L x˜ , λ L x , λ ∀(x,λ) ∈ X × Y. (4.24) Let (x,λ) ∈ X × Y and (x ,λ ) ∈ S be fixed. Notice that the functions ∗ ∗ f (·) + λ, A(·) − Ax : X → R and ·, b − Ax : Y → R are convex and continuous, therefore they are lower semicontinuous. According to a known result (see, for example, [14, Theorem 9.1]), they are also weakly lower semicontinuous. Therefore, we can derive that ˜ ˜ ˜ L x˜ , λ − L x , λ = f x˜ + λ, Ax˜ − Ax − f (x ) lim inf f (x (t )) + λ, Ax − Ax − f (x ) n n n→+∞ = f (x ) + λ, b − Ax − f (x ) f (x ) − f (x ) + lim inf A λ(t ), x − x ∗ n ∗ n→+∞ = f (x ) − f (x ) +λ , Ax − b ∗ ∗ = L(x ,λ ) − L(x,λ ) 0, ∗ ∗ ∗ where in the second and third equalities we used the fact that, as n →+∞,wehave ∗ ∗ f (x (t )) → f (x ) and Ax (t ) → b (Theorem 3.3), and A λ → A λ (Theorem n ∗ n n ∗ 4.8). Similarly, the weak lower semicontinuity of the function λ−λ, A(·)−b : X → R yields ˜ ˜ ˜ L x˜,λ − L x˜ , λ = λ − λ, Ax˜ − b lim inf λ − λ, Ax (t ) − b = 0, n→+∞ We have thus showed (4.24) and the proof is concluded. 123 Applied Mathematics & Optimization (2023) 88:27 Page 21 of 43 27 5 Numerical Experiments We will illustate the theoretical results by two numerical examples, with X = R and Y = R . We will address two minimization problems with linear constraints; one with a strongly convex objective function and another with a convex objective function which is not strongly convex. In both cases, the linear constraints are dictated by ! " ! " 1 −1 −10 0 A = and b = . 01 0 −1 0 Example 5.1 Consider the minimization problem 2 2 2 2 min f (x , x , x , x ) := (x − 1) + (x − 1) + x + x 1 2 3 4 1 2 3 4 subject to x − x − x = 0 1 2 3 x − x = 0. 2 4 The optimality conditions can be calculated and lead to the following primal-dual solution pair ⎡ ⎤ 0.8 ! " ⎢ ⎥ 0.6 0.4 ⎢ ⎥ x = and λ = . ∗ ∗ ⎣ ⎦ 0.2 1.2 0.6 Example 5.2 Consider the minimization problem −x −x 2 2 1 2 min f (x , x , x , x ) := log 1 + e + x + x 1 2 3 4 3 4 subject to x − x − x = 0 1 2 3 x − x = 0. 2 4 This problem is similar to the regularized logistic regression frequently used in machine learning. We cannot explicitly calculate the optimality conditions as in the previous case; instead, we use the last solution in the numerical experiment as the approximate solution. To comply with Assumption 2, we choose t > 0, α = 8, β = 10, θ = , and we test four different choices for the rescaling parameter: δ(t ) = 1 (i.e., the (PD-AVD) 2 3 dynamics in [17, 45]), δ(t ) = t, δ(t ) = t and δ(t ) = t . 
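These experiments can be reproduced, at least qualitatively, by integrating the expanded form of (2.6) with a standard ODE solver. The following sketch is not the authors' code: it treats Example 5.1 with the rescaling δ(t) = t², uses β = 10 and the initial data stated below in the text, and picks the illustrative value θ = 0.25 (which satisfies Assumption 2 for α = 8; the exact θ used in the paper is not legible in this copy). The time horizon and tolerances are likewise illustrative.

```python
# Minimal sketch, not the authors' code: integrate the expanded form of (2.6),
#   x''(t)   + (alpha/t) x'(t)   + delta(t) [ grad f(x) + A^T (lam + theta*t*lam') + beta*A^T(Ax - b) ] = 0,
#   lam''(t) + (alpha/t) lam'(t) - delta(t) [ A (x + theta*t*x') - b ] = 0,
# for Example 5.1 with delta(t) = t^2.  theta = 0.25 is an illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
b = np.zeros(2)
grad_f = lambda x: 2.0 * (x - np.array([1.0, 1.0, 0.0, 0.0]))   # f = (x1-1)^2 + (x2-1)^2 + x3^2 + x4^2
x_star = np.array([0.8, 0.6, 0.2, 0.6])                         # known primal solution of Example 5.1

alpha, beta, theta, t0 = 8.0, 10.0, 0.25, 1.0
delta = lambda t: t ** 2

def rhs(t, z):
    x, lam, xd, lamd = z[:4], z[4:6], z[6:10], z[10:12]
    xdd = -(alpha / t) * xd - delta(t) * (grad_f(x) + A.T @ (lam + theta * t * lamd)
                                          + beta * A.T @ (A @ x - b))
    lamdd = -(alpha / t) * lamd + delta(t) * (A @ (x + theta * t * xd) - b)
    return np.concatenate([xd, lamd, xdd, lamdd])

z0 = np.concatenate([0.5 * np.ones(4),      # x(t0)
                     0.2 * np.ones(2),      # lambda(t0)
                     0.5 * np.ones(4),      # x'(t0)
                     0.5 * np.ones(2)])     # lambda'(t0)
sol = solve_ivp(rhs, (t0, 20.0), z0, rtol=1e-9, atol=1e-11)

x_T = sol.y[:4, -1]
print("x(T)           ≈", np.round(x_T, 4))
print("||x(T) - x*||  ≈", np.linalg.norm(x_T - x_star))
print("||A x(T) - b|| ≈", np.linalg.norm(A @ x_T - b))
```

Swapping the line defining delta for, e.g., `delta = lambda t: 1.0`, `t` or `t ** 3` reproduces the four rescaling choices compared in the figures.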
In both examples, the initial conditions are ⎡ ⎤ ⎡ ⎤ 0.5 0.5 ! " ! " ⎢ ⎥ ⎢ ⎥ 0.5 0.2 0.5 0.5 ⎢ ⎥ ⎢ ⎥ ˙ x (t ) = ,λ(t ) = , x˙ (t ) = and λ(t ) = . 0 0 0 0 ⎣ ⎦ ⎣ ⎦ 0.5 0.2 0.5 0.5 0.5 0.5 For each choice of δ, we plot, using a logarithmic scale, the primal-dual gap L x (t ), λ − L x ,λ(t ) , the feasibility measure Ax (t ) − b and the func- ∗ ∗ tional values | f (x (t )) − f |, to highlight the theoretical result in Theorem 3.3. 123 27 Page 22 of 43 Applied Mathematics & Optimization (2023) 88:27 We also illustrate the findings from Theorem 4.8, namely, we plot the quantities ∇ f (x (t )) −∇ f (x ) and A (λ (t ) − λ ) , as well as the velocity (x˙ (t ), λ(t )) . ∗ ∗ Figures 1 and 2 display these plots for Examples 5.1 and 5.2, respectively. As predicted by the theory, choosing faster-growing time rescaling parameters yields better convergence rates. This is not the case for the velocities. Next we use Example 5.2 to compare the convergence behaviour of our system (2.6) with the one where the asymptotically vanishing damping term is chosen to be ,for r ∈[0, 1]. Notice that r = 1 gives our system (2.6). When r = 0, in the setting of [30, Theorem 2.2], we know that the primal-dual gap exhibits a convergence rate of O as t →+∞. This is illustrated in Fig. 3, were we plotted the combina- tδ(t ) 2 2 tions (δ(t ) = t ; r = 0), (δ(t ) = t ; r = 1), δ(t ) = t ; r = 0 , and δ(t ) = t ; r = 1 . In particular, observe that the rate predicted by [30, Theorem 2.2] for the primal-dual gap for the case δ(t ) = t ; r = 0 reads O , while the rate predicted by our The- t ·t orem 3.3 for the case (δ(t ) = t ; r = 1) reads O . It is no surprise then to see the t ·t combinations δ(t ) = t ; r = 0 and (δ(t ) = t ; r = 1) exhibiting similar convergence behaviour in Fig. 3. For better understanding, we run Example 5.2 once more to show the plots which result from fixing the time rescaling parameter δ(t ) = t and varying r ∈ {0, 0.25, 0.5, 0.75, 1}. Notice how the convergence improves as r approaches 1. As t →+∞,[30, Theorem 2.7] predicts convergence rates of O for the primal- t δ(t ) dual gap and of O for the velocities, which is reflected in our plots. τ/2 Acknowledgements The authors would like to thank the editor and the anonymous referees for their helpful comments and suggestions, which have led to the improvement of this paper. Funding Open access funding provided by Austrian Science Fund (FWF). D.-K. Nguyen research supported by FWF (Austrian Science Fund), project P 34922-N. Declarations Conflict of interest No potential conflict of interest was reported by the authors. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. 
Appendix Here we collect the auxiliary results that are required to carry out many steps in out analysis. 123 Applied Mathematics & Optimization (2023) 88:27 Page 23 of 43 27 Fig. 1 The function δ (t ) influences convergence behaviour in Example 5.1 27 Page 24 of 43 Applied Mathematics & Optimization (2023) 88:27 Fig. 2 The function δ (t ) influences convergence behaviour in Example 5.2 Applied Mathematics & Optimization (2023) 88:27 Page 25 of 43 27 Fig. 3 The function δ(t ), as well as the parameter r, influence convergence behaviour in Example 5.2 27 Page 26 of 43 Applied Mathematics & Optimization (2023) 88:27 Fig. 4 The parameter r influences convergence behaviour in Example 5.2 Applied Mathematics & Optimization (2023) 88:27 Page 27 of 43 27 A proof for the following lemma in the finite-dimensional case can be found in [28, Lemma 6]. The proof for the infinite-dimensional case is short and virtually identical, so we include it here for the sake of completeness. Lemma A.1 Assume that t > 0,g : [t , +∞) → Y is a continuous differentiable 0 0 function, a : t , +∞ →[0, +∞) is a continuous function, and C 0. If, in the [ ) sense of Bochner integrability, we have g(t ) + a(s)g(s)ds C ∀t t (A.1) then sup g(t ) 2C < +∞. t t Proof Define, for every t t , a(s)ds G(t ) := e a(s)g(s)ds. Fix t t . The time derivative of G reads ) ) t t a(s)ds a(s)ds t t 0 0 G(t ) = a(t )e a(s)g(s)ds + e a(t )g(t ) ! " a(s)ds = a(t )e g(t ) + a(s)g(s)ds , so by using (A.1) and the previous equality we arrive at ) ) t t a(s)ds a(s)ds t t 0 0 G(t ) Ca(t )e = C e . (A.2) dt Since G(t ) = 0, we have G(t ) = G(t ) − G(t ) = G(s)ds, so by employing (A.2) and the previous equality we obtain, for every t t , ) ) t t t t s a(s)ds a(τ )dτ t t 0 0 e a(s)g(s)ds = G(t ) G(s) ds C e ds ds t t t 0 0 0 ! " ) ) t t a(s)ds a(s)ds t t 0 0 C e − 1 Ce . 123 27 Page 28 of 43 Applied Mathematics & Optimization (2023) 88:27 a(s)ds Dividing both sides of the previous inequality by e gives us a(s)g(s)ds C ∀t t . (A.3) Now, by putting (A.1) and (A.3) we finally come to t t g(t ) g(t ) + a(s)g(s)ds + a(s)g(s)ds 2C ∀t t , t t 0 0 which leads to the announced statement. The proofs for the following results can be found in [17, Lemma A.1] and [1, Lemma 5.2], respectively. Lemma A.2 Let 0 < t r +∞ and h : [t , +∞) →[0, +∞) be a continuous 0 0 function. For every α> 1 it holds ! " r t r 1 1 α−1 s h(s) dt h(t )dt . t α − 1 t t t 0 0 0 If r =+∞, then equality holds. Lemma A.3 Let t > 0, 1 p < +∞ and 1 q +∞. Suppose that F ∈ L [t , +∞) is a locally absolutely continuous nonnegative function, G ∈ L [t , +∞) and F (t ) G(t ) ∀t t . Then, lim F (t ) = 0. t →+∞ The following lemma is a slight variation of results already present in the literature. See, for example, [5, Lemma A.2]. Lemma A.4 Let t > 0, α> 1, and let φ :[t , +∞) → R be a twice continuously 0 0 differentiable function bounded from below. Furthermore, assume w :[t , +∞) → [0, +∞) to be a continuously differentiable function such that t → tw(t ) belongs to L [t , +∞) and ¨ ˙ tφ(t ) + αφ(t ) + t w( ˙ t ) 0 ∀t t . ˙ ˙ Then, the positive part [φ] of φ belongs to L [t , +∞) and the limit lim φ(t ) + 0 t →+∞ is a real number. Proof Fix t t . Adding (α + 1)tw(t ) to both sides of the previous inequality and α−1 then multiplying it by t yields d d α α+1 α t φ(t ) + t w(t ) (α + 1)t w(t ). 
dt dt 123 Applied Mathematics & Optimization (2023) 88:27 Page 29 of 43 27 Since the previous inequality holds for any t t , we can integrate it from t to t t 0 0 0 to get α α α+1 α+1 α ˙ ˙ t φ(t ) − t φ(t ) + t w(t ) − t w(t ) (α + 1) s w(s)ds. 0 0 α+1 α After dropping the nonnegative term t w(t ) and dividing by t we arrive at C α + 1 φ(t ) + s w(s)ds ∀t t , α α t t where + + α α+1 * + ˙ + C := t φ(t ) + t w(t ), 0 0 which further leads to C α + 1 [φ(t )] + s w(s)ds ∀t t . + 0 α α t t Now, we integrate this inequality from t to r t and we apply Lemma A.2 with 0 0 h :[t , +∞) →[0, +∞) given by h(s) := sw(s) to obtain ! " r r r t 1 1 α−1 ˙ * [φ(t )] dt C dt + (α + 1) s · sw(s)ds dt α α t t t t t t 0 0 0 0 C 1 1 α + 1 − + tw(t )dt . α−1 α−1 1 − α r 1 − α By hypothesis, as r →+∞, the right hand side of the previous inequality is finite. In other words, +∞ [φ(t )] dt < +∞. The previous statement, together with the fact that we assumed that φ was bounded from below, allow us to deduce that the function ψ :[t , +∞) → R given by ψ(t ) := φ(t ) − [φ(s)] ds is also bounded from below. An easy computation shows that ψ is nonpositive on [t , +∞), thus ψ is nonincreasing on [t , +∞). These facts imply that lim ψ(t ) 0 0 t →+∞ is a real number. Finally, we conclude that +∞ lim φ(t ) = lim ψ(t ) + [φ(s)] ds ∈ R. t →+∞ t →+∞ 123 27 Page 30 of 43 Applied Mathematics & Optimization (2023) 88:27 The proof for Opial’s Lemma can be found in [36]. Lemma A.5 (Opial’s Lemma) Let H be a real Hilbert space, S ⊆ H a nonempty set, t > 0 and z : [t , +∞) → H a mapping that satisfies 0 0 (i) for every z ∈ S, lim z(t ) − z exists; ∗ t →+∞ ∗ (ii) every weak sequential cluster point of the trajectory z(t ) as t →+∞ belongs to S. Then, z(t ) converges weakly to an element of S as t →+∞. Appendix B: Missing Proofs Proof of Lemma 3.1 Let t t be fixed. Since x ∈ F,wehave 0 ∗ ∇ L (x (t ) ,λ ) − L (x ,λ (t )) =∇ L (x (t ) ,λ ) x β ∗ β ∗ x β ∗ ∗ ∗ =∇ f (x (t )) + A λ + β A (Ax (t ) − b) , ∇ L (x (t ) ,λ ) − L (x ,λ (t )) =−∇ L (x ,λ (t )) = 0. λ β ∗ β ∗ λ β ∗ Under these expressions, the system (2.6) can be equivalently written as ¨ ˙ x ¨ (t ) , λ (t ) =− x˙ (t ) , λ (t ) − δ (t ) ∇ L (x (t ) ,λ ) , 0 x β ∗ − δ(t ) A λ (t ) − λ + θt λ (t ) , − A (x (t ) + θt x˙ (t )) − b , which leads to ˙ ¨ v ˙ (t ) = (1 + θ) x˙ (t ) , λ (t ) + θt x ¨ (t ) , λ (t ) =−ξ x˙ (t ) , λ (t ) − θt δ (t)(t ) ∇ L (x (t ) ,λ ) , 0 x β ∗ − θt δ (t ) A λ (t ) − λ + θt λ (t ) , − A (x (t ) + θt x˙ (t )) − b . We get from the distributive property of the inner product v (t ) , v ˙ (t ) ˙ ˙ =−ξ x (t ) ,λ (t ) − (x ,λ ), x˙ (t ) , λ (t ) − ξθt x˙ (t ) , λ (t ) ∗ ∗ − θt δ (t ) ∇ L (x (t ) ,λ ) , 0 , x (t ) ,λ (t ) − (x ,λ ) x β ∗ ∗ ∗ 2 2 − θ t δ (t ) ∇ L (x (t ) ,λ ) , 0 , x˙ (t ) , λ (t ) x β ∗ − θt δ (t ) λ (t ) − λ + θt λ (t ) , Ax (t ) − Ax ∗ ∗ 2 2 − θ t δ (t ) λ (t ) − λ + θt λ (t ) , Ax˙ (t ) 123 Applied Mathematics & Optimization (2023) 88:27 Page 31 of 43 27 + θt δ (t ) A (x (t ) + θt x˙ (t )) − b,λ (t ) − λ 2 2 + θ t δ (t ) A (x (t ) + θt x˙ (t )) − b, λ (t ) . Since x ∈ F, the last four terms in the above identity vanish. Indeed, ˙ ˙ − λ (t ) − λ + θt λ (t ) , Ax (t ) − Ax − θt λ (t ) − λ + θt λ (t ) , Ax˙ (t ) ∗ ∗ ∗ + A (x (t ) + θt x˙ (t )) − b,λ (t ) − λ + θt A (x (t ) + θt x˙ (t )) − b, λ (t ) ˙ ˙ =− λ (t ) − λ + θt λ (t ) , Ax (t ) − b − θt λ (t ) − λ + θt λ (t ) , Ax˙ (t ) ∗ ∗ + Ax (t ) − b,λ (t ) − λ + θt Ax˙ (t ) ,λ (t ) − λ ∗ ∗ 2 2 ˙ ˙ + θt Ax (t ) − b, λ (t ) + θ t Ax˙ (t ) , λ (t ) = 0. 
The proof for Opial's Lemma can be found in [36].

Lemma A.5 (Opial's Lemma) Let $H$ be a real Hilbert space, $S \subseteq H$ a nonempty set, $t_0 > 0$ and $z : [t_0, +\infty) \to H$ a mapping that satisfies
(i) for every $z_* \in S$, $\lim_{t \to +\infty} \|z(t) - z_*\|$ exists;
(ii) every weak sequential cluster point of the trajectory $z(t)$ as $t \to +\infty$ belongs to $S$.
Then, $z(t)$ converges weakly to an element of $S$ as $t \to +\infty$.

Appendix B: Missing Proofs

Proof of Lemma 3.1 Let $t \ge t_0$ be fixed. Since $x_* \in F$, we have
\[ \nabla_x \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) = \nabla_x \mathcal{L}_\beta(x(t), \lambda_*) = \nabla f(x(t)) + A^* \lambda_* + \beta A^* (A x(t) - b), \]
\[ \nabla_\lambda \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) = - \nabla_\lambda \mathcal{L}_\beta(x_*, \lambda(t)) = 0. \]
Under these expressions, the system (2.6) can be equivalently written as
\[ (\ddot x(t), \ddot\lambda(t)) = - \frac{\alpha}{t} (\dot x(t), \dot\lambda(t)) - \delta(t) \big( \nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0 \big) - \delta(t) \Big( A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big),\ - \big( A (x(t) + \theta t \dot x(t)) - b \big) \Big), \]
which leads to
\[ \dot v(t) = (1 + \theta)(\dot x(t), \dot\lambda(t)) + \theta t (\ddot x(t), \ddot\lambda(t)) = - \xi (\dot x(t), \dot\lambda(t)) - \theta t \delta(t) \big( \nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0 \big) - \theta t \delta(t) \Big( A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big),\ - \big( A (x(t) + \theta t \dot x(t)) - b \big) \Big). \]
We get from the distributive property of the inner product
\[ \begin{aligned} \langle v(t), \dot v(t) \rangle ={}& - \xi \big\langle (x(t), \lambda(t)) - (x_*, \lambda_*), (\dot x(t), \dot\lambda(t)) \big\rangle - \xi \theta t \|(\dot x(t), \dot\lambda(t))\|^2 \\ & - \theta t \delta(t) \big\langle (\nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0), (x(t), \lambda(t)) - (x_*, \lambda_*) \big\rangle - \theta^2 t^2 \delta(t) \big\langle (\nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0), (\dot x(t), \dot\lambda(t)) \big\rangle \\ & - \theta t \delta(t) \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A x(t) - A x_* \big\rangle - \theta^2 t^2 \delta(t) \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A \dot x(t) \big\rangle \\ & + \theta t \delta(t) \big\langle A (x(t) + \theta t \dot x(t)) - b, \lambda(t) - \lambda_* \big\rangle + \theta^2 t^2 \delta(t) \big\langle A (x(t) + \theta t \dot x(t)) - b, \dot\lambda(t) \big\rangle. \end{aligned} \]
Since $x_* \in F$, the last four terms in the above identity vanish. Indeed,
\[ \begin{aligned} & - \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A x(t) - A x_* \big\rangle - \theta t \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A \dot x(t) \big\rangle + \big\langle A (x(t) + \theta t \dot x(t)) - b, \lambda(t) - \lambda_* \big\rangle + \theta t \big\langle A (x(t) + \theta t \dot x(t)) - b, \dot\lambda(t) \big\rangle \\ & = - \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A x(t) - b \big\rangle - \theta t \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A \dot x(t) \big\rangle + \big\langle A x(t) - b, \lambda(t) - \lambda_* \big\rangle + \theta t \big\langle A \dot x(t), \lambda(t) - \lambda_* \big\rangle + \theta t \big\langle A x(t) - b, \dot\lambda(t) \big\rangle + \theta^2 t^2 \big\langle A \dot x(t), \dot\lambda(t) \big\rangle = 0. \end{aligned} \]
Therefore, differentiating $E$ with respect to $t$ gives
\[ \begin{aligned} \frac{d}{dt} E(t) ={}& \theta^2 \big( 2 t \delta(t) + t^2 \dot\delta(t) \big) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) + \theta^2 t^2 \delta(t) \big\langle (\nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0), (\dot x(t), \dot\lambda(t)) \big\rangle \\ & + \langle v(t), \dot v(t) \rangle + \xi \big\langle (x(t), \lambda(t)) - (x_*, \lambda_*), (\dot x(t), \dot\lambda(t)) \big\rangle \\ ={}& \theta^2 \big( 2 t \delta(t) + t^2 \dot\delta(t) \big) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \xi \theta t \|(\dot x(t), \dot\lambda(t))\|^2 - \theta t \delta(t) \big\langle (\nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0), (x(t), \lambda(t)) - (x_*, \lambda_*) \big\rangle. \end{aligned} \qquad (B.1) \]
Furthermore, the convexity of $f$ and the fact that $x_* \in F$ guarantee
\[ \begin{aligned} & - \big\langle (\nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0), (x(t), \lambda(t)) - (x_*, \lambda_*) \big\rangle \\ & = \langle \nabla f(x(t)), x_* - x(t) \rangle + \langle A^* \lambda_*, x_* - x(t) \rangle + \beta \langle A^* (A x(t) - b), x_* - x(t) \rangle \\ & \le - \big( f(x(t)) - f(x_*) \big) - \langle \lambda_*, A x(t) - b \rangle - \beta \|A x(t) - b\|^2 \\ & = - \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \frac{\beta}{2} \|A x(t) - b\|^2, \end{aligned} \qquad (B.2) \]
where we recall that the second equality comes from (2.5). By multiplying this inequality by $\theta t \delta(t)$ and combining it with (B.1), the coefficient attached to the primal-dual gap becomes
\[ \theta^2 t \big( 2 \delta(t) + t \dot\delta(t) \big) - \theta t \delta(t) = - \theta^2 t \Big( \frac{1 - 2\theta}{\theta} \delta(t) - t \dot\delta(t) \Big) = - \theta^2 t \sigma(t), \]
which finally gives the desired statement.

Proof of Theorem 3.2 (i) Recall that Assumption 1 implies $\sigma(t) \ge 0$ for all $t \ge t_0$ and $\xi \ge 0$. Moreover, $(x_*, \lambda_*) \in S$ yields $x_* \in F$. Therefore, we can apply (3.7) and Lemma 3.1 to obtain, for every $t \ge t_0$,
\[ \frac{d}{dt} E(t) \le - \theta^2 t \sigma(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \frac{\beta \theta}{2} t \delta(t) \|A x(t) - b\|^2 - \xi \theta t \|(\dot x(t), \dot\lambda(t))\|^2 \le 0. \qquad (B.3) \]
This means that $E$ is nonincreasing on $[t_0, +\infty)$. For every $t \ge t_0$, by integrating (B.3) from $t_0$ to $t$, we obtain
\[ \theta^2 \int_{t_0}^{t} s \sigma(s) \big( \mathcal{L}_\beta(x(s), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(s)) \big) ds + \frac{\beta \theta}{2} \int_{t_0}^{t} s \delta(s) \|A x(s) - b\|^2\, ds + \xi \theta \int_{t_0}^{t} s \|(\dot x(s), \dot\lambda(s))\|^2\, ds \le E(t_0) - E(t) \le E(t_0), \]
where the last inequality follows from (3.8). Since all quantities inside the integrals are nonnegative, we obtain (3.10)–(3.12) by letting $t \to +\infty$.
(ii) Let $t \ge t_0$ be fixed. Inequality (B.3) tells us that $E$ is nonincreasing on $[t_0, +\infty)$, so
\[ \theta^2 t^2 \delta(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) + \frac{1}{2} \|v(t)\|^2 + \frac{\xi}{2} \|(x(t), \lambda(t)) - (x_*, \lambda_*)\|^2 \le E(t_0). \qquad (B.4) \]
Assuming $\alpha > 3$ and $\tfrac{1}{\alpha - 1} < \theta \le \tfrac{1}{2}$, we immediately see $\xi > 0$. From (B.4) we obtain
\[ \|(x(t), \lambda(t)) - (x_*, \lambda_*)\|^2 \le \frac{2 E(t_0)}{\xi}, \qquad (B.5) \]
and
\[ \|v(t)\| = \big\| (x(t), \lambda(t)) - (x_*, \lambda_*) + \theta t (\dot x(t), \dot\lambda(t)) \big\| \le \sqrt{2 E(t_0)}. \qquad (B.6) \]
The estimate (B.5) leads to the boundedness of the trajectory. Moreover, applying the triangle inequality and (B.5)–(B.6), we obtain
\[ t \|(\dot x(t), \dot\lambda(t))\| \le \frac{1}{\theta} \Big( \|(x(t), \lambda(t)) - (x_*, \lambda_*)\| + \|v(t)\| \Big) \le \frac{1}{\theta} \sqrt{\frac{2 E(t_0)}{\xi}} + \frac{1}{\theta} \sqrt{2 E(t_0)} = \frac{1}{\theta} \Big( \frac{1}{\sqrt{\xi}} + 1 \Big) \sqrt{2 E(t_0)}, \qquad (B.7) \]
which gives the desired convergence rate.
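To make the rate statements of Theorem 3.2 concrete, the following Python sketch integrates the damped, time-rescaled primal-dual dynamics on a toy problem. It is not the authors' code: the instance $f(x) = \tfrac12\|x\|^2$, the data $A$, $b$, the parameter values and the scaling $\delta(t) = t^r$ are our own choices, and the right-hand side is written in the augmented-Lagrangian form that appears throughout the proofs above (the precise system (2.6) is defined earlier in the paper). Varying $r$ and $\delta$ changes how fast the feasibility residual $\|Ax(t) - b\|$ decays, which is the qualitative behaviour reported in Figs. 1–4.

import numpy as np
from scipy.integrate import solve_ivp

# Toy instance (our choice): f(x) = 0.5*||x||^2, one linear constraint Ax = b.
A = np.array([[1.0, 2.0]]); b = np.array([3.0])
n, m = 2, 1
alpha, theta, beta, r = 4.0, 0.5, 1.0, 0.5       # parameters are illustrative only
delta = lambda t: t**r                           # time scaling delta(t) = t^r (assumed form)

def rhs(t, y):
    x, lam, vx, vlam = y[:n], y[n:n+m], y[n+m:2*n+m], y[2*n+m:]
    res = A @ x - b
    grad_x = x + A.T @ (lam + theta*t*vlam) + beta * (A.T @ res)  # grad_x L_beta(x, lam + theta*t*dlam)
    grad_lam = A @ (x + theta*t*vx) - b                           # grad_lam L_beta(x + theta*t*dx, lam)
    ax = -(alpha/t)*vx - delta(t)*grad_x
    alam = -(alpha/t)*vlam + delta(t)*grad_lam
    return np.concatenate([vx, vlam, ax, alam])

t0, T = 1.0, 50.0
y0 = np.concatenate([np.array([2.0, -1.0]), np.zeros(m), np.zeros(n), np.zeros(m)])
sol = solve_ivp(rhs, (t0, T), y0, rtol=1e-8, atol=1e-10, dense_output=True)

for s in (5.0, 10.0, 25.0, 50.0):
    x = sol.sol(s)[:n]
    print(f"t = {s:5.1f}   ||Ax - b|| = {np.linalg.norm(A @ x - b):.3e}")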
Proof of Proposition 4.2 Thanks to $\nabla f$ being $\ell$-Lipschitz continuous, we can use (1.3) to refine relation (B.2) in the proof of Lemma 3.1:
\[ \begin{aligned} & - \big\langle (\nabla_x \mathcal{L}_\beta(x(t), \lambda_*), 0), (x(t), \lambda(t)) - (x_*, \lambda_*) \big\rangle \\ & = \langle \nabla f(x(t)), x_* - x(t) \rangle + \langle A^* \lambda_*, x_* - x(t) \rangle + \beta \langle A^* (A x(t) - b), x_* - x(t) \rangle \\ & \le - \big( f(x(t)) - f(x_*) \big) - \frac{1}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \langle \lambda_*, A x(t) - b \rangle - \beta \|A x(t) - b\|^2 \\ & = - \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \frac{1}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \frac{\beta}{2} \|A x(t) - b\|^2. \end{aligned} \]
Consequently, combining this inequality with (B.1) yields, for every $t \ge t_0$,
\[ \frac{d}{dt} E(t) \le - \theta^2 t \sigma(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \xi \theta t \|(\dot x(t), \dot\lambda(t))\|^2 - \frac{\theta t \delta(t)}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \frac{\theta \beta t \delta(t)}{2} \|A x(t) - b\|^2 \le - \frac{\theta t \delta(t)}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2. \]
Integration of this inequality produces (4.4). The finiteness of the second integral is only entailed by (3.11) when $\beta > 0$. For the general case $\beta \ge 0$, we use (3.14) and the fact that $\delta$ is nondecreasing on $[t_0, +\infty)$ to obtain
\[ \int_{t_0}^{+\infty} t \delta(t) \|A x(t) - b\|^2\, dt \le 4 C^2 \int_{t_0}^{+\infty} \frac{1}{t^3 \delta(t)}\, dt \le \frac{4 C^2}{\delta(t_0)} \int_{t_0}^{+\infty} \frac{1}{t^3}\, dt < +\infty, \]
and the proof is complete.

Proof of Lemma 4.3 Let $t \ge t_0$ be fixed. Differentiating $W$ with respect to time yields
\[ \dot W(t) = \dot\delta(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) + \delta(t) \Big( \langle \nabla_x \mathcal{L}_\beta(x(t), \lambda_*), \dot x(t) \rangle - \langle \nabla_\lambda \mathcal{L}_\beta(x_*, \lambda(t)), \dot\lambda(t) \rangle \Big) + \langle \ddot x(t), \dot x(t) \rangle + \langle \ddot\lambda(t), \dot\lambda(t) \rangle. \]
Recall the formulas for the gradients of $\mathcal{L}_\beta$:
\[ \nabla_x \mathcal{L}_\beta(x(t), \lambda_*) = \nabla f(x(t)) + A^* \lambda_* + \beta A^* (A x(t) - b), \qquad \nabla_\lambda \mathcal{L}_\beta(x_*, \lambda(t)) = A x_* - b = 0, \]
since $x_* \in F$. Plugging this into the expression for $\dot W(t)$ gives us
\[ \begin{aligned} \dot W(t) ={}& \dot\delta(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) + \delta(t) \big\langle \nabla_x \mathcal{L}_\beta\big(x(t), \lambda(t) + \theta t \dot\lambda(t)\big), \dot x(t) \big\rangle + \langle \ddot x(t), \dot x(t) \rangle - \delta(t) \big\langle \lambda(t) - \lambda_* + \theta t \dot\lambda(t), A \dot x(t) \big\rangle \\ & - \delta(t) \big\langle \nabla_\lambda \mathcal{L}_\beta\big(x(t) + \theta t \dot x(t), \lambda(t)\big), \dot\lambda(t) \big\rangle + \langle \ddot\lambda(t), \dot\lambda(t) \rangle + \delta(t) \big\langle A x(t) - b + \theta t A \dot x(t), \dot\lambda(t) \big\rangle. \end{aligned} \]
By regrouping and using (2.6), we arrive at
\[ \dot W(t) = \dot\delta(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \frac{\alpha}{t} \|\dot x(t)\|^2 - \frac{\alpha}{t} \|\dot\lambda(t)\|^2 - \delta(t) \big\langle \lambda(t) - \lambda_*, A \dot x(t) \big\rangle + \delta(t) \big\langle A x(t) - b, \dot\lambda(t) \big\rangle. \qquad (B.8) \]
On the other hand, by the chain rule, we have
\[ \dot\varphi(t) = \langle x(t) - x_*, \dot x(t) \rangle + \langle \lambda(t) - \lambda_*, \dot\lambda(t) \rangle, \qquad \ddot\varphi(t) = \langle x(t) - x_*, \ddot x(t) \rangle + \|\dot x(t)\|^2 + \langle \lambda(t) - \lambda_*, \ddot\lambda(t) \rangle + \|\dot\lambda(t)\|^2. \]
By combining these relations, (2.6) and the fact that $x_* \in F$, we get
\[ \begin{aligned} \ddot\varphi(t) + \frac{\alpha}{t} \dot\varphi(t) ={}& \Big\langle x(t) - x_*, \ddot x(t) + \frac{\alpha}{t} \dot x(t) \Big\rangle + \Big\langle \lambda(t) - \lambda_*, \ddot\lambda(t) + \frac{\alpha}{t} \dot\lambda(t) \Big\rangle + \|\dot x(t)\|^2 + \|\dot\lambda(t)\|^2 \\ ={}& - \big\langle x(t) - x_*, \delta(t) \nabla_x \mathcal{L}_\beta\big(x(t), \lambda(t) + \theta t \dot\lambda(t)\big) \big\rangle + \big\langle \lambda(t) - \lambda_*, \delta(t) \nabla_\lambda \mathcal{L}_\beta\big(x(t) + \theta t \dot x(t), \lambda(t)\big) \big\rangle + \|\dot x(t)\|^2 + \|\dot\lambda(t)\|^2 \\ ={}& - \big\langle x(t) - x_*, \delta(t) \nabla_x \mathcal{L}_\beta(x(t), \lambda_*) \big\rangle - \big\langle A x(t) - b, \delta(t) \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big) \big\rangle + \big\langle \lambda(t) - \lambda_*, \delta(t) \big( A x(t) - b + \theta t A \dot x(t) \big) \big\rangle + \|\dot x(t)\|^2 + \|\dot\lambda(t)\|^2 \\ ={}& - \delta(t) \big\langle x(t) - x_*, \nabla_x \mathcal{L}_\beta(x(t), \lambda_*) \big\rangle - \theta t \delta(t) \big\langle A x(t) - b, \dot\lambda(t) \big\rangle + \theta t \delta(t) \big\langle \lambda(t) - \lambda_*, A \dot x(t) \big\rangle + \|\dot x(t)\|^2 + \|\dot\lambda(t)\|^2. \end{aligned} \qquad (B.9) \]
The Lipschitz continuity of $\nabla f$ entails
\[ \begin{aligned} - \big\langle x(t) - x_*, \nabla_x \mathcal{L}_\beta(x(t), \lambda_*) \big\rangle ={}& - \langle x(t) - x_*, \nabla f(x(t)) \rangle - \langle x(t) - x_*, A^* \lambda_* \rangle - \beta \|A x(t) - b\|^2 \\ \le{}& - \big( f(x(t)) - f(x_*) \big) - \frac{1}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \langle \lambda_*, A x(t) - b \rangle - \beta \|A x(t) - b\|^2 \\ ={}& - \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \frac{1}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \frac{\beta}{2} \|A x(t) - b\|^2. \end{aligned} \]
This, together with (B.9), implies
\[ \begin{aligned} \ddot\varphi(t) + \frac{\alpha}{t} \dot\varphi(t) \le{}& - \delta(t) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) - \theta t \delta(t) \big\langle A x(t) - b, \dot\lambda(t) \big\rangle + \theta t \delta(t) \big\langle \lambda(t) - \lambda_*, A \dot x(t) \big\rangle + \|\dot x(t)\|^2 + \|\dot\lambda(t)\|^2 \\ & - \frac{\delta(t)}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \frac{\beta \delta(t)}{2} \|A x(t) - b\|^2. \end{aligned} \qquad (B.10) \]
Multiplying (B.8) by $\theta t > 0$ and then adding the result to (B.10) yields
\[ \begin{aligned} \ddot\varphi(t) + \frac{\alpha}{t} \dot\varphi(t) + \theta t \dot W(t) \le{}& - \big( \delta(t) - \theta t \dot\delta(t) \big) \big( \mathcal{L}_\beta(x(t), \lambda_*) - \mathcal{L}_\beta(x_*, \lambda(t)) \big) + (1 - \theta\alpha) \|(\dot x(t), \dot\lambda(t))\|^2 - \frac{\delta(t)}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \frac{\beta \delta(t)}{2} \|A x(t) - b\|^2 \\ \le{}& - \frac{\delta(t)}{2\ell} \|\nabla f(x(t)) - \nabla f(x_*)\|^2 - \frac{\beta \delta(t)}{2} \|A x(t) - b\|^2, \end{aligned} \]
where the last inequality follows from Assumption 2:
\[ 1 - \theta\alpha \le - \theta < 0, \qquad - \delta(t) + \theta t \dot\delta(t) \le (2\theta - 1) \delta(t) + \theta t \dot\delta(t) \le 0. \]
The desired result then follows after some rearranging.
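Both proofs above sharpen the plain convexity estimate by means of the standard inequality for a convex function with $\ell$-Lipschitz gradient, $f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \tfrac{1}{2\ell}\|\nabla f(x) - \nabla f(y)\|^2$, which is how we read the reference to (1.3). The following check on a random convex quadratic (our toy choice, not from the paper) illustrates it numerically:

import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)             # positive definite Hessian, so f is convex with Lipschitz gradient
ell = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of grad f for f(x) = 0.5 * x^T Q x

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = f(y)
rhs = f(x) + grad(x) @ (y - x) + np.linalg.norm(grad(x) - grad(y))**2 / (2 * ell)
print("f(y) =", lhs, ">= lower bound =", rhs, ":", lhs >= rhs)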
Proof of Lemma 4.5 Let $t \ge t_0$ be fixed. From (2.6) and the fact that $A^* \lambda_* = - \nabla f(x_*)$, we have
\[ \begin{aligned} \delta^2(t) \big\| \nabla f(x(t)) - \nabla f(x_*) + \beta A^* (A x(t) - b) \big\|^2 ={}& \Big\| \ddot x(t) + \frac{\alpha}{t} \dot x(t) + \delta(t) A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big) \Big\|^2 \\ ={}& \Big\| \ddot x(t) + \frac{\alpha}{t} \dot x(t) \Big\|^2 + \delta^2(t) \big\| A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big) \big\|^2 + 2 \delta(t) \Big\langle \ddot x(t) + \frac{\alpha}{t} \dot x(t), A^* (\lambda(t) - \lambda_*) \Big\rangle \\ & + 2 \theta t \delta(t) \big\langle \ddot x(t), A^* \dot\lambda(t) \big\rangle + 2 \alpha \theta \delta(t) \big\langle \dot x(t), A^* \dot\lambda(t) \big\rangle. \end{aligned} \qquad (B.11) \]
Again using (2.6) yields
\[ \delta^2(t) \|A x(t) - b\|^2 = \Big\| \ddot\lambda(t) + \frac{\alpha}{t} \dot\lambda(t) - \theta t \delta(t) A \dot x(t) \Big\|^2 = \Big\| \ddot\lambda(t) + \frac{\alpha}{t} \dot\lambda(t) \Big\|^2 + \theta^2 t^2 \delta^2(t) \|A \dot x(t)\|^2 - 2 \theta t \delta(t) \big\langle \ddot\lambda(t), A \dot x(t) \big\rangle - 2 \alpha \theta \delta(t) \big\langle \dot\lambda(t), A \dot x(t) \big\rangle. \qquad (B.12) \]
Adding (B.11) and (B.12) together produces
\[ \begin{aligned} & \delta^2(t) \big\| \nabla f(x(t)) - \nabla f(x_*) + \beta A^* (A x(t) - b) \big\|^2 + \delta^2(t) \|A x(t) - b\|^2 \\ & = \Big\| (\ddot x(t), \ddot\lambda(t)) + \frac{\alpha}{t} (\dot x(t), \dot\lambda(t)) \Big\|^2 + \delta^2(t) \big\| A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big) \big\|^2 + \theta^2 t^2 \delta^2(t) \|A \dot x(t)\|^2 \\ & \quad + 2 \theta t \delta(t) \big\langle \ddot x(t), A^* \dot\lambda(t) \big\rangle - 2 \theta t \delta(t) \big\langle \ddot\lambda(t), A \dot x(t) \big\rangle + 2 \delta(t) \Big\langle \ddot x(t) + \frac{\alpha}{t} \dot x(t), A^* (\lambda(t) - \lambda_*) \Big\rangle. \end{aligned} \qquad (B.13) \]
On the one hand, we have
\[ \begin{aligned} & \theta^2 t^2 \delta^2(t) \|A \dot x(t)\|^2 + 2 \theta t \delta(t) \big\langle \ddot x(t), A^* \dot\lambda(t) \big\rangle - 2 \theta t \delta(t) \big\langle \ddot\lambda(t), A \dot x(t) \big\rangle \\ & = \theta^2 t^2 \delta^2(t) \big\| \big( A^* \dot\lambda(t), - A \dot x(t) \big) \big\|^2 - \theta^2 t^2 \delta^2(t) \|A^* \dot\lambda(t)\|^2 + 2 \theta t \delta(t) \big\langle (\ddot x(t), \ddot\lambda(t)), \big( A^* \dot\lambda(t), - A \dot x(t) \big) \big\rangle \\ & = - \|(\ddot x(t), \ddot\lambda(t))\|^2 + \big\| (\ddot x(t), \ddot\lambda(t)) + \theta t \delta(t) \big( A^* \dot\lambda(t), - A \dot x(t) \big) \big\|^2 - \theta^2 t^2 \delta^2(t) \|A^* \dot\lambda(t)\|^2 \\ & \ge - \|(\ddot x(t), \ddot\lambda(t))\|^2 - \theta^2 t^2 \delta^2(t) \|A^* \dot\lambda(t)\|^2. \end{aligned} \qquad (B.14) \]
On the other hand, it holds
\[ \Big\| (\ddot x(t), \ddot\lambda(t)) + \frac{\alpha}{t} (\dot x(t), \dot\lambda(t)) \Big\|^2 - \|(\ddot x(t), \ddot\lambda(t))\|^2 = \frac{\alpha^2}{t^2} \|(\dot x(t), \dot\lambda(t))\|^2 + \frac{2\alpha}{t} \big\langle (\ddot x(t), \ddot\lambda(t)), (\dot x(t), \dot\lambda(t)) \big\rangle \ge \frac{\alpha}{t} \frac{d}{dt} \|(\dot x(t), \dot\lambda(t))\|^2. \qquad (B.15) \]
Moreover,
\[ \begin{aligned} & \delta^2(t) \big\| A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big) \big\|^2 - \theta^2 t^2 \delta^2(t) \|A^* \dot\lambda(t)\|^2 \\ & = \delta^2(t) \|A^* (\lambda(t) - \lambda_*)\|^2 + 2 \theta t \delta^2(t) \big\langle A A^* (\lambda(t) - \lambda_*), \dot\lambda(t) \big\rangle \\ & = \delta(t) \big( (1 - \theta) \delta(t) - \theta t \dot\delta(t) \big) \|A^* (\lambda(t) - \lambda_*)\|^2 + \theta \delta(t) \frac{d}{dt} \Big( t \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 \Big). \end{aligned} \qquad (B.16) \]
Now, using (B.14), (B.15) and (B.16) in (B.13) yields
\[ \begin{aligned} & \delta^2(t) \big\| \nabla f(x(t)) - \nabla f(x_*) + \beta A^* (A x(t) - b) \big\|^2 + \delta^2(t) \|A x(t) - b\|^2 \\ & \ge \Big\| (\ddot x(t), \ddot\lambda(t)) + \frac{\alpha}{t} (\dot x(t), \dot\lambda(t)) \Big\|^2 + \delta^2(t) \big\| A^* \big( \lambda(t) - \lambda_* + \theta t \dot\lambda(t) \big) \big\|^2 - \|(\ddot x(t), \ddot\lambda(t))\|^2 - \theta^2 t^2 \delta^2(t) \|A^* \dot\lambda(t)\|^2 + 2 \delta(t) \Big\langle \ddot x(t) + \frac{\alpha}{t} \dot x(t), A^* (\lambda(t) - \lambda_*) \Big\rangle \\ & \ge \frac{\alpha}{t} \frac{d}{dt} \|(\dot x(t), \dot\lambda(t))\|^2 + 2 \delta(t) \Big\langle \ddot x(t) + \frac{\alpha}{t} \dot x(t), A^* (\lambda(t) - \lambda_*) \Big\rangle + \delta(t) \big( (1 - \theta) \delta(t) - \theta t \dot\delta(t) \big) \|A^* (\lambda(t) - \lambda_*)\|^2 + \theta \delta(t) \frac{d}{dt} \Big( t \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 \Big). \end{aligned} \]
Finally, since
\[ \big\| \nabla f(x(t)) - \nabla f(x_*) + \beta A^* (A x(t) - b) \big\|^2 + \|A x(t) - b\|^2 \le 2 \|\nabla f(x(t)) - \nabla f(x_*)\|^2 + \big( 2 \beta^2 \|A\|^2 + 1 \big) \|A x(t) - b\|^2, \]
the conclusion follows after dividing the inequality by $\delta(t)$.
Proof of Lemma 4.6 For every $t \ge t_0$, by summing up the two inequalities produced by Lemmas 4.3 and 4.5 we deduce that
\[ \begin{aligned} & \ddot\varphi(t) + \frac{\alpha}{t} \dot\varphi(t) + \theta t \dot W(t) + \frac{\alpha}{t \delta(t)} \frac{d}{dt} \|(\dot x(t), \dot\lambda(t))\|^2 + \theta \frac{d}{dt} \Big( t \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 \Big) + 2 \Big\langle \ddot x(t) + \frac{\alpha}{t} \dot x(t), A^* (\lambda(t) - \lambda_*) \Big\rangle \\ & \le - \big( (1 - \theta) \delta(t) - \theta t \dot\delta(t) \big) \|A^* (\lambda(t) - \lambda_*)\|^2 + \Big( 2 - \frac{1}{2\ell} \Big) \delta(t) \|\nabla f(x(t)) - \nabla f(x_*)\|^2 + \Big( 2 \beta^2 \|A\|^2 + 1 - \frac{\beta}{2} \Big) \delta(t) \|A x(t) - b\|^2 \\ & \le - \big( (1 - \theta) \delta(t) - \theta t \dot\delta(t) \big) \|A^* (\lambda(t) - \lambda_*)\|^2 + C_3 \delta(t) \|\nabla f(x(t)) - \nabla f(x_*)\|^2 + C_4 \delta(t) \|A x(t) - b\|^2, \end{aligned} \qquad (B.17) \]
where
\[ C_3 := \Big[ 2 - \frac{1}{2\ell} \Big]_{+} \ge 0 \qquad \text{and} \qquad C_4 := \Big[ 2 \beta^2 \|A\|^2 + 1 - \frac{\beta}{2} \Big]_{+} \ge 0. \]
Multiplying (B.17) by $t^{\alpha}$ and integrating from $t_0$ to $t$, we obtain
\[ I_1(t) + \theta I_2(t) + \alpha I_3(t) + \theta I_4(t) + 2 I_5(t) \le - \int_{t_0}^{t} s^{\alpha} \big( (1 - \theta) \delta(s) - \theta s \dot\delta(s) \big) \|A^* (\lambda(s) - \lambda_*)\|^2\, ds + C_3 \int_{t_0}^{t} s^{\alpha} \delta(s) \|\nabla f(x(s)) - \nabla f(x_*)\|^2\, ds + C_4 \int_{t_0}^{t} s^{\alpha} \delta(s) \|A x(s) - b\|^2\, ds, \qquad (B.18) \]
where
\[ \begin{aligned} I_1(t) &:= \int_{t_0}^{t} \big( s^{\alpha} \ddot\varphi(s) + \alpha s^{\alpha - 1} \dot\varphi(s) \big)\, ds, \\ I_2(t) &:= \int_{t_0}^{t} s^{\alpha + 1} \dot W(s)\, ds, \\ I_3(t) &:= \int_{t_0}^{t} \frac{s^{\alpha - 1}}{\delta(s)} \frac{d}{ds} \|(\dot x(s), \dot\lambda(s))\|^2\, ds, \\ I_4(t) &:= \int_{t_0}^{t} s^{\alpha} \frac{d}{ds} \Big( s \delta(s) \|A^* (\lambda(s) - \lambda_*)\|^2 \Big)\, ds, \\ I_5(t) &:= \int_{t_0}^{t} \big\langle s^{\alpha} \ddot x(s) + \alpha s^{\alpha - 1} \dot x(s), A^* (\lambda(s) - \lambda_*) \big\rangle\, ds. \end{aligned} \]
We will furnish five different inequalities from computing each of these integrals separately. Let $t \ge t_0$ be fixed.
• The integral $I_1(t)$. By the chain rule, for $s \ge t_0$ it holds
\[ s^{\alpha} \ddot\varphi(s) + \alpha s^{\alpha - 1} \dot\varphi(s) = \frac{d}{ds} \big( s^{\alpha} \dot\varphi(s) \big), \]
which leads to
\[ 0 = I_1(t) - t^{\alpha} \dot\varphi(t) + t_0^{\alpha} \dot\varphi(t_0) \le I_1(t) - t^{\alpha} \dot\varphi(t) + |t_0^{\alpha} \dot\varphi(t_0)|. \qquad (B.19) \]
• The integrals $I_2(t)$ and $I_4(t)$. Integration by parts gives
\[ \begin{aligned} I_2(t) + I_4(t) ={}& t^{\alpha + 1} W(t) - t_0^{\alpha + 1} W(t_0) - (\alpha + 1) \int_{t_0}^{t} s^{\alpha} W(s)\, ds \\ & + t^{\alpha + 1} \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 - t_0^{\alpha + 1} \delta(t_0) \|A^* (\lambda(t_0) - \lambda_*)\|^2 - \alpha \int_{t_0}^{t} s^{\alpha} \delta(s) \|A^* (\lambda(s) - \lambda_*)\|^2\, ds, \end{aligned} \]
and from here
\[ \begin{aligned} t^{\alpha + 1} \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 \le{}& t^{\alpha + 1} W(t) + t^{\alpha + 1} \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 \\ ={}& I_2(t) + I_4(t) + t_0^{\alpha + 1} W(t_0) + t_0^{\alpha + 1} \delta(t_0) \|A^* (\lambda(t_0) - \lambda_*)\|^2 + (\alpha + 1) \int_{t_0}^{t} s^{\alpha} W(s)\, ds + \alpha \int_{t_0}^{t} s^{\alpha} \delta(s) \|A^* (\lambda(s) - \lambda_*)\|^2\, ds. \end{aligned} \qquad (B.20) \]
• The integral $I_3(t)$. Again by integrating by parts, we get
\[ I_3(t) = \frac{t^{\alpha - 1}}{\delta(t)} \|(\dot x(t), \dot\lambda(t))\|^2 - \frac{t_0^{\alpha - 1}}{\delta(t_0)} \|(\dot x(t_0), \dot\lambda(t_0))\|^2 - \int_{t_0}^{t} \frac{(\alpha - 1) s^{\alpha - 2} \delta(s) - s^{\alpha - 1} \dot\delta(s)}{\delta^2(s)} \|(\dot x(s), \dot\lambda(s))\|^2\, ds. \]
For $s \ge t_0$, according to Assumption 2 we have $\dot\delta(s) \ge 0$, hence $\delta$ is monotonically increasing and therefore
\[ \frac{(\alpha - 1) s^{\alpha - 2} \delta(s) - s^{\alpha - 1} \dot\delta(s)}{\delta^2(s)} \le \frac{(\alpha - 1) s^{\alpha - 2} \delta(s)}{\delta^2(s)} \le \frac{(\alpha - 1) s^{\alpha}}{t_0^2 \delta(t_0)}. \]
It follows that
\[ \begin{aligned} 0 \le \frac{t^{\alpha - 1}}{\delta(t)} \|(\dot x(t), \dot\lambda(t))\|^2 ={}& I_3(t) + \frac{t_0^{\alpha - 1}}{\delta(t_0)} \|(\dot x(t_0), \dot\lambda(t_0))\|^2 + \int_{t_0}^{t} \frac{(\alpha - 1) s^{\alpha - 2} \delta(s) - s^{\alpha - 1} \dot\delta(s)}{\delta^2(s)} \|(\dot x(s), \dot\lambda(s))\|^2\, ds \\ \le{}& I_3(t) + \frac{t_0^{\alpha - 1}}{\delta(t_0)} \|(\dot x(t_0), \dot\lambda(t_0))\|^2 + \frac{\alpha - 1}{t_0^2 \delta(t_0)} \int_{t_0}^{t} s^{\alpha} \|(\dot x(s), \dot\lambda(s))\|^2\, ds. \end{aligned} \qquad (B.21) \]
• The integral $I_5(t)$. Integration by parts entails
\[ I_5(t) = \int_{t_0}^{t} \Big\langle \frac{d}{ds} \big( s^{\alpha} \dot x(s) \big), A^* (\lambda(s) - \lambda_*) \Big\rangle\, ds = \big\langle t^{\alpha} \dot x(t), A^* (\lambda(t) - \lambda_*) \big\rangle - \big\langle t_0^{\alpha} \dot x(t_0), A^* (\lambda(t_0) - \lambda_*) \big\rangle - \int_{t_0}^{t} s^{\alpha} \big\langle \dot x(s), A^* \dot\lambda(s) \big\rangle\, ds. \]
By the Cauchy–Schwarz inequality, we deduce that
\[ \Big| \int_{t_0}^{t} s^{\alpha} \big\langle \dot x(s), A^* \dot\lambda(s) \big\rangle\, ds \Big| \le \|A\| \int_{t_0}^{t} s^{\alpha} \|\dot x(s)\| \|\dot\lambda(s)\|\, ds \le \frac{\|A\|}{2} \int_{t_0}^{t} s^{\alpha} \big( \|\dot x(s)\|^2 + \|\dot\lambda(s)\|^2 \big)\, ds, \]
and thus
\[ 0 \le I_5(t) - \big\langle t^{\alpha} \dot x(t), A^* (\lambda(t) - \lambda_*) \big\rangle + \big| \big\langle t_0^{\alpha} \dot x(t_0), A^* (\lambda(t_0) - \lambda_*) \big\rangle \big| + \frac{\|A\|}{2} \int_{t_0}^{t} s^{\alpha} \|(\dot x(s), \dot\lambda(s))\|^2\, ds. \qquad (B.22) \]
Now, summing up the inequalities (B.19), (B.20), (B.21), and (B.22), and then employing (B.18), we obtain
\[ \begin{aligned} \theta t^{\alpha + 1} \delta(t) \|A^* (\lambda(t) - \lambda_*)\|^2 \le{}& I_1(t) + \theta I_2(t) + \alpha I_3(t) + \theta I_4(t) + 2 I_5(t) - t^{\alpha} \dot\varphi(t) \\ & + \int_{t_0}^{t} s^{\alpha} \Big( \theta (\alpha + 1) W(s) + \Big( \frac{\alpha (\alpha - 1)}{t_0^2 \delta(t_0)} + \|A\| \Big) \|(\dot x(s), \dot\lambda(s))\|^2 \Big)\, ds \\ & + \theta \alpha \int_{t_0}^{t} s^{\alpha} \delta(s) \|A^* (\lambda(s) - \lambda_*)\|^2\, ds - 2 \big\langle t^{\alpha} \dot x(t), A^* (\lambda(t) - \lambda_*) \big\rangle + C_5 \\ \le{}& - t^{\alpha} \dot\varphi(t) + \int_{t_0}^{t} s^{\alpha} V(s)\, ds + \int_{t_0}^{t} s^{\alpha} \big( (\theta (\alpha + 1) - 1) \delta(s) + \theta s \dot\delta(s) \big) \|A^* (\lambda(s) - \lambda_*)\|^2\, ds \\ & - 2 \big\langle t^{\alpha} \dot x(t), A^* (\lambda(t) - \lambda_*) \big\rangle + C_5, \end{aligned} \]
where we recall that $V$ is given by
\[ V(s) := \theta (\alpha + 1) W(s) + \Big( \frac{\alpha (\alpha - 1)}{t_0^2 \delta(t_0)} + \|A\| \Big) \|(\dot x(s), \dot\lambda(s))\|^2 + C_3 \delta(s) \|\nabla f(x(s)) - \nabla f(x_*)\|^2 + C_4 \delta(s) \|A x(s) - b\|^2, \]
and the constant $C_5$ is given by
\[ C_5 := t_0^{\alpha} |\dot\varphi(t_0)| + \theta t_0^{\alpha + 1} W(t_0) + \frac{\alpha t_0^{\alpha - 1}}{\delta(t_0)} \|(\dot x(t_0), \dot\lambda(t_0))\|^2 + \theta t_0^{\alpha + 1} \delta(t_0) \|A^* (\lambda(t_0) - \lambda_*)\|^2 + 2 t_0^{\alpha} \big| \big\langle \dot x(t_0), A^* (\lambda(t_0) - \lambda_*) \big\rangle \big| \ge 0. \]
We come then to the desired result.

References

1. Abbas, B., Attouch, H., Svaiter, B.F.: Newton-like dynamics and forward-backward methods for structured monotone inclusions in Hilbert spaces. J. Optim. Theory Appl. 161(2), 331–360 (2014). https://doi.org/10.1007/s10957-013-0414-5
2. Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38(4), 1102–1119 (2000). https://doi.org/10.1137/S0363012998335802
3. Attouch, H., Goudou, X., Redont, P.: The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Commun. Contemp. Math. 2(1), 1–34 (2000). https://doi.org/10.1142/S0219199700000025
4. Alvarez, F., Attouch, H., Bolte, J., Redont, P.: A second-order gradient-like dissipative dynamical system with Hessian-driven damping: application to optimization and mechanics. J. Math. Pures Appl. 81(8), 747–779 (2002). https://doi.org/10.1016/S0021-7824(01)01253-3
5. Attouch, H., Chbani, Z., Riahi, H.: Combining fast inertial dynamics for convex optimization with Tikhonov regularization. J. Math. Anal. Appl. 457(2), 1065–1094 (2018). https://doi.org/10.1016/j.jmaa.2016.12.017
6. Attouch, H., Chbani, Z., Peypouquet, J., Redont, P.: Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. 168(1), 123–175 (2018). https://doi.org/10.1007/s10107-016-0992-8
7. Attouch, H., Chbani, Z., Riahi, H.: Fast proximal methods via time scaling of damped inertial dynamics. SIAM J. Optim. 29(3), 2227–2256 (2019). https://doi.org/10.1137/18M1230207
8. Attouch, H., Chbani, Z., Riahi, H.: Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3. ESAIM Control Optim. Calc. Var. (2019). https://doi.org/10.1051/cocv/2017083
9. Attouch, H., Chbani, Z., Riahi, H.: Fast convex optimization via time scaling of damped inertial gradient dynamics. Pure Appl. Funct. Anal. 6(6), 1081–1117 (2021)
10. Attouch, H., Chbani, Z., Fadili, J., Riahi, H.: Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics. J. Optim. Theory Appl. 193(1), 704–736 (2022). https://doi.org/10.1007/s10957-021-01859-2
11. Attouch, H., Chbani, Z., Fadili, J., Riahi, H.: First-order optimization algorithms via inertial systems with Hessian driven damping. Math. Program. 193(1), 113–155 (2022). https://doi.org/10.1007/s10107-020-01591-1
12. Attouch, H., Chbani, Z., Riahi, H.: Fast convex optimization via a third-order in time evolution equation. Optimization 71(5), 1275–1304 (2022). https://doi.org/10.1080/02331934.2020.1764953
13. Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k². SIAM J. Optim. 26(3), 1824–1834 (2016). https://doi.org/10.1137/15M1046095
14. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011). https://doi.org/10.1007/978-1-4419-9467-7
15. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009). https://doi.org/10.1137/080716542
16. Boţ, R.I.: Conjugate Duality in Convex Optimization. Lecture Notes in Economics and Mathematical Systems, vol. 637. Springer, Berlin (2010). https://doi.org/10.1007/978-3-642-04900-2
17. Boţ, R.I., Nguyen, D.-K.: Improved convergence rates and trajectory convergence for primal-dual dynamical systems with vanishing damping. J. Differ. Equ. 303, 369–406 (2021). https://doi.org/10.1016/j.jde.2021.09.021
18. Boţ, R.I., Csetnek, E.R., László, S.C.: A primal-dual dynamical approach to structured convex minimization problems. J. Differ. Equ. 269(12), 10717–10757 (2020). https://doi.org/10.1016/j.jde.2020.07.039
19. Boţ, R.I., Csetnek, E.R., Nguyen, D.-K.: Fast augmented Lagrangian method in the convex regime with convergence guarantees for the iterates. Math. Program. (2022). https://doi.org/10.1007/s10107-022-01879-4
20. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2010). https://doi.org/10.1561/2200000016
21. Cabot, A., Engler, H., Gadat, S.: Second-order differential equations with asymptotically small dissipation and piecewise flat potentials. Electron. J. Differ. Equ. 2009(17), 33–38 (2009)
22. Cabot, A., Engler, H., Gadat, S.: On the long time behavior of second order differential equations with asymptotically small dissipation. Trans. Am. Math. Soc. 361(11), 5983–6017 (2009)
23. Chambolle, A., Dossal, C.: On the convergence of the iterates of the "Fast iterative shrinkage/thresholding algorithm". J. Optim. Theory Appl. 166(3), 968–982 (2015). https://doi.org/10.1007/s10957-015-0746-4
24. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2(1), 17–40 (1976). https://doi.org/10.1016/0898-1221(76)90003-1
25. Goldstein, T., O'Donoghue, B., Setzer, S., Baraniuk, R.: Fast alternating direction optimization methods. SIAM J. Imaging Sci. 7(3), 1588–1623 (2014). https://doi.org/10.1137/120896219
26. Güler, O.: New proximal point algorithms for convex minimization. SIAM J. Optim. 2(4), 649–664 (1992). https://doi.org/10.1137/0802032
27. He, X., Hu, R., Fang, Y.-P.: Convergence rates of inertial primal-dual dynamical methods for separable convex optimization problems. SIAM J. Control Optim. 59(5), 3278–3301 (2021). https://doi.org/10.1137/20M1355379
28. He, X., Hu, R., Fang, Y.-P.: Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem. Automatica 146, 110547 (2022). https://doi.org/10.1016/j.automatica.2022.110547
29. He, X., Hu, R., Fang, Y.-P.: Inertial accelerated primal-dual methods for linear equality constrained convex optimization problems. Numer. Algorithms 90(4), 1669–1690 (2022). https://doi.org/10.1007/s11075-021-01246-y
30. He, X., Hu, R., Fang, Y.-P.: Inertial primal-dual dynamics with damping and scaling for linearly constrained convex optimization problems. Applicable Anal. (2022). https://doi.org/10.1080/00036811.2022.2104260
31. Lin, Z., Li, H., Fang, C.: Accelerated Optimization for Machine Learning: First-Order Algorithms. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-2910-8
32. Madan, R., Lall, S.: Distributed algorithms for maximum lifetime routing in wireless sensor networks. IEEE Trans. Wirel. Commun. 5(8), 2185–2193 (2006). https://doi.org/10.1109/TWC.2006.1687734
33. May, R.: Asymptotic for a second-order evolution equation with convex potential and vanishing damping term. Turk. J. Math. 41(3), 681–685 (2017). https://doi.org/10.3906/mat-1512-28
34. Nesterov, Y.: A method for solving the convex programming problem with convergence rate O(1/k²). Proc. USSR Acad. Sci. 269, 543–547 (1983)
35. Nesterov, Y.: Introductory Lectures on Convex Optimization. Applied Optimization, vol. 87. Springer, Boston (2004). https://doi.org/10.1007/978-1-4419-8853-9
36. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73(4), 591–597 (1967)
37. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964). https://doi.org/10.1016/0041-5553(64)90137-5
38. Polyak, B.T.: Introduction to Optimization. Translations Series in Mathematics and Engineering. Optimization Software, Publications Division, New York (1987)
39. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976). https://doi.org/10.1287/moor.1.2.97
40. Shi, G., Johansson, K.H.: Randomized optimal consensus of multi-agent systems. Automatica 48(12), 3018–3030 (2012). https://doi.org/10.1016/j.automatica.2012.08.018
41. Su, W., Boyd, S., Candès, E.J.: A differential equation for modeling Nesterov's accelerated gradient method: theory and insights. J. Mach. Learn. Res. 17(1), 5312–5354 (2016)
42. Yi, P., Hong, Y., Liu, F.: Distributed gradient algorithm for constrained optimization with application to load sharing in power systems. Syst. Control Lett. 83, 45–52 (2015). https://doi.org/10.1016/j.sysconle.2015.06.006
43. Yi, P., Hong, Y., Liu, F.: Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and application to economic dispatch of power systems. Automatica 74, 259–269 (2016). https://doi.org/10.1016/j.automatica.2016.08.007
44. Zeng, X., Yi, P., Hong, Y., Xie, L.: Distributed continuous-time algorithms for nonsmooth extended monotropic optimization problems. SIAM J. Control Optim. 56(6), 3973–3993 (2018). https://doi.org/10.1137/17M1118609
45. Zeng, X., Lei, J., Chen, J.: Dynamical primal-dual accelerated method with applications to network optimization. IEEE Trans. Autom. Control (2022). https://doi.org/10.1109/TAC.2022.3152720

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Keywords: Augmented Lagrangian method; Primal-dual dynamical system; Damped inertial dynamics; Nesterov’s accelerated gradient method; Lyapunov analysis; Time rescaling; Convergence rate; Trajectory convergence; 37N40; 46N10; 65K10; 90C25