Quasi-convergence of an implementation of optimal balance by backward-forward nudging
Abstract
Optimal balance is a non-asymptotic numerical method to compute a point on the slow manifold for certain two-scale dynamical systems. It works by solving a modified version of the system as a boundary value problem in time, where the nonlinear terms are adiabatically ramped up from zero to the fully nonlinear dynamics. A dedicated boundary value solver, however, is often not directly available. The most natural alternative is a nudging solver, where the problem is repeatedly solved forward and backward in time and the respective boundary conditions are restored whenever one of the temporal end points is visited. In this paper, we show quasi-convergence of this scheme in the sense that the termination residual of the nudging iteration is as small as the asymptotic error of the method itself, i.e., under appropriate assumptions exponentially small. This confirms that optimal balance in its nudging formulation is an effective algorithm. Further, it shows that the boundary value problem formulation of optimal balance is well posed up to a residual error no larger than the asymptotic error of the method itself. The key step in our proof is a careful two-component Gronwall inequality.
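The backward-forward nudging scheme described in the abstract can be sketched on a hypothetical toy fast-slow system. Everything below (the model with slow variable `u` and fast oscillator `(v1, v2)`, the cosine ramp, and all parameter values) is an illustrative assumption, not the systems or implementation treated in the paper: the point is only the iteration structure of forward sweep, restoration of the slow boundary condition at `t = T`, backward sweep, and restoration of the linear-end boundary condition at `t = 0`.

```python
import numpy as np

def ramp(s):
    """Smooth cosine ramp rho: 0 for s <= 0, 1 for s >= 1 (illustrative choice)."""
    s = min(max(s, 0.0), 1.0)
    return 0.5 - 0.5 * np.cos(np.pi * s)

def rk4_step(f, t, z, dt):
    """One classical Runge-Kutta step; works for negative dt as well."""
    k1 = f(t, z)
    k2 = f(t + 0.5 * dt, z + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, z + 0.5 * dt * k2)
    k4 = f(t + dt, z + dt * k3)
    return z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, z, t0, t1, n_steps):
    """Integrate z' = f(t, z) from t0 to t1 (t1 < t0 gives a backward sweep)."""
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        z = rk4_step(f, t, z, dt)
        t += dt
    return z

def optimal_balance_nudging(u_target, T=1.0, eps=0.02, n_steps=500, n_iter=6):
    """Backward-forward nudging for a toy slow variable u coupled to a fast
    oscillator (v1, v2); the model and parameters are hypothetical."""
    def f(t, z):
        u, v1, v2 = z
        rho = ramp(t / T)  # nonlinear coupling ramped up over [0, T]
        return np.array([rho * v1,                 # slow equation
                         v2 / eps + rho * u**2,    # fast oscillator, forced
                         -v1 / eps])
    z = np.array([u_target, 0.0, 0.0])  # linear-end BC: fast part zero
    residuals = []
    for _ in range(n_iter):
        z = integrate(f, z, 0.0, T, n_steps)   # forward sweep to t = T
        z[0] = u_target                        # restore slow BC at t = T
        z = integrate(f, z, T, 0.0, n_steps)   # backward sweep to t = 0
        res = float(np.hypot(z[1], z[2]))      # fast residual at t = 0
        residuals.append(res)
        z[1] = z[2] = 0.0                      # restore linear BC at t = 0
    return z[0], residuals
```

In this sketch the residual sequence shrinks over the first few sweeps and then stagnates at a small level set by the adiabatic error of the ramp, which mirrors the quasi-convergence statement of the abstract: the nudging iteration is not driven to zero, but its termination residual is as small as the method's own asymptotic error.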
 Publication:
 arXiv e-prints
 Pub Date:
 June 2022
 arXiv:
 arXiv:2206.13068
 Bibcode:
 2022arXiv220613068T
 Keywords:
 Mathematics - Dynamical Systems;
 Mathematics - Numerical Analysis;
 Primary 34E13;
 Secondary 34B15;
 37M21