Learning efficient and provably convergent splitting methods

Henry Lockyer (University of Bath), L. M. Kreusser, E. H. Müller, and P. Singh

Splitting methods are widely used for solving initial value problems (IVPs) because they decompose complicated evolutions into more manageable subproblems. These subproblems can be solved efficiently and accurately by exploiting properties such as linearity, sparsity, and reduced stiffness. Traditionally, these methods are derived using analytic and algebraic techniques from numerical analysis, including truncated Taylor series and their Lie-algebraic analogue, the Baker–Campbell–Hausdorff formula. These tools enable the development of high-order numerical methods that provide exceptional accuracy for small timesteps. Moreover, such methods often (nearly) conserve important physical invariants, such as mass, unitarity, and energy. However, in many practical applications computational resources are limited, so it is crucial to identify methods that achieve the best accuracy within a fixed computational budget, which may require taking relatively large timesteps. In this regime, traditionally derived high-order methods often exhibit large errors because they are designed to be asymptotically optimal. Machine learning techniques offer a potential solution, since they can be trained to solve a given IVP efficiently at large timesteps, but they are often purely data-driven, come with limited convergence guarantees in the small-timestep regime, and do not necessarily conserve physical invariants. In this work, we propose a framework for finding machine-learned splitting methods that are computationally efficient for large timesteps and have provable convergence and conservation guarantees in the small-timestep limit.
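As a concrete illustration of the classical approach (not taken from the abstract itself), consider the harmonic oscillator q' = p, p' = -q, split into a "kick" subproblem p' = -q and a "drift" subproblem q' = p, each of which can be solved exactly. Composing half a kick, a full drift, and half a kick gives the second-order Strang splitting (the leapfrog method). The sketch below verifies its order of convergence numerically; all names and parameter choices are illustrative, not from the paper.

```python
import math

def strang_step(q, p, h):
    """One Strang splitting step for q' = p, p' = -q (leapfrog)."""
    p = p - 0.5 * h * q   # exact flow of the kick p' = -q over h/2
    q = q + h * p         # exact flow of the drift q' = p over h
    p = p - 0.5 * h * q   # exact flow of the kick p' = -q over h/2
    return q, p

def integrate(q0, p0, h, n):
    """Apply n Strang steps of size h starting from (q0, p0)."""
    q, p = q0, p0
    for _ in range(n):
        q, p = strang_step(q, p, h)
    return q, p

# Exact solution with q(0) = 1, p(0) = 0 is q(t) = cos(t).
T = 1.0
errs = []
for n in (50, 100):
    q, _ = integrate(1.0, 0.0, T / n, n)
    errs.append(abs(q - math.cos(T)))

# Halving the timestep should reduce the error by roughly 2**2 = 4,
# confirming the second-order (small-timestep) convergence of Strang splitting.
order = math.log2(errs[0] / errs[1])
```

This small experiment also hints at the trade-off the abstract describes: the order estimate is an asymptotic statement about small timesteps and says nothing about accuracy at the large timesteps that a fixed computational budget may force.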

[link to pdf] [back to Numdiff-17]