Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations
Abstract
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency, memory footprint, and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors than single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, which is a widely understood mechanism for providing resolution below the least significant bit (LSB) in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
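The core idea of stochastic rounding is that a value falling between two representable fixed-point numbers is rounded up with probability proportional to its distance from the lower one, making the rounding unbiased in expectation. A minimal Python sketch, assuming a simple fixed-point grid with a configurable number of fractional bits (the function name and `frac_bits` parameter are illustrative, not from the paper):

```python
import random

def stochastic_round(x, frac_bits=8):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits.

    The value is rounded up with probability equal to its fractional
    residue on the grid, so E[stochastic_round(x)] == x (unbiased),
    unlike round-to-nearest, which can accumulate systematic error.
    """
    scale = 1 << frac_bits            # grid spacing is 1 / scale
    scaled = x * scale
    lower = int(scaled // 1)          # nearest grid point below
    residue = scaled - lower          # distance above it, in [0, 1)
    if random.random() < residue:     # round up with probability = residue
        lower += 1
    return lower / scale
```

Averaged over many calls, the rounded values converge to the true value even when the true value lies below the grid resolution, which is the property exploited for sub-LSB accuracy.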
 Publication:

arXiv e-prints
 Pub Date:
 April 2019
 arXiv:
 arXiv:1904.11263
 Bibcode:
 2019arXiv190411263H
 Keywords:

 Computer Science - Data Structures and Algorithms;
 Computer Science - Mathematical Software
 E-Print:
 Submitted to Philosophical Transactions of the Royal Society A