DISE: Dynamic Integrator Selection to Minimize Forward Pass Time in Neural ODEs

1 Jan 2021  ·  Soyoung Kang, Ganghyeon Park, Kwang-Sung Jun, Noseong Park

Neural ordinary differential equations (Neural ODEs) are appreciated for their ability to significantly reduce the number of parameters needed to construct a neural network. However, they are criticized for slow forward-pass inference, which is caused by solving integral problems. To improve model accuracy, they rely on advanced solvers, such as the Dormand–Prince (DOPRI) method. In many Neural ODE experiments, however, solving an integral problem requires at least tens (and sometimes hundreds) of steps. In this work, we propose to i) directly regularize the step size of DOPRI to make the forward pass faster and ii) dynamically choose a simpler integrator than DOPRI for a carefully selected subset of inputs. Because not every input requires the advanced integrator, we design an auxiliary neural network that chooses an appropriate integrator for each input, decreasing the overall inference time without significantly sacrificing accuracy. We consider the Euler method, the fourth-order Runge–Kutta (RK4) method, and DOPRI as selection candidates. We found that a non-trivial percentage of cases can be solved with simple integrators. As a result, the overall number of function evaluations (NFEs) decreases by up to 78% with improved accuracy.
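To illustrate the per-input integrator selection described above, here is a minimal sketch assuming PyTorch and the torchdiffeq library; the class names (ODEFunc, IntegratorSelector) and the forward_with_selection helper are hypothetical illustrations, not the paper's implementation, and training of the auxiliary selector is omitted.

```python
# Sketch of dynamic integrator selection among Euler, RK4, and DOPRI.
# Assumes the torchdiffeq library; names below are illustrative only.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Dynamics dy/dt = f(t, y), shared by all integrators."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, y):
        return self.net(y)


class IntegratorSelector(nn.Module):
    """Auxiliary network: predicts which solver to use for each input."""
    def __init__(self, dim, n_solvers=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_solvers))

    def forward(self, x):
        # 0 -> Euler, 1 -> RK4, 2 -> DOPRI
        return self.net(x).argmax(dim=-1)


SOLVERS = ["euler", "rk4", "dopri5"]


def forward_with_selection(ode_func, selector, x, t_span):
    """Solve the ODE for each input with the solver chosen by the selector."""
    choices = selector(x)                      # (batch,) solver indices
    out = torch.empty_like(x)
    for k, method in enumerate(SOLVERS):
        mask = choices == k
        if mask.any():
            # odeint returns states at every time in t_span; keep the final state.
            y = odeint(ode_func, x[mask], t_span, method=method)
            out[mask] = y[-1]
    return out


if __name__ == "__main__":
    dim = 8
    x = torch.randn(16, dim)
    t_span = torch.tensor([0.0, 1.0])
    ode_func, selector = ODEFunc(dim), IntegratorSelector(dim)
    print(forward_with_selection(ode_func, selector, x, t_span).shape)  # (16, 8)
```

The point of the sketch is the batching pattern: inputs routed to Euler or RK4 avoid the many function evaluations that adaptive DOPRI would spend on them, while inputs the selector deems difficult still receive the adaptive solver.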
