
A Comparison Between Differential Equation Solver Suites In MATLAB, R, Julia, Python, C, Mathematica, Maple, and Fortran

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/comparison-differential-equation-solver-suites-matlab-r-julia-python-c-fortran/

Many times a scientist is choosing a programming language or software for a specific purpose. For the field of scientific computing, the methods for solving differential equations are what's important. What I would like to do is take the time to compare and contrast the most popular offerings.
This is a good way to reflect upon what’s available and find out where there is room for improvement. I hope that by giving you the details for how each suite was put together (and the “why”, as gathered from software publications) you can come to your own conclusion as to which suites are right for you.

(Full disclosure, I am the lead developer of DifferentialEquations.jl. You will see at the end that DifferentialEquations.jl does offer pretty much everything from the other suites combined, but that's no accident: our software organization came last and we used these suites as a guiding hand for how to design ours.)

Quick Summary Table

If you just want a quick summary, I created a table which has all of this information. You can find it here (click for PDF):

Comparison Of Differential Equation Solver Software

MATLAB’s Built-In Methods

Due to its popularity, let's start with MATLAB's built-in differential equation solvers. MATLAB's differential equation solver suite was described in a research paper by its creator Lawrence Shampine, and this paper is one of the most highly cited SIAM Scientific Computing publications. Shampine also had a few other papers at this time developing the idea of "methods for a problem solving environment", or a PSE. The idea is pretty simple: users of a problem solving environment (the examples from his papers are MATLAB and Maple) do not have the same requirements as more general users of scientific computing. Instead of focusing on efficiency, the key for this group is to have a clear and neatly defined (universal) interface which has a lot of flexibility.

The MATLAB ODE Suite does extremely well at hitting these goals. MATLAB documents its ODE solvers very well, there’s a similar interface for using each of the different methods, and it tells you in a table in which cases you should use the different methods.

But the modifications to the methods go even further. Let's take for example the classic ode45. That method just works and creates good plots, right? Well, Shampine added a little trick to it. When you solve an equation using ode45, the Runge-Kutta method uses a "free" interpolation to fill in some extra points. So between any two steps that the solver takes, it automatically adds in 4 extra points using a 4th order interpolation. This is because high order ODE solvers are good enough at achieving "standard user error tolerances" that they actually take quite large timesteps, and in doing so step too infrequently to make a good plot. Shampine's scheme is a good quick fix to this problem which most people probably never knew was occurring under the hood!

There's quite a bit of flexibility here. The methods allow you to use complex numbers. You're given access to the "dense output function" (this is the function which computes the interpolations). There are a few options you can tweak. Every one of these methods is set up with event handling, and there are methods which can handle differential-algebraic equations. There are also dde23 and ddesd for delay differential equations, and in the financial toolbox there's an Euler-Maruyama method for SDEs.

While MATLAB does an excellent job at giving a large amount of easily available functionality, where it lacks is performance. There are a few reasons for this. For one thing, these modifications like adding extra points to the solution array can really increase the amount of memory consumed if the ODE system is large! This actually has an impact in other ways. There's a very good example of this in ode45. ode45 is based on the Dormand-Prince 5(4) pair. However, in 1999, the same year the MATLAB ODE Suite was published, Shampine released a paper with a new 5(4) pair which was more efficient than the Dormand-Prince method. So that raises the question: why wasn't this used in the MATLAB ODE Suite (it's clear Shampine knew about it!)? (I actually asked him in an email.) The reason is that its interpolation requires calculating some extra steps, so it's less efficient if you are ALWAYS interpolating. But since ode45 is always interpolating in order to make the plots look nicer, this would get in the way. Essentially, the newer pair can be more efficient, but MATLAB sets things up for nice plotting and not pure efficiency.

But there are other areas where more efficient methods were passed up during the development phase of the ODE suite. For example, Hairer’s benchmarks in his book Solving Ordinary Differential Equations I and II (the second is for stiff problems), along with the benchmarks from the Julia DifferentialEquations.jl suite, consistently show that high order Runge-Kutta methods are usually the most efficient methods for high accuracy solving of nonstiff ODEs. These benchmarks both consistently show that, for the same error, high order Runge-Kutta methods (like order >6) can solve the equation much faster than methods like Adams methods. But MATLAB does not offer high order Runge-Kutta methods and only offers ode113 (an Adams method) for high-accuracy solving.

Some of this is due to a limitation within MATLAB itself. MATLAB's ODE solver requires taking in a user-defined function, and since this function is defined in MATLAB its function calls are very inefficient and expensive. Thus MATLAB's ODE solver suite can become more efficient by using methods which reduce the number of function calls (which multistep methods do). But this isn't the only case where efficient methods are missing. Both Hairer and the JuliaDiffEq benchmarks show that high-order Rosenbrock methods are the most efficient for low-medium accuracy stiff ODEs, but MATLAB doesn't offer these methods. It does offer ode23s, a low-order Rosenbrock method, and ode15s which is a multistep method. ode15s is supposed to be the "meat and potatoes of the MATLAB Suite", but it cannot handle all equations well since its higher order methods (it is adaptive order) are not L-stable (and not even A-stable). For this reason there are a few other low order methods (such as ode23tb, an ESDIRK method for highly stiff problems) which are recommended to fill in the gaps, but none of the higher order variants which are known to be more efficient for many equations.

This pattern goes beyond the ODE solvers. The DDE solvers are all low order, and in the case of ddesd, it's a low accuracy method which is fast at getting plots to look correct but does not converge to many decimal places since it doesn't explicitly track discontinuities. This is even seen in the paper on the method, which shows that the convergence only matches dde23 to graphical accuracy on a constant-delay problem. Again, this fits in with the mantra of the suite, but may not hit all demographics. Shampine specifically made a separate version of ddesd for Fortran for people who are interested in efficiency, which is another way of noting that the key of ddesd is features and automatic usage, and not hardcore scientific computing efficiency. The mentioned SDE solver from the financial toolbox is only strong order 0.5 and thus requires quite a small dt to be accurate.

And I can keep going, but I think you get the moral of the story. This suite was created with one purpose in mind: to make it very easy to solve a wide array of differential equations and get a nice plot out. It does a very good job at doing so. But it wasn’t made with efficiency in mind, and so it’s missing a lot of methods that may be useful if you want high efficiency or high accuracy. Adding these wouldn’t make sense to the MATLAB suite anyways since it would clutter the offering. MATLAB is about ease of use, and not efficiency, and it does extremely well at what it set out to do. For software that was first made before the Y2K crisis with just a few methods added later, it still holds up very well.

Hairer’s Solvers (Fortran)

Next I want to bring up some Fortran solvers because they will come up later. Hairer's Fortran solvers are a set of methods with similar interfaces that were designed with efficiency in mind. Many of these methods are classics: dopri5, dop853, radau, and rodas will specifically show up in many of the suites which are discussed later. These methods are not too flexible: they don't allow event handling (though with enough gusto you can use the dense output to write your own), or numbers that aren't double-precision floating point numbers (it's Fortran). They have a good set of options for tweaking parameters to make the adaptive timestepping more efficient, though you may have to read a few textbooks to know exactly what they do. And that's the key to them: they will solve an ODE, stiff or non-stiff, and they will do so pretty efficiently, but nothing more. But even then, they show their age in ways that keep them from being "perfectly efficient". These solvers include their own linear algebra routines which don't multithread like standard BLAS and LAPACK implementations, meaning that they won't make full use of modern CPU architectures. The computations don't necessarily use SIMD or FMA. But most of all, to use them directly you need to use Fortran, which would be a turn-off for many people.

There is some flexibility in the offering. There are symplectic solvers for second order ODEs, the stiff solvers allow for solving DAEs in mass matrix form, there’s a constant-lag nonstiff delay differential equation solver (RETARD), there is a fantastic generalization of radau to stiff state-dependent delay differential equations (RADAR5), and there’s some solvers specifically for some “mechanical ODEs” commonly found in physical problems. Of course, to get this all working you’ll need to be pretty familiar with Fortran, but this is a good suite to look at if you are.

ODEPACK and Netlib ODE Solvers (Fortran)

ODEPACK is an old set of ODE solvers which accumulated over time. You can learn a bit about its history by reading this interview with Alan Hindmarsh. I also bundle the Netlib suite together with it. There's a wide variety of methods in there. There are old Runge-Kutta methods due to Shampine, some of Shampine's old multistep methods ddebdf and ddeabm, etc. The reason why this pack is really important though is because of the Lawrence Livermore set of methods, specifically LSODE and its related methods (LSODA, LSODAR, VODE, etc.). It includes methods for implicit ODEs (DAEs) as well. These are a group of multistep methods descended from GEAR, the original BDF code. They are pretty low level and thus allow you to control the solver step by step, and some of them have "rootfinding capabilities". This means that you can use these to add event handling, though an event handling interface will take some coding. There are lots of options and these are generally pretty performant for a large array of problems, but they do show their age in the same way that the Hairer codes do. Their linear algebra setups do not make use of modern BLAS and LAPACK, which in practical terms means they don't make full use of modern computer architectures to speed things up. They don't always make use of SIMD and other modern CPU acceleration features. They are the vanilla ODE solvers which have existed through time. In particular, LSODA is popular because it is the most widely distributed method which automatically detects stiffness and switches between integration methods, though it should be pointed out that there is a performance penalty from this feature.

Sundials and ARKODE (C and Fortran)

Sundials' CVODE is a re-write of VODE to C; VODE is a descendant of LSODE, which itself descends from the original GEAR multistep code. Yes, this has a long history. But you should think of it as "LSODE upgraded": it makes use of modern BLAS/LAPACK, and also a bunch of other efficient C/Fortran linear solvers, to give a very efficient Adams and BDF method. Its solver IDA is like CVODE but handles implicit ODEs (DAEs). The interface for these is very similar to the ODEPACK interface, which means you can control it step by step and use the rootfinding capabilities to write your own event handling interface. Since the Adams methods handle nonstiff ODEs and the BDF methods handle stiff ODEs, this performance plus flexibility makes it the "one-stop-shop" for ODE solving. Many different scientific computing packages internally make use of Sundials because it can handle just about anything. However, it does have limitations. For one, it only works with standard C numbers and arrays. There's no stochasticity or delays allowed. But more importantly, Hairer's and DifferentialEquations.jl's benchmarks show that these multistep methods are usually not the most efficient methods, since high order Rosenbrock, (E)SDIRK, and fully implicit RK methods are usually faster for the same accuracy for stiff problems, and high order Runge-Kutta methods are faster than the Adams methods for the same accuracy. Multistep methods are also not very stable at their higher orders, and so at higher tolerances (lower accuracy) these methods may fail to have their steps converge on standard test problems (see this note in the ROBER tests), meaning that you may have to increase the accuracy (and thus computational cost) due to stiffness issues. But, since multistep methods only require a single function evaluation per step (or are implicit in only single steps), they are the most efficient for asymptotically hard problems (i.e. when the derivative calculation is very expensive or the ODE is a large 10,000+ system). For this reason, these methods excel at solving large discretizations of PDEs. To top it off, there are parallel (MPI) versions of CVODE/IDA for HPC applications.

I also want to note that Sundials recently added ARKODE, a set of Runge-Kutta methods. These include explicit Runge-Kutta methods and SDIRK methods, including additive Runge-Kutta methods for IMEX schemes (i.e. you can split out a portion to solve explicitly so that the implicit portion is cheaper when you know your problem is only partly or semi stiff). This means that it covers the methods which I mentioned earlier as being more efficient in many cases (though it is a bit lacking on the explicit tableaus and thus could be more efficient, but that's just details).

If you are using C++ or Fortran and want to write to only one interface, the Sundials suite is a great Swiss Army knife. And if you have an asymptotically large problem or very expensive function evaluations, this will be the most efficient as well. Plus the parallel versions are cool! You do have to live with the limitations of low-level software forcing specific number and array types, along with the fact that you need to write your own event handling, but if you're "hardcore" and writing in a compiled language this suite is a good bet.

SciPy’s Solvers (Python)

Now we come to SciPy's suite. If you look at what it offers, the names should now start to sound familiar: LSODA, VODE, dopri5, dop853. That is right: SciPy is simply a wrapper over Hairer's and ODEPACK's methods. Because it only exposes a generic interface, though, it doesn't have the granularity that is offered by ODEPACK, meaning that you don't get step-by-step control or event handling. The tweaking options are very basic as well. Basically, they let you define a function in Python, say what timeframe to solve it on, give a few tolerances, and it calls Fortran code and spits back a solution. It only has ODE solvers, no differential-algebraic, delay, or stochastic solvers. It only allows the basic number types and does no event handling. Super vanilla, but gets the job done? As for the methods, it has the high order Runge-Kutta methods for efficient solving of nonstiff equations, but it's missing Rosenbrock and SDIRK methods entirely, opting to only provide the multistep methods.

I should note here that it has the same limitation as MATLAB though, namely that the user’s function is Python code. Since the derivative function is where the ODE solver spends most of its time (for sufficiently difficult problems), this means that even though you are calling Fortran code, you will lose out on a lot of efficiency. Still, if efficiency isn’t a big deal and you don’t need bells and whistles, this suite will do the basics.

deSolve Package (R)

There’s not much to say other than that deSolve is very much like SciPy’s suite. It wraps almost the same solvers, has pretty much the same limitations, and has the same efficiency problem since in this case it’s calling the user-provided R function most of the time. One advantage is that it does have event handling. Less vanilla with a little bit more features, but generally the same as SciPy.

PyDSTool (Python)

PyDSTool is an odd little beast. Part of the software is for numerical continuation (i.e. bifurcation analysis). But another part of it is for ODE solvers. It contains one ODE solver which is written in Python itself, and it recommends against actually using this for efficiency reasons. Instead, it wraps some of the Hairer methods, specifically dopri5 and radau, and recommends these. But it's different from SciPy in that it takes the specification of the ODE as a string, compiles it to a C function, and uses that inside the ODE solver. By doing so, it's much more efficient. We still note that its array of available methods is small: it offers radau, which is great for high accuracy ODEs and DAEs but is not the most efficient at lower accuracy, so it would've been nice to see Rosenbrock and ESDIRK methods. It has some basic event handling and methods for DDEs (again as wrappers to a Hairer method). This is a pretty good suite if you can get it working, though I do caution that getting the extra (non-Python) methods set up and compiled is nontrivial. One big point to note is that I find the documentation spectacularly difficult to parse. Altogether, it's pretty efficient and has a good set of methods which will do basic event handling and solve problems at double precision.

JiTCODE and JiTCSDE (Python)

JiTCODE is another Python library which makes things efficient by compiling the function that the user provides. It uses the SciPy integrators and does something similar to PyDSTool in order to get efficiency. I haven't tried it out myself, but I'll assume this gets you close to the efficiency you would see using the solvers directly from Fortran. However, it is lacking in the features department, not offering advanced things like arbitrary number types, event handling, etc. But if you have a vanilla ODE to solve and you want to easily do it efficiently in Python, this is a good option to look at.

Additionally, JiTCDDE is a version for constant-lag DDEs similar to dde23. JiTCSDE is a version for stochastic differential equations. It uses the high order (strong order 1.5) adaptive Runge-Kutta method for diagonal noise SDEs developed by Rackauckas (that's me) and Nie, which has been demonstrated to be much more efficient than the low order and fixed timestep methods found in the other offerings. It employs the same compilation setup as JiTCODE, so it should create efficient code as well. I haven't used this myself, but it would probably be a very efficient ODE/DDE/SDE solver if you want to use Python and don't need events and other sugar.

Boost ODEINT Solver Library (C++)

The Boost ODEINT solver library has some efficient implementations of some basic explicit Runge-Kutta methods (including high order RK methods) and some basic Rosenbrock methods (including a high order one). Thus it can be pretty efficient for solving the most standard stiff and nonstiff ODEs. However, its implementations do not make use of advanced timestepping techniques (PI-controllers and Gustafsson acceleration), which makes them require more steps to achieve the same accuracy as some of the more advanced software, so it's not really any more efficient in practice. It doesn't have event handling, but it is flexible with the number and array types you can put in there via C++ templates. It and DifferentialEquations.jl are the only two suites mentioned here that allow for solving the differential equations on the GPU. Thus if you are familiar with templates and really want to make use of them, this might be the library to look at, otherwise you're probably better off looking elsewhere like Sundials.

GSL ODE Solvers (C)

To say it lightly, the GSL ODE solvers are kind of a tragedy. Actually, they are just kind of weird. When comparing their choices against what is known to be efficient according to the ODE research and benchmarks, the methods they chose to implement are pretty bizarre: things like extrapolation methods which have repeatedly been shown to not be very efficient, while leaving out other more important methods. But they do have some of the basics like multistep Adams and BDF methods. Like Boost, this library doesn't do all of the fancy PI-controlled adaptivity and everything, so YMMV. This benchmark, while not measuring runtime and only counting function evaluations (which can be very, very different according to more sophisticated benchmarks like the Hairer and DifferentialEquations.jl ones!), clearly shows that the GSL solvers can take way too many function evaluations because of this and thus, since they use methods similar to LSODA/LSODE/dopri5, probably have much higher runtimes than they should.

Mathematica’s ODE Solvers

Mathematica's ODE solvers are very sophisticated. It has a lot of explicit Runge-Kutta methods, including newer high order efficient methods due to Verner and Shampine's efficient method mentioned in the MATLAB section. These methods can be almost 5x faster than the older high order explicit RK methods, which themselves are the most efficient class of methods for many nonstiff ODEs, and thus these do quite well. Its native methods are able to make full use of Mathematica and its arbitrary precision, but sadly most of the methods it uses are wrappers for the classic solvers. Its stiff solvers mainly call into Sundials or LSODA. By using LSODA, it tends to be able to do automatic stiffness detection by default. It also wraps Sundials' IDA for DAEs. It uses a method of steps implementation over its explicit Runge-Kutta methods for solving nonstiff DDEs efficiently, and includes high order Runge-Kutta methods for stochastic differential equations (though it doesn't do adaptive timestepping in this case). One nice feature is that all solutions come with an interpolation which makes them continuous functions that plot nicely. Additionally, it can use the symbolic form of the user-provided equation to create a function for the Jacobian, and use that (instead of a numerical differentiation fallback like all of the previous methods) in the stiff solvers for more accuracy and efficiency. It also has symplectic methods for solving second order ODEs, and its event handling is very expressive. It's very impressive, but since it's using a lot of wrapped methods you cannot always make use of Mathematica's arbitrary precision inside of these numerical methods. Additionally, its interface doesn't give you control over all of the fine details of the solvers that it wraps.

Maple’s ODE Solvers

Maple's ODE solvers are pretty basic. It defaults to a 6th order explicit RK method due to Verner (not the newer, more efficient ones though), and for stiff problems it uses a high order Rosenbrock method. It also wraps LSODE like everything else. It has some basic event handling, but not much more. As another symbolic programming language, it computes Jacobians analytically to pass to the stiff solvers like Mathematica does, helping it out in that respect, but its offerings pale in comparison to Mathematica.

FATODE (Fortran)

FATODE is a set of methods written in Fortran. It includes explicit Runge-Kutta methods, SDIRK methods, Rosenbrock methods, and fully implicit RK methods. Thus it has something that's pretty efficient for pretty much every case. What makes it special is that it includes the ability to do sensitivity analysis calculations. It can't do anything but double-precision numbers and doesn't have event handling, but the sensitivity calculations make it quite special. If you're a Fortran programmer, this is worth a look, especially if you want to do sensitivity analysis.

DifferentialEquations.jl (Julia)

Okay, time for DifferentialEquations.jl. I left it for last because it is by far the most complex of the solver suites, and pulls ideas from many of them. While most of the other suites offer no more than about 15 methods on the high end (with most offering about 8 or fewer), DifferentialEquations.jl offers 200+ methods and is continually growing. Like the standard Python and R suites, it offers wrappers to Sundials, ODEPACK, and Hairer methods. However, since Julia code is always JIT compiled, its wrappers are more akin to PyDSTool or JiTCODE in terms of efficiency. Thus all of the standard methods mentioned before are available in this suite.

But then there are the native Julia methods. For ODEs, these include explicit Runge-Kutta methods, (E)SDIRK methods, and Rosenbrock methods. In each of these categories it has a huge amount of variety, offering pretty much every method from the other suites along with some unique methods. Some unique methods to point out: it has the only 5th order Rosenbrock method, it has the efficient Verner methods discussed in the Mathematica part, and it has newer 5th order methods (i.e. it includes the Bogacki-Shampine method discussed as an alternative to ode45's tableau, along with an even newer tableau due to Tsitouras which is even more efficient). It has methods specialized to reduce interpolation error (the OwrenZen methods), and methods which are strong-stability preserving (for hyperbolic PDEs). By default it returns the solutions as continuous functions via a high order interpolation (though this can be turned off to make the saving more efficient). Each of these implementations employs a lot of extra tricks for efficiency. For example, the interpolation is "lazy", meaning that if the method requires extra function evaluations for the continuous form, it will only do those extra calculations when the continuous function is used (so when you directly ask for it or when you plot). This is just a peek at the special things the library does to gain an edge.
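To give a flavor of the interface, here is a minimal sketch of solving a scalar ODE with the Tsitouras 5(4) method (Tsit5) and querying the continuous solution. Note that the sketch uses the f(u, p, t) argument order of recent package versions, while the prose in this post uses the older (t, u) convention.

using DifferentialEquations

# exponential growth ODE
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 1/2, (0.0, 1.0))
sol = solve(prob, Tsit5(), reltol = 1e-8, abstol = 1e-8)

sol(0.45)  # dense output: evaluate the interpolated solution at an arbitrary time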

The native Julia methods also benchmark very well, and all of the benchmarks are openly available. Essentially, these methods make use of the native multithreading of modern BLAS/LAPACK, FMA, SIMD, and all of the extra little compiler goodies that allow code to be efficient, along with newer solver methods which theoretically reduce the amount of work that's required to get the same error. They even allow you to tweak a lot of the internals and swap out the linear algebra routines to use parallel solvers like PETSc. The result is that these methods usually outperform the classic C/Fortran methods which are wrapped. Additionally, it has ways to symbolically calculate Jacobians like Mathematica/Maple, and instead of defaulting to numerical differentiation the stiff solvers use automatic differentiation (falling back to numerical differentiation only when necessary), which is more efficient and has much increased accuracy.

Its event handling is the most advanced out of all of the offerings. You can change just about anything. You can make your ODE do things like change its size during the solving if you want, and you can make the event handling adjust and adapt internal solver parameters. It's not hyperbole to say that the user is given "full control", since the differential equation solvers themselves are written as essentially a method on the event handling interface, meaning that anything they can do internally you can do too.
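As a concrete sketch of that callback system, here is the classic bouncing-ball style example: a ContinuousCallback fires when a condition function crosses zero and then applies an arbitrary change to the integrator.

using DifferentialEquations

# u[1] is the height, u[2] the velocity
function ball!(du, u, p, t)
    du[1] = u[2]
    du[2] = -9.81
end

condition(u, t, integrator) = u[1]                          # event when the height hits zero
affect!(integrator) = (integrator.u[2] = -integrator.u[2])  # reflect the velocity
cb = ContinuousCallback(condition, affect!)

prob = ODEProblem(ball!, [1.0, 0.0], (0.0, 5.0))
sol = solve(prob, Tsit5(), callback = cb)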

The variety of methods is also much more diverse than the other offerings. It has symplectic integrators like Hairer's suite, but with more high and low order methods. It has a range of Runge-Kutta-Nyström methods for efficiently solving second order ODEs. It has the same high order adaptive method for diagonal noise SDEs as JiTCSDE, but also includes high order adaptive methods specifically for additive noise SDEs. It also has methods for stiff SDEs in Itô and Stratonovich interpretations, and allows for event handling in the SDE case (with the full flexibility). It has DDE solvers for constant-lag and state-dependent delays, and it has stiff solvers for each of these cases. The stiff solvers also all allow for solving DAEs in mass matrix form (fully implicit DAEs are possible too, but can only be solved using a few methods, such as a wrapper to Sundials' IDA, and event handling isn't included there quite yet).
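For example, here is a minimal sketch of solving a scalar diagonal-noise SDE with the adaptive strong order 1.5 method (SRIW1) mentioned above; the argument order again follows recent package versions.

using DifferentialEquations

f(u, p, t) = 1.01 * u   # drift
g(u, p, t) = 0.87 * u   # diagonal noise
prob = SDEProblem(f, g, 0.5, (0.0, 1.0))
sol = solve(prob, SRIW1())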

It allows arbitrary numbers and arrays like Boost. This means you can use arbitrary precision numbers in the native Julia methods, or you can use "array-like" objects to model multiscale biological organisms instead of always having to use simple contiguous arrays. It has addons for things like sensitivity analysis and parameter estimation. Also like Boost, it can solve equations on the GPU by using a GPUArray.
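As a small sketch of the arbitrary-number-type support, the native solvers will happily run entirely in BigFloat arithmetic (the tolerances here are just illustrative):

using DifferentialEquations

f(u, p, t) = -u
prob = ODEProblem(f, big"1.0", (big"0.0", big"1.0"))
sol = solve(prob, Vern9(), abstol = big"1e-30", reltol = big"1e-30")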

It hits the strong points of each of the previously mentioned suites because it was designed from the get-go to do so. And it benchmarks extremely well. The only downside is that, because it is so feature and performance focused, its documentation is heavy. The beginning tutorial will give you the basics (making it hopefully as easy as MATLAB to pick up), but pretty much every control knob is available, making the extended portion of the documentation a long read.

Conclusion

Let's take a step back and summarize this information. DifferentialEquations.jl objectively has the largest feature set, swamping most of the others while wrapping all of the common solvers. Since it also features solvers for the non-ordinary differential equations and its unique Julia methods also benchmark well, I think it's clear that DifferentialEquations.jl is by far the best choice for "power-users" who are looking for a comprehensive suite.

As for the other choices among scripting languages, MATLAB wasn't designed to have all of the most efficient methods, but it'll handle basic equations with delays and events and output good plots. R's deSolve is similar in most respects to MATLAB. SciPy's offering is lacking in comparison to MATLAB and R's due to the lack of event handling. But MATLAB/Python/R all have efficiency problems due to the fact that the user's function is written in the scripting language. JiTCODE and PyDSTool are two Python offerings that make the interface to the Fortran solvers more efficient than straight SciPy. Mathematica and Maple will do symbolic pre-calculations to speed things up and can JIT compile functions, along with offering pretty good event handling, and thus their wrappers are more like DifferentialEquations.jl in terms of flexibility and efficiency (and Mathematica has a few non-wrapper goodies mentioned as well). So in a pinch, when not all of the bells and whistles are necessary, each of these scripting language suites will get you by. Behind DifferentialEquations.jl, I would definitely put Mathematica's suite second for scripting languages, with everything else much further behind.

If you're already hardcore and writing in C++/Fortran, Sundials is a pretty good "one-stop-shop" to get everything you need, especially when you add in ARKODE. Still, you're going to have to write a lot of stuff yourself to make the rootfinding into an event handling interface, but if you put the work in this will do you well. Hairer's codes are a great set of classics which cover a wide variety of equations, and FATODE is the only non-DifferentialEquations.jl suite which offers a way to calculate sensitivity equations (and its sensitivity equations are more advanced). Any of these will do you well if you want to really get down-and-dirty in a compiled language and write a lot of the interfaces yourself, but they will be a sacrifice in productivity with no clear performance gain over the scripting language methods which also include some form of JIT compilation. With these in mind, I don't really see a purpose for the GSL or Boost suites, and the ODEPACK methods are generally outdated.

I hope this review helps you make a choice which is right for you.


Video Introduction to DifferentialEquations.jl

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/video-introduction-differentialequations-jl/

Videos can be much easier to follow than text (though they usually have fewer details!). So, here's a video introduction to DifferentialEquations.jl from JuliaCon. In this talk I walk through the major features of DifferentialEquations.jl via the tutorials in the documentation, highlighting usage details and explaining how to properly think about the code. I hope this helps make it easier to adopt DifferentialEquations.jl!


DifferentialEquations.jl 2.0: State of the Ecosystem

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/differentialequations-jl-2-0-state-ecosystem/

In this blog post I want to summarize what we have accomplished with DifferentialEquations' 2.0 release and detail where we are going next. I want to put the design changes and development work into a larger context so that everyone can better understand what has been achieved, and how we are planning to tackle our next challenges.

If you find this project interesting and would like to support our work, please star our Github repository. Thanks!

Now let’s get started.

DifferentialEquations.jl 1.0: The Core

Before we start talking about 2.0, let's first understand what 1.0 was all about. DifferentialEquations.jl 1.0 was about answering a single question: how can we put the wide array of differential equations into one simple and efficient interface? The result of this was the common interface explained in the first blog post. Essentially, we created one interface that could:

  1. Specify a differential equation problem
  2. Solve a differential equation problem
  3. Analyze a differential equation problem

The problem types, solve command, and solution interface were all introduced here as part of the unification of differential equations. Here, most of the work was on developing the core. DifferentialEquations.jl 1.0 was about having the core methods for solving ordinary differential equations, stochastic differential equations, and differential algebraic equations. There were some nice benchmarks to show that our core native solvers were on the right track, even besting well-known Fortran methods in terms of efficiency, but the key of 1.0 was the establishment of this high level unified interface and the core libraries for solving the problems.

DifferentialEquations.jl 2.0: Extended Capabilities

DifferentialEquations.jl 2.0 asked a unique question for differential equations libraries. Namely, "how flexible can a differential equations solver be?". This was motivated by an off-putting remark where someone noted that standard differential equations solvers were limited in their usefulness because many of the higher level analyses that people need to do cannot be done with a standard differential equations solver.

So okay, then we won't be a standard differential equations solver. But what do we need to do to make all of this possible? I gathered a list of things which were considered impossible to do with "blackbox" differential equations solvers. People want to model continuous equations for protein concentrations inside of each cell, but allow the number of cells (and thus the number of differential equations) to change stochastically over time. People want to model multiscale phenomena, and have discontinuities. Some "differential equations" may only be discontinuous changes of discrete values (like in Gillespie models). People want to solve equations with colored noise, and re-use the same noise process in other calculations. People want to solve the same ODE efficiently hundreds of times, and estimate parameters. People want to quantify the uncertainty and the sensitivity of their model. People want their solutions to conserve properties like energy.

People want to make simulations of reality more so than solve equations.

And this became the goal for DifferentialEquations.jl 2.0. But the sights were actually set a little higher. The underlying question was:

How do you design a differential equations suite such that it can have this “simulation engine” functionality, but also such that adding new methods automatically makes the method compatible with all of these features?

That is DifferentialEquations.jl 2.0. The previous DifferentialEquations.jl ecosystem blog post details the strategies we were going to employ to achieve this goal, but let me take a little bit of time to explain the solution that eventually resulted.

The Integrator Interface

The core of the solution is the integrator interface. Instead of just having an interface on the high-level solve command, the integrator interface is the interface on the core type. Everything inside of the OrdinaryDiffEq.jl, StochasticDiffEq.jl, and DelayDiffEq.jl packages (which will be referred to as the *DiffEq solvers) is actually just a function on the integrator type. This means that anything the solver can do, you can do by simply having access to the integrator type. Then, everything can be unified by documenting this interface.

This is a powerful idea. It makes development easy, since the devdocs just explain what is done internally to the integrator. Adding new differential equations algorithms is now simply adding a new perform_step dispatch. But this isn't just useful for development, it is useful for users too. Using the integrator, you can step one step at a time if you want, and do anything you want between steps. Resizing the differential equation is now just a function on the integrator type, since this type holds all of the cache variables. Adding discontinuities is just changing integrator.u.
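A minimal sketch of what this looks like from the user side, via the init/step! entry points of the integrator interface (names follow the current documented interface):

using OrdinaryDiffEq

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))
integrator = init(prob, Tsit5())

step!(integrator)   # take a single step
integrator.u        # inspect (or modify) the current state
integrator.t        # current time

# iterate over the remaining steps, doing whatever you want in between
for (u, t) in tuples(integrator)
    # e.g. log values, change integrator.u, resize, tweak options, ...
end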

But the key that makes this all work is Julia. In my dark past, I wrote some algorithms which used R’s S3 objects, and I used objects in numerical Python codes. Needless to say, these got in the way of performance. However, the process of implementing the integrator type was a full refactor from straight loops to the type format. The result was no performance loss (actually, there was a very slight performance gain!). The abstraction that I wanted to use did not have a performance tradeoff because Julia’s type system optimized its usage. I find that fact incredible.

But back to the main story: the event handling framework was re-built in order to take full advantage of the integrator interface, allowing the user to directly affect the integrator. This means that doubling the size of your differential equation the moment some value hits 1 is now a possibility. It also means you can cause your integration to terminate when "all of the bunnies" die. But this became useful enough that you might not want to just use it for traditional event handling (i.e. cause some effect when some function hits zero, which we call the ContinuousCallback), but you may just want to apply some effect after steps. The DiscreteCallback allows one to check a boolean function for true/false, and if true apply some function to the integrator. For example, we can use this to always apply a projection to a manifold at the end of each step, effectively preserving the order of the integration while also conserving model properties like energy or angular momentum.
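A small sketch of a DiscreteCallback: after each step, a boolean condition is checked and, if true, an arbitrary function is applied to the integrator; here the integration is terminated once a "bunny population" drops below 1.

using DifferentialEquations

decay!(du, u, p, t) = (du[1] = -0.5 * u[1])     # the bunny population decays
condition(u, t, integrator) = u[1] < 1          # checked after each step
affect!(integrator) = terminate!(integrator)    # stop the integration
cb = DiscreteCallback(condition, affect!)

prob = ODEProblem(decay!, [10.0], (0.0, 100.0))
sol = solve(prob, Tsit5(), callback = cb)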

The integrator interface and thus its usage in callbacks then became a way that users could add arbitrary functionality. It’s useful enough that a DiscreteProblem (an ODE problem with no ODE!) is now a thing. All that is done is the discrete problem walks through the ODE solver without solving a differential equation, just hitting callbacks.

But entirely new sets of equations could be added through callbacks. For example, discrete stochastic equations (or Gillespie simulations) are models where rate equations determine the time to the next discontinuity or “jump”. The JumpProblem types simply add callbacks to a differential (or discrete) equation that perform these jumps at specific rates. This effectively turns the “blank ODE solver” into an equation which can solve these models of discrete proteins stochastically changing their levels over time. In addition, since it’s built directly onto the differential equations solvers, mixing these types of models is an instant side effect. These models which mix jumps and differential equations, such as jump diffusions, were an immediate consequence of this design.
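Here is a sketch of what this looks like for a pure-jump (Gillespie-style) degradation model. The jump functionality now lives in the JumpProcesses.jl component (formerly DiffEqJump), so the names below reflect the current API rather than what existed when this post was written.

using DifferentialEquations

rate(u, p, t) = 0.5 * u[1]                     # propensity of the degradation reaction
affect!(integrator) = (integrator.u[1] -= 1)   # the jump removes one molecule
jump = ConstantRateJump(rate, affect!)

dprob = DiscreteProblem([100], (0.0, 10.0))    # a "blank" problem with no ODE
jprob = JumpProblem(dprob, Direct(), jump)     # attach the jump via callbacks
sol = solve(jprob, SSAStepper())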

The design of the integrator interface meant that dynamism of the differential equation (changing the size, the solver options, or any other property in the middle of solving) was successfully implemented, and handling of equations with discontinuities directly followed. This turned a lot of "not differential equations" into "models and simulations which can be handled by the same DifferentialEquations.jl interface".

Generic algorithms over abstract types

However, the next big problem was being able to represent a wider array of models. “Models and simulations which do weird non-differential equation things over time” are handled by the integrator interface, but “weird things which aren’t just a system of equations which do weird non-differential equation things over time” were still out of reach.

The solution here is abstract typing. The *DiffEq solvers accept two basic formats. Let’s stick to ODEs for the explanation. For ODEs, there is the out-of-place format

du = f(t,u)

where the derivative/change is returned by the function, and there is the in-place format

f(t,u,du)

where the function modifies the object du which stores the derivative/change. Both of these formats were generalized to the extreme. In the end, the requirements for a type to work in the out-of-place format can be described as the ability to do basic arithmetic (+,-,/,*), and you add the requirement of having a linear index (or simply having a broadcast! function defined) in order to satisfy the in-place format. If the method is using adaptivity, the user can pass an appropriate norm function to be used for calculating the norm of the error estimate.
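For concreteness, here is the same small system written in both formats. This sketch uses the f(u, p, t) argument order of recent versions, whereas the prose above uses the older (t, u) ordering.

# out-of-place: return the derivative (convenient for scalars and static arrays)
f(u, p, t) = [u[2], -u[1]]

# in-place: overwrite du, which avoids allocations for large systems
function f!(du, u, p, t)
    du[1] = u[2]
    du[2] = -u[1]
end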

This means that wild things work in the ODE solvers. I have already demonstrated arbitrary precision numbers, and unit-checked arithmetic.

But now there's even crazier stuff. Now different parts of your equation can have different units using the ArrayPartition. You can store and update discrete values along with your differential equation using the DEDataArray type. Just the other day I showed this can be used to solve problems where the variable is actually a symbolic mathematical expression. We are in the late stages of getting a type which represents a spectral discretization of a function compatible with the *DiffEq solvers.
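As a tiny sketch, an ArrayPartition (from RecursiveArrayTools.jl) glues heterogeneous components into one object that still supports the linear indexing and broadcasting the solvers need:

using RecursiveArrayTools

x = ArrayPartition([1.0, 2.0], [0.1f0])   # two components with different element types
x[3]       # linear indexing reaches into the second partition
2 .* x     # broadcasting returns another ArrayPartition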

But what about those “purely scientific non-differential equations” applications? A multiscale model of an embryo which has tissues, each with different populations of cells, and modeling the proteins in each cell? That’s just a standard application of the AbstractMultiScaleArray.

Thus using the abstract typing, even simulations which don’t look like systems of equations can now be directly handled by DifferentialEquations.jl. But not only that, since this is done simply via Julia’s generic programming, this compatibility is true for any of the new methods which are added (one caveat: if they use an external library like ForwardDiff.jl, their compatibility is limited by the compatibility of that external library).

Refinement of the problem types

The last two big ideas made it possible for a very large set of problems to be written down as a "differential equation on an array" in a much expanded sense of the term. However, there was another design problem to solve: not every algorithm could be implemented with "the information" we had! By "information" I mean the information we could get from the user. The ODEProblem type specified an ODE as

 \frac{du}{dt} = f(t,u)

but some algorithms do special things. For example, for the ODE

 \frac{du}{dt} = f(t,u) = Au + g(t,u)

the Lawson-Euler algorithm for solving the differential equation is

 u_{n+1} = \exp(A \Delta t)(u_n + g(t,u_n)\Delta t)

This method exploits the fact that it knows that the first part of the equation is Au for some matrix A, and uses this structure directly to improve the stability of the algorithm. However, if all we know is f, we could never implement this algorithm. This would violate our goal of "full flexibility at full performance" if this algorithm were the most efficient for the problem!

The solution is to have a more refined set of problem types. I discussed a bit at the end of the previous blog post that we could define things like splitting problems. The solution is quite general, where

 M \frac{du}{dt} = f_1(t,u) + f_2(t,u) + ... + f_n(t,u)

can be defined using the SplitODEProblem (M being a mass matrix). Then specific methods can request specific forms, like here the linear-nonlinear ODE. Together, the ODE solver can implement this algorithm for the ODE, and that implementation, being part of a *DiffEq solver, will have interpolations, the integrator interface, event handling, abstract type compatibility, etc. all for free. Check out the other “refined problem types”: these are capable of covering wild things like staggered grid PDE methods and symplectic integrators.
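As a sketch of how a split problem is specified today: the right-hand side is given as two functions, and an IMEX additive Runge-Kutta method such as KenCarp4 (a method added after this post was written) treats the first part implicitly and the second explicitly.

using OrdinaryDiffEq

f1(du, u, p, t) = (du .= -100.0 .* u)     # stiff part, handled implicitly
f2(du, u, p, t) = (du .= 2.0 .* sin(t))   # nonstiff forcing, handled explicitly
prob = SplitODEProblem(f1, f2, [1.0], (0.0, 1.0))
sol = solve(prob, KenCarp4())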

In addition to specifying the same equations in new ways, we created avenues for common analyses of differential equations which are not related to simulating them over time. For example, one common problem is to try to find steady states, or points where the differential equation satisfies f(u)=0. This can now easily be done by defining a SteadyStateProblem from an ODE, and then using the steady state solver. This new library will also lead to the implementation of accelerated methods for finding steady states, and the development of new accelerated methods. The steady state behavior can now also be analyzed using the bifurcation analysis tools provided by the wrapper to PyDSTool.
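A small sketch of the steady state interface: the problem reuses the ODE right-hand side, and a solver such as DynamicSS (which simply evolves the ODE until it stops changing) returns the fixed point. Names follow the current SteadyStateDiffEq.jl API, which may differ slightly from what existed at the time of writing.

using DifferentialEquations

f(du, u, p, t) = (du .= 2.0 .- u)       # relaxation toward u = 2
prob = SteadyStateProblem(f, [0.0])
sol = solve(prob, DynamicSS(Tsit5()))   # sol.u is approximately [2.0]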

Lastly, the problem types themselves have become much more expressive. In addition to solving the standard ODE, one can specify mass matrices in any appropriate DE type, to instead solve the equation

 M \frac{du}{dt} = f(t,u)

where M is some linear operator (similarly in DDEs and SDEs). While the vast majority of solvers are not able to use M right now, this infrastructure is there for algorithms to support it. In addition, one can now specify the noise process used in random and stochastic equations, allowing the user to solve problems with colored noise. Using the optional fields, a user can define non-diagonal noise problems, and specify sparse noise problems using sparse matrices.
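For example, the classic Robertson problem can be posed as a mass-matrix DAE: the singular last row of M turns the third equation into an algebraic constraint. The ODEFunction/mass_matrix spelling below is the present-day API; the exact specification mechanism has evolved since this post.

using DifferentialEquations

function rober!(du, u, p, t)
    y1, y2, y3 = u
    du[1] = -0.04y1 + 1e4 * y2 * y3
    du[2] =  0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
    du[3] =  y1 + y2 + y3 - 1.0       # algebraic constraint row
end

M = [1.0 0 0; 0 1.0 0; 0 0 0]
f = ODEFunction(rober!, mass_matrix = M)
prob = ODEProblem(f, [1.0, 0.0, 0.0], (0.0, 1e5))
sol = solve(prob, Rodas5())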

As of now, only some very basic methods using all of this infrastructure have been made for the most extreme examples for testing purposes, but these show that the infrastructure works and this is ready for implementing new methods.

Common solve extensions

Okay, so once we can specify efficient methods for weird models which evolve over time in weird ways, we can simulate and get what solutions look like. Great! We have a tool that can be used to get solutions! But… that’s only the beginning of most analyses!

Most of the time, we are simulating solutions to learn more about the model. If we are modeling chemical reactions, what is the reaction rate that makes the model match the data? How sensitive is our climate model to our choice of the albedo coefficient?

To back out information about the model, we rely on analysis algorithms like parameter estimation and sensitivity analysis. However, the common solve interface acts as the perfect level for implementing these algorithms because they can be done problem and algorithm agnostic. I discuss this in more detail in a previous blog post, but the general idea is that most of these algorithms can be written in terms of some y(t) which is the solution of a differential equation. Thus we can write the analysis algorithms at a very high level and allow the user to pass in the arguments for a solve command, and use that to generate the y(t). The result is an implementation of the analysis algorithm which works with any of the problems and methods which use the common interface. Again, chaining all of the work together to get one more complete product. You can see this in full force by looking at the parameter estimation docs.

Modeling Tools

In many cases one is solving differential equations not for their own sake, but to solve scientific questions. To this end, we created a framework for modeling packages which make this easier. The financial models make it easy to specify common financial equations, and the biological models make it easy to specify chemical reaction networks. This functionality all works on the common solver / integrator interface, meaning that models specified in these forms can be used with the full stack and analysis tools. Also, I would like to highlight BioEnergeticFoodWebs.jl as a great modeling package for bio-energetic food web models.

Over time, we hope to continue to grow these modeling tools. The financial tools I hope to link with Julia Computing's JuliaFin tools (Miletus.jl) in order to make it easy to efficiently solve the SDE and PDE models which result from their financial DSL. In addition, DiffEqPhysics.jl is planned to make it easy to specify the equations of motion just by giving a Hamiltonian or Lagrangian, or by giving the particles + masses and automatically developing a differential equation. I hope that we can also tackle domains like mechanical systems and pharmacokinetics/pharmacodynamics to continually expand what can easily be solved using this infrastructure.

DifferentialEquations 2.0 Conclusion

In the end, DifferentialEquations 2.0 was about finding the right infrastructure such that pretty much anything CAN be specified and solved efficiently. While there were some bumps along the road (that caused breaking API changes), I believe we came up with a very good solution. The result is a foundation which feeds back onto itself, allowing projects like parameter estimation of multiscale models which change size due to events to be standard uses of the ODE solver.

And one of the key things to note is that this follows by design. None of the algorithms were specifically written to make this work. The design of the *DiffEq packages gives interpolation, event handling, compatibility with analysis tools, etc. for free for any algorithm that is implemented in it. One contributor, @ranocha, came to chat in the chatroom and a few hours later had implemented 3 strong stability preserving Runge-Kutta methods (methods which are efficient for hyperbolic PDEs) in the *DiffEq solvers. All of this extra compatibility followed for free, making it a simple exercise. And that leads me to DifferentialEquations 3.0.

DifferentialEquations 3.0: Stiff solvers, parallel solvers, PDEs, and improved analysis tools

1.0 was about building the core. 2.0 was about making sure that the core packages were built in a way that could be compatible with a wide array of problems, algorithms, and analysis tools. However, in many cases, only the simplest of each type of algorithm was implemented since this was more about building out the capabilities than it was to have completeness in each aspect. But now that we have expanded our capabilities, we need to fill in the details. These details are efficient algorithms in the common problem domains.

Stiff solvers

Let's start by talking about stiff solvers. As of right now, we have all of the standard solvers (CVODE, LSODA, radau) wrapped in the packages Sundials.jl, LSODA.jl, and ODEInterface.jl respectively. These can all be used in the DifferentialEquations.jl common interface, meaning that it's mostly abstracted away from the user that these aren't actually Julia codes. However, these lower level implementations will never be able to reach the full flexibility of the native Julia solvers simply because they are restricted in the types they use and don't fully expose their internals. This is fine, since our benchmarks against the standard Runge-Kutta implementations (dopri5, dop853) showed that the native Julia solvers, being more modern implementations, can actually have performance gains over these older methods. But we still need to get our own implementations of these high order stiff solvers.

Currently there exists the Rosenbrock23 method. This method is similar to the MATLAB ode23s method (it is the order 2/3 Shampine-Rosenbrock method). This method is A- and L-stable, meaning it's great for stiff equations. It was thus used for testing the capabilities and restrictions of event handling, parameter estimation, etc. with the coming set of stiff solvers. However, where it lacks is order. As an order 2 method, it is only efficient at higher error tolerances, and thus for "standard tolerances" it tends not to be competitive with the other methods mentioned before. That is why one of our main goals in DiffEq 3.0 will be the creation of higher order methods for stiff equations.

The main obstacle here will be the creation of a library for making the differentiation easier. There are lots of details involved here. Since a function defined using the macros of ParameterizedFunctions can symbolically differentiate the user's function, in some cases a pre-computed function for the inverted or factorized Jacobian can be used to make a stiff method explicit. In other cases, we need autodifferentiation, and in some we need to use numerical differentiation. This is all governed by a system of traits set up behind the scenes, and thus proper logic for determining and using Jacobians can immensely speed up our calculations.

The Rosenbrock23 method did some of this in an ad hoc way within its own method, but it was determined that the method would be greatly simplified if there were some library that could handle this. In fact, if there were a library to handle this, then the Rosenbrock23 code for defining steps would be as simple as defining steps for explicit RK methods. The same would be true for implicit RK methods like radau. Thus we will be doing just that: building a library which handles all of the differentiation logic. The development of this library, DiffEqDiffTools.jl, is @miguelraz's Google Summer of Code project. Thus with the completion of this project (hopefully summer?), efficient and fully compatible high order Rosenbrock methods and implicit RK methods will easily follow. Also included will be additive Runge-Kutta methods (IMEX RK methods) for SplitODEProblems. Since these methods double as native Julia DAE solvers, and since this code will enable the development of stiff solvers for SDEs, this will be a major win for the ecosystem on many fronts.

Stiffness Detection and Switching

In many cases, the user may not know if a problem is stiff. In many cases, especially in stochastic equations, the problem may be switching between being stiff and non-stiff. In these cases, we want to change the method of integration as we go along. The general setup for implementing switching methods has already been implemented by the CompositeAlgorithm. However, current usage of the CompositeAlgorithm requires that the user define the switching behavior. This makes it quite difficult to use.

Instead, we will be building methods which make use of this infrastructure. Stiffness detection estimates can be added to the existing methods (in a very efficient manner), and could be toggled on/off. Then standard switching strategies can be introduced such that the user can just give two algorithms, a stiff and a non-stiff solver, and basic switching can then occur. Whatever strategies are deemed optimal can then be implemented as standard algorithm choices. Then at the very top, these methods can be added as defaults for solve(prob), making the fully automated solver efficiently handle difficult problems. This will be quite a unique feature and is borderline a new research project. I hope to see some really cool results.
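For reference, here is a sketch of what using the existing CompositeAlgorithm infrastructure looks like today: the user supplies the (deliberately crude) switching rule, which is exactly the burden the automated stiffness detection described above is meant to remove.

using OrdinaryDiffEq

prob = ODEProblem((du, u, p, t) -> (du .= -u), [1.0], (0.0, 10.0))

choice(integrator) = integrator.dt < 1e-3 ? 2 : 1   # 1 = nonstiff alg, 2 = stiff alg
alg = CompositeAlgorithm((Tsit5(), Rosenbrock23()), choice)
sol = solve(prob, alg)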

Parallel-in-time ODE/BVP solvers

While traditional methods (Runge-Kutta, multistep) all step one time point at a time, in many cases we want to use parallelism to speed up our problem. It’s not hard to buy an expensive GPU, and a lot of us already have one for optimization, so why not use it?

Well, parallelism for solving differential equations is very difficult. Let’s take a look at some quick examples. In the Euler method, the discretization calculates the next time step u_{n+1} from the previous time step u_n using the equation

u_{n+1} = u_n + \Delta t f(t_n,u_n)

In code, this is the update step

u .= uprev .+ dt.*f(t,uprev)

I threw in the .'s to show this is broadcasted over some arrays, i.e. for systems of equations u is a vector. And that's it: that's the entire inner loop. The most you can parallelize are the loops internal to the broadcasts. This means that for very large problems you can parallelize this method efficiently (this form is called parallelism within the method). Also, if your input vector is a GPUArray, this will broadcast using CUDA or OpenCL. However, if your problem is not a sufficiently large vector, this parallelism will not be very efficient.
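For concreteness, here is a minimal, purely illustrative fixed-step Euler integrator in this style; the broadcasted update is the only place parallelism can enter, which is why this only pays off when u is large (or lives on a GPU).

# Toy fixed-step Euler integrator; f! writes the derivative into du in-place
function euler!(u, f!, tspan, dt)
    t  = tspan[1]
    du = similar(u)
    while t < tspan[2]
        f!(du, u, t)
        @. u = u + dt * du       # the broadcast is the only parallelizable piece
        t += dt
    end
    return u
end

# Example: a million independent linear decays u' = -u
u0 = ones(1_000_000)
decay!(du, u, t) = (@. du = -u)
euler!(u0, decay!, (0.0, 1.0), 1e-3)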

Similarly, for implicit methods we need to repeatedly solve (I - \Delta t J)u = b where J is the Jacobian matrix. This linear solve only parallelizes well if the Jacobian matrix is sufficiently large. But many small differential equation problems can still be very difficult: think about solving a very stiff ODE with a few hundred variables. There the issue is that we are stepping serially over time, and we need completely different algorithms which parallelize over time.

One of these approaches is the collocation approach. Collocation methods build a very large nonlinear equation F(X)=0 which describes a numerical method over all time points at once, and simultaneously solve for all of the time points using a nonlinear solver. Internally, a nonlinear solver repeatedly solves linear systems Ax=b with a very large A. Thus, if the user passes in a custom linear solver method, say one using PETSc.jl or CUSOLVER, this can be parallelized efficiently over many nodes of an HPC cluster or over a GPU. In fact, these methods are the standard methods for Boundary Value Problems (BVPs). The development of these methods is the topic of @YingboMa's Google Summer of Code project. While written for BVPs, these same methods can then solve IVPs with a small modification (including stochastic differential equations).
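To make the "one big nonlinear system" idea concrete, here is a tiny serial sketch of trapezoidal collocation for a scalar IVP, using NLsolve.jl as a stand-in for whatever nonlinear/linear solver stack (PETSc.jl, CUSOLVER, ...) would be used in practice. None of this is the GSoC project's actual API.

using NLsolve

# u' = f(t,u), u(0) = u0, on a fixed grid; every time point is an unknown and
# all of them are solved for simultaneously from one residual F(U) = 0.
f(t, u) = -u + sin(t)
u0, T, N = 1.0, 10.0, 200
ts = range(0, T, length = N + 1)
dt = step(ts)

function residual!(R, U)
    R[1] = U[1] - u0                                  # initial condition
    for n in 1:N                                      # trapezoidal rule per interval
        R[n+1] = U[n+1] - U[n] - dt/2 * (f(ts[n], U[n]) + f(ts[n+1], U[n+1]))
    end
end

sol = nlsolve(residual!, fill(u0, N + 1))             # the parallelism lives in the
U   = sol.zero                                        # linear algebra inside this call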

By specifying an appropriate preconditioner with the linear solver, these can be some of the most efficient parallel methods. When no good preconditioner is found, these methods can be less efficient. One may wonder, then, whether there exists a different approach, one which may sacrifice some "theoretical top performance" in order to be better in the "low user input" case (purely automatic). There is! Another approach to solving the parallelism-over-time issue is to use a neural network. This is the topic of @akaysh's Google Summer of Code project. Essentially, you can define a cost function which is the difference between the numerical derivative and f(t_i,u_i) at each time point. This gives an optimization problem: find the u_i at each time point such that the difference between the numerical and the desired derivatives is minimized, and solve that minimization using a neural network. The neat thing here is that neural nets are very efficient to parallelize over GPUs, meaning that even for somewhat small problems we can get very good parallelism. These neural nets can use very advanced methods from packages like Knet.jl to optimize efficiently and in parallel with very little required user input (no preconditioner to set). There really isn't another standard differential equations solver package which has a method like this, so it's hard to guess how efficient it will be in advance. But given the properties of this setup, I suspect this should be a very good "automatic" method for medium-sized (hundreds of variables) differential equations.

The really great thing about these parallel-in-time methods is that they are inherently implicit, meaning that they can be used even on extremely stiff equations. There are also simple extensions that carry these solvers over to SDEs and DDEs. So add this to the bank of efficient methods for stiff diffeqs!

Improved methods for stochastic differential equations

As part of 3.0, the hope is to release brand new methods for stochastic differential equations. These methods will be high order and highly stable, some explicit and some implicit, and will have adaptive timestepping. That is all the detail I am giving until these methods are published, but I do want to tease that many types of SDEs will become much more efficient to solve.

Improved methods for jump equations

For jump equations, in order to show that everything is complete and can work, we have so far only implemented the Gillespie method. However, we hope to add many different forms of tau-leaping and Poisson(/Binomial) Runge-Kutta methods for these discrete stochastic equations. Our roadmap is here and it seems there may be a great deal of participation to complete this task. Additionally, we plan on having a specialized DSL for defining chemical reaction networks and automatically turning them into jump problems or ODE/SDE systems.
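For reference, the Gillespie direct method itself is only a few lines; here is a bare-bones sketch for a single reversible reaction A <-> B (illustrative only, not the jump-problem API).

# Gillespie direct method (SSA) for A <-> B with rates k1 (A -> B) and k2 (B -> A)
function gillespie(nA, nB, k1, k2, T)
    t = 0.0
    while t < T
        a1 = k1 * nA                  # propensity of A -> B
        a2 = k2 * nB                  # propensity of B -> A
        a0 = a1 + a2
        a0 == 0 && break
        t += -log(rand()) / a0        # exponentially distributed waiting time
        if rand() * a0 < a1           # choose which reaction fires
            nA -= 1; nB += 1
        else
            nA += 1; nB -= 1
        end
    end
    return nA, nB
end

gillespie(100, 0, 1.0, 0.5, 10.0)

Tau-leaping and the Poisson/Binomial Runge-Kutta methods replace this one-reaction-at-a-time loop with larger leaps, which is where the big speedups come from.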

Geometric and symplectic integrators

In DifferentialEquations.jl 2.0, the ability to specify partitioned ODEs for dynamical problems (or to directly specify a second order ODE problem) was added. However, only a symplectic Euler method has been added to solve these equations so far. This was used to make sure the *DiffEq solvers were compatible with this infrastructure, and showed that event handling, resizing, parameter estimation, etc. all work together on these new problem types. But obviously we need more algorithms. Velocity Verlet and higher order Nyström methods are asking to be implemented. This isn't difficult, for the reasons described above, and will be a very nice boost to DifferentialEquations.jl 3.0.
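For the second order (dynamical) problems q'' = f(q), velocity Verlet itself is only a few lines; a rough standalone sketch (not the *DiffEq solver code) looks like this.

# Velocity Verlet for q'' = f(q), where f returns the acceleration
function velocity_verlet(f, q, v, dt, nsteps)
    a = f(q)
    for _ in 1:nsteps
        q    = q + dt*v + dt^2/2 * a
        anew = f(q)
        v    = v + dt/2 * (a + anew)
        a    = anew
    end
    return q, v
end

# Example: harmonic oscillator q'' = -q; the energy error stays bounded over long times
velocity_verlet(q -> -q, 1.0, 0.0, 0.01, 100_000)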

(Stochastic/Delay) Partial differential equations

Oh boy, here's a big one. Everyone since the dawn of time has wanted me to focus on building a method that makes solving the PDE that they're interested in dead simple to do. We have a plan for how to get there. The design is key: instead of implementing numerical methods one-by-one for each PDE, we need a way to pool the efforts together and make an implementation for one PDE matter for other PDEs.

Let’s take a look at how we can do this for efficient methods for reaction-diffusion equations. In this case, we want to solve the equation

 u_t = \Delta u + f(t,u)

The first step is always to discretize this over space. Each of the different spatial discretization methods (finite difference, finite volume, finite element, spectral) ends up with some equation of the form

 U_t = AU + f(t,U)

where now U is a vector of points in space (or a discretization in some basis). At this point, a specialized numerical method for stepping this equation efficiently in time can be written. For example, if diffusion is fast and f is stiff, one really efficient method is the implicit integrating factor method. This would solve the equation by updating the time points like:

U_{n+1} = e^{A\Delta t}U_n + \Delta t f(t_{n+1},U_{n+1})

where we have to solve this implicit equation each time step. The nice thing is that the implicit equation decouples in space, and so we actually only need to solve a bunch of very small implicit equations (one per spatial point).
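As a small illustration of that decoupling (purely a sketch under simplifying assumptions, not the planned solver code), here is a first-order implicit integrating factor step for a 1D reaction-diffusion semi-discretization, where the implicit stage reduces to one scalar Newton solve per grid point.

using LinearAlgebra

# Semi-discretized u_t = u_xx + f(u) on N interior points of (0,1), Dirichlet boundaries
N  = 64; dx = 1/(N+1); dt = 0.01
A  = SymTridiagonal(fill(-2.0, N), fill(1.0, N-1)) / dx^2
f(u)  = u - u^3                  # pointwise reaction term
df(u) = 1 - 3u^2

E = exp(dt * Matrix(A))          # dense matrix exponential; fine for a small demo
U = rand(N)

for _ in 1:100
    b = E * U                    # the stiff linear part is handled exactly
    for i in eachindex(U)        # the implicit reaction part decouples in space:
        u = b[i]                 # one scalar Newton solve per grid point
        for _ in 1:5
            u -= (u - dt*f(u) - b[i]) / (1 - dt*df(u))
        end
        U[i] = u
    end
end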

How can we do this in a way that is not “specific to the heat equation”? There were two steps here, the first is discretizing in space, the second is writing an efficient method specific to the PDE. The second part we already have an answer for: this numerical method can be written as one of the methods for linear/nonlinear SplitODEProblems that we defined before. Thus if we just write a SplitODEProblem algorithm that does this form of updating, it can be applied to any ODE (and any PDE discretization) which splits out a linear part. Again, because it’s now using the ODE solver, all of the extra capabilities (event handling, integrator interface, parameter estimation tools, interpolations, etc.) all come for free as well. The development of ODE/SDE/DDE solvers for handling this split, like implicit integrating factor methods and exponential Runge-Kutta methods, is part of DifferentialEquations.jl 3.0’s push for efficient (S/D)PDE solvers.

So with that together, we just need to solve the discretization problem. First let's talk about finite differences. For the heat equation with a fixed grid size \Delta x, many people know what the second-order discretization matrix A is in advance. However, what if you have variable grid sizes, and want different order discretizations of different derivatives (say a third derivative)? In this case the Fornberg algorithm can be used to construct this A. And instead of making this an actual matrix, because it is sparse we can make this very efficient by creating a "matrix-free type" where AU acts like the appropriate matrix multiplication, but in reality no matrix is ever created. This can save a lot of memory and make the multiplication a lot more efficient by ignoring the zeros. In addition, because of the reduced memory requirement, we can easily distribute this operator to the GPU or across a cluster, and make the AU function utilize this parallelism.
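As a minimal example of such a matrix-free type (illustrative only; the GSoC operators will be far more general), here is a lazy second-derivative operator on a uniform grid that supports A*U without ever storing a matrix.

# Lazy central-difference second derivative with zero (Dirichlet) boundary values
struct Laplacian1D
    dx::Float64
end

function Base.:*(L::Laplacian1D, U::AbstractVector)
    n  = length(U)
    AU = similar(U)
    for i in 1:n
        left  = i == 1 ? zero(eltype(U)) : U[i-1]
        right = i == n ? zero(eltype(U)) : U[i+1]
        AU[i] = (left - 2U[i] + right) / L.dx^2
    end
    return AU
end

A = Laplacian1D(0.01)
U = rand(99)
A * U       # acts like the tridiagonal matrix, but no matrix is ever created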

The development of these efficient linear operators is the goal of @shivin9’s Google Summer of Code project. The goal is to get a function where the user can simply specify the order of the derivative and the order of the discretization, and it will spit out this highly efficient A to be used in the discretization, turning any PDE into a system of ODEs. In addition, other operators which show up in finite difference discretizations, such as the upwind scheme, can be encapsulated in such an A. Thus this project would make turning these types of PDEs into efficient ODE discretizations much easier!

The other very popular form of spatial discretization is the finite element method. For this form, you choose some basis of functions over space and discretize with respect to that basis. The choice of basis and its discretization then determines what the discretized derivative operators A should be. However, there is a vast array of different choices for the basis and the discretization, and if we wanted to create a toolbox which implements all of what's possible, we wouldn't get anything else done. Thus we will instead, at least for now, piggyback off of the efforts of FEniCS. FEniCS is a toolbox for the finite element method. Using PyCall, we can build an interface to FEniCS that makes it easy to receive the appropriate linear operator A (usually a sparse matrix) that arises from the desired discretization. This, the development of FEniCS.jl, is the topic of @ysimillides's Google Summer of Code. The goal is to make this integration seamless, so that getting ODEs out of finite element discretizations is a painless process, once again reducing the problem to solving ODEs.

The last form of spatial discretization is spectral discretization. These can be handled very well using the Julia library ApproxFun.jl. All that is required is to make it possible to step the equations defined using ApproxFun types forward in time. This is the goal of DiffEqApproxFun.jl. We already have large portions of this working, and for fixed basis lengths the ApproxFunProblems can actually be solved using any ODE solver (not just native Julia solvers, but also methods from Sundials and ODEInterface work!). This will get touched up soon and will be another type of PDE discretization which is efficient and readily available.

Improved Analysis Tools

What was described above is how we are planning to solve very common difficult problems with high efficiency while simplifying the problems for the user, all without losing functionality. However, the tools at the very top of the stack, the analysis tools, can also become much more efficient. This is the other focus of DifferentialEquations.jl 3.0.

Local sensitivity analysis is nice because it not only tells you how sensitive your model is to the choice of parameters, but it gives this information at every time point. However, in many cases this is overkill, and it makes the problem much more computationally difficult. If we wanted to classify parameter space, for example to answer the question "where is the model least sensitive to parameters?", we would have to solve the local sensitivity equations many times. When this is the question we wish to answer, global sensitivity analysis methods are much more efficient. We plan on adding methods like the Morris method so that sensitivities can be classified globally.
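A stripped-down version of the Morris elementary-effects idea (one-at-a-time perturbations from random base points, averaged over many samples) looks roughly like this; the model here is just a placeholder.

# Morris-style screening: mean absolute elementary effect per parameter
function morris(model, lb, ub; r = 50, dfrac = 0.1)
    k       = length(lb)
    delta   = dfrac .* (ub .- lb)
    mu_star = zeros(k)
    for _ in 1:r
        p  = lb .+ rand(k) .* (ub .- lb)        # random base point in the box
        y0 = model(p)
        for i in 1:k                            # perturb one parameter at a time
            q = copy(p); q[i] += delta[i]
            mu_star[i] += abs(model(q) - y0) / delta[i]
        end
    end
    return mu_star ./ r
end

# Toy model: the output is very sensitive to p[1] and barely to p[2]
toy(p) = 100p[1] + 0.1p[2]^2
morris(toy, [0.0, 0.0], [1.0, 1.0])

The production version would use proper Morris trajectories and also report the spread of the elementary effects, but the interpretation is the same: large values flag the parameters the model is globally sensitive to.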

In addition, we really need better parameter estimation functionality. What we have is very good: you can build an objective function for your parameter estimation problem and then use Optim.jl, BlackBoxOptim.jl, or any MathProgBase/JuMP solver (for example NLopt.jl) to optimize the parameters. This is great, but it's very basic. In many cases more advanced methods are necessary in order to get convergence. Using likelihood functions instead of directly solving the nonlinear regression can often be more efficient. Also, in many cases statistical methods (such as the two-stage method) can be used to approximately optimize parameters without solving the differential equations repeatedly, a huge win for performance. Additionally, Bayesian methods will not only give optimal parameters, but also distributions for the parameters which the user can use to quantify how certain they are about the estimation results. The development of these methods is the focus of @Ayush-iitkgp's Google Summer of Code project.
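To show the basic shape of the current "build an objective, hand it to an optimizer" workflow, here is a rough least-squares sketch. The problem/parameter API has changed across DifferentialEquations.jl versions, so treat the details (in particular the f(du,u,p,t) signature and remake) as assumptions rather than the exact interface from this release.

using DifferentialEquations, Optim

# Logistic growth with unknown parameters p = (r, K)
f(du, u, p, t) = (du[1] = p[1] * u[1] * (1 - u[1] / p[2]))

ts    = 0.0:0.5:10.0
ptrue = [1.2, 3.0]
prob  = ODEProblem(f, [0.1], (0.0, 10.0), ptrue)
data  = Array(solve(prob, Tsit5(), saveat = ts))      # synthetic "measurements"

# Objective: re-solve the ODE with candidate parameters and compare against the data
function loss(p)
    sol = solve(remake(prob, p = p), Tsit5(), saveat = ts)
    return sum(abs2, Array(sol) .- data)
end

result = Optim.optimize(loss, [0.5, 1.0], NelderMead())
Optim.minimizer(result)

Likelihood-based, two-stage, and Bayesian approaches all slot in at the same place: they change what loss looks like (or replace the optimizer entirely) while the ODE solve stays the same.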

DifferentialEquations.jl 3.0 Conclusion

2.0 was about building infrastructure. 3.0 is about filling out that infrastructure and giving you the most efficient methods in each of the common problem domains.

DifferentialEquations.jl 4.0 and beyond

I think 2.0 puts us in a really great position. We have a lot, and the infrastructure allows us to keep expanding, adding more and more algorithms to handle different problem types more efficiently. But there are some things which are not slated in the 3.0 docket. One thing that keeps getting pushed back is the automatic promotion of problem types. For example, if you specified a SplitODEProblem and you want to use an algorithm which wants an ODEProblem (say, a standard Runge-Kutta algorithm like Tsit5), it's conceivable that this conversion could be handled automatically. Also, since you can directly convert an ODEProblem into a SteadyStateProblem, it's conceivable that the steady state solvers should work directly on an ODEProblem as well. However, with development time always being a constraint, I am planning on spending more time developing new efficient methods for these new domains rather than the automatic conversions. That said, if someone else is interested in tackling this problem, it could probably be addressed much sooner!

Additionally, there is one large set of algorithms which have not been addressed in the 3.0 plan: multistep methods. I have been holding off on these for a few reasons. For one, we have wrappers to Sundials, DASKR, and LSODA which supply well-known and highly efficient multistep methods. However, these wrappers, having the internals not implemented in Julia, are limited in their functionality. They will never be able to support arbitrary Julia types and will be constrained to equations on contiguous arrays of Float64s. Additionally, the interaction with Julia’s GC makes it extremely difficult to implement the integrator interface and thus event handling (custom allocators would be needed). Also, given what we have seen with other wrappers, we know we can actually improve the efficiency of these methods.

But lastly, and something I think is important, these methods are actually only efficient in a small but important problem domain. When the ODE f is not "expensive enough", implicit Runge-Kutta and Rosenbrock methods are more efficient. When it's a discretization of a PDE and there exists a linear operator, exponential Runge-Kutta and implicit integrating factor methods are more efficient. Also, if there are lots of events or other dynamic behavior, multistep methods have to "restart". This is an expensive process, and so in most cases using a one-step method (any of the previously mentioned methods) is more efficient. This means that multistep methods like the Adams and BDF (/NDF) families are really only the most efficient when you're talking about a large spatial discretization of a PDE which doesn't have a linear operator you can take advantage of, and events are non-existent/rare. Don't get me wrong, this is still a very important case! But, given the large number of wrappers which handle this quite well already, I am not planning on tackling these until the other parts are completed. Expect the *DiffEq infrastructure to support multistep methods in the future (actually, there's already quite a bit of support in there; just the adaptive order and adaptive time step need to be made possible), but don't count on it in DifferentialEquations 3.0.

Also not part of 3.0 but still of importance is stochastic delay differential equations. The development of a library for handling these equations can follow in the same manner as DelayDiffEq.jl, but likely won’t make it into 3.0 as there are more pressing (more commonly used) topics to address first. In addition, methods for delay equations with non-constant lags (and neutral delay equations) also will likely have to wait for 4.0.

In the planning stages right now is a new domain-specific language for the specification of differential equations. The current DSL, the @ode_def macro in ParameterizedFunctions.jl, does great for the problems that it can handle (ODEs and diagonal SDEs). However, there are many good reasons to want to expand the capabilities here. For example, equations defined by this DSL can be symbolically differentiated, leading to extremely efficient code even for stiff problems. In addition, one could theoretically "split the ODE function" to automatically turn the problem into a SplitODEProblem with a stiff and a non-stiff part suitable for IMEX solvers. If PDEs can also be written in the same syntax, then the PDEs can be automatically discretized and solved using the tools from 2.0. Additionally, one can think about automatically reducing the index of DAEs, and about specifying DDEs.

This all sounds amazing, but it will need a new DSL infrastructure. After a discussion to find out what kind of syntax would be necessary, it seems that a major overhaul of the @ode_def macro would be needed in order to support all of this. The next step will be to provide a new library, DiffEqDSL.jl, for this enhanced DSL. As described in the previously linked roadmap discussion, this DSL will likely take a form closer in style to JuMP, and the specs seem compatible with reaching the above-mentioned goals. Importantly, this will be developed as a new DSL, and thus the current @ode_def macro will be unchanged. This is a large development which will most likely not be in 3.0, but please feel free to contribute to the roadmap discussion which is now being continued at the new repository.

Conclusion

DifferentialEquations.jl 1.0 was about establishing a core that could unify differential equations. 2.0 was about developing the infrastructure to tackle a very vast domain of scientific simulations which are not easily or efficiently written as differential equations. 3.0 will be about producing efficient methods for the most common sets of problems which haven't been adequately addressed yet. This will put the ecosystem in a very good state and hopefully make it a valuable tool for many people. After this, 4.0+ will keep adding algorithms, expand the problem domain some more, and provide a new DSL.
