6 Months of DifferentialEquations.jl: Where We Are and Where We Are Going

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/6-months-differentialequations-jl-going/

So around 6 months ago, DifferentialEquations.jl was first registered. It was at first made to be a library which can solve “some” types of differential equations, and that “some” didn’t even include ordinary differential equations. The focus was mostly fast algorithms for stochastic differential equations and partial differential equations.

Needless to say, Julia makes you too productive. Ambitions grew. By the first release announcement, much had already changed. Not only were there ordinary differential equation solvers, there were many. But the key difference was a change in focus. Instead of just looking to give a production-quality library of fast methods, a major goal of DifferentialEquations.jl became to unify the various existing packages of Julia to give one user-friendly interface.

Since that release announcement, we have made enormous progress. At this point, I believe we have both the most expansive and flexible differential equation library to date. I would like to take this time to explain the overarching design, where we are, and what you can expect to see in the near future.

(Note before we get started: if you like what you see, please star the DifferentialEquations.jl repository. I hope to use this to show interest in the package so that one day we can secure funding. Thank you for your support!)

JuliaDiffEq Structure

If you take a look at the source for DifferentialEquations.jl, you will notice that almost all of the code is gone. What happened?

The core idea behind this change is explained in another blog post on modular interfaces for scientific computing in Julia. The key idea is that we built an interface which is “package-independent”, meaning that the same lines of code can call solvers from different packages. There are many advantages to this which will come up later in the section which talks about the “addon packages”, but one organizational advantage is that it lets us split up the repositories as needed. The core differential equation solvers from DifferentialEquations.jl reside in OrdinaryDiffEq.jl, StochasticDiffEq.jl, etc. (you can see more at our webpage). Packages like Sundials.jl, ODEInterface.jl, ODE.jl, etc. all have bindings to this same interface, making them all work similarly.

One interesting thing about this setup is that you are no longer forced to contribute to these packages in order to contribute to the ecosystem. If you are a developer or a researcher in the field, you can develop your own package with your own license which has a common interface binding, and you will get the advantages of the common interface without any problems. This may be necessary for some researchers, and so we encourage you to join in and contribute as you please.

The Common Interface

Let me then take some time to show you what this common interface looks like. To really get a sense, I would recommend checking out the tutorials in the documentation and the extra Jupyter notebook tutorials in DiffEqTutorials.jl. The idea is that solving differential equations always has 3 steps:

  1. Defining a problem.
  2. Solving the problem.
  3. Analyzing the solution.

Defining a problem

What we created was a generic type structure where dispatch handles the details. For defining an ODE problem, one specifies the function for the ODE, the initial condition, and the timespan that the problem is to be solved on. The ODE

 u' = f(t,u)

with an initial condition u_0 and timespan (t_0,t_f) is then written as:

prob = ODEProblem(f,u0,(t0,tf))

There are many different problem types for different types of differential equations. Currently we have types (and solvers) for ordinary differential equations, stochastic differential equations, differential algebraic equations, and some partial differential equations. Later in the post I will explain how this is growing.
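
For example, here is a minimal sketch of a concrete problem definition (the linear test equation and the constants are just for illustration):

using DifferentialEquations

f(t,u) = 1.01u                # the ODE u' = 1.01u
u0 = 1/2                      # initial condition
tspan = (0.0,1.0)             # solve from t=0 to t=1
prob = ODEProblem(f,u0,tspan)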

Solving the problem

To solve the problem, the common solve command is:

sol = solve(prob,alg;kwargs...)

where alg is a type-instance for the algorithm. It is by dispatch on alg that the package is chosen. For example, we can call the 14th-Order Feagin Method from OrdinaryDiffEq.jl via

sol = solve(prob,Feagin14();kwargs...)

We can call the BDF method from Sundials.jl via

sol = solve(prob,CVODE_BDF();kwargs...)

Due to this structure (and the productivity of Julia), we have a ridiculous number of methods available, as seen in the documentation. Later I will show that we not only have many options, but these options tend to be very fast, often benchmarking as faster than classic FORTRAN codes. Thus one can choose the right method for the problem and efficiently solve it.

Notice I put in the trailing “kwargs…”. There are many keyword arguments that one is able to pass to this solve command. The “Common Solver Options” are documented at this page. Currently, all of these options are supported by the OrdinaryDiffEq.jl methods, while there is general support for large parts of this for the other methods. This support will increase over time, and I hope to include a table which shows what is supported where.
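
As a hedged sketch of passing a few of the common options (tolerances and a fixed output grid), reusing the problem from above with the Feagin method:

sol = solve(prob,Feagin14();abstol=1e-8,reltol=1e-8,saveat=0.1)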

Analyzing the solution

Once you have this solution type, what does it do? The details are explained in this page of the manual, but I would like to highlight some important features.

First of all, the solution acts as an array. For the solution at the ith timestep, you just treat it as an array:

sol[i]

You can also get the ith timepoint out:

sol.t[i]

Additionally, the solution lets you grab individual components. For example, the jth component at the ith timepoint is found by:

sol[i,j]

These overloads are necessary since the underlying data structure can actually be a more complicated vector (some examples explained later), but this lets you treat it in a simple manner.

Also, by default many solvers have the option “dense=true”. What this means is that the solution has a dense (continuous) output, which is overloaded onto the solution object. This looks like:

sol(t)

which gives the solution at time t. This continuous version of the solution can be turned off using “dense=false” (to get better performance), but in many cases it’s very nice to have!

Not only that, but there are some standard analysis functions available on the solution type as well. I encourage you to walk through the tutorial and see for yourself. Included are things like plot recipes for easy plotting with Plots.jl:

plot(sol)

Now let me describe what is available with this interface.

Ecosystem-Wide Development Tools and Benchmarks

Since all of the solutions act the same, it’s easy to create tools which build off of them. One fundamental toolset is the one included in DiffEqDevTools.jl. DiffEqDevTools.jl includes a bunch of functions for things like convergence testing and benchmarking. This not only means that all of the methods have convergence tests associated with them to ensure accuracy and correctness, but also that we have ecosystem-wide benchmarks to know the performance of different methods! These benchmarks can be found at DiffEqBenchmarks.jl and will be referenced throughout this post.

Very Efficient Nonstiff ODE Solvers

The benchmarks show that the OrdinaryDiffEq.jl methods achieve phenomenal performance. While in many cases other libraries resort to the classic dopri5 and dop853 methods due to Hairer, in our ecosystem we have these methods available via the ODEInterface.jl glue package ODEInterfaceDiffEq.jl, so they can be directly called from the common interface. From the benchmarks on non-stiff problems you can see that the OrdinaryDiffEq.jl methods are much more efficient than these classic codes when one is looking for the highest performance. This is even the case for DP5() and DP8(), which have the exact same timestepping behavior as dopri5() and dop853() respectively, showing that these implementations are top notch, if not the best available.

These are the benchmarks on the implementations of the Dormand-Prince 4/5 methods. Also included is a newer method, the Tsitouras 4/5 method, which is now the default non-stiff method in DifferentialEquations.jl since our research has shown that it is more efficient than the classical methods (on most standard problems).

A Wide Array of Stiff ODE Solvers

There is also a wide array of stiff ODE solvers available. BDF methods can be found in Sundials.jl, Radau methods in ODEInterface.jl, and a well-optimized 2nd-order Rosenbrock method in OrdinaryDiffEq.jl. One goal in the near future will be to implement higher-order Rosenbrock methods in this fashion, since it will be necessary to get better performance, as shown in the benchmarks. However, the Sundials and ODEInterface methods, being wrappers over C/Fortran codes, are restricted to equations on Float64, while OrdinaryDiffEq.jl’s methods support many more types. This allows one to choose the best method for the job.
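
As a hedged illustration (the Van der Pol setup below is arbitrary, and Rosenbrock23() is assumed to be the name of the 2nd-order Rosenbrock method mentioned above), switching between stiff solvers is just a change of the algorithm type:

using DifferentialEquations, Sundials

# a stiff Van der Pol oscillator in the in-place form f(t,u,du)
function vdp(t,u,du)
  du[1] = u[2]
  du[2] = 1e6*((1-u[1]^2)*u[2] - u[1])
end
prob = ODEProblem(vdp,[2.0,0.0],(0.0,6.3))

sol1 = solve(prob,Rosenbrock23()) # native Julia Rosenbrock method
sol2 = solve(prob,CVODE_BDF())    # BDF method from Sundials.jl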

Wrappers for many classic libraries

Many of the classic libraries people are familiar with are available from the common interface, including:

  1. CVODE
  2. LSODA
  3. The Hairer methods

and differential algebraic equation methods including

  1. IDA (Sundials)
  2. DASKR

Native Julia Differential Algebraic Equation Methods

DASSL.jl is available on the common interface and provides a method to solve differential algebraic equations using a variable-timestep BDF method. This allows one to support some Julia-based types like arbitrary-precision numbers which are not possible with the wrapped libraries.

Extensive Support for Julia-based Types in OrdinaryDiffEq.jl

Speaking of support for types, what is supported? From testing we know that the following work with OrdinaryDiffEq.jl:

  1. Arbitrary precision arithmetic via BigFloats, ArbFloats, DecFP
  2. Numbers with units from Unitful.jl
  3. N-dimensional arrays
  4. Complex numbers (the nonstiff solvers)
  5. “Very arbitrary arrays”

Your numbers can be ArbFloats of 200-bit precision in 3-dimensional tensors with units (i.e. “these numbers are in Newtons”), and the solver will ensure that the dimensional constraints are satisfied, and at every timestep give you a 3-dimensional tensor with 200-bit ArbFloats. The types are declared to match the initial conditions: if you start with u0 having BigFloats, you will be guaranteed to have BigFloat solutions. Also, the types for time are determined by the types for the times in the solution interval (t0,tf). Therefore the types for time can be different from the types for the solution (say, turn off adaptive timestepping and do fixed timestepping with rational numbers or integers).
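
As a hedged sketch, using BigFloats is as simple as changing the initial condition (the test equation is arbitrary, and Tsit5() is the Tsitouras 4/5 method mentioned above):

using DifferentialEquations

f(t,u) = 1.01u
prob = ODEProblem(f,BigFloat(1)/2,(0.0,1.0))
sol = solve(prob,Tsit5())
typeof(sol[end]) # BigFloat, matching the type of the initial condition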

Also, by “very arbitrary arrays” I mean, any type which has a linear index can be used. One example which recently came up in this thread involves solving a hybrid-dynamical system which has some continuous variables and some discrete variables. You can make a type which has a linear index over the continuous variables and simply throw this into the ODE solver and it will know what to do (and use callbacks for discrete updates). All of the details like adaptive timestepping will simply “just work”.
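
To make the idea concrete, here is a hypothetical sketch of such a type (the names are invented, and a full implementation would also need the rest of the AbstractArray interface, e.g. similar, for the solver caches):

# a state with a linear index over its continuous part;
# the discrete part is only touched by callbacks/events
type HybridState <: AbstractVector{Float64}
  continuous::Vector{Float64}
  discrete::Vector{Int}
end
Base.size(h::HybridState) = size(h.continuous)
Base.getindex(h::HybridState,i::Int) = h.continuous[i]
Base.setindex!(h::HybridState,v,i::Int) = (h.continuous[i] = v)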

Thus, I encourage you to see how these methods can work for you. I myself have been developing MultiScaleModels.jl to build multi-scale hybrid differential equations and solve them using the methods available in DifferentialEquations.jl. This shows that the old heuristic of classic problems which you “cannot use a package for” no longer applies: Julia’s dispatch system allows DifferentialEquations.jl to handle these problems, meaning that there is no need for you to ever reinvent the wheel!

Event Handling and Callbacks in OrdinaryDiffEq.jl

OrdinaryDiffEq.jl already has extensive support for callback functions and event handling. The documentation page describes a lot of what you can do with it. There are many things you can do with this, not just bouncing a ball, but you can also use events to dynamically change the size of the ODE (as demonstrated by the cell population example).
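
As a rough hedged sketch of the idea (assuming a ContinuousCallback-style API and the (t,u) argument convention used throughout this post; see the documentation page for the exact syntax), the classic bouncing ball looks like:

function ball(t,u,du)
  du[1] = u[2]   # height
  du[2] = -9.81  # velocity
end
condition(t,u,integrator) = u[1]                            # event: the ball hits the ground
affect!(integrator) = (integrator.u[2] = -integrator.u[2])  # reverse the velocity
cb = ContinuousCallback(condition,affect!)
prob = ODEProblem(ball,[50.0,0.0],(0.0,15.0))
sol = solve(prob,Tsit5(),callback=cb)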

Specification of Extra Functions for Better Performance

If this was any other library, the header would have been “Pass Jacobians for Better Performance”, but DifferentialEquations.jl’s universe goes far beyond that. We named this set of functionality Performance Overloads. An explicit function for a Jacobian is one type of performance overload, but you can pass the solvers many other things. For example, take a look at:

f(Val{:invW},t,u,γ,iW) # Call the explicit inverse Rosenbrock-W function (M - γJ)^(-1)

This seems like an odd definition: it is the analytical function for the equation (M - \gamma J)^{-1} for some mass matrix M built into the function. The reason why this is so necessary is because Rosenbrock methods have to solve this every step. What this allows the developers to do is write a method which goes like:

if has_invW(f)
  # Use the function provided by the user
else
  # Get the Jacobian
  # Build the W
  # Solve the linear system
end

Therefore, whereas other libraries would have to use a linear solver to solve the implicit equation at each step, DifferentialEquations.jl allows developers to write this to use the pre-computed inverses and thus get an explicit method for stiff equations! Since the linear solves are the most expensive operation, this can lead to huge speedups in systems where the analytical solution can be computed. But is there a way to get these automatically?

Parameterized Functions and Function Definition Macros

ParameterizedFunctions.jl is a library which solves many problems at once. One question many people have is, how do you provide the model parameters to an ODE solver? While the standard method of “use a closure” works, there are many higher-order analyses which require the ability to explicitly handle parameters. Thus we wanted a way to define functions with explicit parameters.

The way that this is done is via call overloading. The syntax looks like this. We can define the Lotka-Volterra equations with explicit parameters a and b via:

type LotkaVolterra <: Function
  a::Float64
  b::Float64
end
f = LotkaVolterra(0.0,0.0)
(p::LotkaVolterra)(t,u,du) = begin
  du[1] = p.a * u[1] - p.b * u[1]*u[2]
  du[2] = -3 * u[2] + u[1]*u[2]
end

Now f is a function where f.a and f.b are the parameters in the function. This type of function can then be seamlessly used in the DifferentialEquations.jl solvers, including those which use interop like Sundials.jl.
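
As a hedged usage sketch (the parameter values, initial condition, and timespan here are arbitrary):

f.a = 1.5; f.b = 1.0                      # set the parameters
prob = ODEProblem(f,[1.0,1.0],(0.0,10.0))
sol = solve(prob,Tsit5())                 # or CVODE_BDF(), etc.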

This is very general syntax which can handle any general function. However, the next question was, is there a way to do this better for the problems that people commonly encounter? Enter the library ParameterizedFunctions.jl. As described in the manual, this library (which is part of DifferentialEquations.jl) includes a macro @ode_def which allows you to define the Lotka-Volterra equation

\begin{align}
\frac{dx}{dt} &= ax - bxy \\
\frac{dy}{dt} &= -cy + dxy
\end{align}

as follows:

f = @ode_def LotkaVolterraExample begin
  dx = a*x - b*x*y
  dy = -c*y + d*x*y
end a=>1.5 b=>1.0 c=>3.0 d=1.0

Notice that at the bottom you pass in the parameters. => keeps the parameters explicit, while = passes them in as a constant. Flip back and forth to see that it matches the LaTeX so that way it’s super easy to debug and maintain.

But this macro isn’t just about ease of use: it’s also about performance! What happens silently within this macro is that symbolic calculations occur via SymEngine.jl. The Performance Overloads functions are thus silently symbolically computed, allowing the solvers to then get maximal performance. This gives you an easy way to define a stiff equation of 100 variables in a very intuitive way, yet get a faster solution than you would find with other libraries.

Sensitivity Analysis

Once we have explicit parameters, we can generically implement algorithms which use them. For example, DiffEqSensitivity.jl transforms a ParameterizedFunction into the sensitivity equations, which are then solved using any ODE solver, outputting both the ODE’s solution and the parameter sensitivities at each timestep. This is described in the manual in more detail. The result is the ability to use whichever differential equation method in the common interface matches your problem to solve the extended ODE and output how sensitive the solution is to the parameters. This is the glory of the common interface: tools can be added to every ODE solver all at once!

In the future we hope to increase the functionality of this library to include functions for computing global and adjoint sensitivities via methods like the Morris method. However, the current setup shows how easy this is to do, and we just need someone to find the time to actually do it!

Parameter Estimation

Not only can we identify parameter sensitivities, we can also estimate model parameters from data. The design of this is explained in more detail in the manual. It is contained in the package DiffEqParamEstim.jl. Essentially, you can define a function using the @ode_def macro, and then pair it with any ODE solver and (almost) any optimization method, and together use that to find the optimal parameters.
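
As a hedged sketch, assuming you already have a vector of measurement times t, a matching data array, and the objective constructor from DiffEqParamEstim.jl (here called build_optim_objective; see the manual for the exact signature):

using DifferentialEquations, DiffEqParamEstim, Optim

f = @ode_def LotkaVolterraEstim begin
  dx = a*x - x*y
  dy = -3*y + x*y
end a=>1.5

prob = ODEProblem(f,[1.0,1.0],(0.0,10.0))

# t is the vector of measurement times and data the array of measurements
cost = build_optim_objective(prob,t,data,Tsit5())
result = Optim.optimize(cost,[1.3])   # recover the parameter a from the guess 1.3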

In the near future, I would like to increase the support of DiffEqParamEstim.jl to include machine learning methods from JuliaML using its Learn.jl. Stay tuned!

Adaptive Timestepping for Stochastic Differential Equations

Adaptive timestepping is something you will not find in other stochastic differential equation libraries. The reason is that it’s quite hard to do correctly and, when finally implemented, can have so much overhead that it does not actually speed up the runtime for most problems.

To counteract this, I developed a new form of adaptive timestepping for stochastic differential equations which focuses on both correctness and an algorithm design which allows for fast computations. The result was a timestepping algorithm which is really fast! The paper has been accepted to Discrete and Continuous Dynamical Systems Series B, and in it we show both the correctness of the algorithm and its efficiency. We actually had to simplify the test problem so that we could time the speedup over fixed timestep algorithms, since otherwise they weren’t able to complete in a reasonable time without numerical instability! When simplified, the speedup over all of the tested fixed timestep methods was >12x, and this speedup only increases as the problem gets harder (again, we chose the simplified version only because testing the fixed timestep methods on harder versions wasn’t even computationally viable!).

These methods, Rejection Sampling with Memory (RSwM), are available in DifferentialEquations.jl as part of StochasticDiffEq.jl. It should help speed up your SDE calculations immensely. For more information, see the publication “Adaptive Methods for Stochastic Differential Equations via Natural Embeddings and Rejection Sampling with Memory”.

Easy Multinode Parallelism For Monte Carlo Problems

Also included as part of the stochastic differential equation suite are methods for parallel solving of Monte Carlo problems. The function is called as follows:

monte_carlo_simulation(prob,alg;numMonte=N,kwargs...)

where the extra keyword arguments are passed to the solver, and N is the number of solutions to obtain. If you’ve set up Julia on a multinode job on a cluster, this will parallelize to use every core.
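
For example, here is a hedged sketch on a multi-process setup (the process count, the scalar SDE, and the ensemble size are arbitrary; it assumes the SDEProblem constructor mirrors ODEProblem with a drift and a diffusion function, and that SRIW1() is the adaptive SDE method from StochasticDiffEq.jl):

addprocs(4)                      # add worker processes (or use a cluster manager)
using DifferentialEquations

@everywhere f(t,u) = 1.01u       # drift, defined on every process
@everywhere g(t,u) = 0.3u        # diffusion
prob = SDEProblem(f,g,0.5,(0.0,1.0))

sim = monte_carlo_simulation(prob,SRIW1();numMonte=1000)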

In the near future, I hope to expand this to include a Monte Carlo simulation function for random initial conditions, and allow using this on more problems like ODEs and DAEs.

Smart Defaults

DifferentialEquations.jl also includes a new cool feature for smartly choosing defaults. To use this, you don’t really have to do anything. If you’ve defined a problem, say an ODEProblem, you can just call:

sol = solve(prob;kwargs...)

without passing the algorithm, and DifferentialEquations.jl will choose an algorithm for you. Also included is an `alg_hints` parameter which allows you to help the solver choose the right algorithm. So let’s say you want to solve a stiff stochastic differential equation, but you do not know much about the algorithms. You can do something like:

sol = solve(prob,alg_hints=[:stiff])

and this will choose a good algorithm for your problem and solve it. This reduces user-burden to only having to know properties of the problem, while allowing us to proliferate the solution methods. More information is found in the Common Solver Options manual page.

Progress Monitoring

Another interesting feature is progress monitoring. OrdinaryDiffEq.jl includes the ability to use Juno’s progress bar. This is done via keyword arguments like:

sol = solve(prob,progress=true,
                 progress_steps=100)

You can also set a progress message, for which the default is:

ODE_DEFAULT_PROG_MESSAGE(dt,t,u) = "dt="*string(dt)*"\nt="*string(t)*"\nmax u="*string(maximum(abs.(u)))

When you scroll over the progressbar, it will show you how close it is to the final timepoint and use linear extrapolation to estimate the amount of time left to solve the problem.

When you scroll over the top progressbar, it will display the message. Here, it tells us the current dt, t, and the maximum value of u (the dependent variable) to give a sanity check that it’s all working.

The keyword argument progress_steps lets you control how often the progress bar updates, so here we choose to do it every 100 steps. This means you can do some very intense sanity checks inside of the progress message, but reduce the number of times that it’s called so that way it doesn’t affect the runtime.

All in all, having this built into the interface should make handling long and difficult problems much easier, a problem that I had when using previous packages.

(Stochastic) Partial Differential Equations

There is rudimentary support for solving some stochastic partial differential equations, including semilinear Poisson and Heat equations. This can be done on a large set of domains using the finite element method provided by FiniteElementDiffEq.jl. I will say that this library is in need of an update for better efficiency, but it shows how we are expanding into the domain of easy-to-define PDE problems, which then create the correct ODEProblem/DAEProblem discretizations which are then solved using the ODE/DAE methods.

Modular Usage

While all of this creates the DifferentialEquations.jl package, the JuliaDiffEq ecosystem is completely modular. If you want to build a library which uses only OrdinaryDiffEq.jl’s methods, you can directly use those solvers without requiring the rest of DifferentialEquations.jl.

An Update Blog

Since things are still changing fast, the website for JuliaDiffEq contains a news section which will describe updates to packages in the ecosystem as they occur. To be notified of updates, please subscribe to the RSS feed.

Coming Soon

Let me take a few moments to describe some works in progress. Many of these are past planning stages and have partial implementations. I find some of these very exciting.

Solver Method Customization

The most common reason not to use a differential equation solver library is that you need more customization. However, as described in this blog post, we have developed a design which solves this problem. The advantages are huge. Soon you will be able to choose the linear and nonlinear solvers which are employed in the differential equation solving methods. For linear solvers, you will be able to use any method which solves a linear map. This includes direct solvers from Base, iterative solvers from IterativeSolvers.jl, parallel solvers from PETSc.jl, GPU methods from CUSOLVER.jl: it will even be possible to use your own linear solver if you wish. The same will be true for nonlinear solvers. Thus you can choose the internal methods which match the problem to get the most efficiency out.

Specialized Problem Types and Promotions

One interesting setup that we have designed is a hierarchy of types. This is best explained by example. One type of ODE which shows up is the “Implicit-Explicit ODE”, written as:

 u' = f(t,u) + g(t,u)

where f is a “stiff” function and g is a “nonstiff” function. These types of ODEs with a natural splitting commonly show up in discretizations of partial differential equations. Soon we will allow one to define an IMEXProblem(f,g,u0,tspan) for this type of ODE. Specialized methods such as the ARKODE methods from Sundials will then be able to utilize this form to gain speed advantages.

However, let’s say you just want to use a standard Runge-Kutta method to solve this problem. What we will automatically do via promotion is make

h(t,u) = f(t,u) + g(t,u)

and then behind the scenes the Runge-Kutta method will solve the ODE

 u' = h(t,u)

Not only that, but we can go further and define

 k(t,u,u') = h(t,u) - u'

to get the equation

 k(t,u,u') = 0

which is in the form of a differential algebraic equation. This auto-promotion means that any method will be able to solve any problem type which is lower than it in the hierarchy.

The advantages are two-fold. For one, it allows developers to write code for the highest problem type available and automatically have it work on the other problem types. For example, the classic Backward Differentiation Formula (BDF) methods which are seen in things like MATLAB’s ode15s are normally written to solve ODEs, but actually can solve DAEs. In fact, DASSL.jl is an implementation of this algorithm. When this promotion structure is completed, DASSL’s BDF method will be a native BDF method not just for solving DAEs, but also ODEs, and there is no specific development required on the part of DASSL.jl. And because Julia’s closures compile to fast functions, all of this will happen with little to no overhead.

In addition to improving developer productivity, it allows developers to specialize methods to problems. The splitting methods for implicit-explicit problems can be tremendously more performant since they reduce the complexity of the implicit part of the equation. However, with our setup we go even further. One common case that shows up in partial differential equations is that one of these functions is linear. For example, in a discretization of the semilinear Heat Equation, we arrive at an ODE

 u' = Au + g(u)

where A is a matrix which is the discretization of the Laplacian. What our ecosystem will allow is for the user to specify that the first function f(t,u) = Au is linear by wrapping it in a LinearMap type from LinearMaps.jl. Then the solvers can use this information like:

if is_linear(f)
  # Perform linear solve
else
  # Perform nonlinear solve
end

This way, the solvers will be able to achieve even more performance by specializing directly to the problem at hand. In fact, it will allow methods which require this type of linearity, like exponential Runge-Kutta methods, to be developed for the ecosystem and be very efficient when applicable.

In the end, users can just define their ODE by whatever problem type makes sense, and promotion magic will make tons of methods available, and type-checking within the solvers will allow them to specialize directly to every detail of the ODE for as much speed as possible. With DifferentialEquations.jl also choosing smart default methods to solve the problem, the user-burden is decreased and very specialized methods can be used to get maximal efficiency. This is a win for everybody!

Ability to Solve Fun Types from ApproxFun.jl

ApproxFun.jl provides an easy way to do spectral approximations of functions. In many cases, these spectral approximations are very fast and are used to decompose PDEs in space. When paired with timestepping methods, this gives an easy way to solve a vast number of PDEs with really good efficiency.

The link between these two packages is currently found in SpectralTimeStepping.jl. Currently you can fix the basis size for the discretization and use that to solve the PDE with any ODE method in the common interface. However, we are looking to push further. Since OrdinaryDiffEq.jl can handle many different Julia-defined types, we are looking to make it support solving the ApproxFun.jl Fun type directly, which would allow the ODE solver to adapt the size of the spectral basis during the computation. This would tremendously speed up the methods and make them as fast as if you had specifically designed a spectral method for the PDE. We are really close to getting this!

New Methods for Stochastic Differential Equations

I can’t tell you too much about this because these methods will be kept secret until publication, but there are some very computationally efficient methods for nonstiff and semi-stiff equations which have already been implemented and are being thoroughly tested. Go tell the peer review process to speed up if you want these sooner!

Improved Plot Recipes

There is already an issue open for improving the plot recipes. Essentially what will come out of this will be the ability to automatically draw phase plots and other diagrams from the plot command. This should make using DifferentialEquations.jl even easier than before.

Uncertainty Quantification

One major development in scientific computing has been the development of methods for uncertainty quantification. This allows you to quantify the amount of error that comes from a numerical method. There is already a design for how to use the ODE methods to implement a popular uncertainty quantification algorithm, which would allow you to see a probability distribution for the numerical solution to show the uncertainty in the numerical values. Like the sensitivity analysis and parameter estimation, this can be written in a solver-independent manner so that way it works with any solver on the common interface (which supports callbacks). Coming soon!

Optimal Control

We have optimal control problem types in the works which will automatically dispatch to PDE solvers and optimization methods. This is a bit off in the distance, but it is currently being planned.

Geometric and Symplectic Integrators

A newcomer to the Julia-sphere is GeometricIntegrators.jl. We are currently in the works for attaching this package to the common interface so that way it will be easily accessible. Then, Partitioned ODE and DAE problems will be introduced (with a promotion structure) which will allow users to take advantage of geometric integrators for their physical problems.

Bifurcation Analysis

Soon you will be able to take your ParameterizedFunction and directly generate bifurcation plots from it. This is done by a wrapper to the PyDSTool library via PyDSTool.jl, and a linker from this wrapper to the JuliaDiffEq ecosystem via DiffEqBifurcate.jl. The toolchain already works, but… PyCall has some nasty segfaults. When these segfaults are fixed in PyCall.jl, this functionality will be documented and released.

Models Packages

This is the last coming soon, but definitely not the least. There are already a few “models packages” in existence. What these packages do is provide functionality which makes it easy to define very specialized differential equations which can be solved with the ODE/SDE/DAE solvers. For example, FinancialModels.jl makes it easy to define common equations like Heston stochastic volatility models, which will then convert into the appropriate stochastic differential equation or PDE for use in the solver methods. MultiScaleModels.jl allows one to specify a model on multiple scales: a model of proteins, cells, and tissues, all interacting dynamically with discrete and continuous changes, mixing in stochasticity. Also planned is PhysicalModels.jl, which will allow you to define ODEs and DAEs just by declaring the Hamiltonian or Lagrangian functions. Together, these packages should help show how the functionality of DifferentialEquations.jl reaches far beyond what previous differential equation packages have allowed, and make it easy for users to write very complex simulations (all of course without the loss of performance!).

Conclusion

I hope this has made you excited to use DifferentialEquations.jl, and excited to see what comes in the future. To support this development, please star the DifferentialEquations.jl repository. I hope to use these measures of adoption to one day secure funding. In the meantime, if you want to help out, join in on the issues in JuliaDiffEq, or come chat in the JuliaDiffEq Gitter chatroom. We’re always looking for more hands! And to those who have already contributed: thank you, as this would not have been possible without each and every one of you.

The post 6 Months of DifferentialEquations.jl: Where We Are and Where We Are Going appeared first on Stochastic Lifestyle.

Modular Algorithms for Scientific Computing in Julia

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/modular-algorithms-scientific-computing-julia/

When most people think of the Julia programming language, they usually think about its speed. Sure, Julia is fast, but what I emphasized in a previous blog post is that what makes Julia tick is not just-in-time compilation, rather it’s specialization. In that post, I talked a lot about how multiple dispatch allows for Julia to automatically specialize functions on the types of the arguments. That micro-specialization is what allows Julia to compile functions which are as fast as those from C/Fortran.

However, there is a form of “macro specialization” which Julia’s design philosophy and multiple dispatch allow one to build libraries for. It allows you to design an algorithm in a very generic form, essentially writing your full package with inputs saying “insert scientific computing package here”, allowing users to specialize the entire overarching algorithm to the specific problem. JuliaDiffEq, JuliaML, JuliaOpt, and JuliaPlots have all been making use of this style, and it has been leading to some really nice results. What I would like to do is introduce how to program in this style, and the advantages that occur simply by using this design. I want to show that not only is this style very natural, but it solves many extendability problems that libraries in other languages did not have a good answer for. The end result is a unique Julia design which produces both more flexible and more performant code.

A Case Study: Parameter Estimation in Differential Equations

I think the easiest way to introduce what’s going on and why it’s so powerful is to give an example. Let’s look at parameter estimation in differential equations. The setup is as follows. You have a differential equation y_{\theta}'=f(t,y_{\theta},\theta) which explains how y_\theta(t) evolves over time with \theta being the coefficients of the model. For example, \theta could be the reproduction rates of different species, and this ODE describes how the populations of the ecosystem (rabbits, wolves, etc.) evolve over time (with y(t) being a vector at each time which is indicative of the amount of each species). So if we just have rabbits and wolves, y(1) = [1.5;.75] would mean there’s twice as many rabbits as there are wolves at time t=1, and this could be with reproduction rates \theta=[0.25,0.5] births per time unit for each species.

Assume that we have some measurements \bar{y}(t) which are real measurements for the number of rabbits and wolves. What we would like to do is come up with a good number for the reproduction rates \theta such that our model’s outputs match reality. The way one does this is to set up an optimization problem. The easiest problem is to simply say you want to minimize the (Euclidean) difference between the model’s output and the data. In mathematical notation, this is written as:

 \text{argmin}_{\theta} L(\theta) = \Vert \bar{y}(t) - y_\theta(t) \Vert

or more simply, find the \theta such that the loss function is minimized (the norm notation \Vert x \Vert just means the sum of the squares, so this is the sum of squared differences between the model and reality. This is the same loss measure which is used in regressions). Intuitive, right?

The Standard Approach to Developing an Algorithm for Estimation

That’s math. To use math, we need to pull back on the abstraction a little bit. In reality, \bar{y}(t) is not a function defined at all times; we only have data points measured at discrete times. And also, we don’t know y(t)! We know the differential equation, but most differential equations cannot be solved analytically. So what we must do to solve this is:

1. Numerically solve the differential equation.
2. Use the numerical solution in some optimization algorithm.

At this point, the simple mathematical problem no longer sounds so simple: we have to develop software for solving differential equations and software for doing optimization!

If you check out how packages normally do this, you will see that they tend to re-invent the wheel. There are issues with this approach. For one, you probably won’t develop the fastest, most sophisticated numerical differential equation algorithms or optimization algorithms, because this might not be what you do! So you’ll opt for simpler implementations of these parts, and then the package won’t get optimal performance, and you’ll have more to do/maintain.

Another approach is to use packages. For example, you can say “I will call _____ for solving the differential equations, and ________ for solving the optimization problem”. However, this has some issues. First of all, many times in scientific computing, in order to compute the solution more efficiently, the algorithms may have to be tailored to the problem. This is why there’s so much research in new multigrid methods, new differential equations methods, etc! So the only way to make this truly useful is to make it expansive enough such that users can pick from many choices. This sounds like a pain.

But what if I told you there’s a third way, where “just about any package” can be used? This is the modular programming approach in Julia.

Modular Programming through Common Interfaces

The key idea here is that multiple dispatch allows many different packages to implement the same interface, and so if you write code to that interface, you will automatically be “package-independent”. Let’s take a look at how DiffEqParamEstim.jl does it (note that this package is still under development so there are a few rough edges, but it still illustrates the idea beautifully).

In JuliaDiffEq, there is something called the “Common Interface”. This is documented in the DifferentialEquations.jl documentation. What it looks like is this: to solve an ODE y'=f(t,y) with initial condition y_0 over a time interval tspan, you make the problem type:

prob = ODEProblem(f,y0,tspan)

and then you call solve with some algorithm. For example, if we want to use the Tsit5 algorithm from OrdinaryDiffEq.jl, we do:

sol = solve(prob,Tsit5())

and we can pass a bunch more common arguments. More details for using this can be found in the tutorial. But the key idea is that you can call the solvers from Sundials.jl, a different package, using

sol = solve(prob,CVODE_BDF())

Therefore, this “solve(prob,alg)” is syntax which is “package-independent” (the restrictions will be explained in a later section). What we can then do is write our algorithm at a very generic level with “put ODE solver here”, and the user could then use any ODE solver package which implements the common interface.

Here’s what this looks like. We want to take in an ODEProblem like the one defined above (assume it’s the problem which defines our rabbit/wolves example), the timepoints at which we have data, the array of data values, and the common interface algorithm to solve the ODE. The resulting code is very simple:

  function build_optim_objective(prob::AbstractODEProblem,t,data,alg;loss_func = L2DistLoss,kwargs...)
    f = prob.f
    cost_function = function (p)
      for i in eachindex(f.params)
        setfield!(f,f.params[i],p[i])
      end
 
      sol = solve(prob,alg;kwargs...)
 
      y = vecvec_to_mat(sol(t))
      norm(value(loss_func(),vec(y),vec(data)))
    end
  end

It takes in what I said and spits out a function which:

1. Sets the model to have the parameters p
2. Solves the differential equation with the chosen algorithm
3. Computes the loss function using LossFunctions.jl

We can then use this as described in the DifferentialEquations.jl manual with any ODE solver on the common interface by plugging it into Optim.jl or whatever optimization package you want, and can change the loss function to any that’s provided by LossFunctions.jl. Now any package which follows these interfaces can be directly used as a substitute without users having to re-invent this function (you can even use your own ODE solver).
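
For example, here is a hedged sketch of plugging the resulting cost function into Optim.jl (assuming prob, t, and data are defined as above, with an arbitrary initial guess):

using Optim

cost_function = build_optim_objective(prob,t,data,Tsit5())
result = Optim.optimize(cost_function,[1.3])   # Nelder-Mead from the initial guess
best_p = Optim.minimizer(result)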

(Note this algorithm still needs to be improved. For example, if an ODE solver errors, which likely happens if the parameters enter a bad range, it should still give a (high) loss value. Also, this implementation assumes the solver implements the “dense” continuous output, but this can be eased to something else. Still, this algorithm in its current state is a nice simple example!)

How does the common interface work?

I believe I’ve convinced you by now that this common interface is kind of cool. One question you might have is, “I am not really a programmer but I have a special method for my special differential equation. My special method probably doesn’t exist in a package. Can I use this for parameter estimation?”. The answer is yes! Let me explain how.

All of these organizations (JuliaDiffEq, JuliaML, JuliaOpt, etc.) have some kind of “Base” library which has the functions which are extended. In JuliaDiffEq, there’s DiffEqBase.jl. DiffEqBase defines the method “solve”. Each of the component packages (OrdinaryDiffEq.jl, Sundials.jl, ODE.jl, ODEInterface.jl, and recently LSODA.jl) imports DiffEqBase:

import DiffEqBase: solve

and add a new dispatch to “solve”. For example, in Sundials.jl, it defines (and exports) the algorithm types (for example, CVODE_BDF) all as subtypes of SundialsODEAlgorithm, like:

# Abstract Types
abstract SundialsODEAlgorithm{Method,LinearSolver} <: AbstractODEAlgorithm
 
# ODE Algorithms
immutable CVODE_BDF{Method,LinearSolver} <: SundialsODEAlgorithm{Method,LinearSolver}
  # Extra details not necessary!
end

and then it defines a dispatch to solve:

function solve{uType,tType,isinplace,F,Method,LinearSolver}(
    prob::AbstractODEProblem{uType,tType,isinplace,F},
    alg::SundialsODEAlgorithm{Method,LinearSolver},args...;kwargs...)
 
  ...

where I left out the details on the extra arguments. What this means is that now

sol = solve(prob,CVODE_BDF())

calls the method defined in the Sundials.jl package. Since Julia is cool and compiles all of the specializations based off of the given types, this will compile to have no overhead, giving us the benefit of the common API with essentially no cost! All that Sundials.jl has to do is make sure that its resulting Solution type acts according to the common interface’s specifications, and then this package can seamlessly be used in any place where an ODE solver call is used.

So yes, if you import DiffEqBase and put this interface on your ODE solver, parameter estimation from DiffEqParamEstim.jl will “just work”. Sensitivity analysis from DiffEqSensitivity.jl will “just work”. If the callbacks section of the API is implemented, the coming (still in progress) uncertainty quantification from DiffEqUncertainty.jl will “just work”. Any code written to the common interface doesn’t actually care what package was used to solve the differential equation, it just wants a solution type to come back and be used the way the interface dictates!
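
As a hedged sketch of what this looks like for a hypothetical solver (the MyEuler type and the loop are purely illustrative; a real package would also construct the common interface’s solution type as described in the developer documentation):

import DiffEqBase: solve, AbstractODEProblem, AbstractODEAlgorithm

immutable MyEuler <: AbstractODEAlgorithm end   # hypothetical algorithm type for dispatch

function solve(prob::AbstractODEProblem,alg::MyEuler;dt=0.01,kwargs...)
  t0,tf = prob.tspan
  ts = collect(t0:dt:tf)
  us = Vector{typeof(prob.u0)}(length(ts))
  us[1] = prob.u0
  for i in 2:length(ts)
    us[i] = us[i-1] + dt*prob.f(ts[i-1],us[i-1]) # forward Euler step, out-of-place f(t,u)
  end
  ts,us  # a real implementation would return the common solution type here
end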

Summary of what just happened!

This means that if domains of scientific computing in Julia conform to common interfaces, not only do users need to learn only one API, but that API can be used to write modular code where users can stick in any package/method/implementation they like, without even affecting the performance.

Where do we go from here?

Okay, so I’ve probably convinced you that these common interfaces are extremely powerful. What is their current state? Well, let’s look at the workhorse algorithms of scientific computing:

1. Optimization
2. Numerical Differential Equations
3. Machine Learning
4. Plotting
5. Linear Solving (Ax=b)
6. Nonlinear Solving (F(x)=0, find x)

Almost every problem in scientific computing seems to boil down to repeated application of these algorithms, so if one could write all of these in a modular fashion on common APIs like above, then the level of control users would have over algorithms from packages would be unprecedented: no other language has ever been this flexible while achieving this level of performance!

One use case is as follows: say your research is in numerical linear algebra, and you’ve developed some new multigrid method which helps speed up Rosenbrock ODE solvers. If the ODE solver took in an argument which lets you specify, via a linear solver interface, how to solve the linear equation Ax=b, you’d be able to implement your linear solver inside of the Rosenbrock solver from OrdinaryDiffEq.jl by passing it in like Rosenbrock23(linear_solver=MyLinearSolver()). Even better, since this ODE solver is on the common interface, this algorithm could then be used in the parameter estimation scheme described above. This means that you can simply make a linear solver, and instantly be able to test how it helps speed up parameter estimation for ODEs with only ~10 lines of code, without sacrificing performance! Lastly, since the same code is used everywhere except the choice of the linear solver, this is trivial to benchmark. You can just line up and test every possibility with almost no work put in:

algs = [Rosenbrock23(linear_solver=MyLinearSolver());
        Rosenbrock23(linear_solver=qrfact());
        Rosenbrock23(linear_solver=lufact());
        ...
       ]
 
for alg in algs
 sol = solve(prob,alg)
 @time sol = solve(prob,alg)
end

and tada you’re benchmarking every linear solver on the same ODE using the Rosenbrock23 method (this is what DiffEqBenchmarks.jl does to test all of the ODE solvers).

Are we there yet? No, but we’re getting there. Let’s look organization by organization.

JuliaOpt

JuliaOpt provides a common interface for optimization via MathProgBase.jl. This interface is very mature. However, Optim.jl doesn’t seem to conform to it, so there may be some updates happening in the near future. Still, this is already usable, and as you can see from JuMP’s documentation, you can already call many different solver methods (including commercial solvers) with the same code.

JuliaDiffEq

JuliaDiffEq is rather new, but it has rapidly developed and now it offers a common API which has been shown in this blog post and is documented at DiffEqDocs.jl with developer documentation at DiffEqDevDocs.jl.

DifferentialEquations.jl no longer directly contains any solver packages. Instead, its ODE solvers were moved to OrdinaryDiffEq.jl, its SDE (stochastic differential equation) solvers were moved to StochasticDiffEq.jl, etc. (I think you get the naming scheme!), with the parts for the common API moved to DiffEqBase.jl. DifferentialEquations.jl is now a metapackage which pulls the common interface packages into one convenient package, and the components can thus be used directly if users/developers please. In addition, this means that anyone could add their own methods to solve an ODEProblem, DAEProblem, etc. by importing DiffEqBase and implementing a solve method. The number of packages which have implemented this interface is rapidly increasing, even to packages outside of JuliaDiffEq. Since I have the reins here, expect this common interface to be well supported.

JuliaML

JuliaML is still under heavy development (think of it currently as a “developer preview”), but its structure is created in this same common API / modular format. You can expect Learn.jl to be a metapackage like DifferentialEquations.jl in the near future, and the ecosystem to have a common interface where you can mix and match all of the details for machine learning. It will be neat!

JuliaPlots

Plots.jl (technically not in JuliaPlots, but ignore that) implements a common interface for plotting using many different plotting packages as “backends”. Through its recipes system, owners of other packages can extend Plots.jl so that way it has “nice interactions”. For example, the solutions from DiffEqBase.jl have plot recipes so that way the command

plot(sol)

plots the solution, and using Plots.jl’s backend controls, this works with a wide variety of packages like GR, PyPlot, Plotly, etc. This is really well-designed and already very mature.

Linear Solving

We are close to having a common API for linear solving. The reason is that Base uses the type system to dispatch on \. For example, to solve Ax=b by LU-factorization, one uses the commands:

K = lufact(A)
x = K\b

and to solve using QR-factorization, one instead would use

K = qrfact(A)
x = K\b

Thus what one could do is take in a function for a factorization type and use that:

function test_solve(linear_solve)
  K = linear_solve(A)
  x = K\b
end

Then the user can pass in whatever method they think is best, or implement their own linear_solve(A) which makes a type and has a dispatch on \ to give the solution via their method.
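
As a hypothetical example (the names here are invented for illustration), a user-defined method just needs to return a type which has a \ dispatch:

# a "factorization" for diagonal matrices: store the diagonal, solve by division
immutable MyDiagonalSolver
  d::Vector{Float64}
end
my_linear_solve(A) = MyDiagonalSolver(diag(A))

import Base: \
\(K::MyDiagonalSolver,b::AbstractVector) = b ./ K.d

# now test_solve(my_linear_solve) uses this method for the linear solve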

Fortunately, this even works with PETSc.jl:

K = KSP(A, ksp_type="gmres", ksp_rtol=1e-6)
x = K \ b

so in test_solve, one could make the user do:

test_solve((A)->KSP(A, ksp_type="gmres", ksp_rtol=1e-6))

(since linear_solve is supposed to be a function of just the matrix, an anonymous function is used to close over the other options). However, this setup is not currently compatible with IterativeSolvers.jl. I am on a mission to make it happen, and have opened up issues and a discussion on Discourse. Feel free to chime in with your ideas. I am not convinced this anonymous function / closure API is nice to use.

Nonlinear Solving

There is no common interface in Julia for nonlinear solving. You’re back to the old way of doing it right now. Currently I know of 3 good packages for solving nonlinear equations F(x)=0:

1. NLsolve.jl
2. Roots.jl
3. Sundials.jl (KINSOL)

NLsolve.jl and Roots.jl are native Julia methods and support things like higher precision numbers, while Sundials.jl only supports Float64 and tends to be faster (it’s been optimized for ages). In this state, you have to choose which package to use and stick with it. If this had a common interface, then users could pass in the method for nonlinear solving and it could “just work”, like how the differential equations stack works. I opened up a discussion on Discourse to try and get this going!
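
For example, here is a hedged sketch of solving a scalar equation with Roots.jl today (each package has its own calling convention, which is exactly the problem a common interface would solve):

using Roots

fzero(x -> x^2 - 2, 1.0)   # ≈ 1.4142135623730951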

Conclusion

These common interfaces which we are seeing develop in many different places in the Julia package ecosystem are not just nice for users, but they are also incredibly powerful for algorithm developers and researchers. I have shown that this insane amount of flexibility allows one to choose the optimal method for every mathematical detail of the problem, without performance costs. I hope we soon see the scientific computing stack all written in this manner. The end result would be that, in order to test how your research affects some large real-world problem, you simply plug your piece into the modular structure and let it run.

That is the power of multiple dispatch.

The post Modular Algorithms for Scientific Computing in Julia appeared first on Stochastic Lifestyle.

7 Julia Gotchas and How to Handle Them

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/7-julia-gotchas-handle/

Let me start by saying Julia is a great language. I love the language; it is what I find to be the most powerful and intuitive language that I have ever used. It’s undoubtedly my favorite language. That said, there are some “gotchas”, tricky little things you need to know about. Every language has them, and one of the first things you have to do in order to master a language is to find out what they are and how to avoid them. The point of this blog post is to help accelerate this process for you by exposing some of the most common “gotchas” and offering alternative programming practices.

Julia is a good language for understanding what’s going on because there’s no magic. The Julia developers like to have clearly defined rules for how things act. This means that all behavior can be explained. However, this might mean that you need to think about what’s going on to understand why something is happening. That’s why I’m not just going to lay out some common issues, but I am also going to explain why they occur. You will see that there are some very similar patterns, and once you catch onto the patterns, you will not fall for any of these anymore. Because of this, there’s a slightly higher learning curve for Julia over the simpler languages like MATLAB/R/Python. However, once you get the hang of this, you will fully be able to use the conciseness of Julia while obtaining the performance of C/Fortran. Let’s dig in.

Gotcha #1: The REPL (terminal) is the Global Scope

For anyone who is familiar with the Julia community, you know that I have to start here. This is by far the most common problem reported by new users of Julia. Someone will go “I heard Julia is fast!”, open up the REPL, quickly code up some algorithm they know well, and execute that script. After it’s executed they look at the time and go “wait a second, why is this as slow as Python?”

Because this issue is so important and pervasive, let’s take some extra time delving into why it happens so we understand how to avoid it.

Small Interlude into Why Julia is Fast

To understand what just happened, you have to understand that Julia is not just about code compilation, but also type-specialization (i.e. compiling code which is specific to the given types). Let me repeat: Julia is not fast just because the code is compiled using a JIT compiler; rather, it is fast because type-specific code is compiled and run.

If you want the full story, check out some of the notes I’ve written for an upcoming workshop. I am going to summarize the necessary parts which are required to understand why this is such a big deal.

Type-specificity is given by Julia’s core design principle: multiple dispatch. When you write the code:

function f(a,b)
  return 2a+b
end

you may have written only one “function”, but you have written a very large number of “methods”. In Julia parlance, a function is an abstraction, and what is actually called is a method. If you call f(2.0,3.0), then Julia will run compiled code which takes in two floating point numbers and returns the value 2a+b. If you call f(2,3), then Julia will run different compiled code which takes in two integers and returns the value 2a+b. The function f is an abstraction or a short-hand for the multitude of different methods which have the same form, and this design of using the symbol “f” to call all of these different methods is called multiple dispatch. And this goes all the way down: the + operator is actually a function which will call methods depending on the types it sees.

Julia actually gets its speed because this compiled code knows its types, and so the compiled code that f(2.0,3.0) calls is exactly the compiled code that you would get by defining the same C/Fortran function which takes in floating point numbers. You can check this with the @code_native macro to see the compiled assembly:

@code_native f(2.0,3.0)
 
# This prints out the following:
 
pushq	%rbp
movq	%rsp, %rbp
Source line: 2
vaddsd	%xmm0, %xmm0, %xmm0
vaddsd	%xmm1, %xmm0, %xmm0
popq	%rbp
retq
nop

This is the same compiled assembly you would expect from the C/Fortran function, and it is different than the assembly code for integers:

@code_native f(2,3)
 
pushq	%rbp
movq	%rsp, %rbp
Source line: 2
leaq	(%rdx,%rcx,2), %rax
popq	%rbp
retq
nopw	(%rax,%rax)

The Main Point: The REPL/Global Scope Does Not Allow Type Specificity

This brings us to the main point: the REPL / global scope is slow because it does not allow type specialization. First of all, notice that the REPL is the global scope because Julia allows nested scoping for functions. For example, if we define

function outer()
  a = 5
  function inner()
    return 2a
  end
  b = inner()
  return 3a+b
end

you will see that this code works. This is because Julia allows you to grab the “a” from the outer function into the inner function. If you apply this idea recursively, then you understand that the highest scope is the scope which is directly the REPL (which is the global scope of the module Main). But now let’s think about how a function will compile in this situation. Let’s do the same case as before, but using globals:

a = 2.0; b = 3.0
function linearcombo()
  return 2a+b
end
ans = linearcombo()
a = 2; b = 3
ans2= linearcombo()

Question: What types should the compiler assume “a” and “b” are? Notice that in this example we changed the types and still called the same function. In order for the compiled code to not segfault, it needs to be able to deal with whatever types we throw at it: floats, ints, arrays, weird user-defined types, etc. In Julia parlance, this means that the variables have to be “boxed”, and the types have to be checked with every use. What do you think that compiled code looks like?

pushq	%rbp
movq	%rsp, %rbp
pushq	%r15
pushq	%r14
pushq	%r12
pushq	%rsi
pushq	%rdi
pushq	%rbx
subq	$96, %rsp
movl	$2147565792, %edi       # imm = 0x800140E0
movabsq	$jl_get_ptls_states, %rax
callq	*%rax
movq	%rax, %rsi
leaq	-72(%rbp), %r14
movq	$0, -88(%rbp)
vxorps	%xmm0, %xmm0, %xmm0
vmovups	%xmm0, -72(%rbp)
movq	$0, -56(%rbp)
movq	$10, -104(%rbp)
movq	(%rsi), %rax
movq	%rax, -96(%rbp)
leaq	-104(%rbp), %rax
movq	%rax, (%rsi)
Source line: 3
movq	pcre2_default_compile_context_8(%rdi), %rax
movq	%rax, -56(%rbp)
movl	$2154391480, %eax       # imm = 0x806967B8
vmovq	%rax, %xmm0
vpslldq	$8, %xmm0, %xmm0        # xmm0 = zero,zero,zero,zero,zero,zero,zero,zero,xmm0[0,1,2,3,4,5,6,7]
vmovdqu	%xmm0, -80(%rbp)
movq	%rdi, -64(%rbp)
movabsq	$jl_apply_generic, %r15
movl	$3, %edx
movq	%r14, %rcx
callq	*%r15
movq	%rax, %rbx
movq	%rbx, -88(%rbp)
movabsq	$586874896, %r12        # imm = 0x22FB0010
movq	(%r12), %rax
testq	%rax, %rax
jne	L198
leaq	98096(%rdi), %rcx
movabsq	$jl_get_binding_or_error, %rax
movl	$122868360, %edx        # imm = 0x752D288
callq	*%rax
movq	%rax, (%r12)
L198:
movq	8(%rax), %rax
testq	%rax, %rax
je	L263
movq	%rax, -80(%rbp)
addq	$5498232, %rdi          # imm = 0x53E578
movq	%rdi, -72(%rbp)
movq	%rbx, -64(%rbp)
movq	%rax, -56(%rbp)
movl	$3, %edx
movq	%r14, %rcx
callq	*%r15
movq	-96(%rbp), %rcx
movq	%rcx, (%rsi)
addq	$96, %rsp
popq	%rbx
popq	%rdi
popq	%rsi
popq	%r12
popq	%r14
popq	%r15
popq	%rbp
retq
L263:
movabsq	$jl_undefined_var_error, %rax
movl	$122868360, %ecx        # imm = 0x752D288
callq	*%rax
ud2
nopw	(%rax,%rax)

For dynamic languages without type-specialization, this bloated code with all of the extra instructions is as good as you can get, which is why Julia slows down to their speed.

To understand why this is a big deal, notice that every single piece of code that you write in Julia is compiled. So let’s say you write a loop in your script:

a = 1
for i = 1:100
  a += a + f(a,a)
end

The compiler has to compile that loop, but since it cannot guarantee the types do not change, it conservatively gives that nasty long code, leading to slow execution.
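As a rough illustration (a sketch of my own; the exact numbers will depend on your machine and Julia version), you can see the cost directly by timing the global-scope loop and then comparing it against the function version in the next section:

f(a,b) = 2a + b

a = 1
@time for i = 1:100
  a += a + f(a,a)  # every read and write of the global a is boxed and dynamically dispatched
end
# after defining geta below, compare against `@time geta(1)` (call it once first
# so the timing excludes compilation)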

How to Avoid the Issue

There are a few ways to avoid this issue. The simplest way is to always wrap your scripts in functions. For example, with the previous code we can do:

function geta(a)
  # can also just define a=1 here
  for i = 1:100
    a += a + f(a,a)
  end
  return a
end
a = geta(1)

This will give you the same output, but since the compiler is able to specialize on the type of a, it will give the performant compiled code that you want. Another thing you can do is define your variables as constants.

const b = 5

By doing this, you are telling the compiler that the variable will not change, and thus it will be able to specialize all of the code which uses it on the type that it currently is. There’s a small quirk here: Julia actually allows you to change the value of a constant, but not its type. Thus you can use “const” as a way to tell the compiler that you won’t be changing the type and thereby speed up your code. However, note that some quirks come up because you have guaranteed to the compiler that the value won’t change. For example:

const a = 5
f() = a
println(f()) # Prints 5
a = 6
println(f()) # Prints 5

this does not work as expected because the compiler, realizing that it knows the answer to “f()=a” (since a is a constant), simply replaced the function call with the answer, giving different behavior than if a was not a constant.
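A hedged sketch of my own (not from the original post): if a value genuinely needs to change, the cleanest fix is usually to pass it into a function as an argument rather than keeping it as a global, since the compiler then specializes on whatever type you pass in:

g(a) = 2a + 1
g(5)     # 11, compiled for Int64
g(6.0)   # 13.0, compiled for Float64; no global is involved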

This is all just one big way of saying: Don’t write your scripts directly in the REPL, always wrap them in a function.

Let’s hit one related point as well.

Gotcha #2: Type-Instabilities

So I just made a huge point about how specializing code for the given types is crucial. Let me ask a quick question, what happens when your types can change?

If you guessed “well, you can’t really specialize the compiled code in that case either”, then you are correct. This kind of problem is known as a type-instability. These can show up in many different ways, but one common example is that you initialize a value in a way that is easy, but not necessarily the type that it should be. For example, let’s look at:

function g()
  x=1
  for i = 1:10
    x = x/2
  end
  return x
end

Notice that “1/2” is a floating point number in Julia. Therefore, if we start with “x=1”, x will change type from an integer to a floating point number, and thus the function has to compile the inner loop as though x can be either type. If we instead had the function:

function h()
  x=1.0
  for i = 1:10
    x = x/2
  end
  return x
end

then the whole function can compile optimally, knowing that x will stay a floating point number (this ability of the compiler to deduce types is known as type inference). We can check the compiled code to see the difference:

pushq	%rbp
movq	%rsp, %rbp
pushq	%r15
pushq	%r14
pushq	%r13
pushq	%r12
pushq	%rsi
pushq	%rdi
pushq	%rbx
subq	$136, %rsp
movl	$2147565728, %ebx       # imm = 0x800140A0
movabsq	$jl_get_ptls_states, %rax
callq	*%rax
movq	%rax, -152(%rbp)
vxorps	%xmm0, %xmm0, %xmm0
vmovups	%xmm0, -80(%rbp)
movq	$0, -64(%rbp)
vxorps	%ymm0, %ymm0, %ymm0
vmovups	%ymm0, -128(%rbp)
movq	$0, -96(%rbp)
movq	$18, -144(%rbp)
movq	(%rax), %rcx
movq	%rcx, -136(%rbp)
leaq	-144(%rbp), %rcx
movq	%rcx, (%rax)
movq	$0, -88(%rbp)
Source line: 4
movq	%rbx, -104(%rbp)
movl	$10, %edi
leaq	477872(%rbx), %r13
leaq	10039728(%rbx), %r15
leaq	8958904(%rbx), %r14
leaq	64(%rbx), %r12
leaq	10126032(%rbx), %rax
movq	%rax, -160(%rbp)
nopw	(%rax,%rax)
L176:
movq	%rbx, -128(%rbp)
movq	-8(%rbx), %rax
andq	$-16, %rax
movq	%r15, %rcx
cmpq	%r13, %rax
je	L272
movq	%rbx, -96(%rbp)
movq	-160(%rbp), %rcx
cmpq	$2147419568, %rax       # imm = 0x7FFF05B0
je	L272
movq	%rbx, -72(%rbp)
movq	%r14, -80(%rbp)
movq	%r12, -64(%rbp)
movl	$3, %edx
leaq	-80(%rbp), %rcx
movabsq	$jl_apply_generic, %rax
vzeroupper
callq	*%rax
movq	%rax, -88(%rbp)
jmp	L317
nopw	%cs:(%rax,%rax)
L272:
movq	%rcx, -120(%rbp)
movq	%rbx, -72(%rbp)
movq	%r14, -80(%rbp)
movq	%r12, -64(%rbp)
movl	$3, %r8d
leaq	-80(%rbp), %rdx
movabsq	$jl_invoke, %rax
vzeroupper
callq	*%rax
movq	%rax, -112(%rbp)
L317:
movq	(%rax), %rsi
movl	$1488, %edx             # imm = 0x5D0
movl	$16, %r8d
movq	-152(%rbp), %rcx
movabsq	$jl_gc_pool_alloc, %rax
callq	*%rax
movq	%rax, %rbx
movq	%r13, -8(%rbx)
movq	%rsi, (%rbx)
movq	%rbx, -104(%rbp)
Source line: 3
addq	$-1, %rdi
jne	L176
Source line: 6
movq	-136(%rbp), %rax
movq	-152(%rbp), %rcx
movq	%rax, (%rcx)
movq	%rbx, %rax
addq	$136, %rsp
popq	%rbx
popq	%rdi
popq	%rsi
popq	%r12
popq	%r13
popq	%r14
popq	%r15
popq	%rbp
retq
nop

Versus:

pushq	%rbp
movq	%rsp, %rbp
movabsq	$567811336, %rax        # imm = 0x21D81D08
Source line: 6
vmovsd	(%rax), %xmm0           # xmm0 = mem[0],zero
popq	%rbp
retq
nopw	%cs:(%rax,%rax)

Notice how many fewer computational steps are required to compute the same value!

How to Find and Deal with Type-Instabilities

At this point you might ask, “well, why not just use C so you don’t have to try and find these instabilities?” The answer is:

  1. They are easy to find
  2. They can be useful
  3. You can handle necessary instabilities with function barriers

How to Find Type-Instabilities

Julia gives you the macro @code_warntype to show you where type instabilities are. For example, if we use this on the “g” function we created:

@code_warntype g()
 
Variables:
  #self#::#g
  x::ANY
  #temp#@_3::Int64
  i::Int64
  #temp#@_5::Core.MethodInstance
  #temp#@_6::Float64
 
Body:
  begin
      x::ANY = 1 # line 3:
      SSAValue(2) = (Base.select_value)((Base.sle_int)(1,10)::Bool,10,(Base.box)(Int64,(Base.sub_int)(1,1)))::Int64
      #temp#@_3::Int64 = 1
      5:
      unless (Base.box)(Base.Bool,(Base.not_int)((#temp#@_3::Int64 === (Base.box)(Int64,(Base.add_int)(SSAValue(2),1)))::Bool)) goto 30
      SSAValue(3) = #temp#@_3::Int64
      SSAValue(4) = (Base.box)(Int64,(Base.add_int)(#temp#@_3::Int64,1))
      i::Int64 = SSAValue(3)
      #temp#@_3::Int64 = SSAValue(4) # line 4:
      unless (Core.isa)(x::UNION{FLOAT64,INT64},Float64)::ANY goto 15
      #temp#@_5::Core.MethodInstance = MethodInstance for /(::Float64, ::Int64)
      goto 24
      15:
      unless (Core.isa)(x::UNION{FLOAT64,INT64},Int64)::ANY goto 19
      #temp#@_5::Core.MethodInstance = MethodInstance for /(::Int64, ::Int64)
      goto 24
      19:
      goto 21
      21:
      #temp#@_6::Float64 = (x::UNION{FLOAT64,INT64} / 2)::Float64
      goto 26
      24:
      #temp#@_6::Float64 = $(Expr(:invoke, :(#temp#@_5), :(Main./), :(x::Union{Float64,Int64}), 2))
      26:
      x::ANY = #temp#@_6::Float64
      28:
      goto 5
      30:  # line 6:
      return x::UNION{FLOAT64,INT64}
  end::UNION{FLOAT64,INT64}

Notice that it tells us at the top that the type of x is “ANY”. It will capitalize any type which is not inferred as a “strict type”, i.e. it is an abstract type which needs to be boxed/checked at each step. We see that at the end we return x as a “UNION{FLOAT64,INT64}”, which is another non-strict type. This tells us that the type of x changed, causing the difficulty. If we instead look at the @code_warntype for h, we get all strict types:

@code_warntype h()
 
Variables:
  #self#::#h
  x::Float64
  #temp#::Int64
  i::Int64
 
Body:
  begin
      x::Float64 = 1.0 # line 3:
      SSAValue(2) = (Base.select_value)((Base.sle_int)(1,10)::Bool,10,(Base.box)(Int64,(Base.sub_int)(1,1)))::Int64
      #temp#::Int64 = 1
      5:
      unless (Base.box)(Base.Bool,(Base.not_int)((#temp#::Int64 === (Base.box)(Int64,(Base.add_int)(SSAValue(2),1)))::Bool)) goto 15
      SSAValue(3) = #temp#::Int64
      SSAValue(4) = (Base.box)(Int64,(Base.add_int)(#temp#::Int64,1))
      i::Int64 = SSAValue(3)
      #temp#::Int64 = SSAValue(4) # line 4:
      x::Float64 = (Base.box)(Base.Float64,(Base.div_float)(x::Float64,(Base.box)(Float64,(Base.sitofp)(Float64,2))))
      13:
      goto 5
      15:  # line 6:
      return x::Float64
  end::Float64

This indicates that the function is type-stable and will compile down to essentially optimal C code. Thus type-instabilities are not hard to find. What’s harder is to find the right design.

Why Allow Type-Instabilities?

This is an age-old question which has led to dynamically-typed languages dominating the scripting-language playing field. The idea is that, in many cases, you want to make a tradeoff between performance and robustness. For example, you may want to read a table from a webpage which has integers and floating point numbers all mixed together. In Julia, you can write your function such that if they were all integers, it will compile well, and if they were all floating point numbers, it will also compile well. And if they’re mixed? It will still work. That’s the flexibility/convenience we know and love from a language like Python/R. But Julia will explicitly tell you (via @code_warntype) when you are making this performance tradeoff.

How to Handle Type-Instabilities

There are a few ways to handle type-instabilities. First of all, if you like something like C/Fortran where your types are declared and can’t change (thus ensuring type-stability), you can do that in Julia. You can declare your types in a function with the following syntax:

local a::Int64 = 5

This makes “a” a 64-bit integer, and if future code tries to assign it an incompatible value, either a conversion will be done or an error will be thrown (the conversion will not automatically round, so assigning something like 2.5 will throw an error). Sprinkle these around your code and you will get type stability the C/Fortran way.
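As a minimal sketch (my own example, not from the post), declaring the local’s type would also have made the earlier g() type-stable, since every assignment to x is converted to a Float64:

function g_declared()
  local x::Float64 = 1   # the integer 1 is converted to 1.0
  for i = 1:10
    x = x/2
  end
  return x               # always a Float64
end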

A less heavy handed way to handle this is with type-assertions. This is where you put the same syntax on the other side of the equals sign. For example:

a = (b/c)::Float64

This says “calculate b/c, and assert that the result is a Float64; if it’s not, throw an error.” Unlike the declaration form above, a type-assertion on the right-hand side does not convert, it only checks, but putting these around your code will still help you make sure you know the types which are involved.
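For example, here is a minimal sketch (the function and dictionary are hypothetical, not from the post) of asserting the type of a value pulled from an untyped container so that the rest of the function is type-stable:

function scaled_rate(params::Dict{String,Any})
  r = params["rate"]::Float64   # throws a TypeError if the entry isn't a Float64
  return 2r + 1                 # from here on the compiler knows r is a Float64
end

scaled_rate(Dict{String,Any}("rate" => 0.5))   # 2.0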

However, there are cases where type instabilities are necessary. For example, let’s say you want to have a robust code, but the user gives you something crazy like:

arr = Vector{Union{Int64,Float64}}(4)
arr[1]=4
arr[2]=2.0
arr[3]=3.2
arr[4]=1

which is a length-4 array holding both integers and floating point numbers. The actual element type for the array is “Union{Int64,Float64}”, which we saw before is a non-strict type that can lead to issues. The compiler only knows that each value can be either an integer or a floating point number, but not which element is which type. This means that naively performing arithmetic on this array, like:

function foo{T,N}(array::Array{T,N})
  for i in eachindex(array)
    val = array[i]
    # do algorithm X on val
  end
end

will be slow since the operations will be boxed.

However, we can use multiple-dispatch to run the codes in a type-specialized manner. This is known as using function barriers. For example:

function inner_foo{T<:Number}(val::T)
  # Do algorithm X on val
end
 
function foo2{T,N}(array::Array{T,N})
  for i in eachindex(array)
    inner_foo(array[i])
  end
end

Notice that, because of multiple dispatch, calling inner_foo either calls a method specifically compiled for floating point numbers or a method specifically compiled for integers. In this manner, you can put a long calculation inside of inner_foo and still have it perform well due to the strict typing that the function barrier gives you.
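As a quick sketch of how this gets used (the array construction mirrors the earlier example; the call itself is my own illustration):

arr = Vector{Union{Int64,Float64}}(4)
arr[1] = 4; arr[2] = 2.0; arr[3] = 3.2; arr[4] = 1
foo2(arr)   # each element is dispatched to the inner_foo method compiled for its actual type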

Thus I hope you see that Julia offers a good mixture of the performance of strict typing and the convenience of dynamic typing. A good Julia programmer gets to have both at their disposal in order to maximize performance and/or productivity when necessary.

Gotcha #3: Eval Runs at the Global Scope

One last typing issue: eval. Remember this: eval runs at the global scope.

One of the greatest strengths of Julia is its metaprogramming capabilities. These allow you to effortlessly write code which generates code, effectively reducing the amount of code you have to write and maintain. A macro is a function which runs at compile time and (usually) spits out code. For example:

macro defa()
  :(a=5)
end

will replace any instance of “@defa” with the code “a=5” (“:(a=5)” is the quoted expression for “a=5”; Julia code is all expressions, and thus metaprogramming is about building Julia expressions). You can use this to build any complex Julia program you wish, and put it in a function as a kind of really clever shorthand.

However, sometimes you may need to directly evaluate the generated code. Julia gives you the “eval” function or the “@eval” macro for doing so. In general, you should try to avoid eval, but there are some codes where it’s necessary, like my new library for transferring data between different processes for parallel programming. However, note that if you do use it:

eval(:(a=5))

then this will evaluate at the global scope (the REPL). Thus all of the associated problems will occur. However, the fix is the same as the fixes for globals / type instabilities. For example:

function testeval()
  eval(:(a=5))
  return 2a+5
end

will not give good compiled code, since “a” is essentially declared in the global scope. But we can use the tools from before to fix this. For example, we can bring the global in and assert a type to it:

function testeval()
  eval(:(a=5))
  b = a::Int64
  return 2b+5
end

Here “b” is a local variable, and the compiler can infer that its type won’t change, so we have type-stability and are living in good performance land. So dealing with eval isn’t difficult; you just have to remember that it evaluates in the global scope.

That’s the last of the gotchas related to type-instability. You can see that there’s a very common thread for why they occur and how to handle them.

Gotcha #4: How Expressions Break Up

This is one that got me for a while at first. In Julia, there are many cases where expressions will continue if they are not finished. For this reason line-continuation operators are not necessary: Julia will just read until the expression is finished.

Easy rule, right? Just make sure you remember how expressions finish. For example:

a = 2 + 3 + 4 + 5 + 6 + 7
   +8 + 9 + 10+ 11+ 12+ 13

looks like it will evaluate to 90, but instead it gives 27. Why? Because “a = 2 + 3 + 4 + 5 + 6 + 7” is a complete expression, so it sets “a=27”, and the second line “+8 + 9 + 10+ 11+ 12+ 13” is parsed as a separate expression whose value is simply discarded. To continue onto the next line, we instead need to make sure the expression isn’t complete:

a = 2 + 3 + 4 + 5 + 6 + 7 +
    8 + 9 + 10+ 11+ 12+ 13

This will make a=90 as we wanted. This might trip you up the first time, but then you’ll get used to it.

The more difficult issue is with array definitions. For example:

x = rand(2,2)
a = [cos(2*pi.*x[:,1]).*cos(2*pi.*x[:,2])./(4*pi) -sin(2.*x[:,1]).*sin(2.*x[:,2])./(4)]
b = [cos(2*pi.*x[:,1]).*cos(2*pi.*x[:,2])./(4*pi) - sin(2.*x[:,1]).*sin(2.*x[:,2])./(4)]

at a glance you might think a and b are the same, but they are not! The first will give you a (2,2) matrix, while the second is a (1-dimensional) vector of size 2. To see what the issue is, here’s a simpler version:

a = [1 -2]
b = [1 - 2]

In the first case there are two numbers: “1” and “-2”. In the second there is an expression: “1-2” (which is evaluated to give the array [-1]). This is because of the special syntax for array definitions. It’s usually really lovely to write:

a = [1 2 3 -4
     2 -3 1 4]

and get the 2×4 matrix that you’d expect. This is the tradeoff that comes with the convenient whitespace syntax. Luckily, the issue is also easy to avoid: instead of concatenating using a space (i.e. in a whitespace-sensitive manner), use the “hcat” function:

a = hcat(cos(2*pi.*x[:,1]).*cos(2*pi.*x[:,2])./(4*pi),-sin(2.*x[:,1]).*sin(2.*x[:,2])./(4))

Problem solved!
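For completeness, here is a small sketch (my own examples) of the explicit concatenation functions, which sidestep the whitespace-sensitive parsing entirely:

hcat([1;2], [-2;-3])          # 2x2 matrix: columns side by side
vcat([1 2], [3 4])            # 2x2 matrix: rows stacked
hvcat((2,2), 1, -2, 3, -4)    # 2x2 matrix built row by row: [1 -2; 3 -4]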

Gotcha #5: Views, Copy, and Deepcopy

One way in which Julia gets good performance is by working with “views”. An “Array” is actually a “view” to a contiguous block of memory which is used to store its values. The “value” of the array is its pointer to the memory location (plus its type information). This gives interesting (and useful) behavior. For example, if we run the following code:

a = [3;4;5]
b = a
b[1] = 1

then at the end we will have that “a” is the array “[1;4;5]”, i.e. changing “b” changes “a”. The reason is “b=a” set the value of “b” to the value of “a”. Since the value of an array is its pointer to the memory location, what “b” actually gets is not a new array, rather it gets the pointer to the same memory location (which is why changing “b” changes “a”).
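A minimal sketch makes the aliasing explicit: both names refer to the same array object, and no copy is ever made:

a = [3;4;5]
b = a
a === b                    # true: identical objects
pointer(a) == pointer(b)   # true: same memory location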

This is very useful because it also allows you to keep the same array in many different forms. For example, we can have both a matrix and the vector form of the matrix using:

a = rand(2,2) # Makes a random 2x2 matrix
b = vec(a) # Makes a view to the 2x2 matrix which is a 1-dimensional array

Now “b” is a vector, but changing “b” still changes “a”, where “b” is indexed by reading down the columns. Notice that this whole time no arrays have been copied, and therefore these operations are extremely cheap (meaning there’s no reason to avoid them in performance-sensitive code).
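Here is a small sketch (my own example) of that aliasing: writing through the vector view changes the matrix as well:

a = [1.0 3.0; 2.0 4.0]
b = vec(a)       # [1.0, 2.0, 3.0, 4.0], reading down the columns
b[2] = 10.0
a[2,1]           # 10.0: both names refer to the same memory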

Now some details. Notice that the syntax for slicing an array will create a copy when on the right-hand side. For example:

c = a[1:2,1]

will create a new array and point “c” to that new array (thus changing “c” won’t change “a”). This copying can be the behavior you want, but note that copying arrays is an expensive operation which should be avoided whenever possible. Thus we would instead create more complicated views using:

d = @view a[1:2,1]
e = view(a,1:2,1)

Both “d” and “e” are the same thing, and changing either “d” or “e” will change “a” because both will not copy the array, just make a new variable which is a Vector that only points to the first column of “a”. (Another function which creates views is “reshape” which lets you reshape an array.)

If this syntax is on the left-hand side, then it’s a view. For example:

a[1:2,1] = [1;2]

will change “a” because, on the left-hand side, “a[1:2,1]” is the same as “view(a,1:2,1)” which points to the same memory as “a”.
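A tiny sketch (my own example) of writing through a slice on the left-hand side:

a = zeros(3,3)
a[1:2,1] = [1;2]
a[:,1]           # [1.0, 2.0, 0.0]: the first column was modified in place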

What if we need to make copies? Then we can use the copy function:

b = copy(a)

Now since “b” is a copy of “a” and not a view, changing “b” will not change “a”. If we had already defined “b”, there’s a handy in-place copy “copy!(b,a)” which will essentially loop through and write the values of “a” to the locations of “b” (but this requires that “b” is already defined and is the right size).

But now let’s make a slightly more complicated array. For example, let’s make a “Vector{Vector}”:

a = Vector{Vector{Float64}}(2)
a[1] = [1;2;3]
a[2] = [4;5;6]

Each element of “a” is a vector. What happens when we copy a?

b = copy(a)
b[1][1] = 10

Notice that this will change a[1][1] to 10 as well! Why did this happen? What happened is we used “copy” to copy the values of “a”. But the values of “a” were arrays, so we copied the pointers to memory locations over to “b”, so “b” actually points to the same arrays. To fix this, we instead use “deepcopy”:

b = deepcopy(a)

This recursively calls copy in such a manner that we avoid this issue. Again, the rules of Julia are very simple and there’s no magic, but sometimes you need to pay closer attention.

Gotcha #6: Temporary Allocations, Vectorization, and In-Place Functions

In MATLAB/Python/R, you’re told to use vectorization. In Julia you might have heard that “devectorized code is better”. I wrote about this part before so I will refer back to my previous post which explains why vectorized codes give “temporary allocations” (i.e. they make middle-man arrays which aren’t needed, and as noted before, array allocations are expensive and slow down your code!).

For this reason, you will want to fuse your vectorized operations and write them in-place in order to avoid allocations. What do I mean by in-place? An in-place function is one that updates a value instead of returning a value. If you’re going to continually operate on an array, this will allow you to keep using the same array, instead of creating new arrays each iteration. For example, if you wrote:

function f()
  x = [1;5;6]
  for i = 1:10
    x = x + inner(x)
  end
  return x
end
function inner(x)
  return 2x
end

then each time inner is called, it will create a new array to return “2x” in. Clearly we don’t need to keep making new arrays. So instead we could have a cache array “y” which will hold the output like so:

function f()
  x = [1;5;6]
  y = Vector{Int64}(3)
  for i = 1:10
    inner!(y,x)
    for j in 1:3
      x[j] = x[j] + y[j]
    end
    copy!(y,x)
  end
  return x
end
function inner!(y,x)
  for i=1:3
    y[i] = 2*x[i]
  end
  nothing
end

Let’s dig into what’s happening here. “inner!(y,x)” doesn’t return anything, but it changes “y”. Since “y” is an array, the value of “y” is the pointer to the actual array, and since in the function those values were changed, “inner!(y,x)” will have “silently” changed the values of “y”. Functions which do this are called in-place. They are usually denoted with a “!”, and usually change the first argument (this is just by convention). So there is no array allocation when “inner!(y,x)” is called.

In the same way, “copy!(y,x)” is an in-place function which writes the values of “x” to “y”, updating it. As you can see, this means that every operation only changes the values of existing arrays. Only two arrays are ever created: the initial array for “x” and the initial array for “y”. The first function created new arrays every single time “x + inner(x)” was called (one for the result of “inner(x)” and one for the sum), so more than 20 arrays were allocated in the first function. Since array allocations are expensive, the second function will run faster than the first function.
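Here is a minimal sketch (my own micro-benchmark; all names are hypothetical and the numbers are not measured here) of how you might verify the difference with @allocated, after a warm-up call so compilation isn’t counted:

double(x) = 2x                 # allocating: returns a brand new array every call

function double!(y,x)          # in-place: overwrites y, allocates nothing
  for j in eachindex(x)
    y[j] = 2*x[j]
  end
  nothing
end

function step_alloc(x,n)
  for i = 1:n
    x = x + double(x)          # two temporary arrays per iteration
  end
  return x
end

function step_inplace!(x,y,n)
  for i = 1:n
    double!(y,x)
    for j in eachindex(x)
      x[j] += y[j]
    end
  end
  return x
end

y = [0;0;0]
step_alloc([1;5;6],10); step_inplace!([1;5;6],y,10)   # warm up
@allocated step_alloc([1;5;6],10)        # grows with the number of iterations
@allocated step_inplace!([1;5;6],y,10)   # little more than the input array literal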

It’s nice that we can get fast, but the syntax bloated a little when we had to write out the loops. That’s where loop fusion comes in. In Julia v0.5, you can now use the “.” symbol to vectorize any function (also known as broadcasting, because it is actually calling the “broadcast” function). While it’s cool that “f.(x)” is the same thing as applying “f” to each value of “x”, what’s cooler is that the loops fuse. If “f.(x)” simply built a new array which was then added to “x”, then “x = x + f.(x)” would still allocate a temporary. However, what we can instead do is designate every operation in the expression as element-wise:

x .= x .+ f.(x)

The “.=” will do element-wise assignment, so this will essentially turn into the code

for i = 1:length(x)
  x[i] = x[i] + f(x[i])
end

which is the allocation-free loop we wanted! Thus another way to write our function
would’ve been:

function f()
  x = [1;5;6]
  for i = 1:10
    x .= x .+ inner.(x)
  end
  return x
end
function inner(x)
  return 2x
end

Therefore we still get the concise vectorized syntax of MATLAB/R/Python, but this version doesn’t create temporary arrays and thus will be faster. This is how you can use “scripting language syntax” but still get C/Fortran-like speeds. If you don’t watch for temporaries, they will bite away at your performance (the same is true in the other languages; it’s just that there, vectorized code is faster than devectorized code, whereas in Julia we have the luxury of something even faster being available).

**** Note: Some operators do not fuse in v0.5. For example, “.*” won’t fuse yet. This is still a work in progress but should be all together by v0.6 ****

Gotcha #7: Not Building the System Image for your Hardware

This is actually something I fell prey to for a very long time. I was following all of these rules, thinking I was a Julia champ, and then one day I realized that not every compiler optimization was actually happening. What was going on?

It turns out that the pre-built binaries that you get via the downloads off the Julia site are toned-down in their capabilities in order to be usable on a wider variety of machines. This includes the binaries you get from Linux when you do “apt-get install” or “yum install”. Thus, unless you built Julia from source, your Julia is likely not as fast as it could be.

Luckily there’s an easy fix provided by Mustafa Mohamad (@musm). Just run the following code in Julia:

include(joinpath(dirname(JULIA_HOME),"share","julia","build_sysimg.jl")); build_sysimg(force=true)

If you’re on Windows, you may need to run this code first:

Pkg.add("WinRPM"); WinRPM.install("gcc")

And on any system, you may need to have administrator privileges. This will take a little bit but when it’s done, your install will be tuned to your system, giving you all of the optimizations available.

Conclusion: Learn the Rules, Understand Them, Then Profit

To reiterate one last time: Julia doesn’t have compiler magic, just simple rules. Learn the rules well and all of this will be second nature. I hope this helps you as you learn Julia. The first time you encounter a gotcha like this, it can be a little hard to reason it out. But once you understand it, it won’t hurt again. Once you really understand these rules, your code will compile down to essentially C/Fortran, while being written in a concise high-level scripting language. Put this together with broadcast fusing and metaprogramming, and you get an insane amount of performance for the amount of code you’re writing!

Here’s a question for you: what Julia gotchas did I miss? Leave a comment explaining a gotcha and how to handle it. Also, just for fun, what are your favorite gotchas from other languages? [Mine has to be the fact that, in Javascript inside of a function, “var x=3” makes “x” local, while “x=3” makes “x” global. Automatic globals inside of functions? That gave some insane bugs that makes me not want to use Javascript ever again!]

The post 7 Julia Gotchas and How to Handle Them appeared first on Stochastic Lifestyle.