
#MonthOfJulia Day 22: Optimisation


Sudoku-as-a-Service is a great illustration of Julia’s integer programming facilities. Julia has several packages which implement various flavours of optimisation: JuMP, JuMPeR, Gurobi, CPLEX, DReal, CoinOptServices and OptimPack. We’re not going to look at anything quite as elaborate as Sudoku today, but focus instead on finding the extrema of some simple (or perhaps not so simple) mathematical functions. At this point you might find it interesting to browse through this catalogue of test functions for optimisation.

Optim

We’ll start out by using the Optim package to find the extrema of Himmelblau’s function:

f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2.

This function has a single local maximum and four identical local minima. One of the minima is conveniently located at

(x, y) = (3, 2).
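
A quick check (my addition, not in the original post) confirms that both squared terms vanish at this point:

julia> f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2;
julia> f(3, 2)
0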

[Figure: surface plot of Himmelblau’s function showing its four minima]

As usual the first step is to load the required package.

julia> using Optim

Then we set up the objective function along with its gradient and Hessian functions.

julia> function himmelblau(x::Vector)
           (x[1]^2 + x[2] - 11)^2 + (x[1] + x[2]^2 - 7)^2
       end
himmelblau (generic function with 1 method)
julia> function himmelblau_gradient!(x::Vector, gradient::Vector)
           gradient[1] = 4 * x[1] * (x[1]^2 + x[2] - 11) + 2 * (x[1] + x[2]^2 - 7)
           gradient[2] = 2 * (x[1]^2 + x[2] - 11) + 4 * x[2] * (x[1] + x[2]^2 - 7)
       end
himmelblau_gradient! (generic function with 1 method)
julia> function himmelblau_hessian!(x::Vector, hessian::Matrix)
           hessian[1, 1] = 4 * (x[1]^2 + x[2] - 11) + 8 * x[1]^2 + 2
           hessian[1, 2] = 4 * x[1] + 4 * x[2]
           hessian[2, 1] = 4 * x[1] + 4 * x[2]
           hessian[2, 2] = 4 * (x[1] + x[2]^2 -  7) + 8 * x[2]^2 + 2
       end
himmelblau_hessian! (generic function with 1 method)

There are a number of algorithms at our disposal. We’ll start with the Nelder-Mead method, which uses only the objective function itself. I’m very happy with the detailed output provided by optimize(), and it clearly converges on a result which is very close to what we expected.

julia> optimize(himmelblau, [2.5, 2.5], method = :nelder_mead)
Results of Optimization Algorithm
 * Algorithm: Nelder-Mead
 * Starting Point: [2.5,2.5]
 * Minimum: [3.0000037281643586,2.0000105449945313]
 * Value of Function at Minimum: 0.000000
 * Iterations: 35
 * Convergence: true
   * |x - x'| < NaN: false
   * |f(x) - f(x')| / |f(x)| < 1.0e-08: true
   * |g(x)| < NaN: false
   * Exceeded Maximum Number of Iterations: false
 * Objective Function Calls: 69
 * Gradient Call: 0

Next we’ll look at the limited-memory version of the BFGS algorithm. This can be applied either with or without an explicit gradient function. In this case we’ll provide the gradient function defined above. Again we converge on the right result, but this time with far fewer iterations required.

julia> optimize(himmelblau, himmelblau_gradient!, [2.5, 2.5], method = :l_bfgs)
Results of Optimization Algorithm
 * Algorithm: L-BFGS
 * Starting Point: [2.5,2.5]
 * Minimum: [2.999999999999385,2.0000000000001963]
 * Value of Function at Minimum: 0.000000
 * Iterations: 6
 * Convergence: true
   * |x - x'| < 1.0e-32: false
   * |f(x) - f(x')| / |f(x)| < 1.0e-08: false
   * |g(x)| < 1.0e-08: true
   * Exceeded Maximum Number of Iterations: false
 * Objective Function Calls: 25
 * Gradient Call: 25
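
L-BFGS can also be run without the explicit gradient, in which case I believe Optim falls back on finite-difference approximations. The call is then the same as for Nelder-Mead, just with the :l_bfgs method (a sketch; output omitted):

julia> optimize(himmelblau, [2.5, 2.5], method = :l_bfgs)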

Finally we’ll try out Newton’s method, where we’ll provide both gradient and Hessian functions. The result is spot on and we’ve shaved off one iteration. Very nice indeed!

julia> optimize(himmelblau, himmelblau_gradient!, himmelblau_hessian!, [2.5, 2.5],
                method = :newton)
Results of Optimization Algorithm
 * Algorithm: Newton's Method
 * Starting Point: [2.5,2.5]
 * Minimum: [3.0,2.0]
 * Value of Function at Minimum: 0.000000
 * Iterations: 5
 * Convergence: true
   * |x - x'| < 1.0e-32: false
   * |f(x) - f(x')| / |f(x)| < 1.0e-08: true
   * |g(x)| < 1.0e-08: true
   * Exceeded Maximum Number of Iterations: false
 * Objective Function Calls: 19
 * Gradient Call: 19

There is also a Simulated Annealing solver in the Optim package.
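
Like Nelder-Mead it needs only the objective function. A minimal sketch (assuming the :simulated_annealing method symbol; output omitted because the algorithm is stochastic):

julia> optimize(himmelblau, [2.5, 2.5], method = :simulated_annealing)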

NLopt

NLopt is an optimisation library with interfaces for a variety of programming languages and a broad selection of algorithms. We’ll apply both a gradient-based and a derivative-free technique to maximise the function

\sin\alpha \cos\beta

subject to the constraints

2 \alpha \leq \beta

and

\beta \leq \pi/2.

Before we load the NLopt package, it’s a good idea to restart your Julia session to flush out any remnants of the Optim package.

julia> using NLopt

We’ll need to write the objective function and a generalised constraint function.

julia> count = 0;
julia> function objective(x::Vector, grad::Vector)
           if length(grad) > 0
               grad[1] = cos(x[1]) * cos(x[2])
               grad[2] = - sin(x[1]) * sin(x[2])
           end

           global count
           count::Int += 1
           println("Iteration $count: $x")

           sin(x[1]) * cos(x[2])
       end
objective (generic function with 1 method)
julia> function constraint(x::Vector, grad::Vector, a, b, c)
           if length(grad) > 0
               grad[1] = a
               grad[2] = b
           end
           a * x[1] + b * x[2] - c
       end
constraint (generic function with 1 method)

The COBYLA (Constrained Optimization BY Linear Approximations) algorithm is a local optimiser which doesn’t use the gradient function.

julia> opt = Opt(:LN_COBYLA, 2);                       # Algorithm and dimension of problem
julia> ndims(opt)
2
julia> algorithm(opt)
:LN_COBYLA
julia> algorithm_name(opt)                             # Text description of algorithm
"COBYLA (Constrained Optimization BY Linear Approximations) (local, no-derivative)"

We impose generous upper and lower bounds on the solution space and use two inequality constraints. Either min_objective!() or max_objective!() is used to specify the objective function and whether or not it is a minimisation or maximisation problem. Constraints can be either inequalities using inequality_constraint!() or equalities using equality_constraint!().

julia> lower_bounds!(opt, [0., 0.])
julia> upper_bounds!(opt, [pi, pi])
julia> xtol_rel!(opt, 1e-6)
julia> max_objective!(opt, objective)
julia> inequality_constraint!(opt, (x, g) -> constraint(x, g, 2, -1, 0), 1e-8)
julia> inequality_constraint!(opt, (x, g) -> constraint(x, g, 0, 2, pi), 1e-8)
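
For completeness, an equality constraint (not used in this problem, and not executed here) would be registered with equality_constraint!() in exactly the same way; a hypothetical example which would force 2α = β:

julia> equality_constraint!(opt, (x, g) -> constraint(x, g, 2, -1, 0), 1e-8)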

After making an initial guess we let the algorithm loose. I’ve purged some of the output to spare you from the floating point deluge.

julia> initial = [0, 0];                               # Initial guess
julia> (maxf, maxx, ret) = optimize(opt, initial)
Iteration 1: [0.0,0.0]
Iteration 2: [0.7853981633974483,0.0]
Iteration 3: [0.7853981633974483,0.7853981633974483]
Iteration 4: [0.0,0.17884042066163552]
Iteration 5: [0.17562036827601815,0.3512407365520363]
Iteration 6: [0.5268611048280544,1.053722209656109]
Iteration 7: [0.7853981633974481,1.5707963267948961]
Iteration 8: [0.7526175675681757,0.9963866471510139]
Iteration 9: [0.785398163397448,1.570796326794896]
Iteration 10: [0.35124073655203625,0.7024814731040726]
.
.
.
Iteration 60: [0.42053333513020824,0.8410666702604165]
Iteration 61: [0.42053467500728553,0.8410693500145711]
Iteration 62: [0.4205360148843628,0.8410720297687256]
Iteration 63: [0.4205340050687469,0.8410680101374938]
Iteration 64: [0.4205340249920041,0.8410677994554656]
Iteration 65: [0.42053333513020824,0.8410666702604165]
Iteration 66: [0.42053456716611504,0.8410679945560181]
Iteration 67: [0.42053333513020824,0.8410666702604165]
Iteration 68: [0.42053365382801033,0.8410673076560207]
(0.27216552697496077,[0.420534,0.841067],:XTOL_REACHED)
julia> println("got $maxf at $maxx after $count iterations.")
got 0.27216552697496077 at [0.42053365382801033,0.8410673076560207] after 68 iterations.

It takes a number of iterations to converge, but arrives at a solution which seems eminently reasonable (and which satisfies both of the constraints).
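
As a quick sanity check (my own addition, not from the original post): the first constraint is active at this optimum, so substituting \beta = 2\alpha reduces the problem to maximising \sin\alpha \cos 2\alpha. Setting the derivative to zero,

\frac{d}{d\alpha}\left(\sin\alpha \cos 2\alpha\right) = \cos\alpha \left(1 - 6\sin^2\alpha\right) = 0,

gives \alpha = \arcsin(1/\sqrt{6}) \approx 0.4205 and \beta = 2\alpha \approx 0.8411, with a maximum value of 2/(3\sqrt{6}) \approx 0.2722, in agreement with the result above.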

Next we’ll use the MMA (Method of Moving Asymptotes) gradient-based algorithm.

julia> opt = Opt(:LD_MMA, 2);

We remove the second inequality constraint and simply confine the solution space appropriately. This is definitely a more efficient approach!

julia> lower_bounds!(opt, [0., 0.])
julia> upper_bounds!(opt, [pi, pi / 2])
julia> xtol_rel!(opt, 1e-6)
julia> max_objective!(opt, objective)
julia> inequality_constraint!(opt, (x, g) -> constraint(x, g, 2, -1, 0), 1e-8)

This algorithm converges more rapidly (because it takes advantage of the gradient function!) and we arrive at the same result.

julia> (maxf, maxx, ret) = optimize(opt, initial)
Iteration 1: [0.0,0.0]
Iteration 2: [0.046935706114911574,0.12952531487499092]
Iteration 3: [0.1734128499487191,0.5065804625164063]
Iteration 4: [0.3449211909390502,0.7904095832845456]
Iteration 5: [0.4109653874949588,0.8281977630709889]
Iteration 6: [0.41725447118163134,0.8345944447401356]
Iteration 7: [0.4188068871033356,0.8376261095301502]
Iteration 8: [0.4200799333613666,0.8401670014914709]
Iteration 9: [0.4203495290598476,0.8406993867808531]
Iteration 10: [0.4205138682235357,0.8410278412850836]
Iteration 11: [0.4205289336960578,0.8410578710185219]
Iteration 12: [0.42053231747822034,0.8410646372592685]
Iteration 13: [0.42053444274035756,0.8410688833806734]
Iteration 14: [0.4205343574933894,0.8410687141629858]
Iteration 15: [0.4205343707980632,0.8410687434944638]
Iteration 16: [0.420534312041705,0.8410686169530415]
Iteration 17: [0.4205343317839936,0.8410686604482764]
Iteration 18: [0.42053433111342814,0.8410686565253115]
Iteration 19: [0.42053433035398824,0.8410686525997696]
(0.27216552944315736,[0.420534,0.841069],:XTOL_REACHED)
julia> println("got $maxf at $maxx after $count iterations.")
got 0.27216552944315736 at [0.42053433035398824,0.8410686525997696] after 19 iterations.

I’m rather impressed. Both of these packages provide convenient interfaces and I could solve my test problems without too much effort. Have a look at the videos below for more about optimisation in Julia and check out GitHub for the complete code for today’s examples. We’ll kick off next week with a quick look at some alternative data structures.



#MonthOfJulia Day 21: Differential Equations


Yesterday we had a look at Julia’s support for Calculus. The next logical step is to solve some differential equations. We’ll look at two packages today: Sundials and ODE.

Sundials

The Sundials package is based on a library which implements a number of solvers for differential equations. First off you’ll need to install that library. In Ubuntu this is straightforward using the package manager. Alternatively you can download the source distribution.

sudo apt-get install libsundials-serial-dev

Next install the Julia package and load it.

julia> Pkg.add("Sundials")
julia> using Sundials

To demonstrate we’ll look at a standard “textbook” problem: a damped harmonic oscillator (mass on a spring with friction). This is a second order differential equation with general form

\ddot{x} + a \dot{x} + b x = 0

where x is the displacement of the oscillator, while a and b characterise the damping coefficient and spring stiffness respectively. To solve this numerically we need to convert it into a system of first order equations:

\begin{aligned}  \dot{x} &= v \\ \dot{v} &= - a v - b x  \end{aligned}

We’ll write a function for those relationships and assign specific values to a and b.

julia> function oscillator(t, y, ydot)
           ydot[1] = y[2]
           ydot[2] = - 3 * y[1] - y[2] / 10
       end
oscillator (generic function with 2 methods)

Next the initial conditions and time steps for the solution.

julia> initial = [1.0, 0.0];                   # Initial conditions
julia> t = float([0:0.125:30]);                # Time steps

And finally use cvode() to integrate the system.

julia> xv = Sundials.cvode(oscillator, initial, t);
julia> xv[1:5,:]
5x2 Array{Float64,2}:
 1.0        0.0     
 0.97676   -0.369762
 0.908531  -0.717827
 0.799076  -1.02841 
 0.65381   -1.28741 

The results for the first few time steps look reasonable: the displacement (left column) is decreasing and the velocity (right column) is becoming progressively more negative. To be sure that the solution has the correct form, have a look at the Gadfly plot below. The displacement (black) and velocity (blue) curves are 90° out of phase, as expected, and both gradually decay with time due to damping. Looks about right to me!
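
The original post doesn’t include the plotting code, but a minimal Gadfly sketch (assuming the t and xv defined above; colours and labels omitted) would look something like this:

julia> using Gadfly
julia> plot(layer(x = t, y = xv[:, 1], Geom.line),    # displacement
            layer(x = t, y = xv[:, 2], Geom.line))    # velocity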

[Figure: displacement (black) and velocity (blue) of the damped harmonic oscillator versus time]

ODE

The ODE package provides a selection of solvers, all of which are implemented with a consistent interface (which differs a bit from Sundials).

julia> using ODE

Again we need to define a function to characterise our differential equations. The form of the function is a little different with the ODE package: rather than passing the derivative vector by reference, it’s simply returned as the result. I’ve considered the same problem as above, but to spice things up I added a sinusoidal driving force.

julia> function oscillator(t, y)
           [y[2]; - a * y[1] - y[2] / 10 + sin(t)]
       end
oscillator (generic function with 2 methods)

We’ll solve this with ode23(), which is a second order adaptive solver with third order error control. Because it’s adaptive we don’t need to explicitly specify all of the time steps, just the minimum and maximum.

julia> a = 1;                                  # Resonant
julia> T, xv = ode23(oscillator, initial, [0.; 40]);
julia> xv = hcat(xv...).';                     # Vector{Vector{Float}} -> Matrix{Float}
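
As an aside (the plots below all use the ode23() results): because the solvers share a consistent interface, switching to a higher-order method should be no more than a change of function name, something like

julia> T, xv = ode45(oscillator, initial, [0.; 40]);   # 4th/5th order adaptive solver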

The results are plotted below. Driving the oscillator at the resonant frequency causes the amplitude of oscillation to grow with time as energy is transferred to the oscillating mass.
[Figure: displacement and velocity of the resonantly driven oscillator versus time]

If we move the oscillator away from resonance the behaviour becomes rather interesting.

julia> a = 3;                                  # Far from resonance

Now, because the oscillation and the driving force aren’t synchronised (and there’s a non-rational relationship between their frequencies), the displacement and velocity appear to change irregularly with time.
[Figure: displacement and velocity of the non-resonant oscillator versus time]

How about a double pendulum (a pendulum with a second pendulum suspended from its end)? This seemingly simple system exhibits a rich range of dynamics. Its behaviour is sensitive to initial conditions, one of the characteristics of chaotic systems.

[Figure: schematic of a double pendulum]

First we set up the first order equations of motion. The details of this system are explained in the video below.

julia> function pendulum(t, y)
           Y = [
           6 * (2 * y[3] - 3 * cos(y[1] - y[2]) * y[4]) / (16 - 9 * cos(y[1] - y[2])^2);
           6 * (8 * y[4] - 3 * cos(y[1] - y[2]) * y[3]) / (16 - 9 * cos(y[1] - y[2])^2)
           ]
           [
           Y[1];
           Y[2];
           - (Y[1] * Y[2] * sin(y[1] - y[2]) + 3 * sin(y[1])) / 2;
           - (sin(y[2]) - Y[1] * Y[2] * sin(y[1] - y[2])) / 2;
           ]
       end
pendulum (generic function with 1 method)

Define initial conditions and let it run…

julia> initial = [pi / 4, 0, 0, 0];            # Initial conditions -> deterministic behaviour
julia> T, xv = ode23(pendulum, initial, [0.; 40]);

Below are two plots which show the results. The first is a time series showing the angular displacement of the first (black) and second (blue) mass. Next is a phase space plot which gives a different view of the same variables. There’s clearly a regular, systematic relationship between them.

[Figure: angular displacement of the first (black) and second (blue) mass versus time]

[Figure: phase space plot of the same angular displacements]

Next we’ll look at a different set of initial conditions. This time both masses are initially located above the primary vertex of the pendulum. This represents an initial configuration with much more potential energy.

julia> initial = [3/4 * pi, pi, 0, 0];         # Initial conditions -> chaotic behaviour

The same pair of plots now illustrate much more interesting behaviour. Note the larger range of angles, θ2, achieved by the second bob. With these initial conditions the pendulum is sufficiently energetic for it to “flip over”. Look at the video below to get an idea of what this looks like with a real pendulum.

[Figure: angular displacements versus time for the energetic initial conditions]

[Figure: corresponding phase space plot]

It’s been a while since I’ve played with any physics problems. That was fun. The full code for today is available on GitHub. Come back tomorrow when I’ll take a look at Optimisation in Julia.


#MonthOfJulia Day 20: Calculus


Mathematica is the de facto standard for symbolic differentiation and integration. But many other languages also have great facilities for Calculus. For example, R has the deriv() function in the base stats package as well as the numDeriv, Deriv and Ryacas packages. Python has NumPy and SymPy.

Let’s check out what Julia has to offer.

Numerical Differentiation

First load the Calculus package.

julia> using Calculus

The derivative() function will evaluate the numerical derivative at a specific point.

julia> derivative(x -> sin(x), pi)
-0.9999999999441258
julia> derivative(sin, pi, :central)       # Options: :forward, :central or :complex
-0.9999999999441258

There’s also a prime notation which will do the same thing (but neatly handle higher order derivatives).

julia> f(x) = sin(x);
julia> f'(0.0)                             # cos(x)
0.9999999999938886
julia> f''(0.0)                            # -sin(x)
0.0
julia> f'''(0.0)                           # -cos(x)
-0.9999977482682358

There are functions for second derivatives, gradients (for multivariate functions) and Hessian matrices too. Related packages for derivatives are ForwardDiff and ReverseDiffSource.
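
For example (a sketch based on my reading of the Calculus API; outputs omitted):

julia> second_derivative(sin, 0.0)                            # -sin(0) = 0
julia> Calculus.gradient(x -> sin(x[1]) * cos(x[2]), [pi/4, pi/4])
julia> hessian(x -> sin(x[1]) * cos(x[2]), [pi/4, pi/4])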

Symbolic Differentiation

Symbolic differentiation works for univariate and multivariate functions expressed as strings.

julia> differentiate("sin(x)", :x)
:(cos(x))
julia> differentiate("sin(x) + exp(-y)", [:x, :y])
2-element Array{Any,1}:
 :(cos(x))    
 :(-(exp(-y)))

It also works for expressions.

julia> differentiate(:(x^2 * y * exp(-x)), :x)
:((2x) * y * exp(-x) + x^2 * y * -(exp(-x)))
julia> differentiate(:(sin(x) / x), :x)
:((cos(x) * x - sin(x)) / x^2)

Have a look at the JuliaDiff project which is aggregating resources for differentiation in Julia.

Numerical Integration

Numerical integration is a snap.

julia> integrate(x -> 1 / (1 - x), -1 , 0)
0.6931471805602638

Compare that with the analytical result. Nice.

julia> diff(map(x -> - log(1 - x), [-1, 0]))
1-element Array{Float64,1}:
 0.693147

By default the integral is evaluated using Simpson’s Rule. However, we can also use Monte Carlo integration.

julia> integrate(x -> 1 / (1 - x), -1 , 0, :monte_carlo)
0.6930203819567551


Symbolic Integration

There is also an interface to the SymPy Python library for symbolic computation. Documentation can be found here. You might want to restart your Julia session before loading the SymPy package.

julia> using SymPy

Revisiting the same definite integral from above we find that we now have an analytical expression as the result.

julia> integrate(1 / (1 - x), (x, -1, 0))
log(2)
julia> convert(Float64, ans)
0.6931471805599453

To perform symbolic integration we need to first define a symbolic object using Sym().

julia> x = Sym("x");                       # Creating a "symbolic object"
julia> typeof(x)
Sym (constructor with 6 methods)
julia> sin(x) |> typeof                    # f(symbolic object) is also a symbolic object
Sym (constructor with 6 methods)

There’s more to be said about symbolic objects (they are the basis of pretty much everything in SymPy), but we are just going to jump ahead to constructing a function and integrating it.

julia> f(x) = cos(x) - sin(x) * cos(x);
julia> integrate(f(x), x)
     2            
  sin (x)         
- ─────── + sin(x)
     2     

What about an integral with constant parameters? No problem.

julia> k = Sym("k");
julia> integrate(1 / (x + k), x)
log(k + x)

We have really only grazed the surface of SymPy. The capabilities of this package are deep and broad. It’s seriously worth checking out the documentation if you are interested in symbolic computation.
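
A few more examples of what it can do beyond integration (a sketch assuming the diff(), limit() and solve() wrappers; outputs omitted):

julia> diff(sin(x^2), x)              # symbolic derivative: 2*x*cos(x^2)
julia> limit(sin(x)/x, x, 0)          # the classic limit: 1
julia> solve(x^2 - 2, x)              # roots: ±sqrt(2)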


I’m not ready to throw away my dated version of Mathematica just yet, but I’ll definitely be using this functionality often. Come back tomorrow when I’ll take a look at solving differential equations with Julia.
