Adjusting to Julia: Generating the Fibonacci sequence

By: Jan Vanhove

Re-posted from: https://janhove.github.io/posts/2022-12-20-julia-fibonacci/index.html

I’m currently learning a bit of Julia and I thought I’d share with you a couple of my attempts at writing Julia code. I’ll spare you the sales pitch, and I’ll skip straight to the goal of this blog post: writing three different Julia functions that can generate the Fibonacci sequence.

The Fibonacci sequence

The famous Fibonacci sequence is an infinite sequence of natural numbers, the first of which are 1, 1, 2, 3, 5, 8, 13, …. The sequence is defined as follows: F(1) = 1, F(2) = 1, and F(n) = F(n-1) + F(n-2) for n > 2.

Let’s write some Julia functions that can generate this sequence.

Julia

You can download Julia from julialang.org. I’m currently using the Pluto.jl package that allows you to write Julia code in a reactive notebook. Check out the Pluto.jl page for more information.

First alternative: A purely recursive function

The Fibonacci sequence is defined recursively: to obtain the nth Fibonacci number, you first need to compute the (n-1)th and (n-2)th Fibonacci numbers and then add them. We can write a Julia function that exactly reflects the definition of the Fibonacci sequence like so:

function fibonacci(n)
  if n <= 2
    return 1
  end
  return fibonacci(n - 1) + fibonacci(n - 2)
end
fibonacci (generic function with 1 method)

This function tacitly assumes that n is a non-zero natural number. If n is equal to or lower than 2, i.e., if n is 1 or 2, it immediately returns 1, as per the definition of the sequence. If this condition isn’t met, the output is computed recursively. The function can be run as follows:

fibonacci(10)
55

Checks out! But from a computational point of view, the fibonacci() function is quite wasteful. In order to obtain fibonacci(10), we need to compute fibonacci(9) and fibonacci(8). But in order to compute fibonacci(9), we also need to compute fibonacci(8). For both fibonacci(9) and fibonacci(8), we need to compute fibonacci(7), and so on. In fact, we end up computing the value of fibonacci(8) two times, that of fibonacci(7) three times, that of fibonacci(6) five times, and that of fibonacci(5) eight times: the number of recomputations itself grows like the Fibonacci sequence. So we'd be doing lots of computations over and over again. For this reason, the fibonacci() function is hopelessly inefficient: while you can compute fibonacci(10) in a fraction of a second, it may take minutes to compute, say, fibonacci(60). Luckily, we can speed up our function considerably.
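To see this duplication concretely, here is a small sketch (not from the original post; fib_counted() and the counts dictionary are my own names) that tallies how often each subproblem is evaluated while computing the 10th Fibonacci number:

```julia
# Tally how often each fibonacci(k) is evaluated during one call.
counts = Dict{Int, Int}()

function fib_counted(n)
    counts[n] = get(counts, n, 0) + 1  # record this evaluation
    if n <= 2
        return 1
    end
    return fib_counted(n - 1) + fib_counted(n - 2)
end

fib_counted(10)
(counts[8], counts[7], counts[6], counts[5])  # (2, 3, 5, 8)
```

The counts of the subproblems themselves follow the Fibonacci sequence, which is exactly why the naive recursion blows up exponentially.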

Second alternative: Recursion with memoisation

Memoisation is a programming technique where every intermediate result you compute is stored, here in an array. Before computing any further intermediate result, you first check the array to see whether you have in fact already computed it, saving a lot of unnecessary computations. The following Julia function is a bit more involved than the previous one, but it's much more efficient.

function fib_memo(n)
  known = zeros(Int64, n)
  function memoize(k)
    if known[k] != 0
      # do nothing
    elseif k == 1 || k == 2
      known[k] = 1
    else
      known[k] = memoize(k-1) + memoize(k-2)
    end
    return known[k]
  end
  return memoize(n)
end
fib_memo (generic function with 1 method)

The overall function that we’ll actually call is fib_memo(). It creates an array called known containing n zeroes and then defines an inner function memoize(). This inner function takes an integer k that in practice ranges from 1 to n and does the following. First, it checks whether the kth value in the array known is still 0. If that value has already been changed, it has already been computed, and the function simply returns it. Otherwise, if k is equal to either 1 or 2, it sets the first or second value of known to 1; if k is greater than 2, the kth value of known is computed recursively. In all cases, the memoize() function returns the kth value of the known array. The outer fib_memo() function then just returns the result of memoize(n).

Perhaps by now, your computer has finished running fibonacci(60) and you can try out the alternative implementation:

fib_memo(60)
1548008755920
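If you'd like to quantify the difference rather than wait for fibonacci(60) to finish, here is a self-contained timing sketch (not from the original post; the timings are machine-dependent, fib_naive() is just the recursive function from above under another name, and n = 35 keeps the naive version's runtime tolerable):

```julia
# Compare the naive recursion with the memoised version at n = 35.
function fib_naive(n)
    n <= 2 && return 1
    return fib_naive(n - 1) + fib_naive(n - 2)
end

function fib_memo(n)
    known = zeros(Int64, n)
    function memoize(k)
        if known[k] == 0
            known[k] = k <= 2 ? 1 : memoize(k - 1) + memoize(k - 2)
        end
        return known[k]
    end
    return memoize(n)
end

t_naive = @elapsed fib_naive(35)  # typically tens of milliseconds or more
t_memo  = @elapsed fib_memo(35)   # typically far less
t_naive / t_memo                  # speed-up factor; exact value varies by machine
```

The gap widens dramatically as n grows, because the naive version's call count grows exponentially while the memoised version does only n table lookups and additions.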

Notice how much faster this new function is! Even the 200th Fibonacci number can be computed in a fraction of a second:

fib_memo(200)
-1123705814761610347

Unfortunately, we’ve run into a different problem now: integer overflow. The result of the computations has become so large that it exceeds the range of 64-bit integers. To fix this problem, we can work with arbitrary-precision BigInt values instead:

function fib_memo(n)
  known = zeros(BigInt, n)
  function memoize(k)
    if known[k] != 0
      # do nothing
    elseif k == 1 || k == 2
      known[k] = 1
    else
      known[k] = memoize(k-1) + memoize(k-2)
    end
    return known[k]
  end
  return memoize(n)
end
fib_memo (generic function with 1 method)
fib_memo(200)
280571172992510140037611932413038677189525

Nice!

Third alternative: Using Binet’s formula

The third alternative is more of a mathematical solution than a programming one. According to Binet’s formula, the nth Fibonacci number can be computed as F(n) = (φ^n − ψ^n)/√5, where φ = (1 + √5)/2 is the Golden Ratio and ψ = (1 − √5)/2 is its conjugate. In Julia:

function fib_binet(n)
  φ = (1 + sqrt(5))/2
  ψ = (1 - sqrt(5))/2
  fib_n = 1/sqrt(5) * (φ^n - ψ^n)
  return BigInt(round(fib_n))
end
fib_binet (generic function with 1 method)

Note that you can use mathematical symbols like φ and ψ as identifiers in Julia. This function runs very fast, too:

fib_binet(60)
1548008755920
fib_binet(200)
280571172992512015699912586503521287798784

Notice, however, that the result for the 200th Fibonacci number differs from the one obtained using fib_memo() by a number on the order of 10^27:

fib_binet(200) - fib_memo(200)
1875662300654090482610609259
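Out of curiosity, we can also look for the smallest n at which the two methods start to disagree. The sketch below repeats both function definitions so that it runs on its own (first_mismatch is my own name, and the exact value you get may depend on floating-point details of your platform):

```julia
# Find the first n at which Binet's formula (in 64-bit floating point)
# no longer matches the exact integer computation.
function fib_memo(n)
    known = zeros(BigInt, n)
    function memoize(k)
        if known[k] == 0
            known[k] = k <= 2 ? 1 : memoize(k - 1) + memoize(k - 2)
        end
        return known[k]
    end
    return memoize(n)
end

function fib_binet(n)
    φ = (1 + sqrt(5))/2
    ψ = (1 - sqrt(5))/2
    return BigInt(round(1/sqrt(5) * (φ^n - ψ^n)))
end

first_mismatch = findfirst(n -> fib_binet(n) != fib_memo(n), 1:200)
```

Since Float64 carries only about 16 significant decimal digits, the formula is bound to go wrong long before n = 200.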

By using Binet’s formula, we’ve left the fairly neat world of integer arithmetic and entered the realm of floating-point arithmetic, which is rife with approximation errors. While we’re at it, we might as well compute and plot the size of these approximation errors. In the snippet below, I first compute the first 200 Fibonacci numbers using both fib_memo() and fib_binet(). Note that I added a dot (.) to both function names: this is Julia’s broadcasting notation, which applies a function element-wise over a collection. Further note that I end all lines with a semicolon so that the results don’t get printed to the prompt. Then, I compute the absolute values of the differences between the numbers obtained by the two methods; note again the use of a dot in both abs.() and .-, which is required to have these operations work element-wise on vectors. Finally, I convert these absolute differences to differences relative to the correct answers:

fib_integer = fib_memo.(1:200);
fib_math    = fib_binet.(1:200);
abs_diff = abs.(fib_math .- fib_integer);
rel_diff = abs_diff ./ fib_integer;

To wrap up this blog post, let’s now plot these absolute and relative differences using the Plots.jl package. While Figure 1 shows that the absolute error becomes huge, Figure 2 shows that these discrepancies only amount to a negligible fraction of the correct answers.

using Plots
plot(1:200, abs_diff, seriestype=:scatter,
     xlabel = "n",
     ylabel = "absolute difference",
     label = "")

Figure 1. Absolute difference between the Fibonacci numbers obtained using fib_binet() and those obtained using fib_memo().

plot(1:200, rel_diff, seriestype=:scatter, 
     xlabel = "n",
     ylabel = "relative difference",
     label = "")

Figure 2. Relative difference between the Fibonacci numbers obtained using fib_binet() and those obtained using fib_memo().

Integrating equation solvers with probabilistic programming through differentiable programming

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/integrating-equation-solvers-with-probabilistic-programming-through-differentiable-programming/

Part of the Computational Abstractions for Probabilistic and Differentiable Programming Workshop

Abstract: Many probabilistic programming languages (PPLs) attempt to integrate with equation solvers (differential equations, nonlinear equations, partial differential equations, etc.) from the inside, i.e. the developers of PPLs like Stan provide differential equation solver choices as part of the suite. However, as equation solvers are an entire discipline unto themselves, with many active development communities and subfields, this places an immense burden on PPL developers to keep up with the changing landscape of tens of thousands of independent researchers. In this talk we will explore how Julia PPLs such as Turing.jl support equation solvers from the outside, i.e. how the tools of differentiable programming allow equation solver libraries to be compatible with PPLs without requiring any co-development between the communities. We will discuss how this has enabled many advanced methods, such as adaptive solvers for stochastic differential equations and nonlinear tearing of differential-algebraic equations, to be integrated into the Turing.jl environment with no development effort required, and how this enables many workflows in scientific machine learning (SciML).

The post Integrating equation solvers with probabilistic programming through differentiable programming appeared first on Stochastic Lifestyle.