Tag Archives: Programming

Gadfly

I’m Moving to Gadfly

### Christina Lee

So far in this blog, all my plots have either used PyPlot, or the experimental GLVisualize. BUT, NO LONGER!

While writing up my research notes, I’ve been trying to improve my PyPlot graphs. I was hoping for them to look nicer; Julia’s PyPlot was saying no. We had a falling out over background colors and resolution. So, I have decided to move on to a plotting partner more willing to cope with my demands.

When I first started learning Julia, Gadfly seemed too unfamiliar and its controls too tricky, so I didn't give it much further thought. After some digging, Gadfly has changed my mind, and I hope today I can change yours too. Right now, like Julia itself, Gadfly doesn't have the maturity or documentation of PyPlot, but once it does, you'd better watch out.


Gadfly preferentially produces plots in SVG (scalable vector graphics) form, instead of the rasterized PNG or JPG form favored by PyPlot. As displayed in the figure at left, rasterized images are composed of colored squares, so they don't look their best when highly magnified. SVG, which is based on XML, stores the positions of vector elements and scales to high resolution without artifacts. Tools such as Inkscape also work in SVG and can edit the individual pieces of a graph, since they haven't been rasterized together.

Let’s plot some graphs!

By Yug, modifications by 3247 (Unknown) [CC BY-SA 2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

# Time to load our Packages
#Pkg.update()
#Pkg.add("Gadfly")
using Gadfly

#Pkg.add("Compose")
using Compose
# Some test data
x=collect(1:.4:10);
y1=sin(x);
y2=cos(x);
# A simple plot
plot(x=x,y=y1)

Unlike PyPlot, we can give plot functions as well as points. After receiving a function, a minimum value, and a maximum value, Gadfly figures everything else out for itself.

For the function, we can pass in either an inbuilt function, a user defined function, or an anonymous function.

The brackets [ ] allow us to group multiple functions together in the plot.

# plotting function

function testfunction(x::Real)
    return 1-x^2
end

plot([cos,testfunction,x->x^2],0,1)

But that’s just one set of points…
How do we plot more than one set of data? That’s where layers come in. Instead of using plot, we use layer, and set it as a variable. We can then plot those layers.

p1=layer(x=x,y=y1,Geom.point)
p2=layer(x=x,y=y2,Geom.point)
plot(p1,p2)

Different styles of plotting: Geom

Now, what if we want to plot something other than lines?
Gadfly allows a wide variety of other ways to plot our data. We control this with the Geom (Geometry) object.

plot(x=x,y=y1,Geom.point,Geom.line)
plot(x=x,y=y1,ymin=y1-.1,ymax=y1+.1,Geom.point,Geom.errorbar)
plot(x=x,y=y1,ymin=y1-.1,ymax=y1+.1,Geom.line,Geom.ribbon)
plot(x=x,y=y1,Geom.bar)
plot(x=x,y=y1,Geom.step)
plot(x=y1,y=y2,Geom.path())

Edit Point and Line style

Disclaimer: I’ve learned how to do the next two things by reading the code, not the documentation.

It seems like they are newer and less developed features than everything else I’m discussing here. The syntax seems less polished than in other areas, so I believe it’s still under construction.

z=collect(0:.2:2)

xp=[z;z+.5;z+1;z+1.5;z+2;z+2.5;z+3;z+3.5]
yp=[z;z;z;z;z;z;z;z]
sh=[ones(z);2*ones(z);3*ones(z);4*ones(z);5*ones(z);6*ones(z);7*ones(z);8*ones(z)]

plot(x=xp,y=yp,shape=sh,Geom.point,Theme(default_point_size=5pt))
# or Compose.mm for smaller sizes
# These ratios and numbers can be changed around however you like

dash = 6 * Compose.cm
dot = 1.5 * Compose.cm
gap = 1 * Compose.cm

l1=layer(x=x,y=zeros(x),Geom.line,Theme(line_style=[dot]))
l2=layer(x=x,y=ones(x),Geom.line,Theme(line_style=[dash]))
l3=layer(x=x,y=2*ones(x),Geom.line,Theme(line_style=[gap]))
l4=layer(x=x,y=3*ones(x),Geom.line,Theme(line_style=[dot,dash,dot]))
l5=layer(x=x,y=4*ones(x),Geom.line,Theme(line_style=[gap,dash]))

plot(l1,l2,l3,l4,l5,Coord.Cartesian(ymin=-.5,ymax=4.5),Theme(grid_line_width=0pt))

Guide: Labeling Axes

Where Geom alters how we plot, Guide alters the labeling.
Guide ties in with Compose.jl through the Guide.annotation command; that will take a tutorial in itself, but a small taste follows the next plot.

plot(x=x,y=y2,color=y2,
Guide.xlabel("xlabel"),Guide.ylabel("ylabel"),Guide.title("Something explanatory"),
Guide.colorkey("y2"))
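
As promised, here is a small taste of Guide.annotation, adapted from the example in the Gadfly documentation. It overlays Compose circle primitives on the plot; compose, context, circle, fill, stroke, and the mm unit all come from Compose.jl, which we loaded earlier.

plot(sin, 0, 2π,
    Guide.annotation(compose(context(),
        circle([π/2, 3π/2], [1.0, -1.0], [2mm]),  # circle the two extrema
        fill(nothing), stroke("orange"))))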

Here’s something we can do with a combination of Guide and Scale. Using Guide, we can set where we specifically want our xticks to be, namely at multiples of $\pi$. But then, the labeling would write out some irrational number, making the plot look horrible. So, we create a function that takes in a number and outputs a string label for Scale to use.

function labelname(x::Real)
    n=round(Int,x/π) #nearest integer*pi
    if n==0
        return "0"
    elseif n==1
        return "π"
    end
    return("$n π")
end

xmarks=[0,π,2π,3π]
ymarks=[-1,0,1]

plot(x=x,y=y2,
Guide.yticks(ticks=ymarks),
Guide.xticks(ticks=xmarks),Scale.x_continuous(labels=labelname))

Some other cool things we can do with Scale:
* Automatically transform the axis according to log, log10, log2, asinh, or sqrt.
* Write numbers in :plain, :scientific, :engineering, or :auto

plot(x->exp(x),0,10,Scale.y_log,Scale.x_continuous(format=:scientific))

Themes

I mostly chose Gadfly because of the control I could have with the Theme command. http://dcjones.github.io/Gadfly.jl/themes.html has a much more exhaustive list than what I will be demonstrating with here.

# Solarized Colors that I like working with
# http://ethanschoonover.com/solarized
using Colors
base03=parse(Colorant,"#002b36");
base02=parse(Colorant,"#073642");
base01=parse(Colorant,"#586e75");
base00=parse(Colorant,"#657b83");
base0=parse(Colorant,"#839496");
base1=parse(Colorant,"#93a1a1");
base2=parse(Colorant,"#eee8d5");
base3=parse(Colorant,"#fdf6e3");

yellow=parse(Colorant,"#b58900");
orange=parse(Colorant,"#cb4b16");
red=parse(Colorant,"#dc322f");
magenta=parse(Colorant,"#d33682");
violet=parse(Colorant,"#6c71c4");
blue=parse(Colorant,"#268bd2");
cyan=parse(Colorant,"#2aa198");
green=parse(Colorant,"#859900");
plot(x=x,y=y1,
    Theme(highlight_width=0pt, # lose the white ring around the points
    default_point_size=3pt,    # size of the dots
    default_color=magenta,     # color of the dots
    background_color=base03,   # ... background color ...
    grid_color=base2,     # the lines
    minor_label_color=base2,  # numbers on axes color
    key_label_color=base2, # color key numbers color
    key_title_color=cyan,  # label of color key color
    major_label_color=cyan, # axes label and title color
    major_label_font_size=20pt, # font size for axes label and title
    minor_label_font_size=15pt, #font size for numbers on axes
    panel_opacity=1 #setting background to opaque
))

Coord: Setting the boundaries

The documentation states that you can change the range of the axes using Scale, but I’ve found it better to use this format to set my min and max values.

plot(x=x,y=y1,Geom.line,Coord.Cartesian(ymin=0,ymax=1,xmin=0,xmax=10))

So far, we have separately covered Guide, Theme, and parts of Coord and Scale. Individually, each aspect doesn't add too much clunkiness to the code. However, if we start to add everything together, the plot function would look quite nasty.

Luckily, just like layers for data points, we can put our modifiers into variables. Then we can simply call the variables in our plot function.

This also helps for when we want to use one theme for a series of graphs. We can define the theme variable up top, and then include it in every graph there after, never having to declare it again. This helps me to keep my graphs uniform in style.

function labelname(x::Real)
    n=round(Int,x/π)
    if n==0
        return "0"
    elseif n==1
        return "π"
    else
        return "$n π"
    end
end

xticks=[0,π,2π,3π]
yticks=[-1,0,1]

data=layer(x=x,y=y1,ymin=y1-.1,ymax=y1+.1,Geom.line,Geom.ribbon)
f=layer(cos,0,3π)

yt=Guide.yticks(ticks=yticks)
xt=Guide.xticks(ticks=xticks)
xs=Scale.x_continuous(labels=labelname)

t= Theme(highlight_width=0pt,
    default_point_size=3pt,
    default_color=blue,
    background_color=base03,
    grid_color=base2,
    minor_label_color=base2,
    key_label_color=base2,
    key_title_color=cyan,
    major_label_color=cyan,
    major_label_font_size=20pt,
    minor_label_font_size=15pt,
    panel_opacity=1)

xl=Guide.xlabel("time")
yl=Guide.ylabel("y axis")
GT=Guide.title("Important title")

plot(data,f,yt,xt,xs,t,xl,yl,GT)

Although we still have to call quite a few variables, this is a much simpler way of doing things.

Saving the Figure

Gadfly naturally saves to SVG (or SVGJS).
We have to specify the x and y dimensions of the plot, but since these images rescale so easily, we can choose some reasonable numbers.

p=plot(data,f,yt,xt,xs,t,xl,yl,GT)
draw(SVG("myplot.svg", 15cm, 9cm), p)

If you want to save to PNG, PDF, or PS, the package Cairo.jl provides that ability.

using Cairo
draw(PNG("myplot.png",15cm,9cm),p)


# Happy Plotting!


Using Julia’s Type System For Hidden Performance Gains

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/using-julias-type-system-hidden-performance-chunkedarrays-growablearrays-ellipsisnotation/

What I want to share today is how you can use Julia's type system to hide performance gains in your code. What I mean is this: in many cases you may find that the optimal way to do some calculation is not a “clean” solution. What do you do? I want to show how you can define special arrays which are wrappers such that these special “speedups” are performed in the background, while not having to keep all of that muck in your main algorithms. This is easiest to show by example.

The examples I will be building towards are useful for solving ODEs and SDEs. Indeed, these tricks have all been implemented as part of DifferentialEquations.jl and so these examples come from a real use case! They really highlight a main feature of Julia: since Julia code is fast (as long as you have type stability!), you don’t need to worry about writing code outside of Julia, and you can take advantage of higher-level features (with caution). In Python, normally one would only get speed benefits by going down to C, and so utilizing these complex objects would not get speed benefits over simply using numpy arrays in the most vectorized fashion. The same holds for R. In MATLAB… it would be really tough to implement most of this in MATLAB!

Taking Random Numbers in Chunks: ChunkedArrays

Let’s say we need to take random numbers in a loop like the following:

for i = 1:N
  dW = randn(size(u))
  #Do some things and add dW to u
end

While this is the “intuitive” code to write, it's not necessarily the best. While there have been some improvements made since early Julia, in principle it's just slower to make 1000 random numbers by calling randn() 1000 times than by calling randn(1000) once. This is because of internal speedups due to caching, SIMD, etc., and you can find mentions of this fact all over the web, especially when people are talking about fast random number generators like those from the VSL library.
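
If you want to check this effect on your own machine, here is a quick sanity test (timings are hardware- and version-dependent; this is a rough check, not a rigorous benchmark):

function one_at_a_time(N)
  s = 0.0
  for i = 1:N
    s += randn()       # one draw per call
  end
  s
end
all_at_once(N) = sum(randn(N))   # one call, N draws

one_at_a_time(10^7); all_at_once(10^7)   # compile first
@time one_at_a_time(10^7)
@time all_at_once(10^7)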

So okay, what we really want to do is the following. Every “bufferSize” steps, create a new random number dW which is of size size(u)*bufferSize, and go through using the buffer until it is all used up, and then grab another buffer.

for i = 1:N
  if i%bufferSize == 1    # grab a fresh buffer every bufferSize steps
    dW = randn(size(u)...,bufferSize)
  end
  #Do some things and add dW[..,(i-1)%bufferSize+1] to u
end

But wait: what if we don't always use just one random number? Sometimes the algorithm may need more than one! So you make an integer k which tracks the current state in the buffer, and then at each point where it could be incremented, you add the conditional to grab a new buffer, etc. Also, what if you want the buffer generated in parallel? As you can see: code complexity explosion, just to go a little faster.
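
To see why this gets ugly, here is a sketch of that manual bookkeeping with a counter k (hypothetical code, written only to show the duplication; u, N, and bufferSize as above, with u a vector for simplicity):

buf = randn(length(u), bufferSize)   # pre-generate a chunk of columns
k = 0
for i = 1:N
  k += 1
  if k > bufferSize                  # buffer exhausted: regenerate it
    buf = randn(length(u), bufferSize); k = 1
  end
  dW = buf[:,k]
  # ...and if the algorithm needs a second draw this step, it all repeats:
  k += 1
  if k > bufferSize
    buf = randn(length(u), bufferSize); k = 1
  end
  dW2 = buf[:,k]
  #Do some things and add dW, dW2 to u
end

Every extra draw repeats the same refill conditional, and generating the buffer in parallel would tangle this even further.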

This is where ChunkedArrays come in. What I did is define an array type which essentially does the chunking/buffering in the background, so that the code in the algorithm can stay clean. A ChunkedArray is a wrapper over an array which exposes a next command to hide all of this complexity. Thus, to generate random numbers in chunks and get this speed improvement, you can use code like this:

rands = ChunkedArray(u)
for i = 1:N
  dW = next(rands)
  #Do some things and add dW to u
end

Any time another random number is needed, you just call next. It internally stores an array and the state of the buffer, and the next function automatically checks and replenishes the buffer, launching another process to do this in parallel if the user wants. Thus we get the optimal solution without sacrificing cleanliness. I chopped about 10% off the runtime of the Euler-Maruyama code in DifferentialEquations.jl by switching to ChunkedArrays, and haven't thought about doing a benchmark since.
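
The internals can be surprisingly small. Here is a minimal sketch of the idea in the 0.4-era syntax this post uses; the name Chunked and the serial refill are illustrative, not the actual ChunkedArrays.jl implementation:

type Chunked
  buffer::Vector{Vector{Float64}}   # pre-generated random chunks
  state::Int                        # index of the last chunk handed out
end
Chunked(u::AbstractArray, bufferSize::Int=100) =
  Chunked([randn(length(u)) for i=1:bufferSize], 0)

function next(c::Chunked)
  c.state += 1
  if c.state > length(c.buffer)     # buffer used up: refill and start over
    for i in eachindex(c.buffer)
      c.buffer[i] = randn(length(c.buffer[i]))
    end
    c.state = 1
  end
  c.buffer[c.state]
end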

Safe Vectors of Arrays and Conversion: GrowableArrays

First let's look at the performance difference between Vectors of Arrays and higher-dimensional contiguous arrays when using them in a loop. Julia's arrays can take a parametric type which makes the array hold arrays; this makes the array essentially an array of pointers. The issue here is that this adds an extra cost every time the array is dereferenced. However, for high-dimensional arrays, the : way of referencing has to generate a slice each time. Which way is more performant?

function test1()
  u = Array{Int}(4,4,3)
  u[:,:,1] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[:,:,2] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[:,:,3] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  j = 1
  for i = 1:100000
    j += sum(u[:,:,1] + u[:,:,2] + 3u[:,:,3] + u[:,:,i%3+1] -  u[:,:,(i%j)%2+1])
  end
end
 
 
 
function test2()
  u = Vector{Matrix{Int}}(3)
  u[1] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[2] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[3] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  j = 1
  for i = 1:100000
    j += sum(u[1] + u[2] + 3u[3] + u[i%3+1] - u[(i%j)%2+1])
  end
end
 
 
function test3()
  u = Array{Int}(4,4,4)
  u[1,:,:] = reshape([1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1],(1,4,4))
 
  u[2,:,:] = reshape([1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1],(1,4,4))
 
  u[3,:,:] = reshape([1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1],(1,4,4))
 
  j = 1
  for i = 1:100000
    j += sum(u[1,:,:] + u[2,:,:] + 3u[3,:,:] + u[i%3+1,:,:] - u[(i%j)%2+1,:,:])
  end
end
 
#Pre-compile
test1()
test2()
test3()
 
t1 = @elapsed for i=1:10 test1() end
t2 = @elapsed for i=1:10 test2() end
t3 = @elapsed for i=1:10 test3() end
 
println("Test results: t1=$t1, t2=$t2, t3=$t3")
#Test results: t1=1.239379946, t2=0.576053075, t3=1.533462129

So using Vectors of Arrays is fast for dereferencing.

Now think about adding to an array. If you have a Vector of pointers and need to resize the array, it's much easier to resize and copy over some pointers than it is to copy over all of the arrays. So, if you're going to grow an array in a loop, the Vector of Arrays is the fastest implementation! Here's a quick benchmark from GrowableArrays.jl:

using GrowableArrays, EllipsisNotation
using Base.Test
 
tic()
const NUM_RUNS = 100
const PROBLEM_SIZE = 1000
function test1()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = u
  for i = 1:PROBLEM_SIZE
    uFull = hcat(uFull,u)
  end
  uFull
end
 
function test2()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = u
 
  for i = 1:PROBLEM_SIZE
    uFull = vcat(uFull,u)
  end
  uFull
end
 
function test3()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{Int}(0)
  sizehint!(uFull,PROBLEM_SIZE*16)
  append!(uFull,vec(u))
 
  for i = 1:PROBLEM_SIZE
    append!(uFull,vec(u))
  end
  reshape(uFull,4,4,PROBLEM_SIZE+1)
end
 
function test4()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{Array{Int}}(0)
  push!(uFull,copy(u))
 
  for i = 1:PROBLEM_SIZE
    push!(uFull,copy(u))
  end
  uFull
end
 
function test5()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{Array{Int,2}}(0)
  push!(uFull,copy(u))
 
  for i = 1:PROBLEM_SIZE
    push!(uFull,copy(u))
  end
  uFull
end
 
function test6()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{typeof(u)}(0)
  push!(uFull,u)
 
  for i = 1:PROBLEM_SIZE
    push!(uFull,copy(u))
  end
  uFull
end
 
function test7()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = GrowableArray(u)
  for i = 1:PROBLEM_SIZE
    push!(uFull,u)
  end
  uFull
end
 
function test8()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = GrowableArray(u)
  sizehint!(uFull,PROBLEM_SIZE)
  for i = 1:PROBLEM_SIZE
    push!(uFull,u)
  end
  uFull
end
 
println("Run Benchmarks")
println("Pre-Compile")
#Compile Test Functions
test1()
test2()
test3()
test4()
test5()
test6()
test7()
test8()
 
println("Running Benchmarks")
t1 = @elapsed for i=1:NUM_RUNS test1() end
t2 = @elapsed for i=1:NUM_RUNS test2() end
t3 = @elapsed for i=1:NUM_RUNS test3() end
t4 = @elapsed for i=1:NUM_RUNS test4() end
t5 = @elapsed for i=1:NUM_RUNS test5() end
t6 = @elapsed for i=1:NUM_RUNS test6() end
t7 = @elapsed for i=1:NUM_RUNS test7() end
t8 = @elapsed for i=1:NUM_RUNS test8() end
 
println("Benchmark results: $t1 $t2 $t3 $t4 $t5 $t6 $t7 $t8")
 
#Benchmark results: 1.923640854 2.131108443 0.012493308 0.00866045 0.005246504 0.00532613 0.00773568 0.00819909

As you can see in test7 and test8, I created a “GrowableArray” which is an array that acts like a Vector of Arrays. However, it has one added piece of functionality: if you copy(G), what you get back is the contiguous array. Therefore in the loop you can grow the array in the quickest way, and at any time (say, after the loop, to plot the array) you can copy it to a contiguous array which is better suited for interop with C and other goodies.

It also hides a few painful things. Notice that in the code we pushed a copy of u (copy(u)). This is because when u is an array, it's only a reference to the array, so if we simply push!(uFull,u), every element of uFull is actually the same item! This benchmark won't catch the issue, but try changing u and you will see that every element of uFull changes if you don't use copy. This can be a nasty bug, so instead we build copy() into the push!() command for the GrowableArray. This creates another issue: since copying a GrowableArray converts it to a contiguous array, push! needs to not copy when its argument is itself a GrowableArray (so that you can create GrowableArrays of GrowableArrays). However, this is easily managed via dispatch.
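
A minimal sketch of that dispatch trick, with illustrative names rather than the actual GrowableArrays.jl source:

type Growable{T}
  data::Vector{T}
end
Growable(u) = Growable{typeof(u)}([copy(u)])

# Plain arrays get defensively copied, so later mutations of u are safe:
Base.push!(G::Growable, u::AbstractArray) = (push!(G.data, copy(u)); G)

# ...but another Growable is stored as-is, since "copying" one is defined
# to mean converting it to a contiguous array:
Base.push!(G::Growable, u::Growable) = (push!(G.data, u); G)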

Helping Yourself and the Julia Community

Simple projects like these lead to re-usable solutions that improve performance while allowing for ease of use. I have just detailed some projects I have personally done (and have more to do!), but there are others that should be pointed out. I am fond of projects like VML.jl, which speeds up standard functions, and DoubleDouble.jl, which implements efficient quad-precision numbers that you can then use in place of other number types.

I think Julia will succeed not by the “killer packages” that are built in Julia, but by a rich type ecosystem that will make everyone want to build their “killer package” in Julia.

The post Using Julia’s Type System For Hidden Performance Gains appeared first on Stochastic Lifestyle.

Optimal Number of Workers for Parallel Julia

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/236-2/

How many workers do you choose when running a parallel job in Julia? The answer is easy, right? The number of physical cores. We always default to that number. For my Core i7 4770K, that means 4, not 8, since 8 would include the hyperthreads. On my FX8350, there are 8 cores but only 4 floating-point units (FPUs) which do the math, so in mathematical projects I should use 4, right? I want to demonstrate that it's not that simple.

Where the Intuition Comes From

Most of the time when doing scientific computing you are doing parallel programming without even knowing it. This is because a lot of vectorized operations are “implicitly parallelized”, meaning that they are multi-threaded behind the scenes to make everything faster. In other languages like Python, MATLAB, and R, this is also the case. Fire up MATLAB and run

A = randn(10000,10000)
B = randn(10000,10000)
A.*B

and you will see that all of your cores are used. Threads are a recent introduction to Julia, and so in version 0.5 this will also be the case.

Another large place where implicit parallelization comes up is in linear algebra. When one uses a matrix multiplication, it is almost surely calling an underlying program which is an implementation of BLAS. BLAS (Basic Linear Algebra Subprograms) is aptly named: it is just a set of subroutines for common linear algebra operations. These are written in either C or Fortran and are heavily optimized. They are well-studied, and many smart people have meticulously crafted “perfect code” which minimizes cache misses and all of that other low-level stuff, all to make this very common operation run smoothly.

Because BLAS (and LAPACK, the Linear Algebra PACKage, for higher-level linear algebra routines) is so optimized, people say you should always make sure that it knows exactly how many “real” processors it has to work with. So in my case, with a Core i7 having 4 physical cores plus 4 from hyperthreading, forget the hyperthreading: use 4. With the FX8350, there are only 4 FPUs for doing math, so again 4 threads. Still, check to make sure this is best on your machine.
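
In Julia you can set the BLAS thread count yourself and time a big multiply to see what works best. This uses the 0.4/0.5-era module path (current Julia has the same function at LinearAlgebra.BLAS instead):

Base.LinAlg.BLAS.set_num_threads(4)   # pin BLAS to the physical core count
A = randn(5000,5000); B = randn(5000,5000)
A*B             # warm up / compile
@time A*B       # gemm running on 4 BLAS threads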

What about for your code?

Most likely this does not apply to your code. You didn’t carefully manage all of your allocations and tell the compiler what needs to be cached etc. You just wrote some beautiful Julia code. So how many workers do you choose?

Let's take my case. I have 4 real cores; do I choose 4? Or do I make 3 workers to allow 1 to “command” the others freely? Or do I use 7 or 8 to exploit hyperthreading?

I decided to test this out on a non-trivial example. I am not going to share all of the code (I am submitting it as part of a manuscript soon), but the basic idea is that it is a high-order adaptive solver for stochastic differential equations. The code sets up the problem and then calls pmap to do a Monte Carlo simulation and solve the equation 1000 times in parallel. The code is mostly math, but there is a slight twist where some values are stored on stacks (a very lightweight data structure). To make sure I could trust the times, I ran the code 1000 times and took the average, min, and max times.
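
The shape of the experiment was roughly the following; since the solver code isn't public yet, dummy_solve below is a placeholder for the actual SDE solve:

addprocs(15)                                      # the worker count under test
@everywhere dummy_solve(i) = sum(randn(10^6).^2)  # stand-in for one simulation
t = @elapsed pmap(dummy_solve, 1:1000)            # 1000 Monte Carlo runs
println("Wall time with $(nworkers()) workers: $t")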

So in this case, what was best? The results speak for themselves.

Average wall times vs. number of worker processes, 1000 iterations:

| Number of Workers | Average Wall Time | Max Wall Time | Min Wall Time |
|---|---|---|---|
| 1 | 62.8732 | 64.3445 | 61.4971 |
| 3 | 25.749 | 26.6989 | 25.1143 |
| 4 | 22.4782 | 23.2046 | 21.8322 |
| 7 | 19.7411 | 20.2904 | 19.1305 |
| 8 | 19.0709 | 20.1682 | 18.5846 |
| 9 | 18.3677 | 18.9592 | 17.6 |
| 10 | 18.1857 | 18.9801 | 17.6823 |
| 11 | 18.1267 | 18.7089 | 17.5099 |
| 12 | 17.9848 | 18.5083 | 17.5529 |
| 13 | 17.8873 | 18.4358 | 17.3664 |
| 14 | 17.4543 | 17.9513 | 16.9258 |
| 15 | 16.5952 | 17.2566 | 16.1435 |
| 16 | 17.5426 | 18.4232 | 16.2633 |
| 17 | 16.927 | 17.5298 | 16.4492 |

Note there are two “1000”s here. I ran the Monte Carlo simulation (each solving the SDE 1000 times itself) 1000 times. I plotted the mean, max, and min times it took to solve the simulation. From the plot it’s very clear that the minimum exists somewhere around 15. 15!

What’s going on? My guess is that this is because of the time that’s not spent on the actual mathematics. Sometimes there are things performing logic, checking if statements, allocating new memory as the stacks grow bigger, etc. Although it is a math problem, there is more than just the math in this problem! Thus it seems the scheduler is able to effectively let the processes compete and more fully utilize the CPU by pushing the jobs around. This can only go so far: if you have too many workers, then you start to get cache misses and then the computational time starts to increase. Indeed, at 10 workers I could already see signs of problems in the resource manager.

Overload at 10 workers, as seen in Windows' Resource Manager

However, allowing one process to start re-allocating memory but causing a cache miss (or whatever it’s doing) seems to be a good tradeoff at low levels. Thus for this code the optimal number of workers is far above the number of physical cores.

Moral of the Story

The moral is: test your code. If your code is REALLY efficient, then sure, making sure you don't mess with your perfect code is optimal. If your code isn't optimal (i.e. it's just some Julia code that is pretty good and you want to parallelize it), try some higher numbers of workers. You may be shocked by what happens. In this case, the compute time dropped by about 26% (going from 4 workers to 15) by overloading the number of workers.
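
Here is a sketch of how you might run such a sweep yourself, again with a placeholder job (swap in your own):

times = Dict{Int,Float64}()
for n in (1, 3, 4, 8, 15)
  length(procs()) > 1 && rmprocs(workers())   # clear out the old workers
  addprocs(n)
  @everywhere job(i) = sum(randn(10^6).^2)    # stand-in for real work
  times[n] = @elapsed pmap(job, 1:1000)
end
times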

The post Optimal Number of Workers for Parallel Julia appeared first on Stochastic Lifestyle.